# Steady collision of two jets issuing from two axially symmetric channels†
Lili Dua,1, Yongfu Wangb,2
###### Abstract.
In the classical survey (Chapter 16.2, Mathematics in Industrial Problems, Vol.
24, Springer-Verlag, New York, 1989), A. Friedman proposed an open problem on
the collision of two incompressible jets emerging from two axially symmetric
nozzles. In this paper, we are concerned with the mathematical theory of this
collision problem, and we establish the well-posedness theory for hydrodynamic
impinging outgoing jets issuing from two coaxial axially symmetric nozzles.
More precisely, we show that for any given mass fluxes $M_{1}>0$ and
$M_{2}<0$ in the two nozzles respectively, there exists an incompressible,
inviscid impinging outgoing jet with contact discontinuity, which issues from
the two given semi-infinitely long axially symmetric nozzles and extends to
infinity. Moreover, the constant-pressure free stream surfaces of the
impinging jet initiate smoothly from the mouths of the two nozzles and shrink
to some asymptotic conical surface. There exists a smooth surface separating
the two incompressible fluids, and the contact discontinuity occurs on this
surface. Furthermore, we show that there is no stagnation point in the flow
field and its closure, except one point on the symmetric axis. Some asymptotic
behavior of the impinging jet in the upstream and downstream, as well as geometric
properties of the free stream surfaces, are also obtained. The main results in this
paper solve the open problem on the collision of two incompressible axially
symmetric jets in [24].
† Du is supported by NSFC grant 11971331 and Sichuan Youth Science and
Technology Foundation (No. 21CXTD0076). Wang is supported by NSFC grant
11801460 and the Applied Fundamental Research Plan of Sichuan Province (No.
2021YJ0520).
1 E-mail: <EMAIL_ADDRESS>. 2 E-mail: <EMAIL_ADDRESS>. Corresponding author.
a Department of Mathematics, Sichuan University,
Chengdu 610064, P. R. China.
b School of Economic Mathematics,
Southwestern University of Finance and Economics,
Chengdu 611130, P. R. China.
2010 Mathematics Subject Classification: 76B10; 76B03; 35Q31; 35J25.
Key words: Impinging outgoing jets, incompressible flows, free boundary,
existence and uniqueness, contact discontinuity.
## 1. Introduction
The three-dimensional incompressible, stationary and inviscid flow is governed
by the Euler equations
$\left\\{\begin{array}[]{l}\sum_{i=1}^{3}\frac{\partial u_{i}}{\partial
x_{i}}=0,\\\ \sum_{i=1}^{3}u_{i}\frac{\partial u_{j}}{\partial
x_{i}}+\frac{1}{\rho}\frac{\partial P}{\partial x_{j}}=0,\quad\text{for}\quad
j=1,2,3,\end{array}\right.$ (1.1)
with the irrotational condition
$\displaystyle\nabla\times(u_{1},u_{2},u_{3})=0.$
Here, $\displaystyle(u_{1},u_{2},u_{3})$ is the velocity, $\displaystyle P$
denotes the pressure of the flow, and the positive constant $\displaystyle\rho$
denotes the density.
In this paper, we shall be concerned with steady, irrotational incompressible
impinging jets issuing from two semi-infinitely long axially symmetric nozzles
with variable cross-section. We will investigate the well-posedness theory of
the collision problem of two jets issuing from two general axially symmetric
nozzles, and solve the open problem (1) proposed by A. Friedman in 1989.
In this paper we consider axially symmetric flows. Let $\displaystyle
U(x,r)$, $\displaystyle V(x,r)$ and $\displaystyle W(x,r)$ be the axial
velocity, the radial velocity and the swirl velocity, respectively, where
$\displaystyle x=x_{1}$ and $\displaystyle r=\sqrt{x_{2}^{2}+x_{3}^{2}}$.
Since we seek an axially symmetric flow without swirl, one has
$u_{1}=U(x,r),\quad u_{2}=V(x,r)\frac{x_{2}}{r},\quad
u_{3}=V(x,r)\frac{x_{3}}{r}.$ (1.2)
Then, instead of (1.1), we have
$\left\\{\begin{array}[]{l}\left(rU\right)_{x}+\left(rV\right)_{r}=0,\\\
\left(r\rho U^{2}\right)_{x}+\left(r\rho UV\right)_{r}+rP_{x}=0,\\\
\left(r\rho UV\right)_{x}+\left(r\rho V^{2}\right)_{r}+rP_{r}=0.\\\
\end{array}\right.$ (1.3)
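For the reader's convenience, here is a brief sketch of how the first equation in (1.3) follows from (1.1) under the ansatz (1.2); the momentum equations are obtained in the same way. Since $\displaystyle\partial r/\partial x_{j}=x_{j}/r$ for $\displaystyle j=2,3$, one computes
$\frac{\partial u_{2}}{\partial x_{2}}+\frac{\partial u_{3}}{\partial x_{3}}=\frac{\partial V}{\partial r}\cdot\frac{x_{2}^{2}+x_{3}^{2}}{r^{2}}+V\left(\frac{2}{r}-\frac{x_{2}^{2}+x_{3}^{2}}{r^{3}}\right)=\frac{\partial V}{\partial r}+\frac{V}{r},$
so that
$\sum_{i=1}^{3}\frac{\partial u_{i}}{\partial x_{i}}=\frac{\partial U}{\partial x}+\frac{\partial V}{\partial r}+\frac{V}{r}=\frac{1}{r}\left[\left(rU\right)_{x}+\left(rV\right)_{r}\right]=0.$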
Consider the flow issuing from the two semi-infinitely long nozzles (see
Figure 1)
$\displaystyle\mathcal{N}_{1}=\left\\{(x,r)\in{\mathbb{R}}_{+}^{2}\left|f_{1}(r)<x<-1,\
r<R_{1}\right.\right\\},$
and
$\displaystyle\mathcal{N}_{2}=\left\\{(x,r)\in{\mathbb{R}}_{+}^{2}\left|1<x<f_{2}(r),\
r<R_{2}\right.\right\\},$
where $\displaystyle{\mathbb{R}}_{+}^{2}={\mathbb{R}}^{1}\times[0,+\infty)$,
$\displaystyle R_{1}$, $\displaystyle R_{2}>0$, $\displaystyle f_{1}(r)$ and
$\displaystyle f_{2}(r)$ are smooth functions and satisfy that
$f_{i}(r)=(-1)^{i}\infty,\quad\quad r\leq r_{i},\quad(r_{i}>0)$ (1.4)
and
$f_{i}(R_{i})=(-1)^{i},$ (1.5)
for $\displaystyle R_{i}>r_{i}$ and $\displaystyle i=1,2$. Without loss of
generality, we assume that $\displaystyle R_{1}=R_{2}=R$.
Figure 1. Two axially symmetric semi-infinitely long nozzles
For convenience, we denote the symmetric axis
$\displaystyle N_{0}=\\{(x,r)|r=0,-\infty<x<+\infty\\},$
the nozzle walls
$\displaystyle N_{1}=\\{(x,r)|x=f_{1}(r),\ r_{1}<r<R\\},\quad
N_{2}=\\{(x,r)|x=f_{2}(r),\ r_{2}<r<R\\},$
and the edge points of the nozzle walls $\displaystyle A_{1}=(-1,R)$ and
$\displaystyle A_{2}=(1,R)$.
In this paper, we consider two ideal, immiscible, irrotational fluids
$\displaystyle(U_{1},V_{1},P_{1},\rho_{1})$ and
$\displaystyle(U_{2},V_{2},P_{2},\rho_{2})$ issuing from the two semi-infinite
axisymmetric nozzles, where $\displaystyle U_{i}$, $\displaystyle V_{i}$,
$\displaystyle P_{i}$ and $\displaystyle\rho_{i}$ denote the axial velocity, the
radial velocity, the pressure and the density of fluid I ($\displaystyle i=1$)
and fluid II ($\displaystyle i=2$), respectively.
Denote by $\displaystyle\Omega_{i}$ the flow field of the $\displaystyle
i$-th fluid for $\displaystyle i=1,2$, and
$\displaystyle(U,V,P,\rho)=\left\\{\begin{array}[]{l}(U_{1},V_{1},P_{1},\rho_{1})\
\ \ \text{in}\ \ \Omega_{1},\\\ (U_{2},V_{2},P_{2},\rho_{2})\ \ \ \text{in}\ \
\Omega_{2},\end{array}\right.\text{as the two-phase fluids}.$
In this paper, we seek a contact discontinuity solution $\displaystyle(U,V,P,\rho)$
with a smooth interface $\displaystyle\Gamma$: $\displaystyle\\{x=g(r)\\}$
between the two fluids, such that $\displaystyle(U,V,P,\rho)$ is a weak solution of
(1.3) in the distributional sense and
$\displaystyle(U_{i},V_{i},P_{i},\rho_{i})$ solves the incompressible Euler
system (1.3) classically in $\displaystyle\Omega_{i}$ for $\displaystyle
i=1,2$ (see Figure 3).
Figure 2. Collision of two jets
Figure 3. Impinging outgoing jet
Then the Rankine-Hugoniot jump conditions on $\displaystyle\Gamma$ become
$-\left[\begin{array}[]{l}\rho U\\\ \rho U^{2}+P\\\ \rho UV\\\
\end{array}\right]+g^{\prime}(r)\left[\begin{array}[]{l}\rho V\\\ \rho UV\\\
\rho V^{2}+P\\\ \end{array}\right]=0,$ (1.6)
where $\displaystyle[\cdot]$ denotes the jump of a related function crossing
the interface $\displaystyle\Gamma$.
Let
$\displaystyle\mathfrak{m}_{i}=\rho_{i}\left(g^{\prime}(r)V_{i}-U_{i}\right)$
($\displaystyle i=1,2$) be the mass flux across the interface. If
$\displaystyle\mathfrak{m}_{1}=\mathfrak{m}_{2}=0$ on the interface
$\displaystyle\Gamma$, then $\displaystyle(U,V,P,\rho)$ is a contact
discontinuity, and the Rankine-Hugoniot conditions (1.6) read as
$-U_{1}+g^{\prime}(r)V_{1}=0,\ \ -U_{2}+g^{\prime}(r)V_{2}=0\ \text{and}\
P_{1}=P_{2}.$ (1.7)
The condition (1.7) implies that the normal velocities on both sides of the
interface $\displaystyle\Gamma$ vanish, while the tangential velocity may have
a nontrivial jump across $\displaystyle\Gamma$.
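For the reader's convenience, we record the elementary computation behind (1.7). In terms of $\displaystyle\mathfrak{m}_{i}$, the three components of (1.6) can be regrouped as
$-[\rho U]+g^{\prime}(r)[\rho V]=[\mathfrak{m}],\qquad-[\rho U^{2}+P]+g^{\prime}(r)[\rho UV]=[\mathfrak{m}U]-[P],\qquad-[\rho UV]+g^{\prime}(r)[\rho V^{2}+P]=[\mathfrak{m}V]+g^{\prime}(r)[P].$
Hence, once $\displaystyle\mathfrak{m}_{1}=\mathfrak{m}_{2}=0$, the first relation is automatic, and the remaining two reduce to $\displaystyle U_{i}=g^{\prime}(r)V_{i}$ (from the definition of $\displaystyle\mathfrak{m}_{i}$, since $\displaystyle\rho_{i}>0$) and $\displaystyle P_{1}=P_{2}$, which is exactly (1.7).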
Furthermore, the well-known Bernoulli’s law can be written as
$\displaystyle(U_{i},V_{i})\cdot\nabla\left(\frac{1}{2}(U_{i}^{2}+V_{i}^{2})+\frac{P_{i}}{\rho_{i}}\right)=0,\
\ \text{for}\ \ i=1,2,$
namely,
$\displaystyle\frac{U_{1}^{2}+V_{1}^{2}}{2}+\frac{P_{1}}{\rho_{1}}=\mathfrak{B}_{1}\
\ \text{and}\ \
\frac{U_{2}^{2}+V_{2}^{2}}{2}+\frac{P_{2}}{\rho_{2}}=\mathfrak{B}_{2},$
where $\displaystyle\mathfrak{B}_{1}$ and $\displaystyle\mathfrak{B}_{2}$
denote the Bernoulli constants of the two fluids, respectively. In view of
(1.7), one has $\displaystyle P_{1}=P_{2}$ on $\displaystyle\Gamma$, and therefore
$\rho_{1}\left(U_{1}^{2}+V_{1}^{2}\right)-\rho_{2}\left(U_{2}^{2}+V_{2}^{2}\right)=2\left(\rho_{1}\mathfrak{B}_{1}-\rho_{2}\mathfrak{B}_{2}\right)\triangleq\Lambda\ \ \ \text{on}\ \ \Gamma.$ (1.8)
Without loss of generality, we assume $\displaystyle\Lambda\geq 0$.
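The identity (1.8) is simply Bernoulli's law combined with $\displaystyle P_{1}=P_{2}$ on $\displaystyle\Gamma$ from (1.7): multiplying the $\displaystyle i$-th Bernoulli relation by $\displaystyle 2\rho_{i}$ and subtracting gives
$\rho_{1}\left(U_{1}^{2}+V_{1}^{2}\right)-\rho_{2}\left(U_{2}^{2}+V_{2}^{2}\right)=2\left(\rho_{1}\mathfrak{B}_{1}-\rho_{2}\mathfrak{B}_{2}\right)-2\left(P_{1}-P_{2}\right)=2\left(\rho_{1}\mathfrak{B}_{1}-\rho_{2}\mathfrak{B}_{2}\right)\ \ \text{on}\ \ \Gamma.$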
On the other hand, on the free surfaces $\displaystyle\Gamma_{1}$ and
$\displaystyle\Gamma_{2}$, the pressure is assumed to be the constant
atmospheric pressure $\displaystyle P_{at}$ (in the absence of gravity and surface
tension), namely,
$P=P_{at}\ \ \text{on $\displaystyle\Gamma_{1}\cup\Gamma_{2}$.}$ (1.9)
Here is our problem of fluid mechanics: determine an impinging outgoing jet
$\displaystyle(U,V,P,\rho)$ issuing from two nozzles
$\displaystyle\mathcal{N}_{1}$ and $\displaystyle\mathcal{N}_{2}$ with two
mass fluxes $\displaystyle M_{1}$ and $\displaystyle M_{2}$, bounded by two
free surfaces $\displaystyle\Gamma_{1}$ and $\displaystyle\Gamma_{2}$ on which
the pressure is a constant $\displaystyle P_{at}$. Furthermore, on the
interface, the Rankine-Hugoniot conditions (1.7) and (1.8) hold.
On the solid walls $\displaystyle N_{1}$ and $\displaystyle N_{2}$, the flow
satisfies the slip-boundary condition,
$(U_{i},V_{i})\cdot\vec{n}_{i}=0,\ \ \ \ \ \text{on}\ \ N_{i},$ (1.10)
where $\displaystyle\vec{n}_{i}$ is the unit outer normal of the wall
$\displaystyle N_{i}$, for $\displaystyle i=1,2$. Moreover, on the symmetry
axis $\displaystyle N_{0}$,
$V_{i}=0.$ (1.11)
Denote by $\displaystyle M_{1}$ and $\displaystyle M_{2}$ the mass fluxes in the
nozzles $\displaystyle\mathcal{N}_{1}$ and $\displaystyle\mathcal{N}_{2}$,
respectively; then
$\int_{\Sigma_{i}}(rU,rV,0)\cdot\vec{l_{i}}dS=\frac{M_{i}}{2\pi},$ (1.12)
where $\displaystyle\Sigma_{i}$ is any curve transversal to the $\displaystyle
x$-axis direction and $\displaystyle\vec{l_{i}}$ is the normal of
$\displaystyle\Sigma_{i}$ in the positive $\displaystyle x$-axis direction for
$\displaystyle i=1,2$.
### 1.1. Impinging outgoing jet problem and main results
We define the axially symmetric impinging outgoing jet problem as follows.
Axially symmetric impinging outgoing jet problem. For any given mass fluxes
$\displaystyle M_{1}>0$ and $\displaystyle M_{2}<0$ in the two semi-infinitely
long axially symmetric nozzles $\displaystyle\mathcal{N}_{1}$ and
$\displaystyle\mathcal{N}_{2}$, respectively, find an axially
symmetric impinging outgoing jet extending to infinity, such that the free stream
surfaces initiate smoothly at the edges of the nozzles and shrink to some
conical surface in the far field, a smooth interface separates the two
jets, and the pressure remains constant on the free stream surfaces
(see Figure 3).
Next, we give the definition of a solution to the impinging outgoing jet
problem.
A solution to the axially symmetric impinging outgoing jet problem. A
quintuple $\displaystyle(U,V,P,\Gamma_{1},\Gamma_{2})$ is called a solution to
the axially symmetric impinging outgoing jet problem, provided that
(1). The smooth surfaces $\displaystyle\Gamma_{1}$ and
$\displaystyle\Gamma_{2}$ are given by two functions $\displaystyle
x=g_{1}(r)\in C^{1}{((R,+\infty))}$ and $\displaystyle x=g_{2}(r)\in
C^{1}{((R,+\infty))}$ with $\displaystyle g_{1}(r)<g_{2}(r)$, and
$g_{1}(R+0)=f_{1}(R-0),\quad g_{2}(R+0)=f_{2}(R-0)\quad\text{({\it continuous
fit conditions}),}$ (1.13)
and
$g_{1}^{\prime}(R+0)=f_{1}^{\prime}(R-0),\quad
g_{2}^{\prime}(R+0)=f_{2}^{\prime}(R-0)\quad\text{({\it smooth fit
conditions})}.$ (1.14)
Moreover, there exists an asymptotic direction
$\displaystyle\nu=(\cos\theta,\sin\theta)$ with
$\displaystyle\theta\in(0,\pi)$, such that $\displaystyle g_{1}$ and
$\displaystyle g_{2}$ approach the asymptotic direction $\displaystyle\nu$
in the far field (see Figure 5), i.e.,
$\lim_{r\rightarrow\infty}\left(g_{2}(r)-g_{1}(r)\right)=0\quad\text{and}\quad\lim_{r\rightarrow\infty}g^{\prime}_{1}(r)=\lim_{r\rightarrow\infty}g^{\prime}_{2}(r)=\cot\theta,$
(1.15)
the angle $\displaystyle\theta$ is called the asymptotic deflection angle of
the impinging outgoing jet.
(2). Denote by $\displaystyle G$ the flow field bounded by the symmetric axis
$\displaystyle N_{0}$, the nozzle walls $\displaystyle N_{1},N_{2}$ and the
free boundaries $\displaystyle\Gamma_{1},\Gamma_{2}$. Then
$\displaystyle(U,V,P)\in\left(C^{1,\alpha}(G)\cap
C^{0}(\overline{G})\right)^{3}$ solves the steady incompressible Euler system
(1.3), the boundary condition (1.10), the Rankine-Hugoniot conditions (1.7)
and the mass flux conditions (1.12);
(3). The radial velocity $\displaystyle V$ is positive in the flow field and its
closure, except on the symmetric axis and the interface $\displaystyle\Gamma$,
namely, $\displaystyle V>0$ in
$\displaystyle\bar{G}\setminus\left(N_{0}\cup\Gamma\right)$;
(4). $\displaystyle P=P_{at}$ on $\displaystyle\Gamma_{1}\cup\Gamma_{2}$;
(5). The interface $\displaystyle\Gamma$ satisfies the condition (1.8).
The first result in this paper is the existence of the impinging outgoing jet
as follows.
###### Theorem 1.1.
For any given atmospheric pressure $\displaystyle P_{at}$, constant
$\displaystyle\Lambda\geq 0$, and mass fluxes $\displaystyle M_{1}>0$ and
$\displaystyle M_{2}<0$ in the two axially symmetric nozzles
$\displaystyle\mathcal{N}_{1}$ and $\displaystyle\mathcal{N}_{2}$,
respectively, there exists a solution
$\displaystyle(U,V,P,\Gamma_{1},\Gamma_{2})$ to the axially symmetric
impinging outgoing jet problem. Furthermore, there exists a $\displaystyle
C^{1}$-smooth surface $\displaystyle x=g(r)$ satisfying $\displaystyle
g_{1}(r)<g(r)<g_{2}(r)$ for $\displaystyle R<r<\infty$, which separates the
two fluids and initiates at the branching point on the symmetric axis (Figure
5). Moreover, there exists a positive constant $\displaystyle\lambda$, such
that
$r(g(r)-g_{1}(r))\rightarrow\frac{M_{1}}{2\pi\sqrt{\rho_{1}(\Lambda+\lambda)}\sin\theta}\ \ \text{and}\ \ r(g(r)-g_{2}(r))\rightarrow\frac{M_{2}}{2\pi\sqrt{\rho_{2}\lambda}\sin\theta}\quad\quad\text{as}\quad r\rightarrow+\infty.$ (1.16)
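The decay rate in (1.16) can be anticipated by a heuristic mass balance (this is not the proof): by (1.21) below, the fluid I layer between $\displaystyle\Gamma$ and $\displaystyle\Gamma_{1}$ moves downstream with speed $\displaystyle\sqrt{(\Lambda+\lambda)/\rho_{1}}$ in the direction $\displaystyle\nu=(\cos\theta,\sin\theta)$, and at height $\displaystyle r$ its cross-section normal to $\displaystyle\nu$ has width approximately $\displaystyle(g(r)-g_{1}(r))\sin\theta$, so conservation of the mass flux gives
$\frac{M_{1}}{2\pi}\approx\rho_{1}\sqrt{\frac{\Lambda+\lambda}{\rho_{1}}}\ r\left(g(r)-g_{1}(r)\right)\sin\theta,\quad\text{i.e.,}\quad r\left(g(r)-g_{1}(r)\right)\rightarrow\frac{M_{1}}{2\pi\sqrt{\rho_{1}(\Lambda+\lambda)}\sin\theta},$
and the second limit in (1.16) follows in the same way from the fluid II layer.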
We would like to give the following comments on the existence theorem.
###### Remark 1.1.
One of the key points in this work is the appearance of the interface between the
two fluids, which is also a free boundary and is determined by the solution
itself. In this paper, the impinging outgoing jets possess a smooth surface
separating the two immiscible fluids, which intersects the symmetric axis at a
unique point, called the _branching point_. However, the appearance of the
interface brings many essential mathematical difficulties to the free boundary
problem. The first one is the non-trivial jump of the velocity field on the
interface (see (1.8)), which forces us to seek a non-smooth solution in the
whole fluid field. The second one is that the interface is the common boundary
of the two fluids and is totally free; we shall define the interface as a level
set of the stream function and show that it is indeed a smooth curve. The third
one is the regularity of the two-phase fluids near the branching point.
Figure 4. Axisymmetric impinging outgoing jet flow in cylindrical coordinates
Figure 5. Impinging outgoing jet and the interface $\displaystyle\Gamma$
###### Remark 1.2.
There are many numerical results on impinging free jets in the absence of
rigid nozzle walls, such as unsymmetrically impinging jets in [29], impinging
free jets in [28], and compressible impinging jets in [10]. However, here we have
to consider the geometry of both the solid boundaries and the free boundaries,
and one of the main difficulties is to verify the continuous fit and smooth fit
conditions. In the present work, an essential point is that we can choose a
suitable pair of parameters $\displaystyle(\lambda,\theta)$ such that the
continuous fit conditions are fulfilled. In other words, the parameters
$\displaystyle\lambda$ and $\displaystyle\theta$ are determined by the
continuous fit conditions, which is the main difference from the analysis of
impinging free jets without rigid boundaries. Therefore, we first solve the
free boundary problem for any $\displaystyle\lambda$ and
$\displaystyle\theta$, and then show the existence of a suitable pair of
parameters $\displaystyle(\lambda,\theta)$ that guarantees the continuous fit
conditions of the impinging outgoing jet. Furthermore, we can show that the
continuous fit conditions imply the smooth fit conditions.
Theorem 1.1 shows that there exists a pair of parameters
$\displaystyle(\lambda,\theta)$ guaranteeing the existence of the axially
symmetric impinging outgoing jet. However, to the best of our knowledge, the
uniqueness of the jet with two free boundaries is totally open. Next, for
$\displaystyle\Lambda=0$, we give uniqueness results for the axially
symmetric impinging outgoing jet; the idea is borrowed from the recent work [11]
on the uniqueness of the asymmetric incompressible jet.
###### Theorem 1.2.
(Uniqueness of the axially symmetric impinging outgoing jet) (1) Given any
parameters $\displaystyle(\lambda,\theta)$ such that the continuous fit
conditions (1.13) hold, the axially symmetric impinging outgoing jet
$\displaystyle(U,V,P,\Gamma_{1},\Gamma_{2})$ established in Theorem 1.1 is
unique.
(2) Suppose that there exist two pairs of parameters
$\displaystyle(\lambda,\theta)$ and $\displaystyle(\lambda,\tilde{\theta})$
such that the continuous fit conditions (1.13) for the axially symmetric
impinging outgoing jet hold; then $\displaystyle\theta=\tilde{\theta}$.
Next, we give the asymptotic behaviors and the decay rate of the impinging
outgoing jet in the far field.
###### Theorem 1.3.
The impinging outgoing jet flow $\displaystyle(U,V,P,\Gamma_{1},\Gamma_{2})$
established in Theorem 1.1 satisfies the following asymptotic behavior in far
fields,
$(U(x,r),V(x,r),P(x,r))\rightarrow\left(\frac{M_{1}}{\pi\rho_{1}r_{1}^{2}},0,\frac{\lambda+\Lambda}{2\rho_{1}}+P_{at}-\frac{M_{1}^{2}}{2\rho_{1}\pi^{2}r_{1}^{4}}\right),$
(1.17)
and
$\nabla U\rightarrow 0,\ \ \nabla V\rightarrow 0,\ \ \nabla P\rightarrow 0,$
(1.18)
as $\displaystyle x\rightarrow-\infty$, in any compact subset of
$\displaystyle(0,r_{1})$ and
$(U(x,r),V(x,r),P(x,r))\rightarrow\left(\frac{M_{2}}{\pi\rho_{2}r_{2}^{2}},0,\frac{\lambda}{2\rho_{2}}+P_{at}-\frac{M_{2}^{2}}{2\rho_{2}\pi^{2}r_{2}^{4}}\right),$
(1.19)
and
$\nabla U\rightarrow 0,\ \ \nabla V\rightarrow 0,\ \ \nabla P\rightarrow 0,$
(1.20)
as $\displaystyle x\rightarrow+\infty$, in any compact subset of
$\displaystyle(0,r_{2})$, and in the downstream,
$(U(x,r),V(x,r),P(x,r))\rightarrow\left(\sqrt{\frac{\Lambda+\lambda}{\rho_{1}}}\cos\theta,\sqrt{\frac{\Lambda+\lambda}{\rho_{1}}}\sin\theta,P_{at}\right),$
(1.21)
uniformly in any compact subset of $\displaystyle\Omega_{1}$ as $\displaystyle
r\rightarrow+\infty$, and
$(U(x,r),V(x,r),P(x,r))\rightarrow\left(\sqrt{\frac{\lambda}{\rho_{2}}}\cos\theta,\sqrt{\frac{\lambda}{\rho_{2}}}\sin\theta,P_{at}\right),$
(1.22)
uniformly in any compact subset of $\displaystyle\Omega_{2}$ as $\displaystyle
r\rightarrow+\infty$, and
$\nabla U\rightarrow 0,\ \ \nabla V\rightarrow 0,\ \ \nabla P\rightarrow 0,$
(1.23)
uniformly in any compact subset of $\displaystyle\Omega_{1}\cup\Omega_{2}$ as
$\displaystyle r\rightarrow+\infty$.
Furthermore, for any $\displaystyle\alpha\in(0,2)$, one has
$r^{\alpha}\left(\left|U_{1}(x,r)-\sqrt{\frac{\Lambda+\lambda}{\rho_{1}}}\cos\theta\right|+\left|V_{1}(x,r)-\sqrt{\frac{\Lambda+\lambda}{\rho_{1}}}\sin\theta\right|\right)\rightarrow
0,$ (1.24)
$r^{\alpha}\left(\left|U_{2}(x,r)-\sqrt{\frac{\lambda}{\rho_{2}}}\cos\theta\right|+\left|V_{2}(x,r)-\sqrt{\frac{\lambda}{\rho_{2}}}\sin\theta\right|\right)\rightarrow
0,$ (1.25)
as $\displaystyle r\rightarrow+\infty$.
###### Remark 1.3.
One of the main differences between the impinging outgoing jet in the
two-dimensional case and in the axially symmetric case is that the
two-dimensional outgoing jet possesses a uniform positive width in the far
field, whereas the distance between the free streamlines goes to zero in the
downstream in the axially symmetric case. Here, we have to establish decay
estimates of the outgoing jets and the free boundaries in the far field.
Indeed, the facts (1.16) and (1.24) give the decay rates of the velocity field
and of the distance between the two free stream surfaces in the downstream. In
particular, (1.16) implies that the optimal decay rate of the distance between
the two free stream surfaces is $\displaystyle\frac{1}{r}$ in the downstream.
### 1.2. Motivation and history of the problem
The motivation to investigate the impinging outgoing jets from two nozzles
comes from Chapter V, $\displaystyle\S$ 5 of the classical book [9] by G.
Birkhoff and E. H. Zarantonello, in which the impinging outgoing jets from two
plane symmetric cylinders were considered. Except for some simple channel
geometries, the impinging problem of two jets cannot be solved analytically,
as was shown in the monographs [9] and [27]. Here, we consider the general
case in which the impinging jet issues from two axially symmetric nozzles with
variable cross-section.
Figure 6. Collision of two jets (Figure 16.7 in [24])
Another motivation to investigate the impinging outgoing jets issuing from two
nozzles comes from Chapter 16 of the famous survey [24]. The physical problem
is also related to the shaped charge problem in [13]. As mentioned on Page
152 of [24], “… _we can formulate this problem as a collision of two jets, say a
garden hose and a fire hose; see Figure 16.7._” (see Figure 6), and A.
Friedman proposed the open problem:
“ _Problem (1). Analyze the axially symmetric free boundary problem associated
with the flow in Figure 16.7 in the incompressible case._ ”
On the other hand, there are many numerical results on this impinging outgoing
jet problem, such as the incompressible problem for an arbitrary polygonal
nozzle in [14], the incompressible jet with gravity in [15], and so on.
Moreover, Hureau and Weber in [28] considered the impinging of two
incompressible ideal free jets (in the absence of rigid nozzle walls)
numerically, and some existence results on two compressible free jets were
investigated in [10]. We would also like to mention the numerical result on
asymmetric impinging free jets in [29].
The study of liquid jets issuing from containers is centuries old. As far back
as 1868, Helmholtz and Kirchhoff introduced the classical theory of free
streamlines in two-dimensional jets, in which steady irrotational flows of an
ideal incompressible fluid, bounded by nozzle walls and free streamlines, were
investigated. The following decades saw extensions to a great many different
kinds of two-dimensional flows, on the basis of complex analysis methods,
by Planck, Joukowsky, Réthy, Levi-Civita, Greenhill and others.
Some substantial post-war monographs are those of Birkhoff-Zarantonello [9],
Gurevich [27], and Milne-Thomson [30]. In the two-dimensional irrotational case,
by means of a generalized Schwarz-Christoffel transformation, combined with a
Fourier technique that reformulates the free boundary problem as a nonlinear
integro-differential equation, some existence results on jets in special
nozzles have been obtained. However, two-dimensional jets have received most of
the attention in the existence theory, and the amount of work on axisymmetric
jets has remained limited. The reason is that the complex analysis method
adapted to two-dimensional jets is ineffective in the axially symmetric case. A
first breakthrough on the axially symmetric free streamline was due to
Garabedian, Lewy and Schiffer [25] in 1952, in which some existence results on
the axially symmetric cavity were established by a variational approach.
Furthermore, Alt, Caffarelli and Friedman developed the variational method to
deal with free streamline problems in their elegant works [1, 8]. Based on
their framework, some well-posedness results on the axially symmetric jet in
[4], the asymmetric jet in [2], the jet with gravity in [3], and jets with two
fluids in [6, 7] have been established. In this paper, some fundamental ideas
in the existence theory are still borrowed from the variational method in [1].
Recently, under the a priori assumption that the fluid is smooth across the
interface $\displaystyle\Gamma$ (i.e., $\displaystyle\Lambda=0$), some
existence results for incompressible plane symmetric impinging outgoing jets
were obtained in [17]. In fact, the interface $\displaystyle\Gamma$ is a
contact discontinuity and the jump $\displaystyle\Lambda$ is in general
non-trivial and non-zero. As mentioned before, we have to investigate the case
of a non-zero jump $\displaystyle\Lambda\neq 0$, which is one of the main
differences between this paper and the previous paper [17]. As a first step
towards the original problem on impinging outgoing jets with non-trivial jump,
Wang and Xiang in [31] considered a toy model of incompressible fluids issuing
from two infinitely long coaxial symmetric nozzles without jet free boundaries
and established some properties of the contact discontinuity between the two
fluids. Therefore, the objective of the present paper is to establish the
well-posedness theory for the impinging outgoing jet problem and to solve the
open problem proposed by A. Friedman in 1989.
### 1.3. Methodology
For the physical problem considered here, the main difference and difficulty
stem from the shape of the nozzle walls: we have to find a mechanism such that
the free boundaries of the jets connect smoothly at the edges of the nozzle
walls (the so-called _continuous fit and smooth fit conditions_). Another main
difficulty is how to analyze the interface with contact discontinuity between
the two incompressible ideal fluids.
We would like to comment on the main ideas of the proof as follows. The
variational method developed by H. Alt, L. Caffarelli and A. Friedman in the
1980s has been shown to be powerful and effective in solving the free
streamline theory for rather general models. In the two-dimensional case, the
stream function is harmonic in the fluid domain, while in the axially symmetric
case it solves a linear elliptic equation with a lower-order term, and we have
to deal with the regularity near the symmetric axis. This is the first
difficulty in this paper. The second one is that the distance between the two
free boundaries converges to a positive constant in the two-dimensional case
(see [17]), whereas the distance goes to zero for the axially symmetric jet
here. Therefore, we cannot compare with some uniform special flow to derive the
fundamental properties of the free boundaries as in the 2D case, such as the
vanishing and non-vanishing of the free boundary and the asymptotic behavior of
the jet in the far field. Here, our strategy is to first establish the decay
rate of the distance between the two free boundaries and of the impinging
outgoing jet in the far field, and then to obtain the desired properties via
rescaling arguments. The third principal difficulty in this paper centers on
how to choose suitable parameters $\displaystyle(\lambda,\theta)$ to ensure the
continuous fit conditions for the impinging outgoing jet. Some continuous
dependence relationships and monotonicity properties of the impinging outgoing
jets with respect to the parameters $\displaystyle(\lambda,\theta)$ are
established and guarantee that this can be fulfilled. The fourth difficulty
here is the occurrence of three free boundaries $\displaystyle\Gamma_{1}$,
$\displaystyle\Gamma_{2}$ and $\displaystyle\Gamma$, which is the main
difference from the previous results, such as [17, 31]. In particular, the
solution is not smooth across the interface $\displaystyle\Gamma$, which is a
contact discontinuity between the two fluids. To our knowledge, this is the
first well-posedness work on jet flow problems with three free boundaries.
The remainder of this paper is organized as follows. First, we formulate the
free boundary value problem for the physical problem in Section 2. The
solvability of the free boundary value problem follows from the standard
variational approach, which was developed by Alt, Caffarelli and Friedman in
the celebrated works [1, 2, 4]. Moreover, some properties of the free
boundaries will be obtained, and we can verify the continuous fit and smooth
fit conditions for suitable parameters $\displaystyle\lambda$ and
$\displaystyle\theta$. Additionally, we will investigate the existence and
properties of the interface between the two fluids. Hence, we obtain the
existence of the impinging outgoing jet in Section 3. In Section 4, we give
the uniqueness of the impinging outgoing jet and of the parameters. In Section
5, the asymptotic behavior of the impinging outgoing jet is obtained via a
blow-up argument, which has been used to deal with subsonic compressible
flows in infinitely long nozzles in [16, 18, 19, 20, 21, 22, 32, 33, 34]. Some
results on the variational problem are given in the Appendix.
## 2. Mathematical settings of the free boundary problem
Due to the continuity equation in (1.3), one can introduce stream functions
$\displaystyle\Psi_{i}(x,r)$ ($\displaystyle i=1,2$) such that
$\partial_{x}\Psi_{i}=-r\rho_{i}V_{i},\ \ \ \
\partial_{r}\Psi_{i}=r\rho_{i}U_{i}.$ (2.1)
For convenience of the analysis, we introduce the scaled stream
function $\displaystyle\psi_{i}=\frac{\Psi_{i}}{\sqrt{\rho_{i}}}$
($\displaystyle i=1,2$), and denote $\displaystyle\psi$ as
$\displaystyle\psi=\left\\{\begin{array}[]{l}\psi_{1}\ \ \ \text{in}\ \
\Omega_{1},\\\ \psi_{2}\ \ \ \text{in}\ \ \Omega_{2}.\end{array}\right.$
This together with the irrotational condition gives
$\Delta\psi-\frac{1}{r}\frac{\partial\psi}{\partial r}=0\ \ \text{in
$\displaystyle\Omega_{1}\cup\Omega_{2}$}.$ (2.2)
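A minimal derivation of (2.2): by (2.1), $\displaystyle U_{i}=\frac{1}{\sqrt{\rho_{i}}r}\frac{\partial\psi_{i}}{\partial r}$ and $\displaystyle V_{i}=-\frac{1}{\sqrt{\rho_{i}}r}\frac{\partial\psi_{i}}{\partial x}$, so the continuity equation holds automatically (mixed partial derivatives commute), and the irrotational condition $\displaystyle\partial_{x}V_{i}-\partial_{r}U_{i}=0$ becomes
$0=\partial_{x}V_{i}-\partial_{r}U_{i}=-\frac{1}{\sqrt{\rho_{i}}\,r}\left(\frac{\partial^{2}\psi_{i}}{\partial x^{2}}+\frac{\partial^{2}\psi_{i}}{\partial r^{2}}-\frac{1}{r}\frac{\partial\psi_{i}}{\partial r}\right),$
which is (2.2).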
Hereafter, $\displaystyle\Omega_{i}$ denotes the flow field bounded by
the nozzle wall $\displaystyle N_{i}$, the symmetric axis $\displaystyle
N_{0}$, the interface $\displaystyle\Gamma$ and the free boundary
$\displaystyle\Gamma_{i}$ ($\displaystyle i=1,2$).
In this paper, we seek an axially symmetric impinging outgoing jet
flow with positive radial velocity, and thus denote by $\displaystyle\Omega$
the possible flow field of the impinging outgoing jet, bounded by
$\displaystyle N_{i}$, $\displaystyle N_{0}$ and $\displaystyle
L_{i}$ ($\displaystyle i=1,2$) (see Figure 7), where
$\displaystyle L_{1}=\\{(x,r)\mid r=R,x<-1\\}\ \ \text{and}\ \
L_{2}=\\{(x,r)\mid r=R,x>1\\}.$
Figure 7. The possible flow field $\displaystyle\Omega$
Moreover, the nozzle walls $\displaystyle N_{1}\cup N_{2}$ and the free boundaries
$\displaystyle\Gamma_{1}\cup\Gamma_{2}$ are streamlines; thus
$\displaystyle\psi$ remains constant on these boundaries. Without loss of
generality, we impose $\displaystyle\psi=m_{1}$ on $\displaystyle
N_{1}\cup\Gamma_{1}$ and $\displaystyle\psi=m_{2}$ on $\displaystyle
N_{2}\cup\Gamma_{2}$. Thus, the free boundaries $\displaystyle\Gamma_{1}$ and
$\displaystyle\Gamma_{2}$ can be defined as
$\displaystyle\Gamma_{1}=\Omega\cap\partial\left\\{\psi<m_{1}\right\\},\ \
\Gamma_{2}=\Omega\cap\partial\left\\{\psi>m_{2}\right\\},$
respectively, where $\displaystyle\Omega$ is the possible flow field defined
above and $\displaystyle m_{i}=\frac{M_{i}}{2\pi\sqrt{\rho_{i}}}$
($\displaystyle i=1,2$). The constant pressure boundary condition on the free
boundaries can be rewritten as
$\left|\frac{1}{r}\frac{\partial\psi}{\partial\nu}\right|=\sqrt{\Lambda+\lambda}\
\ \text{on}~{}~{}\Gamma_{1},\ \ \ \ \
\left|\frac{1}{r}\frac{\partial\psi}{\partial\nu}\right|=\sqrt{\lambda}\ \
\text{on}~{}~{}\Gamma_{2},$ (2.3)
where $\displaystyle\nu$ is the unit outward normal to the free stream surfaces
$\displaystyle\Gamma_{1}$ and $\displaystyle\Gamma_{2}$.
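The constants in (2.3) can be read off from Bernoulli's law, assuming the normalization $\displaystyle\lambda=2\left(\rho_{2}\mathfrak{B}_{2}-P_{at}\right)$ (so that $\displaystyle\Lambda+\lambda=2\left(\rho_{1}\mathfrak{B}_{1}-P_{at}\right)$ by (1.8)): since $\displaystyle\psi$ is constant on $\displaystyle\Gamma_{i}$, one has $\displaystyle\left|\nabla\psi_{i}\right|^{2}=\left|\frac{\partial\psi_{i}}{\partial\nu}\right|^{2}=\rho_{i}r^{2}\left(U_{i}^{2}+V_{i}^{2}\right)$ by (2.1), and hence, using $\displaystyle P_{i}=P_{at}$ on $\displaystyle\Gamma_{i}$,
$\left|\frac{1}{r}\frac{\partial\psi_{1}}{\partial\nu}\right|^{2}=\rho_{1}\left(U_{1}^{2}+V_{1}^{2}\right)=2\left(\rho_{1}\mathfrak{B}_{1}-P_{at}\right)=\Lambda+\lambda\ \ \text{on}\ \ \Gamma_{1},$
and similarly $\displaystyle\left|\frac{1}{r}\frac{\partial\psi_{2}}{\partial\nu}\right|^{2}=\lambda$ on $\displaystyle\Gamma_{2}$, which is (2.3).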
Hence, we formulate the boundary value problem for the stream function
$\displaystyle\psi$:
$\left\\{\begin{array}[]{ll}\Delta\psi-\frac{1}{r}\frac{\partial\psi}{\partial
r}=0,&\text{in}\ \ \ \Omega_{1}\cup\Omega_{2},\\\
\left|\frac{1}{r}\frac{\partial\psi}{\partial\nu}\right|=\sqrt{\Lambda+\lambda},&\text{on}\
\ \ \Gamma_{1},\ \ \ \ \
\left|\frac{1}{r}\frac{\partial\psi}{\partial\nu}\right|=\sqrt{\lambda},\ \ \
\ \text{on}\ \ \ \Gamma_{2},\\\
\left|\frac{\nabla\psi^{+}}{r}\right|^{2}-\left|\frac{\nabla\psi^{-}}{r}\right|^{2}=\Lambda,&\text{on}\
\ \ \Gamma,\\\ \psi=m_{1},&\text{on}\ \ \ N_{1}\cup\Gamma_{1},\ \ \ \ \
\psi=m_{2},\ \ \ \text{on}\ \ N_{2}\cup\Gamma_{2},\\\ \psi=0,&\text{on}\ \
N_{0}\cup\Gamma,\end{array}\right.$ (2.4)
where $\displaystyle\psi^{\pm}(X_{0})$ ($\displaystyle X_{0}\in\Gamma$)
denotes the limit of $\displaystyle\psi(X)$ with $\displaystyle
X\in\\{\pm\psi>0\\}$, as $\displaystyle X\rightarrow X_{0}$.
We would like to emphasize that the undetermined constants
$\displaystyle\lambda$ and $\displaystyle\theta$ are regarded as two
parameters in solving the free boundary problem. We will solve the free
boundary problem for any $\displaystyle\lambda$ and $\displaystyle\theta$, and
then show the existence of suitable parameters guaranteeing the continuous fit
conditions.
## 3. Existence of the impinging outgoing jets
### 3.1. Truncated variational problem
In order to solve the free boundary value problem (2.4), we first introduce
some notation and two auxiliary functions. Define the domain
$\displaystyle D$ as
Next, we define two bounded functions $\displaystyle\Phi_{1}$ and
$\displaystyle\Phi_{2}$ as follows,
$\displaystyle\text{$\displaystyle\Delta\Phi_{1}-\frac{1}{r}\frac{\partial\Phi_{1}}{\partial
r}=0$ in $\displaystyle D$, and $\displaystyle m_{2}<\Phi_{1}<m_{1}$ in
$\displaystyle D$},$
$\displaystyle\Phi_{1}(x,r)=\left\\{\begin{array}[]{ll}0,&\text{if}\ \ (x,r)\in N_{0},\\\ m_{2},&\text{if}\ \ r_{2}<r\leq R\ \text{and}\ (x,r)\ \text{lies to the right of}\ N_{2},\\\ m_{1},&\text{if}\ \ r_{1}<r\leq R\ \text{and}\ (x,r)\ \text{lies to the left of}\ N_{1},\\\ m_{1},&\text{if}\ \ (x,r)\in\Omega\cap\left\\{r\geq R\right\\},\end{array}\right.$ (3.1)
and
$\displaystyle\text{$\displaystyle\Delta\Phi_{2}-\frac{1}{r}\frac{\partial\Phi_{2}}{\partial
r}=0$ in $\displaystyle D$, $\displaystyle m_{2}<\Phi_{2}<m_{1}$ in
$\displaystyle D$},$
$\displaystyle\Phi_{2}(x,r)=\left\\{\begin{array}[]{ll}0,&\text{if}\ \ (x,r)\in N_{0},\\\ m_{2},&\text{if}\ \ r_{2}<r\leq R\ \text{and}\ (x,r)\ \text{lies to the right of}\ N_{2},\\\ m_{1},&\text{if}\ \ r_{1}<r\leq R\ \text{and}\ (x,r)\ \text{lies to the left of}\ N_{1},\\\ m_{2},&\text{if}\ \ (x,r)\in\Omega\cap\left\\{r\geq R\right\\}.\end{array}\right.$ (3.2)
We introduce the admissible set as
$\displaystyle K=\left\\{\psi\in
H_{loc}^{1}(\Omega)|~{}\Phi_{2}\leq\psi\leq\Phi_{1}\right\\},$
set $\displaystyle e=\left(-\sin\theta,\cos\theta\right)$ with
$\displaystyle\theta\in[0,\pi]$, and define the functional
$J_{\lambda,\theta}(\psi)=\int_{\Omega}r\left|\frac{\nabla\psi}{r}-\left(\sqrt{\Lambda+\lambda}\chi_{\\{0<\psi<m_{1}\\}}+\sqrt{\lambda}\chi_{\\{m_{2}<\psi\leq
0\\}}\right)e\right|^{2}dxdr.$ (3.3)
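Formally, the functional (3.3) encodes the problem (2.4). For any test function $\displaystyle\varphi\in C_{c}^{\infty}\left(\Omega\cap\\{0<\psi<m_{1}\\}\right)$, the characteristic functions in (3.3) are unchanged under small perturbations, and since $\displaystyle e$ is a constant vector, the first variation gives
$0=\left.\frac{d}{dt}\right|_{t=0}J_{\lambda,\theta}(\psi+t\varphi)=2\int_{\Omega}\left(\frac{\nabla\psi\cdot\nabla\varphi}{r}-\sqrt{\Lambda+\lambda}\,e\cdot\nabla\varphi\right)dX=-2\int_{\Omega}\text{div}\left(\frac{\nabla\psi}{r}\right)\varphi\,dX,$
namely $\displaystyle\Delta\psi-\frac{1}{r}\frac{\partial\psi}{\partial r}=0$ in $\displaystyle\\{0<\psi<m_{1}\\}$, and similarly in $\displaystyle\\{m_{2}<\psi<0\\}$, while the gradient conditions in (2.4) arise on the free boundaries, where the characteristic functions jump.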
Since the functional $\displaystyle J_{\lambda,\theta}$ may be infinite for
$\displaystyle\psi\in K$, we have to truncate the possible flow field
$\displaystyle\Omega$ and formulate truncated problems as follows.
For any $\displaystyle\mu>1$ and $\displaystyle i=1,2$, we define
$\displaystyle r_{i,\mu}=\min\left\\{r\right|(-1)^{i}\mu=f_{i}(r)\\},\quad\
H_{i,\mu}=\left\\{\left((-1)^{i}\mu,r\right)|~{}0\leq r\leq
r_{i,\mu}\right\\},$ (3.4) $\displaystyle
N_{0,\mu}=N_{0}\cap\\{-\mu<x<\mu\\},\ \text{and}\
N_{i,\mu}=\left\\{(x,r)|~{}x=f_{i}(r),r_{i,\mu}<r\leq R\right\\}.$
Moreover, we introduce a cutoff domain $\displaystyle\Omega_{\mu}$ (see Figure
8) as
$\text{$\displaystyle\Omega_{\mu}$ is bounded by $\displaystyle N_{i,\mu}$,
$\displaystyle L_{i}$, $\displaystyle N_{0,\mu}$ and $\displaystyle
H_{i,\mu}$},$ (3.5)
and denote
$\displaystyle D_{\mu}=\Omega_{\mu}\cap\left\\{r<R\right\\}.$
Figure 8. The truncated domain $\displaystyle\Omega_{\mu}$
We also introduce an admissible set
$\displaystyle K_{\mu}=\left\\{\psi\in
K\left|\psi=\frac{m_{1}}{r_{1,\mu}^{2}}r^{2}~{}\text{on}~{}H_{1,\mu},\right.~{}~{}\psi=\frac{m_{2}}{r_{2,\mu}^{2}}r^{2}~{}\text{on}~{}H_{2,\mu}\right\\},$
and an auxiliary functional
$\displaystyle
J_{\lambda,\theta,\mu}(\psi)=\int_{\Omega_{\mu}}r\left|\frac{\nabla\psi}{r}-\left(\sqrt{\Lambda+\lambda}\chi_{\\{0<\psi<m_{1}\\}}+\sqrt{\lambda}\chi_{\\{m_{2}<\psi\leq
0\\}}\right)e\right|^{2}dX,\ \ \psi\in K_{\mu},$
where $\displaystyle\chi_{E}$ is the indicator function of the set
$\displaystyle E$. Hereafter, we denote $\displaystyle dX=dxdr$ and
$\displaystyle X=(x,r)$ for simplicity.
The truncated variational problem ($\displaystyle P_{\lambda,\theta,\mu}$):
Find a $\displaystyle\psi_{\lambda,\theta,\mu}\in K_{\mu}$, such that
$J_{\lambda,\theta,\mu}(\psi_{\lambda,\theta,\mu})=\mathop{\min}\limits_{\psi\in
K_{\mu}}J_{\lambda,\theta,\mu}(\psi).$ (3.6)
Furthermore, the free boundaries of the truncated variational problem
($\displaystyle P_{\lambda,\theta,\mu}$) are defined as follows.
###### Definition 3.1.
The set
$\displaystyle\Gamma_{1,\mu}=\Omega_{\mu}\cap\partial\left\\{\psi_{\lambda,\theta,\mu}<m_{1}\right\\},$
is called the left free boundary, and
$\displaystyle\Gamma_{2,\mu}=\Omega_{\mu}\cap\partial\left\\{\psi_{\lambda,\theta,\mu}>m_{2}\right\\},$
is called the right free boundary.
Furthermore, define
$\displaystyle\Gamma_{\mu}=\Omega_{\mu}\cap\\{\psi_{\lambda,\theta,\mu}=0\\},$
to be the interface separating the two fluids.
### 3.2. Existence of minimizer to the truncated variational problem
First, we give the existence of the minimizer to the truncated variational
problem.
###### Proposition 3.1.
For any $\displaystyle\lambda>0$, $\displaystyle\theta\in[0,\pi]$ and
$\displaystyle\mu>1$, there exists a minimizer
$\displaystyle\psi_{\lambda,\theta,\mu}\in K_{\mu}$ to the truncated
variational problem ($\displaystyle P_{\lambda,\theta,\mu}$).
###### Proof.
Due to Theorem 1.3 in [1], it suffices to construct a function
$\displaystyle\psi_{0}\in K_{\mu}$ such that $\displaystyle
J_{\lambda,\theta,\mu}(\psi_{0})<+\infty$.
Case 1: $\displaystyle\theta\in(0,\pi)$.
Indeed, take some sufficiently large
$\displaystyle R_{0}>\max\left\\{1,\frac{m_{1}}{\sqrt{\Lambda+\lambda}(R+1)\sin\theta}-(R+1)\cot\theta,-\frac{m_{2}}{\sqrt{\lambda}(R+1)\sin\theta}+(R+1)\cot\theta\right\\},$
and let $\displaystyle\overline{\psi}$ be a smooth function such that
$\displaystyle\psi_{0}\in K_{\mu}$, where $\displaystyle\psi_{0}$ is defined in
$\displaystyle\Omega_{\mu}$ as follows:
$\displaystyle\psi_{0}(X)=\left\\{\begin{array}[]{ll}\min\left\\{\max\left\\{\sqrt{\Lambda+\lambda}r\left(r\cos\theta-x\sin\theta\right),0\right\\},m_{1}\right\\},&\text{if}\
\ r\geq R+1,\ \ r\cos\theta-x\sin\theta\geq 0,\\\
\max\left\\{\min\left\\{\sqrt{\lambda}r\left(r\cos\theta-x\sin\theta\right),0\right\\},m_{2}\right\\},&\text{if}\
\ r\geq R+1,\ \ r\cos\theta-x\sin\theta\leq 0,\\\ m_{1},&\text{if}\ \ x\leq-
R_{0},\ \ R\leq r\leq R+1,\\\ m_{2},&\text{if}\ \ x\geq R_{0},\ \ R\leq r\leq
R+1,\\\ \overline{\psi}(X),&\text{if
$\displaystyle(x,r)\in\Omega_{\mu,R_{0}}$},\\\
\eta(x)\frac{m_{1}}{r_{1,\mu}^{2}}r^{2}+(1-\eta(x))\frac{m_{2}}{r_{2,\mu}^{2}}r^{2},&\text{if
$\displaystyle(x,r)\in\Omega^{\prime}_{\mu}$}.\end{array}\right.$
Here,
$\displaystyle\Omega^{\prime}_{\mu}=\Omega_{\mu}\cap\\{r\leq\min\\{r_{1,\mu},r_{2,\mu}\\}\\}$,
and $\displaystyle\eta(x)$ is a cut-off function satisfying
$\eta(x)=1\ \ \text{for}\ \ x\leq-\mu,\ \ \eta(x)=\frac{\mu-x}{2\mu}\ \
\text{for}\ \ -\mu\leq x\leq\mu,\ \ \eta(x)=0\ \ \text{for}\ \ x\geq\mu,$
(3.7)
and
$\displaystyle\Omega_{\mu,R_{0}}=\Omega_{\mu}\cap\left\\{\min\\{r_{1,\mu},r_{2,\mu}\\}\leq
r\leq R\right\\}\cup\left\\{-R_{0}\leq x\leq R_{0},R\leq r\leq R+1\right\\}.$
It is easy to check that $\displaystyle
J_{\lambda,\theta,\mu}(\psi_{0})<+\infty$.
Case 2: $\displaystyle\theta=0$ or $\displaystyle\theta=\pi$.
Without loss of generality, assume $\displaystyle\theta=0$. It suffices to
define a function $\displaystyle\psi_{0}(X)$ as follows. Set
$\displaystyle\tilde{\Omega}_{\mu,R_{0}}=\Omega_{\mu}\cap\left\\{\min\\{r_{1,\mu},r_{2,\mu}\\}\leq
r\leq R\right\\}\cup\left\\{-2\leq x\leq 2,R\leq r\leq R_{0}\right\\},$
for some sufficiently large $\displaystyle
R_{0}>\sqrt{R^{2}+\frac{2m_{1}}{\sqrt{\Lambda+\lambda}}-\frac{2m_{2}}{\sqrt{\lambda}}}$,
and define a function $\displaystyle\psi_{0}$ as
$\displaystyle\psi_{0}(X)=\left\\{\begin{array}[]{ll}\frac{\sqrt{\lambda}(r^{2}-R^{2})}{2}+m_{2},&\text{if}\
\ x\geq 2,\ \ R\leq r\leq\sqrt{R^{2}-\frac{2m_{2}}{\sqrt{\lambda}}},\\\
\min\left\\{\frac{\sqrt{\Lambda+\lambda}}{2}\left(r^{2}+\frac{2m_{2}}{\sqrt{\lambda}}-R^{2}\right),m_{1}\right\\},&\text{if}\
\ x\geq 2,\ \ r\geq\sqrt{R^{2}-\frac{2m_{2}}{\sqrt{\lambda}}},\\\
m_{1},&\text{if}\ \ x\leq 2,\ \ r\geq R_{0},\\\ m_{1},&\text{if}\ \ x\leq-2,\
\ R\leq r\leq R_{0},\\\ \overline{\psi}(X),&\text{if}\ \
(x,r)\in\tilde{\Omega}_{\mu,R_{0}},\\\
\eta(x)\frac{m_{1}}{r_{1,\mu}^{2}}r^{2}+(1-\eta(x))\frac{m_{2}}{r_{2,\mu}^{2}}r^{2},&\text{if
$\displaystyle(x,r)\in\Omega_{\mu}\cap\left\\{r\leq\min\\{r_{1,\mu},r_{2,\mu}\\}\right\\}$},\end{array}\right.$
where $\displaystyle\eta(x)$ is defined as in (3.7), and
$\displaystyle\overline{\psi}$ is a smooth function such that
$\displaystyle\psi_{0}\in K_{\mu}$.
Therefore, we finish the proof of Proposition 3.1. ∎
Next, we will obtain the regularity of the minimizer.
###### Proposition 3.2.
Let $\displaystyle\psi_{\lambda,\theta,\mu}$ be a minimizer to the truncated
variational problem ($\displaystyle P_{\lambda,\theta,\mu}$). Then for any open
subset
$\displaystyle\Omega_{0}\subset\subset\Omega_{\mu}\cap\\{m_{2}<\psi_{\lambda,\theta,\mu}<m_{1}\\}\cap\\{\psi_{\lambda,\theta,\mu}\neq 0\\}$, we have $\displaystyle\psi_{\lambda,\theta,\mu}\in C^{0,1}(\Omega_{\mu})$,
$\displaystyle\psi_{\lambda,\theta,\mu}\in C^{2,\sigma}(\Omega_{0})$ and
$\displaystyle\psi_{\lambda,\theta,\mu}\in C^{1,\sigma}(\Omega_{0}\cup
N_{1,\mu}\cup N_{2,\mu})$ for some $\displaystyle 0<\sigma<1$.
###### Proof.
Firstly, $\displaystyle\psi_{\lambda,\theta,\mu}\in C^{0,1}(\Omega_{\mu})$
follows in the same manner as Corollary 4.4 in [6].
Next, the standard interior estimates for linear elliptic equations in Chapter 8
of [26] give $\displaystyle\psi_{\lambda,\theta,\mu}\in
C^{2,\sigma}(\Omega_{0})$ and $\displaystyle\psi_{\lambda,\theta,\mu}\in
C^{1,\sigma}(N_{1,\mu}\cup N_{2,\mu})$.
The regularity of $\displaystyle\psi_{\lambda,\theta,\mu}$ near the axis
$\displaystyle N_{0,\mu}$ can be obtained by standard arguments as in [16]
and [33].
Therefore, we finish the proof of Proposition 3.2. ∎
### 3.3. Uniqueness and monotonicity of the minimizer
Firstly, we will give a lower bound and an upper bound for the minimizer
$\displaystyle\psi_{\lambda,\theta,\mu}$.
###### Lemma 3.3.
For any minimizer $\displaystyle\psi_{\lambda,\theta,\mu}$ to the variational
problem ($\displaystyle P_{\lambda,\theta,\mu}$), one has
$\max\left\\{\frac{m_{2}}{r_{2,\mu}^{2}}r^{2},m_{2}\right\\}\leq\psi_{\lambda,\theta,\mu}(x,r)\leq\min\left\\{\frac{m_{1}}{r_{1,\mu}^{2}}r^{2},m_{1}\right\\}\
\ \ \ \text{in}\ \ \ \Omega_{\mu},$ (3.8)
where $\displaystyle r_{1,\mu}$ and $\displaystyle r_{2,\mu}$ are defined in
(3.4).
###### Proof.
Firstly, consider the lower bound of $\displaystyle\psi_{\lambda,\theta,\mu}$.
Set
$\displaystyle\phi_{1}=\max\left\\{\frac{m_{2}}{r_{2,\mu}^{2}}r^{2},m_{2}\right\\}$,
$\displaystyle\phi_{2}=\min\left\\{\frac{m_{1}}{r_{1,\mu}^{2}}r^{2},m_{1}\right\\}$
and $\displaystyle\psi=\psi_{\lambda,\theta,\mu}$ for simplicity.
Since $\displaystyle\psi\in K_{\mu}$, one has
$m_{2}\leq\psi\leq m_{1}\ \ \text{in}\ \ \Omega_{\mu}.$ (3.9)
Next, we shall prove that
$\displaystyle\phi_{1}\leq\psi\ \ \text{in $\displaystyle\Omega_{\mu}$}.$
Due to the fact that $\displaystyle\max\left\\{\psi,\phi_{1}\right\\}\in
K_{\mu}$, we have
$\displaystyle J_{\lambda,\theta,\mu}(\psi)\leq
J_{\lambda,\theta,\mu}(\max\left\\{\psi,\phi_{1}\right\\}).$
Furthermore, the fact
$\phi_{1}=m_{2}\ \ \text{for $\displaystyle r\geq r_{2,\mu}$},$ (3.10)
gives that
$\displaystyle\psi\geq\phi_{1}\ \ \text{in}\ \ \Omega_{\mu}\cap\\{r\geq
r_{2,\mu}\\}.$
Therefore, we obtain
$\displaystyle 0\geq$
$\displaystyle\int_{\Omega_{\mu,r_{2,\mu}}}r\left|\frac{\nabla\psi}{r}-\left(\sqrt{\Lambda+\lambda}\chi_{\\{0<\psi<m_{1}\\}}+\sqrt{\lambda}\chi_{\\{m_{2}<\psi\leq
0\\}}\right)\cdot e\right|^{2}dX$ (3.11)
$\displaystyle-\int_{\Omega_{\mu,r_{2,\mu}}}r\left|\frac{\nabla\max\left\\{\psi,\phi_{1}\right\\}}{r}-\left(\sqrt{\Lambda+\lambda}\chi_{\\{0<\max\left\\{\psi,\phi_{1}\right\\}<m_{1}\\}}+\sqrt{\lambda}\chi_{\\{m_{2}<\max\left\\{\psi,\phi_{1}\right\\}\leq 0\\}}\right)\cdot e\right|^{2}dX,$
here, $\displaystyle\Omega_{\mu,r_{2,\mu}}=\Omega_{\mu}\cap\\{r<r_{2,\mu}\\}$.
This, together with arguments similar to Lemma 3.4 in [31], yields
$\displaystyle\int_{\Omega_{\mu,r_{2,\mu}}}\left|\nabla\min\left(\psi-\phi_{1},0\right)\right|^{2}dX\leq
0,$
which implies that
$\displaystyle\psi-\phi_{1}=\text{constant}\ \ \ \
\text{in}~{}\Omega_{\mu,r_{2,\mu}}.$
Since $\displaystyle\psi\geq\phi_{1}$ on
$\displaystyle\partial\Omega_{\mu,r_{2,\mu}}$, we conclude that
$\displaystyle\psi\geq\phi_{1}\ \ \ \ \text{in}~{}\Omega_{\mu,r_{2,\mu}}.$
Therefore, we obtain the lower bound of
$\displaystyle\psi_{\lambda,\theta,\mu}$ in (3.8).
We can now proceed as before to show the upper bound of
$\displaystyle\psi_{\lambda,\theta,\mu}$.
Similarly, it suffices to prove that the upper bound holds in
$\displaystyle\Omega_{\mu,r_{1,\mu}}=\Omega_{\mu}\cap\left\\{r<r_{1,\mu}\right\\}$.
Due to (6.2), one has
$\displaystyle\Delta\psi_{\lambda,\theta,\mu}-\frac{1}{r}\frac{\partial\psi_{\lambda,\theta,\mu}}{\partial
r}\geq 0\ \ \text{in $\displaystyle\Omega_{\mu,r_{1,\mu}}$ in a weak sense,}$
and
$\displaystyle\psi_{\lambda,\theta,\mu}\leq\frac{m_{1}}{r_{1,\mu}^{2}}r^{2}$
on $\displaystyle\partial\Omega_{\mu,r_{1,\mu}}$; then the maximum principle
implies
$\displaystyle\psi_{\lambda,\theta,\mu}(x,r)\leq\frac{m_{1}}{r_{1,\mu}^{2}}r^{2}$
in $\displaystyle\Omega_{\mu,r_{1,\mu}}$. This completes the proof of Lemma 3.3.
∎
Next, in view of Lemma 3.3 and using arguments similar to Proposition 3.5 in
[31], we can establish the uniqueness and some monotonicity properties of the
minimizer $\displaystyle\psi_{\lambda,\theta,\mu}$ to the variational problem
($\displaystyle P_{\lambda,\theta,\mu}$); we omit the proof here.
###### Proposition 3.4.
For any $\displaystyle\lambda\in(0,+\infty)$ and
$\displaystyle\theta\in[0,\pi]$, the minimizer
$\displaystyle\psi_{\lambda,\theta,\mu}$ to the truncated variational problem
($\displaystyle P_{\lambda,\theta,\mu}$) is unique. Furthermore, the solution
$\displaystyle\psi_{\lambda,\theta,\mu}$ is monotonic with respect to
$\displaystyle x$, namely
$\psi_{\lambda,\theta,\mu}(x,r)\leq\psi_{\lambda,\theta,\mu}(\tilde{x},r)\ \ \
\ \text{for any}~{}~{}x\geq\tilde{x}.$ (3.12)
### 3.4. Some properties of the free boundaries
#### 3.4.1. Preliminaries
Before we investigate the properties of the free boundaries, we give some
important auxiliary lemmas, whose proofs can be found in [1, 2, 23]; we only
state the results here.
###### Lemma 3.5.
There exists a universal constant $\displaystyle c>0$ such that the following
holds. If $\displaystyle
X_{0}=(x_{0},r_{0})\in\Omega_{\mu}\cap\\{\psi_{\lambda,\theta,\mu}<0\\}$ and
$\displaystyle
B_{r}(X_{0})\subset\Omega_{\mu}\cap\\{\psi_{\lambda,\theta,\mu}<0\\}$ with
$\displaystyle\frac{1}{r}\fint_{\partial B_{r}(X_{0})}\left(\psi_{\lambda,\theta,\mu}-m_{2}\right)dS\geq\sqrt{\lambda}cr_{0},$
then $\displaystyle\psi_{\lambda,\theta,\mu}>m_{2}$ in $\displaystyle
B_{r}(X_{0})$. Similarly, if $\displaystyle
X_{0}=(x_{0},r_{0})\in\Omega_{\mu}\cap\\{\psi_{\lambda,\theta,\mu}>0\\}$ and
$\displaystyle
B_{r}(X_{0})\subset\Omega_{\mu}\cap\\{\psi_{\lambda,\theta,\mu}>0\\}$ with
$\displaystyle\frac{1}{r}\fint_{\partial B_{r}(X_{0})}\left(m_{1}-\psi_{\lambda,\theta,\mu}\right)dS\geq\sqrt{\lambda+\Lambda}cr_{0},$
then $\displaystyle\psi_{\lambda,\theta,\mu}<m_{1}$ in $\displaystyle
B_{r}(X_{0})$. Hereafter, $\displaystyle B_{r}(X)$ denotes the ball with
radius $\displaystyle r>0$ and center $\displaystyle X\in\Omega_{\mu}$.
Next, we will establish a non-degeneracy lemma to
$\displaystyle\psi_{\lambda,\theta,\mu}-m_{2}$ and $\displaystyle
m_{1}-\psi_{\lambda,\theta,\mu}$ as follows.
###### Lemma 3.6.
(Non-degeneracy lemma) For any $\displaystyle 0<\kappa_{1}<1$, there exists a
positive constant $\displaystyle c$ (depending on $\displaystyle\kappa_{1}$)
such that if $\displaystyle
B_{r}(X_{0})\subset\Omega_{\mu}\cap\\{\psi_{\lambda,\theta,\mu}<0\\}$
($\displaystyle X_{0}=(x_{0},r_{0})$) and
$\displaystyle\frac{1}{r}\fint_{\partial B_{r}(X_{0})}\left(\psi_{\lambda,\theta,\mu}-m_{2}\right)dS\leq\sqrt{\lambda}cr_{0},$
then $\displaystyle\psi_{\lambda,\theta,\mu}=m_{2}$ in $\displaystyle
B_{\kappa_{1}r}(X_{0})$. Similarly, for any $\displaystyle 0<\kappa_{2}<1$,
there exists a positive constant $\displaystyle c$ (depending on
$\displaystyle\kappa_{2}$) such that if $\displaystyle
B_{r}(X_{0})\subset\Omega_{\mu}\cap\\{\psi_{\lambda,\theta,\mu}>0\\}$ and
$\displaystyle\frac{1}{r}\fint_{\partial B_{r}(X_{0})}\left(m_{1}-\psi_{\lambda,\theta,\mu}\right)dS\leq\sqrt{\lambda+\Lambda}cr_{0},$
then $\displaystyle\psi_{\lambda,\theta,\mu}=m_{1}$ in $\displaystyle
B_{\kappa_{2}r}(X_{0})$.
A direct application of Lemma 3.6 gives the following lemma.
###### Lemma 3.7.
Suppose that $\displaystyle
X_{0}=(x_{0},r_{0})\in\overline{\\{\psi_{\lambda,\theta,\mu}>m_{2}\\}\cap(\Omega_{\mu}\backslash
D_{\mu})}$ and $\displaystyle\psi_{\lambda,\theta,\mu}<0$ in $\displaystyle
B_{r}(X_{0})$ for some $\displaystyle r>0$, then
$\frac{1}{r}\fint_{\partial
B_{r}(X_{0})}\left(\psi_{\lambda,\theta,\mu}-m_{2}\right)dS\geq\sqrt{\lambda}cr_{0}.$
(3.13)
In particular,
$\sup_{\partial
B_{r}(X_{0})}\left(\psi_{\lambda,\theta,\mu}-m_{2}\right)\geq\sqrt{\lambda}cr_{0}r.$
(3.14)
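For completeness, here is the short argument behind Lemma 3.7: if (3.13) failed, then Lemma 3.6 would give $\displaystyle\psi_{\lambda,\theta,\mu}=m_{2}$ in $\displaystyle B_{\kappa_{1}r}(X_{0})$, contradicting $\displaystyle X_{0}\in\overline{\\{\psi_{\lambda,\theta,\mu}>m_{2}\\}}$; and (3.14) follows since the average in (3.13) is dominated by the supremum,
$\sqrt{\lambda}cr_{0}\leq\frac{1}{r}\fint_{\partial B_{r}(X_{0})}\left(\psi_{\lambda,\theta,\mu}-m_{2}\right)dS\leq\frac{1}{r}\sup_{\partial B_{r}(X_{0})}\left(\psi_{\lambda,\theta,\mu}-m_{2}\right).$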
We shall establish a non-oscillation lemma, which implies that the free
boundaries $\displaystyle\Gamma_{i,\mu}$ ($\displaystyle i=1,2$) cannot
oscillate near the solid boundaries. Without loss of generality, consider the
right free boundary $\displaystyle\Gamma_{2,\mu}$, and introduce a domain
$\displaystyle G\subset\Omega_{\mu}\backslash D_{\mu}$ bounded by
$\displaystyle x=x_{1},\quad x=x_{1}+h\ (h>0),$
and
$\displaystyle\gamma_{1}:X=X^{1}(t)=(x^{1}(t),r^{1}(t)),\quad\gamma_{2}:X=X^{2}(t)=(x^{2}(t),r^{2}(t)),$
where $\displaystyle 0\leq t\leq T$ with
$\displaystyle x_{1}<x^{i}(t)<x_{1}+h\ \ \text{for}\ \ 0<t<T,$
and
$\displaystyle x^{i}(0)=x_{1},\ \ x^{i}(T)=x_{1}+h,\ \ r_{1}\leq r^{i}(t)\leq
r_{1}+\delta,\ \ i=1,2.$
Furthermore, the arc $\displaystyle\gamma_{2}$ lies above the arc
$\displaystyle\gamma_{1}$, which implies that $\displaystyle
r^{1}(0)<r^{2}(0)$; the arcs $\displaystyle\gamma_{1}$ and $\displaystyle\gamma_{2}$ do
not intersect; $\displaystyle\gamma_{2}$ is contained in
$\displaystyle\Gamma_{2,\mu}$; and either
$\displaystyle\text{Case}\ 1.\ \gamma_{1}\ \ \text{is contained in}\
\Gamma_{2,\mu},\quad\text{(see Figure \ref{fi7})}$
or
$\displaystyle\text{Case}\ 2.\ \gamma_{1}\ \ \text{lies on}\ \\{r=R,x>1\\},\
\text{and then}\ \ r_{1}=R,\ x_{1}\geq 1.\quad\text{(see Figure \ref{fi8})}$
Figure 9. Case 1
Figure 10. Case 2
Assume that the domain $\displaystyle G\subset\\{\psi_{\lambda,\theta,\mu}>m_{2}\\}$
is a neighborhood of $\displaystyle\gamma_{1}$ and $\displaystyle\gamma_{2}$,
that $\displaystyle\psi_{\lambda,\theta,\mu}<0$ in $\displaystyle G$, and that
for some $\displaystyle c^{*}>0$,
$\displaystyle\text{dist}(G,\overline{A_{1}A_{2}})>c^{*}.$
###### Lemma 3.8.
(Non-oscillation lemma) Under the foregoing assumptions, there exists a
positive constant $\displaystyle C$ depending only on $\displaystyle\lambda$,
$\displaystyle m_{2}$ and $\displaystyle c^{*}$ such that
$h\leq C\delta.$ (3.15)
The proof is similar to that of Lemma 4.1 in [2] and Lemma 5.6 in [4]; we omit it here.
Finally, we give a uniform bound on the gradient of the minimizer, which is
independent of $\displaystyle m_{1}$ and $\displaystyle m_{2}$. Please see
Lemma 8.1 in [4] and Lemma 5.2 in [2] for the proof.
###### Lemma 3.9.
Let $\displaystyle X_{0}=(x_{0},r_{0})$ be a free boundary point in
$\displaystyle\Omega_{\mu}\setminus\overline{D}_{\mu}$ and $\displaystyle G$
be a bounded domain with $\displaystyle X_{0}\in G$,
$\displaystyle\overline{G}\subset{\Omega_{\mu}\setminus\overline{D}_{\mu}}$.
There exists a constant $\displaystyle C>0$ depending only on
$\displaystyle\lambda$, $\displaystyle G$ and $\displaystyle\Lambda$, such
that
$\frac{\left|\nabla\psi_{\lambda,\theta,\mu}\right|}{r}\leq
C~{}~{}\text{in}~{}~{}G.$ (3.16)
#### 3.4.2. Some properties of the free boundaries
It follows from the monotonicity of $\displaystyle\psi_{\lambda,\theta,\mu}$
with respect to $\displaystyle x$ that the free boundaries are $\displaystyle
r$-graphs; namely, for any $\displaystyle r_{0}\in(R,+\infty)$ the free
boundary $\displaystyle\Gamma_{i,\mu}$ ($\displaystyle i=1,2$) intersects
$\displaystyle\\{r=r_{0}\\}$ in either a single point or a segment. Thus, there
exist four mappings $\displaystyle g_{1,\lambda,\theta,\mu}(r)$ for
$\displaystyle r>R$, $\displaystyle g_{2,\lambda,\theta,\mu}(r)$ for
$\displaystyle r>R$, $\displaystyle g_{\lambda,\theta,\mu}(r)$ for
$\displaystyle r>0$ and $\displaystyle\tilde{g}_{\lambda,\theta,\mu}(r)$ for
$\displaystyle r>0$ such that
$\left\\{0<\psi_{\lambda,\theta,\mu}<m_{1}\right\\}\cap\Omega_{\mu}=\left\\{\tilde{g}_{1,\lambda,\theta,\mu}(r)<x<g_{\lambda,\theta,\mu}(r)\right\\}\cap\Omega_{\mu},$
(3.17)
and
$\left\\{m_{2}<\psi_{\lambda,\theta,\mu}<0\right\\}\cap\Omega_{\mu}=\left\\{\tilde{g}_{\lambda,\theta,\mu}(r)<x<\tilde{g}_{2,\lambda,\theta,\mu}(r)\right\\}\cap\Omega_{\mu},$
(3.18)
where
$\displaystyle\tilde{g}_{1,\lambda,\theta,\mu}(r)=\left\\{\begin{array}[]{ll}f_{1}(r)&\text{for}~{}0<r\leq
R,\\\ g_{1,\lambda,\theta,\mu}(r)&\text{for
}~{}R<r<+\infty,\end{array}\right.$
and
$\displaystyle\tilde{g}_{2,\lambda,\theta,\mu}(r)=\left\\{\begin{array}[]{ll}f_{2}(r)&\text{for}~{}0<r\leq
R,\\\ g_{2,\lambda,\theta,\mu}(r)&\text{for
}~{}R<r<+\infty.\end{array}\right.$
Indeed, along similar arguments as in [2], we obtain that $\displaystyle
g_{i,\lambda,\theta,\mu}(r)$ is a generalized continuous function on
$\displaystyle[R,+\infty)$, where $\displaystyle g_{i,\lambda,\theta,\mu}(R)$ is
defined as $\displaystyle\lim_{r\rightarrow R^{+}}g_{i,\lambda,\theta,\mu}(r)$
for $\displaystyle i=1,2$. Furthermore, due to Lemma 3.3 in [7] and
Proposition 4.1 in [31], the interface $\displaystyle
g_{\lambda,\theta,\mu}(r)\equiv\tilde{g}_{\lambda,\theta,\mu}(r)$ is indeed a
continuous function on $\displaystyle[0,+\infty)$; we omit the proof here.
###### Lemma 3.10.
The free boundary $\displaystyle\Gamma_{i,\mu}:x=g_{i,\lambda,\theta,\mu}(r)$
is a generalized continuous function on $\displaystyle R\leq r<+\infty$ with
values in $\displaystyle[-\infty,+\infty]$ ($\displaystyle i=1,2$).
Furthermore, the interface $\displaystyle\Gamma_{\mu}$:
$\displaystyle x=g_{\lambda,\theta,\mu}(r)$ is a bounded continuous function on
$\displaystyle 0<r<+\infty$, and $\displaystyle
g_{\lambda,\theta,\mu}(0+0)\triangleq\lim_{r\rightarrow
0^{+}}g_{\lambda,\theta,\mu}(r)$ exists and is finite.
In order to study the limit behavior of the solution as $\displaystyle
r\rightarrow+\infty$, we first establish the decay estimate of the minimizer
in the far field. This is one of the crucial parts of this paper.
###### Lemma 3.11.
For any $\displaystyle\theta\in(0,\pi)$ and $\displaystyle r_{0}>2R$, there
exists a constant $\displaystyle C$ (independent of $\displaystyle r_{0}$)
such that
$\int_{\Omega_{\mu}\cap\\{r>r_{0}\\}}r\left|\frac{\nabla\psi_{\lambda,\theta,\mu}}{r}-\left(\sqrt{\Lambda+\lambda}\chi_{\\{0<\psi_{\lambda,\theta,\mu}<m_{1}\\}}+\sqrt{\lambda}\chi_{\\{m_{2}<\psi_{\lambda,\theta,\mu}\leq 0\\}}\right)e\right|^{2}dX\leq\frac{C}{r_{0}^{3}}.$ (3.19)
###### Proof.
Denote $\displaystyle\psi(x,r)=\psi_{\lambda,\theta,\mu}(x,r)$, $\displaystyle
g(r)=g_{\lambda,\theta,\mu}(r)$ and $\displaystyle
g_{i}(r)=g_{i,\lambda,\theta,\mu}(r)$ ($\displaystyle i=1,2$) for simplicity.
For any $\displaystyle r_{0}>2R$, define
$\displaystyle S(r_{0})=\int_{\Omega_{\mu}\cap\\{\frac{r_{0}}{2}<r<r_{0}\\}}r\left|\frac{\nabla\psi_{\lambda,\theta,\mu}}{r}-\left(\sqrt{\Lambda+\lambda}\chi_{\\{0<\psi_{\lambda,\theta,\mu}<m_{1}\\}}+\sqrt{\lambda}\chi_{\\{m_{2}<\psi_{\lambda,\theta,\mu}\leq 0\\}}\right)e\right|^{2}dX.$
By the mean value theorem, there exists some
$\displaystyle\tilde{r}\in\left(\frac{r_{0}}{2},r_{0}\right)$ such that
$\displaystyle S(r_{0})=$
$\displaystyle\frac{r_{0}}{2\tilde{r}}\left\\{\int_{g_{1}(\tilde{r})}^{g(\tilde{r})}\left|\nabla\psi(x,\tilde{r})-\sqrt{\Lambda+\lambda}\tilde{r}e\right|^{2}dx+\int_{g(\tilde{r})}^{g_{2}(\tilde{r})}\left|\nabla\psi(x,\tilde{r})-\sqrt{\lambda}\tilde{r}e\right|^{2}dx\right\\}$
(3.20) $\displaystyle\geq$
$\displaystyle\frac{1}{2}\left\\{\int_{g_{1}(\tilde{r})}^{g(\tilde{r})}\left|\nabla\psi(x,\tilde{r})-\sqrt{\Lambda+\lambda}\tilde{r}e\right|^{2}dx+\int_{g(\tilde{r})}^{g_{2}(\tilde{r})}\left|\nabla\psi(x,\tilde{r})-\sqrt{\lambda}\tilde{r}e\right|^{2}dx\right\\}.$
We choose a function $\displaystyle w(x,r)$ as follows
$w(x,r)=\left\\{\begin{array}[]{ll}\psi(x,r),&\text{in}\ \
\Omega_{\mu}\cap\\{r\leq\tilde{r}\\},\\\
\eta(r)\bar{\psi}(x,r)+(1-\eta(r))\phi(x,r),&\text{in}\ \
\Omega_{\mu}\cap\\{r\geq\tilde{r}\\},\end{array}\right.$ (3.21)
where
$\displaystyle\eta(r)=\max\left\\{0,\frac{\bar{r}-r}{\bar{r}-\tilde{r}}\right\\}$
with $\displaystyle\bar{r}=\tilde{r}+\frac{1}{\tilde{r}}$,
$\displaystyle\bar{\psi}(x,r)=\psi(x-(r-\tilde{r})\cot\theta,\tilde{r}),$
and
$\displaystyle\phi(x,r)=$
$\displaystyle\min\left\\{\max\left\\{\sqrt{\Lambda+\lambda}r\left((r-\tilde{r})\cos\theta-(x-g(\tilde{r}))\sin\theta\right),0\right\\},m_{1}\right\\}$
$\displaystyle+\max\left\\{\min\left\\{\sqrt{\lambda}r\left((r-\tilde{r})\cos\theta-(x-g(\tilde{r}))\sin\theta\right),0\right\\},m_{2}\right\\}.$
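Roughly speaking (this is our own orienting remark), $\displaystyle\phi$ is the truncated linear profile corresponding to the expected far-field behavior of Lemma 3.12 below, written in the original variables and centered at the interface point $\displaystyle(g(\tilde{r}),\tilde{r})$, while $\displaystyle\eta$ interpolates between $\displaystyle\psi$ and this profile across the thin annulus $\displaystyle\\{\tilde{r}<r<\bar{r}\\}$ of width $\displaystyle 1/\tilde{r}$. Indeed, setting $\displaystyle s=(r-\tilde{r})\cos\theta-(x-g(\tilde{r}))\sin\theta$, a direct computation on $\displaystyle\\{0<\sqrt{\Lambda+\lambda}rs<m_{1}\\}$ gives
$\displaystyle\frac{\nabla\phi}{r}=\sqrt{\Lambda+\lambda}\left(e+\frac{s}{r}(0,1)\right)=\sqrt{\Lambda+\lambda}e+O\left(\frac{1}{r^{2}}\right),\ \ e=(-\sin\theta,\cos\theta),$
since $\displaystyle 0<s<\frac{m_{1}}{\sqrt{\Lambda+\lambda}r}$ there, and similarly on $\displaystyle\\{m_{2}<\sqrt{\lambda}rs<0\\}$, so the integrand of the functional nearly vanishes on $\displaystyle\phi$ in the far field.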
It’s easy to check that $\displaystyle w(x,r)\in K_{\mu}$, then $\displaystyle
J_{\lambda,\theta,\mu}(\psi)\leq J_{\lambda,\theta,\mu}(w)$, which implies
$\displaystyle\int_{\Omega_{\mu}\cap\\{r>\tilde{r}\\}}r\left|\frac{\nabla\psi_{\lambda,\theta,\mu}}{r}-\left(\sqrt{\Lambda+\lambda}\chi_{\\{0<\psi_{\lambda,\theta,\mu}<m_{1}\\}}+\sqrt{\lambda}\chi_{\\{m_{2}<\psi_{\lambda,\theta,\mu}\leq
0\\}}\right)e\right|^{2}dX$ (3.22) $\displaystyle\leq$
$\displaystyle\int_{\Omega_{\mu}\cap\\{r>\tilde{r}\\}}r\left|\frac{\nabla
w}{r}-\left(\sqrt{\Lambda+\lambda}\chi_{\\{0<w<m_{1}\\}}+\sqrt{\lambda}\chi_{\\{m_{2}<w\leq
0\\}}\right)e\right|^{2}dX$ $\displaystyle=$
$\displaystyle\int_{\Omega_{\mu}\cap\\{\tilde{r}<r\leq\bar{r}\\}}r\left|\frac{\nabla
w}{r}-\left(\sqrt{\Lambda+\lambda}\chi_{\\{0<w<m_{1}\\}}+\sqrt{\lambda}\chi_{\\{m_{2}<w\leq
0\\}}\right)e\right|^{2}dX$
$\displaystyle+\int_{\Omega_{\mu}\cap\\{r\geq\bar{r}\\}}r\left|\frac{\nabla
w}{r}-\left(\sqrt{\Lambda+\lambda}\chi_{\\{0<w<m_{1}\\}}+\sqrt{\lambda}\chi_{\\{m_{2}<w\leq
0\\}}\right)e\right|^{2}dX.$
First, by similar arguments as in Lemma 3.10 in [10] and Lemma 4.1 in [11], we
obtain
$\int_{\Omega_{\mu}\cap\\{r>\tilde{r}\\}}r\left|\frac{\nabla\psi}{r}-\left(\sqrt{\Lambda+\lambda}\chi_{\\{0<\psi<m_{1}\\}}+\sqrt{\lambda}\chi_{\\{m_{2}<\psi\leq
0\\}}\right)e\right|^{2}dX\leq\frac{C}{r_{0}^{3}}+\frac{S(r_{0})}{16},$ (3.23)
where $\displaystyle C$ is independent of $\displaystyle r_{0}$.
Next, $\displaystyle S(2r_{0})$ can be estimated as follows,
$\displaystyle S(2r_{0})=$
$\displaystyle\int_{\Omega_{\mu}\cap\\{r_{0}<r<2r_{0}\\}}r\left|\frac{\nabla\psi}{r}-\left(\sqrt{\Lambda+\lambda}\chi_{\\{0<\psi<m_{1}\\}}+\sqrt{\lambda}\chi_{\\{m_{2}<\psi\leq
0\\}}\right)e\right|^{2}dX$ (3.24) $\displaystyle\leq$
$\displaystyle\int_{\Omega_{\mu}\cap\\{r>\tilde{r}\\}}r\left|\frac{\nabla\psi}{r}-\left(\sqrt{\Lambda+\lambda}\chi_{\\{0<\psi<m_{1}\\}}+\sqrt{\lambda}\chi_{\\{m_{2}<\psi\leq
0\\}}\right)e\right|^{2}dX$ $\displaystyle\leq$
$\displaystyle\frac{C}{r_{0}^{3}}+\frac{S(r_{0})}{16},$
where we have used
$\displaystyle\tilde{r}\in\left(\frac{r_{0}}{2},r_{0}\right)$ and (3.23).
Using mathematical induction and (3.24), one has
$S(2^{n+1}R)\leq\frac{2C}{\left(2^{n}R\right)^{3}},\ \ n=0,1,\ldots.$ (3.25)
Indeed, (3.25) holds for $\displaystyle n=0$ provided $\displaystyle C$ is
chosen large enough. If (3.25) holds for $\displaystyle n$, one has
$\displaystyle S(2^{n+2}R)=S(2\cdot
2^{n+1}R)\leq\frac{C}{(2^{n+1}R)^{3}}+\frac{S(2^{n+1}R)}{16}\leq\frac{C}{(2^{n+1}R)^{3}}+\frac{1}{16}\frac{2C}{(2^{n}R)^{3}}=\frac{2C}{\left(2^{n+1}R\right)^{3}},$
which implies (3.25) holds for $\displaystyle n+1$.
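For clarity, we record the elementary arithmetic behind the last equality: since $\displaystyle\left(2^{n}R\right)^{-3}=8\left(2^{n+1}R\right)^{-3}$,
$\displaystyle\frac{C}{(2^{n+1}R)^{3}}+\frac{1}{16}\cdot\frac{2C}{(2^{n}R)^{3}}=\frac{C}{(2^{n+1}R)^{3}}+\frac{8}{16}\cdot\frac{2C}{(2^{n+1}R)^{3}}=\frac{2C}{\left(2^{n+1}R\right)^{3}}.$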
Therefore, for any $\displaystyle r_{0}>2R$, there exists an $\displaystyle
n_{0}$ such that $\displaystyle 2^{n_{0}}R\leq r_{0}\leq 2^{n_{0}+1}R$, which
together with (3.25) yields
$\displaystyle\int_{\Omega_{\mu}\cap\\{r>r_{0}\\}}r\left|\frac{\nabla\psi}{r}-\left(\sqrt{\Lambda+\lambda}\chi_{\\{0<\psi<m_{1}\\}}+\sqrt{\lambda}\chi_{\\{m_{2}<\psi\leq
0\\}}\right)e\right|^{2}dX$ $\displaystyle\leq$
$\displaystyle\int_{\Omega_{\mu}\cap\\{r>2^{n_{0}}R\\}}r\left|\frac{\nabla\psi}{r}-\left(\sqrt{\Lambda+\lambda}\chi_{\\{0<\psi<m_{1}\\}}+\sqrt{\lambda}\chi_{\\{m_{2}<\psi\leq
0\\}}\right)e\right|^{2}dX$ $\displaystyle\leq$
$\displaystyle\sum_{j=n_{0}}^{+\infty}S(2^{j+1}R)\leq\frac{\tilde{C}}{r_{0}^{3}},$
where we have used the following fact
$\displaystyle\\\ \sum_{j=n_{0}}^{+\infty}S(2^{j+1}R)\leq
2C\sum_{j=n_{0}}^{+\infty}\frac{1}{\left(2^{j}R\right)^{3}}\leq\frac{32C}{\left(2^{n_{0}+1}R\right)^{3}}\leq\frac{\tilde{C}}{r_{0}^{3}}.$
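For completeness, the geometric series above can be summed explicitly:
$\displaystyle\sum_{j=n_{0}}^{+\infty}\frac{1}{\left(2^{j}R\right)^{3}}=\frac{1}{\left(2^{n_{0}}R\right)^{3}}\sum_{k=0}^{+\infty}\frac{1}{8^{k}}=\frac{8}{7}\cdot\frac{1}{\left(2^{n_{0}}R\right)^{3}}=\frac{64}{7}\cdot\frac{1}{\left(2^{n_{0}+1}R\right)^{3}}\leq\frac{16}{\left(2^{n_{0}+1}R\right)^{3}},$
so that $\displaystyle 2C\sum_{j=n_{0}}^{+\infty}\left(2^{j}R\right)^{-3}\leq\frac{32C}{\left(2^{n_{0}+1}R\right)^{3}}$, and $\displaystyle 2^{n_{0}+1}R\geq r_{0}$ gives the final bound with $\displaystyle\tilde{C}=32C$.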
This completes the proof of Lemma 3.11.
∎
Next, we show the convergence of the minimizer in the far field, and that the
free boundaries approach the asymptotic direction
$\displaystyle\theta\in(0,\pi)$ as $\displaystyle r\rightarrow+\infty$.
###### Lemma 3.12.
Let $\displaystyle\theta\in(0,\pi)$,
$\displaystyle\psi_{n}(\tilde{x},\tilde{r})=\psi_{\lambda,\theta,\mu}\left(x_{n}+\frac{\tilde{x}}{r_{n}},r_{n}+\frac{\tilde{r}}{r_{n}}\right)$
with $\displaystyle X_{n}=(x_{n},r_{n})\in\Gamma_{1,\mu}$ and $\displaystyle
r_{n}\rightarrow+\infty$,
$\displaystyle\tilde{X}=(\tilde{x},\tilde{r})\in\mathbb{R}^{2}$, then for a
subsequence
$\displaystyle\psi_{n}(\tilde{x},\tilde{r})\rightarrow\Theta(\tilde{x},\tilde{r})\triangleq\left\\{\begin{array}[]{ll}m_{1},&\text{if}\quad\tilde{r}\cos\theta-\tilde{x}\sin\theta\geq
0,\\\
m_{1}+\sqrt{\Lambda+\lambda}(\tilde{r}\cos\theta-\tilde{x}\sin\theta),&\text{if
}\quad-\frac{m_{1}}{\sqrt{\Lambda+\lambda}}\leq\tilde{r}\cos\theta-\tilde{x}\sin\theta\leq
0,\\\
\frac{m_{1}\sqrt{\lambda}}{\sqrt{\Lambda+\lambda}}+\sqrt{\lambda}(\tilde{r}\cos\theta-\tilde{x}\sin\theta),&\text{if
}\quad\frac{m_{2}}{\sqrt{\lambda}}-\frac{m_{1}}{\sqrt{\Lambda+\lambda}}\leq\tilde{r}\cos\theta-\tilde{x}\sin\theta\leq-\frac{m_{1}}{\sqrt{\Lambda+\lambda}},\\\
m_{2},&\text{if}\quad\tilde{r}\cos\theta-\tilde{x}\sin\theta\leq\frac{m_{2}}{\sqrt{\lambda}}-\frac{m_{1}}{\sqrt{\Lambda+\lambda}},\end{array}\right.$
(3.26)
uniformly in any compact subset of $\displaystyle\mathbb{R}^{2}$. Furthermore,
$\displaystyle g^{\prime}_{1,\lambda,\theta,\mu}(r)\rightarrow\cot\theta\ \
\text{as}\ \ r\rightarrow+\infty.$
A similar conclusion holds for $\displaystyle X_{n}\in\Gamma_{2,\mu}$.
###### Proof.
Set $\displaystyle\psi=\psi_{\lambda,\theta,\mu}$, $\displaystyle
g(r)=g_{\lambda,\theta,\mu}(r)$ and $\displaystyle
g_{i}(r)=g_{i,\lambda,\theta,\mu}(r)$ ($\displaystyle i=1,2$) for simplicity.
Define $\displaystyle x=x_{n}+\frac{\tilde{x}}{r_{n}}$ and $\displaystyle
r=r_{n}+\frac{\tilde{r}}{r_{n}}$.
For any $\displaystyle R_{0}>0$, one has
$\displaystyle\int_{\\{|r-r_{n}|<R_{0}\\}}r\left|\frac{\nabla\psi}{r}-\left(\sqrt{\Lambda+\lambda}\chi_{\\{0<\psi<m_{1}\\}}+\sqrt{\lambda}\chi_{\\{m_{2}<\psi\leq
0\\}}\right)e\right|^{2}dX$ (3.27) $\displaystyle=$
$\displaystyle\int_{\\{|\tilde{r}|<R_{0}r_{n}\\}}\left(\frac{1}{r_{n}}+\frac{\tilde{r}}{r_{n}^{3}}\right)\left|\frac{\tilde{\nabla}\psi_{n}}{1+\frac{\tilde{r}}{r_{n}^{2}}}-\left(\sqrt{\Lambda+\lambda}\chi_{\\{0<\psi_{n}<m_{1}\\}}+\sqrt{\lambda}\chi_{\\{m_{2}<\psi_{n}\leq
0\\}}\right)e\right|^{2}d\tilde{X},$
where
$\displaystyle\tilde{\nabla}=(\partial_{\tilde{x}},\partial_{\tilde{r}})$.
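For the reader's convenience, we record the change of variables used here (our own bookkeeping): $\displaystyle dX=r_{n}^{-2}d\tilde{X}$, $\displaystyle\nabla=r_{n}\tilde{\nabla}$ and $\displaystyle r=r_{n}\left(1+\frac{\tilde{r}}{r_{n}^{2}}\right)$, so that
$\displaystyle\frac{\nabla\psi}{r}=\frac{\tilde{\nabla}\psi_{n}}{1+\frac{\tilde{r}}{r_{n}^{2}}}\ \ \text{and}\ \ r\,dX=\left(\frac{1}{r_{n}}+\frac{\tilde{r}}{r_{n}^{3}}\right)d\tilde{X}.$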
For large $\displaystyle r_{n}>R_{0}+2R$, in view of (3.19), we obtain
$\displaystyle\int_{\\{|r-r_{n}|<R_{0}\\}}r\left|\frac{\nabla\psi}{r}-\left(\sqrt{\Lambda+\lambda}\chi_{\\{0<\psi<m_{1}\\}}+\sqrt{\lambda}\chi_{\\{m_{2}<\psi\leq
0\\}}\right)e\right|^{2}dX\leq\frac{C}{\left(r_{n}-R_{0}\right)^{3}},$
which together with (3.27) implies that
$\int_{\Omega_{\mu}\cap\\{|\tilde{r}|<R_{0}r_{n}\\}}\left(1+\frac{\tilde{r}}{r_{n}^{2}}\right)\left|\frac{\tilde{\nabla}\psi_{n}}{1+\frac{\tilde{r}}{r_{n}^{2}}}-\left(\sqrt{\Lambda+\lambda}\chi_{\\{0<\psi_{n}<m_{1}\\}}+\sqrt{\lambda}\chi_{\\{m_{2}<\psi_{n}\leq
0\\}}\right)e\right|^{2}d\tilde{X}\leq\frac{Cr_{n}}{\left(r_{n}-R_{0}\right)^{3}}.$
(3.28)
Recalling Proposition 3.2 and $\displaystyle\Theta(\tilde{x},\tilde{r})\in
H_{loc}^{1}(\mathbb{R}^{2})$, there exist a subsequence
$\displaystyle\psi_{n_{k}}$ and two functions
$\displaystyle\gamma_{1},\gamma_{2}$ with values in $\displaystyle[0,1]$ such that
$\displaystyle\psi_{n_{k}}(\tilde{x},\tilde{r})\rightarrow\Theta(\tilde{x},\tilde{r})\
\ \text{weakly in $\displaystyle H_{loc}^{1}(\mathbb{R}^{2})$},$
$\displaystyle\psi_{n_{k}}(\tilde{x},\tilde{r})\rightarrow\Theta(\tilde{x},\tilde{r})\
\ \text{a.e. in $\displaystyle\mathbb{R}^{2}$},$
$\displaystyle\chi_{\\{0<\psi_{n_{k}}<m_{1}\\}}\rightarrow\gamma_{1}\ \
\text{weak-star in $\displaystyle L_{loc}^{\infty}(\mathbb{R}^{2})$,
$\displaystyle\gamma_{1}=1$ a.e. on
$\displaystyle\\{0<\Theta(\tilde{x},\tilde{r})<m_{1}\\}$},$
and
$\displaystyle\chi_{\\{m_{2}<\psi_{n_{k}}\leq 0\\}}\rightarrow\gamma_{2}\ \
\text{weak-star in $\displaystyle L_{loc}^{\infty}(\mathbb{R}^{2})$,
$\displaystyle\gamma_{2}=1$ a.e. on
$\displaystyle\\{m_{2}<\Theta(\tilde{x},\tilde{r})\leq 0\\}$},$
as $\displaystyle k\rightarrow+\infty$. This together with (3.28) gives that
$\tilde{\nabla}\Theta=\sqrt{\Lambda+\lambda}e\chi_{\\{0<\Theta<m_{1}\\}}+\sqrt{\lambda}e\chi_{\\{m_{2}<\Theta\leq
0\\}}\ \ \text{a.e.},$ (3.29)
in any compact subset $\displaystyle\Omega^{\prime}$ of
$\displaystyle\mathbb{R}^{2}$.
Lemma 3.9 implies that for sufficiently large $\displaystyle n$
$\displaystyle|\tilde{\nabla}\psi_{n}(\tilde{x},\tilde{r})|=\left|\frac{1}{r_{n}}\nabla\psi\left(x_{n}+\frac{\tilde{x}}{r_{n}},r_{n}+\frac{\tilde{r}}{r_{n}}\right)\right|\leq
c_{0},$
where the constant $\displaystyle c_{0}$ is independent of $\displaystyle
R_{0}$. Hence, we conclude that there exists a subsequence
$\displaystyle\psi_{n_{k}}\rightarrow\Theta(\tilde{x},\tilde{r})$ uniformly in
any compact subset of $\displaystyle\mathbb{R}^{2}$ and
$\displaystyle
m_{1}-\psi_{n}(\tilde{x},\tilde{r})=\psi(x_{n},r_{n})-\psi\left(x_{n}+\frac{\tilde{x}}{r_{n}},r_{n}+\frac{\tilde{r}}{r_{n}}\right)\leq|\tilde{\nabla}\psi_{n}||\tilde{X}|,\
\ \text{for}\ \ |\tilde{X}|<\frac{m_{1}}{c_{0}},$
which implies $\displaystyle\psi_{n}(\tilde{x},\tilde{r})>0$.
The non-degeneracy Lemma 3.6 implies that
$\displaystyle\frac{1}{r}\fint_{\partial
B_{r}(0)}(m_{1}-\psi_{n}(\tilde{x},\tilde{r}))d\tilde{S}=\frac{1}{r_{n}}\frac{1}{\frac{r}{r_{n}}}\fint_{\partial
B_{\frac{r}{r_{n}}}(X_{n})}(m_{1}-\psi(x,r))dS\geq c\sqrt{\Lambda+\lambda},\ \
\text{for}\ \ r<\frac{m_{1}}{c_{0}},$
taking $\displaystyle n\rightarrow+\infty$, which implies that
$\displaystyle\Theta\not\equiv m_{1}$ in $\displaystyle B_{r}(0)$ and
$\displaystyle\Theta(0)=m_{1}$.
Define
$t=\tilde{x}\cos\theta+\tilde{r}\sin\theta\ \ \text{and}\ \
s=\tilde{r}\cos\theta-\tilde{x}\sin\theta,$ (3.30)
and $\displaystyle w(t,s)=\Theta(\tilde{x},\tilde{r})$, then (3.29) implies
that
$\displaystyle\frac{\partial w}{\partial t}=0,\ \ \frac{\partial w}{\partial
s}=\sqrt{\Lambda+\lambda}\chi_{\\{0<w<m_{1}\\}}+\sqrt{\lambda}\chi_{\\{m_{2}<w\leq
0\\}}\ \ \text{a.e. in $\displaystyle\Omega^{\prime}$}\ \ \text{and}\ \
w(0,0)=m_{1}.$
A direct computation gives that
$\displaystyle w(t,s)=\left\\{\begin{array}[]{ll}m_{1}&\text{if}~{}~{}s\geq
0,\\\ m_{1}+\sqrt{\Lambda+\lambda}s,&\text{if
}~{}-\frac{m_{1}}{\sqrt{\Lambda+\lambda}}\leq s\leq 0,\\\
\frac{m_{1}\sqrt{\lambda}}{\sqrt{\Lambda+\lambda}}+\sqrt{\lambda}s,&\text{if
}\quad\frac{m_{2}}{\sqrt{\lambda}}-\frac{m_{1}}{\sqrt{\Lambda+\lambda}}\leq
s\leq-\frac{m_{1}}{\sqrt{\Lambda+\lambda}},\\\
m_{2},&\text{if}~{}~{}s\leq\frac{m_{2}}{\sqrt{\lambda}}-\frac{m_{1}}{\sqrt{\Lambda+\lambda}},\end{array}\right.$
which yields (3.26).
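As a consistency check (our own verification), the branch values match at the two transition points:
$\displaystyle m_{1}+\sqrt{\Lambda+\lambda}\left(-\frac{m_{1}}{\sqrt{\Lambda+\lambda}}\right)=0=\frac{m_{1}\sqrt{\lambda}}{\sqrt{\Lambda+\lambda}}+\sqrt{\lambda}\left(-\frac{m_{1}}{\sqrt{\Lambda+\lambda}}\right)\ \ \text{and}\ \ \frac{m_{1}\sqrt{\lambda}}{\sqrt{\Lambda+\lambda}}+\sqrt{\lambda}\left(\frac{m_{2}}{\sqrt{\lambda}}-\frac{m_{1}}{\sqrt{\Lambda+\lambda}}\right)=m_{2},$
so $\displaystyle w$ is continuous, with slope $\displaystyle\sqrt{\Lambda+\lambda}$ on $\displaystyle\\{0<w<m_{1}\\}$ and $\displaystyle\sqrt{\lambda}$ on $\displaystyle\\{m_{2}<w\leq 0\\}$.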
Next, let $\displaystyle X_{0}=(t,s)$ with $\displaystyle s>0$ or
$\displaystyle
s<\frac{m_{2}}{\sqrt{\lambda}}-\frac{m_{1}}{\sqrt{\Lambda+\lambda}}$, for
small $\displaystyle r>0$, then
$\displaystyle\lim_{n\rightarrow+\infty}\frac{1}{r}\fint_{\partial
B_{r}(X_{0})}(m_{1}-\psi_{n})dS=0,\ \ \text{or}\ \
\lim_{n\rightarrow+\infty}\frac{1}{r}\fint_{\partial
B_{r}(X_{0})}(\psi_{n}-m_{2})dS=0,$
respectively. Then, the non-degeneracy lemma implies that $\displaystyle
X_{0}$ is not a free boundary point for sufficiently large $\displaystyle n$.
Similarly, for the case
$\displaystyle\frac{m_{2}}{\sqrt{\lambda}}-\frac{m_{1}}{\sqrt{\Lambda+\lambda}}<s<0$,
one gets
$\displaystyle\lim_{n\rightarrow+\infty}\frac{1}{r}\fint_{\partial
B_{r}(X_{0})}(m_{1}-\psi_{n})dS\rightarrow+\infty,\ \ \text{or}\ \
\lim_{n\rightarrow+\infty}\frac{1}{r}\fint_{\partial
B_{r}(X_{0})}(\psi_{n}-m_{2})dS\rightarrow+\infty,\ \ \text{as}\ \
r\rightarrow 0,$
and then $\displaystyle X_{0}$ is not a free boundary point for sufficiently
large $\displaystyle n$.
Then, one has
$\partial\\{\psi_{n}>m_{2}\\}\rightarrow\left\\{s=\frac{m_{2}}{\sqrt{\lambda}}-\frac{m_{1}}{\sqrt{\Lambda+\lambda}}\right\\}\
\ \text{and}\ \ \partial\\{\psi_{n}<m_{1}\\}\rightarrow\left\\{s=0\right\\},$
(3.31)
locally in Hausdorff distance (see Definition 3.1 in [23]).
Noticing the flatness conditions in Section 7 in [1] for the free boundaries,
there exists a
$\displaystyle\xi_{1,n}\in\left(\min\left\\{r_{n},r_{n}+\frac{\tilde{r}}{r_{n}}\right\\},\max\left\\{r_{n},r_{n}+\frac{\tilde{r}}{r_{n}}\right\\}\right)$
such that
$\displaystyle\tilde{x}=r_{n}\left(g_{1}\left(r_{n}+\frac{\tilde{r}}{r_{n}}\right)-x_{n}\right)=r_{n}\left(g_{1}\left(r_{n}+\frac{\tilde{r}}{r_{n}}\right)-g_{1}(r_{n})\right)=g_{1}^{\prime}(\xi_{1,n})\tilde{r},$
thus, we obtain
$\displaystyle
g_{1}^{\prime}\left(r_{n}+\frac{\tilde{r}}{r_{n}}\right)\rightarrow\cot\theta,\
\ \text{as}\ \ n\rightarrow+\infty.$
Therefore, we complete the proof of Lemma 3.12. ∎
Next, for the critical cases $\displaystyle\theta=0$ or
$\displaystyle\theta=\pi$, we have the following facts.
###### Proposition 3.13.
Assume that there exist some free boundary points $\displaystyle
X_{n}=(x_{n},r_{n})\in\Gamma_{2,\mu}$, such that $\displaystyle
r_{n}\rightarrow\xi>R$ and $\displaystyle x_{n}\rightarrow+\infty$, where
$\displaystyle\xi$ is a finite positive number, then $\displaystyle\theta=0$.
Moreover, let
$\displaystyle\psi_{n}(\tilde{X})=\psi_{\lambda,\theta,\mu}(x_{n}+\tilde{x},r_{n}+\tilde{r})$,
then
$\displaystyle\psi_{n}(\tilde{X})\rightarrow\min\left\\{\max\left\\{\sqrt{\lambda}\left(\frac{m_{2}}{\sqrt{\lambda}}+\frac{\tilde{r}^{2}}{2}+\xi\tilde{r}\right),m_{2}\right\\},0\right\\}+\max\left\\{\min\left\\{\sqrt{\Lambda+\lambda}\left(\frac{m_{2}}{\sqrt{\lambda}}+\frac{\tilde{r}^{2}}{2}+\xi\tilde{r}\right),m_{1}\right\\},0\right\\},$
uniformly in any compact subset of
$\displaystyle\\{(\tilde{x},\tilde{r})\mid\tilde{r}>R-\xi\\}$. If
$\displaystyle
r_{n}\rightarrow\xi>\sqrt{R^{2}+\frac{2m_{1}}{\sqrt{\Lambda+\lambda}}-\frac{2m_{2}}{\sqrt{\lambda}}}$
and $\displaystyle x_{n}\rightarrow-\infty$, $\displaystyle\xi$ is a finite
positive number, then $\displaystyle\theta=\pi$ and
$\displaystyle\psi_{n}(\tilde{X})\rightarrow\min\left\\{\max\left\\{\sqrt{\lambda}\left(\frac{m_{2}}{\sqrt{\lambda}}-\frac{\tilde{r}^{2}}{2}-\xi\tilde{r}\right),m_{2}\right\\},0\right\\}+\max\left\\{\min\left\\{\sqrt{\Lambda+\lambda}\left(\frac{m_{2}}{\sqrt{\lambda}}-\frac{\tilde{r}^{2}}{2}-\xi\tilde{r}\right),m_{1}\right\\},0\right\\},$
uniformly in any compact subset of
$\displaystyle\\{(\tilde{x},\tilde{r})\mid\tilde{r}>R-\xi\\}$. A similar
assertion holds for $\displaystyle X_{n}=(x_{n},r_{n})\in\Gamma_{1,\mu}$.
###### Proof.
Suppose that $\displaystyle X_{n}=(x_{n},r_{n})\in\Gamma_{2,\mu}$ with $\displaystyle
r_{n}\rightarrow\xi$ ($\displaystyle\xi$ being a finite positive number) and
$\displaystyle x_{n}\rightarrow+\infty$. Set $\displaystyle x=x_{n}+\tilde{x}$
and $\displaystyle r=r_{n}+\tilde{r}$. For any large $\displaystyle R_{0}>0$,
the boundedness of $\displaystyle
J_{\lambda,\theta,\mu}(\psi_{\lambda,\theta,\mu})$ gives that
$\displaystyle\int_{\Omega_{\mu}\cap\\{|x-x_{n}|<R_{0}\\}\cap\\{R-\xi<r-r_{n}<R_{0}\\}}r\left|\frac{\nabla\psi}{r}-\left(\sqrt{\Lambda+\lambda}\chi_{\\{0<\psi<m_{1}\\}}+\sqrt{\lambda}\chi_{\\{m_{2}<\psi\leq
0\\}}\right)e\right|^{2}dX$ $\displaystyle=$
$\displaystyle\int_{\tilde{\Omega}_{\mu}\cap\\{|\tilde{x}|<R_{0}\\}\cap\\{R-\xi<\tilde{r}<R_{0}\\}}\left(r_{n}+\tilde{r}\right)\left|\frac{\tilde{\nabla}\psi_{n}}{r_{n}+\tilde{r}}-\left(\sqrt{\Lambda+\lambda}\chi_{\\{0<\psi_{n}<m_{1}\\}}+\sqrt{\lambda}\chi_{\\{m_{2}<\psi_{n}\leq
0\\}}\right)e\right|^{2}d\tilde{X}\rightarrow 0,$
as $\displaystyle n\rightarrow+\infty$.
Along similar arguments as in Lemma 3.12, there exists a subsequence
$\displaystyle\psi_{n_{k}}$ such that
$\displaystyle\psi_{n_{k}}\rightarrow\psi_{0}\ \ \text{weakly in
$\displaystyle H_{loc}^{1}(\Omega^{\prime}_{\mu})$},$
and
$\displaystyle\psi_{n_{k}}\rightarrow\psi_{0}\ \ \text{a.e. in
$\displaystyle\Omega^{\prime}_{\mu}$},$
as $\displaystyle k\rightarrow+\infty$, and
$\displaystyle\nabla\psi_{0}=(\tilde{r}+\xi)\left(\sqrt{\Lambda+\lambda}\chi_{\\{0<\psi_{0}<m_{1}\\}}+\sqrt{\lambda}\chi_{\\{m_{2}<\psi_{0}\leq
0\\}}\right)(-\sin\theta,\cos\theta)\ \ \text{a.e.},$
in any compact subset $\displaystyle\Omega^{\prime}$ of
$\displaystyle\\{(\tilde{x},\tilde{r})\mid\tilde{r}>R-\xi\\}$. Furthermore,
it’s easy to see that $\displaystyle\psi_{0}(0,0)=m_{2}$,
$\displaystyle\psi_{0}\not\equiv m_{2}$ in any neighborhood of
$\displaystyle(0,0)$ and
$\psi(x_{n}+\tilde{x},R-\xi+r_{n})\rightarrow\psi_{0}(\tilde{x},R-\xi)=m_{2}\
\ \text{if}\ \ x_{n}\rightarrow+\infty.$ (3.32)
Next, we claim that $\displaystyle\theta=0$. Suppose not. If
$\displaystyle\theta=\pi$, then by similar arguments as in Lemma 3.12, we obtain
$\displaystyle\psi_{0}(\tilde{x},\tilde{r})=\min\left\\{\max\left\\{\sqrt{\lambda}\left(\frac{m_{2}}{\sqrt{\lambda}}-\frac{\tilde{r}^{2}}{2}-\xi\tilde{r}\right),m_{2}\right\\},0\right\\}+\max\left\\{\min\left\\{\sqrt{\Lambda+\lambda}\left(\frac{m_{2}}{\sqrt{\lambda}}-\frac{\tilde{r}^{2}}{2}-\xi\tilde{r}\right),m_{1}\right\\},0\right\\},$
which contradicts (3.32).
If $\displaystyle\theta\in(0,\pi)$, since $\displaystyle\psi_{0}$ is smooth in
any compact subset $\displaystyle
G\subset\\{m_{2}<\psi_{0}<0\\}\cap\\{\tilde{r}\geq R-\xi\\}$, one has
$\displaystyle\frac{\partial^{2}\psi_{0}}{\partial\tilde{r}\partial\tilde{x}}=-\sqrt{\lambda}\sin\theta,\
\ \frac{\partial^{2}\psi_{0}}{\partial\tilde{x}\partial\tilde{r}}=0,$
which contradicts the equality of the mixed partial derivatives, since
$\displaystyle\sin\theta\neq 0$ for $\displaystyle\theta\in(0,\pi)$.
Therefore, we obtain $\displaystyle\theta=0$. Along similar arguments as in
Lemma 3.12, one has
$\displaystyle\psi_{n}(\tilde{X})\rightarrow\min\left\\{\max\left\\{\sqrt{\lambda}\left(\frac{m_{2}}{\sqrt{\lambda}}+\frac{\tilde{r}^{2}}{2}+\xi\tilde{r}\right),m_{2}\right\\},0\right\\}+\max\left\\{\min\left\\{\sqrt{\Lambda+\lambda}\left(\frac{m_{2}}{\sqrt{\lambda}}+\frac{\tilde{r}^{2}}{2}+\xi\tilde{r}\right),m_{1}\right\\},0\right\\},$
uniformly in any compact subset of
$\displaystyle\\{(\tilde{x},\tilde{r})\mid\tilde{r}>R-\xi\\}$. A similar
conclusion holds if $\displaystyle X_{n}=(x_{n},r_{n})\in\Gamma_{1,\mu}$ with
$\displaystyle r_{n}\rightarrow\xi>R$ and $\displaystyle
x_{n}\rightarrow-\infty$.
Similarly, we can obtain the conclusion for $\displaystyle\theta=\pi$. Thus,
we complete the proof of Proposition 3.13.
∎
Now, we can obtain the convergence rate of the distance between the interface
and the two free boundaries, as well as of the minimizer, as follows.
###### Lemma 3.14.
For any $\displaystyle\theta\in(0,\pi)$ and $\displaystyle\alpha\in(0,2)$, the
free boundaries $\displaystyle x=g_{1,\lambda,\theta,\mu}(r)$, $\displaystyle
x=g_{2,\lambda,\theta,\mu}(r)$, the interface $\displaystyle
x=g_{\lambda,\theta,\mu}(r)$ and the minimizer
$\displaystyle\psi_{\lambda,\theta,\mu}$ satisfy
$r(g_{\lambda,\theta,\mu}(r)-g_{1,\lambda,\theta,\mu}(r))\rightarrow\frac{m_{1}}{\sqrt{\Lambda+\lambda}\sin\theta},$
(3.33)
$r(g_{\lambda,\theta,\mu}(r)-g_{2,\lambda,\theta,\mu}(r))\rightarrow\frac{m_{2}}{\sqrt{\lambda}\sin\theta},$
(3.34)
and
$r^{\alpha}\left(\frac{\nabla\psi_{\lambda,\theta,\mu}}{r}-\left(\sqrt{\Lambda+\lambda}\chi_{\\{0<\psi_{\lambda,\theta,\mu}<m_{1}\\}}+\sqrt{\lambda}\chi_{\\{m_{2}<\psi_{\lambda,\theta,\mu}\leq
0\\}}\right)e\right)\rightarrow 0,$ (3.35)
as $\displaystyle r\rightarrow+\infty$.
###### Proof.
Define
$\displaystyle\psi_{n}(\tilde{x},\tilde{r})=\psi_{\lambda,\theta,\mu}\left(x_{n}+\frac{\tilde{x}}{r_{n}},r_{n}+\frac{\tilde{r}}{r_{n}}\right)$
with $\displaystyle(x_{n},r_{n})\in\Gamma_{1,\lambda,\theta,\mu}$,
$\displaystyle r_{n}\rightarrow+\infty$. Set $\displaystyle
x=x_{n}+\frac{\tilde{x}}{r_{n}}$ and $\displaystyle
r=r_{n}+\frac{\tilde{r}}{r_{n}}$. The free boundaries and interface of
$\displaystyle\psi_{n}(\tilde{x},\tilde{r})$ are given by
$\left\\{(\tilde{x},\tilde{r})\mid
x_{n}+\frac{\tilde{x}}{r_{n}}=g_{i,\lambda,\theta,\mu}\left(r_{n}+\frac{\tilde{r}}{r_{n}}\right),i=1,2\right\\}\
\ \text{and}\ \ \left\\{(\tilde{x},\tilde{r})\mid
x_{n}+\frac{\tilde{x}}{r_{n}}=g_{\lambda,\theta,\mu}\left(r_{n}+\frac{\tilde{r}}{r_{n}}\right)\right\\}.$
(3.36)
Thanks to Lemma 3.12,
$\displaystyle\partial\\{0<\psi_{n}<m_{1}\\}$ converges to
$\displaystyle\partial\\{0<\Theta<m_{1}\\}$ locally in Hausdorff distance.
This together with (3.36) and (3.26), taking $\displaystyle\tilde{r}=0$,
yields that
$\displaystyle\tilde{x}\sin\theta=r_{n}\left(g_{\lambda,\theta,\mu}(r_{n})-x_{n}\right)\sin\theta=r_{n}\left(g_{\lambda,\theta,\mu}(r_{n})-g_{1,\lambda,\theta,\mu}(r_{n})\right)\sin\theta\rightarrow\frac{m_{1}}{\sqrt{\Lambda+\lambda}}.$
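The limit (3.34) follows in the same way; we sketch the bookkeeping, which is our own reformulation of the argument. Blowing up at points $\displaystyle(x_{n},r_{n})\in\Gamma_{2,\lambda,\theta,\mu}$ and using the analogue of Lemma 3.12, the interface lies on the line $\displaystyle s=-\frac{m_{2}}{\sqrt{\lambda}}$ relative to the blow-up center on $\displaystyle\\{\psi_{\lambda,\theta,\mu}=m_{2}\\}$, so taking $\displaystyle\tilde{r}=0$ (where $\displaystyle\tilde{x}\sin\theta=-s$) gives
$\displaystyle r_{n}\left(g_{\lambda,\theta,\mu}(r_{n})-g_{2,\lambda,\theta,\mu}(r_{n})\right)\sin\theta=\tilde{x}\sin\theta\rightarrow\frac{m_{2}}{\sqrt{\lambda}}.$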
Furthermore, set
$\displaystyle\psi_{n}(\tilde{x},\tilde{r})=\psi_{\lambda,\theta,\mu}\left(x_{n}+\tilde{x},r_{n}+\tilde{r}\right)$
with $\displaystyle(x_{n},r_{n})\in\Gamma_{1,\lambda,\theta,\mu}$,
$\displaystyle r_{n}\rightarrow+\infty$, for any $\displaystyle\tilde{R}>0$,
and large $\displaystyle r_{n}>\tilde{R}+2R$, similarly as in Lemma 3.11, one has
$\displaystyle\int_{\Omega_{\mu}\cap\\{|r-r_{n}|<\tilde{R}\\}}r^{2\alpha}\left|\frac{\nabla\psi_{\lambda,\theta,\mu}}{r}-\left(\sqrt{\Lambda+\lambda}\chi_{\\{0<\psi_{\lambda,\theta,\mu}<m_{1}\\}}+\sqrt{\lambda}\chi_{\\{m_{2}<\psi_{\lambda,\theta,\mu}\leq
0\\}}\right)e\right|^{2}dX$ $\displaystyle=$
$\displaystyle\int_{\\{|\tilde{r}|<\tilde{R}\\}}\left(\tilde{r}+r_{n}\right)^{2\alpha}\left|\frac{\tilde{\nabla}\psi_{n}}{\tilde{r}+r_{n}}-\left(\sqrt{\Lambda+\lambda}\chi_{\\{0<\psi_{n}<m_{1}\\}}+\sqrt{\lambda}\chi_{\\{m_{2}<\psi_{n}\leq
0\\}}\right)e\right|^{2}d\tilde{X}$ $\displaystyle\leq$
$\displaystyle\frac{C(r_{n}+\tilde{R})^{2\alpha-1}}{\left(r_{n}-\tilde{R}\right)^{3}}.$
Hence for any $\displaystyle\alpha\in(0,2)$ and
$\displaystyle(\tilde{x},\tilde{r})\in\mathbb{R}^{2}$, one has
$\displaystyle\left(\tilde{r}+r_{n}\right)^{\alpha}\left|\frac{\tilde{\nabla}\psi_{n}}{\tilde{r}+r_{n}}-\left(\sqrt{\Lambda+\lambda}\chi_{\\{0<\psi_{n}<m_{1}\\}}+\sqrt{\lambda}\chi_{\\{m_{2}<\psi_{n}\leq
0\\}}\right)e\right|\rightarrow 0\ \ \text{as $\displaystyle
n\rightarrow+\infty$},$
taking $\displaystyle\tilde{r}=0$, $\displaystyle x=x_{n}+\tilde{x}$ and
$\displaystyle r=r_{n}$ yields the desired estimate (3.35).
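We record where the restriction $\displaystyle\alpha\in(0,2)$ enters (our own bookkeeping): passing from the integral bound above to the pointwise bound over a fixed window uses the gradient estimates of Lemma 3.9, and
$\displaystyle\frac{C(r_{n}+\tilde{R})^{2\alpha-1}}{\left(r_{n}-\tilde{R}\right)^{3}}\leq C^{\prime}r_{n}^{2\alpha-4}\rightarrow 0\ \ \text{as}\ \ r_{n}\rightarrow+\infty,\ \ \text{precisely because}\ \ 2\alpha-1<3.$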
Therefore, we complete the proof of Lemma 3.14. ∎
Next, we will prove that one of the free boundaries vanishes, provided that the
asymptotic direction of the outgoing jet is horizontal. We say that
$\displaystyle\Gamma_{1,\mu}$ vanishes in
$\displaystyle\Omega_{\mu}\cap\\{r>R\\}$ if
$\displaystyle\psi_{\lambda,\theta,\mu}<m_{1}$ in
$\displaystyle\Omega_{\mu}\cap\\{r>R\\}$, and similarly that the free
boundary $\displaystyle\Gamma_{2,\mu}$ vanishes in
$\displaystyle\Omega_{\mu}\cap\\{r>R\\}$ if
$\displaystyle\psi_{\lambda,\theta,\mu}>m_{2}$ in
$\displaystyle\Omega_{\mu}\cap\\{r>R\\}$.
###### Proposition 3.15.
(1). If $\displaystyle\theta=\pi$, then the left free boundary
$\displaystyle\Gamma_{1,\mu}$ vanishes in
$\displaystyle\Omega_{\mu}\cap\\{r>R\\}$;
(2). If $\displaystyle\theta=0$, then the right free boundary
$\displaystyle\Gamma_{2,\mu}$ vanishes in
$\displaystyle\Omega_{\mu}\cap\\{r>R\\}$.
###### Proof.
Denote $\displaystyle\psi=\psi_{\lambda,\theta,\mu}$ for simplicity.
For $\displaystyle\theta=\pi$, one has $\displaystyle e=(0,-1)$. For
$\displaystyle(x,r)\in\Omega_{\mu}$, set
$\displaystyle\psi_{0}(x,r)=\max\left\\{\min\left\\{m_{1}-\frac{\sqrt{\Lambda+\lambda}(r^{2}-R^{2})}{2},m_{1}\right\\},0\right\\}+\min\left\\{\max\left\\{\frac{\sqrt{\lambda}(R^{2}+\frac{2m_{1}}{\sqrt{\Lambda+\lambda}}-r^{2})}{2},m_{2}\right\\},0\right\\}.$
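Note (an elementary check) that $\displaystyle\psi_{0}$ depends on $\displaystyle r$ only and is the stream function of the downward constant-speed flow: on $\displaystyle\\{0<\psi_{0}<m_{1}\\}$ one has $\displaystyle\psi_{0}=m_{1}-\frac{\sqrt{\Lambda+\lambda}(r^{2}-R^{2})}{2}$, hence
$\displaystyle\nabla\psi_{0}=\left(0,-\sqrt{\Lambda+\lambda}r\right)=\sqrt{\Lambda+\lambda}re,\ \ \ \frac{\nabla\psi_{0}}{r}=\sqrt{\Lambda+\lambda}e,$
and similarly $\displaystyle\frac{\nabla\psi_{0}}{r}=\sqrt{\lambda}e$ on $\displaystyle\\{m_{2}<\psi_{0}\leq 0\\}$; in particular the integrand of $\displaystyle J_{\lambda,\theta,\mu}$ vanishes identically on $\displaystyle\psi_{0}$, which is what makes $\displaystyle\psi_{0}$ an effective barrier below.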
Next, we claim that
$\psi(x,r)\leq\psi_{0}(x,r)\ \ \text{in $\displaystyle\Omega_{\mu}$}.$ (3.37)
Suppose that the assertion (3.37) is not true. Recalling that
$\displaystyle\min\\{\psi,\psi_{0}\\}\in K_{\mu}$ and the uniqueness of the
minimizer, we obtain
$\displaystyle
J_{\lambda,\theta,\mu}(\psi)<J_{\lambda,\theta,\mu}(\min\\{\psi,\psi_{0}\\}).$
This implies that for some sufficiently large $\displaystyle
R_{0}>\max\left\\{\sqrt{\frac{2m_{1}}{\sqrt{\Lambda+\lambda}}-\frac{2m_{2}}{\sqrt{\lambda}}+R^{2}},1\right\\}$,
one has
$\displaystyle 0>$
$\displaystyle\int_{\Omega_{\mu,R_{0}}}r\left|\frac{\nabla\psi}{r}-\left(\sqrt{\Lambda+\lambda}\chi_{\\{0<\psi<m_{1}\\}}+\sqrt{\lambda}\chi_{\\{m_{2}<\psi\leq
0\\}}\right)e\right|^{2}dX$ (3.38)
$\displaystyle-\int_{\Omega_{\mu,R_{0}}}r\left|\frac{\nabla\min\\{\psi,\psi_{0}\\}}{r}-\left(\sqrt{\Lambda+\lambda}\chi_{\\{0<\min\\{\psi,\psi_{0}\\}<m_{1}\\}}+\sqrt{\lambda}\chi_{\\{m_{2}<\min\\{\psi,\psi_{0}\\}\leq
0\\}}\right)e\right|^{2}dX$ $\displaystyle=$
$\displaystyle\int_{\Omega_{\mu,R_{0}}}\frac{\nabla\max\\{\psi-\psi_{0},0\\}\cdot\nabla(\psi+\psi_{0})}{r}dX$
$\displaystyle-2\sqrt{\Lambda+\lambda}\int_{\Omega_{\mu,R_{0}}}\nabla\psi\cdot
e\chi_{\\{0<\psi<m_{1}\\}}-\nabla\min\left\\{\psi,\psi_{0}\right\\}\cdot
e\chi_{\\{0<\min\left\\{\psi,\psi_{0}\right\\}<m_{1}\\}}dX$
$\displaystyle-2\sqrt{\lambda}\int_{\Omega_{\mu,R_{0}}}\nabla\psi\cdot
e\chi_{\\{m_{2}<\psi\leq 0\\}}-\nabla\min\left\\{\psi,\psi_{0}\right\\}\cdot
e\chi_{\\{m_{2}<\min\left\\{\psi,\psi_{0}\right\\}\leq 0\\}}dX$
$\displaystyle+\int_{\Omega_{\mu,R_{0}}}(\Lambda+\lambda)r\left(\chi_{\\{0<\psi<m_{1}\\}}-\chi_{\\{0<\min\\{\psi,\psi_{0}\\}<m_{1}\\}}\right)+\lambda
r\left(\chi_{\\{m_{2}<\psi\leq
0\\}}-\chi_{\\{m_{2}<\min\\{\psi,\psi_{0}\\}\leq 0\\}}\right)dX$
$\displaystyle=$ $\displaystyle I_{1}+I_{2}+I_{3}+I_{4},$
where $\displaystyle\Omega_{\mu,R_{0}}$ is bounded by $\displaystyle
N_{i,\mu}$, $\displaystyle N_{0,\mu}$, $\displaystyle L_{i}$, $\displaystyle
H_{i,\mu}$, $\displaystyle\left\\{((-1)^{i},r)\mid R\leq r\leq R_{0}\right\\}$
and $\displaystyle\\{(x,R_{0})\mid-R_{0}\leq x\leq R_{0}\\}$ for
$\displaystyle i=1,2$.
The first term $\displaystyle I_{1}$ can be estimated as follows,
$\displaystyle I_{1}=$
$\displaystyle\int_{\Omega_{\mu,R_{0}}}\frac{|\nabla\max\\{\psi-\psi_{0},0\\}|^{2}}{r}dX+2\int_{\Omega_{\mu,R_{0}}}\frac{\nabla\max\\{\psi-\psi_{0},0\\}\cdot\nabla\psi_{0}}{r}dX$
(3.39) $\displaystyle=$
$\displaystyle\int_{\Omega_{\mu,R_{0}}}\frac{|\nabla\max\\{\psi-\psi_{0},0\\}|^{2}}{r}dX-2\sqrt{\Lambda+\lambda}\int_{\bar{\Omega}_{\mu,R_{0}}\cap\\{\psi_{0}=0\\}}\max\\{\psi-\psi_{0},0\\}dx$
$\displaystyle+2\sqrt{\lambda}\int_{\bar{\Omega}_{\mu,R_{0}}\cap\\{\psi_{0}=0\\}}\max\\{\psi-\psi_{0},0\\}dx-2\sqrt{\lambda}\int_{\bar{\Omega}_{\mu,R_{0}}\cap\\{\psi_{0}<0\\}\cap\\{r=R_{0}\\}}\max\\{\psi-\psi_{0},0\\}dx.$
Furthermore, for the second term $\displaystyle I_{2}$, one has
$\displaystyle I_{2}=$
$\displaystyle-2\sqrt{\Lambda+\lambda}\left\\{\int_{\Omega_{\mu,R_{0}}\cap\\{0<\psi_{0}<m_{1}\\}\cap\\{0<\psi<m_{1}\\}}\nabla\max\left\\{\psi-\psi_{0},0\right\\}\cdot
edX\right.$ (3.40)
$\displaystyle+\left.\int_{\Omega_{\mu,R_{0}}\cap\\{\psi_{0}<0\\}\cap\\{0<\psi<m_{1}\\}}\nabla(\psi-\psi_{0})\cdot
edX-\int_{\Omega_{\mu,R_{0}}\cap\\{0<\psi_{0}<m_{1}\\}\cap\\{\psi=m_{1}\\}}\nabla(m_{1}-\psi_{0})\cdot
edX\right\\}$ $\displaystyle=$ $\displaystyle
2\sqrt{\Lambda+\lambda}\int_{\bar{\Omega}_{\mu,R_{0}}\cap\\{\psi_{0}=0\\}}\max\\{\psi-\psi_{0},0\\}dx.$
Similarly, we obtain
$I_{3}=-2\sqrt{\lambda}\int_{\bar{\Omega}_{\mu,R_{0}}\cap\\{\psi_{0}=0\\}}\max\\{\psi-\psi_{0},0\\}dx+2\sqrt{\lambda}\int_{\bar{\Omega}_{\mu,R_{0}}\cap\\{\psi_{0}<0\\}\cap\\{r=R_{0}\\}}\max\\{\psi-\psi_{0},0\\}dx.$
(3.41)
Finally, we have
$\displaystyle I_{4}\geq$
$\displaystyle(\Lambda+\lambda)\int_{\Omega_{\mu,R_{0}}}r\left(\chi_{\\{0<\psi<m_{1}\\}\cap\\{\psi_{0}\leq
0\\}}-\chi_{\\{0<\psi_{0}<m_{1}\\}\cap\\{\psi=m_{1}\\}}\right)dX-\lambda\int_{\Omega_{\mu,R_{0}}}r\chi_{\\{m_{2}<\psi_{0}\leq
0\\}\cap\\{\psi>0\\}}dX$ (3.42) $\displaystyle\geq$
$\displaystyle-(\Lambda+\lambda)\int_{\Omega_{\mu,R_{0}}}r\chi_{\\{0<\psi_{0}<m_{1}\\}\cap\\{\psi=m_{1}\\}}dX-\lambda\int_{\Omega_{\mu,R_{0}}\cap\\{m_{2}<\psi_{0}\leq
0\\}}r\left(\chi_{\\{\psi>0\\}}-\chi_{\\{0<\psi<m_{1}\\}}\right)dX$
$\displaystyle\geq$
$\displaystyle-(\Lambda+\lambda)\int_{\Omega_{\mu,R_{0}}}r\chi_{\\{0<\psi_{0}<m_{1}\\}\cap\\{\psi=m_{1}\\}}dX-\lambda\int_{\Omega_{\mu,R_{0}}}r\chi_{\\{m_{2}<\psi_{0}\leq
0\\}\cap\\{\psi=m_{1}\\}}dX.$
Inserting (3.39)-(3.42) into (3.38) yields
$\displaystyle 0>$
$\displaystyle\int_{\Omega_{\mu,R_{0}}}\frac{|\nabla\max\\{\psi-\psi_{0},0\\}|^{2}}{r}dX-(\Lambda+\lambda)\int_{\Omega_{\mu,R_{0}}}r\chi_{\\{0<\psi_{0}<m_{1}\\}\cap\\{\psi=m_{1}\\}}dX$
$\displaystyle-\lambda\int_{\Omega_{\mu,R_{0}}}r\chi_{\\{m_{2}<\psi_{0}\leq
0\\}\cap\\{\psi=m_{1}\\}}dX$ $\displaystyle=$
$\displaystyle\int_{\Omega_{\mu,R_{0}}\cap\\{0<\psi_{0}<m_{1}\\}\cap\\{\psi=m_{1}\\}}\left(\frac{|\nabla\psi_{0}|^{2}}{r}-(\Lambda+\lambda)r\right)dX+\int_{\Omega_{\mu,R_{0}}\cap\\{0<\psi_{0}<\psi<m_{1}\\}}\frac{|\nabla(\psi-\psi_{0})|^{2}}{r}dX$
$\displaystyle+\int_{\Omega_{\mu,R_{0}}\cap\\{m_{2}<\psi_{0}\leq
0\\}\cap\\{\psi=m_{1}\\}}\left(\frac{|\nabla\psi_{0}|^{2}}{r}-\lambda
r\right)dX+\int_{\Omega_{\mu,R_{0}}\cap\\{m_{2}<\psi_{0}\leq
0\\}\cap\\{\psi_{0}<\psi<m_{1}\\}}\frac{|\nabla(\psi-\psi_{0})|^{2}}{r}dX$
$\displaystyle=$
$\displaystyle\int_{\Omega_{\mu,R_{0}}\cap\\{m_{2}<\psi_{0}<\psi<m_{1}\\}}\frac{|\nabla(\psi-\psi_{0})|^{2}}{r}dX,$
which is a contradiction, since the right-hand side is nonnegative; here the
first and third integrals in the middle expression vanish because
$\displaystyle|\nabla\psi_{0}|^{2}=(\Lambda+\lambda)r^{2}$ on
$\displaystyle\\{0<\psi_{0}<m_{1}\\}$ and
$\displaystyle|\nabla\psi_{0}|^{2}=\lambda r^{2}$ on
$\displaystyle\\{m_{2}<\psi_{0}\leq 0\\}$. Hence, (3.37) holds, which implies that
$\displaystyle\psi(x,r)<m_{1}\ \ \text{in
$\displaystyle\Omega_{\mu}\cap\\{r>R\\}$},$
and thus the free boundary $\displaystyle\Gamma_{1,\mu}$ vanishes.
For $\displaystyle\theta=0$, take
$\displaystyle\psi_{0}=\min\left\\{\max\left\\{\frac{\sqrt{\lambda}(r^{2}-R^{2})}{2}+m_{2},m_{2}\right\\},0\right\\}+\max\left\\{\min\left\\{\frac{\sqrt{\Lambda+\lambda}(r^{2}-R^{2}+\frac{2m_{2}}{\sqrt{\lambda}})}{2},m_{1}\right\\},0\right\\}.$
Similar arguments as before yield that
$\displaystyle\psi(x,r)\geq\psi_{0}(x,r)\ \ \text{in
$\displaystyle\Omega_{\mu}\cap\\{r>R\\}$},$
which implies that the free boundary $\displaystyle\Gamma_{2,\mu}$ is empty.
Therefore, we complete the proof of Proposition 3.15. ∎
###### Remark 3.1.
Furthermore, we define $\displaystyle g_{1,\lambda,\theta,\mu}(R)=-\infty$ for
$\displaystyle\theta=\pi$, $\displaystyle g_{2,\lambda,\theta,\mu}(R)=+\infty$
for $\displaystyle\theta=0$, respectively.
Proposition 3.15 implies that one of the free boundaries vanishes for a
horizontal asymptotic direction; on the other hand, we will show next that
both free boundaries are non-empty for any non-horizontal asymptotic
direction.
###### Lemma 3.16.
If $\displaystyle\theta\in(0,\pi)$, then $\displaystyle\Gamma_{i,\mu}$ is non-
empty and a connected curve, and $\displaystyle x=g_{i,\lambda,\theta,\mu}(r)$ is
continuous in $\displaystyle(R,+\infty)$. Moreover, $\displaystyle\lim_{r\rightarrow
R^{+}}g_{i,\lambda,\theta,\mu}(r)$ exists and is denoted by $\displaystyle
g_{i,\lambda,\theta,\mu}(R+0)$ for $\displaystyle i=1,2$.
###### Proof.
Step 1. We will show that $\displaystyle\Gamma_{i,\mu}$ is non-empty for
$\displaystyle i=1,2$.
Firstly, we claim that there exists a constant $\displaystyle R_{0}>0$ such
that every ball $\displaystyle B_{R_{0}}(X_{0})\subset\Omega_{\mu}\cap\\{r>R\\}$
centered at a point $\displaystyle X_{0}=(x_{0},R+R_{0})\in\Omega_{\mu}$ with
$\displaystyle 0<\psi_{\lambda,\theta,\mu}(X_{0})<m_{1}$ contains a free
boundary point.
Indeed, suppose not; then $\displaystyle
B_{R_{0}}(X_{0})\cap\Gamma_{1,\mu}=\varnothing$. Similar arguments as in Lemma
3.7 give
$\displaystyle\sup_{\partial
B_{R_{0}}(X_{0})}\left(m_{1}-\psi_{\lambda,\theta,\mu}\right)\geq
c\sqrt{\Lambda+\lambda}R(R+R_{0}),$
which implies $\displaystyle R_{0}\leq\frac{m_{1}}{c\sqrt{\Lambda+\lambda}R}$.
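Indeed, since $\displaystyle 0\leq m_{1}-\psi_{\lambda,\theta,\mu}\leq m_{1}$, the previous inequality forces
$\displaystyle m_{1}\geq c\sqrt{\Lambda+\lambda}R(R+R_{0})\geq c\sqrt{\Lambda+\lambda}RR_{0}.$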
This is impossible for sufficiently large $\displaystyle R_{0}$. Hence, the
claim holds.
Without loss of generality, we assume that $\displaystyle\Gamma_{1,\mu}$ is
empty, then we obtain $\displaystyle\psi_{\lambda,\theta,\mu}<m_{1}$ in
$\displaystyle\Omega_{\mu}\cap\\{r>R\\}$.
In view of the claim, there is a sequence $\displaystyle
X_{n}=(x_{n},r_{n})\in\Gamma_{2,\mu}$ such that $\displaystyle R<r_{n}\leq c$
and $\displaystyle x_{n}\rightarrow-\infty$. Hence, there exists a subsequence
$\displaystyle X_{n_{k}}=(x_{n_{k}},r_{n_{k}})\in\Gamma_{2,\mu}$ and
$\displaystyle r_{n_{k}}\rightarrow\xi$, $\displaystyle
x_{n_{k}}\rightarrow-\infty$ as $\displaystyle k\rightarrow+\infty$. Due to
Proposition 3.13, we can prove that
$\displaystyle\psi_{\lambda,\theta,\mu}(X+X_{n_{k}})\rightarrow\psi_{0}(X)$
uniformly in any compact subset of $\displaystyle\\{(x,r)|r>R-\xi\\}$ as
$\displaystyle k\rightarrow+\infty$, where $\displaystyle\psi_{0}$ is a
constant flow with deflection angle $\displaystyle\theta=\pi$. This
contradicts $\displaystyle\theta\in(0,\pi)$. Thus, the free boundaries
$\displaystyle\Gamma_{1,\mu}$ and $\displaystyle\Gamma_{2,\mu}$ are non-empty.
Step 2. We will verify that $\displaystyle\Gamma_{i,\mu}$ is a connected curve
and $\displaystyle x=g_{i,\lambda,\theta,\mu}(r)$ is a continuous function in
$\displaystyle[R,+\infty)$, $\displaystyle i=1,2$.
Without loss of generality, we consider the left free boundary. Let
$\displaystyle(\alpha,\beta)$ be the maximal interval such that $\displaystyle
x=g_{1,\lambda,\theta,\mu}(r)$ is finite-valued for all
$\displaystyle r\in(\alpha,\beta)$.
By similar arguments as in Section 5 in [2], we obtain $\displaystyle\alpha=R$, and
the limit $\displaystyle\lim_{r\rightarrow R}g_{1,\lambda,\theta,\mu}(r)$
exists.
If $\displaystyle\beta<+\infty$, one has
$\displaystyle x=g_{1,\lambda,\theta,\mu}(r)\rightarrow+\infty\quad\text{
or}\quad x=g_{1,\lambda,\theta,\mu}(r)\rightarrow-\infty\ \ \text{as}\ \
r\rightarrow\beta,$
which together with Proposition 3.13 implies $\displaystyle\theta=0$ or
$\displaystyle\theta=\pi$. This contradicts the assumption
$\displaystyle\theta\in(0,\pi)$.
Therefore, we complete the proof of Lemma 3.16. ∎
### 3.5. Monotonicity with respect to the parameter $\displaystyle\theta$
Next, we will establish the fact that the minimizer
$\displaystyle\psi_{\lambda,\theta,\mu}$ and the free boundaries $\displaystyle
x=g_{i,\lambda,\theta,\mu}(r)$ ($\displaystyle i=1,2$) are monotone with
respect to the asymptotic deflection angle $\displaystyle\theta$.
###### Proposition 3.17.
Let $\displaystyle\theta_{1},\theta_{2}\in[0,\pi]$ with
$\displaystyle\theta_{1}<\theta_{2}$, let
$\displaystyle\psi_{\lambda,\theta_{1},\mu}$ and
$\displaystyle\psi_{\lambda,\theta_{2},\mu}$ be the minimizers to the truncated
variational problems ($\displaystyle P_{\lambda,\theta_{1},\mu}$) and
($\displaystyle P_{\lambda,\theta_{2},\mu}$), and let $\displaystyle
x=g_{i,\lambda,\theta_{1},\mu}(r)$ and $\displaystyle
x=g_{i,\lambda,\theta_{2},\mu}(r)$ be the free boundaries of
$\displaystyle\psi_{\lambda,\theta_{1},\mu}$ and
$\displaystyle\psi_{\lambda,\theta_{2},\mu}$, respectively. Then
$\psi_{\lambda,\theta_{1},\mu}\geq\psi_{\lambda,\theta_{2},\mu}\ \ \text{for
$\displaystyle(x,r)\in\Omega_{\mu}$},$ (3.43)
and
$g_{i,\lambda,\theta_{1},\mu}(r)>g_{i,\lambda,\theta_{2},\mu}(r)\ \ \text{for
$\displaystyle r\geq R$, $\displaystyle i=1,2$}.$ (3.44)
###### Proof.
Denote $\displaystyle\psi_{1}=\psi_{\lambda,\theta_{1},\mu}$ and
$\displaystyle\psi_{2}=\psi_{\lambda,\theta_{2},\mu}$ for simplicity, and set
$\displaystyle v_{1}=\max\left\\{\psi_{1},\psi_{2}\right\\}$ and
$\displaystyle v_{2}=\min\left\\{\psi_{1},\psi_{2}\right\\}$.
For $\displaystyle\theta_{1}<\theta_{2}$, as in Lemma 8.1 in [2], we
obtain
$\displaystyle
J_{\lambda,\theta_{1},\mu}(\psi_{1})=J_{\lambda,\theta_{1},\mu}(v_{1})\ \
\text{and}\ \
J_{\lambda,\theta_{2},\mu}(\psi_{2})=J_{\lambda,\theta_{2},\mu}(v_{2}).$
Since $\displaystyle\psi_{1}$ and $\displaystyle\psi_{2}$ are the minimizers
to the functionals $\displaystyle J_{\lambda,\theta_{1},\mu}$ and
$\displaystyle J_{\lambda,\theta_{2},\mu}$, respectively, we can now proceed
as in Theorem 7.1 in [4] to obtain that
$\displaystyle\text{either}~{}~{}\psi_{1}\geq\psi_{2}~{}~{}\text{or}~{}~{}\psi_{1}\leq\psi_{2}~{}\text{in}~{}\Omega_{\mu}.$
However, noticing that $\displaystyle\psi_{1}\geq\psi_{2}$ in
$\displaystyle\Omega_{\mu}\cap\\{r>R_{0}\\}$ for some sufficiently large
$\displaystyle R_{0}>R$, we conclude that
$\displaystyle\psi_{1}\geq\psi_{2}$ in $\displaystyle\Omega_{\mu}$.
Next, without loss of generality, we prove that (3.44) holds for
$\displaystyle i=1$, namely
$g_{1,\lambda,\theta_{1},\mu}(r)>g_{1,\lambda,\theta_{2},\mu}(r)\ \ \text{for
$\displaystyle r\geq R$}.$ (3.45)
Indeed, in view of (3.43), one has
$g_{1,\lambda,\theta_{1},\mu}(r)\geq g_{1,\lambda,\theta_{2},\mu}(r)\ \
\text{for}\ \ r\geq R.$ (3.46)
Suppose that (3.45) fails for some $\displaystyle r>R$; then there exists a point $\displaystyle
X_{0}=\left(x_{0},r_{0}\right)$ with $\displaystyle r_{0}>R$ such that
$\displaystyle
x_{0}=g_{1,\lambda,\theta_{1},\mu}(r_{0})=g_{1,\lambda,\theta_{2},\mu}(r_{0}).$
Since the free boundary $\displaystyle x=g_{1,\lambda,\theta_{1},\mu}(r)$ is
analytic in $\displaystyle r>R$, applying Hopf’s lemma yields that
$\displaystyle\frac{\partial}{\partial\nu}\left(\psi_{\lambda,\theta_{1},\mu}-\psi_{\lambda,\theta_{2},\mu}\right)<0\
\ \text{at}~{}~{}X_{0},$
where $\displaystyle\nu$ is the unit outward normal vector of $\displaystyle
x=g_{1,\lambda,\theta_{1},\mu}(r)$ at $\displaystyle X_{0}$. This contradicts
the free boundary conditions
$\displaystyle\sqrt{\lambda+\Lambda}=\frac{1}{r}\frac{\partial\psi_{\lambda,\theta_{1},\mu}}{\partial\nu}<\frac{1}{r}\frac{\partial\psi_{\lambda,\theta_{2},\mu}}{\partial\nu}=\sqrt{\lambda+\Lambda}$
at $\displaystyle X_{0}$.
On the other hand, for $\displaystyle r=R$, suppose that $\displaystyle
g_{1,\lambda,\theta_{1},\mu}(R)=g_{1,\lambda,\theta_{2},\mu}(R)$ and
$\displaystyle X_{0}=(g_{1,\lambda,\theta_{1},\mu}(R),R)$. If $\displaystyle
g_{1,\lambda,\theta_{1},\mu}(R)\leq-1$, let $\displaystyle G_{\delta}$ be a
domain bounded by $\displaystyle N_{1}$, $\displaystyle L_{1}$,
$\displaystyle\Gamma_{1,\lambda,\theta_{1},\mu}$ and $\displaystyle\partial
B_{\delta}(X_{0})$, and $\displaystyle
G_{\delta}\subset\\{0<\psi_{\lambda,\theta_{1},\mu}<m_{1}\\}$. If
$\displaystyle g_{1,\lambda_{\mu},\theta_{1},\mu}(R)>-1$, set $\displaystyle
G_{\delta}$ be a domain bounded by $\displaystyle N_{1}$,
$\displaystyle\\{(x,R)\mid-1\leq x\leq
g_{1,\lambda_{\mu},\theta_{1},\mu}(R)\\}$,
$\displaystyle\Gamma_{1,\lambda,\theta_{1},\mu}$ and $\displaystyle\partial
B_{\delta}(X_{0})$, and $\displaystyle
G_{\delta}\subset\\{0<\psi_{\lambda,\theta_{1},\mu}<m_{1}\\}$.
Set
$\displaystyle\tilde{\psi}_{\varepsilon}=(1+\varepsilon)(m_{1}-\psi_{\lambda,\theta_{1},\mu})-(m_{1}-\psi_{\lambda,\theta_{2},\mu})\
\ \text{for some $\displaystyle\varepsilon>0$}.$
Recalling the fact $\displaystyle\psi_{1}\geq\psi_{2}$ in
$\displaystyle\Omega_{\mu}$, we can choose $\displaystyle\delta>0$
sufficiently small such that
$\displaystyle
G_{\delta}\subset\\{0<\psi_{\lambda,\theta_{1},\mu}<m_{1}\\}\cap\\{0<\psi_{\lambda,\theta_{2},\mu}<m_{1}\\}.$
It follows from similar arguments as in Corollary 11.5 in [23] that there
exists a small $\displaystyle\varepsilon>0$ such that
$\tilde{\psi}_{\varepsilon}<0\ \ \text{in $\displaystyle G_{\delta}$}.$ (3.47)
Hence, we obtain
$\displaystyle\frac{1}{R}\frac{\partial\tilde{\psi}_{\varepsilon}\left(g_{1,\lambda,\theta_{1},\mu}(R),R\right)}{\partial\nu}\geq
0,$
where $\displaystyle\nu$ is the unit normal vector of the left free boundary
$\displaystyle\Gamma_{1,\lambda,\theta_{1},\mu}$ at
$\displaystyle\left(g_{1,\lambda,\theta_{1},\mu}(R),R\right)$, then
$\displaystyle(1+\varepsilon)\sqrt{\lambda+\Lambda}\leq\sqrt{\lambda+\Lambda}.$
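Spelling out this step (using the free boundary condition $\displaystyle\frac{1}{R}\frac{\partial\psi_{\lambda,\theta_{i},\mu}}{\partial\nu}=\sqrt{\lambda+\Lambda}$, $\displaystyle i=1,2$, at the common point): by the definition of $\displaystyle\tilde{\psi}_{\varepsilon}$,
$\displaystyle 0\leq\frac{1}{R}\frac{\partial\tilde{\psi}_{\varepsilon}}{\partial\nu}=-\frac{1+\varepsilon}{R}\frac{\partial\psi_{\lambda,\theta_{1},\mu}}{\partial\nu}+\frac{1}{R}\frac{\partial\psi_{\lambda,\theta_{2},\mu}}{\partial\nu}=-(1+\varepsilon)\sqrt{\lambda+\Lambda}+\sqrt{\lambda+\Lambda}.$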
This leads to a contradiction since $\displaystyle\varepsilon>0$, and then the inequality (3.45) holds for
$\displaystyle r=R$.
Therefore, we finish the proof of Proposition 3.17. ∎
### 3.6. Continuous dependence on the parameters $\displaystyle\lambda$ and
$\displaystyle\theta$
In this subsection, a convergence result with respect to the parameters
$\displaystyle\lambda$ and $\displaystyle\theta$ will be stated as follows.
###### Proposition 3.18.
For any $\displaystyle\lambda>0$ and $\displaystyle\theta\in[0,\pi]$, and
sequences $\displaystyle\lambda_{n}\rightarrow\lambda$,
$\displaystyle\theta_{n}\rightarrow\theta$ with
$\displaystyle\theta_{n}\in[0,\pi]$, let
$\displaystyle\psi_{\lambda_{n},\theta_{n},\mu}$ be the minimizer to the
variational problem ($\displaystyle P_{\lambda_{n},\theta_{n},\mu}$),
$\displaystyle x=g_{i,\lambda_{n},\theta_{n},\mu}(r)$ and $\displaystyle
x=g_{\lambda_{n},\theta_{n},\mu}(r)$ be the free boundaries and the interface of
$\displaystyle\psi_{\lambda_{n},\theta_{n},\mu}$, respectively.
Then there exist three subsequences still labeled as
$\displaystyle\psi_{\lambda_{n},\theta_{n},\mu}$, $\displaystyle
g_{i,\lambda_{n},\theta_{n},\mu}(r)$ and $\displaystyle
g_{\lambda_{n},\theta_{n},\mu}(r)$ such that
$\psi_{\lambda_{n},\theta_{n},\mu}\rightarrow\psi_{\lambda,\theta,\mu}\ \
\text{weakly in}~{}~{}H_{loc}^{1}(\Omega_{\mu})~{}~{}\text{and pointwise}\
\text{in}\ \Omega_{\mu},$ (3.48)
$g_{i,\lambda_{n},\theta_{n},\mu}(r)\rightarrow g_{i,\lambda,\theta,\mu}(r)\ \
\text{uniformly for $\displaystyle r\geq R$},$ (3.49)
and
$g_{\lambda_{n},\theta_{n},\mu}(r)\rightarrow g_{\lambda,\theta,\mu}(r)\ \
\text{uniformly for $\displaystyle r\geq 0$}.$ (3.50)
Here, $\displaystyle\psi_{\lambda,\theta,\mu}$ is the minimizer to the
variational problem ($\displaystyle P_{\lambda,\theta,\mu}$) and
$\displaystyle x=g_{i,\lambda,\theta,\mu}(r)$ and $\displaystyle
x=g_{\lambda,\theta,\mu}(r)$ are the free boundary and interface of
$\displaystyle\psi_{\lambda,\theta,\mu}$ for $\displaystyle i=1,2$,
respectively.
###### Proof.
Firstly, recalling the following facts
$\displaystyle\psi_{\lambda_{n},\theta_{n},\mu}\in H_{loc}^{1}(\Omega_{\mu}),\
\ \left|\nabla\psi_{\lambda_{n},\theta_{n},\mu}\right|\leq C,$
and using a diagonal procedure gives that there exist a subsequence
$\displaystyle\left\\{\psi_{\lambda_{n},\theta_{n},\mu}\right\\}_{n=1}^{\infty}$
and a function $\displaystyle\omega\in H_{loc}^{1}(\Omega_{\mu})$ such that,
for some $\displaystyle 0<\alpha<1$,
$\displaystyle\psi_{\lambda_{n},\theta_{n},\mu}\rightarrow\omega\ \
\text{weakly
in}~{}~{}H_{loc}^{1}(\Omega_{\mu}),~{}~{}C_{loc}^{\alpha}(\Omega_{\mu})~{}~{}\text{
and pointwise in}~{}~{}\Omega_{\mu}.$
Along similar arguments as in Lemma 9.2 in [2], we obtain that
$\displaystyle\omega$ is indeed a minimizer to the truncated variational
problem ($\displaystyle P_{\lambda,\theta,\mu}$). Due to the uniqueness of
minimizer to the truncated variational problem ($\displaystyle
P_{\lambda,\theta,\mu}$), we have
$\displaystyle\omega=\psi_{\lambda,\theta,\mu}$. Therefore, we obtain the
convergence of (3.48).
Secondly, we will show the statement (3.49) for
$\displaystyle\Gamma_{1,\lambda_{n},\theta_{n},\mu}$. Indeed, for any
$\displaystyle r_{n}>R$, let
$\displaystyle
X_{n}=\left(g_{1,\lambda_{n},\theta_{n},\mu}(r_{n}),r_{n}\right)\in\Gamma_{1,\lambda_{n},\theta_{n},\mu},\
\ \text{and}\ \ X_{n}\rightarrow
X_{0}=(x_{0},r_{0}),~{}~{}\text{as}~{}~{}n\rightarrow+\infty.$
Then, for any small $\displaystyle r>0$, the non-degeneracy lemma implies that
there exist two positive constants $\displaystyle C_{1}$ and $\displaystyle
C_{2}$, such that
$\displaystyle C_{1}\lambda_{n}r_{n}\leq\frac{1}{r}\fint_{\partial
B_{r}(X_{n})}\left(m_{1}-\psi_{\lambda_{n},\theta_{n},\mu}\right)dS\leq
C_{2}\lambda_{n}r_{n}.$
Letting $\displaystyle n\rightarrow+\infty$ gives
$\displaystyle C_{1}\lambda r_{0}\leq\frac{1}{r}\fint_{\partial
B_{r}(X_{0})}\left(m_{1}-\psi_{\lambda,\theta,\mu}\right)dS\leq C_{2}\lambda
r_{0}.$
Moreover, recalling the non-degeneracy Lemma 3.5 and Lemma 3.6 yields that
$\displaystyle X_{0}\in\Gamma_{1,\mu}$. Hence, we obtain the assertion (3.49)
for $\displaystyle r\in(R,+\infty)$.
Using Lemma 10.4 in [23], we can obtain the result for $\displaystyle r=R$,
namely,
$\displaystyle g_{1,\lambda_{n},\theta_{n},\mu}(R)\rightarrow
g_{1,\lambda,\theta,\mu}(R)\ \ \text{as $\displaystyle n\rightarrow+\infty$}.$
Similarly, (3.49) holds for the right free boundary
$\displaystyle\Gamma_{2,\lambda_{n},\theta_{n},\mu}$.
Finally, by similar arguments as in Theorem 7.1 in [6], we can obtain (3.50). ∎
### 3.7. Continuous and smooth fit conditions of the free boundaries
In this subsection, we will verify that there exist two parameters
$\displaystyle\lambda$ and $\displaystyle\theta$, such that the free
boundaries $\displaystyle\Gamma_{i,\mu}$ connect smoothly at the end points
$\displaystyle A_{i}$ of the nozzles $\displaystyle N_{i}$ ($\displaystyle
i=1,2$), respectively. Namely, for any $\displaystyle\mu>0$, there exists a
pair of parameters $\displaystyle(\lambda_{\mu},\theta_{\mu})$ with
$\displaystyle\lambda_{\mu}>0$, $\displaystyle\theta_{\mu}\in(0,\pi)$, such
that
$\displaystyle g_{1,\lambda_{\mu},\theta_{\mu},\mu}(R)=-1\ \ \text{and}\ \
g_{2,\lambda_{\mu},\theta_{\mu},\mu}(R)=1.$
As already mentioned before, this is the main difference from the impinging free
jet without rigid nozzle walls.
To see this, we first define a set $\displaystyle\Sigma_{\mu}$ as
$\Sigma_{\mu}=\\{\lambda\mid\lambda\geq 0,\text{there exists a
$\displaystyle\theta\in(0,\pi)$, such that $\displaystyle
g_{1,\lambda,\theta,\mu}(R)<-1$ and $\displaystyle
g_{2,\lambda,\theta,\mu}(R)>1$}\\}.$ (3.51)
The following lemma implies that $\displaystyle\Sigma_{\mu}$ is non-empty.
###### Lemma 3.19.
There exists $\displaystyle\theta_{0}\in(0,\pi)$ such that
$g_{1,\lambda,\theta_{0},\mu}(R)<-1\ \ \text{and}\ \
g_{2,\lambda,\theta_{0},\mu}(R)>1,$ (3.52)
for sufficiently small $\displaystyle\lambda>0$.
###### Proof.
For any
$\displaystyle\Omega_{0}\subset\subset\Omega_{\mu}\cap\\{r<R\\}\cap\\{m_{2}<\psi_{\lambda,\theta,\mu}<0\\}$,
firstly, it follows from Lemma 5.2 in [2] that there exists a positive
constant $\displaystyle C$ (depending only on $\displaystyle\Omega_{0}$), such
that
$|\nabla\psi_{\lambda,\theta,\mu}|\leq C\lambda\ \ \text{in}\ \ \Omega_{0},$
(3.53)
provided that $\displaystyle\Omega_{0}$ contains a free boundary point.
Let $\displaystyle\theta\in(0,\pi)$ and suppose that (3.52) fails; without loss of generality,
suppose $\displaystyle g_{2,\lambda,\theta,\mu}(R)\leq 1$.
Indeed, it follows from the monotonicity of
$\displaystyle\psi_{\lambda,\theta,\mu}(x,r)$ with respect to $\displaystyle
x$, that there exists a point $\displaystyle X_{1}\in\Omega_{\mu}$, such that
$\displaystyle\psi_{\lambda,\theta,\mu}(X_{1})=\frac{m_{2}}{2},\ \
\text{with}\ \ X_{1}=(x_{1},R),\ \ \text{and}\ \
g_{\lambda,\theta,\mu}(R)<x_{1}<1.$
Denote $\displaystyle X_{2}=(x_{2},R)$ as the initial point of the right free
boundary $\displaystyle x=g_{2,\lambda,\theta,\mu}(r)$, due to the
monotonicity of $\displaystyle\psi_{\lambda,\theta,\mu}$ with respect to
$\displaystyle x$, one has $\displaystyle x_{2}>x_{1}$. Taking $\displaystyle
X_{3}=\left(\frac{3x_{2}-x_{1}}{2},R\right)$, an arc
$\displaystyle\gamma\in\Omega_{\mu}\cap\\{r>R\\}$ connecting $\displaystyle
X_{1}$ to $\displaystyle X_{3}$ and $\displaystyle|\gamma|\leq
C|X_{2}-X_{1}|=C|x_{2}-x_{1}|$, which intersects $\displaystyle\Gamma_{2,\mu}$
at $\displaystyle X_{4}$, $\displaystyle\gamma_{0}$ denotes the arc part
$\displaystyle\gamma$ from $\displaystyle X_{1}$ to $\displaystyle X_{4}$.
Let $\displaystyle\Omega_{0}$ be the domain bounded by $\displaystyle\gamma_{0}$,
$\displaystyle r=R$ and $\displaystyle\Gamma_{2,\mu}$; it follows from (3.53)
that
$|\nabla\psi_{\lambda,\theta,\mu}|\leq C\lambda\ \text{in}\ \
\Omega_{0}\setminus B_{\delta}(A_{2}),\ \ \text{for sufficiently small
$\displaystyle\delta>0$}.$ (3.54)
Hence, it follows from (3.54) that
$\displaystyle-\frac{m_{2}}{2}=\psi_{\lambda,\theta,\mu}(X_{1})-\psi_{\lambda,\theta,\mu}(X_{4})\leq\int_{\gamma_{0}}|\nabla\psi_{\lambda,\theta,\mu}|dl\leq
C\lambda|X_{1}-X_{2}|\leq C\lambda\left(1-g_{\lambda,\theta,\mu}(R)\right),$
which is impossible for sufficiently small $\displaystyle\lambda$. Therefore,
for sufficiently small $\displaystyle\lambda>0$, one has $\displaystyle
g_{2,\lambda,\theta,\mu}(R)>1$.
Finally, due to Proposition 3.18 and Remark 3.1, we obtain that there exists a
$\displaystyle\theta_{0}$ ($\displaystyle\pi-\theta_{0}\ll 1$) such that
(3.52) holds.
Hence, we complete the proof of this lemma. ∎
Next, the following lemma implies that the set $\displaystyle\Sigma_{\mu}$ is
bounded from above, uniformly in $\displaystyle\mu$.
###### Lemma 3.20.
If $\displaystyle\theta\in(0,\pi)$, we have
$\min\left\\{-g_{1,\lambda,\theta,\mu}(R),g_{2,\lambda,\theta,\mu}(R)\right\\}<1,$
(3.55)
for sufficiently large $\displaystyle\lambda$.
###### Proof.
Indeed, it suffices to prove that the free boundaries
$\displaystyle\Gamma_{1,\mu}:x=g_{1,\lambda,\theta,\mu}(r)$ and
$\displaystyle\Gamma_{2,\mu}:x=g_{2,\lambda,\theta,\mu}(r)$ with
$\displaystyle R\leq r\leq 2R$ are contained in a neighborhood of
$\displaystyle\Gamma_{\mu}:x=g_{\lambda,\theta,\mu}(r)$ for sufficiently large
$\displaystyle\lambda$.
Firstly, we prove that the free boundary $\displaystyle\Gamma_{1,\mu}$ with
$\displaystyle R\leq r\leq 2R$ is contained in a neighborhood of
$\displaystyle\Gamma_{\mu}$ for sufficiently large $\displaystyle\lambda$.
Suppose not; then there exist a small fixed $\displaystyle r_{0}>0$ and
$\displaystyle\tilde{X}=(\tilde{x},\tilde{r})\in\Gamma_{1,\mu}\cap\left\\{R\leq
r\leq 2R\right\\}$, and $\displaystyle
B_{r_{0}}(\tilde{X})\subset\Omega_{\mu}\cap\\{r>R\\}\cap\\{0<\psi_{\lambda,\theta,\mu}<m_{1}\\}$,
such that for any $\displaystyle\lambda>0$,
$\displaystyle B_{r_{0}}(\tilde{X})\cap\Gamma_{\mu}=\varnothing.$
Thus, the non-degeneracy Lemma 3.6 implies that
$\displaystyle\sqrt{\lambda+\Lambda}C\tilde{r}\leq\frac{1}{r_{0}}\fint_{\partial
B_{r_{0}}(\tilde{X})}\left(m_{1}-\psi_{\lambda,\theta,\mu}\right)dS\leq\frac{m_{1}}{r_{0}},$
which yields
$\displaystyle\sqrt{\lambda+\Lambda}\leq\frac{m_{1}}{Cr_{0}R}.$
This leads to a contradiction for sufficiently large $\displaystyle\lambda>0$.
Similarly, we can prove that the free boundary $\displaystyle\Gamma_{2,\mu}$
with $\displaystyle R\leq r\leq 2R$ is contained in a neighborhood of
$\displaystyle\Gamma_{\mu}$ for sufficiently large $\displaystyle\lambda$.
Therefore, we finish the proof of Lemma 3.20. ∎
Define
$\lambda_{\mu}=\sup\left\\{\lambda\mid\lambda\in\Sigma_{\mu}\right\\}.$ (3.56)
Since (3.55) shows that all sufficiently large $\displaystyle\lambda$ lie
outside $\displaystyle\Sigma_{\mu}$, Lemma 3.20 implies that there exists a
positive constant $\displaystyle C$ independent of $\displaystyle\mu$, such that
$\displaystyle\lambda_{\mu}\leq C.$
Finally, we will check that there exists a
$\displaystyle\theta_{\mu}\in(0,\pi)$ such that
$g_{1,\lambda_{\mu},\theta_{\mu},\mu}(R)=-1\ \ \text{and}\ \
g_{2,\lambda_{\mu},\theta_{\mu},\mu}(R)=1.$ (3.57)
###### Proposition 3.21.
There exists a $\displaystyle\theta_{\mu}\in(0,\pi)$ such that (3.57) holds.
Furthermore, $\displaystyle
N_{i}\cup\Gamma_{i,\lambda_{\mu},\theta_{\mu},\mu}$ is $\displaystyle
C^{1}$-smooth in a neighborhood of $\displaystyle A_{i}$, for $\displaystyle
i=1,2$.
###### Proof.
Take a sequence $\displaystyle(\lambda_{n},\theta_{n})$ such that
$\displaystyle g_{1,\lambda_{n},\theta_{n},\mu}(R)<-1,\ \
g_{2,\lambda_{n},\theta_{n},\mu}(R)>1,$
and
$\displaystyle\lambda_{n}\rightarrow\lambda_{\mu}>0,\ \
\theta_{n}\rightarrow\theta_{\mu}\in[0,\pi].$
Noticing that $\displaystyle x=g_{i,\lambda,\theta,\mu}(r)$
$\displaystyle(i=1,2)$ is continuous with respect to the parameters
$\displaystyle\lambda$ and $\displaystyle\theta$, we have
$g_{1,\lambda_{\mu},\theta_{\mu},\mu}(R)\leq-1\ \ \text{and}\ \
g_{2,\lambda_{\mu},\theta_{\mu},\mu}(R)\geq 1.$ (3.58)
Firstly, we claim that
$0<\theta_{\mu}<\pi.$ (3.59)
Without loss of generality, suppose $\displaystyle\theta_{\mu}=0$. Then for
any $\displaystyle\tilde{\theta}>0$, the monotonicity in Proposition 3.17
gives that
$\psi_{\lambda_{\mu},\tilde{\theta},\mu}\leq\psi_{\lambda_{\mu},0,\mu},$
(3.60)
and
$g_{1,\lambda_{\mu},\tilde{\theta},\mu}(r)<g_{1,\lambda_{\mu},0,\mu}(r)\quad\text{for}\
r\geq R.$ (3.61)
It follows from $\displaystyle\theta_{\mu}=0$ that $\displaystyle
g_{2,\lambda_{\mu},0,\mu}(R)=+\infty$; choosing
$\displaystyle\tilde{\theta}>0$ sufficiently small, due to the convergence
of the free boundaries (3.49), one gets
$\displaystyle g_{2,\lambda_{\mu},\tilde{\theta},\mu}(R)\geq 2.$
Furthermore, (3.61) implies that
$\displaystyle g_{1,\lambda_{\mu},\tilde{\theta},\mu}(R)<-1.$
Then, we can choose a $\displaystyle\lambda_{0}>\lambda_{\mu}$ with
$\displaystyle\lambda_{0}-\lambda_{\mu}$ suitably small such that
$\displaystyle g_{1,\lambda_{0},\tilde{\theta},\mu}(R)<-1\ \ \text{and}\ \
g_{2,\lambda_{0},\tilde{\theta},\mu}(R)>1,$
which implies $\displaystyle\lambda_{0}\in\Sigma_{\mu}$. This contradicts
the definition of $\displaystyle\lambda_{\mu}$.
Consequently, we obtain $\displaystyle\theta_{\mu}>0$. Similarly, we can prove
$\displaystyle\theta_{\mu}<\pi$. Hence, the claim (3.59) holds.
Moreover, we will verify the continuous fit conditions (3.57). Indeed, suppose
not, without loss of generality, we assume that
$\displaystyle g_{1,\lambda_{\mu},\theta_{\mu},\mu}(R)<-1.$
Taking $\displaystyle\tilde{\theta}\in(0,\theta_{\mu})$ with
$\displaystyle\theta_{\mu}-\tilde{\theta}$ being suitably small, then the
continuity of $\displaystyle g_{1,\lambda_{\mu},\theta_{\mu},\mu}(R)$ with
respect to $\displaystyle\theta$ gives
$\displaystyle g_{1,\lambda_{\mu},\tilde{\theta},\mu}(R)<-1.$
Similar to (3.61), we have
$\displaystyle 1\leq
g_{2,\lambda_{\mu},\theta_{\mu},\mu}(R)<g_{2,\lambda_{\mu},\tilde{\theta},\mu}(R).$
Hence,
$\displaystyle g_{1,\lambda_{\mu},\tilde{\theta},\mu}(R)<-1\ \ \text{and}\ \
g_{2,\lambda_{\mu},\tilde{\theta},\mu}(R)>1.$
Therefore, arguing as above with
$\displaystyle\theta=\tilde{\theta}$, we can choose a
$\displaystyle\lambda_{0}>\lambda_{\mu}$ with
$\displaystyle\lambda_{0}-\lambda_{\mu}$ sufficiently small such that
$\displaystyle\lambda_{0}\in\Sigma_{\mu}$, which leads to a contradiction with the
definition of $\displaystyle\lambda_{\mu}$.
Thus, we obtain the continuous fit conditions (3.57).
Furthermore, a proof similar to that for the jet flow problem in [4] implies that the
free boundaries are $\displaystyle C^{1}$-smooth at the end points
$\displaystyle A_{i}$ $\displaystyle(i=1,2)$ of the nozzles; we omit the details here. ∎
### 3.8. Existence of the impinging outgoing jet
In order to obtain the existence of the impinging outgoing jet, we take a
sequence $\displaystyle\mu=\mu_{n}\rightarrow+\infty$, and the corresponding
($\displaystyle\lambda_{\mu_{n}},\theta_{\mu_{n}}$) with
$\displaystyle\lambda_{\mu_{n}}>0$ and
$\displaystyle\theta_{\mu_{n}}\in(0,\pi)$,
$\displaystyle g_{1,\lambda_{\mu_{n}},\theta_{\mu_{n}},\mu_{n}}(R)=-1\ \
\text{and}\ \ g_{2,\lambda_{\mu_{n}},\theta_{\mu_{n}},\mu_{n}}(R)=1,$
then there exist a $\displaystyle\lambda\geq 0$ and
$\displaystyle\theta\in[0,\pi]$ and a subsequence $\displaystyle\mu_{n}$, such
that $\displaystyle\lambda_{\mu_{n}}\rightarrow\lambda$,
$\displaystyle\theta_{\mu_{n}}\rightarrow\theta$ and
$\displaystyle\psi_{\lambda_{\mu_{n}},\theta_{\mu_{n}},\mu_{n}}\rightarrow\psi_{\lambda,\theta}\
\ \text{weakly in}~{}~{}H_{loc}^{1}(\Omega)~{}~{}\text{and a.e.
in}~{}~{}\Omega.$
Similar arguments as in Proposition 3.18 imply that
$\displaystyle\psi_{\lambda,\theta}$ is a local minimizer to the variational
problem $\displaystyle J_{\lambda,\theta}$, namely,
$\displaystyle J_{\Omega_{0}}(\psi_{\lambda,\theta})\leq J_{\Omega_{0}}(v)\ \
\text{for any $\displaystyle\Omega_{0}\subset\subset\Omega$ and $\displaystyle
v-\psi_{\lambda,\theta}\in H_{0}^{1}(\Omega_{0})$},$
where $\displaystyle
J_{\Omega_{0}}(v)=\int_{\Omega_{0}}r\left|\frac{\nabla v}{r}-\left(\sqrt{\Lambda+\lambda}\chi_{\\{0<v<m_{1}\\}}+\sqrt{\lambda}\chi_{\\{m_{2}<v\leq
0\\}}\right)e\right|^{2}dX$.
Furthermore, along similar arguments as in Proposition 6.1, we can check that
$\displaystyle\psi_{\lambda,\theta}$ is a weak solution to the boundary value
problem (2.4).
Since
$\displaystyle m_{2}\leq\psi_{\lambda,\theta}\leq m_{1}\ \
\text{in}~{}~{}\Omega,$
and
$\psi_{\lambda,\theta}(x,r)\geq\psi_{\lambda,\theta}(\tilde{x},r)\ \ \ \
\text{for any}~{}~{}x<\tilde{x},$ (3.62)
using the same arguments as before, there exist two $\displaystyle
C^{1}$-smooth functions $\displaystyle x=g_{1,\lambda,\theta}(r)$ and
$\displaystyle x=g_{2,\lambda,\theta}(r)$ such that
$g_{1,\lambda_{\mu_{n}},\theta_{\mu_{n}},{\mu_{n}}}(r)\rightarrow
g_{1,\lambda,\theta}(r)\ \ \text{for any $\displaystyle r\in[R,+\infty)$},$
(3.63)
and
$g_{2,\lambda_{\mu_{n}},\theta_{\mu_{n}},{\mu_{n}}}(r)\rightarrow
g_{2,\lambda,\theta}(r)\ \ \text{for any $\displaystyle r\in[R,+\infty)$}.$
(3.64)
Moreover,
$g_{1,\lambda,\theta}(R)=-1,\quad\text{and}\quad g_{2,\lambda,\theta}(R)=1.$ (3.65)
Furthermore, by arguments similar to Lemmas 3.19 and 3.20, we assert
that
$\lambda>0\ \ \text{and}\ \ 0<\theta<\pi,$ (3.66)
and the smooth fit condition of $\displaystyle x=g_{i,\lambda,\theta}(r)$ at
$\displaystyle A_{i}$ follows immediately from the arguments in Proposition
3.21 for $\displaystyle i=1,2$.
Using the standard elliptic estimates yields that
$\displaystyle\psi_{\lambda,\theta}\in
C^{2,\sigma}(\Omega_{1}\cup\Omega_{2})\cap
C^{0}(\overline{\Omega_{1}\cup\Omega_{2}})$ for some
$\displaystyle\sigma\in(0,1)$ and it solves the boundary value problem (2.4).
Hence, the existence of the impinging outgoing jets in Theorem 1.1 has been
established.
Next, we will show the positivity of the radial velocity of the axially symmetric impinging outgoing jets.
###### Proposition 3.22.
Let $\displaystyle\psi_{\lambda,\theta}$ be the solution to the boundary value
problem (2.4), then
$m_{2}<\psi_{\lambda,\theta}<m_{1}\ \ \text{in}~{}~{}G,$ (3.67)
and
$V=-\frac{1}{r}\frac{\partial\psi_{\lambda,\theta}}{\partial x}>0\ \ \
\text{in}~{}~{}\overline{G}\setminus\left(N_{0}\cup\Gamma\right),$ (3.68)
where $\displaystyle G$ is bounded by $\displaystyle N_{i}$,
$\displaystyle\Gamma_{i}$ and $\displaystyle N_{0}$ for $\displaystyle i=1,2$.
###### Proof.
Noting that
$\displaystyle\Delta\psi_{\lambda,\theta}-\frac{1}{r}\frac{\partial\psi_{\lambda,\theta}}{\partial
r}=0\ \text{in any bounded connected smooth open subdomain $\displaystyle
G_{0}\subset G\setminus\Gamma$},$
and $\displaystyle m_{2}\leq\psi_{\lambda,\theta}\leq m_{1}$ on
$\displaystyle\partial G_{0}$, then, the strong maximum principle implies that
$\displaystyle m_{2}<\psi_{\lambda,\theta}<m_{1}\ \ \text{in}~{}~{}G_{0}.$
The arbitrariness of the domain $\displaystyle G_{0}\subset G$ yields (3.67).
Next, since $\displaystyle N_{1}\cup\Gamma_{1}$ is $\displaystyle C^{1}$, there exists a
bounded smooth subdomain $\displaystyle
G_{0}\subset\\{0<\psi_{\lambda,\theta}<m_{1}\\}$ with
$\displaystyle\overline{G_{0}}\cap N_{1}=X_{0}$ (or $\displaystyle
G_{0}\subset\\{m_{2}<\psi_{\lambda,\theta}<0\\}$ with
$\displaystyle\overline{G_{0}}\cap N_{2}=X_{0}$). Then, $\displaystyle
w=-\frac{\partial\psi_{\lambda,\theta}}{\partial x}$ satisfies
$\displaystyle\Delta w-\frac{1}{r}\frac{\partial w}{\partial r}=0\ \ \
\text{in}~{}~{}G_{0}.$
Since $\displaystyle\psi_{\lambda,\theta}=m_{1}$ on $\displaystyle N_{1}$, the
slip boundary condition (1.10) implies that
$\displaystyle\partial_{x}\psi_{\lambda,\theta}(f_{1}(r),r)f_{1}^{\prime}(r)+\partial_{r}\psi_{\lambda,\theta}(f_{1}(r),r)=0.$
This implies that the unit outward normal derivative satisfies
$\displaystyle\frac{\partial\psi_{\lambda,\theta}}{\partial\nu}\left(f_{1}(r),r\right)=-\partial_{x}\psi_{\lambda,\theta}\left(f_{1}(r),r\right)\sqrt{1+f_{1}^{\prime}(r)^{2}}.$
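For clarity, here is the short computation behind this identity, taking the unit outward normal on $\displaystyle N_{1}$ as $\displaystyle\nu=(-1,f_{1}^{\prime}(r))/\sqrt{1+f_{1}^{\prime}(r)^{2}}$ (an orientation convention consistent with the displayed formula): using the slip condition $\displaystyle\partial_{r}\psi_{\lambda,\theta}=-f_{1}^{\prime}(r)\partial_{x}\psi_{\lambda,\theta}$,
$\displaystyle\frac{\partial\psi_{\lambda,\theta}}{\partial\nu}=\nabla\psi_{\lambda,\theta}\cdot\nu=\frac{-\partial_{x}\psi_{\lambda,\theta}+f_{1}^{\prime}(r)\partial_{r}\psi_{\lambda,\theta}}{\sqrt{1+f_{1}^{\prime}(r)^{2}}}=\frac{-\left(1+f_{1}^{\prime}(r)^{2}\right)\partial_{x}\psi_{\lambda,\theta}}{\sqrt{1+f_{1}^{\prime}(r)^{2}}}=-\partial_{x}\psi_{\lambda,\theta}\sqrt{1+f_{1}^{\prime}(r)^{2}}.$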
On the other hand, $\displaystyle\psi_{\lambda,\theta}$ attains its maximum on
$\displaystyle N_{1}$, and Hopf’s lemma gives that
$\displaystyle w=-\partial_{x}\psi_{\lambda,\theta}>0,\ \ \
\text{on}~{}~{}N_{1}.$
Similarly, one gets
$\displaystyle w=-\partial_{x}\psi_{\lambda,\theta}>0,\ \ \
\text{on}~{}~{}N_{2}.$
Then, in view of (3.62), one has
$\displaystyle w\geq 0\ \ \ \text{on}~{}~{}\partial G_{0},\ \ \text{and}\ \
w(X_{0})>0,$
and applying the maximum principle to $\displaystyle
w=-\frac{\partial\psi_{\lambda,\theta}}{\partial x}$ yields that
$\displaystyle w>0\ \ \ \text{in any subdomain}~{}~{}G_{0}\subset G.$
Finally, we claim that (3.68) holds on
$\displaystyle\Gamma_{1}\cup\Gamma_{2}$.
Recall the fact that
$\displaystyle w=\frac{\sqrt{\lambda+\Lambda}r}{\sqrt{1+(g_{1,\lambda,\theta}^{\prime}(r))^{2}}}\geq 0\ \ \text{on}~{}~{}\Gamma_{1},\ \ \text{and}\ \ w=\frac{\sqrt{\lambda}r}{\sqrt{1+(g_{2,\lambda,\theta}^{\prime}(r))^{2}}}\geq 0\ \ \text{on}~{}~{}\Gamma_{2}.$
Suppose that the claim does not hold; then, without loss of generality, there exists an $\displaystyle r_{0}\geq R$ such that $\displaystyle g_{1,\lambda,\theta}^{\prime}(r_{0})=+\infty$ or $\displaystyle-\infty$, $\displaystyle\partial_{x}\psi_{\lambda,\theta}(g_{1,\lambda,\theta}(r_{0}),r_{0})=0$ and $\displaystyle w(g_{1,\lambda,\theta}(r_{0}),r_{0})=0$.
Thanks to the fact that
$\displaystyle\frac{|\nabla\psi_{\lambda,\theta}|}{r}=\sqrt{\lambda+\Lambda}$
on $\displaystyle\Gamma_{1}$ (or
$\displaystyle\frac{|\nabla\psi_{\lambda,\theta}|}{r}=\sqrt{\lambda}$ on
$\displaystyle\Gamma_{2}$), then
$\displaystyle\frac{\partial}{\partial
s}\left(\frac{|\nabla\psi_{\lambda,\theta}|^{2}}{r^{2}}\right)=0\ \ \text{on
$\displaystyle\Gamma_{1}\cup\Gamma_{2}$},$
where $\displaystyle s$ is the unit tangent vector of $\displaystyle\Gamma_{1}\cup\Gamma_{2}$, which equals $\displaystyle(1,0)$ at the point under consideration. This implies that
$\displaystyle\left(\frac{\partial}{\partial
x}\left(\frac{|\nabla\psi_{\lambda,\theta}|^{2}}{r^{2}}\right),\frac{\partial}{\partial
r}\left(\frac{|\nabla\psi_{\lambda,\theta}|^{2}}{r^{2}}\right)\right)\cdot\left(1,0\right)=0\
\ \text{on $\displaystyle\Gamma_{1}\cup\Gamma_{2}$}.$
Then, one has
$\partial_{xr}\psi_{\lambda,\theta}(g_{1,\lambda,\theta}(r_{0}),r_{0})=\partial_{r}w(g_{1,\lambda,\theta}(r_{0}),r_{0})=0.$
(3.69)
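Indeed, a short verification of (3.69): expanding the $\displaystyle x$-derivative at the point $\displaystyle(g_{1,\lambda,\theta}(r_{0}),r_{0})$, where $\displaystyle\partial_{x}\psi_{\lambda,\theta}=0$,
$\displaystyle 0=\frac{\partial}{\partial x}\left(\frac{|\nabla\psi_{\lambda,\theta}|^{2}}{r^{2}}\right)=\frac{2}{r^{2}}\left(\partial_{x}\psi_{\lambda,\theta}\,\partial_{xx}\psi_{\lambda,\theta}+\partial_{r}\psi_{\lambda,\theta}\,\partial_{xr}\psi_{\lambda,\theta}\right)=\frac{2}{r^{2}}\,\partial_{r}\psi_{\lambda,\theta}\,\partial_{xr}\psi_{\lambda,\theta},$
and since $\displaystyle|\nabla\psi_{\lambda,\theta}|/r=\sqrt{\lambda+\Lambda}>0$ while $\displaystyle\partial_{x}\psi_{\lambda,\theta}=0$ there, one has $\displaystyle\partial_{r}\psi_{\lambda,\theta}\neq 0$, which forces $\displaystyle\partial_{xr}\psi_{\lambda,\theta}=0$.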
However, Hopf’s lemma gives that
$\displaystyle\left|\frac{\partial w}{\partial\nu}\right|=\left|\frac{\partial
w}{\partial r}\right|>0\ \ \text{at
$\displaystyle(g_{1,\lambda,\theta}(r_{0}),r_{0})$},$
which contradicts (3.69). Hence (3.68) holds on $\displaystyle\Gamma_{1}\cup\Gamma_{2}$.
Therefore, we obtain the positivity of the radial velocity and complete the proof of Proposition 3.22. ∎
### 3.9. The properties of the interface
In this subsection, we will show that there exists a $\displaystyle
C^{1}$-smooth curve
$\displaystyle\Gamma:\\{\psi_{\lambda,\theta}=0\\}\cap\\{r>0\\}$ separating
the two fluids, and the axially symmetric impinging outgoing jet established
here possesses a unique branching point on the symmetric axis $\displaystyle
N_{0}$. For $\displaystyle\Lambda=0$, the proof is similar to Section 4.10 in [17]; we omit it here.
Next, it suffices to prove that the results hold for $\displaystyle\Lambda>0$.
Indeed, taking a subsequence $\displaystyle\mu_{n}\rightarrow+\infty$, one has
$g_{\lambda_{\mu_{n}},\theta_{\mu_{n}},{\mu_{n}}}(r)\rightarrow g_{\lambda,\theta}(r)\ \ \text{for any $\displaystyle r\in(0,+\infty)$}.$
(3.70)
Then, arguments similar to Lemma 3.10 imply that $\displaystyle x=g_{\lambda,\theta}(r)\in[-\infty,+\infty]$ is a generalized continuous function on $\displaystyle[0,+\infty)$; it remains to prove that $\displaystyle x=g_{\lambda,\theta}(r)$ is finite valued for any $\displaystyle r\in[0,+\infty)$.
Since $\displaystyle g_{\lambda,\theta}(r)<g_{2,\lambda,\theta}(r)$ for any
$\displaystyle r>R$ and $\displaystyle
g_{\lambda,\theta}(r)>g_{1,\lambda,\theta}(r)$ for any $\displaystyle r>R$,
then it suffices to prove that $\displaystyle g_{\lambda,\theta}(r)$ is finite
valued for $\displaystyle 0\leq r\leq M_{0}$, where
$\displaystyle\max\\{r_{1},r_{2}\\}\leq M_{0}\leq R$. Denote by
$\displaystyle(\alpha_{i},\beta_{i})\subset[0,M_{0}]$ ($\displaystyle
i=1,2,...$, $\displaystyle\alpha_{i}<\beta_{i}$ and
$\displaystyle\beta_{i}\leq\alpha_{i+1}$) the maximum intervals where
$\displaystyle x=g_{\lambda,\theta}(r)$ is finite valued.
By arguments similar to Proposition 5.1 in [31], we can prove that there is only one such interval, denoted by $\displaystyle(\alpha,+\infty)$ for simplicity.
Next, we will prove that $\displaystyle\alpha=0$.
Suppose $\displaystyle\alpha>0$ and $\displaystyle\lim_{r\rightarrow\alpha}g_{\lambda,\theta}(r)=-\infty$. For some sufficiently large $\displaystyle M>0$, set
$\displaystyle R_{0}=\max\\{x\mid x=g_{\lambda,\theta}(r),\alpha<r<r_{1}\\},\quad r_{3}=\min\\{r\mid g_{\lambda,\theta}(r)=R_{0}-M\\},$
$\displaystyle r_{4}=\max\\{r\mid g_{\lambda,\theta}(r)=R_{0},\alpha<r<r_{1}\\},\quad\text{so that $\displaystyle R_{0}-M\leq g_{\lambda,\theta}(r)\leq R_{0}$\ \ for $\displaystyle r_{3}\leq r\leq r_{4}$}.$
It follows from Lemma 6.1 in [6] that
$\displaystyle M\leq C(r_{4}-r_{3})\leq Cr_{1},$
which is impossible for sufficiently large $\displaystyle M$.
Furthermore, in the case $\displaystyle\alpha>0$ and $\displaystyle\lim_{r\rightarrow\alpha}g_{\lambda,\theta}(r)=+\infty$, there exists a sufficiently large $\displaystyle R_{0}>0$ such that for any $\displaystyle M>0$ we may define
$\displaystyle H_{R_{0}}=\min\\{r\mid g_{\lambda,\theta}(r)=R_{0}\\},\ \ l_{R_{0}}=\left\\{(R_{0},r)\mid 0\leq r\leq H_{R_{0}}\right\\},$
$\displaystyle H_{R_{0}+M}=\min\\{r\mid g_{\lambda,\theta}(r)=R_{0}+M\\},\ \ \text{and}\ \ l_{R_{0}+M}=\left\\{(R_{0}+M,r)\mid 0\leq r\leq H_{R_{0}+M}\right\\}.$
We define a domain $\displaystyle G_{R_{0},M}$, which is bounded by
$\displaystyle l_{R_{0}}$, $\displaystyle l_{R_{0}+M}$, $\displaystyle\Gamma$
and $\displaystyle x$-axis.
Applying Green’s formula in $\displaystyle G_{R_{0},M}$ and
$\displaystyle|\psi_{\lambda,\theta}|\leq Cr^{2}$ in
$\displaystyle\Omega\cap\\{r<\min\\{r_{1},r_{2}\\}\\}$ (using the fact (3.8)),
we find that
$\displaystyle-\int_{\partial
G_{R_{0},M}}\frac{x-R_{0}}{r}\frac{\partial\psi_{\lambda,\theta}}{\partial\nu}dS$
$\displaystyle=-\int_{\partial
G_{R_{0},M}}\frac{\partial(x-R_{0})}{\partial\nu}\frac{\psi_{\lambda,\theta}}{r}dS$
(3.71) $\displaystyle=\int_{\partial
G_{R_{0},M}\cap\\{x=R_{0}\\}}\frac{\psi_{\lambda,\theta}}{r}dS-\int_{\partial
G_{R_{0},M}\cap\\{x=R_{0}+M\\}}\frac{\psi_{\lambda,\theta}}{r}dS$
$\displaystyle\leq CH_{R_{0}}^{2},$
where $\displaystyle\nu$ is the unit normal vector.
Indeed, in view of
$\displaystyle\frac{|\nabla\psi_{\lambda,\theta}^{+}|}{r}\geq\sqrt{\Lambda}$
on $\displaystyle\Gamma\cap\\{R_{0}\leq x\leq R_{0}+M\\}$, and
$\displaystyle\frac{\partial\psi_{\lambda,\theta}}{\partial r}\geq 0$ on
$\displaystyle x$-axis with $\displaystyle x\geq R_{0}$, the left hand side of
(3.71) is estimated as
$\displaystyle-\int_{\partial
G_{R_{0},M}}\frac{x-R_{0}}{r}\frac{\partial\psi_{\lambda,\theta}}{\partial\nu}dS$
$\displaystyle=-\int_{\partial
G_{R_{0},M}\cap(\Gamma\cup\\{r=0\\}\cup\\{x=R_{0}+M\\})}\frac{x-R_{0}}{r}\frac{\partial\psi_{\lambda,\theta}}{\partial\nu}dS$
$\displaystyle\geq\sqrt{\Lambda}\int_{\partial
G_{R_{0},M}\cap\Gamma}(x-R_{0})dS-M\int_{\partial
G_{R_{0},M}\cap\\{x=R_{0}+M\\}}\frac{1}{r}\frac{\partial\psi_{\lambda,\theta}}{\partial\nu}dS$
$\displaystyle\geq c\sqrt{\Lambda}M^{2}-CMH_{R_{0}},$
due to (3.16). Here $\displaystyle\nu$ is parallel to $\displaystyle\nabla\psi_{\lambda,\theta}$ on the free streamlines, and the positive constants $\displaystyle c$, $\displaystyle C$ are independent of $\displaystyle M$ and $\displaystyle H_{R_{0}}$.
This together with (3.71) gives that
$\displaystyle M\leq CH_{R_{0}},$
which yields a contradiction for sufficiently large $\displaystyle M$.
Hence, we obtain $\displaystyle\alpha=0$.
Finally, it suffices to prove that
$\displaystyle|g_{\lambda,\theta}(0+0)|<+\infty$.
Suppose not; without loss of generality, we assume $\displaystyle g_{\lambda,\theta}(0+0)=+\infty$. Similar to the above arguments, we only need to construct a domain $\displaystyle G$ bounded by $\displaystyle x=R_{0}$ with some sufficiently large $\displaystyle R_{0}>0$, $\displaystyle x=R_{0}+M$, $\displaystyle r=0$ and the interface $\displaystyle\Gamma$; then
$\displaystyle M\leq C,$
which yields a contradiction for sufficiently large $\displaystyle M$.
Similarly, we can exclude that $\displaystyle g_{\lambda,\theta}(0)=-\infty$.
Therefore, we conclude that $\displaystyle x=g_{\lambda,\theta}(r)$ is finite
for any $\displaystyle r\in[0,+\infty)$.
Now, collecting all results obtained above, we complete the proof of Theorem
1.1.
## 4\. Uniqueness of the impinging outgoing jet
In this section, we will investigate the uniqueness of the impinging outgoing
jet and the parameters when $\displaystyle\Lambda=0$.
Proof of Theorem 1.2. Let $\displaystyle\psi_{\lambda,\theta}$ and
$\displaystyle\tilde{\psi}_{\lambda,\theta}$ be two solutions to the boundary
value problem (2.4), and
$\displaystyle\Gamma_{i,\lambda,\theta}:x=g_{i,\lambda,\theta}(r)$ and
$\displaystyle\tilde{\Gamma}_{i,\lambda,\theta}:x=\tilde{g}_{i,\lambda,\theta}(r)$
be the corresponding free boundaries for $\displaystyle i=1,2$. Due to the
continuous fit conditions, one has
$\displaystyle
f_{1}(R)=g_{1,\lambda,\theta}(R)=\tilde{g}_{1,\lambda,\theta}(R)\ \
\text{and}\ \
f_{2}(R)=g_{2,\lambda,\theta}(R)=\tilde{g}_{2,\lambda,\theta}(R).$
Without loss of generality, we assume
$\displaystyle\lim_{r\rightarrow+\infty}(g_{1,\lambda,\theta}(r)-\tilde{g}_{1,\lambda,\theta}(r))\geq
0.$
Set $\displaystyle\psi^{\varepsilon}=\psi_{\lambda,\theta}(x+\varepsilon,r)$ for $\displaystyle\varepsilon\geq 0$ and choose the smallest $\displaystyle\varepsilon_{0}\geq 0$ such that
$\displaystyle\psi^{\varepsilon_{0}}\leq\tilde{\psi}_{\lambda,\theta}\ \
\text{in $\displaystyle\Omega$ and
$\displaystyle\psi^{\varepsilon_{0}}(X_{0})=\tilde{\psi}_{\lambda,\theta}(X_{0})$},$
for some $\displaystyle
X_{0}\in\overline{\\{m_{2}<\tilde{\psi}_{\lambda,\theta}<m_{1}\\}}$.
We claim that
$\displaystyle
X_{0}\notin\\{m_{2}<\psi^{\varepsilon_{0}}<m_{1}\\}\cap\\{m_{2}<\tilde{\psi}_{\lambda,\theta}<m_{1}\\}.$
Suppose not; then there exists a point $\displaystyle
X_{0}\in\\{m_{2}<\psi^{\varepsilon_{0}}<m_{1}\\}\cap\\{m_{2}<\tilde{\psi}_{\lambda,\theta}<m_{1}\\}$,
such that
$\displaystyle
m_{2}<\psi^{\varepsilon_{0}}(X_{0})=\tilde{\psi}_{\lambda,\theta}(X_{0})<m_{1}.$
The continuity of $\displaystyle\psi_{\lambda,\theta}$ in
$\displaystyle\Omega$ implies that there exists a ball $\displaystyle
B_{r}(X_{0})\subset\\{m_{2}<\psi^{\varepsilon_{0}}<m_{1}\\}\cap\\{m_{2}<\tilde{\psi}_{\lambda,\theta}<m_{1}\\}$
such that
$\left\\{\begin{array}[]{lll}\Delta\tilde{\psi}_{\lambda,\theta}-\frac{1}{r}\frac{\partial\tilde{\psi}_{\lambda,\theta}}{\partial
r}=0,\ \
\Delta\psi^{\varepsilon_{0}}-\frac{1}{r}\frac{\partial\psi^{\varepsilon_{0}}}{\partial
r}=0&\text{in}\ \ B_{r}(X_{0}),\\\
\psi^{\varepsilon_{0}}(X)\leq\tilde{\psi}_{\lambda,\theta}(X)&\text{on}\ \
\partial B_{r}(X_{0}).\end{array}\right.$ (4.1)
Therefore, it follows from the strong maximum principle that
$\displaystyle\psi^{\varepsilon_{0}}(X)=\tilde{\psi}_{\lambda,\theta}(X)\ \
\text{in}\ \ B_{r}(X_{0}).$
Applying the strong maximum principle in $\displaystyle\Omega$ again, we
obtain a contradiction to the boundary condition of
$\displaystyle\tilde{\psi}_{\lambda,\theta}$.
Then, one has
$\displaystyle\psi^{\varepsilon_{0}}(X_{0})=\tilde{\psi}_{\lambda,\theta}(X_{0})=m_{1},\
\ \text{or}\ \
\psi^{\varepsilon_{0}}(X_{0})=\tilde{\psi}_{\lambda,\theta}(X_{0})=m_{2}.$
Hence, the following two cases may occur.
Case 1. $\displaystyle\varepsilon_{0}>0$, then $\displaystyle
X_{0}\in\Gamma_{1,\lambda,\theta}\cap\tilde{\Gamma}_{1,\lambda,\theta}$ or
$\displaystyle
X_{0}\in\Gamma_{2,\lambda,\theta}\cap\tilde{\Gamma}_{2,\lambda,\theta}$ and
$\displaystyle X_{0}\neq A_{1}$, $\displaystyle A_{2}$. Then,
$\displaystyle\Delta\psi^{\varepsilon_{0}}-\frac{1}{r}\frac{\partial\psi^{\varepsilon_{0}}}{\partial
r}=\Delta\tilde{\psi}_{\lambda,\theta}-\frac{1}{r}\frac{\partial\tilde{\psi}_{\lambda,\theta}}{\partial
r}=0\ \ \text{in}\
\Omega\cap\\{m_{2}<\psi^{\varepsilon_{0}}<m_{1}\\}\cap\\{m_{2}<\tilde{\psi}_{\lambda,\theta}<m_{1}\\}.$
The $\displaystyle C^{1}$-smoothness of the free boundaries implies that
$\displaystyle\Gamma_{1,\lambda,\theta}$ (or
$\displaystyle\Gamma_{2,\lambda,\theta}$) is tangent to
$\displaystyle\tilde{\Gamma}_{1,\lambda,\theta}$ (or
$\displaystyle\tilde{\Gamma}_{2,\lambda,\theta}$) at the point $\displaystyle
X_{0}$. Then, it follows from the maximum principle that
$\displaystyle\sqrt{\lambda}=\frac{1}{r}\frac{\partial\psi^{\varepsilon_{0}}}{\partial\nu}>\frac{1}{r}\frac{\partial\tilde{\psi}_{\lambda,\theta}}{\partial\nu}=\sqrt{\lambda}\
\ \text{or}\ \
\sqrt{\lambda}=\frac{1}{r}\frac{\partial\psi^{\varepsilon_{0}}}{\partial\nu}<\frac{1}{r}\frac{\partial\tilde{\psi}_{\lambda,\theta}}{\partial\nu}=\sqrt{\lambda}\
\ \text{at}\ \ X_{0},$
where $\displaystyle\nu$ is the outer normal vector; thus we derive a contradiction.
Case 2. $\displaystyle\varepsilon_{0}=0$, then $\displaystyle X_{0}=A_{1}$ or $\displaystyle A_{2}$. Without loss of generality, suppose $\displaystyle X_{0}=A_{1}$. Similarly to the proof of Proposition 3.21, construct a domain $\displaystyle G_{\delta}$ with $\displaystyle\delta>0$ and set $\displaystyle\bar{\psi}=(1+\zeta)(m_{1}-\tilde{\psi}_{\lambda,\theta})-(m_{1}-\psi^{\varepsilon_{0}})$ as in (3.47). Then, for some sufficiently small $\displaystyle\zeta>0$,
$\displaystyle\bar{\psi}<0\ \ \text{in $\displaystyle G_{\delta}$},$
which gives
$\displaystyle(1+\zeta)\sqrt{\lambda}=\frac{1+\zeta}{r}\left|\frac{\partial\tilde{\psi}_{\lambda,\theta}}{\partial\nu}\right|\leq\frac{1}{r}\left|\frac{\partial\psi^{\varepsilon_{0}}}{\partial\nu}\right|=\sqrt{\lambda}\ \ \text{at}\ \ A_{1},$
a contradiction. Similarly, we obtain $\displaystyle X_{0}\neq A_{2}$.
Hence, we obtain the uniqueness of the minimizer
$\displaystyle\psi_{\lambda,\theta}$ for given $\displaystyle\lambda$ and
$\displaystyle\theta$.
Next, we will prove $\displaystyle\theta=\tilde{\theta}$ for given
$\displaystyle\lambda$.
Suppose not, without loss of generality, we assume
$\displaystyle\theta<\tilde{\theta}$.
Let $\displaystyle\psi_{\lambda,\theta}$ and
$\displaystyle\psi_{\lambda,\tilde{\theta}}$ be the two solutions to the
boundary value problem (2.4) corresponding to the pairs of the parameters
$\displaystyle(\lambda,\theta)$ and $\displaystyle(\lambda,\tilde{\theta})$,
respectively. Due to Proposition 3.17, one has
$\displaystyle\psi_{\lambda,\theta}(X)\geq\psi_{\lambda,\tilde{\theta}}(X)\ \
\text{in $\displaystyle\Omega$}.$
By arguments similar to those above, we take $\displaystyle X_{0}=A_{1}$ and derive a contradiction.
Hence, we obtain $\displaystyle\theta=\tilde{\theta}$ as desired.
## 5\. Asymptotic behavior of impinging outgoing jet
In this section, we will establish the asymptotic behaviors of axially
symmetric impinging outgoing jets in far fields which are stated in Theorem
1.3.
Proof of Theorem 1.3. Due to the standard elliptic estimates, there exists a
constant $\displaystyle C$ depending only on $\displaystyle m_{1}$,
$\displaystyle m_{2}$ and $\displaystyle\lambda$ such that
$\|\nabla\psi_{\lambda,\theta}\|_{C^{1,\sigma}(G)}\leq C,\ \ \text{for some}\
\ 0<\sigma<1,$ (5.1)
where $\displaystyle
G\subset\subset\\{0<\psi_{\lambda,\theta}<m_{1}\\}\cup\\{m_{2}<\psi_{\lambda,\theta}<0\\}$.
Set $\displaystyle\psi_{n}(x,r)=\psi_{\lambda,\theta}(x-n,r)$ and consider the strip $\displaystyle E=\left\\{-\infty<x<+\infty\right\\}\times\left\\{0<r<r_{1}\right\\}$; then there exists a subsequence, still labeled $\displaystyle\psi_{n}(x,r)$, such that
$\displaystyle\psi_{n}(x,r)\rightarrow\psi_{0}(x,r),\ \ \text{uniformly
in}~{}~{}C^{2,\sigma_{0}}(S),~{}~{}0<\sigma_{0}<\sigma,$
for any compact set $\displaystyle S\subset\subset E$ and
$\displaystyle\psi_{0}(x,r)$ solves the following boundary value problem in
the strip $\displaystyle E$,
$\left\\{\begin{array}[]{lll}\Delta\psi_{0}(x,r)-\frac{1}{r}\frac{\partial\psi_{0}}{\partial
r}=0,&\text{in}\ \ E,\\\ \psi_{0}\left(x,0\right)=0,\ \
\psi_{0}\left(x,r_{1}\right)=m_{1},&\text{for}\ \ -\infty<x<+\infty,\\\
\max\left\\{\frac{m_{2}}{r_{2}^{2}}r^{2},m_{2}\right\\}\leq\psi_{0}(x,r)\leq\frac{m_{1}}{r_{1}^{2}}r^{2},&\text{in}\
\ E,\end{array}\right.$ (5.2)
where we have used Lemma 3.3. Obviously, problem (5.2) has a unique solution, namely
$\psi_{0}(x,r)=\frac{m_{1}}{r_{1}^{2}}r^{2},\ \ r\in[0,r_{1}].$ (5.3)
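Indeed, a direct check of (5.3) (here $\displaystyle\Delta=\partial_{xx}+\partial_{rr}$ in the $\displaystyle(x,r)$-variables):
$\displaystyle\Delta\psi_{0}-\frac{1}{r}\frac{\partial\psi_{0}}{\partial r}=\frac{2m_{1}}{r_{1}^{2}}-\frac{1}{r}\cdot\frac{2m_{1}r}{r_{1}^{2}}=0,\qquad\psi_{0}(x,0)=0,\qquad\psi_{0}(x,r_{1})=m_{1},$
and the barrier inequalities in (5.2) hold since $\displaystyle\max\\{m_{2}r^{2}/r_{2}^{2},m_{2}\\}\leq 0\leq\psi_{0}(x,r)=m_{1}r^{2}/r_{1}^{2}$.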
Hence, we obtain
$\displaystyle\nabla\psi_{\lambda,\theta}(x,r)\rightarrow\left(0,\frac{2m_{1}r}{r_{1}^{2}}\right)\
\ \text{in $\displaystyle C^{1,\sigma_{0}}(S)$ as}~{}~{}x\rightarrow-\infty.$
Using Bernoulli’s law yields the asymptotic behaviors (1.17) and (1.18) of the flow field in the upstream.
By similar arguments, we obtain the asymptotic behavior in the upstream of the other nozzle:
$\displaystyle\nabla\psi_{\lambda,\theta}(x,r)\rightarrow\left(0,\frac{2m_{2}r}{r_{2}^{2}}\right)\
\ \text{in $\displaystyle C^{1,\sigma_{0}}(S^{\prime})$
as}~{}~{}x\rightarrow+\infty,$
where $\displaystyle S^{\prime}\subset\subset
E^{\prime}=\left\\{-\infty<x<+\infty\right\\}\times\left\\{0<r<r_{2}\right\\}$.
Finally, it follows from Lemma 3.14 that (1.16), (1.24) and (1.25) hold.
Hence, the proof of Theorem 1.3 is complete.
## 6\. Appendix
The minimizer $\displaystyle\psi_{\lambda,\theta,\mu}$ satisfies the following elliptic equation in a weak sense. Similar proofs of these results can be found in Theorems 2.2–2.3 in [5]; we omit them here.
###### Proposition 6.1.
Let $\displaystyle\psi_{\lambda,\theta,\mu}$ be a minimizer to the truncated
variational problem ($\displaystyle P_{\lambda,\theta,\mu}$), and
$\displaystyle\mathfrak{L}^{2}\left(\\{\psi_{\lambda,\theta,\mu}=0\\}\right)=0$
($\displaystyle\mathfrak{L}^{2}$ is the two dimensional Lebesgue measure),
then
$\Delta\psi_{\lambda,\theta,\mu}-\frac{1}{r}\frac{\partial\psi_{\lambda,\theta,\mu}}{\partial
r}=0,\ \ \text{in}\ \
\Omega_{\mu}\cap\left\\{m_{2}<\psi_{\lambda,\theta,\mu}<m_{1}\right\\}\cap\\{\psi_{\lambda,\theta,\mu}\neq
0\\},$ (6.1)
and
$\Delta\psi_{\lambda,\theta,\mu}-\frac{1}{r}\frac{\partial\psi_{\lambda,\theta,\mu}}{\partial
r}\geq 0,\ \ \text{in}\ \ D_{\mu}=\Omega_{\mu}\cap\\{r<R\\},$ (6.2)
in a weak sense.
Acknowledgments. The authors would like to thank the referees for their
helpful suggestions and careful reading which has improved the presentation of
this paper.
Conflict of interest. The authors declare that they have no conflict of
interest.
## References
* [1] H. W. Alt, L. A. Caffarelli, Existence and regularity for a minimum problem with free boundary, J. Reine Angew. Math., 325, 105-144, (1981).
* [2] H. W. Alt, L. A. Caffarelli, A. Friedman, Asymmetric jet flows, Comm. Pure Appl. Math., 35, 29-68, (1982).
* [3] H. W. Alt, L. A. Caffarelli, A. Friedman, Jet flows with gravity, J. Reine Angew. Math., 331, 58-103, (1982).
* [4] H. W. Alt, L. A. Caffarelli, A. Friedman, Axially symmetric jet flows, Arch. Rational Mech. Anal., 81, 97-149, (1983).
* [5] H. W. Alt, L. A. Caffarelli, A. Friedman, Variational problems with two phases and their free boundaries, Trans. Amer. Math. Soc., 282, 431-461, (1984).
* [6] H. W. Alt, L. A. Caffarelli, A. Friedman, Jets with two fluids. I. One free boundary, Indiana Univ. Math. J., 33, 213-247, (1984).
* [7] H. W. Alt, L. A. Caffarelli, A. Friedman, Jets with two fluids. II. Two free boundaries, Indiana Univ. Math. J., 33, 367-391, (1984).
* [8] H. W. Alt, L. A. Caffarelli, A. Friedman, A free boundary problem for quasilinear elliptic equations, Ann. Scuola Norm. Sup. Pisa Cl. Sci., 11, 1-44, (1984).
* [9] G. Birkhoff, E. H. Zarantonello, Jets, Wakes and Cavities, Academic Press, New York, (1957).
* [10] X. F. Chen, Axially symmetric jets of compressible fluid, Nonlinear Anal. TMA., 16, 1057-1087, (1991).
* [11] J. F. Cheng, L. L. Du, W. Xiang, Axially symmetric jets of compressible fluid, Nonlinearity, 33, 4627-4669, (2020).
* [12] J. F. Cheng, L. L. Du, Y. F. Wang, The uniqueness of the asymmetric jet flow, J. Differential Equations, 269, 3794-3815, (2020).
* [13] G. R. Cowan, A. H. Holtzman, Flow conditions in colliding plates: Exlosive bonding, J. Appl. Phys., 34, 928-939, (1963).
* [14] F. Dias, A. R. Elcrat, L. N. Trefethen, Ideal jet flow in two dimensions, J. Fluid Mech., 185, 275-288, (1987).
* [15] F. Dias, J. M. Vanden-Broeck, Flows emerging from a nozzle and falling under gravity, J. Fluid Mech., 213, 465-477, (1990).
* [16] L. L. Du, B. Duan, Global subsonic Euler flows in an infinitely long axisymmetric nozzle, J. Differential Equations, 250, 813-847, (2011).
* [17] L. L. Du, Y. F. Wang, Collision of incompressible inviscid fluids effluxing from two nozzles, Calculus of Variations and Partial Differential Equations, 56, Art. 136, (2017).
* [18] L. L. Du, S. K. Weng, Z. P. Xin, Subsonic irrotational flows in a finitely long nozzle with variable end pressure, Comm. Partial Differential Equations, 39, 666-695, (2014).
* [19] L. L. Du, C. J. Xie, On subsonic Euler flows with stagnation points in two dimensional nozzles, Indiana Univ. Math. J., 63, 1499-1523, (2014).
* [20] L. L. Du, C. J. Xie, Z. P. Xin, Steady subsonic ideal flows through an infinitely long nozzle with large vorticity, Comm. Math. Phys., 328, 327-354, (2014).
* [21] L. L. Du, Z. P. Xin, W. Yan, Subsonic flows in a multi-dimensional nozzle, Arch. Rational Mech. Anal., 201, 965–1012, (2011).
* [22] B. Duan, Z. Luo, Subsonic non-isentropic Euler flows with large vorticity in axisymmetric nozzles, J. Math. Anal. Appl., 430, 1037–1057, (2015).
* [23] A. Friedman, Variational Principles and Free-boundary Problems, Pure and Applied Mathematics, John Wiley Sons, Inc., New York, 1982.
* [24] A. Friedman, Mathematics in industrial problems, II, I.M.A. Volumes in Mathematics and its Applications, Vol.24, Springer-Verlag, New York, 1989.
* [25] P. R. Garabedian, H. Lewy, M. Schiffer, Axially symmetric cavitational flow, Annals of Math., 56, 560-602, (1952).
* [26] D. Gilbarg, N. S. Trudinger, Elliptic Partial Differential Equations of Second Order, Classics in Mathematics, Springer-Verlag, Berlin, 2001.
* [27] M. I. Gurevich, Theory of Jets in Ideal Fluids, New York, Academic Press, 1965.
* [28] J. Hureau, R. Weber, Impinging free jets of ideal fluid, J. Fluid Mech., 372, 357-374, (1998).
* [29] J. B. Keller, On unsymmetrically impinging jets, J. Fluid Mech., 211, 653-655, (1990).
* [30] L. M. Milne-Thomson, Theoretical Hydrodynamics, London: MacMillan, 1968.
* [31] Y. F. Wang, W. Xiang, Two-phase fluids in collision of incompressible inviscid fluids effluxing from two nozzles, J. Differential Equations, 267, 6783-6830, (2019).
* [32] C. J. Xie, Z. P. Xin, Global subsonic and subsonic-sonic flows through infinitely long nozzles, Indiana Univ. Math. J., 56 (6), 2991–3023, (2007).
* [33] C. J. Xie, Z. P. Xin, Global subsonic and subsonic-sonic flows through infinitely long axially symmetric nozzles, J. Differential Equations, 248, 2657–2683, (2010).
* [34] C. J. Xie, Z. P. Xin, Existence of global steady subsonic Euler flows through infinitely long nozzle, SIAM J. Math. Anal., 42 (2), 751–784, (2010).
|
# An Empirical Comparison of Deep Learning Models for Knowledge Tracing on
Large-Scale Dataset
Shalini Pandey, George Karypis, Jaideep Srivastava
###### Abstract
Knowledge tracing (KT) is the problem of modeling each student’s mastery of
knowledge concepts (KCs) as (s)he engages with a sequence of learning
activities. It is an active research area to help provide learners with
personalized feedback and materials. Various deep learning techniques have been proposed for solving KT. The recent release of a large-scale student performance dataset (Choi et al. 2019) motivates an analysis of the performance of the deep learning approaches that have been proposed to solve KT. Our analysis can help understand which method to adopt when a large dataset related to student performance is available. We also show that incorporating contextual information, such as relations between exercises and student forget behavior, further improves the performance of deep learning models.
## Introduction
The availability of large-scale student performance dataset has attracted
researchers to develop models for predicting students’ knowledge state aimed
at providing proper feedback (Self 1990). For developing such models,
knowledge tracing (KT) is considered to be an important problem and is defined
as tracing a student’s knowledge state, which represents her mastery level of
KCs, based on her past learning activities. KT can be formalized as a
supervised sequence learning task - given student’s past exercise interactions
$\mathbf{X}=(\mathbf{x}_{1},\mathbf{x}_{2},\ldots,\mathbf{x}_{t})$, predict
some aspect of her next interaction $\mathbf{x}_{t+1}$. On the question-
answering platform, the interactions are represented as
$\mathbf{x}_{t}=(e_{t},r_{t})$, where $e_{t}$ is the exercise that the student
attempts at timestamp $t$ and $r_{t}$ is the correctness of the student’s
answer. KT aims to predict whether the student will be able to answer the next
exercise correctly, i.e., predict $p(r_{t+1}=1|e_{t+1},\mathbf{X})$.
Among various deep learning models, Deep Knowledge Tracing (DKT) (Piech et al.
2015) and its variant (Yeung and Yeung 2018) use Recurrent Neural Network
(RNN) to model a student’s knowledge state in one summarized hidden vector.
Dynamic Key-value memory network (DKVMN) (Zhang et al. 2017) exploits Memory
Augmented Neural Network (Santoro et al. 2016) for KT. It maps the exercises
to the underlying KCs and then utilizes the student mastery at those KCs to
predict whether the student will be able to answer the exercise correctly. The
student mastery at the KCs are modeled using a dynamic matrix called value.
The student performance is then used to update the value matrix, thus updating
student mastery at the associated KCs. Self-Attention model for Knowledge
Tracing (SAKT) (Pandey and Karypis 2019) employs a self-attention layer that
directly identifies student past interactions that are relevant to the next
exercise. It then predicts whether student will be able to solve the next
exercise based on his/her performance at those past interactions. The assigned
weights to the past interactions also provide an interpretation regarding
which interactions from the past played important role in the prediction.
Relation-Aware Self-Attention for Knowledge Tracing (Pandey and Srivastava
2020) improves over SAKT by incorporating contextual information and student
forget behavior. It explicitly models the relation between different exercise
pairs from their textual content and student performance data and employs a
kernel function with a decaying curve with respect to time to model student
tendency to forget.
Figure 1: Model differences among DKT, DKVMN, SAKT and RKT (panels: (a) DKT, (b) DKVMN, (c) SAKT, (d) RKT).
In this paper, we perform an analysis of the described deep-learning models
for knowledge tracing. This analysis will help understand which deep-learning
model performs best when we have a massive student performance dataset. In
addition, we visualize the attention weights to qualitatively reveal SAKT and
RKT behavior.
To summarize, figure 1 represents the difference between the four models we
have analyzed in this work. First DKT uses a summarized hidden vector to model
the knowledge state. Second, DKVMN maintains the concept state for each
concept simultaneously and all concept states constitute the knowledge state
of a student. Third, SAKT assigns weights to the past interaction using self-
attention mechanism to identify the relevant ones. It then uses the weighted
combination of these past interactions to estimate student knowledge on the
involved KCs and predict her performance. Finally, RKT improves over SAKT by
introducing a relation coefficient added to the attention weights learned from
SAKT. The relation coefficient are learned from the contextual information
explicitly modelling the relation between exercises involved in the past
interactions and student forget behavior of students.
## Methods
KT predicts whether a student will be able to answer the next exercise $e_{t}$
based on his/her previous interaction sequence
${X}={{x}_{1},{x}_{2},\ldots,{x}_{t-1}}$. The deep learning methods transform
the problem of KT into a sequential modeling problem. It is convenient to
consider the model with inputs ${{x}_{1},{x}_{2},\ldots,{x}_{t-1}}$ and the
exercise sequence with one position ahead, ${e_{2},e_{3},\ldots,e_{t}}$ and
the output being the correctness of the response to exercises
${r_{2},r_{3},\ldots,r_{t}}$. The interaction tuple ${x}_{t}=(e_{t},r_{t})$ is
presented to the model as a number $y_{t}=e_{t}+r_{t}\times E$, where $E$ is
the total number of exercises. Thus, the total values that an element in the
interaction sequence can take is $2E$, while elements in the exercise sequence
can take $E$ possible values.
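For concreteness, a minimal sketch of this encoding (NumPy; the function name and the toy sequence are ours, not from the original papers):

```python
import numpy as np

E = 13169  # total number of exercises (EdNet problem count)

def encode_interactions(exercises, responses):
    """Map each interaction (e_t, r_t) to the single id y_t = e_t + r_t * E,
    so interactions take 2E possible values while exercises take E."""
    exercises = np.asarray(exercises)
    responses = np.asarray(responses)
    return exercises + responses * E

# A toy sequence: inputs are interactions x_1..x_{t-1}; targets are the
# one-step-ahead exercises e_2..e_t with labels r_2..r_t.
e = np.array([10, 42, 42, 7])
r = np.array([1, 0, 1, 1])
x_in = encode_interactions(e[:-1], r[:-1])  # model inputs
e_out, r_out = e[1:], r[1:]                 # next exercises and their labels
```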
### Deep Knowledge Tracing
Deep Knowledge Tracing (DKT) (Piech et al. 2015) employs a Recurrent Neural
Network (RNN) as its backbone model. As each interaction can be identified by
a unique ID, it can be represented using the encoding vector $\textbf{x}_{t}$.
After the transformation, DKT passes the $\textbf{x}_{t}$ to the hidden layer
and computes the hidden state $\textbf{h}_{t}$ using the vanilla RNN or LSTM-
RNN. As the hidden state summarizes the information from the past, the hidden
state in the DKT can be treated as the latent knowledge state of student
resulting from past learning trajectory. This latent knowledge state is then
fed to the output layer to compute the output vector $\textbf{y}_{t}$, which
represents the probabilities of answering each question correctly. The
objective of the DKT is to predict the next interaction performance, so the
target prediction is extracted by performing a dot product of the output
vector $\boldsymbol{y}_{t}$ and the one-hot encoded vector of the next
question $\boldsymbol{e}_{t}$. Based on the predicted output $p_{t}$ and the
target output $r_{t}$, the loss function $\mathcal{L}$ can be expressed as
follows:
$\mathcal{L}=-\sum_{t}\left(r_{t}\log(p_{t})+(1-r_{t})\log(1-p_{t})\right)$ (1)
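A minimal PyTorch sketch of DKT under the encoding above (the layer sizes, names, and the LSTM choice are our assumptions; the original implementation may differ in details):

```python
import torch
import torch.nn as nn

class DKT(nn.Module):
    def __init__(self, num_exercises, dim=200):
        super().__init__()
        self.embed = nn.Embedding(2 * num_exercises, dim)  # interaction ids in [0, 2E)
        self.rnn = nn.LSTM(dim, dim, batch_first=True)
        self.out = nn.Linear(dim, num_exercises)           # one logit per exercise

    def forward(self, interactions, next_exercises):
        h, _ = self.rnn(self.embed(interactions))          # latent knowledge states h_t
        logits = self.out(h)                               # output vector y_t over exercises
        # pick the entry of the next exercise (dot product with its one-hot vector)
        p = torch.sigmoid(logits.gather(-1, next_exercises.unsqueeze(-1))).squeeze(-1)
        return p

model = DKT(num_exercises=13169)
loss_fn = nn.BCELoss()  # the cross-entropy loss of Eq. (1): loss_fn(p, r.float())
```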
### Dynamic Key-Value Memory Network
Dynamic Key-Value Memory Network (DKVMN) (Zhang et al. 2017) exploits the
relationship between concepts and the ability to trace each concept state.
DKVMN model maps each exercise with the underlying concepts and maintains a
concept state for each concept. At each timestamp, the knowledge of the
related concept states of the attempted exercise gets updated. DKVMN consists of two matrices: a static matrix called the key, which stores the concept representations, and a dynamic matrix called the value, which stores and updates the student’s understanding (concept state) of each concept. The
memory, denoted as $M^{t}$, is an $N\times d$ matrix, where $N$ is the number
of memory locations, and $d$ is the embedding size. At each timestamp $t$, the
input is $\boldsymbol{x}_{t}$. The embedding vector $\boldsymbol{x}_{t}$ is
used to compute the read weight $\boldsymbol{w}^{r}_{t}$ and the write weight
$\boldsymbol{w}^{w}_{t}$. The intuition of the model is that when a student
answers the exercise that has been stored in the memory with the same
response, $\boldsymbol{x}_{t}$ will be written to the previously used memory
locations and when a new exercise arrives or the student gets a different
response, $\boldsymbol{x}_{t}$ will be written to the least recently used
memory locations.
In DKVMN, when a student attempts an exercise, the student mastery over
associated concepts is retrieved as weighted sum of all memory slots in the
value matrix, where the weight is computed by taking the softmax activation of
the inner product between $\boldsymbol{x}_{t}$ and each key slot
$\boldsymbol{M}^{k}(i)$:
$\boldsymbol{r}_{t}=\sum_{i=1}^{N}w_{t}(i)\boldsymbol{M}^{v}_{t}(i),$
$w_{t}(i)=\text{Softmax}(\boldsymbol{x}_{t}^{T}\boldsymbol{M}^{k}(i)).$ (2)
The calculated read content $\boldsymbol{r}_{t}$ is treated as a summary of
the student’s mastery level of this exercise. Finally to predict the
performance of the student:
$f_{t}=\text{Tanh}(\boldsymbol{W}_{1}^{T}[\boldsymbol{r}_{t},\boldsymbol{x}_{t}]+\boldsymbol{b}_{1}),\quad p_{t}=\text{Sigmoid}(\boldsymbol{W}_{2}^{T}\boldsymbol{f}_{t}+\boldsymbol{b}_{2})$ (3)
$p_{t}$ is a scalar that represents the probability of answering $e_{t}$
correctly.
After the student answers the exercise $e_{t}$, the model updates the value
matrix according to the correctness of the student’s answer. For this, it
computes an erase vector $\boldsymbol{e}_{t}$ and an add vector
$\boldsymbol{a}_{t}$ as:
$\boldsymbol{e}_{t}=\text{Sigmoid}(\boldsymbol{E}^{T}\boldsymbol{v}_{t}+\boldsymbol{b}_{e}),\boldsymbol{a}_{t}=\text{Tanh}(\boldsymbol{D}^{T}\boldsymbol{v}_{t}+\boldsymbol{b}_{a})$
(4)
where the transformation matrices
$\boldsymbol{E},\boldsymbol{D}\in\mathbb{R}^{d_{v}\times d_{v}}$.
The memory vectors of value component $\boldsymbol{M}^{v}_{t}(i)$ from the
previous timestamp are modified as follows:
$\displaystyle\boldsymbol{M}^{v}_{t}(i)=\boldsymbol{\tilde{M}}^{v}_{t}(i)+w_{t}(i)\boldsymbol{a}_{t},$
(5)
$\displaystyle\boldsymbol{\tilde{M}}^{v}_{t}(i)=\boldsymbol{M}^{v}_{t-1}(i)[\boldsymbol{1}-w_{t}(i)\boldsymbol{e}_{t}],$
(6)
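A sketch of a single DKVMN read/write step following Eqs. (2)–(6) (PyTorch; the tensor shapes and parameter names are our assumptions):

```python
import torch
import torch.nn.functional as F

def dkvmn_step(x_t, v_t, M_k, M_v, E_w, D_w, b_e, b_a):
    """One read/write step. x_t: (d,) query embedding; v_t: (d_v,) interaction
    embedding; M_k: (N, d) static key matrix; M_v: (N, d_v) dynamic value matrix;
    E_w, D_w: (d_v, d_v) transformation matrices; b_e, b_a: (d_v,) biases."""
    w = F.softmax(M_k @ x_t, dim=0)                 # correlation weights, Eq. (2)
    r = (w.unsqueeze(1) * M_v).sum(dim=0)           # read content r_t, Eq. (2)
    e = torch.sigmoid(E_w.T @ v_t + b_e)            # erase vector, Eq. (4)
    a = torch.tanh(D_w.T @ v_t + b_a)               # add vector, Eq. (4)
    M_v_tilde = M_v * (1 - w.unsqueeze(1) * e)      # erase step, Eq. (6)
    M_v_new = M_v_tilde + w.unsqueeze(1) * a        # add step, Eq. (5)
    return r, M_v_new
```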
### Self-Attention for Knowledge Tracing
Self-Attention Model for Knowledge Tracing (SAKT) (Pandey and Karypis 2019) is
a purely transformer based model for KT. The idea behind SAKT is that in the
KT task, the skills that a student builds while going through the sequence of
learning activities, are related to each other and the performance on a
particular exercise is dependent on his performance on the past exercises
related to that exercise. SAKT first identifies relevant KCs from the past
interactions and then predicts student’s performance based on his/her
performance on those KCs. To identify the relevant interaction, it employs a
self-attention mechanism which computes the dot-product between the past
interaction representation and the next exercise representation. Essentially,
the student ability to answer next question is encoded in the vector
$\textbf{y}_{t}$ and is computed as:
$\textbf{y}_{t}=\sum_{j=1}^{t-1}\alpha_{j}{\textbf{x}}_{j}\textbf{W}^{V},\quad\alpha_{j}=\frac{\exp(e_{j})}{\sum_{k=1}^{t-1}\exp(e_{k})},$ (7)
$e_{j}=\frac{\textbf{e}_{t}\textbf{W}^{Q}({\textbf{x}}_{j}\textbf{W}^{K})^{T}}{\sqrt{d}},$
(8)
where $d$ is the embedding size, and $\textbf{W}^{Q}\in\mathbb{R}^{d\times d}$, $\textbf{W}^{V}\in\mathbb{R}^{d\times d}$ and $\textbf{W}^{K}\in\mathbb{R}^{d\times d}$ are the projection matrices for the query, value and key, respectively.
Point-Wise Feed-Forward Layer: In addition, a PointWise Feed-Forward Layer
(FFN) is applied to the output of SAKT. The FFN helps incorporate non-
linearity in the model and considers the interactions between different latent
dimensions. It consists of two linear transformations with a ReLU nonlinear
activation function between the linear transformations. The final output of
FFN is
$\textbf{F}=\text{ReLU}(\textbf{y}_{t}\textbf{W}^{(1)}+\textbf{b}^{(1)})\textbf{W}^{(2)}+\textbf{b}^{(2)}$,
where $\textbf{W}^{(1)}\in\mathbb{R}^{d\times d}$, $\textbf{W}^{(2)}\in\mathbb{R}^{d\times d}$ are weight matrices and $\textbf{b}^{(1)}\in\mathbb{R}^{d}$, $\textbf{b}^{(2)}\in\mathbb{R}^{d}$ are the bias vectors.
In addition to the above modeling structure, residual connections (He et al. 2016) are added after both the self-attention layer and the feed-forward layer to train a deeper network structure. Layer normalization (Ba, Kiros, and Hinton 2016) and dropout (Srivastava et al. 2014) are also applied to the output of each layer, following (Vaswani et al. 2017).
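A compact sketch of the attention step of Eqs. (7)–(8) (single head, no batching; restricting to past interactions plays the role of causal masking; all names are ours):

```python
import torch
import torch.nn.functional as F

def sakt_attention(e_t, X_past, W_Q, W_K, W_V):
    """e_t: (d,) next-exercise embedding; X_past: (t-1, d) past interaction
    embeddings; W_Q, W_K, W_V: (d, d) projection matrices."""
    d = e_t.shape[0]
    q = e_t @ W_Q                                     # query from the next exercise
    K = X_past @ W_K                                  # keys from past interactions
    scores = K @ q / d ** 0.5                         # e_j in Eq. (8)
    alpha = F.softmax(scores, dim=0)                  # attention weights in Eq. (7)
    y = (alpha.unsqueeze(1) * (X_past @ W_V)).sum(0)  # y_t in Eq. (7)
    return y, alpha
```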
### Relation-aware Self-attention for Knowledge Tracing
Similar to SAKT, Relation-aware Self-attention for Knowledge Tracing (RKT)
(Pandey and Srivastava 2020) also identifies the past interaction relevant for
solving the next exercise. Furthermore, it improves over SAKT by incorporating
contextual information. This contextual information integrates both the
exercise relation information through their similarity as well as student
performance data and the forget behavior information through modeling an
exponentially decaying kernel function. Essentially, RKT exploits the fact
that students acquire their skills while solving exercises and each such
interaction has a distinct impact on student ability to solve a future
exercise. This impact is characterized by 1) the relation between exercises
involved in the interactions and 2) student forget behavior.
RKT explicitly models the relation between exercises. To incorporate this, it utilizes the textual content of the exercises. In the absence of information about the textual content of exercises, we leverage the skill tags associated with each exercise. Exercises $i,j$ with the same skill tag are given similarity value $sim_{i,j}=1$; otherwise $sim_{i,j}=0$. The correlation between exercises can also be determined from the learner’s performance data.
Essentially, RKT determines relevance of the knowledge gained from exercise
$j$ to solve exercise $i$ by building a contingency table as shown in table 1
considering only the pairs of $i$ and $j$, where $j$ occurs before $i$ in the
learning sequence. Then it computes Phi coefficient that describes the
relation from $j$ to $i$ as,
$\phi_{i,j}=\frac{n_{11}n_{00}-{n_{01}n_{10}}}{\sqrt{n_{1*}n_{0*}n_{*1}n_{*0}}}.$
(9)
Table 1: A contingency table for two exercises $i$ and $j$ (counting only pairs where $j$ occurs before $i$ in the learning sequence).

|  |  | exercise $i$ |  |  |
|---|---|---|---|---|
|  |  | incorrect | correct | total |
| exercise $j$ | incorrect | $n_{00}$ | $n_{01}$ | $n_{0*}$ |
|  | correct | $n_{10}$ | $n_{11}$ | $n_{1*}$ |
|  | total | $n_{*0}$ | $n_{*1}$ | $n$ |

Finally, the relation of exercise $j$ with exercise $i$ is calculated as:
$\textbf{A}_{i,j}=\begin{cases}\phi_{i,j}+\text{sim}_{i,j},&\text{if
}\text{sim}_{i,j}+\phi_{i,j}>\theta\\\ 0,&\text{otherwise},\end{cases}$ (10)
where $\theta$ is a threshold that controls the sparsity of the relation matrix.
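A sketch of Eqs. (9)–(10) computed from the contingency counts of Table 1 (function names and the default threshold value are ours):

```python
import numpy as np

def phi_coefficient(n00, n01, n10, n11):
    """Phi coefficient of Eq. (9) from the counts of Table 1."""
    n0_, n1_ = n00 + n01, n10 + n11   # row totals n_{0*}, n_{1*}
    n_0, n_1 = n00 + n10, n01 + n11   # column totals n_{*0}, n_{*1}
    denom = np.sqrt(n1_ * n0_ * n_1 * n_0)
    return (n11 * n00 - n01 * n10) / denom if denom > 0 else 0.0

def relation_entry(phi, sim, theta=0.1):
    """Relation A_{i,j} of Eq. (10); theta is the sparsity threshold."""
    s = phi + sim
    return s if s > theta else 0.0
```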
RKT also models student forget behavior by employing a kernel function that decays exponentially with time, reducing the importance of an interaction as the time interval increases, following the idea of forgetting curve theory.
Specifically, given the time sequence of interactions of a student $\textbf{t}=(t_{1},t_{2},\ldots,t_{n-1})$ and the time $t_{n}$ at which the student attempts the next exercise, we compute the relative time interval between the next interaction and the $i$th interaction as $\Delta_{i}=t_{n}-t_{i}$.
Thus, we compute the forget-behavior-based relation coefficients
$\textbf{R}^{T}=[\exp(-\Delta_{1}/S_{u}),\exp(-\Delta_{2}/S_{u}),\ldots,\exp(-\Delta_{n-1}/S_{u})],$
where $S_{u}$ refers to the relative strength of memory of student $u$ and is a trainable parameter in the model. The resultant relation coefficients combine $\textbf{R}^{T}$ with the exercise relation coefficients $\textbf{R}^{E}$, gathered from the matrix A for the past interactions:
$\textbf{R}=\text{softmax}(\textbf{R}^{E}+\textbf{R}^{T}),$ (11)
RKT also adopts the self-attention architecture (Vaswani et al. 2017) similar
to SAKT. To incorporate the relation coefficients into the learned attention
weights, it adds the two weights:
$\beta_{j}=\lambda\alpha_{j}+(1-\lambda)\textbf{R}_{j},$ (12)
where $\alpha_{j}$ is computed using Eq. 7, $\textbf{R}_{j}$ is the $j$th element of the relation coefficients R, and $\lambda$ is a tunable parameter. The output representation, $\textbf{y}_{t}\in\mathbb{R}^{d}$, is obtained by the weighted sum of the linearly transformed interaction embeddings:
$\textbf{y}_{t}=\sum_{j=1}^{n-1}\beta_{j}\hat{\textbf{x}}_{j}\textbf{W}^{V},$
(13)
where $\textbf{W}^{V}\in\mathbb{R}^{d\times d}$ is the projection matrix for
value space. The rest of the architecture remains the same as in SAKT, described above.
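A sketch assembling Eqs. (11)–(12) (variable names are ours; in RKT, $S_{u}$ is a trainable parameter, whereas here it is passed as a plain scalar for simplicity):

```python
import torch
import torch.nn.functional as F

def rkt_weights(alpha, rel_exercise, timestamps, t_next, S_u=1.0, lam=0.5):
    """alpha: (t-1,) attention weights from SAKT, Eq. (7); rel_exercise: (t-1,)
    exercise relation coefficients R^E gathered from the matrix A;
    timestamps: (t-1,) interaction times; t_next: time of the next exercise."""
    delta = t_next - timestamps                    # relative time intervals
    R_T = torch.exp(-delta / S_u)                  # forgetting kernel
    R = F.softmax(rel_exercise + R_T, dim=0)       # relation coefficients, Eq. (11)
    beta = lam * alpha + (1 - lam) * R             # blended weights, Eq. (12)
    return beta
```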
Figure 2: Performance comparison in terms of (a) AUC and (b) ACC. RKT performs best among the models.
## Data
To compare the deep-learning methods for KT, we use large-scale student
interaction dataset, EdNet released in (Choi et al. 2019). EdNet consists of
all student-system interactions collected over a period spanning two years by
Santa, a multi-platform AI tutoring service with approximately 780,000
students. It has collected a total of 131,441,538 student interactions with
each student generating an average of 441.20 interactions. The dataser
consists a total 13,169 problems and 1,021 lectures tagged with 293 types of
skills, and each of them has been consumed 95,294,926 times and 601,805 times,
respectively.
## Evaluation Setting
The prediction of student performance is considered in a binary classification setting, i.e., answering an exercise correctly or not. Hence, we compare the performance using the Area Under Curve (AUC) and Accuracy (ACC) metrics.
Similar to the evaluation procedure employed in (Nagatani et al. 2019; Piech et al. 2015), we train the model with the interactions in the training phase; during the testing phase, we update the model after each exercise response is received. The updated model is then used to perform the prediction on the next exercise. Generally, an AUC or ACC value of 0.5 corresponds to random guessing, and larger values indicate better performance.
To ensure fair comparison, all models are trained with embeddings of size
$200$. The maximum allowed sequence length for self-attention is set as $50$.
The model is trained with mini-batch size of $128$. We use Adam optimizer with
a learning rate of $0.001$. The dropout rate is set to $0.1$ to reduce
overfitting. The L2 weight decay is set to $0.00001$.
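For concreteness, the stated setup expressed as a sketch (the config dict simply mirrors the hyperparameters listed above; scikit-learn is used for the metrics):

```python
from sklearn.metrics import roc_auc_score, accuracy_score

config = dict(embed_dim=200, max_seq_len=50, batch_size=128,
              lr=0.001, dropout=0.1, weight_decay=1e-5)

def evaluate(y_true, y_prob):
    """AUC and ACC for binary correctness prediction; 0.5 ~ random guessing."""
    y_pred = [1 if p >= 0.5 else 0 for p in y_prob]
    return roc_auc_score(y_true, y_prob), accuracy_score(y_true, y_pred)
```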
## Results and Discussions
### Quantitative Results
Figure 2 shows the performance comparison of the deep-learning models for KT on the EdNet dataset. The different baselines demonstrate noticeable performance gaps. The SAKT model shows improvement over the DKT and DKVMN models, which can be traced to the fact that SAKT identifies the relevance between past interactions and the next exercise. RKT performs consistently better than all the baselines. Compared with the other baselines, RKT explicitly captures the relations between exercises based on student performance data and text content. Additionally, it models learner forget behavior using a kernel function, which is a more interpretable and proven way to model human memory (Ebbinghaus 2013).
The results reveal that, provided enough data, attention-based models surpass other sequence encoder techniques such as RNNs, LSTMs and Memory Augmented Networks. Furthermore, incorporating contextual data, such as relations between exercises, and domain knowledge, such as student forget behavior, contributes to performance gains even when a massive dataset is available. This motivates us to further explore Knowledge Guided Machine Learning in the KT task.
Figure 3: Visualization of attention weights of an example student from EdNet by (a) SAKT and (b) RKT. Each panel depicts the attention weights assigned by the corresponding model for that student.
Figure 4: Visualization of attention weight patterns on different datasets by (a) SAKT and (b) RKT. Each panel depicts the average attention weights over different sequences.
### Qualitative Analysis
Benefiting from a purely attention-based mechanism, the RKT and SAKT models are highly interpretable for explaining their prediction results. Such interpretability can help understand which past interactions played an important role in predicting student performance on the next exercise. To this end, we compared the attention weights obtained from both RKT and SAKT. We selected one student from the dataset and obtained the attention weights corresponding to the past interactions for predicting her performance on an exercise. Figure 3 shows the heatmap of the attention weight matrix, where the $(i,j)$th element represents the attention weight on the $j$th element when predicting performance on the $i$th interaction. We compare the generated heatmaps for both SAKT and RKT. This comparison shows the effect of relation information in revising the attention weights. Without relation information, the attention weights are more distributed over the previous interactions, while the relation information concentrates the attention weights on specific relevant interactions.
Finally, we also performed an experiment to visualize the attention weights averaged over multiple sequences by RKT and SAKT. Recall that at time step
$t_{i}$, the relation-aware self-attention layer in our model revises the
attention weights on the previous interactions depending on the time elapsed
since the interaction, and the relations between the exercises involved. To
this end, we examine all sequences and seek to reveal meaningful patterns by
showing the average attention weights on the previous interactions. Note that
when we calculate the average weight, the denominator is the number of valid
weights, so as to avoid the influence of padding for short sequences. Figure 4
compares average attention weights assigned by SAKT and RKT. This comparison
shows the effect of relation information for revising the attention weights.
Without relation information the attention weights are more distributed over
previous interaction, while the relation information concentrates the
attention weights closer to diagonal.Thus, it is beneficial to consider
relations between exercises for KT.
## Conclusion
In this work, we analyzed the performance of various deep learning models for
Knowledge Tracing. Analysis of these models on a large dataset with approximately $780,000$ students revealed that self-attention based models such as SAKT and RKT outperform RNN-based models such as DKT. In addition, RKT, which leverages additional information such as the relations between exercises and student forget behavior and explicitly models these components, gains further improvement.
## References
* Ba, Kiros, and Hinton (2016) Ba, J. L.; Kiros, J. R.; and Hinton, G. E. 2016. Layer normalization. _arXiv preprint arXiv:1607.06450_ .
* Choi et al. (2019) Choi, Y.; Lee, Y.; Shin, D.; Cho, J.; Park, S.; Lee, S.; Baek, J.; Kim, B.; and Jang, Y. 2019. EdNet: A Large-Scale Hierarchical Dataset in Education. _arXiv_ arXiv–1912.
* Ebbinghaus (2013) Ebbinghaus, H. 2013. Memory: A contribution to experimental psychology. _Annals of neurosciences_ 20(4): 155.
* He et al. (2016) He, K.; Zhang, X.; Ren, S.; and Sun, J. 2016. Deep residual learning for image recognition. In _Proceedings of the IEEE conference on computer vision and pattern recognition_ , 770–778.
* Pandey and Karypis (2019) Pandey, S.; and Karypis, G. 2019. A Self-Attentive model for Knowledge Tracing. _arXiv preprint arXiv:1907.06837_ .
* Pandey and Srivastava (2020) Pandey, S.; and Srivastava, J. 2020. RKT: Relation-Aware Self-Attention for Knowledge Tracing. In _Proceedings of the 29th ACM International Conference on Information & Knowledge Management_, 1205–1214.
* Piech et al. (2015) Piech, C.; Bassen, J.; Huang, J.; Ganguli, S.; Sahami, M.; Guibas, L. J.; and Sohl-Dickstein, J. 2015. Deep knowledge tracing. In _Advances in Neural Information Processing Systems_ , 505–513.
* Santoro et al. (2016) Santoro, A.; Bartunov, S.; Botvinick, M.; Wierstra, D.; and Lillicrap, T. 2016. One-shot learning with memory-augmented neural networks. _arXiv preprint arXiv:1605.06065_ .
* Self (1990) Self, J. 1990. Theoretical foundations for intelligent tutoring systems. _Journal of Artificial Intelligence in Education_ 1(4): 3–14.
* Srivastava et al. (2014) Srivastava, N.; Hinton, G.; Krizhevsky, A.; Sutskever, I.; and Salakhutdinov, R. 2014. Dropout: a simple way to prevent neural networks from overfitting. _The journal of machine learning research_ 15(1): 1929–1958.
* Vaswani et al. (2017) Vaswani, A.; Shazeer, N.; Parmar, N.; Uszkoreit, J.; Jones, L.; Gomez, A. N.; Kaiser, Ł.; and Polosukhin, I. 2017. Attention is all you need. In _Advances in Neural Information Processing Systems_ , 5998–6008.
* Yeung and Yeung (2018) Yeung, C.-K.; and Yeung, D.-Y. 2018. Addressing two problems in deep knowledge tracing via prediction-consistent regularization. _arXiv preprint arXiv:1806.02180_ .
* Zhang et al. (2017) Zhang, J.; Shi, X.; King, I.; and Yeung, D.-Y. 2017. Dynamic key-value memory networks for knowledge tracing. In _Proceedings of the 26th International Conference on World Wide Web_ , 765–774. International World Wide Web Conferences Steering Committee.
|
# Equivalence of NEGF and scattering approaches to electron transport in the
Kitaev chain
Junaid Majeed Bhat, Abhishek Dhar
International Centre for Theoretical Sciences, Tata Institute of Fundamental Research, Bengaluru-560089, India
###### Abstract
We consider electron transport in a Kitaev chain connected at its two ends to
normal metallic leads kept at different temperatures and chemical potentials.
Transport in this set-up is usually studied using two frameworks — the
nonequilibrium Green’s function (NEGF) approach or the scattering approach. In
the NEGF approach the current and other steady state properties of a system
are expressed in terms of Green’s functions that involve the wire properties
and self-energy corrections arising from the leads. In the scattering
approach, transport is studied in terms of the scattering amplitudes of plane
waves incident on the wire from the reservoirs. Here we show explicitly that
these two approaches produce identical results for the conductance of the
Kitaev chain. Further we show that the NEGF expression for conductance can be
written in such a way that there is a one-to-one correspondence of the various
terms in the NEGF expression to the amplitudes for normal transmission,
Andreev transmission and Andreev reflection in the scattering approach.
Thereby, we obtain closed form expressions for these. We obtain the
wavefunctions of zero energy Majorana bound states(MBS) of the wire connected
to leads and prove that they are present in the same parameter regime in which
they occur for an isolated wire. These bound states give rise to perfect
Andreev reflection responsible for zero bias quantized conductance peak. We
discuss the dependence of the width of this peak on different parameters of
the Hamiltonian and relate it to the MBS wavefunction properties. We find that
the peak broadens if the weight of the MBS in the reservoirs increases and
vice versa.
## I Introduction
Electron transport properties of the Kitaev chain, a simple example of a one-
dimensional spinless superconducting wire, has been extensively investigated
recently Kitaev (2001); Sau _et al._ (2010); Oreg _et al._ (2010); Mourik
_et al._ (2012); Das _et al._ (2012a); Thakurathi _et al._ (2015). Amongst
the interesting experimental results are the signatures of the so-called
Majorana bound states (MBS) seen in measurements of the electrical
conductance. Theoretically, transport has been studied in this system using
the quantum Langevin equations - nonequilibrium Green’s function approach Roy
_et al._ (2012); Bondyopadhaya and Roy (2019); Bhat and Dhar (2020) (QLE-
NEGF), scattering approach Blonder _et al._ (1982); Thakurathi _et al._
(2015); Maiellaro _et al._ (2019) and the Keldysh nonequilibrium Green’s
function approach Lobos and Sarma (2015); Doornenbal _et al._ (2015); Komnik
(2016); Zhang and Quan (2020). The QLE-NEGF and Keldysh approach start from
the same microscopic model of system-bath Hamiltonian with the Kitaev wire
sandwiched between two normal leads (the baths), and involve elimination of
bath degrees of freedom to find the steady state properties of the system. For
normal (without superconducting pairing potential terms) wires, the
equivalence of these two approaches has been established quite generally Dhar
and Sen (2006). In both these approaches the conductance is given in terms of
nonequilibrium Green’s functions. On the other hand, in the scattering
approach one considers scattering of plane waves, incident from the normal
metallic reservoirs, by the superconducting region. The conductance is
expressed in terms of the scattering amplitudes. For the case of the Kitaev
chain and more generally in superconducting wires, three scattering processes
are identified corresponding to normal transmission, Andreev transmission and
Andreev reflection Blonder _et al._ (1982); Thakurathi _et al._ (2015);
Maiellaro _et al._ (2019). It is expected that the scattering formalism and
the NEGF formalism should be equivalent and one of the main aims of the
present paper is an explicit demonstration of this equivalence.
In the QLE-NEGF approach one writes the quantum Langevin equations of motion
for the system and then the transport properties are found via its steady
state solution. The quantum Langevin equation follows from removing the bath
degrees of freedom from the Heisenberg equation of motion for the wire. The
QLE-NEGF approach was first applied to the Kitaev model in Ref. Roy _et al._, 2012, while Ref. Bhat and Dhar, 2020 provides a more general and complete
application of this method to spinless superconducting wires connected to two
normal baths, obtaining explicit expressions for particle and energy currents.
One of the earliest applications of the scattering approach to a system
consisting of a metallic lead connected to a superconductor was by Blonder,
Tinkham and Klapwijk Blonder _et al._ (1982). More recent work has
implemented the scattering approach to a 1-D superconductor sandwiched between
two metallic leads Thakurathi _et al._ (2015). However, both of these papers
consider the continuum models for the superconductor. The scattering approach
has been used earlier to study superconducting lattice models Nehra _et al._
(2020), but to our knowledge, its connection to the NEGF approach for the same
model has not been explored so far. It has been understood that the NEGF
particle current and conductance at the ends of the superconductor have one local and two non-local contributions, which are attributed to Andreev and
normal scattering amplitudes Lobos and Sarma (2015); Bhat and Dhar (2020). In
this paper, we present a detailed study of the scattering approach to the
Kitaev chain which is a simple example of a 1-D spinless superconductor and
show analytically that it yields the same results as the QLE-NEGF approach. We
also show that the three terms in the NEGF current expression correspond
precisely to different scattering processes that take place in the system. The
main idea is to express the Green’s function in terms of transfer matrices
from which the connection to the scattering process becomes clear. This
treatment follows Ref. Das and Dhar, 2012 but now involves $4\crossproduct 4$
transfer matrices instead of $2\crossproduct 2$ ones for normal 1-D wires.
An interesting aspect of the Kitaev chain is that, in the topologically non-trivial parameter regime, it hosts special zero modes called Majorana bound states (MBS). These are symmetry protected robust states localized at the two
edges of the wire. These states are responsible for a perfect Andreev
reflection at zero bias and lead to the zero bias peak in the conductance that
has been observed experimentally Mourik _et al._ (2012); Das _et al._
(2012b); Aguado (2017). This peak is believed to be one of the strongest
experimental signatures of the MBS. These states for isolated wires (in the
absence of reservoirs) were first discussed by Kitaev Kitaev (2001). In the
present work we explore the effect of the reservoirs on these exotic states
and relate the properties of the MBS wavefunction, which now leaks into the
leads, to the behaviour of the zero bias conductance peak width. We find that
these states are present in the same parameter regime in which they occur in
the isolated wire and that the conductance peak broadens as the weight of the
MBS wavefunction increases in the two reservoirs.
This paper is structured as follows: in Sec. II, we introduce the exact
lattice Hamiltonian considered for the calculations and provide a summary of
the results from the QLE-NEGF and scattering approaches applied to this model.
We also discuss qualitatively the expected equivalence between the two
approaches. In Sec. III we provide explicit details of the calculation
involved in scattering approach and also discuss the zero mode MBS in this
system. The analytical proof for the equivalence of the two approaches is
given in Sec. IV. A numerical demonstration of this is provided in Sec. V
along with a discussion of the behaviour of the conductance peak and its
relation to the MBS wavefunction. We conclude in Sec. VI.
## II The Model and the equivalence of the two Approaches
In this section, we introduce the model for the Kitaev chain connected to
reservoirs at its two ends and then summarize the results obtained by applying
the QLE-NEGF approach and the scattering approach to this model. After that,
we qualitatively discuss the equivalence of the two approaches which is later
proven analytically in Sec. IV. The Hamiltonian of the Kitaev chain (1-D wire), $\mathcal{H}_{W}$, is given by a normal tight-binding Hamiltonian with a mean-field BCS-type coupling between neighbouring sites. The reservoirs are
taken to be semi-infinite chains with nearest neighbour tight binding
Hamiltonians, $\mathcal{H}_{L}$ and $\mathcal{H}_{R}$. $L$ and $R$ refer to
the left and the right reservoir respectively. The finite ends of the reservoirs are placed at the two ends of the wire, and the extremal sites of the wire are coupled to the nearest reservoir sites via the tight-binding Hamiltonians,
$\mathcal{H}_{WL}$ and $\mathcal{H}_{WR}$. The creation and annihilation
operators, satisfying usual fermionic anti-commutation relations, for the
wire, the left bath and the right bath are denoted as
$\\{c_{j}^{\dagger},c_{j}\\}$, $\\{c_{\alpha}^{\dagger},c_{\alpha}\\}$ and
$\\{c_{\alpha^{\prime}}^{\dagger},c_{\alpha^{\prime}}\\}$ respectively. The
Latin indices $j,k,..$ are taken to label the sites on the wire. These take
values from $1,2,...,N$, $N$ being the number of sites on the wire. Similarly,
Greek indices $\alpha,\nu,..$ taking values from $-\infty,...,-1,0$ and primed
Greek indices $\alpha^{\prime},\nu^{\prime},..$ taking values from
$N+1,N+2,...,\infty$ label the left reservoir and right reservoir sites
respectively. The full Hamiltonian is thus given by,
$\displaystyle\mathcal{H}=\mathcal{H}_{W}+\mathcal{H}_{WL}+\mathcal{H}_{WR}+\mathcal{H}_{L}+\mathcal{H}_{R},$ (1)
where
$\displaystyle\mathcal{H}_{W}=\sum_{j=1}^{N-1}\left[-\mu_{w}c_{j}^{\dagger}c_{j}-\eta_{w}(c_{j}^{\dagger}c_{j+1}+c_{j+1}^{\dagger}c_{j})+\Delta(c_{j}c_{j+1}+c_{j+1}^{\dagger}c_{j}^{\dagger})\right],$ (2)
$\displaystyle\mathcal{H}_{WL}=-\eta_{c}(c_{1}^{\dagger}c_{0}+c_{0}^{\dagger}c_{1}),$ (3)
$\displaystyle\mathcal{H}_{WR}=-\eta_{c}(c_{N}^{\dagger}c_{N+1}+c_{N+1}^{\dagger}c_{N}),$ (4)
$\displaystyle\mathcal{H}_{L}=-\eta_{b}\sum_{\alpha=-\infty}^{-1}\left(c_{\alpha}^{\dagger}c_{\alpha+1}+c_{\alpha+1}^{\dagger}c_{\alpha}\right),$ (5)
$\displaystyle\mathcal{H}_{R}=-\eta_{b}\sum_{\alpha^{\prime}=N+1}^{\infty}\left(c_{\alpha^{\prime}}^{\dagger}c_{\alpha^{\prime}+1}+c_{\alpha^{\prime}+1}^{\dagger}c_{\alpha^{\prime}}\right),$ (6)
where $\Delta,~{}\eta_{w},~{}\mu_{w}$ are respectively the superconducting
pairing strength, hopping amplitude and the chemical potential on the sites of
the wire, $\eta_{c}$ is the coupling strength between the wire and the
reservoirs, and the hopping amplitude in the reservoirs is given by
$\eta_{b}$. For simplicity, all of these parameters are taken to be real. The
reservoirs are initially described by grand canonical ensembles at
temperatures, $T_{L},T_{R}$ and chemical potentials, $\mu_{L},\mu_{R}$ and, as
we will see, this determines the correlation properties of the noise terms in
the final Langevin equations.
We first present the QLE-NEGF results for electron transport in this model.
Following the steps given in Ref. Dhar and Sen, 2006; Bhat and Dhar, 2020, we
start from the Heisenberg equations of motion for the entire system, which are given by
$\displaystyle\dot{c}_{l}=-i\sum_{m}H^{W}_{lm}c_{m}-i\sum_{m}K_{lm}c_{m}^{\dagger}-i\sum_{\alpha}V^{L}_{l\alpha}c_{\alpha}-i\sum_{\alpha^{\prime}}V^{R}_{l\alpha^{\prime}}c_{\alpha^{\prime}},$ (7)
$\displaystyle\dot{c}_{\alpha}=-i\sum_{\nu}H^{L}_{\alpha\nu}c_{\nu}-i\sum_{l}V^{L\dagger}_{\alpha l}c_{l},$ (8)
$\displaystyle\dot{c}_{\alpha^{\prime}}=-i\sum_{\nu^{\prime}}H^{R}_{\alpha^{\prime}\nu^{\prime}}c_{\nu^{\prime}}-i\sum_{l}V^{R\dagger}_{\alpha^{\prime}l}c_{l},$ (9)
where
$\displaystyle H^{W}_{lm}=-\mu_{w}\delta_{lm}-\eta_{w}(\delta_{l,m-1}+\delta_{l,m+1}),$ (10)
$\displaystyle K_{lm}=\Delta(\delta_{l,m+1}-\delta_{l,m-1}),$ (11)
$\displaystyle H^{L}_{\alpha\nu}=-\eta_{b}(\delta_{\alpha,\nu-1}+\delta_{\alpha,\nu+1}),$ (12)
$\displaystyle H^{R}_{\alpha^{\prime}\nu^{\prime}}=-\eta_{b}(\delta_{\alpha^{\prime},\nu^{\prime}-1}+\delta_{\alpha^{\prime},\nu^{\prime}+1}),$ (13)
$\displaystyle V^{L}_{l\alpha}=-\eta_{c}\delta_{l1}\delta_{\alpha 0},\qquad V^{R}_{l\alpha^{\prime}}=-\eta_{c}\delta_{lN}\delta_{\alpha^{\prime},N+1}.$ (14)
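Concretely, the blocks in Eqs. (10)-(14) are simple banded matrices. The following minimal NumPy sketch (our own code, with illustrative parameter values not taken from the paper) assembles $H^{W}$ and $K$ and checks that the pairing block is antisymmetric, as Eq. (11) implies:

```python
import numpy as np

# Illustrative parameters (not from the paper)
N, mu_w, eta_w, Delta = 5, 0.5, 1.0, 0.25

# Eq. (10): tridiagonal wire Hamiltonian H^W
HW = -mu_w * np.eye(N) - eta_w * (np.eye(N, k=1) + np.eye(N, k=-1))

# Eq. (11): K_{lm} = Delta (delta_{l,m+1} - delta_{l,m-1})
K = Delta * (np.eye(N, k=-1) - np.eye(N, k=1))

assert np.allclose(K, -K.T)  # pairing block is antisymmetric
```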
The time dependent bath degrees of freedom can be removed from the Heisenberg equation for the wire operators using the appropriate Green's functions, namely
$\displaystyle g_{L}^{+}(t)=-ie^{-itH^{L}}\theta(t)=\int_{-\infty}^{\infty}\frac{d\omega}{2\pi}g_{L}^{+}(\omega)e^{-i\omega t}$ (15)
and
$\displaystyle g_{R}^{+}(t)=-ie^{-itH^{R}}\theta(t)=\int_{-\infty}^{\infty}\frac{d\omega}{2\pi}g_{R}^{+}(\omega)e^{-i\omega t},$ (16)
for the left and right reservoirs respectively. These two equations furnish formal solutions of the Heisenberg equations of motion for the reservoir
operators. These solutions can be used to eliminate the reservoir degrees of
freedom from the Heisenberg equation for the wire operators to give the
quantum Langevin equation for the wire Bhat and Dhar (2020):
$\displaystyle i\dot{c}_{l}=\sum_{m}H^{W}_{lm}c_{m}+\sum_{m}K_{lm}c_{m}^{\dagger}+\eta^{L}_{l}(t)+\eta_{l}^{R}(t)+\sum_{m}\int_{-\infty}^{t}ds\left([\Sigma^{+}_{L}(t-s)]_{lm}+[\Sigma^{+}_{R}(t-s)]_{lm}\right)c_{m}(s).$ (17)
Thus, the effect of the reservoirs on the dynamics of the wire operators is
expressed as the sum of the noise $\eta^{L}_{l}(t)$ and $\eta^{R}_{l}(t)$ and
the history dependent dissipation terms given by the integrals. Here
$\Sigma^{+}_{L}(t)=V^{L}g^{+}_{L}(t)V^{L\dagger}$ and
$\Sigma^{+}_{R}(t)=V^{R}g^{+}_{R}(t)V^{R\dagger}$ are therefore the self
energy corrections to the wire due to the left and the right reservoirs
respectively. The properties of the noise and dissipation are easiest to
express in Fourier space and are given by:
$\displaystyle\Sigma_{L}^{+}(\omega)=V^{L}g_{L}^{+}(\omega)V^{L\dagger},$ (18)
$\displaystyle\expectationvalue{\tilde{\eta}_{l}^{L\dagger}(\omega)\tilde{\eta}_{m}^{L}(\omega^{\prime})}=[\Gamma_{L}(\omega)]_{ml}f_{L}(\omega)\delta(\omega-\omega^{\prime}),$
(19)
$\displaystyle\expectationvalue{\tilde{\eta}_{l}^{L}(\omega)\tilde{\eta}_{m}^{L\dagger}(\omega^{\prime})}=[\Gamma_{L}(\omega)]_{lm}\left[1-f_{L}(\omega)\right]\delta(\omega-\omega^{\prime}),$ (20)
with $\Gamma_{L}=\frac{1}{2\pi
i}(\Sigma_{L}^{-}(\omega)-\Sigma^{+}_{L}(\omega))$ and
$f_{L}(\omega)=f(\omega,\mu_{L},T_{L})=[e^{(\omega-\mu_{L})/T_{L}}+1]^{-1}$ is
the usual Fermi-Dirac distribution. The right reservoir will have similar
properties.
Our goal is now to obtain the steady state solution for the wire operators
which would then give us the steady state current entering the wire from the
left reservoir. To that end, we assume a parameter regime of the Hamiltonian
which allows a steady state Dhar and Sen (2006); Bhat and Dhar (2020) and the
corresponding solution can be written by taking a Fourier transform of Eq. (17) to give
$[\Pi(\omega)]_{lm}\tilde{c}_{m}(\omega)-K_{lm}\tilde{c}_{m}^{\dagger}(-\omega)=\tilde{\eta}_{l}^{L}(\omega)+\tilde{\eta}_{l}^{R}(\omega),$
(21)
with $\Pi(\omega)=\omega-H^{W}-\Sigma^{+}_{L}(\omega)-\Sigma^{+}_{R}(\omega).$
We note that Eq. (21) forms a set of linear equations in the variables $\tilde{c}_{l}(\omega)$ and $\tilde{c}_{l}^{\dagger}(-\omega)$, and therefore we can solve for $\tilde{c}_{m}(\omega)$ to get
$\displaystyle\tilde{c}_{m}(\omega)$
$\displaystyle=[G_{1}^{+}(\omega)]_{ml}\left[\tilde{\eta}_{l}^{L}(\omega)+\tilde{\eta}_{l}^{R}(\omega)\right]$
$\displaystyle+[G_{2}^{+}(\omega)]_{ml}\left[\tilde{\eta}_{l}^{L\dagger}(-\omega)+\tilde{\eta}_{l}^{R\dagger}(-\omega)\right],$
(22)
where $G_{1}^{+}(\omega)$ and $G_{2}^{+}(\omega)$ are defined as
$\displaystyle G_{1}^{+}(\omega)$
$\displaystyle=\frac{1}{\Pi(\omega)+K[\Pi^{*}(-\omega)]^{-1}K^{\dagger}},$
(23) $\displaystyle G_{2}^{+}(\omega)$
$\displaystyle=G_{1}^{+}(\omega)K[\Pi^{*}(-\omega)]^{-1}.$ (24)
The steady state solution is now expressed in terms of the two Green’s
functions $G_{1}^{+}(\omega)$ and $G_{2}^{+}(\omega)$. The steady state
properties of the wire would be given in terms of these two Green’s functions
and would involve the correlation properties of the noise terms which are
determined by the initial states of the reservoirs.
The current coming from the left reservoir, $J_{L}$, can be obtained from the rate of change of the total number of particles in the left reservoir and is given by
$J_{L}=2\sum_{m\alpha}\imaginary[V_{m\alpha}^{L}\expectationvalue{c_{m}^{\dagger}(t)c_{\alpha}(t)}].$ (25)
Using Eq. (22) and the correlation properties of the reservoirs, Eqs. (19) and (20), one finds Bhat and Dhar (2020) that Eq. (25) leads to the following expression for the current:
$\displaystyle
J_{L}=\int_{-\infty}^{\infty}\frac{d\omega}{2\pi}\bigg{(}T_{1}(\omega)(f_{L}^{e}(\omega)-f_{R}^{e}(\omega))$
$\displaystyle+T_{2}(\omega)(f_{L}^{e}(\omega)-f_{R}^{h}(\omega))+T_{3}(\omega)(f_{L}^{e}(\omega)-f_{L}^{h}(\omega))\bigg{)},$
(26)
where $f_{X}^{e}(\omega)=f(\omega,\mu_{X},T_{X})$,
$f_{X}^{h}(\omega)=f(\omega,-\mu_{X},T_{X})$, $(X=L,R)$ are the electron and
hole occupation numbers and
$\displaystyle
T_{1}(\omega)=4\pi^{2}\Tr[G_{1}^{+}(\omega)\Gamma_{R}(\omega)G_{1}^{-}(\omega)\Gamma_{L}(\omega)],$
(27) $\displaystyle
T_{2}(\omega)=4\pi^{2}\Tr[G_{2}^{+}(\omega)\Gamma_{R}^{T}(-\omega)G_{2}^{-}(\omega)\Gamma_{L}(\omega)],$
(28) $\displaystyle
T_{3}(\omega)=4\pi^{2}\Tr[G_{2}^{+}(\omega)\Gamma_{L}^{T}(-\omega)G_{2}^{-}(\omega)\Gamma_{L}(\omega)].$
(29)
From the expression for the current we can obtain the conductance at the left end, which, in units of $e^{2}/h$, is found to be
$G_{L}(T_{L},\mu_{L})=2\pi\partialderivative{J_{L}}{\mu_{L}},$ (30)
which, at $T_{L}=T_{R}=0$, gives:
$G_{L}=T_{1}(\mu_{L})+T_{2}(\mu_{L})+T_{3}(\mu_{L})+T_{3}(-\mu_{L}).$ (31)
The transmission functions involve the two Green’s functions
$G_{1}^{+}(\omega)$ and $G_{2}^{+}(\omega)$ given by Eq. (23) and Eq. (24)
respectively. The various matrices present in their expressions have simple forms and can be obtained explicitly by using Eqs. (10)-(14):
$\displaystyle[\Sigma^{+}_{L}(\omega)]_{ij}$
$\displaystyle=\eta_{c}^{2}\Sigma(\omega)\delta_{i1}\delta_{j1},$ (32)
$\displaystyle[\Gamma_{L}(\omega)]_{ij}$
$\displaystyle=\frac{\eta_{c}^{2}}{\pi}g(\omega)\delta_{i1}\delta_{j1},$ (33)
$\displaystyle[\Sigma^{+}_{R}(\omega)]_{ij}$
$\displaystyle=\eta_{c}^{2}\Sigma(\omega)\delta_{iN}\delta_{jN},$ (34)
$\displaystyle[\Gamma_{R}(\omega)]_{ij}$
$\displaystyle=\frac{\eta_{c}^{2}}{\pi}g(\omega)\delta_{iN}\delta_{jN},$ (35)
$\displaystyle[\Pi(\omega)]_{ij}$
$\displaystyle=(\omega+\mu_{w})\delta_{ij}+\eta_{w}(\delta_{i,j+1}+\delta_{i,j-1})$
$\displaystyle-\eta_{c}^{2}\Sigma(\omega)\delta_{i1}\delta_{j1}-\eta_{c}^{2}\Sigma(\omega)\delta_{iN}\delta_{jN},$
(36)
where $g(\omega)=\imaginary[\Sigma(\omega)]$, and it can be shown that Dhar
and Sen (2006)
$\displaystyle\Sigma(\omega)=\begin{cases}\frac{1}{\eta_{b}}\left(\frac{\omega}{2\eta_{b}}-\sqrt{\frac{\omega^{2}}{4\eta_{b}^{2}}-1}\right),&\text{if
}\omega>2\eta_{b}\\\
\frac{1}{\eta_{b}}\left(\frac{\omega}{2\eta_{b}}+\sqrt{\frac{\omega^{2}}{4\eta_{b}^{2}}-1}\right),&\text{if
}\omega<-2\eta_{b}\\\
\frac{1}{\eta_{b}}\left(\frac{\omega}{2\eta_{b}}-i\sqrt{1-\frac{\omega^{2}}{4\eta_{b}^{2}}}\right),&\text{if
}\absolutevalue{\omega}<2\eta_{b}.\end{cases}$ (37)
Using these results, the terms involved in the NEGF-expression for conductance
become
$\displaystyle
T_{1}(\omega)=4\eta_{c}^{4}g^{2}(\omega)\absolutevalue{[G_{1}^{+}(\omega)]_{1N}}^{2},$
(38) $\displaystyle
T_{2}(\omega)=4\eta_{c}^{4}g^{2}(\omega)\absolutevalue{[G_{2}^{+}(\omega)]_{1N}}^{2},$
(39) $\displaystyle
T_{3}(\omega)=4\eta_{c}^{4}g^{2}(\omega)\absolutevalue{[G_{2}^{+}(\omega)]_{11}}^{2}.$
(40)
We will use these expressions for the analytical proof of the equivalence of
the two methods in Sec. IV.
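To make the NEGF route concrete, the sketch below (our own code and function names, with illustrative parameters) evaluates $\Sigma(\omega)$ from Eq. (37), assembles $\Pi(\omega)$ from Eq. (36), forms $G_{1}^{+}$ and $G_{2}^{+}$ via Eqs. (23)-(24), and returns the three transmission functions of Eqs. (38)-(40), from which the zero-temperature conductance of Eq. (31) follows:

```python
import numpy as np

def Sigma(w, eta_b):
    """Bath self-energy factor of Eq. (37)."""
    x = w / (2 * eta_b)
    if abs(w) < 2 * eta_b:
        return (x - 1j * np.sqrt(1 - x**2)) / eta_b
    return (x - np.sign(w) * np.sqrt(x**2 - 1)) / eta_b

def negf_T(w, N, mu_w, eta_w, Delta, eta_c, eta_b):
    """T_1, T_2, T_3 of Eqs. (38)-(40) at energy w."""
    def Pi(v):                                   # Eq. (36) at energy v
        P = (v + mu_w) * np.eye(N, dtype=complex) \
            + eta_w * (np.eye(N, k=1) + np.eye(N, k=-1))
        P[0, 0] -= eta_c**2 * Sigma(v, eta_b)
        P[-1, -1] -= eta_c**2 * Sigma(v, eta_b)
        return P
    K = Delta * (np.eye(N, k=-1) - np.eye(N, k=1))            # Eq. (11)
    Pim = np.conj(Pi(-w))                                     # Pi^*(-w)
    G1 = np.linalg.inv(Pi(w) + K @ np.linalg.inv(Pim) @ K.T)  # Eq. (23)
    G2 = G1 @ K @ np.linalg.inv(Pim)                          # Eq. (24)
    pre = 4 * eta_c**4 * Sigma(w, eta_b).imag**2
    return (pre * abs(G1[0, -1])**2,   # T_1, Eq. (38)
            pre * abs(G2[0, -1])**2,   # T_2, Eq. (39)
            pre * abs(G2[0, 0])**2)    # T_3, Eq. (40)

# Zero-temperature conductance, Eq. (31), at mu_L = E (illustrative values)
E, pars = 0.3, dict(N=20, mu_w=0.5, eta_w=1.0, Delta=0.25, eta_c=0.2, eta_b=1.0)
T1, T2, T3 = negf_T(E, **pars)
G_L = T1 + T2 + T3 + negf_T(-E, **pars)[2]
```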
We now have the conductance at the left junction from the QLE-NEGF approach
and we want to compare these results with the results from the scattering
approach to the same problem. Here we present a qualitative discussion of the scattering formalism and relegate the details of the calculation to Sec. III. The first step in the scattering approach is to identify the
different scattering processes that could take place in the system. Let us
consider a plane wave incident on the wire from the left reservoir and then,
considering the wire as a scatterer, we note that there are a total of four
processes that can take place. Two of these processes are — (i) an electron
from the left reservoir being reflected back into the left reservoir and (ii)
an electron from the left reservoir being transmitted across the wire into the
right reservoir. We will refer to these as normal reflection and normal
transmission processes respectively. However, in a superconductor the electron
and hole wavefunctions are intertwined and therefore an electron may get
scattered as a hole also. This results in two additional scattering processes, in which an electron from the left reservoir can (iii) get reflected back as a hole into the left reservoir or (iv) get transmitted across the wire as a hole into the right reservoir. We refer to these as Andreev reflection and Andreev transmission respectively. During these two processes, charge conservation is ensured by the formation of a Cooper pair in the wire.
Having identified the scattering processes, the next step would be to write
down a stationary state wavefunction, at some energy $E$, in the three regions
of the system (the wire, left bath, right bath) with appropriate scattering
amplitudes and wavefunctions so that all the scattering processes are
captured. In the left reservoir, we thus have the incoming plane wave and the
outgoing plane waves for the reflected electron and hole corresponding to the
normal and Andreev reflection respectively. The reflected electron and hole
plane waves are multiplied by some scattering amplitudes which we take to be
$r_{n}$ and $r_{a}$ respectively. Similarly, in the right reservoir we will
have the transmitted electron and hole plane waves from the normal
transmission and the Andreev transmission respectively and we take the
scattering amplitudes for these to be $t_{n}$ and $t_{a}$ respectively. In the
wire, the wavefunction will be a superposition of quasiparticles of the wire at energy $E$, which are defined in terms of the diagonalization of the bulk wire Hamiltonian. The normal and Andreev scattering amplitudes are obtained by
implementing the boundary conditions and then the conductance at the left
junction, in units of $e^{2}/h$, is given by the net probability of an
electron to be transmitted across the left junction which is
$G_{L}^{S}=\absolutevalue{t_{n}}^{2}+\absolutevalue{t_{a}}^{2}+2\absolutevalue{r_{a}}^{2}=1-\absolutevalue{r_{n}}^{2}+\absolutevalue{r_{a}}^{2}.$
(41)
The last step follows from probability conservation, $\absolutevalue{r_{n}}^{2}+\absolutevalue{r_{a}}^{2}+\absolutevalue{t_{n}}^{2}+\absolutevalue{t_{a}}^{2}=1$. The factor of $2$ multiplying $|r_{a}|^{2}$ in Eq. (41) is due to the fact that in the Andreev reflection process, two electrons are transmitted across the junction as a single Cooper pair.
Now in order to compare these two independent approaches note that the NEGF
expression for the current, Eq. (26), has contribution from three terms. On
comparison of these three terms with the usual Landauer formulas for current
one may expect the following: the first term has electrons as incoming and
outgoing particles and therefore this must be the contribution of the electron
from the left bath being scattered as an electron into the right bath (normal
transmission), the second term having electrons and holes in the opposite
baths as the incoming and outgoing particles respectively should correspond to
the process of an electron from the left bath being scattered as a hole into
the right bath (Andreev transmission). Finally, the third term which also has
electrons and holes as incoming and outgoing particles respectively but in the
same bath would therefore correspond to the electron from the left bath
scattered back as a hole into left bath again (Andreev reflection). The traces
in the three terms should then be proportional to the probability of these
three processes respectively. Therefore, the first two terms of the
conductance expression in Eq. (31) calculated at energy $E$, $T_{1}(E)$ and
$T_{2}(E)$ should be equal to the probabilities from the scattering amplitudes
$t_{n}$ and $t_{a}$ at the same energy respectively and the sum of the last
two terms, $T_{3}(E)$ and $T_{3}(-E)$, both of which follow from the third
term of the current expression in Eq. (26) should then be equal to
$2|r_{a}|^{2}$. This would make the two conductance expressions, in Eq. (31)
and Eq. (41), from the two approaches exactly the same. In Sec. IV we present an exact proof of this result, but for now we proceed to Sec. III, where we present the details of the calculations involved in the scattering approach.
## III Scattering approach
In this section, we first find out the stationary states of energy $E$ inside
the left reservoir, the wire and the right reservoir. This would enable us to
write down the scattering wavefunction as discussed in Sec. II in the three
regions and after implementing the boundary conditions, at the reservoir-wire
junctions, we would obtain a set of linear equations for the normal and
Andreev scattering amplitudes. The conductance could then be obtained via Eq.
(41). Afterwards, we discuss the case of $E=0$ separately and find the
wavefunctions and the parameter regime of existence of the MBS.
Consider for the moment the case where the wire has $N$ sites while the left and right reservoirs have $N_{L}$ and $N_{R}$ sites respectively, so that the total number of sites is $N_{S}=N+N_{L}+N_{R}$. Let us define a
column vector $\chi_{p}=\begin{pmatrix}c_{p}\\\ c_{p}^{\dagger}\end{pmatrix}$,
where the index $p$ refers to any site on the entire system so that we can
rewrite the Hamiltonian in the form
$H=\frac{1}{2}\sum_{p,q}\chi_{p}^{\dagger}\mathcal{A}_{pq}\chi_{q}$ (42)
where the $\mathcal{A}_{pq}$ are $2\crossproduct 2$ block matrices which form the elements of the $2N_{S}\times 2N_{S}$ matrix $\mathcal{A}$, given by
$\mathcal{A}=\setcounter{MaxMatrixCols}{12}\begin{pmatrix}\mathbf{0}&A_{L}&&&&&&&&&&\\ A_{L}^{T}&\mathbf{0}&A_{L}&&&&&&&&&\\ &\ddots&\ddots&\ddots&&&&&&&&\\ &&A_{L}^{T}&\mathbf{0}&A_{C}&&&&&&&\\ &&&A_{C}^{T}&A&A_{W}&&&&&&\\ &&&&A_{W}^{T}&A&A_{W}&&&&&\\ &&&&&\ddots&\ddots&\ddots&&&&\\ &&&&&&A_{W}^{T}&A&A_{C}&&&\\ &&&&&&&A_{C}^{T}&\mathbf{0}&A_{R}&&\\ &&&&&&&&A_{R}^{T}&\mathbf{0}&A_{R}&\\ &&&&&&&&&\ddots&\ddots&\ddots\\ &&&&&&&&&&A_{R}^{T}&\mathbf{0}\end{pmatrix},$ (43)
with
$\displaystyle A_{R}=$ $\displaystyle A_{L}=\begin{pmatrix}-\eta_{b}&&0\\\
0&&\eta_{b}\end{pmatrix}~{}~{}\text{,}~{}~{}$ $\displaystyle
A_{C}=\begin{pmatrix}-\eta_{c}&&0\\\ 0&&\eta_{c}\end{pmatrix}\text{,}$ (44)
$\displaystyle A_{W}$ $\displaystyle=\begin{pmatrix}-\eta_{w}&&-\Delta\\\
\Delta&&\eta_{w}\end{pmatrix}~{}~{}\text{,}~{}~{}$ $\displaystyle
A=\begin{pmatrix}-\mu_{w}&&0\\\ 0&&\mu_{w}\end{pmatrix}.$ (45)
Now considering first the wire region, let $\Psi_{W}(j)$ be the components of
the stationary state of energy $E$ of the wire in this basis. Then, in the
bulk of the wire we have:
$A_{W}^{T}\Psi_{W}(j-1)+A\Psi_{W}(j)+A_{W}\Psi_{W}(j+1)=E\Psi_{W}(j).$ (46)
We choose $\Psi_{W}(j)=\begin{pmatrix}U\\ V\end{pmatrix}z^{j}$ and fix $z$ such that Eq. (46) is satisfied. On substituting $\Psi_{W}(j)$ in Eq. (46), we arrive at the following equation,
$\begin{pmatrix}\eta_{w}(z+\frac{1}{z})+\mu_{w}+E&&\Delta(z-\frac{1}{z})\\\
\Delta(z-\frac{1}{z})&&\eta_{w}(z+\frac{1}{z})+\mu_{w}-E\end{pmatrix}\begin{pmatrix}U\\\
V\end{pmatrix}=0,$ (47)
which means that $z$ must be fixed such that
$\begin{vmatrix}\eta_{w}(z+\frac{1}{z})+\mu_{w}+E&&\Delta(z-\frac{1}{z})\\\
\Delta(z-\frac{1}{z})&&\eta_{w}(z+\frac{1}{z})+\mu_{w}-E\end{vmatrix}=0.$ (48)
Clearly, there are four possible solutions for $z$ as this determinant on
expansion will give a fourth order equation in $z$. However, we can make
things a bit simpler by choosing $z=e^{x}$ so that the above determinant on
expansion gives a quadratic equation in $\cosh x$ rather than a fourth order
equation in $z$. The quadratic equation thus obtained is the following:
$(\cosh x)^{2}-\frac{\mu_{w}\eta_{w}}{\Delta^{2}-\eta_{w}^{2}}\cosh x+\frac{E^{2}-\mu_{w}^{2}-4\Delta^{2}}{4(\Delta^{2}-\eta_{w}^{2})}=0$ (49)
with its two solutions given by
$\cosh
x_{\pm}=\frac{\mu_{w}\eta_{w}\pm\sqrt{(\eta_{w}^{2}-\Delta^{2})(E^{2}-4\Delta^{2})+\Delta^{2}\mu_{w}^{2}}}{2(\Delta^{2}-\eta_{w}^{2})}.$
(50)
Therefore, the four possible solutions to $z$, which are obtained from the two
quadratic equations $z^{2}-2\cosh x_{\pm}z+1=0$, are given by
$\displaystyle
z_{1}=e^{-x_{+}},~{}z_{2}=e^{-x_{-}},~{}z_{3}=e^{x_{+}},~{}z_{4}=e^{x_{-}}.$
(51)
From Eq. (47), we see that $U$ and $V$ corresponding to each solution for $z$ can be chosen in the following form:
$\displaystyle U_{s}=-\Delta(z^{2}_{s}-1),$ (52)
$\displaystyle V_{s}=\eta_{w}(z_{s}^{2}+1)+z_{s}(\mu_{w}+E),$ (53)
where $s=1,2,3,4$ for the four solutions of $z$. Therefore, we have the
required stationary states inside the wire. Now, the stationary states of
energy $E$ inside the baths can be obtained from the wire solution via the
transformation, $\mu_{w}\rightarrow 0$, $\Delta\rightarrow 0$ and
$\eta_{w}\rightarrow\eta_{b}$. We get the solutions to be two left travelling
plane waves and two right travelling plane waves of the following forms:
$\displaystyle\begin{pmatrix}1\\\
0\end{pmatrix}e^{iqx},~{}~{}~{}\begin{pmatrix}1\\\ 0\end{pmatrix}e^{-iqx},$
(54) $\displaystyle\begin{pmatrix}0\\\
1\end{pmatrix}e^{iq^{\prime}x},~{}~{}~{}\begin{pmatrix}0\\\
1\end{pmatrix}e^{-iq^{\prime}x},$ (55)
where $q$ and $q^{\prime}$ are given by
$\cos^{-1}\left(-\frac{E}{2\eta_{b}}\right)$ and
$\cos^{-1}\left(-\frac{E}{2\eta_{b}}\right)-\pi$ respectively. Physically, the
first two solutions, Eq. (54), correspond to an electron travelling right and
left respectively while the last two solutions, Eq. (55), correspond to a hole
travelling to the right and left respectively.
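These mode quantities are straightforward to compute numerically. The helper below (our own code, with illustrative parameters; the ordering of the roots may differ from Eq. (51)) obtains the four roots via Eqs. (50)-(51), builds the spinors of Eqs. (52)-(53), and verifies that each pair indeed solves Eq. (47):

```python
import numpy as np

def wire_modes(E, mu_w, eta_w, Delta):
    """Roots z_s of Eq. (48) and spinors (U_s, V_s) of Eqs. (52)-(53)."""
    d = Delta**2 - eta_w**2
    disc = np.sqrt(complex((eta_w**2 - Delta**2) * (E**2 - 4 * Delta**2)
                           + Delta**2 * mu_w**2))
    ch = [(mu_w * eta_w + s * disc) / (2 * d) for s in (+1, -1)]   # Eq. (50)
    zs = []
    for c in ch:                       # z^2 - 2 cosh(x) z + 1 = 0, Eq. (51)
        r = np.sqrt(c**2 - 1)
        zs += [c - r, c + r]
    U = [-Delta * (z**2 - 1) for z in zs]                  # Eq. (52)
    V = [eta_w * (z**2 + 1) + z * (mu_w + E) for z in zs]  # Eq. (53)
    return zs, U, V

# Illustrative check that each (U_s, V_s) solves Eq. (47)
E, mu_w, eta_w, Delta = 0.3, 0.5, 1.0, 0.25
zs, U, V = wire_modes(E, mu_w, eta_w, Delta)
for z, u, v in zip(zs, U, V):
    a = eta_w * (z + 1/z) + mu_w
    r1 = (a + E) * u + Delta * (z - 1/z) * v
    r2 = Delta * (z - 1/z) * u + (a - E) * v
    assert abs(r1) < 1e-9 and abs(r2) < 1e-9
```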
We are now in a position to write the explicit form of the scattering
wavefunction in the three regions for a plane wave of energy $E$ incident from
the left reservoir. This will be of the form:
$\displaystyle\Psi_{L}(\alpha)=\begin{pmatrix}1\\\
0\end{pmatrix}e^{iq\alpha}+r_{n}\begin{pmatrix}1\\\
0\end{pmatrix}e^{-iq\alpha}+r_{a}\begin{pmatrix}0\\\
1\end{pmatrix}e^{-iq^{\prime}\alpha}$ (56)
$\displaystyle\Psi_{W}(j)=\sum_{s=1}^{4}a_{s}\begin{pmatrix}U_{s}\\\
V_{s}\end{pmatrix}z_{s}^{j-1}$ (57)
$\displaystyle\Psi_{R}(\alpha^{\prime})=t_{n}\begin{pmatrix}1\\\
0\end{pmatrix}e^{iq(\alpha^{\prime}-N-1)}+t_{a}\begin{pmatrix}0\\\
1\end{pmatrix}e^{iq^{\prime}(\alpha^{\prime}-N-1)}$ (58)
with $\alpha=-\infty,\dots,-1,0$, $j=1,2,\dots,N$ and $\alpha^{\prime}=N+1,N+2,\dots,\infty$.
As already discussed in Sec. II, $r_{n}$ is the probability amplitude for
the electron to be reflected back at the left junction as an electron.
Therefore, this corresponds to the normal reflection. $r_{a}$ is the
probability amplitude for the Andreev reflection. Similarly, $t_{n}$ and
$t_{a}$ are the normal and the Andreev transmission amplitudes respectively.
The solution inside the wire represents a superposition, with amplitudes
$a_{1},a_{2},a_{3}$ and $a_{4}$, of the quasi-particles with energy $E$ in the
wire travelling to the left and right respectively. These scattering
amplitudes are obtained by implementing the boundary conditions. We note that
we have eight scattering amplitudes and two boundaries, one at the left end
and the other at the right end of the wire. Each site on either side of each
boundary gives two equations. Therefore, a single boundary gives four
equations in total and we have exactly eight equations from the two
boundaries, sufficient to determine the eight scattering amplitudes. These
eight boundary equations are given by
$\displaystyle A_{L}\Psi_{L}(-1)+A_{C}\Psi_{W}(1)=E\Psi_{L}(0),$ (59)
$\displaystyle A_{C}\Psi_{L}(0)+A\Psi_{W}(1)+A_{W}\Psi_{W}(2)=E\Psi_{W}(1),$
(60) $\displaystyle
A_{W}^{T}\Psi_{W}(N-1)+A\Psi_{W}(N)+A_{C}\Psi_{R}(N+1)=E\Psi_{W}(N),$ (61)
$\displaystyle A_{C}^{T}\Psi_{W}(N)+A_{R}\Psi_{R}(N+2)=E\Psi_{R}(N+1).$ (62)
After substituting the solution from Eqs. (56, 57, 58), the eight linear
equations for the scattering amplitudes can be expressed in matrix form as
$\setcounter{MaxMatrixCols}{15}\begin{pmatrix}\eta_{b}e^{iq}+E&&0&&\eta_{c}U_{1}&&\eta_{c}U_{2}&&\eta_{c}U_{3}&&\eta_{c}U_{4}&&0&&0\\\
0&&\eta_{b}e^{iq^{\prime}}-E&&\eta_{c}V_{1}&&\eta_{c}V_{2}&&\eta_{c}V_{3}&&\eta_{c}V_{4}&&0&&0\\\
\eta_{c}&&0&&f_{1}&&f_{2}&&f_{3}&&f_{4}&&0&&0\\\
0&&\eta_{c}&&f^{\prime}_{1}&&f^{\prime}_{2}&&f^{\prime}_{3}&&f^{\prime}_{4}&&0&&0\\\
0&&0&&g_{1}&&g_{2}&&g_{3}&&g_{4}&&\eta_{c}&&0\\\
0&&0&&g_{1}^{\prime}&&g_{2}^{\prime}&&g_{3}^{\prime}&&g_{4}^{\prime}&&0&&\eta_{c}\\\
0&&0&&\eta_{c}z_{1}^{N-1}U_{1}&&\eta_{c}z_{2}^{N-1}U_{2}&&\eta_{c}z_{3}^{N-1}U_{3}&&\eta_{c}z_{4}^{N-1}U_{4}&&\eta_{b}e^{iq}+E&&0\\\
0&&0&&\eta_{c}z_{1}^{N-1}V_{1}&&\eta_{c}z_{2}^{N-1}V_{2}&&\eta_{c}z_{3}^{N-1}V_{3}&&\eta_{c}z_{4}^{N-1}V_{4}&&0&&\eta_{b}e^{iq^{\prime}}-E\end{pmatrix}\begin{pmatrix}r_{n}\\\
r_{a}\\\ a_{1}\\\ a_{2}\\\ a_{3}\\\ a_{4}\\\ t_{n}\\\
t_{a}\end{pmatrix}=\begin{pmatrix}-\eta_{b}e^{-iq}-E\\\ 0\\\ -\eta_{c}\\\ 0\\\
0\\\ 0\\\ 0\\\ 0\end{pmatrix},$ (63)
where for $s=1,2,3,4$ the $f_{s}$, $f_{s}^{\prime}$, $g_{s}$ and
$g_{s}^{\prime}$ are given by
$\displaystyle f_{s}$
$\displaystyle=(\mu_{w}+E)U_{s}+z_{s}(\eta_{w}U_{s}+\Delta V_{s}),$ (64)
$\displaystyle f^{\prime}_{s}$
$\displaystyle=(\mu_{w}-E)V_{s}+z_{s}(\eta_{w}V_{s}+\Delta U_{s}),$ (65)
$\displaystyle g_{s}$
$\displaystyle=(\mu_{w}+E)U_{s}z_{s}^{N-1}+(\eta_{w}U_{s}-\Delta
V_{s})z_{s}^{N-2},$ (66) $\displaystyle g_{s}^{\prime}$
$\displaystyle=(\mu_{w}-E)V_{s}z_{s}^{N-1}+(\eta_{w}V_{s}-\Delta
U_{s})z_{s}^{N-2}.$ (67)
Solving Eq. (63) gives us the required expressions for $r_{n}$, $r_{a}$, $t_{n}$ and $t_{a}$, and from these we can obtain the conductance using the scattering approach.
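Numerically, one can simply assemble the $8\crossproduct 8$ system of Eq. (63) and solve it. The sketch below (our own code, reusing the `wire_modes` helper above, with illustrative parameters) does this, checks probability conservation, and evaluates $G_{L}^{S}$ from Eq. (41):

```python
import numpy as np

def scattering_amplitudes(E, N, mu_w, eta_w, Delta, eta_c, eta_b):
    zs, U, V = wire_modes(E, mu_w, eta_w, Delta)   # Eqs. (50)-(53)
    q, qp = np.arccos(-E / (2 * eta_b)), np.arccos(-E / (2 * eta_b)) - np.pi
    f  = [(mu_w + E) * u + z * (eta_w * u + Delta * v)               # Eq. (64)
          for z, u, v in zip(zs, U, V)]
    fp = [(mu_w - E) * v + z * (eta_w * v + Delta * u)               # Eq. (65)
          for z, u, v in zip(zs, U, V)]
    g  = [(mu_w + E) * u * z**(N-1) + (eta_w * u - Delta * v) * z**(N-2)
          for z, u, v in zip(zs, U, V)]                              # Eq. (66)
    gp = [(mu_w - E) * v * z**(N-1) + (eta_w * v - Delta * u) * z**(N-2)
          for z, u, v in zip(zs, U, V)]                              # Eq. (67)
    M = np.zeros((8, 8), dtype=complex)
    b = np.zeros(8, dtype=complex)
    M[0, 0] = eta_b * np.exp(1j*q) + E;  M[0, 2:6] = [eta_c * u for u in U]
    M[1, 1] = eta_b * np.exp(1j*qp) - E; M[1, 2:6] = [eta_c * v for v in V]
    M[2, 0] = eta_c;  M[2, 2:6] = f
    M[3, 1] = eta_c;  M[3, 2:6] = fp
    M[4, 2:6] = g;    M[4, 6] = eta_c
    M[5, 2:6] = gp;   M[5, 7] = eta_c
    M[6, 2:6] = [eta_c * z**(N-1) * u for z, u in zip(zs, U)]
    M[6, 6] = eta_b * np.exp(1j*q) + E
    M[7, 2:6] = [eta_c * z**(N-1) * v for z, v in zip(zs, V)]
    M[7, 7] = eta_b * np.exp(1j*qp) - E
    b[0], b[2] = -eta_b * np.exp(-1j*q) - E, -eta_c
    rn, ra, *_, tn, ta = np.linalg.solve(M, b)   # unknowns ordered as in Eq. (63)
    return rn, ra, tn, ta

rn, ra, tn, ta = scattering_amplitudes(0.3, 20, 0.5, 1.0, 0.25, 0.2, 1.0)
assert abs(abs(rn)**2 + abs(ra)**2 + abs(tn)**2 + abs(ta)**2 - 1) < 1e-8
G_L_S = abs(tn)**2 + abs(ta)**2 + 2 * abs(ra)**2   # Eq. (41)
```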
We now look for the special solution corresponding to the zero energy MBS in this open wire system. We expect that for long enough wires there are two MBS, each localized at one edge of the wire. Let us consider a zero energy eigenstate localized at the left end $(j=1)$; for this we must have $t_{n}=0$, $t_{a}=0$ and $a_{s}=0$ if $|z_{s}|\geq 1$ in the wavefunction given by Eqs. (56, 57, 58) with $E=0$. Out of the four roots, $z_{1}$, $z_{2}$, $z_{3}$ and
$z_{4}$, it is clear that two of them always have absolute values greater than
$1$ while the other two always have absolute values less than $1$. Let us choose, by relabelling, $z_{1}$ and $z_{2}$ to be the ones with absolute values less than $1$, and therefore we set $a_{3}$ and $a_{4}$ to zero. We also note from Eq.
(47) that for $E=0$, $U=\pm V$. Therefore, depending on whether $z_{1}(z_{2})$
satisfies $U_{1}=V_{1}(U_{2}=V_{2})$ or $U_{1}=-V_{1}(U_{2}=-V_{2})$ we choose
$U_{1}(U_{2})$ and $V_{1}(V_{2})$ accordingly. This choice could be made by
noting that $U=V$ is satisfied by
$\displaystyle
z_{\pm}=\frac{-\mu_{w}\pm\sqrt{\mu_{w}^{2}-4\eta_{w}^{2}+4\Delta^{2}}}{2(\eta_{w}+\Delta)},$
(68)
while $U=-V$ is satisfied by
$\displaystyle
z_{\pm}^{\prime}=\frac{-\mu_{w}\pm\sqrt{\mu_{w}^{2}-4\eta_{w}^{2}+4\Delta^{2}}}{2(\eta_{w}-\Delta)}.$
(69)
Thus $z_{1}$ and $z_{2}$ have to be equal to two of these four roots which
have absolute values less than $1$. Fixing $\Delta>0$, we find that for
$|\mu_{w}|<2|\eta_{w}|$, $|z_{\pm}|<1$ and $|z_{\pm}^{\prime}|>1$ while for
$|\mu_{w}|>2|\eta_{w}|$, the absolute value of one of the roots among $z_{\pm}$ and $z_{\pm}^{\prime}$ is greater than 1 while that of the other is less than 1. This
implies that for $\Delta>0$ and $|\mu_{w}|<2|\eta_{w}|$, we need to set
$U_{1}=V_{1}$ and $U_{2}=V_{2}$, while for $\Delta>0$ and
$|\mu_{w}|>2|\eta_{w}|$, we have $U_{1}=V_{1}$ and $U_{2}=-V_{2}$.
We first take the case with $\Delta>0$ and $|\mu_{w}|<2|\eta_{w}|$. For this
we have $t_{n}=t_{a}=a_{3}=a_{4}=0$, $E=0$, $U_{1}=V_{1}$ and $U_{2}=V_{2}$, which simplify Eq. (63) to the following set of four equations:
$\displaystyle i\eta_{b}r_{n}+\eta_{c}U_{1}a_{1}+\eta_{c}U_{2}a_{2}=i\eta_{b},$ (70)
$\displaystyle-i\eta_{b}r_{a}+\eta_{c}U_{1}a_{1}+\eta_{c}U_{2}a_{2}=0,$ (71)
$\displaystyle\eta_{c}r_{n}+\kappa_{1}U_{1}a_{1}+\kappa_{2}U_{2}a_{2}=-\eta_{c},$ (72)
$\displaystyle\eta_{c}r_{a}+\kappa_{1}U_{1}a_{1}+\kappa_{2}U_{2}a_{2}=0,$ (73)
where $\kappa_{s}=\mu_{w}+z_{s}(\eta_{w}+\Delta)$. These equations can be
solved to give $r_{n}=0$, $r_{a}=1$,
$U_{1}a_{1}=\frac{(i\eta_{b}\kappa_{2}+\eta_{c}^{2})}{\eta_{c}(\kappa_{2}-\kappa_{1})}~{}\text{and}~{}U_{2}a_{2}=-\frac{(i\eta_{b}\kappa_{1}+\eta_{c}^{2})}{\eta_{c}(\kappa_{2}-\kappa_{1})}.$
(74)
These equations then give the wavefunction of the zero mode that is localized
at the left end and can be written as
$\displaystyle\Psi_{L}^{MBS}(\alpha)=\begin{pmatrix}1\\\
1\end{pmatrix}\sin\frac{\pi\alpha}{2}~{}\text{and}~{}\Psi_{R}^{MBS}(\alpha^{\prime})=0,$
(75) $\displaystyle\Psi_{W}^{MBS}(j)=\begin{pmatrix}1\\\
1\end{pmatrix}\imaginary\left[\frac{(i\eta_{b}\kappa_{2}+\eta_{c}^{2})z_{1}^{j-1}-(i\eta_{b}\kappa_{1}+\eta_{c}^{2})z_{2}^{j-1}}{\eta_{c}(\kappa_{2}-\kappa_{1})}\right].$
(76)
Also, due to the perfect Andreev reflection ($r_{a}=1$), we get $G_{L}(E=0)=2$, which marks the zero bias peak found in systems which host MBS Roy _et al._ (2012); Mourik _et al._ (2012); Das _et al._ (2012a). Thus the zero mode found above is the wavefunction of the zero energy MBS, present in the parameter regime $|\mu_{w}|<2|\eta_{w}|$. Due to the left-right symmetry of
the Hamiltonian, the wavefunction of the MBS localized at the other end of the
wire can directly be written as:
$\displaystyle\Phi_{R}^{MBS}(\alpha^{\prime})=\begin{pmatrix}1\\\
1\end{pmatrix}\sin\frac{\pi(N-\alpha^{\prime})}{2}~{}\text{and}~{}\Phi_{L}^{MBS}(\alpha)=0,$
(77) $\displaystyle\Phi_{W}^{MBS}(j)=\begin{pmatrix}1\\\
1\end{pmatrix}\imaginary\left[\frac{(i\eta_{b}\kappa_{2}+\eta_{c}^{2})z_{1}^{N-j}-(i\eta_{b}\kappa_{1}+\eta_{c}^{2})z_{2}^{N-j}}{\eta_{c}(\kappa_{2}-\kappa_{1})}\right].$
(78)
Figure 1: Plot of the MBS wavefunction for different couplings with the reservoir at parameter values $\eta_{b}=1.5$, $\mu_{w}=0.5$, $\Delta=0.5$,
$\eta_{w}=1$. The normalization of these wavefunctions is the same as in Eq.
(75-76) and the vertical black line marks the left end of the wire. Note that
the lead wavefunctions are not visible on this scale.
The absolute value of the height of the peak in the MBS wavefunction is given
by $|\frac{\eta_{b}}{\eta_{c}}|$. Therefore, the height of the peak decreases
as coupling with the bath increases which makes sense since one expects the
wavefunction to leak into the reservoir more as the coupling with reservoirs
increases. This can be seen in Fig. 1 where we plot the MBS wavefunction for a
few different couplings with the reservoirs. Also, increasing $\eta_{b}$ increases the bandwidth of the system, which decreases the density of states around $E=0$, and therefore the MBS of the isolated wire hybridizes less with the reservoir wavefunctions as the energy difference between them increases. Note that if the height of the peak in the MBS goes down, the weight of the MBS in the reservoirs increases, and vice versa. We will see
later that this wavefunction helps in explaining the behaviour of the zero
bias peak with different parameters of the Hamiltonian.
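Both statements are easy to verify numerically. The sketch below (our own code, with parameters matching Fig. 1 except for $\eta_{c}$, which is illustrative) evaluates $\Psi_{W}^{MBS}(j)$ of Eq. (76) and confirms that its value at the first wire site equals $\eta_{b}/\eta_{c}$:

```python
import numpy as np

eta_b, eta_c, mu_w, eta_w, Delta = 1.5, 0.3, 0.5, 1.0, 0.5

# Roots of U = V at E = 0, Eq. (68); both lie inside the unit circle
# in the topological regime |mu_w| < 2|eta_w|
disc = np.sqrt(complex(mu_w**2 - 4 * eta_w**2 + 4 * Delta**2))
z1 = (-mu_w + disc) / (2 * (eta_w + Delta))
z2 = (-mu_w - disc) / (2 * (eta_w + Delta))
assert abs(z1) < 1 and abs(z2) < 1

k1 = mu_w + z1 * (eta_w + Delta)   # kappa_s defined below Eq. (73)
k2 = mu_w + z2 * (eta_w + Delta)

def psi_w(j):
    """Wire part of the left MBS, Eq. (76)."""
    num = (1j*eta_b*k2 + eta_c**2) * z1**(j-1) \
        - (1j*eta_b*k1 + eta_c**2) * z2**(j-1)
    return np.imag(num / (eta_c * (k2 - k1)))

# Peak height at the first wire site is eta_b/eta_c, and the
# amplitude decays into the wire as |z_{1,2}|^j
assert abs(psi_w(1) - eta_b / eta_c) < 1e-12
```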
Let us consider the case with $\Delta>0$ and $|\mu_{w}|>2|\eta_{w}|$. For this
we have
$\displaystyle i\eta_{b}r_{n}+\eta_{c}U_{1}a_{1}+\eta_{c}U_{2}a_{2}=i\eta_{b},$ (79)
$\displaystyle-i\eta_{b}r_{a}+\eta_{c}U_{1}a_{1}-\eta_{c}U_{2}a_{2}=0,$ (80)
$\displaystyle\eta_{c}r_{n}+\kappa_{1}U_{1}a_{1}+\kappa_{1}U_{2}a_{2}=-\eta_{c},$ (81)
$\displaystyle\eta_{c}r_{a}+\kappa_{1}U_{1}a_{1}-\kappa_{1}U_{2}a_{2}=0.$ (82)
These equations can be solved for $r_{n}$, $r_{a}$, $U_{1}a_{1}$ and $U_{2}a_{2}$, with which we can then construct the zero mode present in this parameter regime.
However, these equations give $r_{a}=0$ and therefore there is no perfect
Andreev reflection ($r_{a}=1$). Thus the zero mode constructed out of them
would not be the MBS. They would merely be the zero energy states of the left
reservoir leaking into the wire. We therefore conclude that only the zero
energy states present in the parameter regime $|\mu_{w}|<2|\eta_{w}|$ give
rise to the perfect Andreev reflection and are the states representing the MBS
of this system. Similar arguments can be repeated for the case $\Delta<0$.
## IV Analytical proof of the equivalence of QLE-NEGF and Scattering
approaches
In this section we will show analytically the equivalence between the two approaches by deriving the following equalities:
$\displaystyle T_{1}(E)=|t_{n}|^{2},\qquad T_{2}(E)=|t_{a}|^{2},$ (83)
and
$\displaystyle T_{3}(E)=T_{3}(-E)=|r_{a}|^{2},$ (84)
where $T_{1}(E)$, $T_{2}(E)$ and $T_{3}(E)$ are given by Eqs. (38), (39) and (40) respectively, evaluated at $\omega=E$. This straightforwardly implies the equivalence of the two conductance expressions. To proceed,
we first need to find a set of equations relating the transmission amplitudes,
$t_{n}$ and $t_{a}$, to the reflection amplitudes, $r_{n}$ and $r_{a}$
directly, which is possible by relating $\begin{pmatrix}\Psi_{L}(-1)\\\
\Psi_{L}(0)\end{pmatrix}$ directly to $\begin{pmatrix}\Psi_{R}(N+1)\\\
\Psi_{R}(N+2)\end{pmatrix}$ via transfer matrices. We start by considering the
equation for the stationary state of energy $E$ inside the wire
$A_{W}^{T}\Psi_{W}(j-1)+A\Psi_{W}(j)+A_{W}\Psi_{W}(j+1)=E\Psi_{W}(j)$ (85)
which we re-write in the following recursive form:
$\displaystyle\begin{pmatrix}A_{W}^{T}\Psi_{W}(j-1)\\\
\Psi_{W}(j)\end{pmatrix}=\Omega_{W}\begin{pmatrix}A_{W}^{T}\Psi_{W}(j)\\\
\Psi_{W}(j+1)\end{pmatrix},$ (86)
where
$\displaystyle\Omega_{W}=\begin{pmatrix}(E-A)A_{W}^{-T}&&-A_{W}\\\
A_{W}^{-T}&&0\end{pmatrix}.$ (87)
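Before using this recursion, it is worth checking numerically that Eq. (86) reproduces Eq. (85); the sketch below (our own code, reusing the `wire_modes` helper of Sec. III, with illustrative parameters) builds $\Omega_{W}$ of Eq. (87) and applies it to a single mode $\Psi_{W}(j)=(U,V)^{T}z^{j-1}$:

```python
import numpy as np

E, mu_w, eta_w, Delta = 0.3, 0.5, 1.0, 0.25
A  = np.diag([-mu_w, mu_w])                       # Eq. (45)
AW = np.array([[-eta_w, -Delta], [Delta, eta_w]])
AWmT = np.linalg.inv(AW.T)                        # A_W^{-T}

# Omega_W of Eq. (87), a 4x4 block matrix
Omega_W = np.block([[(E * np.eye(2) - A) @ AWmT, -AW],
                    [AWmT, np.zeros((2, 2))]])

zs, U, V = wire_modes(E, mu_w, eta_w, Delta)
z, spinor = zs[0], np.array([U[0], V[0]], dtype=complex)
Psi = lambda j: spinor * z**(j - 1)               # one mode of Eq. (57)

j = 3
lhs = np.concatenate([AW.T @ Psi(j - 1), Psi(j)])
rhs = Omega_W @ np.concatenate([AW.T @ Psi(j), Psi(j + 1)])
assert np.allclose(lhs, rhs)                      # Eq. (86) holds
```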
Using the boundary conditions at the left junction, Eq. (59) and Eq. (60), we
can write
$\begin{pmatrix}\Psi_{L}(-1)\\\
\Psi_{L}(0)\end{pmatrix}=\Omega_{L1}\Omega_{L2}\begin{pmatrix}A_{W}^{T}\Psi_{W}(1)\\\
\Psi_{W}(2)\end{pmatrix},$ (88)
where
$\displaystyle\Omega_{L1}=\begin{pmatrix}A_{L}^{-1}EA_{C}^{-1}&&-A_{L}^{-1}A_{C}\\\
A_{C}^{-1}&&0\end{pmatrix},$ (89)
$\displaystyle\Omega_{L2}=\Omega_{W}=\begin{pmatrix}(E-A)A_{W}^{-T}&&-A_{W}\\\
A_{W}^{-T}&&0\end{pmatrix}.$ (90)
Using Eq. (86) repeatedly in Eq. (88) we have the following equation:
$\begin{pmatrix}\Psi_{L}(-1)\\\
\Psi_{L}(0)\end{pmatrix}=\Omega_{L1}\Omega_{L2}\Omega^{N-2}_{W}\begin{pmatrix}A_{W}^{T}\Psi_{W}(N-1)\\\
\Psi_{W}(N)\end{pmatrix}.$ (91)
Finally, we use the boundary conditions at the right junction, Eq. (61) and
Eq. (62), to obtain the desired equation
$\displaystyle\begin{pmatrix}\Psi_{L}(-1)\\\ \Psi_{L}(0)\end{pmatrix}$
$\displaystyle=\Omega_{L1}\Omega_{L2}\Omega^{N-2}_{W}\Omega_{R2}\Omega_{R1}\begin{pmatrix}\Psi_{R}(N+1)\\\
\Psi_{R}(N+2)\end{pmatrix}$ (92)
$\displaystyle=\Omega_{L1}\Omega\Omega_{R1}\begin{pmatrix}\Psi_{R}(N+1)\\\
\Psi_{R}(N+2)\end{pmatrix},$ (93)
where
$\displaystyle\Omega_{R2}$ $\displaystyle=\begin{pmatrix}E-A&&-I\\\
I&&0\end{pmatrix},$ (94) $\displaystyle\Omega_{R1}$
$\displaystyle=\begin{pmatrix}A_{C}^{-T}E&&-A_{C}^{-T}A_{R}\\\
A_{C}&&0\end{pmatrix},$ (95) $\displaystyle\Omega$
$\displaystyle=\Omega_{L2}\Omega^{N-2}_{W}\Omega_{R2},$ (96)
and $I$ denotes a $2\times 2$ unit matrix. We now have Eq. (93) which relates
$\begin{pmatrix}\Psi_{L}(-1)\\\ \Psi_{L}(0)\end{pmatrix}$ directly to
$\begin{pmatrix}\Psi_{R}(N+1)\\\ \Psi_{R}(N+2)\end{pmatrix}$ via the transfer
matrix, $\Omega_{L1}\Omega\Omega_{R1}$. This equation will furnish a set of
four equations for $r_{n}$, $r_{a}$, $t_{n}$ and $t_{a}$ after using the forms
of $\Psi_{L}(\alpha)$ and $\Psi_{R}(\alpha^{\prime})$ from Eq. (56) and Eq.
(58) respectively. However, we could make things much more simpler by using
the forms of the matrices $A_{C}$, $A_{L}$ and $A_{R}$ to write
$\displaystyle\Omega_{L1}$
$\displaystyle=\frac{1}{\eta_{b}\eta_{c}}\begin{pmatrix}E&&-\eta_{c}^{2}\\\
-\eta_{b}\sigma^{z}&&0\end{pmatrix},$ (97) $\displaystyle\Omega_{L1}^{-1}$
$\displaystyle=\frac{1}{\eta_{c}}\begin{pmatrix}0&&-\eta_{c}^{2}\sigma^{z}\\\
-\eta_{b}&&-E\sigma^{z}\end{pmatrix},$ (98) $\displaystyle\text{and}\hskip
14.22636pt\Omega_{R1}$
$\displaystyle=\frac{1}{\eta_{c}}\begin{pmatrix}-E\sigma^{z}&&-\eta_{b}\\\
-\eta_{c}^{2}\sigma^{z}&&0\end{pmatrix},$ (99)
where $\sigma^{z}=\begin{pmatrix}1&&0\\\ 0&&-1\end{pmatrix}$. Now, from Eq.
(93) we have
$\Omega_{L1}^{-1}\begin{pmatrix}\Psi_{L}(-1)\\\
\Psi_{L}(0)\end{pmatrix}=\Omega\Omega_{R1}\begin{pmatrix}\Psi_{R}(N+1)\\\
\Psi_{R}(N+2)\end{pmatrix},$ (100)
which then gives the following two matrix equations:
$\displaystyle\eta_{c}^{2}\sigma^{z}\Psi_{L}(0)=\bar{\Omega}_{11}\left[E\sigma^{z}\Psi_{R}(N+1)+\eta_{b}\Psi_{R}(N+2)\right]+\bar{\Omega}_{12}\eta_{c}^{2}\sigma^{z}\Psi_{R}(N+1),$ (101)
$\displaystyle\eta_{b}\Psi_{L}(-1)+E\sigma^{z}\Psi_{L}(0)=\bar{\Omega}_{21}\left[E\sigma^{z}\Psi_{R}(N+1)+\eta_{b}\Psi_{R}(N+2)\right]+\bar{\Omega}_{22}\eta_{c}^{2}\sigma^{z}\Psi_{R}(N+1),$ (102)
where $\bar{\Omega}_{ij}$ are $2\crossproduct 2$ matrices that form the blocks of the matrix $\Omega$, i.e.
$\Omega=\begin{pmatrix}\bar{\Omega}_{11}&&\bar{\Omega}_{12}\\\
\bar{\Omega}_{21}&&\bar{\Omega}_{22}\end{pmatrix}.$ (103)
Using the forms of $\Psi_{L}(\alpha)$ and $\Psi_{R}(\alpha^{\prime})$ from Eq.
(56) and Eq. (58) respectively, Eq. (101) and Eq. (102) can be written as
$\displaystyle\eta_{c}^{2}(\ket{+}+r_{n}\ket{+}-r_{a}\ket{-})=\left[-\eta_{b}e^{-iq}\bar{\Omega}_{11}+\eta_{c}^{2}\bar{\Omega}_{12}\right][t_{n}\ket{+}-t_{a}\ket{-}],$ (104)
$\displaystyle-\eta_{b}(e^{iq}\ket{+}+e^{-iq}r_{n}\ket{+}-e^{-iq}r_{a}\ket{-})=\left[-\eta_{b}e^{-iq}\bar{\Omega}_{21}+\eta_{c}^{2}\bar{\Omega}_{22}\right][t_{n}\ket{+}-t_{a}\ket{-}],$ (105)
where we substituted $q-\pi$ for $q^{\prime}$, and $\ket{\pm}$ denotes the eigenvector of $\sigma^{z}$ with eigenvalue $\pm 1$. We can simultaneously get rid of
$r_{n}$ and $r_{a}$ by subtracting Eq. (104) and Eq. (105) after
multiplication with appropriate factors. Thus, one finds:
$-2ie^{iq}\sin
q\ket{+}=\frac{\eta_{b}}{\eta_{c}^{2}}\mathcal{O}[t_{n}\ket{+}-t_{a}\ket{-}],$
(106)
where
$\mathcal{O}=\left[-e^{-2iq}\bar{\Omega}_{11}+\frac{\eta_{c}^{2}}{\eta_{b}}e^{-iq}\bar{\Omega}_{12}-\frac{\eta_{c}^{2}}{\eta_{b}}e^{-iq}\bar{\Omega}_{21}+\frac{\eta_{c}^{4}}{\eta_{b}^{2}}\bar{\Omega}_{22}\right].$
(107)
From Eq. (106) we can write down the two equations for $t_{n}$ and $t_{a}$:
$\displaystyle 1=-\frac{t_{n}}{2i\frac{\eta_{c}^{2}}{\eta_{b}}\sin
q}\bra{+}\mathcal{O}\ket{+}+\frac{t_{a}}{2i\frac{\eta_{c}^{2}}{\eta_{b}}\sin
q}\bra{+}\mathcal{O}\ket{-},$ (108) $\displaystyle
0=-\frac{t_{n}}{2i\frac{\eta_{c}^{2}}{\eta_{b}}\sin
q}\bra{-}\mathcal{O}\ket{+}+\frac{t_{a}}{2i\frac{\eta_{c}^{2}}{\eta_{b}}\sin
q}\bra{-}\mathcal{O}\ket{-}.$ (109)
Also, from Eq. (105) we directly get an expression of $r_{a}$ in terms of
$t_{n}$ and $t_{a}$:
$\displaystyle r_{a}$
$\displaystyle=\left[-\bra{-}\bar{\Omega}_{21}\ket{+}+e^{iq}\frac{\eta_{c}^{2}}{\eta_{b}}\bra{-}\bar{\Omega}_{22}\ket{+}\right]t_{n}$
$\displaystyle~{}~{}-\left[-\bra{-}\bar{\Omega}_{21}\ket{-}+e^{iq}\frac{\eta_{c}^{2}}{\eta_{b}}\bra{-}\bar{\Omega}_{22}\ket{-}\right]t_{a}.$
(110)
For the moment we leave this here and turn our attention to the terms in the
NEGF-expression for conductance. From Eq. (38-40) we see that $T_{1}(E)$,
$T_{2}(E)$ and $T_{3}(E)$ are essentially given by the elements
$[G_{1}^{+}(E)]_{1N}$, $[G_{2}^{+}(E)]_{1N}$ and $[G_{2}^{+}(E)]_{11}$ of the
two Green’s functions. We note that given the forms of the Green’s functions
$G_{1}^{+}(\omega)$ and $G_{2}^{+}(\omega)$, it is not easy to obtain these
elements. Therefore, we have to re-write these Green’s functions in some other
form so that these elements could be obtained analytically. For that, we
consider the Fourier transformed Langevin equations of motion for the wire,
Eq. (21), and write its solution in a slightly different form involving a
single $2N\times 2N$ Green's function. We start with the equations
$\displaystyle[\Pi(\omega)]_{lm}\tilde{c}_{m}(\omega)-K_{lm}\tilde{c}_{m}^{\dagger}(-\omega)=\tilde{\eta}_{l}^{L}(\omega)+\tilde{\eta}_{l}^{R}(\omega),$
(111)
$\displaystyle[\Pi(-\omega)]^{*}_{lm}\tilde{c}_{m}^{\dagger}(-\omega)-K_{lm}^{*}\tilde{c}_{m}(\omega)=\tilde{\eta}_{l}^{L\dagger}(-\omega)+\tilde{\eta}_{l}^{R\dagger}(-\omega).$
(112)
Defining the two component vectors
$\displaystyle C_{i}(\omega)=\begin{pmatrix}\tilde{c}_{i}(\omega)\\\
\tilde{c}^{\dagger}_{i}(-\omega)\end{pmatrix}~{}\text{and}~{}\xi_{i}(\omega)=\begin{pmatrix}-\tilde{\eta}^{L}_{i}(\omega)-\tilde{\eta}^{R}_{i}(\omega)\\\
\tilde{\eta}_{i}^{L\dagger}(-\omega)+\tilde{\eta}_{i}^{R\dagger}(-\omega)\end{pmatrix},$
we write Eq. (111) and Eq. (112) together as
$[\mathcal{G}^{-1}(\omega)]_{lm}C_{m}(\omega)=\xi_{l}(\omega)$ which has the
solution:
$\displaystyle C_{l}(\omega)=[\mathcal{G}(\omega)]_{lm}\xi_{m}(\omega),$ (113)
with $\mathcal{G}^{-1}(\omega)$ being a $2N\crossproduct 2N$ matrix whose
$lm$-th $2\crossproduct 2$ matrix block element is given by
$[\mathcal{G}^{-1}(\omega)]_{lm}=\begin{pmatrix}-[\Pi(\omega)]_{lm}&&K_{lm}\\\
-K_{lm}^{*}&&[\Pi(-\omega)]_{lm}^{*}\end{pmatrix}$ (114)
Comparing Eq. (113) with Eq. (22) for the $\tilde{c}_{m}(\omega)$ we see that
$[\mathcal{G}(\omega)]_{lm}=\begin{pmatrix}-[G_{1}^{+}(\omega)]_{lm}&&[G_{2}^{+}(\omega)]_{lm}\\\
-[G_{2}^{+}(-\omega)]^{*}_{lm}&&[G_{1}^{+}(-\omega)]_{lm}^{*}\end{pmatrix}.$
(115)
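The block identification in Eq. (115) also gives a direct numerical route to the required Green's function elements: build the $2N\crossproduct 2N$ matrix of Eq. (114) and invert it. A sketch (our own code, using the `Sigma` helper sketched earlier and illustrative parameters):

```python
import numpy as np

def G_from_blocks(E, N, mu_w, eta_w, Delta, eta_c, eta_b):
    """Invert the 2N x 2N matrix of Eq. (114); read off G_1^+, G_2^+ via Eq. (115)."""
    def Pi(v):                                      # Eq. (36)
        P = (v + mu_w) * np.eye(N, dtype=complex) \
            + eta_w * (np.eye(N, k=1) + np.eye(N, k=-1))
        P[0, 0] -= eta_c**2 * Sigma(v, eta_b)
        P[-1, -1] -= eta_c**2 * Sigma(v, eta_b)
        return P
    K = Delta * (np.eye(N, k=-1) - np.eye(N, k=1))  # Eq. (11), real
    Ginv = np.zeros((2 * N, 2 * N), dtype=complex)
    Ginv[0::2, 0::2] = -Pi(E)                       # block entries of Eq. (114)
    Ginv[0::2, 1::2] = K
    Ginv[1::2, 0::2] = -K                           # -K^* = -K for real K
    Ginv[1::2, 1::2] = np.conj(Pi(-E))
    G = np.linalg.inv(Ginv)
    G1 = -G[0::2, 0::2]                             # Eq. (115)
    G2 = G[0::2, 1::2]
    return G1, G2

G1, G2 = G_from_blocks(0.3, 20, 0.5, 1.0, 0.25, 0.2, 1.0)
# G1[0, -1], G2[0, -1] and G2[0, 0] are exactly the inputs to Eqs. (38)-(40),
# and should agree with the route through Eqs. (23)-(24)
```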
Now, from Eq. (36), Eq. (11) and Eq. (114) we find that the matrix
$\mathcal{G}(E)$ has the following structure:
$\mathcal{G}(E)=\begin{pmatrix}-E+A-A_{\Sigma}&A_{W}&0&0&\dots\dots&0&0\\\
A_{W}^{T}&-E+A&A_{W}&0&\dots\dots&0&0\\\
0&A_{W}^{T}&-E+A&A_{W}&\dots\dots&0&0\\\
\vdots&\vdots&\vdots&\ddots&&\vdots&\vdots\\\
\vdots&\vdots&\vdots&&\ddots&\vdots&\vdots\\\
\vdots&\vdots&\vdots&\dots&A_{W}^{T}&-E+A&A_{W}\\\
0&0&0&\dots&\dots&A_{W}^{T}&-E+A-A_{\Sigma}\end{pmatrix}^{-1}$ (116)
where $A_{\Sigma}=\begin{pmatrix}-\eta_{c}^{2}\Sigma(E)&0\\ 0&\eta_{c}^{2}\Sigma^{*}(-E)\end{pmatrix}$, $\Sigma(E)$ being given by Eq. (37) with $\omega$ replaced by $E$, and the matrices $A,~{}A_{W}$ defined as
in Eq. (45). We note that for $|E|<2\eta_{b}$,
$-\eta_{b}\Sigma(E)=\eta_{b}\Sigma^{*}(-E)=e^{iq}$. This then simplifies
$A_{\Sigma}$ to be $\frac{\eta_{c}^{2}}{\eta_{b}}e^{iq}I_{2}$ with $I_{2}$
being a $2\crossproduct 2$ identity matrix. We work in the regime of
$|E|<2\eta_{b}$ as outside of it the conductance is zero. Note that
$\mathcal{G}(E)=\mathcal{G}^{T}(E)$ and therefore, we have
$\displaystyle[G_{1}^{+}(E)]^{T}$ $\displaystyle=G_{1}^{+}(E)$ (117)
$\displaystyle G_{2}^{-}(-E)~{}$ $\displaystyle=-G_{2}^{+}(E).$ (118)
These relations would be useful later on. The block tri-diagonal structure of
$\mathcal{G}(E)$ in Eq. (116) allows us to find the required elements of
$\mathcal{G}(E)$, which are $\mathcal{G}_{N1}$ and $\mathcal{G}_{11}$, for
obtaining the terms in NEGF-expression for conductance. Thus, we define
$I_{2N}$ to be a $2N\crossproduct 2N$ identity matrix so that, using the first
column of equations from the identity
$\mathcal{G}^{-1}(E)\mathcal{G}(E)=I_{2N}$, we can write
$\displaystyle(-E+A-A_{\Sigma})\mathcal{G}_{11}+A_{W}\mathcal{G}_{21}=I_{2}$
(119) $\displaystyle
A_{W}^{T}\mathcal{G}_{i-1,1}+(-E+A)\mathcal{G}_{i1}+A_{W}\mathcal{G}_{i+1,1}=0~{}\text{for}~{}1<i<N$
(120) $\displaystyle
A_{W}^{T}\mathcal{G}_{N-1,1}+(-E+A-A_{\Sigma})\mathcal{G}_{N1}=0$ (121)
We rewrite these equations as a recursion relation, following similar steps as
we did for Eq. (93), to obtain
$\begin{pmatrix}-I_{2}\\\
\mathcal{G}_{11}\end{pmatrix}=\Omega_{1}\Omega^{N-2}_{W}\Omega_{2}\begin{pmatrix}\mathcal{G}_{N1}\\\
0\end{pmatrix}$ (122)
with $\Omega_{W}$ given by Eq. (87),
$\displaystyle\Omega_{1}=\begin{pmatrix}(E-A+A_{\Sigma})A_{W}^{-T}&&-A_{W}\\\
A_{W}^{-T}&&0\end{pmatrix}=\begin{pmatrix}1&&A_{\Sigma}\\\
0&&1\end{pmatrix}\Omega_{L2},$ (123)
$\displaystyle\Omega_{2}=\begin{pmatrix}E-A+A_{\Sigma}&&1\\\
1&&0\end{pmatrix}=\Omega_{R2}\begin{pmatrix}1&&0\\\
-A_{\Sigma}&&-1\end{pmatrix},$ (124)
where $\Omega_{L2}$ and $\Omega_{R2}$ are the same matrices defined in the
scattering calculation by Eq. (90) and Eq. (94) respectively. Using Eq. (123),
Eq. (124) and substituting
$A_{\Sigma}=\frac{\eta_{c}^{2}}{\eta_{b}}e^{iq}I_{2}$, one can express Eq.
(122) as
$\begin{pmatrix}I_{2}\\\
\mathcal{G}_{11}\end{pmatrix}=\begin{pmatrix}e^{2iq}\mathcal{O}&&\bar{\Omega}_{12}+\frac{\eta_{b}}{\eta_{c}^{2}}e^{iq}\bar{\Omega}_{22}\\\
\bar{\Omega}_{21}-\frac{\eta_{b}}{\eta_{c}^{2}}e^{iq}\bar{\Omega}_{22}&&-\bar{\Omega}_{22}\end{pmatrix}\begin{pmatrix}\mathcal{G}_{N1}\\\
0\end{pmatrix},$ (125)
where $\mathcal{O}$ is given by Eq. (107). From the upper block of Eq. (125), we obtain the following matrix equation for $[G_{1}^{+}(E)]_{N1}$ and $[G_{2}^{+}(-E)]_{N1}^{*}$:
$\displaystyle\ket{+}=-e^{2iq}\mathcal{O}\left[[G^{+}_{1}(E)]_{N1}\ket{+}+[G_{2}^{+}(-E)]_{N1}^{*}\ket{-}\right],$ (126)
which gives two linear equations for $[G_{1}^{+}(E)]_{N1}$ and $[G_{2}^{+}(-E)]_{N1}^{*}$:
$\displaystyle
1=-e^{2iq}[G^{+}_{1}(E)]_{N1}\bra{+}\mathcal{O}\ket{+}-e^{2iq}[G_{2}^{+}(-E)]_{N1}^{*}\bra{+}\mathcal{O}\ket{-}$
(127) $\displaystyle
0=-e^{2iq}[G^{+}_{1}(E)]_{N1}\bra{-}\mathcal{O}\ket{+}-e^{2iq}[G_{2}^{+}(-E)]_{N1}^{*}\bra{-}\mathcal{O}\ket{-}.$
(128)
Comparing these with Eqs. (108) and (109), and noticing that $\sin q=\eta_{b}g(E)$, we have
$\displaystyle 2i\eta_{c}^{2}g(E)[G^{+}_{1}(E)]_{N1}$
$\displaystyle=e^{-2iq}t_{n},$ (129) $\displaystyle
2i\eta_{c}^{2}g(E)[G_{2}^{+}(-E)]_{N1}^{*}$ $\displaystyle=-e^{-2iq}t_{a}.$
(130)
These equations, along with Eqs. (117) and (118), imply that
$\displaystyle T_{1}(E)=4\eta_{c}^{4}g^{2}(E)|[G_{1}^{+}(E)]_{1N}|^{2}$
$\displaystyle=|t_{n}|^{2},$ (131) $\displaystyle
T_{2}(E)=4\eta_{c}^{4}g^{2}(E)|[G_{2}^{+}(E)]_{1N}|^{2}$
$\displaystyle=|t_{a}|^{2},$ (132)
which are the required relations. If we consider the lower block of equations in Eq. (125), then one of the component equations reads
$\displaystyle[G_{2}^{+}(-E)]^{*}_{11}=\left[\bra{-}\bar{\Omega}_{21}\ket{+}-e^{iq}\frac{\eta_{c}^{2}}{\eta_{b}}\bra{-}\bar{\Omega}_{22}\ket{+}\right][G_{1}^{+}(E)]_{N1}$
$\displaystyle+\left[\bra{-}\bar{\Omega}_{21}\ket{-}-e^{iq}\frac{\eta_{c}^{2}}{\eta_{b}}\bra{-}\bar{\Omega}_{22}\ket{-}\right][G_{2}^{+}(-E)]^{*}_{N1}.$
(133)
We express $[G_{1}^{+}(E)]_{N1}$ and $[G_{2}^{+}(-E)]^{*}_{N1}$ in terms of $t_{n}$ and $t_{a}$ in this equation with the help of Eqs. (129) and (130). Comparing the resulting equation with Eq. (110) and using Eq. (118), we finally obtain
$T_{3}(E)=T_{3}(-E)=4\eta_{c}^{4}g^{2}(E)|[G_{2}^{+}(E)]_{11}|^{2}=|r_{a}|^{2}.$
(134)
This completes the analytic proof for the equivalence.
We now proceed to the next section, where we present a numerical comparison of the results from the two approaches and the behaviour of the zero bias conductance peak in different parameter regimes of the Hamiltonian, along with some other numerical results which are useful in discussing electron transport in the wire.
## V Numerical Results and Discussion
The quantities required for calculating the conductance from the scattering
method are simply obtained by solving the system of eight linear equations
given by Eq. (63), while the terms in the NEGF expression for conductance are straightforward to calculate via Eqs. (38)-(40). We note that $\mu_{L}$ in the
NEGF expression for conductance plays the role of $E$ in the scattering
method. In Figs. (2a) and (2b) we show the conductance from the NEGF-
expression and the scattering expression plotted as functions of $\mu_{L}=E$
for a long chain ($N=50$) and a short chain ($N=3$) respectively. The plot
shows a perfect agreement between the two. Similarly, Figs. (2c,2e,2g) for
$N=50$, and Figs. (2d,2f,2h) for $N=3$, show a perfect agreement between the
quantities $T_{3}(\mu_{L})$, $T_{3}(-\mu_{L})$ and
$\absolutevalue{r_{a}}^{2}$, $T_{2}(\mu_{L})$ and $\absolutevalue{t_{a}}^{2}$,
$T_{1}(\mu_{L})$ and $\absolutevalue{t_{n}}^{2}$ respectively.
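Putting the earlier sketches together, the equalities (83)-(84) can be checked on an energy grid with a short driver (our own code, assuming the `negf_T` and `scattering_amplitudes` helpers sketched above, with the same illustrative parameters):

```python
import numpy as np

pars = dict(N=20, mu_w=0.5, eta_w=1.0, Delta=0.25, eta_c=0.2, eta_b=1.0)
worst = 0.0
for E in np.linspace(-1.9, 1.9, 191):                   # inside the bath band
    T1, T2, T3 = negf_T(E, **pars)                      # NEGF side, Eqs. (38)-(40)
    rn, ra, tn, ta = scattering_amplitudes(E, **pars)   # scattering side, Eq. (63)
    worst = max(worst,
                abs(T1 - abs(tn)**2),                   # Eq. (83)
                abs(T2 - abs(ta)**2),
                abs(T3 - abs(ra)**2))                   # Eq. (84)
print("max deviation over the grid:", worst)
# if the equivalence holds, this should be at machine-precision level
```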
It is interesting to consider the spectrum and the form of wavefunctions of
the Kitaev chain connected to leads. With infinite leads we saw in Sec. III
that all scattering eigenstates and the MBS can be obtained analytically. We
can also work with large finite reservoirs and obtain the spectrum by
diagonalising the matrix $\mathcal{A}$ in Eq. (43). Note that this furnishes a
doubly counted spectrum of the Hamiltonian and the corresponding wavefunctions
have twice the number of components as the number of sites in the system. As
per the basis that we have chosen to write $\mathcal{A}$, two adjacent
components will correspond to a single site of the system. Therefore in our
plots, if $N_{L}=N_{R}=N_{B}$ then left bath sites are from $-2N_{B}+1$ to
$0$, the wire sites are from $1$ to $2N$ and the sites from $2N+1$ to
$2N+2N_{B}$ correspond to the right reservoir. The wavefunctions were normalized to one and we typically use a reservoir size of $N_{B}=1000$.
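A small-scale version of this finite-reservoir diagonalization (our own code; the sizes are kept small so it runs quickly, unlike the $N_{B}=1000$ used for the figures) looks as follows; the last lines pick out, among the states closest to zero energy, the two with the most weight inside the wire:

```python
import numpy as np

# Illustrative (small) sizes and parameters
N, NB = 20, 100
eta_b, eta_c, eta_w, mu_w, Delta = 1.5, 0.5, 1.0, 0.5, 0.25

AL = np.diag([-eta_b, eta_b])            # Eq. (44); A_R = A_L
AC = np.diag([-eta_c, eta_c])
AW = np.array([[-eta_w, -Delta], [Delta, eta_w]])   # Eq. (45)
Aon = np.diag([-mu_w, mu_w])

NS = NB + N + NB
Amat = np.zeros((2 * NS, 2 * NS))
wire = range(NB, NB + N)                 # 0-indexed positions of the wire sites
for p in wire:                           # on-site wire blocks
    Amat[2*p:2*p+2, 2*p:2*p+2] = Aon
for p in range(NS - 1):                  # bond blocks, following Eq. (43)
    if p in wire and p + 1 in wire:
        B = AW
    elif p + 1 == NB or p == NB + N - 1:
        B = AC                           # wire-reservoir junctions
    else:
        B = AL                           # reservoir bonds
    Amat[2*p:2*p+2, 2*p+2:2*p+4] = B
    Amat[2*p+2:2*p+4, 2*p:2*p+2] = B.T

evals, evecs = np.linalg.eigh(Amat)      # A is real symmetric
near = np.argsort(np.abs(evals))[:20]    # states closest to zero energy
w_wire = np.array([np.sum(evecs[2*NB:2*(NB+N), i]**2) for i in near])
mbs = near[np.argsort(w_wire)[-2:]]      # in the topological regime: the two MBS
print(evals[mbs], w_wire[np.argsort(w_wire)[-2:]])
```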
In Fig. (3a) and Fig. (3b) we show respectively the spectrum of the isolated Kitaev wire and of the wire connected to reservoirs. The isolated band spectrum
agrees with the expected form $\epsilon_{k}=\pm\sqrt{(\mu_{w}+2\eta_{w}\cos
k)^{2}+4\Delta^{2}\sin^{2}k}$, $k\in(-\pi,\pi)$. In Fig. (3c) and Fig. (3d),
we plot two wavefunctions of the system, one whose energy lies in the gap of
the isolated wire and the other just outside the gap. We see that the first is
almost fully localized in the reservoirs while the second is localized in the
wire. In Fig. (4) we plot the zero-energy state in the gap and, as expected,
this gives us the MBS wavefunction, mostly localized at the two edges inside
the wire but with some leakage into the leads. The plots in Fig. (4a) and Fig.
(4b) show a comparison between the numerically obtained MBS and the analytical
ones given by Eq. (75-78) and we find perfect agreement. The plots in Fig.
(4c) and Fig. (4d) show the MBS wavefunction for different parameters of the
wire Hamiltonian. We will see shortly that the wavefunctions in Fig. (3) and
Fig. (4) help in understanding different features of Fig. (2) and also Fig.
(5) (which shows the behaviour of the zero bias conductance peak for different
parameters).
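For reference, the SC gap quoted in Fig. 3 can be read off from the isolated dispersion above; a short check (our own code) gives a minimum of $\epsilon_{k}\approx 0.483$ for $\mu_{w}=0.5$, $\eta_{w}=1$, $\Delta=0.25$, close to the gap edges $E\approx\pm 0.486$ marked in the figure:

```python
import numpy as np

mu_w, eta_w, Delta = 0.5, 1.0, 0.25
k = np.linspace(-np.pi, np.pi, 100001)
eps = np.sqrt((mu_w + 2 * eta_w * np.cos(k))**2 + 4 * Delta**2 * np.sin(k)**2)
print(eps.min())   # approximately 0.483 for these parameters
```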
Panels (a), (c), (e), (g): $N=50$; panels (b), (d), (f), (h): $N=3$.
Figure 2: The comparison of various quantities obtained from QLE-NEGF and
scattering approaches for parameter values $\Delta=0.25$, $\eta_{c}=0.2$, $\eta_{b}=1$, $\eta_{w}=1$ and $\mu_{w}=0.5$. The inset in each plot shows the
absolute value of the difference between the corresponding curves. (a) shows
the comparison of NEGF-expression and the scattering expression for $G_{L}$.
From the corresponding inset, which shows the difference between the two
results, we can say that the two expressions match perfectly. (c) shows the
comparison of $\absolutevalue{r_{a}}^{2}$ with the last two terms,
$T_{3}=T_{3}(\mu_{L})$ and $T_{4}=T_{3}(-\mu_{L})$ in the NEGF-expression for
conductance. In the inset, $\delta
r_{a}=\absolutevalue{T_{3}-\absolutevalue{r_{a}}^{2}}+\absolutevalue{T_{4}-\absolutevalue{r_{a}}^{2}}$
is plotted with $\mu_{L}=E$. (e) and (g) show the comparison of
$\absolutevalue{t_{n}}^{2}$ and $\absolutevalue{t_{a}}^{2}$ with
$T_{1}=T_{1}(\mu_{L})$ and $T_{2}=T_{2}(\mu_{L})$ of the NEGF-conductance
expression respectively. The corresponding insets of the two plots show the
variation of $\delta t_{n}=|T_{1}-\absolutevalue{t_{n}}^{2}|$ and $\delta
t_{a}=|T_{2}-\absolutevalue{t_{a}}^{2}|$ respectively. Plots (b), (d), (f) and
(h) show the same results as (a), (c), (e) and (g) respectively for $N=3$.
Figure 3: (a) shows the spectrum of a wire in the absence of reservoirs (isolated wire) at parameter values $N=100$, $\eta_{w}=1$, $\Delta=0.25$ and $\mu_{w}=0.5$. (b) shows the spectrum of the wire connected to the two reservoirs with parameter values $N_{B}=1000$, $N=100$, $\eta_{w}=1$, $\Delta=0.25$, $\mu_{w}=0.5$, $\eta_{b}=1.5$ and $\eta_{c}=0.5$. The two horizontal lines at $E\approx\pm 0.486$ in these plots mark the SC gap at these parameter values. The two points inside the SC gap in (a) are the zero energy MBS of the isolated wire. The spectrum in (b) also has similar zero modes, but they cannot be distinctly seen. (c) and (d) show two eigenstates of the matrix $\mathcal{A}$ in Eq. (43) which serve as effective eigenstates of the joint system. In these two plots, the two vertical lines at $p=1$ and $p=200$ mark the two ends of the wire and the insets show the left junction zoomed in.
Figure 4: Plots (a) and (b) show the two MBS which are localized at the left end and the right end of the wire respectively, with parameter values $\eta_{b}=1.5$, $\mu_{w}=\eta_{c}=0.5$, $\Delta=0.25$, $\eta_{w}=1$ and $N=200$. In these two plots, we choose $N_{B}=1000$ for the numerical calculation of the MBS and we take the same size of the reservoirs to normalize the analytical wavefunctions for the two MBS. (c) and (d) show the analytical MBS wavefunction for different $\mu_{w}$ and $\Delta$, with the other parameters the same as in (a) and (b) respectively. The insets show the wavefunction in the reservoirs zoomed in.
(a) Zero bias peak at different $\eta_{c}$ with $\eta_{b}=1.5$, $\mu_{w}=0.5$ and $\Delta=0.1$
(b) Zero bias peak at different $\eta_{b}$ with $\mu_{w}=0.5$, $\eta_{c}=0.3$ and $\Delta=0.1$
(c) Zero bias peak at different $\mu_{w}$ with $\eta_{b}=1.5$, $\eta_{c}=0.3$ and $\Delta=0.1$
(d) Zero bias peak at different $\Delta$ with $\eta_{b}=1.5$, $\eta_{c}=0.3$ and $\mu_{w}=0.5$
Figure 5: Behaviour of the zero bias peak for different parameters of the Hamiltonian. Plots (a) to (d) are for $N=100$ and $\eta_{w}=1$. All other parameters are mentioned in the subcaptions of the figures.
The features seen in the spectra in Figs. (3,4) can be related to the results for conductance in Fig. (2). The zero bias peak in the conductance is due to the perfect Andreev reflection $(|r_{a}|^{2}=1)$ supported by the Majorana bound states of the wire Roy _et al._ (2012); Mourik _et al._ (2012); Maiellaro _et al._ (2019); Das _et al._ (2012b). For long wires, these bound states exist in the topological parameter regime, $(\mu_{w}<2\eta_{w})$. The zero energy MBS in the isolated wire can be clearly seen in its spectrum, Fig. (3a), at the middle of the superconducting gap (SC gap), while for the wire connected to reservoirs, Fig. (3b), it is hidden in the continuum spectrum due to the leads. Plotting the wavefunctions, Figs. (4a,b), reveals their nature. The plots in Figs. (2e) and (2g) show that $t_{n}$ and $t_{a}$ are zero within a certain energy range, which is the same as the SC gap of the isolated wire. This makes sense physically: for long wires, transmission from the left bath to the right bath can only happen via the excitation of finite-energy quasiparticles of the superconducting wire, which is not possible if the energy of the incoming particle lies within the SC gap. Typical wavefunctions in the superconducting gap look like Fig. (3c), having most of their support in the two baths; this explains why there is no transmission from the left reservoir to the right reservoir and we get $t_{n}=t_{a}=0$. Outside the SC gap, on the other hand, there are wavefunctions which look like Fig. (3d), having most of their support inside the wire with some leakage into the baths. These contribute to the transmission peaks outside the gap.
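As a rough numerical check of this picture, one can diagonalize the Bogoliubov-de Gennes (BdG) Hamiltonian of an isolated Kitaev chain and look for the pair of near-zero modes inside the gap. The sketch below is written in Python/NumPy under an assumed standard Kitaev-chain convention with hopping $\eta_{w}$, pairing $\Delta$ and chemical potential $\mu_{w}$; the signs and overall normalization may differ from the conventions of this paper, but the presence of two eigenvalues exponentially close to zero for $\mu_{w}<2\eta_{w}$ is robust to such choices.

```python
import numpy as np

# Assumed Kitaev-chain parameters (same symbols as in the text)
N, eta_w, Delta, mu_w = 100, 1.0, 0.25, 0.5

# Single-particle block h and antisymmetric pairing block d
h = -mu_w * np.eye(N)                         # on-site chemical potential
for i in range(N - 1):
    h[i, i + 1] = h[i + 1, i] = -eta_w        # nearest-neighbour hopping
d = np.zeros((N, N))
for i in range(N - 1):
    d[i, i + 1], d[i + 1, i] = Delta, -Delta  # p-wave pairing, d^T = -d

# BdG matrix in the Nambu basis (c_1..c_N, c_1^dag..c_N^dag); symmetric since h = h^T
H_bdg = np.block([[h, d], [-d, -h]])
E = np.linalg.eigvalsh(H_bdg)

# For mu_w < 2*eta_w the two smallest |E| are exponentially close to zero (the MBS pair)
print(np.sort(np.abs(E))[:4])
```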
In Figs. (5a)-(5d) we show how the width of the zero bias conductance peak varies as we vary different parameters of the Hamiltonian. Let us consider Figs. (5a) and (5b), which show the zero bias peak for different $\eta_{c}$ and $\eta_{b}$ respectively, while keeping all other parameters fixed. We see that the peak broadens as we increase $\eta_{c}$ or decrease $\eta_{b}$. This can be understood in terms of the MBS wavefunction given by Eqs. (77)-(78), where we saw that the height of the peak in the MBS is proportional to $\frac{\eta_{b}}{\eta_{c}}$. Hence, decreasing $\eta_{b}$ or increasing $\eta_{c}$ decreases the height of the peak and therefore increases the weight of the MBS in the left reservoir. Thus the peak broadens as the weight of the MBS in the reservoirs increases. Similarly, from Figs. (5c) and (5d) we see that the peak broadens as $\mu_{w}$ decreases or $\Delta$ increases, and correspondingly the weight of the wavefunction in the reservoirs, Figs. (4c) and (4d), increases. From Fig. (5c) we also see that the peak splits at the transition point Das _et al._ (2012a), $(\mu_{w}=2\eta_{w})$, marking the topological phase transition into a topologically trivial state, and then disappears.
## VI Conclusion
In conclusion, we provided an analytical proof of the equivalence of the QLE-
NEGF approach and the scattering approach to electron transport in a 1-D
superconducting wire. In both cases we start from the same microscopic model
of a Kitaev wire connected to one-dimensional leads (baths) containing free
Fermions in equilibrium. In the former method one starts with the Heisenberg
equations of motion of the full system and eliminates the bath degrees of
freedom to obtain effective quantum Langevin equations of motion. The steady state solution of these leads to the NEGF formula for the conductance in terms of a set of nonequilibrium Green’s functions. In the second approach one
considers the wire as a scatterer of plane waves from the leads and writes
down the corresponding scattering solutions for the energy eigenstates. These
solutions involve scattering amplitudes that are obtained using the boundary
conditions at the wire-leads junctions. The conductance at the junction is
then given by the net probability of transmission of particles across the
junction.
We summarize here some of our main results:
* •
We obtained the complete solution of the scattering states in the Kitaev
chain, including closed form expressions for the scattering amplitudes
$t_{a},t_{n},r_{n},r_{a}$.
* •
We obtained the special zero energy solution corresponding to the MBS of the open Kitaev chain connected to the leads. We showed that this state exists in the same parameter regime as for the isolated wire.
* •
The conductance of the wire from the QLE-NEGF method and the scattering
approach are given respectively by Eq. (31) and Eq. (41). We showed
analytically that the terms in the NEGF conductance expression, $T_{1}(E)$,
$T_{2}(E)$ and $T_{3}(E)$, can be related to the scattering amplitudes
$t_{n}$, $t_{a}$ and $r_{a}$ respectively. This proves the complete equivalence of the two formulas for conductance and hence of the two approaches.
* •
We have demonstrated clearly and explicitly the physical interpretation: from our derivation we see that the expression for the current, Eq. (26), is exactly of Landauer’s form, with each of the baths playing the role of a “double reservoir” of electrons and holes. The wire acts as a scatterer and scatters the incoming electrons as holes or electrons into the two baths. Therefore, an electron from, say, the left bath may end up being scattered back into the left bath as an electron or a hole; these two processes are the normal reflection and Andreev reflection processes, respectively. The electron may also end up being scattered into the right reservoir, and therefore transmitted across the wire, as an electron or a hole. Of the four possible outcomes for an electron from the left bath, all except normal reflection lead to particles being transmitted across the left junction. Therefore, only these three processes actually contribute to the conductance of the wire. This is the reason the NEGF current expression has three distinct terms, with the probabilities of these processes multiplied by the corresponding difference of thermal occupations of the incoming electrons and outgoing electrons or holes, as one typically finds in Landauer expressions.
* •
Finally, we have given numerical examples that (a) demonstrate the equivalence of the two approaches, (b) show the nature of the scattering wavefunctions and the MBS and their dependence on various parameters, and (c) relate the conductance properties to those of the spectrum.
## References
* Kitaev (2001) A. Y. Kitaev, Physics-Uspekhi 44, 131 (2001).
* Sau _et al._ (2010) J. D. Sau, S. Tewari, R. M. Lutchyn, T. D. Stanescu, and S. D. Sarma, Physical Review B 82, 214509 (2010).
* Oreg _et al._ (2010) Y. Oreg, G. Refael, and F. von Oppen, Physical Review Letters 105, 177002 (2010).
* Mourik _et al._ (2012) V. Mourik, K. Zuo, S. M. Frolov, S. Plissard, E. P. Bakkers, and L. P. Kouwenhoven, Science 336, 1003 (2012).
* Das _et al._ (2012a) A. Das, Y. Ronen, Y. Most, Y. Oreg, M. Heiblum, and H. Shtrikman, Nature Physics 8, 887 (2012a).
* Thakurathi _et al._ (2015) M. Thakurathi, O. Deb, and D. Sen, Journal of Physics: Condensed Matter 27, 275702 (2015).
* Roy _et al._ (2012) D. Roy, C. Bolech, and N. Shah, Physical Review B 86, 094503 (2012).
* Bondyopadhaya and Roy (2019) N. Bondyopadhaya and D. Roy, Physical Review B 99, 214514 (2019).
* Bhat and Dhar (2020) J. M. Bhat and A. Dhar, Physical Review B 102, 224512 (2020).
* Blonder _et al._ (1982) G. E. Blonder, M. Tinkham, and T. M. Klapwijk, Physical Review B 25, 4515 (1982).
* Maiellaro _et al._ (2019) A. Maiellaro, F. Romeo, C. A. Perroni, V. Cataudella, and R. Citro, Nanomaterials 9, 894 (2019).
* Lobos and Sarma (2015) A. M. Lobos and S. D. Sarma, New Journal of Physics 17, 065010 (2015).
* Doornenbal _et al._ (2015) R. Doornenbal, G. Skantzaris, and H. Stoof, Physical Review B 91, 045419 (2015).
* Komnik (2016) A. Komnik, Physical Review B 93, 125117 (2016).
* Zhang and Quan (2020) F. Zhang and H. Quan, arXiv preprint arXiv:2011.05823 (2020).
* Dhar and Sen (2006) A. Dhar and D. Sen, Physical Review B 73, 085119 (2006).
* Nehra _et al._ (2020) R. Nehra, A. Sharma, and A. Soori, EPL (Europhysics Letters) 130, 27003 (2020).
* Das and Dhar (2012) S. G. Das and A. Dhar, The European Physical Journal B 85, 372 (2012).
* Das _et al._ (2012b) A. Das, Y. Ronen, Y. Most, Y. Oreg, M. Heiblum, and H. Shtrikman, arXiv preprint arXiv:1205.7073 (2012b).
* Aguado (2017) R. Aguado, La Rivista del Nuovo Cimento 40, 523 (2017).
# Data-Driven Protection Levels for Camera and 3D Map-based Safe Urban
Localization
Shubh Gupta and Grace Xingxin Gao Stanford University
## Abstract
Reliably assessing the error in an estimated vehicle position is integral to ensuring the vehicle’s safety in urban environments. Many existing approaches
use GNSS measurements to characterize protection levels (PLs) as probabilistic
upper bounds on the position error. However, GNSS signals might be reflected
or blocked in urban environments, and thus additional sensor modalities need
to be considered to determine PLs. In this paper, we propose an approach for
computing PLs by matching camera image measurements to a LiDAR-based 3D map of
the environment. We specify a Gaussian mixture model probability distribution
of position error using deep neural network-based data-driven models and
statistical outlier weighting techniques. From the probability distribution,
we compute the PLs by evaluating the position error bound using numerical
line-search methods. Through experimental validation with real-world data, we
demonstrate that the PLs computed from our method are reliable bounds on the
position error in urban environments.
## 1 Introduction
In recent years, research on autonomous navigation for urban environments has
been garnering increasing attention. Many publications have targeted different
aspects of navigation such as route planning [1], perception [2] and
localization [3, 4]. For trustworthy operation in each of these aspects,
assessing the level of safety of the vehicle from potential system failures is
critical. However, fewer works have examined the problem of safety
quantification for autonomous vehicles.
In the context of satellite-based localization, safety is typically addressed
via integrity monitoring (IM) [5]. Within IM, protection levels specify a
statistical upper bound on the error in an estimated position of the vehicle,
which can be trusted to enclose the position errors with a required
probabilistic guarantee. To detect an unsafe estimated vehicle position, these
protection levels are compared with the maximum allowable position error
value, known as the alarm limit. Various methods [6, 7, 8] have been proposed over the years for computing protection levels; however, most of these approaches focus on GNSS-only navigation. These approaches do not directly
apply to GNSS-denied urban environments, where visual sensors are becoming
increasingly preferred [9]. Although various options in visual sensors exist
in the market, camera sensors are inexpensive, lightweight, and have been
widely employed in industry. For quantifying localization safety in GNSS-
denied urban environments, there is thus a need to develop new ways of
computing protection levels using camera image measurements.
Since protection levels are bounds over the position error, computing them
from camera image measurements requires a model that relates the measurements
to position error in the estimate of the vehicle location. Furthermore, since
the lateral, longitudinal and vertical directions are well-defined with
respect to a vehicle’s location on the road, the model must estimate the
maximum position error in each of these directions for computing protection
levels [10]. However, characterizing such a model is not straightforward. This is because the relation between a vehicle location in an environment and the corresponding camera image measurement is complex, since it depends on identifying and matching structural patterns in the measurements with prior known information about the environment [3, 4, 11, 12].
Recently, data-driven techniques based on deep neural networks (DNNs) have
demonstrated state-of-the-art performance in determining the state of the
camera sensor, comprising its position and orientation, by identifying and
matching patterns in images with a known map of the environment [13, 14, 15]
or an existing database of images [16, 11]. By leveraging datasets consisting
of multiple images with known camera states in an environment, these
approaches train a DNN to model the relationship between an image and the
corresponding state. However, the model characterized by the DNN can often be
erroneous or brittle. For instance, recent research has shown that the output
of a DNN can change significantly with minimal changes to the inputs [17].
Thus, for using DNNs to determine the position error, uncertainty in the
output of the DNN must also be addressed.
DNN-based algorithms consider two types of uncertainty [18, 19]. Aleatoric or
statistical uncertainty results from the noise present in the inputs to the
DNN, due to which a precise output cannot be produced. For camera image
inputs, sources of noise include illumination changes, occlusion or the
presence of visually ambiguous structures, such as windows tessellated along a
wall [18]. On the other hand, epistemic or systematic uncertainty exists
within the model itself. Sources of epistemic uncertainty include poorly
determined DNN model parameters as well as external factors that are not
considered in the model [20], such as environmental features that might be
ignored by the algorithm while matching the camera images to the environment
map.
While aleatoric uncertainty is typically modeled as the input-dependent
variance in the output of the DNN [18, 21, 22], epistemic uncertainty relates
to the DNN model and, therefore, requires further deliberation. Existing
approaches approximate epistemic uncertainty by assuming a probability
distribution over the weight parameters of the DNN to represent the ignorance
about the correct parameters [23, 24, 25]. However, these approaches assume
that a correct value of the parameters exists and that the probability
distribution over the weight parameters captures the uncertainty in the model,
both of which do not necessarily hold in practice [26]. This inability of
existing DNN-based methods to properly characterize uncertainty limits their
applicability to safety-critical applications, such as localization of
autonomous vehicles.
In this paper, we propose a novel method for computing protection levels
associated with a given vehicular state estimate (position and orientation)
from camera image measurements and a 3D map of the environment. This work is
based on our recent ION GNSS+ 2020 conference paper [27] and includes
additional experiments and improvements to the DNN training process. Recently,
high-definition 3D environment maps in the form of LiDAR point clouds have
become increasingly available through industry players such as HERE, TomTom,
Waymo and NVIDIA, as well as through projects such as USGS 3DEP [28] and
OpenTopography [29]. Furthermore, LiDAR-based 3D maps are more robust to noise
from environmental factors, such as illumination and weather, than image-based
maps [30]. Hence, we use LiDAR-based 3D point cloud maps in our approach.
Previously, CMRNet [15] has been proposed as a DNN-based approach for
determining the vehicular state from camera images and a LiDAR-based 3D map.
In our approach, we extend the DNN architecture proposed in [15] to model the
position error and the covariance matrix (aleatoric uncertainty) in the
vehicular state estimate. To assess the epistemic uncertainty in the position
error, we evaluate the DNN position error outputs at multiple candidate states
in the vicinity of the state estimate, and combine the outputs into samples of
the state estimate position error. Fig. 1 shows the architecture of our
proposed approach. Given a state estimate, we first select multiple candidate
states from its neighborhood. Using the DNN, we then evaluate the position
error and covariance for each candidate state by comparing the camera image
measurement with a local map constructed from the candidate state and the 3D
environment map. Next, we linearly transform the position error and covariance
outputs from the DNN with the relative positions of candidate states into
samples of the state estimate position error and variance. We then separate
these samples into the lateral, longitudinal and vertical directions and
weight the samples to mitigate the impact of outliers in each direction.
Subsequently, we combine the position error samples, outlier weights, and
variance samples to construct a Gaussian mixture model probability
distribution of the position error in each direction, and numerically evaluate
its intervals to compute protection levels.
Our main contributions are as follows:
1. 1.
We extend the CMRNet [15] architecture to model both the position error in the
vehicular state estimate and the associated covariance matrix. Using the 3D
LiDAR-based map of the environment, we first construct a local map
representation with respect to the vehicular state estimate. Then, we use the
DNN to analyze the correspondence between the camera image measurement and the
local map for determining the position error and the covariance matrix.
2. 2.
We develop a novel method for capturing epistemic uncertainty in the DNN
position error output. Unlike existing approaches which assume a probability
distribution over DNN weight parameters, we directly analyze different
position errors that are determined by the DNN for multiple candidate states
selected from within a neighborhood of the state estimate. The position error
outputs from the DNN corresponding to the candidate states are then linearly
combined with the candidate states’ relative position from the state estimate,
to obtain an empirical distribution of the state estimate position error.
3. 3.
We design an outlier weighting scheme to account for possible errors in the
DNN output at inputs that differ from the training data. Our approach weighs
the position error samples from the empirical distribution using a robust
outlier detection metric, known as robust Z-score [31], along the lateral,
longitudinal and vertical directions individually.
4. 4.
We construct the lateral, longitudinal and vertical protection levels as
intervals over the probability distribution of the position error. We model
this probability distribution as a Gaussian Mixture Model [32] from the
position error samples, DNN covariance and outlier weights.
5. 5.
We demonstrate the applicability of our approach in urban environments by
experimentally validating the protection levels computed from our method on
real-world data with multiple camera images and different state estimates.
Figure 1: Architecture of our proposed approach for computing protection
levels. Given a state estimate, multiple candidate states are selected from
its neighborhood and the corresponding position error and the covariance
matrix for each candidate state are evaluated using the DNN. The position
errors and covariance are then linearly transformed to obtain samples of the
state estimate position error and variance, which are then weighted to
determine outliers. Finally, the position error samples, outlier weights and
variance are combined to construct a Gaussian Mixture Model probability
distribution, from which the lateral, longitudinal and vertical protection
levels are computed through numerical evaluation of its probability intervals.
The remainder of this paper is structured as follows: Section 2 discusses related work. Section 3 formulates the problem of estimating protection levels. Section 4 describes the two types of uncertainty considered in our approach. Section 5 details our algorithm. Section 6 presents the results from experimentation with real-world data. We conclude the paper in Section 7.
## 2 Related Work
Several methods have been developed over the years which characterize
protection levels in the context of GNSS-based urban navigation. Jiang and
Wang [6] compute horizontal protection levels using an iterative search-based
method and test statistic based on the bivariate normal distribution. Cezón
_et al._ [7] analyze methods which utilize the isotropy of residual vectors
from the least-squares position estimation to compute the protection levels.
Tran and Presti [8] combine Advanced Receiver Autonomous Integrity Monitoring
(ARAIM) with Kalman filtering, and compute the protection levels by
considering the set of position solutions which arise after excluding faulty
measurements. These approaches compute the protection levels by deriving the
mathematical relation between measurement and position domain errors. However,
such a relation is difficult to formulate with camera image measurements and a
LiDAR-based 3D map, since the position error in this case depends on various
factors such as the structure of buildings in the environment, available
visual features and illumination levels.
Previous works have proposed IM approaches for LiDAR and camera-based
navigation where the vehicle is localized by associating identified landmarks
with a stored map or a database. Joerger _et al._ [33] developed a method to
quantify integrity risk for LiDAR-based navigation algorithms by analyzing
failures of feature extraction and data association subroutines. Zhu _et al._
[34] derived a bound on the integrity risk in camera-based navigation using
EKF caused by incorrect feature associations. However, these IM approaches
have been developed for localization algorithms based on data-association and
cannot be directly applied to many recent camera and LiDAR-based localization
techniques which use deep learning to model the complex relation between
measurements and the stored map or the database. Furthermore, these IM
techniques do not estimate protection levels, which is the focus of our work.
Deep learning has been widely applied for determining position information
from camera images. Kendall _et al._ [35] train a DNN using images from a
single environment to learn a relation between the image and the camera 6-DOF
pose. Taira _et al._ [11] learn image features using a DNN and apply feature
extraction and matching techniques to estimate the 6-DOF camera pose relative
to a known 3D map of the environment. Sarlin _et al._ [16] develop a deep
learning-based 2D-3D matching technique to obtain 6-DOF camera pose from
images and a 3D environment model. However, these approaches do not model the
corresponding uncertainty associated with the estimated camera pose, or
account for failures in DNN approximation [26], which is necessary for
characterizing safety measures such as protection levels.
Some recent works have proposed to estimate the uncertainty associated with
deep learning algorithms. Kendall and Cipolla [23] estimate the uncertainty in
DNN-based camera pose estimation from images, by evaluating the network
multiple times through dropout [24]. Loquercio _et al._ [19] propose a general
framework for estimating uncertainty in deep learning as variance computed
from both aleatoric and epistemic sources. McAllister _et al._ [21] suggest
using Bayesian deep learning to determine uncertainty and quantify safety in
autonomous vehicles, by placing probability distributions over DNN weights to
represent the uncertainty in the DNN model. Yang _et al._ [22] jointly
estimate the vehicle odometry, scene depth and uncertainty from sequential
camera images. However, the uncertainty estimates from these algorithms do not
take into account the inaccuracy of the trained DNN model, or the influence of
the underlying environment structure on the DNN outputs. In our approach, we
evaluate the DNN position error outputs at inputs corresponding to multiple
states in the environment, and utilize these position errors for
characterizing uncertainty both from inaccuracy in the DNN model as well as
from the environment structure around the state estimate.
To the best of our knowledge, our approach is the first that applies data-
driven algorithms for computing protection levels by characterizing the
uncertainty from different error sources. The proposed method seeks to
leverage the high-fidelity function modeling capability of DNNs and combine it
with techniques from robust statistics and integrity monitoring to compute
robust protection levels using camera image measurements and a 3D map of the environment.
## 3 Problem Formulation
We consider the scenario of a vehicle navigating in an urban environment using
measurements acquired by an on-board camera. The 3D LiDAR map of the
environment $\mathcal{M}$ that consists of points
$\mathbf{p}\in\mathbb{R}^{3}$ is assumed to be known beforehand, either from openly available repositories [28, 29] or from Simultaneous Localization and Mapping algorithms [36].
The vehicular state $\mathbf{s}_{t}=[\mathbf{x}_{t},\mathbf{o}_{t}]$ at time
$t$ is a 7-element vector comprising its 3D position
$\mathbf{x}_{t}=[x_{t},y_{t},z_{t}]^{\top}\in\mathbb{R}^{3}$ along $x,y$ and
$z$-dimensions and 3D orientation unit quaternion
$\mathbf{o}_{t}=[o_{1,t},o_{2,t},o_{3,t},o_{4,t}]\in\textrm{SU}(2)$. The
vehicle state estimates over time are denoted as
$\\{\mathbf{s}_{t}\\}_{t=1}^{T_{\text{max}}}$ where $T_{\text{max}}$ denotes
the total time in a navigation sequence. At each time $t$, the vehicle
captures an RGB camera image $I_{t}\in\mathbb{R}^{l\times w\times 3}$ from the
on-board camera, where $l$ and $w$ denote pixels along length and width
dimensions, respectively.
Given an integrity risk specification $IR$, our objective is to compute the
lateral protection level $PL_{lat,t}$, longitudinal protection level
$PL_{lon,t}$, and vertical protection level $PL_{vert,t}$ at time $t$, which
denote the maximal bounds on the position error magnitude with a probabilistic
guarantee of at least $1-IR$. Considering $x,y$ and $z$-dimensions in the
rotational frame of the vehicle
$PL_{lat,t}=\sup\left\\{\rho\mid\mathbb{P}\left(|x_{t}-x^{*}_{t}|\leq\rho\right)\leq 1-IR\right\\}$
$PL_{lon,t}=\sup\left\\{\rho\mid\mathbb{P}\left(|y_{t}-y^{*}_{t}|\leq\rho\right)\leq 1-IR\right\\}$
$PL_{vert,t}=\sup\left\\{\rho\mid\mathbb{P}\left(|z_{t}-z^{*}_{t}|\leq\rho\right)\leq 1-IR\right\\},$ (1)
where $\mathbf{x}^{*}_{t}=[x^{*}_{t},y^{*}_{t},z^{*}_{t}]$ denotes the unknown
true vehicle position at time $t$.
## 4 Types of Uncertainty in Position Error
Protection levels for a state estimate $\mathbf{s}_{t}$ at time $t$ depend on
the uncertainty in determining the associated position error
$\Delta\mathbf{x}_{t}=[\Delta x_{t},\Delta y_{t},\Delta z_{t}]$ between the
state estimate position $\mathbf{x}_{t}$ and the true position
$\mathbf{x}^{*}_{t}$ from the camera image $I_{t}$ and the environment map
$\mathcal{M}$. We consider two different kinds of uncertainty, which are
categorized by the source of inaccuracy in determining the position error
$\Delta\mathbf{x}_{t}$: aleatoric and epistemic uncertainty.
### 4.1 Aleatoric Uncertainty
Aleatoric uncertainty refers to the uncertainty from noise present in the
camera image measurements $I_{t}$ and the environment map $\mathcal{M}$, due
to which a precise value of the position error $\Delta\mathbf{x}_{t}$ cannot
be determined. Existing DNN-based localization approaches model the aleatoric
uncertainty as a covariance matrix with only diagonal entries [18, 21, 22] or
with both diagonal and off-diagonal terms [37, 38]. Similar to the existing
approaches, we characterize the aleatoric uncertainty by using a DNN to model
the covariance matrix $\Sigma_{t}$ associated with the position error
$\Delta\mathbf{x}_{t}$. We consider both nonzero diagonal and off-diagonal
terms in $\Sigma_{t}$ to model the correlation between $x,y$ and $z$-dimension
uncertainties, such as along the ground plane.
Aleatoric uncertainty by itself does not accurately represent the uncertainty
in determining the position error. This is because aleatoric uncertainty
assumes that the noise present in training data also represents the noise in
all future inputs and that the DNN approximation is error-free. These
assumptions fail in scenarios when the input at evaluation time is different
from the training data or when the input contains features that occur rarely
in the real world [26]. Thus, relying purely on aleatoric uncertainty can lead to overconfident estimates of the position error uncertainty [18].
Figure 2: Position error $\Delta\mathbf{x}_{t}$ in the state estimate position
$\mathbf{x}_{t}$ is a linear combination of the position error
$\Delta\mathbf{x}^{i}_{t}$ in position $\mathbf{x}^{i}_{t}$ of any candidate
state $\mathbf{s}^{i}_{t}$ and the relative position vector between
$\mathbf{x}^{i}_{t}$ and $\mathbf{x}_{t}$.
### 4.2 Epistemic Uncertainty
Epistemic uncertainty relates to the inaccuracies in the model for determining
the position error $\Delta\mathbf{x}_{t}$. In our approach, we characterize
the epistemic uncertainty by leveraging a geometrical property of the position
error $\Delta\mathbf{x}_{t}$, where for the same camera image $I_{t}$,
$\Delta\mathbf{x}_{t}$ can be obtained by linearly combining the position
error $\Delta\mathbf{x}^{\prime}_{t}$ computed for any _candidate state_
$\mathbf{s}^{\prime}_{t}$ and the relative position of
$\mathbf{s}^{\prime}_{t}$ from the state estimate $\mathbf{s}_{t}$ (Fig. 2).
Hence, using known relative positions and orientations of $N_{C}$ candidate
states $\\{\mathbf{s}_{t}^{1},\ldots,\mathbf{s}_{t}^{N_{C}}\\}$ from
$\mathbf{s}_{t}$, we transform the different position errors
$\\{\Delta\mathbf{x}_{t}^{1},\ldots,\Delta\mathbf{x}_{t}^{N_{C}}\\}$
determined for the candidate states into samples of the state estimate
position error $\Delta\mathbf{x}_{t}$. The empirical distribution comprised of
these position error samples characterizes the epistemic uncertainty in the
position error estimated using the DNN.
## 5 Data-Driven Protection Levels
This section details our algorithm for computing data-driven protection levels
for the state estimate $\mathbf{s}_{t}$ at time $t$, using the camera image
$I_{t}$ and environment map $\mathcal{M}$. First, we describe the method for
generating local representations of the 3D environment map $\mathcal{M}$ with
respect to the state estimate $\mathbf{s}_{t}$. Then, we illustrate the
architecture of the DNN. Next, we discuss the loss functions used in DNN
training. We then detail the method for selecting multiple candidate states
from the neighborhood of the state estimate $\mathbf{s}_{t}$. Using position
errors and covariance matrix evaluated from the DNN for each of these
candidate states, we then illustrate the process for transforming the
candidate state position errors into multiple samples of the state estimate
position error. Then, to mitigate the impact of outliers in the computed
position error samples in each of the lateral, longitudinal and vertical
directions, we detail the procedure for computing outlier weights. Next, we
characterize the probability distribution over position error in lateral,
longitudinal and vertical directions. Finally, we detail the approach for
determining protection levels from the probability distribution by numerical
methods.
### 5.1 Local Map Construction
A local representation of the 3D LiDAR map of the environment captures the
environment information in the vicinity of the state estimate $\mathbf{s}_{t}$
at time $t$. By comparing the environment information captured in the local
map with the camera image $I_{t}\in\mathbb{R}^{l\times w\times 3}$ using a
DNN, we estimate the position error $\Delta\mathbf{x}_{t}$ and covariance
$\Sigma_{t}$ in the state estimate $\mathbf{s}_{t}$. For computing the local
maps, we utilize the LiDAR-image generation procedure described in [15].
Similar to their approach, we generate the local map
$L(\mathbf{s},\mathcal{M})\in\mathbb{R}^{l\times w}$ associated with a vehicle
state $\mathbf{s}$ and LiDAR environment map $\mathcal{M}$ in two steps.
1. 1.
First, we determine the rigid-body transformation matrix $H_{\mathbf{s}}$ in
the special Euclidean group $\textrm{SE}(3)$ corresponding to the vehicle
state $\mathbf{s}$
$H_{\mathbf{s}}=\left[\begin{matrix}R_{\mathbf{s}}&T_{\mathbf{s}}\\\
\mathbf{0}_{1\times 3}&1\end{matrix}\right]\in\textrm{SE}(3),$ (2)
where
1. –
$R_{\mathbf{s}}$ denotes the rotation matrix corresponding to the orientation
quaternion elements $\mathbf{o}=[o_{1},o_{2},o_{3},o_{4}]$ in the state
$\mathbf{s}$
2. –
$T_{\mathbf{s}}$ denotes the translation vector corresponding to the position
elements $\mathbf{x}=[x,y,z]$ in the state $\mathbf{s}$.
Using the matrix $H_{\mathbf{s}}$, we rotate and translate the points in the
map $\mathcal{M}$ to the map $\mathcal{M}_{\mathbf{s}}$ in the reference frame
of the state $\mathbf{s}$
$\mathcal{M}_{\mathbf{s}}=\\{\left[\begin{matrix}I_{3\times
3}&\mathbf{0}_{3\times 1}\end{matrix}\right]\cdot
H_{\mathbf{s}}\cdot\left[\begin{matrix}\mathbf{p}^{\top}&1\end{matrix}\right]^{\top}\mid\mathbf{p}\in\mathcal{M}\\},$
(3)
where $I$ denotes the identity matrix.
For maintaining computational efficiency in the case of large maps, we use the
points in the LiDAR map $\mathcal{M}_{\mathbf{s}}$ that lie in a sub-region
around the state $\mathbf{s}$ and in the direction of the vehicle orientation.
2. 2.
In the second step, we apply the occlusion estimation filter presented in [39]
to identify and remove occluded points along rays from the camera center. For
each pair of points $(\mathbf{p}^{(i)},\mathbf{p}^{(j)})$ where
$\mathbf{p}^{(i)}$ is closer to the state $\mathbf{s}$, $\mathbf{p}^{(j)}$ is
marked occluded if the angle between the ray from $\mathbf{p}^{(j)}$ to the
camera center and the line from $\mathbf{p}^{(j)}$ to $\mathbf{p}^{(i)}$ is
less than a threshold. Then, the remaining points are projected to the camera
image frame using the camera projection matrix $K$ to generate the local depth
map $L(\mathbf{s},\mathcal{M})$. The $i$th point $\mathbf{p}^{(i)}$ in
$\mathcal{M}_{\mathbf{s}}$ is projected as
$[\begin{matrix}p_{x}&p_{y}&c\end{matrix}]^{\top}=K\cdot\mathbf{p}^{(i)}$
$[L(\mathbf{s},\mathcal{M})]_{(\lceil p_{x}/c\rceil,\lceil p_{y}/c\rceil)}=[\begin{matrix}0&0&1\end{matrix}]\cdot\mathbf{p}^{(i)},$ (4)
where
1. –
$p_{x},p_{y}$ denote the projected 2D coordinates with scaling term $c$
2. –
$[L(\mathbf{s},\mathcal{M})]_{(p_{x},p_{y})}$ denotes the $(p_{x},p_{y})$
pixel position in the local map $L(\mathbf{s},\mathcal{M})$.
The local depth map $L(\mathbf{s},\mathcal{M})$ for state $\mathbf{s}$
visualizes the environment features that are expected to be captured in a
camera image obtained from the state $\mathbf{s}$. However, the obtained
camera image $I_{t}$ is associated with the true state $\mathbf{s}^{*}_{t}$
that might be different from the state estimate $\mathbf{s}_{t}$.
Nevertheless, for reasonably small position and orientation differences
between the state estimate $\mathbf{s}_{t}$ and true state
$\mathbf{s}^{*}_{t}$, the local map $L(\mathbf{s},\mathcal{M})$ contains
features that correspond with some of the features in the camera image $I_{t}$
that we use to estimate the position error.
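To make this construction concrete, the following is a minimal Python/NumPy sketch of the projection steps in Eqs. (2)-(4). It follows Eq. (3) as written, omits the occlusion filter of [39], and assumes the camera looks along the $+z$ axis of the state's frame; the function and argument names are placeholders, not the authors' implementation.

```python
import numpy as np

def local_depth_map(x, R, points, K, l, w):
    """Sketch of Eqs. (2)-(4): project LiDAR map points into a depth image.

    x: (3,) position and R: (3, 3) rotation matrix of the state s (Eq. 2);
    points: (M, 3) LiDAR map; K: (3, 3) camera intrinsics; (l, w): image size.
    """
    H = np.hstack([R, x.reshape(3, 1)])          # [R_s | T_s], top block of Eq. (2)
    p_h = np.hstack([points, np.ones((len(points), 1))])
    p_s = (H @ p_h.T).T                          # points in the state's frame, Eq. (3)
    p_s = p_s[p_s[:, 2] > 0]                     # keep points in front of the camera
    proj = (K @ p_s.T).T                         # rows [p_x, p_y, c] of Eq. (4)
    u = np.ceil(proj[:, 0] / proj[:, 2]).astype(int)
    v = np.ceil(proj[:, 1] / proj[:, 2]).astype(int)
    L = np.zeros((l, w))
    ok = (u >= 0) & (u < w) & (v >= 0) & (v < l)
    L[v[ok], u[ok]] = p_s[ok, 2]                 # store depth [0 0 1].p at each pixel
    return L
```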
Figure 3: Architecture of our deep neural network for estimating translation and rotation errors as well as the parameters of the covariance matrix. The translation and rotation errors are determined using CMRNet [15], which employs correlation layers [40] for comparing feature representations of the camera image and the local depth map. Using a similar architecture, we design CovarianceNet, which produces the parameters of the covariance matrix associated with the translation error output.
### 5.2 DNN Architecture
We use a DNN to estimate the position error $\Delta\mathbf{x}_{t}$ and
associated covariance matrix $\Sigma_{t}$ by implicitly identifying and
comparing the positions of corresponding features in camera image $I_{t}$ and
the local depth map $L(\mathbf{s}_{t},\mathcal{M})$ associated with the state
estimate $\mathbf{s}_{t}$.
The architecture of our DNN is given in Fig. 3. Our DNN comprises two separate modules: one for estimating the position error $\Delta\mathbf{x}_{t}$ and the other for the parameters of the covariance matrix $\Sigma_{t}$. The first
module for estimating the position error $\Delta\mathbf{x}_{t}$ is based on
CMRNet [15]. CMRNet was originally proposed as an algorithm to iteratively
determine the position and orientation of a vehicle using a camera image and
3D LiDAR map, starting from a provided initial state. For determining the position error $\Delta\mathbf{x}_{t}$ using CMRNet, we use the state estimate $\mathbf{s}_{t}$ as the provided initial state, and interpret the corresponding DNN translation error $\Delta\tilde{\mathbf{x}}_{t}$ and rotation error $\Delta\tilde{\mathbf{r}}_{t}$ outputs as the transformation of the state $\mathbf{s}_{t}$ towards the true state $\mathbf{s}^{*}_{t}$. Formally, given
any state $\mathbf{s}$ and camera image $I_{t}$ at time $t$, the translation
error $\Delta\tilde{\mathbf{x}}$ and rotation error $\Delta\tilde{\mathbf{r}}$
are expressed as
$\Delta\tilde{\mathbf{x}},\Delta\tilde{\mathbf{r}}=\textrm{CMRNet}(I_{t},L(\mathbf{s},\mathcal{M})).$
(5)
CMRNet estimates the rotation error $\Delta\tilde{\mathbf{r}}$ as a unit
quaternion. Furthermore, the architecture determines both the translation
error $\Delta\tilde{\mathbf{x}}$ and rotation error $\Delta\tilde{\mathbf{r}}$
in the reference frame of the state $\mathbf{s}$. Since the protection levels
depend on the position error $\Delta\mathbf{x}$ in the reference frame from
which the camera image $I_{t}$ is captured (the vehicle reference frame), we
transform the translation error $\Delta\tilde{\mathbf{x}}$ to the vehicle
reference frame by rotating it with the inverse of $\Delta\tilde{\mathbf{r}}$
$\Delta\mathbf{x}=-\tilde{R}^{\top}\cdot\Delta\tilde{\mathbf{x}},$ (6)
where $\tilde{R}$ is the $3\times 3$ rotation matrix corresponding to the
rotation error quaternion $\Delta\tilde{\mathbf{r}}$.
In the second module, we determine the covariance matrix $\Sigma$ associated
with $\Delta\mathbf{x}$ by first estimating the covariance matrix
$\tilde{\Sigma}$ associated with the translation error
$\Delta\tilde{\mathbf{x}}$ obtained from CMRNet and then transforming it to
the vehicle reference frame using $\Delta\tilde{\mathbf{r}}$.
We model the covariance matrix $\tilde{\Sigma}$ by following a similar
approach to [37]. Since the covariance matrix is both symmetric and positive-
definite, we consider the decomposition of $\tilde{\Sigma}$ into diagonal
standard deviations $\boldsymbol{\sigma}=[\sigma_{1},\sigma_{2},\sigma_{3}]$
and correlation coefficients
$\boldsymbol{\eta}=[\eta_{21},\eta_{31},\eta_{32}]$
$\displaystyle[\tilde{\Sigma}]_{ii}$ $\displaystyle=\sigma_{i}^{2}$
$\displaystyle[\tilde{\Sigma}]_{ij}$
$\displaystyle=[\Sigma]_{ji}=\eta_{ij}\sigma_{i}\sigma_{j},$ (7)
where $i,j\in\\{1,2,3\\}$ and $j<i$. We estimate these terms using our second DNN module (referred to as CovarianceNet), which has a network structure similar to CMRNet, but with $256$ and $6$ artificial neurons in the last two fully connected layers to prevent overfitting. For stable training, CovarianceNet produces the logarithm of the standard deviation, which is converted to the standard deviation by taking its exponent. Additionally, we use the tanh function to scale the correlation coefficient outputs $\boldsymbol{\eta}$ of CovarianceNet between $\pm 1$. Formally, given a vehicle state $\mathbf{s}$ and camera image $I_{t}$ at time $t$, the standard deviation $\boldsymbol{\sigma}$ and correlation coefficients $\boldsymbol{\eta}$ are approximated as
$\boldsymbol{\sigma},\boldsymbol{\eta}=\textrm{CovarianceNet}(I_{t},L(\mathbf{s},\mathcal{M})).$
(8)
Using the constructed $\tilde{\Sigma}$ from the obtained
$\boldsymbol{\sigma},\boldsymbol{\eta}$, we obtain the covariance matrix
$\Sigma$ associated with $\Delta\mathbf{x}$ as
$\Sigma=\tilde{R}^{\top}\cdot\tilde{\Sigma}\cdot\tilde{R}.$ (9)
We keep the aleatoric uncertainty restricted to position domain errors in this
work for simplicity, and thus treat $\Delta\tilde{\mathbf{r}}$ as a point
estimate. The impact of errors in estimating $\Delta\tilde{\mathbf{r}}$ on
protection levels is taken into consideration as epistemic uncertainty, and is
discussed in more detail in Sections 5.5 and 5.7.
The feature extraction modules in CovarianceNet and CMRNet are separate since
the two tasks are complementary: for estimating position error, the DNN must
learn features that are robust to noise in the inputs while the variance in
the estimated position error depends on the noise itself.
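As an illustration of Eqs. (7) and (9), the sketch below assembles the covariance matrix from raw CovarianceNet outputs. The input conventions (log standard deviations, pre-tanh correlation terms ordered as $\eta_{21},\eta_{31},\eta_{32}$) are assumptions consistent with the text rather than the authors' code.

```python
import numpy as np

def build_covariance(log_sigma, eta_raw, R_tilde):
    """Assemble Sigma from CovarianceNet outputs (Eqs. 7 and 9).

    log_sigma: (3,) log standard deviations; eta_raw: (3,) pre-tanh correlation
    outputs ordered (eta_21, eta_31, eta_32); R_tilde: (3, 3) rotation matrix of
    the DNN rotation-error quaternion.
    """
    sigma = np.exp(log_sigma)         # exponent of the log-std outputs
    eta = np.tanh(eta_raw)            # correlation coefficients scaled to (-1, 1)
    S = np.diag(sigma ** 2)           # diagonal entries, Eq. (7)
    pairs = [(1, 0), (2, 0), (2, 1)]  # (i, j) pairs for eta_21, eta_31, eta_32
    for (i, j), e in zip(pairs, eta):
        S[i, j] = S[j, i] = e * sigma[i] * sigma[j]   # off-diagonal entries, Eq. (7)
    return R_tilde.T @ S @ R_tilde    # transform to the vehicle frame, Eq. (9)
```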
### 5.3 Loss Functions
The loss function for training the DNN must penalize position error outputs that differ from the corresponding ground truth present in the dataset, as well as penalize covariance that overestimates or underestimates the uncertainty in the position error predictions. Furthermore, the loss must incentivize the DNN to extract useful features from the camera image and local map inputs for predicting the position error. Hence, we consider three additive components in our loss function $\mathcal{L}(\cdot)$
$\mathcal{L}=\alpha_{\textrm{Huber}}\mathcal{L}_{\textrm{Huber}}(\Delta\tilde{\mathbf{x}}^{*},\Delta\tilde{\mathbf{x}})+\alpha_{\textrm{MLE}}\mathcal{L}_{\textrm{MLE}}(\Delta\tilde{\mathbf{x}}^{*},\Delta\tilde{\mathbf{x}},\tilde{\Sigma})+\alpha_{\textrm{Ang}}\mathcal{L}_{\textrm{Ang}}(\Delta\tilde{\mathbf{r}}^{*},\Delta\tilde{\mathbf{r}}),$
(10)
where
1. –
$\Delta\tilde{\mathbf{x}}^{*},\Delta\tilde{\mathbf{r}}^{*}$ denote the vector-valued translation and rotation errors in the reference frame of the state estimate $\mathbf{s}$ to the unknown true state $\mathbf{s}^{*}$
2. –
$\mathcal{L}_{\textrm{Huber}}(\cdot)$ denotes the Huber loss function [41]
3. –
$\mathcal{L}_{\textrm{MLE}}(\cdot)$ denotes the loss function for the maximum
likelihood estimation of position error $\Delta\mathbf{x}$ and covariance
$\tilde{\Sigma}$
4. –
$\mathcal{L}_{\textrm{Ang}}(\cdot)$ denotes the quaternion angular distance
from [15]
5. –
$\alpha_{\textrm{Huber}},\alpha_{\textrm{MLE}},\alpha_{\textrm{Ang}}$ are
coefficients for weighting each loss term.
We employ the Huber loss $\mathcal{L}_{\textrm{Huber}}(\cdot)$ and quaternion
angular distance $\mathcal{L}_{\textrm{Ang}}(\cdot)$ terms from [15]. The
Huber loss term $\mathcal{L}_{\textrm{Huber}}(\cdot)$ penalizes the
translation error output $\Delta\tilde{\mathbf{x}}$ of the DNN
$\mathcal{L}_{\textrm{Huber}}(\Delta\tilde{\mathbf{x}}^{*},\Delta\tilde{\mathbf{x}})=\sum_{X=x,y,z}D_{\textrm{Huber}}(\Delta\tilde{X}^{*},\Delta\tilde{X})$
$D_{\textrm{Huber}}(a^{*},a)=\begin{cases}\frac{1}{2}(a-a^{*})^{2}&\textrm{for }|a-a^{*}|\leq\delta\\\ \delta\cdot(|a-a^{*}|-\frac{1}{2}\delta)&\textrm{otherwise}\end{cases},$ (11)
where $\delta$ is a hyperparameter for adjusting the penalty assignment to
small error values. In this paper, we set $\delta=1$. Unlike the more common
mean squared error, the penalty assigned to higher error values is linear in
Huber loss instead of quadratic. Thus, Huber loss is more robust to outliers
and leads to more stable training as compared with squared error. The
quaternion angular distance term $\mathcal{L}_{\textrm{Ang}}(\cdot)$ penalizes
the rotation error output $\Delta\tilde{\mathbf{r}}$ from CMRNet
$\mathcal{L}_{\textrm{Ang}}(\Delta\tilde{\mathbf{r}}^{*},\Delta\tilde{\mathbf{r}})=D_{\textrm{Ang}}(\Delta\tilde{\mathbf{r}}^{*}\times\Delta\tilde{\mathbf{r}}^{-1})$
$D_{\textrm{Ang}}(\mathbf{q})=\operatorname{atan2}\left(\sqrt{q_{2}^{2}+q_{3}^{2}+q_{4}^{2}},|q_{1}|\right),$ (12)
where
1. –
$q_{i}$ denotes the $i$th element of the quaternion $\mathbf{q}$
2. –
$\Delta\mathbf{r}^{-1}$ denotes the inverse of the quaternion $\Delta\mathbf{r}$
3. –
$\mathbf{q}\times\mathbf{r}$ here denotes the quaternion (Hamilton) product of $\mathbf{q}$ and $\mathbf{r}$
4. –
$\operatorname{atan2}(\cdot)$ is the two-argument version of the arctangent function.
Including the quaternion angular distance term
$\mathcal{L}_{\textrm{Ang}}(\cdot)$ in the loss function incentivizes the DNN
to learn features that are relevant to the geometry between the camera image
and the local depth map. Hence, it provides additional supervision to the DNN
training as a multi-task objective [42], and is important for the stability
and speed of the training process.
The maximum likelihood loss term $\mathcal{L}_{\textrm{MLE}}(\cdot)$ depends
on both the translation error $\Delta\tilde{\mathbf{x}}$ and covariance matrix
$\tilde{\Sigma}$ estimated from the DNN. The loss function is analogous to the
negative log-likelihood of the Gaussian distribution
$\mathcal{L}_{\textrm{MLE}}(\Delta\tilde{\mathbf{x}}^{*},\Delta\tilde{\mathbf{x}},\tilde{\Sigma})=\frac{1}{2}\log|\tilde{\Sigma}|+\frac{1}{2}(\Delta\tilde{\mathbf{x}}^{*}-\Delta\tilde{\mathbf{x}})^{\top}\cdot\tilde{\Sigma}^{-1}\cdot(\Delta\tilde{\mathbf{x}}^{*}-\Delta\tilde{\mathbf{x}})$
(13)
If the covariance output from the DNN has small values, the corresponding translation error is penalized much more than a translation error corresponding to a large-valued covariance. Hence, the maximum likelihood loss term $\mathcal{L}_{\textrm{MLE}}(\cdot)$ incentivizes the DNN to output a small covariance only when the corresponding translation error output has high confidence, and to output a large covariance otherwise.
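A minimal PyTorch-style sketch of the combined loss in Eq. (10) follows, assuming single-sample tensors, unit quaternions in $(w,x,y,z)$ order and illustrative weighting coefficients; the Hamilton-product helper reflects our reading of Eq. (12) and is not the authors' training code.

```python
import torch
import torch.nn.functional as F

def angular_distance(q_true, q_pred):
    # D_Ang of Eq. (12) on the relative quaternion q_true x q_pred^{-1};
    # for unit quaternions the inverse is the conjugate
    q_inv = q_pred * torch.tensor([1.0, -1.0, -1.0, -1.0])
    w1, x1, y1, z1 = q_true
    w2, x2, y2, z2 = q_inv
    q = torch.stack([w1*w2 - x1*x2 - y1*y2 - z1*z2,     # Hamilton product
                     w1*x2 + x1*w2 + y1*z2 - z1*y2,
                     w1*y2 - x1*z2 + y1*w2 + z1*x2,
                     w1*z2 + x1*y2 - y1*x2 + z1*w2])
    return torch.atan2(q[1:].norm(), q[0].abs())

def loss(dx_true, dx_pred, Sigma, q_true, q_pred,
         a_huber=1.0, a_mle=1.0, a_ang=1.0):      # coefficients are assumed values
    huber = F.huber_loss(dx_pred, dx_true, reduction='sum', delta=1.0)  # Eq. (11)
    r = (dx_true - dx_pred).unsqueeze(-1)
    mle = 0.5 * torch.logdet(Sigma) \
        + 0.5 * (r.transpose(-1, -2) @ torch.linalg.inv(Sigma) @ r).squeeze()  # Eq. (13)
    ang = angular_distance(q_true, q_pred)                                     # Eq. (12)
    return a_huber * huber + a_mle * mle + a_ang * ang   # Eq. (10)
```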
### 5.4 Multiple Candidate State Selection
To assess the uncertainty in the DNN-based position error estimation process
as well as the uncertainty from environmental factors, we evaluate the DNN
output at $N_{C}$ candidate states
$\\{\mathbf{s}^{1}_{t}\ldots,\mathbf{s}^{N_{C}}_{t}\\}$ in the neighborhood of
the state estimate $\mathbf{s}_{t}$.
For selecting the candidate states
$\\{\mathbf{s}^{1}_{t}\ldots,\mathbf{s}^{N_{C}}_{t}\\}$, we randomly generate
multiple values of translation offset
$\\{\mathbf{t}^{1},\ldots,\mathbf{t}^{N_{C}}\\}$ and rotation offset
$\\{\mathbf{r}^{1},\ldots,\mathbf{r}^{N_{C}}\\}$ about the state estimate
$\mathbf{s}_{t}$, where $N_{C}$ is the total number of selected candidate
states. The $i$th translation offset $\mathbf{t}^{i}\in\mathbb{R}^{3}$ denotes
translation in $x,y$ and $z$ dimensions and is sampled from a uniform
probability distribution between a specified range $\pm t_{max}$ in each
dimension. Similarly, the $i$th rotation offset
$\mathbf{r}^{i}\in\textrm{SU}(2)$ is obtained by uniformly sampling between
$\pm r_{max}$ angular deviations about each axis and converting the resulting
rotation to a quaternion. The $i$th candidate state $\mathbf{s}^{i}_{t}$ is
generated by rotating and translating the state estimate $\mathbf{s}_{t}$ by
$\mathbf{r}^{i}$ and $\mathbf{t}^{i}$, respectively. Corresponding to each
candidate state $\mathbf{s}^{i}_{t}$, we generate a local depth map
$L(\mathbf{s}^{i}_{t},\mathcal{M})$ using the procedure laid out in Section 5.1.
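The sampling procedure above can be sketched as follows using SciPy's rotation utilities; the number of candidates and the offset ranges $t_{max}$ and $r_{max}$ are placeholder values, since the text does not fix them here, and quaternions follow SciPy's $(x,y,z,w)$ ordering.

```python
import numpy as np
from scipy.spatial.transform import Rotation

def sample_candidates(x, q, n_c=24, t_max=1.0, r_max=np.deg2rad(5.0)):
    """Draw N_C candidate states around the state estimate (x, q).

    x: (3,) position; q: orientation quaternion in SciPy (x, y, z, w) order;
    t_max, r_max: uniform offset ranges (assumed values).
    """
    R0 = Rotation.from_quat(q)
    candidates, t_offsets = [], []
    for _ in range(n_c):
        t = np.random.uniform(-t_max, t_max, size=3)    # translation offset t^i
        ang = np.random.uniform(-r_max, r_max, size=3)  # angular offsets per axis
        r = Rotation.from_euler('xyz', ang)             # rotation offset r^i
        candidates.append((x + t, (r * R0).as_quat()))  # rotated, translated state
        t_offsets.append(t)
    return candidates, np.array(t_offsets)
```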
### 5.5 Linear Transformation of Position Errors
Using each local depth map $L(\mathbf{s}^{i}_{t},\mathcal{M})$ and camera
image $I_{t}$ for the $i$th candidate state $\mathbf{s}^{i}_{t}$ as inputs to
the DNN in Section 5.2, we evaluate the candidate state position error
$\Delta\mathbf{x}^{i}_{t}$ and covariance matrix $\Sigma^{i}_{t}$. From the
known translation offset $\mathbf{t}^{i}$ between the candidate state
$\mathbf{s}^{i}_{t}$ and the state estimate $\mathbf{s}_{t}$ and the DNN-based
rotation error $\Delta\tilde{\mathbf{r}}_{t}$ in $\mathbf{s}_{t}$, we compute
the transformation matrix $H_{\mathbf{s}^{i}_{t}\to\mathbf{s}_{t}}$ for
converting the candidate state position error $\Delta\mathbf{x}^{i}_{t}$ to
the state estimate position error $\Delta\mathbf{x}_{t}$ in the vehicle
reference frame
$H_{\mathbf{s}^{i}_{t}\to\mathbf{s}_{t}}=\left[\begin{matrix}I_{3\times
3}&-\tilde{R}_{t}^{\top}\mathbf{t}^{i}\end{matrix}\right],$ (14)
where $I_{3\times 3}$ denotes the identity matrix and $\tilde{R}_{t}$ is the
$3\times 3$ rotation matrix computed from the DNN-based rotation error
$\Delta\tilde{\mathbf{r}}_{t}$ between the state estimate $\mathbf{s}_{t}$ and
the unknown true state $\mathbf{s}^{*}_{t}$. Note that the rotation offset
$\mathbf{r}^{i}$ is not used in the transformation, since we are only
concerned with the position errors from the true state $\mathbf{s}^{*}_{t}$ to
the state estimate $\mathbf{s}_{t}$, which are invariant to the orientation of
the candidate state $\mathbf{s}^{i}_{t}$. Using the transformation matrix
$H_{\mathbf{s}^{i}_{t}\to\mathbf{s}_{t}}$, we obtain the $i$th sample of the
state estimate position error $\Delta\mathbf{x}_{t}^{(i)}$
$\Delta\mathbf{x}_{t}^{(i)}=H_{\mathbf{s}^{i}_{t}\to\mathbf{s}_{t}}\cdot[\begin{matrix}\Delta\mathbf{x}^{i}_{t}&1\end{matrix}]^{\top}=\Delta\mathbf{x}^{i}_{t}-\tilde{R}_{t}^{\top}\mathbf{t}^{i}.$
(15)
We use parentheses in the notation $\Delta\mathbf{x}_{t}^{(i)}$ for the
transformed samples of the position error between the true state
$\mathbf{s}^{*}_{t}$ and the state estimate $\mathbf{s}_{t}$ to differentiate
from the position error $\Delta\mathbf{x}^{i}_{t}$ between
$\mathbf{s}^{*}_{t}$ and the candidate state $\mathbf{s}^{i}_{t}$. Next, we
modify the candidate state covariance matrix $\Sigma^{i}_{t}$ to account for
uncertainty in DNN-based rotation error $\Delta\tilde{\mathbf{r}}_{t}$. The
resulting covariance matrix $\Sigma^{(i)}_{t}$ in terms of the covariance
matrix $\Sigma^{i}_{t}$ for $\Delta\mathbf{x}^{i}_{t}$, $\tilde{R}_{t}$ and
$\mathbf{t}^{i}$ is
$\Sigma^{(i)}_{t}=\Sigma^{i}_{t}+\textrm{Var}[\tilde{R}_{t}^{\top}\mathbf{t}^{i}].$
(16)
Assuming small errors in determining the true rotation offsets between state
estimate $\mathbf{s}_{t}$ and the true state $\mathbf{s}^{*}_{t}$, we consider
the random variable $R^{\prime}\tilde{R}_{t}^{\top}\mathbf{t}^{i}$ where
$R^{\prime}$ represents the random rotation matrix corresponding to small
angular deviations [43]. Using $R^{\prime}\tilde{R}_{t}^{\top}\mathbf{t}^{i}$,
we approximate the covariance matrix $\Sigma^{(i)}_{t}$ as
$\Sigma^{(i)}_{t}\approx\Sigma^{i}_{t}+\mathbb{E}[(R^{\prime}-I)(\tilde{R}_{t}^{\top}\mathbf{t}^{i})(\tilde{R}_{t}^{\top}\mathbf{t}^{i})^{\top}(R^{\prime}-I)^{\top}]$
$[\Sigma^{(i)}_{t}]_{i^{\prime}j^{\prime}}\approx[\Sigma^{i}_{t}]_{i^{\prime}j^{\prime}}+\mathbb{E}[(\mathbf{r}^{\prime}_{i^{\prime}})^{\top}(\tilde{R}_{t}^{\top}\mathbf{t}^{i})(\tilde{R}_{t}^{\top}\mathbf{t}^{i})^{\top}(\mathbf{r}^{\prime}_{j^{\prime}})]=[\Sigma^{i}_{t}]_{i^{\prime}j^{\prime}}+\mathrm{Tr}\left((\tilde{R}_{t}^{\top}\mathbf{t}^{i})(\tilde{R}_{t}^{\top}\mathbf{t}^{i})^{\top}\mathbb{E}[(\mathbf{r}^{\prime}_{i^{\prime}})(\mathbf{r}^{\prime}_{j^{\prime}})^{\top}]\right)=[\Sigma^{i}_{t}]_{i^{\prime}j^{\prime}}+\mathrm{Tr}\left((\tilde{R}_{t}^{\top}\mathbf{t}^{i})(\tilde{R}_{t}^{\top}\mathbf{t}^{i})^{\top}Q_{i^{\prime}j^{\prime}}\right),$ (17)
where $(\mathbf{r}^{\prime}_{i})^{\top}$ represents the $i$th row vector in
$R^{\prime}-I$. Since errors in $\tilde{R}$ depend on the DNN output, we
specify $R^{\prime}$ through the empirical distribution of the angular
deviations in $\tilde{R}$ as observed for the trained DNN on the training and
validation data, and precompute the expectation $Q_{i^{\prime}j^{\prime}}$ for
each $(i^{\prime},j^{\prime})$ pair.
The samples of state estimate position error
$\\{\Delta\mathbf{x}_{t}^{(1)},\ldots,\Delta\mathbf{x}_{t}^{(N_{C})}\\}$
represent both inaccuracy in the DNN estimation as well as uncertainties due
to environmental factors. If the DNN approximation fails at the input
corresponding to the state estimate $\mathbf{s}_{t}$, the estimated position
errors at candidate states would lead to a wide range of different values for
the state estimate position errors. Similarly, if the environment map
$\mathcal{M}$ near the state estimate $\mathbf{s}_{t}$ contains repetitive
features, the position errors computed from candidate states would be
different and hence indicate high uncertainty.
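Eqs. (15)-(17) amount to shifting each candidate-state error by the rotated translation offset and inflating its covariance with a precomputed rotation-noise term. A sketch is given below, with the $Q_{i^{\prime}j^{\prime}}$ terms assumed to be precomputed from the empirical angular-deviation distribution as described above.

```python
import numpy as np

def transform_error_sample(dx_i, Sigma_i, t_i, R_tilde, Q):
    """Map a candidate-state error into a state-estimate error sample.

    dx_i: (3,) DNN position error at candidate i; Sigma_i: (3, 3) its covariance;
    t_i: (3,) translation offset of the candidate; R_tilde: (3, 3) rotation matrix
    of the DNN rotation error; Q: (3, 3, 3, 3) precomputed E[r'_a r'_b^T] terms.
    """
    u = R_tilde.T @ t_i                  # rotated offset
    dx = dx_i - u                        # Eq. (15)
    Sigma = Sigma_i.copy()
    outer = np.outer(u, u)
    for a in range(3):
        for b in range(3):
            Sigma[a, b] += np.trace(outer @ Q[a, b])   # Eq. (17)
    return dx, Sigma
```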
### 5.6 Outlier Weights
Since the candidate states
$\\{\mathbf{s}^{1}_{t}\ldots,\mathbf{s}^{N_{C}}_{t}\\}$ are selected randomly,
some position error samples may correspond to the local depth map and camera
image pairs for which the DNN performs poorly. Thus, we compute outlier
weights $\\{\mathbf{w}^{(1)}_{t},\ldots,\mathbf{w}^{(N_{C})}_{t}\\}$
corresponding to the position error samples
$\\{\Delta\mathbf{x}_{t}^{(1)},\ldots,\Delta\mathbf{x}_{t}^{(N_{C})}\\}$ to
mitigate the effect of these erroneous position error values in determining
the protection levels. We compute outlier weights in each of the $x,y,$ and
$z$-dimensions separately, since the DNN approximation might not necessarily
fail in all of its outputs. An example of this scenario is when the input
camera image and local map contain features such as building edges that can be
used to robustly determine errors along certain directions but not others.
For computing the outlier weights
$\mathbf{w}_{t}^{(i)}=[w^{(i)}_{x,t},w^{(i)}_{y,t},w^{(i)}_{z,t}]$ associated
with the $i$th position error value $\Delta\mathbf{x}_{t}^{(i)}=[\Delta
x_{t}^{(i)},\Delta y_{t}^{(i)},\Delta z_{t}^{(i)}]$, we employ the robust
Z-score based outlier detection technique [31]. The robust Z-score is used in
a variety of anomaly detection approaches due to its resilience to outliers
[44]. We apply the following operations in each dimension $X=x,y,$ and $z$:
1. 1.
We compute the Median Absolute Deviation statistic [31] ${M\negthinspace
AD}_{X}$ using all position error values $\\{\Delta X_{t}^{(1)},\ldots,\Delta
X_{t}^{(N_{C})}\\}$
${M\negthinspace AD}_{X}=\operatorname{median}(|\Delta
X_{t}^{(i)}-\operatorname{median}(\Delta X_{t}^{(i)})|).$ (18)
2. 2.
Using the statistic ${M\negthinspace AD}_{X}$, we compute the robust Z-score
$\mathcal{Z}^{(i)}_{X}$ for each position error value $\Delta X_{t}^{(i)}$
$\mathcal{Z}^{(i)}_{X}=\frac{|\Delta X_{t}^{(i)}-\operatorname{median}(\Delta
X_{t}^{(i)})|}{{M\negthinspace AD}_{X}}.$ (19)
The robust Z-score $\mathcal{Z}^{(i)}_{X}$ is high if the position error
$\Delta\mathbf{x}^{(i)}$ deviates from the median error with a large value
when compared with the median deviation value.
3. 3.
We compute the outlier weights $\\{w^{(1)}_{X},\ldots,w^{(N_{C})}_{X}\\}$ from
the robust Z-scores
$\\{\mathcal{Z}^{(1)}_{X},\ldots,\mathcal{Z}^{(N_{C})}_{X}\\}$ by applying the
softmax operation [45] such that the sum of weights is unity
$w^{(i)}_{X,t}=\frac{e^{-\gamma\cdot\mathcal{Z}^{(i)}_{X}}}{\sum_{j=1}^{N_{C}}e^{-\gamma\cdot\mathcal{Z}^{(j)}_{X}}},$
(20)
where $\gamma$ denotes the scaling coefficient in the softmax function. We set
$\gamma=0.6745$ as the approximate inverse of the standard normal distribution
evaluated at $3/4$ to make the scaling in the statistic consistent with the
standard deviation of a normal distribution [31]. A small value of outlier
weight $w^{(i)}_{X,t}$ indicates that the position error $\Delta X_{t}^{(i)}$
is an outlier.
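Per dimension, the three steps above reduce to a few lines of NumPy, as in the sketch below; the guard against a zero MAD is our addition and is not discussed in the text.

```python
import numpy as np

def outlier_weights(errors, gamma=0.6745):
    """Robust Z-score outlier weights for one dimension (Eqs. 18-20).

    errors: (N_C,) position error samples along x, y or z.
    """
    med = np.median(errors)
    mad = np.median(np.abs(errors - med))  # Eq. (18), Median Absolute Deviation
    mad = max(mad, 1e-12)                  # guard against a zero MAD (our addition)
    z = np.abs(errors - med) / mad         # Eq. (19), robust Z-score
    w = np.exp(-gamma * z)
    return w / w.sum()                     # Eq. (20), softmax normalization
```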
For brevity, we extract the diagonal variances associated with each dimension
for all position error samples
$(\sigma^{2}_{x,t})^{(i)}=[\Sigma^{(i)}_{t}]_{11},\qquad(\sigma^{2}_{y,t})^{(i)}=[\Sigma^{(i)}_{t}]_{22},\qquad(\sigma^{2}_{z,t})^{(i)}=[\Sigma^{(i)}_{t}]_{33}.$ (21)
### 5.7 Probability Distribution of Position Error
We construct a probability distribution in each of the $X=x,y$ and
$z$-dimensions from the previously obtained samples of position errors $\Delta
X^{(i)}_{t}$, variances $(\sigma^{2}_{X,t})^{(i)}$ and outlier weights
$w^{(i)}_{X,t}$. We model the probability distribution using the Gaussian
Mixture Model (GMM) distribution [32]
$\mathbb{P}(\rho_{X,t})=\sum_{i=1}^{N_{C}}w^{(i)}_{X,t}\mathcal{N}\left(\Delta X_{t}^{(i)},(\sigma^{2}_{X,t})^{(i)}\right),$ (22)
where
1. –
$\rho_{X,t}$ denotes the position error random variable
2. –
$\mathcal{N}(\mu,\sigma^{2})$ is the Gaussian distribution with mean $\mu$ and
variance $\sigma^{2}$.
The probability distributions $\mathbb{P}(\rho_{x,t})$,
$\mathbb{P}(\rho_{y,t})$ and $\mathbb{P}(\rho_{z,t})$ incorporate both
aleatoric uncertainty from the DNN-based covariance and epistemic uncertainty
from the multiple DNN evaluations associated with different candidate states.
Both the position error and covariance matrix depend on the rotation error
point estimate from CMRNet for transforming the error values to the vehicle
reference frame. Since each DNN evaluation for a candidate state estimates the
rotation error independently, the epistemic uncertainty incorporates the
effects of errors in DNN-based estimation of both rotation and translation.
The epistemic uncertainty is reflected in the multiple GMM components and
their weight coefficients, which represent the different possible position
error values that may arise from the same camera image measurement and the
environment map. The aleatoric uncertainty is present as the variance in each
possible value of the position error represented by the individual components.
### 5.8 Protection Levels
We compute the protection levels along the lateral, longitudinal and vertical
directions using the probability distributions obtained in the previous
section. Since the position errors are in the vehicle reference frame, the
$x,y$ and $z$-dimensions coincide with the lateral, longitudinal and the
vertical directions, respectively. First, we obtain the cumulative
distribution function $\textrm{CDF}(\cdot)$ for each probability distribution
$\textrm{CDF}(\rho_{X,t})=\sum_{i=1}^{N_{C}}w^{(i)}_{X,t}\Phi\left(\frac{\rho_{X,t}-\Delta X_{t}^{(i)}}{(\sigma_{X,t})^{(i)}}\right),$ (23)
where $\Phi(\cdot)$ is the cumulative distribution function of the standard
normal distribution. Then, for a specified value of the integrity risk $IR$,
we compute the protection level $PL$ in lateral, longitudinal and vertical
directions from equation 1 using the CDF as the probability distribution. For numerical optimization, we employ a simple interval-halving line search, i.e., the bisection method [46]. To account for both positive and negative errors, we perform the optimization using both the CDF (supremum) and $1-\textrm{CDF}$ (infimum) with $IR/2$ as the integrity risk, and use the maximum absolute value as the protection level.
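A minimal sketch of this computation is shown below, assuming the GMM parameters from Section 5.7 are available as NumPy arrays; the bracketing interval and tolerance of the bisection are assumptions of this sketch:

```python
import numpy as np
from scipy.stats import norm

def gmm_cdf(q, means, sigmas, weights):
    """CDF of the position error GMM, Eq. (23), evaluated at q."""
    return float(np.sum(weights * norm.cdf((q - means) / sigmas)))

def protection_level(means, sigmas, weights, ir, lo=-100.0, hi=100.0, tol=1e-4):
    """Protection level for integrity risk `ir` via bisection [46].

    Solves for both tails with ir/2 each and returns the larger magnitude,
    as described above. The bracket [lo, hi] (metres) and the tolerance
    are assumptions of this sketch.
    """
    def solve(target):
        a, b = lo, hi
        while b - a > tol:
            m = 0.5 * (a + b)
            if gmm_cdf(m, means, sigmas, weights) < target:
                a = m
            else:
                b = m
        return 0.5 * (a + b)

    q_low = solve(ir / 2.0)          # lower tail: CDF = IR/2
    q_high = solve(1.0 - ir / 2.0)   # upper tail: 1 - CDF = IR/2
    return max(abs(q_low), abs(q_high))
```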
The computed protection levels account for heavy tails in the GMM probability distribution of the position error, which arise from the different possible position error values that can be computed from the available camera measurements and environment map. Our method computes large protection levels when many different position error values are nearly equally probable given the measurements, resulting in larger tail probabilities in the GMM, and small protection levels only when the uncertainty from both aleatoric and epistemic sources is small.
## 6 Experimental Results
### 6.1 Real-World Driving Dataset
We use the KITTI visual odometry dataset [47] to evaluate the performance of
the protection levels computed by our approach. The dataset was recorded
around Karlsruhe, Germany over multiple driving sequences and contains images
recorded by multiple on-board cameras, along with ground truth positions and
orientations. Additionally, the dataset contains LiDAR point cloud
measurements which we use to generate the environment map corresponding to
each sequence. Since our approach for computing protection levels requires only a monocular camera, we use the images recorded by the left RGB camera in our experiments. We use the sequences $00$, $03$, $05$, $06$, $07$,
$08$ and $09$ from the dataset based on the availability of a LiDAR
environment map. We use sequence $00$ for validation of our approach and the
rest of the sequences are utilized in training our DNN. The experimental
parameters are provided in Table 5.
### 6.2 LiDAR Environment Map
To construct a precise LiDAR point cloud map $\mathcal{M}$ of the environment,
we exploit the openly available position and orientation values for the
dataset computed via Simultaneous Localization and Mapping [4]. Similar to
[15], we aggregate the LiDAR point clouds across all time instances. Then, we detect and remove sparse outliers within the aggregated point cloud by computing the Z-score [31] of each point within a $0.1$ m local neighborhood and discarding points with a Z-score greater than $3$. Finally, the remaining points are downsampled into a voxel map of the environment $\mathcal{M}$ with a resolution of $0.1$ m. The corresponding map for sequence
00 in the KITTI dataset is shown in Fig. 5. For storing large maps, we divide
the LiDAR point cloud sequences into multiple overlapping parts and construct
separate maps of roughly $500$ Megabytes each.
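The pipeline can be sketched with the Open3D library as below; note that Open3D's statistical outlier removal (mean distance to $k$ nearest neighbors) is used here as a stand-in for the robust Z-score test in a $0.1$ m neighborhood, and the `scans` list is assumed to hold per-sweep point clouds already transformed into the map frame:

```python
import open3d as o3d

scans = []  # assumed: list of o3d.geometry.PointCloud, one per LiDAR sweep,
            # already transformed into the common map frame via SLAM poses

# Aggregate the per-scan clouds into one point cloud.
pcd = o3d.geometry.PointCloud()
for scan in scans:
    pcd += scan

# Remove sparse outliers. Open3D's statistical filter discards points whose
# mean distance to their neighbors exceeds 3 standard deviations; it stands
# in for the paper's Z-score test within a 0.1 m neighborhood.
pcd, _ = pcd.remove_statistical_outlier(nb_neighbors=20, std_ratio=3.0)

# Downsample into a 0.1 m-resolution voxel map and store one map part.
voxel_map = pcd.voxel_down_sample(voxel_size=0.1)
o3d.io.write_point_cloud("map_part_00.pcd", voxel_map)
```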
### 6.3 DNN Training and Testing Datasets
We generate the training dataset for our DNN in two steps. First, we randomly
select a state estimate $s_{t}$ at time $t$ from within a $2$ m translation
and a $10^{\circ}$ rotation of the ground truth positions and orientations in
each driving sequence. The translation and rotation used for generating the
state estimate is utilized as the ground truth position error
$\Delta\mathbf{x}^{*}_{t}$ and orientation error $\Delta\mathbf{r}^{*}_{t}$.
Then, using the LiDAR map $\mathcal{M}$, we generate the local depth map
$L(\mathbf{s}_{t},\mathcal{M})$ corresponding to the state estimate
$\mathbf{s}_{t}$ and use it as the DNN input along with the camera image
$I_{t}$ from the driving sequence data. The training dataset comprises camera images from $11455$ different time instances; the state estimate is sampled at runtime so that the same camera image is paired with different state estimates across epochs.
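A sketch of the perturbation step is given below, assuming per-axis uniform sampling of the offsets (the paper only specifies the $2$ m and $10^{\circ}$ bounds) and SciPy rotations for the orientation:

```python
import numpy as np
from scipy.spatial.transform import Rotation

rng = np.random.default_rng()

def sample_state_estimate(gt_position, gt_rotation, t_max=2.0, r_max_deg=10.0):
    """Perturb a ground-truth pose to obtain a training state estimate.

    gt_position: (3,) array; gt_rotation: scipy Rotation.
    The applied offsets double as the regression targets
    (the ground-truth errors Delta x*, Delta r*).
    """
    dx = rng.uniform(-t_max, t_max, size=3)  # translation offset (m), per axis
    dr = Rotation.from_euler(
        "xyz", rng.uniform(-r_max_deg, r_max_deg, size=3), degrees=True)
    return gt_position + dx, dr * gt_rotation, dx, dr
```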
Similar to the data augmentation techniques described in [15], we
1. randomly change the contrast, saturation and brightness of the images,
2. apply random rotations in the range of $\pm 5^{\circ}$ to both the camera images and the local depth maps, and
3. horizontally mirror the camera image and compute the local depth map using a modified camera projection matrix.
All three of these data augmentation techniques are used in training CMRNet in
the first half of the optimization process. However, for training
CovarianceNet, we skip the contrast, saturation and brightness changes during
the second half of the optimization so that the DNN can learn real-world noise
features from camera images.
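A sketch of the augmentation with torchvision follows; the jitter magnitudes are our assumptions, and the mirrored depth map is in practice regenerated with a modified projection matrix rather than flipped, as noted in the comment:

```python
import random
import torchvision.transforms as T
import torchvision.transforms.functional as TF

# Photometric jitter; the magnitudes are assumptions of this sketch.
color_jitter = T.ColorJitter(brightness=0.4, contrast=0.4, saturation=0.4)

def augment(image, depth_map, jitter=True):
    # (1) contrast/saturation/brightness changes on the camera image only;
    #     skipped (jitter=False) when training CovarianceNet.
    if jitter:
        image = color_jitter(image)
    # (2) the same random rotation in [-5, 5] degrees applied to both inputs.
    angle = random.uniform(-5.0, 5.0)
    image = TF.rotate(image, angle)
    depth_map = TF.rotate(depth_map, angle)
    # (3) horizontal mirroring of the image; the matching depth map should be
    #     regenerated with a mirrored camera projection matrix (not shown).
    if random.random() < 0.5:
        image = TF.hflip(image)
    return image, depth_map
```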
We generate the validation and test datasets from sequence $00$ in the KITTI
odometry dataset, which is not used for training. We follow a similar
procedure as the one for generating the training dataset, except we do not
augment the data. The validation dataset comprises of randomly selected $100$
time instances from sequence $00$, while the test dataset contains the
remaining $4441$ time instances in sequence $00$.
Parameter | Value
---|---
Integrity risk $IR$ | $0.01$
Candidate state maximum translation offset $t_{max}$ | $1.0$ m
Candidate state maximum rotation offset $r_{max}$ | $5^{\circ}$
Number of candidate states $N_{C}$ | $24$
Lateral alarm limit $AL_{lat}$ | $0.85$ m
Longitudinal alarm limit $AL_{lon}$ | $1.50$ m
Vertical alarm limit $AL_{vert}$ | $1.47$ m
Table 5: Experimental parameters
Figure 5: 3D LiDAR environment map from KITTI dataset sequence 00 [47].
### 6.4 Training Procedure
We train the DNN using stochastic gradient descent. Directly optimizing via
the maximum likelihood loss term $\mathcal{L}_{\textrm{MLE}}(\cdot)$ might
suffer from instability caused by the interdependence between the translation
error $\Delta\tilde{\mathbf{x}}$ and covariance $\tilde{\Sigma}$ outputs [48].
Therefore, we employ the mean-variance split training strategy proposed in
[48]: First, we set
$(\alpha_{\textrm{Huber}}=1,\alpha_{\textrm{MLE}}=1,\alpha_{\textrm{Ang}}=1)$
and optimize only the parameters of CMRNet until the validation error stops decreasing. Next, we set
$(\alpha_{\textrm{Huber}}=0,\alpha_{\textrm{MLE}}=1,\alpha_{\textrm{Ang}}=0)$
and optimize the parameters of CovarianceNet. We alternate between these two steps until the validation loss stops decreasing. Our DNN is implemented using the PyTorch library [49] and takes advantage of the open-source implementation available for CMRNet [15] as well as the available pretrained weights for initialization. Similar to CMRNet, all the layers in our DNN use the leaky ReLU activation function with a negative slope of $0.1$. We train the DNN using a single NVIDIA Tesla P40 GPU with a batch size of $24$ and a learning rate of $10^{-5}$ selected via grid search.
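The alternation can be sketched as below, with placeholder networks and a synthetic data stream standing in for CMRNet, CovarianceNet and the training set; the angular loss term is omitted since the placeholders regress translation only:

```python
import torch
import torch.nn.functional as F

# Placeholder networks standing in for CMRNet and CovarianceNet.
cmrnet = torch.nn.Linear(16, 3)     # translation error head
covnet = torch.nn.Linear(16, 3)     # per-axis log-variance head
opt_pose = torch.optim.SGD(cmrnet.parameters(), lr=1e-5)
opt_var = torch.optim.SGD(covnet.parameters(), lr=1e-5)

def batches():
    while True:  # synthetic stand-in for the training data stream
        yield torch.randn(24, 16), torch.randn(24, 3)

loader = batches()

def train_phase(train_pose, steps):
    """One phase of mean-variance split training [48]."""
    opt = opt_pose if train_pose else opt_var
    for _ in range(steps):
        feats, target = next(loader)
        pred, logvar = cmrnet(feats), covnet(feats)
        # Gaussian negative log-likelihood (the MLE term).
        mle = (0.5 * (pred - target) ** 2 / logvar.exp() + 0.5 * logvar).mean()
        if train_pose:   # (a_Huber, a_MLE, a_Ang) = (1, 1, 1): update CMRNet only
            loss = F.smooth_l1_loss(pred, target) + mle
        else:            # (0, 1, 0): update CovarianceNet only
            loss = mle
        opt_pose.zero_grad(); opt_var.zero_grad()
        loss.backward()
        opt.step()

# Alternate the two phases until the validation loss stops decreasing.
train_phase(True, steps=100)
train_phase(False, steps=100)
```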
### 6.5 Metrics
We evaluate the lateral, longitudinal and vertical protection levels computed by our approach with the following three metrics (subscript $t$ dropped for brevity; a code sketch follows the list):
1. Bound gap measures the difference between the computed protection levels $PL_{lat},PL_{lon},PL_{vert}$ and the true position error magnitude during nominal operations (protection level less than the alarm limit and greater than the position error)
$BG_{lat}=\textrm{avg}(PL_{lat}-|\Delta x^{*}|),\quad BG_{lon}=\textrm{avg}(PL_{lon}-|\Delta y^{*}|),\quad BG_{vert}=\textrm{avg}(PL_{vert}-|\Delta z^{*}|),$ (24)
where
– $BG_{lat},BG_{lon}$ and $BG_{vert}$ denote the bound gaps in the lateral, longitudinal and vertical dimensions, respectively, and
– $\textrm{avg}(\cdot)$ denotes the average computed over the test dataset instances for which the protection level is greater than the position error and less than the alarm limit.
A small bound gap $BG_{lat},BG_{lon},BG_{vert}$ is desirable, since it implies that the algorithm both accurately estimates the position error magnitude during nominal operations and has low uncertainty in its prediction. We only consider the bound gap for nominal operations, since the estimated position is declared unsafe when the protection level exceeds the alarm limit.
2. Failure rate measures the fraction of time instances in the test data sequence for which the computed protection levels $PL_{lat},PL_{lon},PL_{vert}$ are smaller than the true position error magnitude
$FR_{lat}=\frac{1}{T_{\textrm{max}}}\sum_{t=1}^{T_{\textrm{max}}}\mathbbm{1}_{t}\left(PL_{lat}<|\Delta x^{*}|\right),\quad FR_{lon}=\frac{1}{T_{\textrm{max}}}\sum_{t=1}^{T_{\textrm{max}}}\mathbbm{1}_{t}\left(PL_{lon}<|\Delta y^{*}|\right),\quad FR_{vert}=\frac{1}{T_{\textrm{max}}}\sum_{t=1}^{T_{\textrm{max}}}\mathbbm{1}_{t}\left(PL_{vert}<|\Delta z^{*}|\right),$ (25)
where
– $FR_{lat},FR_{lon}$ and $FR_{vert}$ denote the failure rates for the lateral, longitudinal and vertical protection levels, respectively,
– $\mathbbm{1}_{t}(\cdot)$ denotes the indicator function computed from the protection level and true position error values at time $t$; it evaluates to $1$ if the event in its argument holds true, and to $0$ otherwise, and
– $T_{\textrm{max}}$ denotes the total time duration of the test sequence.
The failure rate $FR_{lat},FR_{lon},FR_{vert}$ should be consistent with the specified value of the integrity risk $IR$ to meet the safety requirements.
3. False alarm rate is computed for specified alarm limits $AL_{lat},AL_{lon},AL_{vert}$ in the lateral, longitudinal and vertical directions, and measures the fraction of time instances in the test data sequence for which the computed protection levels $PL_{lat},PL_{lon},PL_{vert}$ exceed the alarm limit while the position error magnitude is within the alarm limit. We first define the following integrity events
$\Omega_{lat,PL}=(PL_{lat}>AL_{lat}),\quad\Omega_{lat,PE}=(|\Delta x^{*}|>AL_{lat}),\quad\Omega_{lon,PL}=(PL_{lon}>AL_{lon}),\quad\Omega_{lon,PE}=(|\Delta y^{*}|>AL_{lon}),\quad\Omega_{vert,PL}=(PL_{vert}>AL_{vert}),\quad\Omega_{vert,PE}=(|\Delta z^{*}|>AL_{vert}).$ (26)
The complement of each event is denoted by $\bar{\Omega}$. Next, we define the counts of false alarms $N_{X,FA}$, true alarms $N_{X,TA}$ and instances where the position error exceeds the alarm limit $N_{X,PE}$, with $X=lat,lon$ and $vert$
$N_{X,FA}=\sum_{t=1}^{T_{\textrm{max}}}\mathbbm{1}_{t}\left(\Omega_{X,PL}\cap\bar{\Omega}_{X,PE}\right),\quad N_{X,TA}=\sum_{t=1}^{T_{\textrm{max}}}\mathbbm{1}_{t}\left(\Omega_{X,PL}\cap\Omega_{X,PE}\right),\quad N_{X,PE}=\sum_{t=1}^{T_{\textrm{max}}}\mathbbm{1}_{t}\left(\Omega_{X,PE}\right).$ (27)
Finally, we compute the false alarm rates $FAR_{lat},FAR_{lon},FAR_{vert}$ after normalizing by the number of position error magnitudes lying above and below the alarm limit $AL$
$FAR_{X}=\frac{N_{X,FA}\cdot({T_{\textrm{max}}}-N_{X,PE})}{N_{X,FA}\cdot({T_{\textrm{max}}}-N_{X,PE})+N_{X,TA}\cdot N_{X,PE}}.$ (28)
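These metrics reduce to a few array operations; the sketch below computes them for one direction from per-epoch arrays of protection levels and true absolute errors (degenerate denominators are not guarded in this sketch):

```python
import numpy as np

def integrity_metrics(pl, err, al):
    """Bound gap, failure rate and false alarm rate (Eqs. (24)-(28)) for one
    direction, given protection levels `pl`, true absolute position errors
    `err` (both (T_max,) arrays) and the alarm limit `al`."""
    nominal = (pl > err) & (pl < al)             # nominal operation epochs
    bg = np.mean(pl[nominal] - err[nominal])     # Eq. (24)
    fr = np.mean(pl < err)                       # Eq. (25)
    pl_alarm, pe_alarm = pl > al, err > al       # events of Eq. (26)
    n_fa = np.sum(pl_alarm & ~pe_alarm)          # false alarms, Eq. (27)
    n_ta = np.sum(pl_alarm & pe_alarm)           # true alarms
    n_pe = np.sum(pe_alarm)                      # position error exceedances
    t_max = len(pl)
    far = (n_fa * (t_max - n_pe)) / (n_fa * (t_max - n_pe) + n_ta * n_pe)  # Eq. (28)
    return bg, fr, far
```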
### 6.6 Results
Fig. 6 shows the lateral and longitudinal protection levels computed by our
approach on two $200$ s subsets of the test sequence. For clarity, protection
levels are computed at every $5$th time instance. Similarly, Fig. 7 shows the
vertical protection levels along with the vertical position error magnitude in
a subset of the test sequence. As can be seen from both the figures, the
computed protection levels successfully enclose the position error magnitudes
at a majority of the points ($\sim 99\%$) in the visualized subsequences.
Furthermore, the vertical protection levels are observed to be visually closer
to the position error as compared to the lateral and longitudinal protection
levels. This is due to the superior performance of the DNN in determining
position errors along the vertical dimension, which is easier to learn since
all the camera images in the dataset are captured by a ground-based vehicle.
Figure 6: Lateral and longitudinal protection level results on the test sequence in the real-world dataset. We show protection levels for two subsets of the total sequence, computed at $5$ s intervals. The protection levels successfully enclose the state estimates in $\sim 99\%$ of the cases.
Figure 7: Vertical protection level results on the test sequence in the real-world dataset. We show protection levels for a subset of the total sequence. The protection levels successfully enclose the position error magnitudes with a small bound gap.
Fig. 8 displays the integrity diagrams, generated following the Stanford-ESA integrity diagram proposed for SBAS integrity assessment [50]. The diagram is generated from $15000$ samples of protection levels corresponding to randomly selected state estimates and camera images within the test sequence. For protection levels in each direction, we set the alarm limit (Table 5) based on the specifications suggested for mid-size vehicles in [10], beyond which the state estimate is declared unsafe to use. The lateral, longitudinal and vertical protection levels are greater than the position error magnitudes in $\sim 99\%$ of cases, which is consistent with the specified integrity requirement. Furthermore, a large fraction of the failures lies in the region where the protection level is greater than the alarm limit, and thus the system has correctly been identified to be under unsafe operation.
We conducted an ablation study to numerically evaluate the impact of our
proposed epistemic uncertainty measure and outlier weighting method in
computing protection levels. We evaluated protection levels in three different
cases: Incorporating DNN covariance, epistemic uncertainty and outlier
weighting (VAR+EO); incorporating just the DNN covariance and epistemic
uncertainty with equal weights assigned to all position error samples (VAR+E);
and only using the DNN covariance (VAR). For VAR, we constructed a Gaussian
distribution using the DNN position error output and diagonal variance entries
in each dimension. Then, we computed protection levels from the inverse
cumulative distribution function of the Gaussian distribution corresponding to
the specified value of integrity risk $IR$. Table 1 summarizes our results.
Incorporating the epistemic uncertainty in computing protection levels
improved the failure rate from $0.05$ in lateral protection levels, $0.05$ in
longitudinal protection levels and $0.03$ in vertical protection levels to
within $0.01$ in all cases. This is because the covariance estimate from the
DNN provides an overconfident measure of uncertainty, which is corrected by
our epistemic uncertainty measure. Furthermore, incorporating outlier
weighting reduced the average nominal bound gap by about $0.02$ m in lateral
protection levels, $0.05$ m in longitudinal protection levels, and $0.05$ m in
vertical protection levels as well as false alarm rate by about $0.02$ for
each direction while keeping the failure rate within the specified integrity
risk requirement.
The mean bound gap between the lateral protection levels computed from our
approach and the position error magnitudes in the nominal cases is smaller
than a quarter of the width of a standard U.S. lane. In the longitudinal
direction, the bound gap is somewhat larger since fewer visual features are
present along the road for determining the position error using the DNN. The
corresponding value in the vertical dimension is smaller, owing to the DNN’s
superior performance in determining position errors and uncertainty in the
vertical dimension. This demonstrates the applicability of our approach to
urban roads.
For an integrity risk requirement of $0.01$, the protection levels computed by
our method demonstrate a failure rate equal to or within $0.01$ as well.
However, further lowering the integrity risk requirement during our
experiments either did not similarly improve the failure rate or caused a
significant increase in the bound gaps and the false alarm rate. A possible
reason is that the uncertainty approximated by our approach through both the
aleatoric and epistemic measures fails to act as an accurate uncertainty
representation for smaller integrity risk requirements than $0.01$. Future
research would consider more and varied training data, better strategies for
selecting candidate states, and different DNN architectures to meet smaller
integrity risk requirements.
A shortcoming of our approach is the large false alarm rate exhibited by the
computed protection levels in Table 1. The large value results both from the
inherent noise in the DNN-based estimation of position and rotation error as
well as from frequently selecting candidate states that result in large
outlier error values. A future work direction for reducing the false alarm
rate is to explore strategies for selecting candidate states and mitigating
outliers.
A key advantage offered by our approach is its application to scenarios where
a direct analysis of the error sources in the state estimation algorithm is
difficult, such as when feature rich visual information is processed by a
machine learning algorithm for estimating the state. In such scenarios, our
approach computes protection levels separately from the state estimation
algorithm by both evaluating a data-driven model of the position error
uncertainty and characterizing the epistemic uncertainty in the model outputs.
Method | Lateral $BG$ (m) | Lateral $FR$ | Lateral $FAR$ | Longitudinal $BG$ (m) | Longitudinal $FR$ | Longitudinal $FAR$ | Vertical $BG$ (m) | Vertical $FR$ | Vertical $FAR$
---|---|---|---|---|---|---|---|---|---
VAR+EO | $0.49$ | $0.01$ | $0.47$ | $0.77$ | $0.01$ | $0.40$ | $0.38$ | $<0.01$ | $0.14$
VAR+E | $0.51$ | $0.01$ | $0.49$ | $0.82$ | $0.01$ | $0.43$ | $0.43$ | $<0.01$ | $0.16$
VAR | $0.42$ | $0.05$ | $0.45$ | $0.64$ | $0.05$ | $0.36$ | $0.30$ | $0.02$ | $0.12$
Table 1: Evaluation of lateral, longitudinal and vertical protection levels
from our approach. We compare protection levels computed by our trained model
using DNN covariance, epistemic uncertainty and outlier weighting (VAR+EO),
DNN covariance and epistemic uncertainty (VAR+E) and only DNN covariance
(VAR). Incorporating epistemic uncertainty results in lower failure rate,
while incorporating outlier weights reduces bound gap and false alarm rate.
Figure 8: Integrity diagram results for the lateral, longitudinal and vertical
protection levels. The diagram contains protection levels evaluated across
$15000$ different state estimates and camera images randomly selected from the
test sequence. A majority of the samples are close to and greater than the
position error magnitude, validating the applicability of the computed
protection levels as a robust safety measure.
## 7 Conclusions
In this work, we presented a data-driven approach for computing lateral,
longitudinal and vertical protection levels associated with a given state
estimate from camera images and a 3D LiDAR map of the environment. Our
approach estimates both aleatoric and epistemic measures of uncertainty for
computing protection levels, thereby providing robust measures of localization
safety. We demonstrated the efficacy of our method on real-world data in terms
of bound gap, failure rate and false alarm rate. Results show that the
lateral, longitudinal and vertical protection levels computed from our method
enclose the position error magnitudes with $0.01$ probability of failure and
less than $1$ m bound gap in all directions, which demonstrates that our
approach is applicable to GNSS-denied urban environments.
## Acknowledgements
This material is based upon work supported by the National Science Foundation
under award #2006162.
## References
* Delling et al. [2017] Daniel Delling, Andrew V. Goldberg, Thomas Pajor, and Renato F. Werneck. Customizable Route Planning in Road Networks. _Transportation Science_ , 51(2):566–591, May 2017. ISSN 0041-1655, 1526-5447. 10.1287/trsc.2014.0579.
* Jensen et al. [2016] Morten Borno Jensen, Mark Philip Philipsen, Andreas Mogelmose, Thomas Baltzer Moeslund, and Mohan Manubhai Trivedi. Vision for Looking at Traffic Lights: Issues, Survey, and Perspectives. _IEEE Transactions on Intelligent Transportation Systems_ , 17(7):1800–1815, July 2016. ISSN 1524-9050, 1558-0016. 10.1109/TITS.2015.2509509.
* Wolcott and Eustice [2017] Ryan W Wolcott and Ryan M Eustice. Robust LIDAR localization using multiresolution Gaussian mixture maps for autonomous driving. _The International Journal of Robotics Research_ , 36(3):292–319, March 2017. ISSN 0278-3649, 1741-3176. 10.1177/0278364917696568.
* Caselitz et al. [2016] Tim Caselitz, Bastian Steder, Michael Ruhnke, and Wolfram Burgard. Monocular camera localization in 3D LiDAR maps. In _2016 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)_ , pages 1926–1931, Daejeon, South Korea, October 2016. IEEE. ISBN 978-1-5090-3762-9. 10.1109/IROS.2016.7759304.
* Spilker Jr. et al. [1996] James J. Spilker Jr., Penina Axelrad, Bradford W. Parkinson, and Per Enge, editors. _Global Positioning System: Theory and Applications, Volume I_. American Institute of Aeronautics and Astronautics, Washington DC, January 1996. ISBN 978-1-56347-106-3 978-1-60086-638-8. 10.2514/4.866388.
* Jiang and Wang [2016] Yiping Jiang and Jinling Wang. A New Approach to Calculate the Horizontal Protection Level. _Journal of Navigation_ , 69(1):57–74, January 2016. ISSN 0373-4633, 1469-7785. 10.1017/S0373463315000545.
* Cezón et al. [2013] A. Cezón, M. Cueto, and I. Fernández. Analysis of Multi-GNSS Service Performance Assessment: ARAIM vs. IBPL Performances Comparison. pages 2654–2663, September 2013. ISSN: 2331-5954.
* Tran and Lo Presti [2019] Hieu Trung Tran and Letizia Lo Presti. Kalman filter-based ARAIM algorithm for integrity monitoring in urban environment. _ICT Express_ , 5(1):65–71, March 2019. ISSN 24059595. 10.1016/j.icte.2018.05.002.
* Badue et al. [2021] Claudine Badue, Rânik Guidolini, Raphael Vivacqua Carneiro, Pedro Azevedo, Vinicius B. Cardoso, Avelino Forechi, Luan Jesus, Rodrigo Berriel, Thiago M. Paixão, Filipe Mutz, Lucas de Paula Veronese, Thiago Oliveira-Santos, and Alberto F. De Souza. Self-driving cars: A survey. _Expert Systems with Applications_ , 165:113816, March 2021\. ISSN 09574174. 10.1016/j.eswa.2020.113816.
* Reid et al. [2019] Tyler G. R. Reid, Sarah E. Houts, Robert Cammarata, Graham Mills, Siddharth Agarwal, Ankit Vora, and Gaurav Pandey. Localization Requirements for Autonomous Vehicles. _SAE International Journal of Connected and Automated Vehicles_ , 2(3):12–02–03–0012, September 2019. ISSN 2574-075X. 10.4271/12-02-03-0012. arXiv: 1906.01061.
* Taira et al. [2021] Hajime Taira, Masatoshi Okutomi, Torsten Sattler, Mircea Cimpoi, Marc Pollefeys, Josef Sivic, Tomas Pajdla, and Akihiko Torii. InLoc: Indoor Visual Localization with Dense Matching and View Synthesis. _IEEE Transactions on Pattern Analysis and Machine Intelligence_ , 43(4):1293–1307, April 2021. ISSN 0162-8828, 2160-9292, 1939-3539. 10.1109/TPAMI.2019.2952114.
* Kim et al. [2018] Youngji Kim, Jinyong Jeong, and Ayoung Kim. Stereo Camera Localization in 3D LiDAR Maps. In _2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)_ , pages 1–9, Madrid, October 2018\. IEEE. ISBN 978-1-5386-8094-0. 10.1109/IROS.2018.8594362.
* Lyrio et al. [2015] Lauro J. Lyrio, Thiago Oliveira-Santos, Claudine Badue, and Alberto Ferreira De Souza. Image-based mapping, global localization and position tracking using VG-RAM weightless neural networks. In _2015 IEEE International Conference on Robotics and Automation (ICRA)_ , pages 3603–3610, Seattle, WA, May 2015. IEEE. ISBN 978-1-4799-6923-4. 10.1109/ICRA.2015.7139699.
* Oliveira et al. [2020] Gabriel L. Oliveira, Noha Radwan, Wolfram Burgard, and Thomas Brox. Topometric Localization with Deep Learning. In Nancy M. Amato, Greg Hager, Shawna Thomas, and Miguel Torres-Torriti, editors, _Robotics Research_ , volume 10, pages 505–520. Springer International Publishing, Cham, 2020. ISBN 978-3-030-28618-7 978-3-030-28619-4. 10.1007/978-3-030-28619-4_38. Series Title: Springer Proceedings in Advanced Robotics.
* Cattaneo et al. [2019] Daniele Cattaneo, Matteo Vaghi, Augusto Luis Ballardini, Simone Fontana, Domenico Giorgio Sorrenti, and Wolfram Burgard. CMRNet: Camera to LiDAR-Map Registration. _2019 IEEE Intelligent Transportation Systems Conference (ITSC)_ , pages 1283–1289, October 2019. 10.1109/ITSC.2019.8917470. arXiv: 1906.10109.
* Sarlin et al. [2019] Paul-Edouard Sarlin, Cesar Cadena, Roland Siegwart, and Marcin Dymczyk. From Coarse to Fine: Robust Hierarchical Localization at Large Scale. _arXiv:1812.03506 [cs]_ , pages 12708–12717, April 2019. 10.1109/CVPR.2019.01300. arXiv: 1812.03506 version: 2.
* Recht et al. [2019] Benjamin Recht, Rebecca Roelofs, Ludwig Schmidt, and Vaishaal Shankar. Do ImageNet Classifiers Generalize to ImageNet? In Kamalika Chaudhuri and Ruslan Salakhutdinov, editors, _Proceedings of the 36th International Conference on Machine Learning_ , volume 97 of _Proceedings of Machine Learning Research_ , pages 5389–5400. PMLR, June 2019.
* Kendall and Gal [2017] Alex Kendall and Yarin Gal. What Uncertainties Do We Need in Bayesian Deep Learning for Computer Vision? _arXiv:1703.04977 [cs]_ , 30, October 2017. arXiv: 1703.04977.
* Loquercio et al. [2020] Antonio Loquercio, Mattia Segu, and Davide Scaramuzza. A General Framework for Uncertainty Estimation in Deep Learning. _IEEE Robotics and Automation Letters_ , 5(2):3153–3160, April 2020. ISSN 2377-3766, 2377-3774. 10.1109/LRA.2020.2974682.
* Kiureghian and Ditlevsen [2009] Armen Der Kiureghian and Ove Ditlevsen. Aleatory or epistemic? Does it matter? _Structural Safety_ , 31(2):105–112, March 2009\. ISSN 01674730. 10.1016/j.strusafe.2008.06.020.
* McAllister et al. [2017] Rowan McAllister, Yarin Gal, Alex Kendall, Mark van der Wilk, Amar Shah, Roberto Cipolla, and Adrian Weller. Concrete Problems for Autonomous Vehicle Safety: Advantages of Bayesian Deep Learning. In _Proceedings of the Twenty-Sixth International Joint Conference on Artificial Intelligence_ , pages 4745–4753, Melbourne, Australia, August 2017. International Joint Conferences on Artificial Intelligence Organization. ISBN 978-0-9992411-0-3. 10.24963/ijcai.2017/661.
* Yang et al. [2020] Nan Yang, Lukas von Stumberg, Rui Wang, and Daniel Cremers. D3VO: Deep Depth, Deep Pose and Deep Uncertainty for Monocular Visual Odometry. In _2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)_ , pages 1278–1289, Seattle, WA, USA, June 2020\. IEEE. ISBN 978-1-72817-168-5. 10.1109/CVPR42600.2020.00136.
* Kendall and Cipolla [2016] Alex Kendall and Roberto Cipolla. Modelling uncertainty in deep learning for camera relocalization. In _2016 IEEE International Conference on Robotics and Automation (ICRA)_ , pages 4762–4769, Stockholm, Sweden, May 2016. IEEE. ISBN 978-1-4673-8026-3. 10.1109/ICRA.2016.7487679.
* Gal and Ghahramani [2016] Yarin Gal and Zoubin Ghahramani. Dropout as a Bayesian Approximation: Representing Model Uncertainty in Deep Learning. In Maria Florina Balcan and Kilian Q. Weinberger, editors, _Proceedings of The 33rd International Conference on Machine Learning_ , volume 48 of _Proceedings of Machine Learning Research_ , pages 1050–1059, New York, New York, USA, June 2016. PMLR.
* Blundell et al. [2015] Charles Blundell, Julien Cornebise, Koray Kavukcuoglu, and Daan Wierstra. Weight Uncertainty in Neural Network. In Francis Bach and David Blei, editors, _Proceedings of the 32nd International Conference on Machine Learning_ , volume 37 of _Proceedings of Machine Learning Research_ , pages 1613–1622, Lille, France, July 2015. PMLR.
* Smith and Gal [2018] Lewis Smith and Yarin Gal. Understanding Measures of Uncertainty for Adversarial Example Detection. _arXiv:1803.08533 [cs, stat]_ , pages 560–569, March 2018. arXiv: 1803.08533.
* Gupta and Gao [2020] Shubh Gupta and Grace X. Gao. Data-Driven Protection Levels for Camera and 3D Map-based Safe Urban Localization. pages 2483–2499, October 2020. 10.33012/2020.17698.
* Lukas and Stoker [2016] Vicki Lukas and J. M. Stoker. 3D Elevation Program—Virtual USA in 3D. USGS Numbered Series 2016-3022, U.S. Geological Survey, Reston, VA, 2016. Code Number: 2016-3022 Code: 3D Elevation Program—Virtual USA in 3D Publication Title: 3D Elevation Program—Virtual USA in 3D Reporter: 3D Elevation Program—Virtual USA in 3D Series: Fact Sheet IP-074727.
* Krishnan et al. [2011] Sriram Krishnan, Christopher Crosby, Viswanath Nandigam, Minh Phan, Charles Cowart, Chaitanya Baru, and Ramon Arrowsmith. OpenTopography: a services oriented architecture for community access to LIDAR topography. In _Proceedings of the 2nd International Conference on Computing for Geospatial Research & Applications - COM.Geo ’11_, pages 1–8, Washington, DC, 2011. ACM Press. ISBN 978-1-4503-0681-2. 10.1145/1999320.1999327.
* Wang et al. [2020] Cheng Wang, Chenglu Wen, Yudi Dai, Shangshu Yu, and Minghao Liu. Urban 3D modeling with mobile laser scanning: a review. _Virtual Reality & Intelligent Hardware_, 2(3):175–212, June 2020. ISSN 20965796. 10.1016/j.vrih.2020.05.003.
* Iglewicz and Hoaglin [1993] Boris Iglewicz and David Caster Hoaglin. _How to Detect and Handle Outliers_. ASQC Quality Press, 1993. ISBN 978-0-87389-247-6. Google-Books-ID: siInAQAAIAAJ.
* Lindsay [1995] Bruce G. Lindsay. _Mixture Models: Theory, Geometry, and Applications_. IMS, 1995. ISBN 978-0-940600-32-4. Google-Books-ID: VFDzNhikFbQC.
* Joerger and Pervan [2019] M. Joerger and B. Pervan. Quantifying Safety of Laser-Based Navigation. _IEEE Transactions on Aerospace and Electronic Systems_ , 55(1):273–288, February 2019. ISSN 1557-9603. 10.1109/TAES.2018.2850381. Conference Name: IEEE Transactions on Aerospace and Electronic Systems.
* Zhu et al. [2020] C. Zhu, M. Joerger, and M. Meurer. Quantifying Feature Association Error in Camera-based Positioning. In _2020 IEEE/ION Position, Location and Navigation Symposium (PLANS)_ , pages 967–972, April 2020. 10.1109/PLANS46316.2020.9109919. ISSN: 2153-3598.
* Kendall et al. [2015] Alex Kendall, Matthew Grimes, and Roberto Cipolla. PoseNet: A Convolutional Network for Real-Time 6-DOF Camera Relocalization. In _2015 IEEE International Conference on Computer Vision (ICCV)_ , pages 2938–2946, Santiago, Chile, December 2015. IEEE. ISBN 978-1-4673-8391-2. 10.1109/ICCV.2015.336.
* Cadena et al. [2016] Cesar Cadena, Luca Carlone, Henry Carrillo, Yasir Latif, Davide Scaramuzza, Jose Neira, Ian Reid, and John J. Leonard. Past, Present, and Future of Simultaneous Localization And Mapping: Towards the Robust-Perception Age. _IEEE Transactions on Robotics_ , 32(6):1309–1332, December 2016. ISSN 1552-3098, 1941-0468. 10.1109/TRO.2016.2624754. arXiv: 1606.05830.
* Russell and Reale [2019] Rebecca L. Russell and Christopher Reale. Multivariate Uncertainty in Deep Learning. _arXiv:1910.14215 [cs, stat]_ , October 2019. arXiv: 1910.14215.
* Liu et al. [2018] Katherine Liu, Kyel Ok, William Vega-Brown, and Nicholas Roy. Deep Inference for Covariance Estimation: Learning Gaussian Noise Models for State Estimation. In _2018 IEEE International Conference on Robotics and Automation (ICRA)_ , pages 1436–1443, Brisbane, QLD, May 2018. IEEE. ISBN 978-1-5386-3081-5. 10.1109/ICRA.2018.8461047.
* Pintus et al. [2011] Ruggero Pintus, Enrico Gobbetti, and Marco Agus. Real-time rendering of massive unstructured raw point clouds using screen-space operators. In _Proceedings of the 12th International conference on Virtual Reality, Archaeology and Cultural Heritage_ , pages 105–112, 2011.
* Dosovitskiy et al. [2015] Alexey Dosovitskiy, Philipp Fischer, Eddy Ilg, Philip Hausser, Caner Hazirbas, Vladimir Golkov, Patrick van der Smagt, Daniel Cremers, and Thomas Brox. FlowNet: Learning Optical Flow with Convolutional Networks. In _2015 IEEE International Conference on Computer Vision (ICCV)_ , pages 2758–2766, Santiago, December 2015. IEEE. ISBN 978-1-4673-8391-2. 10.1109/ICCV.2015.316.
* Huber [1992] Peter J. Huber. Robust Estimation of a Location Parameter. In Samuel Kotz and Norman L. Johnson, editors, _Breakthroughs in Statistics_ , pages 492–518. Springer New York, New York, NY, 1992. ISBN 978-0-387-94039-7 978-1-4612-4380-9. 10.1007/978-1-4612-4380-9_35. Series Title: Springer Series in Statistics.
* Zeng and Ji [2015] Tao Zeng and Shuiwang Ji. Deep Convolutional Neural Networks for Multi-instance Multi-task Learning. In _2015 IEEE International Conference on Data Mining_ , pages 579–588, Atlantic City, NJ, USA, November 2015. IEEE. ISBN 978-1-4673-9504-5. 10.1109/ICDM.2015.92.
* Barfoot et al. [2011] Timothy Barfoot, James R. Forbes, and Paul T. Furgale. Pose estimation using linearized rotations and quaternion algebra. _Acta Astronautica_ , 68(1-2):101–112, January 2011. ISSN 00945765. 10.1016/j.actaastro.2010.06.049.
* Rousseeuw and Hubert [2018] Peter J. Rousseeuw and Mia Hubert. Anomaly detection by robust statistics. _WIREs Data Mining and Knowledge Discovery_ , 8(2), March 2018. ISSN 1942-4787, 1942-4795. 10.1002/widm.1236.
* Goodfellow et al. [2016] Ian Goodfellow, Yoshua Bengio, and Aaron Courville. _Deep Learning_. MIT Press, November 2016. ISBN 978-0-262-03561-3. Google-Books-ID: Np9SDQAAQBAJ.
* Burden and Faires [2011] Richard L. Burden and J. Douglas Faires. _Numerical Analysis_. Brooks/Cole, Cengage Learning, 2011. ISBN 978-0-538-73564-3. Google-Books-ID: KlfrjCDayHwC.
* Geiger et al. [2012] A. Geiger, P. Lenz, and R. Urtasun. Are we ready for autonomous driving? The KITTI vision benchmark suite. In _2012 IEEE Conference on Computer Vision and Pattern Recognition_ , pages 3354–3361, Providence, RI, June 2012. IEEE. ISBN 978-1-4673-1228-8 978-1-4673-1226-4 978-1-4673-1227-1. 10.1109/CVPR.2012.6248074.
* Skafte et al. [2019] Nicki Skafte, Martin Jørgensen, and Søren Hauberg. Reliable training and estimation of variance networks. In H. Wallach, H. Larochelle, A. Beygelzimer, F. d'Alché-Buc, E. Fox, and R. Garnett, editors, _Advances in Neural Information Processing Systems 32_ , volume 32, pages 6326–6336. Curran Associates, Inc., 2019.
* Paszke et al. [2019] Adam Paszke, Sam Gross, Francisco Massa, Adam Lerer, James Bradbury, Gregory Chanan, Trevor Killeen, Zeming Lin, Natalia Gimelshein, Luca Antiga, Alban Desmaison, Andreas Kopf, Edward Yang, Zachary DeVito, Martin Raison, Alykhan Tejani, Sasank Chilamkurthy, Benoit Steiner, Lu Fang, Junjie Bai, and Soumith Chintala. PyTorch: An Imperative Style, High-Performance Deep Learning Library. In H. Wallach, H. Larochelle, A. Beygelzimer, F. d'Alché-Buc, E. Fox, and R. Garnett, editors, _Advances in Neural Information Processing Systems_ , volume 32. Curran Associates, Inc., 2019.
* Tossaint et al. [2007] M. Tossaint, J. Samson, F. Toran, J. Ventura-Traveset, M. Hernandez-Pajares, J.M. Juan, J. Sanz, and P. Ramos-Bosch. The Stanford - ESA Integrity Diagram: A New Tool for The User Domain SBAS Integrity Assessment. _Navigation_ , 54(2):153–162, June 2007. ISSN 00281522. 10.1002/j.2161-4296.2007.tb00401.x.
# Reliable GNSS Localization Against Multiple Faults Using a Particle Filter
Framework
Shubh Gupta and Grace X. Gao S. Gupta is with the Department of Electrical Engineering, Stanford University, CA, 94709, USA. e-mail: <EMAIL_ADDRESS>. G. X. Gao is with the Department of Aeronautics & Astronautics, Stanford University, CA, 94709, USA. e-mail: <EMAIL_ADDRESS>.
###### Abstract
For reliable operation on urban roads, navigation using the Global Navigation
Satellite System (GNSS) requires both accurately estimating the position from GNSS pseudorange measurements and determining when the estimated position is safe to use, or available. However, multiple GNSS measurements in urban environments contain biases, or faults, due to signal reflection and blockage from nearby buildings, which are difficult to mitigate when estimating the position and availability. This paper proposes a novel particle filter-
based framework that employs a Gaussian Mixture Model (GMM) likelihood of GNSS
measurements to robustly estimate the position of a navigating vehicle under
multiple measurement faults. Using the probability distribution tracked by the
filter and the designed GMM likelihood, we measure the accuracy and the risk
associated with localization and determine the availability of the navigation
system at each time instant. Through experiments conducted on challenging
simulated and real urban driving scenarios, we show that our method achieves
small horizontal positioning errors compared to existing filter-based state
estimation techniques when multiple GNSS measurements contain faults.
Furthermore, we verify using several simulations that our method determines
system availability with smaller probability of false alarms and integrity
risk than the existing particle filter-based integrity monitoring approach.
###### Index Terms:
Particle filter, Integrity Monitoring, Global Navigation Satellite Systems,
urban environment, mixture models, Sequential Monte Carlo, Expectation-
Maximization.
## I Introduction
Figure 1: Our proposed algorithm for robust localization in the presence of
several GNSS measurement faults associates weights both with the GNSS
measurements and the different likely positions denoted by particles. We
assign smaller weights to faulty GNSS measurements and to less likely
positions while assigning large weights to the various positions supported by
one or more non-faulty GNSS measurements.
Global Navigation Satellite System (GNSS) in urban environments is often
affected by limited satellite visibility, signal attenuation, non line-of-
sight signals, and multipath effects [1]. Such impairments to GNSS signals
result in fewer measurements as compared to open-sky conditions as well as
time-varying biases, or faults, simultaneously in multiple measurements.
Improper handling of these faults during localization may unknowingly result
in large positioning errors, posing a significant risk to the safety of the
navigating vehicle. Therefore, safe navigation in urban environments requires
addressing two challenges: accurately estimating the position of the
navigating vehicle from the faulty measurements and determining the estimated
position’s trustworthiness, or integrity [1].
### I-A Position Estimation under Faulty GNSS Measurements
For estimating the position of a ground-based vehicle, many existing
approaches use GNSS measurements in tandem with odometry measurements from an
inertial navigation system. Many of these approaches rely on state estimation
by means of filters, such as the Kalman filter and the particle filter [2]. These
filters track the probability distribution of the navigating vehicle’s
position across time, approximated by a Gaussian distribution in a Kalman
filter and by a multimodal distribution represented as a set of samples, or
particles, in a particle filter. However, the traditional filters are based on
the assumption of Gaussian noise (or overbounds) in GNSS measurements, which
is often violated in urban environments due to faults caused by multipath and
non line-of-sight errors [1].
Several methods to address non-Gaussian measurement errors in filtering have
been proposed in the area of robust state estimation. Karlgaard [3] integrated
the Huber estimation technique with Kalman filter for outlier-robust state
estimation. Pesonen [4], Medina et al. [5], and Crespillo et al. [6] developed
robust estimation schemes for localization using fault-contaminated GNSS
measurements. Lesouple [7] incorporated sparse estimation theory in mitigating
GNSS measurement faults for localization. However, these techniques are primarily designed for scenarios where a large fraction of non-faulty measurements is present at each time instant, which is not necessarily the case for GNSS measurements in urban environments.
In the context of achieving robustness against GNSS measurement faults,
several techniques have been developed under the collective name of Receiver
Autonomous Integrity Monitoring (RAIM) [8]. RAIM algorithms mitigate
measurement faults either by comparing state estimates obtained using
different groups of measurements (solution-separation RAIM) [9, 10, 8] or by
iteratively excluding faulty measurements based on measurement residuals
(residual-based RAIM) [11, 12, 13, 14]. Furthermore, several works have
combined RAIM algorithms with filtering techniques for robust positioning.
Grosch et al. [15], Hewitson et al. [16], Leppakoski et al. [17], and Li et
al. [18] utilized residual-based RAIM algorithms to remove faulty GNSS
measurements before updating the state estimate using a Kalman filter. Boucher et al. [19],
Ahn et al. [20], and Wang et al. [21, 22] constructed multiple filters
associated with different groups of GNSS measurements, and used the
logarithmic likelihood ratio between the distributions tracked by the filters
to detect and remove faulty measurements. Pesonen [23] proposed a Bayesian
filtering framework that tracks indicators of multipath bias in each GNSS
measurement along with the state. A limitation of these approaches is that the
computation required for comparing state estimates from several groups of GNSS
measurements increases combinatorially both with the number of measurements
and the number of faults considered [8]. Furthermore, recent research has
shown that poorly distributed GNSS measurement values can cause significant
positioning errors in RAIM, exposing serious vulnerabilities in these
approaches [24].
In a recent work [25], we developed an algorithm for robustly localizing a
ground vehicle in challenging urban scenarios with several faulty GNSS
measurements. To handle situations where a single position cannot be robustly
estimated from the GNSS measurements, the algorithm employed a particle filter
for tracking a multimodal distribution of the vehicle position. Furthermore,
the algorithm equipped the particle filter with a Gaussian Mixture likelihood
model of GNSS measurements to ensure that various position hypotheses
supported by different groups of measurements are assigned large probability
in the tracked distribution.
### I-B Integrity Monitoring in GNSS-based Localization
For assessing the trustworthiness of an estimated position of the vehicle, the
concept of GNSS integrity has been developed in previous literature [26, 1].
Integrity is defined through a collection of parameters that together express
the ability of a navigation system to provide timely warnings when the system
output is unsafe to use [2, 27]. Some parameters of interest for monitoring
integrity are defined as follows:
• Misleading Information Risk is defined as the probability that the position error exceeds a specified maximum value, known as the alarm limit (AL).
• Accuracy is a measure of the error in the position output of the navigation system.
• Availability of a navigation system determines whether the system output is safe to use at a given time instant.
• Integrity Risk is the probability of the event where the position error exceeds AL but the system is declared available by the integrity monitoring algorithm.
Different approaches for monitoring integrity have been proposed in
literature. Solution-separation RAIM algorithms monitor integrity based on the
agreement between the different state estimates obtained for different groups
of measurements [9, 10, 8]. Residual-based RAIM algorithms determine the
integrity parameters from statistics derived from the measurement residuals
for the obtained state estimate [11, 12, 13, 14]. Tanil et al. [28] utilized
the innovation sequence of a Kalman filter to derive integrity parameters and
determine the availability of the system. However, the approach is designed
and evaluated for single satellite faults and does not necessarily generalize
to scenarios where several measurements are affected by faults. Moreover, these techniques are designed for the Kalman filter and cannot be directly applied to a particle filter-based framework. In Bayesian RAIM [23], the availability
of the system is determined through statistics computed from the tracked
probability distribution. The proposed technique is general and can be applied
to a variety of filters, including the particle filter. However, the accuracy of the probability distribution tracked by a filter is limited in the low-probability tail regions necessary for monitoring integrity [29].
### I-C Our Approach
In this paper, we build upon our prior work on a particle filter-based
framework [25] that incorporates GNSS and odometry measurements both for
estimating a position that is robust to faults in several GNSS measurements
and for assessing the trustworthiness of the estimated position. Unlike
traditional particle filters used in GNSS-based navigation, our approach
associates each GNSS measurement with a weight coefficient that is estimated
along with particle filter weights. Our algorithm for estimating the
measurement weights and the particle weights is based on the expectation-
maximization algorithm [30]. At each time instant, our algorithm mitigates
several faults at once by reducing the weights assigned to both the faulty
GNSS measurements and the particles corresponding to unlikely positions (Fig.
1). Using the measurements and the estimated weights, we then evaluate
measures of misleading information risk, accuracy and determine the
availability of the localization output.
To reduce the effect of faulty GNSS measurements on the particle weights, we
model the likelihood of GNSS measurements within the particle filter as a
Gaussian mixture model (GMM) [31] with the measurement weights as the weight
coefficients. The GMM likelihood is a weighted sum of probability distribution components, one for each measurement available at the time instant. Each component in our GMM represents the
conditional probability distribution of observing a single GNSS measurement
from a position. In scenarios where determining a single best position from
the measurements is difficult, our designed GMM likelihood assigns high
probability to several positions that are likely under different groups of
GNSS measurements.
Figure 2: Architecture of the proposed framework. The framework incorporates
GNSS and odometry measurements to infer the probability distribution of the
vehicle state as a set of weighted particles in a particle filter. Robustness
to faults in GNSS measurements is achieved using the GMM Weighting,
Measurement Voting and Vote Pooling steps which are based on expectation-
maximization [30]. From the estimated probability distribution and the GNSS
measurements, the framework derives measures of Accuracy and Misleading
Information risk to determine the availability of the system at each time
instant.
We next describe the key contributions of this work:
1. We develop a novel particle filter-based localization framework that mitigates faults in several measurements by estimating the particle weights together with weight coefficients associated with the GNSS measurements. Our approach for estimating the measurement and particle weights integrates the weighting step of the particle filter algorithm [32] with the expectation-maximization algorithm [33].
2. We design an approach for determining navigation system availability in the presence of faults in several GNSS measurements. Our approach considers both the variance of the particle filter probability distribution, as in [23], and the disagreement between residuals computed from different GNSS measurements.
3. We validate our approach on real-world and simulated driving scenarios to verify its statistical performance. We compare the localization performance of our framework against existing Kalman filter and particle filter-based baselines, and its integrity monitoring performance against the existing particle filter-based method.
Our approach provides several advantages over prior approaches for fault-
robust localization and integrity monitoring. While inspired by existing
residual-based RAIM algorithms which mitigate faults using residuals computed
from an initial position estimate, our approach utilizes the tracked
probability distribution of vehicle position to compute the residuals and
mitigate faults. This enhances the robustness of our approach in scenarios
with multiple faults since several different position hypotheses are
considered. Unlike the existing solution-separation RAIM and filter-based
approaches that exhibit a combinatorial growth in computation with increasing
number measurements and measurement faults, our approach scales linearly in
computation with the number of measurements. This is due to our weighting
scheme that simultaneously assigns weights to GNSS measurements and particles
instead of individually considering several subsets of the measurements. Since
our approach tracks several position hypotheses for mitigating measurement
faults, it can track the position probability distribution in scenarios with a
large fraction of faulty GNSS measurements, unlike the existing robust state
estimation approaches. Unlike existing particle filter-based approaches that
rely on a known map of the environment [34, 35, 36] or a known transition
model of faults [23], our approach requires no assumptions about existing
environment information for mitigating faults. Furthermore, our measure of the
misleading information risk for integrity monitoring is derived through our
designed GMM likelihood of measurements, which accounts for the multi-modal
and heavy-tailed nature of the position probability distribution, unlike the
purely filter-based approach presented in [23].
The rest of the paper is organized as follows: Section II discusses the
proposed algorithm in detail. Section III details the experiments and
comparisons with existing techniques on both simulated and real-world data.
Section IV summarizes our work and provides concluding remarks.
## II Localization and Integrity Monitoring Framework
The different components of our method are outlined in Fig. 2:
1. The Propagation step receives as input the previous-time particle distribution of the receiver state and the odometry measurement, and generates as output propagated particles in the extended state-space, which are associated with different GNSS measurements. The extended state-space consists of the receiver state along with an integer-valued selection of a GNSS measurement associated with the particle.
2. The Iterative Weighting step receives as input the extended state-space particles and the GNSS measurements and infers the particle weights and GMM weight coefficients in three steps: First, the Measurement Voting step computes a probabilistic confidence, referred to as a vote, in the association between each extended state-space particle and its chosen GNSS measurement. Next, the Vote Pooling step combines the votes to obtain an overall confidence in each GNSS measurement, referred to as the measurement weights. Finally, the GMM Weighting step updates the particle weights using a GMM constructed from the measurement weights as the measurement likelihood function. The process is then repeated for a fixed number of iterations, or until convergence.
3. The Reduced Resampling step receives as input the extended state-space particles and their computed weights and generates a new set of particles in the receiver state space.
4. For integrity monitoring (IM), we compute measures of Accuracy and Misleading Information Risk using the particles and the GMM likelihood. Then, we determine the System Availability by comparing the obtained accuracy and misleading information risk against prespecified thresholds.
We next describe our problem setup, followed by details about the individual
components of our method. Finally, we end the section with the run-time
analysis of our approach.
### II-A Problem Setup
We consider a vehicle navigating in an urban environment using GNSS and
odometry measurements. We denote the true state vector of the vehicle at time
$t$ as $x_{t}$, and the overall trajectory as $\\{x_{t}\\}_{t=1}^{T}$, where
$T$ denotes the total duration of the vehicle’s trajectory. For generality, we
do not restrict $x_{t}$ to have a specific form and instead assume that it
consists of various quantities such as the vehicle’s position coordinates,
heading angles, receiver clock errors, linear velocities, and yaw rates. The
$k$th GNSS pseudorange measurement acquired at time $t$ is denoted by the
scalar value $\rho_{t}^{(k)}$. At time $t$, the vehicle acquires a set of
$K_{t}$ GNSS pseudorange measurements
$M_{t}=\\{\rho_{t}^{(k)}\\}_{k=1}^{K_{t}}$ and odometry measurement $u_{t}$.
The probability distribution $\pi_{t}$ of the vehicle state is approximated by
a particle filter as
$\pi_{t}=P(x_{t}|x_{t-1},M_{t},u_{t})\approx\hat{\pi}_{t}=\sum_{i=1}^{N}w_{t}^{(i)}\delta(x=x_{t}^{(i)}),$ (1)
where $\hat{\pi}_{t}$ denotes the approximate state probability distribution
represented using particles; $x_{t}^{(i)}$ denotes the state of the $i$th
particle; $w_{t}^{(i)}$ denotes the weight of the $i$th particle such that
$\sum_{i=1}^{N}w_{t}^{(i)}=1$; $\delta(x=x_{t})$ denotes the Dirac delta
function [37] that assumes the value of zero everywhere except $x_{t}$; and
$N$ is the total number of particles.
To design our particle filter-based localization algorithm, we assume that the
state of the vehicle follows the Markov property, i.e. given the present state
of the vehicle, the future states are independent of the past states. By the
Markov property among $(x_{1},\ldots,x_{t})$, the conditional probabilities
$P(x_{t}|x_{t-1},M_{t},u_{t})$ and
$P(x_{t}|x_{1},\ldots,x_{t-1},M_{1},\ldots,M_{t},u_{t})$ are equivalent.
Following the methodology in filtering algorithms [38] and applying Bayes’
theorem with independence assumptions on $M_{t},u_{t}$, the probability
distribution is factored as
$P(x_{t}|x_{t-1},M_{t},u_{t})\propto P(M_{t}|x_{t},x_{t-1},u_{t})\cdot P(x_{t}|x_{t-1},u_{t})\propto\underbrace{P(M_{t}|x_{t})}_{L_{t}(x_{t})}\cdot\underbrace{P(x_{t}|x_{t-1},u_{t})}_{\tilde{\pi}_{t}},$ (2)
where $L_{t}(x_{t})$ can be interpreted as the likelihood of receiving $M_{t}$
at time $t$ from $x_{t}$; and $\tilde{\pi}_{t}$ denotes the probability
distribution at time $t$ predicted using $\hat{\pi}_{t-1}$ and $u_{t}$.
In our approach, we consider the joint tasks of state estimation and fault
mitigation using an extended state-space $(x,\chi)$ of particles, where
$\chi=k\in\\{1,\ldots,K_{t}\\}$ denotes an integer corresponding to the $k$th
GNSS measurement in $M_{t}$. The value of $\chi$ is assigned to each extended
state-space particle at the time of its creation in the propagation step, and
remains fixed until the resampling step. The extended state-space particles
are used in the iterative weighting step to determine the weight coefficients
in the GMM likelihood as well as to compute the particle weights.
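To make the interaction of these components concrete, the following minimal Python sketch outlines one time step of the filter. All helper names (`propagate`, `iterative_weighting`, `reduced_resampling`, `rho_hat_fn`) are illustrative assumptions, sketched in the subsections below, and not a published API.

```python
import numpy as np

def filter_step(particles, weights, rhos, sigmas, u_t, f, rho_hat_fn,
                sigma_f, n_iters=1):
    """One time step of the proposed filter (sketch): propagation over the
    extended state-space, iterative GMM weighting, and reduced resampling.
    `f` (state dynamics) and `rho_hat_fn` (expected pseudoranges) are
    user-supplied; the helpers are sketched in the following subsections."""
    K_t = len(rhos)  # number of GNSS pseudorange measurements at time t
    ext, ext_w = propagate(particles, weights, u_t, K_t, sigma_f, f)
    ext_w, gamma = iterative_weighting(ext, ext_w, rhos, rho_hat_fn,
                                       sigmas, n_iters)
    particles, weights, x_hat = reduced_resampling(ext, ext_w, len(particles))
    return particles, weights, x_hat, gamma
```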
### II-B Propagation Step
First, we generate $K_{t}$ uniformly weighted copies of each particle
$x_{t-1}^{(i)}$, each corresponding to a different value of
$\chi\in\\{1,\ldots,K_{t}\\}$. We then propagate these extended state-space
particles $x_{t-1}^{(i,\chi)}$ via the state dynamics
$\displaystyle x_{t}^{(i,\chi)}$
$\displaystyle=f(x_{t-1}^{(i,\chi)},u_{t})+\epsilon^{(i,\chi)}_{f}\quad\forall\
\chi\in\\{1,\ldots,K_{t}\\},$ (3) $\displaystyle\tilde{\pi}^{(i)}_{t}$
$\displaystyle=\sum_{k=1}^{K_{t}}K_{t}^{-1}\delta(x=x_{t}^{(i,\chi=k)}),$
(4) $\displaystyle\tilde{\pi}_{t}$
$\displaystyle\approx\sum_{i=1}^{N}w_{t-1}^{(i)}\tilde{\pi}^{(i)}_{t},$ (5)
where $f:\mathcal{X}\times\mathcal{U}\to\mathcal{X}$ is the function to
propagate state $x\in\mathcal{X}$ using odometry $u\in\mathcal{U}$ based on
system dynamics; and
$\epsilon^{(i,\chi)}_{f}\sim\mathcal{N}(0,\sigma^{2}_{f})$ denotes the
stochastic term in propagation of each particle.
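A minimal NumPy sketch of this step, assuming an $(N,d)$ array of particles and a dynamics function `f` that is vectorized over the leading array axes (both assumptions for illustration):

```python
import numpy as np

def propagate(particles, weights, u_t, K_t, sigma_f, f):
    """Propagation step of Eqs. (3)-(5): create K_t uniformly weighted copies
    of each particle, one per association value chi, and push them through
    the state dynamics with Gaussian process noise."""
    ext = np.repeat(particles[:, None, :], K_t, axis=1)            # (N, K_t, d)
    ext = f(ext, u_t) + np.random.normal(0.0, sigma_f, ext.shape)  # Eq. (3)
    # Uniformly weighted copies, anticipating Eq. (7):
    # w^{(i,chi=k)} = K_t^{-1} * w^{(i)}.
    ext_w = np.repeat(weights[:, None] / K_t, K_t, axis=1)         # (N, K_t)
    return ext, ext_w
```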
Next, we compute the measurement likelihood model and update the weights
associated with each particle in the weighting step.
### II-C Iterative Weighting Step
In conventional filtering schemes for GNSS, the measurement likelihood model
is designed with the assumption that all the measurements are independent of
each other, and the probability $P(\rho_{t}^{(k)}|x_{t})$ associated with
the $k$th measurement at time $t$ is modeled as a Gaussian distribution [38,
39, 23, 40]. However, this likelihood model inherently assumes a unimodal
probability distribution of $x_{t}$ and therefore tends to oversimplify the
measurement errors when faults occur in multiple GNSS measurements.
Hence, we instead model the likelihood as a Gaussian mixture model (GMM), in
which each measurement contributes additively to the likelihood, so that
faults in any small subset of measurements do not have a dominating
effect on the overall likelihood. Each measurement component in the GMM
likelihood is associated with a weight coefficient which controls the impact
of that component on the overall likelihood. The mixture model based
likelihood is expressed as
$L_{t}(x_{t})=\sum_{k=1}^{K_{t}}\gamma_{k}\mathcal{N}\left(\rho_{t}^{(k)}\mid\hat{\rho}_{x_{t}}^{k},\left(\sigma_{x_{t}}^{k}\right)^{2}\right)\
\text{s.t.}\ \sum_{k=1}^{K_{t}}\gamma_{k}=1,$ (6)
where $\hat{\rho}_{x_{t}}^{k}$ is the expected value of the $k$th pseudorange
measurement from $x_{t}$ assuming known satellite position;
$\sigma_{x_{t}}^{k}$ is the standard deviation obtained using the carrier-to-
noise ratio, satellite geometry or is empirically computed [41]; and
$\gamma_{k}$ is the measurement weight associated with $k$th measurement that
scales the individual probability distribution components to enforce a unit
integral for the probability distribution. A single measurement component for
the $k$th measurement in the GMM assigns similar probabilities to all $x_{t}$
which generate similar values of $\hat{\rho}_{x_{t}}^{k}$. Therefore, the GMM
induces a multi-modal probability distribution over the state, with peaks at
positions supported by different subsets of component measurements. The values
of the measurement weights decide the relative confidence between the modes,
i.e., for the same values of standard deviation, the mode of the component
with higher measurement weight has a higher probability value associated with
it [31].
Using the proposed GMM likelihood, we jointly determine the values for
$\\{\gamma_{k}\\}_{k=1}^{K_{t}}$ and $\\{w^{(i)}_{t}\\}_{i=1}^{N}$ using an
iterative approach based on the EM algorithm for GMM [30]. Our approach
consists of three steps, namely Measurement Voting, Vote Pooling, and GMM
Weighting; a combined sketch of one EM iteration is given after this list.
1. 1.
Measurement Voting. Assuming no prior knowledge about measurement faults, we
uniformly initialize the value of $\gamma_{k}$ with $K_{t}^{-1}$ for all
values of $k\in\\{1,\ldots,K_{t}\\}$. The initial value of
$\\{w^{(i,\chi=k)}_{t}\\}_{i=1}^{N}$ is set using the previous time instant
weights as
$w^{(i,\chi=k)}_{t}=K_{t}^{-1}\cdot w^{(i)}_{t-1},$ (7)
where $i\in\\{1,\ldots,N\\}$. Since $\\{w^{(i)}_{t}\\}_{i=1}^{N}$ and
measurement weights $\\{\gamma_{k}\\}_{k=1}^{K_{t}}$ are interdependent, we
cannot obtain a closed form expression for their updated values. Therefore, we
introduce a set of random variables $\\{V^{k}\\}_{k=1}^{K_{t}}$, referred to
as votes, to indirectly compute the updates for both
$\\{w^{(i)}_{t}\\}_{i=1}^{N}$ and $\\{\gamma_{k}\\}_{k=1}^{K_{t}}$. The random
variable $V^{k}$ denotes the confidence associated with the $k$th measurement
for a given value of the state $x_{t}$, and differs from $\gamma_{k}$ due to
this dependence on $x_{t}$. The dependence is captured by computing
the normalized residual $r^{k}_{i}$ for each $x_{t}^{(i,\chi)}$ in
$\tilde{\pi}_{t}$ as
$r^{k}_{i}=(\sigma_{x^{(i,\chi=k)}_{t}}^{k})^{-1}(\rho_{t}^{(k)}-\hat{\rho}_{x^{(i,\chi=k)}_{t}}^{k}).$
(8)
Using the initial values of $\\{w^{(i)}_{t}\\}_{i=1}^{N}$ and
$\\{\gamma_{k}\\}_{k=1}^{K_{t}}$ and the computed residuals
$\\{r^{k}_{i}\\}_{i=1,k=1}^{N,K_{t}}$, we then infer the probability
distribution of the random variables $\\{V^{k}\\}_{k=1}^{K_{t}}$ in the
expectation-step of the EM algorithm [30]. We model the probability
distribution of $\\{V^{k}\\}_{k=1}^{K_{t}}$ as a weighted set of votes
$\\{v^{k}_{i}\\}_{i=1,k=1}^{N,K_{t}}$
$v^{k}_{i}=P_{\mathcal{N}^{2}(0,1)}\left((r^{k}_{i})^{2}\right),$ (9)
where $P_{\mathcal{N}^{2}(0,1)}(\cdot)$ denotes the probability density
function of the square of a standard Gaussian random variable [42]. The squared
standard Gaussian distribution assigns a smaller probability to large
residuals than the Gaussian distribution does. This is in line with RAIM algorithms
that use the chi-squared distribution for detecting GNSS faults [11].
2. 2.
Vote Pooling. Next, we normalize and pool the votes cast by each particle in
the maximization-step to compute $\gamma_{k}$ for each measurement. We write
an expression for the total empirical probability $\pi_{t}^{tot}$ at time $t$
of measurements $M_{t}$ using the votes $\\{v^{k}_{i}\\}_{i=1,k=1}^{N,K_{t}}$
and measurement weights $\\{\gamma_{k}\\}_{k=1}^{K_{t}}$ as
$\pi_{t}^{tot}=\sum_{i=1}^{N}\sum_{k=1}^{K_{t}}\gamma_{k}w^{(i,\chi=k)}_{t}v^{k}_{i}.$
(10)
We maximize $\pi_{t}^{tot}$ with respect to $\\{\gamma_{k}\\}_{k=1}^{K_{t}}$
and the constraint $\sum_{k=1}^{K_{t}}\gamma_{k}=1$ to obtain an update rule
for vote pooling
$\gamma_{k}=\frac{\sum_{i=1}^{N}w^{(i,\chi=k)}_{t}v^{k}_{i}}{\sum_{i=1}^{N}\sum_{k^{\prime}=1}^{K_{t}}w^{(i,\chi=k^{\prime})}_{t}v^{k^{\prime}}_{i}}.$
(11)
This update rule can be seen as assigning an empirical probability to each
$\gamma_{k}$ according to the weighted votes.
3. 3.
GMM Weighting. Using the computed measurement weights
$\\{\gamma_{k}\\}_{k=1}^{K_{t}}$, we then update the weights of each extended
state-space particle. We compute the updated particle weights in the
logarithmic scale, since implementing the particle filter with
finite-precision floating-point arithmetic may lead to numerical
instability [43]. However, the GMM likelihood contains additive terms
inside the logarithm, so the logarithm cannot be readily distributed over
them. Hence, we consider an extended state-space that includes the
measurement association variable $\chi$ described in Section II-A. The
measurement likelihood
conditioned on $\chi$ can now be equivalently written as a Categorical
distribution likelihood [44]
$\displaystyle
L_{t}(x_{t})=P(M_{t}|x_{t},\chi)=\prod_{k=1}^{K_{t}}\left(\gamma_{k}\mathcal{N}_{x_{t}}(\rho^{(k)}_{t})\right)^{\mathbb{I}[\chi=k]},$
(12)
where $\mathbb{I}[\cdot]$ denotes an indicator function which is 1 if the
condition in its argument is true and zero otherwise;
$\mathcal{N}_{x_{t}}(\rho^{(k)}_{t})$ is shorthand notation for
$\mathcal{N}\left(\rho^{(k)}_{t}\mid\hat{\rho}_{x_{t}}^{k},\left(\sigma_{x_{t}}^{k}\right)^{2}\right)$;
and $\sum_{k=1}^{K_{t}}\gamma_{k}=1$. The log-likelihood from the above
expression is derived as
$\displaystyle\log
L_{t}(x_{t})=\sum_{k=1}^{K_{t}}\mathbb{I}[\chi=k]\left(\log\gamma_{k}+\log\mathcal{N}_{x_{t}}(\rho^{(k)}_{t})\right).$
(13)
We use the log-likelihood derived above to compute the new weights
$\\{w^{(i,\chi=k)}_{t}\\}_{i=1,k=1}^{N,K_{t}}$ for each particle in a stable
manner
$\displaystyle l^{(i)}_{k}$
$\displaystyle=\log L_{t}\big(x^{(i,\chi=k)}_{t}\big)-\max_{i,k}\left(\log L_{t}\big(x^{(i,\chi=k)}_{t}\big)\right),$
(14) $\displaystyle w^{(i,\chi=k)}_{t}$
$\displaystyle=\frac{\exp\left(l^{(i)}_{k}\right)}{\sum_{i=1}^{N}\sum_{k=1}^{K_{t}}\exp\left(l^{(i)}_{k}\right)}.$
(15)
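The sketch below combines the three steps into one EM-style iteration. The helper `rho_hat_fn` (an assumption for illustration) is taken to return the expected pseudoranges of all $K_{t}$ satellites for each extended particle; a small constant guards the singularity of the squared-Gaussian density at zero residual.

```python
import numpy as np

def iterative_weighting(ext, ext_w, rhos, rho_hat_fn, sigmas, n_iters=1):
    """Measurement Voting, Vote Pooling and GMM Weighting (Eqs. (8)-(15)).
    ext: (N, K, d) extended particles; ext_w: (N, K) weights from Eq. (7);
    rhos, sigmas: (K,) measured pseudoranges and standard deviations."""
    N, K, _ = ext.shape
    gamma = np.full(K, 1.0 / K)              # uniform initialization of gamma_k
    for _ in range(n_iters):
        rho_hat = rho_hat_fn(ext)            # (N, K) expected pseudoranges
        r = (rhos[None, :] - rho_hat) / sigmas[None, :]            # Eq. (8)
        # Eq. (9): pdf of a squared standard Gaussian (chi-square, 1 dof)
        # evaluated at r^2; the epsilon guards the singularity at zero.
        r2 = np.maximum(r ** 2, 1e-12)
        v = np.exp(-r2 / 2.0) / np.sqrt(2.0 * np.pi * r2)
        # Eq. (11): vote pooling -> measurement weights gamma_k.
        num = np.sum(ext_w * v, axis=0)
        gamma = num / num.sum()
        # Eqs. (13)-(15): particle weights updated in the log scale.
        log_gauss = -0.5 * r ** 2 - np.log(sigmas[None, :] * np.sqrt(2 * np.pi))
        log_L = np.log(gamma[None, :]) + log_gauss
        l = log_L - log_L.max()                                    # Eq. (14)
        ext_w = np.exp(l) / np.exp(l).sum()                        # Eq. (15)
    return ext_w, gamma
```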
### II-D Reduced Resampling Step
To obtain the updated particle distribution $\hat{\pi}_{t}$, represented by
the particles $\\{x_{t}^{(i)}\\}_{i=1}^{N}$, we redistribute the weighted
extended state-space particles via the sequential importance resampling (SIR)
procedure [45]. In doing so, we reduce the $N\times K_{t}$ extended particles
back to the original number of $N$ particles with equal weights
$\\{x_{t}^{(i)}\\}_{i=1}^{N}\leftarrow\text{SIR}\left(\\{x_{t}^{(i,\chi=k)},w_{t}^{(i,\chi=k)}\\}_{i=1,k=1}^{N,K_{t}}\right),$
(16)
where $\text{SIR}(\cdot)$ is the SIR procedure [45] that resamples from the
categorical distribution of the weighted extended state-space particles. The mean state
estimate $\hat{x}_{t}$ denotes the state solution from the algorithm, and is
computed as $\hat{x}_{t}=\sum_{i=1}^{N}w_{t}^{(i)}x_{t}^{(i)}$.
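A minimal sketch of this step, using NumPy categorical sampling as a stand-in for a full SIR implementation [45]:

```python
import numpy as np

def reduced_resampling(ext, ext_w, N):
    """Reduced resampling (Eq. (16)): draw N states from the categorical
    distribution over all N*K_t weighted extended particles, then form the
    mean state estimate under the resulting equal weights."""
    flat = ext.reshape(-1, ext.shape[-1])          # (N*K_t, d)
    idx = np.random.choice(flat.shape[0], size=N, p=ext_w.ravel())
    particles = flat[idx]
    weights = np.full(N, 1.0 / N)                  # equal weights after SIR
    x_hat = particles.mean(axis=0)                 # hat{x}_t = sum_i w^(i) x^(i)
    return particles, weights, x_hat
```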
### II-E Integrity Monitoring
A key feature of our integrity monitor is that we consider a non-Gaussian
probability distribution of the receiver state represented through particles.
Our integrity monitor is based on two fundamental assumptions: First,
sufficient redundancy exists in positioning information across the combination
of measurements from different satellites and multiple time epochs. Second,
the positions which are likely under faulty measurements have lower
correlation with the filter dynamics across time than the positions which are
likely under non-faulty measurements.
We develop a Bayesian framework for integrity monitoring inspired by [23]. At
each time instant, the integrity monitor calculates measures of misleading
information risk $P_{MIR}$ (referred to as integrity risk in [23]) and
accuracy $r_{A}$, which are derived later in the section. The integrity
monitor then compares the misleading information risk $P_{MIR}$ and accuracy
$r_{A}$ against prespecified reference values to detect whether the navigation
system output is safe to use. Following the Stanford-ESA integrity diagram
[46], we refer to events with unsafe output as hazardous operations. If hazardous
operations are detected, the integrity monitor declares the system unavailable.
#### Misleading Information Risk
In [23], the process of computing $P_{MIR}$ relies on the state probability
distribution tracked by several Kalman or particle filters. The filters are
characterized by considering multipath errors in different measurements, thus
tracking different position probability distributions. $P_{MIR}$ is then
measured by integrating the probability over all positions that lie outside
the specified AL. However, this approach for estimating $P_{MIR}$ is often
inaccurate, since both the Kalman filter and the particle filter algorithms
are designed to estimate the probability distribution in the vicinity of the
most probable states and not the tail-ends [47]. Therefore, we derive an
alternative approach to estimate $P_{MIR}$ directly from our GMM measurement
likelihood
$\displaystyle P_{MIR}$ $\displaystyle=P(\\{x_{t}\not\in\Omega_{I}\\}\mid
M_{t},u_{t},\hat{\pi}_{t-1})$
$\displaystyle=1-\frac{P(\\{x_{t}\in\Omega_{I}\\}\mid
u_{t},\hat{\pi}_{t-1})\,P(M_{t}\mid\\{x_{t}\in\Omega_{I}\\},u_{t},\hat{\pi}_{t-1})}{P(M_{t}\mid
u_{t},\hat{\pi}_{t-1})}$
$\displaystyle\approx\hat{P}_{MIR}=1-\frac{P(\\{x_{t}\in\Omega_{I}\\}\mid
u_{t},\hat{\pi}_{t-1})}{P(M_{t}\mid
u_{t},\hat{\pi}_{t-1})}\int_{x_{t}\in\Omega_{I}}|\Omega_{I}|^{-1}\sum^{K_{t}}_{k=1}\gamma_{k}\mathcal{N}\left(\rho_{t}^{(k)}\mid\hat{\rho}_{x_{t}}^{k},\left(\sigma_{x_{t}}^{k}\right)^{2}\right)dx_{t},$
(17)
where $\Omega_{I}$ denotes the set of positions that lie within AL about the
mean state estimate $\hat{x}_{t}$ and $|\Omega_{I}|$ denotes its total area.
For simplicity, we approximate the conditional distribution
${P(x_{t}\mid\\{x_{t}\in\Omega_{I}\\},u_{t},\hat{\pi}_{t-1})}$ as a uniform
distribution $|\Omega_{I}|^{-1}\ \forall x_{t}\in\Omega_{I}$. We empirically
verify that this approximation works well in practice for determining the
system availability.
For computational efficiency, we approximate the above integral using cubature
techniques for two-dimensional disks described in [48]. The term
$P(\\{x_{t}\in\Omega_{I}\\}\mid u_{t},\hat{\pi}_{t-1})$ is computed by adding
the weights of particles that lie inside AL based on the propagated
distribution $\tilde{\pi}_{t}$.
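A hedged sketch of this estimate is given below. It assumes the first two state components are the horizontal position, uses a hypothetical helper `gmm_likelihood` for the GMM likelihood $L_{t}$, and substitutes uniform Monte Carlo sampling over the disk $\Omega_{I}$ for the cubature rule of [48].

```python
import numpy as np

def estimate_mir(x_hat, AL, ext, ext_w, gmm_likelihood, n_mc=200):
    """Sketch of Eq. (17). `gmm_likelihood(x)` evaluates the GMM measurement
    likelihood at an array of states; non-position state components of the
    disk samples are held fixed at x_hat."""
    flat, w = ext.reshape(-1, ext.shape[-1]), ext_w.ravel()
    inside = np.linalg.norm(flat[:, :2] - x_hat[:2], axis=1) <= AL
    p_omega = w[inside].sum()                  # P({x_t in Omega_I} | ...)
    p_meas = np.sum(w * gmm_likelihood(flat))  # P(M_t | ...), marginalized
    # Uniform samples in the disk of radius AL about the mean estimate.
    r = AL * np.sqrt(np.random.rand(n_mc))
    th = 2.0 * np.pi * np.random.rand(n_mc)
    samples = np.tile(x_hat, (n_mc, 1))
    samples[:, 0] += r * np.cos(th)
    samples[:, 1] += r * np.sin(th)
    # The mean over uniform samples equals |Omega_I|^{-1} times the integral.
    return 1.0 - (p_omega / p_meas) * gmm_likelihood(samples).mean()
```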
#### Accuracy
Bayesian RAIM [23] defines $r_{A}$ as the radius of the smallest disk about
$\hat{x}_{t}$ that contains the vehicle state with a specified probability
$\alpha$. Unlike $P_{MIR}$, accuracy is determined primarily by the
probability mass near the state estimate and not the tail-ends. To compute
accuracy, we first approximate the particle distribution by a Gaussian
distribution with the mean parameter as $\hat{x}_{t}$ and covariance
$\mathbf{C}$ computed as a weighted estimate from samples as
$\mathbf{C}=\frac{1}{1-\sum_{i=1}^{N}\left(w^{(i)}_{t}\right)^{2}}\sum_{i=1}^{N}w^{(i)}_{t}(x^{(i)}_{t}-\hat{x}_{t})(x^{(i)}_{t}-\hat{x}_{t})^{\top}.$
(18)
Next, we estimate the accuracy $r_{A}$ using the inverse cumulative
probability distribution of the Gaussian distribution as
$r_{A}\approx\hat{r}_{A}=\max_{i=1,2}\sqrt{\mathbf{C}_{i,i}}\,\Phi^{-1}(\alpha),$
(19)
where $\hat{r}_{A}$ is the estimated accuracy; $\Phi^{-1}(\cdot)$ denotes the
inverse cumulative probability function for standard Gaussian distribution. A
smaller value of $r_{A}$ computed by the expression implies higher accuracy in
positioning.
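A short sketch of Eqs. (18)-(19), assuming the first two state components are the horizontal position and using SciPy for $\Phi^{-1}$:

```python
import numpy as np
from scipy.stats import norm

def estimate_accuracy(particles, weights, x_hat, alpha):
    """Weighted sample covariance of the horizontal position (Eq. (18)) and
    the Gaussian-based accuracy radius r_A (Eq. (19))."""
    diff = particles[:, :2] - x_hat[:2]
    C = (weights[:, None] * diff).T @ diff      # sum_i w_i d_i d_i^T
    C /= 1.0 - np.sum(weights ** 2)             # Eq. (18) normalization
    return np.max(np.sqrt(np.diag(C))) * norm.ppf(alpha)   # Eq. (19)
```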
#### Availability
We compare the estimated values of $\hat{P}_{MIR}$ and $\hat{r}_{A}$ against
their respective thresholds $P_{MIR}^{0}$ and $r^{0}_{A}$ specified by the
user
$(\hat{P}_{MIR}\leq P_{MIR}^{0})\quad\text{and}\quad(\hat{r}_{A}\leq
r_{A}^{0}).$ (20)
If either of the above constraints is violated, the integrity monitor declares
the system unavailable.
### II-F Computation Requirement
We analyze the computational requirement of our algorithm and compare it with
the requirement of the existing least squares, robust state estimation,
residual-based RAIM and particle filter-based approaches. The required
computation in the existing least-squares and robust state estimation
approaches (both filter based and non-filter based) is proportional to the
number of available measurements $k$, since each measurement is only seen once
to compute the position. For residual-based RAIM algorithms that remove faults
iteratively [11, 12, 13, 14], the maximum required computation grows
proportionally to $mk$ since it depends both on the number of iterations
(maximum $m$) and on the computation required per iteration (proportional to
$k$). For the existing particle filter-based methods that employ several
filters [19, 21, 20, 22], the computation depends on the number of filters,
which scales proportionally to $k^{m}$.
Instead, our approach grows linearly in computation with the number of
available measurements. For $n$ particles and $k$ available measurements, each
component in our framework has a maximum computational requirement
proportional to $nk$ irrespective of the number of faults present. Hence, our
approach exhibits smaller computational requirements than existing particle
filter-based methods, and similar requirements to residual-based RAIM, least-
squares and robust state estimation algorithms with respect to the number of
GNSS measurements.
## III Experimentation Results
We evaluated the framework on both localization and integrity monitoring
performance, using simulated and real-world driving scenarios. For the
experiments on simulated data, we validated our algorithm across multiple
trajectories with various simulated noise profiles that imitate GNSS
measurements in urban environments.
For our first set of experiments on evaluating the localization performance
under GNSS measurement faults, we consider two baselines:
1. a)
Joint state-space particle filter (J-PF), which is based on the particle
filter algorithm proposed by Pesonen [23] and considers both the vehicle
navigation parameters and the measurement fault vector (signal quality parameters)
in the particle filter state-space. We refer to the combined state-space
comprising all the parameters as the _joint state-space_. Since J-PF
independently considers both the measurement faults and the state variables,
it can also be interpreted as a bank of particle filters [49, 22]. Since the
computational requirement of J-PF grows combinatorially with the number of
considered measurement faults, we limit the algorithm to consider at most two faults.
Furthermore, we assume a uniformly distributed fault transition probability in
the particle filter.
2. b)
Kalman filter RAIM (KF-RAIM), which combines the residual-based RAIM algorithm
[11] with Kalman filter for fault-robust sequential state estimation. Similar
to the fault exclusion strategy employed in [17] and [50], KF-RAIM detects and
removes measurement faults iteratively by applying a global and a local test
using statistics derived from normalized pseudorange measurement residuals.
The local test is repeated to remove faulty measurements until the global test
succeeds and declares the estimated position safe to use.
For our second set of experiments for evaluating the performance in
determining the system availability, we compare our approach against Bayesian
RAIM [23], which determines the system availability through measures of
misleading information risk and accuracy computed by integrating the J-PF
probability distribution.
All the filters are initialized at the ground truth initial position with a
specified standard deviation $\sigma_{init}$. Note that for our approach, we
only used a single iteration in the weighting step in our experiments on
simulated data, since we observed that a single iteration suffices to provide
good performance at low computation cost.
#### Localization Performance Metrics
For evaluating the positioning performance of our approach, we compute the
metrics of horizontal root mean square error (RMSE) as well as the percentage
of estimates that deviate more than 15 m from the ground truth ($\%{>}15$ m).
We compute these metrics as
$RMSE=\sqrt{\frac{1}{T}\sum_{t=1}^{T}\|\hat{x}_{t}-x_{t}^{*}\|_{\text{pos}}^{2}},$
(21) $\%{>}15\text{ m}=\frac{N(\|\hat{x}_{t}-x_{t}^{*}\|_{\text{pos}}>15)}{T},$ (22)
where $T$ denotes the total number of timesteps for which the algorithm is
run; $x^{*}_{t}$ denotes the ground truth state of the receiver;
$\|x\|_{\text{pos}}$ denotes the Euclidean norm of the horizontal position
coordinates in state $x$; and $N(I)$ denotes the number of occurrences of
event $I$.
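A minimal sketch computing both metrics from arrays of estimated and ground-truth horizontal positions:

```python
import numpy as np

def localization_metrics(est_pos, true_pos, thresh=15.0):
    """Eqs. (21)-(22): horizontal RMSE and the percentage of epochs whose
    position error exceeds `thresh` meters; inputs are (T, 2) arrays."""
    err = np.linalg.norm(est_pos - true_pos, axis=1)
    rmse = np.sqrt(np.mean(err ** 2))
    pct_over = 100.0 * np.mean(err > thresh)
    return rmse, pct_over
```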
#### Integrity Monitoring Performance Metrics
We evaluate the performance of our integrity monitor on its capability to
declare the navigation system unavailable when the system is under
hazardous operations. In our experiments, we refer to hazardous operations as
the event when the positioning error in the navigation system exceeds an alarm
limit, which we set to $15$ m. For evaluation, our metrics consist of
estimated probability of false alarm $\hat{P}(FA)$ and integrity risk
$\hat{P}(IR)$ computed as
$\hat{P}(FA)=\frac{N(\neg I_{av}\bigcap\neg I_{ho})}{T},$ (23)
$\hat{P}(IR)=\frac{N(I_{av}\bigcap I_{ho})}{T},$ (24)
where $I_{av}(\neg I_{av})$ denotes the event when the navigation system is
declared available (unavailable) by the IM algorithm and $I_{ho}(\neg I_{ho})$
denotes the event of hazardous operations (normal operations).
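These estimates reduce to simple event counts, as in the sketch below, which takes boolean per-epoch arrays of declared availability ($I_{av}$) and hazardous operations ($I_{ho}$):

```python
import numpy as np

def im_metrics(available, hazardous):
    """Eqs. (23)-(24): estimated probability of false alarm and integrity
    risk from boolean per-epoch arrays."""
    T = len(available)
    p_fa = np.sum(~available & ~hazardous) / T  # declared unavailable, but safe
    p_ir = np.sum(available & hazardous) / T    # declared available, but hazardous
    return p_fa, p_ir
```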
The rest of the section is organized as follows: (A) evaluation of the
localization performance on simulated data and comparison of our approach with
KF-RAIM and J-PF approaches, (B) evaluation of our approach on real-world
data, (C) comparison of our integrity monitoring algorithm with Bayesian RAIM
[23] on simulated scenarios, (D) analysis of the computation time of our
algorithm and comparison in terms of computation with KF-RAIM and J-PF
approaches, and (E) discussion of the results.
### III-A Evaluation on simulated scenarios
We evaluated our approach on multiple simulated driving scenarios with varying
noise profiles in GNSS and odometry measurements. Our choice of simulations is
motivated by two factors: accurate ground truth information is available and
multiple measurement sequences with different noise values can be obtained.
Our simulation environment consists of a vehicle moving on the horizontal
surface according to 50 different randomly generated trajectories of length
approximately $4000$ m each. The vehicle obtains noisy GNSS ranging
measurements from multiple simulated satellites as well as noisy odometry
measurements. The simulated satellites move with a constant velocity of $1000$
m/s along well-separated paths in random directions at fixed heights of
$2\times 10^{7}$ m from the horizontal surface. The odometry consists of
vehicle speed measurements, and an accurate heading direction of the vehicle
is assumed to be known (e.g., from a high-accuracy magnetometer). A driving
scenario lasts for $400$ s with measurements acquired at the rate of $1$ Hz.
In each state estimation algorithm, we track the 2D state $(x,y)$ of the
vehicle. We limit ourselves to the simple 2D case in simulation in view of the
computational overhead from requiring significantly more particles in larger
state-spaces. The parameters used for the simulation and filter are given in
Table I.
Parameter | Value
---|---
# of satellites | $5$-$10$
GNSS ranging noise $\sigma_{GNSS}$ | $5$-$10$ m
GNSS ranging bias error magnitude | $50$-$200$ m
Maximum # of faulty measurements | $1$-$6$
GNSS fault change probability | $0.2$
Vehicle speed | $10$ m/s
Odometry noise $\sigma_{u}$ | $5$ m/s
# of particles | $500$
Filter initialization $\sigma_{init}$ | $5$ m
Filter measurement model noise $\\{\sigma^{k}\\}_{k=1}^{K}$ | $5$ m
Filter propagation model noise $\sigma_{f}$ | $5$ m
Alarm limit $AL$ | $15$-$20$ m
Accuracy probability requirement $\alpha$ | $0.5$
TABLE I: Experimental parameters for simulated scenario
We induce two kinds of noise profiles in our pseudorange measurements from the
simulation: a zero-mean Gaussian random noise on all measurements and a
random bias noise on a subset of measurements. The subset of measurements
containing the bias noise is initialized randomly, and has a small probability
of changing to a different random subset at every subsequent time instant. The
number of affected measurements is selected randomly between zero and a
specified maximum number of faults at every change. The zero-mean Gaussian
noise is applied to all measurements, with the variance doubled for biased
measurements. Using our noise model, we simulate the effects
of non line-of-sight signals and multipath on pseudorange measurements in
urban GNSS navigation.
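A minimal sketch of this noise model follows; the per-measurement random sign of the bias is an illustrative assumption, and `faulty` is a boolean mask over the $K$ measurements:

```python
import numpy as np

def corrupt_pseudoranges(rho_true, faulty, bias, sigma):
    """Simulated noise model: zero-mean Gaussian noise on all measurements,
    with doubled variance for the faulty subset, plus a bias error on that
    subset (bias and sigma values follow Table I)."""
    std = np.where(faulty, np.sqrt(2.0) * sigma, sigma)   # doubled variance
    rho = rho_true + np.random.normal(0.0, std)
    signs = np.random.choice([-1.0, 1.0], size=int(faulty.sum()))
    rho[faulty] += signs * bias
    return rho
```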
#### Localization performance
| | Few faults | | Many faults
---|---|---|---|---
| | $(5,1)$ | $(5,2)$ | | $(7,4)$ | $(10,6)$
KF-RAIM | RMSE(m) | $7.6$ | $21.5$ | | $34.3$ | $34.4$
$\%{>}15$ m | $5.9$ | $68.7$ | | $87.4$ | $91.1$
J-PF | RMSE(m) | $\mathbf{4.8}$ | $\mathbf{5.8}$ | | $33.5$ | $22.9$
$\%{>}15$ m | $\mathbf{1.2}$ | $\mathbf{3.1}$ | | $94.2$ | $85.4$
Ours | RMSE(m) | $11.0$ | $12.4$ | | $\mathbf{13.2}$ | $\mathbf{12.4}$
$\%{>}15$ m | $23.4$ | $26.6$ | | $\mathbf{33.1}$ | $\mathbf{28.7}$
TABLE II: Comparison of localization performance on the simulated dataset. The
dataset covers varying total numbers of measurements and maximum numbers of
measurement faults (total # of measurements, max # of faults). Our approach
demonstrates better localization performance in scenarios with a large number
of faults in comparison to J-PF and KF-RAIM.
In this experiment, we vary the number of available and faulty GNSS
measurements at a time instant. We compare the localization performance of our
algorithm with that of the J-PF and KF-RAIM approaches. We consider the following
scenarios (total # of measurements, max # of faults): $(5,1)$, $(5,2)$,
$(7,4)$, $(10,6)$. For each of these scenarios, we induce a bias error of
$100$ m with a standard deviation of $5$ m in the GNSS measurements. We
compute the localization metrics for $50$ full runs of duration $400$ s each
across different randomly generated trajectories.
The trajectories for two scenarios, $(5,1)$ and $(10,6)$, are visualized for
each algorithm in Fig. 3 and the computed metrics are recorded in Table II. As
can be seen from the qualitative and quantitative results, our approach
demonstrates better localization performance than the compared approaches for
scenarios with high noise degradation while maintaining a low rate of large
positioning errors. J-PF has better performance than its counterparts in
scenarios with single faults since it considers all the possibilities of
faults separately. However, its performance worsens in scenarios with more
than 2 measurement faults. Similarly, KF-RAIM is able to identify and remove
faults in scenarios with few faulty measurements resulting in lower
positioning errors, but has poor performance in many-fault scenarios.
(a) $1$ faulty GNSS measurement out of $5$ in total.
(b) $6$ faulty GNSS measurements out of $10$ in total.
Figure 3: Localization accuracy comparison between our approach, J-PF, and KF-
RAIM approaches for (a) few-fault scenario and (b) many-fault scenario. We fix
the underlying ground truth trajectory to be a square of side $1000$ m with
$(0,0)$ as both the start and end positions. Our approach estimates the
vehicle state with better accuracy than J-PF and KF-RAIM in scenarios with
many faulty measurements.
#### Varying GNSS measurement noise
In this experiment, we study the impact of GNSS measurement noise bias and
standard deviation values on the performance of our algorithm. First, keeping
the GNSS measurement standard deviation fixed at $5$ m, we vary the bias
from $10$ to $100$ m in increments of $10$ m. Next, keeping the bias fixed at
$100$ m, we vary the standard deviation from $5$ to $25$ m in increments of $5$
m. For all cases, the total number of measurements is kept at $7$ and the
maximum number of faults is $3$. The RMSE for both studies are computed across
$20$ different runs and plotted in Fig. 4.
The first plot shows that the localization performance of our approach
initially deteriorates and then starts improving. This is because for low bias
error values, the faulty GNSS measurements do not have a significant impact on
the localization solution, even if the algorithm fails to remove them. Beyond
the bias error value of $50$ m, the algorithm successfully assigns lower
measurement weights to faulty measurements, resulting in lower RMSE for higher
bias error values.
The second plot demonstrates the impact of high measurement noise standard
deviation on our approach. As the measurement noise standard deviation is
increased, the RMSE increases in a superlinear fashion. The performance
deteriorates with increasing noise standard deviation since both the tasks of
fault mitigation as well as localization are negatively impacted by an
increased measurement noise.
Figure 4: Sensitivity analysis of the localization performance of our approach
with varying values of GNSS measurement (a) bias and (b) standard deviation.
For low values of bias, the faults do not significantly impact the navigation
solution, resulting in low RMSE values. Beyond the value of $50$ m, our
approach is able to remove the impact of faulty measurements resulting in low
RMSE values. The performance of the algorithm increasingly deteriorates with
higher standard deviation in GNSS measurement noise since both the capability
to remove faulty measurements as well as localization are hampered by an
increased noise.
### III-B Evaluation on real-world data
Figure 5: Real-world dataset for validation of our approach. We use Frankfurt
Westend-Tower sequence from smartLoc project dataset collected by TU Chemnitz
[51]. The trajectory followed by the vehicle is shown in a). The vehicle moves
from the start point (green) to end point (red) along the trajectory (cyan) in
the direction specified by red arrows. b) shows the residuals of the
pseudorange measurements with respect to ground truth position spread across 2
minutes (receiver clock drift errors removed via filtering). Multiple
measurements demonstrate bias errors due to multipath and non line-of-sight
effects in the urban environment.
For the real-world case, we use GNSS pseudorange and odometry measurements
from the publicly available urban GNSS dataset collected by TU Chemnitz for
the smartLoc project [51]. We consider the Frankfurt Westend-Tower trajectory
from the dataset for evaluating our approach. The trajectory is ${\sim}2300$ m
long and has both dense and sparse urban regions, with $32\%$ of total
measurements as NLOS signals.
The dataset was collected on a vehicle moving in urban environments, using a
low-cost u-blox EVK-M8T GNSS receiver at 5 Hz and CAN data for odometry at 50
Hz. The odometry data contains both velocity and yaw-rate measurements from
the vehicle CAN bus. The ground truth or reference position is provided using
a high-cost NovAtel GNSS receiver at 20 Hz. Additionally, corrections to
satellite ephemeris data are also provided as an SP3 file along with the data.
The receiver collects ${\sim}12$ pseudorange measurements at each time instant
from GPS, GLONASS, and Galileo constellations. As a preprocessing step, we
remove inter-constellation clock error differences by subtracting the initial
measurement residuals from all measurements, such that the measurements have
zero residuals at the initial position. Fig. 5 shows the reference trajectory
used for evaluation as well as measurement residuals with respect to the
ground truth position in a dense urban region for a duration of $2$ minutes.
Our state-space consists of the vehicle position $(x,y)$, heading $\theta$,
and clock bias error $\delta t$. To mitigate the impact of clock drift, we
denote clock bias error $\delta t$ as the difference between the true receiver
clock bias and a simple linear drift model with precomputed drift rate, and
estimate this difference across time instants. We do not track the clock drift
to keep the state size small for computational tractability. Note that in
practice, the clock parameters can be determined separately using least-
squares while the positioning parameters can be tracked by our approach for
efficient computation. For our approach and J-PF, we use 1000 particles to
account for a larger state space size than the simulated scenario.
Additionally, we use 5 iterations of weighting in our approach, which is
empirically determined to achieve small positioning errors on the scenario.
Figure 6: Estimated vehicle trajectories on real-world data. Our approach (a)
produces trajectories that are closer to the ground truth as compared to
trajectory from J-PF (b) and KF-RAIM (c). The horizontal RMSE values are
computed by averaging over 20 runs with different randomness values in
initialization and propagation. In regions where multiple measurements are
contaminated with bias errors, our approach is able to localize better than
the baselines by assigning high measurement weights to measurements that are
consistent with the particle filter probability distribution of vehicle state.
Fig. 6 shows vehicle trajectories estimated using our approach, J-PF, and KF-
RAIM in the local (North-East-Down) frame of reference. Trajectory estimates
from our approach are closer to the ground truth than those of the compared
approaches and thereby have lower horizontal RMSE values. J-PF and KF-RAIM achieve similar
RMSE values and visually exhibit a higher variance in state estimation error
than our approach.
### III-C Comparison of integrity monitoring approaches
Since obtaining a large amount of real-world data for statistical validation
is difficult and out of scope for this work, we restrict our analysis of the
integrity monitoring performance to simulated data. For the simulated
scenario, we generate 400 s long trajectories with GNSS pseudorange
measurements acquired at the rate of 1 Hz (Fig. 7 a)). GNSS pseudorange
measurements are simulated from 10 different satellites. At time instants
between 125-175 s, we add bias errors in up to $60\%$ of the available
measurements, such that the faulty measurements correspond to a position
offset from the ground truth position. The position offset is randomly
selected across different runs from between $50-150$ m. To limit our analysis
to GNSS measurements, we do not simulate any odometry measurements for the
simulations and localize only using GNSS pseudorange measurements for all the
filters. For the particle filter algorithms, we set the propagation noise
standard deviation to $20$ m to ensure that the position can be tracked in the
absence of odometry measurements.
Figure 7: (a) Simulation experiment for analyzing integrity monitoring
performance. The start and end of the 400 s long trajectory are marked in
green and red, respectively. Bias errors are induced in up to $60\%$ of the
total 10 simulated GNSS measurements between $125-175$ s (yellow region). (b)
Minimum attained probability of false alarm and integrity risk across
different threshold values for our approach and Bayesian RAIM [23] for
different number of particles ($100$, $500$) and alarm limit ($10$ m, $15$ m).
The values are computed from more than $10^{4}$ samples across multiple runs.
Our approach exhibits lower false alarms and smaller integrity risk than
Bayesian RAIM across all thresholds for each configuration of the number of
particles and alarm limit.
Both Bayesian RAIM [23] and our approach depend on different thresholds for
determining the system availability. In both approaches, reference values of
minimum accuracy and misleading information risk are required, whose optimal
values depend on the scenario and the algorithm. Varying these threshold
values results in a trade-off between $\hat{P}(FA)$ and $\hat{P}(IR)$. For
example, setting the threshold such that an alarm is always generated produces
many false alarms but no missed identifications. Therefore, we compare the
integrity monitors' performance by computing $\hat{P}(FA)$ and $\hat{P}(IR)$
across all different values of the thresholds and for two different settings
of the number of particles ($100$, $500$) and the alarm limit ($10$ m, $15$
m) (Fig. 7 b)). All the metrics are calculated using more than $10^{4}$
samples across multiple runs of the simulation for each algorithm. Fig. 7 b)
shows that our approach generates fewer false alarms and missed
identifications than Bayesian RAIM across different threshold values for each
considered parameter setting.
### III-D Discussion
Our analysis and experiments on both the simulated and the real-world driving
data suggest that our framework a) exhibits lower positioning errors than
existing approaches in environments with a high fraction of faulty GNSS
measurements, b) detects hazardous operating conditions with superior
performance to Bayesian RAIM, and c) mitigates faults in multiple GNSS
measurements in a tractable manner.
Although the localization performance of our approach in scenarios with
multiple GNSS faults is promising, the performance in scenarios with few
faults is worse than that of the existing approaches. This poorer performance
results from conservative design choices in the framework, such as the GMM
measurement likelihood model instead of the commonly used multivariate
Gaussian model. However, we believe that a conservative design is necessary
for robust state estimation in challenging urban scenarios with multiple
faults. Therefore, exploring hybrid approaches that switch between
localization algorithms is an avenue for future research.
Another drawback of our framework is that the approach for determining system
availability often generates false alarms. This is because of the large
uncertainty within the GMM likelihood components, which in turn results in a
conservative estimate of the misleading information risk. In future work, we
will explore methods to reduce this uncertainty by incorporating additional
measurements and sources of information such as camera images, carrier-phase,
and road networks.
## IV Conclusion
In this paper, we presented a novel probabilistic framework for fault-robust
localization and integrity monitoring in challenging scenarios with faults in
several GNSS measurements. The presented framework leverages GNSS and odometry
measurements to compute a fault-robust probability distribution of the
position and declares the navigation system unavailable if a reliable position
cannot be estimated. We employed a particle filter for state estimation and
developed a novel GMM likelihood for computing particle weights from GNSS
measurements while mitigating the impact of measurement errors due to
multipath and NLOS signals. Our approach for mitigating these errors is based
on the expectation-maximization algorithm and determines the GMM weight
coefficients and particle weights in an iterative manner. To determine the
system availability, we derived measures of the misleading information risk
and accuracy that are compared with specified reference values. Through a
series of experiments on challenging simulated and real-world urban driving
scenarios, we have shown that our approach achieves lower positioning errors
in state estimation as well as a smaller probability of false alarm and integrity
risk in integrity monitoring when compared to the existing particle filter-
based approach. Furthermore, our approach is capable of mitigating multiple
measurement faults with lower computation requirements than the existing
particle filter-based approaches. We believe that this work offers a promising
direction for real-time deployment of algorithms in challenging urban
environments.
## Acknowledgements
This material is based upon work supported by the National Science Foundation
under award #2006162.
## References
* [1] N. Zhu, J. Marais, D. Betaille, and M. Berbineau, “GNSS Position Integrity in Urban Environments: A Review of Literature,” _IEEE Transactions on Intelligent Transportation Systems_ , vol. 19, no. 9, pp. 2762–2778, Sep. 2018.
* [2] Y. J. Morton, F. S. T. Van Diggelen, J. J. Spilker, and B. W. Parkinson, Eds., _Position, navigation, and timing technologies in the 21st century: integrated satellite navigation, sensor systems, and civil applications_ , first edition ed. Hoboken: Wiley/IEEE Press, 2020.
* [3] C. D. Karlgaard, “Nonlinear Regression Huber–Kalman Filtering and Fixed-Interval Smoothing,” _Journal of Guidance, Control, and Dynamics_ , vol. 38, no. 2, pp. 322–330, Feb. 2015. [Online]. Available: https://arc.aiaa.org/doi/10.2514/1.G000799
* [4] H. Pesonen, “Robust estimation techniques for gnss positioning,” in _Proceedings of NAV07-The Navigation Conference and Exhibition_ , vol. 31, no. 1.11, 2007.
* [5] D. A. Medina, M. Romanovas, I. Herrera-Pinzon, and R. Ziebold, “Robust position and velocity estimation methods in integrated navigation systems for inland water applications,” in _2016 IEEE/ION Position, Location and Navigation Symposium (PLANS)_. Savannah, GA: IEEE, Apr. 2016, pp. 491–501. [Online]. Available: http://ieeexplore.ieee.org/document/7479737/
* [6] O. G. Crespillo, D. Medina, J. Skaloud, and M. Meurer, “Tightly coupled GNSS/INS integration based on robust M-estimators,” in _2018 IEEE/ION Position, Location and Navigation Symposium (PLANS)_. Monterey, CA: IEEE, Apr. 2018, pp. 1554–1561. [Online]. Available: https://ieeexplore.ieee.org/document/8373551/
* [7] J. Lesouple, T. Robert, M. Sahmoudi, J.-Y. Tourneret, and W. Vigneau, “Multipath Mitigation for GNSS Positioning in an Urban Environment Using Sparse Estimation,” _IEEE Transactions on Intelligent Transportation Systems_ , vol. 20, no. 4, pp. 1316–1328, Apr. 2019.
* [8] M. Joerger, F.-C. Chan, and B. Pervan, “Solution Separation Versus Residual-Based RAIM,” _Navigation_ , vol. 61, no. 4, pp. 273–291, Dec. 2014.
* [9] R. G. Brown and P. W. McBurney, “Self-Contained GPS Integrity Check Using Maximum Solution Separation,” _Navigation_ , vol. 35, no. 1, pp. 41–53, Mar. 1988.
* [10] Y. C. Lee et al., “Analysis of range and position comparison methods as a means to provide GPS integrity in the user receiver,” in _Proceedings of the 42nd Annual Meeting of the Institute of Navigation_. Citeseer, 1986, pp. 1–4.
* [11] B. W. Parkinson and P. Axelrad, “Autonomous GPS Integrity Monitoring Using the Pseudorange Residual,” _Navigation_ , vol. 35, no. 2, pp. 255–274, Jun. 1988.
* [12] M. A. Sturza, “Navigation System Integrity Monitoring Using Redundant Measurements,” _Navigation_ , vol. 35, no. 4, pp. 483–501, Dec. 1988.
* [13] M. Joerger and B. Pervan, “Sequential residual-based RAIM,” in _Proceedings of the 23rd international technical meeting of the satellite division of the institute of navigation (ION GNSS 2010)_ , 2010, pp. 3167–3180.
* [14] Z. Hongyu, H. Li, and L. Jing, “An optimal weighted least squares RAIM algorithm,” in _2017 Forum on Cooperative Positioning and Service (CPGPS)_. Harbin, China: IEEE, May 2017, pp. 122–127.
* [15] A. Grosch, O. Garcia Crespillo, I. Martini, and C. Gunther, “Snapshot residual and Kalman Filter based fault detection and exclusion schemes for robust railway navigation,” in _2017 European Navigation Conference (ENC)_. Lausanne, Switzerland: IEEE, May 2017, pp. 36–47.
* [16] S. Hewitson and J. Wang, “Extended Receiver Autonomous Integrity Monitoring (eRAIM) for GNSS/INS Integration,” _Journal of Surveying Engineering_ , vol. 136, no. 1, pp. 13–22, Feb. 2010.
* [17] H. Leppakoski, H. Kuusniemi, and J. Takala, “RAIM and Complementary Kalman Filtering for GNSS Reliability Enhancement,” in _2006 IEEE/ION Position, Location, And Navigation Symposium_. Coronado, CA: IEEE, 2006, pp. 948–956. [Online]. Available: http://ieeexplore.ieee.org/document/1650695/
* [18] Z. Li, D. Song, F. Niu, and C. Xu, “RAIM algorithm based on robust extended Kalman particle filter and smoothed residual,” in _Lect. Notes Electr. Eng._ , 2017, iSSN: 18761119.
* [19] C. Boucher, A. Lahrech, and J.-C. Noyer, “Non-linear filtering for land vehicle navigation with GPS outage,” in _2004 IEEE International Conference on Systems, Man and Cybernetics (IEEE Cat. No.04CH37583)_ , vol. 2. The Hague, Netherlands: IEEE, 2004, pp. 1321–1325.
* [20] J.-S. Ahn, R. Rosihan, D.-H. Won, Y.-J. Lee, G.-W. Nam, M.-B. Heo, and S.-K. Sung, “GPS Integrity Monitoring Method Using Auxiliary Nonlinear Filters with Log Likelihood Ratio Test Approach,” _Journal of Electrical Engineering and Technology_ , vol. 6, no. 4, pp. 563–572, Jul. 2011.
* [21] E. Wang, T. Pang, Z. Zhang, and P. Qu, “GPS Receiver Autonomous Integrity Monitoring Algorithm Based on Improved Particle Filter,” _Journal of Computers_ , vol. 9, no. 9, pp. 2066–2074, Sep. 2014.
* [22] E. Wang, C. Jia, G. Tong, P. Qu, X. Lan, and T. Pang, “Fault detection and isolation in GPS receiver autonomous integrity monitoring based on chaos particle swarm optimization-particle filter algorithm,” _Adv. Sp. Res._ , 2018.
* [23] H. Pesonen, “A Framework for Bayesian Receiver Autonomous Integrity Monitoring in Urban Navigation,” _Navigation_ , vol. 58, no. 3, pp. 229–240, Sep. 2011.
* [24] Y. Sun and L. Fu, “A New Threat for Pseudorange-Based RAIM: Adversarial Attacks on GNSS Positioning,” _IEEE Access_ , vol. 7, pp. 126 051–126 058, 2019.
* [25] S. Gupta and G. X. Gao, “Particle RAIM for Integrity Monitoring,” Miami, Florida, Oct. 2019, pp. 811–826.
* [26] S. Pullen, T. Walter, and P. Enge, “SBAS and GBAS Integrity for Non-Aviation Users: Moving Away from ‘Specific Risk’.”
* [27] B. W. Parkinson, P. Enge, P. Axelrad, and J. J. Spilker Jr, _Global positioning system: Theory and applications, Volume II_. American Institute of Aeronautics and Astronautics, 1996.
* [28] C. Tanil, S. Khanafseh, M. Joerger, and B. Pervan, “Sequential Integrity Monitoring for Kalman Filter Innovations-based Detectors,” Miami, Florida, Oct. 2018, pp. 2440–2455.
* [29] D. Panagiotakopoulos, A. Majumdar, and W. Y. Ochieng, “Extreme value theory-based integrity monitoring of global navigation satellite systems,” _GPS Solutions_ , vol. 18, no. 1, pp. 133–145, Jan. 2014. [Online]. Available: http://link.springer.com/10.1007/s10291-013-0317-9
* [30] J. Vila and P. Schniter, “Expectation-maximization Gaussian-mixture approximate message passing,” in _2012 46th Annual Conference on Information Sciences and Systems (CISS)_. Princeton, NJ, USA: IEEE, Mar. 2012, pp. 1–6.
* [31] D. N. Geary, G. J. McLachlan, and K. E. Basford, “Mixture Models: Inference and Applications to Clustering.” _Journal of the Royal Statistical Society. Series A (Statistics in Society)_ , vol. 152, no. 1, p. 126, 1989.
* [32] A. Doucet and A. M. Johansen, “A tutorial on particle filtering and smoothing: Fifteen years later,” _Handbook of nonlinear filtering_ , vol. 12, no. 656-704, p. 3, 2009.
* [33] A. P. Dempster, N. M. Laird, and D. B. Rubin, “Maximum Likelihood from Incomplete Data Via the EM Algorithm,” _Journal of the Royal Statistical Society: Series B (Methodological)_ , vol. 39, no. 1, pp. 1–22, Sep. 1977.
* [34] L.-T. Hsu, Y. Gu, and S. Kamijo, “NLOS Correction/Exclusion for GNSS Measurement Using RAIM and City Building Models,” _Sensors_ , vol. 15, no. 7, pp. 17 329–17 349, Jul. 2015.
* [35] S. Miura, S. Hisaka, and S. Kamijo, “GPS multipath detection and rectification using 3D maps,” in _16th International IEEE Conference on Intelligent Transportation Systems (ITSC 2013)_. The Hague, Netherlands: IEEE, Oct. 2013, pp. 1528–1534.
* [36] L.-T. Hsu, F. Chen, and S. Kamijo, “Evaluation of Multi-GNSSs and GPS with 3D Map Methods for Pedestrian Positioning in an Urban Canyon Environment,” _IEICE Transactions on Fundamentals of Electronics, Communications and Computer Sciences_ , vol. E98.A, no. 1, pp. 284–293, 2015.
* [37] P. A. M. Dirac, _The principles of quantum mechanics_. Oxford university press, 1981, no. 27.
* [38] Z. Chen et al., “Bayesian filtering: From Kalman filters to particle filters, and beyond,” _Statistics_ , vol. 182, no. 1, pp. 1–69, 2003.
* [39] M. Joerger and B. Pervan, “Kalman Filter Residual-Based Integrity Monitoring Against Measurement Faults,” in _AIAA Guidance, Navigation, and Control Conference_. Minneapolis, Minnesota: American Institute of Aeronautics and Astronautics, Aug. 2012.
* [40] L. Yu, “Research on GPS RAIM Algorithm Based on SIR Particle Filtering State Estimation and Smoothed Residual,” _Appl. Mech. Mater._ , vol. 422, pp. 196–203, 2013.
* [41] X. Zhao, Y. Qian, M. Zhang, J. Niu, and Y. Kou, “An improved adaptive Kalman filtering algorithm for advanced robot navigation system based on GPS/INS,” in _2011 IEEE International Conference on Mechatronics and Automation_. Beijing, China: IEEE, Aug. 2011, pp. 1039–1044.
* [42] M. K. Simon, _Probability distributions involving Gaussian random variables: A handbook for engineers and scientists_. Springer Science & Business Media, 2007.
* [43] C. Gentner, S. Zhang, and T. Jost, “Corrigendum to “Log-PF: Particle Filtering in Logarithm Domain”,” _Journal of Electrical and Computer Engineering_ , vol. 2018, pp. 1–1, 2018.
* [44] C. M. Bishop, _Pattern recognition and machine learning_. springer, 2006.
* [45] F. Gustafsson, F. Gunnarsson, N. Bergman, U. Forssell, J. Jansson, R. Karlsson, and P.-J. Nordlund, “Particle filters for positioning, navigation, and tracking,” _IEEE Transactions on Signal Processing_ , vol. 50, no. 2, pp. 425–437, Feb. 2002.
* [46] M. Tossaint, J. Samson, F. Toran, J. Ventura-Traveset, M. Hernandez-Pajares, J. Juan, J. Sanz, and P. Ramos-Bosch, “The Stanford - ESA Integrity Diagram: A New Tool for The User Domain SBAS Integrity Assessment,” _Navigation_ , vol. 54, no. 2, pp. 153–162, Jun. 2007.
* [47] R. Van Der Merwe, A. Doucet, N. De Freitas, and E. A. Wan, “The unscented particle filter,” in _Advances in neural information processing systems_ , 2001, pp. 584–590.
* [48] F. G. Lether, “A Generalized Product Rule for the Circle,” _SIAM Journal on Numerical Analysis_ , vol. 8, no. 2, pp. 249–253, Jun. 1971.
* [49] E. Wang, T. Pang, M. Cai, and Z. Zhang, “Fault Detection and Isolation for GPS RAIM Based on Genetic Resampling Particle Filter Approach,” _TELKOMNIKA Indonesian Journal of Electrical Engineering_ , vol. 12, no. 5, pp. 3911–3919, May 2014.
* [50] K. Gunning, J. Blanch, T. Walter, L. de Groot, and L. Norman, “Integrity for Tightly Coupled PPP and IMU,” Miami, Florida, Oct. 2019, pp. 3066–3078.
* [51] P. Reisdorf, T. Pfeifer, J. Breßler, S. Bauer, P. Weissig, S. Lange, G. Wanielik, and P. Protzel, “The problem of comparable gnss results–an approach for a uniform dataset with low-cost and reference data,” in _Proc. of Intl. Conf. on Advances in Vehicular Systems, Technologies and Applications (VEHICULAR)_ , 2016.
* [52] J. Blanch, T. Walter, P. Enge, Y. Lee, B. Pervan, M. Rippl, and A. Spletter, “Advanced RAIM user algorithm description: integrity support message processing, fault detection, exclusion, and protection level calculation,” in _Proceedings of the 25th International Technical Meeting of The Satellite Division of the Institute of Navigation (ION GNSS 2012)_ , 2012, pp. 2828–2849.
* [53] J. Blanch, T. Walter, and P. Enge, “Optimal Positioning for Advanced Raim,” _Navigation_ , vol. 60, no. 4, pp. 279–289, Dec. 2013.
* [54] J. Blanch, K. Gunning, T. Walter, L. De Groot, and L. Norman, “Reducing Computational Load in Solution Separation for Kalman Filters and an Application to PPP Integrity,” Reston, Virginia, Feb. 2019, pp. 720–729.
* [55] J. L. Blanco-Claraco, F. Mañas-Alvarez, J. L. Torres-Moreno, F. Rodriguez, and A. Gimenez-Fernandez, “Benchmarking Particle Filter Algorithms for Efficient Velodyne-Based Vehicle Localization,” _Sensors_ , vol. 19, no. 14, p. 3155, Jul. 2019.
* [56] L. BLISS, “Navigation Apps Changed the Politics of Traffic,” _citylab.com_ , Nov. 2019.
* [57] M. Brenner, “Integrated GPS/Inertial Fault Detection Availability,” _Navigation_ , vol. 43, no. 2, pp. 111–130, Jun. 1996.
* [58] R. G. Brown, _GPS RAIM: Calculation of thresholds and protection radius using chi-square methods; a geometric approach_. Radio Technical Commission for Aeronautics, 1994.
* [59] ——, “Solution of the Two-Failure GPS RAIM Problem Under Worst-Case Bias Conditions: Parity Space Approach,” _Navigation_ , vol. 44, no. 4, pp. 425–431, Dec. 1997.
* [60] J. Carpenter, P. Clifford, and P. Fearnhead, “Improved particle filter for nonlinear problems,” _IEE Proceedings - Radar, Sonar and Navigation_ , vol. 146, no. 1, p. 2, 1999.
* [61] H. P. Chan and T. L. Lai, “A general theory of particle filters in hidden Markov models and some applications,” _The Annals of Statistics_ , vol. 41, no. 6, pp. 2877–2904, Dec. 2013.
* [62] J. Cho, “On the enhanced detectability of GPS anomalous behavior with relative entropy,” _Acta Astronautica_ , vol. 127, pp. 526–532, Oct. 2016.
* [63] A. Doucet, S. Godsill, and C. Andrieu, “On sequential Monte Carlo sampling methods for Bayesian filtering,” _Statistics and Computing_ , vol. 10, no. 3, pp. 197–208, 2000.
* [64] P. D. Groves and Z. Jiang, “Height Aiding, C/N0 Weighting and Consistency Checking for GNSS NLOS and Multipath Mitigation in Urban Areas,” _Journal of Navigation_ , vol. 66, no. 5, pp. 653–669, Sep. 2013.
* [65] P. D. Groves, Z. Jiang, M. Rudi, and P. Strode, “A portfolio approach to NLOS and multipath mitigation in dense urban areas.” The Institute of Navigation, 2013.
* [66] P. D. Groves, “Shadow Matching: A New GNSS Positioning Technique for Urban Canyons,” _Journal of Navigation_ , vol. 64, no. 3, pp. 417–430, Jul. 2011.
* [67] H. Isshiki, “A New Method for Detection, Identification and Mitigation of Outliers in Receiver Autonomous Integrity Monitoring (RAIM),” in _ION GNSS 21st International Technical Meeting of the Satellite Division_ , 2008, pp. 16–19.
* [68] Z. Jiang, P. D. Groves, W. Y. Ochieng, S. Feng, C. D. Milner, and P. G. Mattos, “Multi-constellation GNSS multipath mitigation using consistency checking,” in _Proceedings of the 24th International Technical Meeting of The Satellite Division of the Institute of Navigation (ION GNSS 2011)_. Institute of Navigation, 2011, pp. 3889–3902.
* [69] M. Joerger, M. Jamoom, M. Spenko, and B. Pervan, “Integrity of laser-based feature extraction and data association,” in _2016 IEEE/ION Position, Location and Navigation Symposium (PLANS)_. Savannah, GA: IEEE, Apr. 2016, pp. 557–571.
* [70] M. Joerger and B. Pervan, “Solution separation and Chi-Squared ARAIM for fault detection and exclusion,” in _2014 IEEE/ION Position, Location and Navigation Symposium-PLANS 2014_. IEEE, 2014, pp. 294–307.
* [71] M. Kaddour, M. E. El Najjar, Z. Naja, N. A. Tmazirte, and N. Moubayed, “Fault detection and exclusion for GNSS measurements using observations projection on information space,” in _2015 Fifth International Conference on Digital Information and Communication Technology and its Applications (DICTAP)_. Beirut: IEEE, Apr. 2015, pp. 198–203.
* [72] E. Kaplan and C. Hegarty, _Understanding GPS: principles and applications_. Artech house, 2005.
* [73] J. Kotecha and P. Djuric, “Gaussian sum particle filtering for dynamic state space models,” in _2001 IEEE International Conference on Acoustics, Speech, and Signal Processing. Proceedings (Cat. No.01CH37221)_ , vol. 6. Salt Lake City, UT, USA: IEEE, 2001, pp. 3465–3468.
* [74] R. Kumar and M. G. Petovello, “A novel GNSS positioning technique for improved accuracy in urban canyon scenarios using 3D city model,” in _ION GNSS_ , 2014, pp. 8–12.
* [75] H. Kuusniemi, _User-level reliability and quality monitoring in satellite-based personal navigation_. Tampere University of Technology, 2005.
* [76] J. S. Liu, _Monte Carlo strategies in scientific computing_. Springer Science & Business Media, 2008.
* [77] D. A. McAllester, “Some PAC-Bayesian Theorems,” _Machine Learning_ , vol. 37, no. 3, pp. 355–363, 1999.
* [78] J. A. Nelder and R. Mead, “A Simplex Method for Function Minimization,” _The Computer Journal_ , vol. 7, no. 4, pp. 308–313, Jan. 1965.
* [79] M. Obst, “Bayesian Approach for Reliable GNSS-based Vehicle Localization in Urban Areas,” PhD Thesis, Ph. D. dissertation, Technische Universität Chemnitz, 2015.
* [80] A. Rabaoui, N. Viandier, J. Marais, and E. Duflos, “Using Dirichlet Process Mixtures for the modelling of GNSS pseudorange errors in urban canyon,” _ION GNSS 2009_ , 2009, publisher: Institute Of Navigation.
* [81] R. Rosihan, A. Indriyatmoko, S. Chun, D. Won, Y. Lee, T. Kang, J. Kim, and H.-s. Jun, “Particle Filtering Approach To Fault Detection and Isolation for GPS Integrity Monitoring,” in _Proceedings of the 19th International Technical Meeting of the Satellite Division of The Institute of Navigation (ION GNSS)_ , 2006.
* [82] M. Seeger, “PAC-Bayesian Generalisation Error Bounds for Gaussian Process Classification,” _CrossRef Listing of Deleted DOIs_ , vol. 1, 2000.
* [83] C. Shao, “Improving the Gaussian Sum Particle filtering by MMSE constraint,” in _2008 9th International Conference on Signal Processing_. Beijing: IEEE, Oct. 2008, pp. 34–37.
* [84] M. Šimandl and J. Duník, “Design of derivative-free smoothers and predictors,” _IFAC Proceedings Volumes_ , vol. 39, no. 1, pp. 1240–1245, 2006.
* [85] H. Sorenson and D. Alspach, “Recursive bayesian estimation using gaussian sums,” _Automatica_ , vol. 7, no. 4, pp. 465–479, Jul. 1971.
* [86] T. L. Lai, “Sequential multiple hypothesis testing and efficient fault detection-isolation in stochastic systems,” _IEEE Transactions on Information Theory_ , vol. 46, no. 2, pp. 595–608, Mar. 2000.
* [87] S. Wallner, “A Proposal for Multi-Constellation Advanced RAIM for Vertical Guidance.”
* [88] J. Wang and P. B. Ober, “On the Availability of Fault Detection and Exclusion in GNSS Receiver Autonomous Integrity Monitoring,” _Journal of Navigation_ , vol. 62, no. 2, pp. 251–261, Apr. 2009.
* [89] L. Wang, P. D. Groves, and M. K. Ziebart, “Multi-Constellation GNSS Performance Evaluation for Urban Canyons Using Large Virtual Reality City Models,” _Journal of Navigation_ , vol. 65, no. 3, pp. 459–476, Jul. 2012.
* [90] X. Xu, C. Yang, and R. Liu, “Review and prospect of GNSS receiver autonomous integrity monitoring,” _Acta Aeronautica et Astronautica Sinica_ , vol. 34, no. 3, pp. 451–463, 2013, publisher: Chinese Society of Aeronautics and Astronautics(CSAA), 37 Xueyuan Road ….
* [91] T. Yap, Mingyang Li, A. I. Mourikis, and C. R. Shelton, “A particle filter for monocular vision-aided odometry,” in _2011 IEEE International Conference on Robotics and Automation_. Shanghai: IEEE, May 2011, pp. 5663–5669.
* [92] B. S. Pervan, S. P. Pullen, and J. R. Christie, “A Multiple Hypothesis Approach to Satellite Navigation Integrity,” _Navigation_ , vol. 45, no. 1, pp. 61–71, Mar. 1998. [Online]. Available: https://onlinelibrary.wiley.com/doi/10.1002/j.2161-4296.1998.tb02372.x
|
# DivSwapper: Towards Diversified Patch-based Arbitrary Style Transfer
Zhizhong Wang, Lei Zhao∗, Haibo Chen, Zhiwen Zuo, Ailin Li, Wei Xing∗, Dongming Lu
∗ Corresponding authors.
College of Computer Science and Technology, Zhejiang University
{endywon, cszhl, cshbchen, zzwcs, liailin, wxing, <EMAIL_ADDRESS>
###### Abstract
Gram-based and patch-based approaches are two important research lines of
style transfer. Recent diversified Gram-based methods have been able to
produce multiple and diverse stylized outputs for the same content and style
images. However, as another widespread research interest, the diversity of
patch-based methods remains challenging due to the stereotyped style swapping
process based on nearest patch matching. To resolve this dilemma, in this
paper, we dive into the crux of existing patch-based methods and propose a
universal and efficient module, termed DivSwapper, for diversified patch-based
arbitrary style transfer. The key insight is an essential intuition
that neural patches with higher activation values could contribute more to
diversity. Our DivSwapper is plug-and-play and can be easily integrated into
existing patch-based and Gram-based methods to generate diverse results for
arbitrary styles. We conduct theoretical analyses and extensive experiments to
demonstrate the effectiveness of our method, and compared with state-of-the-
art algorithms, it shows superiority in diversity, quality, and efficiency.
[Figure 1: panels (a) CNNMRF + Our DivSwapper, (b) Style-Swap + Our DivSwapper, (c) Avatar-Net + Our DivSwapper, and (d) WCT + Our DivSwapper, each shown alongside its input content and style images.]
Figure 1: Given the same content and style images, our proposed DivSwapper
can endow existing patch-based methods (e.g., (a) CNNMRF Li and Wand (2016a),
(b) Style-Swap Chen and Schmidt (2016), and (c) Avatar-Net Sheng et al.
(2018)) with the explicit ability to generate diverse stylized results.
Moreover, it can also be integrated into Gram-based methods (e.g., (d) WCT Li
et al. (2017c)) to achieve diversity. Our approach is plug-and-play and shows
superiority in diversity, quality, and efficiency over the state of the art.
## 1 Introduction
Committed to automatically transforming the style of one image to another,
style transfer has become a vibrant community that attracts widespread
attention from both industry and academia. The seminal work of Gatys et al.
(2016) first utilized the Convolutional Neural Networks (CNNs) to extract
hierarchical features and transfer the style by iteratively matching the Gram
matrices (i.e., feature correlations). Since then, valuable efforts have been
made to improve the efficiency Johnson et al. (2016), quality Lin et al.
(2021), and generality Huang and Belongie (2017), etc. However, as another
important aspect of style transfer, diversity has received relatively less
attention, and there are only a few works to solve this dilemma. Li et al.
(2017b) and Ulyanov et al. (2017) introduced the diversity loss to train the
feed-forward networks to generate diverse outputs in a learning-based
mechanism. Alternatively, in a learning-free manner, Wang et al. (2020a)
proposed to use Deep Feature Perturbation based on Whitening and Coloring
Transform (WCT) to perturb the deep image feature maps while keeping their
Gram matrices unchanged. These methods are all Gram-based; though considerable
diversity can be achieved, unfortunately, they are not applicable to other
types of approaches such as patch-based methods, since these methods are not
underpinned by the Gram matrix assumption.
In this work, we are interested in the diversity of the patch-based
stylization mechanism. As another widespread research interest of style
transfer, the patch-based method is first formulated by Li and Wand (2016a,
b). They combined Markov Random Fields (MRFs) and CNNs to extract and match
the local neural patches of the content and style images. Later, Chen and
Schmidt (2016) proposed a Style-Swap operation and an inverse network for fast
patch-based stylization. Since then, many successors were further designed for
higher quality Sheng et al. (2018) and extended applications Champandard
(2016), etc.
Let us start with a fundamental problem: what limits the diversity of patch-
based style transfer? Whether using iterative optimization Li and Wand (2016a)
or feed-forward networks Chen and Schmidt (2016), the core of patch-based
methods is to substitute the patches of the content image with the best-
matched patches of the style image (which we call “style swapping” Chen and
Schmidt (2016) in this paper), where a Normalized Cross-Correlation (NCC)
approach is mainly adopted to measure the similarities of two patches.
However, as is well known, the NCC heavily depends on the consistency of local
variations Sheng et al. (2018), and this stereotyped patch matching process
restricts each content patch to be bound to its nearest style patch, thus
limiting the diversity. Though it may be effective on semantic-level style
transfer (e.g., portrait-to-portrait), for more general artistic styles (e.g.,
Fig. 1), there is little semantic correspondence between them and the
contents. Even for human beings, it is hard to say which patches should match
best. Therefore, we argue that for artistic style transfer Gatys et al.
(2016), it would be more reasonable to relax the restricted style swapping
process and allow some meaningful variations but maintain those inherent
characteristics (e.g., the approximate semantic matching). It can give users
more options to select the most satisfactory results according to different
preferences. Moreover, for semantic-level style transfer, the diversified
matching process can also help alleviate the undesirable artifacts caused by
the restricted patch matching Zhang et al. (2019) (see later Sec. 4.3).
However, making such meaningful variations is a challenging task. First, neural patches have high dimensions and are hard to control: a small change may result in significant quality degradation, while a big change might not lead to a marked visual difference Sheng et al. (2018). Therefore, the
difficulty is finding the neural patches critical to visual variations and
controlling them gracefully. Second, the visual effects and quality of the
final results are also determined by the inherent correspondence between the
content and style patches. Thus, how to manipulate this complicated
correspondence to obtain diverse visual effects while maintaining the original
quality is another problem to be solved.
Based on the above analyses, in this paper, we dive into the crux of patch-
based style transfer and explore the universal way to diversify it. As shown
in Fig. 2, an essential intuition we will use is that the visual effects of
the output images are determined by the local neural patches of the
intermediate activation feature maps; since the patches with higher activation
values often contribute more to perceptually important (discriminative)
information Aberman et al. (2018); Zhang et al. (2018), they may also
contribute more to visual variations as the human eyes are often more
sensitive to the changes of these parts. In other words, if we could
appropriately vary these higher-activated patches, then more significant
diversity can be obtained. However, directly manipulating these patches is
intractable since it is hard to distinguish which patches are with higher
activation values and where they should be placed so as not to degrade the
quality.
To remedy it, in this work, we theoretically derive that simply shifting the
L2 norm of each style patch in the style swapping process can gracefully
improve diversity and vary the patches with higher activation values in an
implicit and holistic way. Based on this finding, we introduce a universal and
efficient module, termed DivSwapper, for diversified patch-based arbitrary
style transfer. Our DivSwapper is plug-and-play and learning-free, which can
be easily integrated into existing patch-based methods to help them generate
diverse outputs for arbitrary styles. Besides, despite building upon the
patch-based mechanism, it can also be applied to Gram-based methods to achieve
higher diversity (see examples in Fig. 1). Theoretical analyses and extensive
experiments demonstrate that our DivSwapper can achieve significant diversity
while maintaining the original quality and some inherent characteristics
(e.g., the approximate semantic matching) of the baseline methods.
Furthermore, compared with other state-of-the-art (SOTA) diversified
algorithms, it shows notable superiority in diversity, quality, and
efficiency.
Figure 2: Our intuition: Patches with higher activation values often
contribute more to perceptually important (discriminative) information such as
semantics, salient colors, and edges; thereby, they could also contribute more
to diversity. Top: Some style exemplars. Bottom: Heat maps of the activation
feature maps (upsampled to the full image resolution) extracted from layer
Relu_4_1 of a pre-trained VGG19 Simonyan and Zisserman (2014).
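For reference, heat maps like those in Fig. 2 could be produced with a few lines of PyTorch, e.g., by channel-averaging the Relu_4_1 activations of a pre-trained VGG19 and upsampling. This is a minimal sketch: the torchvision layer index and the channel-averaging scheme are our assumptions, not the paper's exact recipe.

```python
import torch
import torch.nn.functional as F
from torchvision import models

# Slice the torchvision VGG19 feature stack up to relu4_1 (index 20 in its
# layer list); this index is our assumption, not stated in the paper.
vgg = models.vgg19(weights=models.VGG19_Weights.IMAGENET1K_V1).features.eval()
relu4_1 = vgg[:21]

def activation_heatmap(img):
    """img: (1, 3, H, W) tensor, ImageNet-normalized."""
    with torch.no_grad():
        feat = relu4_1(img)                    # (1, 512, H/8, W/8)
    heat = feat.mean(dim=1, keepdim=True)      # channel-averaged activations
    return F.interpolate(heat, size=img.shape[-2:],
                         mode='bilinear', align_corners=False)
```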
Overall, the main contributions of our work are threefold:
* •
We explore the challenging problem of diversified patch-based style transfer
and dive into its crux to achieve diversity. A universal and efficient module
called DivSwapper is proposed to address the challenges and provide graceful
control between diversity and quality.
* •
Our DivSwapper is plug-and-play and learning-free, which can be easily
integrated into existing patch-based and Gram-based methods with little extra
computation and time overhead.
* •
We analyze and demonstrate the effectiveness and superiority of our method
against SOTA diversified algorithms in terms of diversity, quality, and
efficiency.
[Figure 3: panels (a) Inputs, (b) Avatar-Net, (c) + Small, (d) + Big, (e) Random, and (f) + Our DivSwapper.]
Figure 3: Challenges of diversified patch-based style transfer. We adopt
Avatar-Net Sheng et al. (2018) as the baseline method.
## 2 Related Work
The seminal work of Gatys et al. (2016) has ushered in an era of Neural Style
Transfer (NST) Jing et al. (2019), where the CNNs are used to decouple and
recombine the styles and contents of arbitrary images. After the recent rapid
development, various methods have been proposed, among which the Gram-based
and patch-based are the most representative.
Gram-based methods. The method proposed by Gatys et al. (2016) is Gram-based,
using so-called Gram matrices of the feature maps extracted from CNNs to
represent the styles of images, and could achieve visually stunning results.
Since then, numerous Gram-based approaches were proposed to improve the
performance in many aspects, including efficiency Johnson et al. (2016);
Ulyanov et al. (2016), quality Li et al. (2017a); Lu et al. (2019); Wang et
al. (2020b, 2021); Lin et al. (2021); Chandran et al. (2021); Cheng et al.
(2021); An et al. (2021); Chen et al. (2021b), and generality Chen et al.
(2017); Huang and Belongie (2017); Li et al. (2017c, 2019); Jing et al.
(2020), etc.
Patch-based methods. Patch-based style transfer is another important research
line. Li and Wand (2016a, b) first combined MRFs and CNNs for arbitrary style
transfer. It extracts local neural patches to represent the styles of images
and searches for the most similar patches from the style image to satisfy the
local structure prior of the content image. Later, Chen and Schmidt (2016)
proposed to swap the content activation patch with the best-matched style
activation patch using a Style-Swap operation, and then used an inverse
network for fast patch-based stylization. Based on them, many successors were
further designed for better performance Sheng et al. (2018); Gu et al. (2018);
Park and Lee (2019); Kolkin et al. (2019); Yao et al. (2019); Zhang et al.
(2019); Deng et al. (2020); Liu et al. (2021); Chen et al. (2021a) and
extended applications Champandard (2016); Liao et al. (2017); Wang et al.
(2022).
Diversified methods. Our method is closely related to the existing diversified
methods. Li et al. (2017b) and Ulyanov et al. (2017) introduced the diversity
loss to train the feed-forward networks to generate diverse outputs by
mutually comparing and maximizing the variations between the generated results
in mini-batches. However, these methods are learning-based and have restricted
generalization, limited diversity, and poor scalability Wang et al. (2020a).
To combat these limitations, Wang et al. (2020a) proposed a learning-free
method called Deep Feature Perturbation to empower the WCT-based methods to
generate diverse results. This method is universal for arbitrary styles, but
unfortunately, it relies on WCT and does not apply to other types of methods.
Discussions. While there have been some efforts for the diversified style
transfer, they are all Gram-based and are not applicable to other types of
approaches such as patch-based methods. As another widespread research
interest, the diversity of patch-based style transfer remains challenging. Our
work, as far as we know, takes the first step in this direction. The proposed
approach is learning-free and universal for arbitrary styles, and can be
easily embedded into existing patch-based methods to empower them to generate
diverse results. Moreover, it can also be applied to Gram-based methods to
achieve higher diversity. Compared with the state of the art, our approach can
achieve higher diversity, quality, and efficiency, which will be validated in
later Sec. 4.
## 3 Proposed Approach
Before introducing our approach, let us first reiterate why implementing diversity in patch-based methods is challenging.
First, neural patches have high dimensions and are hard to control. On the one hand, a small change may easily result in significant quality degradation, e.g., Fig. 3 (c). As can be observed in the red box areas, the
result exhibits a severe quality problem that the portrait’s eyes disappear,
even if we only change 50 of the total 2500 neural patches. On the other hand,
it is also possible that a big change may not lead to a marked visual
difference, e.g., Fig. 3 (d). Although all the 2500 neural patches have been
changed, the result is still very similar to the original one in Fig. 3 (b).
Therefore, the difficulty is finding the neural patches critical to visual
variations and controlling them gracefully.
Second, the inherent correspondence between the content and style patches
ensures the visual correctness and rationality of the final results, as well
as the semantic correspondence. If we simply ignore this local correspondence
(e.g., randomly matching a style patch for each content patch), it will
destroy the content prior and generate poor results, as shown in Fig. 3 (e).
Therefore, one key desideratum of the diversified patch-based style transfer
is to generate meaningful variations while maintaining the original quality
and some inherent characteristics (e.g., the approximate semantic matching).
Aiming at the challenges above and based on the intuition introduced in Sec.
1, we propose a simple yet effective diversified style swapping module, termed
DivSwapper, for diversified patch-based arbitrary style transfer. The proposed
module is plug-and-play and learning-free, which can be easily integrated into
existing patch-based and Gram-based methods to achieve diversity. As shown in
Fig. 3 (f), the synthesized diverse results are all reasonable, with
meaningful variations while maintaining the original quality and some inherent
characteristics (e.g., the approximate semantic matching). In the following
section, we will first describe the workflow of our proposed DivSwapper, and
then introduce the key finding and design in DivSwapper to achieve diversity,
i.e., the Shifted Style Normalization (SSN). In the light of this, we
theoretically derive its effectiveness in generating diverse reasonable
solutions and helping vary more significant neural patches with higher
activation values.
Figure 4: The workflow of our proposed DivSwapper.
### 3.1 Workflow of DivSwapper
Given a content image $I_{c}$ and style image $I_{s}$ pair, suppose
$F_{c}=CNN(I_{c})$ and $F_{s}=CNN(I_{s})$ are content and style activation
feature maps extracted from a certain layer (e.g., Relu_4_1) of a pre-trained
CNN (e.g., VGG Simonyan and Zisserman (2014)). As shown in Fig. 4 (step
(1-5)), our DivSwapper aims to search for the diverse yet plausible style
patches in $F_{s}$ for each content patch in $F_{c}$, and then substitute the
latter with the former. The detailed workflow is:
1. (1)
Extract the style patches from $F_{s}$, denoted as
$\\{\phi_{j}(F_{s})\\}_{j\in\\{1,\dots,n_{s}\\}}$, where $n_{s}$ is the number
of patches.
2. (2)
Normalize each style patch by using a Shifted Style Normalization (SSN)
approach. The shifted normalized style patches are denoted as
$\\{\hat{\phi}_{j}(F_{s})\\}$.
3. (3)
Calculate the similarities between all pairs of the style and content patches
by the Normalized Cross-Correlation (NCC) measure, i.e.,
$\mathcal{S}_{i,j}=\langle\phi_{i}(F_{c}),\hat{\phi}_{j}(F_{s})\rangle$ (the
norm of the content patch $\phi_{i}(F_{c})$ is removed as it is constant with
respect to the $\mathop{\arg\max}$ operation in the next step (4)). This
process can be efficiently implemented by using a convolutional layer with the
shifted normalized style patches $\\{\hat{\phi}_{j}(F_{s})\\}$ as filters and
content feature map $F_{c}$ as input. The computed result $T$ has $n_{s}$
feature channels, and each spatial location is a vector of NCC between a
content patch and all style patches.
4. (4)
Find the nearest style patch for each content patch, i.e.,
$\phi_{i}(F_{cs})=\mathop{\arg\max}_{j\in\\{1,\dots,n_{s}\\}}\mathcal{S}_{i,j}$.
It can be achieved by first finding the channel-wise argmax for each spatial
location of $T$, and then replacing it with a channel-wise one-hot encoding.
The result is denoted as $\hat{T}$.
5. (5)
Reconstruct the swapped feature ${F_{cs}}$ by a deconvolutional layer with the
original style patches $\\{\phi_{j}(F_{s})\\}$ as filters and $\hat{T}$ as
input.
Analysis: The most novel insight behind DivSwapper is that we use an SSN
approach to inject diversity into the NCC-based style swapping process, which
kills three birds with one stone: i) We can reshuffle all style patches by
adding random norm shifts, which ensures the scope of diversity. ii) NCC is
still used for nearest patch matching, and the final swapped feature
${F_{cs}}$ is reconstructed by the original style patches, thereby the
original quality and the inherent characteristics (e.g., the approximate
semantic matching) can be well maintained. iii) SSN implicitly helps vary more
significant style patches with higher activation values, thus achieving more
meaningful diversity (see more analyses in Sec. 3.2). Note that since the
matching step (3) and the reconstruction step (5) actually can be implemented
by two convolutional layers, our DivSwapper is very efficient.
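To make the five steps concrete, below is a minimal PyTorch sketch of the swapping procedure as we read it; the function and parameter names (e.g., `patch_size`, `sigma_max`) are ours and do not come from an official implementation.

```python
import torch
import torch.nn.functional as F

def div_swapper(F_c, F_s, patch_size=3, sigma_max=5e3):
    """F_c, F_s: (1, C, H, W) activation maps, e.g., from VGG Relu_4_1.
    sigma_max follows the (0, 5e3] range we use for Avatar-Net."""
    k, C = patch_size, F_s.size(1)
    # (1) Extract style patches as convolution filters: (n_s, C, k, k).
    unf = F.unfold(F_s, kernel_size=k)                     # (1, C*k*k, n_s)
    n_s = unf.size(-1)
    filters = unf.squeeze(0).t().reshape(n_s, C, k, k)
    # (2) SSN (Eq. 1): divide each patch by its L2 norm plus a random
    # positive deviation sigma drawn uniformly from (0, sigma_max].
    norms = filters.reshape(n_s, -1).norm(dim=1).view(n_s, 1, 1, 1)
    sigma = torch.rand(n_s, 1, 1, 1, device=F_s.device) * sigma_max
    filters_ssn = filters / (norms + sigma)
    # (3) NCC similarities as a convolution; the content-patch norm is
    # dropped since it is constant under the argmax in step (4).
    T = F.conv2d(F_c, filters_ssn)                         # (1, n_s, H', W')
    # (4) Channel-wise argmax replaced by a channel-wise one-hot encoding.
    idx = T.argmax(dim=1, keepdim=True)
    T_hat = torch.zeros_like(T).scatter_(1, idx, 1.0)
    # (5) Reconstruct with the ORIGINAL (unshifted) style patches as
    # deconvolution filters; average the overlapping regions.
    F_cs = F.conv_transpose2d(T_hat, filters)
    overlap = F.conv_transpose2d(torch.ones_like(T_hat[:, :1]),
                                 torch.ones(1, 1, k, k, device=F_c.device))
    return F_cs / overlap

# Each call draws fresh deviations, so repeated calls give diverse F_cs.
```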
### 3.2 Shifted Style Normalization
The stereotyped style swapping process aims to search for the nearest style
patch for each content patch, which only produces one deterministic solution,
as illustrated in Fig. 5 (a). To obtain different solutions, an intuitive way
is to match other plausible style patches instead of the nearest ones, which
can be achieved by adjusting the distances between the content and style
patches. However, as analyzed in Sec. 1, the key to obtaining more meaningful
diversity is to gracefully control and vary those significant patches with
higher activation values. Therefore, we propose the Shifted Style
Normalization (SSN) to explicitly alter the distances between the content and
style patches while implicitly restricting the swapping process to vary more
significant style patches with higher activation values.
Simply yet non-trivially, as illustrated in Fig. 5 (b), our SSN adds a random
positive deviation $\sigma$ to shift the L2 norm of each style patch, like
follows:
$\\{\hat{\phi}_{j}(F_{s})=\frac{\phi_{j}(F_{s})}{\parallel\phi_{j}(F_{s})\parallel+\sigma}\\}_{j\in\\{1,\dots,n_{s}\\}}.$
(1)
Figure 5: (a) The original style swapping process only produces one
deterministic solution by matching the nearest style patch with each content
patch. (b) Our SSN adds random deviations to shift the style patch
normalization, thus achieving diversity.
Now, we theoretically derive the power of this “magical” random deviation
$\sigma$ to generate diverse solutions and help gracefully vary more
significant style patches with higher activation values. For simplicity, we
only take one content and two style activation patches to illustrate, which
are denoted as $\mathcal{P}^{c}$, $\mathcal{P}^{s}_{1}$, and
$\mathcal{P}^{s}_{2}$, respectively. Note that the values in these vectors are
non-negative because they are often extracted from the ReLU activation layers
(e.g., Relu_4_1) of the VGG model. Specifically, we first suppose that they
satisfy the following original NCC matching relationship:
$\frac{\langle\mathcal{P}^{c},\mathcal{P}^{s}_{1}\rangle}{\parallel\mathcal{P}^{c}\parallel\parallel\mathcal{P}^{s}_{1}\parallel}=\cos\theta_{1}>\frac{\langle\mathcal{P}^{c},\mathcal{P}^{s}_{2}\rangle}{\parallel\mathcal{P}^{c}\parallel\parallel\mathcal{P}^{s}_{2}\parallel}=\cos\theta_{2}>0,$
(2)
which means $\mathcal{P}^{s}_{1}$ matches $\mathcal{P}^{c}$ better than
$\mathcal{P}^{s}_{2}$, where $\theta_{1}$ is the angle between vector
$\mathcal{P}^{c}$ and $\mathcal{P}^{s}_{1}$, $\theta_{2}$ is the angle between
vector $\mathcal{P}^{c}$ and $\mathcal{P}^{s}_{2}$. We want to change their
matching relationship by randomly shifting the L2 norms of the style patches,
i.e.,
$\frac{\langle\mathcal{P}^{c},\mathcal{P}^{s}_{1}\rangle}{\parallel\mathcal{P}^{c}\parallel(\parallel\mathcal{P}^{s}_{1}\parallel+\sigma_{1})}<\frac{\langle\mathcal{P}^{c},\mathcal{P}^{s}_{2}\rangle}{\parallel\mathcal{P}^{c}\parallel(\parallel\mathcal{P}^{s}_{2}\parallel+\sigma_{2})}.$
(3)
Thus, we can deduce:
$\langle\mathcal{P}^{c},\mathcal{P}^{s}_{2}\rangle\sigma_{1}-\langle\mathcal{P}^{c},\mathcal{P}^{s}_{1}\rangle\sigma_{2}>\langle\mathcal{P}^{c},\mathcal{P}^{s}_{1}\rangle\parallel\mathcal{P}^{s}_{2}\parallel-\langle\mathcal{P}^{c},\mathcal{P}^{s}_{2}\rangle\parallel\mathcal{P}^{s}_{1}\parallel.$ (4)
Since
$\langle\mathcal{P}^{c},\mathcal{P}^{s}_{1}\rangle\parallel\mathcal{P}^{s}_{2}\parallel-\langle\mathcal{P}^{c},\mathcal{P}^{s}_{2}\rangle\parallel\mathcal{P}^{s}_{1}\parallel>0$
(Eq. (2)), we can get the following solution:
$\langle\mathcal{P}^{c},\mathcal{P}^{s}_{2}\rangle\sigma_{1}-\langle\mathcal{P}^{c},\mathcal{P}^{s}_{1}\rangle\sigma_{2}>0\Rightarrow\langle\mathcal{P}^{c},\mathcal{P}^{s}_{2}\rangle\sigma_{1}>\langle\mathcal{P}^{c},\mathcal{P}^{s}_{1}\rangle\sigma_{2}.$
As $\sigma_{1}$ and $\sigma_{2}$ are positive and i.i.d. (independent and
identically distributed), it turns out that holistically our SSN tends to
replace $\mathcal{P}^{s}_{1}$ with a suitable $\mathcal{P}^{s}_{2}$ which
satisfies
$\langle\mathcal{P}^{c},\mathcal{P}^{s}_{2}\rangle>\langle\mathcal{P}^{c},\mathcal{P}^{s}_{1}\rangle$.
Since
$\langle\mathcal{P}^{c},\mathcal{P}^{s}_{2}\rangle=\parallel\mathcal{P}^{c}\parallel\parallel\mathcal{P}^{s}_{2}\parallel\cos\theta_{2}$,
and
$\langle\mathcal{P}^{c},\mathcal{P}^{s}_{1}\rangle=\parallel\mathcal{P}^{c}\parallel\parallel\mathcal{P}^{s}_{1}\parallel\cos\theta_{1}$,
we can deduce as follows:
$\parallel\mathcal{P}^{s}_{2}\parallel\cos\theta_{2}>\parallel\mathcal{P}^{s}_{1}\parallel\cos\theta_{1}\Rightarrow\frac{\parallel\mathcal{P}^{s}_{2}\parallel}{\parallel\mathcal{P}^{s}_{1}\parallel}>\frac{\cos\theta_{1}}{\cos\theta_{2}}.$
(5)
As $\cos\theta_{1}>\cos\theta_{2}$ (Eq. (2)), we can obtain
$\parallel\mathcal{P}^{s}_{2}\parallel>\parallel\mathcal{P}^{s}_{1}\parallel$,
which means the varied $\mathcal{P}^{s}_{2}$ often has higher activation
values than the original $\mathcal{P}^{s}_{1}$. That is to say, our SSN could help
vary more significant style patches with higher activation values in an
implicit and holistic way. Besides, since it is still implicitly constrained
by the original NCC (Eq. (2)) and the variations are gracefully controlled by
the sampling range of $\sigma$, the overall quality and approximate semantic
matching can be well preserved, as will be demonstrated in later Sec. 4.3.
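As a quick sanity check of this derivation, the following toy computation (with hypothetical patch values of our own choosing) verifies that the original NCC prefers the well-aligned patch, while random norm shifts frequently flip the match toward the patch with the larger norm.

```python
import torch

P_c  = torch.tensor([1.0, 0.0])
P_s1 = torch.tensor([0.9, 0.1])        # well aligned with P_c, small norm
P_s2 = torch.tensor([2.0, 1.0])        # less aligned, but larger norm

def ncc(p, q, shift=0.0):
    return torch.dot(p, q) / (p.norm() * (q.norm() + shift))

assert ncc(P_c, P_s1) > ncc(P_c, P_s2)  # Eq. (2): P_s1 is the nearest patch

torch.manual_seed(0)
flips = 0
for _ in range(1000):
    s1, s2 = torch.rand(2) * 5.0        # i.i.d. positive deviations
    flips += int(ncc(P_c, P_s1, s1) < ncc(P_c, P_s2, s2))
print(flips)  # a sizable fraction of draws flip the match toward P_s2
```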
[Figure 6 panels, row by row: Content and (a) MTS; Style and (b) ITN; WCT and (c) WCT + DFP; WCT and (d) WCT + Our DivSwapper; Avatar-Net and (e) Avatar-Net + DFP; Avatar-Net and (f) Avatar-Net + Our DivSwapper.]
Figure 6: Qualitative comparisons. From top to bottom, the first column shows
the input content and style images and the original outputs of baselines; the
middle four columns show the diverse outputs of MTS, ITN, baselines + DFP, and
baselines + our DivSwapper [best viewed in color changes and zoomed-in]. We
also visualize their average activation feature differences (averaged on
$C_{20}^{2}=190$ pairs of diverse results) via heat maps in the last column.
## 4 Experimental Results
### 4.1 Implementation Details
Baselines. We integrate our DivSwapper into two types of patch-based methods
based on (1) iteration optimization (CNNMRF Li and Wand (2016a)) and (2) feed-
forward networks (Style-Swap Chen and Schmidt (2016) and Avatar-Net Sheng et
al. (2018)). Besides, we also integrate it into a typical Gram-based method,
i.e., WCT Li et al. (2017c). We keep the default settings of these baselines
and fine-tune the sampling range of our $\sigma$ (sampled from a uniform
distribution) to make our quality similar to the baselines, i.e., $(0,10^{3}]$
for CNNMRF, $(0,10^{5}]$ for Style-Swap, $(0,5\times 10^{3}]$ for Avatar-Net,
and $(0,5\times 10^{3}]$ for WCT. We will discuss these settings in later Sec.
4.3. For more implementation details, please refer to the supplementary
material (SM).
Metrics. To evaluate the diversity, we collect 36 content-style pairs released
by Wang et al. (2020a). For each pair, we randomly produce 20 outputs, so
there are a total of $36\times C_{20}^{2}=6840$ pairs of outputs generated by
each method. Like Wang et al. (2020a), we adopt the average pixel distance
$D_{pixel}$ and LPIPS (Learned Perceptual Image Patch Similarity) distance
$D_{LPIPS}$ Zhang et al. (2018) to measure the diversity in pixel space and
deep feature space, respectively.
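For illustration, a sketch of how these two metrics could be computed for the 20 outputs of one content-style pair is given below. It uses the public `lpips` package for $D_{LPIPS}$; the exact preprocessing and scaling here are our assumptions rather than the paper's protocol.

```python
import itertools
import torch
import lpips  # pip install lpips; Zhang et al. (2018)

loss_fn = lpips.LPIPS(net='alex')       # expects NCHW inputs in [-1, 1]

def diversity_scores(outputs):
    """outputs: list of 20 stylized images, each a (3, H, W) tensor in [0, 1]."""
    d_pixel, d_lpips, n = 0.0, 0.0, 0
    for a, b in itertools.combinations(outputs, 2):   # C(20, 2) = 190 pairs
        d_pixel += (a - b).abs().mean().item()        # average pixel distance
        with torch.no_grad():
            d_lpips += loss_fn(a.unsqueeze(0) * 2 - 1,
                               b.unsqueeze(0) * 2 - 1).item()
        n += 1
    return d_pixel / n, d_lpips / n
```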
### 4.2 Comparisons with Prior Arts
We compare our DivSwapper with three SOTA diversified methods, i.e., Multi-
Texture-Synthesis (MTS) Li et al. (2017b), Improved-Texture-Networks (ITN)
Ulyanov et al. (2017), and Deep-Feature-Perturbation (DFP) Wang et al.
(2020a). Since these methods are all Gram-based, we integrate our DivSwapper
into the Gram-based baseline WCT Li et al. (2017c) and the Gram-and-patch-
based baseline Avatar-Net Sheng et al. (2018) for a fair comparison.
Qualitative Comparison. As shown in Fig. 6 (a,b), MTS and ITN only achieve
subtle diversity, which is hard to perceive. In rows (c,e), DFP can diversify
WCT and Avatar-Net to generate diverse results, but the diversity is still
limited, especially for Avatar-Net. On the same baselines, our DivSwapper
achieves much more significant diversity, e.g., the colors changed on the
skies and buildings in rows (d,f) (best compared with the difference heat maps
in the last column). Moreover, as shown in Fig. 1 and 7, our DivSwapper can
also diversify the pure patch-based methods like Style-Swap and CNNMRF, which
is beyond the capability of DFP. It is worth noting that the patterns that vary significantly in our results in Fig. 7 generally correspond to the style
regions with higher activation values in the activation heat maps, e.g., the
beige walls in the top and the blue and red edges in the bottom. It verifies
that our DivSwapper indeed helps vary more significant style patches with
higher activation values. We also validate its effectiveness on AdaIN Huang
and Belongie (2017) and SANet Park and Lee (2019) in SM.
[Figure 7 panels, row by row: Style, Style-Swap, and Style-Swap + Our DivSwapper; Style, CNNMRF, and CNNMRF + Our DivSwapper.]
Figure 7: Our DivSwapper can diversify existing pure patch-based approaches
like Style-Swap and CNNMRF, which is beyond the capability of SOTA diversified
methods, e.g., DFP.
Baseline | Method | $D_{pixel}$ | $D_{LPIPS}$ | Efficiency
---|---|---|---|---
MTS | Original | 0.080 | 0.175 | -
ITN | Original | 0.077 | 0.163 | -
WCT | Original | 0.000 | 0.000 | 3.421s
WCT | + DFP | 0.162 | 0.431 | 4.091s
WCT | + DivSwapper | 0.204 | 0.485 | 3.566s
Avatar-Net | Original | 0.000 | 0.000 | 3.920s
Avatar-Net | + DFP | 0.102 | 0.264 | 4.268s
Avatar-Net | + DivSwapper | 0.128 | 0.320 | 3.932s
Style-Swap | Original | 0.000 | 0.000 | 10.571s
Style-Swap | + DivSwapper | 0.065 | 0.234 | 10.582s
CNNMRF1 | Original | 0.084 | 0.257 | 118.44s
CNNMRF1 | + DivSwapper | 0.142 | 0.378 | 140.91s

1 Due to the limitation of GPU memory, we only test images of size 448$\times$448px for CNNMRF.

Table 1: Quantitative comparisons. The efficiency is tested on images of size $512\times 512$px and a 6GB Nvidia 1060 GPU.
Quantitative Comparison. The quantitative results are shown in Tab. 1.
Consistent with Fig. 6, MTS and ITN obtain low diversity scores in both Pixel
and LPIPS distance. Integrated into the same baselines (i.e., WCT and Avatar-
Net), our DivSwapper is clearly superior to DFP in both diversity and
efficiency (DFP involves some slow CPU-based SVD operations to obtain an orthogonal noise matrix). In addition, our DivSwapper can also diversify Style-Swap and help improve the diversity of CNNMRF. Note that, due to its noise initialization and iterative optimization process, CNNMRF already produces some varied results, and the extra time added by our DivSwapper is larger than for the other baselines.
Quality Comparison. As style transfer is highly subjective, we conduct a user
study to evaluate how users may prefer the outputs of our diversified methods
over the deterministic ones and those of other SOTA diversified methods (i.e.,
DFP). WCT and Avatar-Net are adopted as the baselines. Twenty users
unconnected with the project are recruited. For each baseline, we give each
user 50 groups of images (each group contains the input content and style
images, and three randomly shuffled outputs, i.e., one original output of the
baseline method, one random output of baseline + DFP, and one random output of
baseline + our DivSwapper) and ask him/her to select the favorite output. The
statistics in Tab. 2 show that both DFP and our DivSwapper can help users
obtain preferred (higher quality) results compared with baselines, and our
method also achieves higher quality than DFP.
Baseline | Original | \+ DFP | \+ Our DivSwapper
---|---|---|---
WCT | 27.24 | 33.96 | 38.80
Avatar-Net | 28.81 | 31.78 | 39.41
Table 2: Percentage (%) of the votes in the user study.

[Figure 8: the input pair and diverse outputs under each sampling range and the normal distribution.]

 | (0, 5$\times 10^{1}$] | (0, 5$\times 10^{2}$] | (0, 5${\bf\times 10^{3}}$] | (0, 5$\times 10^{4}$] | Normal
---|---|---|---|---|---
$D_{pixel}$ | 0.043 | 0.077 | 0.128 | 0.145 | 0.124
$D_{LPIPS}$ | 0.085 | 0.187 | 0.320 | 0.389 | 0.311
Figure 8: Effects of different sampling ranges ($2^{nd}$ to $5^{th}$ columns)
and distributions (last column) of the random deviations $\sigma$. Our
DivSwapper is integrated into Avatar-Net Sheng et al. (2018).
### 4.3 Ablation Study
Graceful Control between Diversity and Quality. Our DivSwapper can provide
graceful control between diversity and quality by sampling the deviations
$\sigma$ from different ranges. As shown in Fig. 8, with the increase of
sampling range, the generated results consistently gain more diversity (which
can also be validated by the metrics below), but a too-large sampling range
may reduce the quality (e.g., the $5^{th}$ column). When proper range (e.g.,
$(0,5\times 10^{3}$]) is applied, we can obtain the sweet spot of the two: the
results exhibit considerable diversity and also maintain the original quality
and some inherent characteristics. For different baselines, the proper range
of $\sigma$ can be easily determined via only a few trial-and-error attempts, and our
experiments verify that these constant range values can work stably on
different content and style inputs.
Effect of Sampling Distribution. We also try other sampling distributions
instead of the default uniform one. As shown in the last column of Fig. 8,
sampling $\sigma$ from a normal distribution could achieve similar performance
(e.g., the top image and the diversity scores), but the results may be erratic
and sometimes produce unwanted effects (e.g., the hazy blocks in the bottom
image). This problem may be caused by the concentration property of the normal
distribution. However, it does not occur when using a uniform distribution.
Semantic-level Style Transfer. Though our primary motivation is to improve the
diversity in artistic style transfer, for semantic-level style transfer, the
proposed method can also produce diverse results while maintaining the
original quality and the semantic correspondence. As can be seen in our
generated results in Fig. 9 (e), although the patterns in each semantic area
vary significantly (e.g., the backgrounds and hairs), the main semantic-level
stylization is well preserved (e.g., the details on portraits). It is because
the SSN used in our DivSwapper is still constrained by the original NCC, and
the variations are gracefully controlled by $\sigma$, as analyzed in Sec. 3.2.
Moreover, our DivSwapper can also help alleviate the inherent flaws (e.g.,
undesirable artifacts) caused by the original restricted patch matching Zhang
et al. (2019), which is a new merit of our method over SOTA methods, e.g., DFP in column (d). This further justifies that our method can help users obtain
diverse results with higher quality.
[Figure 9: panels (a) Content, (b) Style, (c) Avatar-Net, (d) + DFP, and (e) + Our DivSwapper.]
Figure 9: Results on semantic-level style transfer, e.g., portrait-to-
portrait. The diverse results generated by using our DivSwapper can maintain
the semantic-level stylization while also alleviating the undesirable
artifacts (e.g., the eye patterns in the red box areas) caused by the
restricted patch matching.
## 5 Concluding Remarks
In this work, we explore the challenging problem of diversified patch-based
style transfer and introduce a universal and efficient module, i.e.,
DivSwapper, to resolve it. Our DivSwapper is plug-and-play and can be easily
integrated into existing patch-based and Gram-based methods to generate
diverse results for arbitrary styles. Theoretical analyses and extensive
experiments demonstrate the effectiveness of our method, and compared with
SOTA algorithms, it shows superiority in diversity, quality, and efficiency.
We hope our analyses and investigated method can help readers better
understand the crux of patch-based methods and inspire future works in style
transfer and many other similar fields.
## Acknowledgements
This work was supported in part by the projects No. 2021YFF0900604, 19ZDA197,
LY21F020005, 2021009, 2019011, Zhejiang Elite Program project: research and
application of media fusion digital intelligence service platform based on
multimodal data, MOE Frontier Science Center for Brain Science & Brain-Machine
Integration (Zhejiang University), National Natural Science Foundation of
China (62172365), and Key Scientific Research Base for Digital Conservation of
Cave Temples (Zhejiang University), State Administration for Cultural
Heritage.
## References
* Aberman et al. [2018] Kfir Aberman, Jing Liao, Mingyi Shi, Dani Lischinski, Baoquan Chen, and Daniel Cohen-Or. Neural best-buddies: Sparse cross-domain correspondence. TOG, 37(4):1–14, 2018.
* An et al. [2021] Jie An, Siyu Huang, Yibing Song, Dejing Dou, Wei Liu, and Jiebo Luo. Artflow: Unbiased image style transfer via reversible neural flows. In CVPR, pages 862–871, 2021.
* Champandard [2016] Alex J Champandard. Semantic style transfer and turning two-bit doodles into fine artworks. arXiv preprint arXiv:1603.01768, 2016.
* Chandran et al. [2021] Prashanth Chandran, Gaspard Zoss, Paulo Gotardo, Markus Gross, and Derek Bradley. Adaptive convolutions for structure-aware style transfer. In CVPR, pages 7972–7981, 2021.
* Chen and Schmidt [2016] Tian Qi Chen and Mark Schmidt. Fast patch-based style transfer of arbitrary style. arXiv preprint arXiv:1612.04337, 2016.
* Chen et al. [2017] Dongdong Chen, Lu Yuan, Jing Liao, Nenghai Yu, and Gang Hua. Stylebank: An explicit representation for neural image style transfer. In CVPR, pages 1897–1906, 2017.
* Chen et al. [2021a] Haibo Chen, Lei Zhao, Zhizhong Wang, Huiming Zhang, Zhiwen Zuo, Ailin Li, Wei Xing, and Dongming Lu. Artistic style transfer with internal-external learning and contrastive learning. In NeurIPS, 2021.
* Chen et al. [2021b] Haibo Chen, Lei Zhao, Zhizhong Wang, Huiming Zhang, Zhiwen Zuo, Ailin Li, Wei Xing, and Dongming Lu. Dualast: Dual style-learning networks for artistic style transfer. In CVPR, pages 872–881, 2021.
* Cheng et al. [2021] Jiaxin Cheng, Ayush Jaiswal, Yue Wu, Pradeep Natarajan, and Prem Natarajan. Style-aware normalized loss for improving arbitrary style transfer. In CVPR, pages 134–143, 2021.
* Deng et al. [2020] Yingying Deng, Fan Tang, Weiming Dong, Wen Sun, Feiyue Huang, and Changsheng Xu. Arbitrary style transfer via multi-adaptation network. In ACM MM, pages 2719–2727, 2020.
* Gatys et al. [2016] Leon A Gatys, Alexander S Ecker, and Matthias Bethge. Image style transfer using convolutional neural networks. In CVPR, pages 2414–2423, 2016.
* Gu et al. [2018] Shuyang Gu, Congliang Chen, Jing Liao, and Lu Yuan. Arbitrary style transfer with deep feature reshuffle. In CVPR, pages 8222–8231, 2018.
* Huang and Belongie [2017] Xun Huang and Serge Belongie. Arbitrary style transfer in real-time with adaptive instance normalization. In ICCV, pages 1501–1510, 2017.
* Jing et al. [2019] Yongcheng Jing, Yezhou Yang, Zunlei Feng, Jingwen Ye, Yizhou Yu, and Mingli Song. Neural style transfer: A review. TVCG, 26(11):3365–3385, 2019.
* Jing et al. [2020] Yongcheng Jing, Xiao Liu, Yukang Ding, Xinchao Wang, Errui Ding, Mingli Song, and Shilei Wen. Dynamic instance normalization for arbitrary style transfer. In AAAI, volume 34, pages 4369–4376, 2020.
* Johnson et al. [2016] Justin Johnson, Alexandre Alahi, and Li Fei-Fei. Perceptual losses for real-time style transfer and super-resolution. In ECCV, pages 694–711. Springer, 2016.
* Kolkin et al. [2019] Nicholas Kolkin, Jason Salavon, and Gregory Shakhnarovich. Style transfer by relaxed optimal transport and self-similarity. In CVPR, pages 10051–10060, 2019.
* Li and Wand [2016a] Chuan Li and Michael Wand. Combining markov random fields and convolutional neural networks for image synthesis. In CVPR, pages 2479–2486, 2016.
* Li and Wand [2016b] Chuan Li and Michael Wand. Precomputed real-time texture synthesis with markovian generative adversarial networks. In ECCV, pages 702–716. Springer, 2016.
* Li et al. [2017a] Yanghao Li, Naiyan Wang, Jiaying Liu, and Xiaodi Hou. Demystifying neural style transfer. In IJCAI, pages 2230–2236, 2017.
* Li et al. [2017b] Yijun Li, Chen Fang, Jimei Yang, Zhaowen Wang, Xin Lu, and Ming-Hsuan Yang. Diversified texture synthesis with feed-forward networks. In CVPR, 2017.
* Li et al. [2017c] Yijun Li, Chen Fang, Jimei Yang, Zhaowen Wang, Xin Lu, and Ming-Hsuan Yang. Universal style transfer via feature transforms. In NeurIPS, pages 386–396, 2017.
* Li et al. [2019] Xueting Li, Sifei Liu, Jan Kautz, and Ming-Hsuan Yang. Learning linear transformations for fast image and video style transfer. In CVPR, pages 3809–3817, 2019.
* Liao et al. [2017] Jing Liao, Yuan Yao, Lu Yuan, Gang Hua, and Sing Bing Kang. Visual attribute transfer through deep image analogy. TOG, 2017.
* Lin et al. [2021] Tianwei Lin, Zhuoqi Ma, Fu Li, Dongliang He, Xin Li, Errui Ding, Nannan Wang, Jie Li, and Xinbo Gao. Drafting and revision: Laplacian pyramid network for fast high-quality artistic style transfer. In CVPR, pages 5141–5150, 2021.
* Liu et al. [2021] Songhua Liu, Tianwei Lin, Dongliang He, Fu Li, Meiling Wang, Xin Li, Zhengxing Sun, Qian Li, and Errui Ding. Adaattn: Revisit attention mechanism in arbitrary neural style transfer. In ICCV, pages 6649–6658, 2021.
* Lu et al. [2019] Ming Lu, Hao Zhao, Anbang Yao, Yurong Chen, Feng Xu, and Li Zhang. A closed-form solution to universal style transfer. In ICCV, pages 5952–5961, 2019.
* Park and Lee [2019] Dae Young Park and Kwang Hee Lee. Arbitrary style transfer with style-attentional networks. In CVPR, pages 5880–5888, 2019.
* Sheng et al. [2018] Lu Sheng, Ziyi Lin, Jing Shao, and Xiaogang Wang. Avatar-net: Multi-scale zero-shot style transfer by feature decoration. In CVPR, pages 8242–8250, 2018.
* Simonyan and Zisserman [2014] Karen Simonyan and Andrew Zisserman. Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556, 2014.
* Ulyanov et al. [2016] Dmitry Ulyanov, Vadim Lebedev, Andrea Vedaldi, and Victor S Lempitsky. Texture networks: Feed-forward synthesis of textures and stylized images. In ICML, pages 1349–1357, 2016.
* Ulyanov et al. [2017] Dmitry Ulyanov, Andrea Vedaldi, and Victor Lempitsky. Improved texture networks: Maximizing quality and diversity in feed-forward stylization and texture synthesis. In CVPR, pages 6924–6932, 2017.
* Wang et al. [2020a] Zhizhong Wang, Lei Zhao, Haibo Chen, Lihong Qiu, Qihang Mo, Sihuan Lin, Wei Xing, and Dongming Lu. Diversified arbitrary style transfer via deep feature perturbation. In CVPR, pages 7789–7798, 2020.
* Wang et al. [2020b] Zhizhong Wang, Lei Zhao, Sihuan Lin, Qihang Mo, Huiming Zhang, Wei Xing, and Dongming Lu. Glstylenet: exquisite style transfer combining global and local pyramid features. IET Computer Vision, 14(8):575–586, 2020.
* Wang et al. [2021] Zhizhong Wang, Lei Zhao, Haibo Chen, Zhiwen Zuo, Ailin Li, Wei Xing, and Dongming Lu. Evaluate and improve the quality of neural style transfer. CVIU, 207:103203, 2021.
* Wang et al. [2022] Zhizhong Wang, Lei Zhao, Haibo Chen, Ailin Li, Zhiwen Zuo, Wei Xing, and Dongming Lu. Texture reformer: Towards fast and universal interactive texture transfer. In AAAI, 2022.
* Yao et al. [2019] Yuan Yao, Jianqiang Ren, Xuansong Xie, Weidong Liu, Yong-Jin Liu, and Jun Wang. Attention-aware multi-stroke style transfer. In CVPR, pages 1467–1475, 2019.
* Zhang et al. [2018] Richard Zhang, Phillip Isola, Alexei A Efros, Eli Shechtman, and Oliver Wang. The unreasonable effectiveness of deep features as a perceptual metric. In CVPR, pages 586–595, 2018.
* Zhang et al. [2019] Yulun Zhang, Chen Fang, Yilin Wang, Zhaowen Wang, Zhe Lin, Yun Fu, and Jimei Yang. Multimodal style transfer via graph cuts. In ICCV, pages 5943–5951, 2019.
School of Computing, University of Utah, Salt Lake City, UT, USA, 84112
<EMAIL_ADDRESS>
# A Zero Attentive Relevance Matching Network for Review Modeling in
Recommendation System
Hansi Zeng, Zhichao Xu, and Qingyao Ai
###### Abstract
User and item reviews are valuable for the construction of recommender
systems. In general, existing review-based methods for recommendation can be
broadly categorized into two groups: the siamese models that build static user
and item representations from their reviews respectively, and the interaction-
based models that encode user and item dynamically according to the similarity
or relationships of their reviews. Although the interaction-based models have
more model capacity and fit human purchasing behavior better, several
problematic model designs and assumptions of the existing interaction-based
models lead to its suboptimal performance compared to existing siamese models.
In this paper, we identify three problems of the existing interaction-based
recommendation models and propose a couple of solutions as well as a new
interaction-based model to incorporate review data for rating prediction. Our
model implements a relevance matching model with regularized training losses
to discover user relevant information from long item reviews, and it also
adopts a zero attention strategy to dynamically balance the item-dependent and
item-independent information extracted from user reviews. Empirical
experiments and case studies on Amazon Product Benchmark datasets show that
our model can extract effective and interpretable user/item representations
from their reviews and outperforms multiple types of state-of-the-art review-
based recommendation models.
###### Keywords:
Review Modeling, Interaction-based Model, Relevance Matching
## 1 Introduction
Review text is considered to be valuable for effectively learning user and
item representations for recommender systems. Previous studies show that the
incorporation of reviews into the optimization of recommender systems can
significantly improve the performance of rating prediction by alleviating data
sparsity problems with user preferences and item properties expressed in
review text [20, 19, 41, 5]. In general, existing review based recommender
systems for rating prediction can be roughly categorized into two groups: 1)
The siamese models that independently encode static user and item
representations from reviews and use the static representations to predict the
rating [41, 5]; 2) The interaction-based models that dynamically learn the
user and item representations based on their context [31, 10]. In particular,
the interaction-based models assume that, given different target items,
different user reviews might play different roles in determining the utility
of the items. For example, when the target item is about an album from the Led
Zeppelin, the user review that reflects her interest on Rock $\&$ Roll music
might be more useful than the rest of her reviews.
Although the interaction-based models have more model capacity and fit the
human purchasing behavior better [31], several problematic model designs and
assumptions lead to its lower performance than siamese models as shown in
recent studies [25]. First, most existing interaction-based models exploit co-
attention mechanism [26, 36, 6, 30] to distill textual similarity information
between user and item reviews, but such information might be diluted when
there is a vast amount of text in user and item reviews. Second, because the
number of reviews to profile each user in the training set is limited, it is
common that the target item’s characteristics are beyond the interest of a
user expressed in her limited reviews. Interaction-based models that force the
user representations to extract valuable information from user reviews for the
target item might introduce irrelevant aspects of the user and cause serious
overfitting. Third, existing interaction-based models extract user-item
relationships mostly by modeling the textual similarity between user and item
reviews. High textual similarities between user and item reviews, however, not
necessarily reflect the user’s true opinion on the target item. For example,
an item review “the taste of cappuccino is really good” might have higher
textual similarity with a user review “I really enjoy the taste of beef pho”
than with “I am a big coffee fan”, but the latter review that reflects the
user opinion on coffee could be more informative when predicting her rating on
the target item.
Based on these observations, in this paper, we propose a new interaction-based
rating prediction model to mitigate the weakness of the existing interaction-
based recommendation models. First, we implement a relevance matching model
[11] instead of a semantic matching model [36] to search the relevant review
from the user to the target item. Our relevance matching model treats each
user review as a query to search and extracts relevant information from all
the reviews of the target item. It is capable of discovering relevance
information from a large amount of review text with thousands or more words.
Second, to better capture the semantic relationships instead of the textual
similarity between user and item reviews, we use the ground-truth review
(available in the training stage) written from the user to the target item and
the corresponding item reviews as a pair of positive “query-document” to train
our relevance matching module and plug it as the auxiliary loss in the
training objective, since the ground-truth review expresses the user true
opinion to the target item. After the relevance matching function is well-
trained, other user reviews that have high relevance matching scores to the
target item would share similar characteristics to the ground-truth review and
also reflect the user true interest to the target. Last but not least, when
there is not relevant review from the user to the target item, we exploit a
zero-attention network [1] to avoid using irrelevant reviews to build user
representations. Our zero-attention network not only builds dynamic user
representations when there are high informative reviews from the user to the
target item, but also allows the model to degenerate to a siamese model with
static user representations when all user reviews are not relevant to the
target item. Specifically, separated from the interaction module, we build
static user and item embeddings using a multi-layer convolutional self-
attention network to extract information hierarchically from words, sentences
and reviews. We then construct the final user representations using both the
dynamic user representations extracted by the interaction module and the
static user embeddings created by the self-attention network. When there is no
user review relevant to the target item, the dynamic user representation
created by the interaction module with zero attention networks would be
downgraded to a zero vector and the final rating prediction of the user-item
pair would purely depend on the static user and item embeddings. Empirical
experiments and case studies on four datasets from Amazon Product Benchmark
show that our proposed model, the Zero Attentive Relevance Matching Network (ZARM), can extract effective and interpretable user/item
representations from review data and outperforms multiple types of state-of-
the-art review-based recommendation models.
## 2 Related Works
Review Based Recommendation. Using review text to enhance user and item
representations for recommender systems has been widely studied in recent years [20, 19, 40, 14, 29, 23, 39, 9]. Many works focus on topic modeling over review text for users and items. For example, HFT [20] uses LDA-like topic modeling to learn user and item parameters from reviews; the likelihood from the topic distribution is used as a regularization for rating prediction by matrix factorization (MF). RMR [19] uses the same LDA-like model on item reviews but fits the ratings using Gaussian mixtures rather than MF-like models. Recently, with the advance of deep learning, many recommendation models have started to combine neural networks with review data to learn user and item
representations, including DeepCoNN[41], TransNets[4], D-Att[27], NARRE[5],
HUITA[33], HANN[7], MPCN[31], AHN[10], HSACN[38]. Despite their differences,
existing work using deep learning models for review modeling can be roughly
divided into two styles – siamese networks and interaction-based networks. For
example, DeepCoNN [41] uses two convolutional neural networks to learn user and item representations from reviews statically, while NARRE [5] extends CNN with a review-level attention network to select more informative reviews. In contrast, MPCN [31], an interaction-based network, uses a co-attention mechanism to select the most informative and matching reviews from the user and item respectively; another attention mechanism is then applied to learn a fixed-dimensional representation by modeling the word-level interaction on the matched reviews. However, both of these two styles have their own weaknesses. The siamese models lack dynamic target-dependent modeling and neglect the interaction between the user and the target. The interaction-based models, in contrast, force dynamic matching between each user and item, neglecting the fact that not every user has an informative review for the target. Even when such a review exists, the matching information might be diluted among the thousands of words within the tens of reviews that profile the user and item.
Interaction Based Text Matching. The review-based dynamic user-item modeling is closely related to query-document representation learning in the QA task [26] and premise-hypothesis encoding in the NLI task [36, 6, 30], both of which exploit the co-attention mechanism. The co-attention mechanism computes the pairwise similarity between two sequences, builds the pairwise attention weights, and integrates them with other features of the sequences for effective text semantic matching. Besides text semantic matching in NLP tasks, several works on IR tasks [11, 34, 12, 2] also utilize interaction-based approaches for text relevance matching. For example, DRMM [11] proposes a pooling pyramid technique that converts the pairwise similarity matrix into a histogram and uses it as a feature for the final text matching prediction. Based on DRMM, K-NRM [34] introduces a kernel-based differentiable pooling technique that can learn matching signals at different levels, as sketched below. Recent work [22] further investigates using semantic matching and relevance matching together or alone in NLP and IR tasks, finding that relevance matching alone performs reasonably well in many NLP tasks, but semantic matching is not effective for IR tasks.
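As a concrete illustration of the kernel pooling idea behind K-NRM, a minimal sketch is given below; the kernel means and width are placeholders, not the values used in [34], and `log1p` stands in for the log for numerical stability.

```python
import torch

def kernel_pooling(sim, mus=(-0.8, -0.4, 0.0, 0.4, 0.8, 1.0), sigma=0.1):
    """sim: (n_query_words, n_doc_words) cosine-similarity matrix."""
    feats = []
    for mu in mus:
        k = torch.exp(-(sim - mu) ** 2 / (2 * sigma ** 2))  # soft match count
        # pool over document words, take the (stabilized) log, then sum over
        # query words to get one feature per kernel
        feats.append(torch.log1p(k.sum(dim=1)).sum())
    return torch.stack(feats)  # fed to a scoring layer in K-NRM
```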
Rethinking the Progress of Deep Recommender System. While we have witnessed
the rapid advancements of deep learning methodology and its applications in the field of recommender systems, there are worries about the progress we have made. Dacrema et al. [8] investigated the performance of several recent algorithms proposed in top conferences and found most of them cannot compete with traditional methods like Matrix Factorization and its derivative models [18, 15, 35], BPR [24], or Item-KNN. Furthermore, Sachdeva et al. [25] focused on the usefulness of reviews, examining several review-based recommendation algorithms and finding that applying complex structures to extract semantic information from reviews does not necessarily improve the system’s performance. Our goal is to tackle these existing problems and propose an interpretable method that effectively utilizes review information.
## 3 Proposed Method
### 3.1 Overview
The goal of the proposed model is to predict the rating from the user to the
target item based on their review text. The architecture of our model is shown
in Figure 1. Our model contains two parallel encoders that use multi-layer convolutional self-attention networks to hierarchically encode static user and item representations from their reviews, respectively. Besides, the model has an interaction module that encodes the dynamic user representation according to her currently interacted item, where we first compute the relevance level of each user review to the target item with the relevance matching function. Then a zero-attention network is applied to allow the dynamic user representation to degrade to a zero vector when no user review is relevant to the target, in which case the final user representation purely depends on the static user representation; a minimal sketch of this mechanism is given after Figure 1. The encoded static and dynamic user representations are concatenated and taken as the input to the feature transformation layer to encode the final user representation. On the rightmost part of the model, a prediction layer is added to let the learned final user and item representations interact with each other and compute the final rating prediction. In the training stage, an auxiliary loss is plugged in to guide the training of the relevance matching function. In the following sections, we introduce the static user/item encoder (Section 3.2), the dynamic user encoder composed of the relevance matching function and the zero-attention network (Section 3.3), the prediction layer (Section 3.4), and the training objective (Section 3.5) in detail.
Figure 1: Overview of our model structure
### 3.2 Static User/Item Encoder
Since the static user and item encoders only differ in their inputs, we
describe the process of encoding the user static representation in detail
below; the same process applies to the static item encoder in a similar way.
Assume the input of the user encoder is $\\{r^{u}_{1},\ldots
r^{u}_{N}\\}$, where $N$ is the number of reviews written by the user. We
learn each review representation hierarchically from word-level to sentence-
level. More specifically, a user review $r^{u}=\\{s_{1},\ldots s_{T}\\}$
consists of $T$ sentences, and each sentence $s_{i}$ is composed of a sequence
of $L$ words $\\{w^{i}_{1},\ldots w^{i}_{L}\\}$. To learn the sentence
representation $\bm{s_{i}}$, we apply the word-level self-attentive
convolution network to encode the contextual representation of each word in
the sentence and use the attention network to aggregate the learned contextual
embeddings into a single vector. Mathematically, we first apply the word
embedding layer to map each word $w^{i}_{j}$ into a vector
$\bm{w}^{i}_{j}\in\mathbb{R}^{d_{w}}$ to form a sequence of word embeddings
$\bm{W}^{i}\in\mathbb{R}^{d_{w}\times L}$, then we apply the word-level
convolutional neural network to learn the local semantic representation of each
word:
$\displaystyle\bm{Q}^{i}_{w}=\text{CNN}_{w}^{Q}(\bm{W}^{i}),\
\bm{K}^{i}_{w}=\text{CNN}_{w}^{K}(\bm{W}^{i}),\
\bm{V}^{i}_{w}=\text{CNN}_{w}^{V}(\bm{W}^{i})$ (1)
where $\bm{Q}^{i}_{w},\ \bm{K}^{i}_{w},\
\bm{V}^{i}_{w}\in\mathbb{R}^{d_{w}\times L}$. To enrich each word's semantic
representation and capture long-range dependencies between words, we apply a
multihead self-attention network [32] on top of the local word
representations learned from $\text{CNN}(\cdot)$. Finally, a one-layer
feed-forward network is applied to learn more flexible representations:
$\displaystyle\bm{Z}_{w}^{i}=\text{FFN}_{w}\bigg{(}\text{Multihead-Self-
Attention}_{w}(\bm{Q}^{i}_{w},\bm{K}^{i}_{w},\bm{V}^{i}_{w})\bigg{)}$ (2)
where $\bm{Z}_{w}^{i}\in\mathbb{R}^{d_{s}\times L}$. Then we use an additive-
attention network [37] to aggregate the contextual representations into a
single vector $\bm{s_{i}}\in\mathbb{R}^{d_{s}}$ for sentence modeling:
$\displaystyle\bm{s_{i}}=\text{Additive-Attention}_{w}(\bm{Z}^{i}_{w})$ (3)
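To make this concrete, the following PyTorch-style sketch assembles Eqs. (1)-(3): parallel 1-D convolutions produce $\bm{Q}^{i}_{w}$, $\bm{K}^{i}_{w}$, $\bm{V}^{i}_{w}$, multihead self-attention plus a feed-forward layer produce $\bm{Z}^{i}_{w}$, and a simplified additive-attention scorer pools over words. The class name, dimensions, kernel size, and head count are illustrative assumptions, not the paper's exact configuration.

```python
import torch
import torch.nn as nn

class WordLevelEncoder(nn.Module):
    """Minimal sketch of the word-level encoder in Eqs. (1)-(3)."""
    def __init__(self, d_w=300, d_s=100, kernel_size=3, n_heads=2):
        super().__init__()
        pad = kernel_size // 2
        # Three parallel 1-D convolutions produce Q, K, V (Eq. 1).
        self.conv_q = nn.Conv1d(d_w, d_s, kernel_size, padding=pad)
        self.conv_k = nn.Conv1d(d_w, d_s, kernel_size, padding=pad)
        self.conv_v = nn.Conv1d(d_w, d_s, kernel_size, padding=pad)
        self.attn = nn.MultiheadAttention(d_s, n_heads, batch_first=True)
        self.ffn = nn.Linear(d_s, d_s)              # Eq. (2)
        self.score = nn.Linear(d_s, 1, bias=False)  # additive-attention scorer (Eq. 3)

    def forward(self, W):                  # W: (batch, L, d_w) word embeddings
        x = W.transpose(1, 2)              # Conv1d expects (batch, channels, L)
        q = self.conv_q(x).transpose(1, 2)
        k = self.conv_k(x).transpose(1, 2)
        v = self.conv_v(x).transpose(1, 2)
        z, _ = self.attn(q, k, v)          # contextual word representations
        z = self.ffn(z)                    # Z_w in Eq. (2): (batch, L, d_s)
        a = torch.softmax(self.score(z), dim=1)  # attention weights over words
        return (a * z).sum(dim=1)          # sentence vector s_i: (batch, d_s)
```

The same convolution, self-attention, and additive-attention pattern is reused at the sentence and review levels below.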
We apply the same procedure on each sentence of the review $r^{u}$ to form a
sequence of sentence representations
$\bm{S}=[\bm{s}_{1},\ldots\bm{s}_{T}]\in\mathbb{R}^{d_{s}\times T}$. Then we
take the sentence sequence as input to a sentence-level self-attentive
convolution network with an additive-attention network to form the review
representation $\bm{r}^{u}\in\mathbb{R}^{d_{r}}$:
$\displaystyle\bm{Z}_{s}$ $\displaystyle=\text{Multihead-Self-
Attention}_{s}\bigg{(}\text{CNN}^{Q}_{s}(\bm{S}),\text{CNN}^{K}_{s}(\bm{S}),\text{CNN}^{V}_{s}(\bm{S})\bigg{)}$
(4) $\displaystyle\bm{r}^{u}$ $\displaystyle=\text{Additive-
Attention}_{s}(\bm{Z}_{s})$ (5)
We apply the same hierarchical network on each review written by the user to
form a sequence of review representations
$\bm{R}=[\bm{r}^{u}_{1},\ldots\bm{r}^{u}_{N}]\in\mathbb{R}^{d_{r}\times N}$.
Finally, we apply a user-level additive-attention network to aggregate the
information of these reviews into a single vector
$\bm{u}^{static}\in\mathbb{R}^{d_{r}}$, the user static representation:
$\displaystyle\bm{u}^{static}$ $\displaystyle=\text{Additive-
Attention}_{u}(\bm{R})$ (6)
The item static representation $\bm{i}^{static}$ can be obtained using a
similar procedure.
### 3.3 Dynamic User Encoder
To learn the dynamic user representation, we first compute the relevance scores
of the user’s reviews with respect to the target item using the relevance
matching function. In other words, given $N$ reviews written by the user,
$\\{r^{u}_{1},\ldots r^{u}_{N}\\}$, we want to compute their corresponding
relevance scores $\\{\alpha_{1},\ldots\alpha_{N}\\}$ to the target. The
relevance matching function is introduced in detail in the following.
Relevance Matching Function: The input of the function is a query-document
pair where we treat the user review as a query and the target item reviews as
a document, and we denote the function as $m(\cdot,\cdot)$. Formally, each
user review $r^{u}_{k}$ can be alternatively represented as a sequence of word
embeddings $[\bm{e}_{1}^{u},\ldots,\bm{e}_{M}^{u}]:=\bm{S}_{u}^{k}$, and the
item document is a concatenation of the word embedding sequences of each of its
reviews,
$[\bm{e}_{1}^{i},\ldots,\bm{e}_{M}^{i},\ldots,\bm{e}_{(N-1)M+1}^{i},\ldots,\bm{e}_{NM}^{i}]:=\bm{S}_{i}$,
where $\bm{e}^{u}_{k},\ \bm{e}^{i}_{k}\in\mathbb{R}^{d_{w}}$,
$\bm{S}^{k}_{u}\in\mathbb{R}^{d_{w}\times M}$,
$\bm{S}_{i}\in\mathbb{R}^{d_{w}\times MN}$, $d_{w}$ is the dimension of the
word embedding, $M$ is the review length, and $N$ is the number of reviews from
the target item. To get the relevance matching score from the $k$-th user
review to the target item, we first compute the word similarity matrix $\bm{M}$:
$\displaystyle\bm{M}={\bm{S}^{k}_{u}}^{T}\bm{S}_{i}\in\mathbb{R}^{M\times MN}$
(7)
where $\mathbf{M}_{i,j}$ can be considered as a cosine similarity score (we
normalize it into cosine space) obtained by matching the $i$-th word of the
user review with the $j$-th word of the item document. We apply mean pooling
and max pooling on every row of the similarity matrix to obtain discriminative
features:
$\displaystyle mean(\mathbf{M})=\begin{bmatrix}mean(\mathbf{M}_{1:})\\\
\ldots\\\ mean(\mathbf{M}_{M:})\end{bmatrix}\in\mathbb{R}^{M}\
,max(\mathbf{M})=\begin{bmatrix}max(\mathbf{M}_{1:})\\\ \ldots\\\
max(\mathbf{M}_{M:})\end{bmatrix}\in\mathbb{R}^{M}$ (8)
Also, we consider the relative importance score of each word in the user
review $\bm{S}^{k}_{u}$ by applying a function $imp(\cdot)$:
$\displaystyle
imp({\mathbf{S}_{u}^{k}})=\begin{bmatrix}imp({\mathbf{e}_{1}^{u}})\\\
\ldots\\\ imp({\mathbf{e}_{M}^{u}})\end{bmatrix}\in\mathbb{R}^{M}\ \
\text{where}\ \
imp(\bm{e}_{j}^{u})=\frac{\exp({\bm{w}_{p}^{T}\bm{e}_{j}^{u}})}{\sum_{o=1}^{M}\exp({\bm{w}_{p}^{T}}\bm{e}_{o}^{u})}$
(9)
where $\bm{w}_{p}\in\mathbb{R}^{d_{w}}$. The input feature for the scoring
function, parameterized by a two-layer feed-forward neural network, is:
$\displaystyle\bm{I}^{rel}=\begin{bmatrix}imp(\bm{S}^{k}_{u})\odot
mean(\mathbf{M})\\\ imp(\bm{S}^{k}_{u})\odot
max(\mathbf{M})\end{bmatrix}\in\mathbb{R}^{2M}$ (10)
Hence the relevance score between the $k$-th user review $\bm{S}^{k}_{u}$ and
the item document $\bm{S}_{i}$ is:
$\displaystyle
m(\bm{S}^{k}_{u},\bm{S}_{i})=\text{FFN}\big{(}\text{FFN}(\bm{I}^{rel})\big{)}=\alpha_{k}\in(-\infty,\infty)$
(11)
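A minimal sketch of the relevance matching function in Eqs. (7)-(11) follows, assuming L2-normalized word embeddings (so the inner products are cosine scores) and a fixed, padded review length $M$; the class name and default sizes are hypothetical.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class RelevanceMatcher(nn.Module):
    """Minimal sketch of m(., .) from Eqs. (7)-(11)."""
    def __init__(self, d_w=300, M=80, d_hidden=16):
        super().__init__()
        self.w_p = nn.Linear(d_w, 1, bias=False)          # importance scorer, Eq. (9)
        self.ffn = nn.Sequential(nn.Linear(2 * M, d_hidden),
                                 nn.ReLU(),
                                 nn.Linear(d_hidden, 1))  # 2-layer scorer, Eq. (11)

    def forward(self, S_u, S_i):
        # S_u: (M, d_w) user-review words; S_i: (M*N, d_w) item-document words
        S_u, S_i = F.normalize(S_u, dim=-1), F.normalize(S_i, dim=-1)
        M_sim = S_u @ S_i.T                               # Eq. (7): (M, M*N)
        m_mean = M_sim.mean(dim=1)                        # Eq. (8): row-wise mean
        m_max = M_sim.max(dim=1).values                   # Eq. (8): row-wise max
        imp = torch.softmax(self.w_p(S_u).squeeze(-1), dim=0)  # Eq. (9)
        I_rel = torch.cat([imp * m_mean, imp * m_max])    # Eq. (10): (2M,)
        return self.ffn(I_rel).squeeze()                  # alpha_k, Eq. (11)
```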
When there is no user review relevant to the target item, we can expect every
relevance score $\alpha_{k}\ll 0$. However, if we naively normalize the
relevance scores and use them as weights to measure the importance of each
user review, the dynamic user representation obtained as the weighted sum of
the user review representations would still be a non-zero vector. This is
because, after normalization, every relevance score is assigned a probability,
and since these probabilities sum to $1$, the situation where every normalized
weight $\hat{\alpha}_{k}\approx 0$ is impossible; hence the weighted sum of the
user reviews cannot be a zero vector. For example, suppose that the relevance
scores of all user reviews are $\alpha_{k}=-100$ for $k=1,\ldots N$; then each
normalized relevance score would be $\frac{1}{N}$. The dynamic user
representation would become
$\bm{u}^{dynamic}=\displaystyle\sum_{k=1}^{N}\frac{1}{N}\bm{r}^{u}_{k}$, which
is not a zero vector even though no user review is relevant to the target
item. To resolve this problem, we use the zero-attention network motivated by
[1].
Zero-Attention Network: we introduce a zero score $\alpha_{0}=0$ and re-
normalize the relevance scores by taking the zero score into account.
Formally,
$\hat{\alpha}_{k}=\frac{\exp(\alpha_{k})}{\exp{(0)}+\exp{(\alpha_{1})}+\ldots+\exp{(\alpha_{N})}}=\frac{\exp(\alpha_{k})}{1+\exp{(\alpha_{1})}+\ldots+\exp{(\alpha_{N})}},\
k=1,\ldots N$, and
$\hat{\alpha}_{0}=\frac{1}{1+\exp{(\alpha_{1})}+\ldots+\exp{(\alpha_{N})}}$;
then the user dynamic representation is
$\displaystyle\mathbf{u}^{dynamic}$
$\displaystyle=\displaystyle\sum_{k=1}^{N}\hat{\alpha}_{k}\mathbf{r}^{u}_{k}+\hat{\alpha}_{0}\vec{\mathbf{0}}$
(12)
Intuitively, when $\alpha_{k}\ll 0$ for all $k=1,\ldots,N$, every normalized
score $\hat{\alpha}_{k}\approx 0$, and
$\bm{u}^{dynamic}\approx\vec{\mathbf{0}}$, i.e., close to a zero vector. On
the other hand, if there exists a large relevance score, for example
$\alpha_{k}=10$ for a certain $k$, the effect of $\alpha_{0}=0$ will be very
small, the normalized score will be $\hat{\alpha}_{k}\approx 1$, and
$\bm{u}^{dynamic}\approx\bm{r}^{u}_{k}$.
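A minimal sketch of the zero-attention aggregation in Eq. (12): prepending the fixed score $\alpha_{0}=0$ to the softmax lets every weight shrink toward zero when all reviews are irrelevant (the function name is illustrative).

```python
import torch

def zero_attention(alphas, reviews):
    # alphas: (N,) relevance scores; reviews: (N, d_r) review representations
    scores = torch.cat([torch.zeros(1), alphas])  # prepend alpha_0 = 0
    weights = torch.softmax(scores, dim=0)        # hat{alpha}_0, ..., hat{alpha}_N
    return weights[1:] @ reviews                  # Eq. (12): the zero vector adds nothing

# If every alpha_k << 0, all weights[1:] ~ 0 and the output is nearly the
# zero vector; if some alpha_k is large, its weight ~ 1 and the output is
# approximately that review's representation.
```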
### 3.4 Prediction Layer
This layer combines the static and dynamic user representations to form a final
user representation learned from reviews. It also learns a final item static
representation from reviews via a one-layer feed-forward neural network:
$\displaystyle\bm{u}^{r}$
$\displaystyle=\text{Relu}\Big{(}\begin{bmatrix}\mathbf{W}^{static}_{u},&\mathbf{W}^{dynamic}_{u}\end{bmatrix}\begin{bmatrix}\bm{u}^{static}\\\
\bm{u}^{dynamic}\end{bmatrix}+\mathbf{b}_{u}\Big{)}$ (13)
$\displaystyle\bm{i}^{r}$
$\displaystyle=\text{Relu}\Big{(}\bm{W}^{static}_{i}\bm{i}^{static}+\mathbf{b}_{i}\Big{)}$
(14)
where $\bm{W}^{static}_{u},\ \bm{W}^{dynamic}_{u},\
\bm{W}^{static}_{i}\in\mathbb{R}^{d_{h}\times d_{r}}$ and $\bm{b}_{u},\
\bm{b}_{i}\in\mathbb{R}^{d_{h}}$. Finally, we combine the user and item id
embeddings $\bm{u}^{id}$, $\bm{i}^{id}\in\mathbb{R}^{d_{h}}$, with user and
item embeddings learned from reviews $\bm{u}^{r}$, $\bm{i}^{r}$, to form their
final representations, which are $\bm{u}=\bm{u}^{r}+\bm{u}^{id},\
\bm{i}=\bm{i}^{r}+\bm{i}^{id}$. We take the user and item embeddings as input
to get the final rating prediction:
$\displaystyle\hat{y}_{u,i}=\bm{w}_{f}^{T}(\bm{u}\odot\bm{i})+b_{u}+b_{i}+b_{g}$
(15)
where $\bm{w}_{f}\in\mathbb{R}^{d_{h}}$ and $b_{u},\ b_{i},\
b_{g}\in\mathbb{R}$ are the user, item, and global bias terms.
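A minimal sketch of the prediction layer in Eqs. (13)-(15); the vocabulary sizes are placeholders, and the bias terms are realized as learned per-user and per-item embeddings plus a global scalar.

```python
import torch
import torch.nn as nn

class PredictionLayer(nn.Module):
    """Minimal sketch of Eqs. (13)-(15)."""
    def __init__(self, d_r=100, d_h=32, n_users=1000, n_items=1000):
        super().__init__()
        self.proj_u = nn.Linear(2 * d_r, d_h)    # Eq. (13): [static; dynamic] -> u^r
        self.proj_i = nn.Linear(d_r, d_h)        # Eq. (14): static -> i^r
        self.u_id = nn.Embedding(n_users, d_h)   # user id embedding
        self.i_id = nn.Embedding(n_items, d_h)   # item id embedding
        self.w_f = nn.Linear(d_h, 1, bias=False)
        self.b_u = nn.Embedding(n_users, 1)      # per-user bias
        self.b_i = nn.Embedding(n_items, 1)      # per-item bias
        self.b_g = nn.Parameter(torch.zeros(1))  # global bias

    def forward(self, u_static, u_dynamic, i_static, uid, iid):
        u_r = torch.relu(self.proj_u(torch.cat([u_static, u_dynamic], dim=-1)))
        i_r = torch.relu(self.proj_i(i_static))
        u = u_r + self.u_id(uid)                 # final user representation
        i = i_r + self.i_id(iid)                 # final item representation
        return (self.w_f(u * i) + self.b_u(uid)  # Eq. (15)
                + self.b_i(iid) + self.b_g).squeeze(-1)
```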
### 3.5 Training Objective
Besides a regression loss for the rating prediction, an auxiliary loss is
utilized to better train the relevance matching function. Specifically, assume
there is a user-item pair $(u,i)$ with ground-truth rating
$y_{u,i}$ in the training stage; the ground-truth review $r_{u,i}^{g}$
written by the user for the target item is treated as a “positive query” to
the target item. We also randomly sample a review $r_{u,i}^{n}$ from a
different user and a different item as a “negative query” to the target item.
The corresponding word sequence representations of the ground-truth review,
the negative review, and the target item document are $\bm{S}^{u}_{g},\
\bm{S}^{u}_{n},\ \bm{S}_{i}$. Ideally, a good relevance matching function
$m(\cdot,\cdot)$ can distinguish the positive query-document pair from the
negative one; in other words, we wish
$m(\bm{S}^{u}_{g},\bm{S}_{i})>m(\bm{S}^{u}_{n},\bm{S}_{i})$. At the same time,
we want to minimize the regression loss between ground-truth rating $y_{u,i}$
and predicted rating $\hat{y}_{u,i}$ computed from Equation 15. To achieve the
above two goals, we write the objective function as follows, where
$\sigma(\cdot)$ denotes the sigmoid function mapping the relevance score into
$(0,1)$:
$\displaystyle
loss=\displaystyle\sum_{\\{(u,i)\\}\in\mathcal{S}}\underbrace{\bigg{(}y_{u,i}-\hat{y}_{u,i}\bigg{)}^{2}}_{\text{regression
loss}}-\underbrace{\bigg{(}\log\sigma\big{(}m(\bm{S}^{u}_{g},\bm{S}_{i})\big{)}+\log\big{(}1-\sigma\big{(}m(\bm{S}^{u}_{n},\bm{S}_{i})\big{)}\big{)}\bigg{)}}_{\text{auxiliary
loss}}$
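A minimal sketch of this joint objective; the sigmoid maps the unbounded relevance scores into $(0,1)$ as in the formula above.

```python
import torch
import torch.nn.functional as F

def zarm_loss(y_true, y_pred, m_pos, m_neg):
    # y_true, y_pred: ratings; m_pos, m_neg: relevance scores of the
    # positive and negative query-document pairs from Eq. (11).
    regression = F.mse_loss(y_pred, y_true, reduction='sum')
    p_pos, p_neg = torch.sigmoid(m_pos), torch.sigmoid(m_neg)
    auxiliary = -(torch.log(p_pos) + torch.log(1.0 - p_neg)).sum()
    return regression + auxiliary
```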
## 4 Experimental Setup
Datasets and Evaluation Metrics. We conduct our experiments on four different
categories of 5-core Amazon product review datasets [13]. The statistics of
these four categories are shown in the first and second columns of the Table
1. For each dataset, we randomly split user-item pairs into training,
validation, and testing sets with ratio 8:1:1. We use NLTK [3] to tokenize
the sentences and words of reviews. We use the same number of reviews for
profiling users and items, setting it to cover $90\%$ of users to balance
efficiency and performance. We adopt Mean Square
Error (MSE) as the main metric to evaluate the performance of our model. The
source code can be found at https://github.com/HansiZeng/ZARM.
Compared Methods. To evaluate the performance of our method, we compare it to
several state-of-the-art baseline models: (1) MF [17]: a basic but well-known
CF model that predicts the rating using the inner product between user and item
hidden representations plus user, item, and global biases; (2) NeuMF [15]: a
CF-based model that combines the linearity of GMF and the non-linearity of MLPs
for modeling user and item latent representations; (3) HFT [20]: a
topic-modeling-based model that combines ratings with reviews via LDA; (4)
DeepCoNN [41]: a CNN-based model that uses two convolutional neural networks to
learn user and item representations; (5) NARRE [5]: a CNN-based model that
modifies DeepCoNN by using a review-level attention network to select more
informative reviews; (6) MPCN [31]: a model that selects informative reviews
from the user and item via review-level pointers using the co-attention
technique, and selects informative word-level representations for rating
prediction by applying word-level pointers over the selected reviews; (7) AHN
[10]: a dynamic model using the co-attention mechanism but treating user and
item asymmetrically; (8) ZARM-static: the variant of ZARM that only uses user
static representations; and (9) ZARM-dynamic: the variant of ZARM that only
uses user dynamic representations.
Parameter Settings. We use 300-dimensional pretrained word embeddings from
Google News [21], and employ Adam [16] for optimization with an initial
learning rate of $0.001$. We set the dimensions of the sentence hidden vector
and review hidden vector to $100$, and the latent dimension of the prediction
layer to $32$. The convolution kernel size is $1$ or $3$ depending on the
performance on each dataset, and the number of heads for each self-attention
layer is $2$. We apply dropout after the word embedding layer, after each
feed-forward layer in the sequence encoding modules, and before the prediction
layer, with rates $[0.2,0.3,0.5]$ respectively. The hidden dimension of the
two-layer neural network in the relevance matching module is set to $16$. The
hyper-parameters of the baselines follow the settings of their original papers.
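For reference, the settings above can be summarized as a compact configuration; the field names below are ours, while the values come from the text (the mapping of the three dropout rates to the three locations is our reading).

```python
# Hypothetical configuration dict summarizing the reported hyper-parameters.
config = {
    "word_embedding": "GoogleNews-300d",  # pretrained word embeddings [21]
    "optimizer": "Adam",
    "learning_rate": 1e-3,
    "sentence_dim": 100,                  # sentence hidden vector
    "review_dim": 100,                    # review hidden vector
    "prediction_dim": 32,                 # latent dim of prediction layer
    "conv_kernel_size": (1, 3),           # 1 or 3, chosen per dataset
    "attention_heads": 2,
    "dropout": (0.2, 0.3, 0.5),           # embedding / feed-forward / pre-prediction
    "relevance_ffn_hidden": 16,
}
```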
## 5 Results and Analysis
The MSE results of the compared models are shown in Table 1. Based on the
results, we can make several observations. Firstly, the siamese models
significantly outperform the interaction-based models. As discussed previously,
because not every user has a review informative about the target,
interaction-based models that are forced to extract informative reviews from
user data suffer from heavy over-fitting. Among the siamese networks, we
observe that ZARM-static outperforms the other siamese models. This
demonstrates that ZARM-static can capture the hierarchical structure of
reviews and use its attention networks to select the important information at
each level. Among the interaction-based models, ZARM-dynamic outperforms the
other baselines such as MPCN and AHN. This demonstrates the effectiveness of
the relevance matching component in discovering relevant information from vast
review text, and the utility of the auxiliary training loss, which makes the
discovered relevant reviews more aligned with the ground truth that reflects
the user's true opinion of the target item. Finally, our model (i.e., ZARM)
shows consistent improvement over the siamese and interaction-based models
across all datasets. Our model uses the zero-attention network, which can
build dynamic user representations from reviews when highly informative
reviews exist and can easily degrade to the user static representation when
they do not. This strategy combines the advantages of both the siamese and
interaction-based models.
Table 1: Experiment results on benchmark datasets. ${\dagger}$ and
${\ddagger}$ represent the best performance among siamese and interaction-
based models, respectively. The bold value is the best performance among all
models in each dataset.
| Category | Model | Toys & Games (167k / 19k / 12k) | Video Games (232k / 24k / 11k) | Kindle Store (983k / 68k / 62k) | Office Products (53k / 2k / 4k) |
|---|---|---|---|---|---|
| Non-text-based | MF | 0.8010 | 1.0979 | 0.6231 | 0.6954 |
| | NeuMF | 0.8012 | 1.0931 | 0.6255 | 0.6941 |
| Siamese | HFT | $0.7947^{{\dagger}}$ | 1.0837 | 0.6172 | 0.6881 |
| | DeepCoNN | 0.8273 | 1.1241 | 0.6437 | 0.7102 |
| | NARRE | 0.7982 | 1.0881 | 0.6199 | 0.6794 |
| | ZARM-static | 0.7952 | $1.0774^{{\dagger}}$ | $0.6159^{{\dagger}}$ | $0.6757^{{\dagger}}$ |
| Interaction-based | MPCN | 0.8199 | 1.1062 | 0.6337 | 0.7101 |
| | AHN | 0.8233 | 1.1137 | 0.6341 | 0.7341 |
| | ZARM-dynamic | $0.8054^{{\ddagger}}$ | $1.1054^{{\ddagger}}$ | $0.6279^{{\ddagger}}$ | $0.7024^{{\ddagger}}$ |
| Hybrid | ZARM | **0.7881** | **1.0632** | **0.6083** | **0.6695** |

Each dataset column header lists #Reviews / #Users / #Items.
### 5.1 Ablation Studies
We conduct the ablation study on the validation sets of the four benchmark
datasets. We report the performance of $6$ variant models derived from the
default model setting: (1) we change the static review aggregator in Equation 6
to max pooling; (2) we encode each review using the average embedding of its
words; (3) the default model uses relative position representations [28] to
encode the relative positions between entities in the self-attention network;
we remove these position encoding vectors; (4) we remove the user and item
bias in Equation 15; (5) we remove the auxiliary loss in the training
objective; and (6) we make our user and item representations symmetric by
adding a dynamic item encoding to represent the item.
Table 2: Ablation Study (validation MSE) on four datasets
Architecture | Toys-and-Games | Video-Games | Kindle-Store | Office-Product
---|---|---|---|---
Default | 0.7897 | 1.0611 | 0.5961 | 0.6731
(1) max pooling | 0.7922 | 1.0645 | 0.6075 | 0.6795
(2) avg embedding | 0.7854 | 1.0641 | 0.6043 | 0.6742
(3) Remove pos. vec. | 0.7913 | 1.0654 | 0.5985 | 0.6755
(4) Remove u/i bias | 0.8021 | 1.0713 | 0.6022 | 0.6761
(5) Remove aux. loss | 0.8147 | 1.0944 | 0.6189 | 0.6893
(6) Add item dyn. | 0.7938 | 1.0695 | 0.6053 | 0.6800
As shown in Table 2, the performance of ZARM drops when we use max
pooling in the aggregator, remove the position encoding vectors in the
self-attention network, or remove the u/i bias. Using average word embeddings
for review embeddings achieves suboptimal performance on most datasets, but it
also outperforms the default ZARM on Toys-and-Games, which indicates that such
simple aggregators may have some value for specific data types. Interestingly,
in our experiments, the variant architecture that encodes the item using both
its dynamic and static representations underperforms the default ZARM, which
only uses the item static encoding. This indicates that building
interaction-based representations on the item side may not be as profitable as
on the user side, or that the current interaction module is not suitable for
constructing dynamic item representations.
### 5.2 Behavior of the Dynamic Interaction Matching
Figure 2: The distribution of user-item pairs with different numbers of
relevant review ($\alpha_{k}>0$).
We conduct several experiments to investigate the behavior of the dynamic
interaction matching. Firstly, we investigate the number of relevant reviews
($\alpha_{k}>0$ in Eq. (11)) each user has with respect to the target item, as
shown in Figure 2. Although the number of relevant reviews from the user to
the target is dataset dependent, around $50\%$ of user-item pairs have no
relevant review in each dataset, ranging from $60\%$ in the Video Games
dataset (the most) to $48\%$ in Office Products (the least). On the other
hand, some users have more than one relevant review for their target items.
For example, in the Toys $\&$ Games dataset, $15\%$ of the user-item pairs
have more than 3 relevant reviews. Such observations imply that some users
have consistent interests and tend to buy items with similar characteristics,
which makes it highly likely that their target item matches multiple of their
historical items.
Figure 3: The distribution of relevance matching scores of each user review
given a user-item pair.
To further analyze the interaction module in our model, we randomly sample
$40$ users and their corresponding target items in the validation set for case
studies. For each user, we visualize the zero score and the relevance score of
each review with respect to the target item (from r1 to r11), as shown in
Figure 3. We observe that roughly half of the user-item pairs have a large
zero score, i.e., larger than $0.5$. On the other hand, some users have
reviews with high relevance scores, such as the pairs (user5, item5) and
(user36, item36) with review ids r6 and r9. We then take a closer look at
these two highly relevant reviews r6 and r9 and their corresponding target
item documents, as shown in Table 3. We observe that the highly relevant
reviews and their target item documents share multiple similar keywords, and
these keywords are highly informative and can describe the item
characteristics to a large extent. For example, the keyword “Gyro Hercules” in
r6 and “Gyro Hercules helicopter” in the corresponding target item document
have high textual similarity and describe the general characteristics of the
items that r6 and the target item refer to. Moreover, the user’s true opinion
of the target item can be reflected in the user’s highly relevant reviews. For
example, the first target item (item5), which has the advantage of “keeps on
going and not falls down”, meets the user interest shown in r6, which mentions
she likes a helicopter that can “truly withstand a hard fall”. And the second
target item (item36), which is suitable as a kid’s Christmas gift, conforms to
the user interest reflected in r9, in which she mentions that she needs a
Christmas gift for her 3-year-old granddaughter.
Table 3: Examples of highly relevant reviews r6 and r9 and their corresponding target item documents. The first column is the complete user review, and the second column contains sampled text selected from the item documents. User Review | Target Item Document
---|---
I have bought other remote control helicopters only to take them outside and have a little breeze of wind knock it down and break. With the Gyro Hercules it can truly withstand a hard fall so you can fly it nearly anywhere. | My kids demolish other helicopters / keeps on going and not falls down / helicopter / Gyro Hercules helicopter / it is durable enough I can’t even break it with my terrible skills.
Bought for our Granddaughter(she is 3) for Christmas. She just loves the write on wipe off A,B,C’s and 1,2,3’s. The art projects that were included and quality of the items for the project, TERRIFIC! Would recommend for all 3 year olds. | This is perfect for a rainy day Christmas Vacation / my three yr old LOVES crafts / Filled with all the supplies to make 16 high quality crafts
## 6 Conclusion
We propose a new model, ZARM, for the review-based rating prediction task. In
our model, an interaction module based on a relevance matching function with a
zero-attention network is utilized to learn the user dynamic representation in
a more flexible way, and an auxiliary loss added to the training objective
helps train the relevance matching function. Experiments on four Amazon
benchmark datasets show our model outperforms state-of-the-art models based on
siamese and interaction-based networks. By conducting case studies, we take a
deeper look into the behavior of our interaction module, and investigate
several statistical and semantic characteristics of the relevant reviews that
the interaction module extracts for users with respect to their targets.
## Acknowledgements
This work was supported in part by the School of Computing, University of Utah
and in part by NSF IIS-2007398. Any opinions, findings and conclusions or
recommendations expressed in this material are those of the authors and do not
necessarily reflect those of the sponsor.
## References
* [1] Ai, Q., Hill, D.N., Vishwanathan, S., Croft, W.: A zero attention model for personalized product search. Proceedings of the 28th ACM International Conference on Information and Knowledge Management (2019)
* [2] Bi, K., Ai, Q., Croft, W.B.: A transformer-based embedding model for personalized product search. Association for Computing Machinery, New York, NY, USA (2020). https://doi.org/10.1145/3397271.3401192, https://doi.org/10.1145/3397271.3401192
* [3] Bird, S.: Nltk: The natural language toolkit. ArXiv cs.CL/0205028 (2006)
* [4] Catherine, R., Cohen, W.: Transnets: Learning to transform for recommendation. Proceedings of the Eleventh ACM Conference on Recommender Systems (2017)
* [5] Chen, C., Zhang, M., Liu, Y., Ma, S.: Neural attentional rating regression with review-level explanations. In: Proceedings of the 2018 World Wide Web Conference (2018)
* [6] Chen, Q., Zhu, X.D., Ling, Z., Wei, S., Jiang, H., Inkpen, D.: Enhanced lstm for natural language inference. In: ACL (2017)
* [7] Cong, D., Zhao, Y., Qin, B., Han, Y., Zhang, M., Liu, A., Chen, N.: Hierarchical attention based neural network for explainable recommendation. In: Proceedings of the 2019 on International Conference on Multimedia Retrieval (2019)
* [8] Dacrema, M.F., Cremonesi, P., Jannach, D.: Are we really making much progress? a worrying analysis of recent neural recommendation approaches. Proceedings of the 13th ACM Conference on Recommender Systems (2019)
* [9] Diao, Q., Qiu, M., Wu, C.Y., Smola, A., Jiang, J., Wang, C.: Jointly modeling aspects, ratings and sentiments for movie recommendation (jmars). In: KDD ’14 (2014)
* [10] Dong, X., Ni, J., Cheng, W., Chen, Z., Zong, B., Song, D., Liu, Y., Chen, H., Melo, G.: Asymmetrical hierarchical networks with attentive interactions for interpretable review-based recommendation. In: AAAI (2020)
* [11] Guo, J., Fan, Y., Ai, Q., Croft, W.: A deep relevance matching model for ad-hoc retrieval. Proceedings of the 25th ACM International on Conference on Information and Knowledge Management (2016)
* [12] Guo, J., Fan, Y., Pang, L., Yang, L., Ai, Q., Zamani, H., Wu, C., Croft, W.B., Cheng, X.: A deep look into neural ranking models for information retrieval. Information Processing & Management 57(6), 102067 (2020)
* [13] He, R., McAuley, J.: Ups and downs: Modeling the visual evolution of fashion trends with one-class collaborative filtering. ArXiv abs/1602.01585 (2016)
* [14] He, X., Chen, T., Kan, M., Chen, X.: Trirank: Review-aware explainable recommendation by modeling aspects. In: CIKM ’15 (2015)
* [15] He, X., Liao, L., Zhang, H., Nie, L., Hu, X., Chua, T.S.: Neural collaborative filtering. Proceedings of the 26th International Conference on World Wide Web (2017)
* [16] Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. CoRR abs/1412.6980 (2015)
* [17] Koren, Y., Bell, R., Volinsky, C.: Matrix factorization techniques for recommender systems. Computer 42 (2009)
* [18] Koren, Y., Bell, R., Volinsky, C.: Matrix factorization techniques for recommender systems. Computer 42(8), 30–37 (2009)
* [19] Ling, G., Lyu, M.R., King, I.: Ratings meet reviews, a combined approach to recommend. In: RecSys ’14 (2014)
* [20] McAuley, J., Leskovec, J.: Hidden factors and hidden topics: understanding rating dimensions with review text. In: RecSys ’13 (2013)
* [21] Mikolov, T., Sutskever, I., Chen, K., Corrado, G.S., Dean, J.: Distributed representations of words and phrases and their compositionality. ArXiv abs/1310.4546 (2013)
* [22] Rao, J., Liu, L., Tay, Y., Yang, W., Shi, P., Lin, J.: Bridging the gap between relevance matching and semantic matching for short text similarity modeling. In: EMNLP/IJCNLP (2019)
* [23] Ren, Z., Liang, S., Li, P., Wang, S., Rijke, M.: Social collaborative viewpoint regression with explainable recommendations. In: WSDM ’17 (2017)
* [24] Rendle, S., Freudenthaler, C., Gantner, Z., Schmidt-Thieme, L.: Bpr: Bayesian personalized ranking from implicit feedback. arXiv preprint arXiv:1205.2618 (2012)
* [25] Sachdeva, N., McAuley, J.: How useful are reviews for recommendation? a critical review and potential improvements. arXiv preprint arXiv:2005.12210 (2020)
* [26] Santos, C.D., Tan, M., Xiang, B., Zhou, B.: Attentive pooling networks. ArXiv abs/1602.03609 (2016)
* [27] Seo, S., Huang, J., Yang, H., Liu, Y.: Interpretable convolutional neural networks with dual local and global attention for review rating prediction. Proceedings of the Eleventh ACM Conference on Recommender Systems (2017)
* [28] Shaw, P., Uszkoreit, J., Vaswani, A.: Self-attention with relative position representations. ArXiv abs/1803.02155 (2018)
* [29] Tan, Y., Zhang, M., Liu, Y., Ma, S.: Rating-boosted latent topics: Understanding users and items with ratings and reviews. In: IJCAI (2016)
* [30] Tay, Y., Luu, A.T., Hui, S.: Compare, compress and propagate: Enhancing neural architectures with alignment factorization for natural language inference. In: EMNLP (2018)
* [31] Tay, Y., Tuan, L.A., Hui, S.C.: Multi-pointer co-attention networks for recommendation. arXiv preprint arXiv:1801.09251 (2018)
* [32] Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, L., Polosukhin, I.: Attention is all you need. ArXiv abs/1706.03762 (2017)
* [33] Wu, C., Wu, F., Liu, J., Huang, Y.: Hierarchical user and item representation with three-tier attention for recommendation. In: NAACL-HLT (2019)
* [34] Xiong, C., Dai, Z., Callan, J., Liu, Z., Power, R.: End-to-end neural ad-hoc ranking with kernel pooling. arXiv preprint arXiv:1706.06613 (2017)
* [35] Xu, Z., Han, Y., Zhang, Y., Ai, Q.: E-commerce recommendation with weighted expected utility. arXiv preprint arXiv:2008.08302 (2020)
* [36] Yang, R., Zhang, J., Gao, X., Ji, F., Chen, H.: Simple and effective text matching with richer alignment features. ArXiv abs/1908.00300 (2019)
* [37] Yang, Z., Yang, D., Dyer, C., He, X., Smola, A., Hovy, E.: Hierarchical attention networks for document classification. In: HLT-NAACL (2016)
* [38] Zeng, H., Ai, Q.: A hierarchical self-attentive convolution network for review modeling in recommendation systems. arXiv preprint arXiv:2011.13436 (2020)
* [39] Zhang, W., Yuan, Q., Han, J., Wang, J.: Collaborative multi-level embedding learning from reviews for rating prediction. In: IJCAI (2016)
* [40] Zhang, Y., Lai, G., Zhang, M., Zhang, Y., Liu, Y., Ma, S.: Explicit factor models for explainable recommendation based on phrase-level sentiment analysis. SIGIR (2014)
* [41] Zheng, L., Noroozi, V., Yu, P.S.: Joint deep modeling of users and items using reviews for recommendation. In: Proceedings of the Tenth ACM International Conference on Web Search and Data Mining (2017)
# GridTracer: Automatic Mapping of Power Grids using Deep Learning and
Overhead Imagery
Bohao Huang, Jichen Yang, Artem Streltsov, Kyle Bradbury, Leslie M. Collins,
and Jordan Malof
###### Abstract
Energy system information valuable for electricity access planning such as the
locations and connectivity of electricity transmission and distribution towers
– termed the power grid – is often incomplete, outdated, or altogether
unavailable. Furthermore, conventional means for collecting this information
is costly and limited. We propose to automatically map the grid in overhead
remotely sensed imagery using deep learning. Towards this goal, we develop and
publicly-release a large dataset ($263km^{2}$) of overhead imagery with ground
truth for the power grid – to our knowledge this is the first dataset of its
kind in the public domain. Additionally, we propose scoring metrics and
baseline algorithms for two grid mapping tasks: (1) tower recognition and (2)
power line interconnection (i.e., estimating a graph representation of the
grid). We hope the availability of the training data, scoring metrics, and
baselines will facilitate rapid progress on this important problem to help
decision-makers address the energy needs of societies around the world.
###### Index Terms:
Remote sensing, deep learning, object detection, power grid, energy systems
## I Introduction
Providing sustainable, reliable, and affordable energy access is
vital to the prosperity and sustainability of modern societies, and it is the
United Nation’s Sustainable Development Goal #7 (SDG7) [1]. Increased
electricity access is correlated with positive educational, health, gender
equality, and economic outcomes [2]. Ensuring energy access over the coming
decades, and achieving SDG7, will require careful planning from non-profits,
governments, and utilities to determine electrification pathways to meet
rapidly growing energy demand.
A crucial resource for this decision-making will be high-quality information
about existing power transmission and distribution towers, as well as the
power lines that connect them (see Fig. 1); we collectively refer to these
infrastructures as the power grid (PG). Information about the precise
locations of PG towers and line connectivity is crucial for decision-makers to
determine cost-efficient solutions for extending reliable and sustainable
energy access [3]. For example, this information can be used in conjunction
with modeling tools like the Open Source Spatial Electrification Tool (OnSSET)
[4] to determine the optimal pathway to electrification: grid extension,
mini/microgrids, or off-grid systems like solar.
Unfortunately, the PG information available to decision-makers is frequently
limited. Existing PG data are often incomplete, outdated, of low spatial
resolution, or simply unavailable [5, 6]. Furthermore, conventional methods of
collecting PG data, such as field surveys or collating utility company
records, are either costly or require non-disclosure agreements. The
importance of this problem and the lack of PG data has recently prompted major
organizations such as the World Bank [5] and Facebook [6] to investigate
solutions to it. In this work we propose to address this problem by using deep
learning models to automatically map (i.e., detect and connect) the PG towers
detectable in high-resolution color overhead imagery (e.g., from satellite
imagery and aerial photography).
Figure 1: (a) Transmission towers and lines. (b) Distribution towers and
lines. Transmission towers carry higher-voltage electricity than distribution
towers, and are physically larger.
### I-A Mapping the grid using overhead imagery
Recently, deep learning models - namely deep neural networks (DNNs) - have been
shown to be capable of accurately mapping a variety of objects in color
overhead imagery, such as buildings [7, 8], roads [8, 9], and solar arrays [10,
11, 12]. Since PG towers and lines are often visible in overhead imagery,
these results suggest that mapping the PG may also be feasible. However, PG
mapping presents several unique challenges compared to mapping other objects
in overhead imagery.
The most immediate challenge of PG mapping is the structure of the desired
output. The PG is generally represented as a geospatial graph, where each
tower represents a graph node with an associated spatial location, and each PG
line represents a connection between two nodes [5, 6]. This representation is
compact, and well-suited for subsequent use by energy researchers and
decision-makers. Therefore, we require that any automatic recognition model
produce a geospatial PG graph as output.
A second challenge is that PG infrastructure exhibits weak and geographically-
distributed visual features in overhead imagery, making the problem both
unique and challenging. Fig. 2 illustrates the weak visual features of PG
infrastructure in representative examples of (a) distribution and (b)
transmission PG infrastructure. Looking closely at Fig. 2(a) it is apparent
that PG towers exhibit very few visual features (if any), aside from their
shadows. Shadows are useful for detection; however, their strength and visual
appearance vary, and they are not always present (e.g., depending upon time
of day). As illustrated in Fig. 2, PG lines typically appear as thin white or
black lines that are only intermittently visible due to their varying contrast
with the local background (e.g., white lines become faint, or disappear, as
they cross a pale background). From Fig. 2 it is also notable that transmission
infrastructure tends to be relatively easy to detect because it is much
larger; however, it is also much rarer than distribution infrastructure.
As a result of these two major challenges, PG mapping is a unique and
challenging problem for existing visual recognition models. Fig. 3(a)
demonstrates the desired output structure of PG mapping, where towers are
represented as boxes or nodes and lines are represented as edges. Fig. 3(b)
illustrates the large-scale topology of the PG in one region, which might be
leveraged for recognition. In contrast, modern DNNs rely
primarily upon local visual features for object recognition [13, 14], making
them poorly suited for PG mapping. Additionally, most existing DNNs do not
typically produce output in the form of a geospatial graph. Some work has
recently been conducted for inferring geospatial graphs on overhead imagery
for the problem of road mapping [9, 15]. Unfortunately, however, these
approaches make fundamentally different assumptions about the structure of the
underlying graph, and the visual features associated with it, limiting their
applicability to the PG mapping problem (see Section II for further
discussion).
Figure 2: Overhead imagery of power grid towers and lines in two different
locations. (a) Distribution towers and lines in Tucson, Arizona, U.S.A. (b)
Transmission towers and lines in Wilmington, North Carolina, U.S.A.
Therefore, effective PG mapping will require the development of novel models
that can address its unique challenges. This raises a third major challenge of
PG mapping: the absence of any publicly-available benchmark dataset to train
and validate recognition models. Furthermore, it is unclear how to score a
geospatial graph so that different models can be compared. Without a publicly-
available dataset and scoring metrics, it is not possible to effectively study
this problem.
Figure 3: Illustration of the PG inference task, where (a) shows the annotated
tower and lines and (b) demonstrates the power grid in larger scale. Note the
complexity of the topology in (b).
### I-B Contribution of this work
In this work we make two primary contributions and lay the foundations for a
practical PG mapping approach with overhead imagery. First, we develop and
publicly-release a dataset of satellite imagery encompassing an area of
$263km^{2}$ across seven cities: two cities in the U.S. and five cities in New
Zealand. In this work we employ imagery with a 0.3m ground sampling distance.
Our primary motive for choosing this resolution is because it is the highest
resolution that is also widely employed for research on object recognition in
overhead imagery [8, 9, 16, 7]. Imagery at this resolution is also
commercially available across the globe, making any techniques developed using
it widely applicable. Our dataset includes both imagery and corresponding hand
annotations of transmission and distribution towers (as rectangles) and the
power lines connecting them (as line segments), making it possible to train
and validate deep learning approaches for PG mapping. We also perform
additional tests to evaluate the quality of the hand annotations. To enable
future benchmark testing, we describe a data handling procedure for training
and validating models, and we propose metrics to score the graphical output of
the models.
Our second contribution is a novel deep model – termed GridTracer – that makes
a first step towards addressing the unique challenges of PG mapping in
overhead imagery. GridTracer effectively splits PG mapping into three simpler
steps: tower detection, line segmentation, and PG graph inference. In tower
detection we use a deep object detection model to identify the location and
size (e.g., with a bounding box) of individual PG towers. In line segmentation
we use a deep image segmentation model to generate a pixel-wise score
indicating the likelihood that a PG line is present. In PG graph inference, we
integrate the output of steps one and two over large geographic regions, with
the goal of estimating which towers are likely to be connected by power lines.
The final output of GridTracer is a geospatial graph, in which graph nodes
represent PG tower locations, and graph edges (node connections) represent PG
lines.
As we discuss in Section II, existing DNN-based approaches are not suitable to
solve the PG mapping problem, and therefore we propose GridTracer as a
baseline for future work. To this end, we use our new benchmark dataset to
comprehensively study the performance of GridTracer, including ablation
studies to analyze the major designs and hyperparameter choices of GridTracer.
We also compare GridTracer’s performance to human-level PG mapping accuracy on
our benchmark to provide insights on the level of PG mapping accuracy that may
ultimately be achievable, and thereby how much further GridTracer could be
improved with further research. We hope the availability of these resources
(e.g., data, scoring metrics, and baseline) and analyses will facilitate rapid
progress on this important problem to help decision-makers address the energy
needs of societies around the world.
## II Related Work
Mapping the power grid in remotely-sensed data. Until recently, the majority
of work related to PG analysis using remote sensing techniques focused around
identifying vegetation encroachment for monitoring known transmission lines
[17, 18, 19]. More recently, Facebook [6] and Development Seed (in partnership
with the World Bank) [5] have both proposed approaches to map PGs in overhead
imagery within the past year.
Facebook’s approach uses nighttime lights imagery to identify distribution
lines (medium voltage). This approach maps grid connectivity using VIIRS
750m-resolution day/night band nighttime lights imagery to identify and
connect communities using electricity. Since nighttime light data is not
perfectly correlated with energy access, especially in regions with lower
electricity access rates, such an approach could potentially be less accurate
in locations where it would be most beneficial for extending electricity
access. Additionally, the 750m-resolution of the underlying data prevents
individual towers and lines from being directly observed. This approach
culminated in the release of a medium-voltage transmission dataset [6], which,
while global in scope, only made PG estimates at a resolution of one estimate
per square kilometer, and reported performance at a scale no smaller
than $1km^{2}$; both of these are coarser than what is needed for many types
of electricity access planning.
A summary of the approach proposed by Development Seed is published online,
along with software and their output PG mapping
dataset (https://github.com/developmentseed/ml-hv-grid-pub). Their approach
relies on identifying high voltage transmission infrastructure in color
overhead imagery, making it more similar to our proposed approach. However,
because their approach uses a human in the loop, it will be costly to scale it
to large geographic areas, or utilize their approach for repeated PG mapping
over time.
Our work builds on the foundation laid by each of these approaches, seeking to
achieve a high resolution mapping of the PG (i.e., individual towers and
connections), and to do so with a fully-automated and scalable pipeline, akin
to that of Facebook’s approach. Furthermore, and in contrast to both existing
approaches, we map both transmission and distribution PG infrastructure,
providing more comprehensive support for electricity access planning. Our
experimental results also include comprehensive analysis of the proposed
GridTracer model; providing support for its overall design, its performance
under different deployment scenarios, and its sensitivity to hyperparameter
settings.
Object detection in overhead imagery. One of the first components of our
approach is to identify the PG towers (or poles); for this step, we employ
object detection. Object detection algorithms identify and localize the
objects of interest within a given image, typically by placing a bounding box
around the object. Recent state-of-the-art object detection algorithms employ
deep neural networks (DNNs). The studies [20, 21, 22] first use DNNs to
propose regions of interest (RoI), after which a second neural network is used
to (i) classify and (ii) refine the bounding boxes identified by the first-
stage network. This two-stage process generally yields higher accuracy on
benchmark problems [23]. Based upon this approach, [24, 25, 26] have developed
detectors specifically for the overhead imagery to address the specific
challenges in remote sensing including the various scales of the objects as
well as the arbitrary orientation of their bounding boxes [24]. However, to the
best of our knowledge there has been no previous work on object detection for
PG towers, perhaps limited by the lack of available datasets.
Segmentation of overhead imagery. The second step of GridTracer relies on a
deep learning model to automatically segment PG lines in the overhead imagery.
Although a variety of segmentation models have been employed on overhead
imagery, we focus on two models: the U-net model, and StackNetMTL. The U-net
model employs an encoder-decoder structure with skip connections between the
encoder and decoder to maintain fine-grained imagery details to support
accurate pixel-wise classifications [27, 28]. The U-Net [27] and subsequent
variants (e.g., Ternaus models [29]) have recently yielded top performance on
the Inria (2018) [16], DeepGlobe (2019) [8], and DSTL (2018) [30] benchmarks
for object segmentation in overhead imagery. Beyond this benchmark
success, PG lines are very small (sometimes just a single pixel wide), and
therefore we hypothesize that fine-grained features will be crucial for
detecting PG lines.
The StackNetMTL [31] is a recent model that achieved state-of-the-art
performance specifically for the segmentation of roads in overhead imagery.
Like PG infrastructure, roads exhibit weak local visual features (though to a
lesser extent than transmission lines), and considering the visual features
over a larger context can improve recognition [31]. The StackNetMTL, for
example, also trains a model to predict both the label of a pixel (i.e., road,
or not) and its likely road orientation. The joint learning process helps the
network to better learn the connectivity information, and can also be applied
to PG lines in a similar fashion. For these reasons, and its state-of-the-art
performance, we explore the StackNetMTL for PG line segmentation.
Graph extraction from overhead imagery. Once the PG towers are identified, we
can infer a graphical representation of the grid (i.e., a map), with PG towers
as nodes, and PG lines as edges; this is the final step of GridTracer. To our
knowledge, the most closely related problem to ours is road mapping (e.g.,
[31, 32, 33]). Historically, road mapping was treated largely as a segmentation
problem [33, 34, 35], however, in recent years road mapping has also been
formalized as a geospatial graph inference problem, in which road
intersections are graph nodes and any intervening roadway are treated as graph
edges. Two recent and well-cited models for road graph extraction have
recently been proposed: RoadTracer [9] and DeepRoadMapper [15]. Unfortunately,
neither of these methods is directly applicable to the PG mapping problem. The
most immediate challenge is the way in which these two methods create graph
nodes: RoadTracer places nodes at regular spatial intervals without any regard
for any local visual cues, and DeepRoadMapper assigns graph nodes at any
location where a road segment (identified by a segmentation model) has a
discontinuity. This leaves both methods without any clear mechanism to enforce
graph nodes to reside on PG towers, and therefore these methods are not
directly applicable to PG mapping (e.g., as baselines for comparison). For
these reasons, we develop GridTracer as the first approach that is applicable
to the PG mapping problem, and we propose it as a problem-specific baseline
approach, upon which future methods can be developed and compared.
## III The Power Grid Imagery Dataset
The PG imagery dataset consists of 264 $km^{2}$ of overhead imagery collected
over three distinct geographic regions: Arizona, USA (AZ); Kansas, USA (KS)
and New Zealand (NZ). Some basic statistics of the dataset are presented in
Table I, where the PG infrastructure statistics are derived from human
annotations. We chose these diverse geographic regions so that we (and future
users) could demonstrate the validity of any PG mapping approaches across
differing geographic settings.
Although our dataset includes both 0.15m and 0.3m resolution imagery, we
resampled all of the imagery to 0.3m for our experiments. This was done, in
part, to maintain consistency of testing results. A second reason was to
enhance the practical relevance of our results; while it is likely that
utilizing higher resolution imagery would yield greater PG mapping accuracy,
0.15m resolution imagery is only available via aerial photography, whereas
0.3m imagery is available from satellites (e.g., Worldview 2 and 3
satellites). Satellite-based imagery offers much greater geographic coverage
and imaging frequency compared to aerial photography, while also being less
expensive. Our aim here is to explore this problem for imagery that could
ultimately support applications across the globe, including areas currently
transitioning to electricity access. By employing 0.3m resolution imagery we
will better support these objectives.
TABLE I: Summary of power grid annotation imagery Region | Area ($km^{2}$) | # towers | # other towers | # lines
---|---|---|---|---
Arizona, USA | 108 | 595 | 23 | 503
Kansas, USA | 83 | 813 | 102 | 712
New Zealand | 73 | 1998 | 269 | 1729
### III-A Ground truth representation
There are two major classes of objects that we annotated in the imagery:
towers and lines. For the purpose of PG mapping, we need to precisely localize
each PG tower in the imagery, as well as provide information about its shape
and size to support the training of object detection models. Therefore each
tower was annotated with a bounding box (i.e., a rectangle), which is
parameterized by a vector $t=(r,c,h,w)$ where $(r,c)$ encodes the pixel
location of the top left corner of the box (the row and columns of the
corresponding pixel), and $(h,w)$ encodes the height and width (again, in
pixels) of the rectangle.
Let $T$ denote the set of all $t$-vectors in the ground truth of the dataset,
which specifies the location of each tower. Given $T$, the PG lines can be
represented very succinctly by observing that PG lines always form straight
line segments between the centroids of PG tower bounding boxes. Therefore, the
precise visual extent of a particular PG line can be accurately inferred
simply by knowing which two towers are connected by that line. Therefore, we
can succinctly represent the PG lines in the imagery by an adjacency matrix,
$A$, where $A_{ij}=1$ indicates that there is a connection (a power line)
between the $i^{th}$ and $j^{th}$ towers in $T$, and $A_{ij}=0$ otherwise.
Adjacency matrices are commonly used to succinctly represent graphs, and
therefore the PG is naturally conceptualized as a graph. However, the nodes in
the PG graph are each associated with a geospatial location, distinguishing
them from generic graphs in mathematics. Therefore, we refer to the PG as a
geospatial graph, which is characterized by a set of node locations as well as
an adjacency matrix.
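As a concrete illustration, the following sketch builds the ground-truth representation for a hypothetical tiny tile: a set of $t=(r,c,h,w)$ tower boxes $T$ and a symmetric adjacency matrix $A$ over tower indices (all coordinates are made up).

```python
import numpy as np

# Hypothetical tile with three towers connected in a chain.
T = np.array([[120, 340, 12, 12],   # t = (r, c, h, w) in pixels
              [118, 510, 11, 13],
              [115, 678, 12, 12]])

A = np.zeros((len(T), len(T)), dtype=int)
for i, j in [(0, 1), (1, 2)]:       # power lines between tower pairs
    A[i, j] = A[j, i] = 1           # undirected connection

# A PG line between towers i and j is the straight segment joining the
# bounding-box centroids (r + h/2, c + w/2).
centroids = T[:, :2] + T[:, 2:] / 2.0
```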
### III-B Annotation details
All ground truth labels were acquired via manual human annotation of the color
overhead imagery. The imagery was split into non-overlapping sub-images,
termed “tiles”, approximately $5k\times 5k$ pixels in size. Each tile was manually
inspected and annotated using a software tool especially designed for the
rapid annotation of overhead imagery. The tool allows users to choose between
two primary types of annotation: rectangles for towers (label “T”) and line
segments for lines (“L”). These annotations are then used to generate the $T$
and $A$ ground truth matrices discussed in Section III-A. One separate $T$ and
$A$ matrix was generated for each image tile, so that each tile could be
processed and scored independently.
Rectangular annotations were drawn so that they enclosed the entire physical
extent of each tower, excluding shadows. Examples of tower annotations are
provided in Fig. 4 as blue squares. In a small subset of cases the annotators
were uncertain about whether a tower is an electricity tower or some other
type of tower, e.g., a streetlight. The annotators were instructed in those
cases to assign an “Other Tower” category (“OT”). “OT” annotations are still
included in the ground truth matrices as graph nodes; however, the “OT”
indicator is included in the ground truth metadata so that users can decide
how to use these towers. In this work we use “OT” towers for training, since
many of these objects look similar to PG towers and the models may benefit
from the additional training imagery. However, we exclude “OT” labels from
evaluation because we want to measure performance only on true PG
infrastructure.
Annotators were instructed to draw line segments between any two towers that
were connected by a power line. In such cases, a line segment was drawn from
the center of the first tower’s rectangle to the center of the second tower’s
rectangle. Examples of line segment annotations are provided by the green
lines in Fig. 4.
It sometimes occurs that PG lines connect towers in two neighboring tiles.
This potentially creates substantial additional complexity when annotating and
processing the imagery. This can also substantially slow down the processing
of imagery with deep learning models because large image tiles may not fit
into the memory of graphics processing units. In order to circumvent these
potential problems, we created artificial “Edge Nodes” (EN) that were placed
at any location where a PG line crossed the boundary of an image tile. An
example EN node is shown in Fig. 4. This formalism allows us to maintain a
fully self-contained graph representation of each image tile so that each tile
can be processed separately. Similar to the “OT” nodes, the EN nodes were
included in all ground truth as graph nodes. However, because EN nodes do not
actually represent PG towers, we do not use “EN” nodes for training or
evaluating tower detection or PG graph inference output.
Each annotator was trained to recognize PG towers (including the OT and EN
designations) and lines using two especially challenging tiles of imagery that
were identified by our team. To ensure overall quality and consistency in the
dataset, each annotator’s training annotations were reviewed for accuracy
before that annotator was permitted to annotate more tiles. We also note that
annotators were asked to annotate any substations (“SS”) that were present in
the imagery. These substations were not included in our experiments, however
for completeness, we include their annotation details in the Appendix.
Figure 4: An example of annotation. The green lines represent power lines
and the blue boxes represent transmission towers. The white box represents an
edge node, which is not a physically real object; it is annotated for the
completeness of power line connectivity across tiles.
### III-C Dataset characteristics and analysis
In this section, we present qualitative and quantitative analyses of the
dataset to (i) illustrate the diversity of the dataset, as well as (ii)
provide useful information for algorithm development and analysis of our
experimental results. Basic statistics regarding the dataset are presented in
Table I. From these basic statistics we can see that there are substantial
differences between the three regions. For example, New Zealand has the
highest density (per unit area) of PG infrastructure - by a large margin -
while Kansas has a greater density than Arizona. New Zealand also has the
greatest number of line connections per tower.
Figure 5: Top Row: Stacked histogram of the tower (‘T’) sizes. Dark color bars
mean towers only connected to two other towers. Light color bars mean towers
connected to three or more towers. Middle Row: Distribution of the power line
lengths between two connected towers. Bottom Row: Number of unique power line
angles categorized in 20 groups. More unique angles mean a more complex
connection pattern. Note the scale of the y axis is not the same for New
Zealand in the first and third rows.
Fig. 5 presents several other useful statistical features associated with the
dataset, stratified by location. We briefly summarize some of the main
observations. The first row indicates that New Zealand has substantially
smaller towers than the US locations on average. All three plots in the top
row also show that the towers in all three regions are usually connected to
two towers, with a small number of towers having three or more connections,
especially in Arizona. The second row indicates that there is a wide, but
relatively similar, distribution of line lengths across all three locations.
This is a useful distribution for limiting the potential towers connected to a
given tower (e.g., we need not consider towers further than 100 meters away).
Furthermore, we observe that New Zealand has substantially more power lines
than the other two regions. Finally, in row three, we create a crude
measure for the complexity of the PG (per unit area); we count the number of
unique line angles within each tile, using 18 discrete possible angle bins. A
histogram of the number of unique line angles, per tile, is created for each
location. The results indicate that the PG in New Zealand is substantially
more complex, on average, than the others, since the histogram suggests that
power lines in New Zealand have many different orientations. This analysis
provides useful information: for example, the two US regions are relatively
similar, except that Kansas has more power line connections and a slightly
more complex grid pattern, while the New Zealand region has a substantially
more complex PG than the other two regions.
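To make this complexity measure concrete, the sketch below shows one way to
count the unique line-angle bins in a tile. It is a minimal illustration
assuming the 18 equal-width bins described above; the function and variable
names are our own and not taken from the dataset release.

```python
import numpy as np

def unique_angle_bins(segments, n_bins=18):
    """Count the unique line-angle bins among a tile's power line segments.

    segments: iterable of ((x1, y1), (x2, y2)) tower-centroid pairs.
    Lines are undirected, so angles are folded into [0, 180) degrees
    before quantization into n_bins equal-width bins.
    """
    bins = set()
    for (x1, y1), (x2, y2) in segments:
        angle = np.degrees(np.arctan2(y2 - y1, x2 - x1)) % 180.0
        bins.add(int(angle // (180.0 / n_bins)))
    return len(bins)

# Two parallel lines plus one perpendicular line -> 2 unique bins.
tile = [((0, 0), (100, 0)), ((0, 10), (100, 10)), ((50, 0), (50, 100))]
print(unique_angle_bins(tile))  # 2
```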
### III-D Annotation Quality Assessment
To assess the quality of our annotations, we chose approximately 10% of the
image tiles randomly (distributed equally across the three regions) and
produced two independent sets of annotations for each tile: one set made by
each of two unique annotators. We then computed the agreement between the
annotations made by the two annotators. Evaluating annotator agreement is a
common strategy for assessing the quality of annotations of machine learning
training data [36, 23]. Two towers were declared as matches if their centroids
were within $3m$ of one another; two lines were declared as matches if both
ends of each line segment matched with both ends of another line segment,
within $3m$ in each case.
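As a concrete illustration, the sketch below greedily matches tower centroids
from two annotators under the $3m$ tolerance. The exact matching procedure
behind the reported agreement numbers is not spelled out above, so this is one
reasonable implementation rather than a definitive one.

```python
import numpy as np

def match_towers(centroids_a, centroids_b, tol_m=3.0):
    """Greedily match tower centroids from two annotators.

    centroids_a: (N, 2) array-like of (x, y) positions in metres.
    centroids_b: (M, 2) array-like of (x, y) positions in metres.
    Returns the number of matched pairs; each tower matches at most once.
    """
    a = np.asarray(centroids_a, dtype=float)
    b = np.asarray(centroids_b, dtype=float)
    used_b, matches = set(), 0
    for pa in a:
        d = np.linalg.norm(b - pa, axis=1)  # distances from pa to all of b
        for j in np.argsort(d):
            if d[j] > tol_m:
                break  # all remaining candidates are even farther away
            if j not in used_b:
                used_b.add(j)
                matches += 1
                break
    return matches

a = [(0, 0), (10, 0)]
b = [(1, 0), (10, 2), (50, 50)]
print(match_towers(a, b))  # 2; agreement = 2 * matches / (len(a) + len(b))
```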
Fig. 6(a) summarizes the agreement between the annotators. Tower annotations
exhibit a 70-90% agreement across the three locations. Line annotations
exhibit slightly lower agreement for each location because line agreement
depends upon first correctly identifying the tower locations. Overall, the
results show strong agreement among the annotations, suggesting that
consistent PG mapping results may be feasible, given a sufficiently-
sophisticated recognition model (e.g., approximated by a human analyst in this
case). Similarly, these results also suggest our annotations are suitable for
measuring the recognition accuracy of automatic models. For example, we expect
that a sufficiently-sophisticated automatic recognition model should be
capable of achieving a 70-90% agreement with our annotations. By contrast, if
our human annotators (each an approximation of a sophisticated recognition
model) had demonstrated very little agreement with one another, it would be
difficult to distinguish between poor detectors and good detectors,
suggesting automatic PG mapping may be infeasible.
Figure 6: Quantitative measure of agreement between annotators. (a) Percentage
of tower centroids annotated within a $3m$ range by different annotators.
(b) Histogram of centroid distances between different annotators.
## IV GridTracer: A Baseline Model for Power Grid Mapping in Overhead Imagery
In this section we present our baseline model GridTracer for the PG inference
problem. We break down the PG graph inference problem into three sub-problems:
tower detection, line segmentation and graph inference. The processing
pipeline of GridTracer is illustrated in Fig. 7.
Figure 7: An illustration of our baseline grid mapping algorithm: GridTracer.
(a) We use an object detection model to infer the locations of PG towers,
$\hat{T}$. (b) We use a segmentation model to infer an image, $\hat{C}$, where
each pixel intensity indicates the probability that a line exists at that
location. (c) Using $\hat{T}$ and $\hat{C}$ we infer the connections between
each tower, which is given by an adjacency matrix, $\hat{A}$. The final output
of the model is a geospatial graph of the PG characterized by $\hat{A}$ and
its associated PG tower locations, $\hat{T}$.
### IV-A Tower Detection
The goal of tower detection is to predict the centroid of each PG tower.
However, our ground truth annotations provide richer information about each
tower – full rectangles. This makes it possible to naturally apply and train
state-of-the-art object detection models for tower detection. Due to the
challenging nature of tower detection, we focus on maximum accuracy and employ
a two-stage (as opposed to one-stage) object detector: Faster R-CNN [21]. We
trained Faster R-CNN with Inception V2 [37] on our proposed PG dataset to
detect towers. In section VII-B we provide results from an ablation study
using different backbone choices and find that the Inception V2 generally
yields the best results. Due to the relatively small size of PG towers in
imagery compared to other objects to which object detectors have been applied
[23, 38], we had to significantly reduce the size of the bounding box anchors
to achieve good results.
At inference time, we first extract the raw imagery into $500\times 500$ sub-
images. We apply the tower detector on those sub-images and only keep the
boxes with a confidence higher than 0.5. Since we are only interested in the
location instead of the size of the towers, the centers of the bounding boxes
were retained as predictions. We use Non-Maximum Suppression (NMS) [39] to
remove redundant predictions for nearby bounding boxes. The result of this
process is a list of estimated PG tower centroid locations, termed $\hat{T}$,
illustrated in Fig. 7.
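A sketch of this post-processing is given below. For brevity it uses standard
(hard) NMS, whereas reference [39] describes the Soft-NMS variant; the
function names are our own illustration rather than the released
implementation.

```python
import numpy as np

def nms(boxes, scores, iou_thr=0.5):
    """Standard (hard) non-maximum suppression; returns indices of kept boxes.

    boxes: (N, 4) array of [x1, y1, x2, y2]; scores: (N,) confidences.
    """
    order = np.argsort(scores)[::-1]  # highest-confidence boxes first
    keep = []
    while order.size > 0:
        i = order[0]
        keep.append(i)
        if order.size == 1:
            break
        rest = order[1:]
        xx1 = np.maximum(boxes[i, 0], boxes[rest, 0])
        yy1 = np.maximum(boxes[i, 1], boxes[rest, 1])
        xx2 = np.minimum(boxes[i, 2], boxes[rest, 2])
        yy2 = np.minimum(boxes[i, 3], boxes[rest, 3])
        inter = np.clip(xx2 - xx1, 0, None) * np.clip(yy2 - yy1, 0, None)
        area_i = (boxes[i, 2] - boxes[i, 0]) * (boxes[i, 3] - boxes[i, 1])
        area_r = (boxes[rest, 2] - boxes[rest, 0]) * (boxes[rest, 3] - boxes[rest, 1])
        iou = inter / (area_i + area_r - inter)
        order = rest[iou <= iou_thr]  # drop boxes overlapping the kept one
    return keep

def boxes_to_centroids(boxes, scores, conf_thr=0.5):
    """Keep confident boxes, suppress duplicates, return tower centroids T_hat."""
    boxes, scores = np.asarray(boxes, float), np.asarray(scores, float)
    m = scores > conf_thr
    boxes, scores = boxes[m], scores[m]
    kept = nms(boxes, scores)
    return [((b[0] + b[2]) / 2, (b[1] + b[3]) / 2) for b in boxes[kept]]
```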
### IV-B Line segmentation
As described in section I, the PG lines are conceptualized as edges in the PG
graph. As a result, the precise location (e.g., pixel-wise segmentation) of
the PG lines are unnecessary for the final output of GridTracer, however,
having such information is useful to determine whether a line exists. As a
result, as an intermediate step, GridTracer employs a state-of-the-art
segmentation model to infer all locations throughout the imagery that may
indicate PG lines. The output of this model will then be utilized in the next
stage of processing (graph inference) to infer which towers are most likely to
be connected by PG lines.
To extract a segmentation map of PG lines, we employ the StackNetMTL, which
has recently achieved success for road segmentation [31]. As discussed in Section II,
the StackNetMTL includes a greater visual context when inferring target
labels, which we hypothesize may also benefit PG line segmentation. We provide
ablation studies in section VII-B indicating that this is indeed the case.
In order to train our segmentation models, we must create ground truth imagery
that indicates which pixels are PG lines (pixel value of one), and which are
not (pixel value of zero). To do this, we use our manual annotations to draw
straight lines between the centroids of each pair of connected towers. Each
line is 30-pixels-wide and all pixels in the line are set to a value of one.
This width is chosen to ensure that the ground truth labels encompass the real
power lines, whose exact locations in the imagery are unknown. Once trained,
we apply StackNetMTL to produce a map of pixel-wise PG line probabilities,
termed $\hat{C}$, that is illustrated in Fig. 7.
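A minimal sketch of this ground-truth rasterization step is shown below,
assuming OpenCV's line drawing. The actual data preparation pipeline may
differ in details such as coordinate rounding and tile-boundary handling.

```python
import numpy as np
import cv2  # opencv-python

def rasterize_lines(shape, centroids, adjacency, width=30):
    """Render a binary ground-truth mask for PG line segmentation.

    shape: (H, W) of the image tile.
    centroids: list of (x, y) tower centroids in pixel coordinates.
    adjacency: (N, N) 0/1 matrix; adjacency[i][j] = 1 means towers i and j
    are connected. Each connection is drawn as a straight line of ones,
    width pixels wide, to encompass the true (unknown) line location.
    """
    mask = np.zeros(shape, dtype=np.uint8)
    n = len(centroids)
    for i in range(n):
        for j in range(i + 1, n):
            if adjacency[i][j]:
                p1 = tuple(int(round(c)) for c in centroids[i])
                p2 = tuple(int(round(c)) for c in centroids[j])
                cv2.line(mask, p1, p2, color=1, thickness=width)
    return mask
```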
### IV-C Graph inference
The goal of this step is to infer an adjacency matrix, $\hat{A}$, where
$\hat{A}_{ij}=1$ indicates that there is a connection between the $i^{th}$ and
$j^{th}$ towers in $\hat{T}$, and $\hat{A}_{ij}=0$ otherwise. To infer these
connections, we will rely upon the output of the PG line segmentation model,
$\hat{C}$. Each pixel $k$ in $\hat{C}$ indicates the relative likelihood that
a power line exists at that pixel. GridTracer will label $\hat{A}_{ij}=1$ if and
only if two conditions are met: (i) the distance between tower $i$ and $j$ is
less than some user-defined threshold, $d$; and (ii) $S_{ij}\geq\gamma$, where
$\gamma$ is a user-defined threshold. $S_{ij}$ is the estimated likelihood
that a connection exists between tower $i$ and $j$, based upon integrating the
pixel-wise output of a power line segmentation model (StackNetMTL), along the
path between the two towers, given by
$S_{ij}=\frac{1}{|P_{ij}|}\sum_{k\in P_{ij}}\hat{C}_{k}.$ (1)
Here $P_{ij}$ is the set of pixels in the path between the towers: a
straight-line segment of width $w$ connecting the two towers. This simple
operation allows us to integrate visual cues well beyond the field-of-view of
the segmentation model. Once $S_{ij}$ is obtained, we retain the connection
between towers $i$ and $j$ if $S_{ij}$ exceeds the threshold $\gamma$ and the
distance between the two towers is smaller than $d$. In practice, we only
consider connections between towers that are within $d$ of one another,
dramatically limiting the number of candidate connections we need to consider.
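The sketch below illustrates Eq. (1) together with the two-condition
connection rule. It assumes tower coordinates and the distance threshold $d$
share the same unit, and the names are our own illustration rather than the
GridTracer implementation.

```python
import numpy as np
import cv2

def connection_score(C, t_i, t_j, w=9):
    """Eq. (1): average line probability over the w-pixel-wide path P_ij."""
    path = np.zeros(C.shape, dtype=np.uint8)
    cv2.line(path, tuple(map(int, t_i)), tuple(map(int, t_j)), 1, thickness=w)
    pixels = C[path.astype(bool)]
    return float(pixels.mean()) if pixels.size else 0.0

def infer_adjacency(C, towers, gamma=0.2, d=600.0, w=9):
    """Build A_hat: connect towers i, j iff distance < d and S_ij >= gamma.

    d is assumed to be in the same unit as the tower coordinates.
    """
    n = len(towers)
    A = np.zeros((n, n), dtype=int)
    for i in range(n):
        for j in range(i + 1, n):
            dist = np.hypot(towers[i][0] - towers[j][0],
                            towers[i][1] - towers[j][1])
            if dist < d and connection_score(C, towers[i], towers[j], w) >= gamma:
                A[i, j] = A[j, i] = 1
    return A
```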
As illustrated in Fig. 7, the final output of GridTracer is a geospatial graph
of the PG characterized by $\hat{A}$ and its associated PG tower locations,
$\hat{T}$.
## V Experimental design
In this section we describe the major experimental design details. A major
goal of this paper is to establish a benchmark for PG mapping, and therefore
we prescribe a proposed data handling scheme for training and evaluating PG
mapping models. We also propose a set of scoring metrics for evaluating
models. Finally, we describe the implementation details of GridTracer.
Figure 8: Three different data handling schemes. (a): Train one model on all
regions and test on all regions. (b): Train a separate model for each region
and evaluate on the corresponding region. (c): Train model on all but one
region and evaluate on the held-out region.
### V-A Data handling for model training
We explore three data handling schemes, as illustrated in Fig. 8. In all data
handling schemes, we use the same subset of the imagery from each location for
testing (first 20% of imagery of each city) so that, regardless of the data
handling scheme, the exact same testing dataset is always employed. Fig. 8(a)
is the “conventional” data-handling scheme, in which training imagery is
available from all testing locations, and models are trained on all available
training imagery. This approach is labeled “conventional” because it is
commonly employed in overhead imagery recognition benchmark problems
(e.g., DeepGlobe [8], DSTL [30]). For this reason we prescribe this as the
primary data handling scheme for our PG mapping benchmark dataset. Our main
results in section VI are obtained using this scheme. Fig. 8(b,c) presents two
additional data handling schemes that we utilize in Section VII for further
analysis. We describe the motivation for these designs in Section VII.
### V-B Scoring: tower detection
As discussed in the introduction, we split the PG mapping problem into two
sub-problems: tower detection and tower connection (i.e. line
interconnection).
For tower detection, we adopt the mean average precision (mAP) metric because
it is widely-used for object detection tasks (e.g., [38, 23]). mAP is
computed by first assigning a label to each predicted box,
$\hat{b}\in\hat{B}$, indicating whether it is a correct detection, or a false
detection. This label is based upon whether a given predicted box achieves a
sufficiently high IoU with at least one ground truth box, $b\in B$.
Mathematically, we have
$l_{i}=1[\max_{b_{j}\in B}\;IoU(\hat{b}_{i},b_{j})>\tau]$ (2)
where $l_{i}$ is the label assigned to the $i^{th}$ predicted bounding box,
and $\tau$ is a user-defined threshold. In this work we utilize an alternative
matching criterion that depends instead on the distance between the
centroids of the predicted and ground-truth boxes. Mathematically, we have
$l_{i}=1[\min_{b_{j}\in B}\;d(\hat{b}_{i},b_{j})<\tau]$ (3)
where $d$ is the distance between the centroids of the bounding boxes. We term
this modified metric distance-based mAP ($DmAP$). We also utilize Eq. 3 for
our PG graph scoring metric (discussed next in Section V-C), since it also
requires linking predicted and ground truth towers.
We rely upon eq. 3 for linking because, in PG mapping, we are primarily
concerned with the accuracy of the locations (e.g., centroids) of the
predicted towers rather than their precise shape and size. Furthermore, we
find that our $mAP$ scores for PG tower detection are often very low, even
while our $DmAP$ scores are high: the IoU criterion may flag a prediction as
poor even when the location of the predicted tower is accurate. We
present results in Section VII indicating that this is indeed the case for our
benchmark dataset and our models. Fig. 9 presents a typical example of such a
scenario.
As a result of this problem we utilize the linking criterion in eq. 3 for all
of our benchmark scoring metrics, unless otherwise noted. When using eq. 3, we
use $\tau=3m$ because it reflects the variability of human annotations made
over the same towers. As illustrated in Fig. 6 (b), centroids of human
annotations fall within $3m$ of each other roughly 99% of the time.
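As an illustration, the centroid-based labelling of eq. 3 with $\tau=3m$ can
be computed as below. This simplified sketch allows several predictions to
match the same ground-truth tower, which a full $DmAP$ computation would
normally disallow.

```python
import numpy as np

def centroid_labels(pred_centroids, gt_centroids, tau=3.0):
    """Eq. (3): label each prediction 1 if some ground-truth centroid
    lies within tau (metres) of it, else 0."""
    pred = np.asarray(pred_centroids, float)
    gt = np.asarray(gt_centroids, float)
    labels = []
    for p in pred:
        d_min = np.min(np.linalg.norm(gt - p, axis=1)) if len(gt) else np.inf
        labels.append(int(d_min < tau))
    return labels

print(centroid_labels([(0, 0), (10, 10)], [(1, 1), (40, 40)]))  # [1, 0]
```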
Figure 9: A tower detector prediction example, where the blue box is the
annotated ground truth and the green box is the prediction, with its
confidence score at the top left.
### V-C Scoring: power grid inference
For the evaluation of tower connections, our goal is to reward true power line
connections between real towers, and penalize any predictions that are
incorrect (including those between falsely detected towers). There is no
previously defined metric for assessing the accuracy of PG network
predictions, and therefore we propose one here to reflect our goals. One
existing metric that partially captures these goals is the SGEN+ metric
proposed in [40] to score predictions of graphical structures in scene
understanding problems. This metric includes a “recall” measure, indicating
the proportion of true graph connections (i.e., power lines) identified by a
model, but no “precision” measure. This is because in [40] the authors study
relationships between object pairs in an image, and the annotators cannot use
language to describe every relationship between all object pairs; a
“precision” metric is therefore omitted so that models are not penalized for
predicting relationships that the annotators did not describe. In the PG
mapping task, however, the connection relationships are well defined by the
power lines, so we propose the scoring metric as:
$R=\frac{C(T)+C(L)}{N_{truth}},\quad P=\frac{C(T)+C(L)}{N_{pred}},\quad F1=2\times\frac{R\times P}{R+P}.$
Here, $R$ and $P$ represent recall and precision, respectively, $C(\cdot)$ is
the counting operation, and $T$ and $L$ stand for correctly recognized nodes
(towers) and edges (lines), respectively. $N_{pred}$ is the total number of
predicted objects and relationships, and $N_{truth}$ is the total number of
objects and relationships given by the ground truth. As discussed in Section
V-B, we use the criterion in equation 3 to determine when detected towers
match ground truth towers.
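Given the counts produced by that linking step, the metric itself reduces to a
few lines, as in the sketch below; the counting of correct towers and lines is
assumed to have been done already.

```python
def graph_scores(c_t, c_l, n_pred, n_truth):
    """Recall, precision, and F1 for PG graph inference.

    c_t: number of correctly recognised towers (nodes)
    c_l: number of correctly recognised lines (edges)
    n_pred: total predicted towers + lines
    n_truth: total ground-truth towers + lines
    """
    r = (c_t + c_l) / n_truth
    p = (c_t + c_l) / n_pred
    f1 = 2 * r * p / (r + p) if (r + p) else 0.0
    return r, p, f1

# Example: 80 of 100 towers and 60 of 100 lines correct, 190 predictions total.
print(graph_scores(80, 60, n_pred=190, n_truth=200))  # (0.70, ~0.737, ~0.718)
```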
### V-D Implementation details of GridTracer
There are two deep learning models in GridTracer: a tower detector and a tower
connector (for identifying power line connections). We train each of these two
components separately. We train the Faster R-CNN tower detectors (one for each
set of training data) and evaluate them on $500\times 500$ uniformly extracted
sub-images from our raw input imagery. We use anchors with area of
{$10^{2}$,$25^{2}$,$50^{2}$,$100^{2}$,$200^{2}$} pixels, and aspect ratio of
{0.5,1.0,2.0}. These anchors are much smaller than those of the original
Faster R-CNN [21], inspired by similar work with overhead imagery in [41].
The first two anchor
sizes were chosen specifically to capture the small-scale towers especially in
New Zealand. During training, we augmented the training data using both random
horizontal and vertical flips as well as 90, 180, and 270 degree rotations.
For all of the experiments, we train the models with a batch size of 5 for
50,000 iterations using the aforementioned training image partition method. We
adopted a manual learning rate schedule which uses a learning rate of
$3e^{-3}$ for the first 10k steps and drops it by a factor of 0.1 after every
10k steps.
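For concreteness, the schedule and anchor configuration can be written as
below. This reflects our reading of the description above (the factor-of-0.1
drop applied at each 10k-step boundary) and is a sketch, not the exact
training code.

```python
def learning_rate(step, base_lr=3e-3, drop=0.1, every=10_000):
    """Manual step schedule: base_lr for the first 10k steps,
    then multiplied by `drop` after every further 10k steps."""
    return base_lr * (drop ** (step // every))

assert learning_rate(0) == 3e-3
assert abs(learning_rate(10_000) - 3e-4) < 1e-12
assert abs(learning_rate(25_000) - 3e-5) < 1e-12

# Anchor configuration used for the Faster R-CNN tower detector.
ANCHOR_AREAS = [10**2, 25**2, 50**2, 100**2, 200**2]  # pixels^2
ASPECT_RATIOS = [0.5, 1.0, 2.0]
```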
At inference time, after the locations of the towers are predicted, we run
GridTracer's graph inference on the predicted tower locations to predict the
PG. For the hyper-parameters, we set $\gamma=0.2$, $d=600m$, $w=9$. We present
ablation results for selecting these parameters in Section VII-D.
### V-E Human-level performance estimation
In order to aid our analysis of the PG mapping problem, and GridTracer, we
estimate the level of performance that a human annotator may achieve on our
dataset. Human-level performance is often used as a benchmark for visual
recognition tasks [42, 43, 44, 45] because humans often (though not always)
achieve strong performance on visual recognition tasks, and furthermore,
a sufficiently-sophisticated automatic approach should be able to achieve the
same performance as a human. Therefore if an automatic approach does not reach
human-level performance, it implies that the recognition model may be making
incorrect or incomplete assumptions, and further investment might yield
greater performance. Similarly, if human performance is poor on a given task,
it may indicate that a visual task is difficult or even infeasible. We will
use human-level performance to ensure the overall feasibility of our PG
mapping problem, and assess the relative performance of GridTracer.
As discussed in Section V, to estimate human-level performance we randomly
sampled 20% of the imagery from each of our three geographic regions, and had
them annotated by a second group of human annotators. Then we treated these
annotations as predictions, and assessed their accuracy using the same tower
detection and graph inference metrics that we apply to GridTracer. In Section
VI we will compare GridTracer to our human annotators on the same 20% subset
of imagery.
## VI PG mapping benchmark results
In this section, we present the performance of GridTracer using the
“conventional” data handling scheme illustrated in Fig. 8(a). As discussed in
Section V, this scheme has been employed in numerous recent benchmarks for
recognition in overhead imagery (e.g., DeepGlobe [8], DSTL [30]). Although
GridTracer is composed of three steps, here we present the two metrics of our
PG mapping benchmark: (i) PG tower detection, and (ii) PG graph inference. We
provide line segmentation results, along with other analyses, in Section VII.
The results here with GridTracer represent the first results using an
automatic recognition algorithm for both transmission and distribution mapping
in overhead imagery, and therefore they represent a baseline upon which other
approaches can build. However, because this is a new problem, it is difficult
to evaluate (i) the relative success of GridTracer and (ii) how much better we
could expect to perform with further research. To address these important
questions we estimate human-level performance for this problem, and compare it
with GridTracer on the same 20% subset of our testing dataset. Please see
Section V for methodological details. The results of GridTracer and the human-
level performance will be presented on the 20% subset of our testing dataset
below, in addition to GridTracer’s performance on the full testing dataset.
### VI-A Tower detection
The PG tower detection results for GridTracer are presented in Table II. The
results indicate that the DmAP score is roughly 0.61 on average across our
three testing regions, meaning that almost one out of every two detected
towers is false; however, the level of performance varies substantially across
regions. In Arizona, for example, only one in four detected towers is false,
while it is closer to one in two in the other regions. We hypothesize that
this difference may be caused by differences in the background. Tower shadows,
which are usually a critical feature for detectors, often blend with the
nearby bushes in Arizona, as shown in the top row of Fig. 10. This makes the
shadows more difficult to recognize, and therefore leads to a decrease in
detection performance.
To provide a reference point for judging the results of GridTracer, we can
review the results in rows two and three, comparing the performance of
GridTracer and humans over the same 20% of our testing data. First, we note
that the performance of GridTracer in rows one and three is similar, implying
that our 20% testing subset is relatively representative of the full testing
dataset. With this in mind, human annotators achieve a $DmAP$ of 0.86 on
average across the three regions, indicating that nearly nine of ten predicted
towers will be correct. Although the level of performance needed to support
energy-related decision-making and research will vary, these results provide
both energy and computer vision researchers with an estimate of the accuracy
that should be achievable with a baseline recognition model, given sufficient
development.
Furthermore, these results help us understand the degree to which GridTracer
can be improved, and possibly how it can be improved. Although GridTracer
relies upon a state-of-the-art object detection model, human performance is
substantially better, and more consistent across each of the geographic
regions, suggesting significant improvements can be made. As discussed in
Section I, modern DNNs rely primarily upon local visual cues to detect
objects. The substantial performance advantage of humans suggests that they
are likely to be using additional cues to identify towers. We
hypothesize that such cues may include the integration of non-local visual
features, or exploitation of the known topology/structure of the PG. For
example, it may be possible to infer the presence of a PG tower if its
inclusion results in a more probable PG topology, even if there are limited
visual cues for the tower itself.
TABLE II: Tower detection performance for GridTracer and human annotators using the $DmAP$ metric (higher better). Parentheses indicate the subset of testing data over which each method was scored. We train GridTracer using the “conventional” data handling scheme in Fig. 8 (a)
 | Arizona | Kansas | New Zealand | Average
---|---|---|---|---
GridTracer (100%) | 0.73 | 0.54 | 0.55 | 0.61
Human (20%) | 0.92 | 0.88 | 0.79 | 0.86
GridTracer (20%) | 0.72 | 0.55 | 0.49 | 0.59
### VI-B PG graph inference
The PG graph inference results for GridTracer are presented in Table III. We
apply a similar analysis here to the one before in Section VI-A. GridTracer’s
average F1 score of 0.63 indicates that approximately 63% of the underlying PG
(i.e., towers plus connections) is identified, and that 63% of the inferred PG
infrastructure is correct. This is roughly similar to GridTracer’s performance
for tower detection alone, although caution must be taken when comparing the
results here to those in Table II due to differences in the scoring metrics.
Similar to the results for PG tower detection, we find that graph inference
scores for GridTracer on the 20% subset are similar to those on the full
dataset, indicating that the testing subset is representative of the full
dataset. We also again find that human annotators achieve substantially better
performance than GridTracer, indicating that further improvements can be made
to the PG graph inference approach. However, it is notable that human
performance is lower on the graph inference problem, indicating that (given
the $0.3m$ resolution of our imagery), PG graph inference may have a lower
achievable performance than tower detection. As with tower detection, the
level of accuracy in these output data that is needed to support energy-
related decision-making and research will vary. These results may therefore
provide valuable insights regarding the potential utility of PG mapping in
overhead imagery for different applications.
Figure 10: PG inference results of GridTracer. Ground truth PG towers and lines are presented in green, while predicted PG towers and lines are presented in blue.
TABLE III: PG graph inference performance for GridTracer and human annotators using the F1 graph metric (higher better). Parentheses indicate the subset of testing data over which each method was scored. We train GridTracer using the “conventional” data handling scheme in Fig. 8 (a)
 | Arizona | Kansas | New Zealand | Average
---|---|---|---|---
GridTracer (100% data) | 0.68 | 0.59 | 0.61 | 0.63
Human (20% data) | 0.75 | 0.88 | 0.68 | 0.77
GridTracer (20% data) | 0.73 | 0.59 | 0.61 | 0.64
Fig. 10 presents a visualization of the PG inferred by GridTracer compared to
the ground truth in each of our three geographic test regions. These results
illustrate the various types of errors that are made by GridTracer, such as
undetected towers, which also necessarily result in one or more (usually more)
undetected PG lines. Although GridTracer finds the majority of towers and
connections, the effects of these errors is that the inferred PG graph is not
consistent with common PG topology: e.g., towers are connected to two (or
more) PG lines, and there are no disjoint subgraphs arising from a single
missing connection or tower. By contrast, human annotations almost always fit
these real-world constraints and we hypothesize that humans leverage apriori
knowledge about the PG to infer the presence of PG infrastructure even when
weak or non-existent visual cues are present. Given its current design,
GridTracer does not exploit, or impose upon its predictions, most of these
topological cues, and we believe this is an important direction for future
work.
## VII Additional Analysis
In this section we present additional experimental results and analysis that
provide further insights into the performance and design of GridTracer.
### VII-A Object detection scoring metrics
Recall that in Section V-B we argued that the $mAP$-based metric tends to disagree
with our $DmAP$ metric, justifying our adoption of the $DmAP$ scoring metric
for our PG mapping benchmark. In this sub-section we provide experimental
results supporting this claim. In Table IV we compare the $DmAP$ metric to two
commonly-used $mAP$-based metrics on our benchmark testing dataset:
$mAP_{0.5}$ and $mAP_{0.75}$. The subscript of each $mAP$ score denotes the
value of $\tau$ in eq. 2. We see that the $DmAP$ score is higher (indicating
better performance) than both of the $mAP$-based scores. This suggests
(as argued in Section V-B) that GridTracer’s predicted bounding boxes are
often accurately located, satisfying the centroid-based $DmAP$ criterion (the
metric of our primary concern in PG mapping), even when their shape and size
do not satisfy the IoU-based $mAP$ criterion.
TABLE IV: Performance of the tower detection component of GridTracer under different scoring metrics (higher better), with the “conventional” data handling scheme in Fig. 8 (a)
Metric | Arizona | Kansas | New Zealand | Average
---|---|---|---|---
$DmAP$ | 0.73 | 0.54 | 0.55 | 0.61
$mAP_{0.75}$ | 0.09 | 0.02 | 0.08 | 0.06
$mAP_{0.5}$ | 0.60 | 0.44 | 0.51 | 0.52
### VII-B Object detector encoder comparisons
In this section we report the performance of three pre-trained backbone
networks, or ”encoders”, that we considered for inclusion in GridTracer’s
tower detection model. It has been shown in several fields that large pre-
trained encoders can offer performance advantages, including in overhead
imagery [46, 29]. Here we consider three widely-used encoders, in order of
their size: ResNet50 [47], ResNet101 [47], and InceptionV2 [37]. In Table V we
compare the performance of faster R-CNN tower detector models, each using a
different encoder, on our PG mapping benchmark task for tower detection. Among
them, ResNet50 yielded the worst performance, while the two larger backbones
achieved significantly better results. This result is consistent
with other recent findings [46, 29], and suggests that a large backbone is
beneficial for extracting visual features for PG mapping.
TABLE V: Tower detection performance in $DmAP$ (higher better) of different backbones with the “conventional” data handling scheme in Fig. 8 (a)
Backbone | Arizona | Kansas | New Zealand | Average
---|---|---|---|---
ResNet50 | 0.45 | 0.48 | 0.54 | 0.49
ResNet101 | 0.73 | 0.53 | 0.59 | 0.62
Inception V2 | 0.73 | 0.54 | 0.55 | 0.61
### VII-C PG line segmentation performance and model comparison
In this section we report the performance of two segmentation models that we
considered for inclusion in GridTracer. The first is a UNet model with a
ResNet50 [47] encoder that has been pretrained on ImageNet [48]. Models of
this form have recently achieved state-of-the-art performance for segmentation
of overhead imagery [46, 29]. We also considered the StackNetMTL model
(discussed in Section II), which recently achieved state-of-the-art
performance on road segmentation. Due to the similarities between our task and
road mapping, we hypothesized that StackNetMTL may yield better results.
To assess these two segmentation models, we employed the intersection-over-
union (IoU) metric since it is widely used in recent segmentation benchmark
problems (e.g., [30, 8]). The results of this experiment are presented below
in Table VI. As we see, StackNetMTL provides substantially and consistently
superior performance compared to the UNet. As a result of this superior
performance, we adopted the StackNetMTL in GridTracer.
TABLE VI: Line segmentation performance in $IoU$ (higher better) of different segmentation models with the data handling scheme in Fig. 8 (a)
Model | Arizona | Kansas | New Zealand | Average
---|---|---|---|---
UNet | 50.02 | 33.85 | 38.38 | 40.75
StackNetMTL | 54.43 | 40.04 | 45.62 | 46.70
### VII-D Robustness to graph inference hyperparameter settings
The graph inference stage of the GridTracer algorithm has three
hyperparameters: $\gamma$, $d$ and $w$. In Table VII we show GridTracer’s
benchmark
performance when varying the value of each of these hyperparameters. We find
that $d=600m$ and $w=9$ yields the best performance among the settings we
considered, but we also find that GridTracer is relatively robust with respect
to their settings. We find that performance is somewhat more sensitive to
$\gamma$; if it is set too small then we obtain large numbers of false PG line
connections, reducing our performance. However, performance appears to be
insensitive once we use larger values, achieving the best performance when
$\gamma=0.2$, and dropping only slightly if we set it higher. Overall we find
the model is relatively insensitive to these hyperparameter settings.
TABLE VII: Graph inference performance of GridTracer using the F1 graph metric (higher better) with the data handling scheme in Fig. 8 (a), varying each hyperparameter in turn
$\gamma$ | Arizona | Kansas | New Zealand | Average
---|---|---|---|---
$0.1$ | 0.62 | 0.51 | 0.54 | 0.56
$0.2$ | 0.68 | 0.59 | 0.61 | 0.63
$0.3$ | 0.69 | 0.56 | 0.59 | 0.61
$d$ | Arizona | Kansas | New Zealand | Average
$200m$ | 0.64 | 0.55 | 0.59 | 0.59
$400m$ | 0.65 | 0.58 | 0.61 | 0.61
$600m$ | 0.68 | 0.59 | 0.61 | 0.63
$800m$ | 0.68 | 0.60 | 0.60 | 0.63
$1000m$ | 0.66 | 0.59 | 0.61 | 0.62
$w$ | Arizona | Kansas | New Zealand | Average
$7$ | 0.68 | 0.59 | 0.60 | 0.62
$9$ | 0.68 | 0.59 | 0.61 | 0.63
$11$ | 0.68 | 0.59 | 0.59 | 0.62
### VII-E GridTracer performance under varying testing scenarios
In this section we consider the performance of GridTracer under less
conventional testing scenarios. Our benchmark testing results were based upon
the data handling scheme illustrated in Fig. 8(a), which is the conventional
approach used in most benchmark problems in overhead imagery. Here we consider
the performance of GridTracer when tested using the data handling schemes
illustrated in Fig. 8(b,c). These experiments aim to address two
questions: (a) is training on geographically-diverse imagery beneficial? and
(b) how well does GridTracer generalize to previously-unseen geographic
regions?
Is geographically-diverse training data beneficial? As discussed in section
III-C, the three regions in the PG dataset have significantly different visual
characteristics and somewhat distinct PG topologies. Given the unique
characteristics of these regions, it is unclear whether it is beneficial to
train a single model on all regions simultaneously (the “conventional”
scheme), as opposed to training a unique model that is tailored to the
characteristics of each geographic region. We address this question by
comparing the performance of GridTracer when using data handling schemes (a)
and (b) in Fig. 8. In contrast to scheme (a), scheme (b) trains a model
separately for each geographic region.
The results of this experiment are presented in Table VIII. In all three
stages, the model trained with data handling scheme (a) generally outperforms
the one trained with scheme (b). This suggests that training only on data
sourced from the same visual domain as the testing data tends to under-perform
training on a more geographically (and thereby visually) diverse pool of data.
TABLE VIII: GridTracer performance with the “conventional” data handling scheme in Fig. 8(a) and the “in-domain” scheme in Fig. 8(b)
 | Arizona | Kansas | New Zealand | Average
---|---|---|---|---
Data handling | Tower detection (DmAP)
(a) | 0.73 | 0.54 | 0.55 | 0.61
(b) | 0.68 | 0.53 | 0.51 | 0.57
| Line segmentation (IoU)
(a) | 54.43 | 40.04 | 45.62 | 46.70
(b) | 52.97 | 39.42 | 46.50 | 46.30
| Graph inference (F1)
(a) | 0.68 | 0.59 | 0.61 | 0.63
(b) | 0.70 | 0.56 | 0.48 | 0.58
Generalization to unseen geographies. The conventional testing scenario in
Fig. 8(a) is typical in computer vision research and implicitly assumes that
labeled training data is available in (or near) every geographic location on
which we wish to apply our recognition models. In practice, however, it is
cumbersome and costly to collect training imagery in each deployment location,
and re-train the model with that imagery. In this section we consider how well
GridTracer performs when evaluated in novel geographic locations - i.e.,
locations for which no training imagery is available in the training dataset.
We use the data handling scheme in Fig. 8(c) to approximate this realistic
scenario by limiting the model to be trained on only two regions and then
testing on the third (unseen) region. The results are presented in Table IX,
compared to the conventional testing scenario.
In scheme (c) both the tower detection and line segmentation stages yield poor
results. This indicates that the model does not generalize well to novel
geographic regions. This finding is consistent with, and corroborates, other
recent findings in the literature indicating that deep learning models do not
generalize well to new geographic regions [49, 16].
TABLE IX: GridTracer performance with the “conventional” data handling scheme in Fig. 8(a) and the “cross-domain” scheme in Fig. 8(c)
 | Arizona | Kansas | New Zealand | Average
---|---|---|---|---
Data handling | Tower detection (DmAP)
(a) | 0.73 | 0.54 | 0.55 | 0.61
(c) | 0.06 | 0.15 | 0.05 | 0.09
| Line segmentation (IoU)
(a) | 54.43 | 40.04 | 45.62 | 46.70
(c) | 7.51 | 20.18 | 7.95 | 11.88
## VIII Conclusion
In this work we proposed a novel approach for collecting power grid
information automatically by mapping (i.e., detecting and connecting)
transmission and distribution towers and lines in overhead imagery using deep
learning. We developed and publicly released a dataset of overhead imagery
with ground truth information for a variety of power grids. To our knowledge,
this is the first dataset of its kind in the public domain and will enable
other researchers to build increasingly effective transmission and
distribution grid mapping algorithms.
We also took the first steps towards tackling the PG mapping problem.
We developed and evaluated baseline algorithms for two problems: tower
detection and identifying tower interconnections through power lines. In
particular, we developed GridTracer as a baseline approach to solve the PG
mapping problem. We also estimated the ability of human annotators to perform
PG mapping, providing future researchers with an estimate of the level of PG
mapping accuracy that may ultimately be achievable with a fully-automatic
mapping algorithm. In particular, we found that GridTracer does not yet reach
human-level PG mapping accuracy, suggesting that further improvements can be
made to bridge this performance gap. Ultimately these results provide a strong
foundation for the development of automatic PG mapping techniques, which offer
a powerful tool to collect valuable information to support energy researchers
and decision-makers.
## Acknowledgment
The authors would like to thank the Duke University Bass Connections and Data+
programs for their support of this work and for each of the team members who
contributed to the construction of the dataset including Qiwei Han, Varun
Nair, Tamasha Pathirathna, Xiaolan You, Wendell Cathcart, Ben Alexander, Yutao
Gong, Xinchun Hu, Lin Zuo. Additional thanks to Wayne Hu for assistance with
the dataset and Sang-Jyh Lin, Jose Luis Moscoso, Andy Yang. This work was
supported in part by National Science Foundation Grant no. OIA-1937137. Bohao
Huang thanks the Duke University Energy Initiative PhD fellowship, funded by
the Alfred P. Sloan Foundation for supporting his work.
## References
* [1] United Nations, “The Sustainable Development Goals Report 2018,” Tech. Rep., 2018.
* [2] P. Alstone, D. Gershenson, and D. M. Kammen, “Decentralized energy systems for clean electricity access,” _Nature Climate Change_ , vol. 5, no. 4, pp. 305–314, 2015.
* [3] S. Szabó, K. Bódis, T. Huld, and M. Moner-Girona, “Energy solutions in rural Africa: Mapping electrification costs of distributed solar and diesel generation versus grid extension,” _Environmental Research Letters_ , vol. 6, no. 3, 2011.
* [4] D. Mentis, M. Welsch, F. Fuso Nerini, O. Broad, M. Howells, M. Bazilian, and H. Rogner, “A GIS-based approach for electrification planning-A case study on Nigeria,” _Energy for Sustainable Development_ , vol. 29, pp. 142–150, 2015.
* [5] Development Seed, “Mapping the electric grid using ML to augment human tracing of HV infrastructure,” 2018.
* [6] C. Arderne, C. Zorn, C. Nicolas, and E. Koks, “Predictive mapping of the global power system using open data,” _Scientific data_ , vol. 7, no. 1, pp. 1–12, 2020.
* [7] B. Huang, K. Lu, N. Audebert, A. Khalel, Y. Tarabalka, J. M. Malof, A. Boulch, B. L. Saux, L. Collins, K. Bradbury, S. Lefevre, and M. El-Saban, “Large-scale semantic classification: outcome of the first year of inria aerial image labeling benchmark,” in _International Geoscience and Remote Sensing Symposium_ , 2018.
* [8] I. Demir, K. Koperski, D. Lindenbaum, G. Pang, J. Huang, S. Basu, F. Hughes, D. Tuia, R. Raska, and R. Raskar, “Deepglobe 2018: A challenge to parse the earth through satellite images,” in _2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW)_. IEEE, 2018, pp. 172–17 209. [Online]. Available: http://arxiv.org/abs/1805.06561
* [9] F. Bastani, S. He, S. Abbar, M. Alizadeh, H. Balakrishnan, S. Chawla, S. Madden, and D. Dewitt, “RoadTracer: Automatic Extraction of Road Networks from Aerial Images,” _Computer Vision and Pattern Recognition (CVPR)_ , 2018. [Online]. Available: http://nms.lcs.mit.edu/papers/4023.pdf
* [10] J. Malof, K. Bradbury, L. Collins, and R. Newell, “Automatic detection of solar photovoltaic arrays in high resolution aerial imagery,” _Applied Energy_ , vol. 183, 2016.
* [11] J. Yu, Z. Wang, A. Majumdar, and R. Rajagopal, “DeepSolar: A Machine Learning Framework to Efficiently Construct a Solar Deployment Database in the United States,” _Joule_ , vol. 2, no. 12, pp. 2605–2617, 2018.
* [12] J. M. Malof, B. Li, B. Huang, K. Bradbury, and A. Stretslov, “Mapping solar array location , size , and capacity using deep learning and overhead imagery,” pp. 1–6, 2015.
* [13] Y. Li, Y. Chen, N. Wang, and Z. Zhang, “Scale-aware trident networks for object detection,” in _Proceedings of the IEEE international conference on computer vision_ , 2019, pp. 6054–6063.
* [14] W. Luo, Y. Li, R. Urtasun, and R. Zemel, “Understanding the effective receptive field in deep convolutional neural networks,” in _Advances in neural information processing systems_ , 2016, pp. 4898–4906.
* [15] G. Máttyus, W. Luo, and R. Urtasun, “Deeproadmapper: Extracting road topology from aerial images,” in _Proceedings of the IEEE International Conference on Computer Vision_ , 2017, pp. 3438–3446.
* [16] E. Maggiori, Y. Tarabalka, G. Charpiat, P. Alliez, E. Maggiori, Y. Tarabalka, G. Charpiat, P. Alliez, and C. Semantic, “Can Semantic Labeling Methods Generalize to Any City ? The Inria Aerial Image Labeling Benchmark,” pp. 3226–3229, 2017.
* [17] L. Matikainen, M. Lehtomäki, E. Ahokas, J. Hyyppä, M. Karjalainen, A. Jaakkola, A. Kukko, and T. Heinonen, “Remote sensing methods for power line corridor surveys,” _ISPRS Journal of Photogrammetry and Remote Sensing_ , vol. 119, pp. 10–31, 2016.
* [18] J. Ahmad, A. S. Malik, L. Xia, and N. Ashikin, “Vegetation encroachment monitoring for transmission lines right-of-ways: A survey,” _Electric Power Systems Research_ , vol. 95, pp. 339–352, 2013.
* [19] Y. Kobayashi, G. G. Karady, G. T. Heydt, L. Fellow, and R. G. Olsen, “The Utilization of Satellite Images to Identify Trees Endangering Transmission Lines,” vol. 24, no. July 2009, pp. 1703–1709, 2014.
* [20] T.-Y. Lin, P. Goyal, R. Girshick, K. He, and P. Dollár, “Focal loss for dense object detection,” in _Proceedings of the IEEE international conference on computer vision_ , 2017, pp. 2980–2988.
* [21] S. Ren, K. He, R. Girshick, and J. Sun, “Faster r-cnn: Towards real-time object detection with region proposal networks,” in _Advances in neural information processing systems_ , vol. 39, no. 6, 2015, pp. 91–99.
* [22] K. He, G. Gkioxari, P. Dollár, and R. Girshick, “Mask r-cnn,” in _Proceedings of the IEEE international conference on computer vision_ , 2017, pp. 2961–2969. [Online]. Available: http://arxiv.org/abs/1703.06870
* [23] T.-Y. Y. Lin, M. Maire, S. Belongie, J. Hays, P. Perona, D. Ramanan, P. Dollár, and C. L. Zitnick, “Microsoft coco: Common objects in context,” in _European conference on computer vision_ , vol. 8693 LNCS, no. PART 5. Springer, 2014, pp. 740–755.
* [24] X. Yang, J. Yang, J. Yan, Y. Zhang, T. Zhang, Z. Guo, S. Xian, and K. Fu, “SCRDet: Towards More Robust Detection for Small, Cluttered and Rotated Objects,” pp. 8232–8241, 2018. [Online]. Available: http://arxiv.org/abs/1811.07126
* [25] C. Li, C. Xu, Z. Cui, D. Wang, Z. Jie, T. Zhang, and J. Yang, “Learning Object-Wise Semantic Representation for Detection in Remote Sensing Imagery,” _Cvprw_ , pp. 20–27, 2019.
* [26] S. M. Azimi, E. Vig, R. Bahmanyar, M. Körner, and P. Reinartz, “Towards multi-class object detection in unconstrained remote sensing imagery,” in _Asian Conference on Computer Vision_. Springer, 2018, pp. 150–165.
* [27] O. Ronneberger, P. Fischer, and T. Brox, “U-Net: Convolutional Networks for Biomedical Image Segmentation,” _International Conference on Medical image computing and computer-assisted intervention_ , pp. 1–8, 2015. [Online]. Available: http://arxiv.org/abs/1505.04597
* [28] L.-C. Chen, G. Papandreou, F. Schroff, and H. Adam, “Rethinking Atrous Convolution for Semantic Image Segmentation,” 2017. [Online]. Available: http://arxiv.org/abs/1706.05587
* [29] V. I. Iglovikov, S. Seferbekov, A. V. Buslaev, and A. Shvets, “TernausNetV2: Fully Convolutional Network for Instance Segmentation,” 2018. [Online]. Available: http://arxiv.org/abs/1806.00844
* [30] V. Iglovikov, S. Mushinskiy, and V. Osin, “Satellite Imagery Feature Detection using Deep Convolutional Neural Network: A Kaggle Competition,” 2017. [Online]. Available: http://arxiv.org/abs/1706.06169
* [31] A. Batra, S. Singh, G. Pang, S. Basu, C. Jawahar, and M. Paluri, “Improved road connectivity by joint learning of orientation and segmentation,” in _Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition_ , 2019, pp. 10 385–10 393.
* [32] D. Chaudhuri, N. K. Kushwaha, A. Samal, and R. C. Agarwal, “Automatic Building Detection From High-Resolution Satellite Images Based on Morphology and Internal Gray Variance,” _Selected Topics in Applied Earth Observations and Remote Sensing, IEEE Journal of_ , vol. PP, no. 99, pp. 1–13, 2015.
* [33] V. Mnih and G. E. Hinton, “Learning to detect roads in high-resolution aerial images,” _Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)_ , vol. 6316 LNCS, no. PART 6, pp. 210–223, 2010.
* [34] A. Marcu and M. Leordeanu, “Dual Local-Global Contextual Pathways for Recognition in Aerial Imagery,” 2016. [Online]. Available: http://arxiv.org/abs/1605.05462
* [35] S. Qian, L. Xiaoping, and L. Xia, “Road Detection from Remote Sensing Images by Generative Adversarial Networks,” vol. 3536, no. c, pp. 1–10, 2017.
* [36] M. Cordts, M. Omran, S. Ramos, T. Rehfeld, M. Enzweiler, R. Benenson, U. Franke, S. Roth, and B. Schiele, “The cityscapes dataset for semantic urban scene understanding,” in _Proceedings of the IEEE conference on computer vision and pattern recognition_ , 2016, pp. 3213–3223.
* [37] C. Szegedy, S. Ioffe, V. Vanhoucke, and A. Alemi, “Inception-v4, Inception-ResNet and the Impact of Residual Connections on Learning,” _Arxiv.Org_ , 2016.
* [38] M. Everingham, L. Van Gool, C. K. I. Williams, J. Winn, and A. Zisserman, “The pascal visual object classes (voc) challenge,” _International journal of computer vision_ , vol. 88, no. 2, pp. 303–338, 2010.
* [39] N. Bodla, B. Singh, R. Chellappa, and L. S. Davis, “Soft-nms – improving object detection with one line of code,” in _The IEEE International Conference on Computer Vision (ICCV)_ , Oct 2017.
* [40] J. Yang, J. Lu, S. Lee, D. Batra, and D. Parikh, “Graph R-CNN for Scene Graph Generation,” _Proceedings of the European Conference on Computer Vision (ECCV)_ , pp. 670–685, 2018.
* [41] J. Pang, C. Li, J. Shi, Z. Xu, and H. Feng, “R2-cnn: Fast tiny object detection in large-scale remote sensing images,” _IEEE Transactions on Geoscience and Remote Sensing_ , vol. 57, no. 8, pp. 5512–5524, 2019.
* [42] S. Dodge and L. Karam, “A study and comparison of human and deep learning recognition performance under visual distortions,” in _2017 26th international conference on computer communication and networks (ICCCN)_. IEEE, 2017, pp. 1–7.
* [43] Y. Netzer and T. Wang, “Reading digits in natural images with unsupervised feature learning,” _Nips_ , pp. 1–9, 2011.
* [44] J. Stallkamp, M. Schlipsing, J. Salmen, and C. Igel, “Man vs. computer: Benchmarking machine learning algorithms for traffic sign recognition,” _Neural networks_ , vol. 32, pp. 323–332, 2012.
* [45] J. Choi, H. Lei, V. Ekambaram, P. Kelm, L. Gottlieb, T. Sikora, K. Ramchandran, and G. Friedland, “Human vs machine: establishing a human baseline for multimodal location estimation,” in _Proceedings of the 21st ACM international conference on Multimedia_ , 2013, pp. 867–876.
* [46] L. Zhou, C. Zhang, and M. Wu, “D-linknet: Linknet with pretrained encoder and dilated convolution for high resolution satellite imagery road extraction,” _IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops_ , vol. 2018-June, pp. 192–196, 2018.
* [47] K. He, X. Zhang, S. Ren, and J. Sun, “Deep residual learning for image recognition,” in _Proceedings of the IEEE conference on computer vision and pattern recognition_ , 2016, pp. 770–778.
* [48] Jia Deng, Wei Dong, R. Socher, Li-Jia Li, Kai Li, and Li Fei-Fei, “ImageNet: A large-scale hierarchical image database,” _2009 IEEE Conference on Computer Vision and Pattern Recognition_ , pp. 248–255, 2009. [Online]. Available: http://ieeexplore.ieee.org/lpdocs/epic03/wrapper.htm?arnumber=5206848
* [49] F. Kong, B. Huang, K. Bradbury, and J. Malof, “The synthinel-1 dataset: a collection of high resolution synthetic overhead imagery for building segmentation,” in _The IEEE Winter Conference on Applications of Computer Vision_ , 2020, pp. 1814–1823.
# Unsupervised Noisy Tracklet Person Re-identification
Minxian Li
Nanjing University of Science and Technology
<EMAIL_ADDRESS>
Xiatian Zhu
Vision Semantics Limited
<EMAIL_ADDRESS>
Shaogang Gong
Queen Mary University of London
<EMAIL_ADDRESS>
###### Abstract
Existing person re-identification (re-id) methods mostly rely on supervised
model learning from a large set of person identity labelled training data per
domain. This limits their scalability and usability in large scale
deployments. In this work, we present a novel selective tracklet learning
(STL) approach that can train discriminative person re-id models from
unlabelled tracklet data in an unsupervised manner. This avoids the tedious
and costly process of exhaustively labelling person image/tracklet true
matching pairs across camera views. Importantly, our method is particularly
robust against arbitrarily noisy raw tracklet data and is therefore scalable
to learning discriminative models from unconstrained tracking data. This
differs from a handful of existing alternative methods that often assume the
existence of true matches and balanced tracklet samples per identity class.
This is achieved by formulating a data adaptive image-to-tracklet selective
matching loss function explored in a multi-camera multi-task deep learning
model structure. Extensive comparative experiments demonstrate that the
proposed STL model surpasses significantly the state-of-the-art unsupervised
learning and one-shot learning re-id methods on three large tracklet person
re-id benchmarks.
## 1 Introduction
Figure 1: (Top) Existing tracklet person re-id benchmarks exhibit less
realistic evaluation scenarios for unsupervised model learning. This is due to
data selection from manual annotation which gives rises to an easier learning
task with significantly less noisy data. In current benchmarks, tracklets
without cross-camera matches are excluded, and poor tracklets are often
discarded including (Bottom): (a) Partial detection, (b) Multiple persons, (c)
Non-person, (d) Severe occlusion, (e) Identity switch.
Person re-identification (re-id) is a task of matching the identity
information of person bounding box images extracted from disjoint surveillance
camera views [9]. Existing state-of-the-art re-id methods rely heavily on
supervised deep learning [42, 19, 20, 40, 34, 2, 32, 46, 33, 35]. They assume
a large set of cross-camera pairwise training data exhaustively labelled per
surveillance camera network (i.e. domain), and are often significantly
degraded for new domain deployments. Such poor cross-domain scalability leads
to recent research focus on developing unsupervised domain adaptation [50, 5,
51, 30, 39] and unsupervised deep learning [16, 17, 22, 3] methods. In
general, model learning with domain adaptation is less scalable since it
assumes some common characteristics between the source and target domains
which is not always true.
Unsupervised deep learning re-id models [16, 17, 22, 3] have started
increasingly to explore unlabelled tracking data (tracklets). This is
reasonable and intuitive because most image frames of a tracklet may share the
person identity (ID), which provides rich spatio-temporal appearance variation
information. To enable discriminative model optimisation, the key is to self-
discover and learn reliable within-camera and cross-camera tracklet true
matchings among inevitably noisy tracklet training data. This is non-trivial
because person ID labels are unavailable for discriminative learning, tracklet
frame detections are noisy, and the majority of pairs are false matches.
Existing tracklet person re-id benchmarks (e.g. MARS [49] and DukeMTMC-SI-
Tracklet [31, 17]) present artificially simplified evaluation scenarios for
unsupervised learning. This is because after manual selection and annotation
in dataset construction, their tracklet data are no longer realistic (Fig. 1).
For example, all the tracklets without cross-camera true matching and with
poor conditions are often removed. Manually removing such tracklets in
annotation would significantly simplify model learning, e.g. a 10% rank-1 rate
difference in model performance [17]. In real-world applications, tracklet
manual filtering is not available. Scalable unsupervised learning algorithms
are required to handle automatically unconstrained raw tracklet data without
manual data selection.
In this work, we consider the problem of unsupervised deep learning on
unconstrained raw tracklet data, which is a more realistic and scalable
setting than the existing tests [49, 31, 17]. Given unfiltered and unlabelled
noisy training data, more robust tracklet learning algorithm is required. To
this end, we present a selective tracklet learning (STL) method. STL is
characterised by a robust image-to-tracklet selective matching loss function
that is able to selectively associate true matching tracklets and adaptively
suppress potentially noisy frame images and tracklets in model learning. It
does not assume the existence of true matches for individual tracklets within
and across camera views.
The contributions of this work are as follows: (1) We analyse the limitations
of the existing tracklet person re-id benchmarks and methods. In particular,
the current benchmarks fail to reflect the true challenges for unsupervised
model learning, due to the effect of data selection during manual annotation.
This phenomenon makes the developed methods less scalable and robust to more
realistic scenarios. (2) We formulate a selective tracklet learning (STL)
method for unsupervised deep learning with superior robustness against
unconstrained tracklet data. This is achieved by designing a data adaptive
image-to-tracklet selective matching loss function. (3) To enable more
realistic unsupervised tracklet learning test, we introduce an unconstrained
raw tracklet person re-id dataset DukeMTMC-Raw-Tracklet. It is constructed
based on the DukeMTMC tracking benchmark [31]. Extensive comparative
experiments show the performance advantages and superior robustness of STL
over the state-of-the-art unsupervised and one-shot learning models on three
tracklet person re-id benchmarks: MARS [48], DukeMTMC-SI-Tracklet [17, 31],
and the newly introduced DukeMTMC-Raw-Tracklet.
## 2 Related Work
Supervised person re-id. Most existing person re-id models are supervised
learning methods with a large set of cross-camera ID labelled training data
[18, 4, 19, 20, 40, 34, 2, 32, 33, 35]. Moreover, the training and test data
are typically assumed to be sampled from the same surveillance camera network,
i.e. the same domain. As a result, their scalability and usability is
significantly reduced for large real-world applications. This is because no
such large training sets are available in typical test domains due to high
labelling costs.
Supervision reduction. To address the scalability and generalisation
limitation of supervised learning re-id models, unsupervised model learning is
desired. A trade-off can be achieved by semi-supervised learning [25, 38],
although still need labelled data. Alternatively, human-in-the-loop models can
reduce the labelling effort by leveraging human-computer interaction [37, 24],
although the process can be overly elaborated and involved. Unsupervised model
learning is attractive without the need to collect ID labelled training data.
However, earlier attempts [7, 28, 14, 13, 12, 44, 26, 23, 36, 47] have rather
poor re-id performance due to weak hand-crafted features.
Unsupervised domain adaptation person re-id. Recently, unsupervised domain
adaptation methods have gained noticeable success [39, 6, 29, 45, 5, 51]. The
idea is to transfer the available person identity information from a labelled
source domain to an unlabelled target domain. Existing methods can be
generally divided into three groups. The first group is image synthesis which
aims to render the source person identities into the target domain environment
[50, 5, 51, 1]. As such, the conventional supervised learning algorithms are
enabled to train re-id models. The second group is the conventional feature
alignment scheme [30, 29, 39]. The third group is by unsupervised clustering
which generates pseudo labels for supervised learning [6, 45]. These methods
are usually stronger than unsupervised learning methods. However, they assume
a similar imagery data distribution between the source and target domains,
which restricts their generalisation to arbitrary and unconstrained
application scenarios.
Figure 2: An overview of the proposed selective tracklet learning (STL)
method for unsupervised tracklet re-id model learning. (a) The frames from the
same tracklet with the noisy frame. (b) The frame feature and tracklet feature
are both used for model learning. (c) An adaptive sampler is introduced to
generate per-camera neighbours and cross-camera neighbours of tracklets. (d) A
per-camera image-to-tracklet selective matching loss
$\mathcal{L}_{\text{pcm}}$ is proposed to learn the feature representation
against noisy tracklet data within the same camera. (e) A cross-camera image-
to-tracklet selective matching loss $\mathcal{L}_{\text{ccm}}$ is proposed to
learn the feature representation against noisy tracklet data across cameras.
Unsupervised video tracklet person re-id. Unsupervised video tracklet re-id
methods have advanced notably [16, 3, 22, 17]. They excel at leveraging
the spatio-temporal continuity information of tracklet data freely available
from person tracking. Whilst solving this tracklet learning problem is
inherently challenging, it promises enormous potential due to the massive
amount of surveillance video available for model learning. To encourage novel model
development, two large scale tracklet person re-id benchmarks [48, 17, 31]
were constructed. They are based on exhaustive data selection and identity
labelling during benchmarking. When testing unsupervised learning algorithms,
one assumes no identity labels whilst still using the same training tracklet
data. This means the training data are selected manually, rather than being
the unconstrained raw tracklet data typically encountered in real-world
unsupervised model learning. This discrepancy in model training data means the
existing benchmarks fail to test the realistic performance of unsupervised
model learning algorithms. Moreover, this data bias also undesirably
influences algorithm design. For example, current methods [16, 3, 17, 22]
often assume the existence of tracklet matches and class-balanced training
data. Such assumptions, however, are often not valid on truly unconstrained
realistic unfiltered tracklets (Table 2).
In this study, we aim to resolve the aforementioned artificial and unrealistic
assumptions, in order to scale up unsupervised learning algorithms to real-world
application scenarios. To this end, we propose a selective tracklet learning
approach. It is capable of learning a discriminative re-id model directly from
unlabelled raw tracklet data without manual selection and noise removal. To
enable testing true performance, we further introduce an unconstrained raw
tracklet person re-id benchmark by using DukeMTMC videos.
## 3 Methodology
Problem setting. We start with automated person tracklet extraction on a large
set of multi-camera surveillance videos by off-the-shelf detection and
tracking models [15, 8]. Let us denote the $i$-th tracklet from the $m$-th
camera as $\bm{T}_{i}^{m}=\\{\bm{I}_{1},\bm{I}_{2},\cdots\\}$ with a varying
number of image frames. There is a total of $M$ camera views. The per-camera
tracklet number $N^{m}$ varies. In unsupervised learning, no person ID labels
are available on person tracklet data. The objective is to learn a
discriminative person re-id model from these unconstrained raw tracklets
without any manual processing.
Approach overview. We adopt multi-task unsupervised tracklet learning with
each task dedicated for an individual camera view as [17, 16]. In particular,
we first automatically annotate each tracklet with a unique class label per-
camera. Each camera is therefore associated with an independent class label
set. In multi-task learning, we ground all the branches on a common re-id
feature representation. This model takes individual frame images $\bm{I}$ as
input (Fig. 2(a)) rather than tracklets, which makes it possible to model
noisy image frames within tracklets. After extracting the frame feature with
the backbone CNN, we aggregate the frame features from the same tracklet into
the tracklet feature. In particular, we propose an adaptive sampler that
generates two neighbour sets: per-camera neighbours and cross-camera
neighbours. Based on these two neighbour sets, we propose per-camera and
cross-camera image-to-tracklet selective matching loss functions to
learn the feature representation against noisy tracklet data. An overview of
our STL is depicted in Fig. 2.
### 3.1 Per-Camera View Learning
To enforce model learning constraints for each camera view, the softmax
Cross Entropy (CE) classification loss function is utilised [17, 16]:
$\mathcal{L}_{\text{ce}}={-}\sum_{i=1}^{N^{m}}\mathbbm{1}(i=y)\cdot{\log}\Big{(}\frac{\exp({\bm{W}_{i}^{\top}{\bm{x}}})}{\sum_{k=1}^{N^{m}}\exp({\bm{W}_{k}^{\top}{\bm{x}}})}\Big{)}$
(1)
where $\bm{x}$ is the task-shared feature vector of an input frame image
$\bm{I}$ and $\bm{W}_{i}$ the classifier parameters for the $i$-th tracklet
label. The indicator function $\mathbbm{1}(\cdot)$ returns $1$ for true
arguments and $0$ otherwise. The term inside $\log(\cdot)$ defines the
posterior probability of the tracklet label $i$.
In this context, the CE loss assumes that each tracklet is matched with
only a single person candidate. This is not valid for much raw tracklet data,
which introduces noise into model learning. To address this limitation, we
propose a novel image-to-tracklet selective matching loss formulation. It is a
weighted non-parametric formulation of the CE loss. Formally, the proposed
per-camera image-to-tracklet selective matching loss function for a specific
training image frame $\bm{I}_{i}$ is designed as (Fig. 2(d)):
$\mathcal{L}_{\text{pcm}}={-}\sum_{j=1}^{N^{m}}{w(i,j)}\cdot{\log}(P(\bm{z}_{j}|\bm{x}_{i}))$
(2)
where $\bm{x}_{i}$ and $\bm{z}_{j}$ denote the feature vectors of the image
$\bm{I}_{i}$ and of the $j$-th tracklet $\bm{T}_{j}$ respectively,
$P(\bm{z}_{j}|\bm{x}_{i})$ is the matching probability between image
$\bm{x}_{i}$ and tracklet $\bm{z}_{j}$, $N^{m}$ is the tracklet number of the
$m$-th camera view, and $w(i,j)$ denotes the similarity weight between
$\bm{I}_{i}$'s own tracklet and the $j$-th tracklet $\bm{z}_{j}$. This weight
aims to minimise the negative effect of trajectory fragmentation by taking
into account the pairwise tracklet information in classification [17].
computation of $w(i,j)$, as defined in Eq. (6), will be discussed below.
To suppress the contamination of tracklets by noisy and distracting image
frames, we introduce the posterior probability $P(\bm{z}_{j}|\bm{x}_{i})$
based on image-to-tracklet selective matching as:
$P(\bm{z}_{j}|\bm{x}_{i})=\frac{\exp(\bm{z}^{\top}_{j}\bm{x}_{i}/\tau)}{\sum_{k=1}^{N^{m}}\exp(\bm{z}^{\top}_{k}\bm{x}_{i}/\tau)}$
(3)
where $\bm{z}^{\top}_{j}\bm{x}_{i}$ expresses the matching degree between
image $\bm{x}_{i}$ and tracklet $\bm{z}_{j}$, $\tau$ is a temperature
parameter that controls the concentration of the distribution [11]. It is
normalised over all the $N^{m}$ tracklets by the softmax function.
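To make Eq. (3) concrete, the following minimal PyTorch-style sketch computes the point-to-set matching probability; the tensor names and batch layout are our own illustrative assumptions, not the paper's released implementation.

```python
import torch
import torch.nn.functional as F

def matching_probability(frame_feats, tracklet_memory, tau=0.1):
    """Point-to-set matching probability of Eq. (3).

    frame_feats:     (B, D) L2-normalised frame features x_i
    tracklet_memory: (N, D) L2-normalised tracklet features z_j (one camera)
    Returns:         (B, N) probabilities P(z_j | x_i)
    """
    logits = frame_feats @ tracklet_memory.t() / tau  # z_j^T x_i / tau
    return F.softmax(logits, dim=1)                   # normalise over the N tracklets
```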
In contrast to the point-to-point probability in the non-parametric
classification loss [41], Eq. (3) is a point-to-set matching probability,
which is more robust to contaminated and distracting tracklets. This can be
understood from two aspects: (1) Suppose a tracklet contains noisy frames due
to multiple persons, non-person content, ID switches, etc. The noisy frames
tend to have much smaller matching scores $\bm{z}^{\top}_{j}\bm{x}_{i}$
against a true-matching tracklet than other, clean frames. (2) The image-to-
tracklet pairs with large matching scores will become significantly more
salient after applying the exponential operation $\exp$. This is effectively a
process of selecting good-quality matching tracklets (e.g. less noisy true
matches) and simultaneously down-weighing the remaining ones (e.g. more noisy
true matches and false matches). If there is no true match, all
$\bm{z}^{\top}_{j}\bm{x}_{i}$ values tend to be small. This data adaptive and
selective matching capability is highly desired for dealing with noisy raw
tracklets in unsupervised tracklet re-id learning.
In unsupervised tracklet training data, the majority of tracklet pairs per
camera are false matches. Therefore, considering all the pairs in Eq. (2) is
likely to introduce a large amount of negative matching. Following [17], we
consider only a fraction of tracklets that are more likely to be true matches (i.e. tracklet
association). To this end, $k$ nearest neighbour ($k$-NN) search is often
adopted:
$\mathcal{N}_{k}(\bm{z})=\\{\bm{z}^{\prime}\;|\;\bm{z}^{\prime\top}\bm{z}\text{
is among top-}k\\}$ (4)
For each tracklet, this implicitly assumes $k$ true matches in each camera
view. Given unconstrained raw tracklets without manual selection, this
condition is often harder to meet. Data adaptive tracklet association is hence
needed.
To that end, we propose to further exploit the concept of the
$\epsilon$-neighbourhood ($\epsilon$-NN) (Fig. 2(c)):
$\mathcal{N}_{k+\epsilon}(\bm{z})=\\{\bm{z}^{\prime}\;|\;\bm{z}^{\prime\top}\bm{z}>\epsilon\;\;\&\;\;{\text{among
top-}k}\\},$ (5)
where $\epsilon$ is the neighbourhood boundary threshold. Adding such a
similarity score constraint, we aim to filter out the noisy tracklet pairs
associated by $k$-NN with low pairwise proximity. The resulting neighbourhood
sizes vary from 0 to $k$ in accordance with how many similar tracklets exist,
i.e. adaptive to the tracklet data. This property is critical for model learning on
unconstrained tracklets without guarantee of a fixed number of reliable good-
quality true matches.
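A minimal sketch of this adaptive association (Eq. (5)) might look as follows; the function and variable names are hypothetical, and we assume all features are L2-normalised so that inner products are cosine similarities.

```python
import torch

def adaptive_neighbours(z, candidates, k=1, eps=0.7):
    """k-NN restricted by an epsilon threshold, as in Eq. (5).

    z:          (D,) query tracklet feature
    candidates: (N, D) other tracklet features (query itself excluded)
    Returns indices of at most k candidates with similarity above eps;
    the neighbourhood size adapts from 0 to k with the data.
    """
    sim = candidates @ z                        # z'^T z similarities
    top = torch.topk(sim, k=min(k, sim.numel()))
    return top.indices[top.values > eps]        # drop low-proximity neighbours
```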
After obtaining possibly matching tracklets
$\mathcal{N}_{k+\epsilon}(\bm{z}_{i})$ for a specific tracklet $\bm{z}_{i}$,
we can compute the tracklet similarity weight as their $L_{1}$ normalised
quantity:
$w(i,j)=\left\\{\begin{array}[]{ll}\frac{\bm{z}_{j}^{\top}\bm{z}_{i}}{\sum_{\bm{z}_{k}^{\prime}\in\mathcal{N}_{k+\epsilon}(\bm{z}_{i})}\bm{z}_{k}^{\prime\top}\bm{z}_{i}},&\text{if}\;\;\bm{z}_{j}\in\mathcal{N}_{k+\epsilon}(\bm{z}_{i})\\\
0,&\text{otherwise}\end{array}\right.$ (6)
As such, only visually similar tracklets potentially with minimal noisy image
frames are encouraged (Eq. (2)) to be positive matches in model learning.
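Putting Eqs. (2), (3) and (6) together, a sketch of the per-camera selective matching loss for a single frame could read as below; whether the tracklet's own entry is kept in its neighbourhood, and the zero-loss fallback for an empty neighbourhood, are our own assumptions.

```python
import torch

def pcm_loss(x_i, z_i, memory, nbr_idx, tau=0.1):
    """Per-camera image-to-tracklet selective matching loss (Eq. (2)).

    x_i:     (D,) frame feature
    z_i:     (D,) the frame's own tracklet feature
    memory:  (N, D) tracklet features of this camera view
    nbr_idx: indices of N_{k+eps}(z_i) obtained from Eq. (5)
    """
    if nbr_idx.numel() == 0:               # no reliable match: Eq. (6) gives w = 0
        return x_i.new_zeros(())
    log_p = torch.log_softmax(memory @ x_i / tau, dim=0)  # log P(z_j | x_i), Eq. (3)
    w = memory[nbr_idx] @ z_i              # z_j^T z_i over the neighbourhood
    w = w / w.sum()                        # L1 normalisation, Eq. (6)
    return -(w * log_p[nbr_idx]).sum()     # weighted sum of Eq. (2)
```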
Discussion. In formulation, our image-to-tracklet selective matching loss is
similar to the instance loss [41]. Both are non-parametric variants of the CE loss.
However, there are a few fundamental differences: (1) The instance loss treats
each individual image as a class, whilst our loss considers tracklet-wise
classes. Conceptually, this introduces a two-tier hierarchical structure into
the instance loss: local image and global tracklet. (2) The instance loss does
not consider the camera view structure. In contrast, we uniquely combine the
multi-task inference idea with tracklet classes for additionally exploiting
the underlying correlation between per-camera tracklet groups. Moreover, our
loss design shares some spirit with the focal loss [21], in that both use a
modulating parameter to control a target quantity (a noise measure in ours
and an imbalance measure in the focal loss). Beyond their different
formulations, however, there are more fundamental differences: (1) The focal loss is
parametric and supervised vs. our non-parametric and unsupervised loss. (2)
The focal loss aims to solve the class imbalance between positive and negative
samples in supervised learning, whilst ours is for selective and robust image-
to-tracklet matching in unsupervised learning.
### 3.2 Cross-Camera View Learning
Besides per-camera view learning, it is crucial to simultaneously consider
cross-camera tracklet learning [17, 16]. To this end, we need to similarly
perform tracklet association across camera views. We consistently utilise
$k$-NN+$\epsilon$-NN for tracklet association across different camera views.
Specifically, for a tracklet $\bm{z}$ we search the nearest tracklets from
different cameras (Fig. 2(c)):
$\tilde{\mathcal{N}}_{k+\epsilon}(\bm{z})=\\{\bm{z}^{\prime}\;|\;\bm{z}^{\prime\top}\bm{z}>\epsilon\;\;\&\;\;{\text{among
top-}k}\\}.$ (7)
With the self-discovered cross-camera tracklet association
$\tilde{\mathcal{N}}_{k+\epsilon}(\bm{z}_{i})$ of a specific tracklet
$\bm{z}_{i}$, which contains a training image frame $\bm{x}_{i}$, we then
enforce a cross-camera image-to-tracklet matching loss function (Fig. 2(e)):
$\mathcal{L}_{\text{ccm}}=\sum_{\bm{z}^{\prime}\in\tilde{\mathcal{N}}_{k+\epsilon}(\bm{z}_{i})}1-\bm{z}^{\prime\top}\bm{x}_{i}.$
(8)
This loss encourages the image $\bm{x}_{i}$ to have a feature representation
as similar as possible to those of visually alike tracklets in a cross-camera
sense. In doing so, person appearance variation across camera views is
minimised when the image-to-tracklet association is correct.
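A corresponding sketch of Eq. (8) for one frame is given below; as before, the names are hypothetical and an empty neighbourhood simply contributes no loss.

```python
import torch

def ccm_loss(x_i, cross_nbr_feats):
    """Cross-camera image-to-tracklet matching loss of Eq. (8).

    x_i:             (D,) frame feature
    cross_nbr_feats: (K, D) features of the cross-camera neighbours from Eq. (7)
    """
    if cross_nbr_feats.numel() == 0:
        return x_i.new_zeros(())
    return (1.0 - cross_nbr_feats @ x_i).sum()  # sum of 1 - z'^T x_i terms
```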
Algorithm 1 The STL model training procedure.
Input: Automatically generated raw tracklet data.
Output: An optimised person re-id model.
for $e=1$ to max_epoch do
if $e$ in the first training stage epochs
Update per-camera tracklet neighbourhood (Eq. (5))
for $t=1$ to per-epoch iteration number do
Feedforward a mini-batch of tracklet frame images
Update the tracklet representations (Eq. (10))
Compute per-camera matching loss (Eq. (2))
Update the model by back-propagation
end for
else /* The second training stage epochs */
Update per-camera tracklet neighbourhood (Eq. (5))
Update cross-camera tracklet neighbourhood (Eq. (7))
for $t=1$ to per-epoch iteration number do
Feedforward a mini-batch of tracklet frame images
Update tracklet representations (Eq. (10))
Compute STL model training loss (Eq. (9))
Update the network model by back-propagation
end for
end if
end for
Dataset | # Identity (Train) | # Tracklet (Train) | # Identity (Test) | # Tracklet (Test)
---|---|---|---|---
MARS* [48] | 625 | 8,298 | 636 | 11,310
DukeMTMC-SI-Tracklet* [31, 17] | 702 | 5,803 | 1,086 | 6,844
DukeMTMC-Raw-Tracklet* (New) | 702 | 7,427 | 1,105 | 8,950
DukeMTMC-Raw-Tracklet (New) | 702 + unknown | 12,452 | 1,105 | 8,950
Table 1: Dataset statistics and benchmarking setting. *: With tracklet
selection.
### 3.3 Model Training
Overall objective loss. By combining the per-camera and cross-camera learning
constraints, we obtain the final model objective loss function as:
$\mathcal{L}_{\text{STL}}=\mathcal{L}_{\text{pcm}}+\lambda\mathcal{L}_{\text{ccm}}$
(9)
where $\lambda$ is a balancing weight. Trained jointly by
$\mathcal{L}_{\text{pcm}}$ and $\mathcal{L}_{\text{ccm}}$, the STL model is
able to mine the discriminative re-id information both within- and cross-
camera views concurrently. This overall loss function is differentiable,
therefore allowing for end-to-end model optimisation. For more accurate
tracklet association between camera views, we start to apply the cross-camera
matching loss (Eq. (8)) in the middle of training, as in [17]. The STL model
training process is summarised in Algorithm 1.
Tracklet representation. In our model formulation, we need to represent each
tracklet as a whole. To obtain this feature representation, we adopt a moving
average strategy [27] for computational scalability. Specifically, we maintain
a representation memory for each tracklet during training. In each training
iteration, given an input image frame $\bm{x}$, we update its corresponding
tracklet’s feature vector $\bm{z}$ as:
$\bm{z}=\frac{1}{2}(\bm{z}+\bm{x})$ (10)
This scheme updates only the tracklets whose images are sampled in the current
mini-batch of each iteration. Although not all the tracklets are updated and
synchronised along with the model training, the discrepancy from their
accurate representations is expected to be marginal, owing to the small model
learning rate, and therefore matters little.
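A sketch of this update rule (Eq. (10)) is given below; re-normalising the memory row after averaging is our own assumption, made so that inner products remain cosine similarities.

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def update_memory(memory, tracklet_ids, frame_feats):
    """Moving-average tracklet representation update of Eq. (10).

    memory:       (N, D) tracklet feature memory
    tracklet_ids: (B,) tracklet index of each frame in the mini-batch
    frame_feats:  (B, D) frame features from the current mini-batch
    """
    for j, x in zip(tracklet_ids.tolist(), frame_feats):
        memory[j] = 0.5 * (memory[j] + x)          # z <- (z + x) / 2
        memory[j] = F.normalize(memory[j], dim=0)  # assumed re-normalisation
```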
Model parameter setting. In unsupervised re-id learning, we have no access to
labelled training data for model parameter selection by cross-validation.
Also, it is improper to use any test data for model parameter tuning which is
not available in real-world application. All the parameters of a model are
usually estimated empirically. Moreover, the identical parameter setting
should be used for all the different datasets in domain scalability and
generic large scale application considerations. With this principle, we set
the parameters of our STL model for all different tests and experiments as:
$\tau=0.1$ for Eq. (3), $\lambda=10$ for Eq. (9), $\epsilon=0.7$ and $k=1$ for
Eq. (5) and (7). Other parameter settings may give better model performance on
specific tests, but they are not exhaustively considered in our study: such
tuning often assumes extra domain knowledge which is not generally available,
therefore making the performance evaluation less realistic and less generic.
To minimise the negative effect of inaccurate cross-camera matching, STL
begins the model training with the per-camera image-to-tracklet selective
matching loss (Eq. (2)) during the first training stage, and then adds the
cross-camera image-to-tracklet selective matching loss (Eq. (8)) in the second
training stage. We do not incorporate the ccm loss until the second training
stage because the feature representations are insufficiently reliable for
cross-camera tracklet matching at the beginning of training.
## 4 Experiments
### 4.1 Experimental Setup
Datasets. To evaluate the proposed STL model, we tested on two publicly
available person tracklet datasets: MARS [48] and DukeMTMC-SI-Tracklet [31,
17]. The dataset statistics and test settings are given in Table 1. These two
datasets contain only manually selected person tracklet data, therefore
presenting less realistic unsupervised learning scenarios. To enable a more
realistic algorithm test, we introduced a new raw tracklet person re-id
dataset.
As with DukeMTMC-SI-Tracklet, we used the DukeMTMC tracking videos [31]. To
extract person tracklets, we leveraged an efficient detector-and-tracker model
[8] and a graph association method. From all the DukeMTMC videos captured by 8
distributed surveillance cameras, we obtained 21,402 person tracklets,
comprising a total of 1,341,096 bounding box images. Detection and tracking
errors are inevitable, as shown in Fig. 1(b). In reality, we usually assume no
manual effort for cleaning tracklet data but expect the unsupervised learning
algorithm to be sufficiently robust to any errors and noise. We therefore keep
all tracklets. In the spirit of DukeMTMC-SI-Tracklet, we call the newly
introduced dataset DukeMTMC-Raw-Tracklet.
For the DukeMTMC-Raw-Tracklet dataset benchmarking, we utilised a similar
method as [17] to automatically label the identity classes of test person
tracklets, enabling model performance tests. We used the same 1,105 test
person identity classes as DukeMTMC-SI-Tracklet to allow an apples-to-apples
comparison between datasets. The number of training person identity
classes is unknown due to no manual annotation, a natural property of
unconstrained raw tracklets in real-world application scenarios.
Figure 3: Cross-camera tracklet matching pairs from (a) MARS, (b) DukeMTMC-SI-
Tracklet, and (c) DukeMTMC-Raw-Tracklet.
To assess the exact effect of manual selection on unsupervised model learning,
we further selected the tracklets of the same 702 training person
identities as DukeMTMC-SI-Tracklet. Along with the same test data, we built
another version of the dataset with selected training tracklets, mirroring the
conventional datasets [48, 17]. We name this dataset DukeMTMC-Raw-Tracklet*.
The statistics of both datasets with and without selection are described in
Table 1.
Performance metrics. For model performance measurement, we used the Cumulative
Matching Characteristic (CMC) and mean Average Precision (mAP) metrics.
Implementation details. We used an ImageNet pre-trained ResNet-50 [10] as the
backbone for our STL model. The re-id feature representation is a
128-dimensional, $L_{2}$-normalised vector. Person bounding box images were
resized to $256\\!\times\\!128$. We adopted the Stochastic Gradient Descent
(SGD) optimiser. We set the learning rate to $3\\!\times\\!10^{-5}$, the
mini-batch size to 384, and the epoch number to 20. The first training stage
begins from the first epoch, and the second training stage from the tenth epoch.
Methods | MARS* (R1 / R5 / R20 / mAP) | Duke-SI-TKL* (R1 / R5 / R20 / mAP) | Duke-Raw-TKL* (R1 / R5 / R20 / mAP) | Duke-Raw-TKL (R1 / R5 / R20 / mAP)
---|---|---|---|---
GRDL [13] | 19.3 / 33.2 / 46.5 / 9.6 | - | - | -
UnKISS [12] | 22.3 / 37.4 / 53.6 / 10.6 | - | - | -
SMP† [26] | 23.9 / 35.8 / 44.9 / 10.5 | - | - | -
DGM+IDE† [44] | 36.8 / 54.0 / 68.5 / 21.3 | - | - | -
RACE† [43] | 43.2 / 57.1 / 67.6 / 24.5 | - | - | -
TAUDL [16] | 43.8 / 59.9 / 66.0 / 29.1 | 26.1 / 42.0 / 57.2 / 20.8 | - | -
DAL [3] | 46.8 / 63.9 / 71.6 / 21.4 | - | - | -
BUC [22] | 51.1 / 64.2 / 72.9 / 26.4 | 30.6 / 43.9 / 51.7 / 16.7 | 38.6 / 50.1 / 61.9 / 20.1 | 31.1 / 41.3 / 52.0 / 15.1
UTAL [17] | 49.9 / 66.4 / 77.8 / 35.2 | 43.8 / 62.8 / 76.5 / 36.6 | 48.7 / 62.9 / 76.6 / 38.4 | 41.3 / 55.7 / 71.3 / 31.8
STL (Ours) | 54.5 / 71.5 / 82.0 / 37.2 | 46.7 / 65.4 / 78.1 / 38.9 | 55.4 / 71.7 / 79.0 / 41.5 | 56.1 / 69.6 / 80.9 / 41.5
Table 2: Unsupervised person re-id performance on tracklet benchmarking
datasets. †: Use one-shot labels. *: With manual tracklet selection. TKL =
Tracklet.
### 4.2 Comparisons to the State-Of-The-Art Methods
Competitors. We compared the proposed STL method with three different
modelling strategies: (a) Hand-crafted feature based methods (GRDL [13],
UnKISS [12]), (b) One-shot learning methods (SMP [26], DGM+IDE [44], RACE
[43]), (c) Unsupervised deep learning models (TAUDL [16], DAL [3], BUC [22],
UTAL [17]).
Results. Table 2 compares the re-id performance. We have the following main
observations:
(1) Hand-crafted feature based methods (GRDL and UnKISS) produce the weakest
performance. This is due to the poor discriminative ability of manually
designed features and the lacking of end-to-end model optimisation.
(2) One-shot learning methods (SMP, DGM, RACE) improve the re-id model
generalisation capability. However, their assumption on one-shot training data
would limit their application scalability due to the need for some amount of
person identity labelling per domain.
(3) The more recent unsupervised deep learning methods (TAUDL, DAL, BUC, UTAL)
further push the boundary of model performance. (For BUC, we utilised the
officially released code [22] with the default parameter settings, and used a
single parameter setting on all the tests; in contrast, the authors of BUC
seemingly used the labelled test data to tune the model parameters, which is
improper for unsupervised learning, so the results on MARS were reported
differently.) However, all these existing methods are clearly outperformed by
the proposed STL model, in particular on the unconstrained raw tracklet
training data. This suggests the overall performance advantages of our model
over the strong alternative methods.
(4) Existing methods BUC and UTAL both suffer from the noisy data in
unconstrained raw tracklets, as indicated by their significant performance
drop from DukeMTMC-Raw-Tracklet* (with tracklet selection) to DukeMTMC-Raw-
Tracklet. This suggests that the data selection in dataset benchmarking
simplifies the model learning task, i.e. less challenging than the realistic
setting. The proposed DukeMTMC-Raw-Tracklet dataset is designed particularly
for addressing this problem.
(5) Our STL model is shown to be more robust against more noisy tracklet data,
with little re-id performance changes. This suggests the superiority of our
image-to-tracklet selective matching in dealing with more noisy unconstrained
tracklets.
(6) While both UTAL and STL adopt a multi-task learning model design, STL is
also superior on all three datasets with manual selection. This suggests that
assuming a fixed number of true matches (due to using $k$-NN) is suboptimal
even for carefully constructed training data, and further confirms the
modelling superiority of our image-to-tracklet selective matching in handling
unconstrained raw tracklet data with more noise.
### 4.3 Further Analysis and Discussions
To provide more insight and interpretation into the performance advantages of
our STL method, we analysed key model designs on two large tracklet re-id
datasets: MARS and DukeMTMC-Raw-Tracklet.
Loss design for per-camera view learning. The loss function for per-camera
view learning is a key component in STL. We compared our image-to-tracklet
selective matching (ITSM) loss (Eq. (2)) with the conventional cross-entropy
(CE) loss (Eq. (1).) Table 3 shows that the ITSM loss is significantly more
effective especially on DukeMTMC-Raw-Tracklet dataset. This suggests the
superiority of our loss design in handling noisy tracklets, thanks to its data
adaptive and selective learning capability.
Loss | MARS* Rank-1 | MARS* mAP | Duke-Raw-TKL Rank-1 | Duke-Raw-TKL mAP
---|---|---|---|---
CE | 43.8 | 31.4 | 22.4 | 15.3
ITSM | 48.2 | 32.2 | 49.1 | 34.2
Table 3: Evaluate loss design for per-camera view learning. ITSM: per-camera
Image-to-Tracklet Selective Matching.
Image-to-tracklet selective effect. We tested the data selective effect by
controlling the temperature parameter $\tau$ in Eq. (3). It is one of the key
factors for our method to be able to select possibly well matching tracklets
and deselect potentially noisy tracklets. Table 4 shows several key
observations. (1) With $\tau\\!=\\!1$, which imposes no selection effect in
matching, the model generalisation performance degrades significantly. (2)
When setting small values ($<$0.2) for $\tau$, the model performance is, as
expected, dramatically boosted. This is due to the modulating effect of
our loss function on the selective matching between images and tracklets. It
is also observed that more gain is obtained in case of unconstrained raw
tracklets due to more noise and distraction. (3) The optimal value is around
$\tau\\!=\\!0.1$ consistently, suggesting the generic benefit of a single
setting.
$\tau$ | MARS* Rank-1 | MARS* mAP | Duke-Raw-TKL Rank-1 | Duke-Raw-TKL mAP
---|---|---|---|---
1 | 15.7 | 9.8 | 10.4 | 6.3
0.5 | 25.3 | 15.1 | 14.6 | 11.4
0.2 | 44.7 | 29.9 | 41.7 | 30.5
0.1 | 54.5 | 37.2 | 56.1 | 41.5
0.05 | 47.6 | 32.5 | 47.7 | 35.7
Table 4: Evaluate the temperature $\tau$.
Benefit of cross-camera view learning. We evaluated the efficacy of
cross-camera view learning. Table 5 shows that significant re-id accuracy
gains can be obtained. This verifies the benefit of the cross-camera
image-to-tracklet matching loss (Eq. (8)) on top of per-camera view learning
(Eq. (2)).
CCM | MARS* Rank-1 | MARS* mAP | Duke-Raw-TKL Rank-1 | Duke-Raw-TKL mAP
---|---|---|---|---
✗ | 48.2 | 32.2 | 49.1 | 34.2
✓ | 54.5 | 37.2 | 56.1 | 41.5
Table 5: Effect of cross-camera matching (CCM) learning.
To examine cross-camera matching further, we tracked the number and accuracy
of tracklet associations during training. Figure 4 shows that the number of
cross-camera tracklet pairs grows dramatically whilst the matching accuracy
drops only slightly or moderately along the training. This justifies the positive effect
of cross-camera tracklet association. That is, most pairs are correct true
matches, providing model training with discriminative information for person
appearance variation across camera views. Besides, we also observe further
room for more accurate tracklet association.
Figure 4: The (a) number and (b) precision of cross-camera tracklet pairs
discovered during training.
Tracklet association strategy. We tested the effect of $\epsilon$-NN on top
of $k$-NN (Eq. (5) and (7)) as the tracklet association strategy in STL. We
set $k=1$ and $\epsilon=0.7$, the default setting of our model. Table 6 shows
that using $k$-NN only is inferior to $k$-NN+$\epsilon$-NN. This suggests the
data adaptive benefit of $\epsilon$-NN, particularly in handling unconstrained
raw tracklet data, and verifies our design of leveraging $\epsilon$-NN in
addition to $k$-NN for tracklet association.
Strategy | MARS* Rank-1 | MARS* mAP | Duke-Raw-TKL Rank-1 | Duke-Raw-TKL mAP
---|---|---|---|---
$k$-NN | 51.0 | 32.7 | 51.0 | 37.4
$k$-NN+$\epsilon$-NN | 54.5 | 37.2 | 56.1 | 41.5
Table 6: $k$-NN versus $\epsilon$-NN in tracklet association.
Sensitivity of tracklet association threshold. We tested the model performance
sensitivity of setting the tracklet matching similarity threshold $\epsilon$
(Eq. (5) and (7)). Table 7 shows that re-id accuracies vary with the change of
$\epsilon$ as expected. This is because $\epsilon$ controls what image-to-
tracklet matching pairs are used in objective loss functions during training.
Importantly, the model is not very sensitive to $\epsilon$, with a good value
range (around $0.7\\!\sim\\!0.8$) giving strong model performance. This
robustness is a critical property of our method, since when it is applied to
diverse tracklet data under unconstrained conditions, label supervision is not
available for hyper-parameter cross-validation.
$\epsilon$ | MARS* Rank-1 | MARS* mAP | Duke-Raw-TKL Rank-1 | Duke-Raw-TKL mAP
---|---|---|---|---
0.9 | 48.7 | 30.4 | 48.3 | 33.3
0.8 | 54.2 | 36.2 | 55.3 | 41.3
0.7 | 54.5 | 37.2 | 56.1 | 41.5
0.6 | 50.7 | 34.1 | 47.7 | 35.2
Table 7: Evaluate the tracklet association threshold $\epsilon$.
## 5 Conclusion
We presented a selective tracklet learning (STL) approach, which aims to
address the limitations of both the existing supervised person re-id methods
and unsupervised tracklet learning methods concurrently. Specifically, STL is
able to learn a discriminative and generalisable re-id model from unlabelled
raw tracklet datasets. This eliminates the artificial assumptions of
exhaustive person ID labelling made by supervised re-id methods, and of manual
filtering made by existing unsupervised tracklet learning models. We also introduced an
unconstrained raw tracklet person re-id benchmark, DukeMTMC-Raw-Tracklet.
Extensive experiments show the superiority and robustness advantages of STL
over the state-of-the-art unsupervised learning re-id methods on three
tracklet person re-id benchmarks.
## References
* [1] Slawomir Bak, Peter Carr, and Jean-Francois Lalonde. Domain adaptation through synthesis for unsupervised person re-identification. In Proc. Eur. Conf. Comput. Vis., pages 189–205, 2018.
* [2] Xiaobin Chang, Timothy M Hospedales, and Tao Xiang. Multi-level factorisation net for person re-identification. In Proc. IEEE Conf. Comput. Vis. Pattern Recognit., pages 2109–2118, 2018.
* [3] Yanbei Chen, Xiatian Zhu, and Shaogang Gong. Deep association learning for unsupervised video person re-identification. Proc. Bri. Mach. Vis. Conf., 2018.
* [4] Ying-Cong Chen, Xiatian Zhu, Wei-Shi Zheng, and Jian-Huang Lai. Person re-identification by camera correlation aware feature augmentation. IEEE Trans. Pattern Anal. Mach. Intell., 40(2):392–408, 2018.
* [5] Weijian Deng, Liang Zheng, Qixiang Ye, Guoliang Kang, Yi Yang, and Jianbin Jiao. Image-image domain adaptation with preserved self-similarity and domain-dissimilarity for person re-identification. In Proc. IEEE Conf. Comput. Vis. Pattern Recognit., pages 994–1003, 2018.
* [6] Hehe Fan, Liang Zheng, and Yi Yang. Unsupervised person re-identification: Clustering and fine-tuning. arXiv:1705.10444, 2017.
* [7] Michela Farenzena, Loris Bazzani, Alessandro Perina, Vittorio Murino, and Marco Cristani. Person re-identification by symmetry-driven accumulation of local features. In Proc. IEEE Conf. Comput. Vis. Pattern Recognit., pages 2360–2367, 2010.
* [8] Rohit Girdhar, Georgia Gkioxari, Lorenzo Torresani, Manohar Paluri, and Du Tran. Detect-and-track: Efficient pose estimation in videos. In Proc. IEEE Conf. Comput. Vis. Pattern Recognit., pages 350–359, 2018.
* [9] Shaogang Gong, Marco Cristani, Shuicheng Yan, and Chen Change Loy. Person re-identification. Springer, 2014.
* [10] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In Proc. IEEE Conf. Comput. Vis. Pattern Recognit., pages 770–778, 2016.
* [11] Geoffrey Hinton, Oriol Vinyals, and Jeff Dean. Distilling the knowledge in a neural network. arXiv preprint arXiv:1503.02531, 2015.
* [12] Furqan M Khan and Francois Bremond. Unsupervised data association for metric learning in the context of multi-shot person re-identification. In Proc. IEEE Conf. Adv. Vid. Sig. Surv., pages 256–262, 2016.
* [13] Elyor Kodirov, Tao Xiang, Zhenyong Fu, and Shaogang Gong. Person re-identification by unsupervised $l_{1}$ graph learning. In Proc. Eur. Conf. Comput. Vis., pages 178–195, 2016.
* [14] Elyor Kodirov, Tao Xiang, and Shaogang Gong. Dictionary learning with iterative laplacian regularisation for unsupervised person re-identification. In Proc. Bri. Mach. Vis. Conf., 2015.
* [15] Laura Leal-Taixé, Anton Milan, Ian Reid, Stefan Roth, and Konrad Schindler. Motchallenge 2015: Towards a benchmark for multi-target tracking. arXiv:1504.01942, 2015.
* [16] Minxian Li, Xiatian Zhu, and Shaogang Gong. Unsupervised person re-identification by deep learning tracklet association. In Proc. Eur. Conf. Comput. Vis., pages 737–753, 2018.
* [17] Minxian Li, Xiatian Zhu, and Shaogang Gong. Unsupervised tracklet person re-identification. IEEE Trans. Pattern Anal. Mach. Intell., 2019.
* [18] Wei Li, Rui Zhao, Tong Xiao, and Xiaogang Wang. Deepreid: Deep filter pairing neural network for person re-identification. In Proc. IEEE Conf. Comput. Vis. Pattern Recognit., pages 152–159, 2014.
* [19] Wei Li, Xiatian Zhu, and Shaogang Gong. Person re-identification by deep joint learning of multi-loss classification. In Proc. Int. Jo. Conf. of Artif. Intell., 2017.
* [20] Wei Li, Xiatian Zhu, and Shaogang Gong. Harmonious attention network for person re-identification. In Proc. IEEE Conf. Comput. Vis. Pattern Recognit., pages 2285–2294, 2018.
* [21] Tsung-Yi Lin, Priya Goyal, Ross Girshick, Kaiming He, and Piotr Dollár. Focal loss for dense object detection. In Proc. IEEE Int. Conf. Comput. Vis., pages 2980–2988, 2017.
* [22] Yutian Lin, Xuanyi Dong, Liang Zheng, Yan Yan, and Yi Yang. A bottom-up clustering approach to unsupervised person re-identification. In AAAI Conf. on Art. Intel., 2019.
* [23] Giuseppe Lisanti, Iacopo Masi, Andrew D Bagdanov, and Alberto Del Bimbo. Person re-identification by iterative re-weighted sparse ranking. IEEE Trans. Pattern Anal. Mach. Intell., 37(8):1629–1642, 2015.
* [24] Chunxiao Liu, Chen Change Loy, Shaogang Gong, and Guijin Wang. Pop: Person re-identification post-rank optimisation. In Proc. IEEE Int. Conf. Comput. Vis., pages 441–448, 2013.
* [25] Xiao Liu, Mingli Song, Dacheng Tao, Xingchen Zhou, Chun Chen, and Jiajun Bu. Semi-supervised coupled dictionary learning for person re-identification. In Proc. IEEE Conf. Comput. Vis. Pattern Recognit., pages 3550–3557, 2014.
* [26] Zimo Liu, Dong Wang, and Huchuan Lu. Stepwise metric promotion for unsupervised video person re-identification. In Proc. IEEE Int. Conf. Comput. Vis., pages 2429–2438, 2017.
* [27] James M Lucas and Michael S Saccucci. Exponentially weighted moving average control schemes: properties and enhancements. Technometrics, 32(1):1–12, 1990.
* [28] Xiaolong Ma, Xiatian Zhu, Shaogang Gong, Xudong Xie, Jianming Hu, Kin-Man Lam, and Yisheng Zhong. Person re-identification by unsupervised video matching. Pattern Recognition, 65:197–210, 2017.
* [29] Peixi Peng, Yonghong Tian, Tao Xiang, Yaowei Wang, Massimiliano Pontil, and Tiejun Huang. Joint semantic and latent attribute modelling for cross-class transfer learning. IEEE Trans. Pattern Anal. Mach. Intell., 40(7):1625–1638, 2018.
* [30] Peixi Peng, Tao Xiang, Yaowei Wang, Massimiliano Pontil, Shaogang Gong, Tiejun Huang, and Yonghong Tian. Unsupervised cross-dataset transfer learning for person re-identification. In Proc. IEEE Conf. Comput. Vis. Pattern Recognit., pages 1306–1315, 2016.
* [31] Ergys Ristani, Francesco Solera, Roger Zou, Rita Cucchiara, and Carlo Tomasi. Performance measures and a data set for multi-target, multi-camera tracking. In Workshop of Eur. Conf. Comput. Vis., pages 17–35, 2016.
* [32] Yantao Shen, Hongsheng Li, Tong Xiao, Shuai Yi, Dapeng Chen, and Xiaogang Wang. Deep group-shuffling random walk for person re-identification. In Proc. IEEE Conf. Comput. Vis. Pattern Recognit., pages 2265–2274, 2018.
* [33] Yantao Shen, Hongsheng Li, Shuai Yi, Dapeng Chen, and Xiaogang Wang. Person re-identification with deep similarity-guided graph neural network. In Proc. Eur. Conf. Comput. Vis., September 2018.
* [34] Chunfeng Song, Yan Huang, Wanli Ouyang, and Liang Wang. Mask-guided contrastive attention model for person re-identification. In Proc. IEEE Conf. Comput. Vis. Pattern Recognit., pages 1179–1188, 2018.
* [35] Yumin Suh, Jingdong Wang, Siyu Tang, Tao Mei, and Kyoung Mu Lee. Part-aligned bilinear representations for person re-identification. In Proc. Eur. Conf. Comput. Vis., September 2018.
* [36] Hanxiao Wang, Shaogang Gong, and Tao Xiang. Unsupervised learning of generative topic saliency for person re-identification. In Proc. Bri. Mach. Vis. Conf., 2014.
* [37] Hanxiao Wang, Shaogang Gong, Xiatian Zhu, and Tao Xiang. Human-in-the-loop person re-identification. In Proc. Eur. Conf. Comput. Vis., pages 405–422, 2016.
* [38] Hanxiao Wang, Xiatian Zhu, Tao Xiang, and Shaogang Gong. Towards unsupervised open-set person re-identification. In IEEE Int. Conf. on Img. Proc., pages 769–773, 2016.
* [39] Jingya Wang, Xiatian Zhu, Shaogang Gong, and Wei Li. Transferable joint attribute-identity deep learning for unsupervised person re-identification. In Proc. IEEE Conf. Comput. Vis. Pattern Recognit., pages 2275–2284, 2018.
* [40] Longhui Wei, Shiliang Zhang, Wen Gao, and Qi Tian. Person transfer gan to bridge domain gap for person re-identification. In Proc. IEEE Conf. Comput. Vis. Pattern Recognit., pages 79–88, 2018.
* [41] Zhirong Wu, Yuanjun Xiong, X Yu Stella, and Dahua Lin. Unsupervised feature learning via non-parametric instance discrimination. In Proc. IEEE Conf. Comput. Vis. Pattern Recognit., pages 3733–3742, 2018.
* [42] Tong Xiao, Hongsheng Li, Wanli Ouyang, and Xiaogang Wang. Learning deep feature representations with domain guided dropout for person re-identification. In Proc. IEEE Conf. Comput. Vis. Pattern Recognit., pages 1249–1258, 2016.
* [43] Mang Ye, Xiangyuan Lan, and Pong C Yuen. Robust anchor embedding for unsupervised video person re-identification in the wild. In Proc. Eur. Conf. Comput. Vis., pages 170–186, 2018.
* [44] Mang Ye, Andy J Ma, Liang Zheng, Jiawei Li, and Pong C Yuen. Dynamic label graph matching for unsupervised video re-identification. In Proc. IEEE Int. Conf. Comput. Vis., pages 5142–5150, 2017.
* [45] Hong-Xing Yu, Ancong Wu, and Wei-Shi Zheng. Cross-view asymmetric metric learning for unsupervised person re-identification. In Proc. IEEE Int. Conf. Comput. Vis., pages 994–1002, 2017.
* [46] Ying Zhang, Tao Xiang, Timothy M Hospedales, and Huchuan Lu. Deep mutual learning. In Proc. IEEE Conf. Comput. Vis. Pattern Recognit., 2018.
* [47] Rui Zhao, Wanli Ouyang, and Xiaogang Wang. Unsupervised salience learning for person re-identification. In Proc. IEEE Conf. Comput. Vis. Pattern Recognit., pages 3586–3593, 2013.
* [48] Liang Zheng, Zhi Bie, Yifan Sun, Jingdong Wang, Chi Su, Shengjin Wang, and Qi Tian. Mars: A video benchmark for large-scale person re-identification. In Proc. Eur. Conf. Comput. Vis., pages 868–884, 2016.
* [49] Liang Zheng, Liyue Shen, Lu Tian, Shengjin Wang, Jingdong Wang, and Qi Tian. Scalable person re-identification: A benchmark. In Proc. IEEE Conf. Comput. Vis. Pattern Recognit., pages 1116–1124, 2015.
* [50] Zhedong Zheng, Liang Zheng, and Yi Yang. Unlabeled samples generated by gan improve the person re-identification baseline in vitro. In Proc. IEEE Int. Conf. Comput. Vis., pages 3754–3762, 2017.
* [51] Zhun Zhong, Liang Zheng, Shaozi Li, and Yi Yang. Generalizing a person retrieval model hetero-and homogeneously. In Proc. Eur. Conf. Comput. Vis., pages 172–188, 2018.
# The metastable Mpemba effect corresponds to a non-monotonic temperature
dependence of extractable work
Raphaël Chétrite Laboratoire J A Dieudonné, UMR CNRS 7351, Université de Nice
Sophia Antipolis, Nice, France<EMAIL_ADDRESS>Avinash Kumar, John
Bechhoefer Dept. of Physics, Simon Fraser University, Burnaby, British
Columbia, V5A 1S6, Canada
###### Abstract
The Mpemba effect refers to systems whose thermal relaxation time is a non-
monotonic function of the initial temperature. Thus, a system that is
initially hot cools to a bath temperature more quickly than the same system,
initially warm. In the special case where the system dynamics can be described
by a double-well potential with metastable and stable states, dynamics occurs
in two stages: a fast relaxation to local equilibrium followed by a slow
equilibration of populations in each coarse-grained state. We have recently
observed the Mpemba effect experimentally in such a setting, for a colloidal
particle immersed in water. Here, we show that this metastable Mpemba effect
arises from a non-monotonic temperature dependence of the maximum amount of
work that can be extracted from the local-equilibrium state at the end of
Stage 1.
## I Introduction
A generic consequence of the second law of thermodynamics is that a system,
once perturbed, will tend to relax back to thermal equilibrium. Such
relaxation is typically exponential. To understand why, consider energy
relaxation and recall that the heat equation,
$\displaystyle\partialderivative{T}{t}=\kappa\nabla^{2}T(\bm{r},t)\,,$ (1)
for the temperature field $T$ at position $\bm{r}$ and time $t$ and thermal
diffusivity $\kappa$ has solutions that can be written in the form (we begin
the eigenfunction expansion at $m=2$ to be consistent with the analogous
expansion of the Fokker-Planck solution in Lu and Raz [1])
$\displaystyle
T(\bm{r},t)=T_{\infty}(\bm{r})+\sum_{m=2}^{\infty}a_{m}\,v_{m}(\bm{r})\operatorname{e}^{-\lambda_{m}t}\,.$
(2)
Here, $T_{\infty}(\bm{r})$ is the static temperature-field solution of
Equation (1); it must account for boundary conditions. For $t\to\infty$, an
arbitrary initial condition $T(\bm{r},0)$ will relax to this state. In
Equation (2), the $v_{m}(\bm{r})$ are spatial eigenfunctions, with
corresponding eigenvalues $\lambda_{m}$ and coefficients $a_{m}$, which
represent the projections of the field $[T(\bm{r},0)-T_{\infty}(\bm{r})]$ onto
the corresponding eigenfunction. For long but finite times, all but the
slowest eigenmode will have decayed, and the temperature is, approximately,
$\displaystyle T(\bm{r},t)\approx
T_{\infty}(\bm{r})+a_{2}\,v_{2}(\bm{r})\operatorname{e}^{-\lambda_{2}t}\,,$
(3)
which, indeed, shows a simple exponential decay to $T_{\infty}(\bm{r})$ for a
probe at a fixed position $\bm{r}$.
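As a toy numerical check of Equations (2) and (3), consider a 1D rod with both ends held at the bath temperature (taken as zero); the eigenfunctions are sines and the decay rates grow quadratically with the mode number. The sketch below, with arbitrary parameter choices of our own, verifies that at late times the profile is dominated by the slowest mode. (For Dirichlet boundary conditions the expansion naturally starts at $m=1$; the main text starts at $m=2$ only to match the Fokker-Planck convention of [1].)

```python
import numpy as np

kappa, L = 1.0, 1.0
x = np.linspace(0.0, L, 201)
dx = x[1] - x[0]
T0 = x * (L - x) ** 2                      # arbitrary initial profile, T_infinity = 0

def mode(m):                               # eigenfunction v_m and rate lambda_m
    return np.sin(m * np.pi * x / L), kappa * (m * np.pi / L) ** 2

def T(t, n_modes=50):                      # truncated version of Eq. (2)
    out = np.zeros_like(x)
    for m in range(1, n_modes + 1):
        v, lam = mode(m)
        a = 2.0 / L * np.sum(T0 * v) * dx  # projection coefficient a_m
        out += a * v * np.exp(-lam * t)
    return out

# At a late time the full sum is indistinguishable from the slowest mode (Eq. (3)).
v1, lam1 = mode(1)
a1 = 2.0 / L * np.sum(T0 * v1) * dx
print(np.max(np.abs(T(1.0) - a1 * v1 * np.exp(-lam1 * 1.0))))  # essentially zero
```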
Although exponential decays are typical, anomalous, non-exponential relaxation
is also encountered. Large objects, for example, may have an asymptotic time
scale $\lambda_{2}^{-1}$ that exceeds experimental times, so that it is not
possible to wait “long enough.” Similarly, glassy systems and other complex
materials may have a spectrum of exponents for mechanical and dielectric
relaxation that have not only very long time scales but also many closely
spaced values that are not resolved as a sequence of exponentials. Rather,
they can collectively combine to approximate a power-law or even logarithmic
time decay, with specific details that depend on the history of preparation
[2].
Another class of anomalous systems shows unexpectedly fast relaxation in
certain circumstances. The best-known of these is the observation that,
occasionally, a sample of hot water may cool and begin to freeze more quickly
than a sample of cool or warm water prepared under identical conditions. Based
on the scenario of exponential relaxation sketched above, one’s naive
intuition is that a hotter system will have to “pass through” all intermediate
temperatures and thus take longer to equilibrate. More succinctly, the
observation is that, in some systems, the equilibration time is a non-
monotonic function of the initial temperature: the time a system initially in
equilibrium at a given temperature takes to cool and reach equilibrium with
the bath does not always increase with the initial temperature.
While observations of this phenomenon date back two millennia to the ancient
Greeks [3, 4], its modern study began with observations by Mpemba in the 1960s
[5]. The effect has since been observed in systems such as manganites [6],
clathrates [7], polymers [8, 9] and predicted in simulations of other systems,
including carbon nanotube resonators [10], granular fluids [11], and spin
glasses [12]. In all these Mpemba effects, the relaxation time shows a
surprisingly complicated dependence on the deviation of initial temperature
from equilibrium: increasing and then decreasing, and in some cases increasing
again with increasing deviation. The relaxation time thus does not increase
monotonically with the deviation from equilibrium, as one might naively
expect.
One challenge in studying Mpemba effects is that the systems where they have
been observed or predicted have been rather complicated, with many possible
explanations for the effect. The explanations tend to be complicated and
specific to a particular system. Even water is not as simple as it might seem:
proposed mechanisms include evaporation [13, 14, 15], convection [16],
supercooling [17], dissolved gases [18], and effects arising from hydrogen
bonds [19].
In an effort to understand the Mpemba effect more generically, Lu and Raz
recently proposed an explanation that is linked to the structure of
eigenfunction expansions such as that in Equation (2) [1]. Their work was
formulated for mesoscopic systems that are in the classical regime yet are
small enough that thermal fluctuations make an important contribution to their
dynamics. Such systems may be described by master equations and Fokker-Planck
equations, for finite and continuous state spaces, respectively [20, 21, 22,
23, 24]. For the latter, the Fokker-Planck equation describes the evolution of
the probability density function $p(\bm{x},t)$ for a system described by a
state vector $\bm{x}(t)$.222 In a many-body system, the dimension of $\bm{x}$
can be very large. Its structure is similar to that of Equation (1): its
linearity implies that solutions are also described by an infinite-series,
eigenfunction expansion similar to that in Equation (2). The essence of Lu and
Raz’s explanation is that the projection of the initial state $p(\bm{x},0)$ –
a Gibbs-Boltzmann distribution corresponding to an initial temperature $T$ –
onto the slowest eigenfunction, $a_{2}$ can be non-monotonic in $T$, or,
equivalently, in $\beta^{-1}\equiv k_{\textrm{B}}T$, where
$k_{\textrm{B}}\equiv 1$ (in our units) is Boltzmann’s constant. Such a
consequence implies a Mpemba effect because the long-time limit for the
probability density function has the same form as Equation (3):
$\displaystyle p(\bm{x},t)\approx
g_{\beta_{\textrm{b}}}(\bm{x})+a_{2}(\beta,\beta_{\textrm{b}})\,v_{2,\beta_{\textrm{b}}}(\bm{x})\operatorname{e}^{-\lambda_{2}t}\,,$
(4)
with $g_{\beta_{\textrm{b}}}(\bm{x})$ the Gibbs-Boltzmann distribution for the
system at a temperature $T_{\textrm{b}}$ corresponding to the surrounding
thermal bath with which the system is in contact and can exchange energy. The
coefficient $a_{2}$ is a function of both the initial temperature and bath
temperature:
$\displaystyle
a_{2}(\beta,\beta_{\textrm{b}})=\int\differential{\bm{x}}g_{\beta}(\bm{x})u_{2,\beta_{\textrm{b}}}(\bm{x})\,,$
(5)
where the initial state $p(\bm{x},0)$ is assumed to be in equilibrium at a
higher temperature $\beta^{-1}$ and where $u_{2,\beta_{\textrm{b}}}(\bm{x})$
is the left eigenfunction of the Fokker-Planck operator, which is the dual-
basis element corresponding to the right eigenfunction
$v_{2,\beta_{\textrm{b}}}(\bm{x})$ of the same Fokker-Planck operator. Both
$u_{2}$ and $v_{2}$ are calculated for the Markovian Langevin dynamics
associated with white noise whose covariance is set by the bath temperature,
$\beta_{\textrm{b}}^{-1}$. We need to distinguish between left and right
eigenfunctions because the operator generating Fokker-Planck dynamics is not
self-adjoint, in contrast to the operator generating the heat-diffusion
dynamics discussed in Equation (1). The Mpemba effect then translates to the
non-monotonicity of $a_{2}$ as a function of the initial temperature
$\beta^{-1}$: If a high-temperature initial condition has a smaller
coefficient $a_{2}$, then, in the long-time limit, the system will be closer
to equilibrium than a cool-temperature initial condition with larger $a_{2}$.
This non-monotonicity in $a_{2}$ is easier to establish than the non-
monotonicity of equilibration times that defines the Mpemba effect. The latter
requires either an experiment or, at the very least, repeated numerical
solution of the full Fokker-Planck equation.
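To illustrate how the non-monotonicity of $a_{2}$ can be checked numerically, the rough sketch below discretises a Fokker-Planck generator for an illustrative double-well potential (our own choice, not the experimental potential of [25]), extracts the slowest non-stationary mode via the standard symmetrisation $p=\operatorname{e}^{-\beta_{\textrm{b}}U/2}q$, and evaluates the overlap integral of Equation (5) at several initial temperatures. Boundary handling is approximate, and the overall sign of $a_{2}$ is fixed only up to the arbitrary sign of the eigenvector.

```python
import numpy as np

def U(x):   return x**4 - 2 * x**2 + 0.5 * x   # illustrative tilted double well
def Up(x):  return 4 * x**3 - 4 * x + 0.5      # U'
def Upp(x): return 12 * x**2 - 4               # U''

gamma, beta_b, n = 1.0, 3.0, 600
x = np.linspace(-2.5, 2.5, n); h = x[1] - x[0]
D = gamma / beta_b

# Symmetrised generator H = D d^2/dx^2 - W, W = gamma*beta_b*U'^2/4 - gamma*U''/2,
# obtained from the similarity transform p = exp(-beta_b U / 2) q.
W = gamma * beta_b * Up(x) ** 2 / 4 - gamma * Upp(x) / 2
H = (np.diag(np.full(n - 1, D / h**2), -1)
     + np.diag(np.full(n - 1, D / h**2), 1)
     + np.diag(-2 * D / h**2 - W))
evals, phi = np.linalg.eigh(H)             # eigenvalues ascending; last is ~0
phi2 = phi[:, -2] / np.sqrt(h)             # slowest non-stationary mode
u2 = np.exp(beta_b * U(x) / 2) * phi2      # left eigenfunction u_{2, beta_b}

def a2(beta):                              # overlap integral of Eq. (5)
    g = np.exp(-beta * U(x)); g /= g.sum() * h
    return np.sum(g * u2) * h

for T in (1.0, 2.0, 5.0, 20.0, 1000.0):    # scan initial temperatures
    print(T, a2(1.0 / T))                  # look for non-monotonic |a2|
```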
Inspired by the scenario proposed by Lu and Raz [1], we have explored the
Mpemba effect in a simple, mesoscopic setting that – unlike previous work –
lends itself to quantitative experiments that straightforwardly connect with
theory [25]. In particular, we explored the motion of a single micron-scale
colloidal particle immersed in water and moving in a tilted double-well
potential. The one-dimensional (1D) state space consists of the position
$x(t)$ of the particle. By choosing carefully the tilt of the potential, along
with the energy-barrier height and the offset (asymmetry) of the double-well
potential within a box that confines the particle motion at high temperatures,
we could demonstrate convincingly the existence of the Mpemba effect and
measure the non-monotonic temperature dependence of the $a_{2}$ coefficient.
We even found conditions where $a_{2}=0$. At such a point, the slowest
relaxation dynamics is $\sim\operatorname{e}^{-\lambda_{3}t}$, implying an
exponential speed-up over the generic relaxation dynamics,
$\sim\operatorname{e}^{-\lambda_{2}t}$. This strong Mpemba effect had been
predicted by Klich et al. [26].
Although our recent experimental work gives strong support to the basic
scenario proposed by Lu and Raz, it does not offer good physical insight into
the conditions needed to produce or observe the Mpemba effect. What physical
picture corresponds to the anomalous temperature dependence of the $a_{2}$
coefficient? In this Brief Research Report, we offer a more physical
interpretation of the Mpemba effect explored in our previous work.
## II Thermalization in a double-well potential with metastability
A common feature of experiments showing Mpemba effects is that they involve a
temperature quench: the system is cooled very rapidly. We model this situation
by making the high-temperature initial state an initial condition for dynamics
that take place entirely in contact with a bath of fixed temperature. In
effect, the quench is infinitely fast. The thermalization dynamics are then
given by the Langevin equation
$\displaystyle\dot{x}=-\gamma
U^{\prime}(x)+\sqrt{2\gamma\beta_{\textrm{b}}^{-1}}\,\eta,$ (6)
with $\gamma$ a friction coefficient and $\eta(t)$ Gaussian white noise
modeling thermal fluctuations from the bath, with $\langle\eta(t)\rangle=0$
and $\langle\eta(t)\,\eta(t^{\prime})\rangle=\delta(t-t^{\prime})$. The noise-
strength $2\gamma\beta_{\textrm{b}}^{-1}$ enforces the fluctuation-dissipation
relation [21, 23]. The potential $U(x)$ is a double-well potential with
barrier height $E_{0}\gg{\beta_{\textrm{b}}}^{-1}$ and two coarse-grained
states, denoted $L$ and $R$ in Figure 1A. The range of particle motions is
also constrained to a finite range; the potential is implicitly infinite at
the extremities. Because the potential is tilted, one state has a higher
energy than the other (the difference is $\Delta E$), which makes the system a
toy model for the water-ice
phase transition. However, the energy barrier $E_{0}$, while high enough that
the two states are well defined, is also low enough that many transitions over
the barrier are observed during a typical experiment.
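For concreteness, a minimal Euler-Maruyama simulation of Equation (6) is sketched below; the quartic double-well and all numerical values are our own illustrative choices rather than the experimental potential of [25].

```python
import numpy as np

rng = np.random.default_rng(0)

def U_prime(x):              # U(x) = x^4 - 2x^2 + 0.5x: illustrative tilted double well
    return 4 * x**3 - 4 * x + 0.5

gamma, beta_b, dt = 1.0, 3.0, 1e-3
x = np.full(2000, 1.0)       # ensemble started in the shallower (right) well
for _ in range(20000):       # Euler-Maruyama discretisation of Eq. (6)
    noise = rng.standard_normal(x.size)
    x += -gamma * U_prime(x) * dt + np.sqrt(2 * gamma / beta_b * dt) * noise

print("fraction in left well:", np.mean(x < 0))  # relaxes toward the Gibbs weight
```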
Figure 1 illustrates the case studied by Kumar and Bechhoefer [25], with (A)
showing the potential and (B) the dynamics of a quench from a high
temperature. With a moderately high barrier, both wells have significant
probability for the equilibrium state $g_{\beta_{\textrm{b}}}(x)$ (Figure 1B,
right). For $U(x)$, the barrier $E_{0}=2.0$, the energy difference between
states is $\Delta E=1.3$, and the hot temperature
$\beta_{\textrm{h}}^{-1}=1000$; all quantities are multiplied by
$\beta_{\textrm{b}}$ and are, hence, dimensionless. The separation between
wells was 80 nm.
Figure 1: Two-stage dynamics (A) Tilted double-well potential $U(x)$ with
coarse-grained states $\\{L,R\\}$. The potential includes a box (not shown).
(B) Evolution of the probability density function for position: a high-
temperature equilibrium initial state $g_{\beta}(x)$ (left) has a fast
relaxation to a local equilibrium state
$\rho_{\beta,\beta_{\textrm{b}}}^{\textrm{leq}}(x)$ (middle) and a slow
relaxation to global equilibrium $g_{\beta_{\textrm{b}}}(x)$ at the colder
bath temperature (right).
At a temperature corresponding to $\beta^{-1}$, the equilibrium free energy of
the system is
$\displaystyle
F_{\beta}^{\textrm{eq}}\equiv-\beta^{-1}\ln\left[\int_{-\infty}^{+\infty}\differential{x}\exp\left(-\beta
U(x)\right)\right]\,,$ (7)
and the corresponding equilibrium Gibbs density is
$\displaystyle
g_{\beta}(x)\equiv\exp\left[-\beta\left(U(x)-F_{\beta}^{\textrm{eq}}\right)\right]\,.$
(8)
The metastability of $U$ means that the system evolves on two very different
time scales:
Stage 1 is a fast relaxation to local equilibration. The initial, high-
temperature Gibbs density rapidly evolves to a state that is at local
equilibrium with respect to the bath temperature. A local equilibrium is a
density that is similar locally to $g_{\beta_{\textrm{b}},}$ but with altered
fractions of systems in the left or right wells. Using Bayes’ theorem, we can
write such a local-equilibrium state as
$\displaystyle\rho_{\beta,\beta_{\textrm{b}}}^{\textrm{leq}}(x)$
$\displaystyle=\mathbb{P}\left(\textrm{ be in the left well at
}\beta\right)\mathbb{P}\left(\left.x\right|\textrm{ be in the left well at
}\beta_{\textrm{b}}\right)$ $\displaystyle\quad+\mathbb{P}\left(\textrm{ be in
the right well at }\beta\right)\mathbb{P}\left(\left.x\right|\textrm{ be in
the right well at }\beta_{\textrm{b}}\right)\,.$
More precisely, the local $\beta,\beta_{\textrm{b}}$ equilibrium is the
density
$\displaystyle\rho_{\beta,\beta_{\textrm{b}}}^{\textrm{leq}}(x)$
$\displaystyle=\begin{cases}a_{\textrm{L}}\left(\frac{g_{\beta_{\textrm{b}}}(x)}{\int_{-\infty}^{0}\differential{x^{\prime}}g_{\beta_{\textrm{b}}}(x^{\prime})}\right)&\qquad
x<0\quad\textrm{(left well)}\,,\\\\[12.0pt]
a_{\textrm{R}}\left(\frac{g_{\beta_{\textrm{b}}}(x)}{\int_{0}^{\infty}\differential{x^{\prime}}g_{\beta_{\textrm{b}}}(x^{\prime})}\right)&\qquad
x>0\quad\textrm{(right well)}\,,\end{cases}$ (9)
with $0\leq a_{\textrm{L}}\leq 1$. Choosing $a_{\textrm{L}}+a_{\textrm{R}}=1$
ensures normalization of the probability density.
In a fast quench, we assume that the fraction of initial systems at
equilibrium at the higher temperature $\beta^{-1}$ is unchanged when local
equilibrium is established. Essentially, we ignore the diffusion of
trajectories that start on one side of the barrier and end up on the other at
the end of the transient. In this approximation, the fraction that ends up in
each well corresponds to that of the initial state, $g_{\beta}$. Thus,
$\displaystyle
a_{\textrm{L}}=\int_{-\infty}^{0}\differential{x^{\prime}}g_{\beta}(x^{\prime})\qquad\text{and}\qquad
a_{\textrm{R}}=\int_{0}^{\infty}\differential{x^{\prime}}g_{\beta}(x^{\prime})\,.$
(10)
As shown in Figure 1B, center, the local-equilibrium distribution
$\rho_{\beta,\beta_{\textrm{b}}}^{\textrm{leq}}(x)$ is discontinuous at $x=0$;
higher barriers reduce this discontinuity, which is of order
$\operatorname{e}^{-\beta_{\textrm{b}}E_{0}}\ll 1$.
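To make Stage 1 concrete, the following minimal numerical sketch (not code from this work) builds $\rho_{\beta,\beta_{\textrm{b}}}^{\textrm{leq}}$ from Equations (8)–(10). The specific tilted double-well form of $U(x)$ and the grid are illustrative assumptions chosen to roughly match the stated $E_{0}$ and $\Delta E$, not the exact potential used in the figures:

```python
# Sketch (illustrative, not the authors' code): the local-equilibrium
# density of Equations (9)-(10) on a grid. U(x) is an assumed tilted
# double well with barrier ~E0 at x = 0 and well-energy difference ~dE.
import numpy as np

x = np.linspace(-3.0, 3.0, 4001)   # finite grid plays the role of the box

def U(x, E0=2.0, dE=1.3):
    return E0 * (x**2 - 1.0)**2 + 0.5 * dE * x

def gibbs(beta):
    w = np.exp(-beta * U(x))
    return w / np.trapz(w, x)      # Equation (8), normalized numerically

def local_equilibrium(beta, beta_b):
    g_hot, g_bath = gibbs(beta), gibbs(beta_b)
    left = x < 0
    a_L = np.trapz(g_hot[left], x[left])    # Equation (10)
    Z_L = np.trapz(g_bath[left], x[left])   # bath weight of the left well
    # Equation (9): reweight each well of g_{beta_b} by the hot fractions.
    return np.where(left, a_L * g_bath / Z_L,
                    (1.0 - a_L) * g_bath / (1.0 - Z_L))
```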
Stage 2 is a final relaxation to global equilibrium on a slow time scale: the
overall populations in each well (coarse-grained state) change, and the
density converges to the Gibbs density $g_{\beta_{\textrm{b}}}$. Local
equilibrium is maintained during the evolution, as illustrated
schematically in Figure 1B. In this metastable regime, the equilibration time was
analyzed by Kramers long ago [27, 21, 28, 29]. It also corresponds to the
limit of Equation (4); as a result, the final relaxation is exponential, with
decay rate $\lambda_{2}$.
## III Metastable Mpemba effect
Given this scenario of thermal relaxation as a two-stage process, we can
readily understand how the Mpemba effect can occur. The idea is to follow the
dynamics in the function space of all admissible probability density functions
$p(x,t)$. If we expand the solution in eigenfunctions analogously to Equation
(2), we see that the infinite-dimensional function space is spanned by the
eigenfunctions. To visualize the motion, we project it onto the 2D subspace
spanned by the eigenfunctions $v_{2}(x)$ and $v_{3}(x)$. The system state is
then characterized as a parametric plot of the amplitudes $a_{2}(t)$ and
$a_{3}(t)$. Animations from a 3D projection spanning $a_{2}$–$a_{3}$–$a_{4}$
are available in the supplementary material. A similar geometric plot was used
to explore quenching in an anti-ferromagnetic Ising spin system in [26].
Figure 2: Probability-density dynamics in the plane defined by the $a_{2}$ and
$a_{3}$ coefficients. The red curve $G$ denotes the set of equilibrium
densities, the green curve $G_{\textrm{leq}}$ the set of local-equilibrium
densities. Arrows indicate the slow relaxation along $G_{\textrm{leq}}$ to
global equilibrium, at the intersection with $G$ (denoted by the large hollow
marker with a dot at its center at $T=1$). Gray lines denote the rapid
relaxation from an initial condition (temperature relative to the bath
indicated by a marker along $G$). The time progression of $p(x,t)$, projected
onto the $a_{2}$–$a_{3}$ plane, is from dark to light. Curves are calculated
from the double-well potential described in Kumar and Bechhoefer, with domain
asymmetry $\alpha=3$ (see [25] for definitions).
Figure 2 shows the geometry of trajectories. They are organized about two
static, 1D curves, labeled $G$ and $G_{\textrm{leq}}$. The red curve ($G$)
represents the set of all equilibrium Gibbs-Boltzmann densities, $g_{\beta}$,
for $0\leq\beta<\infty$. It is sometimes known as the quasi-static locus. The
green curve ($G_{\textrm{leq}}$) represents the set of all local-equilibrium
densities of the form of Equation (9), as parametrized by
$a_{\textrm{L}}\in[0,1]$. Both curves are represented as 2D parametric plots
but lie in the full infinite-dimensional space. Both $G$ and
$G_{\textrm{leq}}$ have finite length, in general. (The entire length is not
shown in the figure.) The two curves intersect at $a_{2}=a_{3}=0$, which
describes the global equilibrium $g_{\beta_{\textrm{b}}}$ with respect to the
bath (large hollow marker with dot). The apparent crossing near $a_{2}\approx
0.4$ is spurious, as the 3D projections in the supplement show.
The dynamical trajectories are represented by the variously shaded gray
curves. At time $t=0$, the systems are in equilibrium along the red curve at a
variety of temperatures $\\{1,1.2,1.5,3,50,100,1000\\}\times T_{\textrm{b}}$,
which are indicated by black markers. The curves then move rapidly towards the
green curve (local equilibrium). The time course is suggested by the dark-to-
light gradient. Once they reach the vicinity of $G_{\textrm{leq}}$, they
closely follow this green curve back to the global-equilibrium state.
Within this representation, we note the “arrival point” of each trajectory
when it “hits” $G_{\textrm{leq}}$. For small temperatures (1, 1.2, 1.5, 3),
the distance between this arrival point and the global-equilibrium state
increases monotonically with the initial temperature. For larger temperatures (50, 100, 1000),
however, the distance decreases until, at $T=1000T_{\textrm{b}}$, it nearly
vanishes (denoting the strong Mpemba effect). Along $G_{\textrm{leq}}$, the
system is in the limit described by Equation (4) and relaxes exponentially to
global equilibrium. Relaxation along $G_{\textrm{leq}}$ therefore must be
monotonic with the distance from global equilibrium: trajectories that
arrive on this curve farther from global equilibrium take longer to relax.
In the Appendix, we show that this notion of “distance” along
$G_{\textrm{leq}}$ can be expressed as the Kullback-Leibler divergence
$D_{\textrm{KL}}$ between the local equilibrium density
$\rho_{\beta,\beta_{\textrm{b}}}^{\textrm{leq}}$ given in Equation (9) and the
global equilibrium density $g_{\beta_{\textrm{b}}}$. In particular,
$D_{\textrm{KL}}[\rho_{\beta,\beta_{\textrm{b}}}^{\textrm{leq}},g_{\beta_{\textrm{b}}}]$
is a monotonic function of $a_{\textrm{L}}$ (defined in Equation 10), which is
the natural parameter for the manifold $G_{\textrm{leq}}$.
Now we can understand how the (metastable) Mpemba effect can arise. In the
example shown in Figure 2, the distance along $G_{\textrm{leq}}$ initially
increases with $T$ and so does the total equilibration time. But then this
distance decreases for higher temperatures, leading to the Mpemba effect. We
note that in our approximation, the time to traverse the initial stage is much
shorter than the time to relax along the green curve, so that variations in
the length of the initial trajectory are irrelevant.
If the bath temperature were changed at a finite rate (rather than a hot
system being quenched directly into the bath), then the dynamics would be
different. For example, if the system is very slowly cooled from the initial
temperature to final bath temperature, the trajectory would follow the quasi-
static locus (red curve $G$) and no Mpemba effect would be possible. Having
shown that no Mpemba effect is possible with an infinitely slow quench and
that the effect can be observed in the limit of an infinitely rapid quench, we
can conclude that the Mpemba effect requires a sufficiently fast temperature
quench.
## IV Metastable Mpemba effect in terms of extractable work
Our main goal is to express the criterion for the Mpemba effect in more
physical terms. For the metastable setting described above, we will find such
a criterion in terms of a thermodynamic work. We recall that the second law of
thermodynamics for a system in contact with a single thermal bath of
temperature $\beta_{\textrm{b}}^{-1}$ can be expressed in terms of work and
free energy rather than entropy:
$\displaystyle W\geq\triangle F_{\textrm{neq},\beta_{\textrm{b}}}\,,$ (11)
where $W$ is the work received by the system and $\triangle F_{\textrm{neq}}$
denotes the difference in nonequilibrium free energies (final $-$ initial
values). See, for example, Gavrilov et al. [30], Equation 5 and associated
references.
We recall also that the nonequilibrium free energy generalizes the familiar
notion of free energy to systems out of equilibrium. Thus, in analogy to
Equation 7, we define
$\displaystyle F_{\textrm{neq},\beta_{\textrm{b}}}\left(\rho\right)\equiv
E(\rho)-\beta_{\textrm{b}}^{-1}S\left(\rho\right)\,,$ (12)
where the average energy $E(\rho)$ and Gibbs-Shannon entropy $S(\rho)$ are
given by
$\displaystyle E(\rho)$
$\displaystyle\equiv\int_{-\infty}^{+\infty}\differential{x}\rho(x)U(x)$
$\displaystyle S(\rho)$
$\displaystyle\equiv-\int_{-\infty}^{+\infty}\differential{x}\rho(x)\ln\rho(x)\,.$
(13)
These expressions reduce to their usual definitions for
$\rho=g_{\beta_{\textrm{b}}}$ but can be evaluated, as well, over
nonequilibrium densities.
In the formulation of the second law of Equation (11), the initial and final
states are arbitrary. In our case, the initial state is the (approximate)
local equilibrium reached at the end of Stage 1. In the final state, the
system is in equilibrium with the bath.
Physically, $-\triangle F_{\textrm{neq}}$ represents the maximum amount of work
that may be extracted from the nonequilibrium isothermal protocol [31]. We
will refer to this quantity as the extractable work:
$\displaystyle W_{\textrm{ex}}\equiv-\triangle
F_{\textrm{neq},\beta_{\textrm{b}}}\,.$ (14)
In the Appendix, we show that the difference in nonequilibrium free energies
$\triangle F_{\textrm{neq}}$ may be expressed as a Kullback-Leibler
divergence. Explicitly,
$\displaystyle\triangle F_{\textrm{neq}}$
$\displaystyle=-\left[F\left(\rho_{\beta,\beta_{\textrm{b}}}^{\textrm{leq}}\right)-F\left(g_{\beta_{\textrm{b}}}\right)\right]$
$\displaystyle=-\beta_{\textrm{b}}^{-1}D_{\textrm{KL}}\left(\rho_{\beta,\beta_{\textrm{b}}}^{\textrm{leq}},g_{\beta_{\textrm{b}}}\right)\,.$
(15)
In our set-up, the extractable work between the “intermediate” time (end of
Stage 1) where
$F_{\textrm{neq},\beta_{\textrm{b}}}=F_{\textrm{neq},\beta_{\textrm{b}}}\left(\rho_{\beta,\beta_{\textrm{b}}}^{\textrm{leq}}\right)$,
and the final time of the slow evolution (where
$F_{\textrm{neq},\beta_{\textrm{b}}}=F_{\textrm{eq},\beta_{\textrm{b}}})$, is
given by Equation (15):
$\displaystyle
W_{\textrm{ex}}\left(\beta,\beta_{\textrm{b}}\right)=\beta_{\textrm{b}}^{-1}D_{\textrm{KL}}\left(\rho_{\beta,\beta_{\textrm{b}}}^{\textrm{leq}},g_{\beta_{\textrm{b}}}\right)\,.$
(16)
In Sec. III and Figure 2, we saw that
$D_{\textrm{KL}}(\rho_{\beta,\beta_{\textrm{b}}}^{\textrm{leq}},g_{\beta_{\textrm{b}}})$
can be non-monotonic as a function of $\beta$. We thus conclude that there can
be a non-monotonic dependence on $\beta$ of the function
$\displaystyle\beta\rightarrow
W_{\textrm{ex}}\left(\beta,\beta_{\textrm{b}}\right)\,.$ (17)
This is our main result: If the metastable Mpemba effect occurs, then the
extractable work from the local-equilibrium state at the end of Stage 1 is
non-monotonic in the initial temperature $\beta^{-1}$. Figure 3 shows an
example, again calculated for the potential considered by Kumar and Bechhoefer
[25].
In addition to having a clear physical interpretation,
$W_{\textrm{ex}}(\beta,\beta_{\textrm{b}})$ is easily calculated as a simple
numerical integral of equilibrium Gibbs-Boltzmann distributions for two
temperatures. By contrast, to establish the non-monotonicity of $a_{2}$, the
criterion of Lu and Raz [1], one must first find the left eigenfunction
$u_{2}$ by solving the boundary-value problem associated with the adjoint
Fokker-Planck operator.
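To illustrate how simple this calculation is, here is a minimal, self-contained sketch (not the authors' code) that evaluates Equation (16) through the two-state form of Equation (18); the double-well potential is the same illustrative assumption used in the Stage-1 sketch above, and temperatures are in units of the bath temperature:

```python
# Sketch: W_ex(beta, beta_b) of Equation (16) via the coarse-grained
# D_KL of Equation (18), using only 1D integrals of Gibbs densities.
import numpy as np

x = np.linspace(-3.0, 3.0, 4001)
U = 2.0 * (x**2 - 1.0)**2 + 0.5 * 1.3 * x   # assumed tilted double well

def gibbs(beta):
    w = np.exp(-beta * U)
    return w / np.trapz(w, x)

def extractable_work(beta, beta_b=1.0):
    left = x < 0
    a_L = np.trapz(gibbs(beta)[left], x[left])     # Equation (10)
    a_Ls = np.trapz(gibbs(beta_b)[left], x[left])  # a_L^* of Equation (18)
    a_R, a_Rs = 1.0 - a_L, 1.0 - a_Ls
    d_kl = a_L * np.log(a_L / a_Ls) + a_R * np.log(a_R / a_Rs)
    return d_kl / beta_b                           # Equation (16)

# Non-monotonicity of W_ex in T signals the metastable Mpemba effect;
# whether it appears depends on the potential (the asymmetric double
# well of [25] shows it; this symmetric-width toy well need not).
for T in (1.2, 1.5, 3.0, 50.0, 100.0, 1000.0):
    print(T, extractable_work(1.0 / T))
```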
Figure 3: Extractable work is a non-monotonic function of initial temperature
$T=\beta^{-1}$ for the double-well potential of Figure 1A.
## V Discussion
The anomalous relaxation process known as the Mpemba effect is defined by a
non-monotonic dependence of relaxation time on initial temperature. Lu and Raz
[1] showed that an equivalent criterion is the non-monotonicity of the $a_{2}$
projection coefficient derived from an associated Fokker-Planck equation. In
this Brief Research Report, we have shown that, for a 1D potential $U(x)$ with
a metastable and a stable minimum, the Mpemba effect can be viewed as a simple
two-stage relaxation in the function space of all admissible probability
densities. In the fast Stage 1, the system relaxes to a local equilibrium. In
the slow Stage 2, the populations in the two coarse-grained states
equilibrate. In such a situation, we have shown that the Mpemba effect is
associated with a non-monotonic temperature dependence of the maximum
extractable work of the local-equilibrium state reached at the end of Stage
1. Relative to the $a_{2}$ coefficient, extractable work is a much more
physical quantity that is also much easier to calculate.
The physical picture offered here, for a double-well potential, meets our
goal: We can relate the existence of the Mpemba effect to a non-monotonicity
of the extractable work. However, we have not carefully characterized the
range of validity of the approximations used in our analysis. For example, in
writing Equation (9), we assume that the fraction of initial systems that
start in either state ($x<0$ or $x>0$) is preserved after the initial fast
transient. In fact, even during the brief transient, calculating the fraction
of systems in each region is subtle, a point emphasized by van Kampen [32] in
a careful study that would be the starting point for a more detailed
theoretical investigation.
Although our arguments assume a 1D potential with two local states, they
generalize easily to many dimensions and many local states. In such cases, the
state vector has a large number of dimensions, and solving the Fokker-Planck
equation or even calculating its eigenfunctions is difficult. But calculating
the extractable work remains easy. Of course, our arguments do not imply that
the Mpemba effect can occur only in potentials with metastable states and
leave open the possibility for other scenarios.
## Appendix
1\. Monotonicity of Kullback-Leibler divergence along $G_{\textrm{leq}}$. The
Kullback-Leibler divergence [33] can be written in terms of Equation (9) as
$\displaystyle
D_{\textrm{KL}}\left(\rho_{\beta,\beta_{\textrm{b}}}^{\textrm{leq}},g_{\beta_{\textrm{b}}}\right)$
$\displaystyle=\int_{-\infty}^{\infty}\differential{x}\rho_{\beta,\beta_{\textrm{b}}}^{\textrm{leq}}(x)\ln\left[\frac{\rho_{\beta,\beta_{\textrm{b}}}^{\textrm{leq}}(x)}{g_{\beta_{\textrm{b}}}(x)}\right]$
$\displaystyle=\int_{-\infty}^{0}\differential{x}a_{\textrm{L}}\left(\frac{g_{\beta_{\textrm{b}}}(x)}{\int_{-\infty}^{0}\differential{x^{\prime}}g_{\beta_{\textrm{b}}}(x^{\prime})}\right)\ln\frac{a_{\textrm{L}}\,g_{\beta_{\textrm{b}}}(x)}{\left[\int_{-\infty}^{0}\differential{x^{\prime}}g_{\beta_{\textrm{b}}}(x^{\prime})\right]g_{\beta_{\textrm{b}}}(x)}+\int_{0}^{\infty}\differential{x}\cdots$
$\displaystyle=a_{\textrm{L}}\ln\left(\frac{a_{\textrm{L}}}{a_{\textrm{L}}^{*}}\right)+a_{\textrm{R}}\ln\left(\frac{a_{\textrm{R}}}{a_{\textrm{R}}^{*}}\right)$
$\displaystyle=D_{\textrm{KL}}\left[\matrixquantity(a_{\textrm{L}}\\\ a_{\textrm{R}}),\matrixquantity(a_{\textrm{L}}^{*}\\\ a_{\textrm{R}}^{*})\right]\,.$ (18)
In the second line, we omit the corresponding $a_{\textrm{R}}$ terms. In the
third line,
$a_{\textrm{L}}^{*}\equiv\int_{-\infty}^{0}\differential{x}g_{\beta_{\textrm{b}}}(x)$
and
$a_{\textrm{R}}^{*}\equiv\int_{0}^{\infty}\differential{x}g_{\beta_{\textrm{b}}}(x)$.
In the fourth line, the vectors represent two-state probability distributions.
Note that in the “short Stage 1” approximation of Equation (10), the final
expression for $D_{\textrm{KL}}$ involves two coarse-grained probability
distributions, with $\smallmatrixquantity(a_{\textrm{L}}\\\ a_{\textrm{R}})$
depending only on $\beta$ and $\smallmatrixquantity(a_{\textrm{L}}^{*}\\\
a_{\textrm{R}}^{*})$ only on $\beta_{\textrm{b}}$.
We then investigate the monotonicity of
$D_{\textrm{KL}}\left[\matrixquantity(a_{\textrm{L}}\\\
a_{\textrm{R}}),\matrixquantity(a_{\textrm{L}}^{*}\\\
a_{\textrm{R}}^{*})\right]$ by differentiating:
$\displaystyle\derivative{D_{\textrm{KL}}}{a_{\textrm{L}}}=\ln\left(\frac{a_{\textrm{L}}}{a_{\textrm{R}}}\right)-\ln\left(\frac{a_{\textrm{L}}^{*}}{a_{\textrm{R}}^{*}}\right)\,,$
(19)
which is positive for $a_{\textrm{L}}>a_{\textrm{L}}^{*}$ and negative for
$a_{\textrm{L}}<a_{\textrm{L}}^{*}$. (Recall that
$a_{\textrm{L}}+a_{\textrm{R}}=a_{\textrm{L}}^{*}+a_{\textrm{R}}^{*}=1$.)
Thus, $D_{\textrm{KL}}\left(\rho^{\textrm{leq}},g\right)$ is monotonic in
$a_{\textrm{L}}$ on either side of equilibrium.
2\. Proof of Equation (15). The relationship is well known [34] and holds for
any distribution, including ones describing local equilibrium. Below, to
simplify notation, we write $\rho^{\textrm{leq}}$ for
$\rho_{\beta,\beta_{\textrm{b}}}^{\textrm{leq}}$ and $g$ for
$g_{\beta_{\textrm{b}}}$.
$\displaystyle D_{\textrm{KL}}\left(\rho^{\textrm{leq}},g\right)$
$\displaystyle=\int_{-\infty}^{\infty}\differential{x}\rho^{\textrm{leq}}(x)\ln\left[\frac{\rho^{\textrm{leq}}(x)}{g(x)}\right]$
$\displaystyle=\int_{-\infty}^{\infty}\differential{x}\rho^{\textrm{leq}}(x)\ln\rho^{\textrm{leq}}(x)-\int_{-\infty}^{\infty}\differential{x}\rho^{\textrm{leq}}(x)\ln
g(x)$
$\displaystyle=-S\left(\rho^{\textrm{leq}}\right)-\int_{-\infty}^{\infty}\differential{x}\rho^{\textrm{leq}}(x)\left[-\beta_{\textrm{b}}U(x)+\beta_{\textrm{b}}F\left(g\right)\right]$
$\displaystyle=-S\left(\rho^{\textrm{leq}}\right)+\beta_{\textrm{b}}\left[E\left(\rho^{\textrm{leq}}\right)\right]-\beta_{\textrm{b}}F\left(g\right)$
$\displaystyle=\beta_{\textrm{b}}\left[F\left(\rho^{\textrm{leq}}\right)-F\left(g\right)\right]\,,$
which is equivalent to Equation (15).
## Funding
JB and AK were supported by NSERC Discovery and RTI Grants (Canada). RC
acknowledges support from the Pacific Institute for Mathematical Sciences
(PIMS), the French Centre National de la Recherche Scientifique (CNRS), which
made possible his visit to Vancouver, and the project RETENU
ANR-20-CE40-0005-01 of the French National Research Agency (ANR).
## References
* Lu and Raz [2017] Lu Z, Raz O. Nonequilibrium thermodynamics of the Markovian Mpemba effect and its inverse. Proc. Natl. Acad. Sci. USA 114 (2017) 5083–5088.
* Amir et al. [2012] Amir A, Oreg Y, Imry Y. On relaxations and aging of various glasses. Proc. Natl. Acad. Sci. USA 109 (2012) 1850–1855.
* Aristotle [1923] Aristotle. Meterologica (Oxford: Clarendon Press), E. W. Webster, Book 1, Part 12 edn. (1923).
* Ross [1981] Ross WD. Aristotle’s Metaphysics (Clarendon Press) (1981).
* Mpemba and Osborne [1969] Mpemba EB, Osborne DG. Cool? Phys. Educ. 4 (1969) 172–175.
* Chaddah et al. [2010] Chaddah P, Dash S, Kumar K, Banerjee A. Overtaking while approaching equilibrium. arXiv:1011.3598 (2010).
* Ahn et al. [2016] Ahn YH, Kang H, Koh DY, Lee H. Experimental verifications of Mpemba-like behaviors of clathrate hydrates. Korean J. Chem. Eng. 33 (2016) 1903–1907.
* Lorenzo et al. [2006] Lorenzo AT, Arnal ML, Sanchez JJ, Müller AJ. Effect of annealing time on the self-nucleation behavior of semicrystalline polymers. J. Polym. Sci. Part B: Polym. Phys. 44 (2006) 1738–1750.
* Hu et al. [2018] Hu C, Li J, Huang S, Li H, Luo C, Chen J, et al. Conformation directed Mpemba effect on polylactide crystallization. Cryst. Growth Des. 18 (2018) 5757–5762.
* Greaney et al. [2011] Greaney PA, Lani G, Cicero G, Grossman JC. Mpemba-like behavior in carbon nanotube resonators. Metall. Mater. Trans. A 42 (2011) 3907–3912.
* Lasanta et al. [2017] Lasanta A, Reyes FV, Prados A, Santos A. When the hotter cools more quickly: Mpemba effect in granular fluids. Phys. Rev. Lett. 119 (2017) 148001.
* Baity-Jesi et al. [2019] Baity-Jesi M, Calore E, Cruz A, Fernandez LA, Gil-Narvión JM, Gordillo-Guerrero A, et al. The Mpemba effect in spin glasses is a persistent memory effect. Proc. Natl. Acad. Sci. USA 116 (2019) 15350–15355.
* Kell [1969] Kell GS. The freezing of hot and cold water. Am. J. Phys. 37 (1969) 564–565.
* Vynnycky and Mitchell [2010] Vynnycky M, Mitchell S. Evaporative cooling and the Mpemba effect. Heat Mass Transfer 46 (2010) 881–890.
* Mirabedin and Farhadi [2017] Mirabedin SM, Farhadi F. Numerical investigation of solidification of single droplets with and without evaporation mechanism. Int. J. Refrig. 73 (2017) 219–225.
* Vynnycky and Kimura [2015] Vynnycky M, Kimura S. Can natural convection alone explain the Mpemba effect? Int. J. Heat Mass Transfer 80 (2015) 243–255.
* Auerbach [1995] Auerbach D. Supercooling and the Mpemba effect: When hot water freezes quicker than cold. Am. J. Phys. 63 (1995) 882–885.
* Wojciechowski et al. [1988] Wojciechowski B, Owczarek I, Bednarz G. Freezing of aqueous solutions containing gases. Cryst. Res. Technol. 23 (1988) 843–848.
* Zhang et al. [2014] Zhang X, Huang Y, Ma Z, Zhou Y, Zhou J, Zheng W, et al. Hydrogen-bond memory and water-skin supersolidity resolving the Mpemba paradox. Phys. Chem. Chem. Phys. 16 (2014) 22995–23002.
* Risken [1989] Risken H. The Fokker-Planck Equation: Methods of Solution and Applications (Springer), 2nd edn. (1989).
* van Kampen [2007] van Kampen NG. Stochastic Processes in Physics and Chemistry (Elsevier), 3rd edn. (2007).
* Hänggi and Thomas [1982] Hänggi P, Thomas H. Stochastic processes: Time evolution, symmetries and linear response. Phys. Rep. 88 (1982) 207–319.
* Gardiner [2009] Gardiner CW. Stochastic Methods: A Handbook for the Natural and Social Sciences (Springer), 4th edn. (2009).
* Seifert [2012] Seifert U. Stochastic thermodynamics, fluctuation theorems and molecular machines. Rep. Prog. Phys. 75 (2012) 126001.
* Kumar and Bechhoefer [2020] Kumar A, Bechhoefer J. Exponentially faster cooling in a colloidal system. Nature 584 (2020) 64–68.
* Klich et al. [2019] Klich I, Raz O, Hirschberg O, Vucelja M. Mpemba index and anomalous relaxation. Phys. Rev. X 9 (2019) 021060.
* Kramers [1940] Kramers HA. Brownian motion in a field of force and the diffusion model of chemical reactions. Physica A 7 (1940) 284–304.
* Hänggi [1990] Hänggi P. Reaction-rate theory: fifty years after Kramers. Rev. Mod. Phys. 62 (1990) 251–341.
* Berglund [2013] Berglund N. Kramers’ law: Validity, derivations and generalisations. Markov Processes Relat. Fields 19 (2013) 459–490.
* Gavrilov et al. [2017] Gavrilov M, Chétrite R, Bechhoefer J. Direct measurement of nonequilibrium system entropy is consistent with Gibbs-Shannon form. PNAS 114 (2017) 11097–11102.
* Parrondo et al. [2015] Parrondo JMR, Horowitz JM, Sagawa T. Thermodynamics of information. Nature Phys. 11 (2015) 131–139.
* van Kampen [1977] van Kampen NG. A soluble model for diffusion in a bistable potential. J. Stat. Phys. 17 (1977) 71–87.
* Cover and Thomas [2006] Cover T, Thomas J. Elements of Information Theory (New York: John Wiley & Sons, Inc.), 2nd edn. (2006).
* Shaw [1984] Shaw R. The Dripping Faucet as a Model Chaotic System (Aerial Press) (1984).
# Free Lunch for Few-shot Learning:
Distribution Calibration
Shuo Yang1, Lu Liu2, Min Xu1
1School of Electrical and Data Engineering, University of Technology Sydney,
2Australian Artificial Intelligence Institute, University of Technology Sydney
{shuo.yang<EMAIL_ADDRESS><EMAIL_ADDRESS>
Corresponding author.
###### Abstract
Learning from a limited number of samples is challenging since the learned
model can easily become overfitted based on the biased distribution formed by
only a few training examples. In this paper, we calibrate the distribution of
these few-sample classes by transferring statistics from the classes with
sufficient examples. Then an adequate number of examples can be sampled from
the calibrated distribution to expand the inputs to the classifier. We assume
every dimension in the feature representation follows a Gaussian distribution
so that the mean and the variance of the distribution can be borrowed from
those of similar classes whose statistics are better estimated with an adequate number
of samples. Our method can be built on top of off-the-shelf pretrained feature
extractors and classification models without extra parameters. We show that a
simple logistic regression classifier trained using the features sampled from
our calibrated distribution can outperform the state-of-the-art accuracy on
three datasets (5% improvement on miniImageNet compared to the next best). The
visualization of these generated features demonstrates that our calibrated
distribution is an accurate estimation. The code is available at:
https://github.com/ShuoYang-1998/Few_Shot_Distribution_Calibration
## 1 Introduction
Table 1: The class mean similarity (“mean sim”) and class variance similarity (“var sim”) between Arctic fox and different classes.

class | mean sim | var sim
---|---|---
white wolf | 97% | 97%
malamute | 85% | 78%
lion | 81% | 70%
meerkat | 78% | 70%
jellyfish | 46% | 26%
orange | 40% | 19%
beer bottle | 34% | 11%
Learning from a limited number of training samples has drawn increasing
attention due to the high cost of collecting and annotating a large amount of
data. Researchers have developed algorithms to improve the performance of
models that have been trained with very few data. Finn et al. (2017); Snell et
al. (2017) train models in a meta-learning fashion so that the model can adapt
quickly on tasks with only a few training samples available. Hariharan &
Girshick (2017); Wang et al. (2018) try to synthesize data or features by
learning a generative model to alleviate the data insufficiency problem. Ren
et al. (2018) propose to leverage unlabeled data and predict pseudo labels to
improve the performance of few-shot learning.
While most previous works focus on developing stronger models, scant attention
has been paid to the properties of the data itself. Naturally, as the number
of samples grows, the ground-truth distribution can be more accurately
uncovered, and models trained with a wide coverage of data can generalize well
during evaluation. On the other hand, a model trained with only a few samples
tends to overfit by minimizing the training loss over those samples. These
phenomena are illustrated in
Figure 1. This biased distribution based on a few examples can damage the
generalization ability of the model since it is far from mirroring the ground
truth distribution from which test cases are sampled during evaluation.
Here, we consider calibrating this biased distribution into a more accurate
approximation of the ground truth distribution. In this way, a model trained
with inputs sampled from the calibrated distribution can generalize over a
broader range of data from a more accurate distribution rather than only
fitting itself to those few samples. Instead of calibrating the distribution
of the original data space, we try to calibrate the distribution in the
feature space, which has much lower dimensions and is easier to calibrate
(Xian et al. (2018)). We assume every dimension in the feature vectors follows
a Gaussian distribution and observe that similar classes usually have similar
mean and variance of the feature representations, as shown in Table 1.
Thus, the mean and variance of the Gaussian distribution can be transferred
across similar classes (Salakhutdinov et al. (2012)). Meanwhile, the
statistics can be estimated more accurately when there are adequate samples
for this class. Based on these observations, we reuse the statistics from
many-shot classes and transfer them to better estimate the distribution of the
few-shot classes according to their class similarity. More samples can be
generated according to the estimated distribution which provides sufficient
supervision for training the classification model.
In the experiments, we show that a simple logistic regression classifier
trained with our strategy can achieve state-of-the-art accuracy on three
datasets. Our distribution calibration strategy can be paired with any
classifier and feature extractor with no extra learnable parameters. Training
with samples selected from the calibrated distribution can achieve 12%
accuracy gain compared to the baseline which is only trained with the few
samples given in a 5way1shot task. We also visualize the calibrated
distribution and show that it is an accurate approximation of the ground truth
that can better cover the test cases.
Figure 1: Training a classifier from few-shot features makes the classifier
overfit to the few examples (Left). Classifier trained with features sampled
from calibrated distribution has better generalization ability (Right).
## 2 Related Works
Few-shot classification is a challenging machine learning problem and
researchers have explored the idea of learning to learn or meta-learning to
improve the quick adaptation ability to alleviate the few-shot challenge. One
of the most general algorithms for meta-learning is the optimization-based
algorithm. Finn et al. (2017) and Li et al. (2017) proposed to learn how to
optimize the gradient descent procedure so that the learner can have a good
initialization, update direction, and learning rate. For the classification
problem, researchers proposed simple but effective algorithms based on metric
learning. MatchingNet (Vinyals et al., 2016) and ProtoNet (Snell et al., 2017)
learned to classify samples by comparing the distance to the representatives
of each class. Our distribution calibration and feature sampling procedure
does not include any learnable parameters and the classifier is trained in a
traditional supervised learning way.
Another line of algorithms is to compensate for the insufficient number of
available samples by generation. Most methods use the idea of Generative
Adversarial Networks (GANs) (Goodfellow et al., 2014) or autoencoder
(Rumelhart et al., 1986) to generate samples (Zhang et al. (2018); Chen et al.
(2019b); Schwartz et al. (2018); Gao et al. (2018)) or features (Xian et al.
(2018); Zhang et al. (2019)) to augment the training set. Specifically, Zhang
et al. (2018) and Xian et al. (2018) proposed to synthesize data by
introducing an adversarial generator conditioned on tasks. Zhang et al. (2019)
tried to learn a variational autoencoder to approximate the distribution and
predict labels based on the estimated statistics. The autoencoder can also
augment samples by projecting between the visual space and the semantic space
(Chen et al., 2019b) or encoding the intra-class deformations (Schwartz et
al., 2018). Liu et al. (2019b) and Liu et al. (2019a) propose to generate
features through the class hierarchy. While these methods can generate extra
samples or features for training, they require the design of a complex model
and loss function to learn how to generate. However, our distribution
calibration strategy is simple and does not need extra learnable parameters.
Data augmentation is a traditional and effective way of increasing the number
of training samples. Qin et al. (2020) and Antoniou & Storkey (2019) proposed
the use of traditional data augmentation techniques to construct pretext
tasks for unsupervised few-shot learning. Wang et al. (2018) and Hariharan &
Girshick (2017) leveraged the general idea of data augmentation: they designed
a hallucination model to generate the augmented version of the image with
different choices for the model’s input, i.e., an image and a noise (Wang et
al., 2018) or the concatenation of multiple features (Hariharan & Girshick,
2017). Park et al. (2020); Wang et al. (2019); Liu et al. (2020b) tried to
augment feature representations by leveraging intra-class variance. These
methods learn to augment from the original samples or their feature
representations, while we estimate the class-level distribution, which
eliminates the inductive bias from a single sample and provides more diverse
generations from the calibrated distribution.
## 3 Main Approach
In this section, we introduce the few-shot classification problem definition
in Section 3.1 and details of our proposed approach in Section 3.2.
### 3.1 Problem Definition
We follow a typical few-shot classification setting. Given a dataset with
data-label pairs
$\mathcal{D}=\left\\{\left({\bm{x}}_{i},y_{i}\right)\right\\}$ where
${\bm{x}}_{i}\in\mathbb{R}^{d}$ is the feature vector of a sample and
$y_{i}\in C$ is its label, with $C$ denoting the set of classes. This set of classes is
divided into base classes $C_{b}$ and novel classes $C_{n}$, where $C_{b}\cap
C_{n}=\emptyset$ and $C_{b}\cup C_{n}=C$. The goal is to train a model on the
data from the base classes so that the model can generalize well on tasks
sampled from the novel classes. In order to evaluate the fast adaptation
ability or the generalization ability of the model, there are only a few
available labeled samples for each task $\mathcal{T}$. The most common way to
build a task is called an N-way-K-shot task (Vinyals et al. (2016)), where N
classes are sampled from the novel set and only K (e.g., 1 or 5) labeled
samples are provided for each class. The few available labeled data are called
support set $\mathcal{S}=\\{({\bm{x}}_{i},y_{i})\\}_{i=1}^{{N}\times{K}}$ and
the model is evaluated on another query set
$\mathcal{Q}=\\{({\bm{x}}_{i},y_{i})\\}_{i={N}\times{K}+1}^{{N}\times{K}+N\times{q}}$,
where every class in the task has $q$ test cases. Thus, the performance of a
model is evaluated as the averaged accuracy on (the query set of) multiple
tasks sampled from the novel classes.
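For concreteness, a minimal sketch of this episode-sampling protocol follows; the container names and the query size $q$ are illustrative assumptions, not quantities fixed by the paper:

```python
# Illustrative sketch: sample one N-way-K-shot task from the novel classes.
# `features_by_class` maps a novel-class id to an (n, d) feature array.
import numpy as np

def sample_task(features_by_class, novel_classes, N=5, K=1, q=15, seed=None):
    rng = np.random.default_rng(seed)
    classes = rng.choice(novel_classes, size=N, replace=False)
    support, query = [], []
    for label, c in enumerate(classes):
        idx = rng.permutation(len(features_by_class[c]))[: K + q]
        feats = features_by_class[c][idx]
        support.append((feats[:K], np.full(K, label)))  # support set S
        query.append((feats[K:], np.full(q, label)))    # query set Q
    return support, query
```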
### 3.2 Distribution Calibration
As introduced in Section 3.1, the base classes have a sufficient amount of
data while the evaluation tasks sampled from the novel classes only have a
limited number of labeled samples. The statistics of the distribution for the
base class can be estimated more accurately compared to the estimation based
on few-shot samples, which is an ill-posed problem. As shown in Table 1, we
observe that if we assume the feature distribution is Gaussian, the mean and
variance with respect to each class are correlated with the semantic similarity
between classes. With this in mind, the statistics can be transferred from the
base classes to the novel classes if we learn how similar the two classes are.
In the following sections, we discuss how we calibrate the distribution
estimation of the classes with only a few samples (Section 3.2.2) with the
help of the statistics of the base classes (Section 3.2.1). We will also
elaborate on how we leverage the calibrated distribution to improve the
performance of few-shot learning (Section 3.2.3).
Note that our distribution calibration strategy operates at the feature level
and is agnostic to the feature extractor. Thus, it can be built on top of any
pretrained feature extractors without further costly fine-tuning. In our
experiments, we use the pretrained WideResNet (Zagoruyko & Komodakis, 2016)
following previous work (Mangla et al. (2020)). The WideResNet is trained to
classify the base classes, along with a self-supervised pretext task to learn
the general-purpose representations suitable for image understanding tasks.
Please refer to their paper for more details on training the feature
extractor.
Algorithm 1 Training procedure for an N-way-K-shot task
0: Support set features $\mathcal{S}=({\bm{x}}_{i},y)_{i=1}^{N\times K}$
0: Base classes’ statistics $\\{{\bm{\mu}}_{i}\\}_{i=1}^{|C_{b}|}$,
$\\{{\bm{\Sigma}}_{i}\\}_{i=1}^{|C_{b}|}$
1: Transform $({\bm{x}}_{i})_{i=1}^{N\times K}$ with Tukey’s Ladder of Powers.
2: for $({\bm{x}}_{i},y_{i})\in\mathcal{S}$ do
3: Calibrate the mean ${\bm{\mu}}^{\prime}$ and the covariance
${\bm{\Sigma}}^{\prime}$ for class $y_{i}$ using ${\bm{x}}_{i}$ as Equation 6.
4: Sample features for class $y_{i}$ from the calibrated distribution as
Equation 7.
5: end for
6: Train a classifier using both support set features and all sampled
features.
#### 3.2.1 Statistics of the base classes
We assume the feature distribution of base classes is Gaussian. The mean of
the feature vector from a base class $i$ is calculated as the mean of every
single dimension in the vector:
${\bm{\mu}}_{i}=\frac{\sum_{j=1}^{n_{i}}{\bm{x}}_{j}}{n_{i}},$ (1)
where ${\bm{x}}_{j}$ is a feature vector of the $j$-th sample from the base
class $i$ and $n_{i}$ is the total number of samples in class $i$. As the
feature vector ${\bm{x}}_{j}$ is multi-dimensional, we use covariance for a
better representation of the variance between any pair of elements in the
feature vector. The covariance matrix ${\bm{\Sigma}}_{i}$ for the features
from class $i$ is calculated as:
${{\bm{\Sigma}}_{i}}=\frac{1}{n_{i}-1}\sum_{j=1}^{n_{i}}\left({\bm{x}}_{j}-{{\bm{\mu}}_{i}}\right)\left({\bm{x}}_{j}-{{\bm{\mu}}_{i}}\right)^{T}.$
(2)
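A minimal sketch of this computation follows; the container name `features_by_class` is an illustrative assumption:

```python
# Sketch of Equations (1)-(2): per-class mean and covariance of the
# base-class features. `features_by_class` maps a base-class id to an
# (n_i, d) array of feature vectors.
import numpy as np

def base_statistics(features_by_class):
    means, covs = {}, {}
    for c, X in features_by_class.items():
        means[c] = X.mean(axis=0)           # Equation (1)
        covs[c] = np.cov(X, rowvar=False)   # Equation (2), 1/(n_i - 1) norm
    return means, covs
```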
#### 3.2.2 Calibrating statistics of the novel classes
Here, we consider an N-way-K-shot task sampled from the novel classes.
Tukey’s Ladder of Powers Transformation
To make the feature distribution more Gaussian-like, we first transform the
features of the support set and query set in the target task using Tukey’s
Ladder of Powers transformation (Tukey (1977)), a family of power
transformations that reduces the skewness of a distribution and makes it more
Gaussian-like. It is formulated as:
$\mathbf{\tilde{x}}=\left\\{\begin{array}[]{ll}{\bm{x}}^{\lambda}&\text{ if
}\lambda\neq 0\\\ \log({\bm{x}})&\text{ if }\lambda=0\\\ \end{array}\right.$
(3)
where $\lambda$ is a hyper-parameter to adjust how to correct the
distribution. The original feature can be recovered by setting $\lambda$ as 1.
Decreasing $\lambda$ makes the distribution less positively skewed and vice
versa.
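A minimal sketch of this transformation, assuming non-negative features (they come from a post-ReLU layer; see Section 4.1.3), so the power and log are well defined:

```python
# Sketch of Equation (3): Tukey's Ladder of Powers transformation.
import numpy as np

def tukey_transform(x, lam=0.5):
    # lam = 1 recovers the original features; lam = 0.5 is the paper's choice.
    return np.power(x, lam) if lam != 0 else np.log(x)
```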
Calibration through statistics transfer
Using the statistics from the base classes introduced in Section 3.2.1, we
transfer the statistics from the base classes which are estimated more
accurately on sufficient data to the novel classes. The transfer is based on
the Euclidean distance between the feature space of the novel classes and the
mean of the features from the base classes ${\bm{\mu}}_{i}$ as computed in
Equation 1. Specifically, we select the top $k$ base classes with the closest
distance to the feature of a sample $\tilde{{\bm{x}}}$ from the support set:
$\displaystyle{\mathbb{S}}_{d}=\\{-\|{\bm{\mu}}_{i}-\tilde{{\bm{x}}}\|^{2}~{}|~{}i\in
C_{b}\\},$ (4) $\displaystyle{\mathbb{S}}_{N}=\\{i\
|-\|{\bm{\mu}}_{i}-\tilde{{\bm{x}}}\|^{2}\in topk({\mathbb{S}}_{d})\\},$ (5)
where $topk(\cdot)$ is an operator to select the top elements from the input
distance set ${\mathbb{S}}_{d}$. ${\mathbb{S}}_{N}$ stores the $k$ nearest
base classes with respect to a feature vector $\tilde{{\bm{x}}}$. Then, the
mean and covariance of the distribution is calibrated by the statistics from
the nearest base classes:
${\bm{\mu}}^{\prime}=\frac{\sum_{i\in{\mathbb{S}}_{N}}{\bm{\mu}}_{i}+\tilde{{\bm{x}}}}{k+1},{\bm{\Sigma}}^{\prime}=\frac{\sum_{i\in{\mathbb{S}}_{N}}{\bm{\Sigma}}_{i}}{k}+\alpha,$
(6)
where $\alpha$ is a hyper-parameter that determines the degree of dispersion
of features sampled from the calibrated distribution.
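A minimal sketch of this calibration step for one transformed support feature (container names are illustrative; the defaults $k=2$ and $\alpha=0.21$ follow Section 4.1.3):

```python
# Sketch of Equations (4)-(6): calibrate mean and covariance for one
# transformed support feature x_tilde from its k nearest base classes.
# `means`/`covs` are the base statistics computed as in Section 3.2.1;
# alpha is added to every element of the averaged covariance.
import numpy as np

def calibrate(x_tilde, means, covs, k=2, alpha=0.21):
    # Equations (4)-(5): top-k base classes by squared Euclidean distance.
    nearest = sorted(means, key=lambda c: np.sum((means[c] - x_tilde) ** 2))[:k]
    mu = (sum(means[c] for c in nearest) + x_tilde) / (k + 1)  # Equation (6)
    sigma = sum(covs[c] for c in nearest) / k + alpha
    return mu, sigma
```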
For few-shot learning with more than one shot, the aforementioned distribution
calibration procedure is undertaken multiple times, each time using one
feature vector from the support set. This avoids the bias
provided by one specific sample and potentially achieves more diverse and
accurate distribution estimation. Thus, for simplicity, we denote the
calibrated distribution as a set of statistics. For a class $y\in C_{n}$, we
denote the set of statistics as
${\mathbb{S}}_{y}=\\{({\bm{\mu}}^{{}^{\prime}}_{1},{\bm{\Sigma}}^{{}^{\prime}}_{1}),...,({\bm{\mu}}^{{}^{\prime}}_{K},{\bm{\Sigma}}^{{}^{\prime}}_{K})\\}$,
where ${\bm{\mu}}^{{}^{\prime}}_{i}$, ${\bm{\Sigma}}^{{}^{\prime}}_{i}$ are
the calibrated mean and covariance, respectively, computed based on the $i$-th
feature in the support set of class $y$. Here, the size of the set is the
value of $K$ for an N-way-K-shot task.
#### 3.2.3 How to leverage the calibrated distribution?
With a set of calibrated statistics ${\mathbb{S}}_{y}$ for class $y$ in a
target task, we generate a set of feature vectors with label $y$ by sampling
from the calibrated Gaussian distributions:
${\mathbb{D}}_{y}=\\{({\bm{x}},y)|{\bm{x}}\sim\mathcal{N}({\bm{\mu}},{\bm{\Sigma}}),\forall({\bm{\mu}},{\bm{\Sigma}})\in{\mathbb{S}}_{y}\\}.$
(7)
Here, the total number of generated features per class is a hyperparameter,
and the features are distributed equally across the calibrated distributions
in ${\mathbb{S}}_{y}$. The generated features, along with the original support
set features for a few-shot task, then serve as the
training data for a task-specific classifier. We train the classifier for a
task by minimizing the cross-entropy loss over both the features of its
support set $\mathcal{S}$ and the generated features ${\mathbb{D}}_{y}$:
$\displaystyle\ell=\sum_{({\bm{x}},y)\sim\tilde{\mathcal{S}}\cup{\mathbb{D}}_{y},\,y\in\mathcal{Y}^{\mathcal{T}}}-\log\Pr(y|{\bm{x}};\theta),$
(8)
where $\mathcal{Y}^{\mathcal{T}}$ is the set of classes for the task
$\mathcal{T}$. $\mathcal{\tilde{S}}$ denotes the support set with features
transformed by Tukey’s Ladder of Powers transformation and the classifier
model is parameterized by $\theta$.
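A minimal sketch of this sampling-and-training step (names are illustrative; the logistic regression uses scikit-learn, and 750 generated features per class follows Section 4.1.3):

```python
# Sketch of Equation (7) plus classifier training per Equation (8):
# sample features from each calibrated Gaussian in S_y, then fit a
# logistic regression on support + generated features.
import numpy as np
from sklearn.linear_model import LogisticRegression

def train_task_classifier(support_x, support_y, stats_by_class,
                          n_generated=750, seed=0):
    rng = np.random.default_rng(seed)
    X, y = [support_x], [support_y]
    for label, stats in stats_by_class.items():  # stats = [(mu, Sigma), ...]
        per = n_generated // len(stats)          # split equally across S_y
        for mu, sigma in stats:
            X.append(rng.multivariate_normal(mu, sigma, size=per))
            y.append(np.full(per, label))
    X, y = np.concatenate(X), np.concatenate(y)
    return LogisticRegression().fit(X, y)        # minimizes Equation (8)
```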
Table 2: 5way1shot and 5way5shot classification accuracy (%) on miniImageNet and CUB with 95% confidence intervals. The numbers in bold have intersecting confidence intervals with the most accurate method.

Methods | miniImageNet 5way1shot | miniImageNet 5way5shot | CUB 5way1shot | CUB 5way5shot
---|---|---|---|---
Optimization-based | | | |
MAML (Finn et al. (2017)) | $48.70\pm 1.84$ | $63.10\pm 0.92$ | $50.45\pm 0.97$ | $59.60\pm 0.84$
Meta-SGD (Li et al. (2017)) | $50.47\pm 1.87$ | $64.03\pm 0.94$ | $53.34\pm 0.97$ | $67.59\pm 0.82$
LEO (Rusu et al. (2019)) | $61.76\pm 0.08$ | $77.59\pm 0.12$ | - | -
E3BM (Liu et al. (2020c)) | $63.80\pm 0.40$ | $80.29\pm 0.25$ | - | -
Metric-based | | | |
Matching Net (Vinyals et al. (2016)) | $43.56\pm 0.84$ | $55.31\pm 0.73$ | $56.53\pm 0.99$ | $63.54\pm 0.85$
Prototypical Net (Snell et al. (2017)) | $54.16\pm 0.82$ | $73.68\pm 0.65$ | $72.99\pm 0.88$ | $86.64\pm 0.51$
Baseline++ (Chen et al. (2019a)) | $51.87\pm 0.77$ | $75.68\pm 0.63$ | $67.02\pm 0.90$ | $83.58\pm 0.54$
Variational Few-shot (Zhang et al. (2019)) | $61.23\pm 0.26$ | $77.69\pm 0.17$ | - | -
Negative-Cosine (Liu et al. (2020a)) | $62.33\pm 0.82$ | $80.94\pm 0.59$ | $72.66\pm 0.85$ | $89.40\pm 0.43$
Generation-based | | | |
MetaGAN (Zhang et al. (2018)) | $52.71\pm 0.64$ | $68.63\pm 0.67$ | - | -
Delta-Encoder (Schwartz et al. (2018)) | $59.9$ | $69.7$ | $69.8$ | $82.6$
TriNet (Chen et al. (2019b)) | $58.12\pm 1.37$ | $76.92\pm 0.69$ | $69.61\pm 0.46$ | $84.10\pm 0.35$
Meta Variance Transfer (Park et al. (2020)) | - | $67.67\pm 0.70$ | - | $80.33\pm 0.61$
Maximum Likelihood with DC (Ours) | $66.91\pm 0.17$ | $80.74\pm 0.48$ | $77.22\pm 0.14$ | $89.58\pm 0.27$
SVM with DC (Ours) | $\textbf{67.31}\pm\textbf{0.83}$ | $\textbf{82.30}\pm\textbf{0.34}$ | $\textbf{79.49}\pm\textbf{0.33}$ | $\textbf{90.26}\pm\textbf{0.98}$
Logistic Regression with DC (Ours) | $\textbf{68.57}\pm\textbf{0.55}$ | $\textbf{82.88}\pm\textbf{0.42}$ | $\textbf{79.56}\pm\textbf{0.87}$ | $\textbf{90.67}\pm\textbf{0.35}$
Table 3: 5way1shot and 5way5shot classification accuracy (%) on tieredImageNet (Ren et al., 2018). The numbers in bold have intersecting confidence intervals with the most accurate method.

Methods | 5way1shot | 5way5shot
---|---|---
Matching Net (Vinyals et al. (2016)) | $68.50\pm 0.92$ | $80.60\pm 0.71$
Prototypical Net (Snell et al. (2017)) | $65.65\pm 0.92$ | $83.40\pm 0.65$
LEO (Rusu et al. (2019)) | $66.33\pm 0.05$ | $82.06\pm 0.08$
E3BM (Liu et al. (2020c)) | $71.20\pm 0.40$ | $85.30\pm 0.30$
DeepEMD (Zhang et al., 2020) | $71.16\pm 0.87$ | $86.03\pm 0.58$
Maximum Likelihood with DC (Ours) | $75.92\pm 0.60$ | $87.84\pm 0.65$
SVM with DC (Ours) | $\textbf{77.93}\pm\textbf{0.12}$ | $\textbf{89.72}\pm\textbf{0.37}$
Logistic Regression with DC (Ours) | $\textbf{78.19}\pm\textbf{0.25}$ | $\textbf{89.90}\pm\textbf{0.41}$
## 4 Experiments
In this section, we answer the following questions:
* •
How does our distribution calibration strategy perform compared to the state-
of-the-art methods?
* •
What does calibrated distribution look like? Is it an accurate approximation
for this class?
* •
How does Tukey’s Ladder of Powers transformation interact with the feature
generations? How important is each in relation to performance?
### 4.1 Experimental Setup
#### 4.1.1 Datasets
We evaluate our distribution calibration strategy on miniImageNet (Ravi &
Larochelle (2017)), tieredImageNet (Ren et al. (2018)) and CUB (Welinder et
al. (2010)). miniImageNet and tieredImageNet have a broad range of classes
including various animals and objects while CUB is a more fine-grained dataset
that includes various species of birds. Datasets with different levels of
granularity may have different distributions for their feature space. We want
to show the effectiveness and generality of our strategy on all three
datasets.
miniImageNet is derived from the ILSVRC-12 dataset (Russakovsky et al., 2014). It
contains 100 diverse classes with 600 samples per class. The image size is
$84\times 84\times 3$. We follow the splits used in previous works (Ravi &
Larochelle, 2017), which split the dataset into 64 base classes, 16 validation
classes, and 20 novel classes.
tieredImageNet is a larger subset of the ILSVRC-12 dataset (Russakovsky et al.,
2014), which contains 608 classes sampled from a hierarchical category
structure. Each class belongs to one of 34 higher-level categories sampled
from the high-level nodes of ImageNet. The average number of images in
each class is 1281. We use 351, 97, and 160 classes for training, validation,
and test, respectively.
CUB is a fine-grained few-shot classification benchmark. It contains 200
different classes of birds with a total of 11,788 images of size $84\times
84\times 3$. Following previous works (Chen et al., 2019a), we split the
dataset into 100 base classes, 50 validation classes, and 50 novel classes.
#### 4.1.2 Evaluation Metric
We use the top-1 accuracy as the evaluation metric to measure the performance
of our method. We report the accuracy on 5way1shot and 5way5shot settings for
miniImageNet, tieredImageNet and CUB. The reported results are the averaged
classification accuracy over 10,000 tasks.
#### 4.1.3 Implementation Details
For the feature extractor, we use the WideResNet (Zagoruyko & Komodakis, 2016)
trained following previous work (Mangla et al. (2020)). For each dataset, we
train the feature extractor with base classes and test the performance using
novel classes. Note that the feature representation is extracted from the
penultimate layer (with a ReLU activation function) from the feature
extractor, thus the values are all non-negative so that the inputs to Tukey’s
Ladder of Powers transformation in Equation 3 are valid. At the distribution
calibration stage, we compute the base class statistics and transfer them to
calibrate novel class distribution for each dataset. We use the LR and SVM
implementation of scikit-learn (Pedregosa et al. (2011)) with the default
settings. We use the same hyperparameter value for all datasets except for
$\alpha$. Specifically, the number of generated features is 750; $k=2$ and
$\lambda=0.5$. $\alpha$ is 0.21, 0.21 and 0.3 for miniImageNet, tieredImageNet
and CUB, respectively.
### 4.2 Comparison to State-of-the-art
Table 2 and Table 3 present the 5way1shot and 5way5shot classification
results of our method on miniImageNet, tieredImageNet and CUB. We compare our
method with three groups of few-shot learning methods: optimization-based,
metric-based, and generation-based. Our method can be built on top of any
classifier, and we use two popular and simple classifiers, namely SVM and LR,
to prove the effectiveness of our method. Simple linear classifiers equipped
with our method perform better than state-of-the-art few-shot classification
methods and achieve the best performance on the 1-shot and 5-shot settings of
miniImageNet, tieredImageNet and CUB. The performance of our distribution
calibration surpasses the state-of-the-art generation-based method by 10% in
the 5way1shot setting, which shows that our method handles extremely low-shot
classification tasks better. Compared to other generation-based methods, which
require designing a generative model with extra training costs on learnable
parameters, a simple machine learning classifier with DC is simpler, more
effective, and more flexible, and can be equipped with any feature extractor
and classifier model structure.
Specifically, we show three variants, i.e., Maximum Likelihood with DC, SVM
with DC, and Logistic Regression with DC, in Table 2 and Table 3. A simple
maximum likelihood classifier based on the calibrated distribution can
outperform previous baselines, and training an SVM classifier or a Logistic
Regression classifier using samples from the calibrated distribution can
further improve the performance.
Figure 2: t-SNE visualization of our distribution estimation. Different colors represent different classes. ‘$\bigstar$’ represents support set features, ‘x’ in figure (d) represents query set features, and ‘$\blacktriangle$’ in figures (b)(c) represents generated features.
Figure 3: Left: Accuracy when increasing the power in Tukey’s transformation when training with (red) or without (blue) the generated features. Right: Accuracy when increasing the number of generated features with the features transformed by Tukey’s transformation (red) and without Tukey’s transformation (blue).
Table 4: Ablation study on miniImageNet 5way1shot and 5way5shot showing accuracy (%) with 95% confidence intervals.

Tukey transformation | Training with generated features | 5way1shot | 5way5shot
---|---|---|---
✗ | ✗ | $56.37\pm 0.68$ | $79.03\pm 0.51$
✓ | ✗ | $64.30\pm 0.53$ | $81.33\pm 0.35$
✗ | ✓ | $63.70\pm 0.38$ | $82.26\pm 0.73$
✓ | ✓ | $\textbf{68.57}\pm\textbf{0.55}$ | $\textbf{82.88}\pm\textbf{0.42}$
### 4.3 Visualization of Generated Samples
We show what the calibrated distribution looks like by visualizing the
generated features sampled from the distribution. In Figure 2, we show the
t-SNE representation (van der Maaten & Hinton (2008)) of the original support
set (a), the generated features (b,c) as well as the query set (d). Based on
the calibrated distribution, the sampled features form a Gaussian distribution
and more samples (c) can have a more comprehensive representation of the
distribution. Due to the limited number of examples in the support set, only 1
in this case, the samples from the query set usually cover a greater area and
are a mismatch with the support set. This mismatch can be fixed to some extent
by the generated features, i.e., the generated features in (c) can overlap
areas of the query set. Thus, training with these generated features can
alleviate the mismatch between the distribution estimated only from the few-
shot samples and the ground truth distribution.
### 4.4 Applicability of distribution calibration
Applying distribution calibration on different backbones
Our distribution calibration strategy is agnostic to backbones / feature
extractors. Table 5 shows the consistent performance boost when applying
distribution calibration on different feature extractors, i.e., four
convolutional layers (conv4), six convolutional layers (conv6), resnet18,
WRN28 and WRN28 trained with rotation loss. Distribution calibration achieves
around 10% accuracy improvement compared to the backbones trained with
different baselines.
Table 5: 5way1shot classification accuracy (%) on miniImageNet with different backbones.

Backbones | without DC | with DC
---|---|---
conv4 (Chen et al., 2019a) | $42.11\pm 0.71$ | $\textbf{54.62}\pm\textbf{0.64}$ ($\uparrow\textbf{12.51}$)
conv6 (Chen et al., 2019a) | $46.07\pm 0.26$ | $\textbf{57.14}\pm\textbf{0.45}$ ($\uparrow\textbf{11.07}$)
resnet18 (Chen et al., 2019a) | $52.32\pm 0.82$ | $\textbf{61.50}\pm\textbf{0.47}$ ($\uparrow\textbf{9.18}$)
WRN28 (Mangla et al., 2020) | $54.53\pm 0.56$ | $\textbf{64.38}\pm\textbf{0.63}$ ($\uparrow\textbf{9.85}$)
WRN28 + Rotation Loss (Mangla et al., 2020) | $56.37\pm 0.68$ | $\textbf{68.57}\pm\textbf{0.55}$ ($\uparrow\textbf{12.20}$)
Applying distribution calibration on other baselines
A variety of works can benefit from training with the features generated by
our distribution calibration strategy. We apply our distribution calibration
strategy on two simple few-shot classification algorithms, Baseline (Chen et
al., 2019a) and Baseline++ (Chen et al., 2019a). Table 6 shows that our
distribution calibration brings over 10% accuracy improvement to both.
Table 6: 5way1shot classification accuracy (%) on miniImageNet with different baselines using distribution calibration.

Method | without DC | with DC
---|---|---
Baseline (Chen et al., 2019a) | $42.11\pm 0.71$ | $\textbf{54.62}\pm\textbf{0.64}$ ($\uparrow\textbf{12.51}$)
Baseline++ (Chen et al., 2019a) | $48.24\pm 0.75$ | $\textbf{61.24}\pm\textbf{0.37}$ ($\uparrow\textbf{13.00}$)
### 4.5 Effects of feature transformation and training with generated
features
Ablation Study
Table 4 shows the performance when our model is trained without Tukey’s Ladder
of Powers transformation for the features as in Equation 3 and when it is
trained without the generated features as in Equation 7. It is clear that
there is a severe decline in performance of over 10% if both are not used in
the 5way1shot setting. The ablation of either one results in a performance
drop of around 5% in the 5way1shot setting.
Choices of Power for Tukey’s Ladder of Powers Transformation
The left side of Figure 3 shows the 5way1shot accuracy when choosing different
powers for the Tukey’s transformation in Equation 3 when training the
classifier with the generated features (red) and without (blue). Note that
when the power $\lambda$ equals 1, the transformation keeps the original
feature representations. There is a consistent general tendency for training
with and without the generated features, and in both cases we found
$\lambda=0.5$ to be the optimal choice. With Tukey’s transformation, the
distribution of query set features in target tasks becomes more aligned with
the calibrated Gaussian distribution, which benefits the classifier trained on
features sampled from the calibrated distribution.
Number of generated features
The right side of Figure 3 analyzes whether more generated features result in
consistent improvement in both cases, namely when the features of support and
query set are transformed by Tukey’s transformation (red) and when they are
not (blue). We found that when the number of generated features is below 500,
both cases can benefit from more generated features. However, when more
features are sampled, the performance of the classifier tested on
untransformed features begins to decline. By training with the generated
samples, the simple logistic regression classifier has a 12% relative
performance improvement in a 1-shot classification setting.
### 4.6 Other Hyper-parameters
We select the hyperparameters based on the performance of the validation set.
The number $k$ of base classes used to calibrate the novel class distribution
in Equation 5 is set to 2. Figure 4 shows the effect of different values of $k$.
The $\alpha$ in Equation 6 is a constant added on each element of the
estimated covariance matrix, which can determine the degree of dispersion of
features sampled from the calibrated distributions. An appropriate value of
$\alpha$ can ensure a good decision boundary for the classifier. Different
datasets have different statistics and an appropriate value of $\alpha$ may
vary for different datasets. Figure 5 explores the effect of $\alpha$ on all
three datasets, i.e. miniImageNet, tieredImageNet and CUB. We observe that in
each dataset, the performance on the validation set and the novel (testing)
set generally shows the same tendency, which indicates that the variance is
dataset-dependent and is not overfitting to a specific set.
Figure 4: The effect of different values of k.
Figure 5: The effect of different values of $\alpha$.
## 5 Conclusion and future works
We propose a simple but effective distribution calibration strategy for few-
shot classification. Without complex generative models, loss functions and
extra parameters to learn, a simple logistic regression trained with features
generated by our strategy outperforms the current state-of-the-art methods by
$\sim 5\%$ on miniImageNet. The calibrated distribution is visualized and
demonstrates an accurate estimation of the feature distribution. Future work
will explore the applicability of distribution calibration on more problem
settings, such as multi-domain few-shot classification, and more methods, such
as metric-based meta-learning algorithms. We provide the generalization error
analysis of the proposed Distribution Calibration method in Yang et al.
(2021).
## References
* Antoniou & Storkey (2019) Antreas Antoniou and Amos J. Storkey. Assume, augment and learn: Unsupervised few-shot meta-learning via random labels and data augmentation. _CoRR_ , 2019.
* Chen et al. (2019a) Wei-Yu Chen, Yen-Cheng Liu, Zsolt Kira, Yu-Chiang Frank Wang, and Jia-Bin Huang. A closer look at few-shot classification. In _ICLR_ , 2019a.
* Chen et al. (2019b) Zitian Chen, Yanwei Fu, Yinda Zhang, Yu-Gang Jiang, Xiangyang Xue, and Leonid Sigal. Multi-level semantic feature augmentation for one-shot learning. _TIP_ , 28(9):4594–4605, 2019b.
* Finn et al. (2017) Chelsea Finn, Pieter Abbeel, and Sergey Levine. Model-agnostic meta-learning for fast adaptation of deep networks. In _ICML_ , 2017.
* Gao et al. (2018) Hang Gao, Zheng Shou, Alireza Zareian, Hanwang Zhang, and Shih-Fu Chang. Low-shot learning via covariance-preserving adversarial augmentation networks. In _NeurIPS_ , 2018.
* Goodfellow et al. (2014) Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. Generative adversarial nets. In _NeurIPS_ , 2014.
* Hariharan & Girshick (2017) Bharath Hariharan and Ross Girshick. Low-shot visual recognition by shrinking and hallucinating features. In _ICCV_ , 2017.
* Li et al. (2017) Zhenguo Li, Fengwei Zhou, Fei Chen, and Hang Li. Meta-sgd: Learning to learn quickly for few shot learning. _CoRR_ , 2017.
* Liu et al. (2020a) Bin Liu, Yue Cao, Yutong Lin, Qi Li, Zheng Zhang, Mingsheng Long, and Han Hu. Negative margin matters: Understanding margin in few-shot classification. In _ECCV_ , 2020a.
* Liu et al. (2020b) Jialun Liu, Yifan Sun, Chuchu Han, Zhaopeng Dou, and Wenhui Li. Deep representation learning on long-tailed data: A learnable embedding augmentation perspective. In _CVPR_ , June 2020b.
* Liu et al. (2019a) Lu Liu, Tianyi Zhou, Guodong Long, Jing Jiang, Lina Yao, and Chengqi Zhang. Prototype propagation networks (PPN) for weakly-supervised few-shot learning on category graph. In _IJCAI_ , 2019a.
* Liu et al. (2019b) Lu Liu, Tianyi Zhou, Guodong Long, Jing Jiang, and Chengqi Zhang. Learning to propagate for graph meta-learning. In _NeurIPS_ , 2019b.
* Liu et al. (2020c) Yaoyao Liu, Bernt Schiele, and Qianru Sun. An ensemble of epoch-wise empirical bayes for few-shot learning. In _ECCV_ , 2020c.
* Mangla et al. (2020) Puneet Mangla, Nupur Kumari, Abhishek Sinha, Mayank Singh, Balaji Krishnamurthy, and Vineeth N Balasubramanian. Charting the right manifold: Manifold mixup for few-shot learning. In _WACV_ , 2020.
* Park et al. (2020) Seong-Jin Park, Seungju Han, Ji-won Baek, Insoo Kim, Juhwan Song, Hae Beom Lee, Jae-Joon Han, and Sung Ju Hwang. Meta variance transfer: Learning to augment from the others. In _ICML_ , 2020.
* Pedregosa et al. (2011) F. Pedregosa, G. Varoquaux, A. Gramfort, V. Michel, B. Thirion, O. Grisel, M. Blondel, P. Prettenhofer, R. Weiss, V. Dubourg, J. Vanderplas, A. Passos, D. Cournapeau, M. Brucher, M. Perrot, and E. Duchesnay. Scikit-learn: Machine learning in Python. _Journal of Machine Learning Research_ , 12:2825–2830, 2011.
* Qin et al. (2020) Tiexin Qin, Wenbin Li, Yinghuan Shi, and Yang Gao. Diversity helps: Unsupervised few-shot learning via distribution shift-based data augmentation, 2020.
* Ravi & Larochelle (2017) Sachin Ravi and Hugo Larochelle. Optimization as a model for few-shot learning. In _ICLR_ , 2017.
* Ren et al. (2018) Mengye Ren, Eleni Triantafillou, Sachin Ravi, Jake Snell, Kevin Swersky, Joshua B Tenenbaum, Hugo Larochelle, and Richard S Zemel. Meta-learning for semi-supervised few-shot classification. In _ICLR_ , 2018.
* Rumelhart et al. (1986) David E. Rumelhart, Geoffrey E. Hinton, and Ronald J. Williams. Learning Representations by Back-propagating Errors. _Nature_ , 323:533–536, 1986.
* Russakovsky et al. (2014) Olga Russakovsky, Jia Deng, Hao Su, Jonathan Krause, Sanjeev Satheesh, Sean Ma, Zhiheng Huang, Andrej Karpathy, Aditya Khosla, Michael S. Bernstein, Alexander C. Berg, and Fei-Fei Li. Imagenet large scale visual recognition challenge. _CoRR_ , abs/1409.0575, 2014.
* Rusu et al. (2019) Andrei A. Rusu, Dushyant Rao, Jakub Sygnowski, Oriol Vinyals, Razvan Pascanu, Simon Osindero, and Raia Hadsell. Meta-learning with latent embedding optimization. In _ICLR_ , 2019.
* Salakhutdinov et al. (2012) Ruslan Salakhutdinov, Joshua Tenenbaum, and Antonio Torralba. One-shot learning with a hierarchical nonparametric bayesian model. In _ICML workshop_ , 2012.
* Schwartz et al. (2018) Eli Schwartz, Leonid Karlinsky, Joseph Shtok, Sivan Harary, Mattias Marder, Abhishek Kumar, Rogerio Feris, Raja Giryes, and Alex Bronstein. Delta-encoder: an effective sample synthesis method for few-shot object recognition. In _NeurIPS_ , 2018.
* Snell et al. (2017) Jake Snell, Kevin Swersky, and Richard S. Zemel. Prototypical networks for few-shot learning. In _NeurIPS_ , 2017.
* Tukey (1977) John W Tukey. _Exploratory data analysis_. Addison-Wesley Series in Behavioral Science. Addison-Wesley, Reading, MA, 1977. URL https://cds.cern.ch/record/107005.
* van der Maaten & Hinton (2008) Laurens van der Maaten and Geoffrey Hinton. Visualizing data using t-SNE. _Journal of Machine Learning Research_ , 2008.
* Vinyals et al. (2016) Oriol Vinyals, Charles Blundell, Tim Lillicrap, Koray Kavukcuoglu, and Daan Wierstra. Matching networks for one shot learning. In _NeurIPS_ , 2016.
* Wang et al. (2018) Yu-Xiong Wang, Ross Girshick, Martial Hebert, and Bharath Hariharan. Low-shot learning from imaginary data. In _CVPR_ , 2018.
* Wang et al. (2019) Yulin Wang, Xuran Pan, Shiji Song, Hong Zhang, Gao Huang, and Cheng Wu. Implicit semantic data augmentation for deep networks. In _NeurIPS_ , 2019.
* Welinder et al. (2010) P. Welinder, S. Branson, T. Mita, C. Wah, F. Schroff, S. Belongie, and P. Perona. Caltech-UCSD Birds 200. Technical Report CNS-TR-2010-001, California Institute of Technology, 2010.
* Xian et al. (2018) Yongqin Xian, Tobias Lorenz, Bernt Schiele, and Zeynep Akata. Feature generating networks for zero-shot learning. In _CVPR_ , 2018.
* Yang et al. (2021) Shuo Yang, Songhua Wu, Tongliang Liu, and Min Xu. Bridging the Gap between Few-Shot and Many-Shot Learning via Distribution Calibration. April 2021. doi: 10.36227/techrxiv.14380697.v1. URL https://www.techrxiv.org/articles/preprint/Bridging_the_Gap_between_Few-Shot_and_Many-Shot_Learning_via_Distribution_Calibration/14380697.
* Zagoruyko & Komodakis (2016) Sergey Zagoruyko and Nikos Komodakis. Wide residual networks. In _BMVC_ , 2016.
* Zhang et al. (2020) Chi Zhang, Yujun Cai, Guosheng Lin, and Chunhua Shen. Deepemd: Few-shot image classification with differentiable earth mover’s distance and structured classifiers. In _CVPR_ , 2020.
* Zhang et al. (2019) Jian Zhang, Chenglong Zhao, Bingbing Ni, Minghao Xu, and Xiaokang Yang. Variational few-shot learning. In _ICCV_ , 2019.
* Zhang et al. (2018) Ruixiang Zhang, Tong Che, Zoubin Ghahramani, Yoshua Bengio, and Yangqiu Song. Metagan: An adversarial approach to few-shot learning. In _NeurIPS_ , 2018.
Figure 6: We show the feature distribution of 5 base classes, and the feature
distribution of 5 novel classes before/after Tukey’s Transformation.
## Appendix A Augmentation with nearest class features
Instead of sampling from the calibrated distribution, we can simply retrieve
examples from the nearest class to augment the support set. Table 7 compares
training with samples from the calibrated distribution, with different numbers
of features retrieved from the nearest class, and with the support set only.
We found that the retrieved features improve performance over using the
support set alone, but degrade it as the number of retrieved features
increases, since the retrieved samples likely act as noisy data for tasks
targeting different classes.
Training data | miniImageNet 5way1shot
---|---
Support set only | $56.37\pm 0.68$
Support set + 1 feature from the nearest class | $62.39\pm 0.49$
Support set + 5 features from the nearest class | $59.73\pm 0.42$
Support set + 10 features from the nearest class | $58.93\pm 0.49$
Support set + 100 features from the nearest class | $57.33\pm 0.48$
Support set + 100 features sampled from calibrated distribution | $\textbf{68.53}\pm\textbf{0.32}$
Table 7: The comparison with nearest class feature augmentation.
## Appendix B Distribution Calibration without novel feature
We calibrate the novel class mean by averaging the novel class mean and the
retrieved base class means in Equation 6. Table 8 shows the results of
distribution calibration without averaging in the novel feature, in which the
calibrated mean is calculated as
${\bm{\mu}}^{\prime}=\frac{\sum_{i\in{\mathbb{S}}_{N}}{\bm{\mu}}_{i}}{k}$.
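For contrast, when the novel feature is averaged in (our reading of Equation 6, restated here in the same notation), the calibrated mean becomes

$${\bm{\mu}}^{\prime}=\frac{\sum_{i\in{\mathbb{S}}_{N}}{\bm{\mu}}_{i}+\tilde{{\bm{x}}}}{k+1},$$

which is the variant compared against in Table 8.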
| miniImageNet 5way1shot
---|---
Distribution Calibration w/o novel feature $\tilde{{\bm{x}}}$ | $59.38\pm 0.73$
Distribution Calibration w/ novel feature $\tilde{{\bm{x}}}$ | $\textbf{68.57}\pm\textbf{0.55}$
Table 8: The comparison between distribution calibration with and without
novel feature $\tilde{{\bm{x}}}$.
## Appendix C The effects of Tukey’s transformation
Figure 6 shows the distribution of 5 base classes and 5 novel classes
before/after Tukey’s transformation. It is observed that the base class
distribution satisfies the Gaussian assumption well (left) while the novel
class distribution is more skewed (middle). The novel class distribution after
Tukey’s
transformation (right) is more aligned with the Gaussian-like base class
distribution.
## Appendix D The similarity level analysis
We found that the higher the similarity between the retrieved base class
distribution and the novel class ground-truth distribution, the larger the
performance improvement our method brings, as shown in Table 9. The results
in the table are under 5-way-1-shot setting.
Novel class | Top-1 base class similarity | Top-2 base class similarity | DC improvement
---|---|---|---
malamute | 93% | 85% | $\uparrow$ 21.30%
golden retriever | 85% | 74% | $\uparrow$ 18.37%
ant | 71% | 67% | $\uparrow$ 9.77%
Table 9: Performance improvement with respect to the similarity level between
a query novel class and the most similar base classes.
# Latent Variable Models for Visual Question Answering
Zixu Wang1, Yishu Miao1, Lucia Specia1,2
1Department of Computing, Imperial College London, UK
2Department of Computer Science, University of Sheffield, UK
{zixu.wang, y.miao20<EMAIL_ADDRESS>
###### Abstract
Current work on Visual Question Answering (VQA) explores deterministic
approaches conditioned on various types of image and question features. We
posit that, in addition to image and question pairs, other modalities are
useful for teaching machines to carry out question answering. Hence, in this
paper, we propose latent variable models for VQA where extra information
(_e.g_. captions and answer categories) is incorporated as latent variables,
which are observed during training but in turn benefit question-answering
performance at test time. Experiments on the VQA v2.0 benchmark dataset
demonstrate the effectiveness of our proposed models: they improve over strong
baselines, especially those that do not rely on extensive language-vision pre-
training.
## 1 Introduction
As a classic multi-modal machine learning problem, Visual Question Answering
(VQA) [2] systems are tasked with providing a correct textual answer given an
image and a textual question. Current VQA models [14, 8, 1] are trained to
learn the relationship between areas in an image and the question, and to
choose the correct answer from a vocabulary of answer candidates, _i.e_., they
are modelled as a classification problem. The majority of popular VQA models
[7, 9, 18] are built in a deterministic manner and rely solely on information
from the given image-question pair. There are other approaches attempting to
incorporate extra information, such as image captions [17] and mutated inputs
[6, 3]. However, this in turn restricts the practical applications, as the
extra information must be explicitly available during testing.
In this paper, we propose an approach to explore additional information as
latent variables in VQA: we employ latent variables for VQA to exploit extra
information (_i.e_. image captions and answer categories) to complement
limited textual information from image and question pairs. We assume a
realistic setting where this information – especially captions – may only be
available during the training phase. To that end, we introduce a continuous
latent variable as the caption representation to capture the essential
information from this modality. Moreover, the answer category is modelled as a
discrete latent variable, which acts as an inductive bias to benefit the
learning of answer prediction, and can be integrated out during testing. The
motivation is that the generative framework is able to incorporate many other
types of information as continuous or discrete latent variables, and as such
it effectively leverages additional resources to constrain the original image-
question distribution while omitting them in testing. This grants the models
stronger generalisation ability compared to their deterministic
counterparts, which generally require off-the-shelf pipelines to model the
information from external modalities.
Intuitively, image captions describe diverse aspects of an image and include
attributes and relations of objects in a more informative way. In our work, a
continuous latent variable is employed for capturing the caption distributions
and constraining the generative distribution conditioned on image and question
pairs. In this way, the joint multimodal representations from images and
questions can benefit from the caption modality during training, and it
requires no explicit caption inputs in testing. Similarly, there exists a
strong connection between a question and answer pair when the question
provides informative signals on its type or the category of possible answers.
For example, “How many”, “Where is” and “what is” normally connect to numbers,
locations, and objects respectively. We therefore employ a discrete latent
variable to model answer categories, providing a better inductive bias from
the question and answer pairs.
In summary, our main contributions are:
* •
A novel generative VQA framework combining the modularity of latent variables
with the flexibility to introduce extra information as continuous and/or
discrete latent variables.
* •
A method to incorporate additional information which does not rely on building
multiple deterministic pipelines, aiming at learning the underlying
compositional, relational, and hierarchical structures of multiple modalities.
The models benefit from the extra information during training without
providing explicit inputs in testing.
* •
The improvements over deterministic baseline models (_e.g_. UpDn [1] and VL-
BERT [15]) in experiments with the VQA v2.0 dataset demonstrate the
effectiveness of our proposed latent variable models. Our qualitative analysis
also indicates that using extra resources (_i.e_. captions and answer
categories) as latent variables captures complementary information during
training and benefits the VQA performance in testing.
Figure 1: Architecture of our latent variable model for VQA. We use dotted
lines to denote the process of proposed latent variables.
## 2 Model
We first present an overview of our general model structure, followed by the
encoders for different modalities, and the proposed corresponding latent
variables.
### 2.1 General Model Structure
In a VQA task, images and questions are normally used to learn a joint
multimodal distribution for answer predictions. We postulate that the joint
representation can be improved by other multimodal information. Hence, we
introduce captions and answer categories to our VQA model as continuous and
discrete latent variables respectively to encourage better learning of the
joint distribution of image and question pairs during training. A notable
advantage of the latent variable models is that they do not explicitly require
captions or answer categories during testing, and therefore can be easily
extended to condition on any other useful information.
First, we introduce the notation used in the general VQA model. $V$, $Q$,
$A$ are used to denote the input image, question, and answer instances
respectively. The image feature $v$, question representation $q$, and answer
representation $a$ are extracted from the image encoder, question encoder, and
answer encoder. The VQA task is constructed as a classification problem to
output the most likely answer $\hat{a}$ from a fixed set of answers based on
the content of the image $v$ and question $q$:
$\hat{a}=\textrm{argmax}\ p(a|v,q)$ (1)
In our latent variable model, we introduce image captions $C$ to the training
phase. Similarly, we extract the caption features $c$ by a caption encoder.
However, instead of directly feeding the caption features $c$ into the model,
we employ a continuous latent variable $z$ as the caption representation. Here
$z\sim q(z|c)$ is modelled as a variational distribution. We then build a
generative distribution $z\sim p(z|v,q)$ to infer the caption information by
conditioning on image and question pairs, which is optimised during training
via neural variational inference. We originally experimented with
$q(z|v,q,c)$ as the variational distribution. However, this distribution is
quite close (_i.e_. small KL divergence) to the generative distribution
$p(z|v,q)$, which weakens the learning signal from the KL divergence.
In addition, we introduce a discrete latent variable $d$ for modelling answer
category inferred via $d\sim p(d|v,q)$, which is also conditioned on image and
question pairs.
Hence, the training of the latent variable model is carried out by the samples
$(v,q,a,c,d)$. During testing, the answer $a$ is predicted from the image and
question pair $(v,q)$:
$\hat{a}\\!=\\!\textrm{argmax}\\!\sum_{d,z}p(a|v,q,d,z)p(d|v,q)p(z|v,q)$ (2)
where the discrete latent variable $d$ is directly integrated out, and the $z$
is the Monte-Carlo sample from $p(z|v,q)$.
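A minimal sketch of this test-time computation, with the conditional distributions abstracted as callables (all function names here are our own placeholders, not the paper's API):

```python
import numpy as np

def predict_answer(v, q, p_a_given, p_d_given, sample_z, n_categories):
    # Eq. (2): marginalise the discrete category d exactly, and approximate
    # the expectation over z with a single Monte-Carlo sample from p(z|v,q).
    z = sample_z(v, q)                      # z ~ p(z|v,q)
    p_d = p_d_given(v, q)                   # array of shape (n_categories,)
    p_a = sum(p_d[d] * p_a_given(v, q, d, z) for d in range(n_categories))
    return int(np.argmax(p_a))              # index of the predicted answer
```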
### 2.2 Continuous Latent Variable: Caption
As captions are modelled by a continuous latent variable, we only have
explicit captions during training. Here we present the generative distribution
that is conditioned on images and questions during testing, and the
variational distribution that is conditioned on explicit captions during
training. Therefore, the caption encoder is only used in the training phase.
Generative Distribution - $p_{\theta}(z|v,q)$. We use a latent distribution
$p_{\theta}(z|v,q)$ to model the joint multimodal distributions of images and
questions. Compared to its deterministic counterpart using concatenated
multimodal features, we parameterise the stochastic distribution with
$\mathcal{N}(z|\mu_{\theta}(v,q),\sigma^{2}_{\theta}(v,q))$.
Variational Distribution - $q_{\phi}(z|c)$. We first apply an RNN model to
embed the caption inputs $C$ and a latent variable $q_{\phi}(z|c)$ to model
the caption semantics and distributions, where
$z\sim\mathcal{N}(z|\mu_{\phi}(c),\sigma^{2}_{\phi}(c))$.
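Since both $q_{\phi}(z|c)$ and $p_{\theta}(z|v,q)$ are diagonal Gaussians, the KL term used in neural variational inference has a closed form; a minimal sketch follows (variable names and shapes are our assumptions):

```python
import numpy as np

def gaussian_kl(mu_q, var_q, mu_p, var_p):
    # KL( N(mu_q, var_q) || N(mu_p, var_p) ) for diagonal Gaussians,
    # e.g. KL( q_phi(z|c) || p_theta(z|v,q) ), summed over latent dimensions.
    return 0.5 * np.sum(
        np.log(var_p / var_q) + (var_q + (mu_q - mu_p) ** 2) / var_p - 1.0
    )
```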
### 2.3 Discrete Latent Variable: Answer Category
Assuming that each image and question pair $(v,q)$ can be projected to an
answer category to help find a correct answer, we are able to encourage the
model to distinguish candidates across answer categories instead of relying
only on spurious relationships between questions and answers via simple
linguistic features. Therefore, in order to leverage this useful inductive
bias, we
propose a discrete latent variable to model the answer category given an image
and question pair $(v,q)$. In particular, for each answer category $d$, we
have a conditionally independent distribution $p(a|v,q,d)$ over the answers
in that answer category.
$p(a|v,q)=\sum_{d}p(a|v,q,d)\cdot p(d|v,q)$ (3)
We train an answer category classifier using joint image-question pairs as
input, given the true labels, as shown at the bottom left of Figure 1. We then
use the category distribution to modify the answer distribution through an
element-wise product, obtaining a more precise answer distribution.
## 3 Datasets & Setup
### 3.1 Datasets
We use the VQA v2.0 dataset [2] for our proposed latent variable model. The
answers are balanced in order to minimise the effect of dataset priors.
We report the results on validation set and test-standard set through the
official evaluation server. The source of image captions in our work is the
MSCOCO dataset [12].
We use answer categories from the annotations of [10]. The answers in the VQA
v2.0 dataset are annotated with a set of 15 categories for the top 500
answers, which make up 82% of the VQA v2.0 dataset; the remaining answers are
treated as an additional category. (Although the category definitions cannot
cover all types of answers, and false predictions during testing might be
observed, the latent variable can still maintain robustness in predicting
correct answers by summing over the probabilities of all predicted
categories.)
## 4 Experiments
In this section, we first describe the experimental results of our latent
variable model, compared with both a UpDn (Bottom-up Top-down) baseline model
and a state-of-the-art pre-trained visual-linguistic model (VL-BERT); then we
conduct qualitative analysis to validate the effectiveness of proposed
components.
| VQA v2.0 test-dev (%) | test-std (%)
---|---|---
| All | Yes/No | Num | Other | All
Caption [17] | - | - | - | - | 68.37
DFAF [13] | 70.22 | 86.09 | 53.32 | 60.49 | 70.34
MLIN [5] | 71.09 | 87.07 | 53.39 | 60.49 | 71.27
UpDn [1] | 65.32 | 81.82 | 44.21 | 56.05 | 65.67
\+ latent (ours) | 66.01 | 82.96 | 44.58 | 55.94 | 66.29
VL-BERT${}_{\textrm{large}}$ [15] | 71.79 | - | - | - | 72.22
\+ latent (ours) | 72.03 | 88.03 | 54.16 | 62.42 | 72.37
Table 1: Experimental results on VQA v2.0 test-dev and test-standard (test-
std) set. Accuracies are reported in percentage (%) terms. The state-of-the-
art scores are in bold; underlined scores are best among (baseline _vs_.
latent variable extension); and both underlines and bold scores are the
overall best results.
### 4.1 Quantitative Analysis
We compare the results of our latent variable model with the baseline model
(UpDn), a state-of-the-art visual-linguistic pre-training model (VL-BERT), and
three other related VQA models, where [17] uses generated captions to assist
answer predictions and [13, 5] explore the interactions between visual and
linguistic inputs.
As demonstrated in Table 1, our latent variable model outperforms its
baselines when acting as an extension. In particular, it outperforms UpDn by
0.69% accuracy on the test-dev set and by 0.62% accuracy on the test-standard
set. In addition, our model improves performance over its VL-BERT counterpart
by 0.24% accuracy on the test-dev set and by 0.15% accuracy on the
test-standard set. These results indicate the effectiveness of including
captions and answer categories as latent variables, to push the distribution
of image and question pairs closer to the captions’ space, and to learn a
better distinction among different kinds of answers, or different answers
within the same answer category.
The result of [17] (68.37%) is a very strong baseline, which follows a
traditional deterministic approach. However, their model is trained to
generate captions that can be used at test time, while in our case only image
and question pairs are required for answer prediction. [13] and [5] both
achieve comparable performance (70.34% and 71.27% on test-standard set,
respectively) to VL-BERT (72.22%) without pre-training, by dynamically
modulating the intra-modality information and exploring the latent interaction
between modalities. Our latent variable model has the overall best result when
combined with the strong pre-training VL-BERT, which indicates both the
effectiveness of the visual-linguistic pre-training framework, and the
incorporation of continuous (captions) and discrete (answer categories) latent
variables.
Compared to the results on the standard baseline (UpDn), the improvements
achieved by our proposed model on the VL-BERT framework are smaller. This is
because VL-BERT has been pre-trained on massive image captioning data, where
the learning of visual features has already largely benefited from the
caption modality. Nevertheless, based upon the strong baseline model, our
captions already. Nevertheless, based upon the strong baseline model, our
proposed model can still improve performance slightly, which further indicates
the effectiveness of the latent variable framework.
The state-of-the-art performance on VQA v2.0 among pre-training frameworks is
achieved by LXMERT, Oscar and Uniter [16, 11, 4]. They have been extensively
pre-trained using massive datasets on languages and vision tasks (including
VQA) in a multi-task learning fashion. Our work is not directly comparable,
and is not aimed at improving and beating the state-of-the-art performance.
Instead, it is focused on exploring the potential of latent variable models to
represent additional useful information in multimodal learning and to
contribute to pre-trained vision-language frameworks. We draw attention to the
advantage of using a generative framework for the VQA task. In this case, we can
employ more information during training (which is omitted in testing) to
regularise the original multimodal distribution. This can be demonstrated by
the improvements on VL-BERT brought by the latent variables.
| VQA v2.0 val
---|---
| All | Yes/No | Num | Other
UpDn | 63.15 | 80.38 | 42.84 | 55.86
UpDn + caption | 63.85 | 81.10 | 43.63 | 55.90
UpDn + category | 63.51 | 81.62 | 42.17 | 55.38
Ours w/o caption | 64.09 | 81.82 | 44.37 | 55.74
Ours w/ caption | 64.24 | 82.36 | 44.52 | 56.02
Table 2: Ablation study to investigate the effect of each component: caption,
and answer category. “Ours w/o caption” indicates our final model in which
only image and question pairs are needed at test time; while “Ours w/ caption”
represents the model using captions during evaluation. The results of our
best model are in bold, while the best performance with captions as inputs
during
testing is underlined for comparison.
### 4.2 Qualitative Analysis
We perform an ablation study to qualitatively analyse the effect of the
components introduced in our work, namely the continuous (image caption)
and discrete (answer category) latent variables, as shown in Table 2.
#### 4.2.1 Effect of Captions
The introduction of captions as a continuous latent variable improves the
classification performance, with an additional modality as input to benefit
the learning of multimodal representations. According to the breakdown numbers
in Table 2, the improvements brought by the latent variables of captions and answer
categories are 0.70 and 0.36 respectively for All questions altogether. The
combined strategy reaches 0.94 which indicates that the benefits from the two
latent variables are complementary. Note that neither the captions nor the
answer categories are available during testing; we only make use of these
modalities during training.
To further investigate the potential benefit of the captions, we design an
experiment that feeds in ground truth captions via the variational
distribution for the caption representations instead of inferring them from
image and question pairs (_i.e_. use $q_{\phi}(z|c)$ to replace
$p_{\theta}(z|v,q)$). We test
this out in the validation dataset and obtain 64.24 (‘Ours w/ caption’)
compared to 64.09 (‘Ours w/o caption’). It shows that having explicit captions
as input gives slightly better performance. However, the captions in these
experiments are ground truth, which means that if we were to use instead
automatically generated captions from an image captioning pipeline, the
numbers might drop due to possible captioning errors. Importantly, our
proposed model (‘Ours w/o caption’) achieves performance on par with the
model given ground truth captions, which demonstrates the effectiveness of
the strategy of incorporating an extra modality via latent variables.
Figure 2: Examples of our latent variable model outperforming the baseline
UpDn model from the introduction of answer category as a discrete latent
variable. The answer predicted by UpDn is highlighted in red and the answer
from our model is in blue. We also show other sample answer candidates within
each category.
#### 4.2.2 Effect of Answer Category
As can be observed from Table 2, after introducing the answer category as an
additional discrete latent variable, our proposed model also improves
over the UpDn baseline, where the largest improvement can be observed for the
“Yes/No” type. For the question types “Num” and “Other”, the results of
UpDn+category are lower than the baseline. This may be due to the multiple
answer candidates under the two categories. For example, although the category
classier can accurately predict the answer category (_e.g_., “count”, “color”,
_etc_.), it can be still difficult to distinguish among the answers - {“9”,
“20”, …, “many” for “count”; “black”, “brown”, …, “black and white” for
“color”}. We highlight that the contribution of answer categories as a
discrete latent variable is to introduce an inductive bias which helps predict
the correct answer categories and answers given a specific image and question.
In order to further illustrate the effectiveness of answer categories, we
extract examples where our model predicted the correct answers while the UpDn
baseline failed to do so, as shown in Figure 2. For the two cases in the top
row, both models predict answers under the same and correct answer categories,
hence the answer space is similar; however, our latent variable model can
effectively distinguish and learn the difference among the answers which fall
within the same category. The bottom row of Figure 2 shows two cases where the
two models predict answers in different answer categories, and therefore they
are also very different in meaning. Our model not only outputs the highest
probability for the correct answer category, but also makes the correct final
prediction.
## 5 Conclusions
In this paper, we propose to tackle VQA under the framework of latent variable
models, employing captions and answer categories as the continuous and the
discrete latent variables respectively to constrain the original image-
question distribution while omitting the extra information during the test
phase. Our experimental results and qualitative analysis show the
effectiveness of the latent variables in boosting answering performance at
test time when only image and question pairs are available. This framework
could be easily generalised to incorporate other types of information or
modalities to enhance VQA and other tasks.
## References
* [1] Peter Anderson, Xiaodong He, Chris Buehler, Damien Teney, Mark Johnson, Stephen Gould, and Lei Zhang. Bottom-up and top-down attention for image captioning and visual question answering. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 6077–6086, 2018.
* [2] Stanislaw Antol, Aishwarya Agrawal, Jiasen Lu, Margaret Mitchell, Dhruv Batra, C. Lawrence Zitnick, and Devi Parikh. VQA: Visual Question Answering. In International Conference on Computer Vision (ICCV), 2015.
* [3] Long Chen, Xin Yan, Jun Xiao, Hanwang Zhang, Shiliang Pu, and Yueting Zhuang. Counterfactual samples synthesizing for robust visual question answering. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), June 2020.
* [4] Yen-Chun Chen, Linjie Li, Licheng Yu, Ahmed El Kholy, Faisal Ahmed, Zhe Gan, Yu Cheng, and Jingjing Liu. Uniter: Universal image-text representation learning. In ECCV, 2020.
* [5] Peng Gao, Haoxuan You, Zhanpeng Zhang, Xiaogang Wang, and Hongsheng Li. Multi-modality latent interaction network for visual question answering. In Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), October 2019.
* [6] Tejas Gokhale, Pratyay Banerjee, Chitta Baral, and Yezhou Yang. Mutant: A training paradigm for out-of-distribution generalization in visual question answering. arXiv preprint arXiv:2009.08566, 2020.
* [7] Yash Goyal, Tejas Khot, Douglas Summers-Stay, Dhruv Batra, and Devi Parikh. Making the v in vqa matter: Elevating the role of image understanding in visual question answering. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 6904–6913, 2017.
* [8] Vahid Kazemi and Ali Elqursh. Show, ask, attend, and answer: A strong baseline for visual question answering. arXiv preprint arXiv:1704.03162, 2017.
* [9] Jin-Hwa Kim, Jaehyun Jun, and Byoung-Tak Zhang. Bilinear attention networks. In S. Bengio, H. Wallach, H. Larochelle, K. Grauman, N. Cesa-Bianchi, and R. Garnett, editors, Advances in Neural Information Processing Systems 31, pages 1564–1574. Curran Associates, Inc., 2018.
* [10] Ranjay Krishna, Michael Bernstein, and Li Fei-Fei. Information maximizing visual question generation. In IEEE Conference on Computer Vision and Pattern Recognition, 2019\.
* [11] Xiujun Li, Xi Yin, Chunyuan Li, Xiaowei Hu, Pengchuan Zhang, Lei Zhang, Lijuan Wang, Houdong Hu, Li Dong, Furu Wei, Yejin Choi, and Jianfeng Gao. Oscar: Object-semantics aligned pre-training for vision-language tasks. ECCV 2020, 2020.
* [12] Tsung-Yi Lin, Michael Maire, Serge Belongie, James Hays, Pietro Perona, Deva Ramanan, Piotr Dollár, and C Lawrence Zitnick. Microsoft coco: Common objects in context. In European conference on computer vision, pages 740–755. Springer, 2014.
* [13] Gao Peng, Zhengkai Jiang, Haoxuan You, Pan Lu, Steven Hoi, Xiaogang Wang, and Hongsheng Li. Dynamic fusion with intra-and inter-modality attention flow for visual question answering. arXiv preprint arXiv:1812.05252, 2018.
* [14] Kevin J. Shih, Saurabh Singh, and Derek Hoiem. Where to look: Focus regions for visual question answering. In Computer Vision and Pattern Recognition, 2016.
* [15] Weijie Su, Xizhou Zhu, Yue Cao, Bin Li, Lewei Lu, Furu Wei, and Jifeng Dai. Vl-bert: Pre-training of generic visual-linguistic representations. In International Conference on Learning Representations, 2020.
* [16] Hao Tan and Mohit Bansal. LXMERT: Learning cross-modality encoder representations from transformers. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 5100–5111, Hong Kong, China, Nov. 2019. Association for Computational Linguistics.
* [17] Jialin Wu, Zeyuan Hu, and Raymond Mooney. Generating question relevant captions to aid visual question answering. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 3585–3594, Florence, Italy, July 2019. Association for Computational Linguistics.
* [18] Zhou Yu, Jun Yu, Yuhao Cui, Dacheng Tao, and Qi Tian. Deep modular co-attention networks for visual question answering. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 6281–6290, 2019.
# ComQA:Compositional Question Answering via
Hierarchical Graph Neural Networks
Bingning Wang1, Ting Yao1, Weipeng Chen1, Jingfang Xu1 and Xiaochuan Wang1,2
1\. Sogou Inc. 2\. Tsinghua University. Beijing, China.
{wangbingning, yaoting}@sogou-inc.com
###### Abstract.
With the development of deep learning techniques and large scale datasets, the
question answering (QA) systems have been quickly improved, providing more
accurate and satisfying answers. However, current QA systems focus either on
sentence-level answers, i.e., answer selection, or phrase-level answers,
i.e., machine reading comprehension. How to produce compositional answers has
not been thoroughly investigated. In compositional question answering, the
system should assemble several pieces of supporting evidence from the document to
generate the final answer, which is more difficult than sentence-level or
phrase-level QA. In this paper, we present a large-scale compositional
question answering dataset containing more than 120k human-labeled questions.
The answer in this dataset is composed of discontiguous sentences in the
corresponding document. To tackle the ComQA problem, we propose a
hierarchical graph neural network, which represents the document from the
low-level word to the high-level sentence. We also devise a question selection
and a node selection task for pre-training. Our proposed model achieves a
significant improvement over previous machine reading comprehension methods
and pre-training methods. Code and dataset can be found at
https://github.com/benywon/ComQA.
Question Answering, Graph Neural Networks, Datasets
## 1\. Introduction
Question answering (QA) is a long-term research interest in the NLP community
(Phillips, 1960; Greenwood, 2005). Based on the knowledge resource, QA could
be categorized into knowledge-based QA (KBQA), community QA (CQA) or textual
QA. In KBQA, the answer is produced from a knowledge base. In CQA, the answer
is derived from the user-generated content such as FAQ. In textual QA
(Harabagiu et al., 2003), the knowledge resource is unstructured text such as
Wikipedia pages or news articles. Textual QA extracts a sentence or answer
snippet from a supporting document that responds directly to a query, which is
the main form of the current QA system (Yogish et al., 2017).
Figure 1. An example of a compositional QA system where the answer lies in
discontinuous parts of a document. The query is green and the answers are
marked by red frame.
Traditional textual QA is mostly based on symbolic feature engineering (Cui et
al., 2005; Verberne et al., 2009). Recently, with the development of deep
learning methods and large scale datasets, especially the techniques of
machine reading comprehension, the textual based QA systems have been greatly
improved (Seo et al., 2016; Nguyen et al., 2016; Rajpurkar et al., 2016; Dunn
et al., 2017; Dhingra et al., 2017; Kwiatkowski et al., 2019). Most of the
recent advances in textual-based QA are sentence-level or phrase-level. For
example, most of the answer selection models can be regarded as two-sentence
matching methods, which select the most relevant sentence (or paragraphs) for
a query. On the other hand, most machine reading comprehension methods resort
to span extraction to extract the consecutive answer span in a passage given
the query, where the answer is usually a very short phrase (Baradaran et al.,
2020; Zeng et al., 2020).
However, for the current textual QA applications, some of the answers are
compositional, i.e., they are made up of discontinuous sentences in a
document. Take the case in Figure 1 for example
(https://www.standard.co.uk/go/london/film/tom-hanks-oscars-wins-nominations-forrest-gump-a4334001.html):
given the query asking about Tom Hanks’ Oscar-winning movies, the answer is
made up of discontinuous parts of the page. Concretely, the answer is composed
of three sentences: two from the subtitles and one from a sentence in the
first paragraph. Current QA systems, focusing on either the sentence level or
the phrase level, are unable to tackle this
problem.
In this paper, we focus on Compositional Question Answering (ComQA), where
the answer comes from discontiguous but relevant sentences in a document. None
of the currently available QA datasets focus on compositional answers, so we
construct a large-scale Chinese compositional QA dataset through
crowdsourcing. First of all, we select the
web pages whose title is a valid question. Then we conduct page analysis
to extract the main content of the page as the document. Next, we transform
the original document HTML to a list of sentence-like items that each item can
serve as a candidate component of the final answer. Finally, we feed the
question and the transformed page to the crowd workers to select the answers.
Compared with previous textual-based QA, ComQA has three characteristics:
1) Rather than restricting the answer to be a span or a sentence of the text,
ComQA loosens the constraints so that the answer may lie in discontiguous
components in a document. 2) The basic granularity of ComQA is a sentence.
However, every element in the dom-tree of the page’s HTML, e.g., a table or an
image, could be a valid component to form the final answer. 3) Since the
answer in ComQA is composed of different granularities, it introduces the
specific inductive bias of the document structure to the model, requiring a
hierarchical understanding of the text from the low-level words to high-level
paragraphs.
To tackle the problem of compositional answers, we propose a novel
hierarchical graph neural network for ComQA. Concretely, we first adopt a
BERT-like sequential encoder to obtain the basic token representations. Then
we represent the question and the elements of the document in a graph. The
graph is constructed from dom-tree of the page with different levels of
connection. Finally, we represent each node in the graph with the attention-
based hierarchical graph neural networks. The final prediction is based on the
node representations in the graph.
Focusing on document understanding, we also devise two novel pre-training
tasks for our proposed hierarchical graph neural networks: the first one is
the question selection task to determine the relationship between the question
and the document. The second is the node selection task measuring the
contextual dependency of a node in the document. We conduct comprehensive
experiments on the ComQA dataset. The proposed attention-based hierarchical
graph neural networks achieve significant improvement compared with the state-
of-the-art QA baselines, including the widely used machine reading
comprehension model BiDAF (Seo et al., 2016) and recently proposed pre-
training model BERT (Devlin et al., 2018). The ablation study shows the
advantage of incorporating the graph structure and pre-training. In summary,
our contributions are:
1. (1)
This paper investigates the compositional question answering, where the
answers are composed of discontiguous supporting sentences in the document.
2. (2)
We present a new human-labeled compositional question answering dataset
containing more than 120k QA pairs. To our best knowledge, this is the largest
one in this domain.
3. (3)
We propose a hierarchical graph neural network for compositional QA, and
devise two novel tasks to pre-train the model. The proposed model
achieves significant improvement over previous methods such as BiDAF (Seo et
al., 2016) or BERT (Devlin et al., 2018).
## 2\. Related Work
Question Answering. QA has been a long-term focus in artificial intelligence
that dates back to the 1960s, when Green Jr et al. (1961) proposed a
simple system to answer questions about baseball games. Since then, many works
have been done to use diverse data resources to answer any type of question.
Compared to other types of QA such as knowledge base QA or community QA, in
textual QA the knowledge resource is the plain text such as document or
paragraph, which is currently regarded as the machine reading comprehension
(MRC). Since the MCTest (Richardson et al., 2013) was proposed, many
researchers have been focused on MRC. Hermann et al. (2015) proposed a cloze-
style dataset CNN/Daily Mail where the questions and answers are automatically
generated from the news article. Recently, some human create large-scale
authentic datasets have been released, such as SQuAD (Rajpurkar et al., 2016),
NewsQA (Trischler et al., 2016), etc. However, most answers in these datasets
are based on the contiguous segments in the text, and most MRC methods are
based on span extraction, which can hardly deal with the compositional answers
in QA.
Compositional QA. Some works have also been proposed to handle compositional
answers in QA. In the MARCO dataset (Nguyen et al., 2016), some answers
cannot be directly derived from the passage; Yang et al. (2020) and Zhang et
al. (2020) proposed to generate the answer based on a multi-span extraction
and scoring scheme. However, their works rely heavily on the text’s syntactic
features to extract a continuous span. Besides, the answers are not really
compositional in MARCO, given that most non-extractive answers are continuous
spans in the passage combined with a span in the question. A specific
dataset, DROP (Dua et al., 2019), has been proposed containing multi-span
answers. Nevertheless, the compositional answers only account for 6% of the
dataset, and each answer span is a short phrase. MultiRC (Khashabi et al.,
2018) is a multiple-choice dataset requiring reading comprehension from
multiple sentences. Nevertheless, the dataset is too small to train an
applicable model, and the multiple evidence sentences do not correspond to
the final answer. Furthermore, in MultiRC the questions are synthesized by
humans, while our proposed ComQA consists of real-world questions. Some
multi-hop QA datasets have recently been released, such as WikiHop (Welbl et
al., 2018) and HotpotQA (Yang et al., 2018), but they mainly focus on
multi-document reasoning and the answer is a continuous span in one document.
Different from these works, in this paper we focus only on compositional QA,
where the answers are derived from multiple but discontiguous segments in a
document. The proposed ComQA is large-scale, containing more than 120k
questions.
Graph Neural Networks. The graph neural network (GNN) is a kind of deep
learning method that operates on graphs. It has gained wide attention due to
its convincing performance and high interpretability. GNNs incorporate an
explicit relational inductive bias into the model (Battaglia et al., 2018) to
perform structured reasoning. Kipf and Welling (2016) proposed the graph
convolutional network to conduct semi-supervised learning on the club network.
To focus on the important parts of the graph, Veličković et al. (2018)
proposed graph attention networks. Many other NLP tasks have also employed
GNNs to model text, such as relation extraction (Zhang et al., 2018), text
classification (Yao et al., 2019), dependency parsing (Ji et al., 2019), and
question answering (Tian et al., 2020).
## 3\. Compositional QA
In this section, we will first introduce the ComQA in detail. Next, we will
describe the data construction process.
Compositional QA is similar to the machine reading comprehension, where the
system produces an answer given the question and a corresponding document.
However, in ComQA, the answer is located in different parts of the document,
just like the instance illustrated in Figure 1. In this paper, we classify the
answer components into two types:
Sentence: the sentence is the basic component in ComQA. A sentence could be a
subtitle on the page (for example, the sentence in Figure 1 with a bold,
large font) or an intra-paragraph sentence. We use simple heuristic rules to
divide each paragraph into several sentences, and each sentence could be a
valid candidate answer component.
Special HTML Element: For many questions in ComQA, the answer may lie in a
specific element on the page other than the raw paragraphs. For example, for
the question ‘How to turn on PS4 motion controller?’ the answer may consist of
several images describing the operation process. Another example: when the
document is a wiki page of ‘List of highest mountains on Earth’, it contains a
table ranking the world’s highest mountains; when asked about the ‘third
highest mountain in the world’, we should extract the third item in the table
as the answer. In this paper, we keep several special HTML tags that
potentially compose the answer. They are listed in Table 1.
Tag | Definition
---|---
br | produces a line break in text
p | paragraph marker
img | image content
table | table content
strong | define important text in a document
tbody | group the body content
h1-h6 | define headings
Table 1. The HTML tags we kept as the special nodes in the document graph.
The answer in ComQA is made up of the above two types of components, which we
refer to as nodes in the rest of the paper. Other types of answers, such as
phrases and words in a paragraph or video elements in the HTML tree, are also
valid for ComQA. However, adding them would increase the task’s difficulty,
so we leave them for future work. In ComQA the system should assemble the
discontiguous nodes as the answer.
### 3.1. Dataset Collection
We collect the ComQA dataset based on the Chinese search engine Sogou
Search (https://www.sogou.com/). The data construction consists of four
stages: First, we obtain question-document pairs through a search engine.
Then we process the documents into nodes. Next, crowdsourcing workers are
asked to
select the corresponding nodes in the document to answer the question.
Finally, we conduct quality inspection on the labeled dataset to filter out
invalid samples.
#### 3.1.1. Question-Document Collection
We first obtain the question from the web. We select the pages for which the
titles are valid questions and treat the title as the question. We use rules
to determine whether a title is a valid question, including: 1) whether the
question contains the interrogative pronoun such as ‘谁是’(‘who’), ‘如何’(‘how
to’), etc.; 2) whether the title contains informative words such as ‘的过程’(‘the
process of’), ‘的方法’(‘the method to’), etc. The question words and informative
words are listed in the Appendix. Although the two rules are simple, we found
them really effective: more than 90% of the selected titles are valid
questions.
Next, we also apply some filtering to the selected documents for which the
title is a question. We limit the page sources to a list of high-quality
sites. Then we filter out pages containing no text content and pure
advertising pages. We also remove the pages that are either too long or
too short.
After the above two processes, we obtain a lot of high-quality question-
document pairs. However, since the web content tends to be duplicated, some of
the pages may have high lexical or semantic overlap. To remove the redundant
pages and increase the final dataset’s diversity, we represent the page
document based on bag-of-words. We use the Ward clustering algorithm (Ward
Jr, 1963) to cluster those documents into 350k clusters. We select the
centroid in each cluster as the final document for crowdsourcing.
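A sketch of this deduplication step at small scale, assuming bag-of-words vectors and scikit-learn's agglomerative clustering with Ward linkage (the vocabulary size and helper name are illustrative; the paper's 350k-cluster run would require a more scalable implementation):

```python
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.cluster import AgglomerativeClustering

def deduplicate(documents, n_clusters):
    # Bag-of-words representation of each page document.
    X = CountVectorizer(max_features=5000).fit_transform(documents).toarray()
    labels = AgglomerativeClustering(
        n_clusters=n_clusters, linkage="ward").fit_predict(X)
    kept = []
    for c in range(n_clusters):
        members = np.where(labels == c)[0]
        centroid = X[members].mean(axis=0)
        # Keep the document closest to its cluster centroid.
        closest = members[np.argmin(np.linalg.norm(X[members] - centroid, axis=1))]
        kept.append(documents[closest])
    return kept
```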
Figure 2. The type of the questions in ComQA.
#### 3.1.2. Documents Processing
We now process the document into a list of nodes (i.e., sentences and
special HTML elements). We first extract the main content of the page. Then we
use the hierarchical structure of the HTML to locate the leaf nodes in the
dom tree. For each leaf node, we use its structural features and attribute
features to determine whether it is a text node or a special HTML element. We
remove HTML elements that are neither text content nor special HTML elements.
Finally, we merge the redundant nodes and unnecessary items to form the
cleaned document.
#### 3.1.3. Answer Annotation
We employed crowdworkers to select the answers from the document for each
question. We build an HTML based annotation platform so that the crowdworkers
could annotate on both computer and mobile devices. For the annotation page,
we treat each node in the document as a single element that could be clicked
as the answer component. The final answer is the combination of the selected
nodes. For pages whose title is not a valid question or whose document
doesn’t contain the right answer, crowdworkers are asked to select none of the
nodes. A snapshot of the annotation page is shown in Figure 8.
#### 3.1.4. Quality Inspection
We conduct a quality inspection after the data has been annotated by the
crowdworkers. Since the dataset is really large, we must select the data
samples that may be erroneously annotated. We do this by the following stages:
first of all, similar to cross-validation (Hastie et al., 2009), we divide
the data into ten folds, train a model on nine folds, and evaluate on the
remaining fold. For the held-out fold, we select the samples that are
ambiguous to the current model, i.e. those for which the log probability of
the true answer is very small, as candidate erroneous data. This process is
conducted ten times, and we select the data on which the model is least
confident. This selection process is in line with recent research finding
that hard-to-learn samples often correspond to labeling errors (Swayamdipta
et al., 2020).
Finally, we feed those samples to the authoritative checkers, who are well
informed of the ComQA task, and have them relabeled.
After the above four processes, we obtain the training ComQA samples that have
gone through the quality inspection. Each data instance is made up of: a URL,
a question, and a document containing a list of quaternions of the form
$<id,type,content,label>$, where id is the index of the node in the document,
type is the node type (for example, sentence or image), content is the node
content, and label is 0 or 1, indicating whether the node is an answer
component.
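A hypothetical instance in this format, with all contents invented for illustration, might look as follows:

```python
instance = {
    "url": "https://example.com/some-page",  # placeholder URL
    "question": "How to turn on PS4 motion controller?",
    "document": [
        # One quaternion <id, type, content, label> per node;
        # label 1 marks a node that belongs to the answer.
        {"id": 0, "type": "sentence", "content": "Step 1: ...", "label": 1},
        {"id": 1, "type": "img", "content": "<IMG>", "label": 1},
        {"id": 2, "type": "sentence", "content": "Unrelated text.", "label": 0},
    ],
}
```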
#### 3.1.5. Test Set Construction
Since the test set demands a higher quality than the training set, we ask the
two authoritative checkers to annotate the same data and select the instances
on which the two checkers agree.
Finally, we get 122,343 samples from crowdworkers. We use 5000 of them as the
development set and the remaining 117,343 as the training set. Moreover, we
get 2,054 test samples annotated by authoritative checkers. The documents have
581 words on average, and the queries have 7.4 words on average. Each document
contains about 23.1 nodes, of which 21.2% (4.9 nodes) are annotated as answer
nodes. Each node has 24.8 words on average. The questions in ComQA cover a
broad range of topics such as medicine, entertainment, etc. The most common
question words in ComQA are shown in Figure 2.
Figure 3. Hierarchical graph neural networks for ComQA; the case shows how to
represent the question ($q$), two sentences ($s_{1},s_{2}$) and a special
image node ($h_{1}$). Left: the basic BERT sequence encoder to obtain the
contextualized representations for each token. The input is the concatenation
of question and document tokens. We append special tokens to indicate the
question (<SEP>), sentences (<EOS>) and the special HTML image element
(<IMG>).
Middle: The hierarchical graph neural networks blocks, which uses the intra-
sentence connection, inter-sentence connection, and global connection (omitted
for concise view) to build a hierarchical representation of the document
graph. The final prediction is made upon the sentence nodes (green) and
special html nodes (purple). Right: The connection mask matrix (or better
known as the adjacency matrix in graph neural networks) used to connect the
different tokens in the graph.
## 4\. Hierarchical Graph Neural Networks for ComQA
In this section, we describe the proposed hierarchical graph neural networks
(HGNN) for the compositional QA. Denote the document as a graph $\mathcal{G}$,
which is a tuple $(\mathcal{N},\mathcal{E})$.
$\mathcal{N}=\\{\mathcal{N}_{i}\\}_{i=1}^{N_{\mathcal{N}}}$ is the node set,
and the $i_{\text{th}}$ node $\mathcal{N}_{i}$ consists of a sequence of words
$\\{\mathcal{N}_{i}^{1},\mathcal{N}_{i}^{2},...,\mathcal{N}_{i}^{|\mathcal{N}_{i}|}\\}$.
$|\mathcal{N}_{i}|$ is the number of words in the $i_{\text{th}}$ node. We
represent the question as a special node in $\mathcal{N}_{q}$.
$\mathcal{E}=\\{e_{i}\\}_{i=1}^{N_{\mathcal{E}}}$ is the edge set. The
prediction is made upon the normal nodes
$\\{\mathcal{N}_{i}\\}_{i=1}^{N_{\mathcal{N}}}$. Our proposed HGNN consists of
three modules: the basic sequence encoder, hierarchical graph attention layer,
and final prediction layer; the whole architecture is illustrated in Figure 3.
### 4.1. Sequence Encoder
The sequence encoder is a BERT-like module to obtain the basic
representations. First of all, we represent the question and document as a
single sequence by concatenating the question tokens and the tokens of every
document node. We add a special <SEP> token after the question tokens, and add a
special <EOS> token after each sentence. For each special HTML node, we also
use the special tokens, such as <IMG>, <BR>, etc., to represent them.
#### 4.1.1. Embedding Layer
We first use the Byte Pair Encoding (BPE) (Sennrich et al., 2016) to tokenize
the sequence into word pieces. Suppose the sequence length is $N$, for each of
the word pieces in the sequence, we use the word embedding layer to transform
them into $\textbf{H}_{w}\in\mathbb{R}^{N\times D}$ where $D$ is the embedding
size. We also use position embedding to get the sequence positional embedding
$\textbf{H}_{p}\in\mathbb{R}^{N\times D}$. Finally, we also apply the type
embedding to the sequence resulting $\textbf{H}_{t}\in\mathbb{R}^{N\times D}$
to indicate whether it is in the question or in the document. The final
embedding output is the addition of the three embeddings:
(1) $\textbf{H}_{0}=\textbf{H}_{w}+\textbf{H}_{p}+\textbf{H}_{t}$
#### 4.1.2. Self Attention Layer
After the embedding layer, we apply the self-attention based Transformer
(Vaswani et al., 2017) layer to the input. A list of $L$ Transformer blocks
are used to project the input embedding into the contextualized
representations. For the $l_{\text{th}}$ layer in the Transformer, the output
could be denoted as:
(2) $\displaystyle\textbf{Q}_{l},\textbf{K}_{l},\textbf{V}_{l}=\textbf{H}_{l-1}\textbf{W}_{q}^{l},\,\textbf{H}_{l-1}\textbf{W}_{k}^{l},\,\textbf{H}_{l-1}\textbf{W}_{v}^{l},$
$\displaystyle\hat{\textbf{H}}_{l}^{k}=\text{Softmax}\Bigl(\frac{\textbf{Q}_{l}\textbf{K}_{l}^{T}}{\sqrt{D}}\Bigr)\textbf{V}_{l},\quad\forall k\in[1,K],$
$\displaystyle\bar{\textbf{H}}_{l}=\text{Projection}(\text{Concat}([\hat{\textbf{H}}_{l}^{1},...,\hat{\textbf{H}}_{l}^{K}])),$
$\displaystyle\textbf{H}_{l}=\text{LayerNorm}(\bar{\textbf{H}}_{l}).$
Here the W's are learnable weight matrices and Projection is a multi-layer
perceptron that projects the output of the multi-head attention back to the
hidden space. $K$ is the number of heads in the Transformer. Layer
normalization (Ba et al., 2016) is applied before the output.
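A compact sketch of one such block, using the multi-head attention module built into recent PyTorch versions (our illustration; the paper's own implementation may differ):

```python
import torch.nn as nn

class SelfAttentionBlock(nn.Module):
    """One Transformer block in the spirit of Equation 2."""
    def __init__(self, hidden_dim, num_heads):
        super().__init__()
        # nn.MultiheadAttention handles the per-head Q/K/V projections,
        # scaled dot-product attention, and the output Projection internally.
        self.attn = nn.MultiheadAttention(hidden_dim, num_heads, batch_first=True)
        self.norm = nn.LayerNorm(hidden_dim)

    def forward(self, h, attn_mask=None):
        # attn_mask: bool (N, N); in PyTorch's convention True marks pairs that
        # may NOT attend, so passing `adjacency == 0` restricts attention to
        # graph edges -- which is how the same block can serve as an HGNN layer
        # (see Section 4.2).
        out, _ = self.attn(h, h, h, attn_mask=attn_mask)
        return self.norm(out)
```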
### 4.2. Hierarchical Graph Neural Networks
In HGNN, node embeddings are initialized with the sequence encoder’s last-layer
output $\textbf{H}_{L}$ and then refined into hierarchical graph
representations. Formally, the HGNN is built upon the general message-passing
architecture:
(3) $\small\textbf{H}^{k}=f\left(A,\textbf{H}^{k-1};\textbf{W}^{k}\right)$
where $\textbf{H}^{k}\in\mathbb{R}^{N\times D}$ are the node embeddings after
$k$ hops of computation in the graph, $A\in\mathbb{R}^{N\times N}$ is the
adjacency matrix representing the graph structure, and
$\textbf{W}^{k}\in\mathbb{R}^{D\times D}$ contains the trainable parameters of
the $k_{\text{th}}$ graph layer. $f$ is the message-propagation function used
for information aggregation. In this paper, the layer-wise propagation rule of
HGNN is formulated as:
(4)
$f\left(A,\textbf{H}^{k-1};\textbf{W}^{(k)}\right)=\operatorname{GeLU}\left(\tilde{D}^{-\frac{1}{2}}\tilde{A}\tilde{D}^{-\frac{1}{2}}\textbf{H}^{k-1}\textbf{W}^{k}\right)$
where GeLU is the Gaussian Error Linear Unit activation function (Hendrycks and
Gimpel, 2016). $\tilde{A}=A+I$ and the degree matrix
$\tilde{D}_{ii}=\sum_{j}\tilde{A}_{ij}$ are used for normalization, since
multiplying directly by $A$ would change the scale of the input. There are $M$
layers (hops) in the HGNN and the final output is $\textbf{H}^{M}$.
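A minimal sketch of the propagation rule in Equation 4, assuming a dense adjacency matrix; the function name `hgnn_layer` is illustrative:

```python
import torch
import torch.nn.functional as F

def hgnn_layer(A, H, W):
    """One propagation step of Equation 4:
    GeLU(D^{-1/2} (A + I) D^{-1/2} H W)."""
    N = A.size(0)
    A_tilde = A + torch.eye(N, device=A.device)   # add self-loops
    deg = A_tilde.sum(dim=1)                      # \tilde{D}_{ii}
    d_inv_sqrt = deg.pow(-0.5)
    # symmetric normalization: D^{-1/2} A_tilde D^{-1/2}
    A_hat = d_inv_sqrt.unsqueeze(1) * A_tilde * d_inv_sqrt.unsqueeze(0)
    return F.gelu(A_hat @ H @ W)
```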
#### 4.2.1. Graph Construction
Graph construction is one of the most important factors for the good
performance of graph neural networks (Hamilton et al., 2017; Kipf and Welling,
2016). In this paper, we consider three types of connection:
Intra-Sentence Connection: we connect every pair of words within a sentence.
In particular, special tokens such as <EOS> are also connected with the other
words in the same sentence:
$A_{i,j}^{\text{intra}}=\left\\{\begin{array}[]{ll}1&\text{{if i and j are
located in the same sentence.}}\\\ 0&\text{{otherwise.} }\end{array}\right.$
Thus, the intra-sentence graph connection is formulated by the adjacency matrix
$A^{\text{intra}}$.
Inter-Sentence Connection: since we have already appended a special token after
each sentence as a high-level node identifier, we add inter-sentence
connections between those special tokens in the document. If tokens $i$ and $j$
both belong to the special tokens, i.e. <SEP>, <EOS> and the special HTML
element tokens, we add a connection between them:
$A_{i,j}^{\text{inter}}=\left\\{\begin{array}[]{ll}1&\text{{if i and j belong
to the special tokens}}\\\ &\text{{indicating the higher-level nodes.}}\\\
0&\text{{otherwise}.}\end{array}\right.$
Global Connection: when representing the document in QA, question attention is
very important (Seo et al., 2016; Yu et al., 2018). However, with only the
inter-sentence connection, all interaction between the words in the document
and the words in the question must pass through their high-level sentence
indicators, which may overburden the attention-based model (Zaheer et al.,
2020). Therefore, we also construct a global connection between the question
indicator <SEP> and all other words in the document:
$A_{i,j}^{\text{global}}=\left\\{\begin{array}[]{ll}1&\text{{if i is
{<{SEP}>}},}\forall j\in[1,N]\\\ 0&\text{{otherwise}.}\end{array}\right.$
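The three masks can be built directly from per-token metadata. The sketch below assumes each token carries a sentence id and a special-token flag; all variable names are our own illustration:

```python
import torch

def build_masks(sent_ids, is_special, sep_index):
    """Build the three adjacency masks of Section 4.2.1.
    sent_ids:   (N,) long, sentence index of each token.
    is_special: (N,) bool, True for <SEP>, <EOS>, and special HTML tokens.
    sep_index:  position of the question indicator <SEP>."""
    # intra-sentence: tokens in the same sentence attend to each other
    A_intra = (sent_ids.unsqueeze(0) == sent_ids.unsqueeze(1)).float()
    # inter-sentence: all high-level (special) tokens attend to each other
    A_inter = (is_special.unsqueeze(0) & is_special.unsqueeze(1)).float()
    # global: <SEP> is connected to every token in the sequence
    A_global = torch.zeros_like(A_intra)
    A_global[sep_index, :] = 1.0
    return A_intra, A_inter, A_global
```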
#### 4.2.2. Information Aggregation
After the graph-construction process, we build the graph representations based
on Equation 4. However, since the graph is hierarchical, containing different
levels of nodes with different connections, we have two options for
representing them:
Pipeline Aggregation: we can build the hierarchical representation as a
pipeline; that is, we first build the low-level intra-sentence
representations, and then the higher-level inter-sentence and global
representations:
(5)
$\displaystyle\textbf{H}_{\text{intra}}^{k}=\operatorname{GeLU}\left(\tilde{D}_{\text{intra}}^{-\frac{1}{2}}\tilde{A}_{\text{intra}}\tilde{D}_{\text{intra}}^{-\frac{1}{2}}\textbf{H}^{k-1}\textbf{W}_{\text{intra}}^{k}\right)$
$\displaystyle\textbf{H}_{\text{inter}}^{k}=\operatorname{GeLU}\left(\tilde{D}_{\text{inter}}^{-\frac{1}{2}}\tilde{A}_{\text{inter}}\tilde{D}_{\text{inter}}^{-\frac{1}{2}}\textbf{H}_{\text{intra}}^{k}\textbf{W}_{\text{inter}}^{k}\right)$
$\displaystyle\textbf{H}^{k}=\operatorname{GeLU}\left(\tilde{D}_{\text{global}}^{-\frac{1}{2}}\tilde{A}_{\text{global}}\tilde{D}_{\text{global}}^{-\frac{1}{2}}\textbf{H}_{\text{inter}}^{k}\textbf{W}_{\text{global}}^{k}\right)$
Fused Aggregation: the pipeline aggregation builds the graph representation
hierarchically; alternatively, we can pack the three steps into a single
operation by fusing the three-level adjacency matrices into one:
(6) $A_{i,j}=\left\\{\begin{array}[]{ll}1&\text{if
}{\left\\{\begin{array}[]{ll}A_{i,j}^{\text{intra}}=1\text{ or}\\\
A_{i,j}^{\text{inter}}=1\text{ or}\\\
A_{i,j}^{\text{global}}=1\end{array}\right.}\\\
0&\text{{otherwise}.}\end{array}\right.$
Based on the fused adjacency matrix, we build the graph representations with
Equation 4. The fused adjacency matrix is also illustrated on the right of
Figure 3. In the experiments, we compare the two aggregation schemes.
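Reusing the sketches above, fused aggregation reduces to an element-wise OR of the three masks followed by a single propagation step:

```python
import torch

# A_intra, A_inter, A_global from build_masks above; H, W as in hgnn_layer.
A_fused = torch.clamp(A_intra + A_inter + A_global, max=1.0)  # Equation 6
H_next = hgnn_layer(A_fused, H, W)                            # Equation 4
```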
### 4.3. Prediction and Objective
After representing the question and document with the graph neural networks, we
collect the sentence-level nodes and make predictions on them. These
sentence-level nodes, comprising the <EOS> and special HTML tokens, form the
answer candidate set $\mathcal{S}$. The final probability of each node in
$\mathcal{S}$ is calculated by:
(7)
$P_{i\in\mathcal{S}}=\sigma(\textbf{H}^{M}_{i\in\mathcal{S}}\textbf{w}_{o})$
where $\sigma$ is the sigmoid function and $\textbf{w}_{o}$ is a weight vector
that projects the last-layer HGNN representations into scalar values. The
training objective is to minimize the binary cross-entropy between the
predictions and the gold labels:
(8)
$\mathcal{L}_{BCE}=-\mathbbm{1}_{i\in\mathcal{S}_{P}}\log\left(P_{i\in\mathcal{S}_{P}}\right)-\mathbbm{1}_{i\in\mathcal{S}_{N}}\log\left(1-P_{i\in\mathcal{S}_{N}}\right)$
where $\mathcal{S}_{P}$ is the positive (1) nodes set and $\mathcal{S}_{N}$ is
the negative (0) nodes set in a document,
$\mathcal{S}_{P}\cup\mathcal{S}_{N}=\mathcal{S}$.
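A minimal sketch of Equations 7 and 8, folding the sigmoid into a numerically stable BCE-with-logits call; all names are illustrative:

```python
import torch
import torch.nn.functional as F

def node_loss(H_M, w_o, candidate_idx, labels):
    """Score candidate nodes (Equation 7) and compute the binary
    cross-entropy against the 0/1 node labels (Equation 8).
    H_M: (N, D) final HGNN output; w_o: (D,) output weight vector;
    candidate_idx: indices of the nodes in S; labels: 0/1 per candidate."""
    logits = H_M[candidate_idx] @ w_o  # (|S|,); sigma is folded into the loss
    return F.binary_cross_entropy_with_logits(logits, labels.float())
```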
### 4.4. Unsupervised Pre-training
Recent work in NLP has shown the great advantage of large-scale pre-training
(Devlin et al., 2018; Radford et al., 2019; Roberts et al., 2020). However,
unlike previous pre-training setups whose input is a raw text sequence, ComQA
carries document-structure information. We therefore devise three types of
unsupervised objectives for pre-training:
Masked Language Model (MLM): similar to Devlin et al. (2018), we mask some
words in the sequence, and the model must predict the original words from the
surrounding context. We mask 15% of the sequence and never mask the special
tokens.
Question Selection (QS): ComQA is constructed from web pages where the question
is the page title and the document is the page content, so there is an
inherent correlation between the question and the document. We therefore
propose question-selection pre-training, a binary classification task that
determines whether the question, i.e., the title, is relevant to the document.
We replace the title with a random title to form a negative example and keep
the original title-document pair as the positive example. We use the
representation of the question indicator,
$\textbf{H}^{M}_{i=\texttt{<{SEP}>}}$, for prediction just as in Equation 7,
except with a different weight vector.
Node Selection (NS): in addition to question selection, we perform a
node-selection task that determines whether a node is relevant to its
surrounding document. This task is similar to ComQA in that the prediction is
also made on document nodes. We use three heuristic rules to construct the
negative nodes (a sketch follows below): 1) we replace a sentence in the
document with a random sentence; 2) we randomly shuffle the words in a sentence
and treat the resulting sentence as a negative node; 3) we randomly swap two
nodes in the document and treat both as negative. We replace 15% of the
original nodes in the document with negative nodes.
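A sketch of the three negative-node heuristics; the sampling details of the actual implementation may differ, and the names `sentences` and `corpus` are illustrative:

```python
import random

def make_negative_node(sentences, i, corpus):
    """Construct a negative version of node i by one of the three NS
    heuristics. `sentences` is the list of document nodes; `corpus` is a
    pool of random sentences drawn from other documents."""
    choice = random.randrange(3)
    if choice == 0:
        # 1) replace the sentence with a random one from the corpus
        return random.choice(corpus)
    if choice == 1:
        # 2) shuffle the words of the sentence
        words = sentences[i].split()
        random.shuffle(words)
        return " ".join(words)
    # 3) swap node i with another node; both swapped positions are negatives
    j = random.randrange(len(sentences))
    sentences[i], sentences[j] = sentences[j], sentences[i]
    return sentences[i]
```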
Aggregation | Model | Dev Precision | Dev Recall | Dev F1 | Dev Accuracy | Test Precision | Test Recall | Test F1 | Test Accuracy
---|---|---|---|---|---|---|---|---|---
- | SequentialTag${}_{\text{lstm}}$ | 51.7 | 59.1 | 56.0 | 33.9 | 43.2 | 56.1 | 47.9 | 25.3
- | SequentialTag${}_{\text{bert}}$ | 65.3 | 78.2 | 73.1 | 38.2 | 61.9 | 73.7 | 66.5 | 32.9
- | LSTM | 73.1 | 39.9 | 55.3 | 28.2 | 83.4 | 46.1 | 59.4 | 28.2
- | BiDAF | 72.5 | 64.7 | 69.1 | 35.7 | 69.4 | 58.7 | 63.9 | 30.2
- | QA-Net | 75.4 | 63.3 | 68.8 | 36.6 | 71.6 | 60.9 | 65.3 | 31.8
- | BERT${}_{\text{official}}^{*}$ | 81.9 | 64.8 | 70.9 | 35.3 | 81.1 | 66.2 | 72.9 | 35.4
- | BERT${}_{\text{base}}$ | 84.2 | 78.3 | 80.5 | 42.9 | 73.5 | 75.4 | 74.1 | 37.9
- | BERT${}_{\text{large}}$ | 85.1 | 79.2 | 82.1 | 46.3 | 77.3 | 78.0 | 77.6 | 43.2
Pipeline | HGNN${}_{\text{small}}$ | 79.3 | 76.1 | 76.9 | 42.0 | 68.2 | 74.5 | 72.0 | 36.5
Pipeline | HGNN${}_{\text{base}}$ | 83.0 | 81.7 | 81.9 | 44.8 | 74.2 | 75.9 | 75.1 | 40.7
Pipeline | HGNN${}_{\text{large}}$ | 84.4 | 82.5 | 83.3 | 48.2 | 77.1 | 78.9 | 78.0 | 44.9
Pipeline | HGNN${}_{\text{large}}$+QS+NS | 85.9 | 83.1 | 84.2 | 50.8 | 78.4 | 78.5 | 78.4 | 46.3
Fused | HGNN${}_{\text{small}}$ | 80.2 | 75.1 | 77.2 | 42.9 | 70.7 | 75.8 | 73.9 | 38.2
Fused | HGNN${}_{\text{base}}$ | 86.0 | 82.1 | 83.4 | 47.3 | 76.6 | 78.1 | 77.3 | 43.8
Fused | HGNN${}_{\text{large}}$ | 86.9 | 83.7 | 85.4 | 50.7 | 78.9 | 80.0 | 79.0 | 47.1
Fused | HGNN${}_{\text{large}}$+QS+NS | 87.3 | 84.9 | 86.8 | 53.9 | 79.6 | 80.3 | 79.9 | 49.2
- | Human | - | - | - | - | 88.5 | 91.0 | 89.8 | 80.5
Table 2. Main results on ComQA. The two SequentialTag models select the
relevant segments in the document by sequential tagging. The middle six models
operate on the plain token sequence without structure information. For the
BERT${}_{\text{official}}$ model, the max sequence length is limited to 512, so
its recall and accuracy are relatively low. Pipeline and Fused refer to the two
graph information-aggregation operations. QS and NS refer to our model
pre-trained with the additional question-selection and node-selection tasks
described in Section 4.4.
## 5\. Experiments
### 5.1. Common Setup
In all experiments, we tokenize the text with BPE using sentencepiece (Kudo and
Richardson, 2018) and set the vocabulary size to 50,000. We use the Adam
(Kingma and Ba, 2014) optimizer with 5k warm-up steps and a linearly decaying
learning rate; $\beta_{1},\beta_{2},\epsilon$ are set to 0.9, 0.99 and
$10^{-6}$, respectively. For both pre-training and fine-tuning, the max
learning rate is $10^{-4}$. We apply decoupled weight decay (Loshchilov and
Hutter, 2018) with scale 0.1, and dropout (Srivastava et al., 2014) with drop
probability 0.1 on every parameter and attention mask. We clip the max
gradient norm to 0.2. The batch size is 256 during pre-training and 64 during
fine-tuning, and the document sequence length is truncated to 1024. We use the
PyTorch (Paszke et al., 2019) 1.4.0 framework, and all experiments are
conducted on 16 Volta V100 GPUs.
We train three sizes of the HGNN model. HGNN${}_{\text{small}}$ contains 6
Transformer encoder layers, each with 8 heads and hidden size 512.
HGNN${}_{\text{base}}$ contains 12 Transformer encoder layers, each with 12
heads and hidden size 768. HGNN${}_{\text{large}}$ contains 24 Transformer
encoder layers, with the number of heads and hidden size set to 16 and 1024,
respectively. For the hierarchical graph neural networks, we set the number of
layers $M$ to 4.
We use 400 million Chinese web pages for pre-training, occupying nearly 1.2 TB
of disk space. We clean the documents by removing hyperlinks, phone numbers,
and special symbols. Note that for the QS pre-training task we restrict the
title to be a valid question; for the other pre-training tasks, MLM and NS, we
use the unfiltered web pages. We run 1 million pre-training steps for every
pre-trained model. The pre-training loss of HGNN${}_{\text{base}}$ is shown in
Figure 4.
Figure 4. The pre-training loss of the proposed HGNN${}_{\text{base}}$ model.
The x-axis is the pre-training steps ($\times 100$).
### 5.2. Evaluation Metrics
In ComQA, each document has several nodes that can be selected as answer
components. Denote the set of nodes the model predicts as positive by
$\mathcal{S}_{R}=\\{i\,|\,P_{i}>\text{threshold}\\}$. For each document, we
define Precision, Recall, F1, and Accuracy:
$\displaystyle\text{Precision}=\frac{|\mathcal{S}_{P}\bigcap\mathcal{S}_{R}|}{|\mathcal{S}_{R}|},\qquad\text{Recall}=\frac{|\mathcal{S}_{P}\bigcap\mathcal{S}_{R}|}{|\mathcal{S}_{P}|},$
$\displaystyle\text{F1}=\frac{2\times\text{Precision}\times\text{Recall}}{\text{Precision}+\text{Recall}},$
$\displaystyle\text{Accuracy}={\left\\{\begin{array}[]{ll}1&\text{if
}\mathcal{S}_{P}\equiv\mathcal{S}_{R}\\\
0&\text{otherwise}\end{array}\right.}$
The threshold is tuned on the development set. The accuracy is a binary value
denoting whether all answer nodes are selected exactly. We average every
document’s precision, recall, F1, and accuracy to obtain the final precision,
recall, F1, and accuracy, respectively. Note that the positive node set
$\mathcal{S}_{P}$ can be empty.
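The per-document metrics can be computed as below; the handling of empty sets is our own convention for the corner cases, since the text only notes that $\mathcal{S}_{P}$ may be empty:

```python
def document_metrics(pred, gold):
    """Per-document metrics of Section 5.2. `pred` is S_R (predicted
    positive nodes), `gold` is S_P (labeled positive nodes); both are
    Python sets of node indices and either may be empty."""
    tp = len(pred & gold)
    # convention: an empty denominator scores 1.0 only if both sets are empty
    precision = tp / len(pred) if pred else float(pred == gold)
    recall = tp / len(gold) if gold else float(pred == gold)
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall > 0 else 0.0)
    accuracy = float(pred == gold)
    return precision, recall, f1, accuracy
```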
### 5.3. Baseline Methods
We compare our model with five baseline models:
* •
SequentialTagging: a sequential tagging method that predicts the begin, inside,
and outside (BIO) tag of each word in the document. This model can flexibly
select discontiguous segments in the document; however, the selected segments
are not guaranteed to align with sentences. We therefore add to the prediction
set $\mathcal{S}_{R}$ every sentence with more than half of its words tagged
$B$ or $I$. We evaluate two sequential tagging models, one based on BiLSTM
(Chiu and Nichols, 2016) and one based on BERT (Devlin et al., 2018).
* •
LSTM (Hochreiter and Schmidhuber, 1997): a bi-directional LSTM that processes
the question and document sequentially. As in the proposed HGNN, we
concatenate the question and document tokens, and the prediction is made on
the special tokens.
* •
BiDAF (Seo et al., 2016): a widely used attention-based method. It builds
LSTM-based bi-directional attention flow between the words in the question and
the words in the document, and achieves promising results on many QA tasks.
Instead of predicting the start and end positions of an answer span, we use
the prediction method of Section 4.3 to select answer nodes.
* •
QA-Net (Yu et al., 2018): the current state-of-the-art machine reading
comprehension method without pre-training. QANet uses no recurrent networks;
it consists exclusively of convolution and self-attention, where convolution
models local interactions and self-attention models global interactions.
Unlike the original paper (Yu et al., 2018), we adopt only the architecture
and do not apply back-translation for data augmentation.
* •
BERT (Devlin et al., 2018): a pre-training method that has set new records on
many NLP tasks. It is based on the Transformer encoder (Vaswani et al., 2017)
and is trained with masked language modeling and next-sentence prediction
objectives. In fact, our model without the hierarchical graph neural networks
reduces to BERT. We use two variants: the officially released Chinese BERT
base model (BERT${}_{\text{official}}$;
https://github.com/huggingface/transformers), whose max length of 512 would
truncate too many ComQA samples, and BERT models pre-trained on the same data
as the HGNN with max length 1024, in the BERT${}_{\text{base}}$ and
BERT${}_{\text{large}}$ configurations.
Model | #Head | #Layer | #Hidden | #Interconnect | #Parameter
---|---|---|---|---|---
LSTM | - | 3 | 128 | - | 6,796,544
QA-Net | - | - | 128 | - | 13,876,706
BiDAF | - | - | 100 | - | 10,747,168
BERT${}_{\text{official}}$ | 12 | 12 | 768 | 3072 | 102,268,416
BERT${}_{\text{base}}$ | 12 | 12 | 768 | 3072 | 102,268,416
BERT${}_{\text{large}}$ | 16 | 24 | 1024 | 4096 | 329,486,296
HGNN${}_{\text{small}}$ | 8 | 6 | 512 | 2048 | 45,561,856
HGNN${}_{\text{base}}$ | 12 | 12 | 768 | 3072 | 125,400,576
HGNN${}_{\text{large}}$ | 16 | 24 | 1024 | 4096 | 362,579,328
Table 3. The specific configuration of different models.
The specific configurations of these models are listed in Table 3. In addition
to the above baselines, we also evaluate human performance: we randomly sample
200 questions from the test set and ask a volunteer to answer them. The
results of all models on the dev and test sets are shown in Table 2.
### 5.4. Main Results
Table 2 shows that our model achieves the best results among all baseline
models. For the sequential tagging models, the prediction is made at the word
level but the evaluation at the sentence level; this training-evaluation gap
results in poor performance on ComQA. The performance of the LSTM is
particularly poor, since it has no attention mechanism, which is vital for QA
(Huang et al., 2018). Moreover, a ComQA input document contains more than 500
tokens on average, and current sequential models have difficulty processing
such long sequences. BiDAF and QA-Net, although strong on many QA tasks, still
lag behind the pre-training methods.
The proposed HGNN builds on BERT, yet it achieves a 3.2/5.9 absolute gain in
F1/accuracy for the base version of the model and a 1.4/3.9 absolute
improvement for the large model, which shows the effectiveness of the proposed
hierarchical graph neural networks. The HGNN is based on BERT but introduces
the inductive bias of document structure into the model. In our view, the
graph structure guides information aggregation from the words to the sentences
to the document, which is crucial for ComQA.
Between the two information-aggregation methods, we find that fused
aggregation, which merges the three-level connections into a single adjacency
matrix, performs better. Because pipeline aggregation strictly separates the
three levels of aggregation, low-level information still has difficulty
propagating to the higher levels, especially across the multiple layers of the
HGNN. Nevertheless, since humans achieve 80.5 accuracy on the test set, our
results suggest that there is still room for improvement on ComQA.
Figure 5. Ablation over number of pre-training steps. The x-axis is the number
of parameter updates. We fine-tune each checkpoint 20 epochs, and show the
boxplot of the 20 results. We report the F1 and accuracy over the
HGNN${}_{\text{base}}$ model (based on Fused connection, upper half) and
BERT${}_{\text{base}}$ model (lower half).
### 5.5. The Effect of Pre-training
Table 2 shows that pre-training yields significant improvement. In this
section, we investigate the effect of pre-training in more detail along two
dimensions: the scale of pre-training and the pre-training objectives.
First, Liu et al. (2019) and Devlin et al. (2018) have shown that more
pre-training steps consistently improve downstream performance. We therefore
load model checkpoints along the pre-training trajectory and fine-tune each on
the ComQA dataset for 20 epochs. We plot the F1 and accuracy of the different
pre-training checkpoints in Figure 5. As pre-training proceeds, the model
improves consistently regardless of its architecture; scaling is thus a
reliable route to better model quality on ComQA. Nonetheless, after a
sufficient number of pre-training steps the performance saturates: in fact, we
find the best results are obtained after nearly 1.2 million pre-training
steps, which consume only 25% of the pre-training data.
Figure 6. Ablation over the pre-training objectives. The result is based on
HGNN${}_{\text{base}}$. Figure 7. The F1 of models fine-tuned with different
amounts of ComQA data. The x-axis is the proportion (number) of fine-tuning
ComQA examples.
Next, we investigate the pre-training tasks proposed in Section 4.4. Table 2
already shows that our model performs better when pre-trained with the
additional QS and NS tasks. We conduct an ablation study over the three
pre-training objectives for HGNN${}_{\text{base}}$; the result is shown in
Figure 6.
We can see that the model achieves better results in terms of both F1 and
accuracy with QS and NS. Under the MLM objective, the input is always a
corrupted sentence containing the special <MASK> token, which never occurs
during fine-tuning; this discrepancy between pre-training and fine-tuning may
hurt the model (Yang et al., 2019). By contrast, our proposed
question-selection and node-selection pre-training tasks are aligned with
ComQA: QS learns the relevance between the question and the document, while NS
learns the dependency of a node on its document. In fact, when pre-trained
with NS, our model can be applied directly to ComQA without any new
parameters, which yields a faster convergence rate. Figure 7 shows the
performance of the model as a function of the amount of ComQA training data:
pre-trained with QS and NS, the model achieves very good results with only
about six thousand labeled examples, surpassing the fully-trained QA-Net.
Devising appropriate pre-training tasks can therefore substantially improve
the generalization ability of the model.
### 5.6. The Effect of Connection Type
In Section 4.2.1, we introduced three types of graph connection: the
intra-sentence, inter-sentence, and global connection. Since the graph
structure of the document is a key factor in ComQA, we analyze the specific
importance of each. Concretely, we build the adjacency matrix via fused
aggregation (Equation 6) using different combinations of the connections.
$\clubsuit$ | $\heartsuit$ | $\spadesuit$ | Precision | Recall | F1 | Accuracy
---|---|---|---|---|---|---
$\surd$ | $\surd$ | $\surd$ | 76.6 | 78.1 | 77.3 | 43.8
$\surd$ | $\times$ | $\times$ | -2.4 | -2.2 | -2.3 | -5.6
$\times$ | $\surd$ | $\times$ | -0.6 | -0.7 | -0.6 | -0.8
$\times$ | $\times$ | $\surd$ | -1.1 | -1.8 | -1.5 | -3.3
$\surd$ | $\surd$ | $\times$ | -0.5 | -0.8 | -0.6 | -0.9
$\surd$ | $\times$ | $\surd$ | -1.3 | -1.9 | -1.6 | -3.5
$\times$ | $\surd$ | $\surd$ | -0.3 | -0.4 | -0.3 | -0.6
$\times$ | $\times$ | $\times$ | -3.1 | -2.7 | -3.0 | -5.9
Table 4. Ablation results for the different connection types of
HGNN${}_{\text{base}}$ with the fused operation. $\clubsuit$ denotes the
intra-sentence connection, $\heartsuit$ the inter-sentence connection, and
$\spadesuit$ the global connection. $\surd$ means the connection is kept in
Equation 6. The last row, with no connections, reduces to the
BERT${}_{\text{base}}$ model.
The test-set results are shown in Table 4. All three connection types matter
for ComQA, but the inter-sentence connection is the most important one:
removing $A^{\text{inter}}$ costs more than one point. The intra-sentence
connection, on the other hand, is less useful. We conjecture that the basic
sentence representations have already been built by the BERT-based sequence
encoder, so adding more intra-sentence interaction may not benefit the task
much. By contrast, the inter-sentence and global connections introduce the
specific inductive bias of the document structure into the model (Battaglia et
al., 2018), which helps achieve good performance on ComQA. The result also
suggests that incorporating sentence-level information is very useful for
document understanding.
### 5.7. Error Analysis
Finally, Table 2 shows that gaps remain between our model and human
performance. To better understand the remaining challenges, we randomly sample
100 incorrectly predicted examples of the HGNN${}_{\text{base}}$ model from
the test set and classify them into four classes by error type:
1) Redundancy (41%): the most common mistake our model makes is redundancy;
that is, it is prone to selecting uninformative sentences. In particular, it
sometimes selects a sentence at the beginning of the document that merely
restates the question, which is redundant from the perspective of QA.
2) Missed Sentence (30%): our model is also prone to missing important
sentences. When document processing is imperfect, i.e., the sentences are not
cleanly segmented, some short but useful sentences are missed.
3) Answerability (24%): although some extracted answers appear correct in
form, the answer itself is wrong, such as counterfactual or overly subjective
answers. Resolving this type of error requires background knowledge, such as
commonsense.
4) Others (5%): the remaining errors stem from various causes such as
ambiguous questions, incorrect labels, and so on.
## 6\. Conclusion
In this paper, we study compositional question answering, where the answer is
composed of discontiguous segments of a document. We present a large-scale
Chinese ComQA dataset containing more than 120k human-labeled questions,
constructed under rigorous inspection to ensure high quality. To solve the
ComQA problem, we propose a hierarchical graph neural network that
incorporates the document’s graph structure into the model. We also devise two
novel pre-training tasks, question selection and node selection. The proposed
methods achieve significant improvements over previous methods, and our
ablation studies demonstrate the benefit of the proposed pre-training tasks
and graph structure. Nevertheless, a large gap remains between our model and
human performance, suggesting that there is still room for improvement on
ComQA.
## 7\. Acknowledgments
We thank the anonymous reviewers for their insightful comments. We also
appreciate the dedicated annotation efforts contributed by 马琳、李潇如、张智程, among
others.
## Appendix A Appendix
The annotation interface is shown in Figure 8, and the templates we use to
determine whether a title or user query is a valid question are shown in Table
5.
Figure 8. A snapshot of the annotation interface. Each row is a single node
that can serve as a final answer component. The question is the page title at
the top. Note that images and tables can also be selected.
为什么 (why),怎么回事 (what happened),什么情况 (what is the matter),啥情况 (what is the
matter),咋回事 (what is wrong),原因 (reason),原理 (principle),由来 (origin),来由
(reason),会怎 (how will it be),为啥 (why),为何 (why),怎么还 (why still),怎么不 (why
not),为甚 (why),为什 (why),看法 (opinion),评价 (evaluation),推荐 (recommendation),分享
(share),评测 (evaluate),排行 (rankings),排名 (ranking),对比 (compare),对待 (treat),区别
(distinguish),差异 (divergence),差别 (difference),不同 (different),好吗 (okay ?),好不好
(it is okay ?),有用吗 (is it useful ?),哪个好 (which is better ?),哪家好 (which one is
better ?),比较好 (better),谁好 (who is good ?),谁厉害 (who is better ?),价值 (value),意义
(meaning),作用 (effect),用处 (use),功效 (function),危害 (harm),禁忌 (taboo),好处
(advantage),坏处 (disadvantage),优点 (advantage),缺点 (disadvantage),特点 (feature),特征
(characteristic),影响 (influence),哪些 (which),怎么样的 (how about it ?),哪 (where),还是
(still),怎么 (how),如何 (how about),怎样 (how),方法 (method),步骤 (step),操作
(operation),方案 (plan),办法 (method),教程 (course),方式 (way),玩法 (way of playing),攻略
(strategy),设置 (set up),自制 (homemade),做法 (action),过程 (process),流程 (process),规划
(planning),技巧 (skill),手续 (procedure),办理 (handle),规定 (regulation),要求
(demand),事项 (item),范围 (range),用什么 (by what),是谁 (who),谁是 (who is),谁最 (who is
the best),什么 (what),多长 (how long),多少 (how many),多大 (how old),多重 (how heavy),多久
(how long),多远 (how far),多小 (how small),多块 (multiple blocks),多高 (how high),条件
(condition),介绍 (introduction),简介 (introduction),概况 (overview),简要 (brief),简明
(concise),意思 (meaning),标准 (standard),指标 (index),现状是 (the present situation
is),什么叫 (what is called),何为 (what is),何谓 (what is),解释 (explanation),含义
(meaning),是否 (whether),能否 (can),可否 (can),是不是 (yes or no ?),会不会 (is not it
?),能不能 (can or not ?),有没有 (have or not),可不可 (can or not),吗 (is it ?),么
(what),哪 (where),几 (how many),有多 (how much)
Table 5. The templates we used to determine whether a title is a question when
constructing the ComQA. There are 122 templates in total and the derived
questions have more than 90% accuracy.
## References
* (1)
* Ba et al. (2016) Jimmy Ba, Jamie Ryan Kiros, and Geoffrey E. Hinton. 2016\. Layer Normalization. _ArXiv_ abs/1607.06450 (2016).
* Baradaran et al. (2020) Razieh Baradaran, Razieh Ghiasi, and Hossein Amirkhani. 2020\. A survey on machine reading comprehension systems. _arXiv preprint arXiv:2001.01582_ (2020).
* Battaglia et al. (2018) Peter W Battaglia, Jessica B Hamrick, Victor Bapst, Alvaro Sanchez-Gonzalez, Vinicius Zambaldi, Mateusz Malinowski, Andrea Tacchetti, David Raposo, Adam Santoro, Ryan Faulkner, et al. 2018\. Relational inductive biases, deep learning, and graph networks. _arXiv preprint arXiv:1806.01261_ (2018).
* Chiu and Nichols (2016) Jason PC Chiu and Eric Nichols. 2016. Named entity recognition with bidirectional LSTM-CNNs. _Transactions of the Association for Computational Linguistics_ 4 (2016), 357–370.
* Cui et al. (2005) Hang Cui, Renxu Sun, Keya Li, Min-Yen Kan, and Tat-Seng Chua. 2005. Question answering passage retrieval using dependency relations. In _Proceedings of the 28th annual international ACM SIGIR conference on Research and development in information retrieval_. ACM, 400–407.
* Devlin et al. (2018) Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. Bert: Pre-training of deep bidirectional transformers for language understanding. _arXiv preprint arXiv:1810.04805_ (2018).
* Dhingra et al. (2017) Bhuwan Dhingra, Kathryn Mazaitis, and William W. Cohen. 2017\. Quasar: Datasets for Question Answering by Search and Reading. _CoRR_ abs/1707.03904 (2017).
* Dua et al. (2019) Dheeru Dua, Yizhong Wang, Pradeep Dasigi, Gabriel Stanovsky, Sameer Singh, and Matt Gardner. 2019\. DROP: A Reading Comprehension Benchmark Requiring Discrete Reasoning Over Paragraphs. In _Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers)_. 2368–2378.
* Dunn et al. (2017) Matthew Dunn, Levent Sagun, Mike Higgins, Ugur Guney, Volkan Cirik, and Kyunghyun Cho. 2017\. Searchqa: A new q&a dataset augmented with context from a search engine. _arXiv preprint arXiv:1704.05179_ (2017).
* Green Jr et al. (1961) Bert F Green Jr, Alice K Wolf, Carol Chomsky, and Kenneth Laughery. 1961. Baseball: an automatic question-answerer. In _Papers presented at the May 9-11, 1961, western joint IRE-AIEE-ACM computer conference_. ACM, 219–224.
* Greenwood (2005) Mark Andrew Greenwood. 2005\. _Open-domain question answering_. Ph.D. Dissertation. University of Sheffield, UK.
* Hamilton et al. (2017) William L Hamilton, Rex Ying, and Jure Leskovec. 2017\. Representation learning on graphs: Methods and applications. _arXiv preprint arXiv:1709.05584_ (2017).
* Harabagiu et al. (2003) Sanda M Harabagiu, Steven J Maiorano, and Marius A Pasca. 2003\. Open-domain textual question answering techniques. _Natural Language Engineering_ 9, 3 (2003), 231\.
* Hastie et al. (2009) Trevor Hastie, Robert Tibshirani, and Jerome Friedman. 2009\. _The elements of statistical learning: data mining, inference, and prediction_. Springer Science & Business Media.
* Hendrycks and Gimpel (2016) Dan Hendrycks and Kevin Gimpel. 2016. Gaussian error linear units (gelus). _arXiv preprint arXiv:1606.08415_ (2016).
* Hermann et al. (2015) Karl Moritz Hermann, Tomas Kocisky, Edward Grefenstette, Lasse Espeholt, Will Kay, Mustafa Suleyman, and Phil Blunsom. 2015. Teaching machines to read and comprehend. In _NIPS_. 1684–1692.
* Hochreiter and Schmidhuber (1997) Sepp Hochreiter and Jürgen Schmidhuber. 1997. Long short-term memory. _Neural computation_ 9, 8 (1997), 1735–1780.
* Huang et al. (2018) Hsin-Yuan Huang, Chenguang Zhu, Yelong Shen, and Weizhu Chen. 2018\. FusionNet: Fusing via Fully-aware Attention with Application to Machine Comprehension. In _International Conference on Learning Representations_.
* Ji et al. (2019) Tao Ji, Yuanbin Wu, and Man Lan. 2019. Graph-based dependency parsing with graph neural networks. In _Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics_. 2475–2485.
* Khashabi et al. (2018) Daniel Khashabi, Snigdha Chaturvedi, Michael Roth, Shyam Upadhyay, and Dan Roth. 2018\. Looking beyond the surface: A challenge set for reading comprehension over multiple sentences. In _Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers)_. 252–262.
* Kingma and Ba (2014) Diederik P. Kingma and Jimmy Ba. 2014. Adam: A Method for Stochastic Optimization. _ICLR_ (2014).
* Kipf and Welling (2016) Thomas N Kipf and Max Welling. 2016. Semi-supervised classification with graph convolutional networks. _arXiv preprint arXiv:1609.02907_ (2016).
* Kudo and Richardson (2018) Taku Kudo and John Richardson. 2018. SentencePiece: A simple and language independent subword tokenizer and detokenizer for Neural Text Processing. _arXiv preprint arXiv:1808.06226_ (2018).
* Kwiatkowski et al. (2019) Tom Kwiatkowski, Jennimaria Palomaki, Olivia Redfield, Michael Collins, Ankur Parikh, Chris Alberti, Danielle Epstein, Illia Polosukhin, Jacob Devlin, Kenton Lee, et al. 2019\. Natural questions: a benchmark for question answering research. _Transactions of the Association for Computational Linguistics_ 7 (2019), 453–466.
* Liu et al. (2019) Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. Roberta: A robustly optimized bert pretraining approach. _arXiv preprint arXiv:1907.11692_ (2019).
* Loshchilov and Hutter (2018) Ilya Loshchilov and Frank Hutter. 2018. Decoupled Weight Decay Regularization. In _International Conference on Learning Representations_.
* Nguyen et al. (2016) Tri Nguyen, Mir Rosenberg, Xia Song, Jianfeng Gao, Saurabh Tiwary, Rangan Majumder, and Li Deng. 2016. MS MARCO: A Human Generated MAchine Reading COmprehension Dataset. _arXiv preprint arXiv:1611.09268_ (2016).
* Paszke et al. (2019) Adam Paszke, Sam Gross, Francisco Massa, Adam Lerer, James Bradbury, Gregory Chanan, Trevor Killeen, Zeming Lin, Natalia Gimelshein, Luca Antiga, et al. 2019\. PyTorch: An imperative style, high-performance deep learning library. In _Advances in Neural Information Processing Systems_. 8024–8035.
* Phillips (1960) Anthony Valiant Phillips. 1960\. Artificial Intelligence Project—RLE and MIT Computation Center Memo 16—A Question-Answering Routine’. (1960).
* Radford et al. (2019) Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. 2019\. Language models are unsupervised multitask learners. _OpenAI Blog_ 1, 8 (2019), 9.
* Rajpurkar et al. (2016) Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, and Percy Liang. 2016. SQuAD: 100,000+ Questions for Machine Comprehension of Text. In _EMNLP_.
* Richardson et al. (2013) Matthew Richardson, Christopher JC Burges, and Erin Renshaw. 2013. MCTest: A Challenge Dataset for the Open-Domain Machine Comprehension of Text.. In _EMNLP_ , Vol. 1. 2.
* Roberts et al. (2020) Adam Roberts, Colin Raffel, and Noam Shazeer. 2020\. How Much Knowledge Can You Pack Into the Parameters of a Language Model? _arXiv preprint arXiv:2002.08910_ (2020).
* Sennrich et al. (2016) Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016\. Neural Machine Translation of Rare Words with Subword Units. In _Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)_. 1715–1725.
* Seo et al. (2016) Min Joon Seo, Aniruddha Kembhavi, Ali Farhadi, and Hannaneh Hajishirzi. 2016. Bidirectional Attention Flow for Machine Comprehension. _CoRR_ abs/1611.01603 (2016).
* Srivastava et al. (2014) Nitish Srivastava, Geoffrey Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. 2014. Dropout: a simple way to prevent neural networks from overfitting. _The journal of machine learning research_ 15, 1 (2014), 1929–1958.
* Swayamdipta et al. (2020) Swabha Swayamdipta, Roy Schwartz, Nicholas Lourie, Yizhong Wang, Hannaneh Hajishirzi, Noah A Smith, and Yejin Choi. 2020. Dataset Cartography: Mapping and Diagnosing Datasets with Training Dynamics. _arXiv preprint arXiv:2009.10795_ (2020).
* Tian et al. (2020) Zhixing Tian, Yuanzhe Zhang, Xinwei Feng, Wenbin Jiang, Yajuan Lyu, Kang Liu, and Jun Zhao. 2020. Capturing Sentence Relations for Answer Sentence Selection with Multi-Perspective Graph Encoding.. In _AAAI_. 9032–9039.
* Trischler et al. (2016) Adam Trischler, Tong Wang, Xingdi Yuan, Justin Harris, Alessandro Sordoni, Philip Bachman, and Kaheer Suleman. 2016. NewsQA: A Machine Comprehension Dataset. _arXiv preprint arXiv:1611.09830_ (2016).
* Vaswani et al. (2017) Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In _Advances in neural information processing systems_. 5998–6008.
* Veličković et al. (2018) Petar Veličković, Guillem Cucurull, Arantxa Casanova, Adriana Romero, Pietro Liò, and Yoshua Bengio. 2018. Graph Attention Networks. In _International Conference on Learning Representations_.
* Verberne et al. (2009) Suzan Verberne, H van Halteren, Stephan Raaijmakers, DL Theijssen, and LWJ Boves. 2009\. Learning to Rank QA Data: Evaluating Machine Learning Techniques for Ranking Answers to Why-Questions. (2009).
* Ward Jr (1963) Joe H Ward Jr. 1963\. Hierarchical grouping to optimize an objective function. _Journal of the American statistical association_ 58, 301 (1963), 236–244.
* Welbl et al. (2018) Johannes Welbl, Pontus Stenetorp, and Sebastian Riedel. 2018\. Constructing datasets for multi-hop reading comprehension across documents. _Transactions of the Association for Computational Linguistics_ 6 (2018), 287–302.
* Yang et al. (2020) Junjie Yang, Zhuosheng Zhang, and Hai Zhao. 2020. Multi-span Style Extraction for Generative Reading Comprehension. _arXiv preprint arXiv:2009.07382_ (2020).
* Yang et al. (2019) Zhilin Yang, Zihang Dai, Yiming Yang, Jaime Carbonell, Russ R Salakhutdinov, and Quoc V Le. 2019\. Xlnet: Generalized autoregressive pretraining for language understanding. In _Advances in neural information processing systems_. 5754–5764.
* Yang et al. (2018) Zhilin Yang, Peng Qi, Saizheng Zhang, Yoshua Bengio, William W Cohen, Ruslan Salakhutdinov, and Christopher D Manning. 2018\. HotpotQA: A Dataset for Diverse, Explainable Multi-hop Question Answering. In _EMNLP_.
* Yao et al. (2019) Liang Yao, Chengsheng Mao, and Yuan Luo. 2019. Graph convolutional networks for text classification. In _Proceedings of the AAAI Conference on Artificial Intelligence_ , Vol. 33. 7370–7377.
* Yogish et al. (2017) Deepa Yogish, TN Manjunath, and Ravindra S Hegadi. 2017\. Survey on trends and methods of an intelligent answering system. In _2017 International Conference on Electrical, Electronics, Communication, Computer, and Optimization Techniques (ICEECCOT)_. IEEE, 346–353.
* Yu et al. (2018) Adams Wei Yu, David Dohan, Minh-Thang Luong, Rui Zhao, Kai Chen, Mohammad Norouzi, and Quoc V Le. 2018. QANet: Combining Local Convolution with Global Self-Attention for Reading Comprehension. In _International Conference on Learning Representations_.
* Zaheer et al. (2020) Manzil Zaheer, Guru Guruganesh, Avinava Dubey, Joshua Ainslie, Chris Alberti, Santiago Ontanon, Philip Pham, Anirudh Ravula, Qifan Wang, Li Yang, et al. 2020\. Big bird: Transformers for longer sequences. _arXiv preprint arXiv:2007.14062_ (2020).
* Zeng et al. (2020) Chengchang Zeng, Shaobo Li, Qin Li, Jie Hu, and Jianjun Hu. 2020. A survey on machine reading comprehension: Tasks, evaluation metrics, and benchmark datasets. _arXiv preprint arXiv:2006.11880_ (2020).
* Zhang et al. (2018) Yuhao Zhang, Peng Qi, and Christopher D Manning. 2018\. Graph Convolution over Pruned Dependency Trees Improves Relation Extraction. In _Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing_. 2205–2215.
* Zhang et al. (2020) Zhuosheng Zhang, Yiqing Zhang, Hai Zhao, Xi Zhou, and Xiang Zhou. 2020. Composing Answer from Multi-spans for Reading Comprehension. _arXiv preprint arXiv:2009.06141_ (2020).
Ars Inveniendi Analytica (2021), Paper No. 5, 35 pp.
DOI 10.15781/hk9g-zz18
Licensed under a Creative Commons Attribution License (CC-BY).
A Liouville-type theorem for
stable minimal hypersurfaces
Leon Simon
Stanford University
Communicated by Guido De Philippis
Abstract. We prove that if $M$ is a strictly stable complete minimal
hypersurface in $\smash{\varmathbb{R}^{n+1+\ell}}$ which has finite density at
infinity and which lies on one side of a cylinder
$\smash{\varmathbb{C}=\varmathbb{C}_{0}\times\varmathbb{R}^{\ell}}$, where
$\smash{\varmathbb{C}_{0}}$ is a strictly stable area minimizing hypercone in
$\smash{\varmathbb{R}^{n+1}}$ with ${\rm sing\,\,}\varmathbb{C}_{0}=\\{0\\}$,
then $M$ must be cylindrical—i.e. $M=S\times\varmathbb{R}^{\ell}$, where
$S\subset\smash{\varmathbb{R}^{n+1}}$ is a smooth strictly stable minimal
hypersurface in $\,\smash{\varmathbb{R}^{n+1}}$. Applications will be given
in _[Sim21a], [Sim21b]_.
Keywords. minimal hypersurface, strict stability, minimizing cone
## Introduction
The main theorem here (Theorem 1) establishes that if $M$ is a complete
strictly stable minimal hypersurface in $\varmathbb{R}^{n+1+\ell}$ lying on
one side of a cylindrical hypercone
$\varmathbb{C}=\varmathbb{C}_{0}\times\varmathbb{R}^{\ell}$ with ${\rm
sing\,\,}\varmathbb{C}_{0}=\\{0\\}$ and $\varmathbb{C}_{0}$ strictly stable
and minimizing, then $M$ must itself be cylindrical—i.e. of the form
$S\times\varmathbb{R}^{\ell}$ where $S$ is a smooth complete (not necessarily
connected) strictly stable minimal hypersurface in $\varmathbb{R}^{n+1}$.
This result, or more correctly its Corollary 1 below, is a crucial ingredient
in the author’s recent proof (in [Sim21b]) that, with respect to a suitable
$C^{\infty}$ metric for $\varmathbb{R}^{n+1+\ell}$, $n+\ell\geqslant 8$, there
are examples of strictly stable minimal hypersurfaces which have singular set
of the form $\\{0\\}\times K$, where $K$ is an arbitrary closed subset of
$\varmathbb{R}^{\ell}$.
An outline of the present paper is as follows. After a description of the main
results in §1 and some notational and technical preliminaries in §2 and §3, in
§4 we establish $L^{2}$ growth estimates for solutions of the Jacobi equation
(i.e. the linearization of the minimal surface equation) on $M$. These
estimates are applied to give growth estimates on (i) $(x,y)\cdot\nu$, where
$\nu$ is the unit normal of $M$, (ii)
$\nu_{\\!y}=(e_{n+2}\cdot\nu,\ldots,e_{n+1+\ell}\cdot\nu)$ (i.e. the
components of the unit normal $\nu$ of $M$ in the $y$-coordinate directions),
and (iii) (in §5) $d|M$, where $d(x)={\rm dist\,}((x,y),\varmathbb{C})$ is the
distance to the cylinder $\varmathbb{C}$. In §6 the growth estimates
established in §4 and §5 are combined to show that, if $\nu_{\\!y}$ is not
identically zero,
$R^{\gamma-\alpha}\leqslant\int_{M\cap\\{(x,y):|(x,y)|<R\\}}\nu_{\\!y}^{2}\,d\mu\leqslant R^{-2+\gamma+\alpha},\quad\gamma=\ell+2+\beta_{1},$
for each $\alpha\in(0,1)$ and all sufficiently large $R$ (depending on
$\alpha$ and $M$), where $\mu$ is $(n+\ell)$-dimensional Hausdorff measure,
and $\beta_{1}=2(({\textstyle\frac{n-2}{2}})^{2}+\lambda_{1})^{1/2}$, with
$\lambda_{1}$ the first eigenvalue of the Jacobi operator of the compact
minimal surface $\Sigma=\varmathbb{C}_{0}\cap\varmathbb{S}^{n}$. These
inequalities are clearly impossible for $R>1$, so we finally conclude that
indeed $\nu_{\\!y}$ is identically zero, showing that $M$ is cylindrical as
claimed in the main theorem.
## 1\. Main Results
Let $\varmathbb{C}_{0}$ be a minimal hypercone in $\varmathbb{R}^{n+1}$ with
${\rm
sing\,\,}\varmathbb{C}_{0}=\overline{\varmathbb{C}}_{0}\setminus\varmathbb{C}_{0}=\\{0\\}$
and let $\mathcal{L}_{\varmathbb{C}_{0}}$ be the Jacobi operator (linearized
minimal surface operator) defined by
$\mathcal{L}_{\varmathbb{C}_{0}}u=\Delta_{\varmathbb{C}_{0}}u+|A_{\varmathbb{C}_{0}}|^{2}u,$
where $|A_{\varmathbb{C}_{0}}|^{2}$ is the squared length of the second
fundamental form of $\varmathbb{C}_{0}$. In terms of the Jacobi operator
1.1 $\mathcal{L}_{\Sigma}=\Delta_{\Sigma}u+|A_{\Sigma}|^{2}u$
of the compact submanifold $\Sigma=\varmathbb{C}_{0}\cap\varmathbb{S}^{n}$
(which is minimal in $\varmathbb{S}^{n}$), we have
1.2 $\mathcal{L}_{\varmathbb{C}_{0}}u=r^{1-n}\frac{\partial}{\partial
r}\bigl{(}r^{n-1}\frac{\partial u}{\partial
r}\bigr{)}+r^{-2}\mathcal{L}_{\Sigma},\quad r=|x|.$
If $\lambda_{1}$ is the first eigenvalue of $-\mathcal{L}_{\Sigma}$, and
$\varphi_{1}>0$ is the corresponding eigenfunction (unique up to a constant
factor) we have (possibly complex-valued) solutions
$r^{\gamma_{1}^{\pm}}\varphi_{1}$ of $\mathcal{L}_{\varmathbb{C}_{0}}u=0$,
where $\gamma_{1}^{\pm}$ are the “characteristic exponents”
1.3
$\gamma_{1}^{\pm}=-\frac{n-2}{2}\pm\sqrt{\bigl{(}\frac{n-2}{2}\bigr{)}^{2}+\lambda_{1}}.$
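(A quick check using 1.2: for $u=r^{\gamma}\varphi_{1}$ one computes
$\mathcal{L}_{\varmathbb{C}_{0}}(r^{\gamma}\varphi_{1})=r^{\gamma-2}\bigl{(}\gamma^{2}+(n-2)\gamma-\lambda_{1}\bigr{)}\varphi_{1},$
so $\mathcal{L}_{\varmathbb{C}_{0}}u=0$ precisely when $\gamma^{2}+(n-2)\gamma-\lambda_{1}=0$, i.e. when $\gamma=\gamma_{1}^{\pm}$ as in 1.3; cf. 3.2 below.)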
We assume $\varmathbb{C}_{0}$ is strictly stable, i.e. there is $\lambda>0$
with
1.4
$\lambda\int_{\varmathbb{C}_{0}}|x|^{-2}\zeta^{2}(x)\,d\mu(x)\leqslant\int_{\varmathbb{C}_{0}}\bigl{(}|\nabla_{\varmathbb{C}_{0}}\zeta|^{2}-|A_{\varmathbb{C}_{0}}|^{2}\zeta^{2}\bigr{)}\,d\mu\,\,\,\,\forall\zeta\in
C_{c}^{\infty}(\varmathbb{R}^{n+1}),$
where $\mu$ is $n$-dimensional Hausdorff measure; subsequently $\mu$ will
always denote Hausdorff measure of the appropriate dimension.
Using 1.2 and 1.3, 1.4 is readily checked to be equivalent to the condition
that
1.5 $\lambda_{1}>-((n-2)/2)^{2},$
in which case
1.6 $\gamma_{1}^{-}<-\frac{n-2}{2}<\gamma_{1}^{+}<0.$
The main theorem here relates to hypersurfaces
$M\subset\varmathbb{R}^{n+1+\ell}$, where $\ell\geqslant 1$; points in
$\varmathbb{R}^{n+1+\ell}=\varmathbb{R}^{n+1}\times\varmathbb{R}^{\ell}$ will
be denoted
$(x,y)=(x_{1},\ldots,x_{n+1},y_{1},\ldots,y_{\ell}).$
In the main theorem, which we now state and which will be proved in §6, we
assume that (i) $M$ is a smooth complete minimal hypersurface in
$\varmathbb{R}^{n+1+\ell}$ lying on one side of
$\varmathbb{C}=\varmathbb{C}_{0}\times\varmathbb{R}^{\ell}$, i.e.
1.7 $M\subset U_{+}\times\varmathbb{R}^{\ell},$
where $U_{+}$ is one of the two connected components of
$\varmathbb{R}^{n+1}\setminus\overline{\varmathbb{C}}_{0}$, that (ii) $M$ is strictly
stable in the sense that (Cf. 1.4) there is $\lambda>0$ with
1.8
$\lambda\int_{M}|x|^{-2}\zeta^{2}(x,y)\,d\mu(x,y)\leqslant\int_{M}\bigl{(}|\nabla_{M}\zeta|^{2}-|A_{M}|^{2}\zeta^{2}\bigr{)}\,d\mu\,\,\,\,$
for all $\zeta\in C_{c}^{1}(\varmathbb{R}^{n+1+\ell})$, and that (iii) $M$ has
finite density at $\infty$, i.e.
1.9 $\sup_{R>1}R^{-n-\ell}\mu(M\cap B_{R})<\infty,$
where, here and subsequently, $B_{R}$ is the closed ball of radius $R$ and
centre $(0,0)$ in $\varmathbb{R}^{n+1+\ell}$:
$B_{R}=\\{(x,y)\in\varmathbb{R}^{n+1+\ell}:|(x,y)|\leqslant R\\};$
the corresponding open ball, which we shall occasionally use, is denoted
$\breve{B}_{R}=\\{(x,y)\in\varmathbb{R}^{n+1+\ell}:|(x,y)|<R\\}.$
1.10 Theorem (Liouville-type Theorem.) If $M$ is a smooth, complete, embedded
minimal hypersurface (without boundary) in $\varmathbb{R}^{n+1+\ell}$ such
that 1.7, 1.8 and 1.9 hold, then $M$ is cylindrical, i.e.
$M=S\times\varmathbb{R}^{\ell},$
where $S$ is a smooth complete strictly stable minimal hypersurface which is
contained in $U_{+}$.
Remark: Using the regularity theory of [SS81], the conclusion continues to
hold, with no essential change in the proof, in case $M$ is allowed to have a
singular set of finite $(n+\ell-2)$-dimensional Hausdorff measure. Indeed if
we use the regularity theory of [Wic14] then we need only assume _a priori_
that $M$ has zero $(n+\ell-1)$-dimensional Hausdorff measure.
For applications in [Sim21a], [Sim21b] we now state a corollary of the above
theorem, in which the stability hypothesis on $M$ is dropped, and instead
$\varmathbb{C}_{0}$ is assumed to be both strictly stable and strictly
minimizing, and we impose an _a priori_ smallness assumption on the last
$\ell$ components of the unit normal $\nu_{M}=(\nu_{1},\ldots,\nu_{n+1+\ell})$
of $M$; i.e. a smallness assumption on $(\nu_{n+2},\ldots,\nu_{n+1+\ell})\,$
which we subsequently write as
1.11 $\nu_{y}=(\nu_{y_{1}},\ldots,\nu_{y_{\ell}});\,\,\text{ i.e.\
}\nu_{y_{j}}=e_{n+1+j}\cdot\nu_{M},\,\,\,j=1,\ldots,\ell.$
Notice that such a smallness assumption on $|\nu_{y}|$ amounts to a
restriction on how rapidly $M$ can vary in the $y$-directions.
Recall (see [HS85]) $\varmathbb{C}_{0}$ is said to be strictly minimizing if
there is a constant $c>0$ such that
1.12 $\displaystyle\mu(\varmathbb{C}_{0}\cap B_{1})\leqslant\mu(T\cap
B_{1})-c\rho^{n}\text{ whenever $\rho\in(0,{\textstyle\frac{1}{2}}]$ and $T$
is a smooth }$ $\displaystyle\hskip 36.135pt\text{compact hypersurface-with-
boundary in $B_{1}\setminus B_{\rho}$ with $\partial
T=\varmathbb{C}_{0}\cap\varmathbb{S}^{n}$}.$
1.13 Corollary. Suppose $\varmathbb{C}_{0}$ is strictly stable and strictly
minimizing (as in 1.12) and $\alpha\in(0,1)$. Then there is
$\varepsilon_{0}=\varepsilon_{0}(\varmathbb{C}_{0},\alpha)\in(0,{\textstyle\frac{1}{2}}]$
such that if $M$ is a smooth, complete, embedded minimal hypersurface in
$\varmathbb{R}^{n+1+\ell}$ with
$\left\\{\begin{aligned} &M\subset U_{+}\times\varmathbb{R}^{\ell},\\\ &{\sup}_{R>1}R^{-n-\ell}\mu(M\cap B_{R})\leqslant(2-\alpha)\mu(\varmathbb{C}\cap B_{1}),\text{ and}\\\ &{\sup}_{M}|\nu_{y}|<\varepsilon_{0},\end{aligned}\right.$
then
$M=\lambda S\times\varmathbb{R}^{\ell}$
for some $\lambda>0$, where $S$ is the minimal hypersurface in $U_{+}$ as in
_[HS85]_ (see 7.1 in §7 below).
Applications of the above results are given in [Sim21a] and [Sim21b]. Although
the assumptions of strict stability in Theorem 1 and $|\nu_{y}|$ small in
Corollary 1 are appropriate for the applications in [Sim21a], [Sim21b], it
would be of interest to know if these restrictions can be significantly
relaxed—for example the question of whether or not mere stability would
suffice in place of the strict stability assumption in Theorem 1.
## 2\. Preliminaries concerning $M$
As in §1, $\varmathbb{C}_{0}$ will be a smooth embedded minimal hypercone in
$\varmathbb{R}^{n+1}$ with ${\rm sing\,\,}\varmathbb{C}_{0}=\\{0\\}$, and we
let $U_{\pm}$ be the two connected components of
$\varmathbb{R}^{n+1}\setminus\overline{\varmathbb{C}}_{0}$.
$M$ will be a smooth embedded minimal hypersurface, usually contained in
$U_{+}\times\varmathbb{R}^{\ell}$, although some results are formulated to
apply locally in a ball, and also independent of the inclusion assumption
$M\subset U_{+}\times\varmathbb{R}^{\ell}$.
For the moment we assume only that $M$ is minimal (i.e. has first variation
zero) in $\breve{B}_{R}$. Thus
2.1 $\int_{M}{\rm
div}_{M}Z(x,y)\,d\mu(x,y)=0,\,\,Z=(Z_{1},\ldots,Z_{n+1+\ell}),\,Z_{j}\in
C^{1}_{c}(\breve{B}_{R}),$
where ${\rm div}_{M}Z$ is the “tangential divergence” of $Z$:
$\displaystyle{\rm
div}_{M}Z_{|(x,y)}={\textstyle\sum}_{j=1}^{n+\ell}\tau_{j}\cdot D_{\tau_{j}}Z$
$\displaystyle={\textstyle\sum}_{k,m=1}^{n+1+\ell}{\textstyle\sum}_{j=1}^{n+\ell}\tau_{j}^{k}\tau_{j}^{m}D_{k}Z_{m}$
$\displaystyle={\textstyle\sum}_{i,j=1}^{n+1+\ell}g^{ij}(x,y)D_{i}Z_{j}(x,y),$
with $\tau_{j}=(\tau_{j}^{1},\ldots,\tau_{j}^{n+1+\ell})$,
$j=1,\ldots,n+\ell$, any locally defined orthonormal basis of $T_{(x,y)}M$,
$(x,y)\in M$ and
2.2
$g^{ij}=\delta_{ij}-\nu_{i}\nu_{j},\quad\nu=(\nu_{1},\ldots,\nu_{n+1+\ell})\text{
a unit normal for }M.$
For $v\in C^{2}(M)$ we let ${\rm graph\,}v$ be the graph of $v$ taken off $M$:
${\rm graph\,}v=\bigl{\\{}(x,y)+v(x,y)\nu_{\\!M}(x,y):(x,y)\in M\bigr{\\}}$
(notice that this may fail to be an embedded hypersurface unless $v$ has small
enough $C^{2}$ norm), and we take
2.3 $\mathcal{M}_{\\!M}(v)=\text{ the mean curvature operator on $M$}.$
Thus $\mathcal{M}_{\\!M}(v)$ is the Euler-Lagrange operator of the area
functional $\mathcal{A}_{\\!M}$ on $M$, defined by
$\mathcal{A}_{\\!M}(v)=\int_{M}J_{M}(V)\,d\mu$
where $V(x,y)=(x,y)+v(x,y)\nu_{M}(x,y)$ is the graph map taking $M$ to ${\rm
graph\,}v$ and $J_{M}(V)$ is the Jacobian of $V$:
$J_{M}(V)=\sqrt{\det\bigl{(}D_{\tau_{i}}V\cdot
D_{\tau_{j}}V\bigr{)}}=\sqrt{\det\bigl{(}\delta_{ij}+v_{i}v_{j}+v^{2}{\textstyle\sum}_{k}h_{ik}h_{jk}+2vh_{ij}\bigr{)}},$
where, for $(x,y)\in M$, $\tau_{1},\ldots,\tau_{n+\ell}$ is an orthonormal
basis for $T_{(x,y)}M$, $h_{ij}$ is the second fundamental form of $M$ with
respect to this basis, and $v_{i}=D_{\tau_{i}}v$. Since
$\sum_{i=1}^{n+\ell}h_{ii}=0$ we then have
2.4
$J_{M}(V)=\sqrt{1+|\nabla_{M}v|^{2}-|A_{M}|^{2}v^{2}+E\bigl{(}v(h_{ij}),v^{2}({\textstyle\sum}_{k}h_{ik}h_{jk}),(v_{i}v_{j})\bigr{)}}$
where $E$ is a polynomial of degree $n+\ell$ in the indicated variables with
constant coefficients depending only on $n,\ell$ and with each non-zero term
having at least degree $3$. So the second variation of $\mathcal{A}_{M}$ is
given by
${\textstyle\frac{d^{2}}{dt^{2}}}\bigl{|}_{t=0}\mathcal{A}(t\zeta)=\int_{M}\bigl{(}|\nabla_{M}\zeta|^{2}-|A_{M}|^{2}\zeta^{2}\bigr{)}\,d\mu=-\int_{M}\zeta\mathcal{L}_{M}\zeta\,d\mu,\quad\zeta\in
C^{1}_{c}(\varmathbb{R}^{n+1+\ell}),$
where $\mathcal{L}_{M}$ is the Jacobi operator on $M$ defined by
2.5 $\mathcal{L}_{M}v=\Delta_{M}v+|A_{M}|^{2}v,$
with $\Delta_{M}f={\rm div}_{M}(\nabla_{M}f)$ the Laplace-Beltrami operator on
$M$ and $|A_{M}|=\bigl{(}\sum h_{ij}^{2}\bigr{)}^{1/2}$ (the length of the
second fundamental form of $M$). Of course, by 2.3 and 2.4, $\mathcal{L}_{M}$
is the linearization of the mean curvature operator at $0$, i.e.
2.6
$\mathcal{L}_{M}v={\textstyle\frac{d}{dt}}\bigl{|}_{t=0}\mathcal{M}_{M}(tv).$
In view of 2.4 and the definition of $\mathcal{L}_{M}$, if
$|A_{M}||u|+|\nabla_{M}u|<1$ on a domain in $M$ then we can write
2.7 $\mathcal{M}(u)=\mathcal{L}_{M}(u)+{\rm div}_{M}E(u)+F(u)$
on that domain, where $|E(u)|\leqslant
C(|A_{M}|^{2}|u|^{2}+|\nabla_{M}u|^{2})$ and $|F(u)|\leqslant
C(|A_{M}|^{3}|u|^{2}+|A_{M}||\nabla_{M}u|^{2})$, $C=C(n,\ell)$.
We say that $M$ is _strictly stable_ in $\breve{B}_{R}$ if the second
variation of $\mathcal{A}_{\\!M}$ is strictly positive in the sense that
${\inf}_{\zeta\in
C^{1}_{c}(\breve{B}_{R}),\,\int_{M}|x|^{-2}\zeta^{2}(x,y)\,d\mu(x,y)=1}\,\,\,{\textstyle\frac{d^{2}}{dt^{2}}}\bigl{|}_{t=0}\mathcal{A}(t\zeta)>0,$
or, equivalently, that there is $\lambda>0$ such that 1.8 holds.
Observe that if we have such strict stability then by replacing $\zeta$ in 1.8
with $\zeta w$ we obtain
$\displaystyle\smash[b]{\lambda\int_{M}|x|^{-2}w^{2}\zeta^{2}(x,y)\,d\mu(x,y)\leqslant\int_{M}\bigl{(}w^{2}|\nabla_{M}\zeta|^{2}+\zeta^{2}|\nabla_{M}w|^{2}}$
$\displaystyle\hskip 180.67499pt+2\zeta
w\nabla_{M}\zeta\cdot\nabla_{M}w-|A_{M}|^{2}\zeta^{2}w^{2}\bigr{)}\,d\mu,$
and $\int_{M}2\zeta w\nabla_{M}\zeta\cdot\nabla_{M}w=\int
w\nabla_{M}\zeta^{2}\cdot\nabla_{M}w=-\int_{M}(|\nabla_{M}w|^{2}\zeta^{2}+\zeta^{2}w\Delta_{M}w)$,
so if $w$ is a smooth solution of $\mathcal{L}_{M}w=0$ on $M\cap\breve{B}_{R}$
then
2.8
$\lambda\int_{M}|x|^{-2}w^{2}\zeta^{2}\,d\mu(x,y)\leqslant\int_{M}w^{2}|\nabla_{M}\zeta|^{2}\,d\mu,\quad\zeta\in
C^{1}_{c}(\breve{B}_{R}).$
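(To see this, note that after substituting the identity above into the strict
stability inequality the $\zeta^{2}|\nabla_{M}w|^{2}$ terms cancel, while
$\int_{M}\zeta^{2}w(\Delta_{M}w+|A_{M}|^{2}w)\,d\mu=\int_{M}\zeta^{2}w\,\mathcal{L}_{M}w\,d\mu=0$.)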
We need one further preliminary for $M$, concerning asymptotics of $M$ at
$\infty$ in the case when $M$ is complete in all of $\varmathbb{R}^{n+1+\ell}$
with $M\subset U_{+}\times\varmathbb{R}^{\ell}$:
2.9 Lemma. If $M$ is a complete embedded minimal hypersurface in all of
$\varmathbb{R}^{n+1+\ell}$ satisfying 1.7 and 1.9, then $M$ has
$\varmathbb{C}$ with some constant integer multiplicity $q$ as its unique
tangent cone at $\infty$.
Furthermore if $M$ is stable (i.e. _1.8_ holds with $\lambda=0$), then for
each $\delta>0$ there is $R_{0}=R_{0}(\varmathbb{C}_{0},q,\delta)>1$ such that
(i)
$M\setminus\bigl{(}B_{R_{0}}\cup\bigl{\\{}(x,y):|x|\leqslant\delta|y|\bigr{\\}}\bigr{)}\subset\cup_{j=1}^{q}{\rm
graph\,}u_{j}\subset M$
where each $u_{j}$ is a $C^{2}$ function on a domain $\Omega$ of
$\varmathbb{C}$ containing
$\varmathbb{C}\setminus\bigl{(}B_{R_{0}}\cup\bigl{\\{}(x,y):|x|\leqslant\delta|y|\bigr{\\}}\bigr{)}$
and ${\rm
graph\,}u_{j}=\bigl{\\{}(x+u_{j}(x,y)\nu_{\varmathbb{C}_{0}}(x),y):(x,y)\in\Omega\bigr{\\}}$
($\nu_{\varmathbb{C}_{0}}=$ unit normal of $\varmathbb{C}_{0}$ pointing into
$U_{+}$), and
(ii) $\lim_{R\to\infty}{\sup}_{(x,y)\in\Omega\setminus
B_{R}}\bigl{(}|x||\nabla_{\varmathbb{C}}^{2}u_{j}(x,y)|+|\nabla_{\varmathbb{C}}u_{j}(x,y)|+|x|^{-1}u_{j}(x,y)\bigr{)}=0$.
Proof: Let $C(M)$ be a tangent cone of $M$ at $\infty$. Thus $C(M)$ is a
stationary integer multiplicity varifold with $\lambda C(M)=C(M)$ for each
$\lambda>0$, and support of
$C(M)\subset\overline{U}_{+}\times\varmathbb{R}^{\ell}$ and, by the
compactness theorem of [SS81], $C(M)$ is stable and ${\rm sing\,\,}C(M)$ has
Hausdorff dimension $\leqslant n+\ell-7$. So by the maximum principle of
[Ilm96] and the constancy theorem, $C(M)=\varmathbb{C}$ with constant
multiplicity $q$ for some $q\in\\{1,2,\ldots\\}$.
Thus $\varmathbb{C}$, with multiplicity $q$, is the unique tangent cone of $M$
at $\infty$, and the “sheeting theorem” [SS81, Theorem 1] is applicable,
giving (i) and (ii). $\Box$
## 3\. Preliminaries concerning $\varmathbb{C}_{0}$ and $\varmathbb{C}$
$\varmathbb{C}_{0}\subset\varmathbb{R}^{n+1}\setminus\\{0\\}$ continues to
denote a smooth embedded minimal hypercone with ${\rm
sing\,\,}\varmathbb{C}_{0}=\overline{\varmathbb{C}}_{0}\setminus\varmathbb{C}_{0}=\\{0\\}$,
$U_{\pm}$ denote the two connected components of
$\varmathbb{R}^{n+1}\setminus\overline{\varmathbb{C}}_{0}$, and we assume here
that $\varmathbb{C}_{0}$ is strictly stable as in 1.4.
With $\mathcal{L}_{\Sigma}$ the Jacobi operator of
$\Sigma=\varmathbb{C}_{0}\cap\varmathbb{S}^{n}$ as in 1.1, we let
$\varphi_{1}>0,\,\varphi_{2},\ldots$ be a complete orthonormal set of
eigenfunctions of $-\mathcal{L}_{\Sigma}$. Thus
3.1 $\displaystyle-\mathcal{L}_{\Sigma}\varphi_{j}=\lambda_{j}\varphi_{j},\,\,\,j=1,2,\ldots,\text{ with
$\varphi_{1}>0$ and}$ $\displaystyle\hskip
79.49744pt\lambda_{1}<\lambda_{2}\leqslant\lambda_{3}\leqslant\cdots\leqslant\lambda_{j}\leqslant\cdots,\,\,\lambda_{j}\to\infty\text{
as }j\to\infty,$
and every $L^{2}(\Sigma)$ function $v$ can be written $v=\sum_{j}\langle
v,\varphi_{j}\rangle_{L^{2}(\Sigma)}\varphi_{j}$.
Notice that by orthogonality each $\varphi_{j},\,j\neq 1$, must then change
sign in $\Sigma$, and, with $\mathcal{L}_{\varmathbb{C}_{0}}$ the Jacobi
operator for $\varmathbb{C}_{0}$ as in 1.2,
3.2
$\mathcal{L}_{\varmathbb{C}_{0}}(r^{\gamma}\varphi_{j})=r^{\gamma-2}(\gamma^{2}+(n-2)\gamma-\lambda_{j})\,\varphi_{j},$
so in particular if $\varmathbb{C}_{0}$ is strictly stable (i.e. if
$\lambda_{1}>-({\textstyle\frac{n-2}{2}})^{2}$ as in 1.5) we have
3.3
$\displaystyle\mathcal{L}_{\varmathbb{C}_{0}}(r^{\gamma_{j}^{\pm}}\varphi_{j})=0,\quad\gamma_{j}^{\pm}=-{\textstyle\frac{n-2}{2}}\pm\bigl{(}({\textstyle\frac{n-2}{2}})^{2}+\lambda_{j}\bigr{)}^{1/2},$
$\displaystyle\hskip
20.0pt-\infty\leftarrow\gamma_{j}^{-}\leqslant\cdots\leqslant\gamma_{2}^{-}<\gamma_{1}^{-}<-{\textstyle\frac{n-2}{2}}<\gamma_{1}^{+}<\gamma_{2}^{+}\leqslant\cdots\leqslant\gamma_{j}^{+}\to\infty.$
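For the reader's convenience we record the computation behind 3.2 (a standard
cone identity, with $\mathcal{L}_{\Sigma}=\Delta_{\Sigma}+|A_{\Sigma}|^{2}$ as
in 1.1):
$\Delta_{\varmathbb{C}_{0}}(r^{\gamma}\varphi)=r^{\gamma-2}\bigl{(}\gamma(\gamma+n-2)\varphi+\Delta_{\Sigma}\varphi\bigr{)}\quad\text{and}\quad|A_{\varmathbb{C}_{0}}|^{2}(r\omega)=r^{-2}|A_{\Sigma}|^{2}(\omega),$
so
$\mathcal{L}_{\varmathbb{C}_{0}}(r^{\gamma}\varphi_{j})=r^{\gamma-2}\bigl{(}(\gamma^{2}+(n-2)\gamma)\varphi_{j}+\mathcal{L}_{\Sigma}\varphi_{j}\bigr{)}$,
and the exponents $\gamma_{j}^{\pm}$ in 3.3 are precisely the roots of
$\gamma^{2}+(n-2)\gamma-\lambda_{j}=0$.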
Henceforth we write
3.4 $\gamma_{j}=\gamma_{j}^{+},\quad j=1,2,\ldots.$
The Jacobi operator $\mathcal{L}_{\varmathbb{C}}$ on the minimal cylinder
$\varmathbb{C}=\varmathbb{C}_{0}\times\varmathbb{R}^{\ell}$ is
3.5
$\mathcal{L}_{\varmathbb{C}}(v)=\mathcal{L}_{\varmathbb{C}_{0}}(v)+{\textstyle\sum}_{j=1}^{\ell}D_{y_{j}}^{2}v,$
and we can decompose solutions $v$ of the equation
$\mathcal{L}_{\varmathbb{C}}v=0$ in terms of the eigenfunctions $\varphi_{j}$
of 3.1: for $v$ any given smooth function on $\varmathbb{C}\cap\breve{B}_{R}$,
we can write $v(r\omega,y)=\sum_{j}v_{j}(r,y)\varphi_{j}(\omega)$, where
3.6 $v_{j}(r,y)=\int_{\Sigma}v(r\omega,y)\,\varphi_{j}(\omega)\,d\mu(\omega),$
and then $\mathcal{L}_{\varmathbb{C}}v=0$ on $\breve{B}_{R}$ if and only if
$v_{j}$ satisfies
$r^{1-n}\frac{\partial}{\partial r}\bigl{(}r^{n-1}\frac{\partial
v_{j}}{\partial
r}\bigr{)}+{\textstyle\sum}_{k=1}^{\ell}\frac{\partial^{2}v_{j}}{\partial
y_{k}^{2}}-\frac{\lambda_{j}}{r^{2}}v_{j}=0$
for $(r,y)\in\breve{B}_{R}^{+}\,(=\\{(r,y):r>0,\,r^{2}+|y|^{2}<R^{2}\\})$.
Direct computation then shows that
3.7 $\frac{1}{r^{1+\beta}}\frac{\partial}{\partial
r}\Bigl{(}r^{1+\beta}\frac{\partial h}{\partial
r}\Bigr{)}+\sum_{k=1}^{\ell}\frac{\partial^{2}h}{\partial y_{k}^{2}}=0$
on $\breve{B}_{R}^{+}$, with
3.8
$h(r,y)=h_{j}(r,y)=r^{-\gamma_{j}}\int_{\Sigma}v(r\omega,y)\,\varphi_{j}(\omega)\,d\mu\text{
and }\beta=\beta_{j}=2\sqrt{({\textstyle\frac{n-2}{2}})^{2}+\lambda_{j}}.$
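In detail: writing $v_{j}=r^{\gamma_{j}}h$ and using
$\gamma_{j}^{2}+(n-2)\gamma_{j}=\lambda_{j}$ together with
$2\gamma_{j}+n-2=\beta_{j}$, the equation for $v_{j}$ becomes
$r^{\gamma_{j}}\Bigl{(}\frac{\partial^{2}h}{\partial
r^{2}}+\frac{1+\beta_{j}}{r}\frac{\partial h}{\partial
r}+{\textstyle\sum}_{k=1}^{\ell}\frac{\partial^{2}h}{\partial
y_{k}^{2}}+\frac{\gamma_{j}^{2}+(n-2)\gamma_{j}-\lambda_{j}}{r^{2}}\,h\Bigr{)}=0,$
in which the zero-order coefficient vanishes, leaving 3.7.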
A solution $h$ of 3.7 (with any $\beta>0$) which has the property
3.9
$\int_{\\{(r,y):r^{2}+|y|^{2}<R^{2}\\}}r^{-2}h^{2}\,r^{1+\beta}\,drdy<\infty$
will be referred to as a _$\beta$-harmonic function on_ $\\{(r,y):r\geqslant
0,\,r^{2}+|y|^{2}<R^{2}\\}$. Using 3.9 together with the weak form of the
equation 3.7, one readily obtains the $W^{1,2}$ estimate
3.10
$\int_{\\{(r,y):r^{2}+|y|^{2}<\rho^{2}\\}}\bigl{(}(D_{r}h)^{2}+{\textstyle\sum}_{j=1}^{\ell}(D_{y_{j}}h)^{2}\bigr{)}\,r^{1+\beta}\,drdy<\infty$
for each $\rho<R$, and there is a unique such function with prescribed $C^{1}$
data $\varphi$ on $\bigl{\\{}(r,y):r^{2}+|y|^{2}=R^{2}\bigr{\\}}$, obtained,
for example, by minimizing $\int_{r^{2}+|y|^{2}\leqslant
R^{2}}(u_{r}^{2}+|u_{y}|^{2})\,r^{1+\beta}d\mu$ among functions with trace
$\varphi$ on $r^{2}+|y|^{2}=R^{2}$.
If $\beta$ is an integer (e.g. $\beta=1$ when $n=7$, $j=1$, and
$\varmathbb{C}_{0}$ is the “Simons cone”
$\bigl{\\{}x\in\varmathbb{R}^{8}:\sum_{i=1}^{4}(x^{i})^{2}=\sum_{i=5}^{8}(x^{i})^{2}\bigr{\\}}$)
then the $\beta\,$-Laplacian operator as in 3.7 is just the ordinary Laplacian
in $\varmathbb{R}^{2+\beta+\ell}$, at least as it applies to functions
$u=u(r,y)$ with $r=|x|$. Even for $\beta$ non-integer there are, analogous to
the integer case, $\beta$-harmonic polynomials of each order $q=0,1,\ldots$,
i.e. homogeneous polynomial solutions $h$ of 3.7 of degree $q$.
Indeed, as shown in the Appendix below (extending the discussion of [Sim94] to
arbitrary $\ell$ and at the same time showing the relevant power series
converge in the entire ball $\bigl{\\{}(r,y):r\geqslant
0,\,r^{2}+|y|^{2}<R^{2}\bigr{\\}}$ rather than merely in
$\bigl{\\{}(r,y):r\geqslant 0,\,r^{2}+|y|^{2}<\theta R^{2}\bigr{\\}}$ for
suitable $\theta\in(0,1)$), if $h=h(r,y)$ is a solution of the weak form of
3.7 on $\\{(r,y):r>0,\,r^{2}+|y|^{2}<R^{2}\\}$, i.e.
3.11
$\int_{\breve{B}_{R}^{+}}(h_{r}\zeta_{r}+h_{y}\cdot\zeta_{y})\,r^{1+\beta}\,drdy=0$
for all Lipschitz $\zeta$ with compact support in $r^{2}+|y|^{2}<R^{2}$, and if
$h$ satisfies 3.9, then $h$ is a real analytic function of the variables
$r^{2},y$, and, on all of $\\{(r,y):r\geqslant 0,\,r^{2}+|y|^{2}<R^{2}\\}$,
$h$ is a convergent sum
3.12 $h(r,y)={\textstyle\sum}_{q=0}^{\infty}h_{q}(r,y)\,\,\,\forall(r,y)\text{
with }r\geqslant 0\text{ and }\sqrt{r^{2}+|y|^{2}}<R,$
where $h_{q}$ is the degree $q$ homogeneous $\beta$-harmonic polynomial in the
variables $r^{2},y$ obtained by selecting the order $q$ terms in the power
series expansion of $h$.
In case $\ell=1$ there is, up to scaling, a unique $\beta$-harmonic polynomial
of degree $q$ of the form
$h_{q}=y^{q}+\sum_{k>0,\,j\geqslant 0,\,2k+j=q}c_{kj}r^{2k}y^{j}.$
By direct computation, in case $\ell=1$ the $\beta$-harmonic polynomials
$h_{0},h_{1},h_{2},h_{3},h_{4}$ are respectively
$1,\,\,\,y,\,\,\,y^{2}-{\textstyle\frac{1}{2+\beta}}r^{2},\,\,\,y^{3}-{\textstyle\frac{3}{2+\beta}}r^{2}y,\,\,\,y^{4}-{\textstyle\frac{6}{2+\beta}}r^{2}y^{2}+{\textstyle\frac{3}{(2+\beta)(4+\beta)}}r^{4}.$
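As a quick sanity check (not part of the argument, and assuming the Python
library sympy is available), one can verify symbolically that these five
polynomials satisfy 3.7 with $\ell=1$; a minimal sketch:

```python
# Check that h_0,...,h_4 satisfy the beta-Laplace equation 3.7 for l = 1:
#   r^{-(1+beta)} d/dr ( r^{1+beta} dh/dr ) + d^2 h / dy^2 = 0.
import sympy as sp

r, y, beta = sp.symbols('r y beta', positive=True)

def beta_laplacian(h):
    # weighted radial term plus the plain y-Laplacian
    radial = sp.diff(r**(1 + beta) * sp.diff(h, r), r) / r**(1 + beta)
    return sp.simplify(radial + sp.diff(h, y, 2))

polys = [
    sp.Integer(1),
    y,
    y**2 - r**2 / (2 + beta),
    y**3 - 3 * r**2 * y / (2 + beta),
    y**4 - 6 * r**2 * y**2 / (2 + beta)
         + 3 * r**4 / ((2 + beta) * (4 + beta)),
]
for q, h in enumerate(polys):
    assert beta_laplacian(h) == 0, (q, h)
print("h_0,...,h_4 are beta-harmonic")
```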
In the case $\ell\geqslant 2$, for each homogeneous degree $q$ polynomial
$p_{0}(y)$ there is a unique $\beta$-harmonic homogeneous polynomial $h$ of
degree $q$, with
$h(r,y)=\begin{cases}p_{0}(y)+{\textstyle\sum}_{j=1}^{q/2}r^{2j}p_{j}(y),&q\text{
even }\\\ p_{0}(y)+{\textstyle\sum}_{j=1}^{(q-1)/2}r^{2j}p_{j}(y),&q\text{
odd},\end{cases}$
where each $p_{j}(y)$ is a homogeneous degree $q-2j$ polynomial, and the
$p_{j}$ are defined inductively by
$p_{j+1}=-(2j+2)^{-1}(2j+2+\beta)^{-1}\Delta_{y}p_{j}$, $j\geqslant 0$.
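Indeed, applying $\Delta_{\beta}$ term by term gives
$\Delta_{\beta}\bigl{(}r^{2j}p_{j}(y)\bigr{)}=2j(2j+\beta)\,r^{2j-2}p_{j}(y)+r^{2j}\Delta_{y}p_{j}(y),$
so $\Delta_{\beta}h={\textstyle\sum}_{j\geqslant
0}r^{2j}\bigl{(}(2j+2)(2j+2+\beta)p_{j+1}+\Delta_{y}p_{j}\bigr{)}$, which
vanishes identically precisely when the $p_{j}$ satisfy the stated recursion.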
In terms of spherical coordinates $\rho=\sqrt{r^{2}+|y|^{2}}$ and
$\omega=\rho^{-1}(r,y)\in\varmathbb{S}^{\ell}_{+}$,
$\varmathbb{S}^{\ell}_{+}=\\{\omega=(\omega_{1},\ldots,\omega_{\ell})\in\varmathbb{R}^{\ell+1}:\omega_{1}>0,\,|\omega|=1\\},$
the $\beta$-Laplacian $\Delta_{\beta}$ (i.e. the operator on the left of 3.7)
is
3.13
$\Delta_{\beta}h=\rho^{-\ell-1-\beta}\frac{\partial}{\partial\rho}\Bigl{(}\rho^{\ell+1+\beta}\frac{\partial
h}{\partial\rho}\Bigr{)}+\rho^{-2}\omega_{1}^{-1-\beta}{\rm
div}_{\varmathbb{S}_{+}^{\ell}}\bigl{(}\omega_{1}^{1+\beta}\nabla_{\varmathbb{S}_{+}^{\ell}}h\bigr{)}.$
Notice that in the case $\ell=1$ we can write $h=h(r,\theta)$ where
$\theta=\arctan(y/r)\in(-\pi/2,\pi/2)$ and 3.13 can be written
$\Delta_{\beta}h=\rho^{-2-\beta}\frac{\partial}{\partial\rho}\Bigl{(}\rho^{2+\beta}\frac{\partial
h}{\partial\rho}\Bigr{)}+\rho^{-2}\cos^{-1-\beta}\theta\frac{\partial}{\partial\theta}\bigl{(}\cos^{1+\beta}\theta\frac{\partial
h}{\partial\theta}\bigr{)}.$
Using 3.13 we see that the order $q$ homogeneous $\beta$-harmonic polynomials
$h_{q}$ (which are homogeneous of degree $q$ in the variable $\rho$) satisfy
the identity
3.14 ${\rm
div}_{\varmathbb{S}_{+}^{\ell}}\bigl{(}\omega_{1}^{1+\beta}\nabla_{\varmathbb{S}_{+}^{\ell}}h_{q}\bigr{)}=-q(q+\ell+\beta)h_{q}\omega_{1}^{1+\beta}.$
Hence we have the orthogonality of $h_{p},h_{q}$ for $p\neq q$ on
3.15 $\varmathbb{S}^{\ell}_{+}=\\{(r,y):r>0,\,r^{2}+|y|^{2}=1\\}$
with respect to the measure $d\nu_{+}=\smash{{\omega_{1}}^{\\!1+\beta}}d\mu$
($\mu=\ell$-dimensional Hausdorff measure on $\varmathbb{S}_{+}^{\ell}$):
3.16 $\int_{\varmathbb{S}_{+}^{\ell}}h_{p}\,h_{q}\,d\nu_{+}=0\text{ for $p\neq
q$},\quad d\nu_{+}=\omega_{1}^{1+\beta}d\mu.$
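To see 3.16, multiply 3.14 for $h_{q}$ by $h_{p}$, subtract the same identity
with $p$ and $q$ interchanged, and integrate over $\varmathbb{S}^{\ell}_{+}$;
the divergence terms integrate to zero since the weight
$\omega_{1}^{1+\beta}$ vanishes on
$\partial\varmathbb{S}^{\ell}_{+}=\\{\omega_{1}=0\\}$, so
$\bigl{(}q(q+\ell+\beta)-p(p+\ell+\beta)\bigr{)}\int_{\varmathbb{S}_{+}^{\ell}}h_{p}\,h_{q}\,d\nu_{+}=0,$
and $q(q+\ell+\beta)\neq p(p+\ell+\beta)$ for $p\neq q$ because $t\mapsto
t(t+\ell+\beta)$ is strictly increasing for $t\geqslant 0$.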
Thus if $h$ satisfies 3.7 and 3.9 then
3.17 $\displaystyle\int_{B^{+}_{R}}h^{2}(r,y)\,r^{1+\beta}drdy$
$\displaystyle=\int_{0}^{R}\int_{\varmathbb{S}_{+}^{\ell}}h^{2}(\rho\omega)\,\rho^{1+\beta}\,d\nu_{+}\,\rho^{\ell}d\rho$
$\displaystyle=\sum_{q=0}^{\infty}(\ell+2+\beta+2q)^{-1}N_{q}^{2}R^{\ell+2+\beta+2q},$
where $B_{R}^{+}=\\{(r,y):r\geqslant 0,\,\,r^{2}+|y|^{2}\leqslant R^{2}\\}$ and
$N_{q}=\bigl{(}\int_{\varmathbb{S}_{+}^{\ell}}h_{q}^{2}(\omega)\,\omega_{1}^{1+\beta}d\mu\bigr{)}^{1/2}$
and $h_{q}(r,y)$ are as in 3.12.
Using 3.16 it is shown in the Appendix that the homogeneous $\beta$-harmonic
polynomials are complete in $L^{2}(\nu_{+})$, where
$d\nu_{+}=\omega_{1}^{1+\beta}d\mu_{\ell}$ on $\varmathbb{S}^{\ell}_{+}$. Thus
each $\varphi\in L^{2}(\nu_{+})$ can be written as an $L^{2}(\nu_{+})$
convergent series
3.18 $\varphi=\sum_{q=0}^{\infty}h_{q}|\varmathbb{S}^{\ell}_{+},$
with each $h_{q}$ either zero or a homogeneous degree $q$ $\beta$-harmonic
polynomial.
Next observe that if
3.19 $\mathcal{L}_{\varmathbb{C}}v=0\text{ on }\varmathbb{C}\cap B_{R}\text{
with }\int_{\varmathbb{C}\cap B_{R}}|x|^{-2}v^{2}\,d\mu<\infty,$
then each $h_{j}(r,y)$ defined as in 3.8 does satisfy the condition 3.9, and
hence we have
3.20
$v=\sum_{j=1}^{\infty}\sum_{q=0}^{\infty}r^{\gamma_{j}}h_{j,q}(r,y)\varphi_{j}(\omega),$
where $h_{j,q}$ is a homogeneous degree $q$ $\beta_{j}$-harmonic polynomial
and, using the orthogonality 3.16,
3.21 $\int_{\varmathbb{C}\cap
B_{\rho}}v^{2}\,d\mu=\rho^{\ell+2+\beta_{1}}\sum_{j=1}^{\infty}\sum_{q=0}^{\infty}(2+\ell+\beta_{j}+2q)^{-1}N_{j,q}^{2}\rho^{2q+\beta_{j}-\beta_{1}},\,\,\rho<R,$
where
$N_{j,q}=\bigl{(}\int_{\varmathbb{S}_{+}^{\ell}}h_{j,q}^{2}(\omega)\,d\nu_{+}\bigr{)}^{1/2}$.
Observe that (except for $(j,q)=(1,0)$) it is possible that
$r^{\gamma_{j}}h_{j,q}$ could have the same homogeneity as
$r^{\gamma_{i}}h_{i,p}$ (i.e. $\gamma_{j}+q=\gamma_{i}+p$) for some $i\neq
j,\,p\neq q$, but in any case, after some rearrangement of the terms in 3.21,
we see that there are exponents
$0=q_{1}<q_{2}<q_{3}<\cdots<q_{i}\to\infty,\quad
q_{i}=q_{i}(\varmathbb{C}_{0}),$
($q_{i}$ not necessarily integers, but nevertheless fixed depending only on
$\varmathbb{C}_{0}$) such that if $v$ is as in 3.19 then
3.22 $\hskip 1.0pt\text{\small$-$}\hskip-10.6pt\int_{\varmathbb{C}\cap
B_{\rho}}v^{2}=\sum_{i=1}^{\infty}b_{i}^{2}\rho^{2q_{i}}$
(which in particular is an increasing function of $\rho$), $\rho\in(0,R]$, for
suitable constants $b_{j},\,j=1,2,\ldots$, where we use the notation
3.23 $\hskip 1.0pt\text{\small$-$}\hskip-10.6pt\int_{\varmathbb{C}\cap
B_{\rho}}f=\rho^{-\ell-2-\beta_{1}}\int_{\varmathbb{C}\cap
B_{\rho}}f\,d\mu,\quad\rho>0.$
We claim that the logarithm of the right side of 3.22 is a convex function
$\psi(t)$ of $t=\log\rho$:
3.24 $\psi^{\prime\prime}(t)\geqslant 0\text{ where
}\psi(t)=\log\bigl{(}\hskip
1.0pt\text{\small$-$}\hskip-10.6pt\int_{\varmathbb{C}\cap
B_{\rho}}v^{2}\bigr{|}_{\rho=e^{t}}\bigr{)}\,\,\bigl{(}\,=\log({\textstyle\sum}_{i}b_{i}^{2}e^{2q_{i}t})\text{
by 3.22}\bigr{)}.$
To check this, note that
$\psi^{\prime\prime}(t)=\bigl{(}{\textstyle\sum}_{i}b_{i}^{2}e^{2q_{i}t}\bigr{)}^{-2}\bigl{(}({\textstyle\sum}_{i}b_{i}^{2}e^{2q_{i}t})({\textstyle\sum}_{i}4q_{i}^{2}b_{i}^{2}e^{2q_{i}t})-({\textstyle\sum}_{i}2q_{i}b_{i}^{2}e^{2q_{i}t})^{2}\bigr{)}$
and by Cauchy-Schwarz this is non-negative for $t\in(-\infty,\log R)$, and if
there is a $t_{0}\in(-\infty,\log R)$ such that $\psi^{\prime\prime}(t_{0})=0$
then there is $i_{0}\in\\{1,2,\ldots\\}$ such that $b_{i}=0$ for every $i\neq
i_{0}$, in which case
3.25 $\psi(t)=\log b_{i_{0}}^{2}+2q_{i_{0}}t,\quad t\in(-\infty,\log R).$
In particular the convexity 3.24 of $\psi$ implies
3.26 $\psi(t)-\psi(t-\log 2)\geqslant\psi(t-\log 2)-\psi(t-2\log 2),\quad
t\in(-\infty,\log R),$
and equality at any point $t\in(-\infty,\log R)$ implies 3.25 holds for some
$i_{0}$ and the common value of each side of 3.26 is $\log 4^{q_{i_{0}}}$.
Thus we see that if $Q\in(0,\infty)\setminus\\{4^{q_{1}},4^{q_{2}},\ldots\\}$
and $v$ is not identically zero then for each given $\rho\in(0,R]$
3.27 $\hskip 1.0pt\text{\small$-$}\hskip-10.6pt\int_{\varmathbb{C}\cap
B_{\rho/2}}v^{2}\bigg{/}\hskip
1.0pt\text{\small$-$}\hskip-10.6pt\int_{\varmathbb{C}\cap
B_{\rho/4}}v^{2}\geqslant Q\Longrightarrow\hskip
1.0pt\text{\small$-$}\hskip-10.6pt\int_{\varmathbb{C}\cap
B_{\rho}}v^{2}\bigg{/}\hskip
1.0pt\text{\small$-$}\hskip-10.6pt\int_{\varmathbb{C}\cap B_{\rho/2}}v^{2}>Q.$
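Indeed, with $t=\log\rho$ the hypothesis in 3.27 says $\psi(t-\log
2)-\psi(t-2\log 2)\geqslant\log Q$, so 3.26 gives
$\psi(t)-\psi(t-\log 2)\geqslant\psi(t-\log 2)-\psi(t-2\log 2)\geqslant\log Q,$
and equality on the left would force 3.25 for some $i_{0}$, making both
differences equal to $\log 4^{q_{i_{0}}}\neq\log Q$.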
3.28 Remark: Notice that if $v_{1},\ldots,v_{q}$ are solutions of
$\mathcal{L}_{\varmathbb{C}}v=0$ on $\varmathbb{C}\cap B_{R}$ satisfying
$\hskip 1.0pt\text{\small$-$}\hskip-10.6pt\int_{\varmathbb{C}\cap
B_{R}}|x|^{-2}v_{j}^{2}(x,y)\,d\mu(x,y)<\infty\text{ for each }j=1,\ldots,q$
with $v_{j}\neq 0$ for some $j$, then
$\text{\scriptsize$-$}\hskip-6.85pt{\textstyle\int}_{\varmathbb{C}\cap
B_{R}}\smash{\sum_{j=1}^{q}v_{j}^{2}\,d\mu}$ has again the form of _3.22_ ,
so the implication _3.27_ applies with the sum $\sum_{j=1}^{q}v_{j}^{\,2}$ in
place of $v^{2}$.
## 4\. Growth estimates for solutions of $\mathcal{L}_{\\!M}w=0$
Here we discuss growth estimates analogous to those used in [Sim83], [Sim85]
for solutions of $\mathcal{L}_{M}w=0$; in particular we discuss conditions on
$M$ and $w$ which ensure that the growth behaviour of solutions $w$ of
$\mathcal{L}_{M}w=0$ is analogous to that of the solutions $v$ of
$\mathcal{L}_{\varmathbb{C}}v=0$ discussed in the previous section.
The main growth lemma below (Lemma 4.7) applies locally in balls (so $M$ could
be a complete minimal hypersurface in a ball $\breve{B}_{R}$ rather than the
whole space), and we do not need the inclusion $M\subset
U_{+}\times\varmathbb{R}^{\ell}$. We in fact assume $R,\lambda,\Lambda>0$ and
4.1 $\left\\{\begin{aligned} &\text{\,$M\subset\breve{B}_{R}$ is embedded,
minimal, and satisfies 1.8}\,\,\text{ for
every }\,\zeta\in C^{1}_{c}(\breve{B}_{R}),\\\
&\,R^{-n-\ell-2}\int_{M\cap
B_{R}}d^{2}(x)\,d\mu(x,y)<\varepsilon,\text{ and }R^{-n-\ell}\mu(M\cap
B_{R})\leqslant\Lambda,\end{aligned}\right.$
with an $\varepsilon$ (small) to be specified and with
4.2 $d(x)={\rm dist\,}(x,\varmathbb{C}_{0})\,\,(={\rm
dist\,}((x,y),\varmathbb{C}))\text{ for
}(x,y)\in\varmathbb{R}^{n+1}\times\varmathbb{R}^{\ell}.$
Taking $\zeta$ in 2.8 to be $1$ in $B_{\theta R}$, $0$ on $\partial B_{R}$,
and with $|D\zeta|\leqslant 2/((1-\theta)R)$, we obtain
4.3 $\int_{M\cap B_{\theta
R}}\hskip-5.0pt|x|^{-2}w^{2}(x,y)\,d\mu(x,y)\leqslant CR^{-2}\int_{M\cap
B_{R}}\hskip-5.0ptw^{2}\,d\mu\,\,\forall\theta\in[{\textstyle\frac{1}{2}},1),\,\,C=C(\lambda,\theta).$
Notice that if we have a “doubling condition”
4.4 $\int_{M\cap B_{R}}w^{2}(x,y)\,d\mu(x,y)\leqslant K\int_{M\cap
B_{R/2}}w^{2}\,d\mu,$
then 4.3 implies that
$\int_{M\cap B_{\theta R}}\hskip-5.0pt|x|^{-2}w^{2}(x,y)\,d\mu(x,y)\leqslant
CKR^{-2}\int_{M\cap B_{R/2}}w^{2}\,d\mu,$
so, for $\delta^{-2}\geqslant 2CK$ (i.e. $\delta\leqslant(2CK)^{-1/2}$) we
have
4.5 $\int_{M\cap B_{\theta
R}}\hskip-5.0pt|x|^{-2}w^{2}(x,y)\,d\mu(x,y)\leqslant CR^{-2}\int_{M\cap
B_{R/2}\setminus\\{(x,y):|x|\leqslant\delta
R\\}}\hskip-5.0ptw^{2}\,d\mu\,\,\forall\theta\in[{\textstyle\frac{1}{2}},1),$
where $C=C(\lambda,\theta,K)$. The following lemma shows that in fact the
inequality 4.5 can be improved.
4.6 Lemma. For each $\lambda,\Lambda,K>0$ and
$\theta\in\smash{[{\textstyle\frac{1}{2}},1)}$ there is
$\varepsilon=\varepsilon(\lambda,\Lambda,\theta,K,\varmathbb{C})>0$ such that
if $M$ satisfies _4.1_ and $\mathcal{L}_{M}w=0$ on $M\cap\breve{B}_{R}$, and
if the doubling condition _4.4_ holds then
$\int_{M\cap B_{\theta R}}|x|^{-2}w^{2}(x,y)\,d\mu(x,y)\leqslant
CR^{-2}\int_{M\cap
B_{R/2}\setminus\\{(x,y):|x|\leqslant\frac{1}{3}R\\}}w^{2}\,d\mu,$
where $C=C(\lambda,\Lambda,\theta,K,\varmathbb{C})$.
Proof: We may assume $\theta\in[{\textstyle\frac{3}{4}},1)$, since the left
side of the asserted inequality is non-decreasing in $\theta$. By scaling we
can assume $R=1$. If there is no $\varepsilon$ ensuring the claim of the
lemma, then
there is a sequence $M_{k}$ of strictly stable hypersurfaces (with fixed
$\lambda$) with $\int_{M_{k}\cap B_{1}}d^{2}(x)\,d\mu(x,y)\to 0$ and a
sequence $w_{k}$ of solutions of $\mathcal{L}_{M_{k}}w_{k}=0$ such that
(1) $\int_{M_{k}\cap B_{1}}w_{k}^{2}(x,y)\,d\mu(x,y)\leqslant K\int_{M_{k}\cap
B_{1/2}}w_{k}^{2}\,d\mu,$
yet such that
(2) $\int_{M_{k}\cap B_{\theta}}|x|^{-2}w_{k}^{2}(x,y)\,d\mu(x,y)\geqslant
k\,\int_{M_{k}\cap
B_{1/2}\setminus\\{(x,y):|x|\leqslant\frac{1}{3}\\}}w_{k}^{2}\,d\mu.$
By 4.5,
(3) $\int_{M_{k}\cap B_{\theta}}|x|^{-2}w_{k}^{2}(x,y)\,d\mu(x,y)\leqslant
C\int_{M_{k}\cap
B_{1/2}\setminus\\{(x,y):|x|\leqslant\delta\\}}w_{k}^{2}\,d\mu,$
with $\delta=\delta(\lambda,\theta,K)$ and $C=C(\lambda,\theta,K)$.
Since $\int_{M_{k}\cap B_{1}}d^{2}\,d\mu_{k}\to 0$, in view of 2.9(i),(ii)
there are sequences $\eta_{k}\downarrow 0$ and $\tau_{k}\uparrow 1$ with
(4) $\left\\{\begin{aligned}
&\,\,M_{k}\cap\breve{B}_{\tau_{k}}\setminus\\{(x,y):|x|\leqslant\eta_{k}\\}\subset\cup_{j=1}^{q}{\rm
graph\,}u_{k,j}\subset M_{k}\\\
&\qquad\sup(|u_{k,j}|+|\nabla_{\varmathbb{C}}u_{k,j}|+|\nabla^{2}_{\varmathbb{C}}u_{k,j}|)\to
0,\end{aligned}\right.$
where $q=q(\Lambda)\geqslant 1$ and each $u_{k,j}$ is $C^{2}$ on a domain
containing
$\varmathbb{C}\cap\breve{B}_{\tau_{k}}\setminus\\{(x,y):|x|\leqslant\eta_{k}\\}$.
Thus, with $\nu_{\varmathbb{C}_{0}}$ the unit normal of $\varmathbb{C}_{0}$
pointing into $U_{+}$ and with
(5) $w_{k,j}(x,y)=w_{k}((x+u_{k,j}(x,y)\nu_{\varmathbb{C}_{0}}(x),y)),$
for
$(x,y)\in\varmathbb{C}\cap\breve{B}_{\tau_{k}}\setminus\\{(x,y):|x|\leqslant\eta_{k}\\}$
and $j=1,\ldots,q$, we see that $w_{k,j}$ satisfies a uniformly elliptic
equation with coefficients converging in $C^{1}$ to the coefficients of
$\mathcal{L}_{\varmathbb{C}}$ on
$\Omega_{\sigma,\delta}=\varmathbb{C}\cap\breve{B}_{\sigma}\setminus\\{(x,y):|x|\leqslant\delta\\}$
for each $\sigma\in[{\textstyle\frac{1}{2}},1)$ and
$\delta\in(0,{\textstyle\frac{1}{2}}]$, hence, by Schauder estimates and (1),
(6) $|w_{k,j}|_{C^{2}(\Omega_{\sigma,\delta})}\leqslant
C\|w_{k}\|_{L^{2}(B_{1})}\leqslant CK\|w_{k}\|_{L^{2}(B_{1/2})}$
with $C$ independent of $k$, $k\geqslant k(\sigma,\delta)$. Hence, by (4) and
(6),
$\tilde{w}_{k,j}=({\textstyle\int}_{M_{k}\cap
B_{1/2}}w_{k}^{2}\,d\mu_{k})^{-1/2}w_{k,j}$
has a subsequence converging in $C^{1}$ locally in
$\varmathbb{C}\cap\breve{B}_{1}$ to a smooth solution $v_{j}$ of
$\mathcal{L}_{\varmathbb{C}}v_{j}=0$ on $\varmathbb{C}\cap\breve{B}_{1}$, and
since by (3) $\int_{M_{k}\cap
B_{1/2}\cap\\{(x,y):|x|\leqslant\sigma\\}}w_{k}^{2}\leqslant
C\sigma^{2}\int_{M_{k}\cap B_{1/2}}w_{k}^{2}$ for all
$\sigma\in(0,{\textstyle\frac{1}{2}}]$, we can then conclude
(7) $\int_{\varmathbb{C}\cap B_{1/2}}\sum_{j=1}^{q}v_{j}^{2}\,d\mu=1.$
But by (1) and (2)
$\int_{M_{k}\cap
B_{1/2}\setminus\\{(x,y):|x|\leqslant\frac{1}{3}\\}}w_{k}^{2}\,d\mu\leqslant
Ck^{-1}\int_{M_{k}\cap B_{1/2}}w_{k}^{2}\,d\mu,$
and so, multiplying each side by $({\textstyle\int}_{M_{k}\cap
B_{1/2}}w_{k}^{2}\,d\mu_{k})^{-1}$ and taking limits, we conclude
$\int_{\varmathbb{C}\cap
B_{1/2}\setminus\\{(x,y):|x|\leqslant\frac{1}{3}\\}}\sum_{j=1}^{q}v_{j}^{2}=0.$
In view of (7) this contradicts unique continuation for solutions of
$\mathcal{L}_{\varmathbb{C}}v=0$ (applicable since the solutions $v$ of
$\mathcal{L}_{\varmathbb{C}}v=0$ are real-analytic). $\Box$
We can now establish the growth lemma.
4.7 Lemma. For each $\lambda,\Lambda>0$,
$Q\in(0,\infty)\setminus\\{4^{q_{1}},4^{q_{2}},\ldots\\}$
($q_{1},q_{2},\ldots$ as in 3.22), and $\alpha\in[{\textstyle\frac{1}{2}},1)$,
there is $\varepsilon=\varepsilon(Q,\alpha,\lambda,\Lambda,\varmathbb{C})>0$
such that if $M$ satisfies _4.1_ and if $\mathcal{L}_{M}w=0$ on
$M\cap\breve{B}_{R}$ then
(i) $\hskip 1.0pt\text{\small$-$}\hskip-10.6pt\int_{M\cap
B_{R/2}}w^{2}\geqslant Q\hskip 1.0pt\text{\small$-$}\hskip-10.6pt\int_{M\cap
B_{R/4}}w^{2}\Longrightarrow\hskip
1.0pt\text{\small$-$}\hskip-10.6pt\int_{B_{R}}w^{2}\geqslant Q\hskip
1.0pt\text{\small$-$}\hskip-10.6pt\int_{B_{R/2}}w^{2},$
and
(ii) $\hskip 1.0pt\text{\small$-$}\hskip-10.6pt\int_{M\cap
B_{R}}w^{2}\geqslant\alpha\hskip 1.0pt\text{\small$-$}\hskip-10.6pt\int_{M\cap
B_{R/2}}w^{2},$
where we use the notation (analogous to 3.23)
$\hskip 1.0pt\text{\small$-$}\hskip-10.6pt\int_{M\cap
B_{\rho}}f=\rho^{-\ell-2-\beta_{1}}\int_{M\cap B_{\rho}}f\,d\mu.$
Notice that no hypothesis like $M\subset U_{+}\times\varmathbb{R}^{\ell}$ is
needed here.
Proof: By scaling we can assume $R=1$. If there is
$Q\in(0,\infty)\setminus\\{4^{q_{1}},4^{q_{2}},\ldots\\}$ such that there is
no $\varepsilon$ ensuring the first claim of the lemma, then there is a
sequence $M_{k}$ satisfying 1.8 (with fixed $\lambda$) with $\int_{M_{k}\cap
B_{1}}d^{2}(x)\,d\mu(x,y)\to 0$ and a sequence $w_{k}$ of solutions of
$\mathcal{L}_{M_{k}}w_{k}=0$, such that
(1) $\hskip 1.0pt\text{\small$-$}\hskip-10.6pt\int_{M_{k}\cap
B_{1/2}}w_{k}^{2}\geqslant Q\hskip
1.0pt\text{\small$-$}\hskip-10.6pt\int_{M_{k}\cap B_{1/4}}w_{k}^{2}\text{ and
}\hskip 1.0pt\text{\small$-$}\hskip-10.6pt\int_{M_{k}\cap
B_{1}}w_{k}^{2}<Q\hskip 1.0pt\text{\small$-$}\hskip-10.6pt\int_{M_{k}\cap
B_{1/2}}w_{k}^{2}.$
The latter inequality implies we have the doubling condition 4.4 with $K=Q$,
so we can repeat the compactness argument in the proof of Lemma 4.6. Thus by
(1), with the same notation as in the proof of 4.6, we get convergence of
$w_{k,j}$
to a smooth solution $v_{j}$ of $\mathcal{L}_{\varmathbb{C}}v_{j}=0$ on
$\varmathbb{C}\cap\breve{B}_{1}$ with
$\text{\scriptsize$-$}\hskip-6.85pt{\textstyle\int}_{\varmathbb{C}\cap
B_{1/2}}{\textstyle\sum}_{j=1}^{q}v_{j}^{2}\,d\mu=1$, $\int_{\varmathbb{C}\cap
B_{1}}|x|^{-2}{\textstyle\sum}_{j=1}^{q}v_{j}^{2}<\infty$,
(2) $\displaystyle\hskip
1.0pt\text{\small$-$}\hskip-10.6pt\int_{\varmathbb{C}\cap
B_{1/2}}{\textstyle\sum}_{j=1}^{q}v_{j}^{2}\,d\mu\geqslant Q\hskip
1.0pt\text{\small$-$}\hskip-10.6pt\int_{\varmathbb{C}\cap
B_{1/4}}{\textstyle\sum}_{j=1}^{q}v_{j}^{2}\,d\mu\text{ and }$
$\displaystyle\hskip 72.26999pt\hskip
1.0pt\text{\small$-$}\hskip-10.6pt\int_{\varmathbb{C}\cap
B_{1}}{\textstyle\sum}_{j=1}^{q}v_{j}^{2}\,d\mu\leqslant Q\hskip
1.0pt\text{\small$-$}\hskip-10.6pt\int_{\varmathbb{C}\cap
B_{1/2}}{\textstyle\sum}_{j=1}^{q}v_{j}^{2}\,d\mu.$
In view of 3.28, this contradicts 3.27.
Similarly, if the second claim of the lemma fails for some
$\alpha\in[{\textstyle\frac{1}{2}},1)$, after taking a subsequence of $k$
(still denoted $k$) we get sequences $M_{k},w_{k}$ and $w_{k,j}$ with
$\hskip 1.0pt\text{\small$-$}\hskip-10.6pt\int_{M_{k}\cap
B_{1}}w_{k}^{2}\,d\mu<\alpha\hskip
1.0pt\text{\small$-$}\hskip-10.6pt\int_{M_{k}\cap B_{1/2}}w_{k}^{2}\,d\mu$
(i.e. the doubling condition 4.4 with $K=\alpha$), and smooth solutions
$v_{j}=\lim w_{k,j}$ of $\mathcal{L}_{\varmathbb{C}}v_{j}=0$ on
$\varmathbb{C}\cap\breve{B}_{1}$ with $0<\int_{\varmathbb{C}\cap
B_{1}}|x|^{-2}{\textstyle\sum}_{j=1}^{q}v_{j}^{2}<\infty$ and
$\hskip 1.0pt\text{\small$-$}\hskip-10.6pt\int_{\varmathbb{C}\cap
B_{1}}{\textstyle\sum}_{j=1}^{q}v_{j}^{2}\,d\mu\leqslant\alpha\hskip
1.0pt\text{\small$-$}\hskip-10.6pt\int_{\varmathbb{C}\cap
B_{1/2}}{\textstyle\sum}_{j=1}^{q}v_{j}^{2}\,d\mu,$
which is impossible by 3.22 and 3.28. $\Box$
Since $\mathcal{L}_{M}$ is the linearization of the minimal surface operator
$\mathcal{M}_{M}$, a smooth family $\\{M_{t}\\}_{|t|<\varepsilon}$ of minimal
submanifolds with $M_{0}=M$ must generate a velocity vector $v$ at $t=0$ with
normal part $w=v\cdot\nu_{M}$ ($\nu_{M}$ a smooth unit normal of $M$) being a
solution of the equation $\mathcal{L}_{M}w=0$. In particular the family of
homotheties $(1+t)M$ generates the solution $w=(x,y)\cdot\nu_{M}(x,y)$ and the
translates $M+te_{n+1+j}$ generate the solutions
$w=e_{n+1+j}\cdot\nu_{M}(x,y)=\nu_{y_{j}}(x,y)$, $j=1,\ldots,\ell$. Thus
4.8 $w=(x,y)\cdot\nu_{M}\text{ and }w=\nu_{y_{j}}\text{ both satisfy
}\mathcal{L}_{M}w=0\text{ and the inequality 4.3}.$
4.9 Corollary. Suppose $\Lambda,\lambda,\gamma>0$, $q\in\\{1,2,\ldots\\}$, and
assume $M$ is a complete, embedded, minimal hypersurface in
$\varmathbb{R}^{n+1+\ell}$, strict stability 1.8 holds, and $M$ has
$\varmathbb{C}$ with some multiplicity $q$ as its unique tangent cone at
$\infty$. If $\mathcal{L}_{M}w=0$, and
$\sup_{R>1}R^{-\gamma}\text{\scriptsize$-$}\hskip-6.85pt{\textstyle\int}_{M\cap
B_{R}}w^{2}<\Lambda$, then there is $R_{0}=R_{0}(M,\gamma,\lambda,\Lambda,q)$
such that for all $R\geqslant R_{0}$ we have the “strong doubling condition”
$\hskip 1.0pt\text{\small$-$}\hskip-10.6pt\int_{M\cap
B_{R}}|x|^{-2}w^{2}\leqslant CR^{-2}\hskip
1.0pt\text{\small$-$}\hskip-10.6pt\int_{M\cap
B_{R/2}\setminus\\{(x,y):|x|<{\scriptsize\frac{1}{3}}R\\}}w^{2},\quad
C=C(\gamma,q,\lambda,\Lambda).$
4.10 Remarks: (1) Observe that in case $M\subset
U_{+}\times\varmathbb{R}^{\ell}$ the unique tangent cone assumption here is
automatically satisfied by virtue of Lemma 2.9.
(2) Since $\mu(M\cap B_{R})\leqslant q\mu(\varmathbb{C}\cap B_{1})R^{n+\ell}$,
$|(x,y)\cdot\nu_{M}(x,y)|\leqslant R$, and $|\nu_{y_{j}}|\leqslant 1$ on
$M\cap B_{R}$, in view of 4.8 the above corollary applies to both
$w=(x,y)\cdot\nu_{M}$ and $w=\nu_{y_{j}}$ with $\gamma=2|\gamma_{1}|+2$ and
$\gamma=2|\gamma_{1}|$ respectively.
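For a quick check of these exponents note that
$\gamma_{1}=-{\textstyle\frac{n-2}{2}}+{\textstyle\frac{\beta_{1}}{2}}<0$
here, so $2|\gamma_{1}|=n-2-\beta_{1}$, and hence
$\hskip 1.0pt\text{\small$-$}\hskip-10.6pt\int_{M\cap
B_{R}}((x,y)\cdot\nu_{M})^{2}\leqslant
R^{-\ell-2-\beta_{1}}R^{2}\mu(M\cap B_{R})\leqslant q\mu(\varmathbb{C}\cap
B_{1})R^{2|\gamma_{1}|+2},$
and similarly $\hskip 1.0pt\text{\small$-$}\hskip-10.6pt\int_{M\cap
B_{R}}\nu_{y_{j}}^{2}\leqslant q\mu(\varmathbb{C}\cap B_{1})R^{2|\gamma_{1}|}$.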
Proof of Corollary 4.9: We can of course assume that $w$ is not identically
zero. In view of Lemma 2.9, for any $\tilde{\gamma}>\gamma$ with
$2^{\tilde{\gamma}}\in(1,\infty)\setminus\\{4^{q_{1}},4^{q_{2}},\ldots\\}$,
$M\cap\breve{B}_{R}$ satisfies the hypotheses of Lemma 4.7 with
$Q=2^{\tilde{\gamma}}$ for all sufficiently large $R$, so
$\hskip 1.0pt\text{\small$-$}\hskip-10.6pt\int_{M\cap
B_{2^{k}R}}w^{2}\geqslant 2^{\tilde{\gamma}}\hskip
1.0pt\text{\small$-$}\hskip-10.6pt\int_{M\cap
B_{2^{k-1}R}}w^{2}\Longrightarrow\hskip
1.0pt\text{\small$-$}\hskip-10.6pt\int_{M\cap B_{2^{k+1}R}}w^{2}\geqslant
2^{\tilde{\gamma}}\hskip 1.0pt\text{\small$-$}\hskip-10.6pt\int_{M\cap
B_{2^{k}R}}w^{2}$
for $k=1,2,\ldots$ and for any choice of $R$ sufficiently large (depending on
$M,\tilde{\gamma},\lambda$), and hence, by iteration, if
$\text{\scriptsize$-$}\hskip-6.85pt{\textstyle\int}_{M\cap
B_{R}}w^{2}\geqslant
2^{\tilde{\gamma}}\text{\scriptsize$-$}\hskip-6.85pt{\textstyle\int}_{M\cap
B_{R/2}}w^{2}$ we would have
$\text{\scriptsize$-$}\hskip-6.85pt{\textstyle\int}_{M\cap
B_{2^{k}R}}w^{2}\geqslant C2^{k\tilde{\gamma}}$, $k=1,2,\ldots$, with $C>0$
independent of $k$, contrary to the hypothesis on $w$, since
$C2^{k(\tilde{\gamma}-\gamma)}>\Lambda R^{\gamma}$ for sufficiently large
$k$. Thus, with
such a choice of $\tilde{\gamma}$, we have the doubling condition 4.4 with
$K=2^{\tilde{\gamma}}$:
$\hskip 1.0pt\text{\small$-$}\hskip-10.6pt\int_{M\cap B_{R}}w^{2}\leqslant
2^{\tilde{\gamma}}\hskip 1.0pt\text{\small$-$}\hskip-10.6pt\int_{M\cap
B_{R/2}}w^{2}$
for all $R\geqslant R_{0}$, $R_{0}=R_{0}(M,\tilde{\gamma})$. Then the required
result is proved by Lemma 4.6. $\Box$
## 5\. Asymptotic $L^{2}$ Estimates for $d$
The following lemma gives an $L^{2}$ estimate for the distance function
$d|M\cap B_{R}$ as $R\to\infty$ in case $M$ is as in Theorem 1, where as usual
$d(x)={\rm dist\,}(x,\varmathbb{C}_{0})\,\,(={\rm
dist\,}((x,y),\varmathbb{C})\text{ for
}(x,y)\in\varmathbb{R}^{n+1}\times\varmathbb{R}^{\ell}).$
In this section we continue to use the notation
$\hskip 1.0pt\text{\small$-$}\hskip-10.6pt\int_{M\cap
B_{\rho}}f=\rho^{-\ell-2-\beta_{1}}\int_{M\cap B_{\rho}}f\,d\mu.$
5.1 Lemma. Let $\alpha,\lambda,\Lambda>0$ and assume $M$ is embedded, minimal,
$M\subset U_{+}\times\varmathbb{R}^{\ell}$, $R^{-n-\ell}\mu(M\cap
B_{R})\leqslant\Lambda$ for each $R>1$ and $M$ satisfies the strict stability
1.8. Then there is $R_{0}=R_{0}(M,\lambda,\Lambda,\alpha)>1$ such that
$C^{-1}R^{-\alpha}\hskip 1.0pt\text{\small$-$}\hskip-10.6pt\int_{M\cap
B_{R_{0}}}d^{2}\leqslant\hskip 1.0pt\text{\small$-$}\hskip-10.6pt\int_{M\cap
B_{R}}d^{2}\leqslant CR^{\alpha}\hskip
1.0pt\text{\small$-$}\hskip-10.6pt\int_{M\cap B_{R_{0}}}d^{2}\,\,\,\,\forall
R\geqslant R_{0},\quad C=C(\lambda,\Lambda,\varmathbb{C}_{0}).$
To facilitate the proof we need the technical preliminaries of the following
lemma. In this lemma
5.2 $\nu_{M}=(\nu_{1},\ldots,\nu_{n+\ell+1})$
continues to denote a smooth unit normal for $M$, $\nu_{\varmathbb{C}_{0}}$
continues to denote the unit normal of $\varmathbb{C}_{0}$ pointing into
$U_{+}$, and
$\varepsilon_{0}=\varepsilon_{0}(\varmathbb{C}_{0})\in(0,{\textstyle\frac{1}{2}}]$
is assumed small enough to ensure that there is a smooth nearest point
projection
5.3
$\pi:\bigl{\\{}x\in\varmathbb{R}^{n+1}:d(x)<\varepsilon_{0}|x|\bigr{\\}}\to\varmathbb{C}_{0}.$
5.4 Lemma. Suppose $\Lambda>0$, $\delta\in(0,{\textstyle\frac{1}{2}}]$,
$\theta\in[{\textstyle\frac{1}{2}},1)$, $M$ satisfies 4.1, and $M\subset
U_{+}\times\varmathbb{R}^{\ell}$. Then there is
$\varepsilon=\varepsilon(\delta,\lambda,\Lambda,\theta,\varmathbb{C})\in(0,\varepsilon_{0}]$
such that, for all $(x,y)\in M\cap B_{\theta R}$ with $d(x)<\varepsilon|x|$,
(i)
$\min\bigl{\\{}\bigl{|}\nu_{M}(x,y)-\bigl{(}\nu_{\varmathbb{C}_{0}}(\pi(x)),0\bigr{)}\bigr{|},\,\bigl{|}\nu_{M}(x,y)+\bigl{(}\nu_{\varmathbb{C}_{0}}(\pi(x)),0\bigr{)}\bigr{|}\bigr{\\}}<\delta$
(in particular
$|\nu_{y}(x,y)|\,\,(\,=|(\nu_{y_{1}}(x,y),\ldots,\nu_{y_{\ell}}(x,y))|\,)<\delta$),
and
(ii) $\displaystyle(1-7\delta)|x|^{-1}|(x,0)\cdot\nu_{M}(x,y)|\leqslant$
$\displaystyle\hskip
72.26999pt\bigl{|}(x,0)\cdot\nabla_{M}\bigl{(}d(x)/|x|\bigr{)}\bigr{|}\leqslant(1+7\delta)|x|^{-1}|(x,0)\cdot\nu_{M}(x,y)|.$
Remark: The inequalities in (ii) do not depend on the minimality of $M$: (ii)
is true for any smooth hypersurface $M$ at points where $d(x)<\varepsilon|x|$
and where (i) holds for suitably small $\delta\in(0,{\textstyle\frac{1}{2}}]$.
Proof of Lemma 5.4: By scaling we can assume $R=1$. If (i) fails for some
$\Lambda,\delta,\theta$ then there is a sequence $(\xi_{k},\eta_{k})\in M\cap
B_{\theta}$ with $d(\xi_{k})\leqslant k^{-1}|\xi_{k}|$ and
(1)
$\min\bigl{\\{}|\nu(\xi_{k},\eta_{k})-\nu_{\varmathbb{C}_{0}}(\pi(\xi_{k}))|,|\nu(\xi_{k},\eta_{k})+\nu_{\varmathbb{C}_{0}}(\pi(\xi_{k}))|\bigr{\\}}\geqslant\delta.$
Then passing to a subsequence we have
$\tilde{\xi}_{k}=|\xi_{k}|^{-1}\xi_{k}\to\xi\in\varmathbb{C}_{0}$ and
$M_{k}=|\xi_{k}|^{-1}(M-(0,\eta_{k}))$ converges in the varifold sense to an
integer multiplicity varifold $V$ which is stationary in
$B_{\theta^{-1}}(\xi,0)$, with
${\rm spt\,}V\subset\overline{U}_{+}\times\varmathbb{R}^{\ell}$, and
$\xi\in{\rm spt\,}V\cap\varmathbb{C}_{0}$. Also, by the compactness and
regularity theory of [SS81], ${\rm sing\,\,}V\cap\breve{B}_{\theta^{-1}}$ has
Hausdorff dimension $\leqslant n+\ell-7$. So, since $\varmathbb{C}_{0}$ is
connected, by the maximum principle of [Ilm96] we have
(2) $V\llcorner\breve{B}_{\theta^{-1}}(\xi,0)=(q\varmathbb{C}+W)\llcorner\breve{B}_{\theta^{-1}}(\xi,0)$
where $W$ is integer multiplicity, stationary in $\breve{B}_{\theta^{-1}}$ and
with ${\rm spt\,}W\subset\overline{U}_{+}\times\varmathbb{R}^{\ell}$. Taking
the maximum integer $q$ such that this holds, we then have
(3) $B_{\sigma}(\xi,0)\cap{\rm spt\,}W=\emptyset$
for some $\sigma>0$, because otherwise ${\rm
spt\,}W\cap\overline{\varmathbb{C}}\neq\emptyset$ and we could apply [Ilm96]
again to conclude
$W\llcorner\breve{B}_{\theta^{-1}}(\xi,0)=(\varmathbb{C}+W_{1})\llcorner\breve{B}_{\theta^{-1}}(\xi,0)$
where $W_{1}$ is stationary integer
multiplicity, contradicting the maximality of $q$ in (2).
In view of (2), (3) we can then apply the sheeting theorem [SS81, Theorem 1]
to assert that $M_{k}\cap B_{\sigma/2}(\xi,0)$ is $C^{2}$ close to
$\varmathbb{C}$, and in particular
$\min\bigl{\\{}|\nu(\xi_{k},\eta_{k})-\nu_{\varmathbb{C}_{0}}(\pi(\xi_{k}))|,|\nu(\xi_{k},\eta_{k})+\nu_{\varmathbb{C}_{0}}(\pi(\xi_{k}))|\bigr{\\}}\to
0,$
contradicting (1).
To prove (ii), let $(x_{0},y_{0})\in M$ be such that
$d(x_{0})<\varepsilon|x_{0}|$ and such that (i) holds with
$(x,y)=(x_{0},y_{0})$, and let $\sigma>0$ be small enough to ensure that both
$d(x)<\varepsilon|x|$ and (i) hold for all $(x,y)\in M\cap
B_{\sigma}(x_{0},y_{0})$. Let
$M_{y_{0}}=\bigl{\\{}x\in\varmathbb{R}^{n+1}:(x,y_{0})\in M\cap
B_{\sigma}(x_{0},y_{0})\bigr{\\}}.$
Then, taking a smaller $\sigma$ if necessary, we can assume
(4) $M_{y_{0}}={\rm
graph\,}u=\bigl{\\{}(\xi+u(\xi)\nu_{\varmathbb{C}_{0}}(\xi),y_{0}):\xi\in\Omega\bigr{\\}},$
where $\nu_{\varmathbb{C}_{0}}$ is the unit normal of $\varmathbb{C}_{0}$
pointing into $U_{+}$, $\Omega$ is an open neighborhood of $\pi(x_{0})$ in
$\varmathbb{C}_{0}$, and $u$ is $C^{2}(\Omega)$.
Then with $x=\xi+u(\xi)\nu_{\varmathbb{C}_{0}}(\xi)\in M_{y_{0}}$ we have also
$x(t)=t\xi+u(t\xi)\nu_{\varmathbb{C}_{0}}(\xi)\in M_{y_{0}}$ for $t$
sufficiently close to $1$ (because
$\nu_{\varmathbb{C}_{0}}(t\xi)=\nu_{\varmathbb{C}_{0}}(\xi)$) and hence
(5)
${\textstyle\frac{d}{dt}}\bigr{|}_{t=1}x(t)=\xi+\xi\cdot\nabla_{\varmathbb{C}_{0}}u(\xi)\nu_{\varmathbb{C}_{0}}(\xi)\in
T_{x}M_{y_{0}}.$
With $\nu^{\prime}(x)=(\nu_{1}(x,y_{0}),\ldots,\nu_{n+1}(x,y_{0}))$ (which is
normal to $M_{y_{0}}$), we can assume, after changing the sign of $\nu$ if
necessary, that
(6) $\nu_{\varmathbb{C}_{0}}(\xi)\cdot\nu^{\prime}(x)>0\text{ for }x\in
M_{y_{0}},\text{ so
}|\nu_{\varmathbb{C}_{0}}(\xi)-\nu^{\prime}(x)|<\delta\text{ by (i)}.$
By (5) we have
$x\cdot\nu^{\prime}(x)=\bigl{(}x-{\textstyle\frac{d}{dt}}\bigr{|}_{t=1}x(t)\bigr{)}\cdot\nu^{\prime}(x)=\bigl{(}u(\xi)-\xi\cdot\nabla_{\varmathbb{C}_{0}}u(\xi)\bigr{)}\nu_{\varmathbb{C}_{0}}(\xi)\cdot\nu^{\prime}(x),$
hence
(7)
$u(\xi)-\xi\cdot\nabla_{\varmathbb{C}_{0}}u(\xi)=(\nu_{\varmathbb{C}_{0}}(\xi)\cdot\nu^{\prime}(x))^{-1}x\cdot\nu^{\prime}(x).$
Differentiating the identity
$u(t\xi)=d\bigl{(}t\xi+u(t\xi)\nu_{\varmathbb{C}_{0}}(\xi)\bigr{)}$ at $t=1$,
we obtain
$\displaystyle-\xi\cdot\nabla_{\varmathbb{C}_{0}}u(\xi)$
$\displaystyle=-\bigl{(}\xi+\xi\cdot\nabla_{\varmathbb{C}_{0}}u(\xi)\nu_{\varmathbb{C}_{0}}(\xi)\bigr{)}\cdot\nabla_{M_{y_{0}}}d(x)$
$\displaystyle=-\bigl{(}x-(u(\xi)-\xi\cdot\nabla_{\varmathbb{C}_{0}}u(\xi))\nu_{\varmathbb{C}_{0}}(\xi)\bigr{)}\cdot\nabla_{M_{y_{0}}}d(x)$
$\displaystyle=-x\cdot\nabla_{M_{y_{0}}}d(x)+\bigl{(}(u(\xi)-\xi\cdot\nabla_{\varmathbb{C}_{0}}u(\xi))\nu_{\varmathbb{C}_{0}}(\xi)\bigr{)}\cdot\nabla_{M_{y_{0}}}d(x),$
hence, adding $d(x)\,\,(\,=u(\xi))$ to each side of this identity,
(8) $\displaystyle d(x)-x\cdot\nabla_{M_{y_{0}}}d(x)$
$\displaystyle=\bigl{(}u-\xi\cdot\nabla_{\varmathbb{C}_{0}}u(\xi)\bigr{)}\bigl{(}1-\nu_{\varmathbb{C}_{0}}(\xi)\cdot\nabla_{M_{y_{0}}}d(x)\bigr{)}$
$\displaystyle=\bigl{(}\nu_{\varmathbb{C}_{0}}(\xi)\cdot\nu^{\prime}(x)\bigr{)}^{-1}x\cdot\nu^{\prime}(x)\bigl{(}1-\nu_{\varmathbb{C}_{0}}(\xi)\cdot\nabla_{M_{y_{0}}}d(x)\bigr{)}$
by (7). Using the identity
$|x|^{-1}x\cdot\nabla_{M_{y_{0}}}|x|=1-|\nu^{\prime}(x)|^{-2}(x\cdot\nu^{\prime}(x)/|x|)^{2}$
we see that the left side of (8) is
$\bigl{(}-x\cdot\nabla_{M_{y_{0}}}(d(x)/|x|)-|\nu^{\prime}(x)|^{-2}(|x\cdot\nu^{\prime}(x)|/|x|)^{2}\bigr{)}|x|$,
so (8) gives
$x\cdot\nabla_{M_{y_{0}}}(d(x)/|x|)=|x|^{-1}x\cdot\nu^{\prime}(x)(1+E),$
where
$E=|\nu^{\prime}(x)|^{-2}x\cdot\nu^{\prime}(x)/|x|+(|\nu_{\varmathbb{C}_{0}}(\xi)\cdot\nu^{\prime}(x)|^{-1}-1)-|\nu_{\varmathbb{C}_{0}}(\xi)\cdot\nu^{\prime}(x)|^{-1}\nu_{\varmathbb{C}_{0}}(\xi)\cdot\nabla_{M_{y_{0}}}d(x).$
Since
$\nu_{\varmathbb{C}_{0}}(\xi)\cdot\nabla_{M_{y_{0}}}d(x)=(\nu_{\varmathbb{C}_{0}}(\xi)-\nu^{\prime}(x))\cdot\nabla_{M_{y_{0}}}d(x)$
and
$x\cdot\nu^{\prime}(x)=x\cdot\nu_{\varmathbb{C}_{0}}(\xi)+x\cdot(\nu^{\prime}(x)-\nu_{\varmathbb{C}_{0}}(\xi))=d(x)-x\cdot(\nu_{\varmathbb{C}_{0}}(\xi)-\nu^{\prime}(x))$,
and $|\nu_{\varmathbb{C}_{0}}(\xi)-\nu^{\prime}(x)|<\delta$ by (6), direct
calculation then shows that $|E|<7\delta$. Since
$x\cdot\nabla_{M_{y_{0}}}(d(x)/|x|)=(x,0)\cdot\nabla_{M}(d(x)/|x|)\bigr{|}_{(x,y_{0})}$
and $x\cdot\nu^{\prime}(x)=(x,0)\cdot\nu(x,y_{0})$ we thus have the required
inequalities (ii) at $(x_{0},y_{0})$. $\Box$
Proof of Lemma 5.1. First we establish an $L^{2}$ bound for $|x|^{-1}d(x)$,
namely
(1) $\int_{M\cap B_{R}}|x|^{-2}d^{2}(x,y)\,d\mu(x,y)\leqslant
CR^{-2}\int_{M\cap
B_{R/2}\setminus\\{(x,y):|x|<{\scriptsize\frac{1}{4}}R\\}}d^{2}(x,y)\,d\mu(x,y)$
for $R$ sufficiently large (depending on $M$), where
$C=C(\lambda,\Lambda,\varmathbb{C})$.
To check this observe that since $R^{-n-\ell-2}\int_{M\cap B_{R}}d^{2}\to 0$
by 2.9, we can apply Lemma 5.4 for sufficiently large $R$ giving conclusions
(i),(ii) of 5.4 with $\delta={\textstyle\frac{1}{8}}$ and
$\theta={\textstyle\frac{1}{2}}$.
Let
$\varepsilon=\varepsilon(\delta,\lambda,\Lambda,\theta)\in(0,\varepsilon_{0}]$
be as in Lemma 5.4 with $\delta={\textstyle\frac{1}{8}}$ and
$\theta={\textstyle\frac{1}{2}}$, and let
$\bigl{(}d/|x|\bigr{)}_{\varepsilon}=\min\\{d(x)/|x|,\varepsilon\\}$. Then,
since $d(x)/|x|\leqslant 1$ for all $x$,
(2)
$d(x)/|x|\leqslant\varepsilon^{-1}\bigl{(}d(x)/|x|\bigr{)}_{\varepsilon}\text{
at all points of }M,$
and since $\nabla_{M}\bigl{(}d(x)/|x|\bigr{)}_{\varepsilon}=0$ for $\mu$-a.e.
point $(x,y)$ with $d(x)/|x|\geqslant\varepsilon$, by 5.4(ii)
(3) $|(x,0)\cdot\nabla_{M}\bigl{(}d(x)/|x|\bigr{)}_{\varepsilon}|\leqslant
2|x|^{-1}|(x,0)\cdot\nu(x,y)|\text{ at $\mu$-a.e.\ point of }M\setminus
B_{R_{0}},$
for suitable $R_{0}=R_{0}(M,\lambda,\Lambda,\varmathbb{C})$.
Take $R>R_{0}$ and $\zeta\in C^{1}_{c}(\breve{B}^{n+1}_{R})$, $\chi\in
C^{1}_{c}(\breve{B}^{\ell}_{R})$ with $\zeta(x)\equiv 1$ for
$|x|\leqslant{\textstyle\frac{1}{4}}R$, $\zeta(x)=0$ for
$|x|>{\textstyle\frac{1}{2}}R$, $\chi(y)\equiv 1$ for
$|y|\leqslant{\textstyle\frac{1}{4}}R$, $\chi(y)=0$ for
$|y|>{\textstyle\frac{1}{2}}R$, and $|D\zeta|,|D\chi|\leqslant 3R^{-1}$. Since
${\rm div}_{M}(x,0)\geqslant n>2\text{ and
}|(x,0)\cdot\nabla_{M}\chi(y)|\leqslant|x|\,|D\chi(y)|\,|\nu_{y}|,$
we can apply (2), (3), and 2.1 with
$Z_{|(x,y)}=\bigl{(}d(x)/|x|\bigr{)}_{\varepsilon}^{2}\zeta^{2}(x)\chi^{2}(y)(x,0)$
to conclude
(4) $\displaystyle\hskip 1.0pt\text{\small$-$}\hskip-10.6pt\int_{M\cap
B_{R/4}}\hskip-8.0pt|x|^{-2}d^{2}(x)\,d\mu$ $\displaystyle\hskip
14.45377pt\leqslant C\hskip 1.0pt\text{\small$-$}\hskip-10.6pt\int_{M\cap
B_{R}}\hskip-6.0pt\bigl{(}|x|^{-2}((x,0)\cdot\nu)^{2}+|\nu_{y}|^{2}\bigr{)}\,d\mu+CR^{-2}\hskip
1.0pt\text{\small$-$}\hskip-10.6pt\int_{M\cap
B_{R}\setminus\\{(x,y):|x|<\frac{1}{4}R\\}}\hskip-20.0ptd^{2}\,d\mu$
$\displaystyle\hskip 14.45377pt\leqslant 2C\hskip
1.0pt\text{\small$-$}\hskip-10.6pt\int_{M\cap
B_{R}}\hskip-6.0pt\bigl{(}|x|^{-2}((x,y)\cdot\nu)^{2}+R^{2}|x|^{-2}|\nu_{y}|^{2}\bigr{)}\,d\mu+$
$\displaystyle\hskip 187.90244ptCR^{-2}\hskip
1.0pt\text{\small$-$}\hskip-10.6pt\int_{M\cap
B_{R}\setminus\\{(x,y):|x|<\frac{1}{4}R\\}}\hskip-20.0ptd^{2}\,d\mu,$
where $C=C(\lambda,\Lambda,\varmathbb{C})$.
By Corollary 4.9, with either of the choices $w=(x,y)\cdot\nu$ or $w=R\nu_{y_{j}}$,
$\hskip 1.0pt\text{\small$-$}\hskip-10.6pt\int_{M\cap
B_{R}}|x|^{-2}w^{2}\leqslant CR^{-2}\hskip
1.0pt\text{\small$-$}\hskip-10.6pt\int_{M\cap
B_{R/2}\setminus\\{(x,y):|x|<{\scriptsize\frac{1}{3}}R\\}}w^{2},$
and hence (4) gives
(5) $\displaystyle\hskip 1.0pt\text{\small$-$}\hskip-10.6pt\int_{M\cap
B_{R/4}}\hskip-8.0pt|x|^{-2}d^{2}(x)\,d\mu\leqslant CR^{-2}\hskip
1.0pt\text{\small$-$}\hskip-10.6pt\int_{M\cap
B_{R}\setminus\\{(x,y):|x|<{\scriptsize\frac{1}{3}}R\\}}\hskip-20.0pt(\,((x,y)\cdot\nu)^{2}+R^{2}|\nu_{y}|^{2}\,)$
$\displaystyle\hskip 180.67499pt+CR^{-2}\hskip
1.0pt\text{\small$-$}\hskip-10.6pt\int_{M\cap
B_{R}\setminus\\{(x,y):|x|<{\scriptsize\frac{1}{4}}R\\}}\hskip-30.0ptd^{2}\,d\mu.$
In view of 2.9 there exist $R_{0}>1$ and
$\delta,\tilde{\delta}:(0,\infty)\to(0,\infty)$ such that
$\delta(t),\tilde{\delta}(t)\to 0$ as $t\to\infty$ and
(6) $\left\\{\begin{aligned} &\bigl{\\{}(x,y)\in
M:|x|\leqslant\tilde{\delta}(|(x,y)|)\,|(x,y)|\bigr{\\}}\setminus
B_{R_{0}}\subset\cup_{j=1}^{q}{\rm graph\,}u_{j}\subset M\\\
&{\sup}_{(\xi,\eta)\in\Omega\setminus
B_{R}}{\textstyle\sum}_{j=1}^{q}\bigl{(}|Du_{j}(\xi,\eta)|+|(\xi,\eta)|^{-1}u_{j}(\xi,\eta)\bigr{)}\to
0\text{ as }R\to\infty,\end{aligned}\right.$
where the $u_{j}$ are positive $C^{2}$ functions on the domain $\Omega$,
$\Omega\supset\bigl{\\{}(x,y)\in\varmathbb{C}:|x|<\delta(|(x,y)|)\,|(x,y)|\bigr{\\}}\setminus
B_{R_{0}}.$
For $(x,y)=(\xi+u_{j}(\xi,y)\nu_{\varmathbb{C}_{0}}(\xi),y)$ with
$(\xi,y)\in\Omega$, take an orthonormal basis $\tau_{1},\ldots,\tau_{n+\ell}$
for $T_{(\xi,y)}\varmathbb{C}$ with $\tau_{1},\ldots,\tau_{n-1}$ principal
directions for $\varmathbb{C}_{0}\cap\varmathbb{S}^{n}$ (so
$\nabla_{\tau_{i}}\nu_{\varmathbb{C}_{0}}=\kappa_{i}\tau_{i}$ for
$i=1,\ldots,n-1$), $\tau_{n}=|\xi|^{-1}\xi$, and $\tau_{n+j}=e_{n+1+j}$,
$j=1,\ldots,\ell$. Then the unit normal $\nu(x,y)$ of $M$ is
$\displaystyle\nu(x,y)=\bigl{(}1+{\textstyle\sum}_{i=1}^{n}(1+\kappa_{i}u_{j}(\xi,y))^{-2}(D_{\tau_{i}}u_{j}(\xi,y))^{2}+|D_{y}u_{j}(\xi,y)|^{2}\bigr{)}^{-1/2}$
$\displaystyle\hskip
14.45377pt\times\bigl{(}\nu_{\varmathbb{C}_{0}}(\xi)-{\textstyle\sum}_{i=1}^{n}(1+\kappa_{i}u_{j}(\xi,y))^{-1}D_{\tau_{i}}u_{j}(\xi,y)\tau_{i}-{\textstyle\sum}_{k=1}^{\ell}D_{y_{k}}u_{j}(\xi,y)e_{n+1+k}\bigr{)},$
so for $R$ sufficiently large
(7) $|\nu_{y}(x,y)|\leqslant|D_{y}u_{j}(\xi,y)|\text{ and
}|(x,0)\cdot\nu_{M}(x,y)|\leqslant|u_{j}(\xi,y)|+2|\xi||\nabla_{\varmathbb{C}}u_{j}(\xi,y)|.$
Also, since $u_{j}$ satisfies the equation
$\mathcal{M}_{\varmathbb{C}}(u_{j})=0$, where $\mathcal{M}_{\varmathbb{C}}$
satisfies 2.7 with $\varmathbb{C}$ in place of $M$, we have
(8)
$\int_{B_{R}\setminus\\{(x,y):|x|<{\scriptsize\frac{1}{3}}R\\}}|\nabla_{\varmathbb{C}}u_{j}|^{2}\leqslant
CR^{-2}\int_{B_{3R/2}\setminus\\{(x,y):|x|<{\scriptsize\frac{1}{4}}R\\}}u_{j}^{2}.$
Also, $d(\xi+u_{j}(\xi,y)\nu_{\varmathbb{C}_{0}}(\xi))=u_{j}(\xi,y)$, so by
(7) and (8),
(9) $\hskip 1.0pt\text{\small$-$}\hskip-10.6pt\int_{M\cap
B_{R}\setminus\\{(x,y):|x|<{\scriptsize\frac{1}{3}}R\\}}w^{2}\leqslant C\hskip
1.0pt\text{\small$-$}\hskip-10.6pt\int_{M\cap
B_{3R/2}\setminus\\{(x,y):|x|<{\scriptsize\frac{1}{4}}R\\}}d^{2}$
for either of the choices $w=(x,y)\cdot\nu$ and $w=R\nu_{y_{j}}$, and hence
(5) implies
(10) $\hskip 1.0pt\text{\small$-$}\hskip-10.6pt\int_{M\cap
B_{R/4}}|x|^{-2}d^{2}(x)\leqslant CR^{-2}\hskip
1.0pt\text{\small$-$}\hskip-10.6pt\int_{M\cap
B_{3R/2}\setminus\\{(x,y):|x|<{\scriptsize\frac{1}{4}}R\\}}d^{2},\quad
C=C(\lambda,\Lambda,\varmathbb{C}).$
Since $\mathcal{M}_{\varmathbb{C}}(u_{j})=0$ and the $u_{j}$ are positive with
small gradient, and also
$d(\xi+u_{j}(\xi,\eta)\nu_{\varmathbb{C}_{0}}(\xi),\eta)=u_{j}(\xi,\eta)$, we
can use the Harnack inequality in balls of radius $R/20$ to conclude
$\hskip 1.0pt\text{\small$-$}\hskip-10.6pt\int_{M\cap
B_{3R/2}\setminus\\{(x,y):|x|<{\scriptsize\frac{1}{4}}R\\}}d^{2}\leqslant
C\hskip 1.0pt\text{\small$-$}\hskip-10.6pt\int_{M\cap
B_{R/8}\setminus\\{(x,y):|x|<{\scriptsize\frac{1}{16}}R\\}}d^{2},$
and so (10) gives
(11) $\hskip 1.0pt\text{\small$-$}\hskip-10.6pt\int_{M\cap
B_{R/4}}|x|^{-2}d^{2}(x)\leqslant CR^{-2}\hskip
1.0pt\text{\small$-$}\hskip-10.6pt\int_{M\cap
B_{R/8}\setminus\\{(x,y):|x|<{\scriptsize\frac{1}{16}}R\\}}d^{2},\quad
C=C(\lambda,\Lambda,\varmathbb{C}).$
Then (1) follows from (11) after replacing $R$ by $4R$. (1) in particular
implies, for all sufficiently large $R$,
(12) $\left\\{\begin{aligned} &\int_{M\cap B_{R}\cap\\{(x,y):|x|<\delta
R\\}}d^{2}\,d\mu\leqslant C\delta^{2}\int_{M\cap
B_{R/2}}d^{2}\,d\mu\,\,\,\forall\delta\in(0,{\textstyle\frac{1}{2}}],\\\
&\int_{M\cap B_{R}}d^{2}\,d\mu\leqslant C\int_{M\cap
B_{R/2}\setminus\\{(x,y):|x|<\frac{1}{4}R\\}}d^{2}\,d\mu,\end{aligned}\right.$
where $C=C(\lambda,\Lambda,\varmathbb{C})$.
So let $R_{k}\to\infty$ be arbitrary, $M_{k}=R_{k}^{-1}M$, $d_{k}=d|M_{k}$,
and
$u^{k}_{j}(x,y)=R_{k}^{-1}u_{j}(R_{k}x,R_{k}y)\big{/}\bigl{(}\text{\scriptsize$-$}\hskip-6.85pt{\textstyle\int}_{M_{k}\cap
B_{1}}d_{k}^{2}\bigr{)}^{1/2}.$
By virtue of (1), (6), and (12), we have a subsequence of $k$ (still denoted
$k$) such that $u^{k}_{j}$ converges locally in $\varmathbb{C}$ to a solution
$v_{j}\geqslant 0$ of $\mathcal{L}_{\varmathbb{C}}v_{j}=0$ with $v_{j}$
strictly positive for at least one $j$, and $\int_{\varmathbb{C}\cap
B_{R}}|x|^{-2}v_{j}^{2}\,d\mu<\infty$ for each $R>0$. Hence $v_{j}$ has a
representation of the form 3.20 on all of $\varmathbb{C}$, and then since
$v_{j}\geqslant 0$ we must have
$\smash{v_{j}=c_{j}r^{\gamma_{1}^{+}}\varphi_{1}}$ with $c_{j}\geqslant 0$ and
$c_{j}>0$ for at least one $j$. But then
$\text{\scriptsize$-$}\hskip-6.85pt{\textstyle\int}_{\varmathbb{C}\cap
B_{R}}\sum_{j=1}^{q}v^{2}_{j}$ is constant, independent of $R$, and so (using
(12) again)
$\lim_{k\to\infty}\hskip 1.0pt\text{\small$-$}\hskip-10.6pt\int_{M_{k}\cap
B_{2R_{k}}}d^{2}\bigg{/}\hskip
1.0pt\text{\small$-$}\hskip-10.6pt\int_{M_{k}\cap B_{R_{k}}}d^{2}=\hskip
1.0pt\text{\small$-$}\hskip-10.6pt\int_{\varmathbb{C}\cap
B_{2}}\sum_{j=1}^{q}v^{2}_{j}\bigg{/}\hskip
1.0pt\text{\small$-$}\hskip-10.6pt\int_{\varmathbb{C}\cap
B_{1}}\sum_{j=1}^{q}v^{2}_{j}=1.$
Therefore, in view of the arbitrariness of the sequence $R_{k}$, for any
$\alpha\in(0,{\textstyle\frac{1}{2}}]$ we have
$2^{-\alpha}\leqslant\hskip 1.0pt\text{\small$-$}\hskip-10.6pt\int_{M\cap
B_{R}}d^{2}\bigg{/}\hskip 1.0pt\text{\small$-$}\hskip-10.6pt\int_{M\cap
B_{R/2}}d^{2}\leqslant 2^{\alpha}\,\,\,\forall\,R>R_{0},$
with $R_{0}=R_{0}(M,\lambda,\Lambda,\alpha)$, and 5.1 follows. $\Box$
## 6\. Proof of Theorem 1
According to Lemma 2.9 the unique tangent cone of $M$ at $\infty$ is
$\varmathbb{C}$ (with multiplicity $q$). Let $\alpha\in(0,1)$. By Lemma 5.1
6.1 $\hskip 1.0pt\text{\small$-$}\hskip-10.6pt\int_{M\cap
B_{R}}d^{2}\,d\mu\leqslant R^{\alpha}$
for all sufficiently large $R$. Also by Lemma 4.7 either $\nu_{y}$ is
identically zero or there is a lower bound
6.2 $\hskip 1.0pt\text{\small$-$}\hskip-10.6pt\int_{M\cap
B_{R}}|\nu_{y}|^{2}\geqslant R^{-\alpha}$
for all sufficiently large $R$, and by Corollary 4.9 with $w=\nu_{y_{j}}$ we have
6.3 $\hskip 1.0pt\text{\small$-$}\hskip-10.6pt\int_{M\cap
B_{R}}|\nu_{y}|^{2}\leqslant C\hskip
1.0pt\text{\small$-$}\hskip-10.6pt\int_{M\cap
B_{R/2}\setminus\\{(x,y):|x|<{\scriptsize\frac{1}{3}}R\\}}|\nu_{y}|^{2},\quad
C=C(q,\lambda,\varmathbb{C}).$
By inequality (9) in the proof of Lemma 5.1 (with $R/2$ in place of $R$),
6.4 $\hskip 1.0pt\text{\small$-$}\hskip-10.6pt\int_{M\cap
B_{R/2}\setminus\\{(x,y):|x|<{\scriptsize\frac{1}{3}}R\\}}|\nu_{y}|^{2}\leqslant
CR^{-2}\hskip 1.0pt\text{\small$-$}\hskip-10.6pt\int_{M\cap B_{R}}d^{2}.$
Combining 6.1, 6.2, 6.3 and 6.4, we then have
$R^{-\alpha}\leqslant\hskip 1.0pt\text{\small$-$}\hskip-10.6pt\int_{M\cap
B_{R}}|\nu_{y}|^{2}\leqslant CR^{-2}\hskip
1.0pt\text{\small$-$}\hskip-10.6pt\int_{M\cap B_{R}}d^{2}\leqslant
R^{-2+\alpha}$
for all sufficiently large $R$ (depending on
$\alpha,q,\lambda,\varmathbb{C}$), which is impossible for $R>1$ since
$\alpha<1$. Thus the alternative that $\nu_{y}$ is identically zero on $M$
must hold, and so $M$ is a cylinder $S\times\varmathbb{R}^{\ell}$.
## 7\. Proof of Corollary 1
We aim here to show that the hypotheses of Corollary 1 ensure that $M$ is
strictly stable as in 1.8, so that Corollary 1 is then implied by Theorem 1.
Before discussing the proof that $M$ is strictly stable, we need a couple of
preliminary results. First recall that, according to [HS85, Theorem 2.1],
since $\varmathbb{C}_{0}$ is minimizing there is a smooth embedded minimal
hypersurface $S\subset U_{+}$ ($U_{+}$ either one of the two connected
components of $\varmathbb{R}^{n+1}\setminus\overline{\varmathbb{C}}_{0}$),
with
7.1 ${\rm dist\,}(S,\\{0\\})=1$, ${\rm sing\,\,}S=\emptyset$ and $S$ is a
smooth radial graph; i.e. each ray $\bigl{\\{}tx:t>0\bigr{\\}}$ with $x\in
U_{+}$ meets $S$ transversely in just one point,
so in particular
7.2 $\displaystyle x\cdot\nu_{S}(x)>0\text{ for each $x\in S$, where $\nu_{S}$
is the smooth}$ unit normal of $S$ pointing away from $\varmathbb{C}_{0}$.
Furthermore, since $\varmathbb{C}_{0}$ is strictly stable and strictly
minimizing, [HS85, Theorem 3.2] ensures that, for
$R_{0}=R_{0}(\varmathbb{C}_{0})$ large enough, there is a $C^{2}$ function
$u$ defined on an open subset $\Omega\subset\varmathbb{C}_{0}$ with
$\Omega\supset\bigl{\\{}x\in\varmathbb{C}_{0}:|x|>2R_{0}\bigr{\\}}$ and
7.3 $S\setminus
B^{n+1}_{R_{0}}=\bigl{\\{}x+u(x)\nu_{\varmathbb{C}_{0}}(x):x\in\Omega\bigr{\\}}\text{
with }u(x)=\kappa|x|^{\gamma_{1}}\varphi_{1}(|x|^{-1}x)+E(x),$
where $\kappa$ is a positive constant, $\nu_{\varmathbb{C}_{0}}$ is the unit
normal of $\varmathbb{C}_{0}$ pointing into $U_{+}$, $\varphi_{1}>0$ is as in
3.3 with $j=1$, and, for some $\alpha=\alpha(\varmathbb{C}_{0})>0$,
$\lim_{R\to\infty}\sup_{|x|>R}|x|^{k+|\gamma_{1}|+\alpha}|D^{k}E(x)|=0\text{
for }k=0,1,2.$
We claim that $S$ is strictly stable:
7.4 Lemma. If $S$ is as above, there is a constant
$\lambda=\lambda(\varmathbb{C}_{0})>0$ such that
$\lambda\int_{S}|x|^{-2}\zeta^{2}(x,y)\,d\mu(x,y)\leqslant\int_{S}\bigl{(}\bigl{|}\nabla_{S}\zeta\bigr{|}^{2}-|A_{S}|^{2}\zeta^{2}\bigr{)}\,d\mu,\quad\zeta\in
C_{c}^{1}(\varmathbb{R}^{n+1}),$
where $|A_{S}|$ is the length of the second fundamental form of $S$.
Proof: The normal part of the velocity vector of the homotheties
$\\{\lambda S\\}_{\lambda>0}$ at $\lambda=1$ is
$\psi=x\cdot\nu_{S}(x)\,(\,>0\text{ by 7.2})$, and since the $\lambda S$ are
minimal hypersurfaces, this is a Jacobi function, i.e. a solution of
(1) $\Delta_{S}\psi+|A_{S}|^{2}\psi=0.$
By properties 7.2 and 7.3 we also have
(2) $C^{-1}|x|^{\gamma_{1}}\leqslant\psi(x)\leqslant C|x|^{\gamma_{1}}\text{
on $S$},\quad C=C(\varmathbb{C}_{0}).$
After a direct calculation and an integration by parts, (1) implies
(3)
$\int_{S}\psi^{2}|\nabla_{\\!S}\bigl{(}\zeta/\psi\bigr{)}|^{2}\,d\mu=\int_{S}\bigl{(}|\nabla_{\\!S}\zeta|^{2}-|A_{S}|^{2}\zeta^{2}\bigr{)}\,d\mu,$
and using the first variation formula 2.1 with $Z(x)=|x|^{-p-2}f^{2}(x)x$ and
noting that ${\rm div}_{S}x=n$, we have, after an application of the Cauchy-
Schwarz inequality,
(4) $\int_{S}|x|^{-p-2}f^{2}\leqslant
C\int_{S}|x|^{-p}|\nabla_{S}f|^{2},\,\,f\in
C^{1}_{c}(\varmathbb{R}^{n+1}),\quad p<n-2,\,\,C=C(p,n).$
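For completeness we sketch (3) and (4). For (3), expand and integrate by
parts using (1):
$\int_{S}\psi^{2}|\nabla_{\\!S}(\zeta/\psi)|^{2}\,d\mu=\int_{S}\bigl{(}|\nabla_{\\!S}\zeta|^{2}-\nabla_{\\!S}(\zeta^{2}/\psi)\cdot\nabla_{\\!S}\psi\bigr{)}\,d\mu=\int_{S}\bigl{(}|\nabla_{\\!S}\zeta|^{2}-|A_{S}|^{2}\zeta^{2}\bigr{)}\,d\mu,$
since
$-\int_{S}\nabla_{\\!S}(\zeta^{2}/\psi)\cdot\nabla_{\\!S}\psi=\int_{S}(\zeta^{2}/\psi)\Delta_{S}\psi=-\int_{S}|A_{S}|^{2}\zeta^{2}$
by (1). For (4), with $Z(x)=|x|^{-p-2}f^{2}(x)x$ we have ${\rm
div}_{S}Z\geqslant(n-p-2)|x|^{-p-2}f^{2}+2|x|^{-p-2}f\,x\cdot\nabla_{\\!S}f$
(using $x\cdot\nabla_{\\!S}|x|^{-p-2}\geqslant-(p+2)|x|^{-p-2}$), so 2.1 gives
$(n-p-2)\int_{S}|x|^{-p-2}f^{2}\leqslant
2\int_{S}|x|^{-p-1}|f|\,|\nabla_{\\!S}f|\leqslant{\textstyle\frac{n-p-2}{2}}\int_{S}|x|^{-p-2}f^{2}+{\textstyle\frac{2}{n-p-2}}\int_{S}|x|^{-p}|\nabla_{\\!S}f|^{2},$
which is (4) with $C=4/(n-p-2)^{2}$.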
7.4 is now proved by taking $f=\zeta/\psi$ and $p=2|\gamma_{1}|\,(<n-2)$, and
using (2), (3) and (4). $\Box$
As a corollary, any hypersurface sufficiently close to $S$ in the
appropriately scaled $C^{2}$ sense must also be strictly stable:
7.5 Corollary. For each $\theta\in(0,{\textstyle\frac{1}{2}}]$, there is
$\delta=\delta(\varmathbb{C}_{0},\theta)>0$ such that if $v\in C^{2}(S)$,
$M_{0}=\\{x+v(x)\nu_{S}(x):x\in S\\}\text{ and
}|x||\nabla^{2}_{S}v|+|\nabla_{S}v|+|x|^{-1}|v|\leqslant\delta\,\,\forall x\in
S$, then $M_{0}$ satisfies the inequality
$\lambda(1-\theta)\int_{M_{0}}|x|^{-2}\zeta^{2}\,d\mu\leqslant\int_{M_{0}}\bigl{(}\bigl{|}\nabla_{M_{0}}\zeta\bigr{|}^{2}-|A_{M_{0}}|^{2}\zeta^{2}\bigr{)}\,d\mu,\quad\zeta\in
C_{c}^{1}(\varmathbb{R}^{n+1}),$
where $|A_{M_{0}}|$ is the length of the second fundamental form of $M_{0}$,
and $\lambda$ is the constant of 7.4.
Proof: By 7.4, with $\tilde{\zeta}(x)=\zeta(x+v(x)\nu_{S}(x))$ for $x\in S$,
(1)
$\lambda\int_{S}|x|^{-2}\tilde{\zeta}^{2}\,d\mu\leqslant\int_{S}\bigl{(}\bigl{|}\nabla_{S}\tilde{\zeta}\bigr{|}^{2}-|A_{S}|^{2}\tilde{\zeta}^{2}\bigr{)}\,d\mu$
and for any $C^{1}$ function $f$ with compact support on $M_{0}$, with
$\tilde{f}(x)=f(x+v(x)\nu_{S}(x))$ for $x\in S$,
(2) $\left\\{\hskip 2.0pt\begin{aligned}
&{\textstyle\int}_{S}\tilde{f}\,d\mu={\textstyle\int}_{M_{0}}fJ\,d\mu\text{
with }|J-1|\leqslant C\delta\text{ (by the area formula)},\\\
&|\nabla_{S}\tilde{f}(x)-(\nabla_{M_{0}}f)(x+v(x)\nu_{S}(x))|\leqslant
C\delta|\nabla_{S}\tilde{f}(x)|,\\\
&|A_{S}(x)-A_{M_{0}}(x+v(x)\nu_{S}(x))|<C|x|^{-1}\delta,\,\,\,C^{-1}|x|^{-1}\leqslant|A_{S}(x)|\leqslant
C|x|^{-1},\end{aligned}\right.$
where $C=C(S)$. By combining (1) and (2) we then have the required inequality
with $\theta=C\delta$. $\Box$
Next we need a uniqueness theorem for stationary integer multiplicity
varifolds with support contained in $\overline{U}_{+}$.
7.6 Lemma. If $M_{0}$ is a stationary integer multiplicity $n$-dimensional
varifold in $\varmathbb{R}^{n+1}$ with the properties that ${\rm
spt\,}M_{0}\subset\overline{U}_{+}$, ${\rm
spt\,}M_{0}\neq\overline{\varmathbb{C}}_{0}$, and
$\sup_{R>1}R^{-n}\mu(M_{0}\cap B_{R})<2\mu(\varmathbb{C}_{0}\cap B_{1})$, then
$M_{0}=\lambda S$ (with multiplicity $1$) for some $\lambda>0$, where $S$ is
as in 7.1.
Proof of 7.6: Let $C(M_{0})$ be a tangent cone of $M_{0}$ at $\infty$. Then by
the Allard compactness theorem $C(M_{0})$ is a stationary integer multiplicity
varifold with $\mu_{C(M_{0})}(B_{1})<2\mu(\varmathbb{C}_{0}\cap B_{1})$ and
${\rm spt\,}C(M_{0})\subset\overline{U}_{+}$. If ${\rm spt\,}C(M_{0})$
contains a ray of $\varmathbb{C}_{0}$ then, by the Solomon-White maximum
principle [SW89], either $C(M_{0})=\varmathbb{C}_{0}$ (with multiplicity one)
or else $C(M_{0})=\varmathbb{C}_{0}+V_{1}$, where $V_{1}$ is a non-zero
integer multiplicity cone with $\mu(V_{1}\cap
B_{1}(0))<\mu(\varmathbb{C}_{0}\cap B_{1})$ and support contained in
$\overline{U}_{+}$. On the other hand if ${\rm
spt\,}C(M_{0})\cap\varmathbb{C}_{0}=\emptyset$ then a rotation of ${\rm
spt\,}C(M_{0})$ has a ray in common with $\varmathbb{C}_{0}$ so by the same
argument applied to this rotation we still conclude that there is a stationary
cone $V_{1}$ with $\mu_{V_{1}}(B_{1})<\mu(\varmathbb{C}_{0}\cap B_{1})$ and
${\rm spt\,}V_{1}\subset\overline{U}_{+}$. But now by applying exactly the
same reasoning to $V_{1}$ we infer
$\mu_{V_{1}}(B_{1})\geqslant\mu(\varmathbb{C}_{0}\cap B_{1})$, a
contradiction. So $C(M_{0})=\varmathbb{C}_{0}$ and hence $\varmathbb{C}_{0}$,
with multiplicity one, is the unique tangent cone of $M_{0}$ at infinity.
Hence there is an $R_{0}>1$ and a $C^{2}(\varmathbb{C}_{0}\setminus B_{R_{0}})$
function $h$ with
(1) $\displaystyle\hskip 50.58878pt\text{$\sup_{x\in\varmathbb{C}_{0}\setminus
B_{R}}(|Dh(x)|+|x|^{-1}h(x))\to 0$ as $R\to\infty$},$ (2)
$\displaystyle\\{x+h(x)\nu_{\varmathbb{C}_{0}}(x):x\in\varmathbb{C}_{0}\setminus
B_{R_{0}}\\}\subset M_{0}$ $\displaystyle\hskip 72.26999pt\text{ and
}M_{0}\setminus\\{x+h(x)\nu_{\varmathbb{C}_{0}}(x):x\in\varmathbb{C}_{0}\setminus
B_{R_{0}}\\}\text{ is compact.}$
We also claim
(3) $0\notin M_{0}.$
Indeed otherwise the Ilmanen maximum principle [Ilm96] would give
$M_{0}\cap\varmathbb{C}_{0}\neq\emptyset$ and the above argument using the
Solomon-White maximum principle can be repeated, giving
$M_{0}=\varmathbb{C}_{0}$, contrary to the hypothesis that ${\rm
spt\,}M_{0}\neq\overline{\varmathbb{C}}_{0}$.
Observe next that if $r_{k}\to\infty$ the scaled minimal hypersurfaces
$r_{k}^{-1}M_{0}$ are represented by the graphs (taken off
$\varmathbb{C}_{0}$) of the functions $r_{k}^{-1}h(r_{k}x)\to 0$, and hence,
for any given $\omega_{0}\in\Sigma\,(=\varmathbb{C}_{0}\cap\partial B_{1})$,
the rescalings $(h(r_{k}\omega_{0}))^{-1}h(r_{k}r\omega)$ (which are bounded
above and below by positive constants on any given compact subset of
$\varmathbb{C}_{0}$ by (1) and the Harnack inequality) generate positive
solutions of the Jacobi equation $\mathcal{L}_{\varmathbb{C}_{0}}v=0$ on
$\varmathbb{C}_{0}$ as $k\to\infty$. But, by 3.1 and 3.2,
$(c_{1}r^{\gamma_{1}}+c_{2}r^{\gamma_{1}^{-}})\,\varphi_{1}(\omega)$ with
$c_{1},c_{2}\geqslant 0$ and $c_{1}+c_{2}>0$ are the only positive solutions
of $\mathcal{L}_{\varmathbb{C}_{0}}(\varphi)=0$ on all of $\varmathbb{C}_{0}$.
Thus in view of the arbitrariness of the sequence $r_{k}$, we have shown that
(4) $h(r\omega)=c(r)\varphi_{1}(\omega)+o(c(r))\text{ as $r\to\infty$,
uniformly for $\omega\in\Sigma$},$
and hence there are
$c_{-}(r)<c(r)<c_{+}(r)\text{ with
}c_{-}(r)\varphi_{1}(\omega)<h(r\omega)<c_{+}(r)\varphi_{1}(\omega)\text{ and
}c_{+}(r)/c_{-}(r)\to 1.$
Now, for suitable $R_{0}>0$, $S\setminus B_{R_{0}}$ ($S$ as in 7.1) also has a
representation of this form with some $\tilde{h}$ in place of $h$, where
$\tilde{h}(r\omega)=\kappa
r^{\gamma_{1}}\varphi_{1}(\omega)+o(r^{\gamma_{1}})\text{ as $r\to\infty$,
uniformly in $\omega$}$
and, similar to the choice of $c_{\pm}$, we can take $\tilde{c}_{\pm}(r)$ such
that
$\tilde{c}_{-}(r)\varphi_{1}(\omega)<\tilde{h}(r\omega)<\tilde{c}_{+}(r)\varphi_{1}(\omega)\text{
and }\tilde{c}_{+}(r)/\tilde{c}_{-}(r)\to 1\text{ as }r\to\infty.$
Now $\lambda S\setminus B_{\lambda R_{0}}$ can be represented by the
geometrically scaled function $\tilde{h}_{\lambda}$ with
(5)
$\tilde{h}_{\lambda}(r\omega)=\kappa\lambda^{1+|\gamma_{1}|}r^{-|\gamma_{1}|}\varphi_{1}+o(\kappa\lambda^{1+|\gamma_{1}|}r^{-|\gamma_{1}|}\varphi_{1})$
and we let $\lambda_{k}^{-}$ be the largest value of $\lambda$ such that
$\tilde{h}_{\lambda}(r_{k}\omega)\leqslant c_{-}(r_{k})\varphi_{1}(\omega)$
and $\lambda_{k}^{+}$ the smallest value of $\lambda$ such that
$\tilde{h}_{\lambda}(r_{k}\omega)\geqslant c_{+}(r_{k})\varphi_{1}(\omega)$.
Evidently there are then points $\omega_{\pm}\in\Sigma$ with
$\tilde{h}_{\lambda_{k}^{\pm}}(r_{k}\omega_{\pm})=c_{\pm}(r_{k})\varphi_{1}(\omega_{\pm})$
respectively. Also $M_{0}\cap\breve{B}_{r_{k}}$ must entirely lie in the
component $B_{r_{k}}\setminus\lambda_{k}^{+}S$ which contains
$\varmathbb{C}_{0}\cap B_{r_{k}}$; otherwise we could take the smallest
$\lambda>\lambda_{k}^{+}$ such that $M_{0}\cap B_{r_{k}}$ lies in the
closure of that component of $B_{r_{k}}\setminus\lambda S$, and
$M_{0}\cap\breve{B}_{r_{k}}\cap\lambda S\neq\emptyset$, which contradicts the
maximum principle [SW89]. Similarly, $M_{0}\cap\breve{B}_{r_{k}}$ must
entirely lie in the component $B_{r_{k}}\setminus\lambda_{k}^{-}S$ which does
not contain $\varmathbb{C}_{0}\cap B_{r_{k}}$.
Thus $M_{0}\cap B_{r_{k}}$ lies between $\lambda_{k}^{+}S$ and
$\lambda_{k}^{-}S$ and by construction $\lambda_{k}^{+}/\lambda_{k}^{-}\to 1$,
and since $\lambda_{k}^{-}$ is bounded above, we then have a subsequence of
$k$ (still denoted $k$) such that $\lambda_{k}^{\pm}$ have a common (positive)
limit $\lambda$. So ${\rm spt\,}M_{0}\subset\lambda S$ and hence
$M_{0}=\lambda S$ with multiplicity $1$ by the constancy theorem and the fact
that $\sup_{R>1}R^{-n}\mu(M_{0}\cap B_{R})<2\mu(\varmathbb{C}_{0}\cap B_{1})$.
$\Box$
Finally we need to show that any $M$ satisfying the hypotheses of Corollary 1
with sufficiently small $\varepsilon_{0}$ must be strictly stable; then (as
mentioned at the beginning of this section) Corollary 1 follows from Theorem
1.
7.7 Lemma. For each $\alpha,\theta\in(0,1)$, there is
$\varepsilon_{0}=\varepsilon_{0}(\varmathbb{C},\alpha,\theta)\in(0,{\textstyle\frac{1}{2}}]$
such that if $M\subset U_{+}\times\varmathbb{R}^{\ell}$,
${\sup}_{R>1}R^{-n-\ell}\mu(M\cap
B_{R})\leqslant(2-\alpha)\mu(\varmathbb{C}\cap B_{1})$, and
${\sup}_{M}|\nu_{y}|<\varepsilon_{0}$, then $M$ is strictly stable in the
sense that
$(1-\theta)\lambda\int_{M}|x|^{-2}\zeta^{2}\,d\mu\leqslant\int_{M}\bigl{(}\bigl{|}\nabla_{M}\zeta\bigr{|}^{2}-|A_{M}|^{2}\zeta^{2}\bigr{)}\,d\mu,\quad\zeta\in
C_{c}^{1}(\varmathbb{R}^{n+1+\ell}),$
with $\lambda=\lambda(\varmathbb{C}_{0})>0$ as in 7.
Proof: For $y\in\varmathbb{R}^{\ell}$, define
(1) $M_{y}=\lambda_{y}^{-1}(M-(0,y)),\quad\lambda_{y}={\rm
dist\,}(M-(0,y),0).$
We claim that for each given $\delta>0$ the hypotheses of the lemma guarantee
that a strip of $M$ near a given slice $M_{z}=\\{(x,y)\in M:y=z\\}$ can be
scaled so that it is $C^{2}$ close to $S\times B^{\ell}_{1}$ ($S$ as in 7.2)
in the appropriate sense; more precisely, we claim that for each $\delta>0$
there is
$\varepsilon_{0}=\varepsilon_{0}(\varmathbb{C},\alpha,\theta,\delta)>0$ such
that the hypotheses of the lemma imply that for each
$z\in\varmathbb{R}^{\ell}$ there is $v_{z}\in C^{2}(\\{(x,y):x\in
S,\,|y|<1\\})$ such that
(2) $\left\\{\begin{aligned}
&M_{z}\cap\\{(x,y):|y|<1\\}=\bigl{\\{}(x+v_{z}(x,y)\nu_{S}(x),y):x\in
S,\,|y|<1\bigr{\\}}\\\
&\,|x||\nabla^{2}_{S\times\varmathbb{R}^{\ell}}v_{z}(x,y)|+|\nabla_{S\times\varmathbb{R}^{\ell}}v_{z}|(x,y)+|x|^{-1}|v_{z}(x,y)|\leqslant\delta\,\,\forall
x\in S,|y|<1.\end{aligned}\right.$
Otherwise this fails with $\varepsilon_{0}=1/k$ for each $k=1,2,\ldots$, so
there are minimal submanifolds $M_{k}$ such that the hypotheses hold with
$\varepsilon_{0}=1/k$ and with $M_{k}$ in place of $M$, yet there are points
$z_{k}\in M_{k}$ such that (2) fails with $z_{k},M_{k}$ in place of $z,M$.
We claim first that then there are fixed
$k_{0}=k_{0}(\delta)\in\\{1,2,\ldots\\},\,R_{0}=R_{0}(\delta)>1$ such that
(3) ${\rm
dist\,}((x,y),M_{z_{k}})<\delta|x|\,\,\forall(x,y)\in\varmathbb{C}\setminus
B_{R_{0}}^{n+1}\text{ with }|x|\geqslant|y|,\,\,\,k\geqslant k_{0}.$
Otherwise there would be a subsequence of $k$ (still denoted $k$) with
${\rm dist\,}((x_{k},y_{k}),M_{z_{k}})\geqslant\delta|x_{k}|$ with
$(x_{k},y_{k})\in\varmathbb{C}$, $|y_{k}|\leqslant|x_{k}|$ and
$|x_{k}|\to\infty$.
Then let
$\,\,\widetilde{\hskip-2.0ptM}_{k}=|x_{k}|^{-1}M_{z_{k}}.$
By the Allard compactness theorem, $\,\,\widetilde{\hskip-2.0ptM}_{k}$
converges in the varifold sense to a stationary integer multiplicity varifold
$V$ with support $M$ and density $\Theta$, where $M$ is a closed rectifiable
set, $\Theta$ is upper semi-continuous on $\varmathbb{R}^{n+1+\ell}$ and has
integer values $\mu$-a.e. on $M$, with $\Theta=0$ on
$\varmathbb{R}^{n+1+\ell}\setminus M$, and
(4) $M=\\{x:\Theta(x)\geqslant 1\\},$ (5) $\Theta(x)=\lim_{\rho\downarrow
0}\bigl{(}\omega_{n+\ell}\rho^{n+\ell}\bigr{)}^{-1}\int_{M\cap
B_{\rho}(x)}\Theta(\xi)\,d\mu(\xi)\,\,\,\forall x\in M,$
where $\omega_{n+\ell}$ is the volume of the unit ball in
$\varmathbb{R}^{n+\ell}$, $0\in
M\subset\overline{U}_{+}\times\varmathbb{R}^{\ell}$, and $M\cap
U_{+}\times\varmathbb{R}^{\ell}\neq\emptyset$.
Taking $x_{n+1+j}=y_{j}$, so points in $\varmathbb{R}^{n+1+\ell}$ are written
$x=(x_{1},\ldots,x_{n+1+\ell})$, and letting
$\nu_{k}=(\nu_{k\,1},\ldots,\nu_{k\,n+1+\ell})$ be a unit normal for
$\,\,\widetilde{\hskip-2.0ptM}_{k}$ (so that the orthogonal projection of
$\varmathbb{R}^{n+1+\ell}$ onto $T_{x}\,\,\widetilde{\hskip-2.0ptM}_{k}$ has
matrix $(\delta_{ij}-\nu_{k\,i}\nu_{k\,j})$), the first variation formula for
$\,\,\widetilde{\hskip-2.0ptM}_{k}$ can be written
(6)
$\int_{\,\,\widetilde{\hskip-2.0ptM}_{k}}\sum_{i,j=1}^{n+1+\ell}(\delta_{ij}-\nu_{k\,i}\nu_{k\,j})D_{i}X_{j}\,d\mu=0,\,\,\,X_{j}\in
C^{1}_{c}(\varmathbb{R}^{n+1+\ell}),\,\,j=1,\ldots,n+1+\ell.$
Let $x_{0}\in M,\,\sigma>0$, and let $\tau\in
C^{\infty}(\varmathbb{R}^{n+1+\ell})$ with support $\tau\subset
B_{\sigma}(x_{0})$, and consider the function $T$ defined by
$T(x)=T(x_{1},\ldots,x_{n+1+\ell})=\int_{-\infty}^{x_{n+1+\ell}}\bigl{(}\tau(x_{1},\ldots,x_{n+\ell},t)-\tau(x_{1},\ldots,x_{n+\ell},t+2\sigma)\bigr{)}\,dt.$
Evidently $T\in C^{\infty}_{c}(\varmathbb{R}^{n+1+\ell})$ (indeed $T(x)=0$ on
$\varmathbb{R}^{n+1+\ell}\setminus K$, where $K$ is the cube
$\\{x\in\varmathbb{R}^{n+1+\ell}:|x_{i}-x_{0\,i}|<4\sigma\,\,\forall
i=1,\ldots,n+1+\ell\\}$). Therefore we can use (6) with
$X(x)=T(x)e_{n+1+\ell}$, giving
$\int_{\,\,\widetilde{\hskip-2.0ptM}_{k}}\bigl{(}1-\nu_{k\,n+1+\ell}^{2}\bigr{)}\bigl{(}\tau(x)-\tau(x+2\sigma
e_{n+1+\ell})\bigr{)}\,d\mu=\int_{\,\,\widetilde{\hskip-2.0ptM}_{k}}\sum_{i=1}^{n+\ell}\nu_{k\,n+1+\ell}\nu_{k\,i}D_{i}T\,d\mu,$
and hence
$\Bigl{|}\int_{\,\,\widetilde{\hskip-2.0ptM}_{k}}\tau(x)\,d\mu-\int_{\,\,\widetilde{\hskip-2.0ptM}_{k}}\tau(x+2\sigma
e_{n+1+\ell})\,d\mu\Bigr{|}\leqslant C/k.$
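(To see this, write $x^{\prime}=(x_{1},\ldots,x_{n+\ell})$; then the definition of $T$ gives
$D_{n+1+\ell}T(x)=\tau(x^{\prime},x_{n+1+\ell})-\tau(x^{\prime},x_{n+1+\ell}+2\sigma)=\tau(x)-\tau(x+2\sigma e_{n+1+\ell}),$
while the remaining derivatives $D_{i}T$, $i\leqslant n+\ell$, are bounded independently of $k$ and supported in $K$; since $|\nu_{k\,i}\nu_{k\,n+1+\ell}|\leqslant|\nu_{k\,n+1+\ell}|<1/k$ on $\,\,\widetilde{\hskip-2.0ptM}_{k}$ and the masses $\mu(\,\,\widetilde{\hskip-2.0ptM}_{k}\cap K)$ are bounded independently of $k$, the stated bound $C/k$ follows.)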
Using the fact that varifold convergence of
$\,\,\widetilde{\hskip-2.0ptM}_{k}$ implies convergence of the corresponding
mass measures $\mu\hbox{ {\vrule height=6.25963pt}{\leaders\hrule\hskip
5.69046pt} }\,\,\widetilde{\hskip-2.0ptM}_{k}$ to $\mu\hbox{ {\vrule
height=6.25963pt}{\leaders\hrule\hskip 5.69046pt} }\Theta$, we then have
(7) $\int_{M}\tau(x)\,\Theta(x)d\mu(x)=\int_{M}\tau(x+2\sigma
e_{n+1+\ell})\,\Theta(x)d\mu(x).$
Taking $\rho\in(0,\sigma)$ and replacing $\tau$ by a sequence $\tau_{k}$ with
$\tau_{k}\downarrow\chi_{B_{\rho}(x_{0})}$ (the indicator function of the
closed ball $\smash{B_{\rho}(x_{0})}$), we then conclude
$\int_{M\cap B_{\rho}(x_{0})}\,\Theta\,d\mu=\int_{M\cap B_{\rho}(x_{0}+2\sigma
e_{n+1+\ell})}\ \Theta\,d\mu$
and hence by (5)
$\Theta(x_{0})=\Theta(x_{0}+2\sigma e_{n+1+\ell}).$
In view of the arbitrariness of $\sigma$ this shows that $\Theta(x)$ is
independent of the variable $x_{n+1+\ell}$, and the same argument shows that
$\Theta(x)$ is also independent of $x_{n+1+j}$, $j=1,\ldots,\ell-1$. Thus, by
(4), $M$ is cylindrical: $M=M_{0}\times\varmathbb{R}^{\ell}$, where
$R^{-n}\int_{M_{0}\cap B_{R}}\Theta\,d\mu<2\mu(\varmathbb{C}_{0}\cap B_{1})$,
$0\in M_{0}\subset\overline{U}_{+}$, and $M_{0}\cap U_{+}\neq\emptyset$. Hence
$M_{0}=\lambda S$ for some $\lambda>0$ by virtue of Lemma 7, contradicting the
fact that $0\in M_{0}$. So (3) is proved, and (3) together with the Allard
regularity theorem implies that
$M_{z_{k}}\cap\bigl{(}\bigl{\\{}(x,y)\in\varmathbb{R}^{n+1+\ell}\setminus
B_{2R_{0}}:|y|<{\textstyle\frac{1}{2}}|x|\bigr{\\}}\bigr{)}$ is $C^{2}$ close
to $\varmathbb{C}$, in the sense that there are $C^{2}$ functions $v_{k}$
on a domain in $\varmathbb{C}$ with
(8) $\displaystyle{\rm
graph\,}v_{k}=M_{z_{k}}\cap\bigl{(}\bigl{\\{}(x,y)\in\varmathbb{R}^{n+1+\ell}\setminus
B_{2R_{0}}:|y|<{\textstyle\frac{1}{2}}|x|\bigr{\\}}\bigr{)}$
$\displaystyle\hskip 122.85876pt\text{ and
}|x|^{-1}|v_{k}|+|\nabla_{\varmathbb{C}}v_{k}|+|x||\nabla^{2}_{\varmathbb{C}}v_{k}|<C\delta.$
Next, exactly the same compactness discussion can be applied with $M_{z_{k}}$
in place of $\,\,\widetilde{\hskip-2.0ptM}_{k}$, giving a cylindrical varifold
limit $M=M_{0}\times\varmathbb{R}^{\ell}$, $M_{0}$ a closed subset of
$\overline{U}_{+}$ with density $\Theta\geqslant 1$, but this time ${\rm
dist\,}(M_{0},0)=1$. Hence $\Theta\equiv 1$ and $M_{0}=S$ by 7. But then the
Allard regularity theorem guarantees that the convergence of $M_{z_{k}}$ to
$M$ is smooth and hence, by virtue of (8), (2) holds with $M_{z_{k}}$ in place
of $M$, a contradiction. Thus (2) is proved.
Then by Corollary 7, for small enough $\delta=\delta(\varmathbb{C},\theta)$,
for each $y\in\varmathbb{R}^{\ell}$
$(1-\theta/2)\lambda\int_{M_{y}}|x|^{-2}\zeta_{y}^{2}(x)\,d\mu(x)\leqslant\int_{M_{y}}\bigl{(}\bigl{|}\nabla_{M_{y}}\zeta_{y}(x)\bigr{|}^{2}-|A_{M_{y}}|^{2}\zeta_{y}^{2}(x)\bigr{)}\,d\mu(x),$
$\zeta\in C_{c}^{1}(\varmathbb{R}^{n+1+\ell})$, where
$M_{y}=\\{x\in\varmathbb{R}^{n+1}:(x,y)\in M\\}$, $\zeta_{y}(x)=\zeta(x,y)$,
and $|A_{M_{y}}|$ is the length of the second fundamental form of $M_{y}$.
Since
$||A_{M_{y}}|^{2}(x)-|A_{M}|^{2}(x,y)|\leqslant
C\delta/|x|^{2},\,\,\,|A_{M_{y}}|^{2}(x)\leqslant C/|x|^{2}$
by (2), the proof is completed by taking $\delta$ small enough, integrating
with respect to $y\in\varmathbb{R}^{\ell}$, and using the coarea formula
together with (2). $\Box$
## 8\. A Boundary Version of Theorem 1 and Corollary 1
Since it will be needed in [Sim21b], we here also want to discuss a version of
Theorem 1 which is valid in case $M$ has a boundary.
8.1 Theorem. Suppose $M\subset\varmathbb{R}^{n+1}\times\\{y:y_{\ell}\geqslant
0\\}$ is a complete embedded minimal hypersurface-with-boundary, with
$({\ddagger})$ $\left\\{\hskip 2.0pt\begin{aligned} &\partial
M=S\times\\{y:y_{\ell}=0\\},\\\ \vskip-1.0pt\cr&|\nu_{y_{\ell}}|<1\text{ on
}S,\,\,\\\ \vskip-1.0pt\cr&{\sup}_{R>1}R^{-n-\ell}\mu(M\cap
B_{R})<2\mu(\varmathbb{C}\cap\bigl{\\{}(x,y)\in
B_{1}:y_{\ell}>0\bigr{\\}}),\text{ and }\\\ \vskip-1.0pt\cr&M\subset
U_{\lambda}\times\\{y:y_{\ell}\geqslant 0\\}\end{aligned}\right.$
for some $\lambda\geqslant 1$, where $U_{\lambda}$ denotes the component of
$U_{+}\setminus\lambda S$ with $\partial
U_{\lambda}=\overline{\varmathbb}{C}_{0}\cup\lambda S$.
Suppose also that $M$ is strictly stable in the sense that there is $\lambda>0$
such that the inequality 1.8 holds for all $\zeta\in
C^{1}_{c}(M\cap\breve{B}_{R})$ with $e_{n+1+\ell}\cdot\nabla_{M}\zeta=0$ on
$\partial M$ and for all $R>0$.
Then
$M=S\times\\{y:y_{\ell}\geqslant 0\\}.$
Proof: Since $M\subset U_{\lambda}$, the Allard compactness theorem, applied
to the rescalings $\tau M,\,\tau\downarrow 0$, plus the constancy theorem,
tells us that $M$ has a tangent cone at $\infty$ which is
$\varmathbb{C}_{0}\times\\{y:y_{\ell}\geqslant 0\\}$ with some integer
multiplicity $k\geqslant 1$, and then the condition
$\sup_{R>1}R^{-n-\ell}\mu(M\cap B_{R})<2\mu(\varmathbb{C}\cap\bigl{\\{}(x,y)\in
B_{1}:y_{\ell}>0\bigr{\\}})$ implies $k=1$. Thus $M$ has
$\varmathbb{C}_{0}\times\\{y:y_{\ell}\geqslant 0\\}$ with multiplicity 1 as
its unique tangent cone at $\infty$.
We claim that $\nu_{y_{\ell}}$ satisfies the free boundary condition
(1) $e_{n+1+\ell}\cdot\nabla_{M}\nu_{y_{\ell}}=0\text{ at each point of
$\partial M$. }$
Indeed if $\Sigma\subset\varmathbb{R}^{N}\times[0,\infty)$ is any minimal
hypersurface-with-boundary with smooth unit normal
$\nu=(\nu_{1},\ldots,\nu_{N+1})$ and $|\nu_{N+1}|\neq 1$ on $\partial\Sigma$
(i.e. $\Sigma$ intersects the hyperplane $x_{N+1}=0$ transversely), and if
$\partial\Sigma$ is a minimal hypersurface in
$\varmathbb{R}^{N}\times\\{0\\}$, then
$e_{N+1}\cdot\nabla_{\Sigma}\nu_{N+1}=0$ on $\partial\Sigma$, as one readily
checks by using the fact that the mean curvature (i.e. trace of second
fundamental form) of $\Sigma$ and $\partial\Sigma$ are both zero at each point
of $\partial\Sigma$.
We claim $\nu_{y_{\ell}}(x,y)=0\,\,\forall(x,y)\in M$. To check this, first
observe that there is a version of 4 which is valid in the half-space
$y_{\ell}\geqslant 0$ in the case when $w$ has free boundary condition
$e_{n+1+\ell}\cdot\nabla_{M}w=0$ on $\partial M$; indeed the proof of 4 goes
through with little change—the linear solutions $v$ of
$\mathcal{L}_{\varmathbb{C}}v=0$ obtained in the proof being defined on the
half-space $\varmathbb{C}_{0}\times\\{y:y_{\ell}\geqslant 0\\}$ and having the
free boundary condition $e_{n+1+\ell}\cdot\nabla_{\varmathbb{C}}v=0$. So $v$
extends to a solution of $\mathcal{L}_{\varmathbb{C}}v=0$ on all of
$\varmathbb{C}$ by even reflection and the rest of the argument is unchanged.
In particular, since $\nu_{y_{\ell}}$ has free boundary condition $0$ by (1),
we have an analogue of Lemma 4 in the half-space, giving
(2) $R^{-\alpha}\leqslant C\hskip
1.0pt\text{\small$-$}\hskip-10.6pt\int_{M\cap B^{+}_{R}}\nu_{y_{\ell}}^{2}$
for each $\alpha\in(0,1)$, where
$B^{+}_{R}=B_{R}\cap\\{(x,y):y_{\ell}\geqslant 0\\}$, and, since
$\nu_{y_{\ell}}\leqslant 1$ there is a bound
$\text{\scriptsize$-$}\hskip-6.85pt{\textstyle\int}_{M\cap
B^{+}_{R}}\nu_{y_{\ell}}^{2}\leqslant CR^{n-2-\beta}$, so by Corollary 4
(3) $\hskip 1.0pt\text{\small$-$}\hskip-10.6pt\int_{M\cap
B^{+}_{R}}\nu_{y_{\ell}}^{2}\leqslant
CR^{-\ell-2-\beta_{1}}\int_{B^{+}_{R/2}\setminus\\{(x,y):|x|<{\scriptsize\frac{1}{3}}R\\}}\nu_{y_{\ell}}^{2},$
where $C=C(\varmathbb{C}_{0},\alpha)$. Since $M\subset
U_{\lambda}\times[0,\infty)$ also we have $d(x)\leqslant
C\lambda\min\\{1,\,|x|^{\gamma_{1}}\\}$ for all $x\in\varmathbb{R}^{n}$ and
hence
(4) $\hskip 1.0pt\text{\small$-$}\hskip-10.6pt\int_{M\cap
B^{+}_{R}}d^{2}\leqslant C\lambda^{2},\quad C=C(\varmathbb{C}_{0}).$
(Notice that here we do not need growth estimates for
$\text{\scriptsize$-$}\hskip-6.85pt{\textstyle\int}_{B_{R}}d^{2}$ as in §5
because we are now assuming $d\leqslant
C\lambda\min\\{1,\,|x|^{\gamma_{1}}\\}$.) Then the proof that $\nu_{y_{\ell}}$
must be identically zero is completed using (2), (3), and (4) analogously to
the proof of the non-boundary version of Theorem 1.
So $\nu_{y_{\ell}}$ is identically zero on $M$. This completes the proof in
the case $\ell=1$ and shows that, for $\ell\geqslant 2$, by even reflection
$M$ extends to a minimal submanifold $\,\,\widetilde{\hskip-2.0ptM}$ (without
boundary)
$\,\,\widetilde{\hskip-2.0ptM}=M\cup\bigl{\\{}(x,y_{1},\ldots,y_{\ell-1},-y_{\ell}):(x,y)\in
M\bigr{\\}}\subset U_{+}\times\varmathbb{R}^{\ell},$
which is translation invariant by translations in the direction of
$e_{n+1+\ell}$. Then $\,\,\widetilde{\hskip-2.0ptM}$ is strictly stable and
Theorem 1 applies, giving $M=S\times\varmathbb{R}^{\ell-1}\times[0,\infty)$,
as claimed. $\Box$
8.2 Corollary. There is $\delta=\delta(\varmathbb{C},\lambda)>0$ such that if
$M$ satisfies 8 $({\ddagger})$ and $\sup|\nu_{y}|<\delta$, then $M$
automatically satisfies the strict stability hypothesis in 8, and hence
$M=S\times\\{y:y_{\ell}\geqslant 0\\}$.
Proof: With $\,\,\widetilde{\hskip-2.0ptM}_{k}$, $M_{z_{k}}$ as in the proof
of 7, with only minor modifications to the present situation when $\partial
M=S\times\\{0\\}$, we see that $\,\,\widetilde{\hskip-2.0ptM}_{k}$ and $M_{z_{k}}$
both have cylindrical limits $S\times\varmathbb{R}^{\ell}$ or
$S\times\\{y\in\varmathbb{R}^{\ell}:y_{\ell}\geqslant K\\}$ for suitable $K$,
and so (using the Allard boundary regularity theorem in the latter case) the
$y=$ const. slices of $M$ have the $C^{2}$ approximation property 7(2). Hence,
by integration over the slices as in 7, $M$ is strictly stable in the sense
that 1.8 holds for all $\zeta\in C^{1}_{c}(\breve{B}_{R}^{+})$, where
$\breve{B}_{R}^{+}=\\{(x,y):|(x,y)|<R\text{ and }y_{\ell}\geqslant 0\\}$.
$\Box$
## Appendix: Analyticity of $\beta$-harmonic functions
For $\rho>0$, $r_{0}\geqslant 0$ and $y_{0}\in\varmathbb{R}^{\ell}$, let
$\displaystyle
B_{\rho}^{+}(r_{0},y_{0})=\\{(r,y)\in\varmathbb{R}\times\varmathbb{R}^{\ell}:r\geqslant
0,\,\,(r-r_{0})^{2}+|y-y_{0}|^{2}\leqslant\rho^{2}\\},$
$\displaystyle\breve{B}^{+}_{\rho}(r_{0},y_{0})=\\{(r,y)\in\varmathbb{R}\times\varmathbb{R}^{\ell}:r>0,\,\,(r-r_{0})^{2}+|y-y_{0}|^{2}<\rho^{2}\\},$
and $B^{+}_{\rho},\,\breve{B}^{+}_{\rho}$ will be used as abbreviations for
$B^{+}_{\rho}(0,0),\,\breve{B}^{+}_{\rho}(0,0)$ respectively.
Our aim is to prove real-analytic extension across $r=0$ of
$\beta$-harmonic functions, i.e. solutions $u\in
C^{\infty}(\breve{B}_{1}^{+})$ of
A.1 $r^{-\gamma}\frac{\partial}{\partial r}\bigl{(}r^{\gamma}\frac{\partial
u}{\partial r}\bigr{)}+\Delta_{y}u=0,$
where $\gamma=1+\beta\,\,(\,>1)$ and
$\Delta_{y}u={\textstyle\sum}_{j=1}^{\ell}D_{y_{j}}D_{y_{j}}u$, assuming
A.2
$\int_{B^{+}_{\rho}}(u_{r}^{2}+|u_{y}|^{2})\,\,r^{\gamma\\!}drdy<\infty\,\,\,\forall\,\rho<1.$
In fact we show under these conditions that $u$ can be written as a convergent
series of homogeneous $\beta$-harmonic polynomials in $\\{(r,y):r\geqslant
0,\,r^{2}+|y|^{2}<1\\}$ with the convergence uniform in $B_{\rho}^{+}$ for
each $\rho<1$.
Of course all solutions of A.1 satisfying A.2 are automatically real-analytic
in $\breve{B}^{+}_{1}$ because the equation is uniformly elliptic with real-analytic coefficients in each closed ball $\subset\breve{B}^{+}_{1}$. Also, if
$\gamma$ is an integer $\geqslant 1$ then the operator in A.1 is just the
Laplacian in $\varmathbb{R}^{1+\gamma+\ell}$, at least as it applies to
functions $u=u(x,y)$ on $\varmathbb{R}^{1+\gamma+\ell}$ which can be expressed
as a function of $r=|x|$ and $y$, so in this case smoothness across $r=0$ is
Weyl’s lemma and analyticity is then standard.
To handle the case of general $\gamma>1$, first note that by virtue of the
calculus inequality $\int_{0}^{a}r^{\gamma-2}f^{2}\,dr\leqslant
C\int_{0}^{a}r^{\gamma}(f^{\prime}(r))^{2}\,dr$ for any $f\in C^{1}(0,a]$ with
$f(a)=0$ (which is proved simply by using the identity
$\int_{0}^{a}\bigl{(}r^{\gamma-1}(\min\\{|f|,k\\})^{2}\bigr{)}^{\prime}\,dr=0$
and then using the Cauchy-Schwarz inequality and letting $k\to\infty$), we have
A.3 $\int_{B^{+}_{\rho}}r^{-2}\zeta^{2}\,d\mu_{+}\leqslant
C\int_{B^{+}_{\rho}}\bigl{(}D_{r}\zeta\bigr{)}^{2}\,d\mu_{+},\quad
C=C(\gamma)$
for any $\rho\in[{\textstyle\frac{1}{2}},1)$ and any Lipschitz $\zeta$ on
$B^{+}_{1}$ with support contained in $B^{+}_{\rho}$.
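To spell out the calculus inequality just used: with $g=\min\\{|f|,k\\}$, the stated identity (valid since $g(a)=0$ and $r^{\gamma-1}g^{2}\to 0$ as $r\downarrow 0$) expands to
$(\gamma-1)\int_{0}^{a}r^{\gamma-2}g^{2}\,dr=-2\int_{0}^{a}r^{\gamma-1}gg^{\prime}\,dr\leqslant 2\Bigl{(}\int_{0}^{a}r^{\gamma-2}g^{2}\,dr\Bigr{)}^{1/2}\Bigl{(}\int_{0}^{a}r^{\gamma}(f^{\prime})^{2}\,dr\Bigr{)}^{1/2}$
(using $|g^{\prime}|\leqslant|f^{\prime}|$ a.e.), so $\int_{0}^{a}r^{\gamma-2}g^{2}\,dr\leqslant 4(\gamma-1)^{-2}\int_{0}^{a}r^{\gamma}(f^{\prime})^{2}\,dr$, and letting $k\to\infty$ gives the inequality with $C=4(\gamma-1)^{-2}$; A.3 then follows by applying this on each ray $y={\rm const}$ and integrating in $y$.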
Next observe that $u$ satisfies the weak form of A.1:
A.4 $\int_{\breve{B}_{1}^{+}}(u_{r}\zeta_{r}+u_{y}\cdot\zeta_{y})\,d\mu_{+}=0$
for any Lipschitz $\zeta$ with support in $B^{+}_{\rho}$ for some $\rho<1$,
where, here and subsequently,
$d\mu_{+}=r^{\gamma}drdy$
and subscripts denote partial derivatives:
$u_{r}=D_{r}u,\,\,\,u_{y}=D_{\\!y}u=(D_{\\!y_{1}}u,\ldots,D_{\\!y_{\ell}}u).$
A.4 is checked by first observing that it holds with
$\varphi_{\sigma}(r)\zeta(r,y)$ in place of $\zeta(r,y)$, where
$\varphi_{\sigma}(r)=0$ for $r<\sigma/2$, $\varphi_{\sigma}(r)=1$ for
$r>\sigma$, $|\varphi_{\sigma}^{\prime}(r)|<C/\sigma$, and then letting
$\sigma\downarrow 0$ and using A.2.
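(In detail, the extra term created by the cutoff is $\int u_{r}\varphi_{\sigma}^{\prime}\zeta\,d\mu_{+}$, supported in the region $\sigma/2<r<\sigma$; by the Cauchy-Schwarz inequality it is bounded by
$\Bigl{(}\int_{\sigma/2<r<\sigma}u_{r}^{2}\,d\mu_{+}\Bigr{)}^{1/2}\Bigl{(}C\sigma^{-2}\int_{\sigma/2<r<\sigma}\zeta^{2}\,d\mu_{+}\Bigr{)}^{1/2}\leqslant C_{\zeta}\,\sigma^{(\gamma-1)/2}\Bigl{(}\int_{B^{+}_{\rho}}u_{r}^{2}\,d\mu_{+}\Bigr{)}^{1/2}\to 0$
as $\sigma\downarrow 0$, since $\gamma>1$ and A.2 holds.)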
Similarly since $r^{-\gamma}\frac{\partial}{\partial
r}(r^{\gamma}\frac{\partial u^{2}}{\partial
r})+{\textstyle\sum}_{j=1}^{\ell}D_{y_{j}}D_{y_{j}}u^{2}=2(u_{r}^{2}+|u_{y}|^{2})\geqslant
0$, we can check using A.2 and A.3 that $u^{2}$ is a weak subsolution, meaning
that
A.5
$\int_{\breve{B}_{1}^{+}}\bigl{(}(u^{2})_{r}\zeta_{r}+(u^{2})_{y}\cdot\zeta_{y}\bigr{)}\,d\mu_{+}\leqslant
0,$
for any non-negative Lipschitz function $\zeta$ on $B^{+}_{1}$ with support
$\subset B^{+}_{\rho}$ for some $\rho<1$.
Next we note that if $\rho\in[{\textstyle\frac{1}{2}},1)$, $r_{0}\geqslant 0$
and $B_{\rho}^{+}(r_{0},y_{0})\subset B^{+}_{1}$, then
A.6 $|u(r_{0},y_{0})|\leqslant
C\bigl{(}\rho^{-\ell-1-\gamma}\int_{B^{+}_{\rho}(r_{0},y_{0})}u^{2}\,d\mu_{+}\bigr{)}^{1/2},\quad
C=C(\gamma,\ell).$
To check this, first observe that if
$B^{+}_{\sigma}(r_{0},y_{0})\subset\breve{B}_{1}^{+}$ and $r_{0}>\sigma$ then
the equation A.1 is uniformly elliptic in divergence form with smooth
coefficients on $B_{\sigma/2}(r_{0},y_{0})$, so we can use standard elliptic
estimates for $u$ to give
A.7 $|u(r_{0},y_{0})|^{2}\leqslant
C\sigma^{-\gamma-1-\ell}\int_{B^{+}_{\sigma}(r_{0},y_{0})}u^{2}\,d\mu_{+}$
with $C=C(\gamma,\ell)$. So now assume $B_{\rho}^{+}(r_{0},y_{0})\subset
B^{+}_{1}$. If $r_{0}>\rho/4$ we can use A.7 with $\sigma=\rho/4<r_{0}$ to
give A.6, while on the other hand if $r_{0}\leqslant\rho/4$ then we can first
take $\sigma=r_{0}/2$ in A.7 to give
A.8 $|u(r_{0},y_{0})|^{2}\leqslant
Cr_{0}^{-\ell-1-\gamma}\int_{B^{+}_{r_{0}/2}(r_{0},y_{0})}u^{2}\,d\mu_{+}\leqslant
Cr_{0}^{-\ell-1-\gamma}\int_{B^{+}_{2r_{0}}(0,y_{0})}u^{2}\,d\mu_{+},$
and then observe, using a straightforward modification of the relevant
argument for classical subharmonic functions to the present case of the
$\beta$-subharmonic function $u^{2}$ as in A.5, that
$\text{$\sigma^{-\ell-1-\gamma}\int_{B^{+}_{\sigma}(0,y_{0})}u^{2}\,d\mu_{+}$
is an increasing function of $\sigma$ for $\sigma\in(0,\rho/2]$}.$
So from A.8 we conclude
$|u(r_{0},y_{0})|^{2}\leqslant
C\rho^{-\ell-1-\gamma}\int_{B^{+}_{\rho/2}(0,y_{0})}u^{2}\,d\mu_{+}\leqslant
C\rho^{-\ell-1-\gamma}\int_{B^{+}_{\rho}(r_{0},y_{0})}u^{2}\,d\mu_{+}$
where $C=C(\gamma,\ell)$. Thus A.6 is proved.
For $\rho\in[{\textstyle\frac{1}{2}},1)$,
$kh\in\bigl{(}-(1-\rho),1-\rho\bigr{)}\setminus\\{0\\}$, and
$k\in\\{1,2,\ldots\\}$, let $u_{h}^{(k)}$ (defined on $\breve{B}^{+}_{\rho}$)
denote the vector of $k$-th order difference quotients with respect to the
$y$-variables; so for example
$\displaystyle u_{h}^{(1)}$
$\displaystyle=\bigl{(}h^{-1}(u(x,y+he_{1})-u(x,y)),\ldots,h^{-1}(u(x,y+he_{\ell})-u(x,y))\bigr{)},$
$\displaystyle u_{h}^{(2)}$
$\displaystyle=\bigl{(}h^{-1}(u_{h}^{(1)}(x,y+he_{1})-u_{h}^{(1)}(x,y)),\ldots,h^{-1}(u_{h}^{(1)}(x,y+he_{\ell})-u_{h}^{(1)}(x,y))\bigr{)},$
and generally, for $k|h|<1-\rho$,
$\displaystyle
u_{h}^{(k)}(x,y)=h^{-1}\bigl{(}(u_{h}^{(k-1)}(x,y+he_{1})-u_{h}^{(k-1)}(x,y)),\ldots,$
$\displaystyle\hskip
166.2212pt(u_{h}^{(k-1)}(x,y+he_{\ell})-u_{h}^{(k-1)}(x,y))\bigr{)},$
(which is a function with values in $\varmathbb{R}^{\ell^{k}}$). For
notational convenience we also take
$u^{(0)}_{h}(x,y)=u(x,y).$
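For intuition, a minimal numpy sketch of this recursion on a sampled grid (the periodic boundary handling via np.roll and the flat list holding the $\ell^{k}$ components are conveniences of this sketch, not part of the definition):

```python
import numpy as np

def first_quotient(u, h, axis):
    # h^{-1}(u(., y + h e_j) - u(., y)) along one y-axis of a sampled grid;
    # np.roll wraps around, so the last slice along `axis` is spurious and
    # would be discarded in any serious use.
    return (np.roll(u, -1, axis=axis) - u) / h

def kth_quotient(u, h, k, y_axes):
    """k-th order difference quotient u_h^{(k)}: a list of ell^k grid arrays."""
    if k == 0:
        return [u]
    return [first_quotient(p, h, ax)
            for p in kth_quotient(u, h, k - 1, y_axes)
            for ax in y_axes]

# example: u sampled on an (r, y1, y2) grid, so ell = 2 and y_axes = (1, 2)
u = np.random.rand(16, 16, 16)
print(len(kth_quotient(u, 0.1, 3, (1, 2))))  # -> 8 = 2**3 components
```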
Then, replacing $\zeta$ in A.4 by $\zeta_{-h}^{(k)}$ and changing variables
appropriately (i.e. “integration by parts” for finite differences instead of
derivatives),
$\int_{\breve{B}_{1}^{+}}\bigl{(}D_{r}u^{(k)}_{h}D_{r}\zeta+{\textstyle\sum}_{j=1}^{\ell}D_{y_{j}}u^{(k)}_{h}\,D_{y_{j}}\zeta\bigr{)}\,d\mu_{+}=0,$
for all Lipschitz $\zeta$ on $B_{1}^{+}$ with support $\subset B^{+}_{\rho}$
and for $k|h|<1-\rho$. Replacing $\zeta$ by $\zeta^{2}u_{h}^{(k)}$ gives
$\int_{\breve{B}_{1}^{+}}\bigl{(}|D_{r}u_{h}^{(k)}|^{2}+|D_{y}u_{h}^{(k)}|^{2}\bigr{)}\zeta^{2}\,d\mu_{+}=-2\int_{\breve{B}_{1}^{+}}\bigl{(}\zeta u_{h}^{(k)}\cdot D_{r}u_{h}^{(k)}\,D_{r}\zeta+{\textstyle\sum}_{j=1}^{\ell}\zeta u_{h}^{(k)}\cdot D_{y_{j}}u_{h}^{(k)}\,D_{y_{j}}\zeta\bigr{)}\,d\mu_{+},$
so by Cauchy-Schwarz
$\int_{\breve{B}_{1}^{+}}\bigl{(}|D_{r}u_{h}^{(k)}|^{2}+|D_{y}u_{h}^{(k)}|^{2}\bigr{)}\zeta^{2}\,d\mu_{+}\leqslant
4\int_{\breve{B}_{1}^{+}}|u_{h}^{(k)}|^{2}|D\zeta|^{2}\,d\mu_{+},$
and by A.3 we then have
A.9
$\int_{\breve{B}_{1}^{+}}\bigl{(}r^{-2}|u_{h}^{(k)}|^{2}+|D_{r}u_{h}^{(k)}|^{2}+|D_{y}u_{h}^{(k)}|^{2}\bigr{)}\zeta^{2}\,d\mu_{+}\leqslant
C\int_{\breve{B}_{1}^{+}}|u_{h}^{(k)}|^{2}|D\zeta|^{2}\,d\mu_{+},$
where $C=C(\gamma,\ell)$. Now let $u^{(k)}=\lim_{h\to 0}u_{h}^{(k)}$ (i.e.
$u^{(k)}$ is the array of all mixed partial derivatives of order $k$ with
respect to the $y$ variables defined inductively by $u^{(0)}=u$ and
$u^{(k+1)}=D_{y}u^{(k)}$). Then A.9 gives
A.10
$\int_{\breve{B}_{1}^{+}}\bigl{(}r^{-2}|u^{(k)}|^{2}+|D_{r}u^{(k)}|^{2}+|u^{(k+1)}|^{2}\bigr{)}\zeta^{2}\,d\mu_{+}\leqslant
C\int_{\breve{B}_{1}^{+}}|u^{(k)}|^{2}|D\zeta|^{2}\,d\mu_{+}$
for each $\zeta\in C^{1}(B^{+}_{\rho})$ with support in $B^{+}_{\theta\rho}$
for some $\theta\in[{\textstyle\frac{1}{2}},1)$ and for each $k$ such that the
right side is finite. By A.2 the right side is finite for $k=0,1$, and taking
$k=1$ in A.10 then implies the right side is also finite with $k=2$.
Proceeding inductively we see that in fact the right side is finite for
each $k=0,1,\ldots$, so A.10 is valid and all integrals are finite for all
$k=1,2,\ldots$.
Let $k\in\\{1,2,\ldots\\}$ and $(r_{0},y_{0})\in B^{+}_{1}$ with
$r_{0}\geqslant 0$, and suppose $B^{+}_{\rho}(r_{0},y_{0})\subset B^{+}_{1}$. Let
$\rho_{j}=\rho-{\textstyle\frac{j}{k}}\rho/2,\,\,j=0,\ldots,k.$
Applying A.10 with $k=j$ and
$\zeta=1\text{ on }B^{+}_{\rho_{j+1}}(r_{0},y_{0}),\,\,\zeta=0\text{ on
}\varmathbb{R}^{1+\ell}\setminus B^{+}_{\rho_{j}}(r_{0},y_{0}),\text{ and
}|D\zeta|\leqslant 3k,$
we obtain
$\int_{\breve{B}^{+}_{\rho_{j+1}}(r_{0},y_{0})}|u^{(j+1)}|^{2}\,d\mu_{+}\leqslant
C\rho^{-2}k^{2}\int_{\breve{B}^{+}_{\rho_{j}}(r_{0},y_{0})}|u^{(j)}|^{2}\,d\mu_{+}$
with $C=C(\gamma,\ell)$. By iteration this gives
$\int_{\breve{B}_{\rho/2}^{+}(r_{0},y_{0})}|u^{(k)}|^{2}\,d\mu_{+}\leqslant
C^{k}\rho^{-2k}(k!)^{2}\int_{\breve{B}_{\rho}^{+}(r_{0},y_{0})}|u|^{2}\,d\mu_{+}$
with suitable $C=C(\gamma,\ell)$ (independent of $k$), and then by A.6 with
$u^{(k)}$ in place of $u$ and $\rho/2$ in place of $\rho$ we obtain
$|u^{(k)}(r_{0},y_{0})|^{2}\leqslant
C^{k}\rho^{-2k}(k!)^{2}\rho^{-\gamma-1-\ell}\int_{\breve{B}^{+}_{\rho}(r_{0},y_{0})}|u|^{2}\,d\mu_{+},$
with $C=C(\gamma,\ell)$ (independent of $k$ and $\rho$). In view of the
arbitrariness of $\rho,r_{0},y_{0}$ this implies
A.11 $\sup_{B^{+}_{\rho/2}}|u^{(k)}|^{2}\leqslant
C^{k}(k!)^{2}\int_{\breve{B}^{+}_{\rho}}|u|^{2}\,d\mu_{+},\text{ for each
$\rho\in(0,1)$},$
where $C=C(\gamma,\ell,\rho)$.
Next let $L_{r}$ be the second order operator defined by
$L_{r}(f)=r^{-\gamma}\frac{\partial}{\partial
r}\bigl{(}r^{\gamma}\frac{\partial f}{\partial r}\bigr{)},$
so that A.1 says $L_{r}u=-\Delta_{y}u$, where
$\Delta_{y}u={\textstyle\sum}_{j=1}^{\ell}D^{2}_{y_{j}}u$, and by repeatedly
differentiating this identity with respect to the $y$ variables, we have
$L_{r}u^{(k)}(r,y)=-\Delta_{y}u^{(k)},$
and hence
A.12 $L_{r}^{j}u^{(k)}=(-\Delta_{y})^{j}u^{(k)}$
for each $j,k=0,1,\ldots$. In particular, since $|\Delta_{y}^{j}f|\leqslant
C^{j}|f^{(2j)}|$ with $C=C(\ell)$,
A.13 $|L_{r}^{j}u^{(k)}|^{2}\leqslant C^{2j}|u^{(k+2j)}|^{2}$
for each $j,k=0,1,2,\ldots$, where $C=C(\ell)$.
Next we note that, since $|u^{(k)}|$ is bounded on $B^{+}_{\rho}$ for each
$\rho<{\textstyle\frac{1}{2}}$ by A.11, for small enough $r>0$ and
$|y|<{\textstyle\frac{1}{2}}$ we can apply elliptic estimates to give
$|D_{r}u^{(k)}|<C/r$ with $C$ fixed independent of $r$ (but depending on $k$),
and since $\gamma>1$ we have $|r^{\gamma}D_{r}u^{(k)}(r,y)|\leqslant
Cr^{\gamma-1}\to 0$ as $r\downarrow 0$. But using A.13 with $j=1$ and A.11 we
have $|D_{r}(r^{\gamma}D_{r}u^{(k)})|\leqslant Cr^{\gamma}$ and hence by
integrating with respect to $r$ and using the above fact that
$|r^{\gamma}D_{r}u^{(k)}(r,y)|\leqslant Cr^{\gamma-1}\to 0$ as $r\downarrow 0$,
we then have
A.14 $|D_{r}u^{(k)}(r,y)|\leqslant Cr\text{ for small enough $r$ and all
$|y|\leqslant\rho$, $\rho<{\textstyle\frac{1}{2}}$},$
with $C$ depending on $k$, $\gamma$, $\rho$ and $\ell$, and in particular, for
each $\rho<{\textstyle\frac{1}{2}}$,
A.15 $D_{r}u^{(k)}(0_{+},y)=0=\lim_{r\downarrow 0}D_{r}u^{(k)}(r,y)\text{
uniformly for $|y|\leqslant\rho$.}$
We now claim the following polynomial approximation property: For each $u$ as
in A.1, A.2, each $j,k=1,2,\ldots$ with $k\leqslant j$, and each
$(r,y)\in\breve{B}^{+}_{\rho}$, $\rho<{\textstyle\frac{1}{2}}$,
A.16
$\bigl{|}L_{r}^{j-k}u(r,y)-{\textstyle\sum}_{i=1}^{k}c_{ijk}L_{r}^{j-i}u(0,y)r^{2(k-i)}/(2(k-i))!\bigr{|}\leqslant
r^{2k}{\sup}_{B_{\rho}^{+}}|L_{r}^{j}u|/(2k)!$
where $0<c_{ijk}\leqslant 1$. To prove the case $k=1$, by virtue of A.14,
A.15, we can simply integrate from $0$ to $r$, using
$D_{r}(r^{\gamma}D_{r}L_{r}^{j-1}u(r,y))=r^{\gamma}L_{r}^{j}u(r,y)$, followed by a
cancellation of $r^{\gamma}$ from each side of the resulting identity. This
gives
$|D_{r}L_{r}^{j-1}u(r,y)|\leqslant
r\,{\sup}_{B_{\rho}^{+}}|L_{r}^{j}u|/(\gamma+1),$
and then a second integration using A.15 gives
$\bigl{|}L_{r}^{j-1}u(r,y)-L_{r}^{j-1}u(0,y)\bigr{|}\leqslant(2(\gamma+1))^{-1}r^{2}{\sup}_{B_{\rho}^{+}}|L_{r}^{j}u|,\,\,j=1,2,\ldots,$
which establishes the case $k=1$. Assume $k+1\leqslant j$ and that A.16 is
correct for $k$. Multiplying each side of A.16 by $r^{\gamma}$ and
integrating, we obtain
$\displaystyle\bigl{|}r^{\gamma}D_{r}L_{r}^{j-k-1}u(r,y)-{\textstyle\sum}_{i=1}^{k}(2(k-i)+\gamma+1)^{-1}c_{ijk}L_{r}^{j-i}u(0,y)\frac{r^{2(k-i)+\gamma+1}}{(2(k-i))!}\bigr{|}$
$\displaystyle\hskip
130.08621pt\leqslant(2k+\gamma+1)^{-1}\frac{r^{2k+\gamma+1}}{(2k)!}{\sup}_{B_{\rho}^{+}}|L_{r}^{j}u|,$
where we used the fact that
$r^{\gamma}L_{r}^{j-k}u(r,y)=D_{r}(r^{\gamma}D_{r}L_{r}^{j-k-1}u(r,y))$.
After cancelling the factor $r^{\gamma}$ and integrating again, we obtain
$\displaystyle\bigl{|}L_{r}^{j-k-1}u(r,y)-L_{r}^{j-k-1}u(0,y)$
$\displaystyle\hskip
7.22743pt-{\textstyle\sum}_{i=1}^{k}(2(k-i)+\gamma+1)^{-1}(2(k-i)+2)^{-1}c_{ijk}L_{r}^{j-i}u(0,y)\frac{r^{2(k-i)+2}}{(2(k-i))!}\bigr{|}$
$\displaystyle\hskip
10.84006pt\leqslant(2k+\gamma+1)^{-1}(2k+2)^{-1}r^{2k+2}{\sup}_{B_{\rho}^{+}}|L_{r}^{j}u|/(2k)!\leqslant{\sup}_{B_{\rho}^{+}}|L_{r}^{j}u|r^{2k+2}/(2k+2)!$
which confirms the validity of A.16 with $k+1$ in place of $k$. So A.16 is
proved for all $k\leqslant j$, and in particular with $k=j$ and suitable
constants $c_{ij}\in(0,1]$ we get
$u(r,y)={\textstyle\sum}_{i=0}^{j-1}c_{ij}L_{r}^{i}u(0,y)r^{2i}/(2i)!+E_{j}(r,y),\quad\text{where }|E_{j}(r,y)|\leqslant
r^{2j}{\sup}_{B_{\rho}^{+}}|L_{r}^{j}u|/(2j)!.$
By A.13 and A.11,
${\sup}_{B_{\rho/2}^{+}}|L_{r}^{i}u|/(2i)!={\sup}_{B_{\rho/2}^{+}}|\Delta_{y}^{i}u|/(2i)!\leqslant
C^{i}{\sup}_{B_{\rho/2}^{+}}|u^{(2i)}|/(2i)!\leqslant
C^{i}\bigl{(}\int_{\breve{B}^{+}_{\rho}}|u|^{2}\,d\mu_{+}\bigr{)}^{1/2},$
with $C=C(\gamma,\ell,\rho)$, so, for suitable $C=C(\gamma,\ell)$, we conclude
that $u(r,y)$ has a power series expansion in powers of $r^{2}$:
A.17 $u(r,y)={\textstyle\sum}_{j=0}^{\infty}\,\,\,\,a_{j}(y)r^{2j},\quad
0\leqslant r<\sigma,$
where $\sigma=\sigma(\gamma,\ell)\in(0,{\textstyle\frac{1}{2}}]$, and $a_{j}$
satisfies the bounds
${\sup}_{B^{+}_{\sigma}}|a_{j}|\leqslant
C^{j}\bigl{(}\int_{B_{1/2}^{+}}u^{2}\,d\mu_{+}\bigr{)}^{1/2},$
where $C=C(\gamma,\ell)$. Thus A.17 implies
A.18 ${\sup}_{B_{\sigma/2}^{+}}|D_{r}^{j}u(r,y)|\leqslant
C^{j}j!\,\bigl{(}\int_{B_{1/2}^{+}}u^{2}\,d\mu_{+}\bigr{)}^{1/2},\,\,\,C=C(\gamma,\ell).$
Since the same holds with $u^{(k)}$ in place of $u$, and since
$\int_{B_{1/2}^{+}}|u^{(k)}|^{2}\,d\mu_{+}\leqslant
C^{k}(k!)^{2}\int_{B_{3/4}^{+}}u^{2}\,d\mu_{+},$
by A.11, we deduce from A.18 that for suitable
$\sigma=\sigma(\gamma,\ell)\in(0,{\textstyle\frac{1}{2}})$
A.19 ${\sup}_{B_{\sigma}^{+}}|D_{r}^{j}D_{y}^{k}u(r,y)|\leqslant
C^{j+k}j!k!\,\bigl{(}\int_{B_{3/4}^{+}}u^{2}\,d\mu_{+}\bigr{)}^{1/2},$
where $C=C(\gamma,\ell)$, and hence in particular $u$ is real-analytic in the
variables $r^{2}$ and $y_{1},\ldots,y_{\ell}$ in a neighborhood of $(0,0)$ as
claimed.
Finally we show that if $u$ is $\beta$-harmonic in $\breve{B}_{1}^{+}$ then
the power series for $u$ converges in $B_{\rho}^{+}$ for each $\rho<1$, and
also that the homogeneous $\beta$-harmonic polynomials restricted to
$S_{+}^{\ell}$ are complete in $L^{2}(\nu_{+})$ on $S_{+}^{\ell}$, where
$\nu_{+}$ is the measure $d\nu_{+}=\omega_{1}^{\gamma}d\mu_{\ell}$ on
$S_{+}^{\ell}$.
So let $u\in L^{2}(\mu_{+})$ satisfy A.1 and A.2. The above discussion shows
that for suitably small $\sigma$ we can write
A.20 $u={\textstyle\sum}_{j=0}^{\infty}u_{j}\,\,\,\text{ in }B_{\sigma}^{+},$
where $u_{j}$ consists of the homogeneous degree $j$ terms in the power series
expansion of $u$ in $B_{\sigma}$ (and $u_{j}=0$ if there are no such terms).
Then each $u_{j}\neq 0$ is a homogeneous degree-$j$ $\beta$-harmonic polynomial
and we let
$\tilde{u}_{j}(\omega)=\rho^{-j}u_{j}(\rho\omega),\,\,\,\hat{u}_{j}(\omega)=\|\tilde{u}_{j}\|_{L^{2}(\nu_{+})}^{-1}\tilde{u}_{j}(\omega),\,\,\,\omega\in\varmathbb{S}^{\ell}_{+},$
and we set $\hat{u}_{j}(\omega)=0$ if $u_{j}=0$. Then, with
$\langle\,,\,\rangle=$ the $L^{2}(\nu_{+})$ inner product, $\langle
u,\hat{u}_{j}\rangle\hat{u}_{j}=u_{j}$ in $B^{+}_{\sigma}$ for each $j$, and
hence by A.20 the series $\sum_{j}\langle u,\hat{u}_{j}\rangle\hat{u}_{j}$
converges smoothly (and also in $L^{2}(\mu_{+})$) to $u$ in $B^{+}_{\sigma}$.
By definition $\rho^{j}\hat{u}_{j}(\omega)$ is either zero or a homogeneous
degree $j$ $\beta$-harmonic polynomial, so by 3.14 ${\rm
div}_{\varmathbb{S}^{\ell}_{+}}(\omega_{1}^{\gamma}\nabla_{\varmathbb{S}^{\ell}_{+}}\hat{u}_{j})=-j(j+\ell+\beta)\omega_{1}^{\gamma}\hat{u}_{j}$,
and hence using the formula 3.13 we can directly check that $\langle
u,\hat{u}_{j}\rangle\hat{u}_{j}$ is $\beta$-harmonic on all of
$\breve{B}_{1}^{+}$. Since by construction it is equal to $u_{j}$ on
$B_{\sigma}$, by unique continuation (applicable since $u$ is real-analytic on
$\breve{B}_{1}^{+}$) we conclude
A.21 $\displaystyle\langle u,\hat{u}_{j}\rangle\hat{u}_{j}\text{ is either
zero or the homogeneous degree $j$ $\beta$-harmonic}$ $\displaystyle\hskip
101.17755pt\text{ polynomial $u_{j}$ on all of $B_{1}^{+}\setminus
S^{\ell}_{+}$ for each $j=0,1,\ldots$}.$
Also, by the orthogonality 3.16,
$\displaystyle\bigl{\|}\sum_{j=p}^{q}\bigl{\langle}u,\hat{u}_{j}\bigr{\rangle}\hat{u}_{j}\bigr{\|}^{2}_{L^{2}(\mu_{+}^{\rho})}$
$\displaystyle=\sum_{j=p}^{q}\int_{0}^{\rho}\bigl{\langle}u(\tau\omega),\hat{u}_{j}(\omega)\bigr{\rangle}^{2}\,\tau^{\gamma+\ell}d\tau$
$\displaystyle\leqslant\int_{0}^{\rho}\bigl{\|}u(\tau\omega)\bigr{\|}^{2}_{L^{2}(\nu_{+})}\,\tau^{\gamma+\ell}d\tau=\bigl{\|}u\bigr{\|}^{2}_{L^{2}(\mu^{\rho}_{+})}\,\,(<\,\infty)$
for each $\rho<1$ and each $p<q$, where $\smash{\mu_{+}^{\rho}}$ is the
measure $\mu_{+}$ on $B_{\rho}$. So $\sum_{j=0}^{q}\langle
u,\hat{u}_{j}\rangle\hat{u}_{j}$ is Cauchy, hence convergent, in
$L^{2}(\mu_{+}^{\rho})$ to a $\beta$-harmonic function $v$ on
$\breve{B}_{\rho}^{+}$. But $v=u$ on $B_{\sigma}^{+}$ and hence, again using
unique continuation, $v=u$ in all of $\breve{B}_{\rho}^{+}$. Thus
$\sum_{j=0}^{q}\langle u,\hat{u}_{j}\rangle\hat{u}_{j}$ converges to $u$ in
$L^{2}(\smash{\mu_{+}^{\rho}})$ for each $\rho<1$ and the convergence is in
$L^{2}(\mu_{+})$ if $\|u\|_{L^{2}(\mu_{+})}<\infty$.
Now observe that the bounds A.19 were established for balls centred at
$(0,0)$, but with only notational changes the same argument gives similar
bounds in balls centred at $(0,y_{0})$ with $|y_{0}|<1$. Specifically for each
$\rho\in(0,1)$ and each $|y_{0}|<\rho$ there is
$\sigma=\sigma(\gamma,\ell,\rho)<{\textstyle\frac{1}{2}}(1-\rho)$ such that
${\sup}_{B_{\sigma}^{+}(0,y_{0})}|D_{r}^{j}D_{y}^{k}u|\leqslant
C^{j+k}j!k!\bigl{(}\int_{B_{(1-\rho)/2}^{+}(0,y_{0})}u^{2}\,d\mu_{+}\bigr{)}^{1/2},\,\,C=C(\gamma,\ell,\rho).$
So in fact, with $\sigma=\sigma(\gamma,\ell,\rho)$ small enough,
${\sup}_{\\{(r,y):r\in[0,\sigma],|y|\leqslant\rho\\}}|D_{r}^{j}D_{y}^{k}u(r,y)|\leqslant
C^{j+k}j!k!\bigl{(}\int_{B_{1}^{+}}u^{2}\,d\mu_{+}\bigr{)}^{1/2},\quad\rho<1.$
Also in $B^{+}_{\rho}\setminus([0,\sigma]\times\varmathbb{R}^{\ell})$ we can
use standard elliptic estimates, so in fact we have
A.22 ${\sup}_{B^{+}_{\rho}}|D_{r}^{j}D_{y}^{k}u(r,y)|\leqslant
C\bigl{(}\int_{B_{1}^{+}}u^{2}\,d\mu_{+}\bigr{)}^{1/2},$
with $C=C(j,k,\gamma,\rho,\ell)$, so the $L^{2}$ convergence of the series
$\sum_{j}\langle u,\hat{u}_{j}\rangle\hat{u}_{j}(=\sum_{j}u_{j})$ proved above
is also $C^{k}$ convergence in $B_{\rho}^{+}$ for each $k\geqslant 1$ and each
$\rho<1$.
Finally to prove the completeness of the homogeneous $\beta$-harmonic
polynomials in $L^{2}(\nu_{+})$ (on $\varmathbb{S}^{\ell}_{+}$), let $\varphi$
be any smooth function on $\varmathbb{S}^{\ell}_{+}$ with $\varphi$ zero in
some neighborhood of $r=0$. By minimizing the energy
$\int_{B_{1}^{+}}(u_{r}^{2}+|u_{y}|^{2})\,d\mu_{+}$ among functions with
trace $\varphi$ on $\varmathbb{S}^{\ell}_{+}$ we obtain a solution of A.4 with
trace $\varphi$ on $\varmathbb{S}^{\ell}_{+}$. The above discussion plus
elliptic boundary regularity shows that $u$ is $C^{0}$ on all of $B_{1}^{+}$
and that the sequence $\\{\sum_{j=0}^{q}u_{j}\\}_{q=0,1,2,\ldots}$, which we
showed above to be convergent to $u$ in $L^{2}(\mu_{+})$ on $B_{1}^{+}$, is
also uniformly convergent to $u$ on all of $B_{1}^{+}$. Hence
$\varphi(\omega)=\sum_{j=0}^{\infty}u_{j}(\omega)$ on
$\varmathbb{S}^{\ell}_{+}$ with the convergence uniform and hence in
$L^{2}(\nu_{+})$. Thus $\varphi$ is represented as an $L^{2}(\nu_{+})$
convergent series of $\beta$-harmonic polynomials on
$\varmathbb{S}^{\ell}_{+}$. Since such $\varphi$ are dense in
$L^{2}(\nu_{+})$, the required completeness is established.
## References
* [HS85] R. Hardt and L. Simon, _Area minimizing hypersurfaces with isolated singularities_ , J. Reine u. Angew. Math. 362 (1985), 102–129.
* [Ilm96] T. Ilmanen, _A strong maximum principle for singular minimal hypersurfaces_ , Calc. Var. and PDE 4 (1996), 443–467.
* [Sim83] L. Simon, _Asymptotics for a class of evolution equations, with applications to geometric problems_ , Annals of Mathematics 118 (1983), 525–571.
* [Sim85] by same author, _Isolated singularities of extrema of geometric variational problems_ , Lecture notes in mathematics—Ed. E. Giusti, Springer Verlag, Berlin-Heidelberg-New York 1161 (1985), 206–277.
* [Sim94] by same author, _Uniqueness of some cylindrical tangent cones_ , Comm. in Anal. and Geom. 2 (1994), 1–33.
* [Sim21a] by same author, In preparation (2021).
* [Sim21b] by same author, _Stable minimal hypersurfaces with singular set an arbitrary closed $K\subset\\{0\\}\times\varmathbb{R}^{\ell}$_, arXiv, Jan. 2021.
* [SS81] R. Schoen and L. Simon, _Regularity of stable minimal hypersurfaces_ , Comm. Pure and Appl. Math 34 (1981), 741–797.
* [SW89] B. Solomon and B. White, _A strong maximum principle for varifolds that are stationary with respect to even parametric functionals_ , Indiana Univ. Math. J. 38 (1989), 683–691.
* [Wic14] N. Wickramasekera, _A general regularity theory for stable codimension $1$ integral varifolds_, Ann. of Math. 179 (2014), 843–1007.
# Community Detection in Blockchain Social Networks
Sissi Xiaoxiao Wu, Zixian Wu, Shihui Chen, Gangqiang Li, and Shengli Zhang
This work is supported by the National Natural Science Foundation of China
under Grant 61701315. S. X. Wu, Z. Wu, S. Chen, G. Li, and S. Zhang are with
the College of Electronics and Information Engineering, Shenzhen University,
Shenzhen, China. E-mails: {xxwu.eesissi<EMAIL_ADDRESS>{1900432037,
2172262956<EMAIL_ADDRESS>
###### Abstract
In this work, we consider community detection in blockchain networks. We
specifically take the Bitcoin network and Ethereum network as two examples,
where community detection serves in different ways. For the Bitcoin network,
we modify the traditional community detection method and apply it to the
transaction social network to cluster users with similar characteristics. For
the Ethereum network, on the other hand, we define a bipartite social graph
based on the smart contract transactions. A novel community detection
algorithm designed for low-rank signals on graphs helps find users’
communities based on user-token subscriptions. Based on these results, two
strategies are devised to deliver on-chain advertisements to those users in
the same community. We implement the proposed algorithms on real data. With
the modified clustering algorithm, the community results in the Bitcoin
network are largely consistent with the publicly announced ground truth of a
betting-site community. Meanwhile, we run the proposed strategy on real
Ethereum data, visualize the results, and implement an advertisement delivery
on the Ropsten test net.
###### Index Terms:
Blockchain, Bitcoin, Ethereum, community detection, recommendation
## I Introduction
Ever since Satoshi Nakamoto’s Bitcoin white paper in $2008$[36], blockchain
has been deployed, serving as a decentralized ledger, in many areas such as
banking[20], network security, supply-chain management[48],
internet-of-things (IoT)[12], and financial cryptocurrency[34]. Recently,
governments in different countries have begun to pay close attention to
blockchain and to push blockchain technologies to the cutting edge. In this work, instead
of treating blockchain as a ledger, we try to study blockchain from a social
media perspective. Specifically, we define the blockchain network as a
decentralized social network, based on which we try different algorithms to
analyze users’ relationships underlying the ledger records. Our final goals are
two-fold. First, we define different types of social networks for both
Bitcoin and Ethereum. Second, by discovering users’ clusters in the defined
social graph we analyze users’ behavior in both networks, and especially try
to deliver advertisements in the Ethereum network.
Social network data is valuable as we can mine users’ preferences from it and
thus explore potential marketing. In a traditional centralized social network,
data is stored in a fusion center owned by the platform. Therefore, the
platform monopolizes all data mining applications and profits from them. The
blockchain network, however, decentralizes data among users and allows each
user in the network to fully access the whole dataset and develop their own
applications. Moreover, traceability ensured by the
blockchain endorses the quality of the data, which further improves the
efficiency of the data mining applications. The above nice properties make the
blockchain as a social network promising in the future.
In this work, according to the way of recording the ledgers, Bitcoin and
Ethereum are respectively defined as different kinds of social networks. In
Bitcoin, every user can generate multiple private-public key pairs and the
only purpose of transaction is to send BTC coins. The public key (also known
as the address) is visible in the transaction block, as either the transaction
input or the transaction output. Multiple input addresses and multiple output
addresses may exist in the same Bitcoin transaction. In this context, one
important pre-processing task of analyzing the ledger data in Bitcoin is to
associate those addresses which belong to the same user and group them into a
super-node in the social graph. This kind of operation is usually referred to
as the “common spend” or “change address” heuristics. Based on the pre-processing
results, we define the Bitcoin social network as an undirected graph where
each super-node corresponds to a node in the graph and the edge weight of any
two nodes in the graph is defined by historical coin-based transactions
between any two super-nodes. Then we propose a specific clustering algorithm,
which originates from the spectral clustering algorithm, and apply it to the
Bitcoin social graph to find communities.
Ethereum, published in $2014$ by Vitalik Buterin and launched in $2015$, is
the world’s leading programmable blockchain, as it has added a self-enforcing
piece, i.e., the so-called smart contract, which ensures a coordinated and
enforced agreement between network participants, e.g., to form an organization
or to create tokens[9]. In Ethereum, a transaction has only one input address and
one output address and thus we can bypass the super-address pre-processing.
Unlike Bitcoin, users in Ethereum interact with each other by not only a
direct ETH coin transaction but also the smart contract transaction. In this
work, we focus on the smart contract transactions and define the Ethereum
social network as a bipartite social graph. Particularly, we are interested in
those smart contract transactions specific for initial coin offering (ICO)
events. Based on this bipartite graph, we introduce an effective community
detection algorithm for low-rank signals to group users into different
clusters[57]. These results can be further used for other purposes. For
example, in both Bitcoin and Ethereum networks, if one person creates two user
accounts, it is generally difficult to associate those accounts to the same
person due to pseudo-anonymity. Our community detection algorithms may help do
the association by analyzing the accounts’ preferences, given that two
accounts for the same user should share similarities. Also, the community
results may be used to label users’ potential preferences and thus
provide a targeted referral service in blockchain.
### I-A Related work
There has been a branch of works in the literature on analyzing the Bitcoin
transaction data and most of them focus on two issues: anonymization and
de-anonymization. Reviews of these two issues can be found in Refs. [50, 13,
44, 46]. Therein, shared coin and send mixers in Refs. [5, 30, 31, 7, 60] are
two basic anonymization approaches, while other approaches such as fair
exchange[5], transaction remote release[51] and zero cash[49] are also
popular. In this work, our main focus is on de-anonymization. In the
literature, there are many approaches for implementing de-anonymization. For
example, an early work in Ref. [5] first brought up the notion of “change
address” and people realized that one can use the heuristics to associate
addresses that involve common spending and one-time change. This approach is
widely used as the first step to process Bitcoin data[17, 18, 21, 32, 45, 11,
2]. There are also more advanced approaches. In Refs. [25, 6, 62], the authors
tried to de-anonymize user’s identity of Bitcoin by linking the Bitcoin
address with IP address. Ref. [53] summarized prior approaches of clustering
addresses and implemented a modular framework to group users and classify them
with different labels. Sometimes, off-chain information is also useful. For
example, in Refs. [44, 17] the authors proposed to use off-chain information
to guide different clustering models with the purpose of reducing the algorithm
complexity. Ref. [15] and Ref. [35] proposed novel methodologies for analyzing
Bitcoin users based on the observation of Bitcoin transactions over time.
In this work, we try to deanonymize a blockchain network by finding users’
communities. Some prior works have been done for Bitcoin. The idea of treating
Bitcoin as a social network has appeared in Ref. [52]. Therein, people usually
used the notions of “user graph” and “transaction graph”, and some analysis
based on these two graphs has been elaborated in Ref. [44]. Ref. [29] studied
the anonymity for Bitcoin by analyzing the transaction graph with the help of
public data. Ref. [19] studied how different features of the data influence
community results on the “transaction graph” based on the ground truth of
some known hack subnetworks. Authors in Ref. [43] and Ref. [22] extracted
various features from these two graphs and pointed out that features on the
graph are crucial for the analysis results. Ref. [45] showed that a two-party
community detection based on normalization mutual information could be used to
re-identify users in Bitcoin. Our community detection approach on the Bitcoin
data is based on the above works. Specifically, our approach has the following
properties: 1) we study the “user graph” based on the super-address which is
associated with addresses that involve common spending and one-time change; 2)
we use historical BTC coin amount as the key feature to perform the community
detection algorithm; 3) a modified clustering method is proposed for the
Bitcoin social graph to find communities.
The above approaches are effective for Bitcoin but are difficult to carry
over directly to the Ethereum network, as Ethereum is mechanically
different from Bitcoin. In Ethereum, there are two types of accounts:
externally-owned accounts (EoAs) and contract accounts (CAs). In Refs. [26,
10], the authors pointed out that existing methods such as discovering IP
addresses and Bitcoin addresses clustering usually do not fit the Ethereum
network due to the differences between both networks in the volatility of
entry nodes and the way transactions are handled. Some works tried to use
traditional clustering methods for Ethereum, such as support vector machine
(SVM) and k-means in Ref. [8], the standard k-means in Refs. [42, 23], long
short-term memory (LSTM) and convolutional neural network (CNN) models in
Ref. [23], affinity propagation k-medoids in Ref. [40], k-means clustering,
agglomerative clustering, Birch clustering in Refs. [41, 54] and Neo4j in Ref.
[10]. However, they basically treat EoAs and CAs equally as nodes in the
transaction graph[42, 10, 54], while these two types of accounts are
essentially different. Some works in Refs. [40, 27] also used side information
to analyze the on-chain transactions. Therein Ref. [27] utilized the smart
contract codes to analyze the smart contract nodes in the transaction graph.
It is worth mentioning that our community detection approach for a bipartite
graph differs from the traditional one: we treat the nodes on the two sides of
the bipartite graph as two distinct parties, and utilize the connections
between the two parties to cluster the nodes of one party, while the
traditional approach usually treats all nodes on both sides equally and finds
communities over all of them [4, 1, 61]. Our work on
Ethereum data differs from the above work in five aspects: 1) we analyze the
on-chain data without any off-chain side information; 2) we separately treat
EoAs and CAs as different nodes and put them on two sides of a bipartite
social graph; 3) we target on smart contract transactions which involve ICO
events; 4) we apply a novel low-rank community detection algorithm based on
graph signal processing (GSP) on the bipartite graph; 5) we utilize the
clustering results to deliver on-chain advertisement in Ethereum.
The rest of the paper is organized as follows. In Section II, we show an
example to define the Bitcoin social graph and then apply a novel clustering
method on this graph. Based on this result, Section III demonstrates the
difference between the Bitcoin social graph and the Ethereum social graph. We
then define the bipartite Ethereum social graph and utilize a particular
method to find communities on it. Simulation results are illustrated in
Section IV where both Bitcoin data and Ethereum data are analyzed and
compared. Moreover, we also test the on-chain advertisement for Ethereum based
on the clustering results. We conclude and further discuss with simulation
results in Section V.
## II Bitcoin Transactions Analysis
To start our work, we first introduce how Bitcoin verifies and records
transactions. As shown in Fig. 1, in a Bitcoin network, when one node
initiates a transaction, the transaction information will be signed and
packaged, and then broadcast to other nodes. The nodes that receive the
transaction will verify its legality and then help broadcast the verified
transaction message. During the broadcasting process, some of the receiving
nodes are miners, who not only bear the responsibility of broadcasting
transactions, but also undertake the task of “mining”. The miner who has
successfully mined, by solving a difficult mathematical problem, will get the
right to write the ledger and add all the transactions it has verified. When
most of the nodes in the entire network agree on the same transactions, these
transactions are recorded in the block.
### II-A The Bitcoin Social Network
As mentioned in the previous section, the only purpose of a transaction in
Bitcoin is to transfer BTC coins. To achieve this goal, each user generates a
key pair (represented as the addresses in the transaction) to join the Bitcoin
network and transfer BTC coins based on the so-called unspent transaction
output (UTXO) model[36]. UTXO can be seen as an abstraction of electronic
money, representing a chain of ownership implemented as a chain of digital
signatures. In Fig. 2, we show a basic structure of a Bitcoin transaction
where we can find multiple addresses in the input and output fields. Every
address may contain multiple UTXOs, wherein the UTXOs at the input addresses
are consumed while those at the output addresses are created.
A basic idea to define the Bitcoin social network is to let each address
correspond one-to-one to a node in the social graph. However, a crucial
problem here is that one user usually possesses multiple addresses. Given that
there are so many addresses in the blockchain, the dimensionality of the
social network could be huge. To reduce the size of the graph, we associate
multiple addresses (key pairs) into a super-address in the social graph
whenever 1) they appear as input addresses of the same transaction, or 2) they
belong to Bitcoin users that share a common change address. More details of
these operations will be shown in the numerical experiments. After such
pre-processing, we define a social network where each node corresponds to a
processed super-address; a minimal sketch of this association is given below.
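The sketch illustrates the two association heuristics with a union-find structure (the transaction schema with 'inputs' and a pre-detected 'change' output is hypothetical and for illustration only; real Bitcoin data requires the full heuristics discussed in Section I-A):

```python
class UnionFind:
    """Union-find over Bitcoin addresses; each root is one super-address."""
    def __init__(self):
        self.parent = {}

    def find(self, a):
        self.parent.setdefault(a, a)
        while self.parent[a] != a:
            self.parent[a] = self.parent[self.parent[a]]  # path halving
            a = self.parent[a]
        return a

    def union(self, a, b):
        self.parent[self.find(a)] = self.find(b)

def cluster_addresses(transactions):
    # transactions: iterable of dicts like
    # {'inputs': [...], 'outputs': [...], 'change': addr_or_None}
    uf = UnionFind()
    for tx in transactions:
        ins = tx['inputs']
        # 1) common-spend heuristic: all input addresses share one owner
        for a in ins[1:]:
            uf.union(ins[0], a)
        # 2) change-address heuristic: the detected change output belongs
        #    to the same owner as the inputs
        if tx.get('change') is not None and ins:
            uf.union(ins[0], tx['change'])
    return uf

# toy usage: inputs A, B and change output D collapse into one super-address
txs = [{'inputs': ['A', 'B'], 'outputs': ['C', 'D'], 'change': 'D'}]
uf = cluster_addresses(txs)
print(uf.find('A') == uf.find('B') == uf.find('D'))  # True
```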
Figure 1: The consensus process in the Bitcoin network. Figure 2: The Bitcoin UTXO
model: “inAddr” and “outAddr” are the abbreviations of the input and output
addresses. We use the “transactionHash” to denote the transaction including
these addresses.
Input: a set of nodes $\bm{V}=\\{{v}_{1},\cdots,{v}_{n}\\}$, the number of
clusters $k$, and the weight matrix $\bm{W}$ for all nodes in $\bm{V}$
Step 1: Define $\bm{D}$ to be the diagonal matrix whose $(i,i)$-element is the
sum of $\bm{W}$’s $i$-th row. Letting $\bm{L}=\bm{D}-\bm{W}$, construct a
matrix $\bm{\bar{L}}=\bm{D}^{-1/2}\bm{L}\bm{D}^{-1/2}$.
Step 2: Find $\bm{x}_{1},\bm{x}_{2},...,\bm{x}_{k}$, the eigenvectors of $\bm{\bar{L}}$ associated with its $k$ smallest eigenvalues (equivalently, the $k$ largest eigenvectors of $\bm{D}^{-1/2}\bm{W}\bm{D}^{-1/2}$ as in [39]; chosen to be orthogonal to each other in the case of repeated eigenvalues), and form the matrix $\bm{X}=[\bm{x}_{1},\bm{x}_{2},...,\bm{x}_{k}]$ by stacking the eigenvectors in columns.
Step 3: Form the matrix $\bm{Y}$ from $\bm{X}$ by renormalizing each of $\bm{X}$’s rows to have unit length (i.e., $\bm{Y}_{ij}=\bm{X}_{ij}/(\sum_{j}{\bm{X}^{2}_{ij}})^{1/2}$).
Step 4: Treating each row of $\bm{Y}$ as a point in $\mathbb{R}^{k}$ , cluster
them into $k$ clusters via $k$-means or any other algorithm that attempts to
minimize distortion.
Step 5: Finally, assign the original point $\bm{v}_{i}$ to cluster $j$ if and
only if row $i$ of the matrix $\bm{Y}$ was assigned to cluster $j$.
Output: Partition nodes in $\bm{V}$ into $k$ communities.
Algorithm 1 Clustering the Bitcoin social graph
### II-B Community Detection for Bitcoin
To fully define the graph, we need to specify the edge weight between any two nodes (super-addresses). The edge weight between two nodes could be defined following the criteria in Refs. [19, 43, 22]. Herein, we extract features from the total transaction amount and use them as the edge weights. Then, we run a clustering method, modified from the spectral clustering algorithm [39, 58], on this Bitcoin social graph. Specifically, we denote the social graph by $\bm{G(V,E)}$, where $\bm{V}$ denotes the user nodes $(v_{1},v_{2},...,v_{n})$ and $\bm{E}$ the edges. We let $\bm{W}$ denote the weight matrix whose entry $w_{ij}$ is the edge weight between node $v_{i}$ and node $v_{j}$. In this paper, we let $a_{ij}$ be the historical total transaction amount between node $i$ and node $j$ and set
$w_{ij}=a_{ij}/\max_{i,j}\\{a_{ij}\\}.$
This yields an undirected graph with $w_{ij}=w_{ji}$. We then apply Algorithm 1 to find communities in the graph; a code sketch follows.
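For concreteness, a minimal sketch of Algorithm 1 with numpy and scikit-learn is given below; the toy weight matrix and the numerical tolerances are our own additions, not part of the algorithm.

```python
# A sketch of Algorithm 1, assuming a symmetric matrix A of historical
# transaction amounts a_ij between super-addresses.
import numpy as np
from sklearn.cluster import KMeans

def cluster_bitcoin_graph(A, k):
    W = A / A.max()                               # w_ij = a_ij / max_ij a_ij
    d = W.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(np.maximum(d, 1e-12)))
    L_bar = D_inv_sqrt @ (np.diag(d) - W) @ D_inv_sqrt  # normalized Laplacian
    _, vecs = np.linalg.eigh(L_bar)               # eigenvalues ascending
    X = vecs[:, :k]                               # k smallest eigenvectors (Step 2)
    Y = X / np.maximum(np.linalg.norm(X, axis=1, keepdims=True), 1e-12)  # Step 3
    return KMeans(n_clusters=k, n_init=10).fit_predict(Y)  # Steps 4-5

rng = np.random.default_rng(0)
A = rng.random((30, 30)); A = A + A.T             # toy symmetric amounts
print(cluster_bitcoin_graph(A, k=5))
```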
Remark 1: It is worth noting that the above clustering algorithm is quite similar to the well-known spectral clustering algorithm, except that the similarity matrix is replaced by the weight matrix. This modification is meaningful because we re-define “similarity” to mean that two nodes have a significant transaction relationship, which helps cluster users that interact heavily with each other.
## III The Ethereum Social Network
As a blockchain network, Ethereum differs from Bitcoin in many aspects [9, 59, 3]. In particular, Ethereum is not only a platform for ETH coin transactions, but also a programmable platform that enables users to build and publish distributed applications via smart contracts. In Ethereum, each user generates a pair of asymmetrically encrypted public and private keys to join the network, and each public key can be considered a node in our Ethereum social network. There are two types of accounts: externally-owned accounts (EoAs) and contract accounts (CAs). EoAs correspond to individual users in the external world, while CAs are the contracts that connect EoA users. Both EoAs and CAs are represented by unique hash addresses.
According to these properties of Ethereum, we define its social network as a bipartite graph, where EoA nodes and CA nodes are placed on the two sides of the graph; see Fig. 3. Each EoA node directs its attention to different CA nodes. For example, in Fig. 3 (left), supposing that we have $N$ EoA nodes and $T$ CA nodes in the graph, for an EoA node $i\in\\{1,...,N\\}$ we define ${\bm{x}}_{i}=[x_{i}^{1},x_{i}^{2},...,x_{i}^{T}]$, where $x_{i}^{t}$ is EoA node $i$’s attention on CA node $t$. Herein, the CA nodes could be any smart contract in Ethereum. A typical example is a token created by an ICO event, where a possible choice of ${x}_{i}^{t}$ is user $i$’s transaction amount on token $t$. We remark that in this paper we use ICO events and token transaction amounts as features to define the bipartite graph, while it could actually be any other type of smart contract that connects EoA nodes.
Figure 3: Construct a social network based on different ICO tokens:
${x}_{i}^{t}$ represents user $i$’s balance for token $t$.
1: Input: Graph signals $\\{{\bm{y}}^{t}\\}_{t=1}^{T}$; desired number of clusters $K$.
2: Use $\\{{\bm{y}}^{t}\\}_{t=1}^{T}$ to compute the sample covariance $\hat{\bm{C}}_{x}$ as in (5).
3: Find the $K$ eigenvectors of $\hat{\bm{C}}_{x}$ associated with the $K$ largest eigenvalues, and collect them as $\hat{\bm{P}}_{K}\in\mathbb{R}^{{N}\times K}$.
4: Perform $K$-means clustering[56], which optimizes:
$\min\limits_{{\cal C}_{1},...,{\cal C}_{K}}{\sum_{i=1}^{K}}{\sum_{j\in{\cal C}_{i}}}{\|\hat{\bm{p}}_{j}-\frac{1}{|{\cal{C}}_{i}|}\sum_{q\in{\cal{C}}_{i}}{\hat{\bm{p}}_{q}\|_{2}^{2}}}\quad s.t.\quad{\cal{C}}_{i}\subseteq{\bm{V}}$
where $\hat{\bm{p}}_{j}:=[\hat{\bm{P}}_{K}]_{j,:}\in\mathbb{R}^{K}$. Let the solution be $\hat{\cal C}_{1},...,\hat{\cal C}_{K}$.
5: Output: Partition of ${\bm{V}}$ into $K$ communities, $\hat{\cal C}_{1},...,\hat{\cal C}_{K}$.
Algorithm 2 Community detection from low-rank excitation
### III-A Algorithms and Strategies
Now our purpose is to perform a community detection on this bipartite graph
and group all EoA nodes into clusters. In this subsection, we adopt the low-
rank community detection algorithm in Ref. [57] to cluster the EoA nodes in
the bipartite graph. The idea is to assume that all EoA nodes form a low-rank
social sub-graph, where some lead EoA nodes will decide other nodes’ attention
on CA nodes. In this spirit, we partition the EoA node set of the bipartite
graph into subsets with high edge densities. This could be done by applying a
clustering algorithm on the low-rank output covariance matrix of the observed
graph signal at the EoA nodes. To proceed, we regard this community detection problem as a problem of GSP, wherein the input signal of the graph, ${\bm{z}}\in\mathbb{R}^{R}$, is applied at the EoA lead nodes (see Fig. 3) and passes through a filter ${\cal H}({\bm{S}})$:
$\mathcal{H}(\bm{S}):={\sum}_{\ell=0}^{L-1}h_{\ell}\bm{S}^{\ell}=\bm{V}({\sum}_{\ell=0}^{L-1}h_{\ell}{\bm{\Lambda}}^{\ell})\bm{V}^{H}$
(1)
where $\bm{S}$ is the graph Laplacian matrix, $L$ is the degree of the filter, and $R$ is the number of lead nodes. The output signal
${\bm{x}}\in\mathbb{R}^{N}$ is defined on all the EoA nodes and it is
generated by
${\bm{x}}=\mathcal{H}(\bm{S}){\bm{z}}\;.$ (2)
Herein, ${\bm{V}}$ and ${\bm{\Lambda}}$ come from an eigendecomposition of ${\bm{S}}$. The above equation means that in our graph model, the opinions of the EoA lead nodes determine all nodes’ status. Based on the above model, the graph signal observed at all the EoA nodes can be expressed as
$\bm{y}^{t}=\bm{x}^{t}+\bm{w}^{t}\ \mathrm{and}\
\bm{x}^{t}=\mathcal{H}(\bm{S})\bm{z}^{t},\quad t=1,...T$ (3)
where $\bm{y}^{t}$ is the observation of the graph signal which represents EoA
nodes’ attention on CA node $t$ in our problem setting and
${\bm{w}}^{t}\sim{\cal N}(0,{\sigma}^{2}{\bm{I}})$ is the noise. Notice that the input signal ${\bm{z}}^{t}$ is applied on only a subset of $R$ EoA lead nodes, and thus the number of variations in the excitation signal is limited to $R$ modes. To further exploit the graph structure, we let
$\bm{C}_{z}=\mathbb{E}[\bm{z}^{t}(\bm{z}^{t})^{\mathsf{T}}]=\bm{BB}^{\mathsf{T}},$
(4)
where ${\bm{B}}\in\mathbb{R}^{N\times R}$ with $R<N$. Then, we can recover the community structure in ${\bm{S}}$ by applying Algorithm 2 to the empirical sample covariance of the observed signals $\bm{y}^{t}$:
$\hat{\bm{C}}_{x}=(1/T){\sum}_{t=1}^{T}\bm{y}^{t}(\bm{y}^{t})^{\mathsf{T}},$
(5)
which is an estimate of
$\bm{C}_{x}=\mathcal{H}(\bm{S})\bm{BB}^{\mathsf{T}}\mathcal{H}(\bm{S})^{\mathsf{T}}$.
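A compact sketch of this pipeline (sample covariance of Eq. (5), leading eigenvectors, $K$-means) is given below; the toy data are random and merely stand in for real observations.

```python
# A sketch of Algorithm 2: Y stacks the graph signals y^t column-wise.
import numpy as np
from sklearn.cluster import KMeans

def detect_communities(Y, K):
    """Y: N x T matrix whose t-th column is the signal y^t observed at the
    N EoA nodes. Returns a community label for every EoA node."""
    N, T = Y.shape
    C_hat = (Y @ Y.T) / T                       # sample covariance, Eq. (5)
    _, eigvecs = np.linalg.eigh(C_hat)          # eigenvalues ascending
    P_K = eigvecs[:, -K:]                       # K leading eigenvectors
    return KMeans(n_clusters=K, n_init=10).fit_predict(P_K)

rng = np.random.default_rng(1)
Y = rng.normal(size=(141, 21))                  # toy: 141 users, 21 tokens
print(detect_communities(Y, K=5))
```

Token clustering (Section III-B) then amounts to calling `detect_communities(Y.T, K)`, i.e., swapping the roles of users and tokens.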
An illustration of the algorithm model for Ethereum is depicted in Fig. 4. In practice, $\hat{\bm{C}}_{x}$ can be obtained by observing the graph signals over many instances $t$. In the Ethereum social network, we consider each instance $t$ as one CA node in the bipartite graph: we obtain the graph signal ${\bm{y}}^{t}$ by observing the EoA nodes’ attention on CA node $t$ and use these signals to detect communities of EoA nodes. For example, we could take users’ transaction amounts on different tokens as their attention on those tokens.
Figure 4: An algorithm model for Ethereum.
Figure 5: A flow chart for the recommendation system.
Figure 6: Coin transaction (left) and smart contract transaction (right); Ui denotes User $i$.
### III-B Group Tokens by Users Subscription
In the previous discussion, we showed how to find communities of EoA nodes using graph signal processing. This process can also be reversed to cluster tokens by users’ subscriptions. That is, we exchange the positions of the EoA nodes and CA nodes and observe the graph signal at each CA node, which represents all EoA nodes’ attention on that specific CA node. This amounts to transposing the data matrix behind $\hat{\bm{C}}_{x}$ in Algorithm 2 (so the covariance is taken over tokens rather than users) and performing the same clustering method on the new $\hat{\bm{C}}_{x}$. Numerical results are shown in Section IV-B.
To close this part, two remarks are in order. First, the relationship between user clustering and token clustering is analogous to that between user-based and item-based collaborative filtering. Second, the token clustering result can also be used to recommend tokens to EoA nodes; a flow chart for the recommendation system is shown in Fig. 5. We introduce the detailed recommendation process in the next subsection.
### III-C Advertisement Strategies
The proposed community detection algorithms can help find EoA users sharing the same interests in CA nodes (by user clustering), as well as CA nodes favored by groups of EoA nodes (by token clustering). We then discuss 1) how to deliver advertisements to a user whose community members have shown interest in a specific token, and 2) how to recommend other tokens to a user who has shown interest in a specific token. Specifically, we resort to the “InputData” field in the transaction script to serve this purpose. In Fig. 6, we show two types of transactions in Ethereum: Transaction $1$ is an ETH coin transaction (left) and Transaction $2$ is a smart contract transaction (right). Both carry an “InputData” field in the script, which can be used to run functions or send messages. We therefore design two on-chain advertisement strategies. One approach is to send a small amount of ETH (possibly zero) to the target user and attach a recommendation message in the “InputData” field of this coin transaction. This can only be done in a one-to-one manner, and one has to spend some gas to send the message. The other approach is similar to the so-called “airdrop” [33, 14], wherein a new ICO project distributes part of its tokens for free to a community to advertise the project. An “airdrop” can be done via a smart contract in a group-message manner, and no extra ETH is consumed beyond the gas. Notice that to successfully deliver the message, one has to negotiate with the wallet company to register the token in the target user’s list; otherwise, users cannot see the newly added token or the advertisement message. We remark that our design differs from the original “airdrop” in two aspects: 1) we resort to community detection to target potential users; 2) we utilize the “InputData” field to send the advertisement message. A hedged code sketch of the first strategy follows.
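As a sketch of the first strategy (the experiments in Section IV use MEW instead), the transaction below carries the advertisement in its input data via web3.py; the RPC endpoint, addresses, and private key are placeholders, and the snake_case API assumes web3.py v6.

```python
# A minimal sketch: a zero-value Ropsten transaction whose "InputData"
# field carries the advertisement (all identifiers are placeholders, and
# the sending account must be funded to pay for gas).
from web3 import Web3

w3 = Web3(Web3.HTTPProvider("https://ropsten.example/rpc"))   # placeholder RPC
sender, key = "0xSenderAddress", "0xPrivateKey"                # placeholders
target = "0xTargetUserAddress"                                 # placeholder

tx = {
    "to": target,
    "value": 0,                  # a zero-value transfer just carries the message
    "gas": 60_000,
    "gasPrice": w3.to_wei(10, "gwei"),
    "nonce": w3.eth.get_transaction_count(sender),
    "chainId": 3,                # Ropsten
    # The advertisement goes into the transaction's input data, hex-encoded.
    "data": Web3.to_hex(text="New token XYZ: see example.org"),
}
signed = w3.eth.account.sign_transaction(tx, private_key=key)
print(w3.eth.send_raw_transaction(signed.rawTransaction).hex())
```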
Figure 7: Graph representation after associating the common-spend addresses.
Figure 8: Bitcoin clustering: (Left) The $k$-var curve, where the number of clusters is found by the Elbow method; (Right) ${\bm{t}}$-SNE clustering results.
## IV Dataset and Numerical Results
In this section, numerical results are provided for both Bitcoin data and Ethereum data. The data sets are downloaded from the actual blockchain systems, and we also utilize some existing pre-processed results based on real blockchain data.
### IV-A Numerical Results for the Bitcoin Data
The Bitcoin data we studied comes from the website http://vo.elte.hu/bitcoin, where the raw Bitcoin data is processed and compressed into several documents; more details can be found in Ref. [16]. In our experiment, we used the document “txhash.txt”, a list of transaction-ID (indexed by the website) and hash pairs recording the hash of each transaction in chronological order. We intercepted the block data from block number $250,000$ to $252,000$, whose time interval is between “2013-08-03 12:36:23” and “2013-08-13 18:11:30”. By searching transaction hashes in “txhash.txt”, we found that these transaction IDs range from $21,490,941$ to $22,003,698$. With these IDs, we can search the documents “txin.txt” and “txout.txt” to find the input and output addresses. Herein, “txin.txt” records each transaction’s input addresses with the amount of Satoshis (the satoshi is currently the smallest unit of the bitcoin currency recorded on the blockchain), and “txout.txt” records each transaction’s output addresses with the amount of Satoshis. Within this time interval, we extracted in total $512,756$ transactions involving $515,765$ addresses. Note that at this stage, the addresses may be duplicated.
Our next step is to pre-process the data by associating addresses using the “common spend” and “change address” heuristics. For “common spend”, we utilized the data set “contraction.txt” from the website http://vo.elte.hu/bitcoin, which lists addresses possibly belonging to the same user. The underlying assumption is that any two addresses appearing as inputs of the same transaction at least once belong to the same “user”. After this processing, we have in total $132,431$ transactions and $65,811$ identified unique users (super-addresses). To better analyze the details of some key users, we assume that each “change address” is used rarely, and we therefore eliminate the addresses whose number of occurrences (appearances in transaction inputs or outputs) is less than $30$. This step significantly reduces the size of the graph, to $3930$ transactions and $279$ users. Notice that some of the users are combinations of several common-spend addresses, and others are change addresses that appeared in the outputs of transactions but never in the inputs. In Fig. 7, we plot the graph representation using the Gephi software [24], where the continuous graph layout algorithm ForceAtlas 2 is adopted to visualize the graph. Note that here we do not use any features to define the edge weights: the edge weight is either ‘0’ (no transaction) or ‘1’ (with transaction). The plot shows that after associating common spends, the graph is well clustered, while the dimension of the graph is still large.
Figure 9: How noise affects the recovery of the communities.
Figure 10: Number of instances versus the recovery accuracy.
We then evaluate and visualize the community detection results based on the social graph we defined for the Bitcoin data. First, we run the Elbow method of Ref. [55] to determine the optimal number of clusters $k$. Fig. 8 (left) shows that the “inflection point” is $k=5$, so we take $5$ clusters in our example. Interestingly, this is roughly consistent with the results in Fig. 7, although the edge weights are defined differently. We then run Algorithm 1 under a random initialization with $k=5$ and find that the numbers of nodes in the clusters are $101$, $78$, $60$, $27$, and $13$, respectively. A $2$D visualization of the clustering results is shown in Fig. 8 (right), where a machine learning algorithm called ${\bm{t}}$-SNE [28] is used for nonlinear dimensionality reduction. The results show that the target users are indeed clustered in the compressed $2$D space. We use two scores to evaluate the community results. One is the Silhouette score, which combines cohesion and separation to evaluate clustering results [47]. The other is the Modularity score, which is commonly used to measure the structural quality of network communities [38]. In this experiment, the Silhouette score is $0.5871$ and the Modularity score is $0.3733$. The Silhouette score takes values in $[-1.0,1.0]$ and the Modularity score in $[-0.5,1.0]$; the closer the two scores are to $1$, the better the quality of the network partition, and a Modularity score around $0.3\sim 0.7$ is considered a good clustering result [37]. We track the nodes of the gambling website and find that the gambling website nodes, together with the other $55$ nodes that had transactions with them, are all clustered into the same cluster, which has $101$ nodes in total. A sketch of computing the two scores follows.
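A sketch of computing the two scores with scikit-learn and networkx is given below; the random embedding and weight matrix merely stand in for the real spectral embedding and Bitcoin weight matrix.

```python
# A sketch: Silhouette score on the embedded rows, Modularity score on the
# weighted graph, for the same cluster labels.
import numpy as np
import networkx as nx
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score
from networkx.algorithms.community import modularity

rng = np.random.default_rng(3)
Y = rng.random((60, 5))                          # toy spectral embedding
labels = KMeans(n_clusters=5, n_init=10).fit_predict(Y)
print("Silhouette:", silhouette_score(Y, labels))

M = rng.random((60, 60)); M = (M + M.T) / 2      # toy symmetric weights
G = nx.from_numpy_array(M)
parts = [set(map(int, np.flatnonzero(labels == c))) for c in range(5)]
print("Modularity:", modularity(G, parts))
```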
### IV-B Numerical Results for the Ethereum Data
#### IV-B1 Synthetic data test
To verify our model, we first generate synthetic data to test how the proposed method works on a known graph with a given input signal. Specifically, a graph ${\cal G}(N,K,P_{a},P_{b})$ is generated, where $N$ is the number of nodes, $K$ is the number of communities, $P_{a}$ is the probability of node connection within a community, and $P_{b}$ is the probability of node connection between communities. We then define the graph filter as $\mathcal{H}(\bm{S})=(\bm{I}-\alpha{\bm{S}})^{L-1}$, where ${\bm{S}}$ is the Laplacian matrix of ${\cal G}$ and $L$ is the order of the graph filter. The graph signal ${\bm{y}}^{t}$ is then generated following (3), where we set ${\bm{z}}={\bm{B}}{\bm{\alpha}}$ with ${\bm{B}}$ having $R$ non-zero rows, each containing $[{Rd/N}]$ ones, where $d$ is the degree of the graph; we draw ${\bm{\alpha}}\sim{\cal N}(0,{\bm{I}})$ to obtain different instances. Given the structure of the graph and the input signal ${\bm{z}}$, we use three metrics to evaluate the recovery of the graph: the Silhouette score and Modularity score defined before, and the recovery rate, defined as the percentage of nodes assigned to the correct cluster. A sketch of the data generation is given below.
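The sketch below generates synthetic data of this kind; the planted-partition generator, the filter normalization $\alpha=1/\lambda_{\max}(\bm{S})$, and the placement of the ones in $\bm{B}$ are our own concrete choices within the description above.

```python
# A sketch of the synthetic test: planted-partition graph, low-pass filter
# H(S) = (I - a*S)^(L-1), and noisy filtered signals Y for Algorithm 2.
import numpy as np
import networkx as nx

N, K, R, T, sigma = 150, 5, 15, 1000, 0.01
G = nx.planted_partition_graph(K, N // K, p_in=0.89, p_out=0.11, seed=0)
S = nx.laplacian_matrix(G).toarray().astype(float)

a, L = 1.0 / np.linalg.eigvalsh(S).max(), 5          # keep the filter stable
H = np.linalg.matrix_power(np.eye(N) - a * S, L - 1)

rng = np.random.default_rng(0)
B = np.zeros((N, R))
lead = rng.choice(N, size=R, replace=False)          # R lead nodes
B[lead, np.arange(R)] = 1.0                          # one excitation per lead node

# y^t = H(S) B alpha^t + w^t, stacked column-wise, as in Eq. (3)
Y = H @ B @ rng.normal(size=(R, T)) + sigma * rng.normal(size=(N, T))
```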
Fig. 9 shows how noise in the observed graph signal affects the recovery accuracy. Herein, we set $N=150,P_{a}=0.89,P_{b}=0.11,R=15$ and generate $1000$ instances as the input signal, varying the noise variance to observe the recovery performance. The results show that in the noise-free case we recover the communities exactly, while the recovery accuracy deteriorates as the noise level increases. In Fig. 10, we set $N=150,P_{a}=0.8,P_{b}=0.2,R=15$ and assume the noiseless case, varying the number of instances of the input signal to see how it affects the recovery accuracy. The results show that we need to observe sufficiently many instances to recover the community information. The synthetic data tests in Fig. 9 and Fig. 10 demonstrate that the proposed model can find communities provided that the noise in the observation model is not too heavy and sufficiently many instances are available. Motivated by this, in the real data test we utilize user-token subscription information in the Ethereum network to find the EoA users’ communities.
Figure 11: User clustering: (Left) The $k$-var curve, where the number of user clusters is found by the Elbow method; (Right) ${\bm{t}}$-SNE clustering results.
Figure 12: Token clustering: (Left) The $k$-var curve, where the number of token clusters is found by the Elbow method; (Right) ${\bm{t}}$-SNE clustering results.
Figure 13: View advertisements in Ropsten. These two transactions can be
viewed at links:
(Left)
https://ropsten.etherscan.io/tx/0x51546492d2d778b6821deba98fa8de30b5b6c2c0681130385f52de74ac97584e,
(Right)
https://ropsten.etherscan.io/tx/0x290f57a7c26fed2fdcb2b2c56b3eddb0458286449b1ac0051730f75c979b4079.
#### IV-B2 Real data test
In this part, we focus on the token transactions in the Ethereum network to show a toy example of the data processing. To set up a meaningful example, we screen the user-token pairs in three steps. In step one, we pick the $20$ top market-capitalization tokens from the website https://etherscan.io on July 1st, 2019, and record the top $100$ user addresses of each token. In step two, we extract the aforementioned users’ transactions and record all the tokens they own at the current moment (many more than the $20$ tokens mentioned above). After this operation, there are $141$ users with $1837$ tokens in total. Then, in the last step, to maintain an appropriate level of attention for each token, we first delete those users who have subscribed to fewer than $20$ tokens and then delete those tokens which have been subscribed to by fewer than $60$ EoA users. In the end, we have selected $141$ users who focus on $21$ tokens to form the bipartite graph. This graph corresponds to a user-token matrix ${\bm{A}}\in\mathbb{R}^{141\times 21}$ with entry ${A}_{ij}$ denoting user $i$’s transaction amount on token $j$. To further process ${\bm{A}}$, we convert the token values to ETH values by multiplying by their exchange rates to ETH on July 1st, 2019. Therefore, in the bipartite graph, the edge weight between a user and a token is the amount of ETH value this user owns in this token at the current moment. We remark that this procedure can be applied to much larger data sets, so that a large graph can be analyzed in one shot; here we focus on a small graph so that we can describe the process more clearly. A sketch of assembling and normalizing ${\bm{A}}$ follows.
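A sketch of assembling and normalizing $\bm{A}$ is given below; the record layout (one row per user-token holding, already converted to ETH value) and column names are hypothetical.

```python
# A sketch (hypothetical column names) of building the user-token matrix A
# and applying the row and column normalizations used before Algorithm 2.
import numpy as np
import pandas as pd

df = pd.DataFrame({                      # toy records: (user, token, ETH value)
    "user":   ["u1", "u1", "u2", "u3"],
    "token":  ["t1", "t2", "t1", "t2"],
    "amount": [3.0, 1.0, 2.0, 5.0],
})
A = df.pivot_table(index="user", columns="token", values="amount",
                   aggfunc="sum", fill_value=0.0).to_numpy()
A = A / np.maximum(A.sum(axis=1, keepdims=True), 1e-12)   # row normalization
A = A / np.maximum(A.sum(axis=0, keepdims=True), 1e-12)   # column normalization
```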
$\bullet$ User clustering: After a row normalization and a column normalization of the matrix ${\bm{A}}$, we apply Algorithm 2 to perform community detection. In particular, the Elbow method is used to determine the optimal number of clusters [55]. Fig. 11 (left) shows that the “inflection point” is $k=5$, so we take $5$ clusters in our toy example. Fig. 11 (right) shows a $2$D visualization of the clustering results, where ${\bm{t}}$-SNE is used for nonlinear dimensionality reduction. We also calculate the Silhouette score and the Modularity score for this case, which are $0.6586$ and $0.5999$, respectively.
$\bullet$ Token clustering: In the user clustering process above, the graph signals $\\{{\bm{y}}^{t}\\}_{t=1}^{T}$, ${\bm{y}}^{t}\in\mathbb{R}^{N}$, represent the subscription of all users to token $t$ $(t=1,...,T)$ and can be considered features of the EoA users. To proceed with token clustering, we instead define signals ${\bm{y}}^{n}\in\mathbb{R}^{M}$ $(n=1,...,N)$, interpreted as user $n$’s subscription to all $M$ tokens. We pre-process the data in a similar way as in the user clustering process and delete the tokens which are not subscribed to by any user. The new bipartite graph is thus based on the token-user matrix ${\bm{A}}\in\mathbb{R}^{1811\times 141}$. The sample covariance $\hat{\bm{C}}_{x}$ is again calculated as in (5), and we apply Algorithm 2 to cluster the tokens. Through the Elbow method, Fig. 12 (left) shows that the “inflection point” of the token-user graph is $k=9$, so we take $9$ clusters in the token-user graph. Using ${\bm{t}}$-SNE [28], the clustering results are shown in Fig. 12 (right): the target CA tokens are clustered into $9$ groups. The Silhouette score and Modularity score are $0.5517$ and $0.5027$, respectively. We remark that in both the user clustering and token clustering ${\bm{t}}$-SNE results, most of the nodes are well clustered, while a few nodes are still wrongly assigned, owing to the noise in the observation model.
#### IV-B3 Implementation of the On-chain Advertisement
In this part, we show the implementation of the two on-chain advertisement strategies. Our experiment is done on the Ropsten test net, a testing environment for Ethereum. We used the MEW module (cf. https://www.myetherwallet.com/) to build transactions, the Remix module (cf. https://remix.ethereum.org/) to deploy contracts, and the website https://etherscan.io to view the blocks.
In the left of Fig. 13, we show how to deliver an advertisement in an ETH transaction. We use MEW to build the transaction directly while adding an advertisement message in the input field; note that to place the message we need to convert the string into hexadecimal. The website https://etherscan.io shows the message in the block. This block is synchronized to the users’ wallets, and the wallet pushes the message to the target user. Delivering an advertisement via smart contract is shown in the right screen of Fig. 13. There, a smart contract transaction is generated by the ICO initiator to send a message to a group of users. In this approach, the ICO initiator needs to cooperate with the wallet company to register their token address, as well as to push the valid advertisement message to target users. Experiments show that both strategies are valid in the testing environment.
## V Conclusion
In this work, we have considered community detection on blockchain networks, studying the Bitcoin and Ethereum networks respectively. For Bitcoin, we defined the social network based on transactions and proposed a modified clustering method for the transaction graph. For Ethereum, a bipartite social graph was defined and a novel low-rank clustering method was adopted to cluster users in this graph. We implemented both methods on real blockchain data and visualized and analyzed the community results. We also demonstrated strategies for delivering on-chain advertisements in the Ethereum network. Our work verifies the feasibility of applying community detection to different blockchain networks, provided that the observation model is not too noisy and sufficient data is available. How to reduce the effect of heavy noise is our next task.
## References
* [1] T. Alzahrani and K. J. Horadam, “Community detection in bipartite networks: Algorithms and case studies,” in _Complex systems and networks_. Springer, 2016, pp. 25–50.
* [2] E. Androulaki, G. O. Karame, M. Roeschlin, T. Scherer, and S. Capkun, “Evaluating user privacy in bitcoin,” in _International Conference on Financial Cryptography and Data Security_. Springer, 2013, pp. 34–51.
* [3] A. M. Antonopoulos and G. Wood, _Mastering ethereum: building smart contracts and dapps_. O’Reilly Media, 2018.
* [4] M. J. Barber, “Modularity and community detection in bipartite networks,” _Physical Review E_ , vol. 76, no. 6, p. 066102, 2007.
* [5] S. Barber, X. Boyen, E. Shi, and E. Uzun, “Bitter to better—how to make bitcoin a better currency,” in _International Conference on Financial Cryptography and Data Security_. Springer, 2012, pp. 399–414.
* [6] A. Biryukov and I. Pustogarov, “Bitcoin over tor isn’t a good idea,” in _2015 IEEE Symposium on Security and Privacy_. IEEE, 2015, pp. 122–134.
* [7] J. Bonneau, A. Narayanan, A. Miller, J. Clark, J. A. Kroll, and E. W. Felten, “Mixcoin: Anonymity for bitcoin with accountable mixes,” in _International Conference on Financial Cryptography and Data Security_. Springer, 2014, pp. 486–504.
* [8] E. Brinckman, A. Kuehlkamp, J. Nabrzyski, and I. J. Taylor, “Techniques and applications for crawling, ingesting and analyzing blockchain data,” in _2019 International Conference on Information and Communication Technology Convergence (ICTC)_. IEEE, 2019, pp. 717–722.
* [9] V. Buterin _et al._ , “A next-generation smart contract and decentralized application platform,” _white paper_ , vol. 3, p. 37, 2014.
* [10] W. Chan and A. Olmsted, “Ethereum transaction graph analysis,” in _2017 12th International Conference for Internet Technology and Secured Transactions (ICITST)_. IEEE, 2017, pp. 498–500.
* [11] T.-H. Chang and D. Svetinovic, “Improving bitcoin ownership identification using transaction patterns analysis,” _IEEE Transactions on Systems, Man, and Cybernetics: Systems_ , 2018.
* [12] M. Conoscenti, A. Vetro, and J. C. De Martin, “Blockchain for the internet of things: A systematic literature review,” in _2016 IEEE/ACS 13th International Conference of Computer Systems and Applications (AICCSA)_. IEEE, 2016, pp. 1–6.
* [13] M. Conti, E. S. Kumar, C. Lal, and S. Ruj, “A survey on security and privacy issues of bitcoin,” _IEEE Communications Surveys & Tutorials_, vol. 20, no. 4, pp. 3416–3452, 2018.
* [14] M. Di Angelo and G. Salzer, “Collateral use of deployment code for smart contracts in ethereum,” in _2019 10th IFIP International Conference on New Technologies, Mobility and Security (NTMS)_. IEEE, 2019.
* [15] J. DuPont and A. C. Squicciarini, “Toward de-anonymizing bitcoin by mapping users location,” in _Proceedings of the 5th ACM Conference on Data and Application Security and Privacy_. ACM, 2015, pp. 139–141.
* [16] H. Eotvos Lorand University, Budapest. ELTE bitcoin project website and resources. http://vo.elte.hu/bitcoin. Accessed Jan 10, 2020.
* [17] D. Ermilov, M. Panov, and Y. Yanovich, “Automatic bitcoin address clustering,” in _2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA)_. IEEE, 2017, pp. 461–466.
* [18] M. Fleder, M. S. Kester, and S. Pillai, “Bitcoin transaction graph analysis,” _arXiv preprint arXiv:1502.01657_ , 2015.
* [19] D. Goldsmith, K. Grauer, and Y. Shmalo, “Analyzing hack subnetworks in the bitcoin transaction graph,” _arXiv preprint arXiv:1910.13415_ , 2019.
* [20] Y. Guo and C. Liang, “Blockchain application and outlook in the banking industry,” _Financial Innovation_ , vol. 2, no. 1, p. 24, 2016.
* [21] M. Harrigan and C. Fretter, “The unreasonable effectiveness of address clustering,” in _2016 Intl IEEE Conferences on Ubiquitous Intelligence & Computing, Advanced and Trusted Computing, Scalable Computing and Communications, Cloud and Big Data Computing, Internet of People, and Smart World Congress (UIC/ATC/ScalCom/CBDCom/IoP/SmartWorld)_. IEEE, 2016, pp. 368–373.
* [22] D. Hossain, “Community detection and observation in large-scale transaction-based networks,” Ph.D. dissertation, Technische Universität Berlin, 2018.
* [23] T. H.-D. Huang, P.-W. Hong, Y.-T. Lee, Y.-L. Wang, C.-L. Lok, and H.-Y. Kao, “Soc: hunting the underground inside story of the ethereum social-network opinion and comment,” _arXiv preprint arXiv:1811.11136_ , 2018.
* [24] M. Jacomy, T. Venturini, S. Heymann, and M. Bastian, “ForceAtlas2, a continuous graph layout algorithm for handy network visualization designed for the Gephi software,” _PloS one_ , vol. 9, no. 6, p. e98679, 2014.
* [25] D. Kaminsky, “Black ops of tcp/ip,” _Black Hat USA_ , vol. 44, 2011.
* [26] R. Klusman and T. Dijkhuizen, “Deanonymisation in ethereum using existing methods for bitcoin,” 2018.
* [27] S. Linoy, N. Stakhanova, and A. Matyukhina, “Exploring ethereum’s blockchain anonymity using smart contract code attribution.”
* [28] L. v. d. Maaten and G. Hinton, “Visualizing data using t-sne,” _Journal of machine learning research_ , vol. 9, no. Nov, pp. 2579–2605, 2008.
* [29] G. Maxwell, “Anonymity of bitcoin transactions: An analysis of mixing services,” in _Proceedings of Münster Bitcoin Conference_ , 2013, pp. 17–18.
* [30] ——, “Coinjoin: Bitcoin privacy for the real world,” in _Post on Bitcoin forum_ , 2013.
* [31] ——, “Coinswap: Transaction graph disjoint trustless trading,” October 2013.
* [32] S. Meiklejohn, M. Pomarole, G. Jordan, K. Levchenko, D. McCoy, G. M. Voelker, and S. Savage, “A fistful of bitcoins: characterizing payments among men with no names,” in _Proceedings of the 2013 conference on Internet measurement conference_. ACM, 2013, pp. 127–140.
* [33] B. Mills, “X2-ventures.”
* [34] M. H. Miraz and M. Ali, “Applications of blockchain technology beyond cryptocurrency,” _arXiv preprint arXiv:1801.03528_ , 2018.
* [35] J. V. Monaco, “Identifying bitcoin users by transaction behavior,” in _Biometric and Surveillance Technology for Human and Activity Identification XII_ , vol. 9457. International Society for Optics and Photonics, 2015, p. 945704.
* [36] S. Nakamoto, “Bitcoin: A peer-to-peer electronic cash system,” 2008.
* [37] M. E. Newman, “Fast algorithm for detecting community structure in networks,” _Physical review E_ , vol. 69, no. 6, p. 066133, 2004.
* [38] ——, “Modularity and community structure in networks,” _Proceedings of the national academy of sciences_ , vol. 103, no. 23, pp. 8577–8582, 2006.
* [39] A. Y. Ng, M. I. Jordan, and Y. Weiss, “On spectral clustering: Analysis and an algorithm,” in _Advances in neural information processing systems_ , 2002, pp. 849–856.
* [40] R. Norvill, B. B. F. Pontiveros, R. State, I. Awan, and A. Cullen, “Automated labeling of unknown contracts in ethereum,” in _2017 26th International Conference on Computer Communication and Networks (ICCCN)_. IEEE, 2017, pp. 1–6.
* [41] J. Payette, S. Schwager, and J. Murphy, “Characterizing the ethereum address space,” 2017.
* [42] M. Petrov, “Identification of unusual wallets on ethereum platform,” 2019.
* [43] T. Pham and S. Lee, “Anomaly detection in bitcoin network using unsupervised learning methods,” _arXiv preprint arXiv:1611.03941_ , 2016.
* [44] F. Reid and M. Harrigan, “An analysis of anonymity in the bitcoin system,” in _Security and privacy in social networks_. Springer, 2013, pp. 197–223.
* [45] C. Remy, B. Rym, and L. Matthieu, “Tracking bitcoin users activity using community detection on a network of weak signals,” in _International conference on complex networks and their applications_. Springer, 2017, pp. 166–177.
* [46] D. Ron and A. Shamir, “Quantitative analysis of the full bitcoin transaction graph,” in _International Conference on Financial Cryptography and Data Security_. Springer, 2013, pp. 6–24.
* [47] P. J. Rousseeuw, “Silhouettes: a graphical aid to the interpretation and validation of cluster analysis,” _Journal of computational and applied mathematics_ , vol. 20, pp. 53–65, 1987.
* [48] S. Saberi, M. Kouhizadeh, J. Sarkis, and L. Shen, “Blockchain technology and its relationships to sustainable supply chain management,” _International Journal of Production Research_ , vol. 57, no. 7, pp. 2117–2135, 2019.
* [49] E. B. Sasson, A. Chiesa, C. Garman, M. Green, I. Miers, E. Tromer, and M. Virza, “Zerocash: Decentralized anonymous payments from bitcoin,” in _2014 IEEE Symposium on Security and Privacy_. IEEE, 2014, pp. 459–474.
* [50] Q. ShenTu and J. Yu, “Research on anonymization and de-anonymization in the bitcoin system,” _arXiv preprint arXiv:1510.07782_ , 2015.
* [51] ——, “Transaction remote release (trr): A new anonymization technology for bitcoin,” _arXiv preprint arXiv:1509.06160_ , 2015.
* [52] S. Somin, G. Gordon, and Y. Altshuler, “Social signals in the ethereum trading network,” _arXiv preprint arXiv:1805.12097_ , 2018.
* [53] M. Spagnuolo, F. Maggi, and S. Zanero, “Bitiodine: Extracting intelligence from the bitcoin network,” in _International Conference on Financial Cryptography and Data Security_. Springer, 2014, pp. 457–468.
* [54] H. Sun, N. Ruan, and H. Liu, “Ethereum analysis via node clustering,” in _International Conference on Network and System Security_. Springer, 2019, pp. 114–129.
* [55] R. L. Thorndike, “Who belongs in the family?” _Psychometrika_ , vol. 18, no. 4, pp. 267–276, 1953.
* [56] U. Von Luxburg, “A tutorial on spectral clustering,” _Statistics and computing_ , vol. 17, no. 4, pp. 395–416, 2007.
* [57] H.-T. Wai, S. Segarra, A. E. Ozdaglar, A. Scaglione, and A. Jadbabaie, “Community detection from low-rank excitations of a graph filter,” in _2018 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)_. IEEE, 2018, pp. 4044–4048.
* [58] L. Wang, S. Ding, and H. Jia, “An improvement of spectral clustering via message passing and density sensitive similarity,” _IEEE Access_ , vol. 7, pp. 101 054–101 062, 2019.
* [59] G. Wood _et al._ , “Ethereum: A secure decentralised generalised transaction ledger,” _Ethereum project yellow paper_ , vol. 151, no. 2014, pp. 1–32, 2014.
* [60] Y. Yanovich, P. Mischenko, and A. Ostrovskiy, “Shared send untangling in bitcoin,” _The Bitfury Group white paper_ , 2016.
* [61] C. Zhou, L. Feng, and Q. Zhao, “A novel community detection method in bipartite networks,” _Physica A: Statistical Mechanics and its Applications_ , vol. 492, pp. 1679–1693, 2018.
* [62] J. Zhu, P. Liu, and L. He, “Mining information on bitcoin network data,” in _2017 IEEE International Conference on Internet of Things (iThings) and IEEE Green Computing and Communications (GreenCom) and IEEE Cyber, Physical and Social Computing (CPSCom) and IEEE Smart Data (SmartData)_. IEEE, 2017, pp. 999–1003.
Sissi Xiaoxiao Wu received the B.Eng. degree in electronic information engineering from the Huazhong University of Science and Technology, Wuhan, China, in 2005, the M.Phil. degree from the Department of Electronic and Computer Engineering, Hong Kong University of Science and Technology, Hong Kong, in 2009, and the Ph.D. degree in electronic engineering from the Chinese University of Hong Kong (CUHK), Hong Kong, in 2013. From December 2013 to November 2015, she was a Postdoctoral Fellow in the Department of Systems Engineering and Engineering Management, CUHK. From December 2015 to March 2017, she was a Postdoctoral Fellow in the Signal, Information, Networks and Energy Laboratory supervised by Prof. A. Scaglione at Arizona State University, Tempe, AZ, USA. Since March 2017, she has been an Assistant Professor at the Department of Communication and Information Engineering, Shenzhen University, Shenzhen, China. Her research interests are in wireless communication theory, optimization theory, stochastic processes, and channel coding theory.
Zixian Wu is currently pursuing an M.Eng. degree in communication and information engineering at Shenzhen University. He received his B.Eng. degree in electronic and information engineering from Shenzhen University, China, in 2019. His research interests are in machine learning and data mining.
Shihui Chen is currently pursuing an M.Eng. degree in Electronics and Communication Engineering at Shenzhen University. He received his B.Eng. degree in measurement and control technology and instruments from the Nanchang Institute of Technology, Nanchang, China, in 2017. His research interests are in data mining and blockchain technology.
Gangqiang Li is currently pursuing a Ph.D. degree in communication and information engineering at Shenzhen University. He received his B.Eng. degree in electronic engineering from the Henan University of Urban Construction, Pingdingshan, China, in 2014, and the M.Eng. degree in Control Science and Engineering from Shenzhen University, Shenzhen, China, in 2017. His research interests are in machine learning, data mining, and distributed protocols.
Shengli Zhang [corresponding author] received the B.Eng. degree in electronic engineering and the M.Eng. degree in communication and information engineering from the University of Science and Technology of China, Hefei, China, in 2002 and 2005, respectively, and the Ph.D. degree from The Chinese University of Hong Kong, Hong Kong, China, in 2008. He then joined the Communication Engineering Department, Shenzhen University, where he is currently a Full Professor. From March 2014 to March 2015, he was a Visiting Associate Professor with Stanford University. He is the pioneer of physical-layer network coding. He has authored or coauthored more than 20 IEEE top journal papers and ACM top conference papers, including IEEE JSAC, IEEE TWC, IEEE TMC, IEEE TCom, and ACM MobiCom. His research interests include physical-layer network coding, interference cancellation, and cooperative wireless networks. He served as an Editor for the IEEE TVT, IEEE WCL, and IET Communications. He has also served as a TPC member for several IEEE conferences, including IEEE Globecom 2016, Globecom 2014, ICC 2015, ICC 2014, WCNC 2012, and WCNC 2014.
# Four-dimensional Bloch sphere representation of qutrits using Heisenberg-Weyl Operators
Gautam Sharma<EMAIL_ADDRESS>Optics and Quantum Information Group,
Institute of Mathematical Sciences,
HBNI, CIT Campus, Taramani, Chennai 600113, India Sibasish Ghosh
<EMAIL_ADDRESS>Optics and Quantum Information Group, Institute of
Mathematical Sciences,
HBNI, CIT Campus, Taramani, Chennai 600113, India
###### Abstract
In the Bloch sphere based representation of qudits of dimension greater than two, the Heisenberg-Weyl operator basis is not preferred because of the presence of complex Bloch vector components. We address this issue and parametrize a qutrit using the Heisenberg-Weyl operators by identifying eight real parameters, separated into four weight and four angular parameters. The four weight parameters correspond to the weights in front of the four mutually unbiased basis sets formed by the eigenbases of the Heisenberg-Weyl observables, and they form a four-dimensional Bloch sphere of unit radius. The analysis of purity, rank, orthogonality, the mutual unbiasedness conditions, and the distance between two qutrit states inside the sphere indicates that it is a natural extension of the qubit Bloch sphere. However, unlike the qubit Bloch sphere, the four-dimensional sphere is not solid. We also analyze the two- and three-dimensional sections, which give a non-convex but closed structure for physical qutrit states inside the sphere. Significantly, we apply our representation to find mutually unbiased bases (MUBs), to characterize unital maps in three dimensions, and to characterize the ensembles generated using the Hilbert-Schmidt and Bures metrics. Lastly, we give a basic idea of how to extend this approach to higher dimensions.
## I Introduction
A density matrix representing the state of a finite-dimensional quantum system of dimension $d$ is a $d\times d$ matrix satisfying the conditions of positive semi-definiteness, hermiticity, and unit trace. In general, it is difficult to study the properties of a density matrix directly. Parametrizations of the density matrix provide a simple method to study its properties and to use it in solving various problems of physics.
One of the most famous representations of density matrices uses the Bloch vector parametrization, which is extremely popular because of its simplicity in representing a qubit and its various applications; see Refs. Bertlmann and Krammer (2008); Fano (1983); Brüning _et al._ (2012). A qubit can be uniquely represented by a three-dimensional vector in the Bloch sphere, so that every point inside the Bloch sphere corresponds to a physical qubit state. This provides a simple method not only to represent qubit states but also to identify the dynamics of the qubit; for example, all rotations of the Bloch sphere correspond to unitary operations. However, such an extension of all the beautiful properties of the qubit Bloch sphere is not completely possible for higher-dimensional states.
It is known that $d^{2}-1$ parameters are needed to characterize a $d$-level density matrix $\rho$ Bertlmann and Krammer (2008). Most works so far have used the Gell-Mann operator basis to characterize qudits, as it admits real numbers as the Bloch vector elements. However, the structure of the $d^{2}-1$ dimensional solid corresponding to valid quantum states is extremely complex, even in three dimensions Kimura (2003); Kimura and Kossakowski (2004); Bertlmann and Krammer (2008); Kryszewski and Zachcial (2006); Mendaš (2006); Goyal _et al._ (2016); Eltschka _et al._ (2021). Moreover, not all points inside the $d^{2}-1$ dimensional sphere represent physical quantum states. A consequence is that not all rotations in the sphere represent unitary operations, in contrast to a prominent feature of the qubit Bloch sphere. To resolve this issue, there have been efforts to develop a qubit-based parametrization in higher dimensions Kurzyński (2011); Kurzyński _et al._ (2016), but without much success. Moreover, the structure corresponding to physical states in the Gell-Mann operator based representation is asymmetric with respect to different axes. Also, in this representation the relation between Bloch vectors corresponding to orthonormal kets is not geometrically useful: it does not help in constructing an orthonormal basis starting from a pure qutrit state using the eight-dimensional Bloch sphere. Therefore, these features, which are very prominent and useful in the qubit Bloch sphere, are not present for qudits when we use the Gell-Mann basis based representation.
On the contrary, the Heisenberg-Weyl (HW) operators have received much less attention because they are not hermitian and thereby require complex numbers as Bloch vector components Vourdas (2004); Asadian _et al._ (2016). As such, it becomes difficult to study the parameters and put them to use. However, as the HW operators do provide an alternative way to represent a quantum state, it is worthwhile to study them despite the presence of complex coefficients, as there can be certain tasks for which the HW operator based representation is more suitable.
In this work, we address the issue of complex Bloch vector components when using the Weyl operator basis to parametrize a qutrit. We do so by identifying 4 weight and 4 angular parameters and constructing a four-dimensional (four-D) Bloch sphere from the weight parameters. We also obtain the constraints on the weight and angular parameters which give a physical qutrit density matrix. It is found that not all points inside the four-D sphere correspond to a positive semidefinite matrix. Still, the new Bloch sphere has several features that make it look like an extension of the qubit Bloch sphere. Namely, 1) the length of the Bloch vector determines the purity of the state, 2) the rank of a randomly chosen qutrit state can be guessed to a certain extent, 3) the conditions for two orthogonal or mutually unbiased vectors are quite similar to those of the qubit Bloch sphere under some restrictions, and 4) the Hilbert-Schmidt distance between qutrit states is equivalent to a factor times the Euclidean distance in the Bloch sphere for some states. It is also conjectured that orthonormal basis kets lie on the same Bloch vector or on antipodal points, similar to what we observe in the qubit Bloch sphere. Further, we apply our representation 1) to identify mutually unbiased bases (MUBs) in 3 dimensions from the geometry of the four-D Bloch sphere, 2) to characterize the unital maps acting on qutrit states, and 3) to find the representation of ensembles generated from the Hilbert-Schmidt and Bures metrics. We also suggest a method to find a similar Bloch sphere representation in dimensions higher than three.
The paper is organized as follows. First, we review the HW operator expansion of a qudit in Sec. II. Then, in Sec. III we present the four-D Bloch sphere, and in Sec. IV we obtain the constraints on the parameters from one-, two-, and three-dimensional sections. In Sec. V, we study a few features of the new Bloch sphere. Sec. VI describes applications of our representation. After that, in Sec. VII we describe a way to use a similar approach in dimensions higher than three. Finally, we conclude in Sec. VIII with a summary and possible future works based on our work.
## II Expanding a Qudit in the Heisenberg Weyl operator basis
HW operators are unitary operators with several properties that make them useful in many applications Cotfas and Dragoman (2012); Baumgartner _et al._ (2006); Asadian _et al._ (2015); Bennett _et al._ (1993). One of the most significant applications of the HW operators has been in the construction of the discrete Wigner function in finite dimensions Ruskai (2003); Gibbons _et al._ (2004). These operators are constructed from the generalized Pauli operators $X$ and $Z$, which are also referred to as the shift and boost operators, respectively. They can be defined by their action on a pure state in the computational basis as
$\displaystyle X\ket{i}=\ket{i+1\text{ mod d}},$ $\displaystyle
Z\ket{i}=\omega^{i}\ket{i}.$
where $\omega=e^{\frac{i2\pi}{d}}$ is a primitive $d$-th root of unity. The HW displacement operators are then defined as
$\displaystyle U_{pq}=\omega^{\frac{-pq}{2}}Z^{p}X^{q}.$ (1)
where $p$ and $q$ are integers modulo $d$. It is known Bertlmann and Krammer
(2008); Asadian _et al._ (2016) that a qudit can be represented using the HW
operator basis. These operators satisfy the following properties that are
necessary to parametrize a physical state.
1. 1.
There are $d^{2}$ matrices including the Identity matrix and $d^{2}-1$ other
traceless matrices.
2. 2.
The $d^{2}$ matrices are unitary and form an orthonormal basis, i.e.,
$\Tr(U_{pq}^{\dagger}U_{p^{\prime}q^{\prime}})=d\delta_{pp^{\prime}}\delta_{qq^{\prime}}$.
Therefore, we can decompose a bounded density matrix operator using the HW
operator basis in the following form
$\displaystyle\rho=\frac{1}{d}\sum_{p=0,q=0}^{d-1}b_{pq}U_{pq}=\frac{1}{d}\big{(}\mathbb{I}+\sum_{p=0,q=0,p\neq
0\neq q}^{d-1}b_{pq}U_{pq}\big{)},$ (2)
where $b_{00}=1$ and $b_{pq}=\Tr{\rho U_{pq}^{\dagger}}$ form the Bloch vector components. However, the $b_{pq}$’s are in general complex, because $U_{pq}$ is not hermitian. Hence, we need $d^{2}-1$ complex numbers to completely characterize a state, and it is not possible to define a “Bloch” sphere for these complex components. A solution was suggested in Asadian _et al._ (2016) by introducing a hermitian generalization of the HW operators to make the Bloch vector components real. In the next section we suggest an alternative approach to address this issue; a numerical sketch of the expansion is given below.
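As a numerical aside (our own sketch, not part of the derivation), the HW basis of Eq. (1) for $d=3$ and the expansion coefficients $b_{pq}$ of Eq. (2) can be checked directly:

```python
# A sketch: build the d = 3 HW operators of Eq. (1) (matching the matrix
# forms displayed in Sec. III), verify Tr(U_pq^† U_p'q') = d δ δ, and
# compute the Bloch components b_pq of a random pure qutrit.
import numpy as np

d = 3
w = np.exp(2j * np.pi / d)
X = np.roll(np.eye(d), -1, axis=0)            # generalized Pauli X (shift)
Z = np.diag(w ** np.arange(d))                # generalized Pauli Z (boost)

def U(p, q):
    return w ** (-p * q / 2) * (np.linalg.matrix_power(Z, p)
                                @ np.linalg.matrix_power(X, q))

ops = {(p, q): U(p, q) for p in range(d) for q in range(d)}
for pq, A in ops.items():
    for rs, B in ops.items():
        assert np.isclose(np.trace(A.conj().T @ B), d * (pq == rs))

rng = np.random.default_rng(0)
v = rng.normal(size=d) + 1j * rng.normal(size=d)
rho = np.outer(v, v.conj()) / np.vdot(v, v)   # random pure qutrit
b = {pq: np.trace(rho @ M.conj().T) for pq, M in ops.items()}
print(b[(0, 0)])                              # always 1, since Tr(rho) = 1
```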
## III four-D Bloch sphere representation of a qutrit
In this section, we propose a four-D unit-radius Bloch sphere representation of a qutrit state and demonstrate the procedure to represent a qutrit state by our technique. For convenience, it is helpful to write the HW operators in three dimensions, using Eq. (1), as follows
$\displaystyle U_{00}=\begin{bmatrix}1&0&0\\\ 0&1&0\\\
0&0&1\end{bmatrix},U_{01}=\begin{bmatrix}0&1&0\\\ 0&0&1\\\
1&0&0\end{bmatrix},U_{10}=\begin{bmatrix}1&0&0\\\ 0&\omega&0\\\
0&0&\omega^{2}\end{bmatrix},$ $\displaystyle
U_{11}=\begin{bmatrix}0&\omega^{-\frac{1}{2}}&0\\\ 0&0&\omega^{\frac{1}{2}}\\\
-1&0&0\end{bmatrix},U_{02}=\begin{bmatrix}0&0&1\\\ 1&0&0\\\
0&1&0\end{bmatrix},U_{20}=\begin{bmatrix}1&0&0\\\ 0&\omega^{2}&0\\\
0&0&\omega\end{bmatrix},$ $\displaystyle
U_{12}=\begin{bmatrix}0&0&\omega^{2}\\\ 1&0&0\\\
0&\omega&0\end{bmatrix},U_{21}=\begin{bmatrix}0&\omega^{2}&0\\\ 0&0&\omega\\\
1&0&0\end{bmatrix},U_{22}=\begin{bmatrix}0&0&\omega\\\ 1&0&0\\\
0&\omega^{2}&0\end{bmatrix}.$ (3)
From Eq. (2) and Eq. (3), a qutrit can be expanded as
$\displaystyle\rho=$
$\displaystyle\frac{1}{3}\left(U_{00}+b_{01}U_{01}+b_{10}U_{10}+b_{11}U_{11}+b_{02}U_{02}+b_{20}U_{20}+\right.$
$\displaystyle\left.b_{12}U_{12}+b_{21}U_{21}+b_{22}U_{22}\right).$ (4)
Using the form of the HW matrices in Eq. (3) and the hermiticity of $\rho$, we find that the coefficients $b_{pq}=\Tr{\rho U_{pq}^{\dagger}}$ must obey the following relations
$\displaystyle b_{01}=n_{1}e^{i\theta_{1}},b_{02}=n_{1}e^{-i\theta_{1}},$
$\displaystyle b_{10}=n_{2}e^{i\theta_{2}},b_{20}=n_{2}e^{-i\theta_{2}},$
$\displaystyle
b_{12}=n_{3}e^{i\theta_{3}},b_{21}=n_{3}e^{-i(\theta_{3}-\frac{2\pi}{3})},$
$\displaystyle
b_{22}=n_{4}e^{i\theta_{4}},b_{11}=n_{4}e^{-i(\theta_{4}-\frac{\pi}{3})}.$ (5)
Thus, we can rewrite the expansion of $\rho$ as
$\displaystyle\rho=$
$\displaystyle\frac{1}{3}\left(U_{00}+n_{1}(e^{i\theta_{1}}U_{01}+e^{-i\theta_{1}}U_{02})+n_{2}(e^{i\theta_{2}}U_{10}+e^{-i\theta_{2}}U_{20})+\right.$
$\displaystyle\left.n_{3}(e^{i\theta_{3}}U_{12}+e^{-i(\theta_{3}-\frac{2\pi}{3})}U_{21})+n_{4}(e^{i\theta_{4}}U_{22}+e^{-i(\theta_{4}-\frac{\pi}{3})}U_{11})\right).$
(6)
Therefore, we have obtained eight real parameters that completely characterize a qutrit. The unique property of these parameters is that they consist of four weight parameters $n_{i}$ and four angular parameters $\theta_{i}$, with $-1\leq n_{i}\leq 1$ and $0\leq\theta_{i}\leq\pi$. We define $\vec{n}=\\{n_{1},n_{2},n_{3},n_{4}\\}$ as the four-D Bloch vector. The $n_{i}$’s are the weight elements corresponding to each commuting pair of HW operators. We will now obtain the constraints on the Bloch vector parameters $n_{i}$ and the angular parameters $\theta_{i}$.
Note 1: An implication of using the four-D Bloch vector representation is that more than one state can lie at the same point in the sphere. The states lying at the same point are distinguished only by the angular parameters $\theta_{i}$; these states are equivalent under the action of some unitary operators, and it would be interesting to identify these unitary operators.
Note 2: One can also characterize a qutrit state by allowing the ranges $0\leq n_{i}\leq 1$ and $0\leq\theta_{i}\leq 2\pi$, but then we do not obtain a sphere structure. Such a structure might also be worth studying, because the weight parameters have a proper meaning when they are positive valued; however, we do not study this in the current work. A numerical sketch of reading off the parameters follows.
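As a numerical sketch (our own, using the Note-2 convention $n_{i}\geq 0$), the parameters of Eq. (5) can be read off from one representative Bloch component per conjugate pair; mapping $(n_{i},\theta_{i}+\pi)\mapsto(-n_{i},\theta_{i})$ recovers the signed convention above.

```python
# A sketch recovering (n_i, theta_i) of Eq. (5) from a density matrix:
# one representative b_pq per conjugate pair gives n_i = |b_pq| and
# theta_i = arg(b_pq).  (X, Z, U are re-defined for self-containment.)
import numpy as np

d = 3
w = np.exp(2j * np.pi / d)
X = np.roll(np.eye(d), -1, axis=0)
Z = np.diag(w ** np.arange(d))

def U(p, q):
    return w ** (-p * q / 2) * (np.linalg.matrix_power(Z, p)
                                @ np.linalg.matrix_power(X, q))

rng = np.random.default_rng(1)
v = rng.normal(size=d) + 1j * rng.normal(size=d)
rho = np.outer(v, v.conj()) / np.vdot(v, v)    # random pure qutrit

reps = [(0, 1), (1, 0), (1, 2), (2, 2)]        # one (p, q) per pair in Eq. (5)
b = [np.trace(rho @ U(p, q).conj().T) for p, q in reps]
n, theta = np.abs(b), np.angle(b)
print(np.sum(n ** 2))   # squared Bloch length: 1 for a pure state, cf. Eq. (10)
```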
## IV Constraints on the Bloch vector and angular parameters
The matrix form of a qutrit can be written in terms of the $n_{i}$’s and $\theta_{i}$’s as follows
$\displaystyle\rho=\frac{1}{3}\begin{pmatrix}1+2n_{2}\cos(\theta_{2})&n_{1}e^{i\theta_{1}}+n_{3}e^{-i\theta_{3}}+n_{4}e^{-i\theta_{4}}&n_{1}e^{-i\theta_{1}}+n_{3}e^{i(\theta_{3}-\frac{2\pi}{3})}+n_{4}e^{i(\theta_{4}+\frac{2\pi}{3})}\\\
\\\
n_{1}e^{-i\theta_{1}}+n_{3}e^{i\theta_{3}}+n_{4}e^{i\theta_{4}}&1-n_{2}\cos(\theta_{2})-\sqrt{3}n_{2}\sin(\theta_{2})&n_{1}e^{i\theta_{1}}+n_{3}e^{-i(\theta_{3}+\frac{2\pi}{3})}+n_{4}e^{-i(\theta_{4}-\frac{2\pi}{3})}\\\
\\\
n_{1}e^{i\theta_{1}}+n_{3}e^{-i(\theta_{3}-\frac{2\pi}{3})}+n_{4}e^{-i(\theta_{4}+\frac{2\pi}{3})}&n_{1}e^{-i\theta_{1}}+n_{3}e^{i(\theta_{3}+\frac{2\pi}{3})}+n_{4}e^{i(\theta_{4}-\frac{2\pi}{3})}&1-n_{2}\cos(\theta_{2})+\sqrt{3}n_{2}\sin(\theta_{2})\end{pmatrix}$
(7)
Clearly, $\rho$ is hermitian, which is guaranteed by the choice of expansion coefficients. Moreover, $\Tr[\rho]=1$, as the HW matrices are traceless except for $U_{00}=\mathcal{I}$. The only condition that remains to be satisfied is the positive semi-definiteness of $\rho$, i.e., $x_{i}\geq 0$, where the $x_{i}$’s are the eigenvalues of $\rho$. To this end, we construct the characteristic polynomial Det($x\mathbb{I}-\rho$) of the density matrix $\rho$. A necessary and sufficient condition for the eigenvalues $x_{i}$ to be non-negative is that the coefficients $a_{i}$ of the characteristic polynomial are non-negative Kimura (2003). The characteristic polynomial has the following form
$\displaystyle\text{Det}(x\mathbb{I}-\rho)=\prod_{i=1}^{N}(x-x_{i})=\sum_{j=0}^{N}(-1)^{j}a_{j}x^{N-j}=0.$
(8)
Notice that $a_{0}=1$ by definition. Now, we apply Newton’s formulas to find the values of the other coefficients $a_{i}$ (for details please see Ref. Kimura (2003)). Newton’s formulas relate the coefficients $a_{i}$ and the eigenvalues $x_{i}$ as
$\displaystyle la_{l}=\sum_{k=1}^{l}(-1)^{k-1}C_{N,k}a_{l-k},\quad(1\leq l\leq N),$
where $C_{N,k}=\sum_{i=1}^{N}x_{i}^{k}$. Using the results directly from Ref. Kimura (2003), we get the following expressions for the $a_{i}$’s in terms of $\rho$
$\displaystyle a_{0}=1$ $\displaystyle a_{1}=\Tr\rho,$ $\displaystyle
a_{2}=\frac{(1-\Tr\rho^{2})}{2!},$ $\displaystyle
a_{3}=\frac{1-3\Tr\rho^{2}+2\Tr\rho^{3}}{3!}.$ (9)
Ensuring that the terms in Eq. (9) are non-negative results in a positive semi-definite density matrix $\rho$. $a_{1}=\Tr\rho=1>0$ is trivially satisfied, whereas $a_{2}=\frac{(1-\Tr\rho^{2})}{2!}\geq 0$ imposes the following constraint
$\displaystyle 2!a_{2}=1-\Tr[\rho^{2}]$
$\displaystyle=1-\frac{1}{3}\Bigg{(}1+2\bigg{(}n_{1}^{2}+n_{2}^{2}+n_{3}^{2}+n_{4}^{2}\bigg{)}\Bigg{)}\geq
0$ $\displaystyle\implies n_{1}^{2}+n_{2}^{2}+n_{3}^{2}+n_{4}^{2}\leq 1.$ (10)
This constraint simply states that the physical states must lie inside a
four-D sphere of radius one. The only condition remaining to be satisfied now
is $a_{3}\geq 0$, which has the following form
$\displaystyle 3!a_{3}=1-3\Tr\rho^{2}+2\Tr\rho^{3}\geq 0$
$\displaystyle\implies\frac{2}{9}\Bigg{(}1-3\bigg{(}n_{1}^{2}+n_{2}^{2}+n_{3}^{2}+n_{4}^{2}\bigg{)}+2\bigg{(}n_{1}^{3}\cos(3\theta_{1})+n_{2}^{3}\cos(3\theta_{2})+n_{3}^{3}\cos(3\theta_{3})+n_{4}^{3}\cos(3\theta_{4})\bigg{)}-6n_{1}n_{3}n_{4}\cos(\theta_{1}-\theta_{3}-\theta_{4})$
$\displaystyle+6n_{1}n_{2}n_{3}\cos(\theta_{1}-\theta_{2}+\theta_{3}-\frac{\pi}{3})+6n_{2}n_{3}n_{4}\cos(\theta_{2}+\theta_{3}-\theta_{4}+\frac{\pi}{3})+6n_{1}n_{2}n_{4}\cos(\theta_{1}+\theta_{2}+\theta_{4}+\frac{\pi}{3})\Bigg{)}\geq
0.$ (11)
In the above form, it is difficult to picture the set of valid states inside
the sphere. We take the one, two, and three-dimensional sections passing
through the centre to get a better understanding of the allowed space inside
the four-D sphere.
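To make these constraints concrete, the matrix of Eq.(7) can be transcribed directly and its positivity checked numerically. The sketch below is our own transcription (the helper name rho_qutrit and the test values are illustrative); it is reused in later sketches in this section:

```python
import numpy as np

w = 2*np.pi/3

def rho_qutrit(n, th):
    """Density matrix of Eq. (7); n = (n1,n2,n3,n4), th = (t1,t2,t3,t4)."""
    n1, n2, n3, n4 = n
    t1, t2, t3, t4 = th
    e = lambda x: np.exp(1j*x)
    r = np.empty((3, 3), dtype=complex)
    r[0, 0] = 1 + 2*n2*np.cos(t2)
    r[0, 1] = n1*e(t1) + n3*e(-t3) + n4*e(-t4)
    r[0, 2] = n1*e(-t1) + n3*e(t3 - w) + n4*e(t4 + w)
    r[1, 0] = n1*e(-t1) + n3*e(t3) + n4*e(t4)
    r[1, 1] = 1 - n2*np.cos(t2) - np.sqrt(3)*n2*np.sin(t2)
    r[1, 2] = n1*e(t1) + n3*e(-(t3 + w)) + n4*e(-(t4 - w))
    r[2, 0] = n1*e(t1) + n3*e(-(t3 - w)) + n4*e(-(t4 + w))
    r[2, 1] = n1*e(-t1) + n3*e(t3 + w) + n4*e(t4 - w)
    r[2, 2] = 1 - n2*np.cos(t2) + np.sqrt(3)*n2*np.sin(t2)
    return r / 3

# a point with |n| < 1/2 should be a physical state (cf. Lemma 3 below)
rho = rho_qutrit((0.2, 0.3, 0.1, 0.2), (0.4, 1.1, 2.0, 0.7))
assert np.allclose(rho, rho.conj().T) and np.isclose(np.trace(rho).real, 1)
assert np.linalg.eigvalsh(rho).min() >= -1e-12
```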
### IV.1 One-dimensional sections
One-dimensional sections (one-sections) passing through the center can be obtained
by setting three out of the four $n_{i}$'s to zero in Eq.(11). We find that the
expressions of the one-sections of $a_{3}$ are the same with respect to all the
$n_{i}$'s, with the following form
$\displaystyle
3!a_{3}^{one}=\frac{2}{9}\Bigg{(}1-3n_{i}^{2}+2n_{i}^{3}\cos(3\theta_{i})\Bigg{)}\geq
0.$ (12)
Therefore, these one-dimensional sections are symmetric with respect to the
four axes. The non-negativity constraints are:
* •
$a_{3}\geq 0$ $\forall$ $\theta_{i}$ for $-\frac{1}{2}\leq
n_{i}\leq\frac{1}{2}$.
* •
For $0.5<n_{i}\leq 1$,
$\theta_{i}\in[0,\zeta]\cup[\frac{2\pi}{3}-\zeta,\frac{2\pi}{3}+\zeta]$.
* •
For $-1\leq n_{i}<-0.5$,
$\theta_{i}\in[\frac{\pi}{3}-\zeta,\frac{\pi}{3}+\zeta]\cup[\pi-\zeta,\pi]$.
where $\zeta=\arccos(\frac{-1}{2|n_{i}|})-\frac{2\pi}{3}$. It can also be
observed (see Fig. 1) that the range of allowed values of $\theta_{i}$
gradually shrinks as we move away from the origin along the $n_{i}$ axis
once $|n_{i}|\geq 0.5$.
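As a quick numerical illustration, the boundary angle $\zeta$ can be checked against the one-section constraint of Eq.(12); a small sketch (function names are our own):

```python
import numpy as np

def a3_one(n, theta):
    # one-dimensional section of a3, Eq. (12): only one n_i nonzero
    return (2/9) * (1 - 3*n**2 + 2*n**3*np.cos(3*theta))

def zeta(n):
    # half-width of the allowed theta interval for |n| > 1/2
    return np.arccos(-1/(2*abs(n))) - 2*np.pi/3

n = 0.8
z = zeta(n)
assert a3_one(n, z - 1e-2) >= 0   # just inside the interval: physical
assert a3_one(n, z + 1e-2) < 0    # just outside: constraint violated
```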
Figure 1: (Color online) The shaded region depicts the allowed values of
$n_{i}$ and $\theta_{i}$ for a physical state lying on the $n_{i}$ axis. As
can be easily seen, for $-0.5\leq n_{i}\leq 0.5$ all values of
$\theta_{i}$ correspond to a physical density matrix.
### IV.2 Two-dimensional sections
A two-dimensional section (two-section) centered at the origin can be obtained
by setting two out of the four $n_{i}$'s to zero in Eq.(11). There are 6
possible two-sections which can be constructed in the four-D sphere. The
positivity constraint for all the two-dimensional sections has the same form
$\displaystyle
3!a_{3}^{two}=\frac{2}{9}\bigg{(}1-3n_{i}^{2}-3n_{j}^{2}+2n_{i}^{3}\cos(3\theta_{i})+2n_{j}^{3}\cos(3\theta_{j})\bigg{)}\geq
0.$ (13)
Similar to the one-dimensional sections, we see that the two-dimensional
sections are also symmetric with respect to the four axes. We point out that
this is unlike the Gell-Mann-basis Bloch vector representation of a
qutrit, where there are 4 different types of such two-sections Kimura (2003),
which are asymmetric with respect to the axes.
Now, we are interested in obtaining the region in the two-dimensional section
which corresponds to physical qutrit states, i.e. for which there exist values of
$\theta_{i}$ and $\theta_{j}$ such that the inequality in Eq.(13) is satisfied.
This can easily be found by maximizing $a_{3}^{two}$ with respect to $\theta_{i}$
and $\theta_{j}$, which gives the following inequality (see Fig. 2).
$\displaystyle{a_{3}}_{max}^{two}\geq
0\implies\frac{2}{9}\bigg{(}1-3n_{i}^{2}-3n_{j}^{2}+2|n_{i}^{3}|+2|n_{j}^{3}|\bigg{)}\geq
0.$ (14)
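The region of Eq.(14) is easy to test pointwise; a minimal sketch (the membership helper is our own naming):

```python
import numpy as np

def in_two_section(ni, nj):
    """Membership test for the two-section, Eq. (14): does some choice of
    (theta_i, theta_j) make a_3 >= 0?"""
    return 1 - 3*ni**2 - 3*nj**2 + 2*abs(ni)**3 + 2*abs(nj)**3 >= 0

assert in_two_section(0.5, 0.5)      # 1 - 1.5 + 0.5 = 0 -> boundary, allowed
assert not in_two_section(0.6, 0.6)  # 1 - 2.16 + 0.864 < 0 -> not allowed
assert in_two_section(1.0, 0.0)      # pure-state point on an axis
```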
Figure 2: (Color online)The shaded region depicts the allowed values of
$n_{i}$ and $n_{j}$ for a physical state lying on the two-dimensional section
built on $n_{i}$ and $n_{j}$ axes.
Figure 3: (Color online) The solid region in the plots gives the allowed values
of $\theta_{i}$ and $\theta_{j}$ against $r$ for (a) $\phi=\pi/4$, (b)
$\phi=\cos^{-1}(0.2)$, (c) $\phi=0$, and (d) $\phi=\pi/2$. Here $\phi$ is the
angle the direction makes with $n_{i}$.
Further, it is informative to see what the allowed values of $\theta_{i}$
and $\theta_{j}$ are in different directions in the two-dimensional section as we
move away from the center of the four-D sphere. To do this, we substitute
$n_{i}=r\cos(\phi)$ and $n_{j}=r\sin(\phi)$ in Eq.(13), so that
$\displaystyle
3!a_{3}^{two}=\frac{2}{9}\Bigg{(}1-3r^{2}+2r^{3}\bigg{(}\cos^{3}(\phi)\cos(3\theta_{i})+\sin^{3}(\phi)\cos(3\theta_{j})\bigg{)}\Bigg{)}\geq
0.$
We note that when $r\leq 0.5$, the above inequality is satisfied for all
values of $\theta_{i}$ and $\theta_{j}$. Moreover, to find the allowed values
of $\theta_{i}$ and $\theta_{j}$ as we move away from the origin in a radial
direction, we plot in Fig. 3 the allowed region in $(r,\theta_{i},\theta_{j})$
for fixed directions determined by $\phi$, where $\phi$ is the angle the
direction makes with $n_{i}$.
### IV.3 Three-dimensional sections
Next, we consider the three-dimensional sections (three-sections) centered at
the origin inside the four-D sphere. There are four such three-dimensional
sections, which can be obtained by setting one of the $n_{i}$'s to
zero in Eq.(11). However, unlike the one- and two-dimensional sections, the
three-dimensional sections are all different, with the following expressions.
$\displaystyle
3!a_{3}^{three1}=\frac{2}{9}\Bigg{(}1-3\bigg{(}n_{1}^{2}+n_{2}^{2}+n_{3}^{2}\bigg{)}+2\bigg{(}n_{1}^{3}\cos(3\theta_{1})+n_{2}^{3}\cos(3\theta_{2})+n_{3}^{3}\cos(3\theta_{3})\bigg{)}+6n_{1}n_{2}n_{3}\cos(\theta_{1}-\theta_{2}+\theta_{3}-\frac{\pi}{3})\Bigg{)}.$
$\displaystyle
3!a_{3}^{three2}=\frac{2}{9}\Bigg{(}1-3\bigg{(}n_{1}^{2}+n_{2}^{2}+n_{4}^{2}\bigg{)}+2\bigg{(}n_{1}^{3}\cos(3\theta_{1})+n_{2}^{3}\cos(3\theta_{2})+n_{4}^{3}\cos(3\theta_{4})\bigg{)}+6n_{1}n_{2}n_{4}\cos(\theta_{1}+\theta_{2}+\theta_{4}+\frac{\pi}{3})\Bigg{)}.$
$\displaystyle
3!a_{3}^{three3}=\frac{2}{9}\Bigg{(}1-3\bigg{(}n_{1}^{2}+n_{3}^{2}+n_{4}^{2}\bigg{)}+2\bigg{(}n_{1}^{3}\cos(3\theta_{1})+n_{3}^{3}\cos(3\theta_{3})+n_{4}^{3}\cos(3\theta_{4})\bigg{)}-6n_{1}n_{3}n_{4}\cos(\theta_{1}-\theta_{3}-\theta_{4})\Bigg{)}.$
$\displaystyle
3!a_{3}^{three4}=\frac{2}{9}\Bigg{(}1-3\bigg{(}n_{2}^{2}+n_{3}^{2}+n_{4}^{2}\bigg{)}+2\bigg{(}n_{2}^{3}\cos(3\theta_{2})+n_{3}^{3}\cos(3\theta_{3})+n_{4}^{3}\cos(3\theta_{4})\bigg{)}+6n_{2}n_{3}n_{4}\cos(\theta_{2}+\theta_{3}-\theta_{4}+\frac{\pi}{3})\Bigg{)}.$
(15)
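A crude numerical probe of, e.g., the first three-section can be made by grid-searching the angular parameters, along the lines of the numerical plots below; this sketch (our own, with an illustrative grid resolution) checks two sample points:

```python
import numpy as np

def a3_three1_max(n1, n2, n3, m=60):
    """Grid-search maximum of 3! a3^three1 (first line of Eq. (15)) over the
    angular parameters."""
    t = np.linspace(0, 2*np.pi, m)
    t1, t2, t3 = np.meshgrid(t, t, t, indexing='ij')
    val = (2/9)*(1 - 3*(n1**2 + n2**2 + n3**2)
                 + 2*(n1**3*np.cos(3*t1) + n2**3*np.cos(3*t2) + n3**3*np.cos(3*t3))
                 + 6*n1*n2*n3*np.cos(t1 - t2 + t3 - np.pi/3))
    return val.max()

assert a3_three1_max(0.25, 0.25, 0.25) >= 0   # r < 0.5 -> physical
assert a3_three1_max(0.7, 0.7, 0.7) < 0       # far outside -> unphysical
```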
Figure 4: (Color online) Two different views of the three-section
$a_{3}^{three1}$, showing that the three-section is closely approximated by an
octahedron. The other three-sections are also approximated by an octahedron,
which we have not shown here for brevity. The colour scale is used only for
better clarity of the plots.
These three-dimensional sections appear not to be symmetric with respect
to the axes, as they have different forms in Eq.(15). Therefore, we need to
find the regions for which the expressions in Eq.(15) are non-negative. Finding
the non-negative regions of the three-dimensional sections means determining
whether, for a given triple $n_{i},n_{j}$ and $n_{k}$, corresponding values of
$\theta_{i},\theta_{j}$ and $\theta_{k}$ exist which give a non-negative
value of the terms in Eq.(15). It is difficult to do so analytically, i.e., to
maximize the expressions of $a_{3}^{three}$ in Eq.(15) with respect to the
$\theta$ parameters alone. Instead, we numerically plot the regions which
satisfy $a_{3}^{three1}\geq 0$. As we can see from Fig. 4, the
different faces have different forms, so we conclude that the three-dimensional
sections are not symmetric with respect to the axes.
It can be seen from Fig. 4 that the three-section $a_{3}^{three1}$ has the
form of an octahedron with bulges on its faces. The other three-sections have
a similar, but not identical, form; we do not include them in the paper for
brevity.
A few remarks from the study of the one, two and three sections are in order.
1. 1.
It is possible to approximately construct the three-sections from the
knowledge of the two-sections, which is not the case in the Gell-Mann-operator
based representation. This is because rotating a two-section about one of the
axes yields an octahedron, which is a very good approximation to all the
three-sections.
2. 2.
The numerical plots suggest that the three-section structure is not convex.
This could be because of the presence of complex coefficients.
3. 3.
It is also clearly visible how the one-sections arise from the two-sections and
the two-sections from the three-sections.
Adding to the remarks from the study of the one-, two- and three-dimensional
sections is the following result.
Lemma 1 - The four-dimensional Bloch sphere is not a solid sphere.
An implication of this result is that a rotation in the Bloch sphere does not
always correspond to a unitary operation, unlike in the qubit Bloch sphere.
However, a very interesting finding is that inside the Bloch sphere of radius
$r\leq 0.5$, $a_{3}$ is positive for all values of the angular parameters
$\theta_{i}$. This can be proven by using the polar coordinate forms of the
$n_{i}$'s in Eq.(11), i.e. we substitute
$n_{1}=r\cos\alpha_{1},n_{2}=r\sin\alpha_{1}\cos\alpha_{2},n_{3}=r\sin\alpha_{1}\sin\alpha_{2}\cos\alpha_{3},n_{4}=r\sin\alpha_{1}\sin\alpha_{2}\sin\alpha_{3}$
in Eq.(11), where $\alpha_{1},\alpha_{2}$ and $\alpha_{3}$ are the
polar angles. Then $a_{3}$ can be written in the following simple form
$\displaystyle
3!a_{3}=1-3r^{2}+2r^{3}F(\theta_{1},\theta_{2},\theta_{3},\theta_{4},\alpha_{1},\alpha_{2},\alpha_{3}).$
(16)
where $F$ is a function of the $\theta_{i}$'s and $\alpha_{i}$'s. It can be easily
shown that $F$ takes values in the range $[-1,1]$. Moreover, since
$\frac{\partial}{\partial r}(1-3r^{2}+2r^{3}F)=6r(rF-1)\leq 0$ for
$0\leq r\leq 1$ and $|F|\leq 1$, $a_{3}$ is monotonically decreasing with $r$
for every value of $F$. If $a_{3}\geq 0$ at a certain point
$\\{r_{max},\alpha_{1},\alpha_{2},\alpha_{3}\\}$ for some theta values
$\\{\theta_{1},\theta_{2},\theta_{3},\theta_{4}\\}$, then $a_{3}\geq 0$ at all
points $\\{r\leq r_{max},\alpha_{1},\alpha_{2},\alpha_{3}\\}$ for the same
theta values. We state the following result from this analysis
Lemma 2 - If in a certain direction the point at distance $r=r_{max}$
represents a physical state, then all the points in the same direction with
$r\leq r_{max}$ represent physical states.
Now, we set $r=0.5$ in Eq.(16) and minimize it with respect to all the
angles $\alpha_{i}$ and $\theta_{i}$, which gives a minimum value of zero.
Therefore, for $r<0.5$, $a_{3}$ must be strictly positive for all
values of the $\theta_{i}$'s. Hence, we can state the following lemma.
Lemma 3 - All the points inside the sphere of radius $r\leq 0.5$ are physical
states for all values of the angular parameters $\theta_{i}$.
This result is significant: every point of the ball of
radius 0.5 corresponds to a physical state, irrespective of the angular parameters.
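Lemma 3 can be spot-checked numerically by minimizing $3!a_{3}=6\,\text{Det}(\rho)$ at $r=0.5$ over all angles; a multi-start sketch (assuming the rho_qutrit helper from the earlier sketch and SciPy's general-purpose minimizer):

```python
import numpy as np
from scipy.optimize import minimize

def six_a3(r, a1, a2, a3_, th):
    """3! a3 = 6 det(rho) in the polar parametrization of Eq. (16);
    reuses the rho_qutrit constructor sketched after Eq. (11)."""
    n = (r*np.cos(a1),
         r*np.sin(a1)*np.cos(a2),
         r*np.sin(a1)*np.sin(a2)*np.cos(a3_),
         r*np.sin(a1)*np.sin(a2)*np.sin(a3_))
    return 6*np.linalg.det(rho_qutrit(n, th)).real

# multi-start minimization over 3 polar angles + 4 thetas at r = 0.5
rng = np.random.default_rng(0)
best = min(minimize(lambda x: six_a3(0.5, x[0], x[1], x[2], x[3:]),
                    rng.uniform(0, 2*np.pi, 7)).fun for _ in range(50))
print(best)   # ~0 up to numerical tolerance, as Lemma 3 predicts
```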
## V Features of the Bloch sphere
In this section, we discuss several features of the Bloch sphere which have
surprising resemblance with the qubit Bloch sphere representation.
### V.1 Mixed and Pure states
$\Tr[\rho^{2}]$ is a measure of the purity of a density matrix.
Using Eq.(7) we can write
$\displaystyle\Tr[\rho^{2}]=\frac{1}{3}\Bigg{(}1+2\bigg{(}n_{1}^{2}+n_{2}^{2}+n_{3}^{2}+n_{4}^{2}\bigg{)}\Bigg{)}.$
(17)
Thus, we find that the length of the Bloch vector determines the purity of the
qutrit state. Further, $\Tr[\rho^{2}]=1$ for
$n_{1}^{2}+n_{2}^{2}+n_{3}^{2}+n_{4}^{2}=1$, i.e., the pure states lie on the
surface of the unit sphere. Also, $\Tr[\rho^{2}]$ attains its minimum value
$\frac{1}{3}$ only when $n_{1}^{2}+n_{2}^{2}+n_{3}^{2}+n_{4}^{2}=0$, i.e., the
maximally mixed state lies at the center of the sphere. The purity increases
as we move away from the center of the sphere.
### V.2 Rank of a Qutrit state
A closely related concept to purity/mixedness is the rank of a physical state.
From the qubit Bloch sphere it is very easy to determine that the rank-1
states lie on the surface while all the remaining states are of rank 2. In our
representation of qutrit states, all the pure states lie on the surface, hence
they have rank 1. The challenging task is to determine where the rank-2 and
rank-3 states lie inside the four-D sphere. To do this, we note that if the
determinant of a qutrit state is greater than zero, i.e., $Det(\rho)>0$, then the
state must be of rank 3. From the characteristic polynomial in Eq.(8), we find
that $Det(\rho)=a_{3}$. But we saw in Lemma 3 that for $r<0.5$,
$a_{3}=Det(\rho)>0$. Hence all the states inside the four-D sphere with radius
$r=0.5$ are rank-3 states. However, the states on the surface of this sphere
with $r=0.5$ can be of both rank 2 and rank 3. Further, when $0.5\leq r<1$,
the states can be of rank 2 or rank 3 depending on the choice of the angular
parameters $\theta_{i}$. To summarize:
1. 1.
$r=1\implies\rank(\rho)=1.$
2. 2.
$0.5\leq r<1\implies\rank(\rho)=2\hskip 2.84544pt\text{or}\hskip 2.84544pt3.$
3. 3.
$r<0.5\implies\rank(\rho)=3.$
A representative figure of the qutrit state space depicting the location of
rank-1, 2 and 3 states can be seen in Fig. 5. This helps us to identify the
rank of a qutrit state simply by looking at where it lies inside the
four-D sphere.
Figure 5: (Color online)A representative figure depicting the position of the
Rank-1, 2, and 3 states and the qutrit state space containing a solid ball of
radius 1/2.
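The rank classification can be read off numerically from the eigenvalues; a small helper (our own naming, again assuming the rho_qutrit sketch from Sec. IV) illustrates the two extreme cases:

```python
import numpy as np

def qutrit_rank(rho, tol=1e-10):
    """Numerical rank of a qutrit state from its eigenvalues."""
    return int(np.sum(np.linalg.eigvalsh(rho) > tol))

# |0><0| has Bloch data n2 = 1, theta2 = 0 (r = 1): rank 1
assert qutrit_rank(rho_qutrit((0, 1, 0, 0), (0, 0, 0, 0))) == 1
# a point with r < 0.5 has det(rho) > 0 by Lemma 3: rank 3
assert qutrit_rank(rho_qutrit((0.2, 0.1, 0.1, 0.1), (0.3, 0.9, 1.2, 0.5))) == 3
```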
### V.3 Orthogonal states
Let us consider two pure states $\rho_{a}$ and $\rho_{b}$ and expand them in the
HW-operator form of Sec. III
$\displaystyle\rho_{a}=$
$\displaystyle\frac{1}{3}\left(U_{00}+n_{a1}(e^{i\theta_{a1}}U_{01}+e^{-i\theta_{a1}}U_{02})+n_{a2}(e^{i\theta_{a2}}U_{10}+e^{-i\theta_{a2}}U_{20})+\right.$
$\displaystyle\left.n_{a3}(e^{i\theta_{a3}}U_{12}+e^{-i(\theta_{a3}-\frac{2\pi}{3})}U_{21})+n_{a4}(e^{i\theta_{a4}}U_{11}+e^{-i(\theta_{a4}-\frac{\pi}{3})}U_{22})\right).$
$\displaystyle\rho_{b}=$
$\displaystyle\frac{1}{3}\left(U_{00}+n_{b1}(e^{i\theta_{b1}}U_{01}+e^{-i\theta_{b1}}U_{02})+n_{b2}(e^{i\theta_{b2}}U_{10}+e^{-i\theta_{b2}}U_{20})+\right.$
$\displaystyle\left.n_{b3}(e^{i\theta_{b3}}U_{12}+e^{-i(\theta_{b3}-\frac{2\pi}{3})}U_{21})+n_{b4}(e^{i\theta_{b4}}U_{11}+e^{-i(\theta_{b4}-\frac{\pi}{3})}U_{22})\right).$
The states $\rho_{a}$ and $\rho_{b}$ have Bloch vectors
$\vec{n_{a}}=\\{n_{a1},n_{a2},n_{a3},n_{a4}\\}$ and
$\vec{n_{b}}=\\{n_{b1},n_{b2},n_{b3},n_{b4}\\}$ respectively. The
orthogonality condition can simply be checked by $\Tr[\rho_{a}\rho_{b}]=0$,
which reduces to
$\displaystyle\Tr[\rho_{a}\rho_{b}]$
$\displaystyle=\frac{1}{3}\bigg{(}1+2n_{a1}n_{b1}\cos(\theta_{a1}-\theta_{b1})+2n_{a2}n_{b2}\cos(\theta_{a2}-\theta_{b2})$
$\displaystyle+2n_{a3}n_{b3}\cos(\theta_{a3}-\theta_{b3})+2n_{a4}n_{b4}\cos(\theta_{a4}-\theta_{b4})\bigg{)}=0.$
(18)
This condition is richer than the orthogonality condition for two orthogonal
qubit states, whose Bloch vectors obey
$\vec{b_{1}}\cdot\vec{b_{2}}=-1=\cos(\frac{2\pi}{2})$. The
orthogonality condition in Eq.(18) reduces to
$\vec{n_{a}}\cdot\vec{n_{b}}=\cos(\frac{2\pi}{3})$ whenever
$\theta_{ai}=\theta_{bi}$. However, it is also required that the points
corresponding to $\vec{n_{a}}$ and $\vec{n_{b}}$ with an overlap of
$\cos(\frac{2\pi}{3})$ correspond to positive semidefinite matrices for
$\theta_{ai}=\theta_{bi}$, which we were unable to characterize. Thus, the
condition is similar to the orthogonality constraint for qubit Bloch vectors. For
$\theta_{ai}\neq\theta_{bi}$, the orthogonality condition is far more complex.
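Eq.(18) is straightforward to evaluate from the Bloch data alone; the following sketch (function name ours) verifies, for example, that the $Z$ eigenkets $\ket{0}$ and $\ket{1}$, whose Bloch data are listed in Sec. VI.1, are orthogonal:

```python
import numpy as np

def overlap(na, ta, nb, tb):
    """Tr[rho_a rho_b] for two pure states from their Bloch data, Eq. (18)."""
    na, ta, nb, tb = map(np.asarray, (na, ta, nb, tb))
    return (1 + 2*np.sum(na*nb*np.cos(ta - tb))) / 3

# |0>: n2 = 1, theta2 = 0;  |1>: n2 = -1, theta2 = pi/3  ->  overlap 0
print(overlap((0, 1, 0, 0), (0, 0, 0, 0), (0, -1, 0, 0), (0, np.pi/3, 0, 0)))
```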
It can be easily shown that the eigenbases of the four sets of non-commuting HW
operators lie at antipodal points on the Bloch sphere. For example, the
eigenkets of $Z$ lie at the points with Bloch vectors
$\\{n_{1},n_{2},n_{3},n_{4}\\}=\\{0,1,0,0\\}$ and $\\{0,-1,0,0\\}$. This is
very similar to what we observe in the qubit Bloch sphere, where the
orthonormal basis kets lie at antipodal points. However, not all
orthonormal basis kets lie at antipodal points in the four-D sphere. We
conjecture the following regarding the Bloch vectors belonging to an
orthonormal basis.
Conjecture 1 - The kets belonging to an orthonormal basis lie at antipodal
points or at the same point on the surface of the Bloch sphere.
The motivation of this conjecture comes from the observation (see Appendix A)
that the orthonormal basis kets which are mutually unbiased to the
computational basis lie at the same point in the four-D sphere.
### V.4 Mutually unbiased states
By carrying out an analysis similar to that for orthogonal states, we can also obtain
the relation between the Bloch vectors corresponding to two mutually unbiased
state vectors. For two mutually unbiased state vectors in 3 dimensions, $\rho_{a}$
and $\rho_{b}$, $\Tr[\rho_{a}\rho_{b}]=\frac{1}{3}$. Using the result from the
previous subsection we get
$\displaystyle\frac{1}{3}\bigg{(}1+2n_{a1}n_{b1}\cos(\theta_{a1}-\theta_{b1})+2n_{a2}n_{b2}\cos(\theta_{a2}-\theta_{b2})+2n_{a3}n_{b3}\cos(\theta_{a3}-\theta_{b3})+2n_{a4}n_{b4}\cos(\theta_{a4}-\theta_{b4})\bigg{)}=\frac{1}{3}$
$\displaystyle\implies n_{a1}n_{b1}\cos(\theta_{a1}-\theta_{b1})+n_{a2}n_{b2}\cos(\theta_{a2}-\theta_{b2})+n_{a3}n_{b3}\cos(\theta_{a3}-\theta_{b3})+n_{a4}n_{b4}\cos(\theta_{a4}-\theta_{b4})=0.$
(19)
Whenever $\cos(\theta_{ai}-\theta_{bi})$ equals a common (nonzero) constant,
Eq.(19) reduces to $\vec{n_{a}}\cdot\vec{n_{b}}=0$, i.e. the Bloch vectors
corresponding to mutually unbiased state vectors are orthogonal to each other,
exactly as for mutually unbiased qubits. Here we require that for orthogonal
Bloch vectors the angular parameters obey the relation
$\cos(\theta_{ai}-\theta_{bi})=\text{const}$. For
$\cos(\theta_{ai}-\theta_{bi})\neq\text{const}$, the condition is more
complex.
### V.5 Distance between density matrices
Let us consider two states
$\rho_{1}:n_{1},n_{2},n_{3},n_{4},\theta_{1},\theta_{2},\theta_{3},\theta_{4}$
and $\rho_{2}:m_{1},m_{2},m_{3},m_{4},\phi_{1},\phi_{2},\phi_{3},\phi_{4}$. By
using the form of Eq.(7), we obtain the Hilbert-Schmidt distance between them
as Wilde (2013)
$\displaystyle
D_{HS}(\rho_{1},\rho_{2})=\Big{(}\Tr(\rho_{1}-\rho_{2})^{2}\Big{)}^{\frac{1}{2}}$
(20)
$\displaystyle=\sqrt{\frac{2}{3}}\Big{(}n_{1}^{2}+n_{2}^{2}+n_{3}^{2}+n_{4}^{2}+m_{1}^{2}+m_{2}^{2}+m_{3}^{2}+m_{4}^{2}$
$\displaystyle-2n_{1}m_{1}\cos(\theta_{1}-\phi_{1})-2n_{2}m_{2}\cos(\theta_{2}-\phi_{2})$
$\displaystyle-2n_{3}m_{3}\cos(\theta_{3}-\phi_{3})-2n_{4}m_{4}\cos(\theta_{4}-\phi_{4})\Big{)}^{\frac{1}{2}}.$
(21)
When $\theta_{i}=\phi_{i}$, the Hilbert-Schmidt distance reduces to the
Euclidean distance (up to a constant factor) in the four-D sphere, i.e.
$D_{HS}(\rho_{1},\rho_{2})=\sqrt{\frac{2}{3}}\sqrt{\sum_{i}(n_{i}-m_{i})^{2}}$.
This is analogous to the fact that the Hilbert-Schmidt distance between two
density matrices is proportional to the Euclidean distance between them in the
qubit Bloch sphere Wilde (2013).
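The equivalence of Eq.(21) with the matrix definition of Eq.(20) can be cross-checked numerically (a sketch assuming the rho_qutrit helper from Sec. IV; the parameter values are arbitrary):

```python
import numpy as np

def d_hs(n, th, m, ph):
    """Hilbert-Schmidt distance of Eq. (21) from the Bloch data of two states."""
    n, th, m, ph = map(np.asarray, (n, th, m, ph))
    s = np.sum(n**2) + np.sum(m**2) - 2*np.sum(n*m*np.cos(th - ph))
    return np.sqrt(2*s/3)

n, th = (0.1, 0.2, 0.1, 0.1), (0.3, 0.7, 1.1, 0.2)
m, ph = (0.2, 0.1, 0.0, 0.2), (0.9, 0.4, 0.0, 1.3)
r1, r2 = rho_qutrit(n, th), rho_qutrit(m, ph)
d_direct = np.sqrt(np.trace((r1 - r2) @ (r1 - r2)).real)   # Eq. (20)
assert np.isclose(d_hs(n, th, m, ph), d_direct)
```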
## VI Applications
### VI.1 Employing the Bloch sphere geometry to find MUBs in three dimensions
It is known that in a prime or prime-power dimension $d=p^{n}$, where $p$ is a
prime number and $n$ is a positive integer, there exists a maximum of
$d+1$ MUBs. For the qubit, the existence of 3 MUBs can be very easily
explained through the qubit Bloch sphere, but such an explanation is difficult in
higher dimensions. In this section, we show that the qutrit Bloch sphere
geometry restricts the maximum number of MUBs to four.
MUBs in 2 dimensions - The qubit Bloch sphere is a three-dimensional sphere
in which the Bloch vectors corresponding to orthonormal basis kets lie at
antipodal points on the sphere, i.e. they lie along a line passing through
the center. Also, the Bloch vectors corresponding to mutually unbiased kets
are orthogonal to each other Sharma _et al._ (2021). Since at most three
mutually orthogonal lines can pass through the center, there
are only three possible mutually unbiased bases in dimension 2.
MUBs in 3 dimensions - To find the qutrit MUBs, we first fix one of the
orthonormal bases to be the eigenbasis of the HW operator $Z$, i.e. the
computational basis. The eigenkets of $Z$ have the following Bloch vectors and
angular parameters
$\displaystyle\ket{0}:n_{2}=1,\theta_{2}=0,n_{1}=n_{3}=n_{4}=0,\\{\theta_{1},\theta_{3},\theta_{4}\\}-\text{any
value},$
$\displaystyle\ket{1}:n_{2}=-1,\theta_{2}=\frac{\pi}{3},n_{1}=n_{3}=n_{4}=0,\\{\theta_{1},\theta_{3},\theta_{4}\\}-\text{any
value},$
$\displaystyle\ket{2}:n_{2}=1,\theta_{2}=\frac{2\pi}{3},n_{1}=n_{3}=n_{4}=0,\\{\theta_{1},\theta_{3},\theta_{4}\\}-\text{any
value}.$
Now any qutrit ket which is mutually unbiased to all the computational basis
kets must have $n_{2}=0$, while $\theta_{2}$ can take any arbitrary value, as
can also be deduced from Eq.(19). Hence, we need to find the remaining
mutually unbiased kets using the remaining weight parameters $n_{1},n_{3}$ and
$n_{4}$ and the angular parameters $\theta_{1},\theta_{3},\theta_{4}$.
We fix one such ket, $\vec{n_{\theta 0}}$, parametrized as
$\displaystyle\vec{n_{\theta
0}}:n_{1}=m_{1},n_{3}=m_{3},n_{4}=m_{4},\theta_{1}=\phi_{1},\theta_{3}=\phi_{3}\hskip
5.69054pt\text{and}\hskip 5.69054pt\theta_{4}=\phi_{4}.$
As shown in Appendix A, an orthonormal basis $N=\\{\vec{n_{\theta
0}},\vec{n_{\theta 1}},\vec{n_{\theta 2}}\\}$ including $\vec{n_{\theta 0}}$
is parametrized as
$\displaystyle\vec{n_{\theta
0}}:n_{1}=m_{1},n_{3}=m_{3},n_{4}=m_{4},\theta_{1}=\phi_{1},\theta_{3}=\phi_{3}\hskip
5.69054pt\text{and}\hskip 5.69054pt\theta_{4}=\phi_{4},$
$\displaystyle\vec{n_{\theta
1}}:n_{1}=m_{1},n_{3}=m_{3},n_{4}=m_{4},\theta_{1}=\phi_{1}^{\prime},\theta_{3}=\phi_{3}^{\prime}\hskip
5.69054pt\text{and}\hskip 5.69054pt\theta_{4}=\phi_{4}^{\prime},$
$\displaystyle\vec{n_{\theta
2}}:n_{1}=m_{1},n_{3}=m_{3},n_{4}=m_{4},\theta_{1}=\phi_{1}^{\prime\prime},\theta_{3}=\phi_{3}^{\prime\prime}\hskip
5.69054pt\text{and}\hskip 5.69054pt\theta_{4}=\phi_{4}^{\prime\prime}.$
We have derived the values of the angles $\phi^{\prime}_{1}$,
$\phi^{\prime}_{3}$, $\phi^{\prime}_{4}$, $\phi^{\prime\prime}_{1}$,
$\phi^{\prime\prime}_{3}$ and $\phi^{\prime\prime}_{4}$ relative to
$\phi_{1}$, $\phi_{3}$ and $\phi_{4}$ in Appendix A. It should be noted that
all the kets of the orthonormal basis lie at the same point
$\vec{n}=\\{m_{1},0,m_{3},m_{4}\\}$ in the four-D sphere. Moreover, from
Appendix A we note that the vectors mutually unbiased to the kets of the
orthonormal basis $N$ lie at the points with $(n_{1},n_{3},n_{4})$ components
$\vec{p}=\\{m_{3},m_{4},m_{1}\\}$ and $\vec{q}=\\{m_{4},m_{1},m_{3}\\}$. At
the points $\vec{p}$ and $\vec{q}$ we can again form orthonormal bases ${P}$
and ${Q}$ respectively, by varying the angular parameters only. The three
orthonormal bases thus lie at the end points of Bloch vectors obtained by
cyclic permutations of $\\{n_{1},n_{3},n_{4}\\}$, which we write down
explicitly:
$\displaystyle{N}:n_{1}=m_{1},n_{3}=m_{3}\hskip 4.26773pt\text{and}\hskip
4.26773ptn_{4}=m_{4},$ $\displaystyle{P}:n_{1}=m_{4},n_{3}=m_{1}\hskip
4.26773pt\text{and}\hskip 4.26773ptn_{4}=m_{3},$
$\displaystyle{Q}:n_{1}=m_{3},n_{3}=m_{4}\hskip 4.26773pt\text{and}\hskip
4.26773ptn_{4}=m_{1}.$ (22)
Therefore, there can be only three orthonormal bases that are mutually unbiased to
each other and simultaneously mutually unbiased to the computational
basis. Hence, we have obtained from the geometry of the Bloch hypersphere
that there are at most four mutually unbiased bases in three dimensions.
### VI.2 Characterization of Unital Maps
In this section, we characterize the unital maps acting on the qutrit states.
Unital maps are quantum operations that preserve the identity matrix or the
maximally mixed density matrix. It is known that the unital maps acting on a
qubit density matrix is characterized by a convex tetrahedronKing and Ruskai
(2001); Beth Ruskai _et al._ (2002).
To analyze the unital channels acting on a qutrit density matrix
$\rho=\frac{1}{3}\sum_{p,q}^{2}b_{pq}U_{pq}$ with Bloch vector $\vec{b}_{pq}$
(see Eq.(2)), we note that a linear quantum map can be written in the form of
an affine transformation acting on the $d^{2}-1=8$ dimensional Bloch vector.
Thus, every linear qutrit quantum map $\Phi:\mathbb{C}^{3\times
3}\rightarrow\mathbb{C}^{3\times 3}$ can be represented using a
$9\times 9$ matrix $\mathcal{L}=\begin{pmatrix}1&0\\\ l&L\end{pmatrix}$
acting on the column vector $(1,\vec{b_{pq}})$, where $L$ is an
$8\times 8$ matrix and $l$ is a column vector containing eight
elements. The action of the quantum channel
$\rho\rightarrow\Phi(\rho)=\frac{1}{3}\sum_{p,q}^{2}b_{pq}^{\prime}U_{pq}$ can
be written as
$\displaystyle\vec{b}\rightarrow\vec{b^{\prime}}=L\vec{b}+\vec{l}.$
By inspecting the HW expansion of Sec. III, it can be seen that, to ensure that
$\vec{b^{\prime}}$ corresponds to a Hermitian density matrix
$\Phi(\rho)$, it is necessary that 1) $L$ is a diagonal matrix with
eigenvalues $\\{\lambda_{01},\lambda_{02},...,\lambda_{22}\\}$ and 2) the
eigenvalues must be of the following form
$\displaystyle\lambda_{01}=\lambda_{1}e^{i\phi_{1}},\lambda_{02}=\lambda_{1}e^{-i\phi_{1}}$
$\displaystyle\lambda_{10}=\lambda_{2}e^{i\phi_{2}},\lambda_{20}=\lambda_{2}e^{-i\phi_{2}}$
$\displaystyle\lambda_{12}=\lambda_{3}e^{i\phi_{3}},\lambda_{21}=\lambda_{3}e^{-i\phi_{3}}$
$\displaystyle\lambda_{22}=\lambda_{4}e^{i\phi_{4}},\lambda_{11}=\lambda_{4}e^{-i\phi_{4}}.$
Next, we note that to preserve the identity matrix we must have $\vec{l}=\vec{0}$,
so that $\mathcal{L}=\begin{pmatrix}1&0\\\ 0&L\end{pmatrix}$. Now, to complete the
characterization of the map $\mathcal{L}$, we impose the complete positivity
requirement via Choi's theorem, which requires that the Choi matrix
$\mathbf{C}=(\mathbb{I}\otimes\Phi)(\sum_{ij}E_{ij}\otimes E_{ij})$ is
positive semidefinite. To simplify the problem, we find the eigenvalues only
when the angles $\phi_{i}=0$ $\forall i\in\\{1,2,3,4\\}$. The constraints on the
parameters $\\{\lambda_{i}\\}$ are given by
$\displaystyle 1+2\lambda_{1}-\lambda_{2}-\lambda_{3}-\lambda_{4}\geq 0,$
$\displaystyle 1-\lambda_{1}+2\lambda_{2}-\lambda_{3}-\lambda_{4}\geq 0,$
$\displaystyle 1-\lambda_{1}-\lambda_{2}+2\lambda_{3}-\lambda_{4}\geq 0,$
$\displaystyle 1-\lambda_{1}-\lambda_{2}-\lambda_{3}+2\lambda_{4}\geq 0,$
$\displaystyle 1+2\lambda_{1}+2\lambda_{2}+2\lambda_{3}+2\lambda_{4}\geq 0.$
(23)
The above constraints carve out a convex polytope with five vertices
$\displaystyle v_{1}=\\{1,1,1,1\\},$ $\displaystyle
v_{2}=\\{1,\frac{-1}{2},\frac{-1}{2},\frac{-1}{2}\\},v_{3}=\\{\frac{-1}{2},1,\frac{-1}{2},\frac{-1}{2}\\},$
$\displaystyle
v_{4}=\\{\frac{-1}{2},\frac{-1}{2},1,\frac{-1}{2}\\},v_{5}=\\{\frac{-1}{2},\frac{-1}{2},\frac{-1}{2},1\\}.$
It is an irregular polytope (a 4-simplex) with 10 edges, of which the 4 edges
incident on $v_{1}$ have Euclidean length $\sqrt{\frac{27}{4}}$ and the
remaining 6 edges have Euclidean length $\sqrt{\frac{9}{2}}$.
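These vertex and edge-length claims are easy to verify numerically; a short sketch (our own):

```python
import numpy as np
from itertools import combinations

# vertices of the polytope cut out by Eq. (23)
V = np.array([[1, 1, 1, 1],
              [1, -0.5, -0.5, -0.5],
              [-0.5, 1, -0.5, -0.5],
              [-0.5, -0.5, 1, -0.5],
              [-0.5, -0.5, -0.5, 1.0]])

# coefficient rows of the five constraints 1 + a.lambda >= 0
A = np.array([[ 2, -1, -1, -1],
              [-1,  2, -1, -1],
              [-1, -1,  2, -1],
              [-1, -1, -1,  2],
              [ 2,  2,  2,  2]])
assert (1 + V @ A.T >= -1e-12).all()   # every vertex satisfies every constraint

# pairwise (edge) lengths of the 4-simplex
for i, j in combinations(range(5), 2):
    print(i, j, np.linalg.norm(V[i] - V[j]))   # sqrt(27/4) for the 4 pairs with v1, sqrt(9/2) for the other 6
```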
It is insightful to visualize the effect of the action of the channel on a
state in the four-D sphere. The parameters $\\{\lambda_{i}\\}$ reduce the length
of each Bloch vector component from $n_{i}$ to $\lambda_{i}n_{i}$, thus
bringing the state closer to the origin. Notice that we obtained the
constraints in Eq.(23) assuming that all $\phi_{i}=0$, i.e., that there is no
change in the angular parameters $\theta_{i}$. The allowed values of the
$\phi_{i}$'s will therefore depend on how much the lengths of the Bloch vector
components have been reduced by the $\lambda_{i}$'s.
### VI.3 Characterization of Randomly Generated Density Matrices
In this section, we characterize the structure of the state space of randomly
generated density matrices using the four-D Bloch sphere. Specifically, we
show the representation of ensembles generated by the Hilbert-Schmidt and Bures
metrics Życzkowski _et al._ (2011); Życzkowski and Sommers (2001). The
infinitesimal Hilbert-Schmidt distance (Eq.(20)) between $\rho$ and
$\rho+\delta\rho$ has the very simple form
$d_{HS}^{2}=\Tr[(\delta\rho)^{2}]$. In $n$ dimensions, the probability
distribution induced by this metric, derived by Hall Hall (1998), is given by
$\displaystyle
P_{HS}(\lambda_{1},...,\lambda_{n})=C_{N}\delta(1-\sum_{i=1}^{n}\lambda_{i})\prod_{j<k}^{n}(\lambda_{j}-\lambda_{k})^{2},$
(24)
where $\lambda_{i}$’s are the eigenvalues of $\rho$ and $C_{N}$ is determined
by the normalization.
For mixed quantum states there is another useful distance measure, known as the
Bures distance Bures (1969); Uhlmann (1992)
$\displaystyle
D_{Bures}(\rho_{1},\rho_{2})=\sqrt{2(1-\Tr[(\sqrt{\rho_{1}}\rho_{2}\sqrt{\rho_{1}})^{1/2}])}.$
Similar to the Hilbert-Schmidt case, there exists an infinitesimal form of the
Bures metric, derived by Hübner Hübner (1992)
$\displaystyle
d_{Bures}^{2}=\frac{1}{2}\sum_{j,k=1}^{n}\frac{|\bra{j}\delta\rho\ket{k}|^{2}}{\lambda_{j}+\lambda_{k}},$
where again $\lambda_{k}$ and $\ket{k}$ are respectively the eigenvalues and
eigenvectors of $\rho$. For this metric also, the probability distribution was
derived by Hall Hall (1998), which is given by
$\displaystyle
P_{Bures}(\lambda_{1},...,\lambda_{n})=C_{N}^{\prime}\frac{\delta(1-\sum_{i=1}^{n}\lambda_{i})}{(\lambda_{1}\lambda_{2}\cdots\lambda_{n})^{1/2}}\prod_{j<k}^{n}\frac{(\lambda_{j}-\lambda_{k})^{2}}{\lambda_{j}+\lambda_{k}},$
(25)
where $C_{N}^{\prime}$ is again determined by the normalization. In Eqns.(24)
and (25), the probability distributions are defined on the simplex of
eigenvalues. However, we want to see how these probability distributions pick
out states from the Bloch sphere. For a two-dimensional state
$\rho=\frac{1}{2}(I+\vec{r}\cdot\sigma)$, we can translate the eigenvalues to
Bloch sphere parameters using the simple formulas $\lambda_{1}=\frac{1+r}{2}$
and $\lambda_{2}=\frac{1-r}{2}$, where $\lambda_{1},\lambda_{2}$ are the two
eigenvalues of $\rho$. By substituting these in Eqns.(24) and (25), we get the
following probability distributions in terms of Bloch sphere parameters Hall
(1998)
$\displaystyle P_{HS}(\vec{r})=\frac{3}{4\pi},$ (26) $\displaystyle
P_{B}(\vec{r})=\frac{4}{\pi\sqrt{1-r^{2}}}.$ (27)
We can see that both probability distributions depend only on the
radial parameter $r$. The HS distribution is uniform over the Bloch
ball, while the Bures distribution is sharply peaked at the surface of the
Bloch sphere.
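As a quick sanity check (not from the paper), the HS ensemble over qubits can be sampled via the standard Ginibre construction $\rho=GG^{\dagger}/\Tr(GG^{\dagger})$, which induces the HS measure; uniformity over the Bloch ball implies a radial density $3r^{2}$ and hence a mean radius of $3/4$:

```python
import numpy as np

rng = np.random.default_rng(0)
sigma = np.array([[[0, 1], [1, 0]], [[0, -1j], [1j, 0]], [[1, 0], [0, -1]]])
rs = []
for _ in range(20000):
    G = rng.normal(size=(2, 2)) + 1j*rng.normal(size=(2, 2))   # Ginibre matrix
    rho = G @ G.conj().T
    rho /= np.trace(rho).real
    r_vec = [np.trace(rho @ s).real for s in sigma]             # Bloch vector
    rs.append(np.linalg.norm(r_vec))
print(np.mean(rs))   # should be close to 3/4 for a uniform ball
```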
Next, we derive the form of these probability distributions with respect to
our representation of qutrit states. For a qutrit state $\rho_{3}$, its
eigenvalues $\lambda_{1},\lambda_{2}$ and $\lambda_{3}$ can be written
directly in terms of the Bloch sphere parameters $n_{i}$’s and angular
parameters $\theta_{i}$’s. However, a direct approach will lead to cumbersome
calculations. Instead, we write the eigenvalues $\lambda_{i}$’s in terms of
the characteristic equation coefficients $a_{i}$’s from Eq.(8) and substitute
in the Eqns.(24) and (25) which gives us the following
$\displaystyle P_{HS}(\vec{r},\zeta_{i},\theta_{i})$
$\displaystyle=C_{HS}\frac{\left((r-1)^{2}(2r+1)-27\text{Det}(\rho)\right)\left((r+1)^{2}(2r-1)+27\text{Det}(\rho)\right)}{27r^{3}},$
(28) $\displaystyle P_{B}(\vec{r},\zeta_{i},\theta_{i})$
$\displaystyle=C_{B}\frac{\left((r-1)^{2}(2r+1)-27\text{Det}(\rho)\right)\left((r+1)^{2}(2r-1)+27\text{Det}(\rho)\right)}{27r^{3}\left(\frac{1-r^{2}}{3}-\text{Det}(\rho)\right)\sqrt{\text{Det}(\rho)}}.$
(29)
where we have switched to polar representation with
$n_{1}=r\cos(\zeta_{1}),n_{2}=r\sin(\zeta_{1})\cos(\zeta_{2}),n_{3}=r\sin(\zeta_{1})\sin(\zeta_{2})\cos(\zeta_{3})$
and $n_{4}=r\sin(\zeta_{1})\sin(\zeta_{2})\sin(\zeta_{3})$. Also, $C_{HS}$ and
$C_{B}$ are constants determined by the normalization. In this form,
these probability distributions do not reveal much about the states
in the Bloch sphere because of their dependence on the angular parameters
$\theta_{i}$, which are not part of the four-D sphere. We can obtain a
distribution for a subset of states by fixing the $\theta_{i}$'s and then
analyzing the probability distributions. After fixing all the $\theta_{i}$
values (say, all zero), we get Det$(\rho)=f(r,\zeta_{1},\zeta_{2},\zeta_{3})$.
The distributions in Eqns.(28) and (29) are not invariant under
unitary operations, unlike in the qubit scenario. This is a signature of the
fact that not all points inside the four-D Bloch sphere represent physical
states.
After some algebraic calculation, we find that the HS distribution in
Eq.(28) is always positive, irrespective of Det($\rho)$ being positive or
negative, whereas the Bures distribution in Eq.(29) is positive iff
Det($\rho)\geq 0$ and hence picks out the closed structure of the qutrit states
inside the Bloch sphere. Moreover, the HS distribution is non-decreasing with
respect to the radial parameter $r$ everywhere, whereas the Bures
distribution is non-decreasing with respect to $r$ in the region where
Det($\rho)\geq 0$. It can also be seen that the Bures distribution is sharply
peaked whenever the denominator
$(\frac{1-r^{2}}{3}-\text{Det}(\rho))\sqrt{\text{Det}(\rho)}$ vanishes. While
$\text{Det}(\rho)=0$ for rank-2 or rank-1 states, the
$\frac{1-r^{2}}{3}-\text{Det}(\rho)$ term can vanish only at the surface of
the Bloch sphere or beyond.
Thus, if we fix the $\theta_{i}$'s, both distributions are localized
closer to the surface of the Bloch sphere. For the HS distribution this is
unlike what happens in the qubit scenario, where it is uniform over the
ball. The Bures distribution, on the other hand, is sharply peaked near or at the
surface of the Bloch sphere, similar to its behavior in the qubit
scenario, where it is sharply peaked on the surface. These results match the
plots presented in Fig. 2 of Ref. Życzkowski and Sommers (2001), which depicts
the distributions in the simplex of eigenvalues.
As an example, we fix all $\theta_{i}=0$ and the polar angles
$\zeta_{i}$ as $\zeta_{1}=\pi/3,\zeta_{2}=0,\zeta_{3}=\pi/7$, to see the
dependence on the radial parameter $r$, and obtain the following
$\displaystyle P_{HS}(r)=C_{HS}\frac{6-\sqrt{3}}{72}r^{3}$ $\displaystyle
P_{B}(r)=C_{B}\frac{162(6-\sqrt{3})r^{3}}{(\sqrt{4-12r^{2}+6.19r^{3}})(-32+24r^{2}+6.19r^{3})}.$
(30)
We see that in the chosen direction the HS distribution is peaked at the surface
of the Bloch sphere and is everywhere positive, while the Bures
distribution is sharply peaked at $r\approx 0.73$ and is negative for
$r>0.73$. This simply tells us that for the chosen $\theta_{i}$'s there are no
physical states beyond $r\approx 0.73$ in the chosen direction, and also that
there is a rank-2 state at $r\approx 0.73$. The other singularity of the Bures
distribution lies at $r\approx 1.02$, but $P_{B}(r)$ is negative after
$r=0.73$ and hence we ignore it.
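The claimed cutoff can be reproduced by scanning $\text{Det}(\rho)$ along the chosen direction (a sketch reusing the rho_qutrit helper from Sec. IV):

```python
import numpy as np

# direction used in Eq. (30): all theta_i = 0, zeta_1 = pi/3, zeta_2 = 0, zeta_3 = pi/7
z1, z2, z3 = np.pi/3, 0.0, np.pi/7
for r in np.linspace(0.60, 0.80, 21):
    n = (r*np.cos(z1), r*np.sin(z1)*np.cos(z2),
         r*np.sin(z1)*np.sin(z2)*np.cos(z3), r*np.sin(z1)*np.sin(z2)*np.sin(z3))
    d = np.linalg.det(rho_qutrit(n, (0, 0, 0, 0))).real
    print(f"r = {r:.2f}, det(rho) = {d:+.4f}")   # sign change near r ~ 0.73
```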
In Appendix B, we carry out the same analysis of the HS and Bures
distributions when the qutrit states are represented using Gell-Mann
operators. There we make similar observations: 1) the HS
distribution is always positive, whereas the Bures distribution is positive
iff Det$(\rho)\geq 0$; 2) the HS distribution is non-decreasing with respect to
the radial parameter, and hence the sampled states are localized on the surface
of the convex structure of states; and 3) the Bures distribution is
non-decreasing for Det$(\rho)\geq 0$ and blows up at the surface of the Bloch
sphere or for rank-2 states.
## VII Bloch vector for qudits
This approach of separating the weight parameters and the angular parameters in
the HW-operator based representation can also be extended to higher
dimensions. If we consider a qudit of dimension $d$, the number of HW
operators is $d^{2}-1$ (excluding the identity matrix), and among them there
are $d+1$ sets of commuting HW operators containing $d-1$ HW operators each.
Each set has one weight parameter $w_{i}$ and $d-2$ angular parameters
associated with it, so that there are in total $d^{2}-1$ real parameters.
In this way one can construct a $(d+1)$-dimensional Bloch sphere built from the
$d+1$ weight parameters. However, as the dimension increases, this analysis
only becomes more complex.
## VIII Conclusion
To conclude, we have used the HW operator basis to represent a qutrit state.
In doing so, we identified eight independent parameters consisting of four
weight and four angular parameters and constructed a four-D Bloch sphere
representation of the qutrit. We have obtained the constraints which must be
satisfied for the parametrization to represent a physical qutrit. This
representation seems like a natural extension of the qubit Bloch sphere
because of the following properties.
1. 1.
The one and two-dimensional sections are symmetric with respect to the axes.
2. 2.
Purity of a state depends on the length of the Bloch vector.
3. 3.
Rank of the states can be identified to some extent by looking at the distance
from the origin of the four-D sphere.
4. 4.
The conditions of orthogonality and mutual unbiasedness of two Bloch vectors
have a lot of similarity with the conditions for qubit Bloch vectors.
5. 5.
Hilbert-Schmidt distance between two qutrit states (with the same angular
parameter values $\theta_{i}$) is proportional to the Euclidean distance in the
four-D sphere.
We have applied our Bloch vector representation to show that there can be a
maximum of four MUBs in three dimensions. The characterization of unital maps
acting on qutrits is also demonstrated using our representation. We have also
characterized randomly generated density matrices, with the probability
distributions induced by the Hilbert-Schmidt and Bures distances. Lastly, we
have outlined the basic steps required to extend this representation to
dimensions greater than three.
One future direction based on this work is to identify the
structure of the allowed set of points which represent a physical qutrit
state. Another significant direction is to generalize this representation to
dimensions higher than three.
As we have shown in this paper, the geometry of the Bloch sphere limits
the number of MUBs for qubits and qutrits. This approach can
be used to study the existence of MUBs in 6 dimensions, where the maximum
number of MUBs is not yet known Grassl (2004); Raynal _et al._ (2011);
Bengtsson _et al._ (2007). An extension of the characterization of unital
maps would be to characterize qutrit entanglement-breaking channels, similar to
qubit entanglement-breaking channels Ruskai (2003). Similar to the
characterization of ensembles generated by the HS and Bures metrics, another
interesting study would be to identify the form of the Fubini-Study metric and the
corresponding volume element Bengtsson _et al._ (2002). Such an analysis
could be useful for sampling pure qutrit states and averaging over them.
Our four-D sphere representation could also find significant applications in
studying the dynamics of qudit states and in finding the constants of motion in
$d$-level systems. It can also be used to detect entanglement of bipartite
systems and to identify the reachable states in open-system dynamics. We hope
that this approach leads to better insight into the study of qudit systems and
their dynamics.
## References
* Bertlmann and Krammer (2008) R. A. Bertlmann and P. Krammer, Journal of Physics A: Mathematical and Theoretical 41, 235303 (2008).
* Fano (1983) U. Fano, Rev. Mod. Phys. 55, 855 (1983).
* Brüning _et al._ (2012) E. Brüning, H. Mäkelä, A. Messina, and F. Petruccione, Journal of Modern Optics 59, 1 (2012).
* Kimura (2003) G. Kimura, Physics Letters A 314, 339 (2003).
* Kimura and Kossakowski (2004) G. Kimura and A. Kossakowski, “The bloch-vector space for n-level systems – the spherical-coordinate point of view,” (2004), arXiv:quant-ph/0408014 [quant-ph] .
* Kryszewski and Zachcial (2006) S. Kryszewski and M. Zachcial, arXiv e-prints , quant-ph/0602065 (2006), arXiv:quant-ph/0602065 [quant-ph] .
* Mendaš (2006) I. P. Mendaš, Journal of Physics A: Mathematical and General 39, 11313 (2006).
* Goyal _et al._ (2016) S. K. Goyal, B. N. Simon, R. Singh, and S. Simon, Journal of Physics A: Mathematical and Theoretical 49, 165203 (2016).
* Kurzyński (2011) P. Kurzyński, Quantum Info. Comput. 11, 361–373 (2011).
* Kurzyński _et al._ (2016) P. Kurzyński, A. Kołodziejski, W. Laskowski, and M. Markiewicz, Phys. Rev. A 93, 062126 (2016).
* Eltschka _et al._ (2021) C. Eltschka, M. Huber, S. Morelli, and J. Siewert, Quantum 5, 485 (2021).
* Bengtsson _et al._ (2013) I. Bengtsson, S. Weis, and K. Życzkowski, in _Geometric Methods in Physics_ , edited by P. Kielanowski, S. T. Ali, A. Odzijewicz, M. Schlichenmaier, and T. Voronov (Springer Basel, Basel, 2013) pp. 175–197.
* Vourdas (2004) A. Vourdas, Reports on Progress in Physics 67, 267 (2004).
* Asadian _et al._ (2016) A. Asadian, P. Erker, M. Huber, and C. Klöckl, Phys. Rev. A 94, 010301 (2016).
* Cotfas and Dragoman (2012) N. Cotfas and D. Dragoman, Journal of Physics A: Mathematical and Theoretical 45, 425305 (2012).
* Baumgartner _et al._ (2006) B. Baumgartner, B. C. Hiesmayr, and H. Narnhofer, Phys. Rev. A 74, 032327 (2006).
* Asadian _et al._ (2015) A. Asadian, C. Budroni, F. E. S. Steinhoff, P. Rabl, and O. Gühne, Phys. Rev. Lett. 114, 250403 (2015).
* Bennett _et al._ (1993) C. H. Bennett, G. Brassard, C. Crépeau, R. Jozsa, A. Peres, and W. K. Wootters, Phys. Rev. Lett. 70, 1895 (1993).
* Ruskai (2003) M. B. Ruskai, Reviews in Mathematical Physics 15, 643 (2003), https://doi.org/10.1142/S0129055X03001710 .
* Gibbons _et al._ (2004) K. S. Gibbons, M. J. Hoffman, and W. K. Wootters, Phys. Rev. A 70, 062101 (2004).
* Wilde (2013) M. M. Wilde, _Quantum Information Theory_ (Cambridge University Press, 2013).
* Sharma _et al._ (2021) G. Sharma, S. Sazim, and S. Mal, Phys. Rev. A 104, 032424 (2021).
* King and Ruskai (2001) C. King and M. B. Ruskai, IEEE Transactions on Information Theory 47, 192 (2001).
* Beth Ruskai _et al._ (2002) M. Beth Ruskai, S. Szarek, and E. Werner, Linear Algebra and its Applications 347, 159 (2002).
* Życzkowski _et al._ (2011) K. Życzkowski, K. A. Penson, I. Nechita, and B. Collins, Journal of Mathematical Physics 52, 062201 (2011).
* Życzkowski and Sommers (2001) K. Życzkowski and H.-J. Sommers, Journal of Physics A: Mathematical and General 34, 7111 (2001).
* Hall (1998) M. J. Hall, Physics Letters A 242, 123 (1998).
* Bures (1969) D. Bures, Transactions of the American Mathematical Society 135, 199 (1969).
* Uhlmann (1992) A. Uhlmann, in _Groups and related Topics_ (Springer, 1992) pp. 267–274.
* Hübner (1992) M. Hübner, Physics Letters A 163, 239 (1992).
* Grassl (2004) M. Grassl, “On sic-povms and mubs in dimension 6,” (2004), arXiv:quant-ph/0406175 [quant-ph] .
* Raynal _et al._ (2011) P. Raynal, X. Lü, and B.-G. Englert, Phys. Rev. A 83, 062303 (2011).
* Bengtsson _et al._ (2007) I. Bengtsson, W. Bruzda, Å. Ericsson, J.-Å. Larsson, W. Tadej, and K. Życzkowski, Journal of mathematical physics 48, 052106 (2007).
* Bengtsson _et al._ (2002) I. Bengtsson, J. Brännlund, and K. Życzkowski, International Journal of Modern Physics A 17, 4675 (2002).
## Appendix A Finding the orthonormal bases mutually unbiased to the
computational basis
Here we show how to construct an orthonormal basis, given a qutrit ket
parametrized by $\vec{n_{\theta
0}}:n_{1}=m_{1},n_{2}=0,n_{3}=m_{3},n_{4}=m_{4},\theta_{1}=\phi_{1},\theta_{2}=\text{any
number},\theta_{3}=\phi_{3}\hskip 5.69054pt\text{and}\hskip
5.69054pt\theta_{4}=\phi_{4}$, such that the orthonormal basis is mutually
unbiased with respect to the computational basis. Since we are looking for
pure states mutually unbiased to the computational basis, we can also
represent $\vec{n_{\theta 0}}$ using only the two parameters
$\\{\alpha=\delta,\beta=\gamma\\}$ as
$\displaystyle\vec{n_{\theta 0}}\equiv\frac{1}{\sqrt{3}}\begin{pmatrix}1\\\
e^{i\delta}\\\
e^{i\gamma}\end{pmatrix}\rightarrow\frac{1}{3}\begin{pmatrix}1&e^{-i\delta}&e^{-i\gamma}\\\
e^{i\delta}&1&e^{-i(\gamma-\delta)}\\\
e^{i\gamma}&e^{i(\gamma-\delta)}&1\end{pmatrix}.$
The corresponding Bloch vector parameters are obtained by comparing the
Bloch vector representation with the above two-parameter representation:
$\displaystyle
n_{1}=\frac{1}{3}\sqrt{3+2\cos(\gamma-2\delta)+2\cos(2\gamma-\delta)+2\cos(\gamma+\delta)},$
$\displaystyle
n_{3}=\frac{1}{3}\sqrt{3+2\cos(\gamma-2\delta-\frac{2\pi}{3})+2\cos(2\gamma-\delta+\frac{2\pi}{3})+2\cos(\gamma+\delta-\frac{2\pi}{3})},$
$\displaystyle
n_{4}=\frac{1}{3}\sqrt{3+2\cos(\gamma-2\delta+\frac{2\pi}{3})+2\cos(2\gamma-\delta-\frac{2\pi}{3})+2\cos(\gamma+\delta+\frac{2\pi}{3})},$
$\displaystyle
e^{i\theta_{1}}=\frac{e^{-i\delta}+e^{-i(\gamma-\delta)}+e^{i\gamma}}{3n_{1}},e^{i\theta_{3}}=\frac{e^{i\delta}+\omega
e^{-i\gamma}+\omega^{2}e^{i(\gamma-\delta)}}{3n_{3}},e^{i\theta_{4}}=\frac{e^{i\delta}+\omega^{2}e^{-i\gamma}+\omega
e^{i(\gamma-\delta)}}{3n_{4}},$ (31)
where $\omega=e^{i\frac{2\pi}{3}}$ is the cube root of unity. In this
representation, a ket orthogonal to $\vec{n_{\theta 0}}$ must have parameters
$\vec{n_{1}}$: $\\{\alpha=\delta+\frac{2\pi}{3},\beta=\gamma-\frac{2\pi}{3}\\}$
or
$\vec{n_{2}}$: $\\{\alpha=\delta-\frac{2\pi}{3},\beta=\gamma+\frac{2\pi}{3}\\}$.
It is straightforward to observe from Eq.(31) that for orthonormal vectors the
weight parameters $n_{i}$ are the same; only the angular parameters
$\theta_{i}$ vary.
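This observation is easy to verify numerically; a minimal sketch (the ket helper is our own naming):

```python
import numpy as np

def ket(alpha, beta):
    """Two-phase ket mutually unbiased to the computational basis."""
    return np.array([1, np.exp(1j*alpha), np.exp(1j*beta)]) / np.sqrt(3)

d, g = 0.7, 1.9                       # arbitrary delta, gamma
k0 = ket(d, g)
k1 = ket(d + 2*np.pi/3, g - 2*np.pi/3)
k2 = ket(d - 2*np.pi/3, g + 2*np.pi/3)
# the three kets form an orthonormal basis ...
for a, b in [(k0, k1), (k0, k2), (k1, k2)]:
    assert np.isclose(abs(np.vdot(a, b)), 0)
# ... and each is unbiased to the computational basis (|<j|psi>|^2 = 1/3)
assert np.allclose(np.abs(k0)**2, 1/3)
```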
Moreover, the orthonormal bases ${P}$ and ${Q}$, which are mutually unbiased to
the basis ${N}$ (and to each other), are obtained by changing the angular
parameters to $\alpha=\delta+\frac{2\pi}{3},\beta=\gamma$ and
$\alpha=\delta+\frac{4\pi}{3},\beta=\gamma$ respectively, and then forming an
orthonormal basis from each. It can be seen that on moving between the MUBs
${N}$, ${P}$ and ${Q}$, the weight parameters permute cyclically, which
concludes the proof.
## Appendix B Random Density Matrices in the Gell-Mann operator representation
Using the Gell-Mann operator basis, one can also write a qutrit state in the
following way Bertlmann and Krammer (2008)
$\displaystyle\rho=\frac{1}{3}(\mathbb{I}+\sum_{i=1}^{d^{2}-1}g_{i}\Lambda_{i}),$
(32)
where $\Lambda_{i}$ are the Gell-Mann operators in three dimensions and
$g_{i}=\Tr(\Lambda_{i}\rho)$ form the components of the eight-
dimensional (eight-D) Bloch vector $\vec{g}$. The eight Gell-Mann operators in
three dimensions comprise diagonal, symmetric and antisymmetric matrices, but
for simplicity we denote all of them by $\Lambda_{i}$. Using a trick similar
to that used for the Weyl operator representation, we obtain the HS and
Bures distributions in terms of the Bloch vector parameters $g_{i}$ as
follows
$\displaystyle P_{HSG}(\vec{r_{g}},\gamma_{i})$
$\displaystyle=\frac{C_{HSG}}{r^{7}}\left(\frac{1}{729}(r^{2}-3)^{2}(4r^{2}-3)+(2-2r^{2}-27\text{Det}(\rho))\text{Det}(\rho)\right),$
(33) $\displaystyle P_{BG}(\vec{r_{g}},\gamma_{i})$
$\displaystyle=\frac{C_{BG}}{r^{7}(3-r^{2}-9\text{Det}(\rho))\sqrt{\text{Det}(\rho)}}\Big{(}\frac{1}{729}(r^{2}-3)^{2}(4r^{2}-3)$
$\displaystyle+(2-2r^{2}-27\text{Det}(\rho))\text{Det}(\rho)\Big{)},$ (34)
where we have switched to polar representation with $r_{g}$ being the radial
distance in the eight-D Bloch sphere and $\gamma_{i}$’s are the seven polar
angles. $C_{HSG}$ and $C_{BG}$ are constants determined by the normalization.
As in the Weyl representation, here also the HS distribution is always
positive inside the eight-D Bloch sphere, irrespective of Det$(\rho)$ being
positive or negative. Also, it is non-decreasing with respect to $r_{g}$. Thus
the sampled states are localized at the surface of the Bloch sphere.
The Bures distribution also behaves similarly to the Weyl representation. It is
positive iff Det$(\rho)\geq 0$, and it is non-decreasing for
Det$(\rho)\geq 0$. The singularity in $P_{BG}(\vec{r_{g}},\gamma_{i})$ occurs
either at $(3-r^{2}-9\text{Det}(\rho))=0$ or when $\sqrt{\text{Det}(\rho)}=0$.
The first condition is possible only at or beyond the surface of the eight-D
sphere, whereas $\text{Det}(\rho)=0$ can happen for rank-1 or rank-2 states,
i.e., at the surface of the structure formed by the qutrit states. Thus,
$P_{BG}$ is sharply localized at the surface of the convex structure formed by
the qutrit states.
# Shape Back-Projection In 3D Scenes
Ashish Kumar1, L. Behera1, Senior Member IEEE, K. S. Venkatesh1
https://github.com/ashishkumar822
1Ashish Kumar, L. Behera and K. S. Venkatesh are with the Department of
Electrical Engineering, Indian Institute of Technology, Kanpur.<EMAIL_ADDRESS>
###### Abstract
In this work, we propose shape back-projection, a novel framework for
computationally efficient point cloud processing in a probabilistic manner.
The primary components of the technique are a shape histogram and a back-
projection procedure. The technique measures similarity between 3D surfaces
by analyzing their geometrical properties. It is analogous to color back-
projection, which measures similarity between images simply by looking at
their color distributions. In the overall process, first, the shape histogram of a
sample surface (e.g. planar) is computed, which captures the profile of
surface normals around a point in the form of a probability distribution. Later,
the histogram is back-projected onto a test surface and a likelihood score is
obtained. The score indicates how likely a point on the test surface is to
behave, geometrically, like the sample surface. Shape back-projection
finds application in binary surface classification, high-curvature edge
detection in unorganized point clouds, automated point cloud labeling for
3D-CNNs (convolutional neural networks), etc. The algorithm can also be used for
real-time robotic operations such as autonomous object picking in warehouse
automation and ground plane extraction for autonomous vehicles, and can be
deployed easily on computationally limited platforms (UAVs).
## I Introduction
Accurate depth sensing and efficient processing of depth data are crucial for a
robotic system to reliably perform tasks such as object pick and place or
autonomous navigation. Depth data is acquired by depth sensors and is commonly
represented as point clouds. Current depth sensors can provide millions of
points in near real time; however, processing them, in general, requires
huge computing power. Hence, driven by the significance of the depth
information, in this paper we focus on exploring local geometrical properties
of a point cloud so that multiple tasks in the area of 3D perception can be
addressed with a single computation.
Typically, a point cloud is either organized or unorganized. An organized
cloud is represented as a 2D matrix in which each location corresponds to a 3D
point, similar to a pixel in an image. Such clouds offer straightforward use
of 2D image processing techniques (e.g. edge detection) and facilitate
fast nearest neighbor computations. On the other hand, unorganized clouds are
simply collections of 3D points which do not convey any spatial or
structural connectivity. These clouds are often represented in the form of kD-
trees [1] or Octrees [2], which facilitate efficient (but slower than
organized) nearest neighbor search [3] as well as reduced memory storage. The most
common sources of organized point clouds are stereo cameras and Time-of-Flight
cameras such as Microsoft Kinect, Intel RealSense, IDS-Ensenso etc., whereas
LIDAR [4] sensors are a major source of unorganized point clouds. Due to their
long range and high accuracy, they are preferred in autonomous driving and
navigation [5], [6], warehouse automation, robotic object manipulation and
other industrial applications.
Figure 1: Output of the algorithm for tasks such as plane segmentation, curved
region segmentation, and edge detection: (a) point cloud, (b) predicted planar
regions, (c) predicted curved regions, (d) predicted edges.
In order to advance the state of the art in the above areas, several worldwide
robotic challenges have been hosted previously, such as the Amazon Picking
Challenge (2015 and 2016) and Amazon Robotics Challenge (2017) for warehouse
automation, MBZIRC, and the DARPA humanoid [7] and autonomous driving [8] challenges.
Interestingly, they share a large technological overlap, which primarily
includes object detection and segmentation in images and point
clouds, edge detection (2D or 3D), object pose estimation, and 3D model fitting
for robot grasping.
The state-of-the-art (SOA) algorithms for object detection [9], [10], [11], [12],
[13], [14], [15] in images are based on Convolutional Neural Network (CNN)
architectures [16], [17], [18]. CNNs have also been employed in 3D
perception, such as point cloud segmentation [19], semantic 3D scene
completion [20], and 3D object segmentation [21]. These variants of CNNs are
known as 3D-CNNs. 3D-CNNs require a huge amount of labeled point cloud data,
which comes at the cost of specialized software and exhaustive manual effort.
Despite their accuracy, their computationally and memory-intensive nature limits
their scope for real-time applications [22], [23] on limited computing
platforms such as Unmanned Aerial Vehicles (UAVs).
As an alternative, traditional feature matching [24], [25] and consensus based
model fitting [26] methods are preferred. The former involves computing
handcrafted features based on local geometrical information of a model point
cloud and matching them with the features computed for a target point cloud.
The feature matching procedure is a compute-intensive task, and its performance
deteriorates severely even for minor 3D surface variations between the
model and the target. Inconsistent depth data is the primary reason for this,
as it often results in inaccurate feature estimation and matching.
On the other hand, the consensus based methods RANSAC [27] and LMedS [28] are
a primary choice to fit primitive shapes (e.g. plane, cylinder or sphere) into
point clouds. These methods perform iterative random sampling of the input
points and estimate the model parameters (plane coefficients, cylinder or sphere
radii). However, the sampling and estimation process becomes computationally
inefficient when a large number of points are irrelevant to the model to be
fit. In the case of multiple model instances, the algorithm must be iterated
exactly as many times as there are instances, a number which, in some
applications, is not known beforehand. Moreover, their ability to deal only with
primitive shapes limits their applicability in real world scenarios, because
many objects are complex shaped, i.e. neither a plane nor a cylinder.
Furthermore, robust edge detection in point clouds is quite important for
various robotic applications. It can be achieved in organized point clouds
(depth images) simply by using the edge detection techniques for RGB images,
whereas an unorganized cloud needs special treatment. Such cases generally
occur when a raw organized point cloud undergoes noise removal preprocessing
operations and loses its spatial and structural connectivity (becoming an
unorganized cloud). Sometimes, the depth sensors (LIDARs) directly provide
unorganized point cloud data. Due to the lost spatial relations, edge detection
in unorganized point clouds becomes quite challenging and has gained only a
little attention [29]. That approach is based on eigenvalue analysis, with all
results reported only for synthetic data.
In this paper, we propose shape back-projection, a novel probabilistic
framework which addresses all of the above limitations in a non-iterative
manner. The most crucial part of the algorithm is the shape histogram, which
encodes 3D information in such a way that it can be utilized for a number
of applications while avoiding complex computations. We experimentally
demonstrate that shape back-projection can be deployed independently for the
tasks of point cloud classification and edge detection, especially in
unorganized clouds. Unlike consensus based methods, the proposed algorithm can
deal with any number of instances without manual specification. The algorithm
also outperforms a recent algorithm [29] for edge detection in unorganized
point clouds. Moreover, it can also be effective in facilitating the automated
labeling of 3D point cloud data required for 3D-CNNs.
In the following sections, we first lay the preliminary ground on which the
whole idea is based (Sec. II). Then, we discuss the core of the algorithm (Sec.
III). At the end, we report a comprehensive experimental analysis (Sec. IV) and
discuss the primary applications of shape back-projection.
## II Preliminaries
### II-A Color Histograms
Figure 2: (a) Sample image, (b) corresponding normalized HSV histograms
$P(color|I)$ over color intensity for the Hue, Saturation and Value channels,
(c) test image, and (d) back-projection of the hue histogram of the object
pixels in the sample image onto the test image.
A color histogram is a discretization of a color space such as (Red, Green,
Blue) or (Hue, Saturation, Value) and represents a frequency distribution
over the pixel colors in an image. Each component of a color space is referred
to as a channel. Let $C$ be a $k$-bin color histogram to be computed over a
single color channel. First, a $bin$-$id$ (Eq. 1) for a pixel is obtained and
the corresponding bin value is incremented by one. This process is repeated
for all pixels, or only for a desired subset. The obtained frequency
distribution ($C$) is then normalized ($C_{n}$) by the number of pixels which
participated in the histogram computation. The normalization step essentially
scales all the bin values so that they sum to one and the histogram follows
the properties of a valid probability density function. Fig. 2b shows single-
channel normalized histograms of the H, S, V components of a sample image
(Fig. 2a).
$bin\text{-}id=\Bigl{\lfloor}\frac{color}{k}\Bigr{\rfloor}$ (1)
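As a minimal sketch of this computation (our illustration, not code from the paper; note that the divisor $k$ in Eq. 1 effectively acts as a bin width, so for an 8-bit channel with $k=16$ one obtains a 16-bin histogram):

```python
import numpy as np

def color_histogram(channel, k=16, levels=256):
    """Normalized single-channel color histogram (cf. Eq. 1).

    `channel` is a flat array of 8-bit color values; following Eq. 1,
    bin-id = floor(color / k), which for k = 16 and 256 intensity
    levels yields a 16-bin histogram.
    """
    bin_ids = channel.astype(int) // k                    # Eq. 1
    C = np.bincount(bin_ids, minlength=levels // k).astype(float)
    return C / C.sum()                                    # C_n: sums to one
```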
### II-B Color Back-projection
Back-projection [30] is a technique to identify test data patterns behaving
similarly to a given distribution. In the context of images, color histograms
are back-projected to localize color patterns similar to a given color
histogram. In the first step of back-projection, the normalized color
histogram of the pixels of interest in a sample image is computed (object
pixels in Fig. 2a). Later, for each pixel in a test image (Fig. 2c), the
$bin$-$id$ is obtained and the pixel is assigned a score (Eq. 2) equal to the
value of that bin in the color histogram. The score is referred to as the
back-projection likelihood, which indicates how likely it is that a pixel in
the test image belongs to the object in the sample image.
$P(pixel=color\ |\ C_{n})=C_{n}(bin\text{-}id)$ (2)
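A corresponding sketch (again ours, reusing the helper above and the same assumptions):

```python
def back_project(test_channel, C_n, k=16):
    """Back-projection likelihood P(pixel = color | C_n) from Eq. 2."""
    return C_n[test_channel.astype(int) // k]             # per-pixel bin lookup
```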
In general, the hue component of the HSV color space is preferred for color
back-projection because it carries the chrominance information of a pixel
color. However, the selection of the color component may vary from application
to application. Color back-projection has been explored previously in various
applications such as real-time object tracking in images using Mean-Shift
[31] and Cam-Shift [32].
## III Shape Back-Projection
Shape back-projection is inspired by color back-projection and is built around
a shape histogram analogous to a color histogram. By back-projecting shape
histograms onto 3D surfaces, points of a particular interest can be obtained
in a probabilistic manner, similar to obtaining pixels of interest in the
color back-projection process.
### III-A Shape Histograms
A shape histogram is designed by carefully observing the geometrical
properties of 3D surfaces, such as normals and curvature. To understand this,
consider a planar and a curved surface ($S$) as shown in Fig. 3a and 3c. Let
$p_{i},p_{j}\in S$ be a point and one of its neighbors, respectively, and
$n_{i},n_{j}$ be their normals. For each ($p_{i},p_{j}$) pair, we define an
angle $\alpha_{ij}$ between $n_{i}$ and $n_{j}$. An individual $\alpha_{ij}$
does not convey much information; however, the $\alpha_{ij}s$ over all
neighbors of $p_{i}$ capture the behavior of local surface variations around
$p_{i}$. By visual inspection of Fig. 3b and 3d, it can be inferred that the
larger the $\alpha$, the more curved the surface is around $p_{i}$. Based on
this fact, we exploit the information contained in the $\alpha_{ij}s$, which
we call the Inter Normal Angle Difference (INAD). We parameterize the INAD as
a mean and standard deviation ($\mu,\sigma$) value pair of the $\alpha_{ij}s$
for each $p_{i}$. The INAD pair essentially captures a Gaussian distribution
$\mathcal{N}(\mu,\sigma^{2})$ of the curvature around $p_{i}$, given by
Eqs. 3, 4.
$\mu_{i}=\frac{1}{N}\sum_{j=1}^{N}\alpha_{ij}$ (3)
$\sigma_{i}=\sqrt{\frac{1}{N}\sum_{j=1}^{N}(\alpha_{ij}-\mu_{i})^{2}}$ (4)
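A direct sketch of Eqs. 3 and 4 (ours; Eq. 4 uses the population standard deviation):

```python
def inad(alphas):
    """INAD pair (mu_i, sigma_i) over the inter-normal angles alpha_ij (Eqs. 3, 4)."""
    alphas = np.asarray(alphas, dtype=float)
    mu = alphas.mean()                                    # Eq. 3
    sigma = np.sqrt(np.mean((alphas - mu) ** 2))          # Eq. 4
    return mu, sigma
```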
Further, in order to obtain the shape histogram of a given surface, the INAD
for each $p_{i}\in S$ is first computed. Later, a cumulative distribution of
these INAD pairs is obtained in a way similar to the color histogram
computation. It is important to note that the INAD is composed of two values,
$\mu$ and $\sigma$. A shape histogram is therefore two dimensional, and to
accommodate this, Eq. 1 can be rewritten as Eq. 5, where $k_{\mu},k_{\sigma}$
are the number of bins along the $\mu$ and $\sigma$ axes, respectively:
$bin\text{-}id_{\mu_{i}}=\Bigl{\lfloor}\frac{\mu_{i}}{k_{\mu}}\Bigr{\rfloor},\qquad bin\text{-}id_{\sigma_{i}}=\Bigl{\lfloor}\frac{\sigma_{i}}{k_{\sigma}}\Bigr{\rfloor}$ (5)
Next, the obtained cumulative distribution is normalized by dividing it by the
maximum value in the distribution. The normalized cumulative distribution is
termed a shape histogram. Fig. 3e-3h depicts the shape histograms of a
synthetically generated planar and a curved surface for different values of
the neighborhood search radius $r$.
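Putting Eq. 5 and the max-normalization together (a hypothetical sketch; as in Eq. 1, the divisors act as bin widths, and the bin counts here are our choice):

```python
def shape_histogram(mus, sigmas, k_mu=10.0, k_sigma=10.0, bins=(10, 10)):
    """2-D shape histogram over INAD pairs (Eq. 5), normalized by its maximum."""
    bi_mu = np.clip((np.asarray(mus) // k_mu).astype(int), 0, bins[0] - 1)
    bi_sg = np.clip((np.asarray(sigmas) // k_sigma).astype(int), 0, bins[1] - 1)
    SH = np.zeros(bins)
    np.add.at(SH, (bi_mu, bi_sg), 1.0)                    # accumulate INAD pairs
    return SH / SH.max()
```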
[Figure 3 panels (a)-(h); histogram axes: $\mu$ ($\deg$) vs. $\sigma$ ($\deg$), range $0$-$100$]
Figure 3: (a), (c) Sample planar and curved surfaces. (b), (d) Corresponding
profiles of surface normals: points (red) and normals (blue). (e), (f) Shape
histograms of the planar surface, and (g), (h) of the curved surface, for
different values of the neighborhood search radii $r1$ and $r2$, where
$r1<r2$. Greener indicates higher probability.
The INAD computations are entirely based on surface normals. Therefore, noisy
normals can severely affect the shape histograms. Most of the noise comes
directly from the depth sensors and is compounded by the approximation errors
introduced by the normal estimation process. Hence, it becomes necessary to
deal with the case of noisy normals. In practice, it is impossible to obtain
100% noise-free normals. Thus, we employ a noise cancellation procedure based
on a simple statistical outlier removal technique. Before discussing the noise
cancellation procedure, we briefly introduce the normal estimation process
below.
#### III-A1 Normal Estimation
Let $p_{i}$ be any point in $\mathbb{R}^{3}$ and $N\subset S$ be the set of
all neighboring points around $p_{i}$ in a spherical region of radius $r$. A
covariance matrix ($\mathcal{C}$) in $\mathbb{R}^{3}$ is computed from the
point set $\{p_{i}\}\cup N$ and decomposed into eigenvectors. The matrix
$\mathcal{C}$ essentially captures the spatial behavior (variance or spread)
of the neighboring points around $p_{i}$, and an eigenvector quantifies both
the direction and the amount of spread. In general, a surface has minimal
spread in the direction normal to it. Therefore, the eigenvector with the
lowest eigenvalue can be taken as an approximation to the normal at $p_{i}$.
The above methodology is the simplest and fastest approximation of normals,
and practical implementations are publicly available in the Point Cloud
Library (PCL) [33].
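A compact sketch of this PCA-based estimate (ours; PCL provides an equivalent, optimized implementation):

```python
def estimate_normal(points):
    """Approximate the surface normal of a neighborhood by PCA.

    `points` is an (m, 3) array holding the query point and its
    neighbors within radius r; the eigenvector of the covariance
    matrix with the smallest eigenvalue is the direction of least
    spread, i.e. the normal.
    """
    centered = points - points.mean(axis=0)
    cov = centered.T @ centered / len(points)             # 3x3 covariance matrix
    eigvals, eigvecs = np.linalg.eigh(cov)                # eigenvalues ascending
    return eigvecs[:, 0]                                  # least-spread direction
```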
#### III-A2 Noise Removal
After the normal estimation process, the $\alpha_{ij}s$ for each $p_{i}$ are
computed using the inverse cosine rule,
$\alpha_{ij}=\cos^{-1}\Bigl{(}\frac{n_{i}\cdot n_{j}}{|n_{i}||n_{j}|}\Bigr{)}$.
At this stage, we assume that some of the $\alpha_{ij}s$ may be noisy. To
filter such values, we perform a statistical outlier rejection operation on
the $\alpha_{ij}s$. This step eliminates noisy normals to a great extent,
leading to consistent normals in the output. The mathematical treatment of the
operation is given by Eqs. (3, 4, 6). Here, $c$ is referred to as the outlier
rate, which controls the number of outliers in the output. The decision for a
point to be an outlier or inlier is given by Eq. 6.
$p_{j}\ \text{is}\ \begin{cases}\text{Inlier},&\text{if }\frac{|\alpha_{ij}-\mu_{i}|}{\sigma_{i}}\leq c\\\ \text{Outlier},&\text{otherwise}\end{cases}$ (6)
The filtered $\alpha_{ij}s$ are then plugged back into Eqs. 3, 4 in order to
compute the new values of $\mu_{i}$ and $\sigma_{i}$, which collectively
represent the INAD at $p_{i}$. It must be noted that any point $p_{j}\in N$
marked as an outlier is not included in the computation of $\mu_{i}$ and
$\sigma_{i}$. Fig. 3e and 3g show the shape histograms computed for a planar
and a curved surface.
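The filtering step, sketched under the same assumptions (reusing the `inad` helper above):

```python
def filter_alphas(alphas, c=1.0):
    """Statistical outlier rejection on the alpha_ij values (Eq. 6)."""
    alphas = np.asarray(alphas, dtype=float)
    mu, sigma = inad(alphas)
    if sigma == 0.0:
        return alphas                                     # perfectly uniform patch
    return alphas[np.abs(alphas - mu) / sigma <= c]       # keep inliers only
```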
### III-B Shape Histogram Back-projection
[Figure 4 panel labels: $S_{s}$, $S_{t}$, $P(p\in S_{s})$] Figure 4: Shape
back-projection for various $S_{s}$-$S_{t}$ pairs.
Shape histogram back-projection is functionally analogous to color histogram
back-projection. In this case, the back-projection likelihood score indicates
how likely it is that a point on a test surface follows geometrical properties
similar to those represented by the shape histogram of a sample surface. The
mathematical expression for this likelihood is given in Eq. 7, where $S_{s}$
and $SH$ refer to a sample surface and its shape histogram, respectively.
$P(p_{i}\in S_{s}\ |\ SH,r)=SH(bin\text{-}id_{\mu_{i}},\ bin\text{-}id_{\sigma_{i}})$ (7)
In order to back-project a shape histogram, the INAD for each point in a test
surface $S_{t}$ is first computed (Eqs. 3, 4). The INAD is then used to obtain
$bin\text{-}id_{\mu}$ and $bin\text{-}id_{\sigma}$, which are then plugged
into Eq. 7 to obtain the likelihood.
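Combining the pieces above into Eq. 7 (our sketch, with the same bin-width convention as before):

```python
def shape_back_project(mus, sigmas, SH, k_mu=10.0, k_sigma=10.0):
    """Likelihood P(p_i in S_s | SH, r) for each test point (Eq. 7)."""
    bi_mu = np.clip((np.asarray(mus) // k_mu).astype(int), 0, SH.shape[0] - 1)
    bi_sg = np.clip((np.asarray(sigmas) // k_sigma).astype(int), 0, SH.shape[1] - 1)
    return SH[bi_mu, bi_sg]                               # per-point bin lookup
```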
Fig. 4 shows examples of the shape back-projection procedure. Column-$1$
represents the case in which the shape histograms of a planar and a curved
surface ($S_{s}$) are back-projected onto a test surface ($S_{t}$) consisting
of three orthogonal planes. It can be noticed that the obtained likelihood is
higher when both $S_{s}$ and $S_{t}$ belong to similar kinds of surfaces
(planar-planar), whereas it is lower when $S_{s}$ and $S_{t}$ are of different
kinds (curved-planar). Similarly, column-$2$ represents the case in which the
shape histograms of the planar and curved sample surfaces are back-projected
onto a curved test surface $S_{t}$.
The neighborhood search radius $r$ is a crucial parameter in the performance
of shape back-projection. Variations in the value of $r$ lead to different
utilities of shape back-projection, which we demonstrate experimentally in the
following section.
## IV Experiments
[Figure 5 column labels: test surfaces; ${P(p\in S_{plane})}$ ground truth and predicted; ${P(p\in S_{curved})}$ ground truth and predicted; ${P(p\in S_{edge})}$ ground truth and predicted; color scale $\mathbf{P(p\in Surface|SH,r)}$ from $0.0$ to $1.0$] Figure 5: Qualitative results of shape back-projection for binary surface classification and edge detection.
In this section, we provide quantitative and qualitative results for Shape
Back-projection. The experimental evaluation is performed by varying
parameters such as the number of histogram bins ($k$) and the neighborhood
search radius ($r$). In order to evaluate the algorithm against noise and
demonstrate its real-world usefulness, we choose a publicly available point
cloud dataset [33] containing $108$ organized point clouds acquired by a
Microsoft Kinect sensor. Each cloud contains a clutter of household objects
placed on top of a table. The clouds are deliberately converted into
unorganized format. Due to the unavailability of ground truths for binary
surface classification and edge detection, we generate them manually using
CloudCompare [34]. All the experiments are performed only on an i7-6850K CPU
with 64GB of RAM. It must be noted that shape histograms do not constitute a
feature descriptor in the sense of [24], [25]; therefore we limit our
discussion to variations in the hyper-parameters of shape histograms and their
utilities.
### IV-A Varying “$\mathbf{r}$”
The INAD score for a point depends on the number of its spatial neighbors,
which in turn is governed by $r$. Fig. 3e-3h show the shape histograms of
Fig. 3a and 3c for varying $r$. It can be seen that as $r$ is increased, the
peak in the shape histogram of a planar surface does not change, whereas it
shifts towards higher bins in the case of a curved surface. Thus, we make an
important observation: the value of $r$ essentially captures information about
the curvature at a point. We exploit this observation and show that by
computing the shape histogram of a surface only once, we can use it for the
different purposes given below.
#### IV-A1 Binary Surface Classification
Surface segmentation is one of the most fundamental operations, required in
order to segregate points with similar geometrical properties into different
clusters. In this direction, learning-based approaches [21], [19] exist;
however, they require tremendous amounts of data and GPUs for their operation.
Here, we demonstrate that shape back-projection can achieve high accuracy for
binary surface classification merely on a CPU. 3D surfaces can be broadly
classified into two categories, i.e., planar and curved (non-planar). We can
say that if a point is more likely to be on a planar surface, it is less
likely to be on a curved surface, and vice versa. Therefore, given the shape
histogram of a planar surface, Eq. 8 holds.
$P(p\in S_{plane}|SH,r)\ +\ P(p\in S_{curved}|SH,r)\ =1$ (8)
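As an illustration (hypothetical, reusing the sketches above; `test_mus`, `test_sigmas`, `SH_plane` and the 0.5 threshold are our choices, not from the paper):

```python
# Label each test point from the plane likelihood; by Eq. 8 the curved
# likelihood is simply the complement of the planar one.
p_plane = shape_back_project(test_mus, test_sigmas, SH_plane)
labels = np.where(p_plane > 0.5, "planar", "curved")
```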
To verify this, we compute the shape histogram of a planar surface and back-
project it onto the real 3D scenes shown in column-($1$) of Fig. 5. The
corresponding color-coded likelihoods $P(p\in S_{plane}|SH,r)$ and
$P(p\in S_{curved}|SH,r)$ are shown in columns-($2,3$) of Fig. 5. The
likelihood scores qualitatively confirm the validity of Eq. 8. Table I reports
the precision, recall, F$1$ and mIoU [35] scores to assess the binary
classification quality. These values are heavily dependent on the quality of
the manually generated ground truth. The planar class has high precision and
high recall values, whereas the high recall and relatively low precision for
curved surfaces can be accounted for by the misclassification of planar points
as well as inaccurate ground truths near high-curvature edges. Despite the
relatively low precision for curved surfaces, the qualitative results of the
binary classification (Fig. 5, columns-$2$-$5$) are quite satisfactory.
#### IV-A2 Edge Detection in Unorganized Point Clouds
We target this task as a utility of shape back-projection by exploiting two
facts: first, the INAD captures the amount of curvature at a point, and
second, edges have high curvature compared to a plane. Therefore, we reduce
$r$ to a smaller value ($\sim 6mm$) and then back-project the shape histogram
of a planar surface onto the 3D scenes shown in column-$1$ of Fig. 5. Since
edges are also curved regions in a cloud, Eq. 8 can be rewritten as given
below.
$P(p\in S_{plane}|SH,r)\ +\ P(p\in S_{edge}|SH,r)\ =1$ (9)
Table II shows the precision, recall and F$1$-score for edge detection using
shape back-projection and a recent method [29]. It can be inferred that the
recent method performs well on the synthetic data reported in its paper, but
performs quite poorly on real data. This is where shape back-projection proves
its value in reliably dealing with real data. Fig. 5, columns-($6$-$7$), shows
the color-coded edge likelihood $P(p\in S_{edge}|SH,r)$ corresponding to the
clouds in column-($1$). It can be seen that the predicted edges appear quite
similar to the ground truth.
### IV-B Number of Bins ($k_{\mu},k_{\sigma}$) and its Effect on Noise
To study the effect of noise, we do not rely on adding noise to synthetic
point clouds as performed in [29]; instead, the chosen point clouds fulfill
this purpose, as they are full of random noise and inconsistent depth
measurements. Table II reports the precision, recall and F$1$ measure for
high-curvature edge detection on the real point cloud data as well as on a
synthetic cloud (Fig. 6). It is noticeable that the best F$1$ (in blue) on
real data is obtained with a higher number of bins, while for the synthetic
point cloud it is achieved with a lower number of bins. This happens because a
noisy point cloud contains local surface variations which cannot be
represented for small values of $k$. On the other hand, high values of $k$
accurately encode such information and do not let the noise hamper the
underlying geometrical information. Hence, shape histograms inherently deal
with surface noise, which is desirable for many practical applications.
Figure 6: Edge detection in a synthetic unorganized point cloud. (a) Synthetic
cloud, (b) ground truth edges, (c) predicted edges.
### IV-C Timing Analysis and Point Cloud Density
The INAD score relies heavily on the number of nearest neighbors ($k$-NNs),
which depends on the cloud density. In general, real point cloud data is noisy
and possesses non-uniform density. Therefore, rather than changing the radius,
the timing analysis is performed by explicitly varying the number of nearest
neighbors in a synthetic point cloud (Fig. 6). Table III shows the INAD
computation time for a single point with different numbers of nearest
neighbors. For scale, $10$ and $500$ neighbors roughly correspond to radii of
$0.005$m and $0.03$m on a surface of resolution $0.001$m. The method is quite
fast even on a single thread. This enables shape back-projection to be
deployed on computationally limited platforms. The speed can be increased
several times by taking advantage of multi-threading and GPUs, if available.
### IV-D Rotation and Translation Invariance
Shape back-projection exploits local geometrical properties. In this process,
the INAD score for a point is computed from surface normals, which are
themselves computed locally, and the angles between normals are preserved
under rigid 3D transformations (rotations and translations). Therefore, the
INAD score and the shape histograms remain rotation and translation invariant.
### IV-E A Promising Alternative to Model Consensus
Consider the point cloud in Fig. 5, row-$3$. In order to extract all the
cylindrical items from the cloud, model consensus needs to be deployed
iteratively and the number of cylinders (instances) must be known beforehand.
On the other hand, binary surface classification using shape back-projection
can classify all instances without requiring the total number of instances a
priori, and can also deal with a variety of shapes, unlike model consensus
methods, which handle primitive shapes only. Table IV shows the precision,
recall and F$1$-score of shape back-projection and RANSAC for extracting all
cylinders in the cloud with Id $33$ (Fig. 5, row-$3$).
Table I: Performance analysis of binary surface classification; the left column group is for $k_{\mu}\times k_{\sigma}=10\times 10$, the right for $20\times 20$.

| Scene-Id | Planar P | Planar R | Planar F1 | Curved P | Curved R | Curved F1 | mIoU | Planar P | Planar R | Planar F1 | Curved P | Curved R | Curved F1 | mIoU |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| $4$ | $0.99$ | $0.95$ | $0.97$ | $0.50$ | $0.98$ | $0.66$ | $0.95$ | $0.99$ | $0.93$ | $0.96$ | $0.38$ | $0.99$ | $0.55$ | $0.93$ |
| $20$ | $1.00$ | $0.95$ | $0.97$ | $0.63$ | $1.00$ | $0.67$ | $0.95$ | $0.92$ | $0.96$ | $0.92$ | $0.51$ | $1.00$ | $0.68$ | $0.92$ |
| $33$ | $0.97$ | $0.95$ | $0.96$ | $0.74$ | $0.82$ | $0.78$ | $0.92$ | $0.99$ | $0.91$ | $0.95$ | $0.66$ | $0.97$ | $0.79$ | $0.91$ |
| $44$ | $0.98$ | $0.96$ | $0.97$ | $0.71$ | $0.89$ | $0.79$ | $0.95$ | $0.99$ | $0.93$ | $0.96$ | $0.61$ | $0.99$ | $0.76$ | $0.93$ |
Table II: Performance analysis of high curvature edge detection; S.B. refers to Shape Back-projection.

| Scene-Id | Method | $k_{\mu}\times k_{\sigma}$ | Precision | Recall | F$1$ | Time (s) |
|---|---|---|---|---|---|---|
| $4$ | S.B. | $10\times 10$ | $0.98$ | $0.51$ | $0.67$ | $2.2$ |
| $4$ | S.B. | $20\times 20$ | $0.94$ | $0.72$ | $0.82$ | $2.2$ |
| $4$ | [29] | - | $0.05$ | $1.00$ | $0.09$ | $2.7$ |
| $20$ | S.B. | $10\times 10$ | $0.95$ | $0.63$ | $0.75$ | $2.0$ |
| $20$ | S.B. | $20\times 20$ | $0.90$ | $0.83$ | $0.86$ | $2.0$ |
| $20$ | [29] | - | $0.05$ | $1.00$ | $0.11$ | $2.5$ |
| $33$ | S.B. | $10\times 10$ | $0.87$ | $0.48$ | $0.62$ | $1.5$ |
| $33$ | S.B. | $20\times 20$ | $0.78$ | $0.75$ | $0.77$ | $1.5$ |
| $33$ | [29] | - | $0.05$ | $1.00$ | $0.09$ | $2.1$ |
| $44$ | S.B. | $10\times 10$ | $0.94$ | $0.62$ | $0.75$ | $1.4$ |
| $44$ | S.B. | $20\times 20$ | $0.84$ | $0.82$ | $0.83$ | $1.4$ |
| $44$ | [29] | - | $0.04$ | $1.00$ | $0.08$ | $2.0$ |
| Fig. 6a | S.B. | $10\times 10$ | $0.98$ | $0.98$ | $0.98$ | $1.5$ |
| Fig. 6a | S.B. | $20\times 20$ | $0.42$ | $0.96$ | $0.59$ | $1.5$ |
| Fig. 6a | [29] | - | $0.95$ | $0.98$ | $0.97$ | $1.6$ |
Table III: Timing analysis of INAD computation per point

| $k$-NNs | 10 | 100 | 200 | 300 | 400 | 500 |
|---|---|---|---|---|---|---|
| Time ($\mu$s) | $4.2$ | $23.1$ | $45.5$ | $67.6$ | $87.5$ | $100.7$ |
Table IV: Performance analysis for multi-instance cylinder extraction

| Method | Precision | Recall | F$1$ |
|---|---|---|---|
| S.B. | $0.66$ | $0.97$ | $0.79$ |
| RANSAC [27] | $0.16$ | $0.92$ | $0.27$ |
## V Practical Applications
#### V-1 Warehouse Automation
A large number of novel items arrive in warehouses on a daily basis. Dealing
with such items becomes a major challenge because not every item can be
included in the dataset of learning-based perception algorithms. In such
cases, binary surface classification using shape back-projection can be
utilized to segregate items in cluttered containers on the basis of their 3D
shape; the segregated items can then be sent to different destinations for
further processing.
#### V-2 Suction Grasp Location Estimation
Through our participation in the Amazon challenges APC’$16$ and ARC’$17$, we
have realized that the centroid-based approach [36] is not suitable for
suction-based grasping in the presence of partial occlusions, especially when
a smaller object lies on top of a larger one. As a solution, the shape
histogram of a planar or curved surface (depending on the target object's
shape) is back-projected onto the point cloud of the target object. Then, an
adaptive mean-shift operation is performed on the back-projection likelihood,
and its convergence point is chosen as the suction grasp location (Fig. 7).
This strategy is fast, effective, and also eliminates the need for time-
consuming 6D pose estimation such as Super4PCS [37].
Figure 7: Our robotic setup performing an autonomous picking operation on the
grasp location provided by shape back-projection.
#### V-3 Automated data labeling for 3D-CNNs
In order to train 3D-CNNs for the task of surface classification, a huge
amount of data is required. Manual labeling of 3D point clouds is quite
laborious and challenging compared to images, and requires specialized
software. Hence, shape back-projection can be deployed to segment specific
kinds of surfaces from a 3D scene, and its output can be used to generate
ground truths. For example, the outputs of the algorithm in Fig. 4 can be used
to generate a dataset to train a 3D-CNN for the purposes of surface
segmentation and edge detection.
## VI Conclusion
In this work, we have presented a novel algorithm, shape back-projection, for
3D scenes. Its inspiration lies in the working of color back-projection, which
obtains color similarity between two images by analyzing their color
histograms. Here, we have developed a novel shape histogram, by means of which
a probabilistic measure of similarity between two 3D point clouds can be
obtained. The utility of shape back-projection ranges from warehouse
automation to automated labeled-data generation for 3D-CNNs. Existing
feature-based 3D object detection methods can also benefit from extracting
salient points using shape back-projection. The algorithm is a robust and
computationally efficient alternative to state-of-the-art algorithms for the
above purposes. Therefore, it can be easily deployed on computationally
limited platforms (UAVs) for complex object manipulation tasks.
## References
* [1] J. L. Bentley, “Multidimensional binary search trees used for associative searching,” Communications of the ACM, vol. 18, no. 9, pp. 509–517, 1975.
* [2] D. J. Meagher, Octree encoding: A new technique for the representation, manipulation and display of arbitrary 3-d objects by computer. Electrical and Systems Engineering Department Rensseiaer Polytechnic Institute Image Processing Laboratory, 1980.
* [3] M. Muja and D. G. Lowe, “Flann, fast library for approximate nearest neighbors,” in International Conference on Computer Vision Theory and Applications (VISAPP’09), vol. 3, INSTICC Press, 2009.
* [4] Velodyne, “Velodyne HDL-64E: A high definition LIDAR sensor for 3D applications,” tech. rep., Velodyne, October 2007. Available at www.velodyne.com/lidar/products/white_paper.
* [5] D. Lavrinc, “Ford unveils its first autonomous vehicle prototype,” http://www.wired.com/autopia/2013/12/ford-fusion-hybrid-autonomous, accessed December 16th, 2013.
* [6] E. Ackerman, “Tesla model S: Summer software update will enable autonomous driving,” IEEE Spectrum Cars That Think, 2015.
* [7] G. Pratt and J. Manzo, “The darpa robotics challenge [competitions],” IEEE Robotics & Automation Magazine, vol. 20, no. 2, pp. 10–12, 2013.
* [8] M. Buehler, K. Iagnemma, and S. Singh, The DARPA urban challenge: autonomous vehicles in city traffic, vol. 56. springer, 2009.
* [9] C. Zhu, Y. Zheng, K. Luu, and M. Savvides, “Cms-rcnn: contextual multi-scale region-based cnn for unconstrained face detection,” in Deep Learning for Biometrics, pp. 57–79, Springer, 2017.
* [10] R. Girshick, “Fast r-cnn,” in Proceedings of the IEEE international conference on computer vision, pp. 1440–1448, 2015.
* [11] S. Ren, K. He, R. Girshick, and J. Sun, “Faster r-cnn: Towards real-time object detection with region proposal networks,” in Advances in neural information processing systems, pp. 91–99, 2015.
* [12] J. Long, E. Shelhamer, and T. Darrell, “Fully convolutional networks for semantic segmentation,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 3431–3440, 2015.
* [13] H. Zhao, J. Shi, X. Qi, X. Wang, and J. Jia, “Pyramid scene parsing network,” in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2017.
* [14] K. He, G. Gkioxari, P. Dollár, and R. Girshick, “Mask r-cnn,” CVPR, 2017.
* [15] G. Lin, A. Milan, C. Shen, and I. D. Reid, “Refinenet: Multi-path refinement networks for high-resolution semantic segmentation.,” in Cvpr, vol. 1, p. 5, 2017.
* [16] K. He, X. Zhang, S. Ren, and J. Sun, “Deep residual learning for image recognition,” in Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 770–778, 2016.
* [17] G. Huang, Z. Liu, K. Q. Weinberger, and L. van der Maaten, “Densely connected convolutional networks,” CVPR, 2017.
* [18] K. Simonyan and A. Zisserman, “Very deep convolutional networks for large-scale image recognition,” CoRR, vol. abs/1409.1556, 2014.
* [19] J. Huang and S. You, “Point cloud labeling using 3d convolutional neural network,” in Pattern Recognition (ICPR), 2016 23rd International Conference on, pp. 2670–2675, IEEE, 2016.
* [20] S. Song, F. Yu, A. Zeng, A. X. Chang, M. Savva, and T. Funkhouser, “Semantic scene completion from a single depth image,” in Computer Vision and Pattern Recognition (CVPR), 2017 IEEE Conference on, pp. 190–198, IEEE, 2017.
* [21] B. Wu, A. Wan, X. Yue, and K. Keutzer, “Squeezeseg: Convolutional neural nets with recurrent crf for real-time road-object segmentation from 3d lidar point cloud,” arXiv preprint arXiv:1710.07368, 2017.
* [22] F. Caccavale, G. Giglio, G. Muscio, and F. Pierri, “Adaptive control for uavs equipped with a robotic arm,” IFAC Proceedings Volumes, vol. 47, no. 3, pp. 11049–11054, 2014.
* [23] T. W. Danko, K. P. Chaney, and P. Y. Oh, “A parallel manipulator for mobile manipulating uavs,” in Technologies for Practical Robot Applications (TePRA), 2015 IEEE International Conference on, pp. 1–6, IEEE, 2015.
* [24] R. B. Rusu, N. Blodow, and M. Beetz, “Fast point feature histograms (fpfh) for 3d registration,” in Robotics and Automation, 2009. ICRA’09. IEEE International Conference on, pp. 3212–3217, Citeseer, 2009.
* [25] F. Tombari, S. Salti, and L. Di Stefano, “Unique signatures of histograms for local surface description,” in European conference on computer vision, pp. 356–369, Springer, 2010.
* [26] F. Tombari and L. Di Stefano, “Object recognition in 3d scenes with occlusions and clutter by hough voting,” in 2010 Fourth Pacific-Rim Symposium on Image and Video Technology, pp. 349–355, IEEE, 2010.
* [27] M. A. Fischler and R. C. Bolles, “Random sample consensus: a paradigm for model fitting with applications to image analysis and automated cartography,” Communications of the ACM, vol. 24, no. 6, pp. 381–395, 1981.
* [28] R. Garcia, J. Batlle, and X. Cufi, “A system to evaluate the accuracy of a visual mosaicking methodology,” in OCEANS, 2001. MTS/IEEE Conference and Exhibition, vol. 4, pp. 2570–2576, IEEE, 2001.
* [29] D. Bazazian, J. R. Casas, and J. Ruiz-Hidalgo, “Fast and robust edge extraction in unorganized point clouds,” in Digital Image Computing: Techniques and Applications (DICTA), 2015 International Conference on, pp. 1–8, IEEE, 2015.
* [30] M. J. Swain and D. H. Ballard, “Color indexing,” International journal of computer vision, vol. 7, no. 1, pp. 11–32, 1991.
* [31] D. Comaniciu and P. Meer, “Robust analysis of feature spaces: color image segmentation,” in Computer Vision and Pattern Recognition, 1997. Proceedings., 1997 IEEE Computer Society Conference on, pp. 750–755, IEEE, 1997.
* [32] G. R. Bradski, “Computer vision face tracking for use in a perceptual user interface,” 1998.
* [33] R. B. Rusu and S. Cousins, “3d is here: Point cloud library (pcl),” in Robotics and automation (ICRA), 2011 IEEE International Conference on, pp. 1–4, IEEE, 2011.
* [34] D. Girardeau-Montaut, “Cloud compare—3d point cloud and mesh processing software,” Open Source Project, 2015.
* [35] M. Everingham, L. Van Gool, C. K. Williams, J. Winn, and A. Zisserman, “The pascal visual object classes (voc) challenge,” International journal of computer vision, vol. 88, no. 2, pp. 303–338, 2010.
* [36] C. F. Lehnert, A. English, C. McCool, A. W. Tow, and T. Perez, “Autonomous sweet pepper harvesting for protected cropping systems.,” IEEE Robotics and Automation Letters, vol. 2, no. 2, pp. 872–879, 2017.
* [37] N. Mellado, D. Aiger, and N. J. Mitra, “Super 4pcs fast global pointcloud registration via smart indexing,” Computer Graphics Forum, vol. 33, no. 5, pp. 205–215, 2014.
# Black holes of $(2+1)$-dimensional $f(R)$ gravity coupled to a scalar field
Thanasis Karakasis<EMAIL_ADDRESS>Physics Division, National Technical
University of Athens, 15780 Zografou Campus, Athens, Greece. Eleftherios
Papantonopoulos<EMAIL_ADDRESS>Physics Division, National Technical
University of Athens, 15780 Zografou Campus, Athens, Greece. Zi-Yu Tang
<EMAIL_ADDRESS>Center for Astronomy and Astrophysics, School of Physics
and Astronomy, Shanghai Jiao Tong University, Shanghai 200240, China Bin Wang
<EMAIL_ADDRESS>School of Aeronautics and Astronautics, Shanghai Jiao Tong
University, Shanghai 200240, China Center for Gravitation and Cosmology,
College of Physical Science and Technology, Yangzhou University, Yangzhou
225009, China
###### Abstract
We consider $f(R)$ gravity theories in the presence of a scalar field
minimally coupled to gravity with a self-interacting potential in
$(2+1)$-dimensions. Without specifying the form of the $f(R)$ function, we
first obtain an exact black hole solution dressed with scalar hair, with the
scalar charge appearing in the $f(R)$ function, and we discuss its
thermodynamics. This solution at large distances gives a hairy BTZ black hole,
and it reduces to the BTZ black hole when the scalar field decouples. In a
pure $f(R)$ gravity supported by the scalar field, we find an exact hairy
black hole similar to the BTZ black hole with phantom hair and an analytic
$f(R)$ form and discuss its thermodynamics.
###### Contents
1. I Introduction
2. II The setup-derivation of the field equations
1. II.1 Without self-interacting potential
2. II.2 With self-interacting potential
3. III Black hole solutions
1. III.1 $c_{1}=1,c_{2}=0$
2. III.2 Exact Black Hole Solution with Phantom Hair
4. IV Conclusions
## I Introduction
In three-dimensions one of the first exact black holes with a negative
cosmological constant was discussed by Bañados, Teitelboim, and Zanelli (BTZ)
[1, 2]. This solution came as a surprise to the scientific community, since in
three dimensions the Weyl tensor, describing the distortion of the shape of a
body in the presence of tidal gravitational force, vanishes by definition,
while the Ricci tensor, describing how the volume of the body changes due to
this tidal force, vanishes in the absence of matter. Therefore, since
$Riemann=Weyl+Ricci$, one can only have flat spacetime in vacuum. In this
solution, the presence of a cosmological constant term and the electromagnetic
field results in a non-zero Ricci tensor, allowing in this way the existence
of a black hole solution.
After the discovery of this solution, scalar fields minimally and nonminimally
coupled to gravity were introduced as matter fields. In [3, 4] three
dimensional black holes with a conformally coupled scalar field, being regular
everywhere, were discussed. After these first results other hairy black holes
in three-dimensions were discussed [5, 6, 7, 8, 9, 10]. In [11] three-
dimensional gravity with negative cosmological constant in the presence of a
scalar field and an Abelian gauge field was introduced. Both fields are
conformally coupled to gravity, the scalar field through a nonminimal coupling
with the curvature and the gauge field by means of a Lagrangian given by a
power of the Maxwell one. A sixth-power self-interaction potential, which does
not spoil conformal invariance, is also included in the action, resulting in a
very simple relation between the scalar curvature and the cosmological
constant. In [12] and [13] $(2+1)$ dimensional charged black holes with scalar
hair were derived, where the scalar potential is not fixed ad hoc but instead
derived from Einstein’s equations. In [14] exact three dimensional black holes
with non-minimal scalar field were discussed. Finally, in [15] and [16],
static black holes in three dimensional dilaton gravity and modifications of
the BTZ black hole by a dilaton/scalar were investigated.
In four dimensions, the first black hole solution with a scalar field as a
matter field was derived by Bocharova, Bronnikov and Melnikov and
independently by Bekenstein, and is called the BBMB black hole [17]. The
scalar field is conformally coupled to gravity, resulting in a vanishing
scalar curvature; the metric takes the form of the extremal Reissner-Nordstrom
spacetime and the scalar field diverges at the horizon. It was also shown in
[18] that this solution is unstable under scalar perturbations. Later, a scale
was introduced to the theory via a cosmological constant in [19], along with a
quartic scalar potential that does not break the conformal invariance of the
action, resulting in a very simple relation between the scalar curvature and
the cosmological constant. The scalar field does not diverge at the horizon,
but the solution is found to be unstable [20]. In [21] asymptotically (anti)
de Sitter black holes and wormholes with a self interacting scalar field in
four dimensions were discussed. Regarding the minimal coupling case, the first
exact black hole solution, the MTZ black hole, was presented in [22]. The
scalar potential is fixed ad hoc, the geometry of the solution is hyperbolic
and the scalar field remains finite at the black hole horizons. In [23] the
electrically charged case is discussed. In [24], a potential that breaks the
conformal invariance of the action of the MTZ black hole in the Jordan frame
was considered and new black hole solutions were derived. In [25] the scalar
field is fixed ad hoc and novel black hole solutions are investigated, letting
the scalar potential be determined from the equations, and in [26] the
electrically charged case is considered. In [27, 28] black holes with
nonminimal derivative coupling were studied. However, the scalar field which
was coupled to Einstein tensor should be considered as a particle living
outside the horizon of the black hole because it blows up on the horizon.
Finally, in [29] Plebanski-Demianski solutions in Quadratic gravity with
conformally coupled scalar fields were investigated.
The $f(R)$ theories of gravity were mainly introduced in an attempt to
describe the early and late cosmological history of our Universe [30]-[40]. In
particular, following the recent cosmological observational results, $f(R)$
gravity cosmological models were used to explain the deceleration-acceleration
transition. This requirement imposed constraints on the $f(R)$ models allowing
viable choices of $f(R)$ [41]. These theories exclude contributions from any
curvature invariants other than $R$ and they avoid the Ostrogradski
instability [42] which usually is present in higher derivative theories [43].
Several black hole solutions in these theories were found and they either are
deviations of the known black hole solutions of General Relativity, or they
possess new properties that should be investigated. Static and spherically
symmetric black hole solutions were derived in $(3+1)$ and $(2+1)$ dimensions
[44, 45, 46, 47], while in [48, 49, 50, 51, 52, 53] charged and rotating
solutions were discussed. Static and spherically symmetric black hole
solutions were investigated with constant curvature, with and without electric
charge and cosmological constant in [54, 55, 56]. In [57] curvature
scalarization of black holes in $f(R)$ gravity was discussed and it was shown
that the presence of a scalar field minimally coupled to gravity with a self-
interacting potential can generate a rich structure of scalarized black hole
solutions, while in [58] exact charged black hole solutions with dynamic
curvature in D-dimensions were obtained in Maxwell-f(R) gravity theories.
In this work we present, to the best of our knowledge, the first exact black
hole solution in $(2+1)$-dimensions of a scalar field minimally coupled to
gravity in the context of $f(R)$ gravity. Without specifying the form of the
$f(R)$ function, we first obtain an exact black hole solution dressed with a
scalar hair with the scalar charge to appear in the metric and in the $f(R)$
function and discuss its thermodynamics. This solution at large distances
gives a hairy BTZ black hole, and it reduces to the BTZ black hole when the
scalar field decouples. In a pure $f(R)$ gravity supported by the scalar
field, we find an exact hairy black hole similar to the BTZ black hole with
phantom hair and discuss its thermodynamics.
The work is organized as follows. In Section II we derive the field equations
with and without a self-interacting potential for the scalar field. In Section
III we discuss hairy black hole solutions of the field equations when we have
the Einstein-Hilbert term with curvature corrections and also black hole
solutions when the $f(R)$ is purely supported by the scalar field. Finally we
conclude in Section IV.
## II The setup-derivation of the field equations
We will consider the $f(R)$ gravity theory with a scalar field minimally
coupled to gravity in the presence of a self-interacting potential. Varying
this action we will look for hairy black hole solutions. We will show that if
this scalar field decouples, we recover $f(R)$ gravity. First we will consider
the case in which the scalar field does not have self-interactions.
### II.1 Without self-interacting potential
Consider the action
$S=\int
d^{3}x\sqrt{-g}\left\\{\frac{1}{2\kappa}f(R)-\frac{1}{2}g^{\mu\nu}\partial_{\mu}\phi\partial_{\nu}\phi\right\\}~{},$
(1)
where $\kappa=8\pi G$ is the gravitational coupling constant, with $G$ the Newton constant. The
Einstein equations read
$f_{R}R_{\mu\nu}-\frac{1}{2}g_{\mu\nu}f(R)+g_{\mu\nu}\square
f_{R}-\nabla_{\mu}\nabla_{\nu}f_{R}=\kappa T_{\mu\nu}~{},$ (2)
where $f^{\prime}(R)=f_{R}$ and the energy-momentum tensor $T_{\mu\nu}$ is
given by
$T_{\mu\nu}=\partial_{\mu}\phi\partial_{\nu}\phi-\frac{1}{2}g_{\mu\nu}g^{\alpha\beta}\partial_{\alpha}\phi\partial_{\beta}\phi~{}.$
(3)
The Klein-Gordon equation reads
$\square\phi=0~{}.$ (4)
We consider a spherically symmetric ansatz for the metric
$ds^{2}=-b(r)dt^{2}+\frac{1}{b(r)}dr^{2}+r^{2}d\theta^{2}~{}.$ (5)
For the metric above, the Klein-Gordon equation becomes
$\Box\phi=b(r)\phi^{\prime\prime}(r)+\phi^{\prime}(r)\Big{(}b^{\prime}(r)+\cfrac{b(r)}{r}\Big{)}=0~{},$
(6)
and, since $b(r)\phi^{\prime\prime}(r)+\phi^{\prime}(r)\Big{(}b^{\prime}(r)+\cfrac{b(r)}{r}\Big{)}=\frac{1}{r}\left(r\,b(r)\,\phi^{\prime}(r)\right)^{\prime}$, it integrates to the total-derivative form
$b(r)\phi^{\prime}(r)r=C~{},$ (7)
where $C$ is a constant of integration. In order to have a black hole, we
require a horizon $r=r_{H}$ with $b(r_{H})=0$, which forces $C=0$. This means
that either $b(r)=0$ for all $r>0$, in which case no geometry can be formed,
or the scalar field is constant, $\phi(r)=c$. This behaviour is indeed
expected, and it cannot be cured by the addition of a second degree of
freedom in the metric (5). From the no-hair theorem [59] we know that the
scalar field should satisfy its equation of motion for the black hole
geometry, thus if we multiply the Klein-Gordon equation by $\phi$ and
integrate over the black hole region we have
$\int d^{3}x\sqrt{-g}\big{(}\phi\Box\phi\big{)}\approx-\int d^{3}x\sqrt{-g}\nabla^{\mu}\phi\nabla_{\mu}\phi=0~{},$ (8)
where $\approx$ means equality modulo total derivative terms. From equation
(8) one can see that the scalar field is constant.
### II.2 With self-interacting potential
We shown that if the matter does not have self-interactions then there are no
hairy black holes in the $f(R)$ gravity. We then have to introduce self-
interactions for the scalar field. Consider the action
$S=\int
d^{3}x\sqrt{-g}\left\\{\frac{1}{2\kappa}f(R)-\frac{1}{2}g^{\mu\nu}\partial_{\mu}\phi\partial_{\nu}\phi-V(\phi)\right\\}~{}.$
(9)
The scalar field and the scalar potential obey the following conditions
$\phi\left(r\to\infty\right)=0~{},\quad V\left(r\to\infty\right)=0~{},\quad
V\big{|}_{\phi=0}=0~{}.$ (10)
Varying the action (9) using the metric ansatz (5) we get the
$tt,rr,\theta\theta$ components of Einstein’s equations (for $\kappa=1$) and
the Klein-Gordon equation
$r\left(b^{\prime}(r)f_{R}^{\prime}(r)-f_{R}(r)b^{\prime\prime}(r)-f\left(r\right)+b(r)\left(2f_{R}^{\prime\prime}(r)+\phi^{\prime}(r)^{2}\right)+2V(\phi)\right)-f_{R}(r)b^{\prime}(r)+2b(r)f_{R}^{\prime}(r)=0~{},$
(11)
$b(r)\left(r\left(-b^{\prime}(r)f_{R}^{\prime}(r)+f_{R}(r)b^{\prime\prime}(r)+f\left(r\right)+b(r)\phi^{\prime}(r)^{2}-2V(\phi)\right)+f_{R}(r)b^{\prime}(r)-2b(r)f_{R}^{\prime}(r)\right)=0~{},$
(12)
$-r\left(2b^{\prime}(r)f_{R}^{\prime}(r)+b(r)\left(2f_{R}^{\prime\prime}(r)+\phi^{\prime}(r)^{2}\right)+2V(\phi)\right)+2f_{R}(r)b^{\prime}(r)+rf\left(r\right)=0~{},$
(13)
$\frac{\left(rb^{\prime}(r)+b(r)\right)\phi^{\prime}(r)}{r}+b(r)\phi^{\prime\prime}(r)-\frac{V^{\prime}(r)}{\phi^{\prime}(r)}=0~{}.$
(14)
The Ricci Curvature for the metric (5) reads
$R(r)=-\frac{2b^{\prime}(r)}{r}-b^{\prime\prime}(r)~{}.$ (15)
From (11) and (12) equations we obtain the relation between $f_{R}(r)$ and
$\phi(r)$
$f_{R}^{\prime\prime}(r)+\phi^{\prime}(r)^{2}=0~{},$ (16)
while the (11) and (13) equations yield the relation between the metric
function $b(r)$ and $f_{R}(r)$
$\left(2b(r)-rb^{\prime}(r)\right)f_{R}^{\prime}(r)+f_{R}(r)\left(b^{\prime}(r)-rb^{\prime\prime}(r)\right)=0~{}.$
(17)
Both equations (16), (17) can be immediately integrated to yield
$f_{R}(r)=c_{1}+c_{2}r-\int\int\phi^{\prime}(r)^{2}drdr~{},$ (18)
$b(r)=c_{3}r^{2}-r^{2}\int\cfrac{K}{r^{3}f_{R}(r)}dr~{}$ (19)
where $c_{1},c_{2},c_{3}$ and $K$ are constants of integration. We can also
integrate the Klein-Gordon equation
$V(r)=V_{0}+\int\frac{rb^{\prime}(r)\phi^{\prime}(r)^{2}+rb(r)\phi^{\prime}(r)\phi^{\prime\prime}(r)+b(r)\phi^{\prime}(r)^{2}}{r}dr~{}.$
(20)
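Indeed, equation (17) is a total derivative: $\left(f_{R}(r)\left(rb^{\prime}(r)-2b(r)\right)\right)^{\prime}=f_{R}^{\prime}\left(rb^{\prime}-2b\right)+f_{R}\left(rb^{\prime\prime}-b^{\prime}\right)=-\left[\left(2b-rb^{\prime}\right)f_{R}^{\prime}+f_{R}\left(b^{\prime}-rb^{\prime\prime}\right)\right]=0~{},$ so that $f_{R}\left(rb^{\prime}-2b\right)=-K$ for an integration constant $K$; dividing by $r^{3}f_{R}$ gives $\left(b/r^{2}\right)^{\prime}=-K/(r^{3}f_{R})$, which integrates to (19).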
Equation (18) is the central equation of this work. First of all, we recover
General Relativity for a vanishing scalar field and for $c_{1}=1,c_{2}=0$. We
stress the fact that in $f(R)$ gravity we are able to derive non-trivial
configurations for the scalar field with a one-degree-of-freedom metric, as
can be seen in (5). This is not the case in the context of General Relativity,
as discussed in [16]. There one can see that a second degree of freedom
(equation (4) in [16]) must be added for the existence of non-trivial scalar
field solutions. Here, the non-linearity of the gravity theory makes
$f_{R}\neq const.$, and therefore we can have a one-degree-of-freedom metric.
The integration constants $c_{1}$ and $c_{2}$ have physical meaning: $c_{1}$
is related to the Einstein-Hilbert term, while $c_{2}$ is related to possible
(if $c_{2}\neq 0$) geometric corrections to General Relativity that are
encoded in $f(R)$ gravity. The last term of this equation is related directly
to the scalar field. This means that the matter not only modifies the
curvature scalar $R$ but also the gravitational model $f(R)$.
## III Black hole solutions
In this section we will discuss the cases where $c_{1}=1,c_{2}=0$ and
$c_{1}=c_{2}=0$ for a given scalar field configuration. For the second case,
in order to satisfy observational and thermodynamical constraints, we will
introduce a phantom scalar field and reconstruct the $f(R)$ theory, looking for
black hole solutions.
### III.1 $c_{1}=1,c_{2}=0$
Equations (18), (19) and (20) are three independent equations for the four
unknown functions of our system, $f_{R},\phi,V,b$, hence we have the freedom
to fix one of them and solve for the others. We fix the scalar field
configuration as
$\phi(r)=\sqrt{\cfrac{A}{r+B}}~{},$ (21)
where $A$ and $B$ are some constants with unit $[L]$, the scalar charges. We
now obtain from equation (18) $f_{R}(r)$
$f_{R}(r)=1-\frac{A}{8(B+r)}~{},$ (22)
where we have set $c_{2}=0$ and $c_{1}=1$. Therefore, we expect that, at least
in principle, a pure Einstein-Hilbert term will be generated if we integrate
$f_{R}$ with respect to the Ricci scalar.
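Indeed, from the profile (21) one has $\phi^{\prime}(r)=-\frac{\sqrt{A}}{2(B+r)^{3/2}}$, so that $\phi^{\prime}(r)^{2}=\frac{A}{4(B+r)^{3}}$ and $\int\int\phi^{\prime}(r)^{2}drdr=\int-\frac{A}{8(B+r)^{2}}dr=\frac{A}{8(B+r)}$, which, substituted in (18) with $c_{1}=1$ and $c_{2}=0$, reproduces (22).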
Now, from equation (19) we obtain the metric function
$b(r)=c_{3}r^{2}-\frac{4BK}{A-8B}-\frac{8AKr}{(A-8B)^{2}}-\cfrac{64AKr^{2}}{(A-8B)^{3}}\ln\left(\cfrac{8(B+r)-A}{r}\right)~{}.$
(23)
The metric function is always continuous for positive $r$ when the scalar
charges satisfy $0<A<8B$. Here we show its asymptotic behaviors at the origin
and space infinity
$\displaystyle b(r\rightarrow 0)$ $\displaystyle=$
$\displaystyle-\frac{4BK}{A-8B}-\frac{8AKr}{(A-8B)^{2}}+c_{3}r^{2}+\frac{64AKr^{2}}{(A-8B)^{3}}\ln\left(-\frac{r}{A-8B}\right)+\mathcal{O}(r^{3})~{},$
(24) $\displaystyle b(r\rightarrow\infty)$ $\displaystyle=$
$\displaystyle\frac{K}{2}+\frac{AK}{24r}-r^{2}\Lambda_{\text{eff}}+\mathcal{O}(r^{-2})~{},$
(25)
where the effective cosmological constant of this solution, generated by the
field equations, can be read off as
$\Lambda_{\text{eff}}=-c_{3}+\frac{192AK\ln(2)}{(A-8B)^{3}}~{}.$ (26)
It is important to discuss the asymptotic behaviours of the metric function.
At large distances, we can see that we obtain the BTZ black hole, with the
scalar charges appearing in the effective cosmological constant of the
solution. Corrections in the structure of the metric appear as
$\mathcal{O}(r^{-n})$ (where $n\geq 1$) terms and are completely supported by
the scalar field. At small distances we can see that the metric function has a
completely different behaviour from the BTZ black hole. Besides the constant
and $\mathcal{O}(r^{2})$ terms, there are $\mathcal{O}(r)$ and
$\mathcal{O}(r^{2}\ln(r))$ terms present that have an impact on the metric for
small $r$. Our findings are in agreement with the work [57], where in four
dimensions Schwarzschild black holes are obtained at infinity with a
scalarized mass term, while at small distances a rich structure of black holes
is unveiled. This is expected, since at small distances the Ricci curvature
becomes strong, thereby changing the form of spacetime. The Ricci scalar and
the Kretschmann
scalar are both divergent at the origin
$\displaystyle R\left(r\to 0\right)$ $\displaystyle=$
$\displaystyle\frac{16AK}{r(A-8B)^{2}}+\mathcal{O}\left(\ln{r}\right)~{},$
(27) $\displaystyle K\left(r\to 0\right)$ $\displaystyle=$
$\displaystyle\frac{128K^{2}A^{2}}{r^{2}(A-8B)^{4}}+\mathcal{O}\left(\frac{1}{r}\ln{r}\right)~{},$
(28)
indicating a singularity at $r=0$. As a consistency check, for $A=0$ we indeed
obtain the BTZ [1] black hole solution
$b(r)=c_{3}r^{2}+\frac{K}{2}~{},$ (29)
which means that for vanishing scalar field we go back to General Relativity.
Hence the solution (23) can be regarded as a scalarized version of the BTZ
black hole in the context of $f(R)$ gravity.
Now we obtain the expression for the potential from the Klein-Gordon equation
$V(r)=\frac{1}{8AB^{2}(A-8B)^{3}(B+r)^{3}}\\\
\bigg{(}B(4A^{4}(-B^{2}(K-18c_{3}r^{2})+36B^{3}c_{3}r+12B^{4}c_{3}-4BKr-2Kr^{2})-64A^{3}B(r^{2}(9B^{2}c_{3}+K)\\\
+Br(18B^{2}c_{3}+K)+6B^{4}c_{3})+256A^{2}B(B(6r^{2}(B^{2}c_{3}+K)+2Br(6B^{2}c_{3}+5K)+4B^{4}c_{3}+3B^{2}K)+\\\
30K\ln(2)(B+r)^{3})-A^{5}Bc_{3}(2B^{2}+6Br+3r^{2})+64BK(-A^{3}(2B^{2}+6Br+3r^{2})\ln(\frac{r}{8(B+r)-A})\\\
-8(5A^{2}-32AB+64B^{2})(B+r)^{3}\ln(8(B+r)-A))-4096AB^{2}K(B+r)^{2}(12\ln(2)(B+r)+B)\\\
+98304B^{3}K\ln(2)(B+r)^{3})-8A^{2}K(A^{2}-32AB+64B^{2})(B+r)^{3}\ln(r)+8K(A-8B)^{4}(B+r)^{3}\ln(B+r)\bigg{)}~{},$
(30)
the asymptotic behaviors of which are
$\displaystyle V(r\rightarrow 0)$ $\displaystyle=$
$\displaystyle-\frac{K\ln(r)}{B^{2}(A-8B)}+\mathcal{O}(r^{0})~{},$ (31)
$\displaystyle V(r\rightarrow\infty)$ $\displaystyle=$
$\displaystyle\frac{3A\left(24A^{2}Bc_{3}-A^{3}c_{3}-192A\left(B^{2}c_{3}-K\ln(2)\right)+512B^{3}c_{3}\right)}{8r(A-8B)^{3}}+\mathcal{O}\left(\frac{1}{r^{2}}\right)~{}.$
(32)
To ensure that the potential vanishes at space infinity, we need to set the
integration constant $V_{0}$ in (20) equal to
$V_{0}=\frac{192K\ln{2}\left(5A^{2}-32AB+64B^{2}\right)}{A(A-8B)^{3}}~{}.$
(33)
In addition, there is a mass term in the potential that has the same sign as
the effective cosmological constant
$m^{2}=V^{\prime\prime}(\phi=0)=\frac{3}{4}\left(\frac{192AK\ln(2)}{(A-8B)^{3}}-c_{3}\right)=\frac{3}{4}\Lambda_{\text{eff}}~{},$
(34)
which satisfies the Breitenlohner-Freedman bound in three dimensions [60, 61],
ensuring the stability of the AdS spacetime under perturbations whenever we
are working in AdS spacetime.
Substituting the obtained configurations into one of the Einstein equations we
can solve for $f(r)$
$f(r)=\frac{1}{AB^{2}r(A-8B)^{3}(A-8(B+r))}\bigg{[}B(192BKr\ln(2)(5A^{2}-32AB+64B^{2})(A-8(B+r))+A(A-8B)^{2}(16Bc_{3}r^{2}(A-8B)\\\
-2Bc_{3}r(A-8B)^{2}+8Kr(A+8B)-AK(A-8B)))+A^{2}Kr(-(A^{2}-32AB+64B^{2}))\ln(r)(A-8(B+r))\\\
+Kr(8(B+r)-A)\left(64B^{2}((5A^{2}-32AB+64B^{2})\ln(8(B+r)-A)+2A^{2}\ln(\frac{r}{8(B+r)-A}))-(A-8B)^{4}\ln(B+r)\right)\bigg{]}.$
(35)
On the other side, the Ricci scalar can be calculated from the metric function
$R(r)=\frac{16AK\left(-36r(A-8B)+(A-8B)^{2}+192r^{2}\right)}{r(A-8B)^{2}(A-8(B+r))^{2}}+\frac{384AK}{(A-8B)^{3}}\ln\left(\cfrac{8(B+r)-A}{r}\right)-6c_{3}.~{}$
(36)
As one can see, it is difficult to invert the Ricci scalar and solve for the
exact form of $f(R)$, though we have the expressions for $R(r)$, $f(r)$ and
$f_{R}(r)$. Nevertheless we can still obtain the asymptotic $f(R)$ forms by
studying their asymptotic behaviors
$\displaystyle f\left(r\to\infty\right)$ $\displaystyle=$
$\displaystyle-\frac{AK(A-8B)}{128r^{4}}+\frac{768AK\ln(2)}{(A-8B)^{3}}-4c_{3}+\mathcal{O}\left(\frac{1}{r^{5}}\right)~{},$
(37) $\displaystyle R\left(r\to\infty\right)$ $\displaystyle=$
$\displaystyle-\frac{AK(A-8B)}{128r^{4}}+\frac{1152AK\ln(2)}{(A-8B)^{3}}-6c_{3}+\mathcal{O}\left(\frac{1}{r^{5}}\right)~{},$
(38) $\displaystyle f\left(r\to 0\right)$ $\displaystyle=$
$\displaystyle-\frac{2AK}{(A-8B)Br}+\mathcal{O}\left(\ln{r}\right)~{},$ (39)
$\displaystyle R\left(r\to 0\right)$ $\displaystyle=$
$\displaystyle\frac{16AK}{r(A-8B)^{2}}+\mathcal{O}\left(\ln{r}\right)~{},$
(40)
which leads to
$\displaystyle f(R)$ $\displaystyle\simeq$ $\displaystyle
R+2c_{3}-\frac{384AK\ln(2)}{(A-8B)^{3}}=R-2\Lambda_{\text{eff}}~{},\quad\quad
r\to\infty~{},$ (41) $\displaystyle f(R)$ $\displaystyle\simeq$ $\displaystyle
R\left(1-\frac{A}{8B}\right)~{},\quad\quad r\to 0~{}.$ (42)
Figure 1: All the physical quantities of the AdS black holes are plotted with
different scalar charges $A$, where other parameters have been fixed as $B=1$,
$K=-5$ and $c_{3}=1$.
The fact that the Ricci scalar contains logarithmic terms prevents us from
obtaining the non-linear corrections near the origin, where we expect the
modified part of the $f(R)$ model to be stronger, since it is supported by the
existence of the scalar field, and the scalar field takes its maximum value at
$r=0$, $\phi(0)=\sqrt{A/B}$. To avoid the tachyonic instability, we check the
Dolgov-Kawasaki stability criterion [47], which states that the second
derivative of the gravitational model, $f_{RR}$, must always be positive [30,
31, 32]. Using the chain rule
$f_{RR}=\cfrac{df_{R}(R)}{dR}=\cfrac{df_{R}(r)}{dr}\cfrac{dr}{dR}=\cfrac{f^{\prime}_{R}(r)}{R^{\prime}(r)}=-\frac{r^{2}(A-8(B+r))^{3}}{128K(A-8B)(B+r)^{2}}~{},$
(43)
we can see that the above expression is always positive for $K<0$ when the
continuity condition $0<A<8B$ is considered. So far we have not imposed any
condition on $c_{3}$, therefore the spacetime might be asymptotically AdS or
dS depending on the value of the parameter $c_{3}$:
$\displaystyle c_{3}>\frac{192AK\ln(2)}{(A-8B)^{3}}>0\quad\text{asymptotically AdS}~{},$ (44)
$\displaystyle c_{3}<\frac{192AK\ln(2)}{(A-8B)^{3}}\quad\text{asymptotically dS}~{}.$ (45)
We can prove that the metric function has at most one root, so it cannot
describe a dS black hole. For the asymptotically AdS spacetime, the condition
$K<0$ gives an AdS black hole solution, while the condition $K>0$ gives a pure
AdS spacetime with a naked singularity at the origin. For the asymptotically
dS spacetime, the condition $K>0$ gives a pure dS spacetime with a
cosmological horizon. Therefore, a pure AdS or dS spacetime described by this
solution suffers from the tachyonic instability; only AdS black holes survive
this instability. We plot all the physical quantities of the AdS
black holes in FIG. 1 and FIG. 2. In FIG. 1 we plot the metric function, the
potential, the scalar field, the Ricci scalar, the $f(r)$ and $f_{R}$
functions along with the $A=0$ (BTZ black hole) case in order to compare them.
In FIG. 2 we plot the $f(R)$ model along with $f(R)=R-2\Lambda_{\text{eff}}$
in order to compare our model with Einstein's gravity. For FIG. 2 we used the
expression for the Ricci scalar (36) for the horizontal axis and the
expression for $f(r)$ (35) for the vertical axis.
Figure 2: The $f(R)$ function. The black dashed line represents the Einstein
Gravity $f(R)=R-2\Lambda_{\text{eff}}$, where other parameters have been fixed
as $B=1$, $K=-5$ and $c_{3}=1$.
From FIG. 1 and FIG. 2 we can see that the existence of the scalar charge $A$
makes the solution deviate from the GR solution, and the stronger the scalar
charge, the larger the deviation. The plot of the metric function shows that
the hairy solution with a stronger scalar charge has a larger event horizon
radius, while its influence on the curvature is qualitative, turning it from
constant to dynamic, with a divergence appearing at the origin. The scalar
charge also modifies the $f(R)$ model and the potential to support such hairy
structures: the potential develops a well near the origin to trap the scalar
field, providing the right matter concentration for a hairy black hole to be
formed. For the $f(R)$ model, the scalar charge only shifts the curve a small
distance away from Einstein gravity while the slope changes little, indicating
that our $f(R)$ model is very close to Einstein gravity. We can see that even
slight deviations from General Relativity can support hairy structures. The
asymptotic expressions (41), (42) tell us that at large scales the scalar
field only modifies the effective cosmological constant, while at small scales
the slope of $f(R)$ can also be modified, in agreement with the figure of
$f(R)$.
$\displaystyle T(r_{+})$ $\displaystyle=$
$\displaystyle\cfrac{b^{\prime}(r_{+})}{4\pi}=\frac{2K(B+r_{+})}{\pi
r_{+}(A-8(B+r_{+}))},$ (46) $\displaystyle S(r_{+})$ $\displaystyle=$
$\displaystyle\frac{\mathcal{A}}{4G}f_{R}(r_{+})=4\pi^{2}r_{+}f_{R}(r_{+})=4\pi^{2}r_{+}\left(1-\frac{A}{8(B+r_{+})}\right),$
(47)
where $r_{+}$ is the radius of the event horizon of the AdS black hole and
$\mathcal{A}=2\pi r_{+}$ is the area of the event horizon; the gravitational
constant $G$ equals $1/8\pi$ since we have set $8\pi G=1$. In the first
expression we have already used $r_{+}$ to replace the parameter $c_{3}$. It
is clear that the Hawking temperature and Bekenstein-Hawking entropy are both
positive for $K<0$ and $0<A<8B$. We present their figures in FIG. 3. FIG. 3
shows that for the same radius of the event horizon, the hairy black hole
solution has a higher Hawking temperature but lower Bekenstein-Hawking entropy.
However, with fixed parameters $B$, $c_{3}$ and $K$, the hairy black hole
solution has a larger event horizon radius; therefore, we plot the entropy at
the event horizon as a function of the scalar charge $A$ in FIG. 4 to determine
whether the hairy solution is thermodynamically preferred. The hairy black hole
solution is indeed thermodynamically preferred, as it has higher entropy than
its corresponding GR solution, the BTZ black hole, and the entropy grows with
increasing scalar charge $A$. This can be easily understood: the participation
of the scalar field gains more entropy for the black hole.
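As a quick sanity check (ours, not part of the original analysis), the positivity claims for (46) and (47) can be verified numerically; the parameter values below are the illustrative ones used in the figures.

```python
# Numerical sanity check (ours) of the positivity of T(r+) in (46) and
# S(r+) in (47) for K < 0 and 0 < A < 8B; parameter values follow the figures.
import numpy as np

def hawking_temperature(r_p, A, B, K):
    return 2 * K * (B + r_p) / (np.pi * r_p * (A - 8 * (B + r_p)))

def bh_entropy(r_p, A, B):
    return 4 * np.pi**2 * r_p * (1 - A / (8 * (B + r_p)))

r_p = np.linspace(0.1, 10.0, 200)   # a range of horizon radii
A, B, K = 1.0, 1.0, -5.0            # illustrative values, as in FIG. 1-4
assert np.all(hawking_temperature(r_p, A, B, K) > 0)
assert np.all(bh_entropy(r_p, A, B) > 0)
```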
Figure 3: The Hawking temperature and Bekenstein-Hawking entropy are plotted
with different scalar charges $A$, where other parameters have been fixed as
$B=1$ and $K=-5$.
Figure 4: The Bekenstein-Hawking entropy as a function of the scalar charge
$A$, where other parameters have been fixed as $B=1$, $K=-5$ and $c_{3}=1$.
### III.2 Exact Black Hole Solution with Phantom Hair
In the previous section, we have set $c_{1}=1$ and $c_{2}=0$, therefore the
$f(R)$ model consists of the pure Einstein-Hilbert term and corrections that
arise from the existence of the scalar field. We have shown that when the
scalar field vanishes, we recover the well known result of General Relativity,
the BTZ black hole [1].
We will now discuss the possibility that the scalar field purely supports
the $f(R)$ model by setting $c_{1}=c_{2}=0$. From equation (18) we can see
that due to the $\mathcal{O}(r^{-n})$ (where $n>0$) nature of the scalar field
and the double integration, there will be regions where $f_{R}<0$. For
example, for our scalar profile (21), $f_{R}$ turns out to be
$f_{R}(r)=-\frac{A}{8(B+r)}~{},$ (48)
which is always negative for $A,B>0$. With this form of $f_{R}$ one can derive
an exact hairy black hole solution similar to a hairy BTZ black hole, which,
however, has negative entropy, as can be seen from relation (47).
It is clear that a sign reversal of $f(R)$ can fix the negative entropy
problem. As a result, the sign reversal of other terms in the action is also
required, which leads to a phantom scalar field instead of the regular one.
This is in agreement with recent observational results, which require phantom
energy with an equation of state $w<-1$ in the early universe to support the
cosmological evolution [64, 65, 66]. As will be shown in the following, in the
pure $f(R)$ gravity theory the curvature acquires non-linear correction terms
which make the curvature stronger, as expected in the early universe.
Hence, we consider the following action
$S=\int d^{3}x\sqrt{-g}\left\{\frac{1}{2\kappa}f(R)+\frac{1}{2}g^{\mu\nu}\partial_{\mu}\phi\partial_{\nu}\phi-V(\phi)\right\}~,\qquad(49)$
which is the action (9), but with the kinetic energy of the scalar field
entering with the positive sign, corresponding to a phantom scalar field
instead of a regular one. Under the same metric ansatz (5), equation (16) now becomes
$f_{R}^{\prime\prime}(r)-\phi^{\prime}(r)^{2}=0~{},$ (50)
and by integration
$f_{R}(r)=\int\int\phi^{\prime}(r)^{2}drdr~{},$ (51)
having set $c_{1}=0$ and $c_{2}=0$. With the same profile of the scalar field,
the solution of this action becomes
$\phi(r)=\sqrt{\frac{A}{B+r}}~,\qquad(52)$
$f_{R}(r)=\frac{A}{8(B+r)}~,\qquad(53)$
$b(r)=\frac{4BK}{A}+\frac{8Kr}{A}-\Lambda r^{2}~,\qquad(54)$
$R(r)=6\Lambda-\frac{16K}{Ar}~,\qquad(55)$
$V(r)=\frac{B(AB\Lambda+4K)}{8(B+r)^{3}}-\frac{3AB\Lambda+8K}{8B(B+r)}-\frac{K}{B^{2}}\ln{\left(\frac{B+r}{r}\right)}~,\qquad(56)$
$f(r)=-\frac{2K}{Br}+\frac{2K}{B^{2}}\ln{\left(\frac{B+r}{r}\right)}~,\qquad(57)$
$f(R)=\frac{AR}{8B}-\frac{3A\Lambda}{4B}+\frac{2K}{B^{2}}\ln\left(\frac{6AB\Lambda-ABR+16K}{16K}\right)~,\qquad(58)$
$V(\phi)=-\frac{K\phi^{2}}{AB}-\frac{3\Lambda\phi^{2}}{8}+\frac{B^{2}\Lambda\phi^{6}}{8A^{2}}+\frac{BK\phi^{6}}{2A^{3}}+\frac{K}{B^{2}}\ln\left(\frac{A}{A-B\phi^{2}}\right)~.\qquad(59)$
The $f(R)$ model avoids the aforementioned tachyonic instability when
$f_{RR}>0$, and for the obtained $f(R)$ function we have
$f_{RR}=-\frac{A^{2}r^{2}}{128K(B+r)^{2}}>0\Rightarrow K<0~{}.$ (60)
For a particular combination of the scalar charges, $B=A/8$, the $f(R)$ model
simplifies and takes the form
$f(R)=R-6\Lambda+\frac{128K}{A^{2}}\ln\left(1-\frac{A^{2}(R-6\Lambda)}{128K}\right)~.\qquad(61)$
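As a consistency check (ours, not in the original text), one can verify symbolically that substituting the Ricci scalar (55) into the $f(R)$ model (58) reproduces the radial profile $f(r)$ of (57); the check below compares the log argument and the remaining terms separately.

```python
# Symbolic check (ours) that f(R(r)) from (58) reproduces f(r) from (57) once
# R(r) is taken from (55); positivity assumptions only aid simplification.
import sympy as sp

r, A, B, K, L = sp.symbols('r A B K Lambda', positive=True)

R = 6*L - 16*K/(A*r)                                  # Ricci scalar, eq. (55)

# Argument of the log in (58) reduces to (B + r)/r, the log argument of (57).
arg = (6*A*B*L - A*B*R + 16*K) / (16*K)
assert sp.simplify(arg - (B + r)/r) == 0

# The non-log part of (58), A R/(8B) - 3 A Lambda/(4B), reduces to -2K/(Br),
# the non-log part of (57).
poly_58 = A*R/(8*B) - 3*A*L/(4*B)
assert sp.simplify(poly_58 - (-2*K/(B*r))) == 0
```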
The metric function (54), as we can see, is similar to that of the BTZ black
hole with the addition of an $\mathcal{O}(r)$ term due to the presence of the
scalar field, and this term gives the Ricci scalar its dynamical behaviour. The potential
satisfies the conditions
$V(r\rightarrow\infty)=V(\phi\rightarrow 0)=0~{},$ (62)
and also $V^{\prime}(\phi=0)=0$. It has a mass term which is given by
$m^{2}=V^{\prime\prime}(\phi=0)=-\frac{3}{4}\Lambda~{}.$ (63)
The metric function for $\Lambda=-1/l^{2}$ (AdS spacetime) and for $A,B>0$ has
a positive root, since $K<0$. For $\Lambda=1/l^{2}$ (dS spacetime) the metric
function is always negative provided $A,B>0$ and $K<0$; therefore we will
discuss only the AdS case.
The horizon is located at
$r_{+}=\frac{2l\left(\sqrt{K\left(4Kl^{2}-AB\right)}-2Kl\right)}{A}~{},$ (64)
where we have set $\Lambda=-1/l^{2}$. As we can see, in this $f(R)$ gravity
theory we have a hairy black hole supported by a phantom scalar field.
Figure 5: We plot the metric function, the potential, the Ricci scalar and the
$f(R)$ function of the phantom black hole for different scalar charge $A$,
where other parameters have been fixed as $B=A/8$, $K=-1$ and $\Lambda=-1$.
Figure 6: The temperature and the entropy at the horizon of the black hole, as
functions of the scalar charge $A$ while changing scalar charge $B$.
In FIG. 5 we show the behaviour of the metric function $b(r)$, the potential
$V(r)$, the dynamical Ricci scalar $R(r)$ and the $f(R)$ function. As can be
seen, in the case of $B=A/8$ the scalar charge $A$ plays an important role in
the behaviour of the above functions. For example, as the scalar charge $A$
takes smaller values, the horizon radius of the black hole becomes larger.
This means that even a small distribution of phantom matter can support a
hairy black hole.
Looking at the thermodynamic properties of the model the Hawking temperature
at the horizon is given by
$T(r_{+})=\frac{2K}{\pi A}+\frac{r_{+}}{2\pi l^{2}}=\frac{\sqrt{K\left(4Kl^{2}-AB\right)}}{\pi Al}~,\qquad(65)$
which is always positive for $A,B>0$ and $K<0$, while the Bekenstein-Hawking
entropy is given by
$S(r_{+})=\frac{\mathcal{A}}{4G}f_{R}(r_{+})=4\pi^{2}r_{+}f_{R}(r_{+})=\frac{A\pi^{2}r_{+}}{2(B+r_{+})}=-\frac{\pi^{2}AKl}{\sqrt{K\left(4Kl^{2}-AB\right)}}>0~{}.$
(66)
For the thermodynamic behaviour of the hairy black hole, we can see from FIG. 6
that larger scalar charge $A$ yields smaller temperatures, while the entropy
shows the opposite behaviour.
## IV Conclusions
In this work, we considered $(2+1)$-dimensional $f(R)$ gravity with a self
interacting scalar field as a matter field. Without specifying the form of the
$f(R)$ function we derived the field equations and we showed that the $f(R)$
model has a direct contribution from the scalar field. We first considered
the case where $f_{R}(r)=1-\int\int\phi^{\prime}(r)^{2}drdr$, which indicates
that if we integrate with respect to the Ricci scalar we will obtain a pure
Einstein-Hilbert term and another term that depends on the scalar field. The
asymptotic analysis of the metric function unveiled the physical meaning of
our results. At infinity we obtain a scalarized BTZ black hole. The scalar
charges appear in the effective cosmological constant that is generated from
the equations. Corrections to the form of spacetime appear as
$\mathcal{O}(r^{-n})$ (where $n\geq 1$) terms that depend purely on the scalar
charges. At the origin we obtain a different solution from the BTZ black hole,
where $\mathcal{O}(r)$ and $\mathcal{O}(r^{2}\ln(r))$ terms change the form of
spacetime.
The scalar curvature is dynamical and due to its complexity it was difficult
to obtain an exact form of the $f(R)$ function. Using asymptotic
approximations, we showed that the scalar charges make our theory deviate
from Einstein's Gravity. In the obtained results we considered the Dolgov-
Kawasaki stability criterion [47] to ensure that our theory avoids tachyonic
instabilities [30, 31, 32]. We then calculated the Bekenstein-Hawking entropy
and the Hawking temperature of the solution and we showed that the hairy
solution is thermodynamically preferred since it has higher entropy.
We then considered a pure $f(R)$ theory supported by the scalar field. We
showed that thermodynamic and observational constraints require that the pure
$f(R)$ theory should be built with a phantom scalar field. The black hole
solution we found has a metric function which is similar to the BTZ solution
with the addition of a $\mathcal{O}(r)$ term. The scalar charge is the one
that determines the behaviour of the solution. For larger scalar charge the
horizon radius becomes smaller, meaning that the black hole is formed closer
to the origin. The $\mathcal{O}(r)$ term is the one that gives the Ricci
scalar its dynamical behaviour. The obtained $f(R)$ model is free from
tachyonic instabilities. We computed the Hawking temperature and the
Bekenstein-Hawking entropy to find out that they are both positive, with the
temperature getting smaller with the increase of the scalar charge while the
entropy behaves the opposite way, growing with the increase of the scalar
charge.
In $f(R)$ gravity theories, if a conformal transformation is applied from the
original Jordan frame to the Einstein frame, a new scalar field appears which
is minimally coupled to the conformal metric, and a scalar potential is also
generated. The resulting theory can be considered as a scalar-tensor theory
with a geometric (gravitational) scalar field. It was then shown in [67, 68]
that this geometric scalar field cannot dress an $f(R)$ black hole
with hair. On the other hand on cosmological grounds, it was shown in [41]
that dark energy can be considered as a geometrical fluid that adds to the
conventional stress-energy tensor, which means that the determination of the
dark energy equation of state depends on the understanding of which $f(R)$
theory better fits current data. In our study we have introduced real matter
parameterized by a scalar field coupled to gravity, therefore, it would be
interesting to study the interplay of the geometric scalar field with the
matter scalar field and see what their implications for cosmology are. However,
to study this effect we have to extend this work to a study of
$(3+1)$-dimensional $f(R)$ gravity theories. The main difficulty of
constructing such theories is the complexity of their resulting equations.
Nevertheless, even numerically we can get important information about how
matter couples to $f(R)$ gravity and what the cosmological implications are.
It would be interesting to extend this theory by including an electromagnetic
field. In three dimensions the electric charge makes a contribution to the
Ricci scalar, therefore we expect, like in the BTZ black hole, to find a
charged hairy black hole in $f(R)$ gravity. One could also study the
properties of the boundary CFT, consider a rotationally symmetric metric
ansatz to find rotating hairy black holes, or study hairy axially symmetric
solutions derived from hairy spherically symmetric solutions [69].
###### Acknowledgements.
We thank Christos Charmousis for very stimulating discussions.
## References
* [1] M. Bañados, C. Teitelboim and J. Zanelli, “The Black hole in three-dimensional space-time,” Phys. Rev. Lett. 69, 1849 (1992) [arXiv:hep-th/9204099].
* [2] M. Bañados, M. Henneaux, C. Teitelboim and J. Zanelli, “ Geometry of the (2+1) black hole,” Phys. Rev. D 48, 1506 (1993) [arXiv:gr-qc/9302012].
* [3] C. Martínez and J. Zanelli, “Conformally dressed black hole in (2+1)-dimensions,” Phys. Rev. D 54, 3830 (1996) [gr-qc/9604021].
* [4] M. Henneaux, C. Martínez, R. Troncoso and J. Zanelli, “Black holes and asymptotics of 2+1 gravity coupled to a scalar field,” Phys. Rev. D 65, 104007 (2002) [hep-th/0201170].
* [5] F. Correa, C. Martínez and R. Troncoso, “Scalar solitons and the microscopic entropy of hairy black holes in three dimensions,” JHEP 1101, 034 (2011). [arXiv:1010.1259 [hep-th]].
* [6] F. Correa, C. Martínez and R. Troncoso, “Hairy Black Hole Entropy and the Role of Solitons in Three Dimensions,” JHEP 1202, 136 (2012). [arXiv:1112.6198 [hep-th]].
* [7] F. Correa, A. Faúndez and C. Martínez, “Rotating hairy black hole and its microscopic entropy in three spacetime dimensions,” Phys. Rev. D 87, 027502 (2013) [arXiv:1211.4878 [hep-th]].
* [8] M. Natsuume and T. Okamura, “Entropy for asymptotically AdS(3) black holes,” Phys. Rev. D 62, 064027 (2000) [hep-th/9911062].
* [9] J. Aparício, D. Grumiller, E. Lopez, I. Papadimitriou and S. Stricker, “Bootstrapping gravity solutions,” JHEP 1305, 128 (2013) [arXiv:1212.3609 [hep-th]].
* [10] W. Xu, L. Zhao and D. -C. Zou, “Three dimensional rotating hairy black holes, asymptotics and thermodynamics,” arXiv:1406.7153 [gr-qc].
* [11] M. Cardenas, O. Fuentealba and C. Martínez, “Three-dimensional black holes with conformally coupled scalar and gauge fields,” Phys. Rev. D 90, no.12, 124072 (2014) [arXiv:1408.1401 [hep-th]].
* [12] W. Xu and D. C. Zou, “$(2+1)$ -Dimensional charged black holes with scalar hair in Einstein–Power–Maxwell Theory,” Gen. Rel. Grav. 49, no.6, 73 (2017) [arXiv:1408.1998 [hep-th]].
* [13] W. Xu and L. Zhao, “Charged black hole with a scalar hair in (2+1) dimensions,” [arXiv:1305.5446 [gr-qc]].
* [14] Z. Y. Tang, Y. C. Ong, B. Wang and E. Papantonopoulos, “General black hole solutions in ( 2+1 )-dimensions with a scalar field nonminimally coupled to gravity,” Phys. Rev. D 100, no.2, 024003 (2019) [arXiv:1901.07310 [gr-qc]].
* [15] K. C. K. Chan and R. B. Mann, “Static charged black holes in (2+1)-dimensional dilaton gravity,” Phys. Rev. D 50, 6385 (1994) [erratum: Phys. Rev. D 52, 2600 (1995)] [arXiv:gr-qc/9404040 [gr-qc]].
* [16] K. C. K. Chan, “Modifications of the BTZ black hole by a dilaton / scalar,” Phys. Rev. D 55, 3564-3574 (1997) [arXiv:gr-qc/9603038 [gr-qc]].
* [17] N. Bocharova, K. Bronnikov and V. Melnikov, Vestn. Mosk. Univ. Fiz. Astron. 6, 706 (1970);
J. D. Bekenstein, Annals Phys. 82, 535 (1974);
J. D. Bekenstein, “Black Holes With Scalar Charge,” Annals Phys. 91, 75
(1975).
* [18] K. A. Bronnikov and Y. N. Kireyev, “Instability of black holes with scalar charge,” Phys. Lett. A 67, 95 (1978).
* [19] C. Martinez, R. Troncoso and J. Zanelli, “De Sitter black hole with a conformally coupled scalar field in four-dimensions,” Phys. Rev. D 67, 024008 (2003) [arXiv:hep-th/0205319 [hep-th]].
* [20] T. J. T. Harper, P. A. Thomas, E. Winstanley and P. M. Young, “Instability of a four-dimensional de Sitter black hole with a conformally coupled scalar field,” Phys. Rev. D 70, 064023 (2004) [arXiv:gr-qc/0312104 [gr-qc]].
* [21] A. Anabalon and A. Cisterna, “Asymptotically (anti) de Sitter Black Holes and Wormholes with a Self Interacting Scalar Field in Four Dimensions,” Phys. Rev. D 85, 084035 (2012) doi:10.1103/PhysRevD.85.084035 [arXiv:1201.2008 [hep-th]].
* [22] C. Martinez, R. Troncoso and J. Zanelli, “Exact black hole solution with a minimally coupled scalar field,” Phys. Rev. D 70, 084035 (2004) [arXiv:hep-th/0406111 [hep-th]].
* [23] C. Martinez and R. Troncoso, “Electrically charged black hole with scalar hair,” Phys. Rev. D 74, 064007 (2006) [arXiv:hep-th/0606130 [hep-th]].
* [24] T. Kolyvaris, G. Koutsoumbas, E. Papantonopoulos and G. Siopsis, “A New Class of Exact Hairy Black Hole Solutions,” Gen. Rel. Grav. 43, 163-180 (2011) [arXiv:0911.1711 [hep-th]].
* [25] P. A. González, E. Papantonopoulos, J. Saavedra and Y. Vásquez, “Four-Dimensional Asymptotically AdS Black Holes with Scalar Hair,” JHEP 12, 021 (2013) [arXiv:1309.2161 [gr-qc]].
* [26] P. A. González, E. Papantonopoulos, J. Saavedra and Y. Vásquez, “Extremal Hairy Black Holes,” JHEP 11, 011 (2014) [arXiv:1408.7009 [gr-qc]].
* [27] M. Rinaldi, “Black holes with non-minimal derivative coupling,” Phys. Rev. D 86, 084048 (2012) [arXiv:1208.0103 [gr-qc]].
* [28] A. Anabalon, A. Cisterna and J. Oliva, “Asymptotically locally AdS and flat black holes in Horndeski theory,” Phys. Rev. D 89, 084050 (2014) doi:10.1103/PhysRevD.89.084050 [arXiv:1312.3597 [gr-qc]].
* [29] A. Cisterna, A. Neira-Gallegos, J. Oliva and S. C. Rebolledo-Caceres, “Pleba\’nski-Demia\’nski solutions in Quadratic gravity with conformally coupled scalar fields,” [arXiv:2101.03628 [gr-qc]].
* [30] A. De Felice and S. Tsujikawa, “f(R) theories,” Living Rev. Rel. 13, 3 (2010), [arXiv:1002.4928 [gr-qc]].
* [31] O. Bertolami and M. C. Sequeira, “Energy Conditions and Stability in f(R) theories of gravity with non-minimal coupling to matter,” Phys. Rev. D 79, 104010 (2009) [arXiv:0903.4540 [gr-qc]].
* [32] V. Faraoni, “Matter instability in modified gravity,” Phys. Rev. D 74, 104017 (2006) [arXiv:astro-ph/0610734 [astro-ph]].
* [33] G. Cognola, E. Elizalde, S. Nojiri, S. D. Odintsov, L. Sebastiani and S. Zerbini, “A Class of viable modified f(R) gravities describing inflation and the onset of accelerated expansion,” Phys. Rev. D 77 (2008) 046009 [arXiv:0712.4017 [hep-th]].
* [34] P. Zhang, “Testing $f(R)$ gravity against the large scale structure of the universe.,” Phys. Rev. D 73 (2006) 123504 [astro-ph/0511218].
* [35] B. Li and J. D. Barrow, “The Cosmology of f(R) gravity in metric variational approach,” Phys. Rev. D 75 (2007) 084010 [gr-qc/0701111].
* [36] Y. S. Song, H. Peiris and W. Hu, “Cosmological Constraints on f(R) Acceleration Models,” Phys. Rev. D 76 (2007) 063517 [arXiv:0706.2399 [astro-ph]].
* [37] S. Nojiri and S. D. Odintsov, “Modified f(R) gravity unifying R**m inflation with Lambda CDM epoch,” Phys. Rev. D 77 (2008) 026007 [arXiv:0710.1738 [hep-th]].
* [38] S. Nojiri and S. D. Odintsov, “Unifying inflation with LambdaCDM epoch in modified f(R) gravity consistent with Solar System tests,” Phys. Lett. B 657 (2007) 238 [arXiv:0707.1941 [hep-th]].
* [39] S. Capozziello, C. A. Mantica and L. G. Molinari, “Cosmological perfect-fluids in f(R) gravity,” Int. J. Geom. Meth. Mod. Phys. 16 (2018) no.01, 1950008 [arXiv:1810.03204 [gr-qc]].
* [40] A. A. Starobinsky, “A New Type of Isotropic Cosmological Models Without Singularity,” Phys. Lett. B 91 (1980) 99 [Phys. Lett. 91B (1980) 99] [Adv. Ser. Astrophys. Cosmol. 3 (1987) 130].
* [41] S. Capozziello, O. Farooq, O. Luongo and B. Ratra, “Cosmographic bounds on the cosmological deceleration-acceleration transition redshift in $f(\mathfrak{R})$ gravity,” Phys. Rev. D 90, no.4, 044016 (2014) [arXiv:1403.1421 [gr-qc]].
* [42] M. Ostrogradsky, “Mémoires sur les équations différentielles, relatives au problème des isopérimètres,” Mem. Acad. St. Petersbourg 6 (1850) no.4, 385.
* [43] R. P. Woodard, “Avoiding dark energy with 1/r modifications of gravity,” Lect. Notes Phys. 720 (2007) 403 [astro-ph/0601672].
* [44] L. Sebastiani and S. Zerbini, “Static Spherically Symmetric Solutions in F(R) Gravity,” Eur. Phys. J. C 71, 1591 (2011) [arXiv:1012.5230 [gr-qc]].
* [45] T. Multamaki and I. Vilja, “Spherically symmetric solutions of modified field equations in f(R) theories of gravity,” Phys. Rev. D 74, 064022 (2006) [arXiv:astro-ph/0606373 [astro-ph]].
* [46] S. H. Hendi, “(2+1)-Dimensional Solutions in $F(R)$ Gravity,” Int. J. Theor. Phys. 53, no.12, 4170-4181 (2014) [arXiv:1410.7527 [gr-qc]].
* [47] S. H. Hendi, B. Eslam Panah and R. Saffari, “Exact solutions of three-dimensional black holes: Einstein gravity versus $F(R)$ gravity,” Int. J. Mod. Phys. D 23, no.11, 1450088 (2014) [arXiv:1408.5570 [hep-th]].
* [48] T. Multamaki and I. Vilja, “Static spherically symmetric perfect fluid solutions in f(R) theories of gravity,” Phys. Rev. D 76, 064021 (2007) [arXiv:astro-ph/0612775 [astro-ph]].
* [49] G. G. L. Nashed and E. N. Saridakis, “New rotating black holes in non-linear Maxwell $f({\cal R})$ gravity,” [arXiv:2010.10422 [gr-qc]].
* [50] G. G. L. Nashed and S. Nojiri, “Analytic charged BHs in $f(\mathcal{R})$ gravity,” [arXiv:2010.04701 [hep-th]].
* [51] G. G. L. Nashed and K. Bamba, “Charged spherically symmetric Taub–NUT black hole solutions in $f(R)$ gravity,” PTEP 2020, no.4, 043E05 (2020) [arXiv:1902.08020 [gr-qc]].
* [52] G. G. L. Nashed and S. Capozziello, “Charged spherically symmetric black holes in $f(R)$ gravity and their stability analysis,” Phys. Rev. D 99, no.10, 104018 (2019) [arXiv:1902.06783 [gr-qc]].
* [53] J. A. R. Cembranos, A. de la Cruz-Dombriz and P. Jimeno Romero, “Kerr-Newman black holes in $f(R)$ theories,” Int. J. Geom. Meth. Mod. Phys. 11 (2014) 1450001 [arXiv:1109.4519 [gr-qc]].
* [54] A. de la Cruz-Dombriz, A. Dobado and A. L. Maroto, “Black Holes in f(R) theories,” Phys. Rev. D 80 (2009) 124011 Erratum: [Phys. Rev. D 83 (2011) 029903] [arXiv:0907.3872 [gr-qc]].
* [55] S. H. Hendi, B. Eslam Panah and S. M. Mousavi, “Some exact solutions of F(R) gravity with charged (a)dS black hole interpretation,” Gen. Rel. Grav. 44, 835 (2012) [arXiv:1102.0089 [hep-th]].
* [56] E. F. Eiroa and G. Figueroa-Aguirre, “Thin shells in (2+1)-dimensional F(R) gravity,” [arXiv:2011.14952 [gr-qc]].
* [57] Z. Y. Tang, B. Wang, T. Karakasis and E. Papantonopoulos, “Curvature Scalarization of Black Holes in f(R) Gravity,” [arXiv:2008.13318 [gr-qc]].
* [58] Z. Y. Tang, B. Wang and E. Papantonopoulos, “Exact charged black hole solutions in D-dimensions in f(R) gravity,” [arXiv:1911.06988 [gr-qc]].
* [59] J. D. Bekenstein, “Nonexistence of baryon number for static black holes,” Phys. Rev. D 5, 1239-1246 (1972)
* [60] P. Breitenlohner and D. Z. Freedman, “Stability in Gauged Extended Supergravity,” Annals Phys. 144, 249 (1982)
* [61] L. Mezincescu and P. K. Townsend, “Stability at a Local Maximum in Higher Dimensional Anti-de Sitter Space and Applications to Supergravity,” Annals Phys. 160, 406 (1985)
* [62] M. Akbar and R. G. Cai, “Thermodynamic Behavior of Field Equations for f(R) Gravity,” Phys. Lett. B 648, 243-248 (2007) [arXiv:gr-qc/0612089 [gr-qc]].
* [63] U. Camci, “Three-dimensional black holes via Noether symmetries,” [arXiv:2012.06064 [gr-qc]].
* [64] R. R. Caldwell, “A Phantom menace?,” Phys. Lett. B 545, 23-29 (2002) [arXiv:astro-ph/9908168 [astro-ph]].
* [65] L. Zhang, X. Zeng and Z. Li, “AdS Black Hole with Phantom Scalar Field,” Adv. High Energy Phys. 2017, 4940187 (2017) [arXiv:1707.04429 [hep-th]].
* [66] K. A. Bronnikov and J. C. Fabris, “Regular phantom black holes,” Phys. Rev. Lett. 96, 251101 (2006) [arXiv:gr-qc/0511109 [gr-qc]].
* [67] P. Cañate, L. G. Jaime and M. Salgado, “Spherically symmetric black holes in $f(R)$ gravity: Is geometric scalar hair supported ?,” Class. Quant. Grav. 33, no.15, 155005 (2016) [arXiv:1509.01664 [gr-qc]].
* [68] P. Cañate, “A no-hair theorem for black holes in $f(R)$ gravity,” Class. Quant. Grav. 35, no.2, 025018 (2018)
* [69] S. Capozziello, M. De laurentis and A. Stabile, “Axially symmetric solutions in f(R)-gravity,” Class. Quant. Grav. 27, 165008 (2010) [arXiv:0912.5286 [gr-qc]].
# Towards Deep Learning Assisted Autonomous UAVs for Manipulation Tasks in
GPS-Denied Environments
Ashish Kumar†, Mohit Vohra†, Ravi Prakash†, L. Behera†, Senior Member IEEE
{https://github.com/ashishkumar822, https://youtu.be/kxg9xmr3aEM} †All authors
are with the Department of Electrical Engineering, Indian Institute of
Technology, Kanpur<EMAIL_ADDRESS>
###### Abstract
In this work, we present a pragmatic approach to enable unmanned aerial
vehicles (UAVs) to autonomously perform the highly complicated task of object
pick and place. This paper is largely inspired by challenge-$2$ of MBZIRC $2020$
and is primarily focused on the task of assembling large $3$D structures in
outdoor, GPS-denied environments. The primary contributions of this system
are: (i) a novel computationally efficient deep learning based unified multi-
task visual perception system for target localization, part segmentation, and
tracking, (ii) a novel deep learning based grasp state estimation, (iii) a
retracting electromagnetic gripper design, (iv) a remote computing approach
which exploits state-of-the-art MIMO based high speed ($5000$Mb/s) wireless
links to allow the UAVs to execute compute intensive tasks on remote high end
compute servers, and (v) system integration in which several system components
are woven together in order to develop an optimized software stack. We use the
DJI Matrice-$600$ Pro, a hex-rotor UAV, and interface it with the custom
designed gripper. Our framework is deployed on the specified UAV in order to
report the performance analysis of the individual modules. Apart from the
manipulation system, we also highlight several hidden challenges associated
with the UAVs in this context.
## I Introduction
Despite being an easy task for humans, object manipulation using robots
(robotic manipulation) is not very straightforward. Robotic manipulation
using multi-DoF robotic arms has been studied extensively in the literature
and has witnessed major breakthroughs in the past decade, thanks to the advent
of deep learning based visual perception methods and high performance parallel
computing hardware (GPUs, TPUs). Various international level robotics
challenges such as DARPA, the Amazon Picking Challenge $2016$ and the Amazon
Robotics Challenge $2017$ have played a major role in pushing the
state-of-the-art in this area.
Autonomous robotic manipulation using UAVs, on the other hand, has just
started marking its presence. This is because developing low cost UAVs or
vertical-takeoff-landing (VTOL) micro-aerial-vehicles (MAV) (used
interchangeably with UAV throughout the paper) has only recently become
possible. The low cost revolution is primarily driven by drone manufacturers
such as DJI, which covers $\sim 70\%$ of the drone market worldwide and
provides low cost industrial drones for manual operation. Apart from that,
open-source autopilot projects such as Ardupilot [1] have also contributed to
this revolution. The above reasons have resulted in increased popularity among
researchers worldwide for developing algorithms for autonomous operation. In
the area of manipulation using UAVs, one of the most prominent and visible
efforts comes from the e-commerce giant Amazon: a multi-rotor UAV based
delivery system.
Figure 1: Top: M$600$ Pro with our gripper (labeled components: battery,
antenna, stereo rig, propellers, EM gripper, polymer foam, electromagnets,
Nvidia Jetson TX$2$, Intel RealSense D$435$i, landing gear). Bottom: bricks of
sizes $0.2m\times 0.2m\times 0.3m$ ($1.0$Kg), $0.2m\times 0.2m\times 0.6m$
($1.0$Kg), $0.2m\times 0.2m\times 1.2m$ ($1.5$Kgs) and $0.2m\times 0.2m\times
2.0m$ ($2.0$Kgs).
Another famous example is the Mohammad Bin Zayed International Robotics
Challenge (MBZIRC) $2020$, which is establishing new benchmarks for drone
based autonomy. Challenge-$2$ of MBZIRC $2020$ requires a team of UAVs and an
Unmanned Ground Vehicle (UGV) to locate, pick, transport and assemble fairly
large cuboidal shaped objects (bricks (Fig.1)) into a given pattern to unveil
tall $3$D structures. The bricks consist of identifiable ferromagnetic regions
with yellow shade.
Inspired by the challenge discussed above, in this paper we present an end-
to-end industry grade solution for UAV based manipulation tasks. The developed
solution is not limited to constrained workspaces and can be readily deployed
in real world applications. Before diving into the actual solution, we first
uncover several key challenges associated with UAV based manipulation below.
### I-A Autonomous Control
Multi-rotor VTOL MAVs are often termed under-actuated, highly non-linear and
complex dynamical systems. These characteristics allow them to enjoy high
agility and the ability to perform complex maneuvering tasks such as mid-air
flipping, sudden sharp turns etc. However, the agility comes at a cost which
is directly related to the highly coupled control variables. For this reason,
UAV control design is not an easy task. Hierarchical Proportional-Integral-
Derivative (PID) control is one of the popular approaches generally employed
for this purpose. Being quite simple, PID controllers are not intelligent
enough to account for dynamic changes in the surroundings during autonomous
operation. To cope with this, the focus of the research community has recently
shifted towards machine learning based techniques for UAV control. However,
due to data dependency and lack of generalization, learning based algorithms
are still far from a deployable solution. Therefore, robust control of UAVs
for autonomous operations still remains a challenge.
### I-B Rotor Draft
UAV rotor draft is a crucial factor which must be considered while performing
drone based object manipulation. In order to execute a grasp operation, either
the UAV must fly close to the target object or it must have a manipulator long
enough that the rotor draft disturbs neither the object nor the UAV itself.
The former is the only feasible option, but it is not easy. When flying at low
altitudes, the rotor draft severely disturbs the UAV and therefore poses
significant difficulty for stabilization and hovering algorithms. The latter,
on the other hand, is not even possible, as it would require a gripper several
meters long to diminish the effects of rotor draft. In addition, even a
grasping mechanism $1-2m$ long will increase the payload of the UAV, resulting
in quicker power source drainage. Also, such a long gripper would be
infeasible to accommodate in the UAV body.
### I-C Dynamic Payload
Dynamic payload attachment to the UAV is another important issue which arises
after a successful grasp operation. When a relatively heavy payload is
attached to the UAV dynamically, the system characteristics and the static
thrust, a.k.a. hovering thrust, also need to be adjusted. However, as the
static thrust is increased in order to compensate for gravity, issues of
heating of the batteries and propulsion system arise. This raises safety
concerns about the battery system and also decreases operational flight time.
Although the addition of small weights to the UAV can be ignored, the effect
becomes severe for larger payload weights, i.e. when the weight to be attached
is of the order of $4$Kg on a UAV with a payload capacity of $6$Kg. Thus, the
need for intelligent control algorithms which can handle the non-linearities
associated with UAVs becomes evident.
### I-D Visual Perception
In order to perform the seemingly easy task of picking, a well defined
sequence of visual perception tasks must be executed. This primarily includes
object detection, instance detection and segmentation, instance selection,
and instance tracking. While performing these tasks in constrained
environments such as uniform backgrounds, the perception algorithms turn out
to be relatively simple. However, in scenarios such as outdoors, several
uncontrolled variables jump in, for example, ambient light, dynamic objects,
highly complex and dynamic visual scenes, and multiple confusing candidates.
All such external variables raise the difficulty level considerably. Apart
from that, real-time inference of the perception modules is also desired;
however, limited onboard computational and electric power further convolutes
the system design.
### I-E Localization and Navigation
UAV localization and navigation play an important role in exploring the
workspace autonomously and executing waypoint operations. In GPS based
systems, location information can be obtained easily and optionally fused with
inertial-measurement-units (IMUs) for high frequency state-estimation.
However, in GPS denied situations, localization and navigation no longer
remain easy tasks from an algorithmic point of view. In such scenarios,
visual-SLAM or visual odometry based algorithms are generally employed. These
algorithms perform feature matching over images and $3$D point clouds and
carry out several optimization procedures, which in turn require a significant
amount of computational power, making the overall problem considerably harder.
Due to the above mentioned complexities, limitations and constraints,
performing the task of object manipulation using UAVs is not easy. Hence, in
this work, we pave a way to solve the problem of UAV based manipulation in the
presence of the several challenges discussed above. In this paper, we
primarily focus on real-time accurate visual perception, and on localization
and navigation using visual-SLAM for $6$-DoF state-estimation in GPS-denied
environments. The state-estimation is used for several other high level tasks
such as autonomous take-off, landing, planning, navigation and control. Later,
these modules are utilized to perform object pick, estimation of successful
grasp, transport and place operations, primarily governed by our deep learning
based visual perception pipeline. Highlights of the paper are as follows:
1. Identification and detailed discussion of the challenges associated with UAV based object manipulation.
2. A novel computationally efficient, deep learning based unified multi-task visual perception system for instance level detection, segmentation, part segmentation, and tracking.
3. A novel visual learning based grasp state feedback.
4. A remote computing approach for UAVs.
5. An electromagnet based gripper design for UAVs.
6. High precision $6$-DoF state estimation developed on top of ORB-SLAM$2$ [2] visual-SLAM.
A side contributory goal of this paper is to benefit the community by
providing fine details on the low level complexities associated with UAV based
manipulation and potential solutions to the problem. In the next section, we
first discuss the available literature. In Sec. III, we discuss the system
design. In Sec. IV, we discuss the unified visual perception system. In Sec.
V, system integration is discussed. Sec. VI provides the experimental study
and, finally, Sec. VII provides the conclusions of the paper.
## II Related Work
Due to the diverse literature on robotic vision and very limited space, we are
bound to provide a very short discussion of relevant work. We introduce only a
few popular learning based approaches for instance detection, segmentation,
tracking and visual-SLAM. AlexNet [3], VGG [4] and ResNet [5] are among the
first Convolutional Neural Networks (CNNs). The object detectors Faster-
RCNN [6], Fast-RCNN [7] and RCNN [8] are developed on top of them. FCN [9],
PSPNet [10] and RefineNet [11] are approaches developed for segmentation
tasks. Mask-RCNN [12] combines [6] and FPN [13] to improve object detection
accuracy and also proposes an ROI Align approach to facilitate instance
segmentation. ORB-SLAM [14] and ORB-SLAM$2$ [2] are popular approaches for
monocular and stereo based visual-SLAM. UnDeepVO [15] is a very recent
approach to visual odometry using unsupervised deep learning. DeepSORT [16] is
a deep learning based multi object tracker inspired by the Kalman filter.
All of the above algorithms are being used in robotics worldwide and many
recent works revolve around them. Nonetheless, the issue of deploying these
algorithms altogether on computationally limited platforms is still a
challenge.
## III System Design
### III-A UAV Platform
According to our experimental observation, an MAV must have a payload capacity
at least double the maximum payload to be lifted. This is required in order to
avoid the complexities involved with dynamic payload attachment (Sec. I-C).
Keeping this observation in mind, we use the DJI Matrice-$600$ Pro hex rotor
(Fig. 1), which has a payload capacity of $6$ Kgs and flight times of $16$ and
$32$ minutes with and without payload respectively. The UAV flight controller
can be accessed through a UART bus via DJI onboard-SDK APIs. DJI UAVs are
quite popular worldwide for manual flying; however, turning them into
autonomous platforms is not straightforward. This is because, being industrial
drones, they are optimized for manual control. Moreover, the SDK APIs lack
proper documentation of important low level control details; for instance, the
rate at which the controller can accept commands remains hidden. Availability
of these details is quite crucial for robust autonomous control of UAVs, since
a delay of merely $20$ms in the control commands can degrade performance
severely. Despite the challenges, their low cost and decent payload capacity
make them an attractive choice for research. This encouraged us to examine the
control responses and delays associated with the communication system of the
UAV in order to adapt it for autonomous operations.
### III-B Gripper Design
We develop a retractable servo actuated gripper (Fig. 2) enabled with
electromagnetic grasping. It consists of two carbon fiber tubes, called the
left and right arms. Robotics grade heavy duty servos are employed to arm and
disarm the gripper when in air. Four servos, two on each side, are used for
high torque and better design stability. Two electromagnets (EMs) on each arm
are used for the gripping mechanism. Each arm and its electromagnets are
connected via an assembly which consists of polymer foam sandwiched between
carbon fiber plates. The carbon fiber tubes are linked at both ends (servo and
foam assembly) via $3$D printed parts, as shown in Fig. 2. While performing a
grasping operation, the foam compression/decompression action compensates for
drift between the UAV's actual and desired height. The control circuit and
firmware are developed on a PIC$18$F$2550$, a Microchip USB-series
microcontroller.
Figure 2: The CAD design of our gripper and $3$D printed parts
### III-C Compute Infrastructure and Communication Device
We equip our platform with the DJI Manifold $2$ mini computer, based on the
NVIDIA Jetson Tegra TX2. The computer is powered by $6$ CPU cores along with
$256$ GPU cores, $8$GB RAM and $32$GB eMMC. A dual band $150$ ($2.4$GHz) +
$717$ ($5$GHz) = $867$Mbps onboard WiFi is also available. We use a MIMO based
ASUS ROG RAPTURE GT-AX-$11000$ Tri-Band wireless router in order to link base
station computers with the onboard computer. The router offers very high data
transfer rates of up to $1148$Mbps on $2.4$GHz and $4802$Mbps on two $5$GHz
bands, aggregating up to $11000$Mbps. The motivation behind using such a high
speed device is to enable real-time high performance remote computing, system
monitoring, remote data logging, and algorithmic debugging. The router also
makes it feasible to develop multi-UAV solutions.
## IV Unified Multi-Task Visual Perception
Figure 3: Unified Visual Perception System. (Diagram summary: a shared RGB
backbone $C_{1}$ and a support-mask backbone $C_{2}$, each with FPN (Feature
Pyramid Network) aggregation; a Mask-RCNN style detector with an RPN (Region
Proposal Network), ROI Pooling $7\times 7$ and ROI Align $14\times 14$ heads
for instance classification, box regression, instance segmentation and part
segmentation (brick class, bounding box, instance mask, magnetic region mask);
a mask-conditioned tracker operating on frames $t-1$ and $t$; and a grasp
state classifier $C_{3}$ predicting foam compressed/uncompressed and object
attached/not attached.)
Deep learning based visual perception systems perform with very high accuracy
on tasks such as object detection, instance detection, segmentation, and
visual tracking. All of these methods exploit the power of CNNs to generate
unique high dimensional latent representations of the data. During the past
decade, a number of CNN based algorithms have been proposed which have shown
increasing improvements in these tasks over time. The CNN models of these
algorithms are well tested on standard datasets; however, they lack
generalization. These models are often also very large in size. Therefore,
direct deployment of such algorithms for real-time robotic applications is not
a wise choice.
In our case, for example, four tasks, i.e. instance detection, segmentation,
part segmentation and object tracking, need to be performed in order to
execute a high level task. It is quite important to note that despite the
availability of several algorithms and their open source implementations
individually, none of them combines these primary tasks together.
Therefore, in the realm of limited computational and power resources, we put
an effort into combining the aforementioned four tasks in a unified CNN
powered visual perception framework. Instead of developing perception modules
from scratch, we build the unified multi-task visual perception system on top
of state-of-the-art algorithms. This is done in order to achieve the goals of
computational and resource efficiency and real-time, robust, fail-safe
performance.
Being a highly complex visual perception pipeline, we explain the overall
perception system pictorially in Fig. 3. Below we discuss each of the
components in detail.
### IV-A Instance Detection and Segmentation
In the presence of multiple instances of an object, it is necessary to perform
instance level detection and segmentation. Mask-RCNN [12] is one of the
popular choices for this purpose. The pretrained models of Mask-RCNN utilize
ResNet-$50$ and ResNet-$101$ as CNN backbones for large datasets such as
MS-COCO and PASCAL-VOC. The run time performance of this algorithm is limited
to $2$-$5$ FPS even on a high-end GPU device having $\sim 3500$ GPU cores.
Hence, deploying even a baseline version of Mask-RCNN on Jetson TX$2$ category
boards appears as a major bottleneck in algorithmic design.
As a solution to adapt Mask-RCNN to our system, we carefully develop an
AlexNet style five stage CNN backbone with MobileNet-v1 style depthwise
separable convolutions. After several refinements of parameter selection, we
come up with a very small and lightweight model which can run in real-time. In
the baseline Mask-RCNN, object detection is performed down to stage-$2$ of
ResNet. However, due to resource limitations, we limit object detection to
stage-$3$, starting from stage-$5$. The rest of the architecture for instance
detection and segmentation remains the same.
Further, in order to improve accuracy, we first train the final model on the
mini ImageNet ILSVRC-$2012$ dataset so that the primary layers of the model
can learn meaningful edge and color responses, similar to the primary visual
cortex in the biological vision system. We then fine tune the model for our
task of instance detection and segmentation. Pretraining on ImageNet improves
accuracy and generalization and suppresses false positives.
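The following PyTorch sketch (ours; the exact layer configuration beyond the widths listed in Table I is an assumption) illustrates a MobileNet-v1 style depthwise separable block and a five stage backbone of the kind described above.

```python
# Minimal sketch (ours) of a depthwise separable conv block and a five stage
# lightweight backbone; the stage widths follow column C1 of Table I.
import torch
import torch.nn as nn

class DepthwiseSeparableConv(nn.Module):
    def __init__(self, in_ch, out_ch, stride=1):
        super().__init__()
        self.dw = nn.Conv2d(in_ch, in_ch, 3, stride, 1, groups=in_ch, bias=False)
        self.pw = nn.Conv2d(in_ch, out_ch, 1, bias=False)
        self.bn1, self.bn2 = nn.BatchNorm2d(in_ch), nn.BatchNorm2d(out_ch)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        x = self.act(self.bn1(self.dw(x)))   # per-channel spatial filtering
        return self.act(self.bn2(self.pw(x)))  # cheap cross-channel mixing

class TinyBackbone(nn.Module):
    """Five stages; each stage halves the spatial resolution."""
    def __init__(self, widths=(4, 8, 16, 32, 32)):   # cf. Table I, column C1
        super().__init__()
        chs = [3] + list(widths)
        self.stages = nn.ModuleList(
            DepthwiseSeparableConv(chs[i], chs[i + 1], stride=2) for i in range(5))

    def forward(self, x):
        feats = []
        for stage in self.stages:
            x = stage(x)
            feats.append(x)
        return feats            # stage-1 ... stage-5 feature maps

feats = TinyBackbone()(torch.randn(1, 3, 224, 224))
```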
### IV-B Part Detection
Mask-RCNN is limited to the tasks of instance detection and segmentation.
However, we have an additional requirement to localize a specific part of an
object, in our case the ferromagnetic regions. Therefore, we extend the
previously developed CNN infrastructure to accommodate this functionality. To
achieve this, features from the ROI-Align layer are passed through a stack of
two convolution layers, similar to the instance segmentation branch (Fig. 3).
The features corresponding to an instance are then multiplied elementwise with
the segmentation mask of the same instance obtained from the instance
segmentation branch. Mask-RCNN follows an object centric approach for instance
segmentation and performs binary classification using the binary cross entropy
loss; we follow the same approach to learn the binary part mask. For a more
detailed explanation, please refer to [12].
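A minimal sketch of this branch (ours; module names, channel counts and the training interface are assumptions) is given below: the two-conv stack produces part logits, which are gated elementwise by the instance mask and trained with per-pixel binary cross entropy.

```python
# Hedged sketch (ours) of the part segmentation branch described above.
import torch
import torch.nn as nn

class PartSegHead(nn.Module):
    def __init__(self, in_ch=32, mid_ch=32):
        super().__init__()
        self.convs = nn.Sequential(
            nn.Conv2d(in_ch, mid_ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(mid_ch, 1, 3, padding=1))
        self.criterion = nn.BCEWithLogitsLoss()

    def forward(self, roi_feats, instance_mask, part_gt=None):
        # roi_feats: (N, C, 14, 14) from ROI Align; instance_mask: (N, 1, 14, 14)
        logits = self.convs(roi_feats) * instance_mask  # zero out non-instance pixels
        if part_gt is not None:                         # training mode
            return logits, self.criterion(logits, part_gt)
        return logits
```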
### IV-C Target and Conditional Tracking and Visual Servoing
Once the instances of the desired class are detected and segmented, the
instance requiring minimal translation control effort is selected for the
grasping operation. A binary mask $M_{t-1}$ corresponding to the selected
target is sent to the tracker. The overall tracking process is conditioned on
$M_{t-1}$, where the non-zero pixels of $M_{t-1}$ guide the tracker on “where
to look”. For this reason, we term the target mask the support mask. The task
of the tracker can thus be defined as predicting the support mask image
$M_{t}$ at the current time step, given the current and previous RGB frames
$I_{t}$ and $I_{t-1}$ and the support mask $M_{t-1}$.
We further extend the perception system developed so far to be used as a
lightweight tracker (Fig. 3). In order to realize the tracking process, we
design a separate AlexNet style CNN ($C_{2}$), similar to $C_{1}$, to extract
spatial feature embeddings of the support mask image. The backbone $C_{2}$ has
a negligible number of weights as compared to the backbone $C_{1}$ for RGB
images. We have designed the architecture such that both the tracker and the
detector share a common CNN backbone $C_{1}$ for the high dimensional
embedding of RGB images. This is done in order to prevent the computational
resources (limited GPU memory) from being exhausted.
Next, the feature embeddings of the RGB images and the support mask are fused
by performing a concatenation operation on the embeddings of stage-$3$,
stage-$4$ and stage-$5$ of both $C_{1}$ and $C_{2}$. This step essentially
represents the conditioning of the tracking process on $M_{t-1}$. Later, these
fused representations are aggregated by an FPN in order to predict a highly
detailed support mask for the next time step. Despite the very small size of
the CNN architecture, the FPN module is responsible [17] for the highly
detailed support mask. Higher resolution embeddings are avoided due to their
overly large memory requirements.
To begin the tracking process, first an instance in an image $I_{t-1}$ is
selected and a corresponding binary mask $M_{t-1}$ is obtained. Later,
$I_{t},I_{t-1}$ and $M_{t-1}$ are fed to the respective CNNs $C_{1},C_{2}$ of
the tracker. A binary mask $M_{t}$ is predicted by the tracker which depicts
the location of the target object at time instant $t$. This newly estimated
location of the target instance is then sent to the control system, which
performs real time visual servoing in order to execute a pick operation.
Further, it is important to note that while tracking, the detection modules
following the FPN block of the detector (Fig. 3) are halted to save
computation. Detection is generally performed on a single image, while the
tracking process requires two images to be fed to $C_{1}$. As mentioned
previously, the detector and tracker share a common backbone, therefore
$C_{1}$ must be provided with two images regardless of the task. Hence,
$I_{t-1}$ and $I_{t}$ are essentially copies of each other during detection,
while they differ during the tracking process.
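The sketch below (ours; the mini backbones, channel widths taken from Table I, and the simple top-down fusion are assumptions standing in for the actual FPN) captures the data flow of the mask-conditioned tracker: stage-$3$ to stage-$5$ embeddings of $I_{t}$, $I_{t-1}$ (through $C_{1}$) and $M_{t-1}$ (through $C_{2}$) are concatenated per stage and aggregated top-down to predict a coarse $M_{t}$.

```python
# Simplified, runnable sketch (ours) of the mask-conditioned tracker.
import torch
import torch.nn as nn
import torch.nn.functional as F

def mini_backbone(in_ch, widths):
    chs = [in_ch] + list(widths)
    return nn.ModuleList(
        nn.Sequential(nn.Conv2d(chs[i], chs[i + 1], 3, stride=2, padding=1),
                      nn.ReLU(inplace=True))
        for i in range(len(widths)))

def run(backbone, x):
    feats = []
    for stage in backbone:
        x = stage(x)
        feats.append(x)
    return feats

class ConditionalTracker(nn.Module):
    def __init__(self):
        super().__init__()
        self.c1 = mini_backbone(3, (4, 8, 16, 32, 32))   # RGB, cf. Table I (C1)
        self.c2 = mini_backbone(1, (2, 4, 4, 8, 16))     # support mask (C2)
        # per-stage fusion of C1 (twice: I_t, I_{t-1}) and C2 embeddings
        self.lateral = nn.ModuleList(
            nn.Conv2d(2 * r + m, 12, 1) for r, m in [(16, 4), (32, 8), (32, 16)])
        self.head = nn.Conv2d(12, 1, 3, padding=1)

    def forward(self, img_t, img_tm1, mask_tm1):
        f_t, f_tm1 = run(self.c1, img_t)[2:], run(self.c1, img_tm1)[2:]
        f_m = run(self.c2, mask_tm1)[2:]       # conditioning on M_{t-1}
        out = None                             # top-down aggregation, coarsest first
        for lat, a, b, m in zip(self.lateral[::-1], f_t[::-1], f_tm1[::-1], f_m[::-1]):
            x = lat(torch.cat([a, b, m], dim=1))
            out = x if out is None else x + F.interpolate(out, scale_factor=2)
        return torch.sigmoid(self.head(out))   # coarse M_t; upsample as needed

m_t = ConditionalTracker()(torch.randn(1, 3, 224, 224),
                           torch.randn(1, 3, 224, 224),
                           torch.randn(1, 1, 224, 224))
```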
### IV-D Learning Based Feedback for Robust Grasping
In order to execute a robust grasping operation, continuous feedback of the
grasp state, i.e. “gripper contact with object”, “object attached to gripper”,
is required. The feedback device typically depends on the nature of the
gripper. For example, finger grippers generally use force sensors to sense the
grasp state. In our case, EMs are used, which require dedicated circuitry to
obtain feedback such as whether the EMs are in contact with any ferromagnetic
material. This poses additional design constraints and the system gets
extremely complicated.
To simplify the design, we translate the problem of obtaining grasp state
feedback into an image classification problem. To achieve this, we take
advantage of the fact that the foam assembly of our gripper gets compressed at
the moment of contact with the object. To realize the approach, we develop a
five stage CNN $C_{3}$ (Fig. 3) where stage-$5$ is followed by two
classification heads. One of them classifies an image into $G_{1}$ or $G_{2}$
whereas the other classifies it into $G_{3}$ or $G_{4}$.
The four classes $G_{1},G_{2},G_{3},G_{4}$ represent the states “Foam
Compressed”, “Foam Uncompressed”, “Object Attached” and “Object Not-Attached”
respectively. The input image is captured from the downward facing Intel
RealSense D$435$i mounted on the gripper. Due to the fixed rigid body
transformation between the gripper and the camera, the gripper remains visible
at a constant place in the image. Therefore, the input to the classifier is a
cropped image. The cropping is done such that the foam, the electromagnets and
the ferromagnetic regions (in case an object is attached) remain image
centered. The cropped image is resized to $64\times 64$ before being fed to
the classifier. Data collection and training details are provided in Sec. VI.
At the time of performing a grasp operation, the UAV descends at very low
speed with the help of visual servoing until the grasp state classifier
predicts $G_{1}$ (“Foam Compressed”), also referred to as touch down. Once
touch down is detected, the UAV starts ascending slowly while the grasp state
classifier continuously monitors the gripper. Once the classifier confirms a
successful grasp, denoted by $G_{3}$, the UAV continues to ascend, transports
and places the brick at the desired location.
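A compact sketch of $C_{3}$ (ours; only the stage widths from Table I are taken from the paper, the pooling and head layout are assumptions) is shown below: a five stage CNN on the $64\times 64$ crop with two binary heads.

```python
# Minimal sketch (ours) of the grasp state classifier C3 with two heads.
import torch
import torch.nn as nn

class GraspStateClassifier(nn.Module):
    def __init__(self, widths=(2, 4, 8, 8, 16)):     # cf. Table I, column C3
        super().__init__()
        chs = [3] + list(widths)
        self.features = nn.Sequential(*[
            nn.Sequential(nn.Conv2d(chs[i], chs[i + 1], 3, stride=2, padding=1),
                          nn.ReLU(inplace=True)) for i in range(5)])
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.foam_head = nn.Linear(widths[-1], 2)     # G1 vs G2
        self.attach_head = nn.Linear(widths[-1], 2)   # G3 vs G4

    def forward(self, crop):                          # crop: (N, 3, 64, 64)
        z = self.pool(self.features(crop)).flatten(1)
        return self.foam_head(z), self.attach_head(z)

foam_logits, attach_logits = GraspStateClassifier()(torch.randn(1, 3, 64, 64))
```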
## V System Integration
Figure 4: Simplified state machine of the overall system. (Diagram summary:
pop a brick of class “C” from the list; navigate and hover over the brick
pool; detect instances of class “C”; select and lock a target and switch the
electromagnets on; descend with visual tracking until the foam is compressed;
ascend until altitude $>5m$ and check whether the object is attached, retrying
up to $3$ times; then move to the brick assembling marker, adjust over the
dropping location, descend until relative altitude $<0.3m$, switch the
electromagnets off and ascend. Detector, tracker, visual SLAM with IMU fusion
based state estimation, and grasp state feedback drive the transitions.)
### V-A Sensor Fusion and State Estimation
In order to navigate and perform various tasks accurately in a workspace, the
UAV must be aware of its $6$D pose, i.e. $\{x,y,z,r,p,y\}$, at every instant
of time. To address this issue in GPS-denied environments, we develop a
solution which can provide real-time state-estimation despite the limited
computational resources onboard the UAV.
The solution is based on the very popular real time ORB-SLAM$2$ [2] algorithm.
ORB-SLAM$2$ can work with stereo as well as RGB-D sensors. We prefer stereo
based SLAM over RGB-D SLAM for three reasons. First, RGB-D sensors are quite
costly as compared to RGB cameras and most of them do not work outdoors.
Second, RGB-D sensors have several points with missing depth information.
Third, RGB-D cameras have a limited depth range. On the other hand, in stereo
based SLAM, the depth of even far points can be computed provided adequate
baseline separation between the cameras. However, the accuracy of stereo based
SLAM comes at a cost related to the time consumed in keypoint detection,
stereo matching and the bundle adjustment procedure. The standard
implementation of ORB-SLAM2 can process images of resolution $320\times 240$
at $25$ FPS on a Core $i7$ CPU. When it is deployed on the Jetson TX$2$ along
with other processes, the speed drops to $5$ FPS.
With the limited power budget, it is not possible to place another system
alongside the TX$2$ board. Therefore, to solve this issue, we take advantage
of our high speed networking infrastructure. To execute the SLAM algorithm,
the captured stereo images are sent to a base station. The SLAM algorithm is
executed on the received images to update the $6$D UAV state and the $3$D map.
As soon as the $6$D pose from SLAM is obtained, it is fused with the IMU using
an Extended Kalman Filter in order to obtain a high frequency state estimate.
### V-B Path Planning and Collision Avoidance
We use the RRTConnect* algorithm for real time planning and the Flexible
Collision Library (FCL) for collision avoidance. In order to use them, we
modify the open source project MoveIt!, which was originally developed for
robotic manipulators and already has support for RRTConnect* and FCL. However,
MoveIt! has several bugs related to the planning of $6$D joints (termed a
“Floating Joint” in the context of MoveIt!). For our use, we resolve these
issues and configure MoveIt! for the UAV.
### V-C Control
As mentioned previously, the DJI SDK APIs do not expose the internal
controller details. They also do not allow external state-estimation to be
sent to the internal controllers. Therefore, we tune four outer loop PIDs, one
for each of the vertical velocity, roll angle, pitch angle and yaw velocity
respectively. Based on the state-estimation, the tuned PIDs perform position
control of the UAV by producing commands for the internal hidden controllers.
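A minimal sketch of one such outer loop (ours; the gains and saturation limits below are placeholders, not the tuned values) illustrates how a position error is turned into a saturated command for the hidden internal controllers.

```python
# Hedged sketch (ours) of an outer-loop PID producing saturated commands.
class PID:
    def __init__(self, kp, ki, kd, limit):
        self.kp, self.ki, self.kd, self.limit = kp, ki, kd, limit
        self.integral, self.prev_err = 0.0, 0.0

    def step(self, err, dt):
        self.integral += err * dt
        deriv = (err - self.prev_err) / dt
        self.prev_err = err
        out = self.kp * err + self.ki * self.integral + self.kd * deriv
        return max(-self.limit, min(self.limit, out))   # saturate the command

# One PID per controlled quantity; gains/limits are illustrative only.
vz_pid    = PID(1.0, 0.05, 0.20, limit=1.0)   # vertical velocity (m/s)
roll_pid  = PID(0.8, 0.02, 0.10, limit=0.3)   # roll angle (rad)
pitch_pid = PID(0.8, 0.02, 0.10, limit=0.3)   # pitch angle (rad)
yawr_pid  = PID(0.5, 0.00, 0.05, limit=0.5)   # yaw rate (rad/s)
```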
Fig. 4 shows the simplified state machine of our UAV based manipulation
system. For the sake of brevity, the very obvious steps have been dropped from
the state machine. It can be noticed that the state machine can be completely
explained by the visual perception, state-estimation, planning and control
algorithms discussed previously.
## VI Experiments
Figure 5: Qualitative results for (a) tracking (sequences Seq. $1$-$4$, frames
from $t_{0}$ to $T$, $t_{0}<T$), and (b) instance detection, segmentation and
part detection (RGB image, brick detection, magnetic region).
In order to justify the validity and usefulness of our work, we provide an
experimental analysis of the timing performance and accuracy of the detector,
tracker and grasp state classifier in the context of real-time autonomous
robotic applications. Performance evaluation of the visual perception system
on public benchmarks for object detection such as MS-COCO and PASCAL-VOC is
out of the scope of this paper.
### VI-A Dataset
#### VI-A1 Detection
The data collection process is based on our work [17], which won $3$rd prize
in the Amazon Robotics Challenge $2017$. Following [17], we collect a very
small dataset of about $100$ images which contains multiple instances of all
four categories. We split the dataset into training and testing sets in the
ratio $7:3$. We perform both box regression and segmentation for the
instances, while only mask segmentation is performed for the ferromagnetic
regions. To achieve this, for each instance in an image two masks are
generated manually, one for the instance itself and another for its
ferromagnetic region. These masks serve as groundtruth for the instance
segmentation and part segmentation pipelines. The ground truth for box
regression is extracted automatically based on the convex hull of the mask
pixels belonging to the instance. The instance class-ID is defined at the time
of mask generation. In order to multiply the dataset size, run time occlusion
aware scene synthesis is performed similar to [17], as sketched below. The
synthetic scene synthesis technique allows us to annotate only a few images
and proves to be a powerful tool for data multiplication.
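The sketch below (ours, in the spirit of [17]; the random placement policy is an assumption) shows the core idea of occlusion-aware scene synthesis: masked cutouts are pasted onto a background, later pastes occlude earlier ones, and the groundtruth masks are updated accordingly.

```python
# Simplified occlusion-aware scene synthesis (ours, inspired by [17]).
import numpy as np

def synthesize_scene(background, cutouts, masks, rng=np.random):
    """background: (H, W, 3); cutouts[i]: (h, w, 3); masks[i]: (h, w) bool."""
    scene = background.copy()
    H, W = scene.shape[:2]
    gt_masks = []
    for img, m in zip(cutouts, masks):
        h, w = m.shape                       # assumes cutouts fit the background
        y, x = rng.randint(0, H - h + 1), rng.randint(0, W - w + 1)
        placed = np.zeros((H, W), dtype=bool)
        placed[y:y + h, x:x + w] = m
        scene[placed] = img[m]               # paste the cutout pixels
        gt_masks = [g & ~placed for g in gt_masks]   # occlude earlier objects
        gt_masks.append(placed)
    return scene, gt_masks
```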
#### VI-A2 Tracking
We collect $10$ indoor and $10$ outdoor video sequences at $\sim 30$ FPS with
instance sizes varying from smaller to larger. We downsample the video
sequences to $\sim 2$ FPS and annotate each frame for instance detection,
classification, segmentation and part segmentation. The overall dataset for
tracking roughly consists of $200$ images. The $100$ indoor images are used
for training and the $100$ outdoor images are kept for testing. This is done
in order to examine the generalization of the tracker. Synthetic scene
generation [17] plays an important role in this process.
#### VI-A3 Grasp State Classification
In order to train the grasp state classifier, we collect four kinds of images
($50$ each):
1. The foam in the compressed state with an object attached.
2. The foam in the compressed state without an object attached.
3. The foam in the uncompressed state with an object attached.
4. The foam in the uncompressed state with no object.
The sets of images $\{1,2\}$ and $\{3,4\}$ represent the states $G_{1}$ and
$G_{2}$, while the sets $\{1,3\}$ and $\{2,4\}$ represent $G_{3}$ and $G_{4}$.
All the collected images are cropped and resized as described in Sec. IV-D. In
practice, the actual cropping region is kept slightly larger than discussed
above. This is done in order to provide contextual detail for better
classification performance. The dataset is split $2:3$ into training and
testing, i.e. $40$ training and $60$ test images are available for each
category. While training, we again utilize synthetic scene synthesis to
account for dynamic outdoor scenes. For more details, please refer to [17].
### VI-B Training Policy
We refer to the baseline architecture as Arch1. We perform training and
testing on three variants: Arch1, Arch2 and Arch3. Arch2 and Arch3 contain
$25\%$ and $50\%$ more parameters than Arch1, respectively, in each layer of
all CNN backbones. The kernel sizes for the various layers are provided in
Table I.
Since detection and tracking are two different tasks, separate training would
normally be required. In order to avoid that, instances and parts are also
annotated along with the masks in the training dataset of the tracker. In
other words, the detector can now be trained on the tracker training data. To
balance the training process, images are chosen randomly with a probability of
$0.5$ from the detector and the tracker training datasets, as sketched below.
Through this technique, the unified system experiences a glimpse of temporal
data (due to the presence of video sequences) as well as ordinary single image
object detection.
Further, we report mean-intersection-over-union (mIoU) scores for box
detection, instance segmentation, and part segmentation; mIoU is a well-known
metric for reporting box regression and segmentation performance (sketched
below). Since the tracker essentially performs segmentation, the same metric
is reported for the tracker. Due to the very high inter-class and low
intra-class variance, we did not encounter any misclassification of the
bricks; we have therefore dropped the box classification accuracy from the
results.
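For reference, the IoU computations behind the reported scores can be sketched as follows; mIoU is the mean of these values over all evaluated instances.

```python
import numpy as np

def mask_iou(pred, gt):
    """IoU of two boolean masks; used for segmentation, part segmentation
    and tracking scores."""
    union = np.logical_or(pred, gt).sum()
    return np.logical_and(pred, gt).sum() / union if union > 0 else 0.0

def box_iou(a, b):
    """IoU of two boxes (x0, y0, x1, y1); used for box detection scores."""
    ix0, iy0 = max(a[0], b[0]), max(a[1], b[1])
    ix1, iy1 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix1 - ix0) * max(0.0, iy1 - iy0)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union > 0 else 0.0
```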
### VI-C Training Hyperparameters
We use a base learning rate of $0.01$, a $step$ learning rate policy, and an
$SGD$ optimizer with Nesterov momentum $0.9$ for pretraining the backbone on
the mini-ImageNet dataset. A base learning rate of $0.001$, a $step$ learning
rate policy, and an $ADAM$ optimizer with $\beta_{1}=0.9$ and
$\beta_{2}=0.99$ are used for finetuning on our dataset.
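A PyTorch sketch of the two configurations; the step size and decay factor of the $step$ policy and the stand-in networks are assumptions, as the text fixes only the base learning rates, the policy type, the momentum and the $\beta$ parameters.

```python
import torch
import torch.nn as nn

backbone = nn.Sequential(nn.Conv2d(3, 8, 3), nn.ReLU())  # stand-in backbone
model = nn.Sequential(backbone, nn.Conv2d(8, 4, 1))      # stand-in full model

# Pretraining on mini-ImageNet: SGD with Nesterov momentum, step LR policy.
opt_pre = torch.optim.SGD(backbone.parameters(), lr=0.01,
                          momentum=0.9, nesterov=True)
sched_pre = torch.optim.lr_scheduler.StepLR(opt_pre, step_size=30, gamma=0.1)

# Fine-tuning on our dataset: ADAM, base LR 0.001, beta1 = 0.9, beta2 = 0.99.
opt_ft = torch.optim.Adam(model.parameters(), lr=0.001, betas=(0.9, 0.99))
sched_ft = torch.optim.lr_scheduler.StepLR(opt_ft, step_size=10, gamma=0.1)
```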
Table I: Kernel sizes. ‘$*$’ equals the number of input channels. Layer | $C_{1}$ | $C_{2}$ | $C_{3}$
---|---|---|---
Stage-$1$ | $4\times 3\times 3\times 3$ | $2\times 1\times 3\times 3$ | $2\times 3\times 3\times 3$
Stage-$2$ | $8\times 4\times 3\times 3$ | $4\times 2\times 3\times 3$ | $4\times 2\times 3\times 3$
Stage-$3$ | $16\times 8\times 3\times 3$ | $4\times 4\times 3\times 3$ | $8\times 4\times 3\times 3$
Stage-$4$ | $32\times 16\times 3\times 3$ | $8\times 4\times 3\times 3$ | $8\times 8\times 3\times 3$
Stage-$5$ | $32\times 32\times 3\times 3$ | $16\times 8\times 3\times 3$ | $16\times 8\times 3\times 3$
Others | $12\times*\times 3\times 3$ | $4\times*\times 3\times 3$ | $8\times*\times 3\times 3$
Table II: Effect of Synthetic Scenes and Comprehensive Data Augmentation for Arch1, FP$16$
colour | scale | mirror | blur | rotate | synthetic scenes | Box Detection mIoU | Segmentation mIoU | Part Segmentation mIoU | Tracker mIoU
---|---|---|---|---|---|---|---|---|---
✗ | ✗ | ✗ | ✗ | ✗ | ✗ | $23.7$ | $31.8$ | $18.8$ | $15.3$
✓ | | | | | | $25.3$ | $32.1$ | $21.3$ | $17.7$
✓ | ✓ | | | | | $30.1$ | $38.6$ | $29.1$ | $23.1$
✓ | ✓ | ✓ | | | | $32.3$ | $40.2$ | $31.9$ | $25.5$
✓ | ✓ | ✓ | ✓ | | | $32.5$ | $41.4$ | $33.8$ | $24.1$
✓ | ✓ | ✓ | ✓ | ✓ | | $37.2$ | $49.8$ | $37.7$ | $28.4$
✓ | ✓ | ✓ | ✓ | ✓ | ✓ | $80.3$ | $79.2$ | $73.1$ | $83.4$
### VI-D Ablation Study of Perception System
Table-III depicts the performance of the various perception components along
with the timing performance. For each performance metric, we perform an
ablation study of all three architectures Arch1, Arch2, and Arch3. It can be
noticed that Arch2 shows a minor improvement over Arch1; however, Arch3 does
not show significant further improvements despite a $50\%$ increment in the
number of learnable parameters. In our view, this happens because the
backbone architecture of all three variants remains the same apart from the
representation capacity. Another reason is that the objects under
consideration have uniform colours, so the intra-class variance is very low.
The table also shows the various performance metrics for both half-precision
(FP$16$) and single-precision (FP$32$) computations on the GPU. A
single-precision floating point number consists of $32$ bits whereas a
half-precision one consists of $16$ bits; there also exists double precision,
FP$64$, with $64$ bits. As the number of bits of a floating point number
increases, finer details can be represented. However, the required
computation time increases non-linearly and drastically. Therefore, in
practice, FP$32$ is used on desktop-grade GPUs. However, due to real-time
constraints and power considerations, NVIDIA has recently developed
half-precision high-performance libraries especially for deep learning
algorithms. We take advantage of half precision in our application without
sacrificing accuracy, as sketched below.
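A minimal PyTorch sketch of FP$16$ inference on a CUDA device (the one-layer model is a stand-in for the backbone); casting weights and inputs to half precision roughly halves the runtimes in our tables at negligible accuracy cost.

```python
import torch

model = torch.nn.Conv2d(3, 8, 3).cuda().eval()   # stand-in for the backbone
x = torch.randn(1, 3, 224, 224, device="cuda")

with torch.no_grad():
    y32 = model(x)                               # FP32 inference

model_fp16 = model.half()                        # cast weights to FP16
with torch.no_grad():
    y16 = model_fp16(x.half())                   # cast inputs as well

print((y32 - y16.float()).abs().max())           # deviation is tiny
```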
Table III: Performance Analysis of Perception System Network Architecture | Time (ms) | Box mIoU | Seg mIoU | Part Seg mIoU | Tracker mIoU
---|---|---|---|---|---
Arch1 (FP$16$) | $35$ | $80.3$ | $79.2$ | $73.1$ | $83.4$
Arch1 (FP$32$) | $69$ | $82.2$ | $80.0$ | $72.9$ | $84.8$
Arch2 (FP$16$) | $41$ | $81.2$ | $79.7$ | $73.6$ | $84.1$
Arch2 (FP$32$) | $73$ | $83.5$ | $81.1$ | $74.0$ | $85.6$
Arch3 (FP$16$) | $52$ | $85.3$ | $80.6$ | $73.5$ | $84.4$
Arch3 (FP$32$) | $97$ | $86.7$ | $81.9$ | $74.1$ | $86.4$
### VI-E Synthetic Scenes and Comprehensive Data Augmentation
Table-II shows the effect of synthetic scenes and of the other augmentations
performed in [17]. It can be noticed that synthetic scenes alone contribute
more to the improved performance than all the remaining augmentations
combined (highlighted in blue). A decrease in tracker mIoU is observed when
blur is included in the augmentation process.
### VI-F Unified vs Distributed Perception System
The unification of various tasks is a primary contribution of this paper.
Hence, we compare the timing performance of the unified system against three
separate networks for detection, part segmentation and tracking (Table-IV).
These isolated networks have exactly the same configurations for each layer
in the CNNs and for the other required components such as the RPN, FPN etc.
It is clearly evident from the table that our unified pipeline is far better
in terms of timing performance, because all three modules run simultaneously.
Fig. 5b and Fig. 5a show qualitative results of the unified perception
system. The results in Table-IV are obtained with Arch1 trained using all
kinds of augmentations.
### VI-G Grasp State Classification
The performance of the grasp state feedback CNN is shown in Table-V. The
definitions of Arch1, Arch2, and Arch3 remain the same, except that the
parameters of Arch1 are those of the Grasp State Classifier $C_{3}$ (Fig. 3).
The $100\%$ accuracy is expected given the very simple classification task.
Also, it is important to note that, for both Arch1 and Arch2, FP$16$ is
roughly twice as fast as FP$32$, whereas this does not hold as cleanly for
Arch3 (highlighted in red). This happens because of the non-linear and
saturating speed-up curves of the GPU device.
Table IV: Unified vs Distributed Perception System, Arch1, FP$16$
Network Architecture | Detection (ms) | Part Seg (ms) | Tracking (ms) | Total (ms)
---|---|---|---|---
Unified (Arch1) | $--$ | $--$ | $--$ | $35$
Distributed | $29$ | $30$ | $33$ | $92$
Table V: Performance Analysis of Grasp State Classification Network Architecture | Time (ms) | Foam State Accuracy($\%$) | Object Attached State Accuracy($\%$)
---|---|---|---
Arch1 (FP$16$) | $15$ | $100.0$ | $100.0$
Arch1 (FP$32$) | $32$ | $100.0$ | $100.0$
Arch2 (FP$16$) | $20$ | $100.0$ | $100.0$
Arch2 (FP$32$) | $41$ | $100.0$ | $100.0$
Arch3 (FP$16$) | $25$ | $100.0$ | $100.0$
Arch3 (FP$32$) | $56$ | $100.0$ | $100.0$
## VII Conclusion
This work introduces an end-to-end framework for UAV-based manipulation tasks
in GPS-denied environments. A deep learning based real-time unified visual
perception system is developed which combines the primary tasks of instance
detection, segmentation, part segmentation and object tracking. The
perception system can run at $30$ FPS on a Jetson TX$2$ board. A complete
electromagnet-based gripper design is proposed. A novel approach to grasp
state feedback is also developed in order to avoid external electronics. To
the best of our knowledge, a unified vision system combining these four tasks
together with grasp state feedback has not previously been developed in the
UAV literature. Apart from that, a remote computing based approach for 6DoF
state-estimation is introduced. Certain improvements to the open-source
framework MoveIt! to accommodate UAVs have also been made.
## References
* [1] A. D. Team, “Ardupilot,” URL: www.ardupilot.org, accessed 2016.
* [2] R. Mur-Artal and J. D. Tardós, “Orb-slam2: An open-source slam system for monocular, stereo, and rgb-d cameras,” IEEE Transactions on Robotics, vol. 33, no. 5, pp. 1255–1262, 2017.
* [3] A. Krizhevsky, I. Sutskever, and G. E. Hinton, “Imagenet classification with deep convolutional neural networks,” in Advances in neural information processing systems, pp. 1097–1105, 2012.
* [4] K. Simonyan and A. Zisserman, “Very deep convolutional networks for large-scale image recognition,” CoRR, vol. abs/1409.1556, 2014.
* [5] K. He, X. Zhang, S. Ren, and J. Sun, “Deep residual learning for image recognition,” in Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 770–778, 2016.
* [6] S. Ren, K. He, R. Girshick, and J. Sun, “Faster r-cnn: Towards real-time object detection with region proposal networks,” in Advances in neural information processing systems, pp. 91–99, 2015.
* [7] R. Girshick, “Fast r-cnn,” in Proceedings of the IEEE international conference on computer vision, pp. 1440–1448, 2015.
* [8] C. Zhu, Y. Zheng, K. Luu, and M. Savvides, “Cms-rcnn: contextual multi-scale region-based cnn for unconstrained face detection,” in Deep Learning for Biometrics, pp. 57–79, Springer, 2017.
* [9] J. Long, E. Shelhamer, and T. Darrell, “Fully convolutional networks for semantic segmentation,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 3431–3440, 2015.
* [10] H. Zhao, J. Shi, X. Qi, X. Wang, and J. Jia, “Pyramid scene parsing network,” in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2017.
* [11] G. Lin, A. Milan, C. Shen, and I. D. Reid, “Refinenet: Multi-path refinement networks for high-resolution semantic segmentation.,” in Cvpr, vol. 1, p. 5, 2017.
* [12] K. He, G. Gkioxari, P. Dollár, and R. Girshick, “Mask r-cnn,” CVPR, 2017.
* [13] T.-Y. Lin, P. Dollár, R. Girshick, K. He, B. Hariharan, and S. Belongie, “Feature pyramid networks for object detection,” CVPR, 2017.
* [14] R. Mur-Artal, J. M. M. Montiel, and J. D. Tardos, “Orb-slam: a versatile and accurate monocular slam system,” IEEE transactions on robotics, vol. 31, no. 5, pp. 1147–1163, 2015.
* [15] F. Liu, Z. Liu, and Q. Wu, “Monocular visual odometry using unsupervised deep learning,” in 2019 Chinese Automation Congress (CAC), pp. 3274–3279, IEEE, 2019.
* [16] N. Wojke, A. Bewley, and D. Paulus, “Simple online and realtime tracking with a deep association metric,” in 2017 IEEE international conference on image processing (ICIP), pp. 3645–3649, IEEE, 2017.
* [17] A. Kumar and L. Behera, “Semi supervised deep quick instance detection and segmentation,” in 2019 International Conference on Robotics and Automation (ICRA), pp. 8325–8331, IEEE, 2019.
# A new method to generate superoscillating functions and supershifts
Y. Aharonov 111Schmid College of Science and Technology, Chapman University,
Orange 92866, CA, USA<EMAIL_ADDRESS><EMAIL_ADDRESS>, F. Colombo
222Politecnico di Milano, Dipartimento di Matematica, Via E. Bonardi, 9 20133
Milano, Italy<EMAIL_ADDRESS><EMAIL_ADDRESS>, I.
Sabadini†, T. Shushi 333Department of Business Administration, Guilford
Glazer Faculty of Business and Management, Ben-Gurion University of the
Negev, Beer-Sheva, Israel<EMAIL_ADDRESS>D.C. Struppa 444The Donald Bren
Distinguished Presidential Chair in Mathematics, Chapman University, Orange,
USA<EMAIL_ADDRESS>, J. Tollaksen∗
###### Abstract
Superoscillations are band-limited functions that can oscillate faster than
their fastest Fourier component. These functions (or sequences) appear in weak
values in quantum mechanics and in many fields of science and technology such
as optics, signal processing and antenna theory. In this paper we introduce a
new method to generate superoscillatory functions that allows us to construct
explicitly a very large class of superoscillatory functions.
AMS Classification: 26A09, 41A60.
Key words: Superoscillating functions, New method to generate
superoscillations, Supershift.
## 1 Introduction
Superoscillating functions are band-limited functions that can oscillate
faster than their fastest Fourier component. Physical phenomena associated
with superoscillatory functions have been known for a long time, and in more
recent years there has been wide interest from both the physical and the
mathematical point of view. These functions (or sequences) appeared in weak
values in quantum mechanics, see [2, 11, 22]; in antenna theory this
phenomenon was formulated in [34]. The literature on superoscillations is
large, and without claiming completeness we mention the papers [13],
[18]-[21], [28]-[30] and [32]. This class of functions has also been
investigated from the mathematical point of view, as function theory, but a
large part of the results is associated with the study of the evolution of
superoscillations under quantum field equations, with particular attention to
the Schrödinger equation. We give a quite complete list of papers [1],
[3]-[9], [14]-[16], [23]-[27], [33] where one can find an up-to-date panorama
of this field. For an overview of the techniques developed in recent years to
study the evolution of superoscillations and their function theory, we refer
the reader to the introductory papers [10, 12, 14] and [31]. Finally, we
mention the Roadmap on superoscillations, see [17], where the most recent
advances in superoscillations and their applications to technology are
explained by the leading experts in this field.
A fundamental problem is to determine how large the class of superoscillatory
functions is. The prototypical superoscillating function arising from weak
values is given by
$F_{n}(x,a)=\Big{(}\cos\Big{(}\frac{x}{n}\Big{)}+ia\sin\Big{(}\frac{x}{n}\Big{)}\Big{)}^{n}=\sum_{j=0}^{n}C_{j}(n,a)e^{i(1-2j/n)x},\
\ x\in\mathbb{R},$ (1)
where $a>1$ and the coefficients $C_{j}(n,a)$ are given by
$C_{j}(n,a)={n\choose
j}\left(\frac{1+a}{2}\right)^{n-j}\left(\frac{1-a}{2}\right)^{j}.$ (2)
If we fix $x\in\mathbb{R}$ and we let $n$ go to infinity, we obtain that
$\lim_{n\to\infty}F_{n}(x,a)=e^{iax}.$
Clearly, the name superoscillations comes from the fact that in the Fourier
representation (1) the frequencies $1-2j/n$ are bounded by $1$, while the
limit function $e^{iax}$ has a frequency $a$ that can be arbitrarily larger
than $1$. A precise definition of superoscillating functions is as follows.
We call generalized Fourier sequence a sequence of the form
$f_{n}(x):=\sum_{j=0}^{n}X_{j}(n,a)e^{ih_{j}(n)x},\ \ j=0,...,n,\ \ \
n\in\mathbb{N},$ (3)
where $a\in\mathbb{R}$, and $X_{j}(n,a)$ and $h_{j}(n)$ are complex and real
valued functions of the variables $n,a$ and $n$, respectively. A generalized
Fourier sequence of the form (3) is said to be a superoscillating sequence if
$\sup_{j,n}|h_{j}(n)|\leq 1$ and there exists a compact subset of
$\mathbb{R}$, which will be called a superoscillation set, on which $f_{n}(x)$
converges uniformly to $e^{ig(a)x}$, where $g$ is a continuous real valued
function such that $|g(a)|>1$.
The classical Fourier expansion is obviously not a superoscillating sequence
since its frequencies are not, in general, bounded. Using infinite order
differential operators applied to the functions (1), we can define a class of
superoscillatory functions of the form
$Y_{n}(x,a)=\sum_{j=0}^{n}C_{j}(n,a)e^{ig(1-2j/n)x},$ (4)
where $C_{j}(n,a)$ are the coefficients in (2), $g$ are given entire
functions, monotone increasing in $a$, and $x\in\mathbb{R}$. We have shown
that
$\lim_{n\to\infty}Y_{n}(x,a)=e^{ig(a)x}$
under suitable conditions on $g$ and the simplest, but important, example is
$Y_{n}(x,a)=\sum_{j=0}^{n}C_{j}(n,a)e^{i(1-2j/n)^{m}x},\ \ {\rm for\ fixed}\ \
m\in\mathbb{N}.$
From the above considerations we deduce that, taking different functions $g$,
there exists a large class of superoscillating functions.
In this paper we further enlarge the class of superoscillating functions by
enlarging both the class of coefficients $C_{j}(n,a)$ and the sequence of
frequencies $1-2j/n$ bounded by $1$. A large class of superoscillating
functions can be determined by solving the following problem.
###### Problem 1.1.
Let $h_{j}(n)$ be a given set of points in $[-1,1]$, $j=0,1,...,n$, for
$n\in\mathbb{N}$ and let $a\in\mathbb{R}$ be such that $|a|>1$. Determine the
coefficients $X_{j}(n)$ of the sequence
$f_{n}(x)=\sum_{j=0}^{n}X_{j}(n)e^{ih_{j}(n)x},\ \ \ x\in\mathbb{R}$
in such a way that
$f_{n}^{(p)}(0)=(ia)^{p},\ \ \ {\rm for}\ \ \ p=0,1,...,n.$
###### Remark 1.2.
The conditions $f_{n}^{(p)}(0)=(ia)^{p}$ mean that the functions $x\mapsto
e^{iax}$ and $f_{n}(x)$ have the same derivatives at the origin, for
$p=0,1,...,n$, so they have the same Taylor polynomial of order $n$.
Under the condition that the points $h_{j}(n)$ for $j=0,...,n$, (often denoted
by $h_{j}$) are distinct we obtain an explicit formula for the coefficients
$X_{j}(n,a)$ given by
$X_{j}(n,a)=\prod_{k=0,\
k\not=j}^{n}\Big{(}\frac{h_{k}(n)-a}{h_{k}(n)-h_{j}(n)}\Big{)},$
so the superoscillating sequence $f_{n}(x)$, that solves Problem 1.1, takes
the explicit form
$f_{n}(x)=\sum_{j=0}^{n}\prod_{k=0,\
k\not=j}^{n}\Big{(}\frac{h_{k}(n)-a}{h_{k}(n)-h_{j}(n)}\Big{)}\
e^{ih_{j}(n)x},\ \ \ x\in\mathbb{R},$
as shown in Theorem 2.2. Observe that, by construction, this function is
band-limited and converges to $e^{iax}$ with arbitrary $|a|>1$, so it is
superoscillating.
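Observe also that the coefficients (11) are exactly the Lagrange basis polynomials for the nodes $h_{0},\dots,h_{n}$, evaluated at the extrapolation point $\lambda=a$. The following minimal numerical sketch (with the nodes of Example 3.1 below and illustrative values of $a$, $x$) confirms the convergence to $e^{iax}$; since the coefficients grow rapidly and alternate in sign, double precision eventually limits the attainable accuracy for large $n$.

```python
import numpy as np

def coefficients(h, a):
    """X_j(n, a) from the product formula (11): the j-th Lagrange basis
    polynomial for the nodes h_0, ..., h_n evaluated at lambda = a."""
    X = np.empty(len(h))
    for j in range(len(h)):
        k = np.arange(len(h)) != j
        X[j] = np.prod((h[k] - a) / (h[k] - h[j]))
    return X

a, x = 1.5, 1.0
for n in (4, 8, 12):
    h = 1.0 - 2.0 * np.arange(n + 1) / n        # nodes of Example 3.1
    X = coefficients(h, a)
    f_n = np.sum(X * np.exp(1j * h * x))        # band-limited partial sum
    print(n, abs(f_n - np.exp(1j * a * x)))     # error decays in n
```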
Observe that different sequences $X_{j}(n)$ can be explicitly computed when we
fix the points $h_{j}(n)$. See, for example, the case of the sequence
$h_{j}(n)=1-2j/n^{p}\ {\rm for}\ j=0,...,n,\ \ \ n\in\mathbb{N}\ \ {\rm and}\
{\rm for\ fixed}\ \ p\in\mathbb{N},$
in Section 3.
We consider now the frequencies $h_{j}(n)=(1-2j/n)^{m}$, for fixed
$m\in\mathbb{N}$, to explain some facts.
(I) If we consider the sequence (4), with coefficients $C_{j}(n,a)$ given by
(2), we obtain
$\lim_{n\to\infty}\sum_{j=0}^{n}C_{j}(n,a)e^{i(1-2j/n)^{m}x}=e^{ia^{m}x},\ \ \
\ {\rm for\ fixed}\ \ m\in\mathbb{N}.$
Note that, in this case, we could have used the frequencies
$h_{j}(n)=(1-2j/n)$ and the coefficients $\tilde{C}_{j}(n,a):=C_{j}(n,a^{m})$
to get as limit function $e^{ia^{m}x}$. Thus the same limit function
$e^{ia^{m}x}$ can be obtained by tuning the frequencies and the coefficients.
(II) By solving Problem 1.1 with the frequencies $h_{j}(n)=(1-2j/n)^{m}$, we
can determine the coefficients $X_{j}(n)=X_{j}(n,a)$ such that we obtain as
limit function $e^{iax}$, namely
$\lim_{n\to\infty}\sum_{j=0}^{n}{X}_{j}(n,a)e^{i(1-2j/n)^{m}x}=e^{iax},\ \ \
{\rm for\ fixed}\ \ m\in\mathbb{N}.$ (5)
Changing to suitable coefficients ${\tilde{X}}_{j}(n,a)$ we can get, as limit
function, $e^{ia^{m}x}$.
(III) The coefficients $X_{j}$ and $C_{j}$ in procedures (I) and (II) are
different from each other, because the two methods to generate
superoscillations are different, as explained in Section 3.
In Section 4 we will also discuss how to generalize this method to obtain
analogous results for the supershift property of functions, a mathematical
concept that generalizes the notion of superoscillating function.
## 2 A new class of superoscillating functions
In this section we show the main procedure to determine the coefficients
$X_{j}(n)$ and so to construct explicitly the superoscillating functions
solving Problem 1.1.
###### Theorem 2.1 (Existence and uniqueness of the solution of Problem 1.1).
Let $h_{j}(n)$ be a given set of points in $[-1,1]$, $j=0,1,...,n$ for
$n\in\mathbb{N}$ and let $a\in\mathbb{R}$ be such that $|a|>1$. If
$h_{j}(n)\not=h_{i}(n)$, for every $i\not=j$, then there exists a unique
solution $X_{j}(n)$ of the linear system
$f_{n}^{(p)}(0)=(ia)^{p},\ \ \ {\rm for}\ \ \ p=0,1,...,n,$
in Problem 1.1.
###### Proof.
For the sake of simplicity, we denote $h_{j}(n)$ by $h_{j}$. Observe that the
derivatives of order $p$ of $f_{n}(x)$ are
$f^{(p)}_{n}(x)=\sum_{j=0}^{n}X_{j}(n)(ih_{j})^{p}e^{ih_{j}x},\ \ \
x\in\mathbb{R},$
so if we require that these derivatives are equal to the derivatives of order
$p$ for $p=0,1,...,n$ of the function $x\mapsto e^{iax}$ at the origin we
obtain the linear system
$\sum_{j=0}^{n}X_{j}(n)(ih_{j})^{p}=(ia)^{p},\ \ \ \ p=0,1,...,n$ (6)
from which we deduce
$\sum_{j=0}^{n}X_{j}\ (h_{j})^{p}=a^{p},\ \ \ \ p=0,1,...,n,$ (7)
where we have written $X_{j}$ instead of $X_{j}(n)$. Now we write explicitly
the linear system (7) of $(n+1)$ equations and $(n+1)$ unknowns
$(X_{0},...,X_{n})$
$\begin{split}X_{0}+X_{1}+\ldots+X_{n}&=1\\\
X_{0}h_{0}+X_{1}h_{1}+\ldots+X_{n}h_{n}&=a\\\ &\ldots\\\
X_{0}h_{0}^{n}+X_{1}h_{1}^{n}+\ldots+X_{n}h_{n}^{n}&=a^{n}\end{split}$
and, in matrix form, it becomes
$H(n)X=B(a)$ (8)
where $H$ is the $(n+1)\times(n+1)$ matrix
$H(n)=\begin{pmatrix}1&1&\ldots&1\\\ h_{0}&h_{1}&\ldots&h_{n}\\\
\ldots&\ldots&\ldots&\ldots\\\
h_{0}^{n}&h_{1}^{n}&\ldots&h_{n}^{n}\end{pmatrix}$ (9)
and
$X=\begin{pmatrix}X_{0}\\\ X_{1}\\\ \ldots\\\ X_{n}\end{pmatrix}\ \ \ {\rm
and}\ \ \ B(a)=\begin{pmatrix}1\\\ a\\\ \ldots\\\ a^{n}\end{pmatrix}.$ (10)
Observe that the determinant of $H$ is the Vandermonde determinant, so it is
given by
$\det(H(n))=\prod_{0\leq i<j\leq n}(h_{j}(n)-h_{i}(n)).$
Thus, if $h_{j}(n)\not=h_{i}(n)$ for every $i\not=j$, there exists a unique
solution of the system. ∎
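As a numerical cross-check of the proof (with illustrative parameters), building $H(n)$ and $B(a)$ explicitly and applying a generic linear solver reproduces the closed form (11) derived below; note, however, that Vandermonde matrices are notoriously ill-conditioned, so for large $n$ the product formula is numerically preferable.

```python
import numpy as np

n, a = 8, 1.5
h = 1.0 - 2.0 * np.arange(n + 1) / n            # distinct nodes in [-1, 1]

H = np.vander(h, increasing=True).T             # entry (p, j) equals h_j^p
B = a ** np.arange(n + 1)
X_solve = np.linalg.solve(H, B)                 # solves system (8)

X_prod = np.array([np.prod([(h[k] - a) / (h[k] - h[j])
                            for k in range(n + 1) if k != j])
                   for j in range(n + 1)])      # closed form (11)
print(np.max(np.abs(X_solve - X_prod)))         # agreement up to rounding
```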
###### Theorem 2.2 (Explicit solution of Problem 1.1).
Let $h_{j}(n)$ be a given set of points in $[-1,1]$, $j=0,1,...,n$ for
$n\in\mathbb{N}$ and let $a\in\mathbb{R}$ be such that $|a|>1$. If
$h_{j}(n)\not=h_{i}(n)$, for every $i\not=j$, the unique solution of system
(8) is given by
$X_{j}(n,a)=\prod_{k=0,\
k\not=j}^{n}\Big{(}\frac{h_{k}(n)-a}{h_{k}(n)-h_{j}(n)}\Big{)}.$ (11)
As a consequence, the superoscillating function takes the form
$f_{n}(x)=\sum_{j=0}^{n}\prod_{k=0,\
k\not=j}^{n}\Big{(}\frac{h_{k}(n)-a}{h_{k}(n)-h_{j}(n)}\Big{)}\ e^{ih_{j}x},\
\ \ x\in\mathbb{R}.$
###### Proof.
In Theorem 2.1 we proved that, if $h_{j}\not=h_{i}$ for every $i\not=j$, there
exists a unique solution of the system (8). The solution is given by
$X_{j}(n,a)=\frac{\det(H_{j}(n,a))}{\det(H(n))}$ (12)
for
$H_{j}(n,a)=\begin{pmatrix}1&1&\ldots&1&\ldots&1\\\
h_{0}&h_{1}&\ldots&a&\ldots&h_{n}\\\ \ldots&\ldots&\ldots&\ldots&\ldots\\\
h_{0}^{n}&h_{1}^{n}&\ldots&a^{n}&\ldots&h_{n}^{n}\end{pmatrix}$ (13)
where the $j^{th}$-column contains $a$ and its powers. The explicit form of
the determinant of the matrix $H$ is given by:
$\det(H(n))=(h_{1}-h_{0})\cdot(h_{2}-h_{0})(h_{2}-h_{1})\cdot(h_{3}-h_{0})(h_{3}-h_{1})(h_{3}-h_{2})$
$\cdot(h_{4}-h_{0})(h_{4}-h_{1})(h_{4}-h_{2})(h_{4}-h_{3})\cdot\ldots\cdot(h_{n}-h_{0})(h_{n}-h_{1})(h_{n}-h_{2})(h_{n}-h_{3})\cdots(h_{n}-h_{n-1}).$
The matrix $H_{j}(n,a)$ is still of Vandermonde type and its determinant can
be computed similarly. So we have that the solution
$(X_{0}(n,a),\ldots,X_{n}(n,a))$ is such that
$\begin{split}X_{0}(n,a)&=\frac{(h_{1}-a)\cdot(h_{2}-a)\cdot(h_{3}-a)\cdot(h_{4}-a)\cdot\ldots\cdot(h_{n}-a)}{(h_{1}-h_{0})\cdot(h_{2}-h_{0})\cdot(h_{3}-h_{0})\cdot(h_{4}-h_{0})\cdot\ldots\cdot(h_{n}-h_{0})}\\\
&=\frac{\prod_{k=0,\ k\not=0}^{n}(h_{k}-a)}{\prod_{k=0,\
k\not=0}^{n}(h_{k}-h_{0})},\end{split}$
$\begin{split}X_{1}(n,a)&=\frac{(a-h_{0})\cdot(h_{2}-a)\cdot(h_{3}-a)\cdot(h_{4}-a)\cdot\ldots\cdot(h_{n}-a)}{(h_{1}-h_{0})\cdot(h_{2}-h_{1})\cdot(h_{3}-h_{1})\cdot(h_{4}-h_{1})\cdot\ldots\cdot(h_{n}-h_{1})}\\\
&=\frac{\prod_{k=0,\ k\not=1}^{n}(h_{k}-a)}{\prod_{k=0,\
k\not=1}^{n}(h_{k}-h_{1})},\end{split}$
and so on, up to
$\begin{split}X_{n}(n,a)&=\frac{(a-h_{0})(a-h_{1})(a-h_{2})(a-h_{3})\cdots(a-h_{n-1})}{(h_{n}-h_{0})(h_{n}-h_{1})(h_{n}-h_{2})(h_{n}-h_{3})\cdots(h_{n}-h_{n-1})}\\\
&=\frac{\prod_{k=0,\ k\not=n}^{n}(h_{k}-a)}{\prod_{k=0,\
k\not=n}^{n}(h_{k}-h_{n})}.\end{split}$
This proves the statement. ∎
## 3 The methods to generate superoscillations and examples
Below we compare the superoscillating functions obtained by solving Problem
1.1 and the superoscillating functions obtained via the sequence $F_{n}(x,a)$
and infinite order differential operators. For different methods see also
[30].
(I) Observe that the limit
$\lim_{n\to\infty}\Big{(}\cos\Big{(}\frac{x}{n}\Big{)}+ia\sin\Big{(}\frac{x}{n}\Big{)}\Big{)}^{n}=e^{iax}$
is a direct consequence of
$\lim_{n\to\infty}\Big{(}1+ia\frac{x}{n}\Big{)}^{n}=e^{iax},$
while the construction method to generate superoscillations in Theorem 2.2
has a different nature, because we require that the $n+1$ unknowns
$X_{j}(n)$ of the linear system (6) are determined in such a way that
$\sum_{j=0}^{n}X_{j}(n)(ih_{j})^{p}=(ia)^{p},\ \ \ \ p=0,1,...,n$ (14)
so the derivatives
$f^{(p)}_{n}(x)=\sum_{j=0}^{n}X_{j}(n)(ih_{j})^{p}e^{ih_{j}(n)x},\ \ \
x\in\mathbb{R},$
at $x=0$, are equal to the derivatives of the exponential function $e^{iax}$
at the origin. This means that the sequence of functions
$f_{n}(x)=\sum_{j=0}^{n}X_{j}(n)e^{ih_{j}(n)x},\ \ \ x\in\mathbb{R}$
has $n$ derivatives equal to the derivatives of exponential function $e^{iax}$
at the origin so the limit
$\lim_{n\to\infty}f_{n}(x)=e^{iax}$
follows by construction of the $f_{n}(x)$.
(II) In the definition of the superoscillating function (1) the derivatives
are given by
$F_{n}^{(p)}(x,a)=\sum_{j=0}^{n}C_{j}(n,a)\Big{(}i(1-2j/n)\Big{)}^{p}e^{i(1-2j/n)x},\
\ x\in\mathbb{R}$
and it is just in the limit that we get the derivatives of order $p$ of the
exponential function $e^{iax}$ at the origin, namely we have
$\lim_{n\to\infty}\sum_{j=0}^{n}C_{j}(n,a)\Big{(}i(1-2j/n)\Big{)}^{p}=(ia)^{p},\
\ \ p\in\mathbb{N}.$
(III) With the new procedure proposed in this paper we impose the conditions
$f^{(p)}_{n}(0)=(ia)^{p},\ \ \ p=0,1,2,...,n$
(where we have genuine equalities, not in the limit) and we link $n$ with the
derivatives of $f_{n}(x)$ in order to determine the coefficients $X_{j}(n)$ in
(3), so we have that the Taylor polynomials of the two functions $f_{n}(x)$
and $e^{iax}$ are the same up to order $n$, i.e.,
$e^{iax}=1+iax+\frac{(iax)^{2}}{2!}+...+\frac{(iax)^{n}}{n!}+R_{n}(x),$
so we get
$\begin{split}f_{n}(x)&=\sum_{j=0}^{n}X_{j}(n)e^{ih_{j}x}\\\
&=f_{n}(0)+f_{n}^{(1)}(0)x+f_{n}^{(2)}(0)\frac{x^{2}}{2!}+...+f_{n}^{(n)}(0)\frac{x^{n}}{n!}+\tilde{R}_{n}(x),\
\ \ x\in\mathbb{R}\\\
&=1+iax+(ia)^{2}\frac{x^{2}}{2!}+...+(ia)^{n}\frac{x^{n}}{n!}+\tilde{R}_{n}(x),\
\ \ x\in\mathbb{R},\end{split}$
where ${R}_{n}(x)$ and $\tilde{R}_{n}(x)$ are the errors.
It is now easy to generate a very large class of superoscillatory functions.
We write a few examples to further clarify the generality of our new
construction of superoscillating sequences: given the sequence $h_{j}(n)$, we
determine the coefficients accordingly in
$f_{n}(x)=\sum_{j=0}^{n}\prod_{k=0,\
k\not=j}^{n}\Big{(}\frac{h_{k}(n)-a}{h_{k}(n)-h_{j}(n)}\Big{)}\
e^{ih_{j}(n)x},\ \ \ x\in\mathbb{R}.$
###### Example 3.1.
Let $n\in\mathbb{N}$ and set
$h_{j}(n)=1-\frac{2}{n}j$
where $j=0,...,n$. We have
$h_{k}(n)-a=1-\frac{2}{n}k-a$
and
$h_{k}(n)-h_{j}(n)=1-\frac{2}{n}k-\Big{(}1-\frac{2}{n}j\Big{)}=\frac{2}{n}\Big{(}j-k\Big{)}.$
Thus, we obtain
$f_{n}(x)=\sum_{j=0}^{n}\prod_{k=0,\
k\not=j}^{n}\frac{n}{2}\Big{(}\frac{1-\frac{2}{n}k-a}{j-k}\Big{)}\
e^{i(1-\frac{2}{n}j)x},\ \ \ x\in\mathbb{R}.$
###### Example 3.2.
Let $n\in\mathbb{N}$, and set
$h_{j}(n)=1-\frac{2}{n^{p}}j$
where $j=0,...,n$, for a fixed $p\in\mathbb{N}$. We have
$h_{k}(n)-a=1-\frac{2}{n^{p}}k-a,$
and
$h_{k}(n)-h_{j}(n)=1-\frac{2}{n^{p}}k-\Big{(}1-\frac{2}{n^{p}}j\Big{)}=\frac{2}{n^{p}}\Big{(}j-k\Big{)}.$
So, we obtain:
$f_{n}(x)=\sum_{j=0}^{n}\prod_{k=0,\
k\not=j}^{n}\frac{n^{p}}{2}\Big{(}\frac{1-\frac{2}{n^{p}}k-a}{j-k}\Big{)}\
e^{i(1-\frac{2}{n^{p}}j)x},\ \ \ x\in\mathbb{R}.$
###### Example 3.3.
Let $n\in\mathbb{N}$, and set
$h_{j}(n)=1-\Big{(}\frac{2j}{n}\Big{)}^{p}$
where $j=0,...,n,$ for a fixed $p\in\mathbb{N}$. We have
$h_{k}(n)-a=1-\Big{(}\frac{2k}{n}\Big{)}^{p}-a,$
and
$h_{k}(n)-h_{j}(n)=1-\Big{(}\frac{2k}{n}\Big{)}^{p}-\Big{(}1-\Big{(}\frac{2j}{n}\Big{)}^{p}\Big{)}=\frac{2^{p}}{n^{p}}\Big{(}j^{p}-k^{p}\Big{)}.$
So, we obtain
$f_{n}(x)=\sum_{j=0}^{n}\prod_{k=0,\
k\not=j}^{n}\frac{n^{p}}{2^{p}}\Big{(}\frac{1-(2k/n)^{p}-a}{j^{p}-k^{p}}\Big{)}\
e^{i(1-(2j/n)^{p})x},\ \ \ x\in\mathbb{R}.$
## 4 A new class of supershifts
The procedure to define superoscillatory functions can be extended to the
supershift. We recall that the supershift property of a function extends the
notion of superoscillation, and it has turned out to be the crucial concept
behind the study of the evolution of superoscillatory functions as initial
data of the Schrödinger equation or of any other field equation. We recall
the definition before stating our result.
###### Definition 4.1 (Supershift).
Let $\mathcal{I}\subseteq\mathbb{R}$ be an interval with
$[-1,1]\subset\mathcal{I}$ and let
$\varphi:\,\mathcal{I}\times\mathbb{R}\to\mathbb{R}$, be a continuous function
on $\mathcal{I}$. We set
$\varphi_{\lambda}(x):=\varphi(\lambda,x),\ \ \lambda\in\mathcal{I},\ \
x\in\mathbb{R}$
and we consider a sequence of points $(\lambda_{j,n})$ such that
$(\lambda_{j,n})\in[-1,1]\ \ {\rm for}\ \ j=0,...,n\ \ {\rm and}\ \
n=0,\ldots,+\infty.$
We define the functions
$\psi_{n}(x)=\sum_{j=0}^{n}c_{j}(n)\varphi_{\lambda_{j,n}}(x),$ (15)
where $(c_{j}(n))$ is a sequence of complex numbers for $j=0,...,n$ and
$n=0,\ldots,+\infty$. If $\lim_{n\to\infty}\psi_{n}(x)=\varphi_{a}(x)$ for
some $a\in\mathcal{I}$ with $|a|>1$, we say that the function
$\lambda\to\varphi_{\lambda}(x)$, for $x$ fixed, admits a supershift in
$\lambda$.
###### Remark 4.2.
We observe that the definition of supershift of a function given above is not
the most general one, but it is useful to explain our new procedure for the
supershift case. In the following, we will take the interval $\mathcal{I}$, in
the definition of the supershift, to be equal to $\mathbb{R}$.
###### Remark 4.3.
Let us stress that the term supershift comes from the fact that the interval
$\mathcal{I}$ can be arbitrarily large (it can also be $\mathbb{R}$) and so
also the constant $a$ can be arbitrarily far away from the interval $[-1,1]$
where the function $\varphi_{\lambda_{j,n}}(\cdot)$ is computed, see (15).
###### Remark 4.4.
Superoscillations are a particular case of supershift. In fact, for the
sequence $(F_{n})$ in (1), we have $\lambda_{j,n}=1-2j/n$,
$\varphi_{\lambda_{j,n}}(x)=e^{i(1-2j/n)x}$, and the $c_{j}(n)$ are the
coefficients $C_{j}(n,a)$ defined in (2).
Problem 1.1, for the supershift case, is formulated as follows.
###### Problem 4.5.
Let $h_{j}(n)$ be a given set of points in $[-1,1]$, $j=0,1,...,n$, for
$n\in\mathbb{N}$ and let $a\in\mathbb{R}$ be such that $|a|>1$. Suppose that
for every $x\in\mathbb{R}$ the function $\lambda\mapsto G(\lambda x)$ is
entire in $\lambda$. Consider the function
$f_{n}(x)=\sum_{j=0}^{n}Y_{j}(n)G(h_{j}(n)x),\ \ \ x\in\mathbb{R}$
where $\lambda\mapsto G(\lambda x)$ depends on the parameter $x\in\mathbb{R}$.
Determine the coefficients $Y_{j}(n)$ in such a way that
$f_{n}^{(p)}(0)=a^{p}\,G^{(p)}(0),\ \ \ {\rm for}\ \ \ p=0,1,...,n.$ (16)
The solution of the above problem is obtained in the following theorem.
###### Theorem 4.6.
Let $h_{j}(n)$ be a given set of points in $[-1,1]$, $j=0,1,...,n$ for
$n\in\mathbb{N}$ and let $a\in\mathbb{R}$ be such that $|a|>1$. If
$h_{j}(n)\not=h_{i}(n)$ for every $i\not=j$ and $G^{(p)}(0)\not=0$ for all
$p=0,1,...,n$, then there exists a unique solution $Y_{j}(n,a)$ of the linear
system (16) and it is given by
$Y_{j}(n,a)=\prod_{k=0,\
k\not=j}^{n}\Big{(}\frac{h_{k}(n)-a}{h_{k}(n)-h_{j}(n)}\Big{)},$
so that
$f_{n}(x)=\sum_{j=0}^{n}\prod_{k=0,\
k\not=j}^{n}\Big{(}\frac{h_{k}(n)-a}{h_{k}(n)-h_{j}(n)}\Big{)}G(h_{j}(n)x),\ \
\ x\in\mathbb{R}$
and, by construction, it is
$\lim_{n\to\infty}f_{n}(x)=G(ax),\ \ x\in\mathbb{R}.$
###### Proof.
Observe that we have
$f_{n}^{(p)}(x)=\sum_{j=0}^{n}Y_{j}(n)(h_{j}(n))^{p}G^{(p)}(h_{j}(n)x),\ \ \
x\in\mathbb{R}$
where $G^{(p)}$ denotes, for $p=0,...,n$, the derivative of order $p$ of the
function $G$; the factors $(h_{j}(n))^{p}$ come from the chain rule applied
to $x\mapsto G(h_{j}(n)x)$. So we get the system
$f_{n}^{(p)}(0)=\sum_{j=0}^{n}Y_{j}(n)(h_{j}(n))^{p}G^{(p)}(0)=a^{p}G^{(p)}(0).$
Now, since we have assumed that $G^{(p)}(0)\not=0$ for all $p=0,1,...,n$, the
system becomes
$\sum_{j=0}^{n}Y_{j}(n)(h_{j}(n))^{p}=a^{p}$
and we can solve it as in Theorem 2.2 to get
$Y_{j}(n,a)=\prod_{k=0,\
k\not=j}^{n}\Big{(}\frac{h_{k}(n)-a}{h_{k}(n)-h_{j}(n)}\Big{)}.$
Finally, we get
$f_{n}(x)=\sum_{j=0}^{n}\prod_{k=0,\
k\not=j}^{n}\Big{(}\frac{h_{k}(n)-a}{h_{k}(n)-h_{j}(n)}\Big{)}G(h_{j}(n)x),\ \
\ x\in\mathbb{R}$
and by construction it is
$\lim_{n\to\infty}f_{n}(x)=G(ax),\ \ x\in\mathbb{R}.$
∎
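Since the coefficients $Y_{j}(n,a)$ solve the same moment system as in Theorem 2.2, the construction applies to any admissible $G$. A minimal numerical sketch with the illustrative choice $G=\exp$ (entire, with $G^{(p)}(0)=1\neq 0$ for every $p$): $f_{n}$ samples $G$ only at the points $h_{j}(n)x$ with $|h_{j}(n)|\leq 1$, yet it approaches $G(ax)$ with $|a|>1$.

```python
import numpy as np

def coefficients(h, a):
    """Y_j(n, a): same product formula as in Theorem 2.2."""
    return np.array([np.prod([(h[k] - a) / (h[k] - h[j])
                              for k in range(len(h)) if k != j])
                     for j in range(len(h))])

G = np.exp                          # entire, G^(p)(0) = 1 != 0 for every p
a, x = 1.5, 0.8
for n in (4, 8, 12):
    h = 1.0 - 2.0 * np.arange(n + 1) / n
    Y = coefficients(h, a)
    f_n = np.sum(Y * G(h * x))      # G sampled only on [-1, 1] * x
    print(n, abs(f_n - G(a * x)))   # yet f_n approaches G(a x)
```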
## References
* [1] D. Alpay, F. Colombo, I. Sabadini, D.C. Struppa, Aharonov-Berry superoscillations in the radial harmonic oscillator potential, Quantum Stud. Math. Found., 7 (2020), 269–283.
* [2] Y. Aharonov, D. Albert, L. Vaidman, How the result of a measurement of a component of the spin of a spin-1/2 particle can turn out to be 100, Phys. Rev. Lett., 60 (1988), 1351-1354.
* [3] Y. Aharonov, J. Behrndt, F. Colombo, P. Schlosser, Schrödinger evolution of superoscillations with $\delta$\- and $\delta^{\prime}$-potentials, Quantum Stud. Math. Found., 7 (2020), 293–305.
* [4] Y. Aharonov, J. Behrndt, F. Colombo, P. Schlosser, Green’s Function for the Schrödinger Equation with a Generalized Point Interaction and Stability of Superoscillations, J. Differential Equations, 277 (2021), 153–190.
* [5] Y. Aharonov, F. Colombo, I. Sabadini, D.C. Struppa, J. Tollaksen, Evolution of superoscillations in the Klein–Gordon field, Milan J. Math., 88 (2020), no. 1, 171–189.
* [6] Y. Aharonov, F. Colombo, I. Sabadini, D.C. Struppa, J. Tollaksen, How superoscillating tunneling waves can overcome the step potential, Ann. Physics, 414 (2020), 168088, 19 pp.
* [7] Y. Aharonov, F. Colombo, I. Sabadini, D.C. Struppa, J. Tollaksen, On the Cauchy problem for the Schrödinger equation with superoscillatory initial data, J. Math. Pures Appl., 99 (2013), 165–173.
* [8] Y. Aharonov, F. Colombo, I. Sabadini, D.C. Struppa, J. Tollaksen, Superoscillating sequences in several variables, J. Fourier Anal. Appl., 22 (2016), 751–767.
* [9] Y. Aharonov, F. Colombo, I. Sabadini, D.C. Struppa, J. Tollaksen, The mathematics of superoscillations, Mem. Amer. Math. Soc., 247 (2017), no. 1174, v+107 pp.
* [10] Y. Aharonov, F. Colombo, D.C. Struppa, J. Tollaksen, Schrödinger evolution of superoscillations under different potentials, Quantum Stud. Math. Found., 5 (2018), 485–504.
* [11] Y. Aharonov, D. Rohrlich, Quantum Paradoxes: Quantum Theory for the Perplexed, Wiley-VCH Verlag, Weinheim, 2005.
* [12] Y. Aharonov, I. Sabadini, J. Tollaksen, A. Yger, Classes of superoscillating functions, Quantum Stud. Math. Found., 5 (2018), 439–454.
* [13] Y. Aharonov, T. Shushi, A new class of superoscillatory functions based on a generalized polar coordinate system, Quantum Stud. Math. Found., 7 (2020), 307–313.
* [14] T. Aoki, F. Colombo, I. Sabadini, D. C. Struppa, Continuity of some operators arising in the theory of superoscillations, Quantum Stud. Math. Found., 5 (2018), 463–476.
* [15] T. Aoki, F. Colombo, I. Sabadini, D.C. Struppa, Continuity theorems for a class of convolution operators and applications to superoscillations, Ann. Mat. Pura Appl., 197 (2018), 1533–1545.
* [16] J. Behrndt, F. Colombo, P. Schlosser, Evolution of Aharonov–Berry superoscillations in Dirac $\delta$-potential, Quantum Stud. Math. Found., 6 (2019), 279–293.
* [17] M. Berry et al, Roadmap on superoscillations, 2019, Journal of Optics 21 053002.
* [18] M. V. Berry, Faster than Fourier, in Quantum Coherence and Reality; in celebration of the 60th Birthday of Yakir Aharonov ed. J.S.Anandan and J. L. Safko, World Scientific, Singapore, (1994), pp. 55-65.
* [19] M. Berry, Exact nonparaxial transmission of subwavelength detail using superoscillations, J. Phys. A 46, (2013), 205203.
* [20] M. V. Berry, Representing superoscillations and narrow Gaussians with elementary functions, Milan J. Math., 84 (2016), 217–230.
* [21] M. V. Berry, S. Popescu, Evolution of quantum superoscillations, and optical superresolution without evanescent waves, J. Phys. A, 39 (2006), 6965–6977.
* [22] M.V. Berry, P. Shukla, Pointer supershifts and superoscillations in weak measurements, J. Phys A, 45 (2012), 015301.
* [23] F. Colombo, J. Gantner, D.C. Struppa, Evolution by Schrödinger equation of Aharonov-Berry superoscillations in centrifugal potential, Proc. A., 475 (2019), no. 2225, 20180390, 17 pp.
* [24] F. Colombo, I. Sabadini, D.C. Struppa, A. Yger, Gauss sums, superoscillations and the Talbot carpet, Journal de Mathématiques Pures et Appliquées, https://doi.org/10.1016/j.matpur.2020.07.011.
* [25] F. Colombo, I. Sabadini, D.C. Struppa, A. Yger, Superoscillating sequences and hyperfunctions, Publ. Res. Inst. Math. Sci., 55 (2019), no. 4, 665–688.
* [26] F. Colombo, D.C. Struppa, A. Yger, Superoscillating sequences towards approximation in $S$ or $S^{\prime}$-type spaces and extrapolation, J. Fourier Anal. Appl., 25 (2019), no. 1, 242–266.
* [27] F. Colombo, G. Valente, Evolution of Superoscillations in the Dirac Field, Found. Phys., 50 (2020), 1356–1375.
* [28] P. J. S. G. Ferreira, A. Kempf, Unusual properties of superoscillating particles, J. Phys. A, 37 (2004), 12067-76.
* [29] P. J. S. G. Ferreira, A. Kempf, Superoscillations: faster than the Nyquist rate, IEEE Trans. Signal Processing, 54 (2006), 3732–3740.
* [30] P. J. S. G. Ferreira, A. Kempf, M. J. C. S. Reis, Construction of Aharonov-Berry’s superoscillations, J. Phys. A 40 (2007), 5141–5147.
* [31] A. Kempf, Four aspects of superoscillations, Quantum Stud. Math. Found., 5 (2018), 477–484.
* [32] J. Lindberg, Mathematical concepts of optical superresolution, Journal of Optics, 14 (2012), 083001\.
* [33] B. Šoda, A. Kempf, Efficient method to create superoscillations with generic target behavior, Quantum Stud. Math. Found., 7 (2020), no. 3, 347–353.
* [34] G. Toraldo di Francia, Super-Gain Antennas and Optical Resolving Power, Nuovo Cimento Suppl., 9 (1952), 426–438.
# The Cauchy Problem for a non Strictly Hyperbolic $3\times 3$ System of
Conservation Laws Arising in Polymer Flooding
Graziano Guerra Department of Mathematics and its Applications, University of
Milano - Bicocca, Italy, ([email protected]). Wen Shen Mathematics
Department, Pennsylvania State University, USA, ([email protected]).
###### Abstract
We study the Cauchy problem of a $3\times 3$ system of conservation laws
modeling two–phase flow of polymer flooding in rough porous media with
possibly discontinuous permeability function. The system loses strict
hyperbolicity in some regions of the domain where the eigenvalues of different
families coincide, and BV estimates are not available in general. For a
suitable $2\times 2$ system, a singular change of variable introduced by
Temple [T82, IT86] could be effective to control the total variation
[WSCauchy]. An extension of this technique can be applied to a $3\times 3$
system only under strict hypotheses on the flux functions [CocliteRisebro]. In
this paper, through an adapted front tracking algorithm we prove the existence
of solutions for the Cauchy problem under mild assumptions on the flux
function, using a compensated compactness argument.
Keywords: Conservation Laws; Discontinuous Flux; Compensated Compactness;
Polymer Flooding; Wave Front Tracking; Degenerate Systems
AMS subject classifications: 35L65; 35L45; 35L80; 35L40; 35L60
## 1 Introduction
We consider a simple model for polymer flooding in two-phase flow in rough
media
$\begin{cases}\partial_{t}s+\partial_{x}f(s,c,k)&=~{}0,\\\
\partial_{t}[cs]+\partial_{x}[cf(s,c,k)]&=~{}0,\\\
\partial_{t}k&=~{}0,\end{cases}$ (1.1)
associated with the initial data
$\left(s,c,k\right)(0,x)=\left(\bar{s},\bar{c},\bar{k}\right)(x),\quad
x\in\mathbb{R}.$ (1.2)
Here, the unknown vector is $(s,c,k)$, where $s$ is the saturation of water
phase, $c$ is the fraction of polymer dissolved in water, and $k$ denotes the
permeability of the porous media. We see that $k$ does not change in time,
$k(t,x)=\bar{k}(x)$, and the initial data $\bar{k}(\cdot)$ might be
discontinuous.
We neglect both the adsorption of polymer in the porous media and the
gravitational force; with either effect included, the solution to the Riemann
problem becomes more complex. For such Riemann solvers, see [JohWin,
MR3663611] for the effect of the adsorption term, and [WSCauchy] for the
addition of the gravitational force term. In particular, when the adsorption
effect is included, the $c$ family described below would no longer be
linearly degenerate, while with the gravitational force term added, the $s$
waves described below could have negative speed. Either effect would disrupt
the carefully designed wave front tracking algorithm we use to prove the main
theorem.
The conserved quantities and their fluxes are given by, respectively
$\mathbf{G}\left(s,c,k\right)=\begin{pmatrix}s\\\ cs\\\
k\end{pmatrix},\qquad\mathbf{F}\left(s,c,k\right)=\begin{pmatrix}f\left(s,c,k\right)\\\
cf\left(s,c,k\right)\\\ 0\end{pmatrix}.$
Denoting the three families as the $s$, $c$ and $k$ families, we have the
following three eigenvalues, as functions of the variables
$\left(\sigma,\gamma,\kappa\right)$ in the $\left(s,c,k\right)$ space:
$\lambda_{s}=\partial_{\sigma}f\left(\sigma,\gamma,\kappa\right),\qquad\lambda_{c}=\frac{f\left(\sigma,\gamma,\kappa\right)}{\sigma},\qquad\lambda_{k}=0,$
and the three corresponding right eigenvectors (in the $\left(s,c,k\right)$
space):
$r_{s}=\begin{pmatrix}1\\\ 0\\\ 0\end{pmatrix},\qquad
r_{c}=\begin{pmatrix}-\partial_{\gamma}f\left(\sigma,\gamma,\kappa\right)\\\
\partial_{\sigma}f\left(\sigma,\gamma,\kappa\right)-\frac{f\left(\sigma,\gamma,\kappa\right)}{\sigma}\\\
0\end{pmatrix},\qquad
r_{k}=\begin{pmatrix}-\partial_{\kappa}f\left(\sigma,\gamma,\kappa\right)\\\
0\\\ \partial_{\sigma}f\left(\sigma,\gamma,\kappa\right)\end{pmatrix}.$
A straightforward computation shows that both the $c$ and $k$ families are
linearly degenerate. Furthermore, there exist regions in the domain where
$\lambda_{s}=\lambda_{c}$ or $\lambda_{s}=\lambda_{c}=\lambda_{k}$, in which
the system is parabolic degenerate.
The flux function $f\left(\sigma,\gamma,\kappa\right)$ has the following
properties. For any given $(\gamma,\kappa)$, the mapping $\sigma\mapsto
f\left(\sigma,\gamma,\kappa\right)$ is the well-known S-shaped Buckley-
Leverett function [BL] with a single inflection point, see Fig. 1. To be
specific, we have
$f(\sigma,\gamma,\kappa)\in[0,1],\qquad\partial_{\sigma}f(\sigma,\gamma,\kappa)\geq
0,\qquad\mbox{for all }(\sigma,\gamma,\kappa),$
and, for all $(\gamma,\kappa)$,
$\begin{split}&f(0,\gamma,\kappa)=0,\qquad\quad~{}f(1,\gamma,\kappa)=1,\\\
&\partial_{\sigma}f(0,\gamma,\kappa)=0,\qquad~{}\partial_{\sigma}f(1,\gamma,\kappa)=0,\\\
&\partial_{\sigma\sigma}f(0,\gamma,\kappa)>0,\qquad\partial_{\sigma\sigma}f(1,\gamma,\kappa)<0.\end{split}$
(1.3)
Remark that conditions (1.3) guarantee that the eigenvalues and the
eigenvectors written above are well defined (can be extended) when $\sigma=0$.
For each given $(\gamma,\kappa)$, there exists a unique
$\sigma^{*}\left(\gamma,\kappa\right)\in\left]0,1\right[$ such that
$\partial_{\sigma\sigma}f(\sigma^{*}\left(\gamma,\kappa\right),\gamma,\kappa)=0.$
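As an illustration of hypotheses (1.3), one may freeze $(\gamma,\kappa)$ and take the classical Buckley-Leverett flux $\sigma^{2}/(\sigma^{2}+m(1-\sigma)^{2})$; this is only a stand-in for the general $f$ of this paper, with the mobility ratio $m$ playing the role of the frozen parameters. The sketch below also locates the inflection point $\sigma^{*}$ by bisection on a finite-difference second derivative.

```python
import numpy as np

def f_bl(s, m=2.0):
    """Classical Buckley-Leverett flux: f(0)=0, f(1)=1, f'(0)=f'(1)=0,
    f''(0)>0, f''(1)<0, and a single inflection point, as in (1.3)."""
    return s**2 / (s**2 + m * (1.0 - s)**2)

def d2f(s, m=2.0, eps=1e-4):
    """Second derivative by central differences (illustrative accuracy)."""
    return (f_bl(s + eps, m) - 2.0 * f_bl(s, m) + f_bl(s - eps, m)) / eps**2

# Bisection for the unique sigma* where f'' changes sign from + to -.
lo, hi = 1e-3, 1.0 - 1e-3
for _ in range(50):
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if d2f(mid) > 0.0 else (lo, mid)
print("sigma* ~", 0.5 * (lo + hi))
```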
A detailed analysis of the wave properties for this system is carried out in
[WS3x3], with the following highlights:
* •
$k$ waves are the slowest with speed $0$. Both $f$ and $c$ are continuous
across any $k$ wave;
* •
$c$ waves travel with non-negative speed. Both $\frac{f}{s}$ and $k$ are
continuous across any $c$ wave;
* •
$s$ waves travel with positive speed. Both $c$ and $k$ are continuous across
any $s$ wave.
In [WS3x3], the global Riemann solver is constructed; here we give a brief
summary. Let $(s_{l},c_{l},k_{l})$ and $(s_{r},c_{r},k_{r})$ be the left and
right states of a Riemann problem, respectively. In general, the solution of
the Riemann problem consists of a $k$ wave, a $c$ wave and possibly some $s$
waves. They can be constructed as follows.
* •
Let $(s_{m},c_{l},k_{r})$ denote the right state of the $k$ wave. The value
$s_{m}$ is uniquely determined by the condition
$f(s_{m},c_{l},k_{r})=f(s_{l},c_{l},k_{l}).$
* •
For the remaining waves, we have $k\equiv k_{r}$ throughout. We then solve the
Riemann problem for the $2\times 2$ sub-system
$\partial_{t}s+\partial_{x}f(s,c,k_{r})=0,\qquad\partial_{t}(cs)+\partial_{x}(cf(s,c,k_{r}))=0$
(1.4)
with Riemann data $(s_{m},c_{l})$ and $(s_{r},c_{r})$ as the left and right
states. The solution consists of waves with non-negative speed.
We now give a precise definition of weak solution to the Cauchy problem
(1.1)–(1.2) and state the main theorem.
###### Definition 1.1.
The vector-valued function
$\left(s,c,k\right)\in\mathbf{L}^{\infty}\left([0,+\infty)\times\mathbb{R},[0,1]^{3}\right)$
is a solution to the Cauchy problem (1.1)–(1.2) if for any
$\phi\in\textbf{C}^{1}_{c}\left([0,+\infty)\times\mathbb{R},\mathbb{R}\right)$
the following equalities hold
$\displaystyle\int_{\Omega}\left[s\partial_{t}\phi+f(s,c,k)\partial_{x}\phi\right](t,x)\;dtdx+\int_{\mathbb{R}}\bar{s}(x)\phi\left(0,x\right)\;dx=0,$
$\displaystyle\int_{\Omega}\left[cs\partial_{t}\phi+cf(s,c,k)\partial_{x}\phi\right](t,x)\;dtdx+\int_{\mathbb{R}}\bar{c}\left(x\right)\bar{s}(x)\phi\left(0,x\right)\;dx=0,$
$\displaystyle k\left(t,x\right)=\bar{k}(x),\quad\forall(t,x)\in\Omega,$
where $\Omega=\left]0,+\infty\right[\times\mathbb{R}$.
###### Theorem 1.1.
If the initial data $\left(\bar{s},\bar{c},\bar{k}\right)$ satisfy
$\bar{s}\in\mathbf{L}^{\infty}\left(\mathbb{R},[0,1]\right),\qquad\bar{c}\in\mathbf{BV}\left(\mathbb{R},[0,1]\right),\qquad\bar{k}\in\mathbf{BV}\left(\mathbb{R},[0,1]\right),$
then there exists a solution to the Cauchy problem (1.1)–(1.2) in the sense of
Definition 1.1.
We emphasize the fact that $s=0$ is not excluded in our theorem since we do
not make use of Lagrangian coordinates which would have required $s>0$.
Indeed, under the hypotheses $s\geq s^{*}>0$, $k(t,x)=\text{const.}$, system
(1.1) is equivalent to its Lagrangian formulation [Wagner]:
$\begin{cases}\partial_{t}\left(\frac{1}{s}\right)-\partial_{y}\left(\frac{f\left(s,c,k\right)}{s}\right)&=0,\\\
\partial_{t}c&=0,\\\ k&=\text{const.},\end{cases}$ (1.5)
where $y$ is the Lagrangian coordinate satisfying $\partial_{x}y=s$,
$\partial_{t}y=-f(s,c,k)$. Therefore, under some additional hypotheses, the
result in [BGS], which holds for _scalar_ equations since it is based on the
maximum principle for Hamilton–Jacobi equation, could be used to prove the
existence of a unique vanishing viscosity solution to the first equation in
(1.5) and hence a (in some sense) unique solution to system (1.5). However,
since we consider the case where $s$ can become 0, the analysis in [BGS]
cannot be applied. Instead, we need to solve the original non triangular
$2\times 2$ system in Eulerian coordinates. Furthermore, since we consider
rough permeability function $k$, the corresponding system in the Lagrangian
coordinate is no longer triangular,
$\begin{cases}\partial_{t}\left(\frac{1}{s}\right)-\partial_{y}\left(\frac{f\left(s,c,k\right)}{s}\right)&=0,\\\
\partial_{t}c&=0,\\\
\partial_{t}\left(\frac{k}{s}\right)-\partial_{y}\left(\frac{kf\left(s,c,k\right)}{s}\right)&=0.\end{cases}$
Remark that the Lagrangian coordinates introduced in [BGS, Section 6] for
(1.1) make the system triangular, but still require $s\geq s^{*}>0$ and
moreover they mix time and space, therefore the Cauchy problem for (1.1) in
Eulerian coordinates is not equivalent to the Cauchy problem in the
coordinates introduced in [BGS, Section 6].
In this paper, the existence of solutions is proved by showing that the wave
front tracking approximate solutions are compact, via a compensated
compactness argument (see for instance [KarRasTad] for an application of
compensated compactness to a related bi-dimensional $2\times 2$ model).
The remainder of the paper is organized as follows. In Section 2 the wave
front tracking approximate solutions are constructed. In Section 3 we prove
the necessary entropy estimates. Finally, in Section 4 the compensated
compactness argument is carried out to prove Theorem 1.1.
## 2 Front Tracking Approximations
In this section we modify the algorithm constructed in [CocliteRisebro] and
[WSCauchy] to adapt it to system (1.1). We define the functions (see Figure
1):
$g\left(\sigma,\gamma,\kappa\right)=\frac{f\left(\sigma,\gamma,\kappa\right)}{\sigma},\qquad
P\left(\sigma,\gamma,\kappa\right)=\int_{0}^{\sigma}\left|\partial_{\xi}g\left(\xi,\gamma,\kappa\right)\right|d\xi.$
(2.6)
Since at $\sigma=0$ both $f$ and its derivative $\partial_{\sigma}f$ vanish,
we define $g(0,\gamma,\kappa)=0$. Hypotheses (1.3) imply that
$\partial_{\sigma}g\left(0,\gamma,\kappa\right)>0$ and that the function
$\sigma\mapsto g\left(\sigma,\gamma,\kappa\right)$ has a single maximum
point, located between the inflection point of $f$ and the point $\sigma=1$.
The function $P$ is continuous in its three variables and strictly
increasing, hence invertible, with respect to the variable $\sigma$.
Figure 1: Diagrams of the flux $\sigma\mapsto
f\left(\sigma,\gamma,\kappa\right)$ and of the function $\sigma\mapsto
g\left(\sigma,\gamma,\kappa\right)=f\left(\sigma,\gamma,\kappa\right)/\sigma$
for fixed values of $\gamma$ and $\kappa$.
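A numerical sketch of $g$ and of the singular variable $P$ from (2.6), using the stand-in Buckley-Leverett flux of the previous sketch with $(\gamma,\kappa)$ frozen; the trapezoidal accumulation of $|\partial_{\sigma}g|$ is just a choice of discretization.

```python
import numpy as np

def f_bl(s, m=2.0):
    return s**2 / (s**2 + m * (1.0 - s)**2)    # stand-in flux

sigma = np.linspace(0.0, 1.0, 2001)
g = np.zeros_like(sigma)
g[1:] = f_bl(sigma[1:]) / sigma[1:]            # g(0) = 0: f, f' vanish at 0

dg = np.gradient(g, sigma)                     # approximate dg/dsigma
P = np.concatenate(([0.0],
    np.cumsum(0.5 * (np.abs(dg[1:]) + np.abs(dg[:-1])) * np.diff(sigma))))
# P is continuous and increasing in sigma, hence invertible, as stated above.
```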
Fixing initial data $\left(\bar{s},\bar{c},\bar{k}\right)$ satisfying the
hypotheses of Theorem 1.1 and fixing an approximation parameter
$\varepsilon>0$, we can construct piecewise constant approximate initial data
$\left(\bar{s}^{\varepsilon},\bar{c}^{\varepsilon},\bar{k}^{\varepsilon}\right)$
with values in $\left[0,1\right]^{3}$ such that:
$\displaystyle\left\|\bar{k}^{\varepsilon}-\bar{k}\right\|_{\mathbf{L}^{\infty}}<\varepsilon,$
$\displaystyle\operatorname{Tot.Var.}\bar{k}^{\varepsilon}\leq\operatorname{Tot.Var.}\bar{k},$
$\displaystyle\left\|\bar{c}^{\varepsilon}-\bar{c}\right\|_{\mathbf{L}^{\infty}}<\varepsilon,$
$\displaystyle\operatorname{Tot.Var.}\bar{c}^{\varepsilon}\leq\operatorname{Tot.Var.}\bar{c},$
$\displaystyle\left\|\bar{s}^{\varepsilon}-\bar{s}\right\|_{\mathbf{L}^{1}\left(\left(-\frac{1}{\varepsilon},\frac{1}{\varepsilon}\right),\mathbb{R}\right)}\leq\varepsilon.$
(2.7)
Let $\bar{x}_{1},\ldots,\bar{x}_{N}$ be the set of points in which
$\bar{k}^{\varepsilon}$ has jumps such that
$\bar{k}^{\varepsilon}\left(x\right)=k_{0}\chi_{]-\infty,\bar{x}_{1}]}+\sum_{i=1}^{N-1}k_{i}\chi_{]\bar{x}_{i},\bar{x}_{i+1}]}(x)+k_{N}\chi_{]\bar{x}_{N},+\infty[},$
and let $\bar{y}_{1},\ldots,\bar{y}_{M}$ be the set of points in which
$\bar{c}^{\varepsilon}$ has jumps such that
$\bar{c}^{\varepsilon}\left(x\right)=c_{0}\chi_{]-\infty,\bar{y}_{1}]}+\sum_{j=1}^{M-1}c_{j}\chi_{]\bar{y}_{j},\bar{y}_{j+1}]}(x)+c_{M}\chi_{]\bar{y}_{M},+\infty[}.$
Without loss of generality, we can suppose that no $\bar{y}_{j}$ coincides
with any $\bar{x}_{i}$. Define the constant
$L=\left\lceil\frac{1}{\varepsilon}\sup_{\gamma,\kappa}\left\|f\left(\cdot,\gamma,\kappa\right)\right\|_{\mathbf{C}^{2}}\right\rceil\cdot\left(N+M\right),$
where $\lceil\alpha\rceil$ denotes the least integer greater than or equal to
the real number $\alpha$. In the following we denote by $\land$ the logical
operator “and”. We consider the following finite sets of possible values for
the function $g$:
$\displaystyle\mathcal{G}_{0}^{1}$ $\displaystyle=\left\\{g\mid
g=g\left(\bar{s}^{\varepsilon}(x),\bar{c}^{\varepsilon}(x),\bar{k}^{\varepsilon}(x)\right),\quad
x\in\mathbb{R}\right\\},$ $\displaystyle\mathcal{G}_{0}^{2}$
$\displaystyle=\left\\{g\mid g=g\left(\frac{\ell}{L},c_{j},k_{i}\right),\quad
i=0,\ldots,N,\;j=0,\ldots,M,\;\ell=0,\ldots,L\right\\},$
$\displaystyle\mathcal{G}_{0}^{3}$ $\displaystyle=\left\\{g\mid
g=\max_{0\leq\sigma\leq 1}g\left(\sigma,c_{j},k_{i}\right),\quad
i=0,\ldots,N,\;j=0,\ldots,M\right\\},$ $\displaystyle\mathcal{G}_{0}^{4}$
$\displaystyle=\Big{\\{}g\mid
g=g\left(\sigma,c_{j},k_{i}\right)=g\left(\sigma,c_{j^{*}},k_{i^{*}}\right)\land
g_{s}\left(\sigma,c_{j},k_{i}\right)\cdot
g_{s}\left(\sigma,c_{j^{*}},k_{i^{*}}\right)<0,$
$\displaystyle\qquad\qquad\text{ for some
}i,i^{*}=0,\ldots,N,\;j,j^{*}=0,\ldots,M,\;\sigma\in[0,1]\Big{\\}},$
$\displaystyle\mathcal{G}_{0}$
$\displaystyle=\mathcal{G}_{0}^{1}\cup\mathcal{G}_{0}^{2}\cup\mathcal{G}_{0}^{3}\cup\mathcal{G}_{0}^{4}.$
###### Remark 2.1.
* •
The set $\mathcal{G}_{0}^{1}$ includes all the possible initial values for
$g$;
* •
The set $\mathcal{G}_{0}^{2}$ provides a sufficiently fine grid for $g$, so
that any $s$ grid containing all the pre-images of $\mathcal{G}_{0}^{2}$
through $g\left(\cdot,c_{j},k_{i}\right)$, for any fixed $j,i$, is finer than
$\frac{1}{L}$;
* •
The set $\mathcal{G}_{0}^{3}$ includes all the possible maxima of
$g\left(\cdot,c_{j},k_{i}\right)$ for any $j,i$;
* •
The set $\mathcal{G}_{0}^{4}$ includes all the possible values of $g$ at
which two graphs of functions of the type $g\left(\cdot,c_{j},k_{i}\right)$
intersect with derivatives of opposite sign. Because of the shape of $g$,
this set too is finite.
We start the front tracking algorithm from the region $x<\bar{x}_{1}$. For
this purpose we define the allowed values for $s$ in that region:
$\mathcal{S}_{0,j}=\left\\{\sigma\mid
g\left(\sigma,c_{j},k_{0}\right)\in\mathcal{G}_{0}\right\\}.$
We call $f^{0,j}(\sigma)$ the linear interpolation of the map $\sigma\mapsto
f\left(\sigma,c_{j},k_{0}\right)$ according to the points in
$\mathcal{S}_{0,j}$. Observe that, since we have included
$\mathcal{G}_{0}^{2}$ in $\mathcal{G}_{0}$, the set
$\left\\{\frac{\ell}{L}\right\\}_{\ell=0}^{L}$ is included in
$\mathcal{S}_{0,j}$ for every $j$, hence we have the uniform estimate
$\begin{cases}\left|f\left(\sigma,c_{j},k_{0}\right)-f^{0,j}(\sigma)\right|\leq\varepsilon,\\\
\left|\partial_{\sigma}f\left(\sigma,c_{j},k_{0}\right)-\partial_{\sigma}f^{0,j}(\sigma)\right|\leq\frac{\varepsilon}{N+M},\end{cases}\quad\text{
for all }\sigma\in\left[0,1\right],\ j=0,\ldots,M.$
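A minimal sketch of this interpolation step, with a uniform node set standing in for $\mathcal{S}_{0,j}$ and the same stand-in flux as before:

```python
import numpy as np

def f_bl(s, m=2.0):
    return s**2 / (s**2 + m * (1.0 - s)**2)    # stand-in flux

S = np.linspace(0.0, 1.0, 101)                 # node set standing in for S_{0,j}
F = f_bl(S)

def f_eps(s):
    """Piecewise linear interpolation of the flux on the node set S;
    Riemann problems for such a flux have piecewise constant entropy
    solutions with values in the kink set S."""
    return np.interp(s, S, F)
```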
Figure 2: Wave front tracking pattern: $k$ waves in blue, $c$ waves in green
and $s$ waves in red.
We solve all the Riemann problems in $x<\bar{x}_{1}$ at $t=0$ in the following
way. Let $\bar{x}\in\left]\bar{y}_{j},\bar{y}_{j+1}\right[$
($\bar{y}_{0}=-\infty$) be a jump in $\bar{s}^{\varepsilon}$. Here we take the
entropic solution to the Riemann problem
$\partial_{t}s+\partial_{x}f^{0,j}\left(s\right)=0,\qquad
s\left(0,x\right)=\begin{cases}\bar{s}^{\varepsilon}\left(\bar{x}-\right)&\text{
for }x<\bar{x},\\\ \bar{s}^{\varepsilon}\left(\bar{x}+\right)&\text{ for
}x>\bar{x}.\end{cases}$
Since $f^{0,j}$ is piecewise linear, the solution to the Riemann problem is
piecewise constant and takes values in the set $\mathcal{S}_{0,j}$ of the kink
points of $f^{0,j}$, moreover the entropy condition in [Bbook, Theorem 4.4] is
satisfied. The same Riemann solver is used whenever at $t>0$ two $s$ waves
interact in some region $\Omega_{0,j}$ defined below.
At the points $\bar{y}_{j}$, we solve the Riemann problem according to the
minimum jump condition described in [WSCauchy] (see also [GimseRisebro]),
which we briefly outline below (see Fig. 3).
Figure 3: The graphs of $G^{L}$ and $G^{R}$ are drawn respectively in blue and
red. For each graph, a possible transition level $\gamma$ (the level at which
the given $G^{L}$ and $G^{R}$ intersect) is drawn in green.
Define
$s^{L}=\bar{s}^{\varepsilon}\left(\bar{y}_{j}-\right),\quad
c^{L}=\bar{c}^{\varepsilon}\left(\bar{y}_{j}-\right)=c_{j-1},\quad
s^{R}=\bar{s}^{\varepsilon}\left(\bar{y}_{j}+\right),\quad
c^{R}=\bar{c}^{\varepsilon}\left(\bar{y}_{j}+\right)=c_{j},$
and the two auxiliary monotone functions (the first non-increasing and the
second non-decreasing)
$G^{L}\left(\sigma\right)=\begin{cases}\max\left\\{g\left(\varsigma,c^{L},k_{0}\right)\mid\varsigma\in\left[\sigma,s^{L}\right]\right\\},&\text{
for }\sigma\leq s^{L},\\\\[10.0pt]
\min\left\\{g\left(\varsigma,c^{L},k_{0}\right)\mid\varsigma\in\left[s^{L},\sigma\right]\right\\},&\text{
for }\sigma\geq s^{L},\end{cases}$
$G^{R}\left(\sigma\right)=\begin{cases}\min\left\\{g\left(\varsigma,c^{R},k_{0}\right)\mid\varsigma\in\left[\sigma,s^{R}\right]\right\\},&\text{
for }\sigma\leq s^{R},\\\\[10.0pt]
\max\left\\{g\left(\varsigma,c^{R},k_{0}\right)\mid\varsigma\in\left[s^{R},\sigma\right]\right\\},&\text{
for }\sigma\geq s^{R}.\end{cases}$
Call $\gamma$ the unique level at which $G^{L}$ and $G^{R}$ intersect. Because
of the hypotheses on $g$, $\gamma$ is equal to either
$g\left(s^{L},c^{L},k_{0}\right)$, $g\left(s^{R},c^{R},k_{0}\right)$, a
maximum of either $g\left(\cdot,c^{L},k_{0}\right)$ or
$g\left(\cdot,c^{R},k_{0}\right)$, or a point at which these two functions
intersect with derivatives of opposite sign. In any case
$\gamma\in\mathcal{G}_{0}$ holds. Define the closed intervals
$I^{L}=\left[G^{L}\right]^{-1}\left(\left\\{\gamma\right\\}\right),\qquad
I^{R}=\left[G^{R}\right]^{-1}\left(\left\\{\gamma\right\\}\right).$
Finally call $s^{-}$ and $s^{+}$ respectively the unique projections of
$s^{L}$ and $s^{R}$ on the closed strictly convex sets $I^{L}$ and $I^{R}$. It
is not difficult to show that
$\begin{array}[]{ll}\gamma=G^{L}\left(s^{-}\right)=g\left(s^{-},c^{L},k_{0}\right),&s^{L},s^{-}\in\mathcal{S}_{0,j-1},\\\
\gamma=G^{R}\left(s^{+}\right)=g\left(s^{+},c^{R},k_{0}\right),&s^{R},\;s^{+}\in\mathcal{S}_{0,j}.\end{array}$
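The minimum jump condition itself is also easy to prototype. The sketch below is schematic and ours alone: as a simplification it assumes a single common saturation grid containing $s^{L}$ and $s^{R}$, evaluates $G^{L}$ and $G^{R}$ on it, locates the crossing level $\gamma$ of the nonincreasing/nondecreasing pair by direct search, and takes $s^{-}$, $s^{+}$ as the grid points of $I^{L}$, $I^{R}$ closest to $s^{L}$, $s^{R}$.

```python
def envelope(sigma, s0, g, grid, left):
    """G^L (left=True, nonincreasing) or G^R (left=False, nondecreasing) at sigma.
    Assumes sigma and s0 belong to `grid`."""
    vals = [g(t) for t in grid if min(sigma, s0) <= t <= max(sigma, s0)]
    hi, lo = max(vals), min(vals)
    if left:
        return hi if sigma <= s0 else lo
    return lo if sigma <= s0 else hi

def minimum_jump(grid, g_L, g_R, s_L, s_R, tol=1e-12):
    """Return (gamma, s_minus, s_plus) for traces s_L, s_R and reduced fluxes
    g_L = g(., c^L, k_0), g_R = g(., c^R, k_0) sampled on a common grid."""
    GL = lambda s: envelope(s, s_L, g_L, grid, True)
    GR = lambda s: envelope(s, s_R, g_R, grid, False)
    # crossing of a nonincreasing and a nondecreasing function on the grid
    s_star = min(grid, key=lambda s: abs(GL(s) - GR(s)))
    gamma = GL(s_star)
    I_L = [s for s in grid if abs(GL(s) - gamma) <= tol]   # preimage of gamma
    I_R = [s for s in grid if abs(GR(s) - gamma) <= tol]
    s_minus = min(I_L, key=lambda s: abs(s - s_L))         # projection of s_L
    s_plus = min(I_R, key=lambda s: abs(s - s_R))          # projection of s_R
    return gamma, s_minus, s_plus
```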
Take any wave, with left and right states $s_{l},s_{r}\in\mathcal{S}_{0,j-1}$,
of the entropic solution to the Riemann problem
$\partial_{t}s+\partial_{x}f^{0,j-1}\left(s\right)=0,\qquad
s\left(0,x\right)=\begin{cases}s^{L}&\text{ for }x<\bar{y}_{j},\\\
s^{-}&\text{ for }x>\bar{y}_{j},\end{cases}$ (2.8)
and suppose $s^{L}<s^{-}$. Then, $s^{L}\leq s_{l}<s_{r}\leq s^{-}$ and,
because of the entropy condition [Bbook, Theorem 4.4], its speed satisfies
($f^{0,j-1}$ coincides with $f\left(\cdot,c^{L},k_{0}\right)$ on
$\mathcal{S}_{0,j-1}$):
$\begin{split}\lambda_{s}&=\frac{f\left(s_{r},c^{L},k_{0}\right)-f\left(s_{l},c^{L},k_{0}\right)}{s_{r}-s_{l}}\leq\frac{f\left(s^{-},c^{L},k_{0}\right)-f\left(s_{l},c^{L},k_{0}\right)}{s^{-}-s_{l}}\\\
&\quad=g\left(s^{-},c^{L},k_{0}\right)+s_{l}\frac{g\left(s^{-},c^{L},k_{0}\right)-g\left(s_{l},c^{L},k_{0}\right)}{s^{-}-s_{l}}\\\
&\quad=\lambda_{c}+s_{l}\frac{G^{L}\left(s^{-}\right)-g\left(s_{l},c^{L},k_{0}\right)}{s^{-}-s_{l}}\\\
&\quad\leq\lambda_{c},\end{split}$
with
$\lambda_{c}=\gamma=g\left(s^{-},c^{L},k_{0}\right)=g\left(s^{+},c^{R},k_{0}\right)$
and where we used the definition of $G^{L}$. If instead $s^{-}<s^{L}$, then
$s^{-}\leq s_{r}<s_{l}\leq s^{L}$ and, as in the previous computation, we have
$\lambda_{s}\leq\lambda_{c}+s_{l}\frac{G^{L}\left(s^{-}\right)-g\left(s_{l},c^{L},k_{0}\right)}{s^{-}-s_{l}}\leq\lambda_{c}.$
Therefore, in any case, the solution to the Riemann problem (2.8) can be
patched with a $c$ wave that travels with speed $\lambda_{c}$ and connects the
left state $\left(s^{-},c^{L},k_{0}\right)$ to the right state
$\left(s^{+},c^{R},k_{0}\right)$. Similar computations can be done at the
right of the $c$ wave, so that the complete solution includes a $c$ wave
travelling with speed $\lambda_{c}$, possibly together with some entropic $s$
waves to its left (solutions to
$\partial_{t}s+\partial_{x}f^{0,j-1}\left(s\right)=0$) and some entropic $s$
waves to its right (solutions to
$\partial_{t}s+\partial_{x}f^{0,j}\left(s\right)=0$). We also use this Riemann
solver whenever, for $t>0$, a $c$ wave interacts with one or more $s$ waves.
We point out the following properties of this Riemann solver that will be
needed in the proof of the main theorem.
* •
The $c$ wave satisfies the Rankine-Hugoniot conditions
$\begin{cases}f\left(s^{+},c^{R},k_{0}\right)-f\left(s^{-},c^{L},k_{0}\right)=\lambda_{c}\left(s^{+}-s^{-}\right),\\\
c^{R}f\left(s^{+},c^{R},k_{0}\right)-c^{L}f\left(s^{-},c^{L},k_{0}\right)=\lambda_{c}\left(c^{R}s^{+}-c^{L}s^{-}\right);\\\
\end{cases}$
both identities follow from $f\left(s^{-},c^{L},k_{0}\right)=\lambda_{c}s^{-}$
and $f\left(s^{+},c^{R},k_{0}\right)=\lambda_{c}s^{+}$.
* •
The $c$ wave is an “admissible” path as defined in [WSCauchy] and satisfies
the following entropy condition:
* –
if $s^{-}<s^{+}$ there exists $s^{*}\in\left[s^{-},s^{+}\right]$ such that
$\begin{cases}g\left(\sigma,c^{L},k_{0}\right)\geq\lambda_{c}&\text{ for all
}\sigma\in\left[s^{-},s^{*}\right],\\\
g\left(\sigma,c^{R},k_{0}\right)\geq\lambda_{c}&\text{ for all
}\sigma\in\left[s^{*},s^{+}\right].\end{cases}$ (2.9)
* –
if $s^{+}<s^{-}$ there exists $s^{*}\in\left[s^{+},s^{-}\right]$ such that
$\begin{cases}g\left(\sigma,c^{R},k_{0}\right)\leq\lambda_{c}&\text{ for all
}\sigma\in\left[s^{+},s^{*}\right],\\\
g\left(\sigma,c^{L},k_{0}\right)\leq\lambda_{c}&\text{ for all
}\sigma\in\left[s^{*},s^{-}\right].\end{cases}$ (2.10)
Let $y_{1}\left(t\right),\ldots,y_{M}\left(t\right)$ denote all the $c$ wave
fronts at time $t$. Their initial positions are the discontinuity points
of $\bar{c}^{\varepsilon}$. We will show that they do not interact with
each other and keep the same number and order as time goes on.
We define the open regions
$\Omega_{0,j}=\left\\{(t,x)\in\left[0,+\infty\right)\times\mathbb{R}\mid
x<\bar{x}_{1}\land y_{j}(t)<x<y_{j+1}(t)\right\\},$
and the flux
$F^{\varepsilon}\left(t,x,\sigma\right)=f^{0,j}\left(\sigma\right),\text{ for
}\left(t,x\right)\in\Omega_{0,j}.$
The wave front tracking approximation $s^{\varepsilon}$ so constructed is an
exact weak entropic solution to
$\begin{cases}\partial_{t}s^{\varepsilon}+\partial_{x}\left[F^{\varepsilon}\left(t,x,s^{\varepsilon}\right)\right]=0,\\\
\partial_{t}\left(c^{\varepsilon}s^{\varepsilon}\right)+\partial_{x}\left[c^{\varepsilon}F^{\varepsilon}\left(t,x,s^{\varepsilon}\right)\right]=0,\\\
\end{cases}$
in the region $x<\bar{x}_{1}$.
Since the $c$ family is linearly degenerate, $c$ waves will not interact with
each other. Indeed, given two consecutive $c$ waves located respectively in
$y_{j}(t)$ and $y_{j+1}(t)$, the first conservation law in the previous system
implies
$\begin{split}\frac{d}{dt}\int_{y_{j}(t)}^{y_{j+1}(t)}s^{\varepsilon}\left(t,x\right)\;dx&=\dot{y}_{j+1}\left(t\right)s^{\varepsilon}\left(t,y_{j+1}\left(t\right)-\right)-\dot{y}_{j}\left(t\right)s^{\varepsilon}\left(t,y_{j}\left(t\right)+\right)\\\
&\quad-f^{0,j}\left(s^{\varepsilon}\left(t,y_{j+1}\left(t\right)-\right)\right)+f^{0,j}\left(s^{\varepsilon}\left(t,y_{j}\left(t\right)+\right)\right)=0\end{split}$
since $\dot{y}=\lambda_{c}=\frac{f}{s}$ at both fronts. Hence $c$ waves cannot
interact with each other (if $s^{\varepsilon}=0$ between two $c$ waves, then
they both must travel with zero speed, and therefore even in this case they
cannot interact).
Since any interaction with the $k$ wave located at $\bar{x}_{1}$ cannot give
rise to waves entering the region $x<\bar{x}_{1}$, following [WSCauchy], the
wave front tracking algorithm can be carried out for all times in that region.
Observe that for a fixed $\varepsilon$ the total variation of the singular
variable $P$ introduced in (2.6) is bounded. Since the grid
$\mathcal{S}_{0,j}$ contains all the possible maximum points of
$g\left(\cdot,c_{j},k_{0}\right)$, in the regions $\Omega_{0,j}$,
$P\left(\sigma,c_{j},k_{0}\right)=\int_{0}^{\sigma}\left|\partial_{\xi}\frac{f^{0,j}\left(\xi\right)}{\xi}\right|d\xi$
for any $\sigma\in\mathcal{S}_{0,j}$. The variable $P$ is well behaved in the
interplay between the two resonant waves $s$ and $c$. Unfortunately, this
behavior is disrupted by the third family of waves, the $k$ waves, except in
the case where very strong hypotheses are assumed on the flux, as in
[CocliteRisebro]. In fact, in [CocliteRisebro] it is assumed that the point of
maximum of $g$ does not change with $k$ (actually in [CocliteRisebro] our $k$
waves correspond to discontinuities in time because of Lagrangian
coordinates). Since such assumptions are not realistic for our model, we are
not able to prove a bound on the total variation of $P$ uniformly in
$\varepsilon$. Instead, we resolve this difficulty by applying a compensated
compactness argument.
Up to now, all the values of
$\left(s^{\varepsilon},c^{\varepsilon},k^{\varepsilon}\right)$ are determined
for $x<\bar{x}_{1}$. Since $k^{\varepsilon}$ is constant in time, its value at
the right of $\bar{x}_{1}$ is known. The jump conditions $\Delta f=\Delta c=0$
determine all the values of
$\left(s^{\varepsilon},c^{\varepsilon},k^{\varepsilon}\right)$ to the right of
$\bar{x}_{1}$. These values could introduce new values for the function $g$
that must be added to the grid, i.e.,
$\mathcal{G}_{1}=\mathcal{G}_{0}\cup\left\\{g\left(s^{\varepsilon}\left(t,\bar{x}_{1}+\right),c^{\varepsilon}\left(t,\bar{x}_{1}+\right),k^{\varepsilon}\left(\bar{x}_{1}+\right)\right)\mid\;t\geq
0\right\\}.$
This gives the new allowed values for $s$ for $\bar{x}_{1}<x<\bar{x}_{2}$:
$\mathcal{S}_{1,j}=\left\\{\sigma\mid
g\left(\sigma,c_{j},k_{1}\right)\in\mathcal{G}_{1}\right\\}.$
Using these values we now build the corresponding approximations
$f^{1,j}\left(\sigma\right)$ of the flux as the linear interpolation of
$f\left(\sigma,c_{j},k_{1}\right)$ according to the points in
$\mathcal{S}_{1,j}$. Then we solve, as before, all the Riemann problems at
$t=0$, $\bar{x}_{1}<x<\bar{x}_{2}$ and $t\geq 0$, $x=\bar{x}_{1}$. As before
$c$ waves cannot interact with each other so, by induction, we can carry out
the wave front tracking algorithm on the semi plane $t\geq 0$.
We define the open regions ($\bar{x}_{0}=y_{0}(t)=-\infty$,
$\bar{x}_{N+1}=y_{M+1}(t)=+\infty$)
$\Omega_{i,j}=\left\\{(t,x)\in\left[0,+\infty\right)\times\mathbb{R}\mid\bar{x}_{i}<x<\bar{x}_{i+1}\land
y_{j}(t)<x<y_{j+1}(t)\right\\}.$
The wave front tracking approximations so obtained are weak entropic solutions
to
$\begin{cases}\partial_{t}s^{\varepsilon}+\partial_{x}\left[F^{\varepsilon}\left(t,x,s^{\varepsilon}\right)\right]=0,\\\
\partial_{t}\left(c^{\varepsilon}s^{\varepsilon}\right)+\partial_{x}\left[c^{\varepsilon}F^{\varepsilon}\left(t,x,s^{\varepsilon}\right)\right]=0,\\\
\partial_{t}k^{\varepsilon}=0,\\\
\left(s^{\varepsilon},c^{\varepsilon},k^{\varepsilon}\right)(0,x)=\left(\bar{s}^{\varepsilon},\bar{c}^{\varepsilon},\bar{k}^{\varepsilon}\right)(x),\end{cases}$
(2.11)
where the flux $F^{\varepsilon}$ is defined by
$F^{\varepsilon}\left(t,x,\sigma\right)=f^{i,j}\left(\sigma\right),\text{ for
}\left(t,x\right)\in\Omega_{i,j}.$
For all $\left(t,x\right)\in\left[0,+\infty\right[\times\mathbb{R}\text{ and
}\sigma\in\left[0,1\right]$, the flux satisfies the estimates
$\begin{cases}\left|F^{\varepsilon}\left(t,x,\sigma\right)-f\left(\sigma,c^{\varepsilon}\left(t,x\right),k^{\varepsilon}\left(x\right)\right)\right|\leq\varepsilon,\\\
\left|\partial_{\sigma}F^{\varepsilon}\left(t,x,\sigma\right)-\partial_{\sigma}f\left(\sigma,c^{\varepsilon}\left(t,x\right),k^{\varepsilon}\left(x\right)\right)\right|\leq\frac{\varepsilon}{N+M}.\end{cases}$
(2.12)
We remark that, in any region $\Omega_{i,j}$, $s^{\varepsilon}$ is an entropic
solution to the scalar conservation law
$\partial_{t}s^{\varepsilon}+\partial_{x}\left[f^{i,j}\left(s^{\varepsilon}\right)\right]=0.$
## 3 Entropy estimates
Given a smooth (not necessarily convex) entropy function
$\eta\left(\sigma\right)$ with $\eta(0)=0$, we define the corresponding
entropy flux $q^{\varepsilon}$ (relative to the approximate flux
$F^{\varepsilon}$) as
$q^{\varepsilon}\left(t,x,\sigma\right)=\int_{0}^{\sigma}\eta^{\prime}\left(\varsigma\right)\partial_{\varsigma}F^{\varepsilon}\left(t,x,\varsigma\right)d\varsigma.$
(3.13)
###### Theorem 3.1.
For a fixed convex entropy $\eta$, the positive part of the measure
$\mu_{\varepsilon}=\partial_{t}\left[\eta\left(s^{\varepsilon}\right)\right]+\partial_{x}\left[q^{\varepsilon}\left(t,x,s^{\varepsilon}\right)\right]$
(3.14)
is uniformly (with respect to the approximation parameter $\varepsilon$)
locally bounded. More precisely, for any compact set
$K\subset\left]0,+\infty\right[\times\mathbb{R}$ there exists a constant
$C_{K}$ such that
$\mu_{\varepsilon}^{+}\left(K\right)\leq C_{K}.$
Here the constant $C_{K}$ may depend on $\eta$, $f$ and on the total variation
of the initial data $\bar{c}$ and $\bar{k}$, but it does not depend on the
approximation parameter $\varepsilon$.
###### Proof.
Fixing a non-negative test function $\phi\in
C^{\infty}_{c}\left(\left]0,+\infty\right[\times\mathbb{R}\right)$, we compute
$\begin{split}\langle\partial_{t}\left[\eta\left(s^{\varepsilon}\right)\right]+\partial_{x}\left[q^{\varepsilon}\left(t,x,s^{\varepsilon}\right)\right],\phi\rangle&=-\int\eta\left(s^{\varepsilon}\right)\partial_{t}\phi+q^{\varepsilon}\left(t,x,s^{\varepsilon}\right)\partial_{x}\phi\;dtdx\\\
&=\int_{0}^{+\infty}\sum_{\ell=1}^{\mathcal{N}(t)}\left[\Delta
q^{\varepsilon}_{\ell}\left(t\right)-\Delta\eta_{\ell}\left(t\right)\dot{\xi}_{\ell}\left(t\right)\right]\phi\left(t,\xi_{\ell}\left(t\right)\right)dt.\end{split}$
(3.15)
Here $\xi_{1},\ldots,\xi_{\mathcal{N}(t)}$ are the locations of the
discontinuities in
$\left(s^{\varepsilon},c^{\varepsilon},k^{\varepsilon}\right)$, and the
notation $\Delta$ denotes the jumps:
$\begin{cases}\Delta\eta_{\ell}\left(t\right)=\eta\left(s^{\varepsilon}\left(t,\xi_{\ell}\left(t\right)+\right)\right)-\eta\left(s^{\varepsilon}\left(t,\xi_{\ell}\left(t\right)-\right)\right),\\\
\Delta
q^{\varepsilon}_{\ell}\left(t\right)=q^{\varepsilon}\left(t,\xi_{\ell}(t)+,s^{\varepsilon}\left(t,\xi_{\ell}\left(t\right)+\right)\right)-q^{\varepsilon}\left(t,\xi_{\ell}(t)-,s^{\varepsilon}\left(t,\xi_{\ell}\left(t\right)-\right)\right).\end{cases}$
We study separately the three different kinds of waves and denote with the
superscripts “$-$” and “$+$” the values computed respectively to the left and
the right of the discontinuities. We omit the superscript for the values that
do not change across the discontinuities.
$s$ waves: Both $c$ and $k$ are constant while the jump in $s$ satisfies
Rankine-Hugoniot and is entropic according to the approximate flux. If
$\left(t,\xi_{\ell}\left(t\right)\right)\in\Omega_{i,j}$, then
$\dot{\xi}_{\ell}\left(s^{+}-s^{-}\right)=f^{i,j}\left(s^{+}\right)-f^{i,j}\left(s^{-}\right)=f^{+}-f^{-}.$
Hence, applying the definition of $q^{\varepsilon}$ and integrating by parts,
we compute
$\begin{split}\Delta
q^{\varepsilon}_{\ell}-\Delta\eta_{\ell}\dot{\xi}_{\ell}&=\int_{s^{-}}^{s^{+}}\eta^{\prime}\left(\varsigma\right)\left[\partial_{\varsigma}f^{i,j}\left(\varsigma\right)-\dot{\xi}_{\ell}\right]d\varsigma\\\
&=\left[\eta^{\prime}\left(\varsigma\right)\left(f^{i,j}\left(\varsigma\right)-f^{-}-\dot{\xi}_{\ell}\left(\varsigma-s^{-}\right)\right)\right]_{s^{-}}^{s^{+}}\\\
&\qquad\qquad-\int_{s^{-}}^{s^{+}}\eta^{\prime\prime}\left(\varsigma\right)\left[f^{i,j}\left(\varsigma\right)-f^{-}-\dot{\xi}_{\ell}\left(\varsigma-s^{-}\right)\right]d\varsigma\\\
&\leq 0.\end{split}$
The last inequality holds since $\eta^{\prime\prime}\geq 0$ and the $s$ wave in
$\xi_{\ell}$ is an entropic wave for the flux $f^{i,j}$: for all
$\varsigma\in\left[\min\left\\{s^{-},s^{+}\right\\},\max\left\\{s^{-},s^{+}\right\\}\right]$
one has
$\operatorname{sign}\left(s^{+}-s^{-}\right)\left[f^{i,j}\left(\varsigma\right)-f^{-}-\frac{f^{+}-f^{-}}{s^{+}-s^{-}}\left(\varsigma-s^{-}\right)\right]\geq
0.$
$c$ waves: Both $k$ and $g=\frac{f}{s}$ are constant and the speed
$\dot{\xi}_{\ell}$ of the wave equals
$g\left(s^{-},c^{-},k\right)=g\left(s^{+},c^{+},k\right)$, where $\xi_{\ell}$
is the boundary between the regions $\Omega_{i,j}$ and $\Omega_{i,j+1}$.
Denoting by $C$ a generic constant that depends only on $\eta$ and $f$, the
uniform estimates (2.12) lead to
$\begin{split}\Delta&q^{\varepsilon}_{\ell}-\Delta\eta_{\ell}\dot{\xi}_{\ell}=\int_{0}^{s^{+}}\eta^{\prime}\left(\varsigma\right)\partial_{\varsigma}f^{i,j+1}\left(\varsigma\right)d\varsigma-\int_{0}^{s^{-}}\eta^{\prime}\left(\varsigma\right)\partial_{\varsigma}f^{i,j}\left(\varsigma\right)d\varsigma-\dot{\xi}_{\ell}\left(\eta\left(s^{+}\right)-\eta\left(s^{-}\right)\right)\\\
&\leq
C\frac{\varepsilon}{N+M}+\int_{0}^{s^{+}}\eta^{\prime}\left(\varsigma\right)\partial_{\varsigma}f\left(\varsigma,c^{+},k\right)d\varsigma-\int_{0}^{s^{-}}\eta^{\prime}\left(\varsigma\right)\partial_{\varsigma}f\left(\varsigma,c^{-},k\right)d\varsigma\\\
&\qquad-\int_{0}^{s^{+}}\eta^{\prime}\left(\varsigma\right)\dot{\xi}_{\ell}d\varsigma+\int_{0}^{s^{-}}\eta^{\prime}\left(\varsigma\right)\dot{\xi}_{\ell}d\varsigma\\\
&\leq
C\frac{\varepsilon}{N+M}+\int_{0}^{s^{+}}\eta^{\prime}\left(\varsigma\right)\left[\partial_{\varsigma}f\left(\varsigma,c^{+},k\right)-\dot{\xi}_{\ell}\right]d\varsigma-\int_{0}^{s^{-}}\eta^{\prime}\left(\varsigma\right)\left[\partial_{\varsigma}f\left(\varsigma,c^{-},k\right)-\dot{\xi}_{\ell}\right]d\varsigma\\\
&\leq
C\frac{\varepsilon}{N+M}-\int_{0}^{s^{+}}\eta^{\prime\prime}\left(\varsigma\right)\left[f\left(\varsigma,c^{+},k\right)-\dot{\xi}_{\ell}\varsigma\right]d\varsigma+\int_{0}^{s^{-}}\eta^{\prime\prime}\left(\varsigma\right)\left[f\left(\varsigma,c^{-},k\right)-\dot{\xi}_{\ell}\varsigma\right]d\varsigma.\\\
\end{split}$
Here we have integrated by parts and used the relations
$f\left(0,c^{\pm},k\right)=0,\qquad
f\left(s^{\pm},c^{\pm},k\right)=s^{\pm}g\left(s^{\pm},c^{\pm},k\right)=s^{\pm}\dot{\xi}_{\ell}.$
Suppose $s^{-}\leq s^{+}$, the other case being symmetric. Because of the
entropy condition on $c$ waves (2.9) there exists
$s^{*}\in\left[s^{-},s^{+}\right]$ such that
$\begin{cases}g\left(\varsigma,c^{-},k\right)\geq\dot{\xi}_{\ell}&\text{ for
all }\varsigma\in\left[s^{-},s^{*}\right],\\\
g\left(\varsigma,c^{+},k\right)\geq\dot{\xi}_{\ell}&\text{ for all
}\varsigma\in\left[s^{*},s^{+}\right].\end{cases}$
The estimates (2.12) further lead to
$\begin{split}\Delta
q^{\varepsilon}_{\ell}-\Delta\eta_{\ell}\dot{\xi}_{\ell}&\leq-\int_{s^{*}}^{s^{+}}\eta^{\prime\prime}\left(\varsigma\right)\left[f\left(\varsigma,c^{+},k\right)-\dot{\xi}_{\ell}\varsigma\right]d\varsigma+\int_{s^{*}}^{s^{-}}\eta^{\prime\prime}\left(\varsigma\right)\left[f\left(\varsigma,c^{-},k\right)-\dot{\xi}_{\ell}\varsigma\right]d\varsigma\\\
&\qquad+C\left(\frac{\varepsilon}{N+M}+\left|c^{+}-c^{-}\right|\right)\\\
&=-\int_{s^{*}}^{s^{+}}\eta^{\prime\prime}\left(\varsigma\right)\varsigma\left[g\left(\varsigma,c^{+},k\right)-\dot{\xi}_{\ell}\right]d\varsigma+\int_{s^{*}}^{s^{-}}\eta^{\prime\prime}\left(\varsigma\right)\varsigma\left[g\left(\varsigma,c^{-},k\right)-\dot{\xi}_{\ell}\right]d\varsigma\\\
&\qquad+C\left(\frac{\varepsilon}{N+M}+\left|c^{+}-c^{-}\right|\right)\\\
&\leq C\left(\frac{\varepsilon}{N+M}+\left|\Delta
c_{\ell}\right|\right).\end{split}$
$k$ waves: For a $k$ wave, both $c$ and $f$ are constant and
$\dot{\xi}_{\ell}=0$, where $\xi_{\ell}$ is the boundary between two regions
$\Omega_{i,j}$ and $\Omega_{i+1,j}$. We have
$\begin{split}\Delta&q^{\varepsilon}_{\ell}-\Delta\eta_{\ell}\dot{\xi}_{\ell}=\int_{0}^{s^{+}}\eta^{\prime}\left(\varsigma\right)\partial_{\varsigma}f^{i+1,j}\left(\varsigma\right)d\varsigma-\int_{0}^{s^{-}}\eta^{\prime}\left(\varsigma\right)\partial_{\varsigma}f^{i,j}\left(\varsigma\right)d\varsigma\\\
&\leq
C\frac{\varepsilon}{N+M}+\int_{0}^{s^{+}}\eta^{\prime}\left(\varsigma\right)\partial_{\varsigma}f\left(\varsigma,c,k^{+}\right)d\varsigma-\int_{0}^{s^{-}}\eta^{\prime}\left(\varsigma\right)\partial_{\varsigma}f\left(\varsigma,c,k^{-}\right)d\varsigma\\\
&\leq
C\left(\frac{\varepsilon}{N+M}+\left|k^{+}-k^{-}\right|\right)+\int_{s^{-}}^{s^{+}}\eta^{\prime}\left(\varsigma\right)\partial_{\varsigma}f\left(\varsigma,c,k^{-}\right)d\varsigma\\\
&\leq C\left(\frac{\varepsilon}{N+M}+\left|\Delta
k_{\ell}\right|\right)+\left\|\eta^{\prime}\right\|_{\infty}\operatorname{sign}\left(s^{+}-s^{-}\right)\int_{s^{-}}^{s^{+}}\partial_{\varsigma}f\left(\varsigma,c,k^{-}\right)d\varsigma\\\
&=C\left(\frac{\varepsilon}{N+M}+\left|\Delta
k_{\ell}\right|+\left|f\left(s^{+},c,k^{-}\right)-f\left(s^{-},c,k^{-}\right)\right|\right)\\\
&=C\left(\frac{\varepsilon}{N+M}+\left|\Delta
k_{\ell}\right|+\left|f\left(s^{+},c,k^{-}\right)-f\left(s^{+},c,k^{+}\right)\right|\right)\\\
&=C\left(\frac{\varepsilon}{N+M}+\left|\Delta
k_{\ell}\right|\right)\end{split}$
where we used the fact that
$\partial_{\varsigma}f\left(\varsigma,c,k^{-}\right)\geq 0$ and that
$f\left(s^{-},c,k^{-}\right)=f\left(s^{+},c,k^{+}\right)$.
Finally, if the compact support of $\phi$ is contained in
$\left]0,T\right[\times\mathbb{R}$, equality (3.15) and the previous analysis
on the three types of waves lead to
$\begin{split}\langle\partial_{t}\left[\eta\left(s^{\varepsilon}\right)\right]+\partial_{x}\left[q^{\varepsilon}\left(t,x,s^{\varepsilon}\right)\right],\phi\rangle&\leq
CT\left(\frac{N\varepsilon+M\varepsilon}{N+M}+\operatorname{Tot.Var.}\left\\{\bar{c}\right\\}+\operatorname{Tot.Var.}\left\\{\bar{k}\right\\}\right)\left\|\phi\right\|_{\infty}\\\
&\leq
CT\left(1+\operatorname{Tot.Var.}\left\\{\bar{c}\right\\}+\operatorname{Tot.Var.}\left\\{\bar{k}\right\\}\right)\left\|\phi\right\|_{\infty}\end{split}$
for any $\varepsilon\in\left]0,1\right[$, proving the theorem. ∎
###### Theorem 3.2.
For any smooth entropy $\eta$ (even non convex) and decreasing sequence
$\varepsilon_{j}\to 0$ there exists a compact set $\mathcal{K}\subset
H^{-1}_{loc}\left(\Omega\right)$, independent of ${j}$, such that
$\mu_{\varepsilon_{j}}=\partial_{t}\left[\eta\left(s^{\varepsilon_{j}}\right)\right]+\partial_{x}\left[q^{\varepsilon_{j}}\left(t,x,s^{\varepsilon_{j}}\right)\right]\in\mathcal{K}.$
###### Proof.
We apply standard arguments in compensated compactness theory [diperna].
Integrating the measure $\mu_{\varepsilon}$ over a rectangle (with $t_{1}>0$)
$R=\left[t_{1},t_{2}\right]\times\left[-L,L\right]$ we obtain
$\begin{split}\mu_{\varepsilon}\left(R\right)&=\int_{t_{1}}^{t_{2}}q^{\varepsilon}\left(t,L+,s^{\varepsilon}\left(t,L+\right)\right)-q^{\varepsilon}\left(t,-L-,s^{\varepsilon}\left(t,-L-\right)\right)dt\\\
&\qquad+\int_{-L}^{L}\eta\left(s^{\varepsilon}\left(t_{2}+,x\right)\right)-\eta\left(s^{\varepsilon}\left(t_{1}-,x\right)\right)dx.\end{split}$
Since $s^{\varepsilon}$ is uniformly bounded, there exists a constant
$\bar{C}_{R}$ such that
$\left|\mu_{\varepsilon}\left(R\right)\right|\leq\bar{C}_{R}$ for any
$\varepsilon\in\left]0,1\right[$. If $\eta$ is convex, we can apply Theorem
3.1 to estimate the total variation of $\mu_{\varepsilon}$ uniformly with
respect to $\varepsilon$:
$\left|\mu_{\varepsilon}\right|\left(R\right)=\mu_{\varepsilon}^{+}\left(R\right)+\mu_{\varepsilon}^{-}\left(R\right)=2\mu_{\varepsilon}^{+}\left(R\right)-\mu_{\varepsilon}\left(R\right)\leq
2C_{R}+\bar{C}_{R}.$
If $\eta$ is not convex, then we take a strictly convex entropy $\eta^{*}$
(for instance $\eta^{*}\left(\sigma\right)=\sigma^{2}$) and define
$\tilde{\eta}=\eta+H\eta^{*}$. The entropy $\tilde{\eta}$ is convex for a
sufficiently large constant $H$. We denote by $\mu_{\varepsilon}$,
$\mu_{\varepsilon}^{*}$ and $\tilde{\mu}_{\varepsilon}$ the measures
corresponding to the entropies $\eta$, $\eta^{*}$ and $\tilde{\eta}$. Since
the definition of the entropy flux (3.13) is linear with respect to the
associated entropy, the measures satisfy
$\tilde{\mu}_{\varepsilon}=\mu_{\varepsilon}+H\mu_{\varepsilon}^{*}$. Hence
the inequality
$\left|\mu_{\varepsilon}\right|\left(R\right)\leq\left|\tilde{\mu}_{\varepsilon}\right|\left(R\right)+H\left|\mu^{*}_{\varepsilon}\right|\left(R\right)$
holds. This means that $\left|\mu_{\varepsilon}\right|\left(R\right)$ is
bounded uniformly with respect to $\varepsilon$ since both
$\tilde{\mu}_{\varepsilon}$ and $\mu^{*}_{\varepsilon}$ are associated with
convex entropies. Since the measure
$\mu_{\varepsilon}=\partial_{t}\left[\eta\left(s^{\varepsilon}\right)\right]+\partial_{x}\left[q^{\varepsilon}\left(t,x,s^{\varepsilon}\right)\right]$
restricted to $R$ lies both in a bounded set of the space of measures
$\mathcal{M}\left(R\right)$ and in a bounded set of
$W^{-1,\infty}\left(R\right)$, [Dafermos, Lemma 17.2.2] allows us to conclude
the proof of the theorem. ∎
## 4 Strong Convergence
The following result is a step towards the proof of Theorem 1.1.
###### Theorem 4.1.
There exists a sequence $\varepsilon_{j}\to 0$ such that
$\left(s^{\varepsilon_{j}},c^{\varepsilon_{j}},k^{\varepsilon_{j}}\right)\to\left(\tilde{s},\tilde{c},\tilde{k}\right)$
in $L^{1}_{loc}\left(\Omega\right)$.
###### Proof.
We suitably modify the proof of [BGS, Theorem 4.2], omitting some computations
already written there. The proof takes several steps.
### 1.
Observe that by construction we have
$\operatorname{Tot.Var.}\left\\{c^{\varepsilon}\left(t,\cdot\right)\right\\}=\operatorname{Tot.Var.}\left\\{\bar{c}^{\varepsilon}\right\\}\leq\operatorname{Tot.Var.}\left\\{\bar{c}\right\\}$
and the wave speeds are uniformly bounded. Hence Helly’s theorem implies that
there exists a sequence $c^{\varepsilon_{j}}\to\tilde{c}$ in
$L^{1}_{loc}\left(\Omega\right)$. Since $k^{\varepsilon}$ is constant in time,
we have $k^{\varepsilon_{j}}\to\tilde{k}=\bar{k}$ in
$L^{1}_{loc}\left(\Omega\right)$ as well. In the following we always take
subsequences of this sequence, and we will drop the index $j$ to simplify
notation. We define the limit flux
$F\left(t,x,\sigma\right)=f\left(\sigma,\tilde{c}(t,x),\tilde{k}(x)\right),\quad\text{
for all }\left(t,x\right)\in\Omega,\text{ and }\sigma\in\left[0,1\right]$
and for any entropy $\eta$ we define the limit entropy flux
$q\left(t,x,\sigma\right)=\int_{0}^{\sigma}\eta^{\prime}\left(\varsigma\right)\partial_{\varsigma}F\left(t,x,\varsigma\right)d\varsigma.$
The estimate (uniform in $\sigma\in\left[0,1\right]$)
$\begin{split}&\left|q\left(t,x,\sigma\right)-q^{\varepsilon}\left(t,x,\sigma\right)\right|\leq\int_{0}^{1}\left|\eta^{\prime}\left(\varsigma\right)\right|\bigg{(}\left|\partial_{\varsigma}f\left(\varsigma,\tilde{c}(t,x),\tilde{k}(x)\right)-\partial_{\varsigma}f\left(\varsigma,c^{\varepsilon}(t,x),k^{\varepsilon}(x)\right)\right|\\\
&\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad+\left|\partial_{\varsigma}f\left(\varsigma,c^{\varepsilon}(t,x),k^{\varepsilon}(x)\right)-\partial_{\varsigma}F^{\varepsilon}\left(t,x,\varsigma\right)\right|\bigg{)}d\varsigma\\\
&\quad\leq
C\left(\left|\tilde{c}(t,x)-c^{\varepsilon}(t,x)\right|+\left|\tilde{k}(x)-k^{\varepsilon}(x)\right|+\varepsilon\right)\to
0\qquad\text{ in }L^{1}_{loc}\left(\Omega\right)\end{split}$
implies that
$\partial_{x}\left[q\left(t,x,s^{\varepsilon}\right)-q^{\varepsilon}\left(t,x,s^{\varepsilon}\right)\right]\to
0,\quad\mbox{ in}~{}H^{-1}_{loc}\left(\Omega\right).$
Together with Theorem 3.2, it implies that the sequence
$\partial_{t}\left[\eta\left(s^{\varepsilon}\right)\right]+\partial_{x}\left[q\left(t,x,s^{\varepsilon}\right)\right]=\partial_{t}\left[\eta\left(s^{\varepsilon}\right)\right]+\partial_{x}\left[q^{\varepsilon}\left(t,x,s^{\varepsilon}\right)\right]+\partial_{x}\left[q\left(t,x,s^{\varepsilon}\right)-q^{\varepsilon}\left(t,x,s^{\varepsilon}\right)\right]$
belongs to a compact set in $H^{-1}_{loc}\left(\Omega\right)$.
### 2.
For any $(t,x)\in\Omega$ and $v,w\in[0,1]$ we define
$I(t,x,v,w)~{}\doteq~{}(v-w)\int_{w}^{v}\bigl{[}\partial_{\sigma}F(t,x,\sigma)\bigr{]}^{2}\,d\sigma-\bigl{[}F(t,x,v)-F(t,x,w)\bigr{]}^{2}.$
(4.16)
The following properties hold.
1. (i)
$(v,w)\mapsto I(t,x,v,w)$ is continuous with $I(t,x,v,v)=0$ for any
$v\in\bigl{[}0,1\bigr{]}$.
2. (ii)
$I(t,x,v,w)>0$ for any $v,w\in\bigl{[}0,1\bigr{]}$ with $v\not=w$.
Indeed, (i) is trivial, while (ii) follows from Jensen’s inequality and the
fact that $\sigma\mapsto f\left(\sigma,\gamma,\kappa\right)$, and hence
$\sigma\mapsto F\left(t,x,\sigma\right)$, has a unique inflection point.
Indeed, suppose $w<v$; we observe that
$\sigma\mapsto\partial_{\sigma}F(t,x,\sigma)$ is not constant over the
interval $[w,v]$, and we compute
$\displaystyle I(t,x,v,w)~{}$
$\displaystyle=~{}(v-w)\int_{w}^{v}\bigl{[}\partial_{\sigma}F(t,x,\sigma)\bigr{]}^{2}\,d\sigma-(v-w)^{2}\left[\frac{1}{v-w}\int_{w}^{v}\partial_{\sigma}F(t,x,\sigma)\,d\sigma\right]^{2}$
$\displaystyle>~{}(v-w)\int_{w}^{v}\bigl{[}\partial_{\sigma}F(t,x,\sigma)\bigr{]}^{2}\,d\sigma-(v-w)^{2}\frac{1}{v-w}\int_{w}^{v}\bigl{[}\partial_{\sigma}F(t,x,\sigma)\bigr{]}^{2}\,d\sigma$
$\displaystyle=~{}0.$
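Property (ii) can also be checked numerically. The toy snippet below (our own illustration; the S-shaped flux is a made-up Buckley–Leverett-type stand-in for $F$) approximates $I$ by quadrature and confirms the strict positivity.

```python
import numpy as np

def I_quad(F, w, v, n=4000):
    """Quadrature approximation of
    I(v, w) = (v - w) * int_w^v [F'(s)]^2 ds - (F(v) - F(w))^2,
    which is > 0 by Jensen's inequality unless F' is constant on [w, v]."""
    s = np.linspace(w, v, n)
    Fp = np.gradient(F(s), s)                 # numerical derivative of F
    dx = s[1] - s[0]
    integral = float(np.sum((Fp[:-1] ** 2 + Fp[1:] ** 2) / 2) * dx)  # trapezoid
    return (v - w) * integral - (F(v) - F(w)) ** 2

# Illustrative S-shaped flux with a single inflection point (hypothetical).
F = lambda s: s ** 2 / (s ** 2 + 0.5 * (1 - s) ** 2)
print(I_quad(F, 0.1, 0.9) > 0)                # True
```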
### 3.
Fix $(\tau,y)\in\Omega$ and consider the following entropies and
corresponding limit fluxes
$\displaystyle\eta(\sigma)$ $\displaystyle=\sigma,$ $\displaystyle
q(t,x,\sigma)$ $\displaystyle=F(t,x,\sigma),$
$\displaystyle\eta_{\left(\tau,y\right)}(\sigma)$
$\displaystyle=F(\tau,y,\sigma),$ $\displaystyle
q_{\left(\tau,y\right)}(t,x,\sigma)$
$\displaystyle=\int_{0}^{\sigma}\partial_{\varsigma}F(\tau,y,\varsigma)\partial_{\varsigma}F(t,x,\varsigma)\,d\varsigma.$
The same computations as the ones used to obtain [BGS, (4.16)] prove that
there exists a constant $C_{2}\geq 0$ such that
$(v-w)\bigl{[}q_{\left(\tau,y\right)}(t,x,v)-q_{\left(\tau,y\right)}(t,x,w)\bigr{]}\\\
\geq~{}I(t,x,v,w)+\bigl{[}F(t,x,v)-F(t,x,w)\bigr{]}^{2}-C_{2}\sup_{\sigma\in[0,1]}\bigl{|}F(\tau,y,\sigma)-F(t,x,\sigma)\bigr{|}.$
(4.17)
### 4.
By possibly taking subsequences, we can achieve the following weak∗
convergences in $L^{\infty}(\Omega)$:
$\begin{cases}\displaystyle
s^{\varepsilon}(t,x)~{}\overset{*}{\rightharpoonup}~{}\tilde{s}(t,x),\\\\[2.84526pt]
\displaystyle
F\bigl{(}t,x,s^{\varepsilon}(t,x)\bigr{)}~{}\overset{*}{\rightharpoonup}~{}\tilde{F}(t,x),\\\\[2.84526pt]
\displaystyle
I\bigl{(}t,x,s^{\varepsilon}(t,x),\tilde{s}(t,x)\bigr{)}~{}\overset{*}{\rightharpoonup}~{}\tilde{I}(t,x).\end{cases}$
(4.18)
Taking further subsequences (which this time may depend on
$\left(\tau,y\right)$) we can achieve these further weak∗ convergences in
$L^{\infty}(\Omega)$
$\displaystyle F\bigl{(}\tau,y,s^{\varepsilon}(t,x)\bigr{)}$
$\displaystyle~{}\overset{*}{\rightharpoonup}~{}\tilde{F}_{\left(\tau,y\right)}(t,x),$
$\displaystyle
q_{\left(\tau,y\right)}\bigl{(}t,x,s^{\varepsilon}(t,x)\bigr{)}$
$\displaystyle~{}\overset{*}{\rightharpoonup}~{}\tilde{q}_{\left(\tau,y\right)}(t,x).$
(4.19)
Notice that the weak limits $\tilde{s}$, $\tilde{F}$, $\tilde{I}$ in (4.18) do
not depend on the values $\left(\tau,y\right)$. Step 1 implies
$\partial_{t}\left[s^{\varepsilon}(t,x)\right]+\partial_{x}\left[F\bigl{(}t,x,s^{\varepsilon}(t,x)\bigr{)}\right],\quad\partial_{t}\left[F\bigl{(}\tau,y,s^{\varepsilon}(t,x)\bigr{)}\right]+\partial_{x}\left[q_{\left(\tau,y\right)}\bigl{(}t,x,s^{\varepsilon}(t,x)\bigr{)}\right]~{}\in~{}\mathcal{K},$
where $\mathcal{K}$ is a compact set (independent of the subsequence index) in
$H_{loc}^{-1}(\Omega)$. By an application of the _div–curl lemma_ , see for
example Theorem 16.2.1 in [Dafermos], one obtains
$\begin{array}[]{l}s^{\varepsilon}(t,x)q_{\left(\tau,y\right)}\bigl{(}t,x,s^{\varepsilon}(t,x)\bigr{)}-F\bigl{(}t,x,s^{\varepsilon}(t,x)\bigr{)}F\bigl{(}\tau,y,s^{\varepsilon}(t,x)\bigr{)}\\\\[8.53581pt]
\qquad\qquad\overset{*}{\rightharpoonup}~{}\tilde{s}(t,x)\tilde{q}_{\left(\tau,y\right)}(t,x)-\tilde{F}(t,x)\tilde{F}_{\left(\tau,y\right)}(t,x).\end{array}$
(4.20)
Following the proof of [BGS, Theorem 4.2] we set $v=s^{\varepsilon}(t,x)$ and
$w=\tilde{s}(t,x)$ in (4.17) and take the weak∗ limit as $\varepsilon\to 0$ to
obtain
$\begin{split}&\tilde{I}(t,x)-\left[\tilde{s}(t,x)\tilde{q}_{\left(\tau,y\right)}(t,x)-\tilde{F}(t,x)\tilde{F}_{\left(\tau,y\right)}(t,x)\right]+\tilde{s}(t,x)\tilde{q}_{\left(\tau,y\right)}(t,x)\\\
&\qquad-2\tilde{F}(t,x)F\bigl{(}t,x,\tilde{s}(t,x)\bigr{)}+F\bigl{(}t,x,\tilde{s}(t,x)\bigr{)}^{2}~{}\leq~{}C_{3}\sup_{\sigma\in[0,1]}\bigl{|}F(\tau,y,\sigma)-F(t,x,\sigma)\bigr{|}.\end{split}$
This can be written as
$\begin{split}\tilde{I}(t,x)+\bigl{[}\tilde{F}(t,x)-F\bigl{(}t,x,\tilde{s}(t,x)\bigr{)}\bigr{]}^{2}&\leq
C_{3}\sup_{\sigma\in[0,1]}\bigl{|}F(\tau,y,\sigma)-F(t,x,\sigma)\bigr{|}\\\
&\qquad+\bigl{|}\tilde{F}(t,x)\bigr{|}\bigl{|}\tilde{F}_{\left(\tau,y\right)}(t,x)-\tilde{F}(t,x)\bigr{|},\end{split}$
which holds for any fixed $(\tau,y)\in\Omega$ and a.e. $(t,x)\in\Omega$.
Taking the weak∗ limit in
$\displaystyle-\sup_{\sigma\in[0,1]}\bigl{|}F(\tau,y,\sigma)-F(t,x,\sigma)\bigr{|}$
$\displaystyle\leq$ $\displaystyle
F\bigl{(}\tau,y,s^{\varepsilon}(t,x)\bigr{)}-F\bigl{(}t,x,s^{\varepsilon}(t,x)\bigr{)}$
$\displaystyle\leq$
$\displaystyle\sup_{\sigma\in[0,1]}\bigl{|}F(\tau,y,\sigma)-F(t,x,\sigma)\bigr{|},$
we obtain
$\displaystyle-\sup_{\sigma\in[0,1]}\bigl{|}F(\tau,y,\sigma)-F(t,x,\sigma)\bigr{|}$
$\displaystyle\leq$
$\displaystyle\tilde{F}_{\left(\tau,y\right)}(t,x)-\tilde{F}(t,x)$
$\displaystyle\leq$
$\displaystyle\sup_{\sigma\in[0,1]}\bigl{|}F(\tau,y,\sigma)-F(t,x,\sigma)\bigr{|}.$
Hence for any fixed $(\tau,y)\in\Omega$, we have for a.e. $(t,x)\in\Omega$
$\tilde{I}(t,x)+\bigl{[}\tilde{F}(t,x)-F\bigl{(}t,x,\tilde{s}(t,x)\bigr{)}\bigr{]}^{2}~{}\leq~{}C_{4}\sup_{\sigma\in[0,1]}\bigl{|}F(\tau,y,\sigma)-F(t,x,\sigma)\bigr{|}.$
(4.21)
### 5.
We call $E_{1}$ the set of Lebesgue points of the left hand side of (4.21).
Moreover, for each $\sigma\in[0,1]$ let $E_{\sigma}$ be the set of Lebesgue
points of the map $(t,x)\mapsto F(t,x,\sigma)$. Defining
$E\doteq
E_{1}\cap\left(\displaystyle{\bigcap_{q\in\mathbb{Q}\cap[0,1]}E_{q}}\right),$
we observe that its complement $\Omega\setminus E$ has zero measure. Take any
$(\tau,y)\in E$ and fix $\epsilon>0$. Let
$\mathcal{F}_{\epsilon}\subset\mathbb{Q}\cap[0,1]$ be a finite set such that
$\displaystyle{\inf_{q\in\mathcal{F}_{\epsilon}}}\bigl{|}q-\sigma\bigr{|}<\epsilon$
for every $\sigma\in[0,1]$. Then we have
$\displaystyle\sup_{\sigma\in[0,1]}\bigl{|}F(\tau,y,\sigma)-F(t,x,\sigma)\bigr{|}$
$\displaystyle\leq$
$\displaystyle\max_{q\in\mathcal{F}_{\epsilon}}\bigl{|}F(\tau,y,q)-F(t,x,q)\bigr{|}+2L\epsilon$
(4.22) $\displaystyle\leq$
$\displaystyle\sum_{q\in\mathcal{F}_{\epsilon}}\bigl{|}F(\tau,y,q)-F(t,x,q)\bigr{|}+2L\epsilon,$
where $L$ is a uniform Lipschitz constant for $\varsigma\mapsto
F\left(t,x,\varsigma\right)$. Let $B_{\delta}(\tau,y)$ be the disc in $\Omega$
centered at $(\tau,y)$ with radius $\delta>0$, whose area is $\pi\delta^{2}$.
Integrating (4.21) and using (4.22) we obtain
$\begin{split}&\frac{1}{\pi\delta^{2}}\int_{B_{\delta}(\tau,y)}\Big{(}\tilde{I}(t,x)+\bigl{[}\tilde{F}(t,x)-F\bigl{(}t,x,\tilde{s}(t,x)\bigr{)}\bigr{]}^{2}\Big{)}\,dt\,dx\\\
&\qquad\qquad\qquad\leq\frac{C_{4}}{\pi\delta^{2}}\sum_{q\in\mathcal{F}_{\epsilon}}\int_{B_{\delta}(\tau,y)}\bigl{|}F(\tau,y,q)-F(t,x,q)\bigr{|}\,dt\,dx+2C_{4}L\epsilon.\end{split}$
Since $(\tau,y)$ is a Lebesgue point for the map $(t,x)\mapsto F(t,x,q)$, for
all $q\in\mathcal{F}_{\epsilon}$, letting $\delta\to 0$ we obtain
$\tilde{I}(\tau,y)+\bigl{[}\tilde{F}(\tau,y)-F\bigl{(}\tau,y,\tilde{s}(\tau,y)\bigr{)}\bigr{]}^{2}~{}\leq~{}C_{4}L\epsilon\,.$
Since $\epsilon>0$ is arbitrary, this implies
$\tilde{I}(\tau,y)+\bigl{[}\tilde{F}(\tau,y)-F\bigl{(}\tau,y,\tilde{s}(\tau,y)\bigr{)}\bigr{]}^{2}~{}\leq~{}0\qquad\text{
for every~{} }(\tau,y)\in E\,.$
Hence $\tilde{I}(t,x)\leq 0$ a.e. in $\Omega$. Since, by Step 2,
$I\bigl{(}t,x,s^{\varepsilon}(t,x),\tilde{s}(t,x)\bigr{)}\geq 0$, its weak∗
limit $\tilde{I}(t,x)$ must be greater than or equal to zero almost everywhere.
Therefore we get
$\tilde{I}(t,x)=0,\qquad\text{ and
}\qquad\tilde{F}\left(t,x\right)=F\left(t,x,\tilde{s}\left(t,x\right)\right),\quad\text{
a.e. in }\Omega.$
Since $I\bigl{(}t,x,s^{\varepsilon}(t,x),\tilde{s}(t,x)\bigr{)}\geq 0$ converges
weakly∗ to zero, we conclude that it converges strongly in
$L^{1}_{loc}\left(\Omega\right)$. We can thus take a subsequence such that
$I\bigl{(}t,x,s^{\varepsilon}(t,x),\tilde{s}(t,x)\bigr{)}\to 0$ a.e. in $\Omega$.
Finally, property (ii) proved in Step 2 implies
$s^{\varepsilon}(t,x)\to\tilde{s}(t,x)$ a.e. in $\Omega$, completing the
proof.∎
###### Proof of Theorem 1.1.
By Theorem 4.1 we know that there exists a subsequence of wave front tracking
approximate solutions constructed in Section 2
$\left(s^{\varepsilon},c^{\varepsilon},k^{\varepsilon}\right)$ which converges
strongly in $L^{1}_{loc}\left(\Omega\right)$ to a limit
$\left(\tilde{s},\tilde{c},\tilde{k}\right)$. Clearly $\tilde{k}_{t}=0$. Let
$\phi$ be a test function with compact support in
$\left[0,+\infty\right[\times\mathbb{R}$. By construction (see Section 2) the
approximate solutions satisfy
$\displaystyle\int_{\Omega}\left[s^{\varepsilon}\phi_{t}+F^{\varepsilon}(t,x,s^{\varepsilon})\phi_{x}\right](t,x)\;dtdx+\int_{\mathbb{R}}\bar{s}^{\varepsilon}(x)\phi\left(0,x\right)\;dx=0,$
$\displaystyle\int_{\Omega}\left[c^{\varepsilon}s^{\varepsilon}\phi_{t}+c^{\varepsilon}F^{\varepsilon}(t,x,s^{\varepsilon})\phi_{x}\right](t,x)\;dtdx+\int_{\mathbb{R}}\bar{c}^{\varepsilon}\left(x\right)\bar{s}^{\varepsilon}(x)\phi\left(0,x\right)\;dx=0,$
$\displaystyle
k^{\varepsilon}\left(t,x\right)=\bar{k}^{\varepsilon}(x),\quad\forall(t,x)\in\Omega.$
The uniform estimates (2.12) and the strong convergence of the approximate
solutions allow us to pass to the limit and conclude that the limit
$\left(\tilde{s},\tilde{c},\tilde{k}\right)$ satisfies Definition 1.1. ∎
Acknowledgment: The present work was supported by the PRIN 2015 project
_Hyperbolic Systems of Conservation Laws and Fluid Dynamics: Analysis and
Applications_ and by the GNAMPA 2019 project _Equazioni alle derivate parziali
di tipo iperbolico o non locale ed applicazioni_. The authors would like to
thank the anonymous referee for carefully reading the manuscript and providing
many useful suggestions.
## References
1 National Institute of Technology Karnataka, Mangalore, India. Email: <EMAIL_ADDRESS>
2 International Institute of Information Technology Bangalore, Bangalore, India. Email: <EMAIL_ADDRESS>
# Revisiting Driver Anonymity in ORide
Deepak Kumaraswamy$^{1}$ [0000-0003-0245-9520], Shyam Murthy$^{2}$ [0000-0002-0222-322X], Srinivas Vivek$^{3}$ [0000-0002-8426-0859]
###### Abstract
Ride Hailing Services (RHS) have become a popular means of transportation, and
with its popularity comes the concerns of privacy of riders and drivers. ORide
is a privacy-preserving RHS proposed at the USENIX Security Symposium 2017 and
uses Somewhat Homomorphic Encryption (SHE). In their protocol, a rider and all
drivers in a zone send their encrypted coordinates to the RHS Service Provider
(SP) who computes the squared Euclidean distances between them and forwards
them to the rider. The rider decrypts these and selects the optimal driver
with least Euclidean distance.
In this work, we demonstrate a location-harvesting attack where an
honest-but-curious rider, making only a single ride request, can determine the
exact coordinates of about half the number of responding drivers, even when
only the distances between the rider and the drivers are given. The
significance of our attack lies in inferring the locations of other drivers in
the zone, which are not (supposed to be) revealed to the rider as per the
protocol.
We validate our attack by running experiments on zones of varying sizes in
arbitrarily selected big cities. Our attack is based on enumerating lattice
points on a circle of sufficiently small radius and eliminating solutions
based on conditions imposed by the application scenario. Finally, we propose a
modification to ORide aimed at thwarting our attack and show that this
modification provides sufficient driver anonymity while preserving ride
matching accuracy.
###### Keywords:
Ride Hailing Services, Privacy and Censorship, Applied Cryptography, Lattice points.
© Springer Nature Switzerland AG 2021. The final published version will be
available at www.springerlink.com.
## 1 Introduction
Ride Hailing Services such as Uber and Lyft are becoming more popular
worldwide year over year. According to Pew Research [23], the number of Americans who have
used RHS has more than doubled since 2015. In order to provide the service,
RHS Service Providers (SP) collect upfront information about individuals
desiring to use their services, which include riders and drivers who are part
of the network. In addition, details of rides offered and accepted are also
collected as part of their billing and statistics gathering. This raises a
number of privacy concerns among the individual users. Though the SP would, in
general, keep the information secure given the need to keep its reputation
high, there is nothing to prevent breach of privacy if either the provider
turns malicious or if someone with access to information internal to the
provider wants to mine the information for personal gain [20].
A ride hailing service consists of three parties: the SP, a rider who has
subscribed to the services of the SP, and a set of drivers involved in ride
selection. The SP is modeled as an honest-but-curious adversary. We consider
the threat model where the rider attempts to mount a location-harvesting
attack on participating drivers. While there are a number of solutions
proposed in the last few years that preserve privacy of riders and drivers
with respect to the SP, there are only a few works that look at privacy issues
of drivers with respect to riders. The work Geo-locating Drivers [38] analyzes
the features and web APIs of non-privacy-preserving RHS apps, which can
be used to extract privacy-sensitive driver data. PrivateRide by Pham et al.
[25] describes how riders or other malicious outsiders posing as riders can
harvest personal information of drivers for purposes of stalking, blackmailing
or other malicious activities. Apart from user-profiling, there are several
instances where leakage of information regarding driver locations can lead to
serious threats (refer to Section 2.5).
One of the early privacy-preserving ride hailing services is ORide [24]. While
the primary focus of this proposal was to provide an oblivious ride-matching
solution to riders while preserving the privacy of riders and drivers from the
SP, it also considers location-harvesting attacks against drivers by a
malicious set of riders who create and cancel fake ride requests
simultaneously from multiple locations. ORide ensures the anonymity of the
drivers and riders with respect to SP primarily through the use of a Somewhat
Homomorphic Encryption (SHE) scheme. There are more recent works that also
propose privacy-preserving RHS, and an overview of the works related to RHS is
given in Section 4.
In ORide, the SP collects SHE encrypted coordinates of the drivers in the zone
of the rider, homomorphically computes the Euclidean distances between the
rider and drivers, and then sends these encrypted values to the rider. The
rider then decrypts the encrypted distances, chooses the nearest driver and
proceeds with ride establishment (the ORide protocol is recalled in Section
2.1). This clearly leaks the distances of even those drivers who were not
selected to offer the ride. But given that there are many possibilities for
the coordinates of the driver, even if only their distance is known, one would
expect that in practice the exact driver location is anonymous. However, we
show that while the protocol hides personal information of the drivers it
offers only limited anonymity for the drivers’ locations w.r.t. a rider who
requests a ride.
### 1.1 Our Contribution
In this work, we show a location-harvesting attack on the drivers in the ORide
protocol. Along with the privacy for riders and drivers with respect to the
SP, ORide also claims that its design offers location privacy for drivers with
respect to riders by preventing location-harvesting attacks. This is done
using deposit tokens and permutation of driver indices for each ride request,
which prevents a malicious rider from making fake ride requests and
triangulating locations of all drivers in the zone [24, §8]. We show in
Section 2.3 that even an honest-but-curious rider, with only one ride request
and response, can recover the exact coordinates of about half the number of
drivers who respond to her ride request. Such an attack is not easy for the SP
to detect, unlike attacks that involve simultaneous ride requests and
cancellations. We remark here that, except driver location information, no
personal driver information is revealed in our attack. Nonetheless, in Section
2.5 we discuss practical scenarios where revealing only the drivers locations
(without their identities) can be harmful.
Our attack is motivated by the classical Gauss’ circle problem [29, Ch. 9].
ORide uses a map-projection system such as UTM [33] to work with planar
integer coordinates. Recovering the integer coordinates $(X_{d},Y_{d})$ of the
driver by the rider reduces to solving
$(X_{r}-X_{d})^{2}+(Y_{r}-Y_{d})^{2}=N$, where $X_{r},Y_{r},N$ are non-
negative integers that are known to the rider. Relabelling this equation as
$x^{2}+y^{2}=N$ results in a variant of the Gauss’ circle problem. Since $N$
is sufficiently small, it is feasible to enumerate all the lattice points
(i.e., points with integer coordinates) on the circle of radius $\sqrt{N}$. In
our case, since $N$ always corresponds to the case where a solution is known
to exist, we experimentally observe the number of solutions to be about
$20$ on average (over our choice of zones in Table 2.4). Then we use the
following ideas to further eliminate the potential solutions: (i) the driver
coordinates must be in the same zone as that of the rider, (ii) the driver is
typically expected to be at a motorable location such as a road, though the rider
can book the ride from anywhere. This allows us to eliminate most of the
possibilities (see Algorithm 1 in Section 2) and reduce the number of
solutions from 20 to about 2 on average. In Section 2.4, we validate our
attack by running experiments over zones of different sizes for four
arbitrarily chosen big cities, and show that a rider can determine the exact
locations of 45% of the responding drivers (see Table 2.4). Our attacks take
an average only 2 seconds per driver on a commodity laptop. We stress that we
are not only using the geographical information to eliminate locations, but
also the fact that all coordinates are encoded as integers and hence there are
only a handful of locations to enumerate on the circle in the first place. Our
attack exploits an inherent property of SHE schemes – namely the requirement
of integer-like encoded inputs for exact arithmetic [5]. We also believe that
the abstraction of our attack as enumerating lattice points on a circle (and
also our extension to other distance metrics in Appendix 0.A) is generic and
will motivate similar exploits in other privacy preserving solutions that use
SHE.
In Section 3, we propose a modification to the ORide protocol which serves as
a solution to overcome our attack. Here the driver obfuscates her location by
choosing random coordinates within a certain distance $R$ from her original
location. Now the rider receives Euclidean distances that are homomorphically
computed between her and the driver’s anonymized location. Accordingly we
modify the rider’s attack from Section 2 to account for the fact that these
anonymized coordinates (which are represented by different lattice solutions)
may not lie on road. However the anonymized coordinates will definitely have a
road within proximity $R$, since the driver was originally on road. Through
experiments we analyze this new attack on the proposed modification and
evaluate its effect on driver anonymity and accuracy (refer Table 2). The
optimal driver chosen in this case (based on least Euclidean distance between
the rider and anonymized drivers’ locations) is sufficiently close to the
time-wise closest driver (who takes the least time to arrive at rider’s
location). Our solution is therefore viable in practice and is successful in
preserving driver anonymity.
In Appendix 0.A, we investigate possible alternate modifications to ORide in
an attempt to mitigate our attack. However we show that these non-trivial
techniques are eventually vulnerable to the same attack.
In Section 4, we discuss related works on privacy-preserving RHS and also
briefly discuss the applicability of our attacks to these works. We also
discuss other techniques that are available in the literature for location
obfuscation.
## 2 Analysis of ORide Protocol
In this section we briefly recall the ORide protocol of [24], followed by a
security analysis of the protocol at the rider’s end. We then describe our
attack that would allow a rider to predict a driver’s location with good
accuracy, and present the results of practical experiments.
### 2.1 ORide : A Privacy-Preserving Ride Hailing Service
As mentioned in Section 1, ORide is a privacy-preserving ride hailing service
that uses an SHE scheme to match riders with drivers. In the process,
identities and locations of drivers and riders are not revealed to the SP. The
protocol provides accountability for SP and law-enforcement agencies in case
of a malicious driver or rider. It also supports convenience features like
automatic payment and reputation-rating of drivers/riders. In short, it is a
complete and practical solution along with novel methods that help keep the
identity of drivers and riders oblivious to the SP, together with
accountability and convenience. The experiments done in their paper use real
datasets consisting of taxi rides in New York city [26]. Their instantiation
provides 112-bit security, based on the FV SHE scheme [7] which relies on the
hardness of the Ring Learning With Errors (RLWE) problem.
We give below a high-level overview of the ORide protocol relevant to our
attack. (For more details, the reader is referred to the original paper). The
registered drivers periodically advertise their geographical zones to the SP.
These zones are predefined by the SP and available to drivers and riders. The
size of a zone is chosen in such a way that there are sufficiently many riders
and drivers to ensure anonymity while maintaining the efficiency of ride-
matching. When a rider wishes to hail a ride, she generates an ephemeral FV
public/private key-pair $(p_{k},s_{k})$. She encrypts her planar coordinates
using this key and sends it to the SP along with $p_{k}$ and her zone
$\mathcal{Z}$. SP broadcasts the public key $p_{k}$ received from the rider to
each driver in $\mathcal{Z}$. The $i$th driver $D_{i}$ encrypts her planar
coordinates using $p_{k}$ and sends it to SP. SP homomorphically computes the
squared values of the Euclidean distances between each driver and the rider in
parallel, and sends the encrypted result to the rider. The rider decrypts the
ciphertext sent by SP to obtain the squared Euclidean distance to each driver
$D_{i}$. She then selects the driver with the smallest squared Euclidean
distance and notifies the SP of the selected driver. This selected driver is in
turn notified by the SP. As part of the ride establishment protocol, a secure
channel is then established between the rider and the driver. They then
proceed to service the ride request as per the protocol. Further steps,
although important, are not relevant to our work and, hence, we do not mention
them here.
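For illustration, the arithmetic that SP performs on ciphertexts can be sketched as follows. The `Ct` class below is a hypothetical stand-in for an FV ciphertext that merely tracks the underlying plaintext; it is not a real SHE library, and the coordinates are made-up UTM-style integers.

```python
class Ct:
    """Hypothetical stand-in for an FV ciphertext: supports the homomorphic
    operations SP needs, while the plaintext stays notional."""
    def __init__(self, m): self.m = m          # represents Enc_pk(m)
    def __add__(self, o): return Ct(self.m + o.m)
    def __sub__(self, o): return Ct(self.m - o.m)
    def __mul__(self, o): return Ct(self.m * o.m)

def sp_squared_distance(enc_rider, enc_driver):
    """SP homomorphically computes the encrypted squared Euclidean distance."""
    dx = enc_rider[0] - enc_driver[0]
    dy = enc_rider[1] - enc_driver[1]
    return dx * dx + dy * dy

# Made-up planar (UTM-style) integer coordinates, for illustration only.
rider, driver = (583960, 4507523), (583963, 4507527)
enc_r = tuple(Ct(v) for v in rider)
enc_d = tuple(Ct(v) for v in driver)
N = sp_squared_distance(enc_r, enc_d).m        # the rider decrypts this value
print(N)                                       # 25 = 3^2 + 4^2
```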
### 2.2 ORide : Threat Model
The threat model considered in ORide is that of an honest-but-curious SP,
whereas the drivers and riders are active adversaries who do not collude with
the SP. We consider the same adversarial model in this paper as well. All the
plaintext information is encoded as integer polynomials before encrypting with
the FV SHE scheme. In ORide, the apps on the drivers and riders use a map-
projection system such as UTM [33] to convert pairs of floating-point
latitudes and longitudes to planar integer coordinates. Drivers use third-
party services like Google Maps or TomTom for navigation.
### 2.3 Attack: Predicting Driver Locations
Algorithm 1: Location-harvesting Attack on ORide

Input: the rider’s zone $\mathcal{Z}$, number of drivers $n$ inside
$\mathcal{Z}$, rider’s coordinates $(X_{r},Y_{r})$, Euclidean distances
$d_{i}$ between the rider and driver $D_{i}$ ($\forall i=1,\cdots,n$)
Output: for each driver $D_{i}$, $\mathcal{S}^{\prime}_{i}$ denotes the
prediction set made for the location of $D_{i}$ by the rider

Procedure Predict_Driver($\mathcal{Z},n,(X_{r},Y_{r}),\\{d_{i}\\}_{i=1}^{n}$):
  $avg=0$; $exact=0$
  for each driver $D_{i}$ do
    Receive $d_{i}$ from SP
    $\mathcal{S}_{i}=\emptyset$  /$\ast$ store unique lattice points $\ast$/
    for $x=0$ to $\lfloor\sqrt{d_{i}}\rfloor$ do
      $y=\sqrt{d_{i}-x^{2}}$
      if $y$ is an integer then
        $T=\\{(x,y),(-x,y),(-x,-y),(x,-y),(y,x),(-y,x),(-y,-x),(y,-x)\\}$
        for $(x^{\prime},y^{\prime})\in T$ do
          /$\ast$ compute predicted location for $D_{i}$ $\ast$/
          $X_{d}=X_{r}+x^{\prime}$; $Y_{d}=Y_{r}+y^{\prime}$
          if $(X_{d},Y_{d})$ is inside $\mathcal{Z}$ then
            $\mathcal{S}_{i}\coloneqq\mathcal{S}_{i}\cup\\{(X_{d},Y_{d})\\}$
          end if
        end for
      end if
    end for
    $\mathcal{S}^{\prime}_{i}=\emptyset$  /$\ast$ filtered lattice points on road $\ast$/
    for $(X_{d},Y_{d})\in\mathcal{S}_{i}$ do
      /$\ast$ use the Google Maps API to check whether the coordinates lie on road $\ast$/
      ${(x,y)}_{road}=\mathsf{RoadAPI}((X_{d},Y_{d}))$
      if distance between $(X_{d},Y_{d})$ and ${(x,y)}_{road}\leq 3$ metres then
        $\mathcal{S}^{\prime}_{i}\coloneqq\mathcal{S}^{\prime}_{i}\cup\\{(X_{d},Y_{d})\\}$
      end if
    end for
    /$\ast$ $\|\mathcal{S}^{\prime}_{i}\|$ is the number of locations that the rider has predicted for $D_{i}$ $\ast$/
    $avg\coloneqq avg+\|\mathcal{S}^{\prime}_{i}\|$
    if $\|\mathcal{S}^{\prime}_{i}\|==1$ then
      $exact\coloneqq exact+1$  /$\ast$ exact driver location predictions $\ast$/
    end if
  end for
  $avg\coloneqq avg/n$; $exact\coloneqq exact/n\times 100$
  Output: $exact$, $avg$, $\mathcal{S}^{\prime}_{i}$
We now analyze the ORide protocol from the rider’s end. For ease of
explanation, the rest of our paper shall refer to the squared Euclidean
distance between two points as simply the Euclidean distance. In the ORide
protocol, before a rider finally chooses the closest driver, she is given a
list of Euclidean distances corresponding to drivers in her zone. In this
case, the rider only gets to know the Euclidean distance to each driver, and
not the driver’s exact coordinates. Mathematically, this means that there
are infinitely many possibilities for the driver’s location on the circumference
of a circle defined from the rider’s perspective.
On the contrary, we show that the driver’s Euclidean distance allows the rider
to identify the actual location of a driver with good probability. We show
that by identifying road networks on a live map (using Google Maps API [9]),
along with the fact that ORide uses integer coordinates, the number of
possible driver locations from the rider’s perspective can be reduced
significantly, to around $2$ locations on average.
Remark. While we make use of the fact that ORide uses integer coordinates,
our attack would also work for fixed-point encoding of the coordinates. This
is because the current (exact) techniques for fixed-point encodings for
RLWE-based SHE schemes essentially use the scaled-integer representation [5].
Before we proceed with our analysis, we make the assumption that when the
rider requests a ride and when each driver in the zone sends her encrypted
coordinates to SP, the drivers are on road (since we use Google Maps API in
our experiments, these include city roads, parking lot roads and many other
categories, as specified by the definition of a _road segment_ by Google Maps
[9]). This assumption is reasonable since a vast majority of the active
drivers at any point in time constantly move around the city looking for
potential rides or about to finish serving another ride.
When we say that a driver’s coordinates lie on road, we mean that the
coordinates lie within the borders of the road. The current standards for lane
width in the United States recommend that each lane be 3 metres wide on
average [19]. Since many roads within a city consist of 2 lanes, we assume
that a pair of coordinates lie on road if the location is within 3 metres from
the centre of a road (neighborhoods in many cities around the world consist
mostly of 2 lane roads, so our experiments give a fairly accurate idea of
location recovery probabilities). We stress that the drivers can be anywhere
in the zone on any road and our experiments indeed follow this distribution.
Rider’s attack. A rider performs the following attack to obtain a set of
possible locations for a driver. At the time of ride request, let the rider
coordinates be $(X_{r},Y_{r})$ and the driver coordinates be $(X_{d},Y_{d})$,
which the rider does not know. Let the rider’s zone be denoted by
$\mathcal{Z}$. SP receives the encrypted values of $X_{r},X_{d},Y_{r},Y_{d}$,
then homomorphically computes the Euclidean distance in encrypted form, and
the rider decrypts this to obtain $N=(X_{r}-X_{d})^{2}+(Y_{r}-Y_{d})^{2}$. If
$N$ is not too large (refer to Section 2.4 for a concrete discussion on bounds
for $N$), the rider can efficiently find all integer solutions to the equation
$x^{2}+y^{2}=N$. The rider could use an $O(\sqrt{N}~{})$ algorithm to
accomplish this: keep a solution-set, and for every integer
$x^{\prime}\in[0,\lfloor{\sqrt{N}}~{}\rfloor]$, compute
$y^{\prime}=(N-x^{\prime}{}^{2})^{1/2}$. If $y^{\prime}$ is an integer, add
the coordinates $(x^{\prime},y^{\prime})$, $(x^{\prime},-y^{\prime})$,
$(-x^{\prime},y^{\prime})$, $(-x^{\prime},-y^{\prime})$,
$(y^{\prime},x^{\prime})$, $(y^{\prime},-x^{\prime})$,
$(-y^{\prime},x^{\prime})$ and $(-y^{\prime},-x^{\prime})$ into this set.
Now, rider maintains a set $\mathcal{S}$ containing the possible driver
locations. For each integral solution $x_{i},y_{i}$ satisfying
$x_{i}^{2}+y_{i}^{2}=N$, the rider identifies potential driver coordinates as
$(X^{\prime}_{d,i}=X_{r}+x_{i},Y^{\prime}_{d,i}=Y_{r}+y_{i})$ and adds
$(X^{\prime}_{d,i}\,,Y^{\prime}_{d,i})$ to $\mathcal{S}$ if
$(X^{\prime}_{d,i}\,,Y^{\prime}_{d,i})$ is inside $\mathcal{Z}$.
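For concreteness, this enumeration and zone filtering can be sketched in a few
lines of Python. This is a minimal illustration of the attack core rather than
our exact implementation; the names find_lattice_points and
candidate_driver_locations are ours, and in_zone stands for any
zone-membership test.

```python
# Minimal sketch of the rider's enumeration step (not the exact repository
# code). math.isqrt requires Python 3.8+.
import math

def find_lattice_points(N):
    """Return all integer pairs (x, y) with x^2 + y^2 = N, in O(sqrt(N)) time."""
    points = set()
    for x in range(math.isqrt(N) + 1):
        y_sq = N - x * x
        y = math.isqrt(y_sq)
        if y * y == y_sq:
            for sx in (1, -1):
                for sy in (1, -1):
                    # all eight sign/swap symmetries; the set removes duplicates
                    points.add((sx * x, sy * y))
                    points.add((sx * y, sy * x))
    return points

def candidate_driver_locations(rider, N, in_zone):
    """Translate lattice points to the rider's position and keep those in the zone."""
    Xr, Yr = rider
    S = {(Xr + x, Yr + y) for (x, y) in find_lattice_points(N)}
    return {p for p in S if in_zone(p)}

# Example mirroring Figure 1: rider at the origin, decrypted value N = 25.
print(sorted(find_lattice_points(25)))  # 12 points, including (4, 3)
```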
Once the rider obtains these possible driver coordinates in $\mathcal{S}$, she
checks whether each solution lies on road. (Google Maps Road API [9] can be
used to achieve this). The rider now obtains a filtered set of coordinates
$\mathcal{S^{^{\prime}}}\subseteq\mathcal{S}$ that are inside $\mathcal{Z}$,
and also lie on a road. Note that since the actual coordinates of the driver
also satisfy these conditions, it is _always_ present in this set. The
cardinality of $\mathcal{S}^{\prime}$ would denote the number of predicted
locations for a driver. If this cardinality is exactly one, then the rider has
successfully predicted the driver’s exact location. Our attack is summarized
in Algorithm 1. The algorithm takes as input the Euclidean distances of
drivers in the zone and, for each driver, outputs the set of filtered
coordinates $\mathcal{S}^{\prime}$. It outputs the number of predicted
locations avg, averaged over all drivers. Finally, it outputs exact which
denotes the number of drivers for whom exactly one location is predicted.
We present an illustrative example in Figure 1. Consider a large zone in
Dallas, USA, with a cartesian grid embedded over the road view of the map.
Consider a driver and a rider pair inside this zone. The rider is said to be
located at the origin, and let the driver be at coordinates $(4,3)$ (which
agrees with our assumption that drivers lie on road). The rider is given the
(squared) Euclidean distance $N=25$ to this driver. She then obtains all
lattice points
lying on this circle: $\mathcal{S}=\\{(\pm 3,\pm 4),(\pm 4,\pm 3),(\pm
5,0),(0,\pm 5)\\}$. Out of these, the rider filters out coordinates that lie
on road (shown as green dots in Figure 1) to obtain
$\mathcal{S^{\prime}}=\\{(-4,3),(4,3),(5,0),(0,-5)\\}$. Note that the driver’s
actual coordinates belong to $\mathcal{S^{\prime}}$. Note also that if the
rider’s location, her zone and the Euclidean distance to this driver were
given as input to Algorithm 1, we would receive as outputs
$\mathcal{S}^{\prime}=\\{(-4,3),(4,3),(5,0),(0,-5)\\},avg=4,exact=0$.
Figure 1: Illustrative example of location prediction of a single driver by a
rider.
Remark. In the scenario of ORide, the rider’s zone usually consists of
multiple drivers. Note that in Algorithm 1, when calculating the possible
coordinates $\mathcal{S}^{\prime}_{i}$ for the $i$th driver $D_{i}$, the
analysis that a rider performs for one driver is _independent_ of the analysis
for other drivers. Therefore, the averaged results for multiple drivers in
one execution of the attack (if the driver locations are randomly and
independently sampled subject to the above conditions) are equivalent to the
averaged results over multiple executions of the attack with only a single
driver present inside the zone.
Our implementation of the attack will therefore consider one driver inside the
zone, and average the results over multiple experiments, considering randomly
chosen rider and driver locations each time.
### 2.4 Implementation of Our Attack
Using the Google Maps API for Python [10], we performed experiments to
validate our attack across four arbitrarily chosen cities: New York City,
Dallas, Los Angeles and London (the code for our attack presented in Section
2 can be accessed at https://github.com/deepakkavoor/rhs-attack). We ran our
experiments over zones of sizes $A=$ { 1 km2, 2 km2, 4 km2, 9 km2, 25 km2,
100 km2, 400 km2, 900 km2 } (with the exception of the 900 km2 zone for New
York City, whose geography consists of multiple smaller discontiguous areas).
For each city, and for each zone size $a\in A$, we performed 30 experiments.
In each experiment, a random square zone $\mathcal{Z}_{a}$ of area equal to
$a$ was chosen. We chose a random latitude-longitude pair inside
$\mathcal{Z}_{a}$ for the rider in this zone. For a driver, we similarly
chose a random latitude-longitude pair inside $\mathcal{Z}_{a}$ that was on
road (Google Maps Road API was used to accomplish this).
These coordinates were converted to UTM coordinates using the _utm_ library
for Python [3]. The Euclidean distance between these UTM coordinates was made
available to the rider. Finally, we obtained the driver’s filtered set of
probable locations as described in Algorithm 1. After obtaining the predicted
driver coordinates, we averaged the number of such predicted locations over
multiple experiments. We also counted the percentage of experiments in which
the rider predicted exactly one location. With these considerations, our
results for varying zone sizes and cities are shown in Table 1.
Table 1: The rider’s prediction based on Algorithm 1, averaged over 30
experiments. The avg columns give the number of possible driver locations,
and the exact columns give the percentage of experiments in which exactly one
location was predicted (output $exact$ of Algorithm 1).
Zone Size (km2) | avg: New York | avg: Dallas | avg: Los Angeles | avg: London | exact: New York | exact: Dallas | exact: Los Angeles | exact: London
---|---|---|---|---|---|---|---|---
1 | 2.6 | 1.8 | 2.5 | 1.6 | 32% | 52% | 32% | 60%
2 | 2.4 | 1.6 | 2.1 | 1.6 | 48% | 52% | 36% | 68%
4 | 2.0 | 2.0 | 1.8 | 1.7 | 44% | 44% | 52% | 56%
9 | 2.7 | 1.9 | 2.1 | 2.3 | 38% | 56% | 48% | 40%
25 | 2.6 | 2.2 | 2.4 | 2.1 | 36% | 44% | 36% | 48%
100 | 2.7 | 2.1 | 2.2 | 1.8 | 32% | 44% | 44% | 56%
400 | 2.3 | 2.1 | 2.5 | 1.8 | 36% | 48% | 28% | 56%
900 | – | 2.8 | 3.1 | 2.4 | – | 40% | 24% | 40%
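For reference, the coordinate handling behind these experiments can be
sketched as follows; this is an illustrative fragment rather than our
experiment code, the latitude-longitude values are hypothetical, and the
Maps-API road snapping is omitted.

```python
# Sketch of the coordinate pipeline (assumes the `utm` package; coordinates
# below are hypothetical, and Maps-API road snapping is omitted).
import utm

rider_latlon = (32.7767, -96.7970)   # hypothetical rider position in Dallas
driver_latlon = (32.7801, -96.8005)  # hypothetical on-road driver position

# Convert to planar UTM coordinates (in metres).
rx, ry, zone_num, zone_letter = utm.from_latlon(*rider_latlon)
dx, dy, _, _ = utm.from_latlon(*driver_latlon)

# Squared Euclidean distance over integer coordinates, as decrypted by the rider.
N = (round(rx) - round(dx)) ** 2 + (round(ry) - round(dy)) ** 2
print(zone_num, zone_letter, N)
```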
#### 2.4.1 Timings
Our experiments were performed on an Intel Core i5-8250U CPU @ 1.60 GHz with 8
GB RAM running Ubuntu 18.04.4 LTS. On average, one experiment (as described
above) took 2 seconds per driver, showing that our attack is efficient
enough for a rider to practically obtain any driver’s coordinates with good
confidence.
#### 2.4.2 Interpretation of values in Table 1
Our experiments showed that the average number of solutions to $x^{2}+y^{2}=N$
over all the aforementioned zone sizes was 20. When we filter these solutions
based on whether they lie inside a zone and on road, the average possible
driver coordinates were 2 in number (as indicated by the value _avg_ in Table
1), which is a significant reduction. Although it may seem that Euclidean
distance gives fair anonymity to driver coordinates, our attack shows that in
practice this is not the case, and a rider can indeed find the driver’s
location with good probability. We also note from the average value of
_exact_ in Table 1 that the rider can predict the driver’s exact location
around 45% of the time.
Note that in each city, as zones get bigger, the number of lattice solutions
and filtered coordinates tends to increase, leading to a higher _avg_. More
lattice solutions make it rarer for a rider to predict exactly one location
for a driver, thus decreasing the value of _exact_. This trend can be
verified from Table 1.
#### 2.4.3 Anonymity Sets
In ORide, when the rider makes a request, she sends her zone identity to SP,
and the SP now knows which zone the rider is in. This zone could contain the
rider’s home/work address. As pointed out in the ORide paper [24], SP might be
able to guess the identities of the riders if this pick-up zone had a limited
number of ride activities, and a limited number of riders (as an extreme
example, a zone where only one rider lives). Therefore, ORide defines zones in
such a way that each zone has at least a large minimum number of ride requests
per day. This large minimum is referred to by them as the _anonymity-set_
size. The choice of size of these zones is left to the SP, based on balancing
the communication bandwidth requirements and sizes of anonymity sets in those
zones. (A very high anonymity set would mean that the demand for rides in that
zone is high, leading to longer ride matching times and higher bandwidth
usage).
We justify our choice of zone sizes $a\in A$ for our experiments:
* $\bullet$
In a densely populated city like New York City (population density 11,084
persons/km2, per https://worldpopulationreview.com/us-cities),
where more people tend to use ride hailing services, a smaller zone size would
suffice to achieve the required anonymity-set size. In a sparse city like
Dallas (population density 1,590 persons/km2), where fewer ride-hailing
activities occur, these zones would have to be bigger in size to achieve the
same anonymity for riders. Taking into consideration the different possible
zone sizes in both densely populated and sparse cities, the experiments
validate our attack in zones of areas ranging from 1 km2 to 900 km2.
* $\bullet$
We analyzed the NYC Uber-Dataset [21] for May 2019, and deduced that the
demand for taxi rides was very high in Manhattan compared to the other
boroughs of NYC. We chose May since this month had one of the highest numbers
of ride requests for Uber in 2019. Based on this, we followed the zone
demarcation
that was proposed by ORide: each Census Tract (CT) [22] in Manhattan is
considered as one zone. The boroughs of Queens and Bronx are merged into one
zone, and the boroughs of Brooklyn and Staten Island are merged into one zone.
The size of each CT in Manhattan varies between 1 km2 and 4 km2,
corresponding to zones of higher activity. Since the boroughs other than
Manhattan have less activity, these zones are expected to have a larger
area. Indeed, the combined area of Queens and Bronx is around 390 km2, and the
combined area of Brooklyn and Staten Island is around 330 km2. Since this is
the primary zone demarcation proposed by the authors of ORide, we found it
reasonable to include these ranges of areas for our experiments in Table 1.
We next discuss a few details involved in the implementation of our attack.
* $\bullet$
Although zones can be of any geographical shape, we chose square zones for
ease of choosing random coordinates inside its boundary, and to simplify
checking whether a given coordinate lies inside the zone.
* $\bullet$
Latitude-longitude coordinates were converted into UTM formats using the _utm_
library for Python. On average, this conversion results in a difference of
0.5 metres between the original coordinate and the planar coordinate’s
representation. For all practical purposes, this difference is very small, and
the two coordinates can be considered to represent the same location.
* $\bullet$
As discussed earlier, based on the NYC Uber-Dataset and ORide’s proposed
demarcation, even large sparse zones that have a sufficiently big anonymity-
set would rarely exceed $30\times 30=900$ km2. Note that the Euclidean
distance between two UTM coordinates is equal to the distance (in metres)
between latitude-longitude representation of those points. Hence, the maximum
value of $N$ for a $d\times d$ grid would be $2d^{2}$. For a 30 km $\times$ 30
km grid, $d$ would be $30,000$. Since there exists an $O(\sqrt{N})=O(d)$
algorithm to compute solutions to $x^{2}+y^{2}=N$, it is feasible for the
rider to perform this analysis on modern computers in very little time, even
for different zone structures chosen by SP.
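Concretely, for a 30 km $\times$ 30 km zone we get $N\leq
2d^{2}=1.8\times 10^{9}$, so the enumeration of Section 2.3 performs at most
$\lfloor\sqrt{N}\rfloor\approx 42{,}426$ loop iterations, a negligible amount
of computation.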
Remark. We give a brief insight into the number of drivers inside a zone,
which is at most around 400. The zone demarcation proposed for New York by the
authors of ORide was discussed briefly above. According to ride information
for May 2019 in the NYC Uber-Dataset, a zone in Manhattan had at most 6,000
ride requests per day. (We chose the month of May since it experienced the
most ride-requests in the year 2019). We make the same assumption that the
authors of ORide did: the drop-off zone for a driver is her waiting zone for
new ride requests. Moreover, as in ORide, we assume that the waiting time
between a driver’s drop-off event and her next pick-up event is at most 30
minutes. This would mean that during a ride-request event, the available
drivers to answer this request are the ones who had a drop-off event inside
that zone in the last 30 minutes since the ride-request. We considered the top
20 high-ride zones, and for each zone, grouped the ride requests for a day
based on 30 minute intervals. Each 30 minute interval consisted of at most 400
drop-off events inside each zone. This would imply that when a ride request
occurs at any time of the day, at most 400 drivers would be waiting in that
zone to service this request. We stress that there are at most 400 drivers in
all zone demarcations considered above, and as the zone size increases the
density of drivers (the number of drivers available for ride request in 1 km2)
in that zone decreases.
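This grouping is straightforward to reproduce; the sketch below illustrates
it, where the file name and column names are assumptions made for
illustration rather than the exact schema of the dataset.

```python
# Sketch of the 30-minute drop-off binning over the NYC dataset. The file
# name and column names ('dropoff_datetime', 'zone') are assumed for
# illustration; the actual TLC schema should be checked.
import pandas as pd

df = pd.read_csv("fhv_tripdata_2019-05.csv", parse_dates=["dropoff_datetime"])
zone_trips = df[df["zone"] == "some_manhattan_ct"]  # one high-ride zone

# Count drop-off events per 30-minute interval over the month.
bins = zone_trips.groupby(pd.Grouper(key="dropoff_datetime", freq="30min")).size()
print(bins.max())  # at most ~400 in the top zones per our analysis
```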
### 2.5 Impact and Consequences of our Attack
We have experimentally shown that our location-harvesting attack can identify
the exact location of a driver in about 45% of the cases. Equivalently, this
means that in a zone of around 400 available drivers, a ride request leaks
the exact locations of around 180 drivers to the rider, which is a
significant number.
Although our attack does not reveal additional driver data such as user
profiles, this leak of location information could still pose a threat to
drivers and ultimately affect the SP’s reputation. For instance, according to
[12], non-SP taxi drivers have tried to identify the locations of Uber
vehicles and attack them. There are also reports of people using ride-hailing
apps to locate and rob drivers registered to the SP [30].
Zhao et al. [38] investigate several approaches through which information
regarding drivers’ locations can lead to statistical attacks and exploits. A
potential competitor to ORide can use our attack and make queries to ORide as
an honest-but-curious rider over a period of time. It can then get to know the
distribution/density of different drivers in the city without raising
suspicions. This distribution could indicate regions where there is high
demand for ride hailing services. The competitor could focus on deploying
their drivers and taxis in that region.
In general, ensuring privacy of the locations of drivers in the zone should be
an important aspect of any privacy-preserving RHS. The authors of ORide claim
that an adversary cannot obtain a snapshot of drivers’ locations (say, a
malicious rider who makes multiple fake ride requests with the goal of
harvesting drivers’ locations) [24, §8], thus preserving their anonymity. In
contrast, our work refutes this claim made in ORide using just a single ride
request by an honest-but-curious rider. We think that this flaw in ORide is
not merely an implementation error. The requirement of integer-encoded inputs
is inherent to current SHE schemes, and this is what lets us obtain a small
number of lattice points on the circle.
Remark. One of our motivations behind considering privacy of driver locations
is to prevent physical attacks on the drivers in a zone. It can be argued that
even if locations of these drivers are not revealed, a malicious rider can
request a ride in an honest manner, and attack the chosen driver when she
arrives at the pick-up location. While this is true, at the end of any ride
hailing service (whether it preserves privacy or not) the selected driver
must arrive at the rider’s pick-up location, so this attack scenario is
nearly impossible to thwart. However, in almost all apps (including ORide),
the identity information of the rider (name, phone number, etc.) is revealed
to the driver selected for the ride (and vice versa), which acts as a
deterrent for such attacks. The possibility of fake accounts can be
eliminated by enforcing identity verification at the time of registration.
This does not violate rider privacy since ORide assigns anonymous tokens to
all entities during the ride request process. Using our attack from Section
2.3, a rider can get to know even the locations of non-selected drivers in the
zone, namely the other drivers who have no information about the identity of
the rider. Finally, we remark that if the driver has moved from her last
revealed location or does not have a logo/sticker on her car (advertising that
she belongs to a particular RHS) then it would be hard for the rider to carry
out physical attacks.
## 3 Mitigation of our Attack
We propose a solution where the driver can thwart our attack by anonymizing
her location. Each driver could choose a random coordinate within a circle of
fixed radius around herself, encrypt and send these random coordinates to the
SP instead of her original coordinates. We show that this modification to
ORide provides sufficient anonymity while preserving ride matching accuracy,
and is therefore a reasonable solution to mitigate our attack. The idea of
adding noise to geographical data to preserve privacy has been addressed in
several works in the literature. In Section 4, we discuss some of these
techniques and their relevance to our setting. We analyze the effect of this
technique on driver anonymity and provide concrete values for ride matching
accuracy through experiments using Google Maps API.
Remark. In Appendix 0.A, we discuss other ideas that may intuitively seem to
thwart our attack. However, we show that those modifications are vulnerable to
our attack from Section 2 and hence do not preserve driver anonymity.
### 3.1 Anonymizing Driver Locations
By anonymizing her location, each driver may try to preserve the privacy of
her location with respect to a rider. Let each driver $D_{i}$ (at coordinates
$L_{i}$) choose a circle of radius $R$ centered at her location (where $R$ is
publicly known), and pick a random UTM coordinate $L_{i}^{\prime}$ inside
this circle. The driver encrypts $L_{i}^{\prime}$ (instead of $L_{i}$ as
suggested by ORide) and sends it to SP. We refer to $L_{i}^{\prime}$ as the
anonymized driver coordinates.
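A minimal sketch of this driver-side step, assuming planar UTM coordinates in
metres (the function name is ours):

```python
# Driver-side obfuscation sketch: sample a point uniformly at random from the
# disc of radius R metres around the true UTM location, then round to
# integers as required by ORide's encoding.
import math
import random

def anonymize(easting, northing, R):
    # sqrt on the radial draw makes the sample uniform over the disc's area
    r = R * math.sqrt(random.random())
    theta = random.uniform(0.0, 2.0 * math.pi)
    return (round(easting + r * math.cos(theta)),
            round(northing + r * math.sin(theta)))

# Example: obfuscate a (hypothetical) driver location with R = 50 m.
print(anonymize(704500, 3750210, 50))
```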
As per the original attack in Section 2, the rider obtains a Euclidean
distance $N_{i}$ and enumerates all lattice points that correspond to this
distance. Due to the changes described above, these lattice points need not
correspond to possible driver coordinates. They instead represent possible
anonymized driver coordinates. In the original attack, the rider would next
filter each lattice point based on whether or not it lies on a road. But a
lattice point which represents an anonymized driver location may not lie on
a road, even though the original driver location did. Filtering in this way
would lead the rider to erroneous conclusions, and she may throw out the
lattice point that actually corresponds to the driver’s anonymized location.
Observe that within distance $R$ of anonymized driver coordinates, there will
always lie a road (since the original driver was on road). We modify the
rider’s attack accordingly to cope with this fix. Suppose there was a lattice
point discovered by the rider. Within a circle of radius $R$ centered at this
point, if there were no roads at all (for instance a lattice point that was in
the middle of a park) the rider can then conclude that this point is not the
driver’s anonymized coordinates. So, the best option that a rider has (to
improve her attack against this obfuscation technique) is to filter each
lattice point based on whether there is a road within distance $R$ of that
point. As we see in Section 3.2, the possibility that a lattice point is
filtered out in a dense city is low if we choose an appropriate value of $R$.
This prevents a rider from eliminating many lattice points thus improving
driver anonymity. Moreover, this technique preserves accuracy when compared to
ORide as we show next.
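The modified filtering step can be sketched as follows; this assumes the
nearest_roads call of the googlemaps Python client, whose exact search
behaviour should be checked against the API documentation, and the function
names are ours.

```python
# Sketch of the modified filter: keep a lattice point only if a road lies
# within R metres of it. Assumes the `googlemaps` and `utm` packages; API-key
# handling, batching and error handling are omitted.
import googlemaps
import utm

gmaps = googlemaps.Client(key="YOUR_API_KEY")  # placeholder key

def has_road_within(latlon, R):
    result = gmaps.nearest_roads([latlon])  # empty if no road is found nearby
    if not result:
        return False
    loc = result[0]["location"]
    x1, y1, _, _ = utm.from_latlon(latlon[0], latlon[1])
    x2, y2, _, _ = utm.from_latlon(loc["latitude"], loc["longitude"])
    return (x1 - x2) ** 2 + (y1 - y2) ** 2 <= R ** 2

def filter_candidates(candidates_latlon, R):
    return [p for p in candidates_latlon if has_road_within(p, R)]
```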
### 3.2 Anonymity of Drivers with respect to Rider
The value of $R$ is public and should be decided by the SP, who can in fact
implement the end-user application in such a way that the driver’s device
locally computes $R$ based on the current location of the driver. If the
driver’s local device senses that she is in a densely populated city (and thus
there are many roads within close proximity of an arbitrary point in that
region of the city), a smaller $R$ can be chosen. On the other hand, if the
device understands that the driver is in a location where there are very few
roads within distance $R$ from an arbitrary point in that region, a larger $R$
is chosen (for instance, in a sparsely populated city with low road density).
This choice of $R$ based on the concentration of roads around the driver is
motivated by the modification to rider’s attack discussed at the end of
Section 3.1 (the rider’s attack now tries to filter lattice points based on
the availability of roads within distance $R$ from each lattice point
solution). We consider the number of (anonymized) driver locations predicted
by a rider as a measure of anonymity for that driver. This depends on the
number of lattice solutions for the Euclidean distance between (anonymized)
driver location and rider. Along with this, it also depends on the number of
solutions that the rider can further filter based on availability of roads
within distance $R$ from each solution. We expect anonymity to increase with
$R$ due to higher probability of finding a road within distance $R$ of any
location.
Similar to the setup in Section 2.4, in the following discussion we average
results over 25 experiments where each experiment chooses a random zone of
size 4 km2 in the mentioned city (along with random coordinates for a rider
and driver) and runs the modified rider’s attack for filtering coordinates.
For a small value of $R$ such as 10 m, any coordinate within distance $R$ of
some point is practically the same location. We experimentally observed that
the average anonymity for a driver in Los Angeles was around 3, which is close
to what we observe in the original attack (see Table 1). Hence small values
of $R$ should not be chosen since they offer low anonymity. In a densely
populated city such as Los Angeles, most locations within the city are
expected to have roads within reasonable distance. For $R=$ 50 m we observed
that the area surrounding most lattice points in Los Angeles had at least one
road within 50 m. From a rider’s perspective, this would mean that most
lattice solutions obtained by her are possible choices for the anonymized
driver coordinates. Experiments showed that the average number of filtered
lattice points when $R=50$ m was 14 (meaning the rider has 14 possible
anonymized locations of a driver). This provides sufficient anonymity to a
driver in practice, since the probability of correctly predicting a driver’s
anonymized coordinate is only $1/14$. This is certainly an improvement
compared to $1/1.8$ for ORide (Table 1). Considering Dallas, a city with
relatively sparse road density, we observed that a significant number of
locations did not have roads within 50 m, and this allowed the rider to filter
out many possible lattice solutions. Our experiments suggest that choosing
$R=150$ m prevents the rider from doing so and offers sufficient anonymity,
which averaged around 16.
Remark. As seen above, the rider ends up with around 15 equally probable
solutions for the driver’s obfuscated location. This means that even if the
rider applies clustering algorithms over multiple queries to eliminate noise
and tries to find the actual driver’s location, there will still be many
equally probable locations for the driver ($>10$) thus providing high
anonymity.
### 3.3 Accuracy of Ride-matching
When a driver chooses random coordinates within distance $R$ instead of her
own location, the Euclidean distance is now computed between the rider’s
location and the anonymized driver location.
Among all drivers in the zone, suppose an optimal driver is chosen according
to some metric $M$. For example, if $M$ represents Euclidean distance, the
optimal driver is the one with least Euclidean distance from her location to
the rider in the case of ORide, and the one with least Euclidean distance from
her anonymized location to the rider in the case of our modified solution. Let
$t_{M}$ be the time taken for this optimal driver to reach the rider. Let
$t_{T}$ be the minimum time taken among all drivers in the zone to reach the
rider (corresponding to the time-wise closest driver). We evaluate the
accuracy of metric $M$ as the percentage of experiments in which
$|t_{M}-t_{T}|$ is at most 1 minute (in practice it is acceptable for the
rider to wait an extra minute compared to the time-wise closest driver).
Google Maps API was used to determine the time taken for a driver to reach
the rider.
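A minimal sketch of this accuracy check, assuming the distance_matrix call of
the googlemaps Python client (function names are ours):

```python
# Accuracy check sketch (Section 3.3): is the driver chosen by metric M
# within one minute of travel time of the time-wise closest driver?
# Quota handling and request batching are omitted.
import googlemaps

gmaps = googlemaps.Client(key="YOUR_API_KEY")  # placeholder key

def travel_time_seconds(gmaps, driver_latlon, rider_latlon):
    resp = gmaps.distance_matrix([driver_latlon], [rider_latlon], mode="driving")
    return resp["rows"][0]["elements"][0]["duration"]["value"]

def metric_is_accurate(gmaps, drivers_latlon, rider_latlon, chosen_idx, slack_s=60):
    # t_M: travel time of the driver chosen by metric M;
    # t_T: minimum travel time over all drivers (time-wise closest driver)
    times = [travel_time_seconds(gmaps, d, rider_latlon) for d in drivers_latlon]
    t_M, t_T = times[chosen_idx], min(times)
    return abs(t_M - t_T) <= slack_s
```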
We chose zones of varying sizes in Los Angeles and Dallas. In each experiment
a random rider and 400 drivers were chosen in each zone. We compared the
accuracy of Euclidean metric for $R$ = 50 m and $R$ = 150 m in both scenarios
– when used in the context of ORide (computed between the rider and driver’s
actual location) and when used in the fix to our attack (computed between the
rider and driver’s anonymized location). As discussed previously, we chose
$R=50$ m for Los Angeles and $R=150$ m for Dallas to ensure sufficient
anonymity. Moreover, the sizes of zones are chosen to be smaller in Los
Angeles (refer to Section 2.4) and larger in Dallas. The inferred accuracies
were averaged over 25 experiments (refer to Table 2). We see that our
solution
indeed provides sufficient driver anonymity with respect to rider while
preserving accuracy of ride matching compared to ORide.
Choosing large $R$ to achieve greater anonymity in a small zone (where driver
density is high) leads to loss of accuracy. This seems intuitively correct,
since having a large anonymity radius in a small zone with high driver density
greatly changes the ordering of drivers based on Euclidean distances. To
concretely verify this, we used a similar setup described above and observed
that with $R$ = 150 m and a 4 km2 zone size in Los Angeles the accuracy of
ORide was around 84% whereas that of the modified solution was only 70%. So,
$R$ should increase with zone size both to preserve accuracy and driver
anonymity (prevent filtering of lattice solutions based on availability of
roads).
Table 2: Comparison of accuracy of selecting the best driver in ORide vs. our
solution (with anonymized driver locations), averaged over 25 experiments.
City | Zone Size (km2) | Radius $R$ (m) | ORide | Our solution
---|---|---|---|---
Los Angeles | 4 | 50 | 84% | 80%
Los Angeles | 25 | 50 | 92% | 90%
Dallas | 100 | 150 | 83% | 83%
## 4 Related Works
Among providers of RHS, namely Lyft, DiDi, Ola, Taxify and others, Uber is
one of the most popular ride service providers. An in-depth analysis of the
practices followed by Uber and the impact of price-surging on passengers and
drivers is given by Chen et al. [4]. The Guardian [11] reports how anonymized
details of New York City taxi drivers can easily be converted back to their
original format to obtain personal information. Different threat models are
widely considered in the literature, namely, a malicious driver targeting
riders, and an honest-but-curious SP harvesting information about riders and
drivers with the intention of selling it to other entities for advertising
purposes, or with potentially malicious intentions to target high profile
individuals. Privacy of the driver is given much less attention; so much so
that in a few papers the actual driver locations are revealed to the SP as
well as the rider [14, 2]. As motivated in Section 1, there can be
instances where a malicious rider can target drivers of a specific SP. For
example, a competitor SP can masquerade as rider to collect driver profile
information or statistics to target the drivers belonging to the specific SP.
Geo-locating Drivers by Zhao et al. [38] studies the leakage of sensitive
data; in particular, it evaluates the threat to driver information. They show
that a malicious outside SP can harvest driver data by analyzing the APIs in
non-privacy-preserving apps provided to drivers by Uber, Lyft and other
popularly deployed SPs. PrivateRide by Pham et al. [25] is one of the
first papers to address privacy in RHS. The locations of the riders are kept
hidden by means of a cloaked region, and location privacy is preserved by
using cryptographically secure constructs. Details of rider and selected
driver are mutually exchanged only after the ride request is fulfilled and
when they both are in close proximity, to prevent a malicious outsider trying
to harvest driver information. A recent work by Khazbak et al. [14] improves
upon the solution of PrivateRide by providing obfuscation techniques (spatial
and temporal cloaking), of rider locations, to achieve better results in terms
of selecting the closest driver, at the cost of slightly more computational
overhead. However, the drivers’ locations are revealed to the rider. ORide
[24] is a follow-up work by the same authors of PrivateRide that provides more
robust privacy and accountability guarantees, and has been described earlier
in this paper. All the following works try to improve upon ORide by proposing
different models of privacy-preserving closest driver selection by the SP. We
note here that our attack is relevant in cases where the rider gets to make a
choice, and is not applicable in situations where the SP selects a single
suitable driver and provides the same to the rider. pRide by Luo et al. [15]
proposes a privacy-preserving ride-matching service involving two non-
colluding servers with one being the SP and the other a third-party Crypto
Provider (CP). The solution makes use of Road Network Embedding (RNE) [27]
technique to transform a road network into a higher dimensional space so that
the distance computation between any two nodes in the network can be performed
efficiently. They propose two solutions, one using the Paillier cryptosystem
and another using BGN cryptosystem. The homomorphically encrypted driver and
rider locations received by the SP are sent to the CP along with a random
noise where it is decrypted and garbled. The SP then uses a garbled circuit to
find the closest driver to the rider and completes the ride request. They show
high accuracy in matching the closest driver while preserving the privacy of
driver and rider locations. The disadvantage of this scheme is their use of a
second Crypto Server that does not collude with the SP, which may be
inconvenient to realize in practice, and also the high communication cost
between the two servers. lpRide by Yu et al. [37] improves upon pRide to
perform all the homomorphic distance computation algorithms on a single SP
server thus eliminating high communication cost when two servers are involved.
They use modified Paillier cryptosystem [18] for encrypting RNE transformed
locations of rider and driver. However, [31] proposed an attack on the
modified Paillier scheme used in lpRide, allowing the service provider to
recover locations of all riders and drivers in the region. Wang et al. propose
TRACE [32] that uses bilinear pairing for encrypting driver and rider
locations. PSRide by Yu et al. [35] uses Paillier cryptosystem and Yao’s
garbled circuit with two servers on the same lines as pRide and hence suffers
from some of the disadvantages mentioned above. EPRide by Yu et al. [36]
efficiently finds the exact shortest road distance using a road network
hypercube embedding. They experimentally show significant improvements in
accuracy and efficiency compared to ORide and pRide. Xie et al. [34] compute
shortest distances using road network embeddings along with property-
preserving hash functions. In doing so, they remove the need for a trusted
third-party server. Maouche et al. [16] propose a user re-identification
attack on four different mobility datasets obfuscated using three different
Location Privacy-Preserving Mechanisms (LPPMs), with one of the datasets in
the RHS setting. Their attack makes use of previously learned user profiles
and tries to re-associate them with obfuscated data. A number of LPPMs are
available in the literature that anonymize private data. Differential privacy
and $k$-anonymity are two popular techniques. Differential privacy, introduced
by the seminal work of Dwork et al. [6], can be applied wherever aggregate
information from several similar entities is available. Geo-
indistinguishability by Andrés et al. [1] adds exponentially decaying noise
from a Laplace distribution around the point of interest thereby obfuscating
the point of interest. The notion of $k$-anonymity by Sweeney [28] obfuscates
an entity by introducing $k-1$ dummy uniformly distributed entities which are
indistinguishable by the adversary. In our case, the driver applies noise to
her coordinates before encrypting and sending to SP. SP homomorphically
computes the Euclidean distance between the rider and each driver. Using this
(noisy) Euclidean distance, the rider solves the Gauss circle problem and
filters out solutions depending on whether they have a road in their vicinity.
Finally, the rider ends up with not one, but many possible choices for the
anonymized driver location. This is a combination of both differential privacy
(where the driver applies noise to her location) and $k$-anonymity (the rider
has many equally possible choices for the driver’s obfuscated location).
Empirically, we see that our method of adding uniformly random noise is
sufficient to provide high anonymity to the driver. Also, our method of
filtering out non-plausible driver locations is based on the region’s
topography. We leave the analysis of using other obfuscation techniques to
thwart our attack for future work.
## 5 Conclusion
In this paper we present an attack on a privacy-preserving RHS, ORide [24]. We
show that an honest-but-curious rider can determine the coordinates of nearly
half the number of drivers in a zone even when only the Euclidean distance
between the rider and a driver is available to the rider. Our attack involves
enumeration of lattice points on a circle of appropriate radius and subsequent
elimination of lattice points based on geographic conditions. Finally we
propose a modification to the ORide protocol as a strategy to mitigate our
attack. Here a driver anonymizes her location by choosing a random coordinate
within a circle of certain radius around herself. We show through concrete
experiments that this technique preserves driver anonymity and accuracy of
ride matching.
Although protocols may seem secure in theory, there may arise several
complications and vulnerabilities when they are deployed practically, as
demonstrated by our attack in Section 2. In the future it will be interesting
to experimentally investigate the notion of driver privacy with respect to
both the SP and the rider in more recent works following ORide (lpRide [37],
pRide [15]).
#### Acknowledgements.
The authors would like to thank Sonata Software Limited, Bengaluru, India for
funding this work. We also thank the anonymous reviewers of ACM CCS2020,
USENIX Security 2021 and SAC 2021 for their valuable comments and suggestions.
## References
* [1] Andrés, M., Bordenabe, N., Chatzikokolakis, K., Palamidessi, C.: Geo-indistinguishability: Differential privacy for location-based systems. In: Proceedings of the ACM Conference on Computer and Communications Security (2013)
* [2] Baza, M., Lasla, N., Mahmoud, M., Srivastava, G., Abdallah, M.: B-ride: Ride sharing with privacy-preservation, trust and fair payment atop public blockchain. IEEE Transactions on Network Science and Engineering (2019)
* [3] Bieniek, T.: Utm 0.5.0. https://pypi.org/project/utm/ (2019), retrieved: April 3, 2020
* [4] Chen, L., Mislove, A., Wilson, C.: Peeking Beneath the Hood of Uber. In: Cho, K., Fukuda, K., Pai, V.S., Spring, N. (eds.) Proceedings of the 2015 ACM Internet Measurement Conference, IMC 2015, Tokyo, Japan, October 28-30, 2015. pp. 495–508. ACM (2015)
* [5] Costache, A., Smart, N.P., Vivek, S., Waller, A.: Fixed-point arithmetic in SHE schemes. In: Avanzi, R., Heys, H.M. (eds.) SAC 2016: 23rd Annual International Workshop on Selected Areas in Cryptography. Lecture Notes in Computer Science, vol. 10532, pp. 401–422. Springer, Heidelberg, Germany, St. John’s, NL, Canada (Aug 10–12, 2016). https://doi.org/10.1007/978-3-319-69453-5_22
* [6] Dwork, C., McSherry, F., Nissim, K., Smith, A.: Calibrating noise to sensitivity in private data analysis. In: Halevi, S., Rabin, T. (eds.) Theory of Cryptography. pp. 265–284. Springer Berlin Heidelberg (2006)
* [7] Fan, J., Vercauteren, F.: Somewhat practical fully homomorphic encryption. Cryptology ePrint Archive (2012), http://eprint.iacr.org/2012/144
* [8] GitHub: SAGE code for polynomial recovery. https://github.com/shyamsmurthy/knn_polynomial_recovery (2019), retrieved: June 12, 2020
* [9] Google: Google Maps Platform. https://developers.google.com/maps/documentation/roads/intro/ (2019), retrieved: April 3, 2020
* [10] Google: Google Maps Platform, client libraries for google maps web services. https://developers.google.com/maps/web-services/client-library (2019), retrieved: April 3, 2020
* [11] Guardian, T.: New York Taxi Details can be Extracted from Anonymised Data, Researchers Say. https://www.theguardian.com/technology/2014/jun/27/new-york-taxi-details-anonymised-data-researchers-warn (2014), retrieved: March 20, 2020
* [12] Hurriyet Daily News: Istanbul taxi drivers hunt down, beat up Uber drivers as tensions rise. https://www.hurriyetdailynews.com/istanbul-taxi-drivers-hunt-down-beat-up-uber-drivers-as-tensions-rise-128443 (2018), retrieved: June 11, 2020
* [13] Kesarwani, M., Kaul, A., Naldurg, P., Patranabis, S., Singh, G., Mehta, S., Mukhopadhyay, D.: Efficient Secure $k$-Nearest Neighbours over Encrypted Data. In: Proceedings of the 21st International Conference on Extending Database Technology, EDBT 2018, Vienna, Austria, March 26-29, 2018. pp. 564–575 (2018)
* [14] Khazbak, Y., Fan, J., Zhu, S., Cao, G.: Preserving Location Privacy in Ride-Hailing Service. In: 2018 IEEE Conference on Communications and Network Security, CNS 2018, Beijing, China, May 30 - June 1, 2018. pp. 1–9. IEEE (2018)
* [15] Luo, Y., Jia, X., Fu, S., Xu, M.: pRide: Privacy-Preserving Ride Matching Over Road Networks for Online Ride-Hailing Service. IEEE Trans. Information Forensics and Security 14(7), 1791–1802 (2019)
* [16] Maouche, M., Mokhtar, S., Bouchenak, S.: AP-attack: A novel user re-identification attack on mobility datasets. In: MobiQuitous 2017 - 14th EAI International Conference on Mobile and Ubiquitous Systems: Computing, Networking and Services. pp. 48–57 (11 2017)
* [17] Murthy, S., Vivek, S.: Cryptanalysis of a protocol for efficient sorting on SHE encrypted data. In: Albrecht, M. (ed.) Cryptography and Coding - 17th IMA International Conference, IMACC 2019, Oxford, UK, Proceedings. Lecture Notes in Computer Science, vol. 11929, pp. 278–294. Springer (2019)
* [18] Nabeel, M., Appel, S., Bertino, E., Buchmann, A.P.: Privacy preserving context aware publish subscribe systems. In: López, J., Huang, X., Sandhu, R. (eds.) Network and System Security - 7th International Conference, NSS 2013, Madrid, Spain, June 3-4, 2013. Proceedings. Lecture Notes in Computer Science, vol. 7873, pp. 465–478. Springer (2013)
* [19] NACTO: Urban Street Design Guide. https://nacto.org/publication/urban-street-design-guide/street-design-elements/lane-width/ (2019), retrieved: March 12, 2020
* [20] NortonLifeLock: Uber Announces New Data Breach Affecting 57 million Riders and Drivers. https://us.norton.com/internetsecurity-emerging-threats-uber-breach-57-million.html (2020), retrieved: April 10, 2020
* [21] NYC Taxi and Limousine Commission: TLC Trip Record Data. https://www1.nyc.gov/site/tlc/about/tlc-trip-record-data.page, retrieved: April 14, 2020
* [22] NYU Spatial Data Repository: 2010 New York City Census Tract Boundaries. https://geo.nyu.edu/catalog/nyu-2451-34513, retrieved: April 14, 2020
* [23] Pew Research Center: More Americans Are Using Ride-Hailing Apps. https://www.pewresearch.org/fact-tank/2019/01/04/more-americans-are-using-ride-hailing-apps/ (2019), retrieved: February 18, 2020
* [24] Pham, A., Dacosta, I., Endignoux, G., Troncoso-Pastoriza, J.R., Huguenin, K., Hubaux, J.: ORide: A Privacy-Preserving yet Accountable Ride-Hailing Service. In: Kirda, E., Ristenpart, T. (eds.) 26th USENIX Security Symposium, USENIX Security 2017, Vancouver, BC, Canada, August 16-18, 2017. pp. 1235–1252. USENIX Association (2017)
* [25] Pham, A., Dacosta, I., Jacot-Guillarmod, B., Huguenin, K., Hajar, T., Tramèr, F., Gligor, V.D., Hubaux, J.: PrivateRide: A Privacy-Enhanced Ride-Hailing Service. PoPETs 2017(2), 38–56 (2017), https://doi.org/10.1515/popets-2017-0015
* [26] Schneider, T.: NYC Taxi Data. https://github.com/toddwschneider/nyc-taxi-data (2019), retrieved: April 14, 2020
* [27] Shahabi, C., Kolahdouzan, M.R., Sharifzadeh, M.: A road network embedding technique for k-nearest neighbor search in moving object databases. In: Voisard, A., Chen, S. (eds.) ACM-GIS 2002, Proceedings of the Tenth ACM International Symposium on Advances in Geographic Information Systems, McLean, VA (near Washington, DC), USA, November 8-9, 2002. pp. 94–100. ACM (2002)
* [28] Sweeney, L.: $k$-anonymity: A model for protecting privacy. International Journal of Uncertainty, Fuzziness and Knowledge-Based Systems 10(5), 557–570 (2002)
* [29] Takloo-Bighash, R.: A Pythagorean Introduction to Number Theory: Right Triangles, Sums of Squares, and Arithmetic. Undergraduate Texts in Mathematics, Springer International Publishing (2018), https://books.google.co.in/books?id=_td7DwAAQBAJ
* [30] thejournal.ie: West Dublin gang using hailing apps to target older taxi drivers. https://www.thejournal.ie/west-dublin-taxi-robbery-4420178-Jan2019/ (2019), retrieved: June 11, 2020
* [31] Vivek, S.: Attacks on a privacy-preserving publish-subscribe system and a ride-hailing service. CoRR (2021), https://arxiv.org/abs/2105.04351
* [32] Wang, F., Zhu, H., Liu, X., Lu, R., Li, F., Li, H., Zhang, S.: Efficient and privacy-preserving dynamic spatial query scheme for ride-hailing services. IEEE Transactions on Vehicular Technology 67(11), 11084–11097 (2018)
* [33] Wikipedia contributors: Universal Transverse Mercator coordinate system. https://en.wikipedia.org/wiki/Universal_Transverse_Mercator_coordinate_system (2020), retrieved: April 27, 2020
* [34] Xie, H., Guo, Y., Jia, X.: A privacy-preserving online ride-hailing system without involving a third trusted server. IEEE Transactions on Information Forensics and Security 16, 3068–3081 (2021)
* [35] Yu, H., Jia, X., Zhang, H., Yu, X., Shu, J.: Psride: Privacy-preserving shared ride matching for online ride hailing systems. IEEE Transactions on Dependable and Secure Computing pp. 1–1 (2019)
* [36] Yu, H., Jia, X., Zhang, H., Shu, J.: Efficient and privacy-preserving ride matching using exact road distance in online ride hailing services. IEEE Transactions on Services Computing pp. 1–1 (2020)
* [37] Yu, H., Shu, J., Jia, X., Zhang, H., Yu, X.: lpride: Lightweight and privacy-preserving ride matching over road networks in online ride hailing systems. IEEE Trans. Vehicular Technology 68(11), 10418–10428 (2019)
* [38] Zhao, Q., Zuo, C., Pellegrino, G., Lin, Z.: Geo-locating Drivers: A Study of Sensitive Data Leakage in Ride-Hailing Services. In: 26th Annual Network and Distributed System Security Symposium, NDSS 2019, San Diego, California, USA, February 24-27, 2019. The Internet Society (2019), https://www.ndss-symposium.org/ndss-paper/geo-locating-drivers-a-study-of-sensitive-data-leakage-in-ride-hailing-services/
## Appendix 0.A Further Attacks
We look at potential ways in which our attack can be thwarted and analyze
their efficacy. In the first scenario, in order to obfuscate driver locations,
the SP homomorphically adds noise to driver distances before sending them to
the rider. For this case, we show that a rider can still break anonymity by
recovering the original distances between the rider and the drivers. In the
second scenario, the SP uses $p$-norm metric instead of the Euclidean distance
and we show that our attack also extends to this case. Note that increasing
zone sizes is not a countermeasure to our attack. As discussed in Section 2.4,
zone sizes should be small enough (less than 1000 km2 in practice) to ensure
efficient ride-matching times and lower bandwidth costs.
### 0.A.1 Homomorphic Noise Addition by SP
In order to thwart our attack, the SP could try to obfuscate driver locations
by transforming the (encrypted squared) Euclidean distances using a random
monotonic polynomial $F$ with integer coefficients and of a small degree, as
suggested by Kesarwani et al. [13]. Integer coefficients are needed for ease
of representation in homomorphic computations, monotonicity is needed to
maintain the sorting order of the distance inputs (so that the rider obtains
the correct order upon decryption), and low polynomial degree is required for
efficient homomorphic evaluation. Let $N_{i}$ be the Euclidean distance
between the rider and a driver $D_{i}$ in her zone. The rider would get to
know from the SP, for each driver $D_{i}$ in her zone, the values $F(N_{i})$
for some random monotonic integer polynomial $F$ of low degree. Note that $F$
is unknown to the rider, but the degree $d$, range of coefficients of the
polynomial $[1,2^{\alpha}-1]$ and range of $N_{i}$ ($[0,2^{\beta}-1]$) are
publicly known. We claim that the rider can obtain the actual distance
$N_{i}$.
The work in [17] provides a method of recovering a monotonic integer
polynomial of low degree and bounded input range when sufficiently many of
its outputs evaluated at integer points are provided. We used the publicly
available SageMath [8] code from the authors of [17] with parameters similar
to those described in [13], namely $d=9$, $\alpha=32$ and $\beta=28$. We
first generated a random monotonic integer polynomial $F$ with these
parameters, and then obtained outputs $F(N_{i})$ by evaluating $F$ on the
distances $N_{i}$. These two
steps are the same as what the SP would do (homomorphically) once it receives
inputs from the rider and all drivers in a particular zone. The $F(N_{i})$
values, $d$, $\alpha$ and $\beta$ are the only values given to the SageMath
code in the experiments to recover (squared) Euclidean distances to drivers
for various zone sizes. We compared the recovered distances with the input
distances and verified that in all cases they matched, which means that the
rider can proceed with the attack described in Section 2 after recovering the
$N_{i}$ values.
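For completeness, a sketch of the generation and evaluation steps we emulate
follows; the recovery itself is performed by the SageMath code of [17].
Positive coefficients make $F$ monotonically increasing on nonnegative
inputs.

```python
# Sketch of the SP-side obfuscation emulated in our experiments: a random
# monotonic integer polynomial of degree d with coefficients in [1, 2^alpha - 1],
# evaluated on (squared) distances N_i in [0, 2^beta - 1].
import random

d, alpha = 9, 32
coeffs = [random.randint(1, 2 ** alpha - 1) for _ in range(d + 1)]

def F(n):
    # positive coefficients => F is increasing for n >= 0, preserving sort order
    return sum(c * n ** i for i, c in enumerate(coeffs))

print(F(12345))  # the rider only ever sees such outputs F(N_i)
```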
### 0.A.2 $p$-norm Metric by SP
In order to mitigate our attack in Section 2, the SP may try to
homomorphically compute the $p$-norm (instead of Euclidean distance) of
ciphertexts and send it to rider. Let $(x_{R},y_{R}),(x_{D_{i}},y_{D_{i}})$
denote coordinates of a rider and driver $D_{i}$, respectively. The rider
would thus obtain $N_{i}=|x_{R}-x_{D_{i}}|^{p}+|y_{R}-y_{D_{i}}|^{p}$ for each
driver $D_{i}$ in her zone (the value of $p$ should not be too large to allow
efficient homomorphic computations by SP).
Note that if $p$ is odd, $(x_{R}-x_{D_{i}})^{p}+(y_{R}-y_{D_{i}})^{p}$ could
represent a negative value. Since ORide uses ciphertext packing and non-
Boolean circuit representation with the underlying SHE library [24], it is
very inefficient to compute the absolute value homomorphically. Hence, the SP
would have to use only even values for $p$. Let $p=2q$. In the rider’s attack,
she has to now enumerate all lattice points satisfying the equation
$x^{p}+y^{p}=N_{i}$. Observe that if $(x,y)$ is a solution to this equation,
then the lattice point $(x^{q},y^{q})$ is a solution to $x^{2}+y^{2}=N_{i}$.
Since this map is injective, the set of lattice points satisfying
$x^{p}+y^{p}=N_{i}$ is no larger than the set of lattice points satisfying
$x^{2}+y^{2}=N_{i}$. Based on our experiments on various zones and
cities, we have estimated the number of lattice points satisfying
$x^{2}+y^{2}=N_{i}$ to be around 20 (refer to Section 2.4). This means that,
on average, the number of lattice points satisfying $x^{p}+y^{p}=N_{i}$
cannot exceed 20. The rider (as in the rest of the attack) can then
check whether each lattice point lies in the zone and on road, to reduce the
number of possible predicted driver locations. In this way, our attack also
applies when the SP uses $p$-norm instead of Euclidean distance.
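The enumeration from Section 2.3 adapts directly to even $p$; a sketch
follows (the function name is ours), running in $O(N^{1/p})$ time.

```python
# Sketch: enumerate all integer (x, y) with |x|^p + |y|^p = N for even p.
def lattice_points_pnorm(N, p):
    assert p >= 2 and p % 2 == 0
    points = set()
    x = 0
    while x ** p <= N:
        rem = N - x ** p
        y0 = round(rem ** (1.0 / p)) if rem > 0 else 0
        for y in (y0 - 1, y0, y0 + 1):  # guard against float rounding error
            if y >= 0 and y ** p == rem:
                for sx in (1, -1):
                    for sy in (1, -1):
                        points.add((sx * x, sy * y))
                        points.add((sx * y, sy * x))
        x += 1
    return points

print(sorted(lattice_points_pnorm(17, 4)))  # e.g. 1^4 + 2^4 = 17
```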
|
# Match-Ignition: Plugging PageRank into Transformer
for Long-form Text Matching
Liang Pang1, Yanyan Lan2∗, Xueqi Cheng3
<EMAIL_ADDRESS> <EMAIL_ADDRESS> <EMAIL_ADDRESS>
1Data Intelligence System Research Center,
Institute of Computing Technology, Chinese Academy of Sciences, Beijing, China
2Institute for AI Industry Research, Tsinghua University, Beijing, China
3CAS Key Lab of Network Data Science and Technology,
Institute of Computing Technology, Chinese Academy of Sciences, Beijing, China
(2021)
###### Abstract.
∗ Corresponding author
Neural text matching models have been widely used in community question
answering, information retrieval, and dialogue. However, these models designed
for short texts cannot well address the long-form text matching problem,
because there are many contexts in long-form texts can not be directly aligned
with each other, and it is difficult for existing models to capture the key
matching signals from such noisy data. Besides, these models are
computationally expensive for simply use all textual data indiscriminately. To
tackle the effectiveness and efficiency problem, we propose a novel
hierarchical noise filtering model, namely Match-Ignition. The main idea is to
plug the well-known PageRank algorithm into the Transformer, to identify and
filter both sentence and word level noisy information in the matching process.
Noisy sentences are usually easy to detect because previous work has shown
that their similarity can be explicitly evaluated by the word overlapping, so
we directly use PageRank to filter such information based on a sentence
similarity graph. Unlike sentences, words rely on their contexts to express
concrete meanings, so we propose to jointly learn the filtering and matching
process, to well capture the critical word-level matching signals.
Specifically, a word graph is first built based on the attention scores in
each self-attention block of Transformer, and key words are then selected by
applying PageRank on this graph. In this way, noisy words will be filtered out
layer by layer in the matching process. Experimental results show that Match-
Ignition outperforms both SOTA short text matching models and recent long-form
text matching models. We also conduct detailed analysis to show that Match-
Ignition efficiently captures important sentences and words, to facilitate the
long-form text matching process.
Text Matching; Long-form Text; PageRank Algorithm
ACM Reference Format:
Liang Pang, Yanyan Lan, Xueqi Cheng. 2021. Match-Ignition: Plugging PageRank
into Transformer for Long-form Text Matching. In Proceedings of the 30th ACM
International Conference on Information and Knowledge Management (CIKM ’21),
November 1–5, 2021, Virtual Event, QLD, Australia. ACM, New York, NY, USA, 10
pages. https://doi.org/10.1145/3459637.3482450
## 1\. Introduction
Semantic text matching is an essential problem in many natural language
applications, such as community question answering (Wang et al., 2017),
information retrieval (Huang et al., 2013), and dialogue (Lu and Li, 2013).
Many deep text matching models have been proposed and gain good performance,
such as representation based models (Huang et al., 2013; Shen et al., 2014;
Palangi et al., 2016; Wan et al., 2016a), interaction based models (Hu et al.,
2014; Pang et al., 2016; Wan et al., 2016b; Guo et al., 2016), and their
combinations (Mitra et al., 2017; Yang et al., 2019; Devlin et al., 2018).
Figure 1. The top example is short-form text matching for paraphrase
identification, where the lines indicate the alignments between words from
two sentences. The bottom example is long-form text matching for redundant
news identification, where the highlights indicate the important matching
signals showing that the two news articles describe the same event.
However, these models cannot be well applied to long-form text matching
problems, which have attracted increasing attention in the field of news
deduplication (Liu et al., 2019a), citation recommendation (Yang et al.,
2020), plagiarism detection (Zhou et al., 2020) and attachment suggestion
(Jiang et al., 2019). This is mainly because long-form text matching is quite
different from the short-form text matching problem. For short-form text
matching, almost every term in the short texts is critical to the matching
score, because short text matching tasks are just like finding a reasonable
semantic alignment between two sentences (Pang et al., 2016). For example, in
paraphrasing identification, the major problem is to find the paraphrasing
sentence for the given sentence. In this case, the matching score is mainly
determined by the alignment between each word in the sentences, as shown in
Figure 1.
Long-form text matching has its own characteristics. Firstly, long-form text
matching cares more about the global semantic meanings rather than the
bipartite alignment. The fine-grained matching signals between long-form texts
are usually very sparse, which makes the existing short text matching models
hard to figure out from huge noisy signals. For example, redundant news
identification merely focuses on where/when the event happened and what the
event is, instead of who posted this news and the detailed descriptions of the
news. Secondly, long-form text intrinsically consists of a two-level
structure, i.e. sentences and words. Most existing short text matching
approaches can only process text word by word while missing the sentence-level
structure. For example, one sentence should be ignored entirely if it is
irrelevant to the current document, e.g. advertisement, even though some of
its internal words are relevant. Thirdly, long-form text matching involves
very long texts by nature, which makes the existing short text matching
models computationally expensive because they have to treat every word
indiscriminately and emphasize sufficient interactions between words (Yang
et al., 2019; Devlin et al., 2018). In practice, the long-form text often has
to be
truncated in the computation. For example, BERT only accepts text lengths of
less than 512. These operations may hurt the final matching performance. From
these discussions, we can see that noise is the main challenge in long-form
text matching, affecting both performance and efficiency.
In this paper, we propose a novel hierarchical noise filtering model, namely
Match-Ignition, to distill the significant matching signals via the well-known
link analysis algorithm PageRank (Brin and Page, 1998). PageRank utilizes
random walk on a graph to determine the importance of each node. In this way,
the noises (i.e. less important nodes) can be eliminated and the algorithm
will be accelerated. Considering the two-level structures in the long-form
text matching problem, our model contains two hierarchies, i.e. sentence-level
and word-level. In the sentence-level noise filtering process, the nodes are
defined as sentences from a pair of long-form texts, and the link is defined
as the similarities between each pair of sentences. That is to say, the
similarities inside each long-form text and between the two long-form texts
are both captured in our graph. Then the noisy sentences could be identified
by PageRank score, and be directly removed. The word-level noise filtering
process is jointly learned with the matching process. That is because each
word relies on its context to express its concrete meanings, so noisy words
need to be estimated dynamically during the matching process. To this end, we
first apply the state-of-the-art Transformer to the texts, which well captures
the contextual information among words. It turns out that the attention matrix
in the self-attention block, the key component of Transformer, could be
treated as a fully connected word-level similarity graph (Guo et al., 2019b;
Zhao et al., 2019; Dai et al., 2019). PageRank is then applied to filter out
noise words at each layer. This technique differs from previous works (Guo et
al., 2019b; Zhao et al., 2019; Dai et al., 2019), which focus on eliminating
links in the graph, because our model focuses on filtering noisy words, i.e.
nodes in the graph. Note that attention weights could be directly applied to
filter noisy words; the reason why we still use PageRank here is that
attention weights have been shown to be unreliable in explaining the
importance of words (Brunner et al., 2019). Furthermore,
PageRank has the ability to consider the global importance of each word by
value propagation on a graph, which is more thorough than attention weights.
We experiment on three long-form text matching tasks, news deduplication,
citation recommendation, and plagiarism detection, including seven public
datasets, e.g. CNSE, CNSS, AAN-Abs, AAN-Body, OC, S2ORC, and PAN. The
experimental results show that Match-Ignition outperforms all baseline
methods, including both short text matching models and recent long-form text
matching models. The further detailed analysis demonstrates that Match-
Ignition efficiently captures important matching signals in long-form text,
which helps understand the matching process. Besides, we compare different
noisy filtering methods to show the superiority of using PageRank.
## 2\. Related Work
In this section, we first introduce the text matching models designed for
short-form text matching, then review the most recent works for long-form text
matching.
Short-form Text Matching Existing text matching models fall into
representation-based approaches, interaction-based approaches, and their
combinations (Guo et al., 2019a).
Representation-based matching approaches are inspired by the Siamese
architecture (Chopra et al., 2005). This kind of approach aims at encoding
each input text of a pair into a high-level representation based on a specific
neural network encoder; the matching score is then
obtained by calculating the similarity between the two corresponding
representation vectors. DSSM (Huang et al., 2013), C-DSSM (Shen et al., 2014),
ARC-I (Hu et al., 2014), RNN-LSTM (Palangi et al., 2016) and MV-LSTM (Wan et
al., 2016a) belong to this category. Interaction-based matching approaches are
closer to the nature of the matching task to some extent since they aim at
directly capturing the local matching patterns between two input texts, rather
than focusing on the text representations. The pioneering work includes ARC-II
(Hu et al., 2014), MatchPyramid (Pang et al., 2016), and Match-SRNN (Wan et
al., 2016b). Recently, there has been a trend that the two aforementioned
branches of matching models should complement each other, rather than being
viewed separately as two different approaches. DUET (Mitra et al., 2017) is
composed of two separate modules, one representation-based and the other
interaction-based; the final matching score is their weighted sum. The
attention mechanism is another way to combine the
above two approaches, such as RE2 (Yang et al., 2019) and BERT (Devlin et al.,
2018). However, these existing approaches for short-form text matching have
limited success in long-form text matching settings, due to their inability to
capture and distill the main information from long documents. Besides, these
models are computationally expensive because they simply use all textual data
indiscriminately in the matching process.
Long-form Text Matching Due to the lack of public datasets and efficient
algorithms, few works directly focus on long-form text matching or further
explore its application scenarios. The pioneering work SMASH (Jiang et al.,
2019) was the first to point out that long-form text matching (i.e. both the
source and target texts are long-form) has a wide range of application
scenarios, such as attachment suggestion, article recommendation, and citation
recommendation. It proposes a hierarchical recurrent neural network under the
Siamese architecture, which is a kind of representation-based matching
approach. It synthesizes information from different document structure levels,
including paragraphs, sentences, and words. The SMITH model (Yang et al.,
2020) follows SMASH's settings, utilizes the powerful pre-trained language
model BERT (Devlin et al., 2018) as its key component, and breaks the
512-token limitation to build a representation-based matching approach. Instead
of using BERT as a component, TransformerXL (Dai et al., 2019) and Longformer
(Beltagy et al., 2020) try to directly extend the Transformer structure
towards the long-form text by introducing a sliding window or memory strategy
into it. However, they are designed for general natural language
understanding, not for text matching tasks. Another work on long-form text
matching is Concept Interaction Graph (CIG) (Liu et al., 2019a), which
concerns modeling the relation between two documents, e.g. same event or
story. It can be treated as an interaction-based matching approach, which
selects pairs of sentences based on their concepts and similarities. Besides,
the authors also construct two duplicate news detection datasets, which are
labeled by professional editors.
All the previous works ignore the fact that long-form text provides
overabundant information for matching; that is to say, there is usually much
noise in the setting of long-form text matching. This phenomenon is also
discussed in query-document matching tasks (Guo et al., 2016; Pang et al.,
2017; Hui et al., 2017; Fan et al., 2018), which are short-to-long text
matching because a query is a short-form text and a document is a long-form
text. DeepRank (Pang et al., 2017) treats query and document differently: in
their model, each query term acts as a filter that picks out the text spans in
the document containing this query term. That is to say, query-irrelevant text
spans are noise that can be ignored in the matching process. PACRR (Hui et
al., 2017) has similar findings: it filters document words using two kinds of
processes, 1) keeping the first $k$ terms of the document, or 2) retaining
only the text that is highly relevant to the given query. These previous works
provide strong evidence that our noise filtering motivation can be effective
for long-form text matching problems.
Figure 2. The overall architecture of Match-Ignition. (a) represents the
sentence-level filter, (b) represents the outputs of the sentence-level
filter, and (c) represents the word-level filter.
## 3\. Match-Ignition
In this section, we first introduce the two components of Match-Ignition: the
sentence-level noise filter and the word-level noise filter, shown in Figure
2(a) and Figure 2(c) respectively. After that, the model training details are
described in the last subsection.
### 3.1. Sentence-level Noise Filtering
To apply graph-based ranking algorithms such as PageRank to natural language
objects such as documents, a graph that represents the relations between
sentences needs to be built. TextRank (Mihalcea and Tarau, 2004)
makes it possible to form a sentence extraction algorithm, which can identify
key sentences in a given document, and it has become a mature approach to
automatic summarization. The most direct way is to apply the TextRank
algorithm to each long-form text independently, reducing the length of each
long-form text by summarization. However, the goal of long-form text matching
is to find the matching signals between a pair of texts, which is different
from the summarization task that extracts key information from one text.
Therefore, directly applying the TextRank algorithm to each text independently
leads to a loss of matching signals.
Previous works (Pang et al., 2017; Hui et al., 2017) tell us that two texts
can help each other in noise detection; inspired by this, both long-form texts
should be represented in one graph so as to involve the matching information
across the two texts. Firstly, sentences in both long-form texts are
collected together to form a united sentence collection. Formally, two long-
form texts are first split into sentences, denoted as
$d_{s}=[s^{1}_{1},s^{1}_{2},\dots,s^{1}_{L_{1}}]$ and
$d_{t}=[s^{2}_{1},s^{2}_{2},\dots,s^{2}_{L_{2}}]$, where $L_{1}$ and $L_{2}$
are the number of sentences in $d_{s}$ and $d_{t}$ respectively. The united
sentence collection
$\mathcal{S}=\\{s^{1}_{1},s^{1}_{2},\dots,s^{1}_{L_{1}},s^{2}_{1},s^{2}_{2},\dots,s^{2}_{L_{2}}\\}$
then has $L_{1}+L_{2}$ elements. Thus, the sentence similarity graph can be
constructed by evaluating the sentence pair similarities in the united
sentence collection $\mathcal{S}$. The sentence similarity is defined in the
same way as in TextRank (Mihalcea and Tarau, 2004), measuring the overlapping
word ratio between two sentences:
(1) $Sim(s_{i},s_{j})=\frac{|\\{w_{k}|w_{k}\in s_{i},w_{k}\in
s_{j}\\}|}{\log(|s_{i}|)+\log(|s_{j}|)},\;\;s_{i},s_{j}\in\mathcal{S},$
where $w_{k}$ denotes the word in the sentence, $|\cdot|$ denotes the length
of the sentence or word set, and $s_{i},s_{j}$ are two sentences in the united
sentence collection $\mathcal{S}$. To make the sentence similarity sparse,
i.e. to return 0 most of the time, we remove the stopwords in the sentences
before we calculate the similarities; thus, the final sentence similarity
graph has sparse links. Finally, the PageRank algorithm is applied to this
constructed sentence similarity graph to obtain the importance score of each
sentence. To balance the information coming from the two long-form texts for
the following step, the top $\lambda$ sentences are extracted from each
long-form text respectively. Thus, both texts contain $\lambda$ sentences as
their digest, which we call the sentence-level filter.
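To make the procedure concrete, the following Python sketch implements the sentence-level filter under simplifying assumptions (sentences are already tokenized into word lists with stopwords removed); the helper names are illustrative and not taken from the released implementation.

```python
import math
import numpy as np

def textrank_sim(s_i, s_j):
    # Eq (1): number of overlapping words, normalized by sentence lengths.
    overlap = len(set(s_i) & set(s_j))
    denom = math.log(len(s_i)) + math.log(len(s_j))
    return overlap / denom if denom > 0 else 0.0

def sentence_filter(doc_s, doc_t, lam=5, d=0.85, steps=50):
    sents = doc_s + doc_t                      # united collection S
    n = len(sents)
    A = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            if i != j:
                A[i, j] = textrank_sim(sents[i], sents[j])
    # Column-normalize so that A is column-stochastic.
    col = A.sum(axis=0, keepdims=True)
    A = np.divide(A, col, out=np.zeros_like(A), where=col > 0)
    u = np.full(n, 1.0 / n)                    # equal initial importance
    for _ in range(steps):                     # stable PageRank, Eq (6)
        u = d * (A @ u) + (1 - d) / n
    # Top-lambda sentences per document, keeping their original order.
    top_s = np.sort(np.argsort(-u[:len(doc_s)])[:lam])
    top_t = np.sort(np.argsort(-u[len(doc_s):])[:lam])
    return top_s, top_t
```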
As shown in Figure 2(b), the selected sentences are concatenated into a text
sequence, which starts with the [CLS] token and is separated by [SEP] tokens.
It is then treated as the input of the model in the word-level filter. Note
that the hyper-parameter $\lambda$ should be neither too small, which would
lose a lot of information, nor too large, which would make the text extremely
long. A suitable $\lambda$ yields a moderate text sequence whose length is
just below the BERT maximum input length.
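As a hypothetical illustration (the function name is ours, not from the released code), the input sequence of Figure 2(b) can be assembled as follows:

```python
def build_input(top_sents_s, top_sents_t):
    # "[CLS] s1 [SEP] s2 [SEP] ... [SEP]" over the selected sentences.
    body = " [SEP] ".join(top_sents_s + top_sents_t)
    return "[CLS] " + body + " [SEP]"

# build_input(["s1", "s2"], ["t1"]) -> "[CLS] s1 [SEP] s2 [SEP] t1 [SEP]"
```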
The PageRank algorithm can also be used at the word level if we can define a
word-by-word relation graph. However, the two levels differ: sentences are
relatively independent of each other, so noise at the sentence level is
discrete in the sense that an entire sentence can be removed in an
unsupervised way, while a word relies on its context to express its concrete
meaning, so noise at the word level is continuous and should be estimated
during model training. Therefore, we need to construct a graph within the
Transformer model structure.
### 3.2. Word-level Noise Filtering
To filter noise at the word level, a word-level graph first needs to be
constructed from the inherent Transformer structure (Sec 3.2.1). After that,
the traditional PageRank algorithm needs to be reimplemented in a tensor
version, so that it can be embedded into the Transformer structure (Sec
3.2.2). Finally, we propose to plug PageRank into the Transformer model for
word-level noise filtering (Sec 3.2.3).
#### 3.2.1. Transformer as a Graph
The Transformer architecture (Vaswani et al., 2017) has greatly advanced
natural language processing, and most well-known models are members of this
family, such as BERT (Devlin et al., 2019), RoBERTa (Liu et al., 2019b), and
GPT2 (Radford et al., [n.d.]). These models achieve state-of-the-art performance in almost
all NLP tasks, e.g. named entity recognition, text classification, machine
translation, and also text semantic matching. For long-form text matching, we
also adopt this architecture.
The self-attention block is the main component of the Transformer
architecture; it figures out how important all the other words in the sentence
are for a given word in its context. Thus, the self-attention block builds
relations between words, which can be viewed as a fully connected graph among
words (Guo et al., 2019b; Zhao et al., 2019; Dai et al., 2019). The updated
word representations are simply the sum of linear transformations of the
representations of all words, weighted by their importance; this makes full
use of the attention mechanism in deep neural networks to update word
representations. As shown in (Vaswani et al., 2017), the attention
function can be formalized as a scaled dot-product attention with inputs
$\mathbf{H}^{l}$:
(2) $\displaystyle\mathbf{H}^{l+1}$
$\displaystyle=\mathrm{Attn}(\mathbf{Q}^{l},\mathbf{K}^{l},\mathbf{V}^{l})=\mathrm{Softmax}\left(\frac{\mathbf{Q}^{l}(\mathbf{K}^{l})^{T}}{\sqrt{E}}\right)\mathbf{V}^{l}=\mathbf{A}^{l}\mathbf{V}^{l},$
where $\mathbf{Q}^{l}=\mathbf{W_{Q}}^{l}\mathbf{H}^{l}\in\mathbb{R}^{N\times
E}$ denotes the attention query matrix,
$\mathbf{K}^{l}=\mathbf{W_{K}}^{l}\mathbf{H}^{l}\in\mathbb{R}^{N\times E}$ the
key matrix, and
$\mathbf{V}^{l}=\mathbf{W_{V}}^{l}\mathbf{H}^{l}\in\mathbb{R}^{N\times E}$ the
value matrix. $N$ denotes the number of words in a text, and $E$ denotes the
dimension of the representations. The attention mechanism can be explained as
follows: for each attention query vector in $\mathbf{Q}$, it first computes
the dot products of the attention query with all keys, aiming to evaluate the
similarity between the attention query and each key. Then, each dot product is
divided by $\sqrt{E}$, and a softmax function is applied to obtain the weights
on the values, denoted as $\mathbf{A}^{l}$. Finally, the new representation of
the attention query vector is calculated as the weighted sum of the values.
Getting this dot-product attention mechanism to work proves to be tricky: bad
random initializations can de-stabilize the learning process. This can be
overcome by performing multiple ‘heads’ of attention and concatenating the
results:
(3)
$\displaystyle\mathbf{H}^{l+1}=\mathrm{Concat}(head_{1},\cdots,head_{H})\mathbf{O}^{l},$
$\displaystyle
head_{k}=\mathrm{Attention}(\mathbf{Q}^{kl},\mathbf{K}^{kl},\mathbf{V}^{kl})=\mathbf{A}^{kl}\mathbf{V}^{kl},$
where $\mathbf{Q}^{kl}$, $\mathbf{K}^{kl}$ and $\mathbf{V}^{kl}$ belong to the
$k$-th attention head at layer $l$ with different learnable weights,
$\mathbf{O}^{l}$ is a down-projection to match the dimensions across layers,
$H$ is the number of heads in each layer, and $L$ is the number of layers.
If we treat each word as a node in a graph, then the words update their
representations by aggregating all other contextual word representations, just
like messages passed from neighboring nodes in a graph neural network
(Scarselli et al., 2008). Thus, the self-attention block can be treated as a
fully connected word graph, whose adjacency matrix is the transpose of the
word-by-word similarity matrix $\mathbf{A}^{kl}$.
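The following PyTorch sketch makes this reading explicit for a single head of Eq (2); the shapes and the helper name are illustrative assumptions, not the released implementation.

```python
import torch

def self_attention(H, W_Q, W_K, W_V):
    # H: (N, E) word representations; W_*: (E, E) learnable projections.
    Q, K, V = H @ W_Q, H @ W_K, H @ W_V
    E = Q.size(-1)
    A = torch.softmax(Q @ K.T / E ** 0.5, dim=-1)  # (N, N), rows sum to 1
    H_next = A @ V                                 # updated representations
    # A.T has columns summing to 1, i.e. it is a stochastic adjacency
    # matrix of the word graph, ready to be used in PageRank (Eq 8).
    return H_next, A
```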
#### 3.2.2. PageRank in A Tensor View
PageRank (Brin and Page, 1998) is a graph-based ranking algorithm; it is
essentially a way of deciding the importance of a vertex within a graph by
recursively aggregating global information from the entire graph. Formally, we
are given a graph $G(V,E)$, where $V=\\{v_{1},v_{2},\dots,v_{N}\\}$ is the set
of nodes and $E$ is the set of links between these nodes. The goal is to order
these nodes so that a more important node has a higher rank. The PageRank
value of each node $v_{i}$, denoted as $u_{i}$, is used to indicate the
importance of the node $v_{i}$. For convenience, we define $\mathbf{A}$ as the
adjacency matrix, where $\mathbf{A}_{ij}$ denotes that $v_{i}$ has a link from
$v_{j}$ with weight $\mathbf{A}_{ij}$. $\mathbf{A}$ is also a stochastic
matrix because each column sums up to 1. At the initial step, all $u_{i}$ have
the same value $1/N$, denoting that all nodes are equally important. At each
following step, the PageRank value $u_{i}$ is then updated using the other
nodes and the links pointing to it,
(4) $u_{i}=\sum\nolimits_{v_{j}\in V}\mathbf{A}_{ij}\cdot u_{j}.$
After several iterations, the PageRank values $u_{i}$ converge to a set of
stable values, which constitute the solution of PageRank.
To implement PageRank in a tensor-based computational framework, such as
TensorFlow (Abadi et al., 2016) or PyTorch (Paszke et al., 2019), we need a
tensor version of the PageRank algorithm. Let
$\mathbf{u}^{t}=[u^{t}_{1},u^{t}_{2},\dots,u^{t}_{N}]$ be a vector of length
$N$ that collects all nodes' PageRank values at step $t$. Then, PageRank can
be rewritten as
(5) $\mathbf{u}^{t+1}=\mathbf{A}\mathbf{u}^{t}.$
To solve the problem of isolated nodes, a stable version of PageRank is
proposed (Brin and Page, 1998) and adopted by our work,
(6) $\mathbf{u}^{t+1}=d\mathbf{A}\mathbf{u}^{t}+(1-d)/N\cdot\mathbb{I},$
where $d\in[0,1]$ is a real value determining the ratio of the two parts, and
$\mathbb{I}$ is a vector of length $N$ with all values equal to 1. The factor
$d$ is usually set to 0.85, and this is also the value used in our
implementation.
In practice, the number of iteration steps $T$ is set to a fixed value for
computational efficiency. Thus, $\mathbf{u}^{T}$ gives the final PageRank
score for each $v_{i}\in V$; the larger the PageRank value, the more important
the node in the current graph, so we can filter out the nodes with small
PageRank values.
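A minimal tensor implementation of this fixed-step iteration, written in PyTorch so it can run inside the model (the function name is ours for illustration):

```python
import torch

def pagerank(A, d=0.85, T=10):
    # A: (N, N) column-stochastic adjacency matrix (columns sum to 1).
    N = A.size(0)
    u = torch.full((N,), 1.0 / N)              # all nodes equally important
    for _ in range(T):                         # fixed number of steps
        u = d * (A @ u) + (1 - d) / N          # Eq (6)
    return u                                   # importance score per node
```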
#### 3.2.3. Plug PageRank in Transformer
In this section, we propose a novel approach that plugs PageRank into the
Transformer model to filter noise at the word level. Notice that word-level
noise is continuous and thus needs to be estimated dynamically during the
matching process. In each self-attention block, an inherent PageRank algorithm
is utilized to dynamically filter the noisy words, which reduces the sequence
length layer by layer.
The standard Transformer structure, which we select as our base model, has $L$
layers of multi-head self-attention blocks, stacked one after another, and
maintains the same sequence length $N$ at each layer. From the description in
Section 3.2.1, we know that a self-attention block in the Transformer can be
treated as a word-by-word graph, specified by the adjacency matrix
$(\mathbf{A}^{kl})^{\top}$ of the $k$-th head at the $l$-th layer in Eq 3. The
word-level noise filtering process is performed once per layer, so we average
the adjacency matrices across the different heads in the $l$-th layer,
(7) $\mathbf{A}^{l}=\frac{1}{H}\sum\nolimits_{k=1}^{H}\mathbf{A}^{kl}.$
Because $\mathbf{A}^{l}$ is the output of a row-wise softmax function, each
row of $\mathbf{A}^{l}$ sums to 1. Thus, $(\mathbf{A}^{l})^{\top}$ is a
stochastic matrix, which can be treated as the adjacency matrix of a graph.
With the above observation, we substitute $(\mathbf{A}^{l})^{\top}$ into Eq 5
and obtain:
(8)
$\mathbf{u}^{t+1}=d(\mathbf{A}^{l})^{\top}\mathbf{u}^{t}+(1-d)/N\cdot\mathbb{I}.$
Iteratively solving the equation above, we obtain the PageRank values for all
words/nodes in the $(l-1)$-th layer, denoted as $\mathbf{u}$; thus,
$\mathbf{u}$ represents the importance of the words in the $(l-1)$-th layer.
After applying the attention mechanism to the words in the $(l-1)$-th layer,
we get a list of new word representations as the input of the $l$-th layer. To
filter noisy words, we have to estimate the importance of the words/nodes in
the $l$-th layer, which can be evaluated by redistributing the importance of
the words in the $(l-1)$-th layer under the distribution $\mathbf{A}^{l}$;
thus the word importance scores are $\mathbf{r}=\mathbf{A}^{l}\mathbf{u}$.
Finally, we can reduce the sequence length at the $l$-th layer by removing the
nodes with small values in $\mathbf{r}$.
In this work, we design a strategy that removes a percentage
$\alpha\in[0\%,100\%]$ of the nodes per layer, so that the $l$-th layer has
$(1-\alpha)^{l-1}\cdot N$ nodes. The hyper-parameter $\alpha$ is called the
word reduction ratio. For example, let $L=12,N=400$; if we set $\alpha$ to
10%, the numbers of nodes at each layer are 400, 360, 324, 291, 262, 236, 212,
191, 172, 154, 139, 125.
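This schedule is easy to verify numerically; a few lines of Python reproduce the node counts listed above:

```python
L, N, alpha = 12, 400, 0.10
sizes = [int(N * (1 - alpha) ** l) for l in range(L)]
print(sizes)  # [400, 360, 324, 291, 262, 236, 212, 191, 172, 154, 139, 125]
```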
For the BERT model, some tokens are too special to be removed, such as the
[CLS] token and the [SEP] token. If the model occasionally removes these
tokens during training, it leads to an unstable training process and also
hurts the overall performance. Therefore, a token mask is designed to keep
these tokens across all the layers, as in the sketch below.
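The sketch below assembles one word-filtering step of this subsection in PyTorch: average the heads (Eq 7), run PageRank on the transposed attention (Eq 8), redistribute the scores as $\mathbf{r}=\mathbf{A}^{l}\mathbf{u}$, and keep the highest-scoring words while protecting the masked special tokens. The function name and shapes are our illustrative assumptions, not the released implementation.

```python
import torch

def filter_layer(H, A_heads, keep, special_mask, d=0.85, T=10):
    # H: (N, E) output of layer l; A_heads: (H, N, N) per-head attention;
    # special_mask: (N,) bool tensor, True at [CLS]/[SEP] positions.
    A_l = A_heads.mean(dim=0)                  # Eq (7): average the heads
    N = A_l.size(0)
    u = torch.full((N,), 1.0 / N)
    for _ in range(T):                         # Eq (8): PageRank on A_l^T
        u = d * (A_l.T @ u) + (1 - d) / N      # importance at layer l-1
    r = A_l @ u                                # redistribute to layer l
    r = r.masked_fill(special_mask, float("inf"))    # never drop specials
    idx = torch.topk(r, keep).indices.sort().values  # keep word order
    return H[idx], idx                         # pruned sequence + indices
```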
Discussions: Many previous works (Guo et al., 2019b; Zhao et al., 2019; Dai et
al., 2019) have also noticed the relation between Transformer and graph. Star-
Transformer (Guo et al., 2019b) adds a hub node to model long-distance
dependences and eliminates links between words that are more than 3 steps
apart. TransformerXL (Dai et al., 2019) uses a segment-level recurrence with a
state-reuse strategy to remove all the links between words in different
segments, so as to break the fixed-length limitation. Sparse-Transformer (Zhao
et al., 2019) explicitly eliminates links whose attention scores are lower
than a threshold, making the attention matrix sparse. All of these previous
works focus on eliminating links in the graph, while in this work we focus on
filtering noisy words, i.e. nodes, in the graph.
### 3.3. Model Training
The sentence-level filter is a heuristic approach that does not need a
training process. Thus, in this section, we only consider model training for
the word-level filter component.
For the training of the word-level filter, we adopt the “pre-training +
fine-tuning” paradigm as in BERT. In this paradigm, the pre-trained
Transformer is first obtained from a large unlabeled plain-text corpus in an
unsupervised learning fashion. Then, the Transformer with PageRank plugged
into each layer is fine-tuned on the supervised downstream task. Note that the
word-level filter does not change the parameters of the original Transformer,
because all the parameters in the Transformer are independent of the input
sequence length. Therefore, changing the sequence length layer by layer does
not affect the structure of the Transformer. Benefiting from this property of
the PageRank-Transformer, we can directly adopt a publicly released
Transformer model, such as BERT or RoBERTa trained on a large corpus, as our
pre-trained model.
In the fine-tuning step, we add the PageRank module in each self-attention
layer, without introducing any additional parameters. The objective function
for long-form text matching task is a binary cross-entropy loss:
(9) $\mathcal{L}=-\sum\nolimits_{i}\left[y_{i}\log{p_{i}}+(1-y_{i})\log(1-p_{i})\right],$
where $p_{i}$ is the probability representing the matching score, generated
from the representation of [CLS], and $y_{i}$ is the ground-truth label.
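In PyTorch, this objective is simply the built-in binary cross-entropy over the [CLS]-based matching probabilities (a minimal sketch with illustrative names):

```python
import torch
import torch.nn.functional as F

def matching_loss(p, y):
    # p: (batch,) matching probabilities from [CLS]; y: (batch,) 0/1 labels.
    return F.binary_cross_entropy(p, y.float())
```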
## 4\. Experiments
In this section, we conduct experiments and in-depth analysis on three long-
form text matching tasks to demonstrate the effectiveness and efficiency of
our proposed model.
Table 1. Description of the evaluation datasets; AvgWPerD denotes the average
number of words per document and AvgSPerD the average number of sentences per
document.
Dataset | AvgWPerD | AvgSPerD | Train | Dev | Test
---|---|---|---|---|---
CNSE | 982.7 | 20.1 | 17,438 | 5,813 | 5,812
CNSS | 996.6 | 20.4 | 20,102 | 6,701 | 6,700
AAN-Abs | 122.7 | 4.9 | 106,592 | 13,324 | 13,324
AAN-Body | 3270.1 | 111.6 | 104,371 | 12,818 | 12,696
OC | 190.4 | 7.0 | 240,000 | 30,000 | 30,000
S2ORC | 263.7 | 9.3 | 152,000 | 19,000 | 19,000
PAN | 1569.7 | 47.4 | 17,968 | 2,908 | 2,906
### 4.1. Datasets
We use seven public datasets in our experiments, and their detailed statistics
are shown in Table 1.
News Deduplication: For this task, we use two datasets, i.e. Chinese News Same
Event dataset (CNSE) and the Chinese News Same Story dataset (CNSS) released
in (Liu et al., 2019a). They are constructed from a large number of Chinese
news articles collected from major Internet news providers, covering diverse
topics in the open domain (datasets are available at
https://github.com/BangLiu/ArticlePairMatching). CNSE is designed to identify
whether a pair of news articles report the same breaking news (or event), and
CNSS is used to identify whether they belong to the same series of news
stories; both are labeled by professional editors. The negative samples in the
two datasets are not randomly generated: document pairs that contain similar
keywords are selected, and samples with TF-IDF similarity below a certain
threshold are excluded. Finally, we follow the settings in (Liu et al., 2019a)
and split each dataset into training, development, and testing sets with a
6:2:2 ratio of instances.
Citation Recommendation: Citation recommendations can help researchers find
related works and finish paper writing more efficiently. Given the content of
a research paper and a candidate citation paper, the task aims to predict
whether the candidate should be cited by the paper. In our experiment, four
datasets are used for this task, i.e. AAN-Abs, AAN-Body, OC, and S2ORC. AAN-
Abs and AAN-Body are both constructed from the AAN dataset (Radev et al.,
2013), which contains computational linguistics papers published on ACL
Anthology from 2001 to 2014, along with their metadata. For the AAN-Abs
dataset released in (Zhou et al., 2020), each paper’s abstract and its
citations’ abstracts are extracted and treated as positive pairs, and negative
instances are sampled from uncited papers. For the AAN-Body dataset, we follow
the same setting described in the previous work (Jiang et al., 2019; Yang et
al., 2020; Tay et al., 2020), where they remove the reference sections to
prevent the leakage of ground-truth and remove the abstract sections to
increase the difficulty of the task. Besides, we also use the same training,
development, and testing splitting released in (Tay et al., 2020). The OC
dataset (Bhagavatula et al., 2018) contains about 7.1M papers, mainly in
computer science and neuroscience. The S2ORC dataset (Lo et al., 2020) is a
large contextual citation graph of 8.1M open access papers across broad
domains of science. The papers in S2ORC are divided into sections and linked
by citation edges. AAN-Abs, OC, and S2ORC are pre-processed, split, and
released by (Zhou et al., 2020); for a fair comparison, we adopt the same
settings in our experiments.
Plagiarism Detection: The detection of plagiarism has received considerable
attention as a means to protect the copyright of publications. It is a typical
long-form text matching problem because even partial text reuse can establish
plagiarism between two documents. For this task, we use the PAN dataset
(Potthast et al., 2013), which collects web documents with various plagiarism
phenomena. Human annotations indicate the text segments that are relevant to
the plagiarism in both the source and suspicious documents. Following the
settings of (Yang et al., 2016), the positive pairs are constructed from the
segments of the source document and the suspicious document annotated as
plagiarism, while the negative pairs are constructed by replacing the source
segment in a positive pair with a segment from the corresponding source
document that is not annotated as being plagiarised. Note that the
aforementioned AAN-Abs, OC, S2ORC, and PAN datasets can be directly downloaded
from https://xuhuizhou.github.io/Multilevel-Text-Alignment/.
Evaluation Metrics: Since all the above tasks are binary classification, we
use accuracy and F1 as the evaluation measures, similar to (Liu et al., 2019a;
Zhou et al., 2020). Specifically, for each method, we perform training
for 10 epochs and then choose the epoch with the best validation performance
to evaluate on the test set.
### 4.2. Baselines and Experimental Settings
We adopt three types of baseline methods for comparison, including traditional
term-based retrieval methods, deep learning methods for short-form text
matching, and recent deep learning methods for long-form text matching.
We select three traditional term-based methods for comparison, i.e. BM25
(Robertson and Zaragoza, 2009), LDA (Blei et al., 2003) and SimNet (Liu et
al., 2019a). BM25 is one of the most popular traditional term-based methods.
Its experimental results on the CNSE and CNSS datasets are taken directly from
(Liu et al., 2019a), while the others are implemented by ourselves. LDA is a
well-known topic model, used here as a matching method that computes
similarities between the two texts represented as topic-model vectors; we do
not implement it ourselves, and its results are taken directly from (Liu et
al., 2019a). SimNet first extracts five text-pair similarities and then
performs classification with a multi-layer neural network; its results are
also taken from (Liu et al., 2019a).
Considering deep learning methods for short-form text matching, we compare
four types of models, including representation-based models, i.e. DSSM (Huang
et al., 2013), C-DSSM (Shen et al., 2014), and ARC-I (Hu et al., 2014);
interaction-based models, i.e. ARC-II (Hu et al., 2014) and MatchPyramid (Pang
et al., 2016); hybrid models, i.e. DUET (Mitra et al., 2017) and RE2 (Yang et
al., 2019), and the pretraining matching model BERT-Finetuning (Devlin et al.,
2019). The results of DSSM, C-DSSM, ARC-I, ARC-II, MatchPyramid, and DUET on
the CNSE and CNSS datasets are borrowed directly from the previous work (Liu
et al., 2019a), which uses the implementations from MatchZoo (Fan et al.,
2017) (https://github.com/NTMC-Community/MatchZoo). RE2 (Yang et al., 2019) is
implemented using the code released by the authors
(https://github.com/alibaba-edu/simple-effective-text-matching) with the
default configuration, e.g. 300-dimensional pre-trained word vectors from
GloVe.840B and 30 epochs of training. BERT-Finetuning (Devlin et al., 2019) is
fine-tuned on the text matching tasks based on a large-scale pre-trained
language model, i.e. ‘_bert-base-chinese_’ for Chinese and
‘_bert-base-uncased_’ for English in the Transformers library
(https://github.com/huggingface/transformers). For each of the pre-trained
models, we fine-tune it for 10 epochs on the training set.
For the methods specially designed for the long-form text matching problem, we
first focus on traditional hierarchical models, e.g. HAN (Yang et al., 2016)
and its variants GRU-HAN (Zhou et al., 2020), GRU-HAN-CDA (Zhou et al., 2020)
and SMASH (Jiang et al., 2019). Then, we compare our model with a group of
BERT-based hierarchical models, e.g. MatchBERT (Zhou et al., 2020), BERT-HAN
(Zhou et al., 2020), BERT-HAN-CDA (Zhou et al., 2020) and SMITH (Yang et al.,
2020). Finally, we consider matching models that represent each text with a
pre-trained model specifically designed for long-form text, like TransformerXL
(Dai et al., 2019) and Longformer (Beltagy et al., 2020). The results of
GRU-HAN, GRU-HAN-CDA, BERT-HAN, and BERT-HAN-CDA on the AAN-Abs, OC, S2ORC,
and PAN datasets are from the previous work (Zhou et al., 2020). The results
of HAN, SMASH, MatchBERT and SMITH on the AAN-Body dataset are from the
previous work (Yang et al., 2020). TransformerXL-Finetuning and
Longformer-Finetuning are implemented using the Transformers library; the
pre-trained models we select are ‘_transfo-xl-wt103_’ for TransformerXL and
‘_allenai/longformer-base-4096_’ for Longformer. For each of these pre-trained
models, we fine-tune it for 10 epochs on the training set.
Specifically, for the CNSE and CNSS datasets, the Concept Interaction Graph
(CIG) model (Liu et al., 2019a) is the state-of-the-art approach; it generates
a representation for each vertex and then uses a GCN to obtain the matching
score. We therefore compare with three representative models of this approach,
i.e. CIG-Siam-GCN, CIG-Sim&Siam-GCN, and CIG-Sim&Siam-GCN-Simg. The results
are obtained from implementations based on their released code
(https://github.com/BangLiu/ArticlePairMatching).
The hyper-parameters of our Match-Ignition model are listed below. For the
sentence-level filter, the number of selected sentences per text $\lambda$ is
set to 5, and the damping factor $d$ in the PageRank algorithm of Eq 6 is set
to 0.85. For the word-level filter, we adopt a pre-trained BERT model for
Chinese, i.e. ‘bert-base-chinese’, which contains 12 heads and 12 layers. The
word reduction ratio $\alpha$ is set to 10%; that is to say, we remove 10% of
the words per layer. The fine-tuning optimizer is Adam (Kingma and Ba, 2014)
with learning rate $10^{-5}$,
$\beta_{1}=0.9,\beta_{2}=0.999,\epsilon=10^{-8}$, and the batch size is set to
8. The model is built on the Transformers library using PyTorch (Paszke et
al., 2019). The source code will be released at
https://github.com/pl8787/Match-Ignition.
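For reference, the stated fine-tuning configuration corresponds to the following PyTorch setup (the linear layer is only a placeholder for the actual Match-Ignition model, which is not reproduced here):

```python
import torch

model = torch.nn.Linear(768, 1)  # placeholder for the Match-Ignition model
optimizer = torch.optim.Adam(model.parameters(),
                             lr=1e-5, betas=(0.9, 0.999), eps=1e-8)
batch_size = 8
```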
Table 2. Experimental results on the news deduplication task, i.e. the CNSE
and CNSS datasets. Significant performance degradation with respect to
Match-Ignition is denoted by (-) with $\mathbf{p}$-value $\leq$ 0.05. We only
perform significance tests on the models reimplemented from source code; the
results taken from (Liu et al., 2019a) are not tested due to the lack of
detailed predictions.
| | CNSE Dataset | CNSS Dataset
---|---|---|---
| Model | Acc | F1 | Acc | F1
I | BM25 (Robertson and Zaragoza, 2009) | 69.63 | 66.60 | 67.77 | 70.40
LDA (Blei et al., 2003) | 63.81 | 62.44 | 67.77 | 70.40
SimNet (Liu et al., 2019a) | 71.05 | 69.26 | 70.78 | 74.50
II | ARC-I (Hu et al., 2014) | 53.84 | 48.68 | 50.10 | 66.58
ARC-II (Hu et al., 2014) | 54.37 | 36.77 | 52.00 | 53.83
DSSM (Huang et al., 2013) | 58.08 | 64.68 | 61.09 | 70.58
C-DSSM (Shen et al., 2014) | 60.17 | 48.57 | 52.96 | 56.75
MatchPyramid (Pang et al., 2016) | 66.36 | 54.01 | 54.01 | 62.52
DUET (Mitra et al., 2017) | 55.63 | 51.94 | 52.33 | 60.67
RE2 (Yang et al., 2019) | 80.59- | 78.27- | 84.84- | 85.28-
BERT-Finetuning (Devlin et al., 2018) | 81.30- | 79.20- | 86.64- | 87.08-
III | CIG-Siam-GCN (Liu et al., 2019a) | 74.58- | 73.69- | 78.91- | 80.72-
CIG-Sim&Siam-GCN (Liu et al., 2019a) | 84.64- | 82.75- | 89.77- | 90.07-
CIG-Sim&Siam-GCN-Simg (Liu et al., 2019a) | 84.21- | 82.46- | 90.03- | 90.29-
IV | Match-Ignition | 86.32 | 84.55 | 91.28 | 91.39
Table 3. Experimental results on the citation recommendation task, i.e. the
AAN-Abs, OC, and S2ORC datasets, and the plagiarism detection task, i.e. the
PAN dataset. Significant performance degradation with respect to
Match-Ignition is denoted by (-) with $\mathbf{p}$-value $\leq$ 0.05. We only
perform significance tests on the models reimplemented from source code; the
results taken from (Zhou et al., 2020) are not tested due to the lack of
detailed predictions.
| | AAN-Abs Dataset | OC Dataset | S2ORC Dataset | PAN Dataset
---|---|---|---|---|---
| Model | Acc | F1 | Acc | F1 | Acc | F1 | Acc | F1
I | BM25 (Robertson and Zaragoza, 2009) | 67.60- | 68.00- | 80.32- | 80.38- | 76.47- | 76.53- | 61.59- | 62.47-
II | RE2 (Yang et al., 2019) | 87.81- | 88.04- | 94.53- | 94.57- | 95.27- | 95.34- | 61.97- | 58.30-
BERT-Finetune (Devlin et al., 2018) | 88.10- | 88.02- | 94.87- | 94.87- | 96.32- | 96.29- | 59.11- | 69.66-
III | GRU-HAN (Zhou et al., 2020) | 68.01 | 67.23 | 84.46 | 82.26 | 82.36 | 83.28 | 75.70 | 75.88
GRU-HAN-CDA (Zhou et al., 2020) | 75.08 | 75.18 | 89.79 | 89.92 | 91.59 | 91.61 | 75.77 | 76.71
BERT-HAN (Zhou et al., 2020) | 73.36 | 73.51 | 86.31 | 86.81 | 90.67 | 90.76 | 87.57 | 87.36
BERT-HAN-CDA (Zhou et al., 2020) | 82.03 | 82.08 | 90.60 | 90.81 | 91.92 | 92.07 | 86.23 | 86.19
IV | TransformerXL-Finetune (Dai et al., 2019) | 83.85- | 83.24- | 91.61- | 91.79- | 92.50- | 92.39- | 58.25- | 69.07-
Longformer-Finetune (Beltagy et al., 2020) | 88.06- | 88.41- | 94.76- | 94.74- | 96.31- | 96.29- | 56.61- | 69.74-
V | Match-Ignition | 89.62 | 89.64 | 95.70 | 95.71 | 96.97 | 96.97 | 89.37 | 89.42
### 4.3. Experimental Results
The performance comparison results of Match-Ignition against baseline models
are shown in Table 2 and Table 3. From these experimental results, we can draw
the following conclusions:
1) Comparing Match-Ignition with existing deep learning models for short-form
text matching, we can see that Match-Ignition outperforms all types of
existing short-form text matching models on both the CNSE and CNSS datasets.
Specifically, it performs significantly better than two strong baselines, i.e.
the hybrid model RE2 and the pre-training model BERT-Finetuning. For the
citation recommendation and plagiarism detection tasks, we compare the
Match-Ignition model with the two strongest short-form text matching methods
from the news deduplication task. The results in Table 3 II show that
Match-Ignition also
significantly outperforms RE2 and BERT-Finetuning, which demonstrates the
superiority of Match-Ignition against existing short-form text matching
models.
2) We then compare Match-Ignition with the current state-of-the-art methods on
the three tasks, including the graph-based method CIG and the hierarchical
method HAN with its variants. For the news deduplication task, the
Match-Ignition model significantly outperforms the state-of-the-art method
CIG-Sim&Siam-GCN-Simg, as shown in Table 2 III. That is because CIG is usually
affected by noisy concept terms, while our model has the ability to filter
noisy information during the learning process. For the other two tasks, as
shown in Table 3 III, Match-Ignition outperforms the state-of-the-art
hierarchical methods HAN and HAN-CDA. Note that another newly proposed
state-of-the-art model, SMITH-WP+SP, is also an extension of the hierarchical
method to tackle long-form text matching problems. However, it does not use
the AAN-Abs dataset but conducts its experiments on the body-text version of
the AAN dataset, namely AAN-Body. To compare with it, we apply our model to
the AAN-Body dataset. As we can see from the results in Table 4 III, the
Match-Ignition model also outperforms the SMITH-WP+SP model.
3) We also compare the Match-Ignition model with matching models based on
pre-trained long-form text representation models, e.g. TransformerXL and
Longformer. The experimental results in Table 3 IV show that, compared with
the fine-tuned versions of TransformerXL and Longformer, the Match-Ignition
model achieves better performance. This is especially evident in the
plagiarism detection task, where noise strongly affects performance, since the
goal of TransformerXL and Longformer is to preserve as much information of the
long-form text as possible. Note that TransformerXL and Longformer only
release English versions, which are not applicable to the Chinese datasets;
thus we do not list their results for the news deduplication task on the CNSE
and CNSS datasets.
Furthermore, we compare with another branch of state-of-the-art hierarchical
methods, i.e. SMASH and SMITH, on the AAN-Body dataset in Table 4. We do not
run them on the other datasets because 1) SMASH does not provide code for
model construction and training, and 2) SMITH needs to pre-train on
large-scale data and then fine-tune. So we run our model on the dataset they
used, i.e. the AAN-Body dataset, to achieve a fair comparison. The
experimental results show that Match-Ignition also outperforms these baseline
methods.
Table 4. Experimental results on the AAN-Body dataset, providing a fair comparison between our Match-Ignition model and the long-form text matching models proposed in (Yang et al., 2020). | | AAN-Body Dataset
---|---|---
| Model | Acc | F1
I | BM25 (Robertson and Zaragoza, 2009) | 59.66 | 59.90
II | RE2 (Yang et al., 2019) | 80.15 | 79.25
III | HAN (Yang et al., 2016) | 82.19 | 82.57
SMASH (Jiang et al., 2019) | 83.75 | 82.78
MatchBERT (Yang et al., 2020) | 83.55 | 82.93
SMITH-WP+SP (Yang et al., 2020) | 85.36 | 85.43
IV | Match-Ignition | 89.92 | 89.91
Table 5. Ablation study of the two-level noise filtering mechanisms in Match-Ignition. | CNSE | CNSS
---|---|---
Model | Acc | F1 | Acc | F1
Match-Ignition | 86.32 | 84.55 | 91.28 | 91.39
$\cdot$ Sentence-level Filter Only | 84.11 | 82.17 | 91.04 | 91.07
$\cdot$ Word-level Filter Only | 80.31 | 76.39 | 91.10 | 91.18
BERT-Finetune | 81.30 | 79.20 | 86.64 | 87.08
Table 6. The impact of the word reduction ratio $\alpha$ and the execution time of these models. Word Reduction Ratio $\alpha$ | CNSE | CNSS | Time per batch
---|---|---|---
 | Acc | F1 | Acc | F1 | Train | Eval
0% | 84.11 | 82.17 | 91.04 | 91.07 | 1.73s | 0.42s
5% | 85.68 | 83.65 | 90.70 | 90.73 | 1.58s | 0.37s
10% | 86.32 | 84.55 | 91.28 | 91.39 | 1.33s | 0.31s
20% | 82.55 | 79.66 | 90.25 | 90.21 | 1.07s | 0.21s
Table 7. Comparison results with other word-level noise filtering strategies. | CNSE | CNSS
---|---|---
Word-level Filter | Acc | F1 | Acc | F1
Random | 80.38 | 79.19 | 87.68 | 88.20
Embedding Norm | 80.54 | 78.15 | 84.92 | 85.03
Attention Weight | 85.52 | 83.34 | 89.91 | 89.88
PageRank | 86.32 | 84.55 | 91.28 | 91.39
### 4.4. Ablation Study
Now we conduct an ablation study to investigate the two-level noise filtering
strategies in Match-Ignition on news deduplication.
#### 4.4.1. Investigations on the sentence-level filtering
In Table 5, the ‘Sentence-level Filter Only’ and ‘Word-level Filter Only’
models denote the results obtained by using only the sentence-level and
word-level filter, respectively. The sentence-level filter is clearly critical
on both the CNSE and CNSS datasets: 1) without it, accuracy degrades from
86.32%/91.28% to 80.31%/91.10% on CNSE and CNSS, respectively; 2)
Match-Ignition with the sentence-level filter only outperforms the strong
baseline ‘BERT-Finetune’ by 3.5% and 5.1% accuracy on CNSE and CNSS,
respectively.
### 4.5. Case Study
Figure 3. (a) sentence graph for each document using TextRank; (b) sentence
graph built in Match-Ignition, where each sentence is a node, its color
represents the document it belongs to, and its size represents its importance
(PageRank value); (c) illustrates the word importances, where a darker color
means a more important word.
#### 4.5.1. Investigations on word-level filtering
Still from Table 5, we can see the effect of word-level filtering: 1)
performance increases by 2.6% and 0.3% from the ‘Sentence-level Filter Only’
model to the full Match-Ignition model on CNSE and CNSS, respectively; 2)
though Match-Ignition with the word-level filter only cannot beat
BERT-Finetune on CNSE, it outperforms BERT-Finetune by 5.1% on CNSS.
Therefore, word-level filtering also plays an important role in
Match-Ignition.
We then study the impact of the word reduction ratio $\alpha$, which is a
major hyper-parameter in the word-level filter because it determines how many
words/nodes are deleted in each layer. Specifically, we evaluate four word
reduction ratios, where $\alpha=0\%$ means the word-level filter is turned
off. From the results shown in Table 6, we can see that too small or too large
a value of $\alpha$ leads to bad performance, and $\alpha=10\%$ yields the
best performance on both the CNSE and CNSS datasets.
We also examine the efficiency of the Match-Ignition model with different
$\alpha$, as shown in Table 6. Note that the sentence-level filter executes
very fast compared to evaluating the Transformer, so its cost can be ignored.
Theoretically, the major time cost in Match-Ignition is computing the
word-by-word similarity matrices in the self-attention blocks of the
Transformer model. Let $N$ denote the text length and $L$ the number of
layers; the computation cost can then be approximated by
${\text{TimeCost}}(\alpha)\approx\sum\nolimits_{l=0}^{L-1}(1-\alpha)^{2l},$
where ${\text{TimeCost}}(0\%)\approx 12$ and ${\text{TimeCost}}(20\%)\approx
2.76$; thus $\alpha=20\%$ is roughly 4 times faster than $\alpha=0\%$ in
theory. In our experiments, we use a single 12G Nvidia K80 GPU with batch size
8. The efficiency results in Table 6 show that $\alpha=20\%$ is 1.6 times
faster than $\alpha=0\%$ at the training stage and 2 times faster at the
evaluation stage.
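The theoretical estimate above can be reproduced with a few lines of Python:

```python
def time_cost(alpha, L=12):
    # Sum of per-layer costs, each proportional to the squared layer size.
    return sum((1 - alpha) ** (2 * l) for l in range(L))

print(time_cost(0.0))   # 12.0
print(time_cost(0.2))   # ~2.76, i.e. roughly 4x fewer operations in theory
```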
Furthermore, we compare different types of word-level filtering strategies.
Random stands for the method that randomly selects words at each layer, and
Embedding Norm selects words depending on their embedding norms. Attention
Weight uses the attention weights to determine the importance of a word, which
can be viewed as a special case of PageRank, i.e. without propagation on the
graph. From the results in Table 7, PageRank achieves the best results,
demonstrating the importance of the word selection strategy in word-level
filtering.
To illustrate the Match-Ignition model more intuitively, we give an example
from the CNSE dataset to visualize the sentence-level graph (Fig 3 (a)(b)) and
word importance (Fig 3 (c)).
Specifically, Figure 3 (a) shows the graphs obtained by directly applying
TextRank to each document separately, and Figure 3 (b) shows the
sentence-level graph constructed in Match-Ignition. The difference indicates
the rationality of our model. For example, sentence ‘2238-01’ is equally
important as ‘2238-02’ and ‘2238-03’ in Figure 3 (a), while it becomes much
more important in Figure 3 (b) because it has more connections with the
sentences in Doc1. Therefore, our model is capable of capturing the key
sentences in the matching process by considering connections both inside and
between the two documents.
Figure 3 (c) shows the word importance in different colors, where a darker
color indicates a higher importance score. Specifically, the importance score
is computed based on the number of layers retaining the word, which reflects
the importance of each word over the whole matching process. The results are
consistent with human understanding. For example, the location ‘the Foshan’
and the name of the policy ‘Draft for Solicitation of Comment’ are important
for determining the matching degree of the news, and they indeed obtain higher
importance scores in the model, as highlighted with rectangles. Furthermore,
the results show that special tokens like [CLS] and [SEP] are also important
for long-form text matching. That is because the [CLS] token acts as the
global information aggregator and the [SEP] token acts as a separator of the
two texts, which are two crucial indicators in long-form text matching.
## 5\. Conclusion
In this paper, we propose a novel hierarchical noise filtering approach for
the long-form text matching problem. The novelty lies in the employment of the
well-known PageRank algorithm to identify and filter both sentence-level and
word-level noisy information, which can be viewed as a generalized version of
using attention weights with propagation on the graph. We conduct extensive
experiments on three typical long-form text matching tasks covering seven
public datasets, and the results show that our proposed model significantly
outperforms both short-form text matching models and recent state-of-the-art
long-form text matching models.
In the future, we plan to investigate how to jointly learn the sentence-level
and word-level noise filter in Match-Ignition. In addition, we would like to
study the relation between Match-Ignition and graph neural networks, and
whether there exists a graph neural network-based model that achieves the
two-level noise filtering in long-form text matching.
###### Acknowledgements.
This work was supported by the National Natural Science Foundation of China
(NSFC) under Grants No. 61906180, No. 61773362 and No. 91746301, and the
National Key R&D Program of China under Grant 2020AAA0105200.
## References
* Abadi et al. (2016) Martín Abadi, Paul Barham, Jianmin Chen, Zhifeng Chen, Andy Davis, Jeffrey Dean, Matthieu Devin, Sanjay Ghemawat, Geoffrey Irving, Michael Isard, et al. 2016\. Tensorflow: A system for large-scale machine learning. In _12th USENIX Symposium on Operating Systems Design and Implementation_. 265–283.
* Beltagy et al. (2020) Iz Beltagy, Matthew E. Peters, and Arman Cohan. 2020\. Longformer: The Long-Document Transformer. arXiv:2004.05150 [cs.CL]
* Bhagavatula et al. (2018) Chandra Bhagavatula, Sergey Feldman, Russell Power, and Waleed Ammar. 2018. Content-Based Citation Recommendation. In _Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers)_. 238–251.
* Blei et al. (2003) David M Blei, Andrew Y Ng, and Michael I Jordan. 2003\. Latent dirichlet allocation. _Journal of machine Learning research_ 3, Jan (2003), 993–1022.
* Brin and Page (1998) Sergey Brin and Lawrence Page. 1998. The anatomy of a large-scale hypertextual Web search engine. _Computer Networks and ISDN Systems_ 30, 1-7 (1998), 107–117.
* Brunner et al. (2019) Gino Brunner, Yang Liu, Damian Pascual, Oliver Richter, Massimiliano Ciaramita, and Roger Wattenhofer. 2019. On Identifiability in Transformers. In _International Conference on Learning Representations_.
* Chopra et al. (2005) Sumit Chopra, Raia Hadsell, Yann LeCun, et al. 2005\. Learning a similarity metric discriminatively, with application to face verification. In _CVPR (1)_ (Boston, Massachusetts). 539–546.
* Dai et al. (2019) Zihang Dai, Zhilin Yang, Yiming Yang, Jaime G Carbonell, Quoc Le, and Ruslan Salakhutdinov. 2019\. Transformer-XL: Attentive Language Models beyond a Fixed-Length Context. In _Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics_. 2978–2988.
* Devlin et al. (2018) Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. Bert: Pre-training of deep bidirectional transformers for language understanding. _arXiv preprint arXiv:1810.04805_ (2018).
* Devlin et al. (2019) Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. In _Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics_. 4171–4186.
* Fan et al. (2018) Yixing Fan, Jiafeng Guo, Yanyan Lan, Jun Xu, Chengxiang Zhai, and Xueqi Cheng. 2018\. Modeling Diverse Relevance Patterns in Ad-Hoc Retrieval. In _The 41st International ACM SIGIR Conference on Research Development in Information Retrieval_ (Ann Arbor, MI, USA) _(SIGIR ’18)_. Association for Computing Machinery, New York, NY, USA, 375–384.
* Fan et al. (2017) Yixing Fan, Liang Pang, JianPeng Hou, Jiafeng Guo, Yanyan Lan, and Xueqi Cheng. 2017\. Matchzoo: A toolkit for deep text matching. _arXiv preprint arXiv:1707.07270_ (2017).
* Guo et al. (2016) Jiafeng Guo, Yixing Fan, Qingyao Ai, and W Bruce Croft. 2016\. A deep relevance matching model for ad-hoc retrieval. In _Proceedings of the 25th ACM CIKM_. ACM, 55–64.
* Guo et al. (2019a) Jiafeng Guo, Yixing Fan, Liang Pang, Liu Yang, Qingyao Ai, Hamed Zamani, Chen Wu, W Bruce Croft, and Xueqi Cheng. 2019a. A deep look into neural ranking models for information retrieval. _Information Processing & Management_ (2019), 102067.
* Guo et al. (2019b) Qipeng Guo, Xipeng Qiu, Pengfei Liu, Yunfan Shao, Xiangyang Xue, and Zheng Zhang. 2019b. Star-Transformer. In _Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers)_. 1315–1325.
* Hu et al. (2014) Baotian Hu, Zhengdong Lu, Hang Li, and Qingcai Chen. 2014\. Convolutional neural network architectures for matching natural language sentences. In _Advances in neural information processing systems_. 2042–2050.
* Huang et al. (2013) Po-Sen Huang, Xiaodong He, Jianfeng Gao, Li Deng, Alex Acero, and Larry Heck. 2013\. Learning deep structured semantic models for web search using clickthrough data. In _Proceedings of the 22nd ACM international conference on Information & Knowledge Management_. ACM, 2333–2338.
* Hui et al. (2017) Kai Hui, Andrew Yates, Klaus Berberich, and Gerard de Melo. 2017\. PACRR: A Position-Aware Neural IR Model for Relevance Matching. In _Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing_. 1049–1058.
* Jiang et al. (2019) Jyun-Yu Jiang, Mingyang Zhang, Cheng Li, Michael Bendersky, Nadav Golbandi, and Marc Najork. 2019\. Semantic Text Matching for Long-Form Documents. In _The World Wide Web Conference_ (San Francisco, CA, USA) _(WWW ’19)_. Association for Computing Machinery, New York, NY, USA, 795–806.
* Jiang et al. (2018) Ray Jiang, Sven Gowal, Timothy A Mann, and Danilo J Rezende. 2018\. Beyond greedy ranking: Slate optimization via List-CVAE. _arXiv preprint arXiv:1803.01682_ (2018).
* Kingma and Ba (2014) Diederik P Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. _arXiv preprint arXiv:1412.6980_ (2014).
* Liu et al. (2019a) Bang Liu, Di Niu, Haojie Wei, Jinghong Lin, Yancheng He, Kunfeng Lai, and Yu Xu. 2019a. Matching Article Pairs with Graphical Decomposition and Convolutions. In _Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics_. Association for Computational Linguistics, Florence, Italy, 6284–6294.
* Liu et al. (2019b) Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019b. Roberta: A robustly optimized bert pretraining approach. _arXiv preprint arXiv:1907.11692_ (2019).
* Lo et al. (2020) Kyle Lo, Lucy Lu Wang, Mark Neumann, Rodney Kinney, and Daniel S Weld. 2020. S2ORC: The Semantic Scholar Open Research Corpus. In _Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics_. 4969–4983.
* Lu and Li (2013) Zhengdong Lu and Hang Li. 2013. A deep architecture for matching short texts. In _Advances in Neural Information Processing Systems_. 1367–1375.
# Hypernetworks: From Posets to Geometry
Emil Saucan Department of Applied Mathematics, ORT Braude College of
Engineering, Karmiel, Israel<EMAIL_ADDRESS>
(Date: January 15, 2021)
###### Abstract.
We show that hypernetworks can be regarded as posets which, in their turn,
have a natural interpretation as simplicial complexes and, as such, are
endowed with an intrinsic notion of curvature, namely the Forman Ricci
curvature, that strongly correlates with the Euler characteristic of the
simplicial complex. This approach, inspired by the work of E. Bloch, allows us
to canonically associate a simplicial complex structure to a hypernetwork,
directed or undirected. In particular, this greatly simplifies the geometric
Persistent Homology method we previously proposed.
## 1\. Introduction
This paper is dedicated to the proposition that hypernetworks can be naturally
construed as posets that, in turn, have an innate interpretation as simplicial
complexes and, as such, they are endowed with intrinsic interconnected
topological and geometric properties, more precisely with a notion of
curvature that strongly correlates – not just in the statistical manner – to
the topological structure and, more specifically, to the Euler characteristic
of the associated simplicial complex. This observation, that stems from E.
Bloch’s work [1], allows us not only to associate to hypernetworks a structure
of a simplicial complex, but to do this in a canonical manner, that permits us
to compute its essential topological structure, following from its intrinsic
hierarchical organization, and to attach to it a geometric measure that is
strongly related to the topological one, namely the Forman Ricci curvature.
This approach allows us to preserve the essential structure of the hypernetwork,
while concentrating on larger scale structures (i.e. hypervertices and
hyperedges), rather than on the local, perhaps accidental information attached
to each particular vertex or edge. This allows us, in turn, to extract the
structural information mentioned above. We should also mention here that we
also proposed a somewhat different approach to the parametrization of
hypernetworks as simplicial and more general polyhedral complexes in [13].
While the previous method allows us to preserve more of the details inherent
in the original model of the hypernetwork, it is also harder to implement in
a computer-ready manner, thus emphasizing the advantage of the canonical,
easily derivable structure we propose herein. In particular, it greatly
simplifies the geometric Persistent Homology method proposed in [10] (see also
[5], [8]). We should further underline that both these approaches have an
additional advantage over the established way of representing hypernetworks as
graphs/networks, namely the fact that they allow for a simple method for the
structure-preserving embedding of hypernetworks in Euclidean $N$-space, with
clear advantages for their representation and analysis.
## 2\. Theoretical Background
### 2.1. Hypernetworks
We begin by reminding the reader of the definition of the type of structure we
study.
###### Definition 2.1 (Hypernetworks).
We define a _hypernetwork_ as a hypergraph
$\mathcal{H}=(\mathcal{V},\mathcal{E})$, where the hypernode set
$\mathcal{V}=(V_{1},\ldots,V_{p})$ consists of sets of nodes,
$V_{i}=\\{v_{i}^{1},\ldots,v_{i}^{k_{i}}\\}$, and where $\mathcal{E}$, the
hyperedge set of $\mathcal{H}$, consists of the hyperedges $E_{ij}=V_{i}V_{j}$
connecting groups of nodes/hypervertices.
Note that it is natural to view each hypervertex as a complete graph (or
clique) $K_{n}$, which in turn is identifiable with (the 1-skeleton of) the
standard $n$-simplex.
###### Remark 2.2.
Note that in [13] we have employed a somewhat more general, but also less
common, definition of hypernetwork, where hypervertices were not viewed as
complete graphs, thus allowing for the treatment of hypernetworks as general
polyhedral complexes, not merely simplicial ones, as herein (see below).
### 2.2. Posets
We briefly summarize here the minimal definitions and properties of partially
ordered sets, or posets that we need in the sequel. In doing this, we presume
the reader is familiar with the basic definition of posets.
###### Definition 2.3 (Coverings).
Let $(\mathcal{P},<)$ be a poset, where $<$ denotes the partial order relation
on $\mathcal{P}$, and let $p,q$ be elements of $\mathcal{P}$. We say that $p$
covers $q$ if $p>q$ and there does not exist $r\in\mathcal{P}$ such that
$q<r<p$. We denote the fact that $p$ covers $q$ by $p\succ q$.
While a variety of examples of posets pervade mathematics, the basic (and
perhaps motivating) example is that of the set of subsets (a.k.a. the power
set) $\mathcal{P}(X)$ of a given set $X$, with the role of the order relation
being played by inclusion. Given the common interpretation of networks, the
identification of hypernetworks with a subset of $\mathcal{P}(X)$ is
immediate.
###### Definition 2.4 (Ranked posets).
Given a poset $(\mathcal{P},<)$, a rank function for $\mathcal{P}$ is a
function $\rho:\mathcal{P}\rightarrow\mathbb{N}$ such that
1. If $q$ is a minimal element of $\mathcal{P}$, then $\rho(q)=0$;
2. If $q\prec p$, then $\rho(p)=\rho(q)+1$.
A poset $\mathcal{P}$ is called ranked if it admits a rank function. The
maximal value of $\rho(p),p\in\mathcal{P}$ is called the rank of
$\mathcal{P}$, and it is denoted by $r(\mathcal{P})$.
Note that in the definition of ranked posets we essentially (but not strictly)
follow [1] and, while other terminologies exist [6],[15], we prefer the one
above for the sake of clarity and concordance with Bloch’s paper.
Let us note that if a poset is ranked, then the rank function is unique.
Furthermore, if $\mathcal{P}$ is a ranked poset of rank $r$, and if
$j\in\\{0,\ldots,r\\}$, we denote
$\mathcal{P}_{j}=\\{p\in\mathcal{P}\,|\,\rho(p)=j\\}$, and by $F_{j}$ the
cardinality of $\mathcal{P}_{j}$, i.e. $F_{j}=|\mathcal{P}_{j}|$.
Again, as for the case of posets in general, $(\mathcal{P}(X),\subset)$
represents the archetypal example of ranked posets, thus hypernetworks
represent, in essence, ranked posets, which is essential for the sequel.
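To make this identification concrete, here is a minimal Python sketch (with illustrative names; it is not code from any of the cited works) that builds the inclusion poset generated by a toy hypernetwork, together with the rank function of Definition 2.4 and the level counts $F_{j}$:

```python
from itertools import combinations

def poset_from_hypernetwork(hyperedges):
    """All non-empty subsets of every hyperedge, ordered by inclusion;
    each group of nodes is viewed as a clique/full simplex (cf. Remark 2.2)."""
    elements = set()
    for he in hyperedges:
        nodes = tuple(sorted(he))
        for k in range(1, len(nodes) + 1):
            elements.update(combinations(nodes, k))
    return elements

def rank(p):
    # Rank function of Definition 2.4: singletons (minimal elements) get rank 0.
    return len(p) - 1

# Toy hypernetwork: two hyperedges sharing one node.
P = poset_from_hypernetwork([{1, 2, 3}, {3, 4}])
F = {}
for p in P:
    F[rank(p)] = F.get(rank(p), 0) + 1
print(F)  # level counts; here F_0 = 4, F_1 = 4, F_2 = 1
```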
###### Remark 2.5.
Many of the hypernetworks arising as models in real-life problems are actually
oriented ones (see, for instance [11], [13]). For these the poset structure is
even more evident, as the order relation is emphasized by the directionality.
If, moreover, there are no loops, the resulting poset is also ranked.
### 2.3. Simplicial Complexes and the Euler Characteristic
As for the case of posets, we do not bring the full technical definition of a
simplicial complex, but rather refer the reader to such classics as [3] or
[7].
Given a poset $\mathcal{P}$, there exists a canonical way of producing an
associated ordered simplicial complex $\Delta(\mathcal{P})$, by considering a
vertex for each element $p\in\mathcal{P}$ and an $m$-simplex for each chain
$p_{0}\prec p_{1}\prec\ldots\prec p_{m}$ of elements of $\mathcal{P}$.
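As a small illustrative sketch of this construction (reusing `P` from the snippet in Section 2.2; the helper name is hypothetical), the chains of a finite poset, and hence the simplices of $\Delta(\mathcal{P})$, can be enumerated by repeatedly extending chains at their top element:

```python
def order_complex(elements, less_than):
    """Simplices of Delta(P): one m-simplex per chain p_0 < p_1 < ... < p_m."""
    elements = list(elements)
    simplices = {(p,) for p in elements}   # vertices, i.e. one-element chains
    frontier = set(simplices)
    while frontier:
        longer = set()
        for chain in frontier:
            for q in elements:
                if less_than(chain[-1], q):    # extend the chain at the top
                    longer.add(chain + (q,))
        simplices |= longer
        frontier = longer
    return simplices

# Strict inclusion as the order relation on the inclusion poset P.
delta = order_complex(P, lambda a, b: set(a) < set(b))
```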
Since in the present paper we consider only finite hypernetworks/sets, we
can define the Euler characteristic of the poset $\mathcal{P}$ as being equal
to that of the associated simplicial complex $\Delta(\mathcal{P})$, i.e.
(2.1) $\chi(\mathcal{P})=\chi(\Delta(\mathcal{P}))\,.$
Note that this definition allows us to define the Euler characteristic of any
poset, even if it is not ranked, due to the fact that the associated
simplicial complex is naturally ranked by the dimension of the simplices
(faces).
However, if $\mathcal{P}$ is itself ranked – as indeed it is in our setting –
then there exists a direct, purely combinatorial way of defining the Euler
characteristic of $\mathcal{P}$ that emulates the classical one, in the
following manner:
(2.2) $\chi_{g}(\mathcal{P})=\sum_{j=0}^{r}(-1)^{j}F_{j}\,.$
While in general $\chi(\mathcal{P})$ and $\chi_{g}(\mathcal{P})$ do not
coincide, they are identical in the case of $CW$ complexes, thus in particular
for polyhedral complexes, hence a fortiori for simplicial complexes. In
particular, we shall obtain the same Euler characteristic irrespective of the
model of hypernetwork that we choose to operate with: the poset model
$\mathcal{P}$, its associated complex $\Delta(\mathcal{P})$, the geometric
view of posets as simplicial complexes that attaches to each subset of
cardinality $k$, i.e. to each hypervertex, a $k$-simplex, or the more general
polyhedral model that we considered in [13]. It follows, therefore, that
The Euler characteristic of a hypernetwork is a well defined invariant,
independent of the chosen hypernetwork model, and as such captures the
essential topological structure of the network.
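As a hedged numerical illustration of this invariance (reusing `P`, `rank` and `delta` from the sketches above), the two Euler characteristics of Formulas (2.1) and (2.2) can be computed side by side; for face posets of simplicial complexes, as here, the two values coincide:

```python
from collections import Counter

def chi_complex(simplices):
    # chi(Delta(P)) of (2.1): alternating count of simplices by dimension.
    return sum((-1) ** (len(s) - 1) for s in simplices)

def chi_graded(elements, rank):
    # chi_g(P) of (2.2): alternating sum of the level sizes F_j.
    F = Counter(rank(p) for p in elements)
    return sum((-1) ** j * F[j] for j in F)

print(chi_graded(P, rank), chi_complex(delta))  # both equal 1 for the toy example
```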
In the sequel we shall concentrate, for reasons that we’ll explain in due
time, on the subcomplex of $\Delta(\mathcal{P})$ consisting of faces of
dimension $\leq 2$ – that is to say, on the 2-skeleton
$\Delta^{2}(\mathcal{P})$ of $\Delta(\mathcal{P})$. In particular, we shall
show that $\chi(\Delta^{2}(\mathcal{P}))$ is not just a topological invariant,
but is also closely related, in this dimension, to the geometry of
$\Delta^{2}(\mathcal{P})$.
### 2.4. Forman Curvature
R. Forman introduced in [2] a discretization of the notion of Ricci curvature,
by adapting to the quite general setting of $CW$ complexes the by now
classical Bochner-Weitzenböck formula (see, e.g. [4]). We expatiated on the
geometric content of the notion of Ricci curvature, and of Forman’s
discretization in particular, elsewhere [9]; therefore, in order not to repeat
ourselves too much, we refer the reader to the above mentioned paper. However,
let us note that in [9] we referred to Forman’s original notion as the
augmented Forman curvature, as opposed to the reduced, 1-dimensional notion
that we introduced and employed in the study of networks in [14].
While Forman’s Ricci curvature applies for both vertex and edge weighted
complexes (a fact that plays an important role in its extended and flexible
applicability range), we concentrate here on the combinatorial case, namely
that of all weights (vertex as well as edge weights) equal to 1. In this case,
Forman’s curvature, whose expression in the general case [2] is quite
complicated, even when restricting ourselves to 2-dimensional simplicial
complexes [9], has the following simple and appealing form:
(2.3) ${\rm Ric_{F}}(e)=\\#\\{t^{2}>e\\}-\\#\\{\hat{e}:\hat{e}\|e\\}+2\;.$
Here $t^{2}$ denotes triangles and $e$ edges, while “$\|$” denotes
parallelism, where two faces of the same dimension (e.g. edges) are said to be
parallel if they share a common “parent” (a higher dimensional face containing
them, e.g. a triangle) or a common “child” (a lower dimensional face, e.g. a
vertex), but not both.
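A minimal sketch of Formula (2.3) for a combinatorial 2-complex (all weights equal to 1), assuming Forman's convention that parallel faces share exactly one common parent or child; the data layout is illustrative:

```python
def forman_ricci(e, edges, triangles):
    """Formula (2.3) for an edge e of a 2-complex given by edge/triangle lists."""
    e = frozenset(e)
    n_tri = sum(1 for t in triangles if e <= set(t))       # triangles above e
    n_par = 0
    for f in edges:
        f = frozenset(f)
        if f == e:
            continue
        share_child = bool(e & f)                          # common vertex
        share_parent = any(e <= set(t) and f <= set(t)     # common triangle
                           for t in triangles)
        if share_child != share_parent:                    # exactly one shared
            n_par += 1
    return n_tri - n_par + 2

edges = [(1, 2), (1, 3), (2, 3)]
triangles = [(1, 2, 3)]
print(forman_ricci((1, 2), edges, triangles))  # 3 for each edge of a lone triangle
```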
###### Remark 2.6.
For “shallow” hypernetworks, like the chemical reactions ones considered in
[11], both Forman Ricci curvature and especially the Euler characteristic are
readily computable but also, due to the reduced depth of such hypernetworks,
rather trivial.
### 2.5. The Gauss-Bonnet formula
In the smooth setting, there exists a strong connection between curvature
and the Euler characteristic, that is captured by the classical Gauss-Bonnet
formula (see, e.g. [4]). The Forman Ricci curvature, as initially defined in
[2], does not, unfortunately, satisfy a Gauss-Bonnet type theorem, since no
counterparts in dimensions 0 and 2, essential in the formulation of the
Gauss-Bonnet Theorem, are defined therein. However, Bloch defined these
necessary curvature terms and was thus able to formulate in [1] an analogue of
the Gauss-Bonnet Theorem, in the setting of ranked posets. While in general
the 1-dimensional curvature term has no close classical counterpart, in the
particular case of cell complexes, and thus of simplicial complexes in
particular, Euler characteristic and Forman curvature are intertwined in the
following discrete version of the Gauss-Bonnet Theorem:
(2.4) $\sum_{v\in F_{0}}R_{0}(v)-\sum_{e\in F_{1}}{\rm Ric_{F}}(e)+\sum_{t\in F_{2}}R_{2}(t)=\chi(X)\,.$
Here $R_{0}$ and $R_{2}$ denote the 0-, respectively 2-dimensional, curvature
terms required in a Gauss-Bonnet type formula. These curvature functions are
defined via a number of auxiliary functions, as follows:
(2.5)
$R_{0}(v)=1+\frac{3}{2}A_{0}(v)-A_{0}^{2}(v)\,,\>R_{2}(t)=1+6B_{2}(t)-B_{2}^{2}(t)\,;$
where $A_{0},B_{2}$ are the aforementioned auxiliary functions, which are
defined in the following simple and combinatorially intuitive manner:
(2.6) $A_{0}(x)=\\#\\{y\in F_{1},x<y\\}\,,\>B_{2}(x)=\\#\\{z\in
F_{1},z<x\\}\,.$
Since we consider only triangular 2-faces, the formulas for the curvature
functions reduce to the very simple and intuitive ones below:
(2.7) $R_{0}(v)=1+\frac{3}{2}{\rm deg}(v)-{\rm deg}^{2}(v)\,,\>R_{2}(t)=1+6\cdot 3-3^{2}=10\,;$
where ${\rm deg}(v)$ denotes, conforming to the canonical notation, the degree of
the (hyper-)vertex $v$, i.e. the number of its adjacent vertices.
From these formulas and from the general expression of the Gauss-Bonnet
formula (2.4) we obtain the following combinatorial formulation of the noted
formula in the setting of the 2-dimensional simplicial complexes:
(2.8) $\chi(X)=\sum_{v\in F_{0}}\left(1+\frac{3}{2}{\rm deg}(v)-{\rm deg}^{2}(v)\right)-\sum_{e\in F_{1}}{\rm Ric_{F}}(e)+10F_{2}\;;$
or, after substituting Formula (2.3) and performing some additional
manipulations:
(2.9) $\chi(X)=\sum_{v\in F_{0}}\left(1+\frac{3}{2}{\rm deg}(v)-{\rm deg}^{2}(v)\right)-\sum_{e\in F_{1}}\left(4+3\cdot\\#\\{t>e\\}-\sum_{v<e}{\rm deg}(v)\right)+10F_{2}\;.$
(Note that we have preferred, for unity of notation throughout the paper, to
write in the formulas above, $F_{0}$ rather than $V$, and $F_{2}$ instead of
$F$, as commonly used.)
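As a sanity check, the following sketch evaluates the three curvature sums of Formula (2.8) on a single filled triangle, reusing `forman_ricci` from the sketch in Section 2.4; the result recovers $\chi=V-E+F=1$:

```python
from collections import Counter

vertices = [1, 2, 3]
edges = [(1, 2), (1, 3), (2, 3)]
triangles = [(1, 2, 3)]

deg = Counter(v for e in edges for v in e)        # vertex degrees
R0 = sum(1 + 1.5 * deg[v] - deg[v] ** 2 for v in vertices)
Ric = sum(forman_ricci(e, edges, triangles) for e in edges)
R2 = 10 * len(triangles)   # R_2(t) = 1 + 6*B_2(t) - B_2(t)^2 = 10 for B_2 = 3
print(R0 - Ric + R2)       # 1.0 = chi = V - E + F = 3 - 3 + 1
```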
###### Remark 2.7.
Formula (2.4) and its variations allow for the study of the long-time behavior
of evolving (hyper-)networks via the use of prototype networks of given Euler
characteristic [16].
## 3\. Directions of Future Study
The first direction of research that naturally imposes itself as necessary and
that we want to explore in the immediate future is that of understanding the
higher dimensional structure of hypernetworks, that is, by taking into account
chains of length greater than two in the corresponding poset and by studying
the structure of the resulting simplicial complexes.
As we have seen, the (generalized) Euler characteristic is defined for an
$n$-dimensional (simplicial) complex. Therefore it is possible to employ its
simple defining Formula (2.2) to obtain essential topological information
about hypernetworks of any depth and, indeed, by successively restricting to
lower dimensional simplices (i.e. chains in the corresponding poset), explore
the topological structure of a hypernetwork in any dimension.
Moreover, it is possible to explore not just the topological properties of a
hypernetwork, but its geometric ones as well. The simplest manner to obtain
geometric information regarding the hypernetwork is by computing again the
Forman Ricci curvature ${\rm Ric_{F}}$ of its edges. Indeed, Forman Ricci
curvature is an edge measure, determined solely by the incidences of an edge
with the vertices and 2-faces of the simplicial complex, thus being, in
effect, “blind” to the higher dimensional faces of the complex. However, it is
possible to compute
curvature measures for simplices in all dimensions, as Forman defined such
curvatures in all dimensions [2]. While the expressions of higher dimensional
curvature functions are more complicated (and we refer the reader to Forman’s
original work), and their geometric content is less clear than Ricci
curvature, they still allow us to geometrically filter hypernetworks in all
dimensions, in addition to the topological understanding supplied by the Euler
characteristic. Here again, the simpler, more geometric approach introduced in
[10] combined with the ideas introduced in the present paper should prove
useful in adding geometric content to the understanding of networks in all
dimensions.
Moreover, while the Euler characteristic, in both its forms, represents a
topological/combinatorial invariant and, as such, operates only with
combinatorial weights (that is to say, equal to 1), Forman’s curvature is
applicable to any weighted $CW$ complex, hence to weighted hypernetworks as
well. This clearly allows for a further refinement of the geometric filtering
process mentioned above, that we wish to explore in a further study.
It follows, therefore, that the combination of the couple of tools above
endows us with simple, yet efficient means to explore, understand and
eventually classify hypernetworks.
Another direction that deserves future study is that of directed
hypernetworks, as such structures not only admit, as we have seen, a
straightforward interpretation as posets, but also arise in many concrete
modeling instances. While in the present paper we have restricted ourselves to
undirected networks, we propose to further investigate the directed ones as
well.
Meanwhile, we should bring to the reader’s attention the fact that we have
previously extended in [12] Forman’s Ricci curvature to directed simplicial
complexes/hypernetworks, and we indicated how this can be extrapolated to
higher dimensions in [13].
Furthermore, we should emphasize that it is easy to extend the combinatorial
Euler characteristic by taking into account only the considered directed
simplices. (See [12], [13] for detailed discussions on the pertinent choices
for the directed faces.) In addition, for 2-dimensional simplicial complexes,
as those arising from hypernetworks on which we concentrated in this study, a
directed Euler characteristic can be developed directly from Formulas (2.9)
and (2.3). One possible form of the resulting formula is
(3.1) $\chi_{I/O}(X)=\sum_{v\in F_{0}}\left(1+\frac{3}{2}{\rm deg_{I/O}}(v)-{\rm deg_{I/O}}^{2}(v)\right)-\sum_{e\in F_{1}}\left(4+3\cdot\\#\\{\overline{t}>e\\}-\sum_{v<e}{\rm deg_{I/O}}(v)\right)+28\\#\overline{t}\;;$
where “$I/O$” denotes the incoming, respectively outgoing edges, and
$\overline{t}$ denotes the chosen oriented (“plumbed-in”) triangles.
## References
* [1] E. Bloch, Combinatorial Ricci Curvature for Polyhedral Surfaces and Posets, preprint, arXiv:1406.4598v1 [math.CO], 2014.
* [2] R. Forman, Bochner’s Method for Cell Complexes and Combinatorial Ricci Curvature, Discrete Comput. Geom. 29(3), 323–374, 2003.
* [3] J. F. Hudson, Piecewise Linear Topology, Benjamin, New York, 1969.
* [4] J. Jost, Riemannian Geometry and Geometric Analysis, Springer, 2011.
* [5] H. Kannan, E. Saucan, I. Roy and A. Samal, Persistent homology of unweighted networks via discrete Morse theory, Scientific Reports (2019) 9:13817.
* [6] K. H. Rosen, J. G. Michaels, J. L. Gross, J. W. Grossman and Douglas R. Shier (eds.), Handbook of discrete and combinatorial mathematics, CRC Press, Boca Raton, FL, 2000.
* [7] C. P. Rourke and B. J. Sanderson, Introduction to piecewise-linear topology, Springer-Verlag, Berlin, 1972.
* [8] I. Roy, S. Vijayaraghavan, S. J. Ramaia and A. Samal, Forman-Ricci curvature and Persistent homology of unweighted complex networks, Chaos, Solitons & Fractals, 140, 110260, 2020.
* [9] A. Samal, R. P. Sreejith, J. Gu, S. Liu, E. Saucan and J. Jost, Comparative analysis of two discretizations of Ricci curvature for complex networks, Scientific Reports 8(1), 8650, 2018.
* [10] E. Saucan, Discrete Morse Theory, Persistent Homology and Forman-Ricci Curvature, preprint, arxiv:submit/3078713.
* [11] E. Saucan, A. Samal, M. Weber and J. Jost, Discrete curvatures and network analysis, Commun. Math. Comput. Chem., 20(3), 605-622.
* [12] E. Saucan, R. P. Sreejith, R. P. Vivek-Ananth, J. Jost and A. Samal, Discrete Ricci curvatures for directed networks, Chaos, Solitons & Fractals, 118, 347-360, 2019.
* [13] E. Saucan, M. Weber, Forman’s Ricci curvature - From networks to hypernetworks, Proceedings of COMPLEX NETWORKS 2018, Studies in Computational Intelligence (SCI), Springer, 706-717, 2019.
* [14] R. P. Sreejith, K. Mohanraj, Jürgen Jost, E. Saucan and A. Samal, Forman curvature for complex networks, Journal of Statistical Mechanics: Theory and Experiment (J. Stat. Mech.), (2016) 063206, (http://iopscience.iop.org/1742-5468/2016/6/063206).
* [15] R. P. Stanley, Enumerative combinatorics, Vol. 1, Cambridge Studies in Advanced Mathematics, vol. 49, Cambridge University Press Cambridge, 1997. (Corrected reprint of the 1986 original.)
* [16] M. Weber, E. Saucan and J. Jost, Coarse geometry of evolving networks, J Complex Netw, 6(5), 706-732, 2018.
11institutetext: Rosseland Centre for Solar Physics, University of Oslo, P.O.
Box 1029, Blindern, NO-0315 Oslo, Norway
11email<EMAIL_ADDRESS>22institutetext: Institute of Theoretical
Astrophysics, University of Oslo, P.O. Box 1029, Blindern, NO-0315 Oslo,
Norway
# Excitation and evolution of coronal oscillations in self-consistent 3D
radiative MHD simulations of the solar atmosphere
P. Kohutova 1122 A. Popovas 1122
(Received; accepted)
###### Abstract
Context. Solar coronal loops are commonly subject to oscillations.
Observations of coronal oscillations are used to infer physical properties of
the coronal plasma using coronal seismology.
Aims. Excitation and evolution of oscillations in coronal loops is typically
studied using highly idealised models of magnetic flux-tubes. In order to
improve our understanding of coronal oscillations, it is necessary to consider
the effect of realistic magnetic field topology and evolution.
Methods. We study excitation and evolution of coronal oscillations in three-
dimensional self-consistent simulations of solar atmosphere spanning from
convection zone to solar corona using radiation-MHD code Bifrost. We use
forward-modelled EUV emission and three-dimensional tracing of magnetic field
to analyse oscillatory behaviour of individual magnetic loops. We further
analyse the evolution of individual plasma velocity components along the loops
using wavelet power spectra to capture changes in the oscillation periods.
Results. Various types of oscillations commonly observed in the corona are
present in the simulation. We detect standing oscillations in both transverse
and longitudinal velocity components, including higher order oscillation
harmonics. We also show that self-consistent simulations reproduce existence
of two distinct regimes of transverse coronal oscillations: rapidly decaying
oscillations triggered by impulsive events and sustained small-scale
oscillations showing no observable damping. No harmonic drivers are detected
at the footpoints of oscillating loops.
Conclusions. Coronal loop oscillations are abundant in self-consistent 3D MHD
simulations of the solar atmosphere. The dynamic evolution and variability of
individual magnetic loops suggest we need to reevaluate our models of
monolithic and static coronal loops with constant lengths in favour of more
realistic models.
###### Key Words.:
Magnetohydrodynamics (MHD) – Sun: corona – Sun: magnetic fields – Sun:
oscillations
## 1 Introduction
Coronal loops, the basic building blocks of solar corona, are commonly subject
to oscillatory behaviour. Due to their magnetic structure they act as wave
guides and can support a variety of wave modes (see e.g. reviews by Nakariakov
& Verwichte 2005; Nakariakov & Kolotkov 2020). Various types of wave modes
have been reported at coronal heights; these include nearly incompressible
transverse oscillations in the direction perpendicular to the loop axis
(Aschwanden et al. 1999; Nakariakov et al. 1999; Verwichte et al. 2013),
compressible longitudinal oscillations along the loop axis (Berghmans & Clette
1999; De Moortel et al. 2000) and incompressible torsional oscillations
(Kohutova et al. 2020b). Transverse oscillations of coronal loops are by far
the most commonly observed type. They were first discovered in the TRACE
observations of coronal loops following a solar flare (Aschwanden et al. 1999;
Nakariakov et al. 1999) and since then have been studied extensively through
high-resolution solar observations, numerical simulations and analytical
theory. The most commonly reported oscillation periods lie in the range of 1
to 10 min, and the oscillation amplitudes range from a few hundred km to several
Mm. Most of the observed transverse coronal loop oscillations have been
identified as corresponding to either standing or propagating kink modes. This
mode identification is based on modelling individual coronal loops as
cylindrical flux tubes (Edwin & Roberts 1983). It should however be noted that
the density structuring in the corona is more complex than this and
assumptions about the individual mode properties based on idealised modelling
of homogeneous plasma structures do not always apply in a non-homogeneous and
dynamic environment that is real solar corona (e.g. Goossens et al. 2019). The
fundamental harmonic of standing kink mode is the most commonly observed,
where the footpoints of the coronal loop act as nodes and the point of maximum
displacement amplitude lies at the loop apex. Despite fundamental harmonic
being the most intuitive resonant response of a line-tied magnetic fieldline
to an external perturbation, excitation of higher-order harmonics is also
perfectly viable; higher order harmonics have also been identified in coronal
loop oscillations (Verwichte et al. 2004; Duckenfield et al. 2018).
Transverse oscillations of coronal loops are commonly used for coronal
seismology (see e.g. reviews by De Moortel 2005; De Moortel & Nakariakov 2012;
Nakariakov & Kolotkov 2020). Coronal seismology is a powerful method that
relies on using oscillation parameters that are readily observable in the
coronal imaging and spectral data to deduce plasma properties such as density,
temperature, magnetic field strength or density structuring, which are
otherwise very challenging to measure directly. The accuracy of the
seismologically deduced quantities is however limited by somewhat simplifying
assumptions of modelling coronal structures as homogeneous plasma cylinders
and is subject to uncertainties caused by projection and line-of-sight
effects.
Transverse coronal loop oscillations are often classified as occurring in two
regimes: large-amplitude rapidly damped oscillations and small amplitude
seemingly ’decayless’ oscillations. Large amplitude oscillations are typically
triggered by an external impulsive event, which is usually easy to identify,
such as a blast wave following a solar flare (e.g. White & Verwichte 2012).
Amplitudes are of the order of 1 Mm and the oscillations are rapidly damped
within 3-4 oscillation periods.
Small amplitude oscillations are observed without any clearly associated
driver and can persist for several hours (Wang et al. 2012; Nisticò et al.
2013; Anfinogentov et al. 2015). Their amplitudes are of the order of 100 km
and they show no observable damping. Even though it is clear they must be
driven by some small-scale abundant process, the exact excitation mechanism of
this oscillation regime is unclear. Several excitation mechanisms have been
proposed to explain the persistent nature of these oscillations. These include
mechanisms acting at coronal heights such as onset of thermal instability in
the corona (Kohutova & Verwichte 2017; Verwichte & Kohutova 2017), presence of
siphon flows in the coronal loop (Kohutova & Verwichte 2018), self-oscillation
of coronal loops due to interaction with quasi-steady flows (Nakariakov et al.
2016; Karampelas & Van Doorsselaere 2020) as well as footpoint-concentrated
drivers. Commonly assumed excitation mechanism is associated with turbulent
vortex flows which are ubiquitous in the lower solar atmosphere (Carlsson et
al. 2010; Shelyag et al. 2011; Liu et al. 2019) and the resulting random
footpoint buffeting. This is typically modelled as a stochastic driver at the
footpoints (e.g. Pagano & De Moortel 2019). Such a driver, however, does not
reproduce the observational characteristics of this type of oscillations, in
particular the stability of the oscillation phase and lack of any observable
damping (Nakariakov et al. 2016). It has been proposed that a footpoint driver
of the form of a broadband noise with a power-law fall-off can in principle
lead to excitation of eigenmodes along coronal loops (Afanasyev et al. 2020);
this has, however, not been tested through MHD simulations. Finally, excitation
of transverse oscillations through solar p-mode coupling has also been
proposed in several studies (Tomczyk & McIntosh 2009; Morton et al. 2016,
2019; Riedl et al. 2019). Since p-modes correspond to acoustic waves generated
by turbulent flows in the solar convection zone, they are subject to acoustic
cut-off at the $\beta=1$ equipartition layer. High magnetic field inclinations
can however lead to p-mode leakage into chromosphere even for frequencies
below the cut-off frequency (De Pontieu et al. 2004), and to subsequent
coupling to different magnetoacoustic wave modes (Santamaria et al. 2015).
3D MHD simulations of coronal fluxtubes with a gravity-acoustic wave driver at
the footpoints, however, did not find any evidence of the acoustic driving
leading to significant transverse displacement of the fluxtubes at coronal
heights (Riedl et al. 2019).
It is in fact very likely that the oscillatory behaviour observed in the
corona is caused by a combination of several different excitation mechanisms.
3D MHD simulations of solar atmosphere spanning from convection zone to corona
in principle include all of the potential mechanisms discussed above. However,
so far only impulsively generated rapidly damped coronal loop oscillations
have been studied in such 3D MHD simulations (Chen & Peter 2015), which show
a good match of damping rates compared with observed impulsively generated
oscillations. Such numerical works also highlight the implications of
observational limitations for coronal seismology, as they enable direct
comparison of seismologically deduced and real values of physical quantities.
Detailed properties of transverse coronal loop oscillations have been studied
numerically in great detail in recent years, including their evolution,
damping and development of instabilities and resulting nonlinear dynamics
(e.g. Antolin et al. 2014; Magyar & Van Doorsselaere 2016; Karampelas et al.
2017). Most of these studies however employ simplified geometries that model
coronal structures as straight uniform fluxtubes. Additionally, such studies
also rely on artificially imposed harmonic footpoint drivers (Karampelas et
al. 2017; Pagano & De Moortel 2017), without discussing where these drivers
originate. Such an approach makes it possible to isolate individual effects
of interest, and to focus on evolution at small scales through techniques such
as adaptive mesh refinement (e.g. Magyar et al. 2015) due to obvious
computational limitations that come with studying physical mechanisms
operating at vastly different spatial scales. There is, however, a need for a
more complex approach to see how such effects manifest in a realistic solar
atmosphere setup and to identify their observational signatures.
In this work we investigate the presence of oscillations of coronal loops in a
simulation spanning from convection zone to corona, with realistic lower
solar atmosphere dynamics, magnetic field geometry and density structuring. We
analyse, for the first time, the excitation of transverse coronal oscillations
in a self-consistent simulation of the solar atmosphere. We use a combination
of forward modelling of coronal EUV emission and the evolution of physical
quantities along the magnetic field to determine oscillation characteristics.
Figure 1: Left: Magnetic configuration of the simulation showing the
temperature profile of the individual fieldlines. The simulation domain has
physical size of 24 $\times$ 24 $\times$ 16.8 Mm. The simulation snapshot
shown corresponds to $t=180$ s after the non-equilibrium ionisation of
hydrogen has been switched on. Right: Line-of-sight component $B_{\mathrm{z}}$
of the photospheric magnetic field at $t=180$ s.
Figure 2: Left: forward-modelled emission in the SDO/AIA 171 Å channel at
$t=180$ s. Dashed lines outline projected positions of the studied fieldlines
F1 - F5. Right: Time-distance plots showing temporal evolution along the slits
shown in blue. The vertical axis corresponds to the distance along the slit.
Several oscillating structures are visible in the time-distance plots. Dashed
contour in plot A outlines a propagating disturbance.
Animation of this figure is available.
Figure 3: Forward-modelled emission in the SDO/AIA 171 Å channel at $t=180$
s viewed along y-axis. Dashed lines outline projected positions of the studied
fieldlines F1 - F5.
## 2 Numerical Model
We analyse coronal oscillations in the numerical simulation of magnetic
enhanced network spanning from the upper convection zone to the corona
(Carlsson et al. 2016) using 3D radiation MHD code Bifrost (Gudiksen et al.
2011). This simulation has been previously used for analysing oscillatory
behaviour of chromospheric fibrils by Leenaarts et al. (2015) and for studying
signatures of propagating shock waves by Eklund et al. (2020).
Bifrost solves resistive MHD equations and includes non-LTE radiative transfer
in the photosphere and low chromosphere and parametrized radiative losses and
heating in the upper chromosphere, transition region and corona. The
simulation includes the effects of thermal conduction parallel to the magnetic
field and the non-equilibrium ionization of hydrogen.
The physical size of the simulation box is 24 $\times$ 24 $\times$ 16.8 Mm.
The vertical extent of the grid spans from 2.4 Mm below the photosphere to
14.4 Mm above the photosphere, which corresponds to $z=0$ surface and is
defined as the (approximate) height where the optical depth $\tau_{500}$ is
equal to unity. The simulation is carried out on a 504 $\times$ 504 $\times$
496 grid. The grid is uniform in the $x$ and $y$ direction with grid spacing
of 48 km. The grid spacing in $z$ direction is non-uniform in order to resolve
steep gradients in density and temperature in the lower solar atmosphere and
varies from 19 km to 98 km. The domain boundaries are periodic in the $x$ and
$y$ direction and open in the $z$ direction. The top boundary uses
characteristic boundary conditions such that any disturbances are transmitted
through the boundary with minimal reflection. Tests of the characteristic
boundaries implemented in Bifrost suggest that the reflection of the energy
from the upper boundary is 5% or less (Gudiksen et al. 2011). At the bottom
boundary the flows are let through and the magnetic field is passively
advected while keeping the magnetic flux through the bottom boundary constant.
The average horizontal pressure is driven towards a constant value with a
characteristic timescale of 100 s. This pressure node at the bottom boundary
gives rise to acoustic wave reflection which resembles the refraction of waves
in the deeper solar atmosphere. This leads to global radial box oscillations
with a period of 450 s, which are a simulation counterpart of solar p-modes
(Stein & Nordlund 2001; Carlsson et al. 2016) (radial modes in this context
correspond to oscillations in horizontally averaged quantities polarised along
the z-axis in a Cartesian simulation box).
The photospheric magnetic field has an average unsigned value of about 50 G and
is concentrated in two patches of opposite polarity about 8 Mm apart; this
leads to development of several magnetic loops at coronal heights (Fig. 1).
Upon initialisation of the simulation, the large-scale magnetic field
configuration was determined by the use of potential extrapolation of the
vertical magnetic field, specified at the bottom boundary. After inserting the
magnetic field into the domain it is quickly swept around by convective
motions. These lead to magnetic field braiding and associated Ohmic and
viscous heating which together maintain high temperature in the chromosphere
and corona (see e.g. Carlsson et al. (2016); Kohutova et al. (2020a) for
further details of the numerical setup).
Figure 4: Left: Location of fieldlines F1 - F5 chosen for analysis of
oscillatory behaviour shown at $t=1500$ s. We also show z-component of
vorticity $\omega_{z}$ at z = 0, i.e. in the photosphere. The footpoints of
analysed fieldlines are embedded in the intergranular lanes with strong vortex
flows. Right: Configuration viewed from above. Animation of this figure is
available.
## 3 Transverse oscillations observable in forward-modelled EUV emission
In order to compare oscillatory behaviour seen in the numerical simulation
with the commonly observed transverse coronal loop oscillations seen by
SDO/AIA, we forward model the EUV emission using the FoMo code (Van
Doorsselaere et al. 2016). The FoMo code is capable of modelling optically
thin coronal emission in individual spectral lines as well as in the broadband
SDO/AIA channels. The instrumental response function of a given SDO/AIA
bandpass $\kappa_{\alpha}$ is calculated for a grid of electron densities
$n_{\mathrm{e}}$ and temperatures $T$ as
$\kappa_{\alpha}(n_{\mathrm{e}},T)=\int
G(\lambda_{\alpha},n_{\mathrm{e}},T)R_{\alpha}(\lambda_{\alpha})\mathrm{d}\lambda_{\alpha}$
(1)
where $G(\lambda_{\alpha},n_{\mathrm{e}},T)$ is the contribution function
calculated from the CHIANTI database, $R_{\alpha}(\lambda_{\alpha})$ is the
wavelength-dependent response function of the given bandpass, and the
wavelength integration is done over all spectral lines within the given
bandpass. $\kappa_{\alpha}$ is then integrated along the line-of-sight
parallel to z-axis, which corresponds to looking straight down on the solar
surface.
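A hedged numpy sketch of this procedure follows; it is not the actual FoMo implementation, the grids and names are illustrative, and the handling of the density weighting (whether the $n_{\mathrm{e}}^{2}$ factor is absorbed into $G$) depends on the conventions of the chosen contribution-function tables:

```python
import numpy as np
from scipy.interpolate import RegularGridInterpolator

def channel_response(G, R, wavelengths):
    """kappa_alpha(n_e, T) = integral of G(lambda, n_e, T) R_alpha(lambda) dlambda.
    G has shape (n_lambda, n_ne, n_T); R has shape (n_lambda,)."""
    dlam = np.gradient(wavelengths)                 # simple quadrature weights
    return np.tensordot(R * dlam, G, axes=(0, 0))   # -> (n_ne, n_T)

def synthetic_image(kappa, ne_grid, T_grid, ne_cube, T_cube, dz):
    """Interpolate kappa onto the simulation cube and integrate along z,
    i.e. the line of sight looking straight down on the solar surface."""
    interp = RegularGridInterpolator((ne_grid, T_grid), kappa,
                                     bounds_error=False, fill_value=0.0)
    pts = np.stack([ne_cube.ravel(), T_cube.ravel()], axis=-1)
    emiss = interp(pts).reshape(ne_cube.shape)      # (nx, ny, nz) emissivity
    return (emiss * dz).sum(axis=-1)                # (nx, ny) synthetic image
```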
Figure 2 shows forward modelled emission in the SDO/AIA 171 Å bandpass, which
is dominated by the Fe IX line with formation temperature $\log T=5.9$. In
order to highlight the oscillatory motion, a time-distance plot is created by
taking the intensities along a slit across the loop structures and stacking
them in time. Several oscillating structures can be seen in such a plot,
suggesting transverse coronal loop oscillations are abundant in the model. The
forward-modelled EUV emission in the model is more diffuse (Fig. 3) and
subject to less observable structuring across the magnetic field than coronal
emission typically observed at similar wavelengths, where the individual
coronal loops can appear very threadlike (e.g. Peter et al. 2013; Williams et
al. 2020). We note that the simulation does not include magnetic flux
emergence which means the loops are not mass-loaded and pushed into the corona
from the lower solar atmosphere. Instead, the dense loops are filled via
chromospheric evaporation caused by localised heating (Kohutova et al. 2020a).
Some transverse structuring is however still present thanks to the temperature
variation across different magnetic fieldlines. This makes it possible to see
oscillatory behaviour of the individual coronal strands in the intensity time-
distance plot.
## 4 Transverse oscillations of individual field lines
Table 1: Loop properties
FL | $L$ | $\bar{\rho}$ | $\bar{T}$ | $\bar{B}$ | $C_{S}$ | $V_{A}$
---|---|---|---|---|---|---
| (Mm) | (kg m-3) | (K) | (G) | (km s-1) | (km s-1)
$\mathrm{F1}$ | 10.1 | $5.6\times 10^{-12}$ | $7.2\times 10^{5}$ | 26 | 129 | 980
$\mathrm{F2}$ | 17.0 | $2.2\times 10^{-12}$ | $5.7\times 10^{5}$ | 17 | 114 | 989
$\mathrm{F3}$ | 15.2 | $1.5\times 10^{-12}$ | $5.5\times 10^{5}$ | 9 | 113 | 687
$\mathrm{F4}$ | 17.3 | $1.2\times 10^{-12}$ | $4.3\times 10^{5}$ | 11 | 99 | 910
$\mathrm{F5}$ | 12.8 | $2.8\times 10^{-12}$ | $6.6\times 10^{5}$ | 22 | 123 | 1186
Note: Average physical quantities along individual fieldlines, and the
corresponding sound speeds ($C_{S}$) and Alfvén speeds ($V_{A}$).
Figure 5: Left: Evolution of temperature, density and three velocity
components $v_{\mathrm{L}}$, $v_{\mathrm{T}}$ and $v_{\mathrm{V}}$ along the
fieldline F1. The $x$ axis corresponds to time and the $y$ axis corresponds to
the position along the fieldline. Vertical dotted lines mark segments with
oscillatory behaviour. Top right: Evolution of the transverse component of the
velocity $v_{\mathrm{T}}$ at the footpoints FP1 and FP2 (red and blue
respectively) 1 Mm above the transition region and at the fieldline apex
halfway between the two footpoints (green). Bottom right: Wavelet spectra for
the 3 velocity components taken at the apex of the fieldline and at the
footpoints FP1 and FP2. Dark colour corresponds to high power. The hatched
region corresponds to the area outside of the cone of influence. White lines
correspond to 95 % significance contours. Figure 6: Same as Fig. 5, but for
fieldline F2. Figure 7: Same as Fig. 5, but for fieldline F3. Figure 8: Same
as Fig. 5, but for fieldline F4. Figure 9: Same as Fig. 5, but for fieldline
F5.
Coronal loops in our study are represented by closed magnetic fieldlines
extending into the corona. Several of those have enhanced densities compared
to the surroundings. Density-enhanced coronal loops with well-defined
boundaries in the simulation are however less common than what is usually
observed in the corona. As there is no flux emergence mechanism included in
the simulation, most of the dense loops are filled by chromospheric
evaporation instead of being lifted up from the lower atmosphere.
The magnetic field in the simulation domain is constantly evolving and
undergoing complex motions which include sideways displacement, oscillatory
motion, torsional and upward/downward motion resulting in the change of the
total coronal loop length. Therefore, in order to investigate the oscillatory
behaviour of coronal loops in the simulation, it is necessary to trace the
evolution of the corresponding magnetic field lines through both time and
space. To do this, we use a field tracing method previously used by Leenaarts
et al. (2015) and Kohutova et al. (2020a). A magnetic fieldline is defined as
a curve in 3D space $\@vec{r}(s)$ parametrised by the arc length along the
curve $s$, for which $\mathrm{d}\@vec{r}/\mathrm{d}s=\@vec{B}/|\@vec{B}|$.
The tracing of the evolution of magnetic fieldlines is done by inserting seed
points (one seedpoint per tracked fieldline) into the locations in the
simulation domain which show oscillatory behaviour in the velocity time-
distance plots. This is equivalent to tracking the evolution of magnetic
connectivity of an oscillating plasma element. The seed points are then
passively advected forward and backward in time using the velocity at the seed
point position. At every time step the magnetic field is then traced through
the instantaneous seed point position in order to determine the spatial
coordinates of the traced fieldline. This is done using the
Runge–Kutta–Fehlberg integrator with an adaptive step size. Even though the
accuracy of this method is limited by the size of the time step between the
two successive snapshots (i.e. 10 seconds), it works reasonably well provided
that the magnetic field evolution is smooth and there are no large amplitude
velocity variations occurring on timescales shorter than the size of the time
step. We note that this method leads to a jump in the fieldline evolution in
the instances where magnetic reconnection occurs.
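A minimal sketch of the tracing step is given below, with a fixed-step RK4 scheme standing in for the adaptive Runge–Kutta–Fehlberg integrator; `interp_B`, `interp_v` and `in_domain` are assumed, hypothetical helpers for interpolating the snapshot fields and testing the domain bounds:

```python
import numpy as np

def trace_fieldline(seed, interp_B, in_domain, ds=5e4, max_steps=4000):
    """Integrate dr/ds = B/|B| from a seed point through one snapshot."""
    def unit_B(r):
        B = interp_B(r)
        return B / np.linalg.norm(B)
    pts = [np.asarray(seed, dtype=float)]
    for _ in range(max_steps):
        r = pts[-1]
        k1 = unit_B(r)                     # classical RK4 stages; a fixed-step
        k2 = unit_B(r + 0.5 * ds * k1)     # stand-in for the adaptive
        k3 = unit_B(r + 0.5 * ds * k2)     # Runge-Kutta-Fehlberg scheme
        k4 = unit_B(r + ds * k3)
        r = r + ds * (k1 + 2 * k2 + 2 * k3 + k4) / 6.0
        if not in_domain(r):
            break
        pts.append(r)
    return np.array(pts)

# Between successive snapshots the seed itself is passively advected with the
# local velocity: seed = seed + interp_v(seed) * dt, with dt the 10 s cadence.
```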
We investigate the evolution of 5 different fieldlines labelled F1…F5 (Fig.
4). We note that the footpoints of chosen magnetic loops lie in the regions of
enhanced vorticity. We analyse the evolution of the temperature, density, and
3 velocity components, $v_{\mathrm{L}}$, $v_{\mathrm{V}}$ and $v_{\mathrm{T}}$
along each loop. The longitudinal velocity
$v_{\mathrm{L}}=\@vec{v}\cdot\@vec{T}$ corresponds to the velocity component
aligned with the tangent vector of the magnetic fieldline given by
$\@vec{T}=\@vec{B}/|\@vec{B}|$. The vertical velocity
$v_{\mathrm{V}}=\@vec{v}\cdot\@vec{N}$ is the velocity component along the
fieldline normal vector given by
$\@vec{N}=\frac{\mathrm{d}\@vec{T}}{\mathrm{d}s}/|\frac{\mathrm{d}\@vec{T}}{\mathrm{d}s}|$
and corresponds to the motion in the plane of the coronal loop. Finally the
transverse velocity component along the binormal vector is given by
$v_{\mathrm{T}}=\@vec{v}\cdot\@vec{R}$ where $\@vec{R}=\@vec{T}\times\@vec{N}$
and corresponds to transverse motion perpendicular to the plane of the coronal
loop. Unit vectors $\@vec{T}$, $\@vec{N}$ and $\@vec{R}$ together form an
orthogonal coordinate system equivalent to a Frenet frame of reference. Such
coordinate frame is well suited for analysing oscillations in complex 3D
magnetic field geometries and is commonly used in such studies (e.g. Felipe
2012; Leenaarts et al. 2015; González-Morales et al. 2019). We further
calculate a wavelet power spectrum (Torrence & Compo 1998) for all three
velocity components at 3 different locations along the fieldline; close to the
loop footpoints 1 Mm above the transition region (height of which is tracked
with time) at the beginning and end of the fieldline (labelled FP1 and FP2 for
left and right footpoint respectively) and at the loop apex halfway between
FP1 and FP2.
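A numpy sketch of this Frenet-frame decomposition for a fieldline sampled at $n$ points is given below (illustrative; finite differences stand in for the exact derivatives, and the normal is undefined where the fieldline is locally straight):

```python
import numpy as np

def frenet_velocities(r, v):
    """r: (n, 3) fieldline coordinates; v: (n, 3) plasma velocity sampled at r.
    Returns v_L (tangent), v_V (normal) and v_T (binormal) components."""
    dr = np.gradient(r, axis=0)                     # dr/di along the fieldline
    ds = np.linalg.norm(dr, axis=1)[:, None]        # arc length per index step
    T = dr / ds                                     # unit tangent
    dT = np.gradient(T, axis=0) / ds                # dT/ds
    N = dT / np.linalg.norm(dT, axis=1)[:, None]    # unit normal (loop plane)
    Rb = np.cross(T, N)                             # unit binormal
    vL = np.einsum('ij,ij->i', v, T)
    vV = np.einsum('ij,ij->i', v, N)
    vT = np.einsum('ij,ij->i', v, Rb)
    return vL, vV, vT
```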
### 4.1 Fieldline F1
Fieldline F1 is a short, closed fieldline in the centre of the domain. For
most of its lifetime it is not subject to any major changes; however, at
$t=2100$ s it rapidly contracts and finally disappears in the chromosphere.
The evolution of the physical quantities along the fieldline is shown in Fig.
5. The fieldline is subject to strong base heating starting at t $\sim$ 600 s
resulting in evaporative upflows of the chromospheric plasma into the loop,
observable in the $v_{\mathrm{L}}$ component for about 200 s. We identify this
as a consequence of a global disturbance propagating from the centre of the
domain outwards, visible in Fig. 2, which likely triggers reconnection in the
affected loops, leading to increased Joule and viscous heating. This is
accompanied by an onset of the oscillatory behaviour lasting from $t=600$ s to
$t=1100$ s. The oscillation is most clearly seen in the $v_{\mathrm{V}}$
component of the velocity (marked as segment b in Fig. 5) which corresponds to
vertical polarisation (i.e. polarisation in the plane of the loop). In total,
3 oscillation periods are observable, with the oscillation period being around
200 s. The lack of periodic displacement at the loop footpoints, compared
with the phase-shift-free periodic evolution at the loop apex, indicates
that the oscillation is standing and not driven by footpoint motions. This is
accompanied by an oscillation with matching period in $v_{\mathrm{L}}$
velocity component, that is along the magnetic field, with the oscillation
profile matching the second harmonic of a standing longitudinal mode (segment
a in Fig. 5). The wavelet analysis also shows the presence of an oscillation
with a 250 s period in $v_{\mathrm{V}}$ with a later onset and similar
characteristics, lasting from $t=1300$ s to $t=2100$ s, that is, until the
impulsive event that leads to rapid contraction of the loop. This oscillation
is marked as segment c in the $v_{\mathrm{V}}$ evolution plot. Some
attenuation of the oscillation is observable, especially in the initial
stages. We attribute the increase of the standing oscillation period to
increase in density of the coronal loop plasma. Finally, a large amplitude
disturbance ($\sim$ 10 km s-1 amplitude at the apex) can be observed in the
evolution of the transverse velocity component $v_{\mathrm{T}}$ from $t=1200$
s to $t=1600$ s. However, as only one oscillation period is observable, we
refrain from drawing any conclusions about the oscillation characteristics.
### 4.2 Fieldline F2
Fieldline F2 has a length of over 20 Mm (Fig. 6). During the initial 400 s of
its evolution, there is a clear accumulation of cool and dense material in the
coronal part of the loop, most likely through the condensation of coronal
plasma. Condensation phenomena in this simulation have been studied in detail
by Kohutova et al. (2020a). The loop evolution is dynamic and affected by
impulsive events occurring at coronal heights. At $t\sim 500$ s a
discontinuity in temperature and velocity evolution is visible similar to what
is observed in F1. Associated temperature increase and jump in fieldline
tracking suggest this corresponds to a reconnection event. We note that the
discontinuity in the F1 evolution is observable with a slight delay compared
to F2, suggesting it is caused by a large scale disturbance propagating across
the domain. The evolution of the transverse velocity component
$v_{\mathrm{T}}$ shows several large amplitude disturbances at the apex of the
loop with typical amplitudes of $20$ km s-1; these are not mirrored by the
velocity at the footpoints and therefore not driven by footpoint motions. Lack
of any associated large amplitude deviations in the plasma density and
temperature suggests that these deviations are caused by external
disturbances, that is, not originating in the loop. The fieldline F2 shows
clear oscillation in the longitudinal velocity component $v_{\mathrm{L}}$ with
$\sim$ 250 s period (marked as segment a in Fig. 6). These periodic
disturbances are propagating from the footpoint FP2 to footpoint FP1 at a
speed of $\sim$ 90 km s-1, that is, close to the average local sound speed (we
note that there are also some signatures of propagation in the opposite
direction). The longitudinal oscillation is visible both in the fieldline
evolution plot and the
wavelet spectra and lasts for most of the duration of the simulation sequence
until $t=2000$ s. The wavelet spectrum shows slight gradual decrease in
period, probably linked to decrease in the total loop length. No clear
attenuation is observable for the duration of the oscillation. The loop is
also subject to shorter period transverse oscillations visible in the
$v_{\mathrm{T}}$ velocity component with period of 180 s (segment c), and
similar oscillation can be also seen in the $v_{\mathrm{V}}$ evolution
(segment d). The oscillation in $v_{\mathrm{T}}$ follows a large initial
displacement that occurred at $t=900$ s and is rapidly attenuated in a similar
manner to the large amplitude damped oscillations excited by impulsive events
observed in the solar corona. In total 4 oscillation periods can be observed
before the oscillation completely decays. Finally, we also note that high-
frequency transverse oscillations can be observed in the loop during the
initial 400 s with period less than 100 s (segment b). The period changes
during the oscillation duration, likely due to the change of the conditions in
the loop which are linked to the condensation formation discussed earlier. In
total, 5 oscillation periods are observable with no clear damping.
### 4.3 Fieldline F3
Fieldline F3 undergoes rapid expansion during the impulsive event occurring
at $t=500$ s discussed above, during which the total length of the loop
doubles (Fig. 7). This disturbance is also very clear in the evolution of the
velocity component $v_{\mathrm{T}}$ at both footpoints and the loop apex and
manifests in a transverse displacement with the velocity amplitude of 47 km
s-1 at the apex. An oscillation is visible in the transverse velocity
component $v_{\mathrm{T}}$ (segment b in Fig. 7) observable during the whole
duration of the simulation with periodicity of $\sim$ 180 s, which changes
slightly over time. This oscillation is also picked up in the evolution of the
$v_{\mathrm{V}}$ component (segment c), due to the nature of our coordinate
system and the fact that transverse coronal loop oscillations can have any
polarisation in the plane perpendicular to the magnetic field vector. There is
no observable damping even following the impulsive event, and the amplitude of
the transverse oscillation remains around $5$ km s-1 during the whole duration
of the simulation. The transverse oscillation is clearly present even before
the impulsive event; this suggests that the impulsive event does not act as
the primary driver of the oscillation. No persistent phase shift in single
direction is observable between the $v_{\mathrm{T}}$ evolution at the loop
apex and the two footpoints. The position of maximum $v_{\mathrm{T}}$
amplitude varies between $s=5$ Mm and $s=10$ Mm along the loop. This suggests
the oscillation corresponds to a standing transverse oscillation, as the
velocity amplitude is greater at the loop apex than it is at close to the loop
footpoints and there is no indication that the oscillation is driven by the
footpoint motion. The observed oscillation mode likely corresponds to the
fundamental harmonic, as only one oscillation antinode is observed. We note
that despite the oscillatory behaviour being obvious from the plots of
$v_{\mathrm{T}}$ and $v_{\mathrm{V}}$ evolution, it is not picked up by
wavelet spectra above 95% confidence level. Oscillations with similar period
are present in the evolution of $v_{\mathrm{L}}$ velocity component (segment
a). Oscillatory behaviour of F3 is also observable in the time-distance plot
using synthetic SDO/AIA 171 Å observations shown in Fig. 2. The position of
the loop F3 in the time-distance plot starts at $y_{\mathrm{t}}=5$ Mm along
the slit and, following the expansion, gradually moves outwards and away from
the centre of the domain towards $y_{\mathrm{t}}=2.5$ Mm. From the forward-
modelled intensity it is also clear that several loops in the vicinity of F3
are also subject to oscillations with similar period and duration as the F3
oscillation. These fieldlines are therefore likely part of a larger loop
bundle that is subject to a collective transverse oscillation.
### 4.4 Fieldline F4
Fieldline F4 initially corresponds to a short coronal loop with a length of
10 Mm (Fig. 8). During the first 400 s the loop expands, nearly
doubling its length. Another dramatic expansion of the loop starts after
$t=1500$ s, after which the loop apex eventually reaches the upper boundary
and the loop becomes an open fieldline at $t=1670$ s. An oscillation is
observable in the longitudinal component of velocity $v_{\mathrm{L}}$ between
$t=350$ s and $t=850$ s (segment a in Fig. 8). The oscillation profile has a
node close to the apex of the loop, reminiscent of second longitudinal
harmonic. The period of the longitudinal oscillation is $\sim 200$ s, and it
increases over time. No clear periodic behaviour is visible in the evolution
of $v_{\mathrm{T}}$ and $v_{\mathrm{V}}$ velocity components during the
lifetime of the closed fieldline. After the fieldline opens, the total length
integrated along the fieldline jumps to approximately half of its original
value. Quasi-periodic disturbances propagating from the upper boundary
downwards can be observed in the evolution of $v_{\mathrm{L}}$. These appear
coupled to high frequency oscillatory behaviour with $\sim 60$ s period
observable in $v_{\mathrm{T}}$ and $v_{\mathrm{V}}$ components for the
remainder of the lifetime of the open fieldline. As these are likely to be an
artifact from the open boundary, we do not analyse them further. We also note
that a large amplitude transverse oscillation pattern is observable in the
synthetic EUV emission close to the projected position of F4. As this does not
seem to match the evolution of the transverse velocity $v_{\mathrm{T}}$
of F4, we conclude that most of the emission comes from the lower lying loops.
Again, several strands in this region can be seen to oscillate in phase,
pointing to a collective transverse oscillation.
### 4.5 Fieldline F5
The evolution of fieldline F5 is shown in Fig. 9. This fieldline approaches F2
at $t=1500$ s (Fig. 4), but for most of the duration of the simulation, the
two loops evolve independently from each other without signs of collective
behaviour. Similarly to fieldlines discussed above, F5 is also affected by the
impulsive event at $t=500$ s that sweeps across the coronal loops in the
centre of the simulation domain (Fig. 2). Oscillatory behaviour in the
longitudinal velocity component $v_{\mathrm{L}}$ with the spatial structure
reminiscent of second harmonic is identifiable from $t=500$ s to approximately
$t=1250$ s (segment a in Fig. 9), after which the periodic evolution becomes
less clear and harder to identify. The oscillation node lies slightly off-
centre, closer to FP1; this oscillatory behaviour is therefore also picked up
in the wavelet spectrum for the evolution of $v_{\mathrm{L}}$ at the apex,
although with evolving period. The wavelet spectra further show a strong
signal for the periodicity in the vertical component of the velocity
$v_{\mathrm{V}}$. Vertical oscillations with $\sim 250$ s period are also
clear in the fieldline evolution plot (segment c). An oscillation with similar
characteristics is also visible in the $v_{\mathrm{T}}$ component (segment b),
although less clearly. Both $v_{\mathrm{T}}$ and $v_{\mathrm{V}}$ oscillations
have the points of maximum amplitude located in the upper parts of the loop,
suggesting the oscillation is not driven by transverse displacement of the
footpoints. This is further supported by the line plot of the evolution of the
transverse velocity component, which shows that the velocity amplitude at the
apex always dominates over velocity amplitude at the loop footpoints. We note
that the evolution of the loop length is coupled to the oscillatory behaviour,
as clearly visible from the evolution of the position of the footpoint FP2 in
the loop evolution plot. This suggests that the loop is undergoing a standing
oscillation with a $\sim 250$ s period polarised in the vertical direction.
Unlike horizontally polarised standing oscillations, these affect the total
length of the loop and lead to periodic inflows/outflows from the loop centre
as the loop expands/contracts. The oscillation cannot be indisputably linked
to a single impulsive event, but it follows two successive temperature
increases in the loop plasma occurring at $t=300$ s and $t=600$ s
respectively.
### 4.6 Footpoint evolution
We further calculate Fourier power spectra for each cell in the horizontal
plane of the simulation box at two heights: at z = 1.4 Mm in the chromosphere
and at z = 10 Mm in the corona. We sum the power for all frequencies between
1.67 mHz and 16.7 mHz, corresponding to 600 s and 60 s periods respectively,
which covers the range of periods of oscillatory behaviour detected in the
simulation. The resulting integrated power maps therefore highlight locations
with increased oscillatory power regardless of the period of the oscillation;
we compare these with the instantaneous positions of magnetic loop footpoints
(Fig. 10).
This evolution highlights the dynamic nature of coronal loops, as their
footpoints are not static but evolve during the entire simulation as they get
dragged around by convective flows. Such changes occur on the same timescales
as the evolution of line-of-sight component of the photospheric magnetic
field. Occasionally the evolution is more rapid, most likely when involving
impulsive reconnection events at coronal heights. In general, footpoints of
oscillating loops are not embedded in locations with increased oscillatory
power. A harmonic driver is therefore not a prerequisite for the onset of
coronal loop oscillations.
Figure 10: Top: Integrated power between 1.67 mHz and 16.7 mHz for
$v_{\mathrm{x}}$ (left), $v_{\mathrm{y}}$ (centre) components of the velocity
at $z=1.4$ Mm in the chromosphere and line-of-sight photospheric magnetic field
at t = 180 s (right). Bottom: Integrated power between 1.67 mHz and 16.7 mHz
for $v_{\mathrm{x}}$ (left), $v_{\mathrm{y}}$ (centre) and $v_{\mathrm{z}}$
(right) components of the velocity at $z=10$ Mm in the corona. Coloured
circles with 1 Mm diameter are centred on positions of footpoint coordinates
at $z=1.4$ Mm at $t=0$ s and colour-coded as follows: F1-black, F2-red,
F3-purple, F4-green, F5-orange. Animation of this figure for the full duration
of the simulation is available.
## 5 Discussion
An important point for consideration is the applicability of the fieldline
tracing method that is used to study the fieldline evolution in the instances
of magnetic reconnection. Tracing of the magnetic fieldlines is equivalent to
tracing the grid points in the simulation domain which are magnetically
connected. Since in the corona the matter and energy transport happens along
the magnetic field, tracing the magnetic fieldlines is the closest
we can get to studying the true evolution of coronal structures without being
affected by line of sight effects, which can significantly influence
determination of oscillation parameters (see e.g. De Moortel & Pascoe 2012).
When a magnetic loop ‘reconnects’, the magnetic connectivity of a certain part
of the loop changes. However, advecting the seed points initially placed into
regions where oscillations were detected ensures we keep tracing the magnetic
connectivity of the oscillating part even if reconnection occurs in other
parts of the loop. We therefore argue that this approach remains the most
feasible way of tracing the true evolution of coronal structures.
We note that the evolution in line-of-sight integrated emission does not
necessarily match the evolution of traced magnetic fieldlines with the same
$x,y$ coordinates. This is likely due to line-of-sight superposition of
emission from several magnetic structures, which can vary in emissivity and in
presence or lack of oscillatory behaviour. This effect of line-of-sight
superposition on oscillation analysis has been discussed by previous studies
(De Moortel & Pascoe 2012; Leenaarts et al. 2015). Our work further highlights
the limitations posed by line-of-sight effects which should be taken into
account when analysing observations of coronal oscillations.
The identification of observed transverse oscillations as either standing or
propagating is complicated by the relatively short length of coronal loops in
the simulation and by high typical values of kink speed in the corona which
translates to high phase speed of the oscillation. Assuming a mean kink speed
in the range $100$–$1000$ km s$^{-1}$, the expected propagation time along a 20 Mm
long loop varies from 200 s to 20 s, corresponding to 20 and 2 time steps of
the simulation output, respectively. The position of the maximum oscillation
amplitude higher up along the loop rather than at the loop footpoints suggests
the oscillations are in fact standing with the loop footpoints acting as
oscillation nodes. The term node is, however, used loosely here, since the
footpoints are in fact not static, but continuously moving (Fig. 10). There
are no velocity nodes clearly observable in the coronal sections of studied
loops, and the longitudinal profile of the $v_{\mathrm{T}}$ component matches the
fundamental oscillation harmonic. We note that similar oscillations in
chromospheric fibrils analysed in synthetised H-alpha emission have been
identified as propagating by Leenaarts et al. (2015). The method commonly used
to investigate presence of oscillation phase shift and the corresponding phase
speed is based on detecting phase shift in time-distance plots taken at
several different positions along the studied loop, using observed or forward-
modelled emission. It should be noted, however, that such a method only works for
static loops. It is likely to produce spurious results in the presence of
bulk motion of the fieldline across the slits in addition to the
oscillatory motion, which seems to be the case for all 5 fieldlines studied in
this work. Instead we focused on the evolution of the oscillation phase at
different locations along each loop that have been traced with time, while
accounting for the loop 3D structure and evolution. No persistent phase shift
that would suggest the observed transverse oscillations are propagating has
been identified between the loop apex and the footpoints in any of the
analysed cases.
On the other hand, such a distinction is easily made for longitudinal
oscillations, as the typical values of the sound speed in the corona are an
order of magnitude smaller than the Alfvén or kink speed (see Table 1).
Longitudinal oscillations propagating from one footpoint to another with local
sound speed were detected in F2. The mean time delay between the three
observable successive wave trains is around 250 s. Such propagating waves can
be a result of an acoustic driver (potentially linked to the global p-mode
equivalent oscillations) which drives compressive longitudinal waves that
steepen into shocks in the chromosphere and propagate through the corona. Such
propagating oscillations were however not universally present for all of the
studied fieldlines; hence we refrain from drawing any conclusions about the
periodicity (or lack thereof) of the driver. We note that the period of the
global p-mode oscillation seen in the simulation is greater, around 450 s, as
discussed in section 2. Standing longitudinal oscillations sustained for a few
oscillation periods were detected in F1, F3, F4 and F5, with relatively short
periods between 100-200 s, depending on the loop. The longitudinal velocity
profiles showing an oscillation node at the apex of the loops suggest these
correspond to a second harmonic and likely represent a response of the loop to
external perturbations occurring at coronal heights (Nakariakov et al. 2004).
In the loops that show the presence of a second harmonic of a longitudinal
standing mode, the onset of the oscillation follows events associated with an
increase of the plasma temperature, and is hence likely linked to small-scale
reconnection events or other impulsive heating events. Oscillation damping is
observable for most cases, however, detailed analysis of the damping profiles
and associated dissipation is beyond the scope of this work.
We also highlight the evolution of $v_{\mathrm{T}}$ in F3, which shows clear
oscillation with 180 s period that is sustained over time and does not show
any observable damping (Fig. 11). There is no consistent phase shift between
the oscillatory behaviour at the loop apex and in the loop legs, suggesting
the oscillation is standing. This is further supported by the fact that the
location of the maximum velocity amplitude lies close to the loop apex. This
oscillation pattern is very similar to the commonly observed regime of
decayless oscillations (Wang et al. 2012; Nisticò et al. 2013; Anfinogentov et
al. 2015). Conversely, an example of the damped impulsively generated (or
”decaying”) oscillation regime is observable in F2 following an event
associated with impulsive temperature increase in the loop (Fig. 11). The
oscillation velocity amplitude is largest at the apex and the evolution of the
longitudinal $v_{\mathrm{T}}$ profile matches the fundamental harmonic of a
standing transverse oscillation. This suggests it is a natural response of the
loop to a perturbation accompanied by an impulsive energy release. We also
note that for several cases of oscillations in loops F1, F4 and F5, the
classification into ”decaying” and ”decayless” is not as clear (due to lack of
clear steady damping pattern such as shown in Fig. 11). F3 is the only loop
that shows a persistent undamped oscillation present for the whole duration
of the simulation.
The oscillatory motions seen in magnetic structures in the simulation, in
both the ”decaying” and ”decayless” regimes, correspond to standing transverse
oscillations with the oscillation antinodes lying at coronal heights and are
therefore consistent with the observations of transverse oscillations of
coronal loops as seen in TRACE and SDO/AIA data (e.g. Aschwanden et al. 1999;
White & Verwichte 2012), where standing transverse oscillations are the
most commonly observed regime.
We further note that the detected oscillation periods vary between different
fieldlines and can also change with time, particularly in loops that are
subject to change in physical properties, or with increasing/decreasing loop
length, as would be expected for standing oscillations (this is true for all
oscillation modes visible in the simulation). This spread and variability of
the oscillation periods as well as lack of oscillation coherence in different
parts of the simulation domain suggests that they are not a result of a
coherent global oscillation in the simulation. Our analysis therefore does not
agree with the premise that the transverse oscillations in the corona are
driven by the global p-mode oscillation which is based on comparison of peaks
in the power spectra from velocity measurements in the corona and the p-mode
power spectra (Morton et al. 2016, 2019) (it should however be noted that
analysis in such studies focuses on very long coronal loops, i.e. the spatial
scales are very different from the short low-lying loops studied in this
work). Such a mechanism requires coronal loops to oscillate with the same period
regardless of their length, as they are driven at their footpoints by a
harmonic driver. The variability of the oscillation period in the simulation
therefore excludes the presence of a harmonic driver, or at the very least
suggests that such driver is not a dominant mechanism responsible for the
excitation of transverse oscillations in this work. We stress that care should
be taken when drawing conclusions from the global power spectra due to the
temporal variability of the oscillation periods of individual magnetic
structures which has been shown both observationally (Verwichte & Kohutova
2017) and by numerical simulations (Kohutova & Verwichte 2017).
The excitation mechanism proposed by Nakariakov et al. (2016) is based on
identifying the decayless oscillations as self-oscillations of the coronal
loop, that are excited by interaction with quasi-steady flows, similar to a
bow moving across a violin string. The flows in the simulation domain are too
dynamic and not stable enough to drive persistent self-oscillation of the
fieldlines; this is evidenced by the motion of loop footpoints that are
dragged around by turbulent flows in the lower solar atmosphere shown in Fig.
10 and show variability on the order of minutes. 3D numerical models however
seem to suggest that excitation of self-oscillations requires persistent flows
that are steady on timescales of the order of an hour (Karampelas & Van
Doorsselaere 2020).
The absence of increased oscillatory power in the chromosphere at the
positions of the footpoints of the oscillating fieldlines also suggests that a
harmonic footpoint driver is not a prerequisite for the excitation of coronal
oscillations. Open questions addressing the excitation of the decayless
oscillations therefore still remain. If their excitation mechanism is global,
why are they not observed in all coronal loops? The decayless oscillations are
too abundant to be driven by an isolated event (Anfinogentov et
al. 2015), but not abundant enough to identify a global process as the driver.
Figure 11: Top: Evolution of $v_{\mathrm{T}}$ component (blue) at the apex of
loop F2 showing a damped oscillation following an event possibly associated
with impulsive temperature increase. A large-scale trend was removed from
$v_{\mathrm{T}}$ time series by fitting and subtracting a second degree
polynomial. Middle: Evolution of $v_{\mathrm{T}}$ component at the apex of
loop F3 showing a sustained oscillation. Bottom: Sustained oscillation in the
time-distance plot of forward-modelled SDO/AIA 171 Å emission.
Finally, the differences between the magnetic and density structure of the
simulated and real solar corona, and hence the limitations of the model used
for our analysis need to be addressed. The forward modelled EUV emission from
the simulated corona is comparatively more diffuse and lacks a lot of fine-
scale structure seen in real coronal observations, especially fine-scale
coronal strands revealed by observations from the second Hi-C flight (Williams
et al. 2020), which are not resolved in SDO/AIA observations. The simulated
corona also suffers from a lack of coronal loops understood in the traditional
sense as distinct structures denser than the surroundings with distinct
boundaries. As the simulation does not include any flux emergence, the coronal
loops in the simulation do not correspond to overdense flux tubes lifted into
the corona from the lower solar atmosphere but are instead filled with
evaporated plasma due to impulsive heating events. Due to enhanced heating
regions having an irregular shape (Kohutova et al. 2020a) the structures
filled with evaporated plasma have greater spatial extent and lack well-
defined boundaries. Simulation resolution might also be a limiting factor,
even though the characteristic transverse scales seen in time-distance plots
are well above the horizontal grid size of 48 km. Distinct oscillating strands
are however still observable in Fig. 2. Furthermore, recent numerical studies
of evolution of initially homogeneous coronal loops as a response to
transverse motions suggest that our highly idealised picture of coronal loops
as monolithic plasma cylinders is not very realistic in the first place
(Magyar & Van Doorsselaere 2018; Karampelas et al. 2019; Antolin & Van
Doorsselaere 2019).
## 6 Conclusions
We have studied the excitation and evolution of coronal oscillations in 3D
self-consistent simulations of the solar atmosphere spanning from the
convection zone to the solar corona using the radiation-MHD code Bifrost. We
have combined forward-modelled EUV emission with 3D tracing of the magnetic
field through both space and
time in order to analyse oscillatory behaviour of individual magnetic loops,
while accounting for their dynamic evolution. We have analysed evolution of
different velocity components, using wavelet analysis to capture changes in
the oscillatory periods due to evolution of properties of the oscillating
magnetic loops. Various types of oscillations commonly observed in the corona
are reproduced in such simulations. We detected standing oscillations in both
transverse and longitudinal velocity components, including higher order
oscillation harmonics. We have further shown that self-consistent simulations
reproduce the existence of two distinct regimes of transverse coronal
oscillations, that is, rapidly decaying oscillations triggered by impulsive
events and sustained small-scale oscillations showing no observable damping.
Damped transverse oscillations were found to be associated with instances of
impulsive energy release, such as small scale reconnection events, or external
perturbations leading to sudden changes in the loop properties, in agreement
with coronal observations. Persistent transverse oscillations on the other
hand were not linked to any such impulsive events. We did not find any
evidence for this oscillation regime being driven by a global (simulation)
p-mode. Lack of enhanced oscillatory power near the footpoint regions of the
studied loops together with variability of oscillation periods between
different coronal loops and lack of oscillation coherence across the
simulation domain exclude any type of harmonic driver as being responsible for
excitation of the oscillations.
Our work therefore highlights the complexity of coronal oscillations in
simulations with realistic magnetic field configurations which include the
complex dynamics of the lower solar atmosphere. Care needs to be taken when
translating findings from highly idealised models into real solar
observations, as there are several limitations to treating coronal loops as
static structures. We have shown that individual fieldlines are very dynamic,
their footpoints migrate and their overall length changes significantly over
realistic timescales. This might have non-negligible consequences for accuracy
of estimates of coronal plasma parameters deduced using coronal seismology.
The oscillating coronal structures we analysed are far from idealised plasma
cylinders. The focus of modelling work should therefore be shifting from rigid
cylindrical models towards more realistic descriptions that account for rapid
variability, complex morphology and presence of nonlinearities. There are
obviously natural limitations to this approach resulting from the
computational expense of building such models and also from the associated
limits on physical size of the simulation domain. Work is currently underway
to investigate the evolution of coronal loops in 3D self-consistent
simulations that span high into the corona. We would hence like to highlight
the potential of using self-consistent simulations of the solar atmosphere as
a laboratory for testing assumptions made by coronal seismology and models of
various damping and dissipation mechanisms in an environment with realistic
density structure and magnetic field geometry.
###### Acknowledgements.
This research was supported by the Research Council of Norway through its
Centres of Excellence scheme, project no. 262622.
## References
* Afanasyev et al. (2020) Afanasyev, A. N., Van Doorsselaere, T., & Nakariakov, V. M. 2020, A&A, 633, L8
* Anfinogentov et al. (2015) Anfinogentov, S. A., Nakariakov, V. M., & Nisticò, G. 2015, A&A, 583, A136
* Antolin & Van Doorsselaere (2019) Antolin, P. & Van Doorsselaere, T. 2019, Front. Phys., 7
* Antolin et al. (2014) Antolin, P., Yokoyama, T., & Van Doorsselaere, T. 2014, ApJL, 787, L22
* Aschwanden et al. (1999) Aschwanden, M. J., Fletcher, L., Schrijver, C. J., & Alexander, D. 1999, ApJ, 520, 880
* Berghmans & Clette (1999) Berghmans, D. & Clette, F. 1999, Sol. Phys., 186, 207
* Carlsson et al. (2010) Carlsson, M., Hansteen, V. H., & Gudiksen, B. V. 2010, Mem. Soc. Ast. It., 81, 582
* Carlsson et al. (2016) Carlsson, M., Hansteen, V. H., Gudiksen, B. V., Leenaarts, J., & De Pontieu, B. 2016, A&A, 585, A4
* Chen & Peter (2015) Chen, F. & Peter, H. 2015, A&A, 581, A137
* De Moortel (2005) De Moortel, I. 2005, Phil. Trans. R. Soc. A, 363, 2743
* De Moortel et al. (2000) De Moortel, I., Ireland, J., & Walsh, R. W. 2000, A&A, 355, L23
* De Moortel & Nakariakov (2012) De Moortel, I. & Nakariakov, V. M. 2012, Phil. Trans. R. Soc. A, 370, 3193
* De Moortel & Pascoe (2012) De Moortel, I. & Pascoe, D. J. 2012, ApJ, 746, 31
* De Pontieu et al. (2004) De Pontieu, B., Erdélyi, R., & James, S. P. 2004, Nature, 430, 536
* Duckenfield et al. (2018) Duckenfield, T., Anfinogentov, S. A., Pascoe, D. J., & Nakariakov, V. M. 2018, ApJL, 854, L5
* Edwin & Roberts (1983) Edwin, P. M. & Roberts, B. 1983, Sol. Phys., 88, 179
* Eklund et al. (2020) Eklund, H., Wedemeyer, S., Snow, B., et al. 2020, arXiv:2008.05324 [astro-ph]
* Felipe (2012) Felipe, T. 2012, ApJ, 758, 96
* González-Morales et al. (2019) González-Morales, P. A., Khomenko, E., & Cally, P. S. 2019, ApJ, 870, 94
* Goossens et al. (2019) Goossens, M. L., Arregui, I., & Van Doorsselaere, T. 2019, Front. Astron. Space Sci., 6
* Gudiksen et al. (2011) Gudiksen, B. V., Carlsson, M., Hansteen, V. H., et al. 2011, A&A, 531, A154
* Karampelas & Van Doorsselaere (2020) Karampelas, K. & Van Doorsselaere, T. 2020, ApJL, 897, L35
* Karampelas et al. (2017) Karampelas, K., Van Doorsselaere, T., & Antolin, P. 2017, A&A, 604, A130
* Karampelas et al. (2019) Karampelas, K., Van Doorsselaere, T., & Guo, M. 2019, A&A, 623, A53
* Kohutova et al. (2020a) Kohutova, P., Antolin, P., Popovas, A., Szydlarski, M., & Hansteen, V. H. 2020a, A&A, 639, A20
* Kohutova & Verwichte (2017) Kohutova, P. & Verwichte, E. 2017, A&A, 606, A120
* Kohutova & Verwichte (2018) Kohutova, P. & Verwichte, E. 2018, A&A, 613, L3
* Kohutova et al. (2020b) Kohutova, P., Verwichte, E., & Froment, C. 2020b, A&A, 633, L6
* Leenaarts et al. (2015) Leenaarts, J., Carlsson, M., & Rouppe van der Voort, L. 2015, ApJ, 802, 136
* Liu et al. (2019) Liu, J., Nelson, C. J., Snow, B., Wang, Y., & Erdélyi, R. 2019, Nat. Commun., 10, 3504
* Magyar & Van Doorsselaere (2016) Magyar, N. & Van Doorsselaere, T. 2016, A&A, 595, A81
* Magyar & Van Doorsselaere (2018) Magyar, N. & Van Doorsselaere, T. 2018, ApJ, 856, 144
* Magyar et al. (2015) Magyar, N., Van Doorsselaere, T., & Marcu, A. 2015, A&A, 582, A117
* Morton et al. (2016) Morton, R. J., Tomczyk, S., & Pinto, R. F. 2016, ApJ, 828, 89
* Morton et al. (2019) Morton, R. J., Weberg, M. J., & McLaughlin, J. A. 2019, Nat. Astron., 3, 223
* Nakariakov et al. (2016) Nakariakov, V. M., Anfinogentov, S. A., Nisticò, G., & Lee, D.-H. 2016, A&A, 591, L5
* Nakariakov & Kolotkov (2020) Nakariakov, V. M. & Kolotkov, D. Y. 2020, Annu. Rev. Astron. Astrophys., 58, 441
* Nakariakov et al. (1999) Nakariakov, V. M., Ofman, L., Deluca, E. E., Roberts, B., & Davila, J. M. 1999, Science, 285, 862
* Nakariakov et al. (2004) Nakariakov, V. M., Tsiklauri, D., Kelly, A., Arber, T. D., & Aschwanden, M. J. 2004, A&A, 414, L25
* Nakariakov & Verwichte (2005) Nakariakov, V. M. & Verwichte, E. 2005, Liv. Rev. Sol. Phys., 2, 3
* Nisticò et al. (2013) Nisticò, G., Nakariakov, V. M., & Verwichte, E. 2013, A&A, 552, A57
* Pagano & De Moortel (2017) Pagano, P. & De Moortel, I. 2017, A&A, 601, A107
* Pagano & De Moortel (2019) Pagano, P. & De Moortel, I. 2019, A&A, 623, A37
* Peter et al. (2013) Peter, H., Bingert, S., Klimchuk, J. A., et al. 2013, A&A, 556, A104
* Riedl et al. (2019) Riedl, J. M., Van Doorsselaere, T., & Santamaria, I. C. 2019, A&A, 625, A144
* Santamaria et al. (2015) Santamaria, I. C., Khomenko, E., & Collados, M. 2015, A&A, 577, A70
* Shelyag et al. (2011) Shelyag, S., Keys, P., Mathioudakis, M., & Keenan, F. P. 2011, A&A, 526, A5
* Stein & Nordlund (2001) Stein, R. F. & Nordlund, Å. 2001, ApJ, 546, 585
* Tomczyk & McIntosh (2009) Tomczyk, S. & McIntosh, S. W. 2009, ApJ, 697, 1384
* Torrence & Compo (1998) Torrence, C. & Compo, G. P. 1998, Bull. Amer. Meteorol. Soc., 79, 61
* Van Doorsselaere et al. (2016) Van Doorsselaere, T., Antolin, P., Yuan, D., Reznikova, V., & Magyar, N. 2016, Front. Astron. Space Sci., 3
* Verwichte & Kohutova (2017) Verwichte, E. & Kohutova, P. 2017, A&A, 601, L2
* Verwichte et al. (2004) Verwichte, E., Nakariakov, V. M., Ofman, L., & Deluca, E. E. 2004, Sol. Phys., 223, 77
* Verwichte et al. (2013) Verwichte, E., Van Doorsselaere, T., Foullon, C., & White, R. S. 2013, ApJ, 767, 16
* Wang et al. (2012) Wang, T., Ofman, L., Davila, J. M., & Su, Y. 2012, ApJ, 751, L27
* White & Verwichte (2012) White, R. S. & Verwichte, E. 2012, A&A, 537, A49
* Williams et al. (2020) Williams, T., Walsh, R. W., Winebarger, A. R., et al. 2020, ApJ, 892, 134
11institutetext: Sergey G. Bobkov 22institutetext: University of Minnesota,
Vincent Hall 228, 206 Church St SE, Minneapolis, MN 55455 USA 33institutetext:
National Research University Higher School of Economics, 101000 Moscow, Russia
33email<EMAIL_ADDRESS>44institutetext: Maria A. Danshina
55institutetext: Moscow Center for Fundamental and Applied Mathematics,
Lomonosov Moscow State University, 119991 Moscow, Russia 55email:
<EMAIL_ADDRESS>66institutetext: Vladimir V. Ulyanov
77institutetext: Lomonosov Moscow State University, 119991 Moscow, Russia
88institutetext: National Research University Higher School of Economics,
101000 Moscow, Russia 88email<EMAIL_ADDRESS>
# On rate of convergence to the Poisson law
of the number of cycles
in the generalized random graphs
Sergey G. Bobkov Maria A. Danshina Vladimir V. Ulyanov
###### Abstract
Convergence of order $O(1/\sqrt{n})$ is obtained for the distance in total
variation between the Poisson distribution and the distribution of the number
of fixed size cycles in generalized random graphs with random vertex weights.
The weights are assumed to be independent identically distributed random
variables which have a power-law distribution. The proof is based on the
Chen–Stein approach and on the derived properties of the ratio of the sum of
squares of random variables and the sum of these variables. These properties
can be applied to other asymptotic problems related to generalized random
graphs.
## 1 Introduction
Complex networks attract increasing attention from researchers in various
fields of science. In recent years numerous network models have been proposed.
Owing to the uncertainty and the lack of regularity in real-world networks,
these models are usually random graphs. Random graphs were first defined in
Erdos, and independently by Gilbert in Gilbert. The suggested models are
closely related: there are $n$ isolated vertices and every possible edge
occurs independently with probability $p$, $0<p<1$. It is assumed that there
are no self-loops. Later the models were generalized. A natural generalization
of the Erdős–Rényi random graph is that the equal edge probabilities are
replaced by
probabilities depending on the vertex weights. Vertices with high weights are
more likely to have more neighbors than vertices with small weights. Vertices
with extremely high weights could act as the hubs observed in many real-world
networks.
The following generalized random graph (GRG) model was first introduced by
Britton et al., see Britton. Let $V=\\{1,2,...,n\\}$ be the set of vertices,
and $W_{i}>0$ be the weight of vertex $i$, $1\leq i\leq n.$ The
probability of the edge between any two vertices $i$ and $j$, for $i\neq j$,
is equal to
$p_{ij}=\frac{W_{i}W_{j}}{L_{n}+W_{i}W_{j}}$ (1.1)
and $p_{ii}=0$ for all $i\leq n.$ Here $L_{n}=\sum_{i=1}^{n}W_{i}$ denotes the
total weight of all vertices. The weights $W_{i},$ $i=1,2,...,n,$ can be taken
to be deterministic or random. If we take all $W_{i}$'s to be the same
constant, $W_{i}\equiv n\lambda/(n-\lambda)$ for some $0\leq\lambda<n,$ it is
easy to see that $p_{ij}=\lambda/n$ for all $1\leq i<j\leq n.$ That is, the
Erdős–Rényi random graph with $p=\lambda/n$ is a special case of the GRG.
There are many versions of the GRG, such as Poissonian random graph
(introduced by Norros and Reittu in Norros and studied by Bhamidi et
al. in Bhamidi), the rank-1 inhomogeneous random graph (see Bollobas), the
random graph with given prescribed degrees (see Chung), the Chung–Lu model of
heterogeneous random graphs (see Chung2), etc. The Chung–Lu model is the
closest to the model
of generalized random graph. Two vertices $i$ and $j$ are connected with
probability $p_{ij}={W_{i}W_{j}}/{L_{n}},$ independently of other pairs of
vertices, where $W=(W_{1},W_{2},...,W_{n})$ is a given sequence. It is
necessary to assume that $W^{2}_{i}\leq L_{n}$ for all $i.$ Under some common
conditions (see Janson), all of the above-mentioned versions of the GRG are
asymptotically equivalent, meaning that all events have asymptotically equal
probabilities. For an updated review of the results on these inhomogeneous
random graphs, see Chapter 6 of Hofstad.
One of the problems that arise in real networks of various nature is the
spread of viruses. In Chakrabarti, the authors proposed an approach called
NLDS (nonlinear dynamic system) for modeling such processes. Consider a
network of $n$ vertices represented by an undirected graph $G$. Assume an
infection rate $\beta>0$ for each edge connected to an
infected vertex and a recovery rate $\delta>0$ for each infected
individual. Define the epidemic threshold $\tau$ as a value such that
$\displaystyle\beta/\delta<\tau\text{ }\Rightarrow\text{infection dies out
over time}$ $\displaystyle\beta/\delta>\tau\text{ }\Rightarrow\text{infection
survives and becomes an epidemic.}$
The threshold $\tau$ is related to the adjacency matrix $A$ of the graph. The matrix
$A=[a_{ij}]$ is an $n\times n$ symmetric matrix defined as $a_{ij}=1$ if
vertices $i$ and $j$ are connected by an edge, and $a_{ij}=0$ otherwise.
Define a walk of length $k$ in $G$ from vertex $v_{0}$ to $v_{k}$ to be an
ordered sequence of vertices $(v_{0},v_{1},...,v_{k}),$ with $v_{i}\in V$,
such that $v_{i}$ and $v_{i+1}$ are connected for $i=0,1,...,k-1.$ If
$v_{0}=v_{k},$ then the walk is closed. A closed walk with no repeated
vertices (with the exception of the first and last vertices) is called a
cycle. For example, triangles, quadrangles and pentagons are cycles of length
three, four, and five, respectively. In the following, the cycle will be
denoted by the first $k$ vertices, without specifying the vertex $v_{k},$
which is the same as $v_{0}$: $(v_{0},v_{1},...,v_{k-1})$.
In Theorem 1 in Chakrabarti it was shown that $\tau$ is equal to
$1/\lambda_{1},$ where $\lambda_{1}$ is the largest eigenvalue of the
adjacency matrix $A.$ The following lower bound for $\lambda_{1}(A)$ was
shown in Preciado:
$\lambda_{1}(A)\geq\frac{6\triangle+\sqrt{36\triangle^{2}+32e^{3}/n}}{4e},$
where $n,$ $e,$ and $\triangle$ denote the number of vertices, edges, and
triangles in $G,$ respectively. Moreover, using information about the cycle
numbers of higher orders, one can get more precise upper bounds for $\tau$.
In Ulyanov, central limit theorems were proved for the total number of edges
in the GRG. There are also many results on asymptotic properties of the number
of triangles in homogeneous cases. For example, for the Erdős–Rényi random
graph, the upper tails of the distribution of the triangle number have been
studied in Goldstein, Demarco, Janson2, Kim. Recently, in Liu2020 it was shown
for the GRG model that the asymptotic distribution of the triangle number
converges to a Poisson distribution under the strong assumption that the
vertex weights are bounded random variables.
Many real-world networks, such as social networks or computer networks in the
Internet (see e.g. Faloutsos), follow a so-called scale-free graph model; see
Ch. 1 in Hofstad. In Ch. 6 in Hofstad it was shown that when the vertex
weights have approximately a power-law distribution, the GRG model leads to a
scale-free random graph.
In the present paper, we prove not only the convergence but also obtain the
convergence rate of order $O(1/\sqrt{n})$ for the distance in total variation
between the Poisson distribution and the distribution of the number of fixed
size cycles in the GRG with random vertex weights. The weights are assumed to be
independent identically distributed random variables which have a power-law
distribution. The proof is based on the Chen–Stein approach and on the derived
properties of the ratio of the sum of squares of random variables and the sum
of these variables. These properties can be applied to other asymptotic
problems related to GRG.
The main results are formulated in Section 2. For their proofs, see Section 4.
Section 3 contains auxiliary lemmas, some of which are of independent
interest.
## 2 Main results
Let $\\{1,2,...,n\\}$ be the set of vertices, and $W_{i}$ be the weight of
vertex $i,$ $1\leq i\leq n.$ The probability of the edge between vertices
$i$ and $j$ is defined in (1.1). Let $W_{i},$ $i=1,2,...,n,$ be independent
identically distributed random variables distributed as a random variable $W$.
For $k\geq 3$, denote by $I(k)$ the set of potential cycles of length $k$. We
have that the number of elements in $I(k)$ is equal to $(n)_{k}/(2k),$ where
$(n)_{k}=n(n-1)...(n-k+1)$ is the number of ways to select $k$ distinct vertices
in order, and the factor $1/(2k)$ appears since, for $k>2$, a permutation of
$k$ vertices corresponds to a choice of a cycle in $I(k)$ together with a
choice of any of two orientations and $k$ starting points. For example, all
six cycles $\\{1,3,4\\},$ $\\{3,4,1\\},$ $\\{4,1,3\\}$, $\\{4,3,1\\},$
$\\{1,4,3\\},$ $\\{3,1,4\\}$ are, in fact, one cycle of length $3$. For
$\alpha\in I(k),$ let $Y_{\alpha}$ be the indicator that $\alpha$ occurs as a
cycle in GRG. For example,
${\mathbb{P}}(Y_{\\{1,3,4\\}}=1)=p_{13}p_{34}p_{41}$.
For any integer-valued non-negative random variables $Y$ and $Z$, denote the
total variation distance between their distributions $\mathscr{L}(Y)$ and
$\mathscr{L}(Z)$ by
$\parallel\mathscr{L}(Y)-\mathscr{L}(Z)\parallel\equiv\sup_{\parallel
h\parallel=1}|{\mathbb{E}}h(Y)-{\mathbb{E}}h(Z)|,$ (2.2)
where $h$ is any real function defined on $\\{0,1,2,...\\}$ and $\parallel
h\parallel\equiv\sup_{m\geq 0}|h(m)|.$
For $k\geq 3,$ put $S_{n}(k)=\sum_{\alpha\in I(k)}Y_{\alpha}$, that is
$S_{n}(k)$ is the number of cycles of length $k.$ Let $Z_{k}$ be a random
variable having Poisson distribution with parameter
$\lambda(k)=({\mathbb{E}}W^{2}/{\mathbb{E}}W)^{k}/(2k).$
###### Theorem 2.1
For any $k\geq 3,$ one has
$\parallel\mathscr{L}(S_{n}(k))-\mathscr{L}(Z_{k})\parallel=O(n^{-1/2}),$
(2.3)
provided that
${\mathbb{P}}(W>x)=o(x^{-2k-1}),\ \ {\rm as}\ \ x\rightarrow+\infty.$ (2.4)
###### Remark 1
Relation (2.3) holds under condition that $W$ has power-law distribution. The
condition on the tail behaviour of the distribution of $W$ can be replaced by
stronger moment condition: the finiteness of expectation
${\mathbb{E}}W^{2k+1}.$
###### Remark 2
Recently, in Liu2020 it was proved by the method of moments that the number
of triangles $S_{n}(3)$ in a generalized random graph converges in
distribution to the Poisson random variable $Z_{3}$ under the assumption that
the vertex weights $W_{i}$ are bounded random variables. In Theorem 2.1 we
have used the Chen–Stein approach, see e.g. Goldstein89 and Goldstein. This
allows us not only to extend the convergence result to cycles of any fixed
length $k$ but also to get the rate of convergence. Moreover, we replace the
assumption about the boundedness of the $W_{i}$'s with the condition that
$W_{i}$ has a power-law distribution. As we noted in the Introduction, this
condition better
matches real-world networks.
Figures 1 and 2 illustrate the results of Theorem 2.1 with the examples of
the distributions of the numbers of triangles and quadrilaterals.
Figure 1: Histogram of the number of triangles in GRG with $2000$ vertices.
The vertex weights $W_{i}=scale*Y+loc,$ where $loc=1,$ $scale=10$ and
$Y\thicksim Pareto(9.5)$ for all $i\leq 2000.$ The number of realizations
is 400.
Figure 2: Q-Q plot for the number of quadrilaterals in GRG with $2000$
vertices and the Poisson variable $Pois(2880.16)$. The vertex weights
$W_{i}=scale*Y+loc,$ where $loc=1,$ $scale=10$ and $Y\thicksim Pareto(9.5)$
for all $i\leq 2000.$ The number of realizations is 400.
The next results are not directly connected with the number of cycles in the
GRG. They are an important part of the proof of Theorem 2.1. At the same time,
the results are of independent interest. They describe the asymptotic
properties of the ratio of the sum of the squares of $n$ i.i.d. random
variables and the sum of these random variables. These properties can be
applied to other asymptotic problems related to the GRG.
Given i.i.d. positive random variables $X,X_{1},\dots,X_{n}$, define the
statistics
$T_{n}\,=\,\frac{X_{1}^{2}+\dots+X_{n}^{2}}{X_{1}+\dots+X_{n}}.$
Assume that $X$ has a finite second moment, so that, by the law of large
numbers, with probability one
$\lim_{n\rightarrow\infty}T_{n}^{p}\,=\,\Big{(}\frac{{\mathbb{E}}X^{2}}{{\mathbb{E}}X}\Big{)}^{p}$
for any $p\geq 1$. Here we describe the tail-type and moment-type conditions
which ensure that this convergence also holds on average.
###### Theorem 2.2
Given an integer $p\geq 2$, the convergence
$\lim_{n\rightarrow\infty}{\mathbb{E}}T_{n}^{p}\,=\,({\mathbb{E}}X^{2}/{\mathbb{E}}X)^{p}$
(2.5)
is equivalent to the tail condition
${\mathbb{P}}\\{X\geq x\\}=o(x^{-p-1})\ \ {\rm as}\ \ x\rightarrow\infty.$
(2.6)
Moreover, if ${\mathbb{P}}\\{X\geq x\\}=O(x^{-p-{3}/{2}})$ as
$x\rightarrow\infty$, then
${\mathbb{E}}T_{n}^{p}-({\mathbb{E}}X^{2}/{\mathbb{E}}X)^{p}\,=\,O(n^{-1/2}).$
(2.7)
The finiteness of the moment ${\mathbb{E}}X^{p+1}$ is sufficient for (2.5) to
hold, while the finiteness of the moments ${\mathbb{E}}X^{q}$ is necessary for
any real value $1\leq q<p+1$.
Let $M_{n}=\max_{1\leq i\leq n}\,X_{i}.$ For $p\geq 2$, define
$R_{n}^{(p)}=T_{n}^{p}M_{n}^{2}/(X_{1}+X_{2}+...+X_{n}).$ (2.8)
By the law of large numbers, $R_{n}^{(p)}\rightarrow 0$ as
$n\rightarrow\infty$ a.s., under mild moment assumptions. The next theorem
gives the order of convergence of ${\mathbb{E}}R_{n}^{(p)}$ to zero under
tail-type and moment-type conditions.
###### Theorem 2.3
Given an integer $p\geq 2$, if ${\mathbb{P}}(X\geq x)=O(x^{-p-7/2})$ as
$x\rightarrow+\infty,$ then
${\mathbb{E}}R_{n}^{(p)}\,=\,O(n^{-1/2}).$ (2.9)
When $p>8$ and ${\mathbb{E}}X^{p+4}$ is finite, the rate can be improved to
${\mathbb{E}}R_{n}^{(p)}=O(n^{-{(p-2)}/{(p+4)}}).$ (2.10)
Moreover, if ${\mathbb{E}}\,e^{\varepsilon X}<\infty$ for some
$\varepsilon>0$, then
${\mathbb{E}}R_{n}^{(p)}=O\Big{(}\frac{(\log n)^{2}}{n}\Big{)}.$ (2.11)
## 3 Auxiliary lemmas
###### Lemma 1
Let $S_{n}=\eta_{1}+\dots+\eta_{n}$ be the sum of independent random variables
$\eta_{k}\geq 0$ with finite second moment, such that ${\mathbb{E}}S_{n}=n$
and ${\rm Var}(S_{n})=\sigma^{2}n$. Then, for any $0<\lambda<1$, one has
${\mathbb{P}}\\{S_{n}\leq\lambda
n\\}\leq\exp\bigg{\\{}-\frac{(1-\lambda)^{2}}{2\,\big{[}\sigma^{2}+\max_{k}\,({\mathbb{E}}\eta_{k})^{2}\big{]}}\,n\bigg{\\}}.$
(3.12)
###### Proof
We use here the standard arguments. Fix a parameter $t>0$. We have
${\mathbb{E}}\,e^{-tS_{n}}\geq e^{-\lambda
tn}\,{\mathbb{P}}\\{S_{n}\leq\lambda n\\}.$
Every function $u_{k}(t)={\mathbb{E}}\,e^{-t\eta_{k}}$ is positive, convex, and
admits Taylor’s expansion near zero up to the quadratic form, which implies
that
$u_{k}(t)\leq
1-t\,{\mathbb{E}}\eta_{k}+\frac{t^{2}}{2}\,{\mathbb{E}}\eta_{k}^{2}\leq\exp\Big{\\{}-t\,{\mathbb{E}}\eta_{k}+\frac{t^{2}}{2}\,{\mathbb{E}}\eta_{k}^{2}\Big{\\}}.$
Multiplying these inequalities, we get
${\mathbb{E}}\,e^{-tS_{n}}\leq\exp\Big{\\{}-tn+\frac{bt^{2}}{2}\Big{\\}},\quad
b=\sum_{k=1}^{n}{\mathbb{E}}\eta_{k}^{2}.$
The two bounds yield
${\mathbb{P}}\\{S_{n}\leq\lambda
n\\}\,\leq\,\exp\Big{\\{}-(1-\lambda)nt+bt^{2}/2\Big{\\}},$
and after optimization over $t$ (in fact, $t=\frac{1-\lambda}{b}\,n$), we
arrive at the exponential bound
${\mathbb{P}}\\{S_{n}\leq\lambda
n\\}\leq\exp\Big{\\{}-\frac{(1-\lambda)^{2}}{2b}\,n^{2}\Big{\\}}.$
Note that
$b={\rm
Var}(S_{n})+\sum_{k=1}^{n}\,({\mathbb{E}}\eta_{k})^{2}\leq\Big{(}\sigma^{2}+\max_{k}\,({\mathbb{E}}\eta_{k})^{2}\Big{)}\,n,$
and (3.12) follows.
For further lemmas we need additional notation.
Denote by $F(x)={\mathbb{P}}\\{X\leq x\\}$ ($x\in{\mathbb{R}}$) the
distribution function of the random variable $X$ and put
$\varepsilon_{q}(x)=x^{q}\,(1-F(x)),\quad x\geq 0,\ q>0.$
Raising the sum $U_{n}=X_{1}^{2}+\dots+X_{n}^{2}$ to the power $p$ with $n\geq
2p$, we have
$U_{n}^{p}\,=\,\sum X_{i_{1}}^{2}\dots X_{i_{p}}^{2},$ (3.13)
where the summation is performed over all collections of numbers
$i_{1},\dots,i_{p}\in\\{1,\dots,n\\}$. For $r=1,\dots,p$, we denote by
$\mathcal{C}(p,r)$ the collection of all tuples
$\gamma=(\gamma_{1},\dots,\gamma_{r})$ of positive integers such that
$\gamma_{1}+\dots+\gamma_{r}=p$. For any $\gamma\in\mathcal{C}(p,r)$, there
are $n(n-1)\dots(n-r+1)$ sequences $X_{i_{1}},\dots,X_{i_{p}}$ with $r$
distinct terms that are repeated $\gamma_{1},\dots,\gamma_{r}$ times, resp.
Therefore, by the i.i.d. assumption,
${\mathbb{E}}T_{n}^{p}\ =\
\sum_{r=1}^{p}\frac{n(n-1)\dots(n-r+1)}{n^{p}}\sum_{\gamma\in\mathcal{C}(p,r)}{\mathbb{E}}\xi_{n}(\gamma),$
(3.14)
where
$\xi_{n}(\gamma)={X_{1}^{2\gamma_{1}}\dots
X_{r}^{2\gamma_{r}}}/{(\frac{1}{n}\,S_{r}+\frac{1}{n}\,S_{n,r})^{p}}$
and
$S_{r}=X_{1}+\dots+X_{r},\quad S_{n,r}=X_{r+1}+\dots+X_{n}.$
In the following lemmas, without loss of generality, let ${\mathbb{E}}X=1$.
###### Lemma 2
For the boundedness of the sequence ${\mathbb{E}}T_{n}^{p}$ it is necessary
that the moment ${\mathbb{E}}X^{p}$ be finite. Moreover, for the particular
collection $\gamma=(p)$ with $r=1$, we have
${\mathbb{E}}\xi_{n}(\gamma)\,\geq\,2^{-p}\,n^{p}\
{\mathbb{E}}X^{p}\,1_{\\{X\geq n\\}}.$ (3.15)
###### Proof
Since
$\xi_{n}(\gamma)={X_{1}^{2p}}/{(\frac{1}{n}\,X_{1}+\frac{1}{n}\,S_{n,1})^{p}}$,
applying Jensen’s inequality, we get
$\displaystyle{\mathbb{E}}\xi_{n}(\gamma)$ $\displaystyle\geq$
$\displaystyle{\mathbb{E}}_{X_{1}}\,\frac{X_{1}^{2p}}{(\frac{1}{n}\,X_{1}+\frac{1}{n}\,{\mathbb{E}}_{S_{n,1}}\,S_{n,1})^{p}}$
$\displaystyle=$
$\displaystyle{\mathbb{E}}\,\frac{X^{2p}}{(\frac{1}{n}\,X+\frac{n-1}{n})^{p}}\
\geq\ 2^{-p}\,n^{p}\ {\mathbb{E}}X^{p}\,1_{\\{X\geq n\\}}.$
In the sequel, we use the events
$A_{n,r}=\Big{\\{}S_{n,r}\leq\frac{n-r}{2}\Big{\\}}\quad{\rm and}\quad
B_{n,r}=\Big{\\{}S_{n,r}>\frac{n-r}{2}\Big{\\}}.$ (3.16)
By Lemma 1, whenever $n\geq 2p$, for some constant $c>0$ independent of $n$:
${\mathbb{P}}(A_{n,r})\leq e^{-c(n-r)}\leq e^{-cn/2}.$ (3.17)
###### Lemma 3
If ${\mathbb{E}}X^{p}$ is finite, then
${\mathbb{E}}\xi_{n}\rightarrow({\mathbb{E}}X^{2})^{p}$ as
$n\rightarrow\infty$, where
$\xi_{n}\,=\,{X_{1}^{2}\dots
X_{p}^{2}}/{(\frac{1}{n}\,S_{p}+\frac{1}{n}\,S_{n,p})^{p}}.$ (3.18)
###### Proof
Using $X_{1}\dots X_{p}\leq S_{p}^{p}$, we have
$\xi_{n}\,\leq\,{S_{p}^{2p}}/{(\frac{1}{n}\,S_{n})^{p}}\,\leq\,n^{p}\,S_{p}^{p}.$
Hence
${\mathbb{E}}\,\xi_{n}\,1_{A_{n,p}}\,\leq\,n^{p}\ {\mathbb{E}}\,S_{p}^{p}\
{\mathbb{P}}(A_{n,p})\,=\,o(e^{-cn})$ (3.19)
for some constant $c>0$ independent of $n$. Here, we applied (3.17) with $r=p$
and Lemma 2 which ensures that ${\mathbb{E}}\,S_{p}^{p}<\infty$. Further,
$\xi_{n}\,1_{B_{n,p}}\,\leq\,2^{p}X_{1}^{2}\dots X_{p}^{2}.$ Hence, the random
variables $\xi_{n}\,1_{B_{n,p}}$ have an integrable majorant. Since also
$\xi_{n}\rightarrow X_{1}^{2}\dots X_{p}^{2}$ (the law of large numbers) and
$1_{B_{n,p}}\rightarrow 1$ a.s. (implied by (3.17)), one may apply the
Lebesgue dominated convergence theorem, which gives
${\mathbb{E}}\xi_{n}1_{B_{n,p}}\rightarrow({\mathbb{E}}X^{2})^{p}.$ Together
with (3.19), we get ${\mathbb{E}}\xi_{n}\rightarrow({\mathbb{E}}X^{2})^{p}$.
###### Lemma 4
If the moment ${\mathbb{E}}X^{p}$ is finite, then for any
$\gamma=(\gamma_{1},\dots,\gamma_{r})\in\mathcal{C}(p,r)$,
${\mathbb{E}}\,\xi_{n}(\gamma)\,\leq\,4^{p}\,{\mathbb{E}}\,\frac{X_{1}^{2\gamma_{1}}\dots
X_{r}^{2\gamma_{r}}}{(\frac{1}{n}\,S_{r}+1)^{p}}+o(1).$
###### Proof
Using an elementary bound $X_{1}^{2\gamma_{1}}\dots
X_{r}^{2\gamma_{r}}\leq(X_{1}+\dots+X_{r})^{2\gamma_{1}+\dots+2\gamma_{r}}=S_{r}^{2p}$
and applying Jensen’s inequality, we see that $\xi_{n}(\gamma)\leq
n^{p}\,S_{r}^{p}\leq n^{p}\,r^{p-1}\,(X_{1}^{p}+\dots+X_{r}^{p}).$ Hence
${\mathbb{E}}\xi_{n}(\gamma)\,1_{A_{n,r}}\leq
n^{p}\,r^{p-1}\sum_{k=1}^{r}{\mathbb{E}}X_{k}^{p}\,1_{A_{n,r}}=n^{p}\,r^{p}\,{\mathbb{E}}X^{p}\,{\mathbb{P}}(A_{n,r})\
=\ o(e^{-c^{\prime}n}).$ (3.20)
On the other hand, on the set $B_{n,r}$ there is a pointwise bound
$\xi_{n}(\gamma)\,1_{B_{n,r}}\leq\,\frac{X_{1}^{2\gamma_{1}}\dots
X_{r}^{2\gamma_{r}}}{(\frac{1}{n}\,S_{r}+\frac{n-r}{2n})^{p}}\,\leq\,4^{p}\,\frac{X_{1}^{2\gamma_{1}}\dots
X_{r}^{2\gamma_{r}}}{(\frac{1}{n}\,S_{r}+1)^{p}}.$ (3.21)
Our task is reduced to the estimation of the expectation on the right-hand
side of (3.21). Let us first consider the shortest collection $\gamma=(p)$ of
length $r=1$.
###### Lemma 5
Under the condition (2.6),
${\mathbb{E}}\,\frac{X_{1}^{2p}}{(\frac{1}{n}\,X_{1}+1)^{p}}=o(n^{p-1}).$
(3.22)
In addition, if ${\mathbb{P}}\\{X\geq x\\}=O(x^{-q})$ for some real value $q$
in the interval $p<q<2p$, then
${\mathbb{E}}\,\frac{X_{1}^{2p}}{(\frac{1}{n}\,X_{1}+1)^{p}}=O(n^{2p-q}).$
(3.23)
###### Proof
We have
$\displaystyle{\mathbb{E}}\,\frac{X_{1}^{2p}}{(\frac{1}{n}\,X_{1}+1)^{p}}$
$\displaystyle=$
$\displaystyle{\mathbb{E}}\,\frac{X_{1}^{2p}}{(\frac{1}{n}\,X_{1}+1)^{p}}\,1_{\\{X_{1}\geq
n\\}}+{\mathbb{E}}\,\frac{X_{1}^{2p}}{(\frac{1}{n}\,X_{1}+1)^{p}}\,1_{\\{X_{1}<n\\}}$
$\displaystyle\leq$ $\displaystyle n^{p}\,{\mathbb{E}}\,X^{p}\,1_{\\{X\geq
n\\}}+{\mathbb{E}}\,X^{2p}\,1_{\\{X<n\\}}.$
In view of (2.6), to derive (3.22), it remains to bound the last expectation
by $o(n^{p-1})$. Integrating by parts and assuming that $x=n$ is a point of
continuity of $F(x)$, we have using $\varepsilon_{p+1}(x)\rightarrow 0$ as
$x\rightarrow\infty$,
$\displaystyle{\mathbb{E}}\,X^{2p}\,1_{\\{X<n\\}}$ $\displaystyle=$
$\displaystyle-n^{2p}\,(1-F(n))+2p\int_{0}^{n}x^{2p-1}\,(1-F(x))\,dx$ (3.24)
$\displaystyle\leq$ $\displaystyle
2p\int_{0}^{n}x^{p-2}\,\varepsilon_{p+1}(x)\,dx\ =\ o(n^{p-1}).$
For the second assertion (3.23), we similarly have
$\displaystyle{\mathbb{E}}\,X^{2p}\,1_{\\{X<n\\}}\,$ $\displaystyle\leq$
$\displaystyle\,2p\int_{0}^{n}x^{2p-1-q}\,\varepsilon_{q}(x)\,dx\ =\
O(n^{2p-q}),$ $\displaystyle{\mathbb{E}}\,X^{p}\,1_{\\{X\geq n\\}}$
$\displaystyle=$ $\displaystyle
O(n^{p-q})+p\int_{n}^{\infty}x^{p-q-1}\,\varepsilon_{q}(x)\,dx\ =\
O(n^{p-q}).$
###### Lemma 6
Let $\gamma=(\gamma_{1},\dots,\gamma_{r})\in\mathcal{C}(p,r)$, $2\leq r\leq
p-1$. Under (2.6), we have
${\mathbb{E}}\,\frac{X_{1}^{2\gamma_{1}}\dots
X_{r}^{2\gamma_{r}}}{(\frac{1}{n}\,S_{r}+1)^{p}}=o(n^{p-r-1}\log n).$ (3.25)
###### Proof
If all $\gamma_{i}\leq{p}/{2}$, there is nothing to prove, since then
${\mathbb{E}}\,\frac{X_{1}^{2\gamma_{1}}\dots
X_{r}^{2\gamma_{r}}}{(\frac{1}{n}\,S_{r}+1)^{p}}\,\leq\,{\mathbb{E}}\,X_{1}^{2\gamma_{1}}\dots\mathbb{E}\,X_{r}^{2\gamma_{r}}\,\leq\,({\mathbb{E}}X^{p})^{r}.$
In the other case, suppose for definiteness that $\gamma_{1}$ is the largest
number among all $\gamma_{i}$’s. Necessarily $\gamma_{1}>{p}/{2}$ and
$\gamma_{i}<{p}/{2}$ for all $i\geq 2$. Since $S_{r}<n$ implies $X_{1}<n$, we
similarly have
${\mathbb{E}}\,\frac{X_{1}^{2\gamma_{1}}\dots
X_{r}^{2\gamma_{r}}}{(\frac{1}{n}\,S_{r}+1)^{p}}\
1_{\\{S_{r}<n\\}}\,\leq\,({\mathbb{E}}X^{p})^{r-1}\,{\mathbb{E}}\,X^{2\gamma_{1}}\,1_{\\{X<n\\}}.$
To bound the last expectation, note that
$r-1\leq\gamma_{2}+\dots+\gamma_{r}<{p}/{2}$, so that $p\geq 2r-1$. Hence, if
$x=n$ is a point of continuity of $F(x)$, similarly to (3.24) we get
${\mathbb{E}}\,X^{2\gamma_{1}}\,1_{\\{X<n\\}}\leq
2\gamma_{1}\int_{0}^{n}x^{2\gamma_{1}-p-2}\,\varepsilon_{p+1}(x)\,dx.$ (3.26)
But since $\gamma_{1}\leq p-r+1$,
$\int_{1}^{n}x^{2\gamma_{1}-p-2}\,\varepsilon_{p+1}(x)\,dx\,\leq\,\int_{1}^{n}x^{p-2r}\,\varepsilon_{p+1}(x)\,dx\,=\,o(n^{p-2r+1}),$
(3.27)
if $p\geq 2r$ or $p\leq 2r-2,$ which is even stronger than the rate
$o(n^{p-r-1})$. In the remaining case $p=2r-1$, the last integral is $o(\log
n)$. This proves (3.25) for the part of the expectation restricted to the set
$S_{r}<n$, that is,
${\mathbb{E}}\,\frac{X_{1}^{2\gamma_{1}}\dots
X_{r}^{2\gamma_{r}}}{(\frac{1}{n}\,S_{r}+1)^{p}}\
1_{\\{S_{r}<n\\}}\,=\,o(n^{p-r-1}\log n).$ (3.28)
Note that the logarithmic term cannot be removed in the special situation
where $p=3$, $r=2$, $\gamma_{1}=2$, $\gamma_{2}=1$, in which case the last
integral in (3.27) becomes $\int_{1}^{n}x^{-1}\,\varepsilon_{4}(x)\,dx$.
Turning to the expectation over the complementary set $S_{r}\geq n$, introduce
the events
$\Omega_{i}=\Big{\\{}X_{i}\geq\max_{j\neq i}X_{j}\Big{\\}},\quad i=1,\dots,r.$
On every such set, $X_{i}\leq S_{r}\leq rX_{i}$. In particular, $S_{r}\geq n$
implies $X_{i}\geq n/r$. Hence, together with (3.28), (3.25) would follow from
the stronger assertion
${\mathbb{E}}\,\frac{X_{1}^{2\gamma_{1}}\dots
X_{r}^{2\gamma_{r}}}{X_{i}^{p}}\,1_{\\{X_{i}\geq
n\\}\cap\Omega_{i}}\,=\,o(n^{-r-1})$ (3.29)
with an arbitrary index $1\leq i\leq r$.
Case 1. $i\geq 2$. If we fix any values $X_{1}=x_{1}$ and $X_{i}=x_{i}$, then
the expectation with respect to $X_{j}$, $j\neq i$, in (3.29) will yield a
bounded quantity (since the $p$-moment is finite). Hence (3.29) is simplified
to
${\mathbb{E}}\,X_{1}^{2\gamma_{1}}X_{i}^{2\gamma_{i}-p}\,1_{\\{X_{i}\geq
n\\}\cap\\{X_{i}\geq X_{1}\\}}\,=\,o(n^{-r-1}).$ (3.30)
Here, the expectation over $X_{1}$ may be carried out and estimated similarly
to (3.26), by replacing $n$ with $x_{i}$. Namely,
${\mathbb{E}}\,X_{1}^{2\gamma_{1}}1_{\\{X_{1}\leq
x_{i}\\}}\,\leq\,2\gamma_{1}\int_{0}^{x_{i}}x^{2\gamma_{1}-p-2}\,\varepsilon_{p+1}(x)\,dx\,=\,\delta(x_{i})\,x_{i}^{2\gamma_{1}-p}$
with some $\delta(x_{i})\rightarrow 0$ as $x_{i}\rightarrow\infty$ (this
assertion may be strengthened when $2\gamma_{1}-p=1$). Hence, the expectation
in (3.30) is bounded by
$\displaystyle{\mathbb{E}}\,X_{i}^{2\gamma_{i}+2\gamma_{1}-2p}\,\delta(X_{i})\,1_{\\{X_{i}\geq
n\\}}$ $\displaystyle\leq$ $\displaystyle\delta_{n}\
{\mathbb{E}}\,X_{i}^{2\gamma_{i}+2\gamma_{1}-2p}\,1_{\\{X_{i}\geq n\\}}$
$\displaystyle=\
\delta_{n}\,n^{2\gamma_{i}+2\gamma_{1}-2p}\,(1-F(n))+c_{i}\delta_{n}\int_{n}^{\infty}x^{2\gamma_{i}+2\gamma_{1}-2p-1}\,(1-F(x))\,dx$
$\displaystyle=\
o(n^{2\gamma_{i}+2\gamma_{1}-3p-1})+c_{i}\delta_{n}\int_{n}^{\infty}x^{2\gamma_{i}+2\gamma_{1}-3p-2}\,\varepsilon_{p+1}(x)\,dx$
$\displaystyle=\ o(n^{2\gamma_{i}+2\gamma_{1}-3p-1}),$
where $\delta_{n}=\sup_{x\geq n}\delta(x)\rightarrow 0$. To obtain (3.30), it
remains to check that $2\gamma_{i}+2\gamma_{1}-3p-1\leq-r-1$. And indeed,
since $p=\gamma_{i}+\gamma_{1}+\sum_{j\neq
i,1}\gamma_{j}\geq\gamma_{i}+\gamma_{1}+(r-2),$ the desired relation would
follow from $2\,(p-(r-2))-3p-1\leq-r-1$, that is, $p+r\geq 4$ (which is true).
Case 2. $i=1$. If we fix any value $X_{1}=x_{1}$, the expectation with respect
to $X_{j}$, $j\neq 1$, will yield a bounded quantity (since the $p$-moment is
finite). Hence (3.29) is simplified to
${\mathbb{E}}\,X^{2\gamma_{1}-p}\,1_{\\{X\geq n\\}}=o(n^{-r-1}).$
Here, the expectation may be estimated similarly. Namely,
$\displaystyle{\mathbb{E}}\,X^{2\gamma_{1}-p}\,1_{\\{X\geq n\\}}$
$\displaystyle=$ $\displaystyle\int_{n}^{\infty}x^{2\gamma_{1}-p}\,dF(x)$
$\displaystyle=$ $\displaystyle
o(n^{2\gamma_{1}-2p-1})+\int_{n}^{\infty}x^{2\gamma_{1}-2p-2}\,\varepsilon_{p+1}(x)\,dx\
=\ o(n^{2\gamma_{1}-2p-1}).$
It remains to see that $2\gamma_{1}-2p-1\leq-r-1$. Again, since
$\gamma_{1}\leq p-(r-1)$, the latter would follow from
$2(p-r+1)-2p-1\leq-r-1$, which is the same as $r\geq 2$.
We now consider the lemmas which enable us to get a bound for
${\mathbb{E}}R_{n}^{(p)},$ see (2.8). Without loss of generality, let
${\mathbb{E}}X=1$ and $n\geq 2p$.
Introduce additional notation: $M_{n,r}=\max_{r<i\leq n}X_{i},\ \ (1\leq r\leq
p).$
Recall that there is the representation (3.13) but instead of (3.14) we write
now
${\mathbb{E}}R_{n}^{(p)}\ =\
\sum_{r=1}^{p}\frac{n(n-1)\dots(n-r+1)}{n^{p+1}}\sum_{\gamma\in\mathcal{C}(p,r)}{\mathbb{E}}\,\eta_{n}(\gamma)M_{n}^{2},$
(3.31)
where
$\eta_{n}(\gamma)=\frac{X_{1}^{2\gamma_{1}}\dots
X_{r}^{2\gamma_{r}}}{(\frac{1}{n}\,S_{r}+\frac{1}{n}\,S_{n,r})^{p+1}}.$
In order to bound the expectations on the right-hand side of (3.31), we use
again the events $A_{n,r}$ and $B_{n,r}$, see (3.16). From elementary
inequalities $M_{n}\leq S_{n}$ and
$X_{1}^{2\gamma_{1}}\dots
X_{r}^{2\gamma_{r}}\leq(X_{1}+\dots+X_{r})^{2\gamma_{1}+\dots+2\gamma_{r}}\leq
S_{n}^{2p},$
it follows that
$\eta_{n}(\gamma)M_{n}^{2}\,\leq\,n^{p+1}\,S_{n}^{p+1}\,\leq\,2^{p}n^{p+1}\,(S_{r}^{p+1}+S_{n,r}^{p+1}),$
implying
${\mathbb{E}}\,\eta_{n}(\gamma)\,M_{n}^{2}\,1_{A_{n,r}}\leq
2^{p}n^{p+1}\,\Big{(}{\mathbb{E}}\,S_{r}^{p+1}\,{\mathbb{P}}(A_{n,r})+{\mathbb{E}}S_{r}^{p+1}\,{\mathbb{E}}\,S_{n,r}^{p+1}\,1_{A_{n,r}}\Big{)}.$
(3.32)
Here, by Lemma 1 with $\lambda=1/2$ and using $n-r\geq\frac{1}{2}\,n$, we have
${\mathbb{P}}(A_{n,r})\leq\exp\Big{\\{}-\frac{1}{16\,b^{2}}\,n\Big{\\}},\qquad
b^{2}={\mathbb{E}}X^{2}.$ (3.33)
Also, assuming that the moment ${\mathbb{E}}X^{p+2}$ is finite and applying
the Hölder inequality with exponents ${(p+2)}/{(p+1)}$ and $p+2$, one may
bound the last expectation in (3.32) as
${\mathbb{E}}\,S_{n,r}^{p+1}\,1_{A_{n,r}}\leq\big{(}{\mathbb{E}}\,S_{n,r}^{p+2}\big{)}^{\frac{p+1}{p+2}}\,({\mathbb{P}}(A_{n,r}))^{\frac{1}{p+2}}.$
By Jensen inequality, ${\mathbb{E}}\,S_{n,r}^{p+2}\leq
r^{p+1}\,{\mathbb{E}}X^{p+2}.$ Applying this in (3.32), the inequality (3.33)
yields an exponential bound
${\mathbb{E}}\,\eta_{n}(\gamma)\,M_{n}^{2}\,1_{A_{n,r}}\leq e^{-cn}$ (3.34)
with some constant $c>0$ which does not depend on $n$.
As for the set $B_{n,r}$, we use on it a point-wise upper bound
$\eta_{n}(\gamma)\,\leq\,2^{p+1}\,{X_{1}^{2\gamma_{1}}\dots
X_{r}^{2\gamma_{r}}}/{(\frac{1}{n}\,S_{r}+1)^{p+1}}.$ One may also use
$M_{n}\leq M_{r}+M_{n,r}\leq S_{r}+M_{n,r}$, implying, by Jensen’s inequality,
$M_{n}^{2}\ \leq\ (r+1)\,(X_{1}^{2}+\dots+X_{r}^{2}+M_{n,r}^{2}).$ It gives
$\displaystyle{\mathbb{E}}\,\eta_{n}(\gamma)\,M_{n}^{2}\,1_{B_{n,r}}$
$\displaystyle\leq$ $\displaystyle
2^{p+1}(r+1)\,\sum_{k=1}^{r}{\mathbb{E}}\,\frac{X_{1}^{2\gamma_{1}}\dots
X_{r}^{2\gamma_{r}}}{(\frac{1}{n}\,S_{r}+1)^{p+1}}\,X_{k}^{2}$ (3.35)
$\displaystyle+\
2^{p+1}(r+1)\,\sum_{k=1}^{r}{\mathbb{E}}\,\frac{X_{1}^{2\gamma_{1}}\dots
X_{r}^{2\gamma_{r}}}{(\frac{1}{n}\,S_{r}+1)^{p+1}}\ {\mathbb{E}}M_{n,r}^{2}$
Without essential loss, the last expectation ${\mathbb{E}}M_{n,r}^{2}$ may
be replaced with ${\mathbb{E}}M_{n}^{2}$. The second last expectation was
considered in Lemmas 5 – 6 under the condition (2.6), which holds as long as
the moment ${\mathbb{E}}X^{p+1}$ is finite. The third last expectation in
(3.35), due to an additional factor $X_{k}^{2}$, dominates the second last and
needs further consideration under stronger moment assumptions. Recalling
(3.34) and returning to (3.31), let us summarize in the following statement.
###### Lemma 7
If the moment ${\mathbb{E}}X^{p+2}$ is finite, then
$\displaystyle c\,{\mathbb{E}}R_{n}^{(p)}$ $\displaystyle\leq$ $\displaystyle
e^{-cn}+\max_{1\leq r\leq p}\,\max_{1\leq k\leq r}\,\max_{\gamma\in\mathcal{C}(p,r)}\
\frac{1}{n^{p-r+1}}\ {\mathbb{E}}\,\frac{X_{1}^{2\gamma_{1}}\dots
X_{r}^{2\gamma_{r}}}{(\frac{1}{n}\,S_{r}+1)^{p+1}}\ X_{k}^{2}$ (3.36)
$\displaystyle+\ \max_{1\leq r\leq p}\,\max_{\gamma\in\mathcal{C}(p,r)}\ \frac{1}{n^{p-r+1}}\
{\mathbb{E}}\,\frac{X_{1}^{2\gamma_{1}}\dots
X_{r}^{2\gamma_{r}}}{(\frac{1}{n}\,S_{r}+1)^{p+1}}\ {\mathbb{E}}M_{n}^{2}$
with some constant $c>0$ which does not depend on $n\geq 2p$.
In order to obtain polynomial bounds for the expectations in (3.36) under
suitable moment or tail assumptions, we need to develop corresponding analogs
of Lemmas 5 – 6. We will consider separately the cases $r=1$, $r=p$, and
$2\leq r\leq p-1$ under the tail condition
${\mathbb{P}}\\{X\geq x\\}=O(1/x^{p+\alpha})\quad{\rm as}\
x\rightarrow\infty,$ (3.37)
where $\alpha>0$ is a parameter. It implies that the moments
${\mathbb{E}}X^{q}$ are finite for all $q<p+\alpha$ and is fulfilled as long
as the moment ${\mathbb{E}}X^{p+\alpha}$ is finite. Put
$\varepsilon(x)=x^{p+\alpha}\,(1-F(x)),$ where $F$ denotes the distribution
function of the random variable $X$.
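As a numerical aside, the tail condition (3.37) is easy to check for standard heavy-tailed laws. The following sketch is purely illustrative (the Pareto model and the values of $p$ and $\alpha$ are assumptions made only for the example): it verifies that $\varepsilon(x)=x^{p+\alpha}(1-F(x))$ stays bounded for a Pareto law whose shape parameter equals $p+\alpha$.

```python
import numpy as np

# Illustrative check of the tail condition (3.37): for a Pareto law with
# shape parameter s = p + alpha, one has 1 - F(x) = x**(-s) for x >= 1, so
# epsilon(x) = x**(p + alpha) * (1 - F(x)) is identically 1, hence bounded.
p, alpha = 3, 3.5            # assumed values, for illustration only
s = p + alpha
x = np.logspace(0, 6, 50)    # grid of points x >= 1
tail = x ** (-s)             # 1 - F(x) for the Pareto(s) law
eps = x ** (p + alpha) * tail
print(np.allclose(eps, 1.0))  # True: epsilon is bounded, so (3.37) holds
```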
###### Lemma 8
Under (3.37) with $1<\alpha\leq p+2$,
${\mathbb{E}}\,\frac{X_{1}^{2p}}{(\frac{1}{n}\,X_{1}+1)^{p+1}}\,=\,O(n^{p-\alpha+2}).$
(3.38)
Moreover, for any index $1\leq k\leq r$,
${\mathbb{E}}\,\frac{X_{1}^{2p}}{(\frac{1}{n}\,X_{1}+1)^{p+1}}\
X_{k}^{2}\,=\,O(n^{p-\alpha+2}\log n).$ (3.39)
###### Proof
The expectation in (3.38) splits according to the events $\\{X_{1}\geq n\\}$ and $\\{X_{1}<n\\}$ and satisfies
$\displaystyle{\mathbb{E}}\,\frac{X_{1}^{2p}}{(\frac{1}{n}\,X_{1}+1)^{p+1}}\,1_{\\{X_{1}\geq
n\\}}+{\mathbb{E}}\,\frac{X_{1}^{2p}}{(\frac{1}{n}\,X_{1}+1)^{p+1}}\,1_{\\{X_{1}<n\\}}$
$\displaystyle\hskip-142.26378pt\leq\
n^{p+1}\,{\mathbb{E}}X^{p-1}\,1_{\\{X\geq
n\\}}+{\mathbb{E}}\,X^{2p}\,1_{\\{X<n\\}}.$
Similarly to (3.24), we get
$\displaystyle{\mathbb{E}}\,X^{2p}\,1_{\\{X<n\\}}\leq
2p\int_{0}^{n}x^{p-\alpha-1}\,\varepsilon(x)\,dx\ =\ O(n^{p-\alpha}),$
provided that $\alpha<p$. In the case $\alpha=p$, the last integral is bounded
by $O(\log n)$. In addition,
$\displaystyle n^{p+1}\,{\mathbb{E}}\,X^{p-1}\,1_{\\{X\geq
n\\}}=O(n^{p-\alpha})+(p-1)\,n^{p+1}\int_{n}^{\infty}x^{-\alpha-2}\,\varepsilon(x)\,dx\
=\ O(n^{p-\alpha}).$
This proves (3.38) for $\alpha\leq p$. If $p<\alpha\leq p+2$, then (3.38)
holds automatically, since then $2p<p+\alpha$ and therefore the expectation in
(3.38) does not exceed the finite moment ${\mathbb{E}}X_{1}^{2p}$, while the
right-hand side is bounded away from zero.
For the second assertion, one may assume that $k=1$, in which case the
expectation in (3.39) satisfies
$\displaystyle{\mathbb{E}}\,\frac{X_{1}^{2p+2}}{(\frac{1}{n}\,X_{1}+1)^{p+1}}$
$\displaystyle=$
$\displaystyle{\mathbb{E}}\,\frac{X_{1}^{2p+2}}{(\frac{1}{n}\,X_{1}+1)^{p+1}}\,1_{\\{X_{1}\geq
n\\}}+{\mathbb{E}}\,\frac{X_{1}^{2p+2}}{(\frac{1}{n}\,X_{1}+1)^{p+1}}\,1_{\\{X_{1}<n\\}}$
$\displaystyle\leq$ $\displaystyle n^{p+1}\,{\mathbb{E}}X^{p+1}\,1_{\\{X\geq
n\\}}+{\mathbb{E}}\,X^{2p+2}\,1_{\\{X<n\\}}.$
Here, similarly to the previous step, if $\alpha<p+2$,
${\mathbb{E}}\,X^{2p+2}\,1_{\\{X<n\\}}\,\leq\,(2p+2)\int_{0}^{n}x^{p-\alpha+1}\,\varepsilon(x)\,dx\,=\,O(n^{p-\alpha+2}).$
In the case $\alpha=p+2$, the last integral is bounded by $O(\log n)$. In
addition,
$\displaystyle{\mathbb{E}}\,X^{p+1}\,1_{\\{X\geq
n\\}}=O(n^{-\alpha+1})+(p+1)\int_{n}^{\infty}x^{-\alpha}\,\varepsilon(x)\,dx\ =\
O(n^{-\alpha+1}),$
which, after multiplication by $n^{p+1}$, is of the required order $O(n^{p-\alpha+2})$. This proves (3.39).
###### Lemma 9
If the moment ${\mathbb{E}}X^{4}$ is finite, then
${\mathbb{E}}\,\frac{X_{1}^{2}\dots
X_{p}^{2}}{(\frac{1}{n}\,S_{p}+1)^{p+1}}\,=\,O(1).$
Moreover, for any index $1\leq k\leq p$,
${\mathbb{E}}\,\frac{X_{1}^{2}\dots X_{p}^{2}}{(\frac{1}{n}\,S_{p}+1)^{p+1}}\
X_{k}^{2}\,=\,O(1).$
This statement is clear. The last expectation does not exceed
${\mathbb{E}}X^{4}\,({\mathbb{E}}X^{2})^{p-1}$ which is finite and does not
depend on $n$.
###### Lemma 10
Let $\gamma=(\gamma_{1},\dots,\gamma_{r})\in\mathcal{C}(p,r)$, $2\leq r\leq
p$. Under the condition (3.37) with $2<\alpha\leq 4$, for any index $1\leq
k\leq r$,
${\mathbb{E}}\,\frac{X_{1}^{2\gamma_{1}}\dots
X_{r}^{2\gamma_{r}}}{(\frac{1}{n}\,S_{r}+1)^{p+1}}\,X_{k}^{2}\,=\,O(n^{p-r-\alpha+4}).$
(3.40)
###### Proof
One may reformulate (3.40) as the statement
${\mathbb{E}}\,\frac{X_{1}^{2\gamma_{1}}\dots
X_{r}^{2\gamma_{r}}}{(\frac{1}{n}\,S_{r}+1)^{p+1}}\,=\,O(n^{p-r-\alpha+4})$
(3.41)
in which $\gamma=(\gamma_{1},\dots,\gamma_{r})\in\mathcal{C}(p+2,r)$. If all
$\gamma_{i}\leq\frac{p+2}{2}$, there is nothing to prove, since then
${\mathbb{E}}\,\frac{X_{1}^{2\gamma_{1}}\dots
X_{r}^{2\gamma_{r}}}{(\frac{1}{n}\,S_{r}+1)^{p+1}}\,\leq\,{\mathbb{E}}\,X_{1}^{2\gamma_{1}}\dots\mathbb{E}\,X_{r}^{2\gamma_{r}}\,\leq\,({\mathbb{E}}X^{p+2})^{r}.$
In the other case, we repeat the arguments used in the proof of Lemma 6.
Suppose for definiteness that $\gamma_{1}$ is the largest number among all
$\gamma_{i}$’s. Necessarily $\gamma_{1}>\frac{p+2}{2}$ and therefore
$\gamma_{i}<\frac{p+2}{2}$ for all $i\geq 2$. Since $S_{r}<n\Rightarrow
X_{1}<n$, we similarly have
${\mathbb{E}}\,\frac{X_{1}^{2\gamma_{1}}\dots
X_{r}^{2\gamma_{r}}}{(\frac{1}{n}\,S_{r}+1)^{p+1}}\
1_{\\{S_{r}<n\\}}\,\leq\,({\mathbb{E}}X^{p+2})^{r-1}\,{\mathbb{E}}\,X^{2\gamma_{1}}\,1_{\\{X<n\\}}.$
(3.42)
To bound the last expectation, note that, if $x=n$ is a point of continuity of $F$,
$\displaystyle{\mathbb{E}}\,X^{2\gamma_{1}}\,1_{\\{X<n\\}}$ $\displaystyle=$
$\displaystyle-n^{2\gamma_{1}}\,(1-F(n))+2\gamma_{1}\int_{0}^{n}x^{2\gamma_{1}-1}\,(1-F(x))\,dx$
$\displaystyle\leq$ $\displaystyle
2\gamma_{1}\int_{0}^{n}x^{2\gamma_{1}-1-p-\alpha}\,\varepsilon(x)\,dx.$
But since $\gamma_{1}\leq p-r+3$ (which follows from
$\gamma_{1}+\gamma_{2}+\dots+\gamma_{r}=p+2$ and $\gamma_{i}\geq 1$), we have
$\int_{1}^{n}x^{2\gamma_{1}-p-\alpha-1}\,\varepsilon(x)\,dx\,\leq\,\int_{1}^{n}x^{p-2r-\alpha+5}\,\varepsilon(x)\,dx.$
In the worst case, the last integral grows at the desired rate $O(n^{p-r-\alpha+4})$
if and only if $p-2r-\alpha+5\leq p-r-\alpha+3$, that is, $r\geq 2$
(which is true). Thus,
${\mathbb{E}}\,X^{2\gamma_{1}}\,1_{\\{X<n\\}}\,=\,O(n^{p-r-\alpha+4}).$
In view of (3.42), this proves (3.41) for the part of the expectation
restricted to the set $S_{r}<n$, that is,
${\mathbb{E}}\,\frac{X_{1}^{2\gamma_{1}}\dots
X_{r}^{2\gamma_{r}}}{(\frac{1}{n}\,S_{r}+1)^{p+1}}\
1_{\\{S_{r}<n\\}}\,=\,O(n^{p-r-\alpha+4}).$ (3.43)
Here, the worst situation is attained in the case $r=2$, $\gamma_{1}=p+1$,
$\gamma_{2}=1$.
Turning to the expectation over the complementary set $S_{r}\geq n$, introduce
the events
$\Omega_{i}=\Big{\\{}X_{i}\geq\max_{j\neq i}X_{j}\Big{\\}},\quad i=1,\dots,r.$
On every such set, $X_{i}\leq S_{r}\leq rX_{i}$. In particular, $S_{r}\geq n$
implies $X_{i}\geq n/r$. Hence, together with (3.43), (3.41) would follow from
the inequality
${\mathbb{E}}\,\frac{X_{1}^{2\gamma_{1}}\dots
X_{r}^{2\gamma_{r}}}{X_{i}^{p+1}}\,1_{\\{X_{i}\geq
n\\}\cap\Omega_{i}}\,=\,O(n^{-r-\alpha+3})$ (3.44)
with an arbitrary index $1\leq i\leq r$.
Case 1. $i\geq 2$. If we fix any values $X_{1}=x_{1}$ and $X_{i}=x_{i}$, then
the expectation with respect to $X_{j}$, $j\neq 1,i$, in (3.44) will yield a
bounded quantity (since the $(p+2)$-moment is finite). Hence (3.44) is
simplified to
${\mathbb{E}}\,X_{1}^{2\gamma_{1}}X_{i}^{2\gamma_{i}-p-1}\,1_{\\{X_{i}\geq
n\\}\cap\\{X_{i}\geq X_{1}\\}}\,=\,O(n^{-r-\alpha+3}).$ (3.45)
Here, the expectation over $X_{1}$ may be estimated similarly to the previous
step, by replacing $n$ with $x_{i}$. Recall that $\gamma_{1}>\frac{p+2}{2}$
and hence $2\gamma_{1}\geq p+3$.
Case 1.1. $2\gamma_{1}>p+\alpha$. Then we have
${\mathbb{E}}\,X_{1}^{2\gamma_{1}}1_{\\{X_{1}\leq
x_{i}\\}}\,\leq\,2\gamma_{1}\int_{0}^{x_{i}}x^{2\gamma_{1}-1-p-\alpha}\,\varepsilon(x)\,dx\,\leq\,Cx_{i}^{2\gamma_{1}-p-\alpha}$
with some constant $C>0$. Hence, up to a constant, the expectation in (3.44)
is bounded by
$\displaystyle{\mathbb{E}}\,X_{i}^{2\gamma_{i}+2\gamma_{1}-2p-\alpha-1}\,1_{\\{X_{i}\geq
n\\}}$ $\displaystyle=$
$\displaystyle\int_{n}^{\infty}x^{2\gamma_{i}+2\gamma_{1}-2p-\alpha-1}\,dF(x)$
$\displaystyle\hskip-128.0374pt=\
n^{2\gamma_{i}+2\gamma_{1}-2p-\alpha-1}\,(1-F(n))+c_{i}\int_{n}^{\infty}x^{2\gamma_{i}+2\gamma_{1}-2p-\alpha-2}\,(1-F(x))\,dx$
$\displaystyle\hskip-128.0374pt=\
O(n^{2\gamma_{i}+2\gamma_{1}-3p-2\alpha-1})+c_{i}\int_{n}^{\infty}x^{2\gamma_{i}+2\gamma_{1}-3p-2\alpha-2}\,\varepsilon(x)\,dx$
$\displaystyle\hskip-128.0374pt=\
O(n^{2\gamma_{i}+2\gamma_{1}-3p-2\alpha-1}).$
To obtain (3.45), it remains to check that
$2\gamma_{i}+2\gamma_{1}-3p-2\alpha-1\leq-r-\alpha+3$. And indeed, since
$p+2=\gamma_{i}+\gamma_{1}+\sum_{j\neq
i,1}\gamma_{j}\geq\gamma_{i}+\gamma_{1}+(r-2),$
the desired relation would follow from
$2\,(p-r+4)-3p-2\alpha-1\leq-r-\alpha+3$, that is, $p+r+\alpha\geq 4$ (which is
true since $\alpha>2$, while $p,r\geq 2$).

Case 1.2. $2\gamma_{1}\leq
p+\alpha$. Then ${\mathbb{E}}\,X_{1}^{2\gamma_{1}}1_{\\{X_{1}\leq
x_{i}\\}}\leq{\mathbb{E}}\,X^{p+\alpha}$ which is bounded in $x_{i}$, and the
expectation in (3.44) does not exceed, up to a constant,
$\displaystyle{\mathbb{E}}\,X_{i}^{2\gamma_{i}-p-1}\
1_{\\{X_{i}\geq n\\}}$ $\displaystyle=$
$\displaystyle\int_{n}^{\infty}x^{2\gamma_{i}-p-1}\,dF(x)$
$\displaystyle\hskip-128.0374pt=\
n^{2\gamma_{i}-p-1}\,(1-F(n))+c_{i}\int_{n}^{\infty}x^{2\gamma_{i}-p-2}\,(1-F(x))\,dx$
$\displaystyle\hskip-128.0374pt=\
O(n^{2\gamma_{i}-2p-\alpha-1})+c_{i}\int_{n}^{\infty}x^{2\gamma_{i}-2p-\alpha-2}\,\varepsilon(x)\,dx\
=\ O(n^{2\gamma_{i}-2p-\alpha-1}).$
To obtain (3.45), it remains to check that
$2\gamma_{i}-2p-\alpha-1\leq-r-\alpha+3$, that is, $2\gamma_{i}+r\leq 2p+4$.
And indeed, this follows from $2\gamma_{i}\leq p+2$ and $r\leq p+2$.
Case 2. $i=1$. If we fix any value $X_{1}=x_{1}$ in (3.44), the expectation
with respect to $X_{j}$, $j\neq 1$, yields a bounded quantity. Hence (3.44) is
simplified to
${\mathbb{E}}\,X^{2\gamma_{1}-p-1}\,1_{\\{X\geq n\\}}=O(n^{-r-\alpha+3}).$
We have
$\displaystyle{\mathbb{E}}\,X^{2\gamma_{1}-p-1}\,1_{\\{X\geq n\\}}$
$\displaystyle=$ $\displaystyle\int_{n}^{\infty}x^{2\gamma_{1}-p-1}\,dF(x)$
$\displaystyle\hskip-56.9055pt=\
n^{2\gamma_{1}-p-1}\,(1-F(n))+(2\gamma_{1}-p-1)\int_{n}^{\infty}x^{2\gamma_{1}-p-2}\,(1-F(x))\,dx$
$\displaystyle\hskip-56.9055pt=\
O(n^{2\gamma_{1}-2p-\alpha-1})+\int_{n}^{\infty}x^{2\gamma_{1}-2p-\alpha-2}\,\varepsilon(x)\,dx\
=\ O(n^{2\gamma_{1}-2p-\alpha-1}).$
It remains to see that $2\gamma_{1}-2p-\alpha-1\leq-r-\alpha+3$, that is,
$2\gamma_{1}+r\leq 2p+4$. But this follows from
$p+2=\gamma_{1}+\dots+\gamma_{r}\geq\gamma_{1}+(r-1)\geq\gamma_{1}+\frac{r}{2}$.
## 4 Proofs of main results
###### Proof
of Theorem 2.1. Fix $k\geq 3.$ In the following we omit $k$ in notation when
it is not necessary. So we write $S_{n}=S_{n}(k),$ $\lambda=\lambda(k),$
$Z=Z_{k},$ $I=I(k).$ We say that a random variable $V$ has a mixed Poisson
distribution with mixing distribution $F$ when, for every integer $m\geq 0,$
${\mathbb{P}}(V=m)={\mathbb{E}}\left(e^{-\Lambda}\frac{\Lambda^{m}}{m!}\right),$
where $\Lambda$ is a random variable with distribution $F.$
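As an illustration of this definition, the following Monte Carlo sketch (the exponential mixing law and the sample size are assumptions made only for the example) compares the empirical law of a mixed Poisson variable with the defining formula.

```python
import math
import numpy as np

rng = np.random.default_rng(0)
N = 10**6
# Assumed mixing law, purely for illustration: Lambda ~ Exponential(1).
lam = rng.exponential(1.0, size=N)
V = rng.poisson(lam)                    # mixed Poisson sample

m = 2
empirical = np.mean(V == m)
# Defining formula: P(V = m) = E[ exp(-Lambda) * Lambda**m / m! ]
formula = np.mean(np.exp(-lam) * lam**m / math.factorial(m))
print(empirical, formula)               # the two estimates agree up to Monte Carlo error
```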
Put $\Lambda=\sum_{\alpha\in
I}{\mathbb{E}}_{W_{1},W_{2},...,W_{n}}Y_{\alpha}.$
Let $V$ have the mixed Poisson distribution with mixing variable $\Lambda$. Then, for any real function $h:\\{0,1,2,...\\}\rightarrow{\mathbb{R}}$, we have
$|{\mathbb{E}}h(S_{n})-{\mathbb{E}}h(Z)|\leq{\mathbb{E}}|{\mathbb{E}}_{W_{1},...,W_{n}}h(S_{n})-{\mathbb{E}}_{W_{1},...,W_{n}}h(V)|+|{\mathbb{E}}h(V)-{\mathbb{E}}h(Z)|.$
(4.46)
For each $\alpha\in I$, define $B_{\alpha}\equiv\\{\beta\in I:$ $\alpha$ and
$\beta$ have at least one edge in common$\\}$. Put
$b_{1}=\sum_{\alpha\in I}\sum_{\beta\in B_{\alpha}}p_{\alpha}p_{\beta},$
where $p_{\alpha}={\mathbb{E}}_{W_{1},...,W_{n}}Y_{\alpha}.$
$b_{2}=\sum_{\alpha\in I}\sum_{\alpha\neq\beta\in B_{\alpha}}p_{\alpha\beta},$
where $p_{\alpha\beta}={\mathbb{E}}_{W_{1},...,W_{n}}Y_{\alpha}Y_{\beta}.$
Note that, for any $\alpha\in I$ and $\beta\in I\setminus B_{\alpha},$ the
cycles $\alpha$ and $\beta$ may have common vertices but they do not have any
edge in common. Therefore, for such $\alpha$ and $\beta$, the random variables
$Y_{\alpha}$ and $Y_{\beta}$ are conditionally independent given weights
$W_{1},...,W_{n}.$ Thus, by Theorem 1 in Arratia, Goldstein, and Gordon (1989), proved with the
Chen–Stein method, and relations (2.2) and (4.46), we get
$\parallel\mathscr{L}(S_{n}(k))-\mathscr{L}(Z)\parallel\,\lesssim{\mathbb{E}}(b_{1}+b_{2})+|{\mathbb{E}}\Lambda-\lambda(k)|,$
(4.47)
where we write here and in the following that $A_{n}\,\lesssim B_{n}$ or
$A_{n}\,\gtrsim B_{n}$ when there exists a positive constant $c$ not depending
on $n$ such that $A_{n}\,\leq cB_{n}$ or $A_{n}\,\geq cB_{n}.$
For random variables $b_{1}$ and $b_{2}$, we get, cf. (3.14), by the i.i.d.
assumption and the simple inequality $2cd\leq c^{2}+d^{2}$ for positive $c$ and $d$,
${\mathbb{E}}(b_{1}+b_{2})\ \lesssim\
\sum_{p=k+2}^{2k}\sum_{r=k}^{p-1}\frac{n(n-1)\dots(n-r+1)}{n^{p}}\sum_{\gamma\in\mathcal{C}(p,r)}{\mathbb{E}}\psi_{n}(\gamma),$
(4.48)
where
$\psi_{n}(\gamma)={W_{1}^{2\gamma_{1}}\dots
W_{r}^{2\gamma_{r}}}/{(\frac{1}{n}\,L_{r}+\frac{1}{n}\,L_{n,r})^{p}}$
and
$L_{r}=W_{1}+\dots+W_{r},\quad L_{n,r}=W_{r+1}+\dots+W_{n}.$
For example, the minimal values $p=k+2$ and $r=k+1$ occur for the cycles
$\alpha=(1,2,\dots,k)$ and $\beta=(1,2,\dots,k-1,k+1).$ Then
${\mathbb{E}}p_{\alpha\beta}\lesssim{\mathbb{E}}W_{1}^{4}W_{2}^{2}\dots
W_{k-1}^{2}W_{k}^{2}W_{k+1}^{2}/L_{n}^{k+2}.$
The maximal values $p=2k$ and $r=k$ occur for the cycle $\alpha=(1,2,\dots,k)$.
Then
${\mathbb{E}}p_{\alpha}^{2}\leq{\mathbb{E}}W_{1}^{4}\dots
W_{k}^{4}/L_{n}^{2k}.$
Lemmas 4 and 6 and inequality (4.48) under condition (2.4) imply
${\mathbb{E}}(b_{1}+b_{2})=o\left(\frac{\log n}{n}\right).$ (4.49)
Now we construct an upper bound for the last summand in (4.47).
It is clear that
${\mathbb{E}}\Lambda\leq\frac{1}{2k}{\mathbb{E}}\left(\frac{(W_{1}^{2}+\dots+W_{n}^{2})^{k}}{L_{n}^{k}}\right).$
(4.50)
On the other hand, note that for a positive $a$ and positive numbers
$x_{i}$, $i=1,2,\dots,k$, we have (see, e.g., Lemma 8 in Liu and Dong (2017))
$\prod_{i=1}^{k}\frac{1}{a+x_{i}}\geq\frac{1}{a^{k}}-\frac{\sum_{i=1}^{k}x_{i}}{a^{k+1}}.$
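This elementary bound follows from the Weierstrass product inequality $\prod_{i}(1-t_{i})\geq 1-\sum_{i}t_{i}$ applied to $t_{i}=x_{i}/(a+x_{i})$. A quick numerical sanity check on random inputs, given purely as an illustration (the ranges of $a$, $k$ and $x_{i}$ below are arbitrary assumptions), is:

```python
import numpy as np

rng = np.random.default_rng(1)
for _ in range(1000):
    k = rng.integers(1, 8)                 # number of factors
    a = rng.uniform(0.1, 5.0)
    x = rng.uniform(0.0, 5.0, size=k)
    lhs = np.prod(1.0 / (a + x))
    rhs = 1.0 / a**k - x.sum() / a**(k + 1)
    assert lhs >= rhs - 1e-12              # the bound holds on every sample
```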
Therefore, by the i.i.d. assumption, we get
$\displaystyle{\mathbb{E}}\Lambda$ $\displaystyle\geq$
$\displaystyle\frac{1}{2k}{\mathbb{E}}\left(\frac{(W_{1}^{2}+\dots+W_{n}^{2})^{k}}{L_{n}^{k}}\right)-c_{1}\sum_{r=1}^{k-1}\frac{n(n-1)\dots(n-r+1)}{n^{k}}\sum_{\gamma\in\mathcal{C}(k,r)}{\mathbb{E}}\psi_{n}(\gamma)$
(4.51)
$\displaystyle-\,\frac{c_{2}}{n}\sum_{\gamma\in\mathcal{C}(k+1,k)}{\mathbb{E}}\left(\frac{W_{1}^{2\gamma_{1}}\dots
W_{r}^{2\gamma_{r}}}{(L_{n}/n)^{k+1}}\right),$
where $c_{1}$ and $c_{2}$ do not depend on $n$.
Combining Lemmas 4 and 6 and relations (2.7), (4.47), (4.49), (4.50) and
(4.51), we finish the proof of Theorem 2.1.
###### Proof
of Theorem 2.2. We split the proof of the theorem into several steps. Without
loss of generality, let ${\mathbb{E}}X=1$.
Necessity. By Lemma 3, for the convergence
${\mathbb{E}}T_{n}^{p}\rightarrow({\mathbb{E}}X^{2})^{p}$, it is necessary
that all summands in (3.14) with $r<p$ vanish as $n\rightarrow\infty$. In
particular, for the shortest tuple $\gamma$ with $r=1$ as in Lemma 2, it
should be required that ${n^{1-p}}\,{\mathbb{E}}\,\xi_{n}(\gamma)\rightarrow
0$ as $n\rightarrow\infty$. Hence, from the inequality (3.15) it follows that
${\mathbb{E}}X^{p}\,1_{\\{X\geq n\\}}=o(1/n).$
This relation may be simplified in terms of the tails of the distribution of
$X$. Indeed, ${\mathbb{E}}\,X^{p}\,1_{\\{X\geq
n\\}}\,\geq\,n^{p}\,{\mathbb{P}}\\{X\geq n\\},$ so that the property (2.6) is
necessary for the convergence
${\mathbb{E}}T_{n}^{p}\rightarrow({\mathbb{E}}X^{2})^{p}$.
Sufficiency and rate of convergence. First note that the condition (2.6)
ensures that the moment ${\mathbb{E}}X^{p}$ is finite. For the convergence
part of Theorem 2.2 we apply Lemmas 4-6, which imply that
${\mathbb{E}}\xi_{n}(\gamma)=o(n^{p-r})$ for any collection
$\gamma=(\gamma_{1},\dots,\gamma_{r})$ with $r<p$. It remains to take into
account Lemma 3 about the longest tuple $\tilde{\gamma}=(1,\dots,1)$ of length
$r=p$ and to recall the representation (3.14).
Turning to the rate of convergence, first note that, by Lemmas 4 and 6, for any
$\gamma\in\mathcal{C}(p,r)$ with $2\leq r\leq p-1$,
$\frac{n(n-1)\dots(n-r+1)}{n^{p}}\
{\mathbb{E}}\xi_{n}(\gamma)\,=\,o\Big{(}\frac{\log n}{n}\Big{)}$ (4.52)
For the shortest tuple $\gamma=(p)$ with $r=1$, we apply Lemma 5 with
$q=p+\frac{3}{2}$ and thus assume that ${\mathbb{P}}\\{X\geq
x\\}=O(x^{-p-\frac{3}{2}})$. Together with Lemma 4, this gives
$\frac{n}{n^{p}}\
{\mathbb{E}}\xi_{n}(\gamma)\,=\,O\Big{(}\frac{1}{\sqrt{n}}\Big{)}.$ (4.53)
Note that with this tail hypothesis, necessarily
${\mathbb{E}}X^{\beta}<\infty$ for any $\beta<p+\frac{3}{2}$. Since $p\geq 2$,
we have that the 3-rd moment ${\mathbb{E}}X^{3}$ is finite. Applying both
(4.52) and (4.53) in the representation (3.14) and using (3.20), we thus
obtain that
${\mathbb{E}}T_{n}^{p}\,=\,{\mathbb{E}}\xi_{n}1_{B_{n,p}}+O({1}/{\sqrt{n}}),$
(4.54)
with $\xi_{n}$ defined in (3.18).
It remains to study an asymptotic behavior of the last expectation in (4.54).
Note that $\frac{1}{n}\,S_{n}\geq\frac{1}{n}\,S_{n,p}\geq\frac{1}{2}$ on the
set $B_{n,p}$ as long as $n\geq 2p$. Applying the Taylor formula, we use an
elementary inequality $|{x^{-p}}-1|\leq p\,2^{p+1}\,|x-1|$ for
$x\geq\frac{1}{2}.$ In particular, on the set $B_{n,p}$ one has
$\Big{|}{(\frac{1}{n}\,S_{n})^{-p}}-1\Big{|}\,\leq\,p\,2^{p+1}\,\Big{|}\frac{1}{n}\,S_{n}-1\Big{|}.$
This gives
$\displaystyle\big{|}\xi_{n}-X_{1}^{2}\dots X_{p}^{2}\,\big{|}\,1_{B_{n,p}}$
$\displaystyle\leq$ $\displaystyle p\,2^{p+1}\,X_{1}^{2}\dots
X_{p}^{2}\,\Big{|}1-\frac{1}{n}\,S_{p}-\frac{1}{n}\,S_{n,p}\Big{|}$
$\displaystyle\leq$ $\displaystyle p\,2^{p+1}\,X_{1}^{2}\dots
X_{p}^{2}\,\Big{|}1-\frac{1}{n}\,S_{n,p}\Big{|}+\frac{p\,2^{p+1}}{n}\,X_{1}^{2}\dots
X_{p}^{2}\,S_{p},$
so, taking the expected values,
$\displaystyle\big{|}{\mathbb{E}}\xi_{n}1_{B_{n,p}}-{\mathbb{E}}X_{1}^{2}\dots
X_{p}^{2}\,1_{B_{n,p}}\big{|}$ $\displaystyle\leq$ $\displaystyle
p\,2^{p+1}\,({\mathbb{E}}X^{2})^{p}\,{\mathbb{E}}\,\Big{|}1-\frac{1}{n}\,S_{n,p}\Big{|}$
$\displaystyle+\
\frac{p\,2^{p+1}}{n}\,({\mathbb{E}}X^{2})^{p-1}\,{\mathbb{E}}X^{3}.$
In view of (3.17),
${\mathbb{E}}X_{1}^{2}\dots
X_{p}^{2}\,1_{B_{n,p}}\,=\,{\mathbb{E}}X_{1}^{2}\dots
X_{p}^{2}+O(e^{-cn})\,=\,({\mathbb{E}}X^{2})^{p}+O(e^{-cn})$
for some constant $c>0$. Recalling (3.20), we thus get that
$\displaystyle\big{|}{\mathbb{E}}\xi_{n}-({\mathbb{E}}X^{2})^{p}\big{|}$
$\displaystyle\leq$ $\displaystyle
p\,2^{p+1}\,({\mathbb{E}}X^{2})^{p}\,{\mathbb{E}}\,\Big{|}1-\frac{1}{n}\,S_{n,p}\Big{|}$
$\displaystyle+\
\frac{p\,2^{p+1}}{n}\,({\mathbb{E}}X^{2})^{p-1}\,{\mathbb{E}}X^{3}+o(e^{-cn}).$
Finally,
$\displaystyle{\mathbb{E}}\,\Big{|}\frac{1}{n}\,S_{n,p}-1\Big{|}$
$\displaystyle=$
$\displaystyle\frac{1}{n}\,{\mathbb{E}}\,|S_{n,p}-n|\,\leq\,\frac{1}{n}\,{\mathbb{E}}\,|S_{n,p}-(n-p)|+\frac{p}{n}$
$\displaystyle\leq$ $\displaystyle\frac{1}{n}\,\sqrt{{\rm
Var}(S_{n,p})}+\frac{p}{n}\,\leq\,\frac{1}{\sqrt{n}}\,\sqrt{{\mathbb{E}}X^{2}}+\frac{p}{n}.$
It remains to refer to (4.54).
###### Proof
of Theorem 2.3. Let us apply Lemmas 8–10 in the inequality (3.36). Using the
bounds for the cases $r=1$, $r=p$, and $2\leq r\leq p-1$, and assuming that
(3.37) is fulfilled for an integer $p\geq 2$ and a real number $2<\alpha\leq
4$, they imply that
${\mathbb{E}}R_{n}^{(p)}\,\leq\,e^{-cn}+\Big{(}\frac{1}{n^{\alpha-2}}+\frac{1}{n}+\frac{1}{n^{\alpha-3}}\Big{)}+\Big{(}\frac{\log
n}{n^{\alpha-2}}+\frac{1}{n}+\frac{\log n}{n^{2}}\Big{)}\
{\mathbb{E}}M_{n}^{2},$
where the constant $c>0$ does not depend on $n$. To simplify, we additionally
assume that $\alpha\geq 3$, leading to
$c\,{\mathbb{E}}R_{n}^{(p)}\,\leq\,\frac{1}{n^{\alpha-3}}+\frac{\log
n}{n}\,{\mathbb{E}}M_{n}^{2}.$ (4.55)
The last expectation in (4.55) may also be estimated in a polynomial way.
Namely, since for any $q\geq 2$, one has
$M_{n}^{2}\,\leq\,(X_{1}^{q}+\dots+X_{n}^{q})^{{2}/{q}},$ we get, by Jensen’s
inequality,
${\mathbb{E}}M_{n}^{2}\,\leq\,({\mathbb{E}}X_{1}^{q}+\dots+{\mathbb{E}}X_{n}^{q})^{\frac{2}{q}}\,=\,n^{\frac{2}{q}}\,({\mathbb{E}}X^{q})^{\frac{2}{q}}.$
Therefore, choosing $2<q<p+\alpha$ to be sufficiently close to $p+\alpha$, and
using $\alpha=7/2$, from (4.55) we obtain (2.9). When ${\mathbb{E}}X^{p+4}$ is
finite, we get (2.10).
At last, to prove (2.11), note that the finiteness of the exponential moment
of $X$ is actually equivalent to the family of moment bounds
$({\mathbb{E}}X^{q})^{1/q}\leq cq,$ for $q\geq 1,$ which for $q\geq 2$ give
$\displaystyle{\mathbb{E}}M_{n}^{2}\leq{\mathbb{E}}\,(X_{1}^{q}+\dots+X_{n}^{q})^{2/q}\leq({\mathbb{E}}X_{1}^{q}+\dots+{\mathbb{E}}X_{n}^{q})^{2/q}\
\leq\ (cq)^{2}\,n^{2/q}.$
Choosing here $q$ to be of order $\log n$, we arrive at
${\mathbb{E}}M_{n}^{2}\leq C\,(\log n)^{2}$ with a constant $C$ independent of
$n$. Applying this bound in (4.55) with $\alpha=4$, we then obtain the much
better rate as in (2.11).
###### Acknowledgements.
This research was done within the framework of the Moscow Center for
Fundamental and Applied Mathematics, Lomonosov Moscow State University, and
HSE University Basic Research Programs. Theorem 1 was proved under support of
the RSF grant No. 18-11-00132. Research of S. Bobkov was partially supported
by NSF grant DMS–1855575.
## References
* (1) R. Arratia, L. Goldstein, and L. Gordon, Two moments suffice for Poisson approximations: the Chen-Stein method. Ann. Probab. 17, 9 – 25 (1989).
* (2) R. Arratia, L. Goldstein, and L. Gordon, Poisson approximation and the Chen-Stein method. Statistical Science 5, 403 – 434 (1990).
* (3) S. Bhamidi, R. van der Hofstad, J.S.H. van Leeuwaarden, Novel scaling limits for critical inhomogeneous random graphs. Ann. Probab. 40, 2299–2361 (2012).
* (4) B. Bollobás, S. Janson, O. Riordan, The phase transition in inhomogeneous random graphs. Random Struct. Algorithms 31, 3 – 122 (2007).
* (5) T. Britton, M. Deijfen, A. Martin-Löf, Generating simple random graphs with prescribed degree distribution. J. Stat. Phys. 124, 1377 – 1397 (2006).
* (6) D. Chakrabarti et al., Epidemic thresholds in real networks. ACM Transactions on Information and System Security (TISSEC) 10(4), 1 – 26 (2008).
* (7) F. Chung and L. Lu, Connected components in random graphs with given expected degree sequences. Ann. Combinat. 6(2), 125 – 145 (2002).
* (8) F. Chung, L. Lu, The volume of the giant component of a random graph with given expected degrees. SIAM J. Discrete Math. 20, 395 – 411 (2006) (electronic).
* (9) B. DeMarco, J. Kahn, Upper tails for triangles. Random Structures Algorithms 40(4), 452 – 459 (2012).
* (10) P. Erdős, A. Rényi, On Random Graphs. Publ. Math. Debrecen 6, 290 – 297 (1959).
* (11) C. Faloutsos, P. Faloutsos, and M. Faloutsos, On power-law relationships of the internet topology. Computer Communications Rev., 29, 251–262 (1999).
* (12) E.N. Gilbert, Random graphs. Annals of Mathematical Statistics 30, 1141 – 1144 (1959).
* (13) R. van der Hofstad, Random Graphs and Complex Networks. Cambridge Series in Statistical and Probabilistic Mathematics, 1 (2017).
* (14) Z. Hu, V. Ulyanov, Q. Feng, Limit theorems for number of edges in the generalized random graphs with random vertex weights. Journal of Mathematical Sciences, 218(2), 231 – 237 (2016).
* (15) S. Janson, K. Oleszkiewicz, A. Rucinski, Upper tails for subgraph counts in random graphs. Israel J. Math. 142, 61 – 92 (2004).
* (16) S. Janson, Asymptotic equivalence and contiguity of some random graphs, Random Struct. Algorithms 36, 26 – 45 (2010).
* (17) J.H Kim, V.H. Vu, Divide and conquer martingales and the number of triangles in a random graph. Random Structures Algorithms 24(2), 166 – 174 (2004).
* (18) Q. Liu, Z. Dong, Limit laws for the number of triangles in the generalized random graphs with random node weights. Statistics & Probability Letters 161, 108733 (2020).
* (19) Q. Liu, Z. Dong, Moment-based spectral analysis of large-scale generalized random graphs. IEEE Access 5, 9453 – 9463 (2017).
* (20) I. Norros, H. Reittu, On a conditionally Poisson graph process. Adv. in Appl. Probab. 38, 59 – 75 (2006).
* (21) V. M. Preciado and A. Jadbabaie, Moment-based spectral analysis of large-scale networks using local structural information. IEEE/ACM Trans. Netw. 21(2), 373 – 382 (2013).
# A two-way photonic quantum entanglement transfer interface
Yiwen Huang1,‡, Yuanhua Li1,2,‡<EMAIL_ADDRESS>Zhantong Qi1, Juan
Feng1, Yuanlin Zheng1,3 Xianfeng Chen1,3,4,5<EMAIL_ADDRESS>1 State Key
Laboratory of Advanced Optical Communication Systems and Networks, School of
Physics and Astronomy, Shanghai Jiao Tong University, Shanghai 200240, China
2 Department of Physics, Jiangxi Normal University, Nanchang 330022, China
3 Shanghai Research Center for Quantum Sciences, Shanghai 201315, China
4 Jinan Institute of Quantum Technology, Jinan 250101, China
5 Collaborative Innovation Center of Light Manipulation and Applications,
Shandong Normal University, Jinan 250358, China
$\ddagger$ These authors contributed equally to this work
###### Abstract
A quantum interface for two-way entanglement transfer between orbital angular
momentum degree of freedom in free space and time-energy degree of freedom in
optical fibers, provides a novel way toward establishing entanglement between
remote heterogeneous quantum nodes. Here, we experimentally demonstrate this
kind of transfer interface by using two interferometric cyclic gates. By using
this quantum interface, we perform two-way entanglement transfer for the two
degrees of freedom. The results show that the quantum entangled state can be
switched back and forth between orbital angular momentum and time-energy
degrees of freedom, and the fidelity of the state before and after switching
is higher than 90%. Our work demonstrates the feasibility and high performance
of our proposed transfer interface, and paves a route toward building a large-
scale quantum communication network.
###### pacs:
42.65.Ky
††preprint: APS/123-QED
Introduction-An entanglement-based quantum network is a platform for the
science and application of secure communication and distributed quantum
computation. In a complex network, quantum entanglement can be encoded in
various degrees of freedom (DOF), such as polarization, time-energy and
orbital angular momentum (OAM). Due to the unique phase-intensity profile and
unlimited number of orthogonal modes, OAM entangled states have engendered a
variety of quantum applications Erhard _et al._ (2018), such as high-
dimensional quantum key distribution Mafu _et al._ (2013), quantum
teleportation Wang _et al._ (2015), high-dimensional entanglement
swappingZhang _et al._ (2017), fundamental tests of quantum mechanics Dada
_et al._ (2011), digital spiral imaging Chen _et al._ (2014), quantum pattern
recognition Qiu _et al._ (2019) and transmission matrix measurement Valencia
_et al._ (2020). OAM entanglement is currently more and more widely used in
quantum communication tasks. On the other hand, time-energy entanglement is of
great interest as it supports various encodings Maclean _et al._ (2018);
Martin _et al._ (2017) and is insensitive to the birefringence effect of
fibers Marcikic _et al._ (2002); Zhang _et al._ (2008); Grassani _et al._
(2015). Different from polarization entanglement which requires real-time
active control to compensate polarization drifts Treiber _et al._ (2009); Yu
_et al._ (2015); Wengerowsky _et al._ (2018); Joshi _et al._ (2020), time-
energy entanglement, both continuous and discrete versions, shows intrinsic
robustness for propagation through long-distance fiber with the help of
passive dispersion-compensating devices Lee _et al._ (2014); Aktas _et al._
(2016). To date, time-energy entanglement sources have been widely used in
optical-fiber quantum tasks, such as quantum key distribution Zhong _et al._
(2015); Zhang _et al._ (2014a); Liu _et al._ (2020), dense coding Williams
_et al._ (2017) and long-distance quantum entanglement swapping Li _et al._
(2019). It is an important candidate for building long-distance optical fiber
networks.
The future quantum communication network is composed of free space and
optical-fiber coupling connection. In order to accomplish different quantum
tasks, the nodes of the network need to transfer information-carrying
entangled photons back and forth in free space and optical fibers. This
requires a reversible quantum entanglement transfer (QET) interface to
perfectly switch the entangled photons back and forth in free space and
optical fibers, which is the core technology for realizing quantum
communication between nodes in the network. So far, QET has been implemented
from time to polarization Vasconcelos _et al._ (2020), polarization to OAM
Nagali _et al._ (2009), and polarization to frequency DOF Ramelow _et al._
(2009). The implementation of a two-way QET interface that can control quantum
entanglement switching back and forth between time-energy and OAM DOF is
urgently needed for constructing the large-scale quantum communication
network. However, such a two-way QET interface of entangled photons has not
been implemented.
Here we demonstrate the first experiment of QET between time-energy and OAM
DOF of photons. Two interferometric quantum gates consisting of a Franson-type
interferometer with spiral phase plates (SPP) inserted in different paths
utilized for transferring quantum entanglement information from time-energy to
OAM DOF. Furthermore, we use two OAM sorters followed by two Mach-Zehnder
interferometers (MZIs) to implement QET from OAM to time-energy DOF. The
experiment results reveal a high quality of the QET between these two DOF,
while preserving quantum coherence and entanglement. Our approach paves the
way towards a novel means of connecting remote heterogeneous quantum nodes in
time-energy and OAM subspace.
Figure 1: Experimental setup of QET. All the MZIs in OAM state preparation and
transformation part are steadily phase-locked by PZT systems. EDFA, erbium
doped fiber amplifier; WDM, wavelength-division multiplexing; PC, polarization
controller; DP, Dove prism; SPP, spiral phase plate; HW, half-wave plate; BS,
50:50 beam splitter; PBS, polarization beam splitter; MZI, 1-GHz unbalanced
Mach-Zehnder interferometers; SPD, single-photon detector with quantum
efficiency of $\eta=(10.0\pm 0.2)\%$.
Experiment-Our experimental setup is depicted in Fig. 1. A narrow-band
continuous-wave (cw) laser at 1555.75 nm amplified by an erbium-doped fiber
amplifier (EDFA) is frequency doubled in a periodically poled lithium niobate
(PPLN) waveguide by second-harmonic generation (SHG). The remanent pump laser
is suppressed by a wavelength division multiplexing (WDM) filter with an
extinction ratio of 180 dB. The second harmonic is used to generate photon
pairs through the type-0 spontaneous parametric down-conversion (SPDC) process
in another 5-cm long PPLN waveguide. Owing to the narrow linewidth of the cw
pump, a large emission time uncertainty can be achieved during the SPDC
process. Thus one can express the superposition state of the photon pairs
emitted at different temporal mode as:
$|{\psi}\rangle=\kappa\int_{0}^{\infty}\xi{(t)}|{t}\rangle_{s}|{t}\rangle_{i}dt$,
where $\kappa$ is the coupling constant corresponding to the second-order
susceptibility $\chi^{(2)}$ of the PPLN waveguide, and $\xi{(t)}$ is the
emitted time distribution function. Due to the photon energy conservation
during the SPDC, the spectrum of the photon pairs is symmetric with respect to
central wavelength of 1555.75 nm and manifests strong frequency correlation,
as shown by the SPDC spectrum and joint spectral amplitude (JSA) in Fig. 2.
The SPDC source spans a full width at half maximum (FWHM) of approximately 80
nm, covering the whole telecom C-band and telecom L-band. We use cascaded
100-GHz dense wavelength division multiplexing (DWDM) filters to divert the
signal and idler photons into 8 standard international telecommunication union
(ITU) channels, i.e., ITU CH22 to CH26 for signal and CH28 to CH32 for idler.
Figure 2: (a) Spectrum of the SPDC source based on PPLN waveguide; the red
curve is the theoretical result and the blue square points represent
experimental data. The colorful bars indicate the time-energy entangled photon
pairs multiplexed by 100-GHz DWDM channels. (b) Joint spectral amplitude of
signal and idler photons.
Results and discussions-Firstly, we characterize the performance of the SPDC
source by using a Franson-type interferometry Franson (1989), which contains
two unequal path-length MZIs with a path delay $\Delta{t}$ of 1 ns controlled
by a piezo-actuated displacement platform. Considering the temporal coherence
time of the signal and idler photons to be $\sigma_{cor}=10$ ps, such a path
delay satisfies the requirement $\tau\gg\Delta{t}\gg\sigma_{cor}$, where $\tau$
is the coherence time of the pump laser. Then, the two-photon state combining
the OAM freedom is projected to the following form of state:
$|{\psi}\rangle_{0}=[\frac{1}{\sqrt{2}}(|{t_{1}}\rangle_{s}|{t_{1}}\rangle_{i}+e^{i\varphi}|{t_{2}}\rangle_{s}|{t_{2}}\rangle_{i})]\otimes|{0}\rangle|{0}\rangle,$
(1)
where $\varphi$ is the relative phase in the long interferometer arm, and
$|{0}\rangle$ represents that the OAM mode of photon pair is Gaussian mode. In
our experiment, another cw laser at the central wavelength of 1570 nm is
injected into the other input port of the beam-splitter as feedback to
stabilize the phase of the interferometers. This reference laser is offset
from the optical path of the SPDC photons to avoid extra noise. The two-photon
interference fringes for the selected photon pairs with respect to the
relative phase are shown in Fig. 3. We achieved an average visibility of
$V=(95.1\pm 0.5)\%$, revealing a high quality of time-energy entanglement.
Figure 3: Two-photon interference fringes for time-energy entanglement before
QET.
In order to deterministically transfer time-energy entanglement to OAM
entanglement, SPPs used as mode-shifters are inserted into the two arms of
MZIs to form interferometric quantum gates. When a Gaussian photon transmits
through the SPP, its phase acquires a helical phase factor exp${(i\ell\theta)}$, and
its profile becomes an OAM mode of order $\ell$ ($\ell=\pm 1$), where $\ell$ is an
integer representing the topological charge. After the OAM mode conversion,
the photons passing through the long path carry an OAM of $\ell_{1}\hbar$
when leaving the interferometer, while the OAM of the other becomes
$\ell_{2}\hbar$. Then one can obtain a hyperentangled multi-DOF state:
$|{\psi}\rangle_{hy}=\frac{1}{\sqrt{2}}(|{t_{1},\ell_{1}}\rangle_{s}|{t_{1},\ell_{1}}\rangle_{i}+e^{i\varphi}|{t_{2},\ell_{2}}\rangle_{s}|{t_{2},\ell_{2}}\rangle_{i}).$
(2)
After the postselection of photon arrival time and precise adjustment of
relative phase $\varphi$, one can obtain a maximally entangled state
$|{\psi}\rangle=\frac{1}{\sqrt{2}}(|{\ell_{1}}\rangle_{s}|{\ell_{1}}\rangle_{i}+|{\ell_{2}}\rangle_{s}|{\ell_{2}}\rangle_{i})$.
To show the flexibility and adaptability of this approach, we experimentally
construct four maximally entangled states
$|{\phi^{\pm}}\rangle=\frac{1}{\sqrt{2}}(|{1}\rangle_{s}|{1}\rangle_{i}\pm|{-1}\rangle_{s}|{-1}\rangle_{i})$
and
$|{\psi^{\pm}}\rangle=\frac{1}{\sqrt{2}}(|{1}\rangle_{s}|{-1}\rangle_{i}\pm|{-1}\rangle_{s}|{1}\rangle_{i})$.
With an OAM mode of $\ell=1$ or $-1$ and a diffraction efficiency $\zeta$ of
higher than 98%, four SPPs are inserted within two MZIs to convert the Gaussian
photons to OAM-carrying photons. During the experiment, the relative phase of
each MZI is carefully controlled to be $\varphi=0$ or $\pi$. To fully
characterize the established OAM states, we perform a quantum state tomography
for the four pairs of frequency-correlated channels and reconstruct their
density matrices by using the maximum likelihood estimation. Two spatial light
modulators (SLM, SLMs for signal photons, SLMi for idler photons) in
combination with single mode fibers and single photon detectors are used to
characterize the OAM entangled states, the same as in Ref. Zhou _et al._
(2016). The SLMs are used to flatten the spiral phase of incident photons and
convert them to a Gaussian mode, which is efficiently coupled to single mode
fiber. The real and imaginary parts of the reconstructed density matrices
$\rho$ are presented in Fig. 4(a)-(d). We calculate the fidelity relative to
the ideal Bell states and obtain the average fidelity
$F=\langle{\psi_{ideal}}|\rho|\psi_{ideal}\rangle=(94.1\pm 1.3)\%$ and purity
$P=Tr(\rho^{2})=0.90\pm 0.01$, as shown in Fig. 4(f).
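As a side note on the figures of merit: given a reconstructed density matrix, the fidelity $F=\langle\psi|\rho|\psi\rangle$ and purity $P=Tr(\rho^{2})$ quoted above are one-line computations. The sketch below is illustrative only; the $4\times 4$ toy density matrix (the ideal $|{\phi^{+}}\rangle$ projector mixed with $5\%$ white noise) is an assumption standing in for the measured data.

```python
import numpy as np

# Ideal Bell state |phi+> = (|1,1> + |-1,-1>)/sqrt(2), written in the basis
# {|1,1>, |1,-1>, |-1,1>, |-1,-1>} of the two-photon OAM subspace.
psi = np.array([1, 0, 0, 1], dtype=complex) / np.sqrt(2)

# Toy stand-in for a reconstructed density matrix: the ideal projector mixed
# with a small amount of white noise (the 5% noise level is an assumption).
noise = 0.05
rho = (1 - noise) * np.outer(psi, psi.conj()) + noise * np.eye(4) / 4

fidelity = np.real(psi.conj() @ rho @ psi)   # F = <psi|rho|psi>
purity = np.real(np.trace(rho @ rho))        # P = Tr(rho^2)
print(f"F = {fidelity:.3f}, P = {purity:.3f}")
```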
Figure 4: (a)-(e) The real and imaginary parts of the reconstructed density
matrices correspond to the OAM entangled states
$|{\phi^{\pm}}\rangle=\frac{1}{\sqrt{2}}(|{1}\rangle_{s}|{1}\rangle_{i}\pm|{-1}\rangle_{s}|{-1}\rangle_{i})$,
$|{\psi^{\pm}}\rangle=\frac{1}{\sqrt{2}}(|{1}\rangle_{s}|{-1}\rangle_{i}\pm|{-1}\rangle_{s}|{1}\rangle_{i})$
and
$|{\phi^{+}}\rangle_{0}=\frac{1}{\sqrt{2}}(|{0}\rangle_{s}|{0}\rangle_{i}+|{1}\rangle_{s}|{1}\rangle_{i})$,
respectively. (f) The measured fidelity and purity from the reconstructed
density matrices.
We further measure the two-photon interference fringes to characterize the OAM
entangled states. During the measurement, the signal photons are projected
onto the state $|{D}\rangle=\frac{1}{\sqrt{2}}(|{1}\rangle+|{-1}\rangle)$ by
SLMs while the idler photons onto
$|{\theta}\rangle=\frac{1}{\sqrt{2}}(e^{i\theta}|{1}\rangle+e^{-i\theta}|{-1}\rangle)$
by SLMi. Then, two-photon coincidences are recorded as a function of the
rotation angle $\theta$ of the phase mask applied to SLMi. By scanning
$\theta$ from 0 to 2$\pi$, the two-photon interference patterns are obtained,
as shown in Fig. 5. The average fringe visibility for four OAM entangled
states is calculated to be V=(95.4$\pm$1.8)%, which exceeds the 71% local
bound of Bell’s inequality and convincingly reveals the existence of
entanglement. The experimental results powerfully confirm the quality of the
two-photon entanglement in OAM DOF after QET by using such an interferometric
gate.
Figure 5: Two-photon interference fringes of four OAM entangled states for
observing the violation of Bell inequality. Each coincidence count is measured
for 10 seconds.
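To indicate how such a visibility is extracted in practice, the following sketch fits synthetic coincidence counts to the expected fringe $C(\theta)=C_{0}(1+V\cos(2\theta+\phi_{0}))$. The data, the noise level, and the use of SciPy's curve_fit are assumptions made for illustration, not the actual measurement pipeline.

```python
import numpy as np
from scipy.optimize import curve_fit

# Toy coincidence data (assumed, for illustration only): counts versus the
# rotation angle theta of the phase mask on SLMi, following the expected
# fringe C(theta) = C0 * (1 + V * cos(2*theta + phi0)).
rng = np.random.default_rng(2)
theta = np.linspace(0, 2 * np.pi, 25)
counts = 500 * (1 + 0.95 * np.cos(2 * theta)) + rng.normal(0, 10, theta.size)

def fringe(t, c0, v, phi0):
    return c0 * (1 + v * np.cos(2 * t + phi0))

popt, _ = curve_fit(fringe, theta, counts, p0=[500, 0.9, 0.0])
print(f"fitted visibility V = {popt[1]:.3f}")   # close to 0.95 for this toy data
```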
To prove the ability of the two-way QET, we next demonstrate that two-photon
OAM entangled state can be deterministically transferred to time-energy
entangled state by using an interferometric cyclic gate. Compared with QET from
time-energy to OAM DOF, it is more challenging to coherently convert
entanglement in OAM DOF back to time-energy subspace. The polarization of
signal and idler photons is rotated into
$\frac{1}{\sqrt{2}}(|H\rangle+|V\rangle)$ by two half-wave plates (HW) before
they enter the interferometric cyclic gates. Then, a double-path Sagnac
interferometer containing two DPs is utilized to serve as an OAM sorter to
split different OAM modes. Considering that the quantum state of the incident
photons is the superposition of two arbitrary OAM modes, $\ell_{1}$ and
$\ell_{2}$, the polarizations of photons with different modes will pose
perpendicularly to each other Zhang _et al._ (2014b) once the relative
orientation of two DPs is $\alpha$=$None$$/(\ell_{1}-\ell_{2})$. In our
experiment, we choose the quantum state
$|{\phi^{+}}\rangle_{0}=\frac{1}{\sqrt{2}}(|{0}\rangle_{s}|{0}\rangle_{i}+|{1}\rangle_{s}|{1}\rangle_{i})$
as an example to demonstrate the QET from OAM DOF to time-energy subspace. We
firstly prepare the state $|{\phi^{+}}\rangle_{0}$ through QET from time-
energy to OAM DOF. A SPP with OAM mode of $\ell$=1 is inserted in the short
path of each MZI of the first Franson-type interferometer. At this time, the
SPPs in the long path of MZIs shown in the OAM preparation part of Fig. 1 are
removed. After reconstructing the density matrix, we obtain the fidelity of
93.8% and state purity of 90.7%, as shown in Fig. 4(e) and (f). To further
characterize the entanglement of the prepared state, we measure the S
parameter of the Clauser-Horne-Shimony-Holt (CHSH) inequality as in Ref. 26
and obtain the result of S=2.4$\pm$0.03, which violates the inequality by 13
standard deviations.
Figure 6: Two-photon interference fringes for time-energy entanglement after
QET.
Then, we set the relative orientation of the two DPs in the Sagnac
interferometer to be $\alpha=90^{\circ}$. At the output of the Sagnac
interferometer, the photons of different modes are separated into different
paths by polarization beam splitter 1 (PBS1). Photons with an OAM mode of +1
take the long path, while the others pass through the short path. The two paths
are recombined by PBS2, forming a MZI with a delay time of 1 ns. In order to
erase any information about OAM profile, a SPP and a 4-f system are inserted
in the MZI to convert the OAM photons to the Gaussian mode. Finally, the two-
photon state becomes a time-energy entangled state after the erasing of
information about polarization and path, then it can be collected into fiber
for long-distance distribution. We use another Franson-type interferometer
consisting of two MZIs with 1-ns relative delay to reveal the generated time-
energy entangled state. The experimental results are shown in Fig. 6. We
obtain a fitted visibility of V=(92.4$\pm$1.6)% for the time-energy entangled
state, which implies that the quantum entanglement is coherently transferred
into time-energy DOF.
The most important feature of our quantum interface is that it maintains the
quantum characteristics of the output photons after QET. In our experiment,
the entangled state can maintain a high fidelity over 90% for both QET
processes. However, the degradation of the visibility of interference fringes
for time-energy entanglement is still non-negligible after QET. This is mainly
caused by imperfections of the optical elements, system loss due to non-unity
coupling efficiency of space to fiber, and low detection efficiency of SPDs,
which can be improved by optimizing system parameters and using a high-
performance SPD Marsili _et al._ (2013). Another key feature is the high
probability of success of entanglement transfer. With the help of
postselection of photon arrival time, a deterministic entanglement transfer
from time energy to OAM DOF can be theoretically achieved. In our experiment,
the probability of success of entanglement conversion is about 96%. As for the
other process, due to the high extinction ratio of OAM sorters and the single-
mode characteristics of fiber, the QET of OAM back to time-energy DOF is
deterministic, too. On the other hand, our proposed QET interface has the
following preponderant characteristics. Firstly, our method paves the way for
preparing a multi-channel OAM entangled source by using DWDM technology, which
is difficult to implement by directly pumping a nonlinear crystal. In
addition, an arbitrary two dimensional OAM entangled state can be prepared by
replacing the first beam splitter in each MZI of the Franson-type
interferometer with a PBS and a HW. Secondly, the time-energy and OAM DOF
belong to high-dimensional Hilbert space. The interferometric cyclic gates can
be cascaded with each other to achieve high-dimensional QET, which is
promising in high-dimensional quantum tasks.
Summary-In conclusion, we have demonstrated a two-way QET interface with high
fidelity between the time-energy and the OAM DOF. Based on this interface, we
first implement the QET from time-energy to OAM DOF with an average fidelity
of the OAM entangled states of 94.1%. Then, we show quantum
entanglement can be coherently transferred from OAM back to time-energy DOF
with a high visibility of Franson-type interference over 92.4%. This interface
can be used to prepare multi-channel OAM entangled sources and paves a new way
for establishing entanglement between remote heterogeneous quantum nodes.
Thus, our scheme has great potential applications in future quantum
communication networks, such as multi-DOF quantum entanglement swapping and
quantum direct communication on multinode integrated space-to-fiber
communication networks.
###### Acknowledgements.
This work is supported in part by the National Key Research and Development
Program of China (Grant No. 2017YFA0303700), National Natural Science
Foundation of China (Grant Nos. 11734011, 11804135, and 12074155), The
Foundation for Shanghai Municipal Science and Technology Major Project (Grant
No. 2019SHZDZX01-ZX06), and Project funded by China Postdoctoral Science
Foundation (Grant No. 2019M661476).
## References
* Erhard _et al._ (2018) M. Erhard, R. Fickler, M. Krenn, and A. Zeilinger, Light: Science & Applications 7, 17146 (2018).
* Mafu _et al._ (2013) M. Mafu, A. Dudley, S. Goyal, D. Giovannini, M. McLaren, M. J. Padgett, T. Konrad, F. Petruccione, N. Lütkenhaus, and A. Forbes, Physical Review A 88, 032305 (2013).
* Wang _et al._ (2015) X. L. Wang, X. D. Cai, Z. E. Su, M. C. Chen, D. Wu, L. Li, N. L. Liu, C. Y. Lu, and J. W. Pan, Nature 518, 516 (2015).
* Zhang _et al._ (2017) Y. Zhang, M. Agnew, T. Roger, F. S. Roux, T. Konrad, D. Faccio, J. Leach, and A. Forbes, Nature Communications 8, 632 (2017).
* Dada _et al._ (2011) A. C. Dada, J. Leach, G. S. Buller, M. J. Padgett, and E. Andersson, Nature Physics 7, 677 (2011).
* Chen _et al._ (2014) L. Chen, J. Lei, and J. Romero, Light: Science & Applications 3, e153 (2014).
* Qiu _et al._ (2019) X. Qiu, D. Zhang, W. Zhang, and L. Chen, Physical Review Letters 122, 123901 (2019).
* Valencia _et al._ (2020) N. H. Valencia, S. Goel, W. McCutcheon, H. Defienne, and M. Malik, Nature Physics , 1 (2020).
* Maclean _et al._ (2018) J. P. W. Maclean, J. M. Donohue, and K. J. Resch, Physical Review Letters 120, 053601 (2018).
* Martin _et al._ (2017) A. Martin, T. Guerreiro, A. Tiranov, S. Designolle, F. Fröwis, N. Brunner, M. Huber, and N. Gisin, Physical Review Letters 118, 110501 (2017).
* Marcikic _et al._ (2002) I. Marcikic, H. D. Riedmatten, W. Tittel, V. Scarani, H. Zbinden, and N. Gisin, Physical Review A 66 (2002).
* Zhang _et al._ (2008) Q. Zhang, H. Takesue, S. W. Nam, C. Langrock, X. P. Xie, B. Baek, M. M. Fejer, and Y. Yamamoto, Optics Express 16, 5776 (2008).
* Grassani _et al._ (2015) D. Grassani, S. Azzini, M. Liscidini, M. Galli, M. J. Strain, M. Sorel, J. E. Sipe, and D. Bajoni, Optica 2, 88 (2015).
* Treiber _et al._ (2009) A. Treiber, A. Poppe, M. Hentschel, D. Ferrini, T. Lorünser, E. Querasser, T. Matyus, H. Hübel, and A. Zeilinger, New Journal of Physics 11, 045013 (2009).
* Yu _et al._ (2015) L. Yu, C. M. Natarajan, T. Horikiri, C. Langrock, J. S. Pelc, M. G. Tanner, E. Abe, S. Maier, C. Schneider, S. Höfling, M. Kamp, R. H. Hadfield, M. M. Fejer, and Y. Yamamoto, Nature Communications 6, 8955 (2015).
* Wengerowsky _et al._ (2018) S. Wengerowsky, S. K. Joshi, F. Steinlechner, H. Hübel, and R. Ursin, Nature 564, 225 (2018).
* Joshi _et al._ (2020) S. K. Joshi, D. Aktas, S. Wengerowsky, M. Lončarić, S. P. Neumann, B. Liu, T. Scheidl, G. C. Lorenzo, Ž. Samec, L. Kling, A. Qiu, M. Razavi, M. Stipčević, J. G. Rarity, and R. Ursin, Science Advances 6, eaba0959 (2020).
* Lee _et al._ (2014) C. Lee, Z. S. Zhang, G. R. Steinbrecher, H. C. Zhou, J. Mower, T. Zhong, L. G. Wang, X. L. Hu, R. D. Horansky, and V. B. Verma, Physical Review A 90, 062331 (2014).
* Aktas _et al._ (2016) D. Aktas, B. Fedrici, F. Kaiser, T. Lunghi, L. Labonte, and S. Tanzilli, Laser Photonics Reviews 10, 451 (2016).
* Zhong _et al._ (2015) T. Zhong, H. Zhou, R. D. Horansky, C. Lee, V. B. Verma, A. E. Lita, A. Restelli, J. C. Bienfang, R. P. Mirin, and T. Gerrits, New Journal of Physics 17, 022002 (2015).
* Zhang _et al._ (2014a) Z. Zhang, J. Mower, D. Englund, F. N. C. Wong, and J. H. Shapiro, Physical Review Letters 112, 120506 (2014a).
* Liu _et al._ (2020) X. Liu, X. Yao, R. Xue, H. Wang, H. Li, Z. Wang, L. You, X. Feng, F. Liu, K. Cui, and W. Zhang, APL Photonics 5, 076104 (2020).
* Williams _et al._ (2017) B. P. Williams, R. J. Sadlier, and T. S. Humble, Physical Review Letters 118, 050501 (2017).
* Li _et al._ (2019) Y. Li, Y. Huang, T. Xiang, Y. Nie, M. Sang, L. Yuan, and X. Chen, Physical Review Letters 123, 250505 (2019).
* Vasconcelos _et al._ (2020) R. Vasconcelos, S. Reisenbauer, C. Salter, G. Wachter, D. Wirtitsch, J. Schmiedmayer, P. Walther, and M. Trupke, npj Quantum Information 6, 9 (2020).
* Nagali _et al._ (2009) E. Nagali, F. Sciarrino, F. De Martini, L. Marrucci, B. Piccirillo, E. Karimi, and E. Santamato, Physical Review Letters 103, 013601 (2009).
* Ramelow _et al._ (2009) S. Ramelow, L. Ratschbacher, A. Fedrizzi, N. K. Langford, and A. Zeilinger, Physical Review Letters 103, 253601 (2009).
* Franson (1989) J. D. Franson, Physical Review Letters 62, 2205 (1989).
* Zhou _et al._ (2016) Z. Y. Zhou, S. L. Liu, Y. Li, D. S. Ding, W. Zhang, S. Shi, M. X. Dong, B. S. Shi, and G. C. Guo, Physical Review Letters 117, 103601 (2016).
* Zhang _et al._ (2014b) W. Zhang, Q. Qi, J. Zhou, and L. Chen, Physical Review Letters 112, 153601 (2014b).
* Marsili _et al._ (2013) F. Marsili, V. B. Verma, J. A. Stern, S. Harrington, A. E. Lita, T. Gerrits, I. Vayshenker, B. Baek, M. D. Shaw, R. P. Mirin, and S. W. Nam, Nature Photonics 7, 210 (2013).
# A symbol-based analysis for multigrid methods for Block-Circulant and Block-Toeplitz Systems ††thanks: This work was supported by Gruppo Nazionale per il Calcolo Scientifico (GNCS-INdAM).
Matthias Bolten, Department of Mathematics and Informatics, University of Wuppertal, Wuppertal, Germany (<EMAIL_ADDRESS>). Marco Donatelli, Department of Science and High Technology, Insubria University, Como, Italy (<EMAIL_ADDRESS>). Paola Ferrari, Department of Science and High Technology, Insubria University, Como, Italy (<EMAIL_ADDRESS>). Isabella Furci, Department of Mathematics and Informatics, University of Wuppertal, Wuppertal, Germany (furci@uni-wuppertal.de).
###### Abstract
In the literature, there exist several studies on symbol-based multigrid
methods for the solution of linear systems having structured coefficient
matrices. In particular, the convergence analysis for such methods has been
obtained in an elegant form in the case of Toeplitz matrices generated by a
scalar-valued function. In the block-Toeplitz setting, that is, in the case
where the matrix entries are small generic matrices instead of scalars, some
algorithms have already been proposed regarding specific applications and a
first rigorous convergence analysis has been performed in [M. Donatelli, P.
Ferrari, I. Furci, D. Sesana, and S. Serra-Capizzano. Multigrid methods for
block-circulant and block-Toeplitz large linear systems: Algorithmic proposals
and two-grid optimality analysis. Numer. Linear Algebra Appl.]. However, with
the existent symbol-based theoretical tools, it is still not possible to prove
the convergence of many multigrid methods known in the literature. This paper
aims to generalize the previous results by giving more general sufficient
conditions on the symbol of the grid transfer operators. In particular, we
treat matrix-valued trigonometric polynomials which can be non-diagonalizable
and singular at all points and we express the new conditions in terms of the
eigenvectors associated with the ill-conditioned subspace. Moreover, we extend
the analysis to the V-cycle method proving a linear convergence rate under
stronger conditions, which resemble those given in the scalar case. In order
to validate our theoretical findings, we present a classical block structured
problem stemming from a FEM approximation of a second order differential
problem. We focus on two multigrid strategies that use the geometric and the
standard bisection grid transfer operators and we prove that both fall into
the category of projectors satisfying the proposed conditions. In addition,
using a tensor product argument, we provide a strategy to construct efficient
V-cycle procedures in the block multilevel setting.
###### keywords:
Block-Toeplitz matrices, Multigrid methods, Finite element methods
15B05, 65N30, 65N55
## 1 Introduction
Linear systems with multilevel block-Toeplitz coefficient matrices arise in
the discretization of many differential and integral equations. Among them, we
mention the case of $\mathbb{Q}_{{r}}$ Lagrangian finite elements (FEM)
approximation of a second order differential problem [5] and the signal
restoration problems where some of the sampling data are not available [9].
When dealing with large multilevel and multilevel block-Toeplitz systems, the
performance (in terms of computational cost and iteration count) of
preconditioners based on circulant approximations deteriorates [31]. This is
one of the many reasons why the class of multigrid methods is of great
interest for the solution of such systems [8, 16, 33].
Convergence results for multigrid methods are usually based on the local
Fourier analysis (LFA) [6], although several extensions and generalizations
have been recently proposed in the literature [15, 21, 26]. In [11] it was
proven that the convergence analysis of multigrid methods for circulant and
Toeplitz matrices [30, 32] is a linear algebra generalization of the LFA in
the case of the Galerkin approach. Indeed, it does not necessarily require a
differential operator and it can also be applied to integral problems with
applications such as signal and image processing [10]. For differential
problems with constant coefficients and uniform grids, the matrix algebra
approach leads to a condition on the symbols associated to the circulant
matrices analogous to the classical condition on the orders of the grid
transfer operators. In particular, such condition relates the order of the
zeros of the symbols associated to the coefficient matrix and the grid
transfer operators.
In this paper, we prove a generalization of such condition to block-symbols,
that is when the generating function $\mathbf{f}$ associated with the
coefficient matrix is a matrix-valued trigonometric polynomial. The block-
symbol has been previously investigated in the literature [12, 14, 23], but
many theoretical aspects have not yet been properly addressed. In particular,
a V-cycle convergence analysis is still missing and some classical grid
transfer operators do not satisfy the strong requirements of the two-grid
analysis in [12]. The main aim of the paper is to provide a complete
convergence analysis of multigrid methods for structured block-Toeplitz and
circulant systems under weak assumptions. In order to show the applicability
of our theory, we will consider classical multigrid strategies for
$\mathbb{Q}_{{r}}$ Lagrangian FEM in the case of uniform Cartesian grids.
First, we consider the Two Grid Method (TGM) and, according to the classical
Ruge-Stüben [29] convergence analysis, we focus on validating both a smoothing
property and an approximation property. The first is easily generalizable in
the block setting from the scalar case. Indeed, in [12] it has been proven
that it mainly affects the choice of the specific relaxation parameter for the
selected smoother. The validation of the approximation property for block
structured matrices, instead, is non-trivial and it requires
additional hypotheses. In particular, the idea is to focus on the crucial
choice of conditions on the trigonometric polynomial $\mathbf{p}$ used to
construct the projector. In [12] the proof of the approximation property is
based on the validation of an additional commutativity requirement [12,
Section 4.1]. However, in some practical cases these conditions cannot be
satisfied. Hence, the main theorem of Section 4 provides less restrictive
conditions on $\mathbf{p}$. Indeed, differently from [12], the conditions are
expressed in terms of the eigenvectors associated with the ill-conditioned
subspace and permit to enlarge the class of suitable trigonometric polynomials
used to construct the projectors. Moreover, we provide some useful lemmas
which can further simplify the validation of the requirements, under specific
hypotheses that $\mathbf{p}$ often satisfies in the applications.
Another important result of the paper concerns the extension of the
theoretical findings to V-cycle methods. Indeed, following the proof of the
main theorem on the TGM convergence and the results in [25], it is possible to
obtain elegant conditions for the convergence and optimality of the V-cycle in
the block case. For the latter, a crucial point is the investigation of the
properties of the symbols at coarse levels, with a particular focus on the
orders of the zeros.
In order to validate our theoretical findings and show their applicability, we
present a classical block structured problem stemming from the
$\mathbb{Q}_{{r}}$
Lagrangian FEM approximation of a second order differential problem. We focus
on two multigrid strategies that use the geometric projection operator and the
standard bisection grid transfer operator. We prove that both fall into the
category of projectors satisfying the proposed conditions, which lead to
convergence and optimality of multigrid methods [20, 14]. Finally, in Section
6 we provide the extension of the theory for block multilevel Toeplitz
matrices exploiting the properties of the Kronecker product.
The paper is organized as follows. In Section 2 we recall the basics of the
multigrid methods, with particular attention to the TGM convergence analysis
and to the general conditions that lead to the V-cycle optimality. In Section
3 we restrict our attention to the block setting, recalling some properties of
block-circulant and block-Toeplitz matrices. In particular, we introduce the
main ingredients for an effective multigrid procedure that will be
investigated in Section 4. Here, we focus on the conditions which ensure the
convergence and optimality of the TGM for a linear system with coefficient
matrix generated by a matrix-valued trigonometric polynomial and we provide a
possible simplification for the validation of the conditions in practical
cases. In Subsection 4.2 we derive the conditions for convergence and
optimality also for the V-cycle. In Section 5 we present the two classical
multigrid strategies using the geometric projection operator and the standard
bisection grid transfer operator. Finally, in Section 6 we consider the
extension in the block multilevel case and we show how the results of previous
sections can be exploited and generalized.
## 2 Multigrid methods
Multigrid methods are efficient and robust iterative methods for solving
linear systems of the form
$A_{n}x_{n}=b_{n},$
where often, and as assumed in this paper, $A_{n}\in\mathbb{C}^{n\times n}$ is
positive definite [34]. The main idea is to combine a classical stationary
iterative method, called smoother, with a coarse grid correction having a
spectral behaviour complementary with respect to the smoother [7]. In order to
fix the notation for positive definite matrices, if $X\in\mathbb{C}^{n\times
n}$ is a positive definite matrix, $\|\cdot\|_{X}=\|X^{1/2}\cdot\|_{2}$
denotes the Euclidean norm weighted by $X$ on $\mathbb{C}^{n}$. If $X$ and $Y$
are Hermitian matrices, then the notation $X\leq Y$ means that $Y-X$ is a
nonnegative definite matrix. Given a matrix $X$ we denote by $X^{T}$ and
$X^{H}$ the transpose and the conjugate transpose matrix of $X$, respectively.
### 2.1 Two-grid method
Let $P_{n,k}\in\mathbb{C}^{n\times k}$, $k<n$, be a given full-rank matrix and
let us consider two stationary iterative methods: the method
$\mathcal{V}_{n,\rm{pre}}$, with iteration matrix ${V}_{n,\rm{pre}}$, and
$\mathcal{V}_{n,\rm{post}}$, with iteration matrix ${V}_{n,\rm{post}}$. An
iteration of a Two-Grid Method (TGM) is given in Algorithm 1.
Algorithm 1
TGM$(A_{n},\mathcal{V}_{n,\rm{pre}}^{\nu_{\rm{pre}}},\mathcal{V}_{n,\rm{post}}^{\nu_{\rm{post}}},P_{n,k},b_{n},x_{n}^{(j)})$
0\.
$\tilde{x}_{n}=\mathcal{V}_{n,\rm{pre}}^{\nu_{\rm{pre}}}(A_{n},{b}_{n},x_{n}^{(j)})$
1\. $r_{n}=b_{n}-A_{n}\tilde{x}_{n}$
2\. $r_{k}=P_{n,k}^{H}r_{n}$
3\. $A_{k}=P_{n,k}^{H}A_{n}P_{n,k}$
4\. Solve $A_{k}y_{k}=r_{k}$
5\. $\hat{x}_{n}=\tilde{x}_{n}+P_{n,k}y_{k}$
6\.
$x_{n}^{(j+1)}=\mathcal{V}_{n,\rm{post}}^{\nu_{\rm{post}}}(A_{n},{b}_{n},\hat{x}_{n})$
Steps $1.\rightarrow 5.$ define the “coarse grid correction” that depends on
the projecting operator $P_{n,k}$, while step $0.$ and step $6.$ consist,
respectively, in applying $\nu_{\rm{pre}}$ times a pre-smoother and
$\nu_{\rm{post}}$ times a post-smoother of the given iterative methods. Step
3. defines the coarser matrix $A_{k}$ according to the Galerkin approach which
ensures that the coarse grid correction is an algebraic projector and hence is
very useful for an algebraic study of the convergence of the method. Indeed,
the TGM is a stationary method defined by the following iteration matrix
$\displaystyle{\rm
TGM}(A_{n},V_{n,\rm{pre}}^{\nu_{\rm{pre}}},V_{n,\rm{post}}^{\nu_{\rm{post}}},P_{n,k})=V_{n,\rm{post}}^{\nu_{\rm{post}}}\left[I_{n}-P_{n,k}\left(P_{n,k}^{H}A_{n}P_{n,k}\right)^{-1}P_{n,k}^{H}A_{n}\right]V_{n,\rm{pre}}^{\nu_{\rm{pre}}}.$
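For illustration, a minimal Python/NumPy sketch of one TGM iteration following Algorithm 1 is reported below; the damped Richardson smoother and the dense Galerkin products are illustrative placeholder choices, not part of the analysis.

```python
import numpy as np

def richardson(A, b, x, omega, nu):
    # nu steps of damped Richardson: x <- x + omega (b - A x)
    for _ in range(nu):
        x = x + omega * (b - A @ x)
    return x

def tgm_iteration(A, P, b, x, omega=0.5, nu_pre=1, nu_post=1):
    """One Two-Grid iteration (Algorithm 1) with Galerkin coarsening."""
    x = richardson(A, b, x, omega, nu_pre)      # step 0: pre-smoothing
    r = b - A @ x                               # step 1: residual
    r_c = P.conj().T @ r                        # step 2: restriction
    A_c = P.conj().T @ A @ P                    # step 3: Galerkin coarse matrix
    y_c = np.linalg.solve(A_c, r_c)             # step 4: exact coarse solve
    x = x + P @ y_c                             # step 5: coarse grid correction
    return richardson(A, b, x, omega, nu_post)  # step 6: post-smoothing
```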
###### Theorem 2.1.
([29]) Let $A_{n}$ be a positive definite matrix of size $n$ and let
$V_{n,{\rm post}},$ $V_{n,{\rm pre}}$ be defined as in the TGM algorithm.
Assume
* (a)
$\exists\alpha_{\rm{pre}}>0\,:\;\|V_{n,\rm{pre}}x_{n}\|_{A_{n}}^{2}\leq\|x_{n}\|_{A_{n}}^{2}-\alpha_{\rm{pre}}\|V_{n,\rm{pre}}x_{n}\|_{A_{n}^{2}}^{2},\qquad\forall
x_{n}\in\mathbb{C}^{n},$
* (b)
$\exists\alpha_{\rm{post}}>0\,:\;\|V_{n,\rm{post}}x_{n}\|_{A_{n}}^{2}\leq\|x_{n}\|_{A_{n}}^{2}-\alpha_{\rm{post}}\|x_{n}\|_{A_{n}^{2}}^{2},\qquad\forall
x_{n}\in\mathbb{C}^{n},$
* (c)
$\exists\gamma>0\,:\;\min_{y\in\mathbb{C}^{k}}\|x_{n}-P_{n,k}y\|_{2}^{2}\leq\gamma\|x_{n}\|_{A_{n}}^{2},\qquad\forall
x_{n}\in\mathbb{C}^{n}.$
Then $\gamma\geq\alpha_{\rm{post}}$ and
$\displaystyle\|{\rm
TGM}(A_{n},V_{n,\rm{pre}},V_{n,\rm{post}},P_{n,k})\|_{A_{n}}\leq\sqrt{\frac{1-\alpha_{\rm{post}}/\gamma}{1+\alpha_{\rm{pre}}/\gamma}}<1.$
Conditions $(a)-(b)$ and $(c)$ are usually called “smoothing property” and
“approximation property”, respectively. Since $\alpha_{\rm{post}}$ and
$\gamma$ are independent of $n$, if the assumptions of Theorem 2.1 are
satisfied, then the resulting TGM exhibits linear convergence. In other
words, the number of iterations in order to reach a given accuracy $\epsilon$
can be bounded from above by a constant independent of $n$ (possibly depending
on the parameter $\epsilon$). Moreover, if the projection and smoothing steps
have a computational cost lower than or equal to that of the matrix-vector
product with the matrix $A_{n}$, then the TGM is optimal.
### 2.2 V-cycle method
For large $n$ a V-cycle method should be implemented. The standard V-cycle
method is obtained replacing the direct solution at step 4. with a recursive
call of the TGM applied to the coarser linear system
$A_{k_{\ell}}y_{k_{\ell}}=r_{k_{\ell}}$, where $\ell$ represents the level.
The recursion is usually stopped at level ${\ell_{\min}}$ when
$k_{\ell_{\min}}$ becomes small enough for solving cheaply step 4. with a
direct solver. In the following, as it is done in [25], we assume that we are
using the same iterative method as pre/post smoother, with same numbers of
iterative steps. We denote the iteration matrix by $V_{n_{\ell}}^{\nu}$. The
global iteration matrix ${\rm MGM}_{0}$ of the V-cycle method is recursively
defined as
$\displaystyle{\rm
MGM}_{\ell_{\min}}(A_{n_{\ell_{\min}}},V_{n_{\ell_{\min}}}^{\nu},V_{n_{\ell_{\min}}}^{\nu},P_{n_{\ell_{\min}},k_{\ell_{\min}}})=O_{n_{\ell_{\min}},n_{\ell_{\min}}},$
$\displaystyle{\rm
MGM}_{\ell}(A_{n_{\ell}},V_{n_{\ell}}^{\nu},V_{n_{\ell}}^{\nu},P_{n_{\ell},k_{\ell}})=$
$\displaystyle
V_{n_{\ell}}^{\nu}\left[I_{n_{\ell}}-P_{n_{\ell},k_{\ell}}\left(I_{n_{\ell+1}}-{\rm
MGM}_{\ell+1}\right)\left(P_{n_{\ell},k_{\ell}}^{H}A_{n_{\ell}}P_{n_{\ell},k_{\ell}}\right)^{-1}P_{n_{\ell},k_{\ell}}^{H}A_{n_{\ell}}\right]V_{n_{\ell}}^{\nu},$
for $\ell={\ell_{\min}-1},\dots,0$, where
$O_{n_{\ell_{\min}},n_{\ell_{\min}}}$ denotes the $n_{\ell_{\min}}\times
n_{\ell_{\min}}$ matrix of all zero components.
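A recursive sketch of the resulting V-cycle, under the assumptions above (same Richardson pre/post-smoother with the same number $\nu$ of steps, Galerkin hierarchy precomputed), may look as follows; the names and the dense direct solve at the coarsest level are illustrative only.

```python
import numpy as np

def vcycle(A_levels, P_levels, b, x, omega, nu=1):
    """One V-cycle sweep. A_levels[l+1] = P_levels[l]^H A_levels[l] P_levels[l]."""
    A = A_levels[0]
    if len(A_levels) == 1:                      # level ell_min: direct solve
        return np.linalg.solve(A, b)
    for _ in range(nu):                         # pre-smoothing
        x = x + omega * (b - A @ x)
    P = P_levels[0]
    r_c = P.conj().T @ (b - A @ x)              # restrict the residual
    y_c = vcycle(A_levels[1:], P_levels[1:], r_c,
                 np.zeros_like(r_c), omega, nu) # recursive coarse call
    x = x + P @ y_c                             # coarse grid correction
    for _ in range(nu):                         # post-smoothing
        x = x + omega * (b - A @ x)
    return x
```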
In order to prove the convergence and optimality of the V-cycle method, the
key ingredient is the analysis of the spectral radius of ${\rm MGM}_{0}$,
which is the iteration matrix at the finest level. In [25, Corollary 3.1] the
authors show the following relation
$\begin{split}\rho\left({\rm MGM}_{0}(A_{n_{0}},V_{n_{0}}^{\nu},V_{n_{0}}^{\nu},P_{n_{0},k_{0}})\right)\leq 1-\min_{\ell}\frac{1-\rho\left({\rm TGM}(A_{n_{\ell}},V_{n_{\ell}}^{\nu},V_{n_{\ell}}^{\nu},P_{n_{\ell},k_{\ell}})\right)}{\|\pi_{A_{n_{\ell}}}\|^{2}_{A_{n_{\ell}}(I_{n_{\ell}}-V_{n_{\ell}}^{2\nu})^{-1}}},\end{split}$
where
$\pi_{A_{n_{\ell}}}=P_{n_{\ell},k_{\ell}}(P_{n_{\ell},k_{\ell}}^{H}A_{n_{\ell}}P_{n_{\ell},k_{\ell}})^{-1}P_{n_{\ell},k_{\ell}}^{H}A_{n_{\ell}}.$
Hence, assuming that we choose smoothers and prolongation operators such that
the two-grid optimality is guaranteed at every level, i.e. $\rho\left({\rm
TGM}(A_{n_{\ell}},V_{n_{\ell}}^{\nu},V_{n_{\ell}}^{\nu},P_{n_{\ell},k_{\ell}})\right)\leq c<1$ for all $\ell$,
it is sufficient to prove that the following quantity is bounded
(1)
$\|\pi_{A_{n_{\ell}}}\|^{2}_{A_{n_{\ell}}(I_{n_{\ell}}-V_{n_{\ell}}^{2\nu})^{-1}}.$
In practice, the boundedness of $\|\pi_{A_{n_{\ell}}}\|^{2}_{2}$ is usually
enough, reducing the convergence analysis to the study of the spectral
behaviour of the coarse grid correction operator.
###### Lemma 2.2.
Assume that there exists a positive $C$ independent of $n$ such that
(2) $\Lambda(A_{n_{\ell}})\subseteq(0,C],$
where $\Lambda(A_{n_{\ell}})$ denotes the spectrum of the matrix
$A_{n_{\ell}}$. Suppose that one iteration of the Richardson method with
damping parameter $\omega\in(0,2/C)$ is applied both as pre-smoother and as
post-smoother. Then, the boundedness of $\|\pi_{A_{n_{\ell}}}\|^{2}_{2}$ implies that
$\|\pi_{A_{n_{\ell}}}\|^{2}_{A_{n_{\ell}}(I_{n_{\ell}}-V_{n_{\ell}}^{2\nu})^{-1}}$
is bounded as well.
###### Proof 2.3.
Applying one step of pre-smoother and post-smoother, it holds
(3)
$\displaystyle\|\pi_{A_{n_{\ell}}}\|^{2}_{A_{n_{\ell}}(I_{n_{\ell}}-V_{n_{\ell}}^{2})^{-1}}$
$\displaystyle=$
$\displaystyle\left\|A_{n_{\ell}}^{1/2}(I_{n_{\ell}}-V_{n_{\ell}}^{2})^{-1/2}\pi_{A_{n_{\ell}}}\left(A_{n_{\ell}}^{1/2}(I_{n_{\ell}}-V_{n_{\ell}}^{2})^{-1/2}\right)^{-1}\right\|_{2}^{2}$
$\displaystyle\leq$
$\displaystyle\left\|A_{n_{\ell}}^{1/2}(I_{n_{\ell}}-V_{n_{\ell}}^{2})^{-1/2}\right\|_{2}^{2}\left\|\pi_{A_{n_{\ell}}}\right\|_{2}^{2}\left\|\left(A_{n_{\ell}}^{1/2}(I_{n_{\ell}}-V_{n_{\ell}}^{2})^{-1/2}\right)^{-1}\right\|_{2}^{2}.$
Since the smoother is the Richardson method with damping parameter $\omega$,
we can write
$\begin{split}&A_{n_{\ell}}^{1/2}(I_{n_{\ell}}-V_{n_{\ell}}^{2})^{-1/2}=A_{n_{\ell}}^{1/2}(I_{n_{\ell}}-\left(I_{n_{\ell}}-\omega
A_{n_{\ell}}\right)^{2})^{-1/2}=(2\omega
I_{n_{\ell}}-\omega^{2}A_{n_{\ell}})^{-1/2},\end{split}$
whose 2-norm is bounded if $\omega$ is such that
$2\omega-\omega^{2}\lambda_{j}(A_{n_{\ell}})>0,\,j=1,\dots,n_{\ell}.$
Similarly,
$\begin{split}&\left(A_{n_{\ell}}^{1/2}(I_{n_{\ell}}-V_{n_{\ell}}^{2})^{-1/2}\right)^{-1}=(2\omega
I_{n_{\ell}}-\omega^{2}A_{n_{\ell}})^{1/2},\end{split}$
whose 2-norm is bounded for all $\omega\in(0,2/C)$ as $n$ increases thanks to
equation (2). Finally, thanks to inequality (3), the boundedness of
$\|\pi_{A_{n_{\ell}}}\|^{2}_{2}$ implies that
$\|\pi_{A_{n_{\ell}}}\|^{2}_{A_{n_{\ell}}(I_{n_{\ell}}-V_{n_{\ell}}^{2\nu})^{-1}}$
is bounded as well.
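A quick numerical sanity check of the argument (purely illustrative, with a random positive definite test matrix) confirms that the eigenvalues $2\omega-\omega^{2}\lambda_{j}(A_{n_{\ell}})$ stay positive for $\omega\in(0,2/C)$:

```python
import numpy as np

rng = np.random.default_rng(0)
B = rng.standard_normal((50, 50))
A = B @ B.T / 50 + 0.01 * np.eye(50)      # positive definite test matrix
C = np.linalg.eigvalsh(A).max()           # Lambda(A) ⊆ (0, C]
omega = 1.0 / C                           # any value in (0, 2/C) works
w = 2 * omega - omega**2 * np.linalg.eigvalsh(A)
assert (w > 0).all()                      # hence (2ωI - ω²A)^{±1/2} are bounded
print(w.min() ** -0.5, w.max() ** 0.5)    # their 2-norms
```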
## 3 Multigrid methods for block-circulant and block-Toeplitz matrices
In the present paper, we are interested in proposing an effective multigrid
method in the case where $A_{n}$ is a block-circulant or block-Toeplitz
matrix. Therefore, we recall some properties of these structured matrices.
### 3.1 Block-circulant and block-Toeplitz matrices
Let $\mathcal{M}_{d}$ be the linear space of the complex $d\times d$ matrices.
Given a function $\mathbf{f}:Q\to\mathcal{M}_{d}$, for $i=1,\dots,d$, we
denote by $\lambda_{i}(\mathbf{f})$ the eigenvalue functions of $\mathbf{f}$
and by $\lambda_{i}(\mathbf{f}(\theta))$ their evaluation at a point
$\theta\in Q$. The following lemma is derived from the results in [3, Section
VI.1] and provides the existence and continuity of the eigenvalue functions of
$\mathbf{f}$.
###### Lemma 3.1.
Let $\theta\rightarrow\mathbf{f}(\theta)$ be a continuous map from an interval
$Q$ into the space of $d\times d$ matrices such that the eigenvalues of
$\mathbf{f}(\theta)$ are real for all $\theta\in Q$. Then there exist
continuous functions
$\lambda_{1}(\mathbf{f}(\theta)),\lambda_{2}(\mathbf{f}(\theta)),\dots,\lambda_{d}(\mathbf{f}(\theta))$
that, for each $\theta\in Q$, are the eigenvalues of $\mathbf{f}(\theta)$.
Let $\mathbf{f}:Q\to\mathcal{M}_{d}$, with $Q=(-\pi,\pi)$. We say that
$\mathbf{f}\in L^{p}([-\pi,\pi])$ (resp. is measurable) if all its components
$f_{ij}:Q\to\mathbb{C},\ i,j=1,\ldots,d,$ belong to $L^{p}([-\pi,\pi])$ (resp.
are measurable) for $1\leq p\leq\infty$.
###### Definition 3.2.
Let the Fourier coefficients of a function $\mathbf{f}\in L^{p}([-\pi,\pi])$
be
$\displaystyle\hat{\mathbf{f}_{j}}:=\frac{1}{2\pi}\int_{Q}\mathbf{f}(\theta){\rm
e}^{-\iota
j\theta}d\theta\in\mathcal{M}_{d},\qquad\iota^{2}=-1,\,j\in\mathbb{Z}.$
Then, the block-Toeplitz matrix associated with $\mathbf{f}$ is the $n\times n$
block matrix with blocks of size $d$, hence of order $d\cdot n$, given by
$\displaystyle
T_{n}(\mathbf{f})=\sum_{|j|<n}J_{n}^{(j)}\otimes\hat{\mathbf{f}_{j}},$
where $\otimes$ denotes the (Kronecker) tensor product of matrices. The term
$J_{n}^{(j)}$ is the matrix of order $n$ whose $(i,k)$ entry equals $1$ if
$i-k=j$ and zero otherwise.
The set $\\{T_{n}(\mathbf{f})\\}_{n\in\mathbb{N}}$ is called the family of
block-Toeplitz matrices generated by $\mathbf{f}$, which in turn is referred to
as the generating function or the symbol of
$\\{T_{n}(\mathbf{f})\\}_{n\in\mathbb{N}}$. In the scalar case, when $d=1$, if
${f}$ is a trigonometric polynomial of degree lower than $n$, then we can
define the circulant matrix generated by $f$ by
$\displaystyle\mathcal{A}_{n}(f)=F_{n}\operatorname*{diag}_{i\in\mathcal{I}_{n}}\left({f}(\theta_{i}^{(n)})\right)F_{n}^{H},$
where $\mathcal{I}_{n}=\\{0,\ldots,n-1\\}$ and
$F_{n}=\frac{1}{\sqrt{n}}\left[{\rm e}^{-\iota j\theta_{i}^{(n)}}\right]_{i,j=0}^{n-1}$, with $\theta_{i}^{(n)}=\frac{2\pi i}{n}$. Circulant matrices form an algebra of normal matrices. In the block case, $d>1$, if $\mathbf{f}:Q\to\mathcal{M}_{d}$ is a $d\times d$ matrix-valued trigonometric polynomial, then the block-circulant matrix of order $dn$ generated by $\mathbf{f}$ is defined as
$\mathcal{A}_{n}(\mathbf{f})=(F_{n}\otimes I_{d})\operatorname*{diag}_{i\in\mathcal{I}_{n}}\left(\mathbf{f}(\theta_{i}^{(n)})\right)(F_{n}^{H}\otimes I_{d}),$
where $\operatorname*{diag}_{i\in\mathcal{I}_{n}}\left(\mathbf{f}(\theta_{i}^{(n)})\right)$ is the block-diagonal matrix whose diagonal blocks are
$\mathbf{f}(\theta_{i}^{(n)})$.
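The following Python sketch assembles $T_{n}(\mathbf{f})$ and $\mathcal{A}_{n}(\mathbf{f})$ directly from the definitions; it is a dense $O((dn)^{2})$ illustration, and the dictionary of Fourier coefficients and helper names are our own, not a standard API.

```python
import numpy as np

def symbol(f_hat, t):
    # evaluate f(t) = sum_j f_hat[j] e^{i j t} from the Fourier coefficients f_hat
    return sum(c * np.exp(1j * j * t) for j, c in f_hat.items())

def block_toeplitz(f_hat, n):
    # T_n(f) = sum_{|j| < n} J_n^{(j)} ⊗ f_hat[j]; J_n^{(j)} has 1 where i - k = j
    d = next(iter(f_hat.values())).shape[0]
    T = np.zeros((n * d, n * d), dtype=complex)
    for j, c in f_hat.items():
        if abs(j) < n:
            T += np.kron(np.eye(n, k=-j), c)
    return T

def block_circulant(f_hat, n):
    # A_n(f) = (F_n ⊗ I_d) diag_i f(theta_i) (F_n^H ⊗ I_d)
    d = next(iter(f_hat.values())).shape[0]
    theta = 2 * np.pi * np.arange(n) / n
    F = np.exp(-1j * np.outer(np.arange(n), theta)) / np.sqrt(n)
    D = np.zeros((n * d, n * d), dtype=complex)
    for i, t in enumerate(theta):
        D[i * d:(i + 1) * d, i * d:(i + 1) * d] = symbol(f_hat, t)
    Fd = np.kron(F, np.eye(d))
    return Fd @ D @ Fd.conj().T
```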
### 3.2 Projectors for block structured matrices
For the convergence analysis of block-circulant and block-Toeplitz matrices,
previous results are based on the Ruge-Stüben theory [29] for TGM in Theorem
2.1, see [8, 22, 12]. The smoothing property is satisfied by damped Richardson
iteration simply choosing the damping parameter in the interval
$(0,2/\|\mathbf{f}\|_{\infty})$, see [12, Lemma 1]. The approximation property
$(c)$ requires a precise definition of $P_{n,k}$ and a detailed analysis.
The choice of prolongation and restriction operators fulfilling the
approximation condition is crucial for multigrid convergence and optimality.
In particular, the projector $P_{n,k}$ is chosen in order that
* •
it projects the problem onto a coarser space by “cutting” the coefficient
matrix,
* •
the resulting projected matrix should maintain the same block structure and
properties of the original matrix.
Let $K_{n,k}$ be the $n\times k$ downsampling matrix, such that:
n even:
$k=\frac{n}{2}$ and $K_{n,k}=K^{Odd}_{n,k}$,
n odd:
$k=\frac{n-1}{2}$ and $K_{n,k}=K^{Even}_{n,k}$,
with $K^{Odd}_{n,k}$ and $K^{Even}_{n,k}$ defined as
$K^{Odd}_{n,k}=\left[\begin{array}{cccc}1&&&\\ 0&&&\\ &1&&\\ &0&&\\ &&\ddots&\\ &&&1\\ &&&0\end{array}\right]_{n\times k},\qquad K^{Even}_{n,k}=\left[\begin{array}{cccc}0&&&\\ 1&&&\\ 0&&&\\ &1&&\\ &0&&\\ &&\ddots&\\ &&&1\\ &&&0\end{array}\right]_{n\times k}.$
In particular, $K^{Odd}_{n,k}$ is the $n\times k$ matrix obtained by removing
the even rows from the identity matrix of size $n$, that is, it keeps the odd
rows. On the other hand, $K^{Even}_{n,k}$ keeps the even rows. When $n$ is
even, $K^{Odd}_{n,k}$ performs the packaging of the Fourier frequencies since
it holds
$(K^{Odd}_{n,k})^{T}F_{n}=\frac{1}{\sqrt{2}}[F_{k}\,|\,F_{k}].$
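This identity is easy to verify numerically; a small illustrative sketch:

```python
import numpy as np

n, k = 8, 4
F = lambda m: np.exp(-2j * np.pi * np.outer(np.arange(m), np.arange(m)) / m) / np.sqrt(m)
K_odd = np.zeros((n, k))
K_odd[::2, :] = np.eye(k)   # keeps rows 1, 3, 5, ... (1-based), i.e. the odd rows
print(np.allclose(K_odd.T @ F(n), np.hstack([F(k), F(k)]) / np.sqrt(2)))  # True
```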
This property of the Fourier matrix is the key to define a projector $P_{n,k}$
that preserves the block-circulant structure at the coarser levels. In the
rest of the paper, the projector will be denoted by $P^{d}_{n,k}$ since the
block-structured matrices have blocks of order $d$. Therefore, we define the
structure of the projecting operators $P^{d}_{n,k}$ for the block-circulant
matrix $\mathcal{A}_{n}(\mathbf{f})$ generated by a trigonometric polynomial
$\mathbf{f}:Q\rightarrow\mathcal{M}_{d}$ as follows. Let $n$ be even and of
the form $2^{t}$, $t\in\mathbb{N}$, such that the size of the coarser problem
is $k=\frac{n}{2}=2^{t-1}$. The projector $P^{d}_{n,k}$ is then constructed as
the product between a matrix $\mathcal{A}_{n}(\mathbf{p})$ in the algebra,
with $\mathbf{p}$ a proper trigonometric polynomial that will be defined in
the following sections, and a cutting matrix $K^{Odd}_{n,k}\otimes I_{d}$.
That is,
(4) $P^{d}_{n,k}=\mathcal{A}_{n}(\mathbf{p})(K^{Odd}_{n,k}\otimes I_{d}).$
Multiplying a $dn\times dn$ matrix with $d\times d$ blocks by
$K^{Odd}_{n,k}\otimes I_{d}$ yields a block matrix where only the odd
“block-columns” are maintained. We are left to determine the
conditions to be satisfied by $\mathcal{A}_{n}(\mathbf{p})$ (or better by its
generating function $\mathbf{p}$), in order to obtain a projector which is
effective in terms of convergence. Using the block-symbol analysis, sufficient
conditions have been proven in [12]. Unfortunately, such conditions are quite
strong and are not satisfied by the classical projector studied in Section 5.
Therefore, in the next section we prove weaker conditions on $\mathbf{p}$ that
provide an optimal multigrid method. The same strategy can be applied when we
deal with block-Toeplitz matrices generated by a matrix-valued trigonometric
polynomial, instead of block-circulant matrices. Indeed, the only thing that
should be adapted is the structure of the projector which slightly changes for
block-Toeplitz matrices, in order to preserve the structure at coarser levels.
Hence, for a matrix-valued trigonometric polynomial $\mathbf{p}$, the
projector matrix is $P^{d}_{n,k}=T_{n}(\mathbf{p})\left(K^{Even}_{n,k}\otimes
I_{d}\right).$ Note that in the Toeplitz case $n$ should be chosen odd and of
the form $2^{t}-1$, $t\in\mathbb{N}$, such that the size of the coarser
problem is $k=\frac{n-1}{2}=2^{t-1}-1$.
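In code, both projectors are obtained in the same way; a sketch reusing the block_circulant and block_toeplitz helpers sketched above (illustrative names of our own):

```python
import numpy as np

def projector_circulant(p_hat, n, d):
    # P^d_{n,k} = A_n(p) (K^Odd_{n,k} ⊗ I_d), n even, k = n/2, as in (4)
    k = n // 2
    K_odd = np.zeros((n, k)); K_odd[::2, :] = np.eye(k)
    return block_circulant(p_hat, n) @ np.kron(K_odd, np.eye(d))

def projector_toeplitz(p_hat, n, d):
    # P^d_{n,k} = T_n(p) (K^Even_{n,k} ⊗ I_d), n odd, k = (n - 1)/2
    k = (n - 1) // 2
    K_even = np.zeros((n, k)); K_even[1::2, :] = np.eye(k)
    return block_toeplitz(p_hat, n) @ np.kron(K_even, np.eye(d))
```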
## 4 Multigrid convergence for block-circulant matrices
Let $A_{N}=\mathcal{A}_{n}(\mathbf{f})$, $N=N(d,n)=dn$ and $n$ even, with
$\mathbf{f}$ a matrix-valued trigonometric polynomial, $\mathbf{f}\geq 0$. We
highlight that the theoretical results we derive are based on the hypothesis
that $\mathbf{f}$ is a trigonometric polynomial. This guarantees that the symbol at the
coarse levels maintains the same structure and properties of the symbol at the
finest one. However, in line with the scalar-valued case addressed in [30,
32], the proposed theory can be easily extended to the dense case where $\mathbf{f}$
belongs to $L^{\infty}([-\pi,\pi])$, only requiring the additional hypothesis
that $\mathbf{f}$ has isolated zeros of finite order.
Let $P^{d}_{n,k}=\mathcal{A}_{n}(\mathbf{p})(K^{Odd}_{n,k}\otimes I_{d})$ with
$\mathbf{p}$ matrix-valued trigonometric polynomial. Suppose that there exist
unique $\theta_{0}\in[0,2\pi)$ and $\bar{\jmath}\in\\{1,\dots,d\\}$ such that
(5) $\left\\{\begin{array}[]{ll}\lambda_{j}(\mathbf{f}(\theta))=0,&\mbox{for
}\theta=\theta_{0}\mbox{ and }j=\bar{\jmath},\\\
\lambda_{j}(\mathbf{f}(\theta))>0,&{\rm otherwise}.\end{array}\right.$
The latter assumption means that the matrix $\mathbf{f}(\theta)$ has exactly
one zero eigenvalue in $\theta_{0}$ and it is positive definite in
$[0,2\pi)\backslash\\{\theta_{0}\\}$. Moreover, we have that the order of the
zero in $\theta_{0}$ must be even. As a consequence, the matrices $A_{N}$
could be singular and the ill-conditioned subspace is the eigenspace
associated with $\lambda_{\bar{\jmath}}(\mathbf{f}(\theta_{0}))$. On the other
hand, the block-Toeplitz matrices $T_{n}(\mathbf{f})$ are positive definite
with the same ill-conditioned subspace and become ill-conditioned as $N$
increases. Since $\mathbf{f}(\theta)$ is Hermitian, it can be diagonalized by
a unitary matrix $Q(\theta)$. Moreover, we are in the setting where the
eigenvalues and the eigenvectors of $\mathbf{f}$ are continuous functions in
the variable $\theta$ [24, 28]. We have
(6) $\begin{split}&\mathbf{f}(\theta)=Q(\theta)D(\theta)Q(\theta)^{H}=\\\
&\left[\begin{array}[]{@{\;}c@{\;}|@{\;}c@{\;}|@{\;}c@{\;}|@{\;}c@{\;}|@{\;}c@{\;}|@{\;}c@{\;}}q_{1}(\theta)&\dots&q_{\bar{\jmath}}(\theta)&\dots&q_{d}(\theta)\end{array}\right]\begin{bmatrix}\lambda_{1}(\mathbf{f}(\theta))&&&&&\\\
&\ddots&&&\\\ &&\lambda_{\bar{\jmath}}(\mathbf{f}(\theta))&&\\\ &&&\ddots&\\\
&&&&\lambda_{d}(\mathbf{f}(\theta))\end{bmatrix}\left[\begin{array}[]{ccccccc}{q_{1}}^{H}(\theta)\\\
\hline\cr\vdots\\\ \hline\cr q_{\bar{\jmath}}^{H}(\theta)\\\
\hline\cr\vdots\\\ \hline\cr{q_{d}}^{H}(\theta)\end{array}\right],\end{split}$
where $q_{\bar{\jmath}}(\theta)$ is the eigenvector that generates the ill-
conditioned subspace since $q_{\bar{\jmath}}(\theta_{0})$ is the eigenvector
of $\mathbf{f}(\theta_{0})$ associated with
$\lambda_{\bar{\jmath}}(\mathbf{f}(\theta_{0}))=0$. Under the following
assumptions, we will prove that there are sufficient conditions to ensure the
linear convergence of the TGM.
In the next section—in particular Theorem 4.1—we will show that it is sufficient
to choose $\mathbf{p}$ such that
1. $(i)$
$\mathbf{p}(\theta)^{H}\mathbf{p}(\theta)+\mathbf{p}(\theta+\pi)^{H}\mathbf{p}(\theta+\pi)>0\quad\forall\theta\in[0,2\pi),$
which implies that the trigonometric function
(7)
$\textbf{s}(\theta)=\mathbf{p}(\theta)\left(\mathbf{p}(\theta)^{H}\mathbf{p}(\theta)+\mathbf{p}(\theta+\pi)^{H}\mathbf{p}(\theta+\pi)\right)^{-1}\mathbf{p}(\theta)^{H}$
is well-defined for all $\theta\in[0,2\pi)$,
2. $(ii)$
$\textbf{s}(\theta_{0})q_{\bar{\jmath}}(\theta_{0})=q_{\bar{\jmath}}(\theta_{0}),$
3. $(iii)$
$\lim_{\theta\rightarrow\theta_{0}}\lambda_{\bar{\jmath}}(\mathbf{f}(\theta))^{-1}(1-\lambda_{\bar{\jmath}}(\textbf{s}(\theta)))=c,\quad
c\in\mathbb{R}.$
Note that the first condition does not depend on $\mathbf{f}$ and its spectral
properties; hence it leaves a certain freedom in the choice of the grid
transfer operator with respect to the problem. The second and third conditions
depend on the eigenvector associated with the singularity of $\mathbf{f}$, which is
known, and on the behaviour of the minimal eigenvalue function of $\mathbf{f}$. The
latter is a scalar-valued function, and its analytic properties can be
investigated or approximated with the preferred mathematical tools; a
numerical probe is sketched below.
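Since conditions $(i)$–$(iii)$ only involve pointwise evaluations of $\mathbf{p}$, $\mathbf{f}$ and $\textbf{s}$, they can be probed numerically before any theoretical validation; the following sketch (with illustrative names and a finite-difference probe of the limit in $(iii)$) shows one possible check.

```python
import numpy as np

def check_conditions(p, f, theta0, q0, eps=1e-3):
    """Probe conditions (i)-(iii) for symbols p, f: theta -> d x d array;
    q0 is the (unit-norm) eigenvector of f(theta0) with zero eigenvalue."""
    gram = lambda t: p(t).conj().T @ p(t) + p(t + np.pi).conj().T @ p(t + np.pi)
    s = lambda t: p(t) @ np.linalg.inv(gram(t)) @ p(t).conj().T   # equation (7)
    # (i): positive definiteness of the Gram term on a grid of angles
    grid = np.linspace(0, 2 * np.pi, 64, endpoint=False)
    cond_i = all(np.linalg.eigvalsh(gram(t)).min() > 0 for t in grid)
    # (ii): s(theta0) fixes q0
    cond_ii = np.allclose(s(theta0) @ q0, q0)
    # (iii): finite ratio (1 - lambda_jbar(s)) / lambda_jbar(f) near theta0
    t = theta0 + eps
    lam_f = np.linalg.eigvalsh(f(t)).min()                # branch vanishing at theta0
    lam_s = min(np.linalg.eigvalsh(s(t)), key=lambda v: abs(v - 1))
    return cond_i, cond_ii, (1 - lam_s) / lam_f
```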
### 4.1 TGM optimality
The following theorem proves that conditions $(i)-(iii)$ imply the
approximation property $(c)$. Combining this result with the smoothing
property proved in [12], the optimality of the TGM follows from Theorem 2.1.
###### Theorem 4.1.
Consider the matrix $A_{N}:=\mathcal{A}_{n}(\mathbf{f})$, with $n$ even and
$\mathbf{f}\in\mathcal{M}_{d}$ matrix-valued trigonometric polynomial,
$\mathbf{f}\geq 0$, such that condition (5) is satisfied. Let $P^{d}_{n,k}$ be
the projecting operator defined as in equation (4) with
$\mathbf{p}\in\mathcal{M}_{d}$ trigonometric polynomial satisfying conditions
$(i)-(iii)$. Then, there exists a positive value $\gamma$ independent of $n$
such that inequality $(c)$ in Theorem 2.1 is satisfied.
###### Proof 4.2.
The first part of the proof takes inspiration from [12, Theorem 5.2]. We
report all the details for completeness, uniforming the notation. We recall
that in order to prove that there exists $\gamma>0$ independent of $n$ such
that for any $x_{N}\in\mathbb{C}^{N}$
(8)
$\displaystyle\min_{y\in\mathbb{C}^{K}}\|x_{N}-P^{d}_{n,k}y\|_{2}^{2}\leq\gamma\|x_{N}\|_{A_{N}}^{2},$
we can choose a special instance of $y$ in such a way that the previous
inequality is reduced to a matrix inequality in the sense of the partial
ordering of the real space of Hermitian matrices. For any
$x_{N}\in\mathbb{C}^{N}$, let
$\overline{y}\equiv\overline{y}(x_{N})\in\mathbb{C}^{K}$ be defined as
$\overline{y}=[(P^{d}_{n,k})^{H}P^{d}_{n,k}]^{-1}(P^{d}_{n,k})^{H}x_{N}.$ From
condition $(i)$ and [12, Proposition 4.2], it is straightforward that
$(P^{d}_{n,k})^{H}P^{d}_{n,k}$ is invertible. Therefore, (8) is implied by
$\displaystyle\|x_{N}-P^{d}_{n,k}\overline{y}\|_{2}^{2}\leq\gamma\|x_{N}\|_{A_{N}}^{2},$
where the latter is equivalent to the matrix inequality
$G_{N}(\mathbf{p})^{H}G_{N}(\mathbf{p})\leq\gamma A_{N},$ with
$G_{N}(\mathbf{p})=I_{N}-P^{d}_{n,k}[(P^{d}_{n,k})^{H}P^{d}_{n,k}]^{-1}(P^{d}_{n,k})^{H}$.
By construction, the matrix $G_{N}(\mathbf{p})$ is a Hermitian projector, in
fact
$G_{N}(\mathbf{p})^{H}G_{N}(\mathbf{p})=G_{N}(\mathbf{p})^{2}=G_{N}(\mathbf{p})$.
As a consequence, the preceding matrix inequality can be rewritten as
(9) $\displaystyle G_{N}(\mathbf{p})\leq\gamma\mathcal{A}_{n}(\mathbf{f}).$
We notice that
$\left(K^{Odd}_{n,k}\right)^{T}F_{n}=\frac{1}{\sqrt{2}}F_{k}I_{n,2}$, where
$I_{n,2}=\left[I_{k}|I_{k}\right]_{k\times n}$. Since we can decompose the
block-circulant matrix $\mathcal{A}_{n}(\mathbf{p})=(F_{n}\otimes I_{d})\operatorname*{diag}_{i\in\mathcal{I}_{n}}(\mathbf{p}(\theta_{i}^{(n)}))(F_{n}^{H}\otimes I_{d})$, we have
$\displaystyle(P^{d}_{n,k})^{H}=\frac{1}{\sqrt{2}}(F_{k}\otimes I_{d})(I_{n,2}\otimes I_{d})\operatorname*{diag}_{i\in\mathcal{I}_{n}}(\mathbf{p}(\theta_{i}^{(n)})^{H})(F_{n}^{H}\otimes I_{d}),$
and the matrix $(F_{n}^{H}\otimes I_{d})G_{N}(\mathbf{p})(F_{n}\otimes I_{d})$
becomes
$\displaystyle(F_{n}^{H}\otimes I_{d})G_{N}(\mathbf{p})(F_{n}\otimes I_{d})=I_{N}-\operatorname*{diag}_{i\in\mathcal{I}_{n}}(\mathbf{p}(\theta_{i}^{(n)}))(I_{n,2}^{T}\otimes I_{d})\left[\operatorname*{diag}_{i\in\mathcal{I}_{k}}\left(\mathbf{p}(\theta_{i}^{(n)})^{H}\mathbf{p}(\theta_{i}^{(n)})+\mathbf{p}(\theta_{\tilde{\imath}}^{(n)})^{H}\mathbf{p}(\theta_{\tilde{\imath}}^{(n)})\right)\right]^{-1}(I_{n,2}\otimes I_{d})\operatorname*{diag}_{i\in\mathcal{I}_{n}}(\mathbf{p}(\theta_{i}^{(n)})^{H}),$
where $\tilde{\imath}=i+k$. Now, it is clear that there exists a suitable
permutation by rows and columns of $(F_{n}^{H}\otimes
I_{d})G_{N}(\mathbf{p})(F_{n}\otimes I_{d})$ such that we can obtain a
$2d\times 2d$ block-diagonal matrix of the form
$I_{N}-\operatorname*{diag}_{i\in\mathcal{I}_{k}}\left[\begin{array}{c}\mathbf{p}(\theta_{i}^{(n)})\\ \mathbf{p}(\theta_{\tilde{\imath}}^{(n)})\end{array}\right]\left[\left(\mathbf{p}(\theta_{i}^{(n)})^{H}\mathbf{p}(\theta_{i}^{(n)})+\mathbf{p}(\theta_{\tilde{\imath}}^{(n)})^{H}\mathbf{p}(\theta_{\tilde{\imath}}^{(n)})\right)^{-1}\right]\left[\begin{array}{cc}\mathbf{p}(\theta_{i}^{(n)})^{H}&\mathbf{p}(\theta_{\tilde{\imath}}^{(n)})^{H}\end{array}\right].$
Therefore, by considering the same permutation by rows and columns of
$(F_{n}^{H}\otimes I_{d})\mathcal{A}_{n}(\mathbf{f})(F_{n}\otimes I_{d})=\operatorname*{diag}_{i\in\mathcal{I}_{n}}(\mathbf{f}(\theta_{i}^{(n)}))$,
condition (9) is equivalent to requiring that there exists $\gamma>0$
independent of $n$ such that, $\forall i=0,\ldots,k-1$,
$\displaystyle I_{2d}-\left[\begin{array}[]{c}\mathbf{p}(\theta_{i}^{(n)})\\\
\mathbf{p}(\theta_{\tilde{\imath}}^{(n)})\end{array}\right]\left[\begin{array}[]{c}(\mathbf{p}(\theta_{i}^{(n)})^{H}\mathbf{p}(\theta_{i}^{(n)})+\mathbf{p}(\theta_{\tilde{\imath}}^{(n)})^{H}\mathbf{p}(\theta_{\tilde{\imath}}^{(n)}))^{-1}\end{array}\right]\left[\begin{array}[]{cc}\mathbf{p}(\theta_{i}^{(n)})^{H}&\mathbf{p}(\theta_{\tilde{\imath}}^{(n)})^{H}\end{array}\right]$
$\displaystyle\leq\gamma\left[\begin{array}[]{cc}\mathbf{f}(\theta_{i}^{(n)})&\\\
&\mathbf{f}(\theta_{\tilde{\imath}}^{(n)})\end{array}\right].$
We define the set $\Omega(\theta_{0})=\{\theta_{0},\theta_{0}+\pi\}.$ Due to
the continuity of $\mathbf{p}$ and $\mathbf{f}$, it is clear that the preceding
set of inequalities can be reduced to requiring that a single inequality of
the form
$\displaystyle I_{2d}-\left[\begin{array}[]{c}\mathbf{p}(\theta)\\\
\mathbf{p}(\theta+\pi)\end{array}\right]\left[\begin{array}[]{c}(\mathbf{p}(\theta)^{H}\mathbf{p}(\theta)+\mathbf{p}(\theta+\pi)^{H}\mathbf{p}(\theta+\pi))^{-1}\end{array}\right]$
$\displaystyle\left[\begin{array}[]{cc}\mathbf{p}(\theta)^{H}&\mathbf{p}(\theta+\pi)^{H}\end{array}\right]\leq\gamma\left[\begin{array}[]{cc}\mathbf{f}(\theta)&\\\
&\mathbf{f}(\theta+\pi)\end{array}\right]$
holds for all $\theta\in[0,2\pi)\backslash\Omega(\theta_{0})$. Let us define
$\textbf{q}(\theta)=(\mathbf{p}(\theta)^{H}\mathbf{p}(\theta)+\mathbf{p}(\theta+\pi)^{H}\mathbf{p}(\theta+\pi))^{-1}$.
By simple computations, the previous inequality becomes
(14) $\displaystyle
I_{2d}-\left[\begin{array}[]{cc}\mathbf{p}(\theta)\textbf{q}(\theta)\mathbf{p}(\theta)^{H}&\mathbf{p}(\theta)\textbf{q}(\theta)\mathbf{p}(\theta+\pi)^{H}\\\
\mathbf{p}(\theta+\pi)\textbf{q}(\theta)\mathbf{p}(\theta)^{H}&\mathbf{p}(\theta+\pi)\textbf{q}(\theta)\mathbf{p}(\theta+\pi)^{H}\end{array}\right]\leq\gamma\left[\begin{array}[]{cc}\mathbf{f}(\theta)&\\\
&\mathbf{f}(\theta+\pi)\end{array}\right].$
Let us define the matrix-valued function
$R(\theta)=\begin{bmatrix}\mathbf{f}(\theta)&\\ &\mathbf{f}(\theta+\pi)\end{bmatrix}^{-\frac{1}{2}}\begin{bmatrix}I_{d}-\mathbf{p}(\theta)\textbf{q}(\theta)\mathbf{p}(\theta)^{H}&-\mathbf{p}(\theta)\textbf{q}(\theta)\mathbf{p}(\theta+\pi)^{H}\\ -\mathbf{p}(\theta+\pi)\textbf{q}(\theta)\mathbf{p}(\theta)^{H}&I_{d}-\mathbf{p}(\theta+\pi)\textbf{q}(\theta)\mathbf{p}(\theta+\pi)^{H}\end{bmatrix}\begin{bmatrix}\mathbf{f}(\theta)&\\ &\mathbf{f}(\theta+\pi)\end{bmatrix}^{-\frac{1}{2}}.$
Applying the Sylvester inertia law [19], we have that the relation (14) is
verified if
(15) $\displaystyle R(\theta)\leq\gamma I_{2d}$
is satisfied. If we prove that for every
$\theta\in[0,2\pi)\backslash\Omega(\theta_{0})$ the matrix $R(\theta)$ is
uniformly bounded in the spectral norm, then we have that there exists
$\gamma>0$ which bounds the spectral radius of $R(\theta)$ and then the latter
implies inequality (15). To show that the matrix $R(\theta)$ is uniformly
bounded in the spectral norm, we can rewrite $R(\theta)$ in components as
$\displaystyle R(\theta)=\begin{bmatrix}R_{1,1}(\theta)&R_{1,2}(\theta)\\ R_{2,1}(\theta)&R_{2,2}(\theta)\end{bmatrix},$ with
$\begin{split}R_{1,1}(\theta)&=\mathbf{f}^{-\frac{1}{2}}(\theta)\left(I_{d}-\mathbf{p}(\theta)\textbf{q}(\theta)\mathbf{p}(\theta)^{H}\right)\mathbf{f}^{-\frac{1}{2}}(\theta),\\ R_{1,2}(\theta)&=-\mathbf{f}^{-\frac{1}{2}}(\theta)\mathbf{p}(\theta)\textbf{q}(\theta)\mathbf{p}(\theta+\pi)^{H}\mathbf{f}^{-\frac{1}{2}}(\theta+\pi),\\ R_{2,1}(\theta)&=-\mathbf{f}^{-\frac{1}{2}}(\theta+\pi)\mathbf{p}(\theta+\pi)\textbf{q}(\theta)\mathbf{p}(\theta)^{H}\mathbf{f}^{-\frac{1}{2}}(\theta),\\ R_{2,2}(\theta)&=\mathbf{f}^{-\frac{1}{2}}(\theta+\pi)\left(I_{d}-\mathbf{p}(\theta+\pi)\textbf{q}(\theta)\mathbf{p}(\theta+\pi)^{H}\right)\mathbf{f}^{-\frac{1}{2}}(\theta+\pi).\end{split}$
The function
$\|R(\theta)\|_{2}:[0,2\pi)\backslash\Omega(\theta_{0})\rightarrow\mathbb{R}$
is continuous and, in order to show that $R(\theta)$ is uniformly bounded in
the spectral norm, the Weierstrass theorem implies that it is sufficient to prove
that the following limits exist and are finite:
$\lim_{\theta\rightarrow\theta_{0}}\|R(\theta)\|_{2},\qquad\lim_{\theta\rightarrow\theta_{0}+\pi}\|R(\theta)\|_{2}.$
By definition, $R(\theta)$ is a Hermitian matrix for
$\theta\in[0,2\pi)\backslash\Omega(\theta_{0})$. Moreover, by direct
computation, one can verify that the matrix on the left-hand side of (14) is a
projector, having eigenvalues $0$ and $1$. Consequently, from the Sylvester
inertia law, it follows that $R(\theta)$ is a non-negative definite matrix. We
remark that in order to bound the spectral norm of a non-negative definite
matrix-valued function, it is sufficient to bound its trace. Hence, we check
that the spectral norms of the elements on the block diagonal of $R(\theta)$
are bounded. The latter is equivalent to verify that the limits
(16) $\lim_{\theta\rightarrow\theta_{0}}\|R_{1,1}(\theta)\|_{2},$
(17) $\lim_{\theta\rightarrow\theta_{0}}\|R_{2,2}(\theta)\|_{2},$
(18) $\lim_{\theta\rightarrow\theta_{0}+\pi}\|R_{1,1}(\theta)\|_{2},$
(19) $\lim_{\theta\rightarrow\theta_{0}+\pi}\|R_{2,2}(\theta)\|_{2}$
exist and are finite, which in practice requires only the proof of (16).
Indeed, the finiteness of (17) and (18) is implied by the hypotheses on
$\mathbf{f}$, which is non-singular in $\theta_{0}+\pi$. The finiteness of
(19) can be proven as (16) taking into account that $R(\theta)$ is
$2\pi$-periodic. To prove (16) we note that for all
$\theta\in[0,2\pi)\backslash\Omega(\theta_{0})$, we can write
$\displaystyle\left\|R_{1,1}(\theta)\right\|_{2}=$
$\displaystyle\left\|\mathbf{f}^{-\frac{1}{2}}(\theta)(I_{d}-\mathbf{p}(\theta)\textbf{q}(\theta)\mathbf{p}(\theta)^{H})\mathbf{f}^{-\frac{1}{2}}(\theta)\right\|_{2}=\left\|\mathbf{f}^{-{1}}(\theta)-\mathbf{f}^{-\frac{1}{2}}(\theta)\textbf{s}(\theta)\mathbf{f}^{-\frac{1}{2}}(\theta)\right\|_{2},$
with $\textbf{s}(\theta)$ defined as in (7). Without loss of generality, we
can assume that $\bar{\jmath}=1$, that is $q_{1}(\theta_{0})$ is the
eigenvector of $\mathbf{f}(\theta_{0})$ associated with the eigenvalue $0$.
Indeed, if $\bar{\jmath}\neq 1$, it is sufficient to permute rows and columns
of $D(\theta_{0})$ in the factorization in (6) via a permutation matrix $\Pi$
which brings the diagonalization of $\mathbf{f}(\theta_{0})$ into the desired
form. Moreover, we can assume that $\|q_{1}(\theta_{0})\|_{2}=1$. From
condition $(i)$ we have that the matrix-valued function $\textbf{s}(\theta)$
is Hermitian for all $\theta\in[0,2\pi)$. In addition, from condition $(ii)$
and from the latter assumption on $\bar{\jmath}$, the matrix
$\textbf{s}(\theta)$ can be decomposed as
$\textbf{s}(\theta)=W_{s}(\theta)D_{s}(\theta)W_{s}^{H}(\theta)$ and
$\textbf{s}(\theta_{0})=\left[\begin{array}[]{@{\,}c@{\;}|@{\;}c@{\;}|@{\;}c@{\;}|@{\;}c@{\,}}q_{1}(\theta_{0})&w_{2}(\theta_{0})&\dots&w_{d}(\theta_{0})\end{array}\right]\begin{bmatrix}1\\\
&\lambda_{2}(\textbf{s}(\theta_{0}))\\\ &&\ddots\\\
&&&\lambda_{d}(\textbf{s}(\theta_{0}))\end{bmatrix}\left[\begin{array}[]{@{\,}c@{\,}}{q_{1}}^{H}(\theta_{0})\\\
\hline\cr{w_{2}}^{H}(\theta_{0})\\\ \hline\cr\vdots\\\
\hline\cr{w_{d}}^{H}(\theta_{0})\end{array}\right].$
Then, we can rewrite the quantity to bound as follows:
$\begin{split}&\lim_{\theta\rightarrow\theta_{0}}\|Q(\theta)D^{-1}(\theta)Q^{H}(\theta)-Q(\theta)D^{-\frac{1}{2}}(\theta)Q^{H}(\theta)W_{s}(\theta)D_{s}(\theta)W^{H}_{s}(\theta)Q^{H}(\theta)D^{-\frac{1}{2}}(\theta)Q^{H}(\theta)\|_{2}\\\
&=\lim_{\theta\rightarrow\theta_{0}}\|D^{-1}(\theta)-D^{-\frac{1}{2}}(\theta)Q^{H}(\theta)W_{s}(\theta)D_{s}(\theta)W^{H}_{s}(\theta)Q^{H}(\theta)D^{-\frac{1}{2}}(\theta)\|_{2}.\\\
\end{split}$
By definition of $Q(\theta_{0})$ and $W_{s}(\theta_{0})$, the vector
$q_{1}(\theta_{0})$ is orthogonal to both $q_{j}(\theta_{0})$ and
$w_{j}(\theta_{0})$, $j=2,\dots,d$. Denoting by $\textbf{0}_{d-1}$ the null
row vector of size $d-1$, we have
$\lim_{\theta\rightarrow\theta_{0}}Q^{H}(\theta)W_{s}(\theta)=\left[\begin{array}[]{
c c
}\begin{array}[]{c}q_{1}(\theta_{0})^{H}q_{1}(\theta_{0})\end{array}&\textbf{0}_{d-1}\\\
\textbf{0}_{d-1}^{T}&\begin{array}[]{cc}M(\theta_{0})\end{array}\end{array}\right],$
where $M(\theta)$ is a matrix-valued function which is well-defined and
continuous on $[0,2\pi]$. Then, since the eigenvalue functions
$\lambda_{i}(\mathbf{f}(\theta))^{-1}$, for $i=2,\dots,d$, are well-defined
and continuous on $[0,2\pi],$ see Lemma 3.1, the quantity to bound becomes
$\left\|\begin{bmatrix}\underset{\theta\rightarrow\theta_{0}}{\lim}\,\lambda_{1}(\mathbf{f}(\theta))^{-1}(1-\lambda_{1}(\textbf{s}(\theta)))&\textbf{0}_{d-1}\\ \textbf{0}_{d-1}^{T}&\begin{bmatrix}\lambda_{2}(\mathbf{f}(\theta_{0}))^{-1}&&\\ &\ddots&\\ &&\lambda_{d}(\mathbf{f}(\theta_{0}))^{-1}\end{bmatrix}\left(I_{d-1}-M(\theta_{0})M^{T}(\theta_{0})\right)\end{bmatrix}\right\|_{2}.$
Consequently, the thesis follows from condition $(iii)$.
###### Remark 4.3.
The proof of Theorem 4.1 requires that $\theta_{0}\neq\theta_{i}^{(n)}$ for
all $i$ and $n$. Nevertheless, in practice, our multigrid method works well
even if $\theta_{0}=\theta_{i}^{(n)}$ for a certain $i$ and $n$, as can
happen in the circulant case. In that case the coefficient matrix $A_{N}$ is
singular but the multigrid method converges anyway, since it works on the
orthogonal complement of the eigenvector corresponding to the zero eigenvalue.
We could add a rank one correction, like the one used for scalar symbols in
[2], but this would only lead to unnecessary complication of notation, see
[1].
In practical applications, choosing a $\mathbf{p}$ such that conditions $(ii)$
and $(iii)$ are verified may not be trivial. Hence, in the following,
assuming that $\mathbf{p}$ satisfies condition $(i)$ so that the matrix-valued
function $\textbf{s}$ is well-defined, we provide two useful results, Lemmas
4.4 and 4.6, that can be used to construct a $\mathbf{p}$ that fulfills condition
$(ii)$. Analogously, Lemma 4.9 shows how to deal with condition $(iii)$ under
some additional hypotheses on $\mathbf{p}$ and $\mathbf{f}$.
###### Lemma 4.4.
Let $\mathbf{f}$ be a matrix-valued trigonometric polynomial, $\mathbf{f}\geq
0$ that satisfies condition (5). Assume $\mathbf{p}$ is a matrix-valued
trigonometric polynomial such that condition $(i)$ is fulfilled, so that the
matrix-valued function $\textbf{s}$ defined as in (7) is well-defined. Assume that the
eigenvector $q_{\bar{\jmath}}(\theta_{0})$ associated with the ill-conditioned
subspace of $\mathbf{f}(\theta_{0})$, i.e.,
$\mathbf{f}(\theta_{0})q_{\bar{\jmath}}(\theta_{0})=0q_{\bar{\jmath}}(\theta_{0})$,
is such that:
1. 1.
$q_{\bar{\jmath}}(\theta_{0})$ is an eigenvector of $\mathbf{p}(\theta_{0})$,
associated with $\lambda^{(1)}_{\bar{\jmath}}\neq 0$, that is
$\mathbf{p}(\theta_{0})q_{\bar{\jmath}}(\theta_{0})=\lambda^{(1)}_{\bar{\jmath}}q_{\bar{\jmath}}(\theta_{0});$
2. 2.
$q_{\bar{\jmath}}(\theta_{0})$ is an eigenvector of
$\mathbf{p}(\theta_{0}+\pi)$ associated with the zero eigenvalue, that is
$\mathbf{p}(\theta_{0}+\pi)q_{\bar{\jmath}}(\theta_{0})=0q_{\bar{\jmath}}(\theta_{0});$
3. 3.
$q_{\bar{\jmath}}(\theta_{0})$ is an eigenvector of
$\mathbf{p}(\theta_{0})^{H}$, associated with $\lambda^{(2)}_{\bar{\jmath}}\neq
0$, that is
$\mathbf{p}(\theta_{0})^{H}q_{\bar{\jmath}}(\theta_{0})=\lambda^{(2)}_{\bar{\jmath}}q_{\bar{\jmath}}(\theta_{0}).$
Then condition $(ii)$ is satisfied.
###### Proof 4.5.
From all the hypotheses on $q_{\bar{\jmath}}(\theta_{0})$ and by direct
computation, we have
$\left(\mathbf{p}(\theta_{0})^{H}\mathbf{p}(\theta_{0})+\mathbf{p}(\theta_{0}+\pi)^{H}\mathbf{p}(\theta_{0}+\pi)\right)q_{\bar{\jmath}}(\theta_{0})=\lambda^{(1)}_{\bar{\jmath}}\lambda^{(2)}_{\bar{\jmath}}q_{\bar{\jmath}}(\theta_{0}).$
Then, by definition of $\textbf{s}(\theta)$ in (7), it holds that
$\displaystyle\textbf{s}(\theta_{0})q_{\bar{\jmath}}(\theta_{0})$
$\displaystyle=\mathbf{p}(\theta_{0})\left(\mathbf{p}(\theta_{0})^{H}\mathbf{p}(\theta_{0})+\mathbf{p}(\theta_{0}+\pi)^{H}\mathbf{p}(\theta_{0}+\pi)\right)^{-1}\mathbf{p}(\theta_{0})^{H}q_{\bar{\jmath}}(\theta_{0})$
$\displaystyle=\lambda_{\bar{\jmath}}^{(2)}\mathbf{p}(\theta_{0})\left(\mathbf{p}(\theta_{0})^{H}\mathbf{p}(\theta_{0})+\mathbf{p}(\theta_{0}+\pi)^{H}\mathbf{p}(\theta_{0}+\pi)\right)^{-1}q_{\bar{\jmath}}(\theta_{0})$
$\displaystyle=\lambda_{\bar{\jmath}}^{(2)}\frac{1}{\lambda^{(1)}_{\bar{\jmath}}\lambda^{(2)}_{\bar{\jmath}}}\mathbf{p}(\theta_{0})q_{\bar{\jmath}}(\theta_{0})=q_{\bar{\jmath}}(\theta_{0}).$
The next lemma provides other sufficient conditions to verify hypothesis
$(ii)$, if the projector is associated with a trigonometric polynomial
$\mathbf{p}$ which is non-singular in the considered point $\theta_{0}$.
###### Lemma 4.6.
With the assumptions and notation of Lemma 4.4, where condition 3. is
replaced with
1. 3 bis.
$\mathbf{p}(\theta_{0})$ is non-singular.
Then condition $(ii)$ is satisfied.
###### Proof 4.7.
By definition of s in equation (7), we have
$\begin{split}\textbf{s}(\theta_{0})^{-1}&=\left(\mathbf{p}(\theta_{0})\left(\mathbf{p}(\theta_{0})^{H}\mathbf{p}(\theta_{0})+\mathbf{p}(\theta_{0}+\pi)^{H}\mathbf{p}(\theta_{0}+\pi)\right)^{-1}\mathbf{p}(\theta_{0})^{H}\right)^{-1}\\ &=\mathbf{p}(\theta_{0})^{-H}\left(\mathbf{p}(\theta_{0})^{H}\mathbf{p}(\theta_{0})+\mathbf{p}(\theta_{0}+\pi)^{H}\mathbf{p}(\theta_{0}+\pi)\right)\mathbf{p}(\theta_{0})^{-1}\\ &=I_{d}+\mathbf{p}(\theta_{0})^{-H}\mathbf{p}(\theta_{0}+\pi)^{H}\mathbf{p}(\theta_{0}+\pi)\mathbf{p}(\theta_{0})^{-1}.\end{split}$
Then, it holds
$\begin{split}\textbf{s}(\theta_{0})^{-1}q_{\bar{\jmath}}(\theta_{0})&=\left[I_{d}+\mathbf{p}(\theta_{0})^{-H}\mathbf{p}(\theta_{0}+\pi)^{H}\mathbf{p}(\theta_{0}+\pi)\mathbf{p}(\theta_{0})^{-1}\right]q_{\bar{\jmath}}(\theta_{0})\\ &=q_{\bar{\jmath}}(\theta_{0})+\frac{1}{\lambda_{\bar{\jmath}}^{(1)}}\mathbf{p}(\theta_{0})^{-H}\mathbf{p}(\theta_{0}+\pi)^{H}\mathbf{p}(\theta_{0}+\pi)q_{\bar{\jmath}}(\theta_{0})=q_{\bar{\jmath}}(\theta_{0}),\end{split}$
hence $\textbf{s}(\theta_{0})q_{\bar{\jmath}}(\theta_{0})=q_{\bar{\jmath}}(\theta_{0})$ and condition $(ii)$ follows.
Finally, we present Lemma 4.9 to simplify the validation of condition $(iii)$
in possible applications. It provides some additional hypotheses on $\mathbf{p}$ and
$\mathbf{f}$ that can be exploited when proving that condition $(iii)$ is satisfied.
For this purpose, we first introduce the following remark, containing
algebraic calculations that will speed up the proof of the lemma.
###### Remark 4.8.
Suppose that we can write $\mathbf{p}(\theta)$ as
$\mathbf{p}(\theta)=\left[\begin{array}[]{@{\,}c@{\;}|@{\;}c@{\;}|@{\;}c@{\;}|@{\;}c@{\,}}h_{1}(\theta)&h_{2}(\theta)&\dots&h_{d}(\theta)\end{array}\right]\begin{bmatrix}\lambda_{1}(\mathbf{p}(\theta))&\textbf{0}_{d-1}\\\
\textbf{0}_{d-1}^{T}&M(\theta)\end{bmatrix}\left[\begin{array}[]{@{\,}c@{\,}}{h_{1}}^{H}(\theta)\\\
\hline\cr{h_{2}}^{H}(\theta)\\\ \hline\cr\vdots\\\
\hline\cr{h_{d}}^{H}(\theta)\end{array}\right],$
where $M(\theta)\in\mathbb{C}^{(d-1)\times(d-1)},$ for each $\theta$,
$\lim_{\theta\rightarrow\theta_{0}}h_{1}(\theta+\pi)=q_{\bar{\jmath}}(\theta_{0})$,
and $h_{\jmath}(\theta_{0}+\pi)^{H}q_{\bar{\jmath}}(\theta_{0})=0,$ for
${\jmath}=2,\dots,d$. Then, we can write
$\begin{split}\lim_{\theta\rightarrow\theta_{0}}\mathbf{p}(\theta+\pi)q_{\bar{\jmath}}(\theta_{0})&=\lim_{\theta\rightarrow\theta_{0}}\left[\begin{array}{c|c|c}h_{1}(\theta+\pi)&\dots&h_{d}(\theta+\pi)\end{array}\right]\begin{bmatrix}\lambda_{1}(\mathbf{p}(\theta+\pi))&\textbf{0}_{d-1}\\ \textbf{0}_{d-1}^{T}&M(\theta+\pi)\end{bmatrix}\left[\begin{array}{c}h_{1}^{H}(\theta+\pi)\\ \hline h_{2}^{H}(\theta+\pi)\\ \hline\vdots\\ \hline h_{d}^{H}(\theta+\pi)\end{array}\right]q_{\bar{\jmath}}(\theta_{0})\\ &=\lim_{\theta\rightarrow\theta_{0}}\left[\begin{array}{c|c|c}h_{1}(\theta+\pi)&\dots&h_{d}(\theta+\pi)\end{array}\right]\begin{bmatrix}\lambda_{1}(\mathbf{p}(\theta+\pi))&\textbf{0}_{d-1}\\ \textbf{0}_{d-1}^{T}&M(\theta+\pi)\end{bmatrix}\begin{bmatrix}1\\ 0\\ \vdots\\ 0\end{bmatrix}\\ &=\lim_{\theta\rightarrow\theta_{0}}\lambda_{1}(\mathbf{p}(\theta+\pi))\,h_{1}(\theta+\pi)=\lim_{\theta\rightarrow\theta_{0}}\lambda_{1}(\mathbf{p}(\theta+\pi))\,q_{\bar{\jmath}}(\theta_{0}).\end{split}$
###### Lemma 4.9.
Assume that $\mathbf{p}$ and $\mathbf{f}$ are matrix-valued functions which
satisfy the requirements of the Lemma 4.4. If
1. 1.
$\lambda_{\bar{\jmath}}^{(1)}=\lambda_{\bar{\jmath}}^{(2)}=\lambda_{\bar{\jmath}},$
2. 2.
$\lim_{\theta\rightarrow\theta_{0}}\frac{|\lambda_{\bar{\jmath}}(\mathbf{p}(\theta+\pi))|^{2}}{\lambda_{\bar{\jmath}}(\mathbf{f}(\theta))}=c,$
then condition $(iii)$ of Theorem 4.1 is satisfied. That is,
$\lim_{\theta\rightarrow\theta_{0}}\frac{1-\lambda_{\bar{\jmath}}(\textbf{s}(\theta))}{\lambda_{\bar{\jmath}}(\mathbf{f}(\theta))}=\frac{c}{|\lambda_{\bar{\jmath}}|^{2}}\in\mathbb{R}.$
###### Proof 4.10.
From the hypotheses on $\textbf{s}(\theta)$, we have that
${\textbf{s}}(\theta)$ is equal to
$\begin{split}&\left[\begin{array}[]{@{\;}c@{\;}|@{\;}c@{\;}|@{\;}c@{\;}|@{\;}c@{\;}|@{\;}c@{\;}}w_{1}(\theta)&\dots&w_{\bar{\jmath}}(\theta)&\dots&w_{d}(\theta)\end{array}\right]\begin{bmatrix}\lambda_{1}({\textbf{s}}(\theta))&&&&\\\
&\ddots&&&\\\ &&\lambda_{\bar{\jmath}}({\textbf{s}}(\theta))&&\\\
&&&\ddots&\\\
&&&&\lambda_{d}({\textbf{s}}(\theta))\end{bmatrix}\left[\begin{array}[]{ccccccc}{w_{1}}^{H}(\theta)\\\
\hline\cr\vdots\\\ \hline\cr w_{\bar{\jmath}}^{H}(\theta)\\\
\hline\cr\vdots\\\ \hline\cr{w_{d}}^{H}(\theta)\end{array}\right]\end{split},$
where
$\lim_{\theta\rightarrow\theta_{0}}w_{\bar{\jmath}}(\theta)=q_{\bar{\jmath}}(\theta_{0}).$
By definition of $\textbf{s}$, we can write
$\begin{split}&\lim_{\theta\rightarrow\theta_{0}}\frac{1-\lambda_{\bar{\jmath}}(\textbf{s}(\theta))}{\lambda_{\bar{\jmath}}(\mathbf{f}(\theta))}=\\ &\lim_{\theta\rightarrow\theta_{0}}\frac{1-w_{\bar{\jmath}}(\theta)^{H}\mathbf{p}(\theta)\left(\mathbf{p}(\theta)^{H}\mathbf{p}(\theta)+\mathbf{p}(\theta+\pi)^{H}\mathbf{p}(\theta+\pi)\right)^{-1}\mathbf{p}(\theta)^{H}w_{\bar{\jmath}}(\theta)}{\lambda_{\bar{\jmath}}(\mathbf{f}(\theta))}.\end{split}$
Since
$\mathbf{p}(\theta_{0})q_{\bar{\jmath}}(\theta_{0})=\mathbf{p}(\theta_{0})^{H}q_{\bar{\jmath}}(\theta_{0})=\lambda_{\bar{\jmath}}q_{\bar{\jmath}}(\theta_{0})$,
$\begin{split}&\lim_{\theta\rightarrow\theta_{0}}\frac{1-\lambda_{\bar{\jmath}}(\textbf{s}(\theta))}{\lambda_{\bar{\jmath}}(\mathbf{f}(\theta))}=\\ &\lim_{\theta\rightarrow\theta_{0}}\frac{1-|\lambda_{\bar{\jmath}}|^{2}q_{\bar{\jmath}}(\theta_{0})^{H}\left(\mathbf{p}(\theta)^{H}\mathbf{p}(\theta)+\mathbf{p}(\theta+\pi)^{H}\mathbf{p}(\theta+\pi)\right)^{-1}q_{\bar{\jmath}}(\theta_{0})}{\lambda_{\bar{\jmath}}(\mathbf{f}(\theta))}.\end{split}$
Note that,
$\begin{split}&\lim_{\theta\rightarrow\theta_{0}}q_{\bar{\jmath}}(\theta_{0})^{H}\left(\mathbf{p}(\theta)^{H}\mathbf{p}(\theta)+\mathbf{p}(\theta+\pi)^{H}\mathbf{p}(\theta+\pi)\right)q_{\bar{\jmath}}(\theta_{0})=\\ &\lim_{\theta\rightarrow\theta_{0}}q_{\bar{\jmath}}(\theta_{0})^{H}\mathbf{p}(\theta)^{H}\mathbf{p}(\theta)q_{\bar{\jmath}}(\theta_{0})+\lim_{\theta\rightarrow\theta_{0}}q_{\bar{\jmath}}(\theta_{0})^{H}\mathbf{p}(\theta+\pi)^{H}\mathbf{p}(\theta+\pi)q_{\bar{\jmath}}(\theta_{0})=\\ &|\lambda_{\bar{\jmath}}|^{2}+\lim_{\theta\rightarrow\theta_{0}}q_{\bar{\jmath}}(\theta_{0})^{H}\mathbf{p}(\theta+\pi)^{H}\mathbf{p}(\theta+\pi)q_{\bar{\jmath}}(\theta_{0}).\end{split}$
Then,
$\begin{split}&\lim_{\theta\rightarrow\theta_{0}}q_{\bar{\jmath}}(\theta_{0})^{H}\left(\mathbf{p}(\theta)^{H}\mathbf{p}(\theta)+\mathbf{p}(\theta+\pi)^{H}\mathbf{p}(\theta+\pi)\right)^{-1}q_{\bar{\jmath}}(\theta_{0})=\\\
&\lim_{\theta\rightarrow\theta_{0}}\frac{1}{|\lambda_{\bar{\jmath}}|^{2}+q_{\bar{\jmath}}(\theta_{0})^{H}\mathbf{p}(\theta+\pi)^{H}\mathbf{p}(\theta+\pi)q_{\bar{\jmath}}(\theta_{0})}.\end{split}$
Consequently, we can write
$\begin{split}\lim_{\theta\rightarrow\theta_{0}}\frac{1-\lambda_{\bar{\jmath}}(\textbf{s}(\theta))}{\lambda_{\bar{\jmath}}(\mathbf{f}(\theta))}&=\lim_{\theta\rightarrow\theta_{0}}\frac{1-\frac{|\lambda_{\bar{\jmath}}|^{2}}{|\lambda_{\bar{\jmath}}|^{2}+q_{\bar{\jmath}}(\theta_{0})^{H}\mathbf{p}(\theta+\pi)^{H}\mathbf{p}(\theta+\pi)q_{\bar{\jmath}}(\theta_{0})}}{\lambda_{\bar{\jmath}}(\mathbf{f}(\theta))}\\ &=\lim_{\theta\rightarrow\theta_{0}}\frac{q_{\bar{\jmath}}(\theta_{0})^{H}\mathbf{p}(\theta+\pi)^{H}\mathbf{p}(\theta+\pi)q_{\bar{\jmath}}(\theta_{0})}{\lambda_{\bar{\jmath}}(\mathbf{f}(\theta))\left(|\lambda_{\bar{\jmath}}|^{2}+q_{\bar{\jmath}}(\theta_{0})^{H}\mathbf{p}(\theta+\pi)^{H}\mathbf{p}(\theta+\pi)q_{\bar{\jmath}}(\theta_{0})\right)}\\ &=\frac{1}{|\lambda_{\bar{\jmath}}|^{2}}\lim_{\theta\rightarrow\theta_{0}}\frac{q_{\bar{\jmath}}(\theta_{0})^{H}\mathbf{p}(\theta+\pi)^{H}\mathbf{p}(\theta+\pi)q_{\bar{\jmath}}(\theta_{0})}{\lambda_{\bar{\jmath}}(\mathbf{f}(\theta))},\end{split}$
where in the latter equality we used the fact that
$|\lambda_{\bar{\jmath}}|^{2}>0$ and that $q_{\bar{\jmath}}(\theta_{0})^{H}\mathbf{p}(\theta+\pi)^{H}\mathbf{p}(\theta+\pi)q_{\bar{\jmath}}(\theta_{0})\rightarrow 0$ as $\theta\rightarrow\theta_{0}$. Hence, from Remark 4.8, we have that
$\lim_{\theta\rightarrow\theta_{0}}\frac{1-\lambda_{\bar{\jmath}}(\textbf{s}(\theta))}{\lambda_{\bar{\jmath}}(\mathbf{f}(\theta))}=\frac{1}{|\lambda_{\bar{\jmath}}|^{2}}\lim_{\theta\rightarrow\theta_{0}}\frac{|\lambda_{\bar{\jmath}}(\mathbf{p}(\theta+\pi))|^{2}}{\lambda_{\bar{\jmath}}(\mathbf{f}(\theta))}.$
Then, the thesis follows from hypothesis 2.
In conclusion, in order to prove that a considered symbol $\mathbf{p}$ is a
good choice for constructing the grid transfer operator of the TGM, it is
sufficient to check that $\mathbf{p}$ satisfies (a scalar example is sketched
after this list):
1. 1.
condition $(i)$,
2. 2.
the hypotheses of Lemma 4.4 or of Lemma 4.6,
3. 3.
the hypotheses of Lemma 4.9.
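As a scalar sanity check ($d=1$) of this recipe, take $f(\theta)=2-2\cos\theta$, which vanishes at $\theta_{0}=0$ with a zero of order two, and the classical linear-interpolation symbol $p(\theta)=1+\cos\theta$; reusing the hypothetical check_conditions helper sketched above:

```python
import numpy as np

f = lambda t: np.array([[2 - 2 * np.cos(t)]], dtype=complex)
p = lambda t: np.array([[1 + np.cos(t)]], dtype=complex)
q0 = np.array([1.0 + 0j])
print(check_conditions(p, f, theta0=0.0, q0=q0))
# (True, True, ~0): |p(t + pi)|^2 = (1 - cos t)^2 vanishes faster than
# f(t) = 2(1 - cos t), so the limit in (iii) is finite (here it is 0)
```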
### 4.2 V-cycle optimality
Following the proof of Theorem 4.1 and the results in [25], it is possible
to also derive conditions for the convergence and optimality of the V-cycle in
the block case. Indeed, according to Lemma 2.2, it is sufficient to prove that
there exists a $\delta$ independent of $n$ such that, for all $\ell$,
$\|\pi_{A_{n_{\ell}}}\|_{2}\leq\delta,$ or, equivalently, that
$\|A_{n_{\ell}}^{1/2}P_{n_{\ell},k_{\ell}}(P_{n_{\ell},k_{\ell}}^{H}A_{n_{\ell}}P_{n_{\ell},k_{\ell}})^{-1}P_{n_{\ell},k_{\ell}}^{H}A_{n_{\ell}}^{1/2}\|_{2}\leq\delta,$
which is implied by
(20)
$A_{n_{\ell}}^{1/2}P_{n_{\ell},k_{\ell}}(P_{n_{\ell},k_{\ell}}^{H}A_{n_{\ell}}P_{n_{\ell},k_{\ell}})^{-1}P_{n_{\ell},k_{\ell}}^{H}A_{n_{\ell}}^{1/2}\leq\delta
I_{N}.$
As proved in [12, Proposition 1], all matrices $A_{n_{\ell}}$ have a block-
circulant structure and share the same spectral properties, in particular for
$\theta_{0}=0$, see also equation (21). Therefore, the following analysis
performed at the first level could be repeated unchanged at a generic level
$\ell$. Following the steps of the proof of Theorem 4.1, condition (20)
becomes
$\displaystyle\operatorname*{diag}_{i\in\mathcal{I}_{k}}\left[\begin{array}{cc}\mathbf{f}(\theta_{i}^{(n)})&\\ &\mathbf{f}(\theta_{\tilde{\imath}}^{(n)})\end{array}\right]^{\frac{1}{2}}\left[\begin{array}{c}\mathbf{p}(\theta_{i}^{(n)})\\ \mathbf{p}(\theta_{\tilde{\imath}}^{(n)})\end{array}\right]\cdot\operatorname*{diag}_{i\in\mathcal{I}_{k}}\left[\mathbf{p}(\theta_{i}^{(n)})^{H}\mathbf{f}(\theta_{i}^{(n)})\mathbf{p}(\theta_{i}^{(n)})+\mathbf{p}(\theta_{\tilde{\imath}}^{(n)})^{H}\mathbf{f}(\theta_{\tilde{\imath}}^{(n)})\mathbf{p}(\theta_{\tilde{\imath}}^{(n)})\right]^{-1}\cdot\operatorname*{diag}_{i\in\mathcal{I}_{k}}\left[\begin{array}{cc}\mathbf{p}(\theta_{i}^{(n)})^{H}&\mathbf{p}(\theta_{\tilde{\imath}}^{(n)})^{H}\end{array}\right]\left[\begin{array}{cc}\mathbf{f}(\theta_{i}^{(n)})&\\ &\mathbf{f}(\theta_{\tilde{\imath}}^{(n)})\end{array}\right]^{\frac{1}{2}}\leq\delta I_{N}.$
The latter is equivalent to requiring that the components of the following
matrix-valued function are bounded:
$\begin{split}&Z(\theta)=\\\ &\left[\begin{array}[]{cc}\mathbf{f}(\theta)&\\\
&\mathbf{f}(\theta+\pi)\end{array}\right]^{\frac{1}{2}}\left[\begin{array}[]{c}\mathbf{p}(\theta)\\\
\mathbf{p}(\theta+\pi)\end{array}\right]\hat{\mathbf{f}}^{-1}(2\theta)\left[\begin{array}[]{cc}\mathbf{p}(\theta)^{H}&\mathbf{p}(\theta+\pi)^{H}\end{array}\right]\left[\begin{array}[]{cc}\mathbf{f}(\theta)&\\\
&\mathbf{f}(\theta+\pi)\end{array}\right]^{\frac{1}{2}},\end{split}$
where
(21)
$\displaystyle\hat{\mathbf{f}}(\theta)=\frac{1}{2}\left(\mathbf{p}\left(\frac{\theta}{2}\right)^{H}\mathbf{f}\left(\frac{\theta}{2}\right)\mathbf{p}\left(\frac{\theta}{2}\right)+\mathbf{p}\left(\frac{\theta}{2}+\pi\right)^{H}\mathbf{f}\left(\frac{\theta}{2}+\pi\right)\mathbf{p}\left(\frac{\theta}{2}+\pi\right)\right)$
is the generating function of
$\mathcal{A}_{k}(\hat{\mathbf{f}})=P_{n,k}^{H}\mathcal{A}_{n}(\mathbf{f})P_{n,k}$.
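A direct sketch of (21), useful for inspecting the symbol at the coarse levels (the zero at $2\theta_{0}$ and its order can then be probed numerically; names are illustrative):

```python
import numpy as np

def coarse_symbol(f, p):
    # f_hat(theta) = (p^H f p)(theta/2)/2 + (p^H f p)(theta/2 + pi)/2, as in (21)
    def f_hat(theta):
        t = theta / 2
        return 0.5 * (p(t).conj().T @ f(t) @ p(t)
                      + p(t + np.pi).conj().T @ f(t + np.pi) @ p(t + np.pi))
    return f_hat
```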
We have
$\begin{split}\|Z_{1,1}(\theta)\|&=\|\mathbf{f}(\theta)\mathbf{p}(\theta)\hat{\mathbf{f}}^{-1}(2\theta)\mathbf{p}(\theta)^{H}\mathbf{f}(\theta)\|\leq\\ &\leq\|\mathbf{f}(\theta)\mathbf{p}(\theta)\hat{\mathbf{f}}^{-1}(2\theta)\|\|\mathbf{p}(\theta)^{H}\mathbf{f}(\theta)\|,\\ \|Z_{2,2}(\theta)\|&=\|\mathbf{f}(\theta+\pi)\mathbf{p}(\theta+\pi)\hat{\mathbf{f}}^{-1}(2\theta)\mathbf{p}(\theta+\pi)^{H}\mathbf{f}(\theta+\pi)\|\leq\\ &\leq\|\mathbf{f}(\theta+\pi)\|\|\mathbf{p}(\theta+\pi)\hat{\mathbf{f}}^{-1}(2\theta)\|\|\mathbf{p}(\theta+\pi)^{H}\mathbf{f}(\theta+\pi)\|,\\ \|Z_{1,2}(\theta)\|,\|Z_{2,1}(\theta)\|&\leq\|\mathbf{f}(\theta)\mathbf{p}(\theta)\hat{\mathbf{f}}^{-1}(2\theta)\|\|\mathbf{p}(\theta+\pi)^{H}\mathbf{f}(\theta+\pi)\|.\end{split}$
Since $\mathbf{f}(\theta)$ and $\mathbf{p}(\theta)$ are trigonometric
polynomials, the quantities $\|\mathbf{p}(\theta)^{H}\mathbf{f}(\theta)\|$,
$\|\mathbf{f}(\theta+\pi)\|$ and
$\|\mathbf{p}(\theta+\pi)^{H}\mathbf{f}(\theta+\pi)\|$ are bounded. Hence, we
have to prove that
(22)
$\|\mathbf{f}(\theta)\mathbf{p}(\theta)\hat{\mathbf{f}}^{-1}(2\theta)\|<\infty,\quad\|\mathbf{p}(\theta+\pi)\hat{\mathbf{f}}^{-1}(2\theta)\|<\infty.$
Consequently, a key point is to investigate the properties of the generating
function at the coarse levels. The following lemma will be a useful tool for
this purpose.
###### Lemma 4.11.
Let $\mathbf{f}$ be defined as in Theorem 4.1 and $\hat{\mathbf{f}}$ be
defined as in formula (21). Assume that $q_{\bar{\jmath}}(\theta_{0})$ is the
eigenvector associated with the ill-conditioned subspace of
$\mathbf{f}(\theta_{0})$. In addition, assume that the eigenvector
$q_{\bar{\jmath}}(\theta_{0})$ is such that:
1. (a)
$q_{\bar{\jmath}}(\theta_{0})$ is an eigenvector of $\mathbf{p}(\theta_{0})$,
associated with $\lambda^{(1)}\neq 0$, that is
$\mathbf{p}(\theta_{0})q_{\bar{\jmath}}(\theta_{0})=\lambda^{(1)}q_{\bar{\jmath}}(\theta_{0});$
2. (b)
$q_{\bar{\jmath}}(\theta_{0})$ is an eigenvector of
$\mathbf{p}(\theta_{0}+\pi)$ associated with the zero eigenvalue, that is
$\mathbf{p}(\theta_{0}+\pi)q_{\bar{\jmath}}(\theta_{0})=0q_{\bar{\jmath}}(\theta_{0}).$
Then, the following properties are fulfilled:
1. 1.
$\hat{\mathbf{f}}$ is a Hermitian matrix-valued trigonometric polynomial;
2. 2.
$\hat{\mathbf{f}}(\theta)\geq 0$, $\forall\,\theta\in[0,2\pi]$;
3. 3.
$\hat{\mathbf{f}}(2\theta_{0})q_{\bar{\jmath}}(\theta_{0})=0q_{\bar{\jmath}}(\theta_{0});$
4. 4.
$\hat{\mathbf{f}}(\theta)>0$ for $\theta\neq 2\theta_{0}$;
5. 5.
If
(23)
$\lim_{\theta\rightarrow\theta_{0}}\frac{\lambda_{\bar{\jmath}}^{2}(\mathbf{p}(\theta+\pi))}{\lambda_{\bar{\jmath}}(\mathbf{f}(\theta))}<\infty,$
then
$\lim_{\theta\rightarrow\theta_{0}}\frac{\lambda_{\bar{\jmath}}(\hat{\mathbf{f}}(2\theta))}{\lambda_{\bar{\jmath}}(\mathbf{f}(\theta))}=c,\qquad c\in\mathbb{R},\ c\neq 0.$
###### Proof 4.12.
1. 1.
It is straightforward to see from its definition in (21) that
$\hat{\mathbf{f}}$ is a Hermitian matrix-valued trigonometric polynomial. In
particular, $\hat{\mathbf{f}}$ is obtained by sums and products of the
trigonometric polynomials $\mathbf{p}$ and $\mathbf{f}$.
2. 2.
Let $y\in\mathbb{R}^{d}$; then we have
$y^{T}\hat{\mathbf{f}}y=\frac{1}{2}y^{T}\mathbf{p}\left(\frac{\theta}{2}\right)^{H}\mathbf{f}\left(\frac{\theta}{2}\right)\mathbf{p}\left(\frac{\theta}{2}\right)y+\frac{1}{2}y^{T}\mathbf{p}\left(\frac{\theta}{2}+\pi\right)^{H}\mathbf{f}\left(\frac{\theta}{2}+\pi\right)\mathbf{p}\left(\frac{\theta}{2}+\pi\right)y.$
By hypothesis, $\mathbf{f}(\frac{\theta}{2})$ and
$\mathbf{f}(\frac{\theta}{2}+\pi)$ are non-negative definite matrices, so that
$y^{T}\hat{\mathbf{f}}y\geq 0.$
3. 3.
By definition, it holds
$\hat{\mathbf{f}}(2\theta_{0})q_{\bar{\jmath}}=\frac{1}{2}\mathbf{p}^{H}(\theta_{0})\mathbf{f}({\theta_{0}})\mathbf{p}(\theta_{0})q_{\bar{\jmath}}+\frac{1}{2}\mathbf{p}^{H}(\theta_{0}+\pi)\mathbf{f}(\theta_{0}+\pi)\mathbf{p}(\theta_{0}+\pi)q_{\bar{\jmath}}.$
The vector $q_{\bar{\jmath}}(\theta_{0})$ is the eigenvector of
$\mathbf{p}(\theta_{0}+\pi)$ and $\mathbf{f}(\theta_{0})$ associated with the
zero eigenvalue and it is the eigenvector of $\mathbf{p}(\theta_{0})$
associated with $\lambda^{(1)}$. Then, we have
$\hat{\mathbf{f}}(2\theta_{0})q_{\bar{\jmath}}=\frac{\lambda^{(1)}}{2}\mathbf{p}^{H}(\theta_{0})\mathbf{f}({\theta_{0}})q_{\bar{\jmath}}(\theta_{0})+0q_{\bar{\jmath}}(\theta_{0})=0q_{\bar{\jmath}}(\theta_{0}).$
4. 4.
Let $y\in\mathbb{R}^{d}$, $y\neq 0$; then we have
$y^{T}\hat{\mathbf{f}}y=\frac{1}{2}y^{T}\mathbf{p}\left(\frac{\theta}{2}\right)^{H}\mathbf{f}\left(\frac{\theta}{2}\right)\mathbf{p}\left(\frac{\theta}{2}\right)y+\frac{1}{2}y^{T}\mathbf{p}\left(\frac{\theta}{2}+\pi\right)^{H}\mathbf{f}\left(\frac{\theta}{2}+\pi\right)\mathbf{p}\left(\frac{\theta}{2}+\pi\right)y.$
From the proof of the second item we already know that
$y^{T}\hat{\mathbf{f}}y\geq 0,$ since both
$\frac{1}{2}y^{T}\mathbf{p}\left(\frac{\theta}{2}\right)^{H}\mathbf{f}\left(\frac{\theta}{2}\right)\mathbf{p}\left(\frac{\theta}{2}\right)y$
and
$\frac{1}{2}y^{T}\mathbf{p}\left(\frac{\theta}{2}+\pi\right)^{H}\mathbf{f}\left(\frac{\theta}{2}+\pi\right)\mathbf{p}\left(\frac{\theta}{2}+\pi\right)y$
are non-negative. To prove that $y^{T}\hat{\mathbf{f}}y>0$ for $\theta\neq 2\theta_{0}$,
it is sufficient to show that the two terms cannot vanish simultaneously: if
$\frac{1}{2}y^{T}\mathbf{p}\left(\frac{\theta}{2}\right)^{H}\mathbf{f}\left(\frac{\theta}{2}\right)\mathbf{p}\left(\frac{\theta}{2}\right)y=0$,
then
$\frac{1}{2}y^{T}\mathbf{p}\left(\frac{\theta}{2}+\pi\right)^{H}\mathbf{f}\left(\frac{\theta}{2}+\pi\right)\mathbf{p}\left(\frac{\theta}{2}+\pi\right)y\neq
0$, and vice versa. We show the first implication; the second can be proved in the
same way. We have that
$\frac{1}{2}y^{T}\mathbf{p}\left(\frac{\theta}{2}\right)^{H}\mathbf{f}\left(\frac{\theta}{2}\right)\mathbf{p}\left(\frac{\theta}{2}\right)y=0$
if and only if $\mathbf{p}\left(\frac{\theta}{2}\right)y=0$. Note that for
$\theta\neq 2\theta_{0}$ we have $\mathbf{f}(\theta/2)>0$, since
$\mathbf{f}(\vartheta)>0$ for every $\vartheta\neq\theta_{0}$. Moreover, if $\theta\neq
2\theta_{0}$, then by periodicity $\theta\neq 2\theta_{0}-2\pi$, that is,
$\theta/2+\pi\neq\theta_{0}$; consequently, $\mathbf{f}(\theta/2+\pi)>0$. If,
by contradiction,
$\frac{1}{2}y^{T}\mathbf{p}\left(\frac{\theta}{2}+\pi\right)^{H}\mathbf{f}\left(\frac{\theta}{2}+\pi\right)\mathbf{p}\left(\frac{\theta}{2}+\pi\right)y=0$,
then $\mathbf{p}(\theta/2+\pi)y=0$ as well. Together with
$\mathbf{p}(\theta/2)y=0$, this would contradict condition $(i)$, which requires
$\mathbf{p}(\theta/2)^{H}\mathbf{p}(\theta/2)+\mathbf{p}(\theta/2+\pi)^{H}\mathbf{p}(\theta/2+\pi)>0$.
5. 5.
Since $\hat{\mathbf{f}}(\theta)$ is Hermitian for every $\theta$, we can
write its eigendecomposition as
$\hat{\mathbf{f}}(\theta)=\left[\begin{array}[]{c|c|c|c|c}v_{1}(\theta)&\dots&v_{\bar{\jmath}}(\theta)&\dots&v_{d}(\theta)\end{array}\right]\,{\rm diag}\left(\lambda_{1}(\hat{\mathbf{f}}(\theta)),\dots,\lambda_{\bar{\jmath}}(\hat{\mathbf{f}}(\theta)),\dots,\lambda_{d}(\hat{\mathbf{f}}(\theta))\right)\left[\begin{array}[]{c}v_{1}^{H}(\theta)\\\
\hline\cr\vdots\\\ \hline\cr v_{\bar{\jmath}}^{H}(\theta)\\\ \hline\cr\vdots\\\
\hline\cr v_{d}^{H}(\theta)\end{array}\right].$
From property 3, we can assume, without loss of generality, that
(24)
$\lim_{\theta\rightarrow\theta_{0}}v_{\bar{\jmath}}(2\theta)=q_{\bar{\jmath}}(\theta_{0}).$
Consequently, we obtain
(25)
$\begin{split}\lim_{\theta\rightarrow\theta_{0}}\frac{\lambda_{\bar{\jmath}}(\hat{\mathbf{f}}(2\theta))}{\lambda_{\bar{\jmath}}(\mathbf{f}(\theta))}&=\lim_{\theta\rightarrow\theta_{0}}\frac{v_{\bar{\jmath}}(2\theta)^{H}\hat{\mathbf{f}}(2\theta)v_{\bar{\jmath}}(2\theta)}{q_{\bar{\jmath}}(\theta)^{H}{\mathbf{f}}(\theta)q_{\bar{\jmath}}(\theta)}=\frac{1}{2}\lim_{\theta\rightarrow\theta_{0}}\left(\frac{v_{\bar{\jmath}}(2\theta)^{H}{\mathbf{p}}(\theta)^{H}\mathbf{f}(\theta)\mathbf{p}(\theta)v_{\bar{\jmath}}(2\theta)}{q_{\bar{\jmath}}(\theta)^{H}{\mathbf{f}}(\theta)q_{\bar{\jmath}}(\theta)}+\frac{v_{\bar{\jmath}}(2\theta)^{H}{\mathbf{p}}(\theta+\pi)^{H}\mathbf{f}(\theta+\pi)\mathbf{p}(\theta+\pi)v_{\bar{\jmath}}(2\theta)}{q_{\bar{\jmath}}(\theta)^{H}{\mathbf{f}}(\theta)q_{\bar{\jmath}}(\theta)}\right)\\\
&=\frac{1}{2}\,\frac{q_{\bar{\jmath}}(\theta_{0})^{H}{\mathbf{p}}(\theta_{0})^{H}\mathbf{f}(\theta_{0})\mathbf{p}(\theta_{0})q_{\bar{\jmath}}(\theta_{0})}{q_{\bar{\jmath}}(\theta_{0})^{H}{\mathbf{f}}(\theta_{0})q_{\bar{\jmath}}(\theta_{0})}+\frac{1}{2}\lim_{\theta\rightarrow\theta_{0}}\frac{v_{\bar{\jmath}}(2\theta)^{H}{\mathbf{p}}(\theta+\pi)^{H}\mathbf{f}(\theta+\pi)\mathbf{p}(\theta+\pi)v_{\bar{\jmath}}(2\theta)}{q_{\bar{\jmath}}(\theta)^{H}{\mathbf{f}}(\theta)q_{\bar{\jmath}}(\theta)},\end{split}$
where in the first term we used relation (24). Exploiting properties (a) and
(b) and Remark 4.8, we can write
(26)
$\begin{split}&\lim_{\theta\rightarrow\theta_{0}}\frac{\lambda_{\bar{\jmath}}(\hat{\mathbf{f}}(2\theta))}{\lambda_{\bar{\jmath}}(\mathbf{f}(\theta))}=\frac{1}{2}(\lambda^{(1)})^{2}+\frac{1}{2}{q_{\bar{\jmath}}(\theta_{0})^{H}\mathbf{f}(\theta_{0}+\pi)q_{\bar{\jmath}}(\theta_{0})}\left(\lim_{\theta\rightarrow\theta_{0}}\frac{\lambda_{\bar{\jmath}}^{2}(\mathbf{p}(\theta+\pi))}{\lambda_{\bar{\jmath}}(\mathbf{f}(\theta))}\right),\end{split}$
where $\lambda_{\bar{\jmath}}(\mathbf{p}(\theta+\pi))$ denotes the eigenvalue
of $\mathbf{p}(\theta+\pi)$ associated with the eigenvector
$q_{\bar{\jmath}}(\theta_{0})$. Since the quantity
$\frac{1}{2}{q_{\bar{\jmath}}(\theta_{0})^{H}\mathbf{f}(\theta_{0}+\pi)q_{\bar{\jmath}}(\theta_{0})}$
is strictly positive and bounded by condition (5), property 5 is satisfied
under the hypotheses that
(27)
$\lim_{\theta\rightarrow\theta_{0}}\frac{\lambda_{\bar{\jmath}}^{2}(\mathbf{p}(\theta+\pi))}{\lambda_{\bar{\jmath}}(\mathbf{f}(\theta))}<\infty.$
Passing from the TGM to the V-cycle, hypothesis 2 in Lemma 4.9 has to be
strengthened by removing the power of two, similarly to the case of a scalar symbol [1],
obtaining the condition
$\lim_{\theta\rightarrow\theta_{0}}\frac{|\lambda_{\bar{\jmath}}(\mathbf{p}(\theta+\pi))|}{\lambda_{\bar{\jmath}}(\mathbf{f}(\theta))}=c.$
Indeed, this leads to convergence and optimality even when dealing with a V-cycle
with more than two grids.
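In practice, this limit condition can be sampled numerically. The sketch below is a heuristic of ours (the critical eigenvalue is identified simply as the smallest one, which is an assumption on the eigenvalue ordering near $\theta_{0}$); values of the ratio that stabilize to a finite constant suggest that the condition holds.

```python
import numpy as np

def sample_vcycle_ratio(f, p, theta0=0.0, offsets=(1e-1, 1e-2, 1e-3, 1e-4)):
    """Sample |lambda_jbar(p(theta+pi))| / lambda_jbar(f(theta)) near theta0,
    taking the smallest eigenvalue (in modulus) as the critical one."""
    for eps in offsets:
        th = theta0 + eps
        lam_p = np.min(np.abs(np.linalg.eigvals(p(th + np.pi))))
        lam_f = np.min(np.linalg.eigvalsh(f(th)))
        print(f"eps = {eps:.0e}  ratio = {lam_p / lam_f:.6g}")
```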
###### Lemma 4.13.
Let $\mathbf{p}$ and $\mathbf{f}$ satisfy the hypotheses of Lemma 4.11. If
(28)
$\lim_{\theta\rightarrow\theta_{0}}\frac{|\lambda_{\bar{\jmath}}(\mathbf{p}(\theta+\pi))|}{\lambda_{\bar{\jmath}}(\mathbf{f}(\theta))}=c,$
then the two bounds in (22), needed for the convergence and optimality of the
V-cycle, are verified.
###### Proof 4.14.
Condition (28) implies the hypothesis of item 5 of Lemma 4.11; hence the
order of the zero at the coarse levels does not change, since it yields
(29)
$\lim_{\theta\rightarrow\theta_{0}}\frac{\lambda_{\bar{\jmath}}(\hat{\mathbf{f}}(2\theta))}{\lambda_{\bar{\jmath}}(\mathbf{f}(\theta))}=c,\qquad c\in\mathbb{R},\ c\neq 0,$
where $\hat{\mathbf{f}}(\theta)$ is given in (21). Hence, from direct
computation using the same techniques as in the proof of Theorem 4.1 and Lemma
4.11, we have that the quantity
$\|\mathbf{f}(\theta)\mathbf{p}(\theta)\hat{\mathbf{f}}^{-1}(2\theta)\|$
is bounded. Moreover, the limit relation (29) implies that the second
condition in (22) can be replaced by
$\|\mathbf{p}(\theta+\pi)\,{\mathbf{f}}^{-1}(\theta)\|<\infty,$
which is guaranteed by (28).
## 5 Geometric Projectors
In the following we will apply the theoretical considerations from the
previous sections to problems arising from the discretization of partial
differential equations (PDEs). When PDEs are discretized with high order of
accuracy using the finite element method (FEM), block matrices arise
naturally. We consider finite elements with nodal bases, using a Cartesian
grid and Kronecker products of one-dimensional basis functions. For a problem
posed in $m$ dimensions and discretized using Kronecker products of basis functions
of degree $r$, this automatically yields blocks of size $r^{m}\times r^{m}$. As
prolongation operators we have different choices; here we consider two: the
linear interpolation usually used in geometric multigrid methods for scalar
problems [34] and the prolongation obtained as the adjoint operator of the
restriction operator when considering the finite element basis functions [4].
### 5.1 $\mathbb{Q}_{r}$ Lagrangian FEM Stiffness Matrices
First we consider the $\mathbb{Q}_{r}$ Lagrangian FEM approximation of the
one-dimensional differential problem: find $u$ such that
(30) $\begin{cases}&-u^{\prime\prime}(x)=\psi(x)\quad{\rm on}\,\,(0,1),\\\
&u(0)=u(1)=0,\end{cases}$
where $\psi(x)\in L^{2}\left(0,1\right)$. In this setting the weak formulation
of the problem reads as follows: find $u\in H^{1}_{0}(0,1)$ such that
$a(u,v)=\langle\psi,v\rangle,\quad\forall v\in H^{1}_{0}(0,1),$ where
$a(u,v):=\int_{(0,1)}u^{\prime}(x)v^{\prime}(x)\,dx$ and
$\langle\psi,v\rangle:=\int_{(0,1)}\psi(x)v(x)\,dx$. For $r,n\geq 1$, we
define the space
(31) $V_{n}^{(r)}:=\\{\sigma\in\mathcal{C}\left([0,1]\right),\,{\rm
such}\,{\rm
that}\,\sigma{|_{\left[\frac{i}{n},\frac{i+1}{n}\right]}}\in\mathbb{P}_{r},\,\forall
i=0,\dots,n-1\\},$
where we denote by $\mathbb{P}_{r}$ the space of polynomials of degree less
than or equal to $r$. So the space $V_{n}^{(r)}$ represents the space of
continuous piecewise polynomial functions. Starting from $V_{n}^{(r)}$, we
consider its subspace of functions that vanish on the boundary, defined by
$W_{n}^{(r)}:=\\{\sigma\in V_{n}^{(r)},\,{\rm such}\,{\rm
that}\,\sigma(0)=\sigma(1)=0\\}.$ Note that $W_{n}^{(r)}$ is a subspace of
$H_{0}^{1}(0,1)$ of finite dimension $n{r}-1$ and, following a Galerkin approach [4],
we approximate the solution $u$ of the variational problem by solving the
problem: Find $u_{r,n}\in W_{n}^{(r)}$ such that
(32) $a(u_{r,n},v)=\langle\psi,v\rangle,\quad\forall v\in W_{n}^{(r)}.$
We define the uniform knot sequence
(33) $\xi_{i}=\frac{i}{nr},\quad i=0,\dots,nr,$
and the Lagrangian basis functions by
$\phi_{j}^{n,r}(\xi_{i})=\delta_{i,j},\,i,j=0,\dots,nr,$ with $\delta_{i,j}$
being the Kronecker delta. It is well known that the latter definition is
well-posed and that $\\{\phi_{1}^{n,r},\dots,\phi_{nr-1}^{n,r}\\}$ is a basis
for $W_{n}^{(r)}$. Then $u_{r,n}$ can be written as a linear combination of
such basis as $u_{r,n}=\sum_{j=1}^{nr-1}u_{j}\phi_{j}^{n,r}{.}$ Using this
discretization, approximately solving the problem (32) reduces to the solution
of the linear system $K_{n}^{(r)}\textbf{u}=\textbf{b},$ with
$K_{n}^{(r)}=[a(\phi_{j}^{n,r},\phi_{i}^{n,r})]_{i,j=1}^{nr-1},\quad\textbf{b}=\left[\langle\psi,\phi_{i}^{n,r}\rangle\right]_{i=1}^{nr-1},\quad\textbf{u}=\left[u_{i}\right]_{i=1}^{nr-1}.$
The spectral properties of the stiffness matrix-sequence
$\\{K_{n}^{(r)}\\}_{n}$ were studied in [18]. In the following we report the
spectral properties of the matrix-valued function $\mathbf{f}$ associated with the
normalized matrix-sequence
$\\{T_{n}(\mathbf{f})\\}_{n}=\\{nK_{n}^{(r)}\\}_{n},$ which are needed for our
analysis [18, 27].
###### Theorem 5.1.
The $r\times r$ matrix-valued generating function of
$\\{T_{n}(\mathbf{f})\\}_{n}=\\{nK_{n}^{(r)}\\}_{n}$ is
(34) $\displaystyle\mathbf{f}_{\mathbb{Q}_{r}}(\theta)=a_{0}+a_{1}{\rm
e}^{\imath\theta}+a_{1}^{T}{\rm e}^{-\imath\theta}$
and the following statements hold true:
1. 1.
$\mathbf{f}_{\mathbb{Q}_{r}}(0){\rm e}_{r}=0$, where ${\rm e}_{r}$ is the vector of all
ones, $r\geq 1$;
2. 2.
there exist constants $C_{2}\geq C_{1}>0$ (dependent on $\mathbf{f}_{\mathbb{Q}_{r}}$)
such that
(35)
$C_{1}(2-2\cos(\theta))\leq\lambda_{1}(\mathbf{f}_{\mathbb{Q}_{r}}(\theta))\leq
C_{2}(2-2\cos(\theta));$
and
3. 3.
there exist constants $M_{2}\geq M_{1}>0$ (dependent on
$\mathbf{f}_{\mathbb{Q}_{r}}$) such that
(36)
$0<{M_{1}}\leq\lambda_{j}(\mathbf{f}_{\mathbb{Q}_{r}}(\theta))\leq{M_{2}},\ \
\ \ j=2,\ldots,r.$
From the latter result, we have that $\mathbf{f}_{\mathbb{Q}_{r}}$ is a
matrix-valued trigonometric polynomial which fulfills the hypotheses of
Section 4. Indeed, for each $r\geq 1$ we have that
* •
the minimum eigenvalue function $\lambda_{\min}(\mathbf{f}_{\mathbb{Q}_{r}})$
of $\mathbf{f}_{\mathbb{Q}_{r}}$ has a zero of order 2 in $\theta_{0}=0$;
* •
for $j=2,\dots,r$, it holds
$\lambda_{j}(\mathbf{f}_{\mathbb{Q}_{r}}(\theta))>0$, for all
$\theta\in[0,2\pi]$.
Then, the first item of Theorem 5.1 implies that $\mathbf{f}_{\mathbb{Q}_{r}}$
can be decomposed as in equation (6), with $q_{\bar{\jmath}}=q_{1}$ equal to
${\rm e}_{r}$, the column vector of all ones.
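As a concrete illustration of Theorem 5.1, the following sketch (ours, not code from [18]) assembles the $\mathbb{Q}_{2}$ stiffness matrix from the standard quadratic-element stiffness matrix and rescales it so that the entries are $O(1)$; reading off a block row then reproduces the Fourier coefficients $a_{0}$ and $a_{1}$ of the symbol (34) for $r=2$, up to the sub/super-diagonal orientation convention.

```python
import numpy as np

def q2_stiffness(n):
    """Assemble the Q2 stiffness matrix for -u'' = psi on (0,1) with
    homogeneous Dirichlet conditions, from the exact quadratic-element
    stiffness matrix (1/(3h)) [[7,-8,1],[-8,16,-8],[1,-8,7]]."""
    h = 1.0 / n
    Ke = np.array([[7.0, -8.0, 1.0],
                   [-8.0, 16.0, -8.0],
                   [1.0, -8.0, 7.0]]) / (3.0 * h)
    K = np.zeros((2 * n + 1, 2 * n + 1))
    for e in range(n):
        idx = slice(2 * e, 2 * e + 3)   # left end, midpoint, right end
        K[idx, idx] += Ke
    return K[1:-1, 1:-1]                # drop the boundary degrees of freedom

A = q2_stiffness(8) / 8                 # rescale so that the entries are O(1)
print(np.round(3 * A[:2, :2], 10))      # 3*a_0 = [[16, -8], [-8, 14]]
print(np.round(3 * A[2:4, :2], 10))     # 3*a_1 = [[0, -8], [0, 1]] (sub-diagonal block)
```

Note that the power of $n$ in the rescaling depends on how the element matrices are normalized; here it is chosen so that the blocks match (34).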
Consequently, in the next subsections we test the applicability of the results given
in Section 4, and we confirm that two standard projectors are effective in
terms of convergence and optimality when used for solving the linear system
which has $T_{n}(\mathbf{f}_{\mathbb{Q}_{r}})$ as coefficient matrix. In
particular, we will deal with projectors of the form
$P_{n,k}^{{}_{\mathbb{Q}_{r}}}=T_{n}(\mathbf{p}_{{}_{\mathbb{Q}_{r}}})(K_{n,k}^{Even}\otimes
I_{d}),$
whose properties and efficiency will depend on those of the associated
trigonometric polynomial $\mathbf{p}_{{}_{\mathbb{Q}_{r}}}$.
### 5.2 The scalar linear interpolation projector
The discretization using $\mathbb{Q}_{r}$ Lagrangian finite elements provides
an approximate solution at all nodes used as interpolation points within an
element and on its boundaries. While the use of linear interpolation is common
in finite difference discretizations of partial differential equations [34],
it can be used in this setting as well. For $n$ elements and polynomial
degree $r$ it can be written as
(37) $P_{n,k}^{{r}}=T_{rn}(2+2\cos(\theta))K^{Even}_{rn,\frac{r(n-1)}{2}}.$
We will now show that this scalar interpolation nevertheless satisfies the
hypotheses of Lemma 4.4. For that purpose we have to show that it fits into
the block setting of the present paper. Hence, first we have to rewrite
$P_{n,k}^{r}$ in block form as
(38) $P_{n,k}^{{r}}=T_{n}(\mathbf{p}_{{}_{L_{r}}})(K^{Even}_{n,k}\otimes
I_{r}),$
which means that we want to find a matrix-valued trigonometric polynomial
$\mathbf{p}_{{}_{L_{r}}}$ such that the latter equation holds, with
$P_{n,k}^{{r}}$ defined as in (37).
Recalling the action of the cutting matrix $(K^{Even}_{n,k}\otimes I_{r})$,
seen in Subsection 3.2, we observe that $P_{n,k}^{{r}}$ can be rewritten in
the desired block form with associated matrix-valued trigonometric polynomial
$\mathbf{p}_{{}_{L_{r}}}$ of the form
(39) $\mathbf{p}_{{}_{L_{r}}}(\theta)=\hat{a}_{0}+\hat{a}_{-1}{\rm
e}^{-\imath\theta}+\hat{a}_{1}{\rm e}^{\imath\theta},$
where the expression of the Fourier coefficients
$\hat{a}_{0},\hat{a}_{-1},\hat{a}_{1}$ depends on whether the degree is even
or odd. Indeed, we have
1. 1.
In the case of even degree $r$, we define
$A_{1}=T_{r}(2+2\cos(\theta))K^{Even}_{r,\frac{r}{2}}=\begin{bmatrix}1&&&&\\\
2&&&&\\\ 1&1&&&\\\ &2&&&\\\ &1&&&\\\ &&\ddots&&\\\ &&&1&\\\ &&&2&\\\ &&&1&1\\\
&&&&2\end{bmatrix}\in\mathbb{R}^{r\times\frac{r}{2}}.$
Then the identity (38) holds with
$\displaystyle\hat{a}_{-1}$
$\displaystyle=\begin{bmatrix}&A_{1}&|&\textbf{0}_{{r,\frac{r}{2}}}&\end{bmatrix},\quad\hat{a}_{0}=\left[\begin{array}[]{ccc|c|c}&\textbf{0}_{1,\frac{r}{2}-1}&&1&\\\
\cline{1-4}\cr&&&&A_{1}\\\
&\textbf{0}_{{r}-1,\frac{r}{2}-1}&&\textbf{0}^{T}_{1,\frac{r}{2}-1}&\end{array}\right],$
$\displaystyle\hat{a}_{1}$
$\displaystyle=\left[\begin{array}[]{ccc|c}&\textbf{0}_{1,r-1}&&1\\\
\cline{1-4}\cr&&&\\\
&\textbf{0}_{r-1,r-1}&&\textbf{0}^{T}_{r-1}\end{array}\right].$
Note that
(40) $\sum_{j=1}^{\frac{r}{2}}[A_{1}]_{1,j}=1\quad\text{ and
}\quad\sum_{j=1}^{\frac{r}{2}}[A_{1}]_{i,j}=2,\,\text{ for $i=2,\dots,r$.}$
2. 2.
In the case of odd degree $r$ we define
$\displaystyle A_{2}=T_{r}(2+2\cos(\theta))K^{Odd}_{r,\frac{r+1}{2}}$
$\displaystyle=\begin{bmatrix}2&&&&\\\ 1&1&&&\\\ &2&&&\\\ &1&&&\\\
&&\ddots&&\\\ &&&1&\\\ &&&2&\\\ &&&1&1\\\
&&&&2\end{bmatrix}\in\mathbb{R}^{r\times\frac{r+1}{2}},$ $\displaystyle
A_{3}=T_{r}(2+2\cos(\theta))K^{Even}_{r,\frac{r+1}{2}}$
$\displaystyle=\begin{bmatrix}1&&&&\\\ 2&&&&\\\ 1&1&&&\\\ &2&&&\\\ &1&&&\\\
&&\ddots&&\\\ &&&1&\\\ &&&2&\\\
&&&1&1\end{bmatrix}\in\mathbb{R}^{r\times\frac{r+1}{2}}.$
Then (38) holds with
$\displaystyle\hat{a}_{-1}$
$\displaystyle=\begin{bmatrix}A_{3}&|&\textbf{0}_{r,\frac{r-1}{2}}&\end{bmatrix},$
$\displaystyle\hat{a}_{0}$
$\displaystyle=\begin{bmatrix}&\textbf{0}_{r,\frac{r-1}{2}}&|&A_{2}&\end{bmatrix},$
$\displaystyle\hat{a}_{1}$
$\displaystyle=\left[\begin{array}[]{ccc|c}&\textbf{0}_{1,r-1}&&1\\\
\cline{1-4}\cr&&&\\\
&\textbf{0}_{r-1,r-1}&&\textbf{0}^{T}_{r-1}\end{array}\right].$
Further we have
(41)
$\sum_{j=1}^{\frac{r+1}{2}}[A_{2}]_{1,j}=2,\,\sum_{j=1}^{\frac{r+1}{2}}[A_{3}]_{1,j}=1\,\text{
and
}\,\sum_{j=1}^{\frac{r+1}{2}}[A_{2}]_{i,j}=\sum_{j=1}^{\frac{r+1}{2}}[A_{3}]_{i,j}=2\text{
for $i=2,\dots,r$.}$
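The matrices $A_{1}$, $A_{2}$, $A_{3}$ above can also be generated programmatically; a minimal sketch of ours follows, in which the index convention of the cutting matrices $K^{Even}/K^{Odd}$ (which parity of columns is kept) is an assumption, not taken from the text.

```python
import numpy as np

def T_cos(r):
    """T_r(2 + 2cos(theta)): tridiagonal Toeplitz, 2 on the diagonal, 1 off it."""
    return 2 * np.eye(r) + np.eye(r, k=1) + np.eye(r, k=-1)

def K_cut(r, k, even=True):
    """Cutting matrix keeping every other column (assumed index convention)."""
    K = np.zeros((r, k))
    off = 1 if even else 0
    for j in range(k):
        K[2 * j + off, j] = 1.0
    return K

r = 4                                            # even-degree example
A1 = T_cos(r) @ K_cut(r, r // 2, even=True)      # the matrix A_1
print(A1)                                        # rows sum to 1, 2, 2, ... as in (40)
```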
In the following we show that $\mathbf{p}_{{}_{L_{r}}}$ satisfies the
hypotheses of Lemma 4.4.
###### Lemma 5.2.
Let $\mathbf{p}_{{}_{L_{r}}}$ be the $r\times r$ trigonometric polynomial
defined in (39), and ${\rm e}_{r}=[1,\dots,1]^{T}\in\mathbb{R}^{r}$. Then,
1. 1)
$\mathbf{p}_{{}_{L_{r}}}(0)\,{\rm e}_{r}=4\,{\rm e}_{r}$.
2. 2)
$\mathbf{p}_{{}_{L_{r}}}(\pi)\,{\rm e}_{r}=0\,{\rm e}_{r}$.
3. 3)
$\mathbf{p}_{{}_{L_{r}}}(0)^{H}\,{\rm e}_{r}=4\,{\rm e}_{r}$.
###### Proof 5.3.
The first two items are equivalent to requiring that the sum of the elements in
each row of the matrices $\mathbf{p}_{{}_{L_{r}}}(0)$ and
$\mathbf{p}_{{}_{L_{r}}}(\pi)$ is $4$ and $0$, respectively. Hence, to prove
$1)$, it is sufficient to show that
$\sum_{j=1}^{r}[\mathbf{p}_{{}_{L_{r}}}(0)]_{i,j}=\sum_{j=1}^{r}[\hat{a}_{0}+\hat{a}_{1}+\hat{a}_{-1}]_{i,j}=4,\quad
i=1,\dots,r.$
Then, we can exploit the structure of the Fourier coefficients
$\hat{a}_{-1},\hat{a}_{1},\hat{a}_{0}$ for even and odd degree. In particular,
looking at the structure of the matrices $A_{1}$, $A_{2}$, $A_{3}$ and at
relations (40) and (41), we have that for even degree $r$
$\begin{split}&\sum_{j=1}^{r}[\mathbf{p}_{{}_{L_{r}}}(0)]_{i,j}=\begin{cases}1+\left(2\sum_{j=1}^{\frac{r}{2}}[A_{1}]_{1,j}\right)+1=4,&{\rm
for}\,i=1\\\ \left(2\sum_{j=1}^{\frac{r}{2}}[A_{1}]_{i,j}\right)=4,&{\rm
for}\,i=2,\dots,r\\\ \end{cases},\end{split}$
and for odd degree $r$
$\begin{split}\sum_{j=1}^{r}[\mathbf{p}_{{}_{L_{r}}}(0)]_{i,j}=\begin{cases}\left(\sum_{j=1}^{\frac{r+1}{2}}[A_{3}]_{1,j}+[A_{2}]_{1,j}\right)+1=4,&{\rm
for}\,i=1,\\\
\left(\sum_{j=1}^{\frac{r+1}{2}}[A_{3}]_{i,j}+[A_{2}]_{i,j}\right)=4,&{\rm
for}\,i=2,\dots,r.\end{cases}\end{split}$
The proof of $2)$ can be repeated following the idea in $1)$ and noting that
$\mathbf{p}_{{}_{L_{r}}}(\pi)=\hat{a}_{0}-\hat{a}_{1}-\hat{a}_{-1}.$
Analogously, the third item can be proven following the same idea as in $1)$,
showing that the sum of the elements in each column of the matrix
$\mathbf{p}_{{}_{L_{r}}}(0)$ is $4$. Since this is a straightforward
computation, we omit the details.
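The identities of Lemma 5.2 are easy to check numerically; here is a minimal sketch of ours for $r=2$, with the Fourier coefficients written out explicitly (their values follow from the even-degree construction above, with $A_{1}=[1,2]^{T}$).

```python
import numpy as np

a_m1 = np.array([[1.0, 0.0], [2.0, 0.0]])   # \hat{a}_{-1} = [A_1 | 0]
a_0  = np.array([[1.0, 1.0], [0.0, 2.0]])   # \hat{a}_{0}
a_p1 = np.array([[0.0, 1.0], [0.0, 0.0]])   # \hat{a}_{1}

def p_L(theta):
    return a_0 + a_m1 * np.exp(-1j * theta) + a_p1 * np.exp(1j * theta)

e = np.ones(2)
assert np.allclose(p_L(0.0) @ e, 4 * e)           # item 1)
assert np.allclose(p_L(np.pi) @ e, 0 * e)         # item 2)
assert np.allclose(p_L(0.0).conj().T @ e, 4 * e)  # item 3)
```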
The latter result, together with Lemma 4.4, permits us to conclude that
$\mathbf{p}_{{}_{L_{r}}}$ satisfies condition $(ii)$, once we prove that
it satisfies condition $(i)$, so that the matrix-valued function $\textbf{s}$ is
well-defined. By direct computation, we find that for both even and odd $r$ we
have
$\mathbf{p}_{{}_{L_{r}}}(\theta)^{H}\mathbf{p}_{{}_{L_{r}}}(\theta)+\mathbf{p}_{{}_{L_{r}}}(\theta+\pi)^{H}\mathbf{p}_{{}_{L_{r}}}(\theta+\pi)=\begin{bmatrix}12&2&0&\dots&2{\rm
e}^{2\imath\theta}\\\ 2&12&2&\dots&0\\\ &&\ddots&\\\ 0&&&12&2\\\ 2{\rm
e}^{-2\imath\theta}&0&\dots&2&12\end{bmatrix},$
which is clearly a positive definite matrix for all $\theta\in[0,2\pi)$, so
$\mathbf{p}_{{}_{L_{r}}}$ satisfies condition $(i)$. Then, the function
$\textbf{s}(\theta)$ defined in (7) is well-defined.
The validation of condition $(iii)$ will be investigated in Subsection 5.4,
where we treat it both for the projector associated with
$\mathbf{p}_{{}_{L_{r}}}$ and for the classical geometric projector described in
the next subsection.
### 5.3 The projector using the finite element basis functions
In the present subsection we deal with the projector constructed
following the classical approach used for finite elements [4]. It has already
been treated for the $\mathbb{Q}_{r}$ Lagrangian FEM stiffness matrices in
the algebraic multigrid setting: in [14] the authors proved the
optimality of the TGM for $r=2,3$. Our goal is to generalize the study
of the multigrid procedure, proving that the restriction matrix can be
written for any degree $r$ in the form
(42) $P_{n,k}^{{r}}=T_{n}(\mathbf{p}_{{}_{G_{r}}})(K_{n,k}^{Even}\otimes
I_{d}),$
with $\mathbf{p}_{{}_{G_{r}}}$ being a matrix-valued trigonometric polynomial
that satisfies the hypotheses of Lemma 4.4, and then those of Theorem 4.1.
Let us start by fixing $r$ and taking first $n=2$. Then from (33) we have
the knot sequence $\xi_{i}^{2r}=\frac{i}{2r},\,i=0,\dots,2r.$ If we take $n=4$,
the uniform knot sequence is $\xi_{i}^{4r}=\frac{i}{4r},\,i=0,\dots,4r,$ and
thus it can be obtained from $\\{\xi_{i}^{2r}\\}_{i}$ by adding the midpoint of
each subinterval defined by the points in $\\{\xi_{i}^{2r}\\}_{i}$.
Taking the Lagrangian basis functions for the spaces $W_{2}^{r}$ and
$W_{4}^{r}$, defined in Subsection 5.1, the latter observation implies that
(43) $W_{2}^{r}\subseteq W_{4}^{r}.$
The geometric multigrid strategy suggests constructing a prolongation operator
$\mathcal{P}~{}:~{}W_{2}^{r}\rightarrow W_{4}^{r}$ by imposing
$\mathcal{P}v^{2,r}=v^{2,r},$ $\forall v^{2,r}\in W_{2}^{r}.$ A basis function
$\phi_{j}^{2,r}$ can then be written as a linear combination of the functions
$\phi_{i}^{4,r}$, that is,
$\phi_{j}^{2,r}(x)=\sum_{i=1}^{4r-1}c_{i}\phi_{i}^{4,r}(x).$ From the
properties of the basis functions, we have that
$\phi_{j}^{4,r}(\xi_{i})=\delta_{i,j}$. Consequently, for $i=1,\dots,4r-1$, the
coefficients $c_{i}$ are given by the evaluations
$\phi_{j}^{2,r}(\xi_{i})$, and this implies that
$\phi_{j}^{2,r}(x)=\sum_{i=1}^{4r-1}\phi_{j}^{2,r}(\xi_{i})\phi_{i}^{4,r}(x).$
Therefore, the $j$-th column $({P})_{j}$ of the matrix $P$ representing the
prolongation operator $\mathcal{P}$ is given by
(44) $({P})_{j}=[\phi_{j}^{2,r}(\xi_{i})]_{i=1}^{4r-1}.$
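These columns can be generated programmatically by evaluating the coarse Lagrangian basis functions at the fine knots. The following sketch is our illustration (the helper names are ours); it exploits the piecewise-polynomial structure of $W_{n}^{(r)}$.

```python
import numpy as np

def lagrange_eval(nodes, j, x):
    """Evaluate the Lagrange cardinal polynomial of node j at the points x."""
    val = np.ones_like(x)
    for m, xm in enumerate(nodes):
        if m != j:
            val *= (x - xm) / (nodes[j] - xm)
    return val

def prolongation_column(r, n_coarse, j):
    """Column j of P in (44): evaluations of the coarse basis function
    phi_j^{n_coarse,r} at the fine knots xi_i = i/(2*n_coarse*r)."""
    fine = np.arange(1, 2 * n_coarse * r) / (2 * n_coarse * r)
    col = np.zeros(fine.size)
    h = 1.0 / n_coarse
    for e in range(n_coarse):                    # loop over coarse elements
        nodes = (e + np.arange(r + 1) / r) * h   # r+1 local Lagrange nodes
        loc = j - e * r                          # local index of global node j
        if 0 <= loc <= r:                        # phi_j supported on element e
            mask = (fine >= nodes[0] - 1e-14) & (fine <= nodes[-1] + 1e-14)
            col[mask] = lagrange_eval(nodes, loc, fine[mask])
    return col

# Columns of P for the coarse/fine pair W_2^r -> W_4^r:
# P = np.column_stack([prolongation_column(r, 2, j) for j in range(1, 2 * r)])
```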
Since we are in the setting of multigrid methods for $r\times r$ block-
Toeplitz matrices, from Subsection 3.2 we have that $n$ is taken of the form
$2^{t}-1$, $k=\frac{n-1}{2}$ and we look for a prolongation matrix of the form
(42). Taking inspiration from [14], we define the matrix-valued trigonometric
polynomial $\mathbf{p}_{{}_{G_{r}}}$ by
(45) $\mathbf{p}_{{}_{G_{r}}}(\theta)=\hat{b}_{0}+\hat{b}_{1}{\rm
e}^{\iota\theta}+\hat{b}_{-1}{\rm e}^{-\iota\theta}+\hat{b}_{2}{\rm
e}^{2\iota\theta}+\hat{b}_{-2}{\rm e}^{-2\iota\theta}$
with $\hat{b}_{-2}=0_{r\times
r},\,\hat{b}_{-1}=[P_{i,j}]_{\begin{subarray}{c}i=1,\dots,r\\\
j=1,\dots,r\end{subarray}},\,\hat{b}_{0}=[P_{i,j}]_{\begin{subarray}{c}i=r+1,\dots,2r\\\
j=1,\dots,r\end{subarray}},\,\hat{b}_{1}=[P_{i,j}]_{\begin{subarray}{c}i=2r+1,\dots,3r\\\
j=1,\dots,r\end{subarray}},\,$ and
$\hat{b}_{2}=[P_{i,j}]_{\begin{subarray}{c}i=3r+1,\dots,4r\\\
j=1,\dots,r\end{subarray}}.$ Then, from the expression of the columns of the
matrix $P$ in (44), we have
$\displaystyle\hat{b}_{-1}=\begin{bmatrix}\phi_{1}^{2,r}(\xi_{1}^{4,r})&\dots&\phi_{{\rm
r}}^{2,r}(\xi_{1}^{4,r})\\\ \vdots&&\vdots\\\
\phi_{1}^{2,r}(\xi_{r}^{4,r})&\dots&\phi_{{\rm r}}^{2,r}(\xi_{r}^{4,r})\\\
\end{bmatrix},\quad$
$\displaystyle\hat{b}_{0}=\begin{bmatrix}\phi_{1}^{2,r}(\xi_{r+1}^{4,r})&\dots&\phi_{{\rm
r}}^{2,r}(\xi_{r+1}^{4,r})\\\ \vdots&&\vdots\\\
\phi_{1}^{2,r}(\xi_{2r}^{4,r})&\dots&\phi_{{\rm r}}^{2,r}(\xi_{2r}^{4,r})\\\
\end{bmatrix},$ $\displaystyle\hat{b}_{1}=\begin{bmatrix}0&\dots&0&\phi_{{\rm
r}}^{2,r}(\xi_{2r+1}^{4,r})\\\ \vdots&&\vdots&\vdots\\\ 0&\dots&0&\phi_{{\rm
r}}^{2,r}(\xi_{3r}^{4,r})\\\ \end{bmatrix},\quad$
$\displaystyle\hat{b}_{2}=\begin{bmatrix}0&\dots&0&\phi_{{\rm
r}}^{2,r}(\xi_{3r+1}^{4,r})\\\ \vdots&&\vdots&\vdots\\\ 0&\dots&0&\phi_{{\rm
r}}^{2,r}(\xi_{4r}^{4,r})\\\ \end{bmatrix}.$
where, for the expressions of $\hat{b}_{1}$ and $\hat{b}_{2}$, we are using
the fact that the sets ${\rm supp}(\phi_{1}^{2,r}),\dots,{\rm
supp}(\phi_{r-1}^{2,r})$ are included in $[0,\xi_{2r}^{4,r}]$, see [18, p.
1108]. Moreover, from [18, Equation (3.5)] we can also see that, for
$i=2r+1,\dots,4r-1$,
$\phi_{r}^{2,r}(\xi_{i}^{4,r})=\phi_{0}^{2,r}\left(\xi_{i}^{4,r}-\frac{1}{2}\right)=\phi_{0}^{2,r}(\xi_{i-2r}^{4,r}),$
hence we have
(46)
$\begin{split}\hat{b}_{1}=\begin{bmatrix}0&\dots&0&\phi_{0}^{2,r}(\xi_{1}^{4,r})\\\
\vdots&&\vdots&\vdots\\\ 0&\dots&0&\phi_{0}^{2,r}(\xi_{r}^{4,r})\\\
\end{bmatrix}\quad{\rm
and}\quad\hat{b}_{2}=\begin{bmatrix}0&\dots&0&\phi_{0}^{2,r}(\xi_{r+1}^{4,r})\\\
\vdots&&\vdots&\vdots\\\ 0&\dots&0&\phi_{0}^{2,r}(\xi_{2r}^{4,r})\\\
\end{bmatrix}.\\\ \end{split}$
We want to prove that such a projector $P^{{r}}_{n,k}$ satisfies the hypotheses
of Lemma 4.4. Since, from Theorem 5.1, we know that ${\rm
e}_{r}=[1,\dots,1]^{T}\in\mathbb{R}^{r}$ is the eigenvector of $\mathbf{f}(0)$
associated with the ill-conditioned subspace, the next lemma proves that
$P^{{r}}_{n,k}$ satisfies the hypotheses of Theorem 4.1.
###### Lemma 5.4.
Let $\mathbf{p}_{{}_{G_{r}}}$ be the $r\times r$ trigonometric polynomial
defined in (45), and ${\rm e}_{r}=[1,\dots,1]^{T}\in\mathbb{R}^{r}$. Then
1. 1)
$\mathbf{p}_{{}_{G_{r}}}(0)\,{\rm e}_{r}=2\,{\rm e}_{r}$,
2. 2)
$\mathbf{p}_{{}_{G_{r}}}(\pi)\,{\rm e}_{r}=0\,{\rm e}_{r}$,
3. 3)
$\mathbf{p}_{{}_{G_{r}}}(0)$ is non-singular.
###### Proof 5.5.
Note that the thesis is equivalent to requiring that the sum of the elements in
each row of the matrices $\mathbf{p}_{{}_{G_{r}}}(0)$ and
$\mathbf{p}_{{}_{G_{r}}}(\pi)$ is $2$ and $0$, respectively. Then, to prove
item $1)$, we show that for every $i=1,\dots,r$
$\sum_{j=1}^{r}[\mathbf{p}_{{}_{G_{r}}}(0)]_{i,j}=2.$ The expression of
$\mathbf{p}_{{}_{G_{r}}}(\theta)$ in (45) yields
$\mathbf{p}_{{}_{G_{r}}}(0)=\hat{b}_{0}+\hat{b}_{1}+\hat{b}_{-1}+\hat{b}_{2},$
then we have for $i=1,\dots,r$
(47)
$\begin{split}&\sum_{j=1}^{r}[\mathbf{p}_{{}_{G_{r}}}(0)]_{i,j}=\sum_{j=1}^{r}[\hat{b}_{0}+\hat{b}_{1}+\hat{b}_{-1}+\hat{b}_{2}]_{i,j}=\\\
&\left(\sum_{j=1}^{r}\phi_{j}^{2,r}(\xi_{i+r}^{4,r})\right)+\phi_{0}^{2,r}(\xi_{i}^{4,r})+\left(\sum_{j=1}^{r}\phi_{j}^{2,r}(\xi_{i}^{4,r})\right)+\phi_{0}^{2,r}(\xi_{i+r}^{4,r})\end{split}.$
From the fact that the basis functions form a partition of unity and since
${\rm supp}(\phi_{j}^{2,r})\cap[0,\xi^{4,r}_{2r}]=\emptyset$,
$j=r+1,\dots,2r,$ we have, for $i=1,\dots,r$, that
$\sum_{j=1}^{r}[\mathbf{p}_{{}_{G_{r}}}(0)]_{i,j}=\left(\sum_{j=0}^{2r}\phi_{j}^{2,r}(\xi_{i}^{4,r})\right)+\left(\sum_{j=0}^{2r}\phi_{j}^{2,r}(\xi_{i+r}^{4,r})\right)=2.$
In order to prove item $2)$ we write analogously
$\mathbf{p}_{{}_{G_{r}}}(\pi)=\hat{b}_{0}-\hat{b}_{1}-\hat{b}_{-1}+\hat{b}_{2},$
and, for $i=1,\dots,r$,
$\begin{split}\sum_{j=1}^{r}[\mathbf{p}_{{}_{G_{r}}}(\pi)]_{i,j}&=\sum_{j=1}^{r}[\hat{b}_{0}-\hat{b}_{1}-\hat{b}_{-1}+\hat{b}_{2}]_{i,j}\\\
&=\left(\sum_{j=0}^{2r}\phi_{j}^{2,r}(\xi_{i+r}^{4,r})\right)-\left(\sum_{j=0}^{2r}\phi_{j}^{2,r}(\xi_{i}^{4,r})\right)=0.\end{split}$
Item $3)$ is equivalent to ${\rm
det}\,\left(\mathbf{p}_{{}_{G_{r}}}(0)\right)\neq 0$. By direct computation,
we have that
${\rm det}\,\left(\mathbf{p}_{{}_{G_{r}}}(\theta)\right)=\frac{{\rm
e}^{-r\iota\theta}({\rm e}^{\iota\theta}+1)^{r+1}}{2^{\frac{r(r+1)}{2}}}.$
Finally, since ${\rm e}^{-r\iota\theta}\neq 0$ for all $\theta\in[0,2\pi]$ and
${\rm e}^{\iota\theta}+1=0$ only if $\theta=\pi+2\ell\pi$, $\ell\in\mathbb{Z}$, we have that
${\rm det}\,\left(\mathbf{p}_{{}_{G_{r}}}(\theta)\right)\neq 0,\quad{\rm
for}\quad\theta\in[0,2\pi]\setminus\\{\pi\\},$
hence $\mathbf{p}_{{}_{G_{r}}}(0)$ is non-singular.
Once we verify that $\mathbf{p}_{{}_{G_{r}}}$ satisfies condition $(i)$, we
can use Lemmas 5.4 and 4.6 to conclude that the matrix-valued function $\textbf{s}$ is
well-defined and that $\mathbf{p}_{{}_{G_{r}}}$ satisfies condition $(ii)$. To
prove that
$\mathbf{p}_{{}_{G_{r}}}(\theta)^{H}\mathbf{p}_{{}_{G_{r}}}(\theta)+\mathbf{p}_{{}_{G_{r}}}(\theta+\pi)^{H}\mathbf{p}_{{}_{G_{r}}}(\theta+\pi)>0,$
it is sufficient to show that
$\mathbf{p}_{{}_{G_{r}}}(\theta)^{H}\mathbf{p}_{{}_{G_{r}}}(\theta)$
and
$\mathbf{p}_{{}_{G_{r}}}(\theta+\pi)^{H}\mathbf{p}_{{}_{G_{r}}}(\theta+\pi)$
are non-negative definite matrix-valued functions which are singular
only at $\theta_{1}$ and $\theta_{2}$, respectively, with
$\theta_{1}\neq\theta_{2}.$ Indeed, we have that
${\rm
det}\,\left(\mathbf{p}_{{}_{G_{r}}}(\theta)^{H}\mathbf{p}_{{}_{G_{r}}}(\theta)\right)={\rm
det}\,(\mathbf{p}_{{}_{G_{r}}}(\theta))^{2}=\frac{{\rm
e}^{-2r\iota\theta}({\rm e}^{\iota\theta}+1)^{2(r+1)}}{2^{r(r+1)}},$
which is zero for $\theta_{1}=\pi.$ Analogously, it holds
$\begin{split}&{\rm
det}\,\left(\mathbf{p}_{{}_{G_{r}}}(\theta+\pi)^{H}\mathbf{p}_{{}_{G_{r}}}(\theta+\pi)\right)=\\\
&{\rm
det}\,\left(\mathbf{p}_{{}_{G_{r}}}(\theta+\pi)\right)^{2}=\left(\frac{{\rm
e}^{-r\iota(\theta+\pi)}({\rm
e}^{\iota(\theta+\pi)}+1)^{r+1}}{2^{\frac{r(r+1)}{2}}}\right)^{2}=\frac{{\rm
e}^{-2r\iota\theta}({\rm
e}^{\iota\theta}-1)^{2(r+1)}}{2^{r(r+1)}},\end{split}$
which is zero for $\theta_{2}=0.$
### 5.4 Optimal convergence of the V-cycle using the projector
$P_{n,k}^{{}_{\mathbb{Q}_{r}}}$
For both projectors described in Subsections 5.2 and 5.3 we have to verify the
limit condition (iii), in order to theoretically ensure the TGM optimality.
For this purpose it is sufficient to show either that the function
$1-\lambda_{\bar{\jmath}}(\textbf{s}(\theta))$ has a zero of at least the same
order as $\lambda_{\bar{\jmath}}(\mathbf{f}_{\mathbb{Q}_{r}}(\theta))$, or,
using the result in Lemma 4.9, that this property is satisfied by the
eigenvalue function
$\lambda_{\bar{\jmath}}(\mathbf{p}(\theta+\pi))^{2}$.
1. 1.
For the linear interpolation operator:
* •
for even degree, we have that $\textbf{s}(\theta)$ is a projector, since it can
be easily verified that
$\textbf{s}^{2}(\theta)-\textbf{s}(\theta)=\textbf{0}_{r\times r}$. Hence,
from condition (ii), we have $\lambda_{\bar{\jmath}}(\textbf{s}(0))=1$ and,
from the continuity of the eigenvalue functions (Lemma 3.1),
$\lambda_{\bar{\jmath}}(\textbf{s}(\theta))\equiv 1$. It is then
straightforward to see that condition (iii) is verified;
* •
for odd degree, it can be numerically verified that
$\lambda_{\bar{\jmath}}(\mathbf{p}_{{}_{L_{r}}}(\theta+\pi))^{2}$ has a zero
of order $4$ at $0$, and hence condition (iii) is verified. For this purpose, we
can numerically study the behavior of the function
$\det(\mathbf{p}_{{}_{L_{r}}}(\theta+\pi)\mathbf{p}_{{}_{L_{r}}}(\theta+\pi)^{H})$.
Indeed, since for $l=2,\dots,r,\,\theta\in[0,2\pi],$
(48)
$\lambda_{\bar{\jmath}}\left(\mathbf{p}_{{}_{L_{r}}}(\theta+\pi)\mathbf{p}_{{}_{L_{r}}}(\theta+\pi)^{H}\right)<\lambda_{l}\left(\mathbf{p}_{{}_{L_{r}}}(\theta+\pi)\mathbf{p}_{{}_{L_{r}}}(\theta+\pi)^{H}\right),$
the behavior of
$\lambda_{\bar{\jmath}}(\mathbf{p}_{{}_{L_{r}}}(\theta+\pi)\mathbf{p}_{{}_{L_{r}}}(\theta+\pi)^{H})$
at $0$ is equivalent to that of
$\det\left(\mathbf{p}_{{}_{L_{{r}}}}(\theta+\pi)\mathbf{p}_{{}_{L_{{r}}}}(\theta+\pi)^{H}\right)=\prod_{l=1}^{r}\lambda_{l}\left(\mathbf{p}_{{}_{L_{{r}}}}(\theta+\pi)\mathbf{p}_{{}_{L_{{r}}}}(\theta+\pi)^{H}\right)$
at the same point, which, being a product of non-negative functions, is still a
non-negative function. We numerically checked that
$\displaystyle\det\left(\mathbf{p}_{{}_{L_{{r}}}}(\theta+\pi)\mathbf{p}_{{}_{L_{{r}}}}(\theta+\pi)^{H}\right)={\rm
e}^{-2\iota\theta}({\rm e}^{\iota\theta}-1)^{4},$
which has a zero of order $4$ in $0$.
2. 2.
For the geometric projector we consider even and odd degrees $r$
simultaneously and follow the same strategy, studying the behavior of
$\det\left(\mathbf{p}_{{}_{G_{{r}}}}(\theta+\pi)\mathbf{p}_{{}_{G_{{r}}}}(\theta+\pi)^{H}\right)$
at $0$. From the proof of item 3) of Lemma 5.4 we have that
$\displaystyle\det\left(\mathbf{p}_{{}_{G_{{r}}}}(\theta+\pi)\mathbf{p}_{{}_{G_{{r}}}}(\theta+\pi)^{H}\right)=\frac{{\rm
e}^{-2r\iota\theta}({\rm e}^{\iota\theta}-1)^{2(r+1)}}{2^{r(r+1)}},$
which clearly has a zero of order $2(r+1)$ in $0$.
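Both determinant functions have closed forms, so the orders of their zeros at $0$ can also be confirmed with a quick numerical heuristic; the helper `zero_order` below is ours.

```python
import numpy as np

def zero_order(g, theta0=0.0, eps=1e-3):
    """Rough estimate of the order k of the zero of |g| at theta0, from the
    decay |g(theta0 + eps)| ~ C eps^k (halving eps multiplies it by 2^-k)."""
    return np.log2(abs(g(theta0 + eps)) / abs(g(theta0 + eps / 2)))

# The two determinant functions studied above (odd-degree p_{L_r}, and p_{G_r} with r = 3):
det_L = lambda t: np.exp(-2j * t) * (np.exp(1j * t) - 1) ** 4
det_G = lambda t: np.exp(-6j * t) * (np.exp(1j * t) - 1) ** 8 / 2 ** 12
print(zero_order(det_L))   # approximately 4
print(zero_order(det_G))   # approximately 2*(r+1) = 8
```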
### 5.5 Example $r=2$
We conclude the section by applying the results of Subsections 4.1 and 4.2 to the
scalar linear interpolation projector in the specific case $r=2$, for the problem
$\left\\{\begin{array}[]{ll}-(a(x)u^{\prime}(x))^{\prime}=\psi(x),&{\rm
on}\,(0,1),\\\ u(0)=u(1)=0,\end{array}\right.$
which is the variable-coefficient version of the problem in (30). In this
setting the grid transfer operator $P_{n,k}^{{2}}$ is a $2n\times(n-1)$ matrix
given by
(49) $P_{n,k}^{{2}}=T_{n}(\mathbf{p}_{{}_{L_{2}}})(K^{Even}_{n,k}\otimes
I_{2}),$
with matrix-valued trigonometric polynomial
$\mathbf{p}_{{}_{L_{2}}}(\theta)=\begin{bmatrix}1+{\rm e}^{-\iota\theta}&{\rm
e}^{\iota\theta}+1\\\ 2{\rm e}^{-\iota\theta}&2\end{bmatrix}.$ The cut
stiffness matrix $[T_{n}(\mathbf{\textbf{f}}_{\mathbb{Q}_{2}})]_{-}$ has the
associated generating function
$\begin{split}\mathbf{f}_{\mathbb{Q}_{2}}(\theta)=&\frac{1}{3}\left(\begin{bmatrix}16&-8\\\
-8&14\end{bmatrix}+\begin{bmatrix}0&-8\\\ 0&1\end{bmatrix}{\rm
e}^{\iota\theta}+\begin{bmatrix}0&0\\\ -8&1\end{bmatrix}{\rm
e}^{-{\iota}\theta}\right)=\\\ &\frac{1}{3}\begin{bmatrix}16&-8(1+{\rm
e}^{\iota\theta})\\\ -8(1+{\rm e}^{-\iota\theta})&14+{\rm
e}^{\iota\theta}+{\rm e}^{-\iota\theta}\end{bmatrix},\end{split}$
with the following properties:
* •
$\lambda_{1}(\mathbf{\textbf{f}}_{\mathbb{Q}_{2}}(\theta))$ has a zero of
order 2 in $\theta_{0}=0$.
* •
$\mathbf{\textbf{f}}_{\mathbb{Q}_{{2}}}(0)q_{1}(0)=0$, $q_{1}(0)=[1,1]^{T}$.
By direct computation it is possible to check that the trigonometric
polynomial $\mathbf{p}_{{}_{L_{2}}}$ verifies
* •
$\left.\mathbf{p}_{{}_{L_{2}}}(\theta)^{H}\mathbf{p}_{{}_{L_{2}}}(\theta)+\mathbf{p}_{{}_{L_{2}}}(\theta+\pi)^{H}\mathbf{p}_{{}_{L_{2}}}(\theta+\pi)=\begin{bmatrix}12&2+2{\rm e}^{2\iota\theta}\\\
2+2{\rm e}^{-2\iota\theta}&12\end{bmatrix}>0.\quad\right\\}\Rightarrow{\,(i)}$
* •
$\left.\begin{tabular}[]{@{}c@{}}$\mathbf{p}_{{}_{L_{2}}}(0)\,{\rm
e}_{2}=\begin{bmatrix}2&2\\\ 2&2\end{bmatrix}{\rm e}_{2}=4\,{\rm e}_{2}$\\\
$\mathbf{p}_{{}_{L_{2}}}(\pi)\,{\rm e}_{2}=\begin{bmatrix}0&0\\\
-2&2\end{bmatrix}{\rm e}_{2}=0\,{\rm e}_{2}$\\\
$\,\mathbf{p}_{{}_{L_{2}}}(0)^{H}\,{\rm e}_{2}=\begin{bmatrix}2&2\\\
2&2\end{bmatrix}{\rm e}_{2}=4\,{\rm e}_{2}$\\\
\end{tabular}\qquad\qquad\qquad\qquad\qquad\qquad\quad\right\\}\Rightarrow{\,(ii)}$
* •
$\lambda_{1}(\mathbf{p}_{{}_{L_{2}}}(\theta+\pi))=0.$
$\qquad\quad\quad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\left.\right\\}\Rightarrow{(\ref{eq:final_bound_vcycle})}$
In particular, the three identities in the second item are the hypotheses of
Lemma 4.4 needed for the validation of $(ii)$. The fact that the minimum
eigenvalue function of $\mathbf{p}_{{}_{L_{2}}}(\theta+\pi)$ is identically
zero implies the thesis of Lemma 4.13. Hence, the convergence and optimality
of the V-cycle method is ensured when it is applied to problem (30) using
$P_{n,k}^{{2}}$. This is reflected in the fact that the number of iterations
needed to reach convergence is constant when the problem size increases.
In Table 1 we show the results for the cases
$a(x)=1,\,x^{2}+1,\,{\rm e}^{x}-2x$, using 1 iteration of the Gauss-Seidel
method as pre- and post-smoother. We highlight that other smoothers are also
suitable for the optimality of the method. For instance, Lemma 2.2 guarantees
that the relaxed Richardson method can be used, provided that a preliminary
study for the choice of the damping parameter is performed.
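These properties are easy to confirm numerically; a minimal sketch of ours follows, using the explicit $\mathbf{p}_{{}_{L_{2}}}$ above.

```python
import numpy as np

p = lambda t: np.array([[1 + np.exp(-1j * t), 1 + np.exp(1j * t)],
                        [2 * np.exp(-1j * t), 2 + 0j]])
t = 0.3
S = p(t).conj().T @ p(t) + p(t + np.pi).conj().T @ p(t + np.pi)
assert np.all(np.linalg.eigvalsh(S) > 0)            # condition (i)
e2 = np.ones(2)
assert np.allclose(p(0) @ e2, 4 * e2)               # p_{L_2}(0) e_2 = 4 e_2
assert np.allclose(p(np.pi) @ e2, 0 * e2)           # p_{L_2}(pi) e_2 = 0
assert np.isclose(np.linalg.det(p(t + np.pi)), 0)   # lambda_1(p_{L_2}(t+pi)) = 0
```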
$t$ | TGM, $a(x)=1$ | V-Cycle, $a(x)=1$ | TGM, $a(x)=x^{2}+1$ | V-Cycle, $a(x)=x^{2}+1$ | TGM, $a(x)={\rm e}^{x}-2x$ | V-Cycle, $a(x)={\rm e}^{x}-2x$
---|---|---|---|---|---|---
4 | 6 | 6 | 7 | 7 | 6 | 7
5 | 6 | 6 | 7 | 7 | 6 | 7
6 | 6 | 6 | 7 | 7 | 6 | 7
7 | 6 | 6 | 7 | 7 | 6 | 7
8 | 6 | 6 | 7 | 7 | 6 | 7
9 | 6 | 6 | 7 | 7 | 6 | 7
10 | 6 | 6 | 7 | 7 | 6 | 7
Table 1: Two-grid and V-cycle iterations in 1D, with $N=d\cdot 2^{t}-1$, for $a(x)=1,\,x^{2}+1,\,{\rm e}^{x}-2x$, $tol=1\times 10^{-6}$.
## 6 Extension to multi-dimensional case
In the present section we give a possible extension of the convergence
results to the multidimensional setting. First, we need to introduce the
multi-index notation and define the objects of our analysis in several
dimensions.
Let ${\bf n}:=(n_{1},\ldots,n_{{m}})$ be a multi-index in $\mathbb{N}^{{m}}$
and set $N({r},\textbf{n}):={r}\prod_{i=1}^{{m}}n_{i}$. In particular, we need
to provide a generalized definition of the projector $P_{n,k}^{r}$ for the
${{m}}-$level block-circulant matrix
$A_{N}=\mathcal{A}_{\textbf{n}}(\mathbf{f})$ of dimension $N({r},\textbf{n})$
generated by a multilevel matrix-valued trigonometric polynomial $\mathbf{f}$.
A complete discussion on the multi-index notation can be found in [17].
Analogously to the scalar case, we want to construct the projectors from an
arbitrary multilevel block-circulant matrix
$\mathcal{A}_{\textbf{n}}(\mathbf{p})$, with $\mathbf{p}$ a multivariate
matrix-valued trigonometric polynomial. For the construction of the projector we can
use a tensor product approach:
(50) $P_{\textbf{n},\textbf{k}}=\mathcal{A}_{\bf
n}(\mathbf{p})\left(K_{\textbf{n},\textbf{k}}^{Even}\otimes I_{r}\right),$
where $K_{\textbf{n},\textbf{k}}^{Even}$ is the
$N(1,\textbf{n})\times\frac{N(1,\textbf{n})}{2^{{{m}}}}$ matrix defined by
$K_{\textbf{n},\textbf{k}}^{Even}=K_{n_{1},k_{1}}^{Even}\otimes
K_{n_{2},k_{2}}^{Even}\otimes\dots\otimes K_{n_{{m}},k_{{m}}}^{Even}$ and
$\mathcal{A}_{\textbf{n}}(\mathbf{p})$ is a multilevel block-circulant matrix
generated by $\mathbf{p}$. The main goal is to combine the proof of Theorem
4.1 with the multilevel techniques in [30], in order to generalize conditions
(i)-(iii) to the multilevel case.
In the ${{m}}$-level setting, we assume that there exist
$\boldsymbol{\theta}_{0}\in[0,2\pi)^{{m}}$ and
$\bar{\jmath}\in\\{1,\dots,r\\}$ such that
(51)
$\left\\{\begin{array}[]{ll}\lambda_{j}(\mathbf{f}(\boldsymbol{\theta}))=0&\mbox{for
}\boldsymbol{\theta}=\boldsymbol{\theta}_{0}\mbox{ and }j=\bar{\jmath},\\\
\lambda_{j}(\mathbf{f}(\boldsymbol{\theta}))>0&{\rm
otherwise}.\end{array}\right.$
The latter assumption means that the matrix $\mathbf{f}(\boldsymbol{\theta})$
has exactly one zero eigenvalue in $\boldsymbol{\theta}_{0}$ and it is
positive definite in $[0,2\pi)^{{m}}\backslash\\{\boldsymbol{\theta}_{0}\\}$.
Let us assume that $q_{\bar{\jmath}}(\boldsymbol{\theta}_{0})$ is the
eigenvector of $\mathbf{f}(\boldsymbol{\theta}_{0})$ associated with
$\lambda_{\bar{\jmath}}(\mathbf{f}(\boldsymbol{\theta}_{0}))=0$. Moreover,
define
$\Omega(\boldsymbol{\theta})=\left\\{\boldsymbol{\theta}+\pi\boldsymbol{\eta},\,\boldsymbol{\eta}\in\\{0,1\\}^{{m}}\right\\}$.
Under these hypotheses, the multilevel extension of conditions (i)-(iii),
which is sufficient to ensure the optimal convergence of the TGM in the
multilevel case, is the following: choose $\mathbf{p}(\cdot)$ such that
* •
(52)
$\sum_{\xi\in\Omega(\boldsymbol{\theta})}\mathbf{p}(\xi)^{H}\mathbf{p}(\xi)>0,\quad\forall\,\boldsymbol{\theta}\in[0,2\pi)^{{m}},$
which implies that the trigonometric function
$\textbf{s}(\boldsymbol{\theta})=\mathbf{p}(\boldsymbol{\theta})\left(\sum_{\xi\in\Omega(\boldsymbol{\theta})}\mathbf{p}(\xi)^{H}\mathbf{p}(\xi)\right)^{-1}\mathbf{p}(\boldsymbol{\theta})^{H}$
is well-defined for all $\boldsymbol{\theta}\in[0,2\pi)^{{m}}$.
* •
(53)
$\textbf{s}(\boldsymbol{\theta}_{0})q_{\bar{\jmath}}(\boldsymbol{\theta}_{0})=q_{\bar{\jmath}}(\boldsymbol{\theta}_{0}).$
* •
(54)
$\lim_{\boldsymbol{\theta}\rightarrow\boldsymbol{\theta}_{0}}\lambda_{\bar{\jmath}}(\mathbf{f}(\boldsymbol{\theta}))^{-1}(1-\lambda_{\bar{\jmath}}(\textbf{s}(\boldsymbol{\theta})))=c,$
where $c\in\mathbb{R}$ is a constant.
In the following we want to construct a multilevel projector
$P_{\textbf{n},\textbf{k}}$ such that conditions (52)-(54) are satisfied
and, therefore, the optimal convergence of the TGM, applied to problem (30) in
the multidimensional setting, is ensured. In particular, starting from
matrix-valued trigonometric polynomials $\mathbf{p}_{r_{\ell}},$
$\ell=1,\dots,{{m}}$, we aim at defining a multivariate polynomial
$\mathbf{p}^{({{m}})}_{\mathbf{r}}$, associated to the multilevel projector
$P_{{\textbf{n}},{\textbf{k}}}$, for which conditions (52)-(54) hold.
In the following lemmas we show that the aforementioned goal is achieved if
we choose the multivariate matrix-valued trigonometric polynomial
(55)
$\mathbf{p}^{({{m}})}_{\mathbf{r}}(\theta_{1},\theta_{2},\dots,\theta_{{{m}}})=\bigotimes_{\ell=1}^{{{m}}}\mathbf{p}_{{r_{\ell}}}(\theta_{\ell}),$
where the factors
$\mathbf{p}_{r_{\ell}}(\theta_{\ell})\in\mathbb{C}^{r_{\ell}\times r_{\ell}}$
are polynomials that satisfy conditions (i)-(iii).
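In code, the tensor-product construction (55) is essentially one line; a minimal sketch of ours, assuming the one-dimensional factors are given as callables:

```python
import numpy as np
from functools import reduce

def p_multilevel(p_factors, thetas):
    """Multivariate symbol (55): the Kronecker product of the one-dimensional
    factors p_{r_l}(theta_l); p_factors is a list of callables theta -> matrix."""
    return reduce(np.kron, [p(t) for p, t in zip(p_factors, thetas)])
```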
###### Lemma 6.1.
Let
$\mathbf{p}^{({{m}})}_{\mathbf{r}}(\theta_{1},\theta_{2},\dots,\theta_{{{m}}})$
be defined as in (55). Then,
$\sum_{\xi\in\Omega(\boldsymbol{\theta})}\mathbf{p}^{({{m}})}_{\mathbf{r}}(\xi)^{H}\mathbf{p}^{({{m}})}_{\mathbf{r}}(\xi)=\bigotimes_{\ell=1}^{{{m}}}\left(\mathbf{p}_{{r_{\ell}}}(\theta_{\ell})^{H}\mathbf{p}_{{r_{\ell}}}(\theta_{\ell})+\mathbf{p}_{{r_{\ell}}}(\theta_{\ell}+\pi)^{H}\mathbf{p}_{{r_{\ell}}}(\theta_{\ell}+\pi)\right).$
###### Proof 6.2.
By definition,
$\mathbf{p}^{({{m}})}_{\mathbf{r}}(\boldsymbol{\theta})=\bigotimes_{\ell=1}^{{{m}}}\mathbf{p}_{{r_{\ell}}}(\theta_{\ell})$,
hence
$\begin{split}\sum_{\xi\in\Omega(\boldsymbol{\theta})}\mathbf{p}^{({{m}})}_{\mathbf{r}}(\xi)^{H}\mathbf{p}^{({{m}})}_{\mathbf{r}}(\xi)&=\sum_{\xi\in\Omega(\boldsymbol{\theta})}\left(\bigotimes_{\ell=1}^{{{m}}}\mathbf{p}_{{r_{\ell}}}(\xi_{\ell})^{H}\right)\left(\bigotimes_{\ell=1}^{{{m}}}\mathbf{p}_{{r_{\ell}}}(\xi_{\ell})\right)\\\
&=\sum_{\xi\in\Omega(\boldsymbol{\theta})}\left(\bigotimes_{\ell=1}^{{{m}}}\left(\mathbf{p}_{{r_{\ell}}}(\xi_{\ell})^{H}\mathbf{p}_{{r_{\ell}}}(\xi_{\ell})\right)\right).\end{split}$
The proof is then concluded once we prove by induction on ${{m}}$ the
following equality:
(56)
$\sum_{\xi\in\Omega(\boldsymbol{\theta})}\left(\bigotimes_{\ell=1}^{{{m}}}\left(\mathbf{p}_{{r_{\ell}}}(\xi_{\ell})^{H}\mathbf{p}_{{r_{\ell}}}(\xi_{\ell})\right)\right)=\bigotimes_{\ell=1}^{{{m}}}\left(\mathbf{p}_{{r_{\ell}}}(\theta_{\ell})^{H}\mathbf{p}_{{r_{\ell}}}(\theta_{\ell})+\mathbf{p}_{{r_{\ell}}}(\theta_{\ell}+\pi)^{H}\mathbf{p}_{{r_{\ell}}}(\theta_{\ell}+\pi)\right).$
The equation above is clearly verified for ${{m}}=1$; indeed, by definition,
$\displaystyle\sum_{\xi\in\Omega(\theta_{1})}\mathbf{p}_{{r_{1}}}(\xi_{1})^{H}\mathbf{p}_{{r_{1}}}(\xi_{1})=\mathbf{p}_{r_{1}}(\theta_{1})^{H}\mathbf{p}_{{r_{1}}}(\theta_{1})+\mathbf{p}_{{r_{1}}}(\theta_{1}+\pi)^{H}\mathbf{p}_{{r_{1}}}(\theta_{1}+\pi)=\bigotimes_{\ell=1}^{1}\left(\mathbf{p}_{{r_{\ell}}}(\theta_{\ell})^{H}\mathbf{p}_{{r_{\ell}}}(\theta_{\ell})+\mathbf{p}_{{r_{\ell}}}(\theta_{\ell}+\pi)^{H}\mathbf{p}_{{r_{\ell}}}(\theta_{\ell}+\pi)\right).$
Let us assume that equality (56) is true for ${{m}}-1$. We have that
$\begin{split}&\bigotimes_{\ell=1}^{{{m}}}\left(\mathbf{p}_{{r_{\ell}}}(\theta_{\ell})^{H}\mathbf{p}_{{r_{\ell}}}(\theta_{\ell})+\mathbf{p}_{{r_{\ell}}}(\theta_{\ell}+\pi)^{H}\mathbf{p}_{{r_{\ell}}}(\theta_{\ell}+\pi)\right)=\\\
&\left[\bigotimes_{\ell=1}^{{{m}}-1}\left(\mathbf{p}_{{r_{\ell}}}(\theta_{\ell})^{H}\mathbf{p}_{{r_{\ell}}}(\theta_{\ell})+\mathbf{p}_{{r_{\ell}}}(\theta_{\ell}+\pi)^{H}\mathbf{p}_{{r_{\ell}}}(\theta_{\ell}+\pi)\right)\right]\otimes\\\
&\left(\mathbf{p}_{{r_{{m}}}}(\theta_{{m}})^{H}\mathbf{p}_{{r_{{m}}}}(\theta_{{m}})+\mathbf{p}_{{r_{{m}}}}(\theta_{{m}}+\pi)^{H}\mathbf{p}_{{r_{{m}}}}(\theta_{{m}}+\pi)\right)\end{split}$
The first factor of the latter tensor product is a function of the ${{m}}-1$ variables
$(\theta_{1},\theta_{2},\dots,\theta_{{{m}}-1})$. Then, by the inductive
hypothesis and from the properties of the tensor product we have
$\displaystyle\left[\bigotimes_{\ell=1}^{{{m}}-1}\left(\mathbf{p}_{{r_{\ell}}}(\theta_{\ell})^{H}\mathbf{p}_{{r_{\ell}}}(\theta_{\ell})+\mathbf{p}_{{r_{\ell}}}(\theta_{\ell}+\pi)^{H}\mathbf{p}_{{r_{\ell}}}(\theta_{\ell}+\pi)\right)\right]\otimes$
$\displaystyle\left(\mathbf{p}_{{r_{{m}}}}(\theta_{{m}})^{H}\mathbf{p}_{{r_{{m}}}}(\theta_{{m}})+\mathbf{p}_{{r_{{m}}}}(\theta_{{m}}+\pi)^{H}\mathbf{p}_{{r_{{m}}}}(\theta_{{m}}+\pi)\right)=$
$\displaystyle\left(\sum_{\begin{subarray}{c}(\xi_{1},\xi_{2},\dots,\xi_{{{m}}-1})\\\
\in\\\
\Omega(\theta_{1},\theta_{2},\dots,\theta_{{{m}}-1})\end{subarray}}\bigotimes_{\ell=1}^{{{m}}-1}\mathbf{p}_{{r_{\ell}}}(\xi_{\ell})^{H}\mathbf{p}_{{r_{\ell}}}(\xi_{\ell})\right)\otimes$
$\displaystyle\left(\mathbf{p}_{{r_{{m}}}}(\theta_{{m}})^{H}\mathbf{p}_{{r_{{m}}}}(\theta_{{m}})+\mathbf{p}_{{r_{{m}}}}(\theta_{{m}}+\pi)^{H}\mathbf{p}_{{r_{{m}}}}(\theta_{{m}}+\pi)\right)=$
$\displaystyle\sum_{\begin{subarray}{c}(\xi_{1},\xi_{2},\dots,\xi_{{{m}}-1})\\\
\in\\\
\Omega(\theta_{1},\theta_{2},\dots,\theta_{{{m}}-1})\end{subarray}}\\!\\!\left[\\!\left(\bigotimes_{\ell=1}^{{{m}}-1}\mathbf{p}_{{r_{\ell}}}(\xi_{\ell})^{H}\mathbf{p}_{{r_{\ell}}}(\xi_{\ell})\\!\right)\\!\\!\otimes\\!\\!\left(\mathbf{p}_{{r_{{m}}}}(\theta_{{m}})^{H}\mathbf{p}_{{r_{{m}}}}(\theta_{{m}})\\!+\\!\mathbf{p}_{{r_{{m}}}}(\theta_{{m}}\\!+\\!\pi)^{H}\mathbf{p}_{{r_{{m}}}}(\theta_{{m}}\\!+\\!\pi)\\!\right)\\!\right]\\!\\!=$
$\displaystyle\sum_{\begin{subarray}{c}\xi\in\\{(\theta_{1}+l_{1}\pi,\dots,\theta_{{{m}}-1}+l_{{{m}}-1}\pi)\\},\\\
\textbf{l}\in\\{0,1\\}^{{{m}}-1}\end{subarray}}\left[\left(\bigotimes_{\ell=1}^{{{m}}-1}\mathbf{p}_{{r_{\ell}}}(\xi_{\ell})^{H}\mathbf{p}_{{r_{\ell}}}(\xi_{\ell})\right)\otimes\mathbf{p}_{{r_{{m}}}}(\theta_{{m}})^{H}\mathbf{p}_{{r_{{m}}}}(\theta_{{m}})+\right.$
$\displaystyle+\left.\left(\bigotimes_{\ell=1}^{{{m}}-1}\mathbf{p}_{{r_{\ell}}}(\xi_{\ell})^{H}\mathbf{p}_{{r_{\ell}}}(\xi_{\ell})\right)\otimes\mathbf{p}_{{r_{{m}}}}(\theta_{{m}}+\pi)^{H}\mathbf{p}_{{r_{{m}}}}(\theta_{{m}}+\pi)\right]=$
$\displaystyle\sum_{\begin{subarray}{c}\xi\in\\{(\theta_{1}+l_{1}\pi,\dots,\theta_{{{m}}-1}+l_{{{m}}-1}\pi,\theta_{{{m}}})\\},\\\
\textbf{l}\in\\{0,1\\}^{{{m}}-1}\end{subarray}}\bigotimes_{\ell=1}^{{{m}}}\mathbf{p}_{{r_{\ell}}}(\xi_{\ell})^{H}\mathbf{p}_{{r_{\ell}}}(\xi_{\ell})+$
$\displaystyle\sum_{\begin{subarray}{c}\xi\in\\{(\theta_{1}+l_{1}\pi,\dots,\theta_{{{m}}-1}+l_{{{m}}-1}\pi,\theta_{{{m}}}+\pi)\\},\\\
\textbf{l}\in\\{0,1\\}^{{{m}}-1}\end{subarray}}\bigotimes_{\ell=1}^{{{m}}}\mathbf{p}_{{r_{\ell}}}(\xi_{\ell})^{H}\mathbf{p}_{{r_{\ell}}}(\xi_{\ell})=$
$\displaystyle\sum_{\xi\in\Omega(\boldsymbol{\theta})}\bigotimes_{\ell=1}^{{{m}}}\mathbf{p}_{{r_{\ell}}}(\xi_{\ell})^{H}\mathbf{p}_{{r_{\ell}}}(\xi_{\ell}).$
Then, relation (56) is verified for ${{m}}$, and this concludes the proof.
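Lemma 6.1 can also be double-checked numerically for ${{m}}=2$; the sketch below is ours and reuses $\mathbf{p}_{{}_{L_{2}}}$ from Subsection 5.5 as both factors.

```python
import numpy as np

p = lambda t: np.array([[1 + np.exp(-1j * t), 1 + np.exp(1j * t)],
                        [2 * np.exp(-1j * t), 2 + 0j]])
t1, t2 = 0.4, 1.1
corners = [(s1, s2) for s1 in (0.0, np.pi) for s2 in (0.0, np.pi)]
lhs = sum(np.kron(p(t1 + s1), p(t2 + s2)).conj().T
          @ np.kron(p(t1 + s1), p(t2 + s2)) for s1, s2 in corners)
one_d = lambda t: p(t).conj().T @ p(t) + p(t + np.pi).conj().T @ p(t + np.pi)
assert np.allclose(lhs, np.kron(one_d(t1), one_d(t2)))   # identity (56) for m = 2
```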
###### Lemma 6.3.
Let
$\mathbf{p}^{({{m}})}_{\mathbf{r}}(\theta_{1},\theta_{2},\dots,\theta_{{{m}}})$
be defined as in (55), where $\mathbf{p}_{{r_{\ell}}}$, for every
$\ell=1,\dots,{{m}}$, is a polynomial which verifies the positivity condition
(i). Then, $\mathbf{p}^{({{m}})}_{\mathbf{r}}$ is such that the positivity condition
in the multilevel setting (52) is satisfied.
###### Proof 6.4.
The thesis is a consequence of Lemma 6.1 and of the properties of the matrix
tensor product. Indeed, the eigenvalues of a tensor product of matrices are the
products of the eigenvalues of the factors. Then, condition (52) is trivially
implied by the fact that
$\sum_{\xi\in\Omega(\boldsymbol{\theta})}\mathbf{p}^{({{m}})}_{\mathbf{r}}(\xi)^{H}\mathbf{p}^{({{m}})}_{\mathbf{r}}(\xi)=\bigotimes_{\ell=1}^{{{m}}}\left(\mathbf{p}_{{r_{\ell}}}(\theta_{\ell})^{H}\mathbf{p}_{{r_{\ell}}}(\theta_{\ell})+\mathbf{p}_{{r_{\ell}}}(\theta_{\ell}+\pi)^{H}\mathbf{p}_{{r_{\ell}}}(\theta_{\ell}+\pi)\right),$
and by the positivity condition holding in the unilevel case.
###### Lemma 6.5.
Let
$\mathbf{p}^{({{m}})}_{\mathbf{r}}(\theta_{1},\theta_{2},\dots,\theta_{{{m}}})$
be defined as in (55) and assume it verifies (52). Then, the trigonometric function
$\textbf{s}(\boldsymbol{\theta})=\mathbf{p}^{({{m}})}_{\mathbf{r}}(\boldsymbol{\theta})\left(\sum_{\xi\in\Omega(\boldsymbol{\theta})}\mathbf{p}^{({{m}})}_{\mathbf{r}}(\xi)^{H}\mathbf{p}^{({{m}})}_{\mathbf{r}}(\xi)\right)^{-1}\mathbf{p}^{({{m}})}_{\mathbf{r}}(\boldsymbol{\theta})^{H}$
is well-defined for all $\boldsymbol{\theta}\in[0,2\pi)^{{m}}$. Moreover, it
holds
(57)
$\textbf{s}(\boldsymbol{\theta})=\bigotimes_{\ell=1}^{{{m}}}\textbf{s}_{{r_{\ell}}}(\theta_{\ell}),$
where
$\textbf{s}_{{r_{\ell}}}(\theta_{\ell})=\mathbf{p}_{{r_{\ell}}}(\theta_{\ell})\left(\mathbf{p}_{{r_{\ell}}}(\theta_{\ell})^{H}\mathbf{p}_{{r_{\ell}}}(\theta_{\ell})+\mathbf{p}_{{r_{\ell}}}(\theta_{\ell}+\pi)^{H}\mathbf{p}_{{r_{\ell}}}(\theta_{\ell}+\pi)\right)^{-1}\mathbf{p}_{{r_{\ell}}}(\theta_{\ell})^{H}$,
for $\ell=1,\dots,{{m}}$.
###### Proof 6.6.
From Lemma 6.3, we have that $\textbf{s}(\boldsymbol{\theta})$ is well-defined
for all $\boldsymbol{\theta}\in[0,2\pi)^{{m}}$. From Lemma 6.1 and the
properties of the tensor product, we have
(58)
$\begin{split}\textbf{s}(\boldsymbol{\theta})&=\mathbf{p}^{({{m}})}_{\mathbf{r}}(\boldsymbol{\theta})\left(\sum_{\xi\in\Omega(\boldsymbol{\theta})}\mathbf{p}^{({{m}})}_{\mathbf{r}}(\xi)^{H}\mathbf{p}^{({{m}})}_{\mathbf{r}}(\xi)\right)^{-1}\mathbf{p}^{({{m}})}_{\mathbf{r}}(\boldsymbol{\theta})^{H}\\\
&=\bigotimes_{\ell=1}^{{{m}}}\mathbf{p}_{{r_{\ell}}}(\theta_{\ell})\left(\bigotimes_{\ell=1}^{{{m}}}\left[\mathbf{p}_{{r_{\ell}}}(\theta_{\ell})^{H}\mathbf{p}_{{r_{\ell}}}(\theta_{\ell})+\mathbf{p}_{{r_{\ell}}}(\theta_{\ell}+\pi)^{H}\mathbf{p}_{{r_{\ell}}}(\theta_{\ell}+\pi)\right]^{-1}\right)\bigotimes_{\ell=1}^{{{m}}}\mathbf{p}_{{r_{\ell}}}(\theta_{\ell})^{H}\\\
&=\bigotimes_{\ell=1}^{{{m}}}\left(\mathbf{p}_{{r_{\ell}}}(\theta_{\ell})\left[\mathbf{p}_{{r_{\ell}}}(\theta_{\ell})^{H}\mathbf{p}_{{r_{\ell}}}(\theta_{\ell})+\mathbf{p}_{{r_{\ell}}}(\theta_{\ell}+\pi)^{H}\mathbf{p}_{{r_{\ell}}}(\theta_{\ell}+\pi)\right]^{-1}\mathbf{p}_{{r_{\ell}}}(\theta_{\ell})^{H}\right)=\bigotimes_{\ell=1}^{{{m}}}\textbf{s}_{{r_{\ell}}}(\theta_{\ell}).\end{split}$
###### Lemma 6.7.
Let
$\mathbf{p}^{({{m}})}_{\mathbf{r}}(\theta_{1},\theta_{2},\dots,\theta_{{{m}}})$
be defined as in (55), such that, for all $\ell=1,\dots,{{m}}$,
$\mathbf{p}_{r_{\ell}}(\theta_{\ell})\in\mathbb{C}^{r_{\ell}\times r_{\ell}}$
is a polynomial that satisfies conditions (i)-(iii). Let
${q}_{\mathbf{r}}=\bigotimes_{\ell=1}^{{m}}{\rm q}_{r_{\ell}}$, where
${\rm q}_{r_{\ell}}$ is the column vector of length $r_{\ell}$ such that
$\textbf{s}_{{r_{\ell}}}({\theta_{0}^{(\ell)}}){\rm q}_{r_{\ell}}={\rm
q}_{r_{\ell}}$, $\ell=1,\dots,{{m}}$. Then,
$\textbf{s}(\boldsymbol{\theta_{0}}){q}_{\mathbf{r}}={q}_{\mathbf{r}},\qquad\mbox{where
}\boldsymbol{\theta}_{0}=\left(\theta_{0}^{(1)},\dots,\theta_{0}^{({{m}})}\right).$
###### Proof 6.8.
From Lemma 6.5, we have that
$\textbf{s}(\boldsymbol{\theta_{0}})=\bigotimes_{\ell=1}^{m}\textbf{s}_{r_{\ell}}(\theta_{0}^{(\ell)}),$
then, by definition and from the properties of the tensor product, it holds
(59)
$\textbf{s}(\boldsymbol{\theta_{0}}){q}_{\mathbf{r}}=\left(\bigotimes_{\ell=1}^{m}\textbf{s}_{r_{\ell}}(\theta_{0}^{(\ell)})\right)\left(\bigotimes_{\ell=1}^{m}{\rm q}_{r_{\ell}}\right)=\bigotimes_{\ell=1}^{m}\left(\textbf{s}_{r_{\ell}}(\theta_{0}^{(\ell)}){\rm q}_{r_{\ell}}\right)=\bigotimes_{\ell=1}^{m}{\rm q}_{r_{\ell}}={q}_{\mathbf{r}}.$
###### Lemma 6.9.
Let
$\mathbf{p}^{(m)}_{\mathbf{r}}(\theta_{1},\theta_{2},\dots,\theta_{m})$
be defined as in (55) and such that it satisfies (52). Consider
$\textbf{s}(\boldsymbol{\theta})=\bigotimes_{\ell=1}^{{{m}}}\textbf{s}_{{r_{\ell}}}(\theta_{\ell}),$
where
$\textbf{s}_{{r_{\ell}}}(\theta_{\ell})=\mathbf{p}_{{r_{\ell}}}(\theta_{\ell})\left(\mathbf{p}_{{r_{\ell}}}(\theta_{\ell})^{H}\mathbf{p}_{{r_{\ell}}}(\theta_{\ell})+\mathbf{p}_{{r_{\ell}}}(\theta_{\ell}+\pi)^{H}\mathbf{p}_{{r_{\ell}}}(\theta_{\ell}+\pi)\right)^{-1}\mathbf{p}_{{r_{\ell}}}(\theta_{\ell})^{H},$
for $\ell=1,\dots,m$, where each factor satisfies condition $(iii)$. Then,
$\textbf{s}(\boldsymbol{\theta})$ satisfies condition (54).
###### Proof 6.10.
Without loss of generality, suppose that the order of the zero of
$\lambda_{\bar{\jmath}}\left(\mathbf{f}({\theta_{\ell}})\right)$ in
$\theta_{0}$ is $\varsigma\geq 2$ for $\ell=1,\dots,{{m}}$, then the functions
$1-\lambda_{\bar{\jmath}}\left(\textbf{s}_{{r_{\ell}}}(\theta_{\ell})\right)$
have a zero in $\theta_{0}$ of order at least $\varsigma\in\mathbb{N}$ for all
$\ell=1,\dots,{{m}}$ by condition (iii). Hence, the $(\varsigma-1)$-th
derivative of
$1-\lambda_{\bar{\jmath}}\left(\textbf{s}_{{r_{\ell}}}(\theta_{\ell})\right)$
in $\theta_{0}$ is equal to zero. Then we have, for $\ell=1,\dots,{{m}}$,
$\left.\lambda_{\bar{\jmath}}\left(\textbf{s}_{{r_{\ell}}}(\theta_{\ell})\right)^{(\varsigma-1)}\right|_{\theta_{0}}=0.$
The claim then follows by direct computation of the partial derivatives of
$1-\lambda_{\bar{\jmath}}(\textbf{s}(\boldsymbol{\theta}))$ in
$\boldsymbol{\theta}_{0}$, exploiting the fact that
$\textbf{s}(\boldsymbol{\theta})=\bigotimes_{\ell=1}^{{{m}}}\textbf{s}_{{r_{\ell}}}(\theta_{\ell})\qquad\mbox{and}\qquad\lambda_{\bar{\jmath}}\left(\textbf{s}(\boldsymbol{\theta})\right)=\prod_{\ell=1}^{{{m}}}\lambda_{\bar{\jmath}}\left(\textbf{s}_{{r_{\ell}}}(\theta_{\ell})\right).$
## 7 Conclusions and Future Developments
We derived conditions which ensure the optimal convergence rate of both
the TGM and the V-cycle method when applied to (multilevel) block-circulant
and (multilevel) block-Toeplitz matrices. In particular, we focused on the
case where the generating function $\mathbf{f}$ is a matrix-valued
trigonometric polynomial, and we provided several simplifications for
validating the theoretical conditions in practical cases. We also
generalized the results to multilevel block-Toeplitz matrices.
As a final comment, we emphasize that one of the main aims of this paper
was to give theoretical grounding to the optimal multigrid convergence for
block structured matrices, where optimal means a convergence rate
independent of the matrix size. Moreover, we provided analytical proofs of the
effectiveness of standard projectors, largely used in classical applications
[20]. The numerical effectiveness of the projectors treated in Subsections 5.2–5.3
has been demonstrated in many different settings (multilevel, variable
coefficients) with optimal results. In addition to Table 1, see Tables
1–6 in [14] and Table V.7 in [13].
## Acknowledgments
The work of Marco Donatelli, Paola Ferrari, Isabella Furci is partially
supported by Gruppo Nazionale per il Calcolo Scientifico (GNCS-INdAM).
## References
* [1] A. Aricò and M. Donatelli, A V-cycle multigrid for multilevel matrix algebras: proof of optimality, Numer. Math., 105 (2007), pp. 511–547.
* [2] A. Aricò, M. Donatelli, and S. Serra-Capizzano, V-cycle optimal convergence for certain (multilevel) structured linear systems, SIAM J. Matrix Anal. Appl., 26 (2004), pp. 186–214.
* [3] R. Bhatia, Matrix analysis, vol. 169 of Graduate Texts in Mathematics, Springer-Verlag, New York, 1997.
* [4] D. Braess, Finite elements, Cambridge University Press, Cambridge, third ed., 2007. Theory, fast solvers, and applications in elasticity theory, Translated from the German by Larry L. Schumaker.
* [5] D. Braess, Finite elements: Theory, fast solvers, and applications in solid mechanics, Cambridge University Press, 2007.
* [6] A. Brandt, Rigorous quantitative analysis of multigrid, I: Constant coefficients two-level cycle with $L_{2}$-norm, SIAM Journal on Numerical Analysis, 31 (1994), pp. 1695–1730.
* [7] W. L. Briggs, V. E. Henson, and S. F. McCormick, A Multigrid Tutorial, Second Edition, SIAM, second ed., 2000.
* [8] R. H. Chan, Q.-S. Chang, and H.-W. Sun, Multigrid method for ill-conditioned symmetric Toeplitz systems, SIAM J. Sci. Comput., 19 (1998), pp. 516–529.
* [9] V. Del Prete, F. Di Benedetto, M. Donatelli, and S. Serra-Capizzano, Symbol approach in a signal-restoration problem involving block Toeplitz matrices, Journal of Computational and Applied Mathematics, 272 (2014), pp. 399–416.
* [10] M. Donatelli, A multigrid for image deblurring with Tikhonov regularization, Numerical Linear Algebra with Applications, 12 (2005), pp. 715–729.
* [11] M. Donatelli, An algebraic generalization of local Fourier analysis for grid transfer operators in multigrid based on Toeplitz matrices, Numerical Linear Algebra with Applications, 17 (2010), pp. 179–197.
* [12] M. Donatelli, P. Ferrari, I. Furci, D. Sesana, and S. Serra-Capizzano, Multigrid methods for block-circulant and block-Toeplitz large linear systems: Algorithmic proposals and two-grid optimality analysis, Numer. Linear Algebra Appl., e2356 (2020).
* [13] P. Ferrari, Toeplitz and block-Toeplitz structures with variants: From the spectral analysis to preconditioning and multigrid methods using a symbol approach. Ph.D. Thesis, Insubria University, 2020.
* [14] P. Ferrari, R. I. Rahla, C. Tablino-Possio, S. Belhaj, and S. Serra-Capizzano, Multigrid for $\mathbb{Q}_{k}$ finite element matrices using a (block) Toeplitz symbol approach, Mathematics, 8 (2020).
* [15] G. Fiorentino and S. Serra, Multigrid methods for Toeplitz matrices, Calcolo, 28 (1991), pp. 283–305 (1992).
* [16] G. Fiorentino and S. Serra, Multigrid methods for symmetric positive definite block Toeplitz matrices with nonnegative generating functions, SIAM J. Sci. Comput., 17 (1996), pp. 1068–1081 (1996).
* [17] C. Garoni and S. Serra-Capizzano, Generalized locally Toeplitz sequences: theory and applications. Vol. II, Springer, Cham, 2018.
* [18] C. Garoni, S. Serra-Capizzano, and D. Sesana, Spectral analysis and spectral symbol of $d$-variate $\mathbb{Q}_{p}$ Lagrangian FEM stiffness matrices, SIAM J. Matrix Anal. Appl., 36 (2015), pp. 1100–1128.
* [19] G. H. Golub and C. F. Van Loan, Matrix computations, vol. 3 of Johns Hopkins Series in the Mathematical Sciences, Johns Hopkins University Press, Baltimore, MD, 1983.
* [20] W. Hackbusch, Multigrid methods and applications, vol. 4 of Springer Series in Computational Mathematics, Springer-Verlag, Berlin, 1985.
* [21] P. Hemker, On the order of prolongations and restrictions in multigrid procedures, Journal of Computational and Applied Mathematics, 32 (1990), pp. 423–429.
* [22] T. Huckle and J. Staudacher, Multigrid methods for block Toeplitz matrices with small size blocks, BIT, 46 (2006), pp. 61–83.
* [23] T. K. Huckle, Compact Fourier analysis for designing multigrid methods, SIAM Journal on Scientific Computing, 31 (2008), pp. 644–666.
* [24] T. Kato, Perturbation theory for linear operators, Springer-Verlag, Berlin-New York, second ed., 1976. Grundlehren der Mathematischen Wissenschaften, Band 132.
* [25] A. Napov and Y. Notay, When does two-grid optimality carry over to the V-cycle?, Numer. Linear Algebra Appl., 17 (2010), pp. 273–290.
* [26] A. Napov and Y. Notay, Smoothing factor, order of prolongation and actual multigrid convergence, Numerische Mathematik, 118 (2011), pp. 457–483.
* [27] R. I. Rahla, S. Serra-Capizzano, and C. Tablino-Possio, Spectral analysis of $\mathbb{P}_{k}$ finite element matrices in the case of Friedrichs–Keller triangulations via generalized locally Toeplitz technology, Numer. Linear Algebra Appl., 27 (2020), p. e2302.
* [28] F. Rellich, Perturbation theory of eigenvalue problems, Assisted by J. Berkowitz. With a preface by Jacob T. Schwartz, Gordon and Breach Science Publishers, New York-London-Paris, 1969.
* [29] J. W. Ruge and K. Stüben, Algebraic multigrid, in Multigrid methods, vol. 3 of Frontiers Appl. Math., SIAM, Philadelphia, PA, 1987, pp. 73–130.
* [30] S. Serra-Capizzano, Convergence analysis of two-grid methods for elliptic Toeplitz and PDEs matrix–sequences, Numer. Math., 92 (2002), pp. 433–465.
* [31] S. Serra-Capizzano, Matrix algebra preconditioners for multilevel Toeplitz matrices are not superlinear, Linear Algebra Appl., 343 (2002), pp. 303–319.
* [32] S. Serra-Capizzano and C. Tablino-Possio, Multigrid methods for multilevel circulant matrices, SIAM J. Sci. Comput., 26 (2004), pp. 55–85.
* [33] H.-W. Sun, X.-Q. Jin, and Q.-S. Chang, Convergence of the multigrid method for ill-conditioned block Toeplitz systems, BIT, 41 (2001), pp. 179–190.
* [34] U. Trottenberg, C. W. Oosterlee, and A. Schüller, Multigrid, Academic Press, Inc., San Diego, CA, 2001. With contributions by A. Brandt, P. Oswald and K. Stüben.
# Adaptive Remote Sensing Image Attribute Learning
for Active Object Detection
Nuo Xu1, Chunlei Huo1, Jiacheng Guo2, Yiwei Liu3, Jian Wang4 and Chunhong Pan1
1NLPR, Institute of Automation, Chinese Academy of Sciences,
School of Artificial Intelligence, University of Chinese Academy of Sciences,
Beijing, China
Email: <EMAIL_ADDRESS>, <EMAIL_ADDRESS>, <EMAIL_ADDRESS>
2Beijing Information Science and Technology University, Beijing, China. Email: <EMAIL_ADDRESS>
3Beijing University of Civil Engineering and Architecture, Beijing, China. Email: <EMAIL_ADDRESS>
4College of Robotics, Beijing Union University, Beijing, China. Email: <EMAIL_ADDRESS>
###### Abstract
In recent years, deep learning methods have brought remarkable progress to the
field of object detection. However, in the field of remote sensing image
processing, existing methods neglect the relationship between imaging
configuration and detection performance, and do not take into account the
importance of detection-performance feedback for improving image quality.
Detection performance is therefore limited by the passive nature of the
conventional object detection framework. To overcome these limitations, this
paper takes adaptive brightness adjustment and scale adjustment as examples
and proposes an active object detection method based on deep reinforcement
learning. The goal of adaptive image attribute learning is to maximize
detection performance. With the help of active object detection and image
attribute adjustment strategies, low-quality images can be converted into
high-quality images, and the overall performance is improved without
retraining the detector.
## I Introduction
Object detection is one of the fundamental problems in computer vision. Remote
sensing image object detection is the process of locating objects of interest
in a remote sensing image and identifying their categories. It is very
challenging because of the difficulty of feature extraction, position
regression, and object classification. Deep learning is a powerful approach
that automatically learns feature representations from data and has developed
very rapidly in recent years. Deep learning methods have demonstrated
extraordinary effectiveness in addressing the difficulties of object
detection, prompting many efficient methods that outperform traditional ones.
A deep learning method parses the input data by constructing hierarchical
nonlinear learning units to learn an end-to-end mapping between an image and
its semantic labels. It is this learning paradigm that has produced major
breakthroughs in remote sensing image object detection.
Although deep learning is very effective, traditional object detection methods
are limited by their passive nature. Firstly, during in-orbit imaging, image
quality is checked with human visual perception as the reference, without
considering the specific requirements of tasks such as object detection.
Secondly, images are used directly for training or testing without proper
image quality evaluation, or they are only coarsely evaluated and manually
pre-processed by visual inspection. In short, an image that is judged good by
human visual perception is not necessarily optimal for the object detection
task. In fact, there is a gap between the imaging configuration requirements
of visual inspection and those of object detection, and this difference
impacts the performance of the detection model. In other words, adaptive image
attribute learning is very important for the in-orbit imaging procedure and
the subsequent object detection step; however, it is rarely considered in the
literature. Image attributes include spatial resolution, color, scale (the
ratio of a distance on the image to the corresponding distance on the ground),
hue, saturation, brightness, and so on. This paper takes only brightness and
scale learning as examples.
In order to overcome the above limitations, this paper proposes an active
object detection method based on deep reinforcement learning. The role of
reinforcement learning is to optimize imaging conditions and thereby improve
object detection performance. It is worth noting that applying deep
reinforcement learning to image processing is a new topic, and the proposed
method differs from the traditional detection models described in the next
section. The novelty of this paper lies in combining deep reinforcement
learning with current mainstream object detection algorithms. By adjusting
image attributes, image quality is actively improved to suit the well-trained
detector; thus the detection performance improves, as shown in Fig. 1. In
short, the method is useful for both offline detection and online imaging.
For convenience, the framework in this paper is named active object detection
with reinforcement learning (_RL-AOD_). The most important difference between
_RL-AOD_ and mainstream methods is that a mainstream detector locates the
object in one step through regression, whereas _RL-AOD_ adaptively selects an
appropriate brightness and scale through sequential decision-making while
locating the object. The method can adaptively learn the image attributes that
yield the best detection performance, which is of great significance for
remote sensing image object detection. Our contributions are summarized as
follows:
(1) An active object detection framework, _RL-AOD_, is proposed by combining
deep reinforcement learning with a mainstream deep-learning object detection
method. It addresses the mismatch between the imaging configuration and the
detection task.
(2) Strategies for adaptively adjusting the brightness and scale of images are
proposed and combined to improve the detection performance on low-quality
images.
Figure 1: Motivation. Due to the limitation of imaging configuration and
environmental changes, the detection performance of low-quality images is not
good. Therefore, it is necessary to adaptively learn image attributes to
improve detection performance.
## II Related Work
Active object detection consists of two parts, reinforcement learning and
object detection. For convenience, related technologies will be reviewed
below.
Object Detection. Before detectors based on deep learning were proposed, DPM
[1] was a very successful object detection algorithm. The DPM method first
computes histograms of oriented gradients, then trains an SVM (Support Vector
Machine) to obtain a gradient model of the object, and finally detects
specific objects by matching against this model. Object detection based on
deep learning involves position regression and object classification. In the
past few years, new algorithms have constantly been proposed, with strong
connections between them. In general, deep learning based detection models can
be divided into two categories: two-stage methods (e.g., Faster RCNN [2], FPN
[3], R-FCN [4], Cascade RCNN [5]) and one-stage methods (e.g., YOLO [6], SSD
[7], DSSD [8], RetinaNet [9], CornerNet [10]). The main difference between the
two-stage and one-stage frameworks is the pre-processing step that generates
region proposals; the former emphasizes precision, while the latter emphasizes
speed. Compared with two-stage detectors, one-stage detectors simplify the
detection process and increase speed, since regression and classification are
performed only once, but their accuracy suffers as a result. Future trends
will focus more on combinations of the two (e.g., RefineDet [11], RFBNet
[12]): such methods carry out two or more rounds of regression and
classification, achieving accuracy no worse than two-stage methods while
approaching the speed of one-stage methods. Despite the great success of deep
learning, there is still a large gap between the performance of the current
best methods and the requirements of practical applications. Traditional
object detection methods only passively detect objects and cannot actively
learn image attributes (brightness, scale, etc.). Traditional active object
recognition methods (e.g., [13, 14]) perform viewpoint control by steering the
camera; the method in this paper does not control the camera directly, but
instead learns image attributes.
Deep Reinforcement Learning. Reinforcement learning (RL) is a powerful and
effective tool with which an agent learns to make sequential decisions based
on the external environment. The overall decisions made by the agent are
optimal in the sense that RL aims to maximize the accumulated reward. In
recent years, traditional RL algorithms have been incorporated into the deep
learning framework, producing a series of deep reinforcement learning (DRL)
models (e.g., DQN [15], DDPG [16], TRPO [17]) that outperform traditional RL
methods. In the early days, RL methods were mainly used for robot control [18,
19]. More recently, DRL methods have been successfully applied in many fields
such as game agents [20, 21] and neural network architecture design [22, 23].
DRL has also attracted attention in the field of computer vision (CV). For
example, some scholars use DRL to progressively narrow a detection window down
to the final object through sequential decisions [24, 25, 26]. In detail,
these works use DRL alone to locate objects, without considering object
categories and with image attributes (such as brightness and scale) left
unchanged; most methods of this kind achieve only modest improvements, though
they are valuable attempts. In contrast, this paper combines DRL with current
mainstream object detection methods to adaptively learn the best image
attributes. In addition, there are other meaningful methods that use DRL for
basic image processing (e.g., enhancement [27], recovery [28]); although they
can adaptively learn image attributes step by step, they are not tied to a
detection task and only serve visual inspection. Moreover, many other works
combine DRL and CV [29, 30, 31, 32, 33], but all of them differ from the
method in this paper. Since human thinking is often a sequential
decision-making process, algorithms based on deep reinforcement learning are
closer to human behavior than traditional methods. In short, image processing
based on deep reinforcement learning is a topic worth studying.
In this paper, an adaptive image attribute adjustment strategy is learned in
the framework of Double DQN [34] combined with Faster RCNN. Both image quality
and detection performance are improved by applying this strategy. To the best
of our knowledge, the problem considered in this paper is new and has rarely
been studied in the literature.
## III Methodology
Imaging configuration is an important factor affecting image quality and
object detection performance. In particular, brightness and scale are the two
most important factors. In addition, the indicators used to evaluate the
imaging configuration are different for different tasks, such as visual
inspection and object detection. To this end, this paper takes brightness
learning and scale learning as examples to study the active imaging
configuration learning in the context of the object detection task. Below,
active object detection (_RL-AOD_) is formulated, and the proposed approach is
elaborated step by step.
Figure 2: Overview of RL-AOD. Firstly, $D$ is used to extract features and
detect objects from ${img}(t)$. Then ${Ag}^{b}$ and ${Ag}^{s}$ are used to
select the optimal actions $a^{b}(t)$ and $a^{s}(t)$ according to the states
$s^{b}(t)$ and $s^{s}(t)$ respectively. Finally, the selected actions are
applied to ${img}(t)$ to obtain ${img}(t+1)$.
### III-A Problem Formulation
Deep reinforcement learning consists of five key elements, namely environment,
agent, state, action and reward. Below, we explain them in the context of
image attribute learning.
Environment. The role of the environment is to receive the series of actions
performed by the agent, evaluate the quality of these actions, and feed a
reward back to the agent. The environment in this paper is the object
detector, abbreviated as $D$. Since one-stage detection methods use a fixed
input image size, it is difficult for them to adjust scale; therefore, this
paper uses the Faster RCNN method to construct the _RL-AOD_ framework. The
detector $D$ is trained in advance on a high-quality dataset.
Agent. The agent is the core of the entire reinforcement learning system; its
task is to learn a series of state-to-action mappings based on the reward
provided by the environment. The agents in this framework are expected to
select appropriate brightness and scale adjustment actions to transform the
image according to the current image features, so that the image finally
adapts to the detector $D$ and the overall performance improves. Two
independent agents, named ${Ag}^{b}$ and ${Ag}^{s}$, are trained for
brightness adjustment and scale adjustment respectively.
Algorithm 1 RL-AOD
Input: Low quality images
Networks: Detector $D$, Agent ${Ag}^{b}$ and ${Ag}^{s}$
Parameters: Feature $f^{c}$, $f^{b}$, $f^{s}$, State $s^{b}$, $s^{s}$, Action
$a^{b}$, $a^{s}$, Action Set $A^{b}$, $A^{s}$, Reward $r^{b}$, $r^{s}$
Output: High quality images
1: Pretrain Faster RCNN $D$ on high quality image sets.
2: Use $r^{b}$ calculated by $D$ as a guide to train DQN Agent ${Ag}^{b}$ on
both low and high quality image sets.
3: Use $r^{s}$ calculated by $D$ as a guide to train DQN Agent ${Ag}^{s}$ on
both low and high quality image sets.
4: while there are still unprocessed images do
5: Let step $t=0$.
6: Get a low quality image ${img}(0)$
7: while current step $t<T$ do
8: Extract $f^{c}(t)$, $f^{b}(t)$, $f^{s}(t)$ from ${img}(t)$ using $D$.
9: Combine $f^{c}(t)$, $f^{b}(t)$ to get $s^{b}(t)$.
10: Combine $f^{c}(t)$, $f^{s}(t)$ to get $s^{s}(t)$.
11: Select $a^{b}(t)$ from $A^{b}$ based on $s^{b}(t)$ using ${Ag}^{b}$.
12: Select $a^{s}(t)$ from $A^{s}$ based on $s^{s}(t)$ using ${Ag}^{s}$.
13: Apply $a^{b}(t)$ and $a^{s}(t)$ to ${img}(t)$ to get ${img}(t+1)$.
14: Step $t$++.
15: end while
16: end while
17: return High quality image set $\\{{img}(T)\\}$
State. State refers to the current status of the agent and contains all the
information used to make the action selection. In this paper, the state mainly
consists of two parts, one part is the contextual feature of the image, which
is used to describe the overall background of the image and denoted as
$f^{c}$. Another part of the feature is used to judge the level of a certain
attribute (brightness, scale) of the image, written as $f^{b}$ and $f^{s}$,
respectively. Therefore, the states corresponding to ${Ag}^{b}$ and ${Ag}^{s}$
are $s^{b}=\\{f^{c},f^{b}\\}$ and $s^{s}=\\{f^{c},f^{s}\\}$, respectively.
Action. An action is the best action the agent picks from its action set $A$.
For the brightness-adjusting agent ${Ag}^{b}$, the action set includes two
actions, brightening and darkening: $A^{b}=\{a^{b}_{1},a^{b}_{2}\}$.
Similarly, for the scale-adjusting agent ${Ag}^{s}$, the action set includes
two actions, zoom in and zoom out: $A^{s}=\{a^{s}_{1},a^{s}_{2}\}$. Each agent
selects the best action from its own action set based on its state.
Reward. The reward evaluates the performance of the agent at a given time
step, and it is provided by the environment. In this paper, the reward $r$ is
based on the detection performance:
$r(t)=\operatorname{sign}(p(t+1)-p(t))$ (1)
where $p$ stands for detection performance and $p=\frac{1}{2}(F+mIoU)$.
$mIoU$ is the average $IoU$ of all correct detection boxes, and $F$ is the
F-measure of detection boxes with $IoU>0.5$. Experimentally, we find that
using the $F$ indicator or $mIoU$ alone does not work well: $mIoU$ alone is
less robust to multiple objects, and the $F$ indicator alone often yields a
small reward.
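A hedged sketch of this reward computation follows. The paper specifies only Eq. (1) and $p=\frac{1}{2}(F+mIoU)$; the greedy one-to-one box matching and the corner-format boxes below are our assumptions.

```python
import numpy as np

def iou(a, b):
    # boxes as (x1, y1, x2, y2)
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    return inter / (area(a) + area(b) - inter + 1e-12)

def detection_performance(dets, gts, thr=0.5):
    """p = (F + mIoU) / 2, with a greedy one-to-one matching at IoU > thr.
    The matching strategy is an assumption; the paper does not specify it."""
    matched, used = [], set()
    for d in dets:
        best, best_iou = None, thr
        for j, g in enumerate(gts):
            v = iou(d, g)
            if j not in used and v > best_iou:
                best, best_iou = j, v
        if best is not None:
            used.add(best)
            matched.append(best_iou)
    if not matched:
        return 0.0
    prec = len(matched) / len(dets)
    rec = len(matched) / max(len(gts), 1)
    f = 2 * prec * rec / (prec + rec)
    return 0.5 * (f + float(np.mean(matched)))  # mIoU over correct boxes

def reward(p_next, p_curr):
    return float(np.sign(p_next - p_curr))      # Eq. (1)
```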
In general, the process of the _RL-AOD_ algorithm is shown in Alg. 1 and
Fig. 2. After the detector and the two agents have been trained separately,
$D$ is used to extract features and detect objects in ${img}(t)$, yielding the
features $f^{c}(t)$, $f^{b}(t)$ and $f^{s}(t)$. Next, ${Ag}^{b}$ and
${Ag}^{s}$ select the optimal actions $a^{b}(t)$ and $a^{s}(t)$ according to
the states $s^{b}(t)$ and $s^{s}(t)$ ($s^{b}=\{f^{c},f^{b}\}$,
$s^{s}=\{f^{c},f^{s}\}$) respectively. Finally, the actions are applied to
${img}(t)$ in turn to obtain ${img}(t+1)$. The details of feature extraction,
state transition, and action design are described below.
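For readers who prefer code, here is a minimal sketch of the test-time loop of Alg. 1. Every callable it receives (the feature extractor, the detector, the two agents, and the two attribute-update functions) is a hypothetical placeholder to be backed by the components described below; this is not the authors' released implementation.

```python
import numpy as np

def rl_aod_inference(img, extract_features, detect,
                     ag_b, ag_s, apply_b, apply_s, T=4):
    """Test-time loop of Alg. 1; T is the maximum step (T = 4 for BS4 in Tab. II)."""
    for _ in range(T):
        f_c, f_b, f_s = extract_features(img)    # contextual + attribute features from D
        a_b = ag_b(np.concatenate([f_c, f_b]))   # brightness state s^b -> brighten/darken
        a_s = ag_s(np.concatenate([f_c, f_s]))   # scale state s^s -> zoom in/zoom out
        img = apply_s(apply_b(img, a_b), a_s)    # Eqs. (4)/(2), then Eqs. (7)/(5)
    return detect(img), img
```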
Figure 3: Brightness transformation. This figure illustrates how the grayscale
value of $V$ is adjusted under different brightness levels $L^{b}$ when the
grayscale value of $V_{base}$ is $0$, $255$ and $C$, respectively. At the same
time, the histogram of $V$ under different $L^{b}$ is also given.
### III-B Automatic Brightness Adjustment
Agent will take the extracted features as the state and take the optimal
action to transform the state. Feature extraction part will mainly introduce
the acquisition of $f^{c}$ and $f^{b}$. State transition part will introduce
an indicator $L^{b}$ for roughly estimating the brightness level of an image.
Action design part will propose a set of brightness adjustment actions.
Feature extraction. $f^{c}$ is the contextual feature extracted by the
detector $D$ and describes the overall background of the image. It is taken
from the intermediate output of Faster RCNN (the feature maps before the RoI
pooling layer). If the backbone is VGG16, the intermediate feature maps have
512 channels, and averaging each feature map spatially yields a
512-dimensional vector. If the backbone is ResNet50 or ResNet101, this vector
has 1024 dimensions; experiments show that this dimension is too high and
harms the final performance, so a max-pooling operation with stride 2 is
applied, yielding a 512-dimensional feature. $f^{b}$ is a histogram feature
used to judge the brightness level of the image. To compute it, the RGB image
is converted into the HSV color space and a histogram with bin width 4 is
built on the $V$ component; since there are 256 quantization levels, this
gives a 64-dimensional histogram. Finally, the two parts are concatenated to
obtain $s^{b}$, a feature vector of 576 dimensions.
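As a sketch, the 64-bin histogram feature $f^{b}$ can be computed as follows. The normalisation of the histogram is our assumption; $V=\max(R,G,B)$ is used instead of a colour-space library, since that is the HSV value channel by definition.

```python
import numpy as np

def brightness_feature(rgb):
    """64-bin histogram of the HSV V component (bin width 4 over 0..255)."""
    v = rgb.reshape(-1, 3).max(axis=1)       # HSV value channel, V = max(R, G, B)
    hist, _ = np.histogram(v, bins=64, range=(0, 256))
    return hist / max(hist.sum(), 1)         # normalisation is our assumption

# s_b would then be np.concatenate([f_c, brightness_feature(img)])  (576-dim)
```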
Figure 4: Geometric meaning of $d$. (a): $-1\leq L^{b}<0$ (Linear mapping
between the $V$ component of a dark image and $V_{base}$). (b): $0\leq
L^{b}\leq 1$ (Linear mapping between the $V$ component of a bright image and
$V_{base}$).
State transition. To describe the brightness level, an indicator $L^{b}$ is
computed on the brightness component $V$ of the image in HSV color space.
$L^{b}$ lies in the range $[-1,1]$, and changing the brightness level $L^{b}$
means changing the image brightness. A negative value means the image is dark
and a positive value means it is bright; the larger the absolute value, the
darker or brighter the image.
To calculate $L^{b}$, the method in this paper separates $L^{b}$ from the
brightness component $V$ of the current image. In other words, the brightness
component $V$ is decomposed into two parts: the brightness level $L^{b}$ (a
scalar) and a basis $V_{base}$ (a matrix). Eq. (2) describes the
decomposition.
$V(t)=\begin{cases}(1+L^{b}(t))V_{base}&{-1\leq L^{b}<0}\\(1-L^{b}(t))V_{base}+255L^{b}(t)&{0\leq L^{b}\leq 1}\end{cases}$ (2)
In brightness adjustment, only $L^{b}$ needs to be updated; $V_{base}$ is a
constant basis for each image, from which the brightness component $V$ at any
level can be obtained. The relationship between the grayscale values of
$V_{base}$ and of $V$ under different $L^{b}$ is shown in Fig. 3, so
estimating $V_{base}$ is crucial. As shown in Eq. (3), $L^{b}(0)$ can be
calculated directly; $V_{base}$ is then obtained by setting $t=0$ and
substituting $L^{b}(0)$ into Eq. (2).
$L^{b}(0)=\frac{d}{255}-1,\quad d\approx\frac{\sum_{i}p_{i}}{6}$ (3)
where $\{p_{0},p_{0.1},p_{0.2},\dots,p_{1}\}$ are the $11$ quantiles of the
$V$ component, indicated by red dots in Fig. 4(a) and (b), and $d$ in Eq. (3)
is indicated by a solid red line in the same figures. The figure illustrates
the principle of using the quantiles to estimate $d$: after pairing the
quantiles (e.g., $\{(p_{0},p_{1}),(p_{0.1},p_{0.9}),\dots\}$), the sum of each
pair is an estimate of $d$, and all estimates are averaged to obtain the more
accurate estimate of Eq. (3).
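In outline, the quantile estimate of Eq. (3) and the recovery of $V_{base}$ by inverting Eq. (2) at $t=0$ look like this (a sketch; the small epsilon guarding the division is ours):

```python
import numpy as np

def brightness_level(v):
    """Estimate L^b(0) from the V component via Eq. (3)."""
    p = np.quantile(v, np.linspace(0.0, 1.0, 11))  # the 11 quantiles p_0 .. p_1
    d = p.sum() / 6.0                              # averaged pairwise estimates of d
    return d / 255.0 - 1.0

def v_base(v):
    """Invert Eq. (2) at t = 0 to recover the per-image basis V_base."""
    lb = brightness_level(v)
    if lb < 0:
        return v / (1.0 + lb + 1e-12)
    return (v - 255.0 * lb) / (1.0 - lb + 1e-12)
```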
Action design. The essence of action design is to change the brightness level
of the image. The brightness level $L^{b}(t+1)$ is updated from $L^{b}(t)$ by
Eq. (4), which prevents $L^{b}$ from leaving the range $[-1,1]$. For dark
images the change in $L^{b}$ is larger under the brightening action, and for
bright images it is larger under the darkening action; in consequence, the
agent can still choose a good action in the next step even if it takes a bad
action in this step.
$L^{b}(t+1)=\begin{cases}0.9L^{b}(t)+0.1&{a^{b}(t)=a^{b}_{1}}\\0.9L^{b}(t)-0.1&{a^{b}(t)=a^{b}_{2}}\end{cases}$ (4)
The reason why $V(t+1)$ is not obtained by simply multiplying $V(t)$ by a
coefficient is twofold: grayscale values truncated above 255 would be
difficult to recover in subsequent adjustments, and alternately selecting the
two actions would return the state exactly to its origin, making it easy to
fall into a loop of alternating actions.
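Eqs. (4) and (2) then give a short update/re-rendering rule; a sketch follows, with illustrative action names of our choosing:

```python
def update_brightness_level(lb, action):
    """Eq. (4): exponential update that keeps L^b inside [-1, 1]."""
    target = 1.0 if action == "brighten" else -1.0
    return 0.9 * lb + 0.1 * target

def render_v(v_base_img, lb):
    """Eq. (2): synthesise the V component at brightness level lb from V_base."""
    if lb < 0:
        return (1.0 + lb) * v_base_img
    return (1.0 - lb) * v_base_img + 255.0 * lb
```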
### III-C Automatic Scale Adjustment
Agent will take the extracted features as the state and take the optimal
action to transform the state. Feature extraction part will mainly introduce
the acquisition of $f^{s}$. State transition part will introduce an indicator
$L^{s}$ for roughly estimating the scale level of an image. Action design part
will propose a set of scale adjustment actions.
Feature extraction. $f^{s}$ is a histogram of object areas. Firstly, the
detector $D$ is applied to the image, yielding a series of bounding boxes, and
a histogram of their areas is computed. Because areas are being counted, the
bin widths are designed to widen, growing quadratically. The specific bin
edges are [$0$, $9^{2}$, $10^{2}$ $\cdots$ $24^{2}$, $27^{2}$ $\cdots$
$75^{2}$, $80^{2}$ $\cdots$ $175^{2}$, $182^{2}$ $\cdots$ $245^{2}$,
$+\infty$], so the feature $f^{s}$ is 64-dimensional. Since not every image
contains many objects, and some contain none at all, the extracted histogram
is a very sparse vector, which is unfavorable for subsequent training;
convolving it with a Gaussian kernel makes it less sparse. Similarly, $f^{c}$
and $f^{s}$ are concatenated together to obtain $s^{s}$, a 576-dimensional
vector.
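A sketch of $f^{s}$ follows. The paper elides the intermediate bin anchors with dots; the strides of 1, 3, 5 and 7 between the quoted anchors are an inference on our part (they are consistent with the quoted values and give exactly 64 bins), and the smoothing width `sigma` is our assumption.

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

# Bin edges in units of area; the strides 1, 3, 5, 7 are inferred, not quoted.
bases = np.concatenate([np.arange(9, 25, 1), np.arange(27, 76, 3),
                        np.arange(80, 176, 5), np.arange(182, 246, 7)])
edges = np.concatenate([[0.0], bases.astype(float) ** 2, [np.inf]])
assert len(edges) - 1 == 64  # matches the stated 64-dimensional feature

def scale_feature(boxes, sigma=1.0):
    """f^s: histogram of detected-box areas, Gaussian-smoothed to reduce sparsity."""
    areas = (boxes[:, 2] - boxes[:, 0]) * (boxes[:, 3] - boxes[:, 1])
    hist, _ = np.histogram(areas, bins=edges)
    return gaussian_filter1d(hist.astype(float), sigma)
```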
State transition. Similarly, a scale level $L^{s}$ is defined for adjusting
the size of images. $L^{s}$ lies in the range $[-1,1]$, and changing the scale
level $L^{s}$ means changing the image size. It is worth noting that the
resolution of the image is not absolutely related to the scale level: $L^{s}$
only describes the average area of the objects in an image. A negative value
means the average object area is small, a positive value means it is large,
and the larger the absolute value, the smaller or larger the average area. In
step $t$, bilinear interpolation resizes the image ${img}^{s}(t)$ to
$\theta^{L^{s}(t)}$ times its size, giving the new image ${img}^{s}(t+1)$, as
expressed by Eq. (5). After $T$ steps, the scale of the new image is
$\theta^{\sum_{t}^{T}L^{s}(t)}$ times that of the original image.
$img^{s}(t+1)=Resize(img^{s}(t),\theta^{L^{s}(t)})$ (5)
After defining the scale level $L^{s}$, the $L^{s}$ corresponding to the
original image, $L^{s}(0)$, must be estimated; Eq. (6) shows the estimation.
$L^{s}(0)=\frac{1}{2}\log_{\theta}\left(\frac{\alpha}{\alpha_{0}}\right)$ (6)
where $\alpha$ is the average area of the objects, and $\alpha_{0}$ and
$\theta$ are auxiliary parameters. $\alpha_{0}$ represents a prior average
area with a value of $96^{2}$, the threshold between medium-size and
large-size objects in the COCO dataset evaluation criteria, and $\theta$ is
set to $8$. With this setting, images whose average object area lies between
$16^{2}$ and $768^{2}$ correspond to $L^{s}\in[-1,1]$, and average areas
outside this range are assigned $L^{s}=1$ or $-1$.
Action design. $L^{s}$ is adjusted in a similar way to Eq. (4); Eq. (7) gives
the update.
$L^{s}(t+1)=\begin{cases}0.95L^{s}(t)+0.05&{a^{s}(t)=a^{s}_{1}}\\0.95L^{s}(t)-0.05&{a^{s}(t)=a^{s}_{2}}\end{cases}$ (7)
In this way, $L^{s}$ cannot leave the range $[-1,1]$. For images whose objects
have a large average area, the change in $L^{s}$ is larger under the zoom-out
action; for images whose objects have a small average area, it is larger under
the zoom-in action. In consequence, even if the agent takes a bad action in
this step, it can still choose a good action in the next step.
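Under the stated parameters ($\alpha_{0}=96^{2}$, $\theta=8$), Eqs. (5)–(7) reduce to a few lines. The clipping mirrors the sentence above; the action names are illustrative:

```python
import numpy as np

THETA, ALPHA0 = 8.0, 96.0 ** 2   # auxiliary parameters from Eq. (6)

def scale_level(mean_area):
    """Eq. (6), with values outside [-1, 1] clipped as described above."""
    ls = 0.5 * np.log(mean_area / ALPHA0) / np.log(THETA)
    return float(np.clip(ls, -1.0, 1.0))

def update_scale_level(ls, action):
    """Eq. (7)."""
    target = 1.0 if action == "zoom_in" else -1.0
    return 0.95 * ls + 0.05 * target

def resize_factor(ls):
    """Eq. (5): the image is resized by theta ** L^s at each step."""
    return THETA ** ls
```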
## IV Experiments
### IV-A Datasets and Setting
In order to verify the effectiveness of the method, this paper carried out
experiments on a remote sensing image dataset. The details of the dataset and
the settings of parameters in the network will be introduced next.
Dataset. Our interests focus mainly on aircraft, and we collect many very high
resolution remote sensing images and build a dataset for aircraft detection.
The dataset is consisted of $13,078$ training images and $5,606$ test images.
These images are high quality images obtained under normal conditions. In
fact, due to the complexity of the environment, it is impossible to always get
high quality images through imaging devices in the real world. For example, if
the environment is too dark, the quality of the image obtained must not be
high. In this paper, the degradation of the test set based on brightness and
scale is performed by simulation. As shown in Fig. 1, each test image can be
subjected to four degradation operations, and thereby five images can be
obtained. In this way, $28,030$ test images can be obtained totally. In the
following, all models are trained in the degraded dataset.
Settings. The detector is a Faster RCNN model trained on the training set of
$13,078$ images mentioned above. Whether the backbone is VGG16, ResNet50 or
ResNet101, the anchor scales are set to $(4,8,16,32)$ and the number of
iterations to $110,000$; the remaining settings follow the model defaults. The
agents for brightness and scale adjustment share exactly the same structure,
both using the Double DQN model, a variant of DQN that eliminates the
overestimation of Q-values and is more stable. Each agent network is a
six-layer fully connected neural network with $512,512,512,512,512,2$ neurons
per layer. When training the agent networks, the $13,078$ training images are
randomly degraded (in brightness and scale), with degraded images accounting
for 80% of the total training set. The Adam optimizer is used with a base
learning rate of $0.001$; the brightness agent is trained for $120,000$
iterations and the scale agent for $40,000$. During training, actions are
selected by the $\epsilon$-greedy method, which picks a random action with a
small probability and the optimal action otherwise.
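For concreteness, the $\epsilon$-greedy rule can be sketched as follows; the value of $\epsilon$ is our assumption, since the paper says only "a small probability".

```python
import numpy as np

def epsilon_greedy(q_values, eps=0.1, rng=None):
    """Pick a random action with probability eps, else the greedy one."""
    rng = rng if rng is not None else np.random.default_rng()
    if rng.random() < eps:
        return int(rng.integers(len(q_values)))
    return int(np.argmax(q_values))
```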
TABLE I: Performance comparison of different methods.
Method+Backbone | $AP$ | $AP^{50}$ | $AP^{75}$ | $AP^{S}$ | $AP^{M}$ | $AP^{L}$
---|---|---|---|---|---|---
DPMv5 (benchmark) | - | 0.338 | - | - | - | -
RetinaNet+VGG16 | 0.376 | 0.585 | 0.431 | 0.249 | 0.492 | 0.526
RetinaNet+ResNet50 | 0.446 | 0.674 | 0.515 | 0.297 | 0.591 | 0.588
RetinaNet+ResNet101 | 0.503 | 0.732 | 0.596 | 0.338 | 0.654 | 0.681
SSD321+ResNet101 | 0.417 | 0.661 | 0.475 | 0.200 | 0.594 | 0.703
DSSD321+ResNet101 | 0.426 | 0.666 | 0.485 | 0.196 | 0.610 | 0.739
YOLOv2+DarkNet19 | 0.407 | 0.632 | 0.472 | 0.202 | 0.573 | 0.701
YOLOv3+DarkNet53 | 0.491 | 0.808 | 0.553 | 0.441 | 0.574 | 0.401
R-FCN+ResNet50 | 0.422 | 0.705 | 0.461 | 0.223 | 0.576 | 0.690
R-FCN+ResNet101 | 0.427 | 0.713 | 0.464 | 0.225 | 0.582 | 0.694
Faster RCNN+VGG16 | 0.441 | 0.750 | 0.469 | 0.273 | 0.587 | 0.652
Faster RCNN+Res50 | 0.455 | 0.768 | 0.482 | 0.273 | 0.601 | 0.700
Faster RCNN+Res101 | 0.479 | 0.784 | 0.525 | 0.300 | 0.626 | 0.703
RL-AOD+VGG16 | 0.530 | 0.822 | 0.608 | 0.355 | 0.674 | 0.751
RL-AOD+ResNet50 | 0.519 | 0.815 | 0.590 | 0.346 | 0.661 | 0.734
RL-AOD+ResNet101 | 0.531 | 0.822 | 0.612 | 0.361 | 0.664 | 0.750
TABLE II: Performance comparison of different parameter settings of _RL-AOD_. FR refers to the original Faster RCNN method. $B$ refers to the brightness adjustment. $S$ refers to the scale adjustment. $2$ and $4$ represent the maximum step $T$ ($T$ is defined in Alg. 1). $\ast$ represents the result of testing on the undamaged normal dataset.
Method+Backbone | $AP$ | $AP^{50}$ | $AP^{75}$ | $AP^{S}$ | $AP^{M}$ | $AP^{L}$
---|---|---|---|---|---|---
FR+VGG16 | 0.441 | 0.750 | 0.469 | 0.273 | 0.587 | 0.652
B2+VGG16 | 0.498 | 0.821 | 0.542 | 0.312 | 0.650 | 0.741
BS2+VGG16 | 0.515 | 0.811 | 0.585 | 0.339 | 0.662 | 0.736
B4+VGG16 | 0.503 | 0.823 | 0.554 | 0.314 | 0.655 | 0.754
BS4+VGG16 | 0.530 | 0.822 | 0.608 | 0.355 | 0.674 | 0.751
FR+Res50 | 0.455 | 0.768 | 0.482 | 0.273 | 0.601 | 0.700
B2+Res50 | 0.499 | 0.825 | 0.538 | 0.308 | 0.646 | 0.744
BS2+Res50 | 0.509 | 0.805 | 0.569 | 0.333 | 0.650 | 0.730
B4+Res50 | 0.508 | 0.831 | 0.552 | 0.318 | 0.656 | 0.753
BS4+Res50 | 0.519 | 0.815 | 0.590 | 0.346 | 0.661 | 0.734
FR+Res101 | 0.479 | 0.784 | 0.525 | 0.300 | 0.626 | 0.703
B2+Res101 | 0.510 | 0.824 | 0.558 | 0.322 | 0.657 | 0.755
BS2+Res101 | 0.524 | 0.813 | 0.602 | 0.352 | 0.661 | 0.745
B4+Res101 | 0.514 | 0.833 | 0.565 | 0.324 | 0.660 | 0.763
BS4+Res101 | 0.531 | 0.822 | 0.612 | 0.361 | 0.664 | 0.750
FR$\ast$+Res101 | 0.574 | 0.875 | 0.649 | 0.401 | 0.711 | 0.785
BS4$\ast$+Res101 | 0.575 | 0.873 | 0.651 | 0.402 | 0.711 | 0.786
Figure 5: Results comparison. The images (a, b, c, d, e, f) in the first row
are the detection results on the degraded images with respect to brightness
and scale. The images (g, h, i, j, k, l) in the second row are the results
after adaptive learning by _RL-AOD_. (a, b, c) are to simulate the situation
of over-exposure, and (d, e, f) are to simulate the situation of under-
exposure. _RL-AOD_ can find the best image attributes (brightness and scale)
through the sequence decision method, so that the detection performance can be
improved.
### IV-B Results and Discussions
In this section, _RL-AOD_ will be compared to different state-of-the-art
methods, and _RL-AOD_ with different parameter settings will be also compared.
The following will be combined with Fig. 5, Tab. I and Tab. II for analysis.
Comparison of different methods. The performances of the different methods are
listed in Tab. I. DPM, a classic non-deep-learning object detection method, is
selected as the benchmark for feature-detector-descriptor approaches. It can
be observed that the _RL-AOD_+ResNet101 method has the largest $AP$ of
$0.531$; _RL-AOD_ based on VGG16 and ResNet50 is also superior to other
classical methods such as Faster RCNN, YOLO and SSD. To ensure a fair
comparison, all methods, including the mainstream ones and ours, are trained
and tested on the damaged dataset. _RL-AOD_ therefore achieves better
performance on a dataset whose imaging configuration does not match the
detector. Since the detector used by _RL-AOD_ is Faster RCNN, the two methods
can be compared directly: with the same backbone, _RL-AOD_ outperforms Faster
RCNN by $25.7\%$, $10.3\%$ and $8.9\%$ on the averaged indicators $AP^{S}$,
$AP^{M}$ and $AP^{L}$ respectively. This indicates that _RL-AOD_ is most
promising for detecting small objects. Faster RCNN performs multiple poolings
during feature extraction, and rounding in the RoI pooling layer causes a
precision loss, so it is limited in detecting small objects; _RL-AOD_
compensates for this defect to some extent through adaptive scale adjustment.
Although YOLOv3+DarkNet53 is optimal in detecting small objects, it has the
lowest accuracy ($0.401$) on large objects. Among all methods,
_RL-AOD_+ResNet101 is sub-optimal in detecting small objects. Some results of
_RL-AOD_ are shown in Fig. 5: after adaptive attribute learning, many
previously missed objects are detected. In Fig. 5, images (a, b, c) simulate
over-exposure and images (d, e, f) simulate under-exposure; the results show
that both cause missed detections and thereby reduce performance. Images (g,
h, i, j, k, l) show the results after adaptive learning by _RL-AOD_: the
number of missed detections is significantly reduced and performance improves.
In summary, building on the traditional method, _RL-AOD_ gradually adjusts
image attributes (brightness and scale) with the help of deep reinforcement
learning, so that damaged images with poor detection results become better
suited for detection. This is very meaningful work for remote sensing images.
Comparison of different parameter settings. To understand how _RL-AOD_ works,
the test images are evaluated under different parameter settings; the
performances are listed in Tab. II. Firstly, with the same backbone, the
improvement from brightness adjustment is greater than that from scale
adjustment: for example, the AP gap between B4+Res101 and FR+Res101 ($0.035$)
is larger than that between BS4+Res101 and B4+Res101 ($0.017$), and at a
maximum step $T$ of $4$, the averaged AP improvements from brightness and
scale adjustment are $0.050$ and $0.018$ respectively. Secondly, with a
maximum step of $4$, all indicators are better than with a maximum step of
$2$, showing that _RL-AOD_ is effective in adjusting image attributes step by
step, making them more adaptable to the detector. Thirdly, the sequential
decision procedure does not reduce detection accuracy on undamaged, normal
images: Tab. II shows that the AP values of FR$\ast$+Res101 and
BS4$\ast$+Res101 are very close, $0.574$ and $0.575$ respectively. Fourthly,
comparing FR+Res101 and FR$\ast$+Res101, when images are damaged and
ill-suited for detection, the performance of Faster RCNN drops greatly, with
AP falling by $9.5$ points. This confirms that if the in-orbit image
acquisition process does not account for the specific requirements of object
detection and related tasks, and no evaluation is carried out, the detection
result may be far from optimal. Finally, it is worth noting that although
scale adjustment improves the detection of small and medium-size objects, it
reduces the accuracy on some large-size objects. The reason is that small and
medium-size objects form the majority of the remote sensing dataset, and it is
generally easier to improve performance by zooming in than by zooming out;
this imbalance between large and small objects makes the agent more inclined
to enlarge the image during training. In general, however, the advantages of
_RL-AOD_ outweigh the disadvantages, as AP is still improved. The introduction
of sequential decision-making methods such as deep reinforcement learning
makes it possible to perform detection while adjusting image attributes.
## V Conclusion
This paper proposes an active object detection method, _RL-AOD_, which uses
deep reinforcement learning to help the object detection module actively
adjust image attributes (such as brightness and scale). Traditional object
detection methods are limited by their passive nature, whereas our active
method can adapt to various situations (such as insufficient brightness).
Experiments demonstrate the necessity of adaptive brightness and scale
adjustment and the effectiveness of _RL-AOD_. Future work will focus on DDPG,
which produces continuous actions, and will also pay more attention to
real-time performance and model speed.
## Acknowledgments
This research was supported by the Major Project for New Generation of AI
under Grant No. $2018AAA0100400$, and the National Natural Science Foundation
of China under Grants $62071466$ and $91646207$.
## References
* [1] P. Felzenszwalb, R. Girshick, D. McAllester, and D. Ramanan, “Object detection with discriminatively trained part-based models,” _IEEE TPAMI_ , vol. 32, pp. 1627–1645, 2009.
* [2] S. Ren, K. He, R. Girshick, and J. Sun, “Faster R-CNN: Towards real-time object detection with region proposal networks,” in _NeurIPS_ , 2015.
* [3] T. Lin, P. Dollár, R. Girshick, K. He, B. Hariharan, and S. Belongie, “Feature pyramid networks for object detection.” in _CVPR_. IEEE, 2017.
* [4] J. Dai, Y. Li, K. He, and J. Sun, “R-FCN: Object detection via region-based fully convolutional networks,” in _NeurIPS_ , 2016.
* [5] Z. Cai and N. Vasconcelos, “Cascade r-cnn: Delving into high quality object detection,” in _CVPR_. IEEE, 2018\.
* [6] J. Redmon and A. Farhadi, “YOLO9000: Better, faster, stronger,” in _CVPR_. IEEE, 2017.
* [7] W. Liu, D. Anguelov, D. Erhan, C. Szegedy, S. Reed, C. Fu, and A. Berg, “SSD: Single shot multibox detector,” in _ECCV_ , 2016.
* [8] C. Fu, W. Liu, A. Ranga, A. Tyagi, and A. Berg, “DSSD: Deconvolutional single shot detector,” in _CVPR_. IEEE, 2017.
* [9] T. Lin, P. Goyal, R. Girshick, K. He, and P. Dollár, “Focal loss for dense object detection,” in _ICCV_. IEEE, 2017.
* [10] H. Law and J. Deng, “Cornernet: Detecting objects as paired keypoints,” in _ECCV_ , 2018.
* [11] S. Zhang, L. Wen, X. Bian, Z. Lei, and S. Li, “Single-shot refinement neural network for object detection,” in _CVPR_. IEEE, 2018.
* [12] S. Liu, D. Huang, and Y. Wang, “Receptive field block net for accurate and fast object detection,” in _ECCV_ , 2018.
* [13] J. Denzler and C. Brown, “Information theoretic sensor data selection for active object recognition and state estimation,” _IEEE TPAMI_ , vol. 2, pp. 145–157, 2002.
* [14] D. Wilkes and J. Tsotsos, “Active object recognition,” in _CVPR_. IEEE, 1992.
* [15] V. Mnih, K. Kavukcuoglu, D. Silver, A. Rusu, J. Veness, M. Bellemare, A. Graves, M. Riedmiller, A. Fidjeland, G. Ostrovski _et al._ , “Human-level control through deep reinforcement learning,” _Nature_ , vol. 518, no. 7540, p. 529, 2015.
* [16] T. Lillicrap, J. Hunt, A. Pritzel, N. Heess, T. Erez, Y. Tassa, D. Silver, and D. Wierstra, “Continuous control with deep reinforcement learning,” in _ICLR_ , 2015.
* [17] J. Schulman, S. Levine, P. Abbeel, M. Jordan, and P. Moritz, “Trust region policy optimization,” in _ICML_ , 2015.
* [18] P. Kormushev, S. Calinon, and D. Caldwell, “Robot motor skill coordination with EM-based reinforcement learning,” in _IROS_. IEEE, 2010.
* [19] T. Hester, M. Quinlan, and P. Stone, “Generalized model learning for reinforcement learning on a humanoid robot,” in _ICRA_. IEEE, 2010.
* [20] D. Silver, A. Huang, C. Maddison, A. Guez, L. Sifre, G. Van Den Driessche, J. Schrittwieser, I. Antonoglou, V. Panneershelvam, M. Lanctot _et al._ , “Mastering the game of go with deep neural networks and tree search,” _Nature_ , vol. 529, no. 7587, p. 484, 2016.
* [21] D. Silver, J. Schrittwieser, K. Simonyan, I. Antonoglou, A. Huang, A. Guez, T. Hubert, L. Baker, M. Lai, A. Bolton _et al._ , “Mastering the game of go without human knowledge,” _Nature_ , vol. 550, no. 7676, p. 354, 2017\.
* [22] B. Baker, O. Gupta, N. Naik, and R. Raskar, “Designing neural network architectures using reinforcement learning,” in _ICLR_ , 2017.
* [23] B. Zoph and Q. Le, “Neural architecture search with reinforcement learning,” in _ICLR_ , 2017.
* [24] J. Caicedo and S. Lazebnik, “Active object localization with deep reinforcement learning,” in _ICCV_. IEEE, 2015.
* [25] M. Bellver, X. Nieto, F. Acosta, and J. Torres, “Hierarchical object detection with deep reinforcement learning,” in _NeurIPS_ , 2016.
* [26] Z. Jie, X. Liang, J. Feng, X. Jin, W. Lu, and S. Yan, “Tree-structured reinforcement learning for sequential object localization,” in _NeurIPS_ , 2016.
* [27] J. Park, J. Lee, D. Yoo, and I. Kweon, “Distort-and-recover: Color enhancement using deep reinforcement learning,” in _CVPR_. IEEE, 2018.
* [28] K. Yu, C. Dong, L. Lin, and C. Loy, “Crafting a toolchain for image restoration by deep reinforcement learning,” in _CVPR_. IEEE, 2018.
* [29] X. Liang, L. Lee, and E. Xing, “Deep variation-structured reinforcement learning for visual relationship and attribute detection,” in _CVPR_. IEEE, 2017.
* [30] S. Yoo, K. Yun, J. Choi, K. Yun, and J. Choi, “Action-decision networks for visual tracking with deep reinforcement learning,” in _CVPR_. IEEE, 2017.
* [31] J. Ba, V. Mnih, and K. Kavukcuoglu, “Multiple object recognition with visual attention,” in _ICLR_ , 2015.
* [32] Y. He, K. Cao, C. Li, and C. Loy, “Merge or not? learning to group faces via imitation learning,” in _AAAI_ , 2018.
* [33] C. Huang, S. Lucey, and D. Ramanan, “Learning policies for adaptive tracking with deep feature cascades,” in _ICCV_ , 2017.
* [34] H. Van Hasselt, A. Guez, and D. Silver, “Deep reinforcement learning with double q-learning.” in _AAAI_ , 2016.
1 Translational Imaging Group, CMIC, University College London, UK
2 Dementia Research Centre (DRC), University College London, UK
# Scale factor point spread function matching: beyond aliasing in image
resampling
M. Jorge Cardoso$^{1,2}$, Marc Modat$^{1,2}$, Tom Vercauteren$^{1}$, Sebastien Ourselin$^{1,2}$
###### Abstract
Imaging devices exploit the Nyquist-Shannon sampling theorem to avoid both
aliasing and redundant oversampling by design. Conversely, in medical image
resampling, images are considered as continuous functions, are warped by a
spatial transformation, and are then sampled on a regular grid. In most cases,
the spatial warping changes the frequency characteristics of the continuous
function and no special care is taken to ensure that the resampling grid
respects the conditions of the sampling theorem. This paper shows that this
oversight introduces artefacts, including aliasing, that can lead to important
bias in clinical applications. One notable exception to this common practice
is when multi-resolution pyramids are constructed, with low-pass "anti-
aliasing" filters being applied prior to downsampling. In this work, we
illustrate why similar caution is needed when resampling images under general
spatial transformations and propose a novel method that is more respectful of
the sampling theorem, minimising aliasing and loss of information. We
introduce the notion of scale factor point spread function (sfPSF) and employ
Gaussian kernels to achieve a computationally tractable resampling scheme that
can cope with arbitrary non-linear spatial transformations and grid sizes.
Experiments demonstrate significant ($p<10^{-4}$) technical and clinical
implications of the proposed method.
## 1 Introduction
Image resampling is ubiquitous in medical imaging. Any processing pipeline
that requires coordinate mapping, correction for imaging distortions, or
simply altering the resolution of an image, needs a representation of the
image outside of the digital sampling grid provided by the initial image. The
theoretical foundations from the Nyquist-Shannon sampling theorem provide
conditions under which perfect continuous reconstruction can be achieved from
regularly spaced samples. This theoretical model thus provides a means of
interpolating between the discrete samples and underpins the typical
resampling procedure used in medical imaging. Many interpolating methods,
ranging from low order, e.g. piecewise constant and linear interpolation, to
high order, e.g. polynomial, piecewise-polynomial (spline) and windowed sinc
(Lanczos) methods have thus been developed, with different methods being
optimal for specific types of signal.
Resampling typically relies on one such interpolation method to represent, on
a given (discrete) target sampling grid, a continuous image derived from the
input source image. In most cases, the (discrete) resampled image is simply
assumed to be a faithful representation of the continuous derived image, with
no special care taken to ensure the sampling theorem conditions. Within the
context of medical imaging, Meijering et al. [1] compared multiple resampling
kernels, concluding that the quality of the resampling was proportional to the
degree of the kernel, with linear, cubic and Lanczos resampling kernels being
the best with a $1^{st}$, $2^{nd}$ and $3^{rd}$ order neighbourhood
respectively. This comparison was done with grid-aligned data under
translation, thus respecting the Nyquist criterion, meaning that their
conclusions cannot be extrapolated to general resampling.
Resampling is known to introduce artefacts in medical imaging. One common
example arises within the context of image registration [2, 3] where a source
image is typically resampled to the sampling grid of a target image and a
similarity function is used to compare the images based on the discrete
samples at the target grid location only, thereby discarding the continuous
image representation. For example, Pluim et al. [2] ameliorated the problem by
using partial volume (PV) sampling, and more recently Aganj et al. [3]
replaced the cost function summation term with an approximate integral.
Another main source of artefacts is the fact that aliasing is introduced when
resampling an image to lower resolution. While, in multi-resolution pyramids
[4], aliasing is well addressed by applying a Gaussian filter prior to
downsampling, the problem in the general resampling case has surprisingly
received little attention in the medical imaging community. Unser et al. [5]
propose a least-squares spline-based formulation that achieves aliasing-free
resampling for affine transformations, but to the best of our knowledge no
general solution has been proposed for local transformations. Inverse
approximation-based resampling has also been proposed [6], but this technique
cannot easily take into account the local anisotropy of the transformations
and has the drawback of requiring the inverse transformation.
In this paper, we propose to address aliasing artefacts in the general context
of resampling with non-linear spatial transformations by associating a scale
factor point spread function (sfPSF) to the images and formulating the
resampling process as an sfPSF matching problem. In the case of an original
image from a known imaging device, the sfPSF is simply the PSF of the device.
Our approach models local sfPSF transformations as a result of coordinate
mapping, providing a unified and scale-aware way to resample images between
grids of arbitrary sizes, under arbitrary spatial transformations.
## 2 Methods and Methodological considerations
##### Nominal scale factor Point Spread Function (sfPSF).
Observed samples (pixels/voxels) in medical images are not direct Dirac
impulse measurements of a biological scene, but are typically samples from the
convolution of the unobservable biological scene with an observation kernel,
i.e. they have an associated point spread function (PSF). This PSF is commonly
a property of the acquisition or reconstruction system. For example, 3D MRI
images have approximately Gaussian PSFs due to several factors, including the
low-pass filters in the receiver coils, while CT and PET images have spatially
variant PSFs dependent on many parameters including the image reconstruction
algorithm.
In this work, we propose to associate a PSF to each image, whether it directly
arises from an imaging device or is the result of further transformation or
processing, and refer to this as the scale factor PSF (sfPSF) by analogy with
the scale factor used in scale space theory [4]. In the case where the image
directly arises from an imaging device with known PSF, the sfPSF is simply
taken as the closest Gaussian approximation. When the actual PSF associated
with a given image is unknown, without loss of generality the nominal sfPSF is
assumed to be a Gaussian $\mathcal{G}_{\Sigma}$, where the covariance matrix
$\Sigma$ is diagonal and the full-width-half-maximum (FWHM) of the Gaussian
PSF is matched to the voxel size $D=\\{D_{x},D_{y},D_{z}\\}$:
$\Sigma=(2\sqrt{2\log
2})^{-2}\operatorname{diag}(D_{x}^{2},D_{y}^{2},D_{z}^{2})$. The relevance of
the sfPSF will become clear in the following section. Intuitively, when an
image with an associated sfPSF is smoothed, the resulting smoothed image will
have larger sfPSF. Conversely, when an image is upsampled, the resulting image
will be associated with the same sfPSF as the source image (in world
coordinates).
##### Compensating for aliasing when downsampling.
In medical images, frequencies commonly approach the Nyquist limit of their
representation (e.g. full k-space sampling in MRI). If one affinely transforms
an image onto its own grid with an affine transformation which has a
determinant greater than 1, i.e. spatially compressing the samples, or
transforms an image into a lower resolution grid, then the Nyquist criterion
is not satisfied with typical interpolation. This sub-Nyquist sampling
introduces frequency aliasing and loss of information.
Figure 1: Left to right: a high resolution (HR) image (63x63 voxels at 1x1mm
voxel size) with a square on a high frequency pattern; the resampling grid
(transformation); the HR image resampled to the original HR grid and to a
lower resolution (21x21 voxels at 3x3mm voxel size) grid using linear, sinc
and a Gaussian sfPSF (proposed) interpolation. Note the aliasing in the
linear/sinc examples.
In scale-space and pyramidal approaches, when downsampling an image by a
factor of $k=2$, the original image is typically pre-convolved with a Gaussian
filter with standard deviation $\sigma_{P}=0.7355$. Applying the notion of sfPSF and the
notations introduced above, we specify $\mathcal{G}_{{\Sigma}_{S}}$ and
$\mathcal{G}_{{\Sigma}_{T}}$ as the source image and target grid nominal
sfPSFs respectively. Following the scale space methodology [4], sfPSF matching
consists of finding the best scale-matching PSF $\mathcal{G}_{{\Sigma}_{P}}$
that, when combined with the source sfPSF, provides the best approximation of
the target sfPSF. If the sfPSFs are Gaussian, then the covariance of
$\mathcal{G}_{{\Sigma}_{P}}$ can be obtained through simple covariance
addition ${{{\Sigma}_{T}}={{\Sigma}_{S}}}+{{\Sigma}_{P}}$. In the downsampling
example above, without additional knowledge, we have
$\operatorname{FWHM}_{T}=k=2$ and $\operatorname{FWHM}_{S}=1$. We therefore
get
$\Sigma_{P}=\Sigma_{T}-\Sigma_{S}=(k^{2}-1^{2})(2\sqrt{2\log{2}})^{-2}\operatorname{Id}$
or, in this case, $\sigma_{P}=0.7355$. This is equivalent to finding the best
Fourier domain filter that limits the bandwidth of the source image
frequencies to the representable Nyquist frequencies on the target grid.
Similar filtering would be necessary to avoid aliasing when rigidly resampling
high resolution images to a lower resolution, e.g. high resolution anatomical
MRI and CT images or binary segmentations are transformed to lower resolution
MRI (2D FLAIR), metabolic (PET), microstructural (DWI) and functional (fMRI)
images. In this work we propose to compensate for the differences in sfPSF
when resampling to arbitrary grid sizes under global (affine) or local (non-
linear) spatial transformations.
Before presenting our methodological contribution in detail, we illustrate the
sfPSF performance in comparison with standard interpolation on a synthetic
phantom (Fig. 1) with a low frequency object (square) overlaid on a high-
frequency pattern (to highlight aliasing). This phantom is resampled to a grid
with 3 times less resolution per axis. Note that the proposed sfPSF method
appropriately integrates out high frequencies without introducing aliasing.
##### Pre-convolution sfPSF matching.
In the affine transformation $\mathcal{A}$ scenario from the space of $T$ to
$S$, the source image can be pre-convolved with an _anti-aliasing_ filter with
covariance
$\Sigma_{P}^{S}+\Sigma_{S}=\mathcal{A}\cdot\Sigma_{T}\cdot\mathcal{A}^{\intercal}$
and then resampled using classical interpolation. Note that $\Sigma_{P}^{S}$
is now defined in the space of $S$.
In the general non-linear spatial transformation case, a similar but
spatially-variant pre-filtering procedure could be applied by relying on a
local affine approximation of the spatial transformation, i.e. using the
Jacobian matrix. This solution might provide useful results but, because the
sfPSF matching, or smoothing, is performed before the spatial transformation,
and the sfPSF potentially has a large spatial extent, it can be seen as
disregarding the non-homogeneous distribution of the samples in target space
$T$. Instead, we propose to model and perform the sfPSF matching in the space
of $T$ and thus rely on $\Sigma_{P}^{T}$.
##### Target space sfPSF matching.
Let $\mathcal{F}_{S\leftarrow T}(\mathbf{v})$ be a spatial transformation
mapping the location $\mathbf{v}=(v_{x},v_{y},v_{z})$ from the space of $T$ to
that of $S$, given either an affine or non-linear transformation. Let
$\mathcal{A}_{S\leftarrow T}(\mathbf{v})$ be the Jacobian matrix of
$\mathcal{F}$ at $\mathbf{v}$ which provides the best linear approximation of
$\mathcal{F}$ at $\mathbf{v}$. Following our previous derivations we have
$\Sigma_{P}^{S}(\mathbf{v})+\Sigma_{S}\approx\mathcal{A}_{S\leftarrow
T}(\mathbf{v})\cdot\Sigma_{T}\cdot\mathcal{A}_{S\leftarrow
T}^{\intercal}(\mathbf{v})$. Thus, given
$\Sigma_{P}^{S}(\mathbf{v})=\mathcal{A}_{S\leftarrow
T}(\mathbf{v})\cdot\Sigma_{P}^{T}(\mathbf{v})\cdot\mathcal{A}_{S\leftarrow
T}^{\intercal}(\mathbf{v})$ we now want
$\Sigma_{P}^{T}(\mathbf{v})+\mathcal{A}_{S\leftarrow
T}^{-1}(\mathbf{v})\cdot\Sigma_{S}\cdot\mathcal{A}_{S\leftarrow
T}^{-\intercal}(\mathbf{v})\approx\Sigma_{T}$. In other words, we want to find
a _symmetric positive semi-definite_ covariance $\Sigma_{P}^{T}(\mathbf{v})$
that, when combined with the affinely transformed covariance
$\Sigma_{S}^{T}(\mathbf{v})=\mathcal{A}_{S\leftarrow
T}^{-1}(\mathbf{v})\cdot\Sigma_{S}\cdot\mathcal{A}_{S\leftarrow
T}^{-\intercal}(\mathbf{v})$, best approximates $\Sigma_{T}$. Under the
typical assumption that the nominal sfPSF $\Sigma_{T}$ is diagonal, this
approximation is given by
$\tilde{\Sigma_{P}^{T}}(\mathbf{v})=\max(\Sigma_{T}-\lambda(\mathbf{v}),0)$
with $\lambda(\mathbf{v})$ being a diagonal scaling matrix containing the
components of $\Sigma_{S}^{T}(\mathbf{v})$ obtained through polar
decomposition and $\max(\cdot,\cdot)$ is the element-wise maximum operator
between two matrices. It is important to note that under regimes where the
Nyquist limit is not violated, e.g. upsampling and rigid transformation
between isotropic grids, the proposed method reverts to standard resampling as
the sfPSF would become a Dirac.
##### Interpolation by convolution with the sfPSF.
Let $I_{T}(\mathbf{v})$ be the sought resampled intensity in the space of $T$
at location $\mathbf{v}$, and $I_{S}(\mathbf{v})$ be the intensity of the
source image. The interpolated and PSF matched value of $I_{T}$ at location
$(\mathbf{v})$ can be obtained by an _oversampled_ discretised convolution
$\displaystyle I_{T}(\mathbf{v})$
$\displaystyle=\frac{1}{Z}\sum_{\mathpzc{v}_{x}}^{N_{x}(\mathbf{v})}\sum^{N_{y}(\mathbf{v})}_{\mathpzc{v}_{y}}\sum^{N_{z}(\mathbf{v})}_{\mathpzc{v}_{z}}I_{S}(\mathcal{F}_{S\leftarrow
T}(\mathbf{v}-\mathbf{\mathpzc{v}}))\mathcal{G}_{\tilde{\Sigma^{T}_{P}}{(\mathbf{v})}}(\mathbf{\mathpzc{v}})$
(1)
where
$\mathcal{G}_{\tilde{\Sigma^{T}_{P}}{(\mathbf{v})}}(\mathbf{\mathpzc{v}})=(2\pi)^{-\frac{3}{2}}|\tilde{\Sigma_{P}}{(\mathbf{v})}|^{-\frac{1}{2}}\,e^{-\frac{1}{2}\mathbf{\mathpzc{v}}^{\prime}(\tilde{\Sigma^{T}_{P}}{(\mathbf{v})})^{-1}\mathbf{\mathpzc{v}}}$,
$Z$ is a normaliser that ensures the (discrete) sum over
$\mathcal{G}_{\tilde{\Sigma^{T}_{P}}{(\mathbf{v})}}(\mathbf{\mathpzc{v}})$
equals 1. $N_{x}(\mathbf{v})$, $N_{y}(\mathbf{v})$ and $N_{z}(\mathbf{v})$ are
sampling regions centered at $\mathbf{v}$ in the $x$, $y$ and $z$ directions
respectively. More specifically, $N_{x}(\mathbf{v})$ are homogeneously spaced
samples between -3 and 3 standard deviations, i.e.
$N_{x}(\mathbf{v})=\left\\{k\sigma_{P_{x}}(\mathbf{v})\ \ \forall
k\in[-3,3]\right\\}$, with
$\sigma_{P_{x}}^{2}(\mathbf{v})=\tilde{\Sigma^{T}_{P}}^{xx}(\mathbf{v})$, and
equivalently for $N_{y}(\mathbf{v})$, $N_{z}(\mathbf{v})$,
$\sigma_{P_{y}}(\mathbf{v})$ and $\sigma_{P_{z}}(\mathbf{v})$. As $S$ is a
discrete image, an appropriate interpolation method, such as linear, cubic or
sinc interpolation is used to get the sample values at
$\mathcal{F}_{S\leftarrow T}(\mathbf{v}-\mathbf{\mathpzc{v}})$. Similarly,
interpolation is required to compute $\mathcal{F}_{S\leftarrow
T}(\mathbf{v}-\mathbf{\mathpzc{v}})$ if the spatial transformation is
represented as a displacement field. Note also that when $\sigma_{P_{x}}$
(resp. $\sigma_{P_{y}}$ or $\sigma_{P_{z}}$) becomes $0$ or very small, care
has to be taken to appropriately compute the limit of the Gaussian weight by
resorting to an axis aligned Dirac function.
##### Limitations of Sinc interpolation by convolution.
Under the sampling theorem, replacing the Gaussian in Eq. 1 with a sinc should
produce the theoretically best alias-free results. Since this work focuses on
obtaining realistic PSFs rather than the least aliased ones, we note three
problems with sinc resampling that are addressed by the proposed method.
First, sinc is only optimal if the discretised sampling scheme of Eq. 1 is
replaced with an integral. Second, sincs are optimal from a bandlimit point
of view, but are poor approximations of real PSFs, e.g. PET images have an
approximately Gaussian PSF larger than the voxel size. Finally, sincs cannot
be used for signal-limited images (probabilistic segmentations bounded between
0 and 1), as they would produce negative probabilities. Gaussians do not have
this problem.
## 3 Validation and Discussion
##### Data.
30 healthy control subjects were obtained from the ADNI2 database. All
subjects had an associated T1-weighted MRI image, acquired at
1.1$\times$1$\times$1$mm$ voxel size, and an 18F-FDG PET image, reconstructed
at 3$\times$3$\times$3$mm$. For this selected subset of ADNI2, all data (MRI
and PET) were acquired on the same scanner with the same scanning parameters,
thus removing acquisition confounds. While the effective PSF of PET images can
be between 3 and 6$mm$ FWHM, in these experiments we will assume a PSF with
3$mm$ FWHM.
##### Frequency domain analysis under non-linear transformation.
Frequency domain analysis was used to assess whether resampling images under
non-linear transformation respects the Nyquist limit and to test whether the
proposed method can mitigate any error. The 30 T1 MRIs were zero padded by a
factor of 2 in the frequency domain (0.55$\times$0.5$\times$0.5$mm$ voxel
size), doubling the representable frequencies. Fig. 2 shows two upsampled
images and their frequency magnitude. Note the empty power spectrum outside
the Nyquist band (white box).
Figure 2: Left to right: (Top) upsampled source $S$ and target $T$ images
followed by $S$ non-linearly resampled to $T$ using sinc and Gaussian sfPSF;
(Bottom) their respective frequency domain log magnitude. The white box
represents the Nyquist limit before zero-padding. Note the high supra-Nyquist
magnitude after sinc resampling.
29 upsampled T1 images were affinely [7] and then non-rigidly [8] registered
to the remaining image using standard algorithms. Each upsampled image was
resampled to the space of the remaining image using both a three-lobed
truncated sinc resampling, and the proposed Gaussian sfPSF method (see Fig.
2). The Gaussian sfPSF was set to 1.1$\times$1$\times$1$mm$ FWHM for both
$\Sigma_{S}$ and $\Sigma_{T}$. Note that, as expected, supra-Nyquist
frequencies are created when using sinc interpolation. These frequencies are
greatly suppressed when using the proposed method. Analysis of the power
spectra of the 29 resampled images showed an average power suppression of
94.4% for frequencies above the Nyquist band when using the Gaussian PSF
instead of sinc interpolation. This power suppression results in an equivalent
reduction of aliasing at the original resolution.
##### Volume preservation and partial volume under rigid resampling.
To demonstrate clinically relevant consequences of aliasing, we tested the
effect of resampling within the context of segmentation propagation and
partial volume estimation. Specifically, T1 images are segmented into 98
different regions using multi-atlas label propagation and fusion [9] based on
the Neuromorphometrics, Inc. labels. The T1 images were then rigidly
registered [7] to the PET data, and each one of the segmented regions was
resampled to the PET data using both linear interpolation and the Gaussian
sfPSF. Linear interpolation was chosen here due to the signal-limited $[0,1]$ nature of
probabilities. Example results are shown in Fig. 3.
High Res Vol=14.54ml | Linear Vol=14.39ml | Gaussian sfPSF Vol=14.53ml
Figure 3: Left to right: A segmented region overlaid on a high resolution T1
(1.1$\times$1$\times$ 1$mm$) image (and zoomed), followed by the same
segmentation resampled to the resolution of a PET image
(3$\times$3$\times$3$mm$) using linear and the proposed Gaussian sfPSF. The
zoomed segmentations are in gray scale between 0 and 1. Given that the cortex
is commonly thinner than $3mm$, note the unrealistic large amount of voxels
with segmentation probability equal to 1 at PET resolution when using linear
resampling.
Figure 4: The mean RVD (Left) and mean ARVD (Right) between a low and high
resolution representation of a segmented region averaged over the population.
98 different regions are plotted against the mean volume of the region
$V_{HR}$ over the population.
As all the segmentations are probabilistic, volume was estimated as the sum of
all probabilities for all voxels $\mathbf{v}$ times the voxel size
$\mathcal{V}$. The volume was estimated in the original T1 space ($V_{HR}$),
and in the PET space after linear ($V_{TRI}$) and the Gaussian sfPSF
($V_{sfPSF}$) resampling. Similar volumes were obtained when the probabilistic
segmentations were thresholded at 0.5. The relative volume difference
RVD$=(V_{LR}-V_{HR})/V_{HR}$, with $V_{LR}\in\\{V_{TRI},V_{sfPSF}\\}$, and its absolute value, ARVD $=|$RVD$|$, were
estimated for each of the 30 subjects, 98 cortical regions and 2 resampling
methods. The RVD and ARVD were averaged over all subjects for each region,
resulting in 98 mean RVD and ARVD values per region, per method. These values
are plotted in Fig. 4. A Parzen window mean and STD are also plotted in the
same figure. Both resampling methods are unbiased according to the mean RVD.
However, the Gaussian sfPSF method provides significantly lower ($p<10^{-4}$,
Wilcoxon signed-rank test) errors in terms of mean ARVD for all regions,
especially in smaller regions where linear resampling can introduce up to 8%
mean absolute volume difference. An 8% mean difference implies even larger
local errors, resulting in detrimental effects in partial volume/ compartment
modelling, and when estimating biomarkers that rely on structural
segmentations.
## 4 Conclusion
The presented work explores aliasing in medical image resampling and proposes
a new interpolation technique that matches the scale factor PSF given
arbitrary grid sizes and non-linear transformations. We demonstrate the
advantages of using Gaussian sfPSF resampling, both in terms of Nyquist limit
and volume preservation, when compared with common resampling techniques.
Future work will involve deploying the proposed methodology within image
registration algorithms and verifying the impact of the sfPSF in partial
volume and compartment modelling. An implementation will be made available at
the time of publication.
##### Acknowledgements.
This work was supported by the EPSRC (EP/H046410/1, EP/J020990/1, EP/K005278),
the MRC (MR/J01107X/1), the EU-FP7 (FP7-ICT-2011-9-601055), the NIHR BRC
UCLH/UCL HII (BW.mn.BRC10269), and the UCL Leonard Wolfson Experimental
Neurology Centre (PR/ylr/18575).
## References
* [1] Meijering, E.H.W., Niessen, W.J., Pluim, J.P.W., Viergever, M.A.: Quantitative comparison of sinc-approximating kernels for medical image interpolation. In: Proc. MICCAI’99. Springer Berlin Heidelberg (1999) 210–217
* [2] Pluim, J.P., Maintz, J.B.A., Viergever, M.A.: Mutual information matching and interpolation artifacts. In: SPIE Medical Imaging. Volume 3661., SPIE Press (May 1999) 56–65
* [3] Aganj, I., Yeo, B.T.T., Sabuncu, M.R., Fischl, B.: On removing interpolation and resampling artifacts in rigid image registration. IEEE Trans. Image Process. 22(2) (February 2013) 816–827
* [4] Lindeberg, T.: Scale-space for discrete signals. IEEE Trans. Pattern Anal. Mach. Intell. 12(3) (1990) 234–254
* [5] Unser, M., Neimark, M., Lee, C.: Affine transformations of images: a least squares formulation. In: Proc. ICIP’94. Volume 3. (1994) 558–561
* [6] Arigovindan, M., Sühling, M., Hunziker, P., Unser, M.: Variational image reconstruction from arbitrarily spaced samples: A fast multiresolution spline solution. IEEE Trans. Image Process. 14(4) (2005) 450–460
* [7] Modat, M., Cash, D.M., Daga, P., Winston, G.P., Duncan, J.S., Ourselin, S.: Global image registration using a symmetric block-matching approach. Journal of Medical Imaging 1(2) (July 2014) 024003–024003
* [8] Modat, M., Ridgway, G.R., Taylor, Z.A., Lehmann, M., Barnes, J., Hawkes, D.J., Fox, N.C., Ourselin, S.: Fast free-form deformation using graphics processing units. Computer Methods and Programs in Biomedicine 98(3) (June 2010) 278–284
* [9] Cardoso, M., Modat, M., Wolz, R., Melbourne, A., Cash, D., Rueckert, D., Ourselin, S.: Geodesic information flows: Spatially-variant graphs and their application to segmentation and fusion. IEEE TMI (in press) (2015)
* [10] Higham, N.J.: Computing a nearest symmetric positive semidefinite matrix. Linear Algebra Appl. 103 (1988) 103–118
* [11] Schröcker, H.P.: Uniqueness results for minimal enclosing ellipsoids. Comput. Aided Geom. Des. 25(9) (2008) 756–762
## Draft Appendix
Let $S$ and $T$ be 2 symmetric positive definite (SPD) matrices representing
covariance matrices. We are looking for the _smallest_ symmetric positive
semi-definite (SPSD) matrix $P$ such that $S+P$ is larger than $T$.
### Some useful tools and properties
A matrix $X$ is SPSD if
$\forall v,v^{\intercal}\cdot X\cdot v\geq 0$ (2)
We then write
$X\succeq 0$ (3)
Furthermore, we write $X\succeq Y$ iff $X-Y\succeq 0$.
A matrix $X$ is SPD if
$\forall v\neq 0,v^{\intercal}\cdot X\cdot v>0$ (4)
We then write
$X\succ 0$ (5)
We also write $X\succ Y$ iff $X-Y\succ 0$.
If $Z$ is invertible, then $Z^{\intercal}\cdot X\cdot Z\succeq 0$ (resp.
$\succ 0$) iff $X\succeq 0$ (resp. $\succ 0$). The spectral decomposition of an
SPSD matrix is given by
$X=Z(X)\cdot\Lambda(X)\cdot Z(X)^{\intercal}$ (6)
where $Z(X)\cdot Z(X)^{\intercal}=\operatorname{Id}$ and
$\Lambda(X)=\operatorname{diag}(\lambda_{i}(X))$. Note that for any matrix
$Y$,
$\det(Z(X)\cdot Y\cdot Z(X)^{\intercal})=\det(Y)$ (7)
### Smallest SPSD matrix increment
#### A matrix norm minimization view.
We are looking for an SPSD matrix $P$ such that $S+P$ is close to $T$. In
other words, we want $P$ to be close to $T-S$. We rephrase this more formally
as minimizing the Frobenius norm $\left\lVert(S+P)-T\right\rVert=\left\lVert
P-(T-S)\right\rVert$ subject to the condition $P\succeq 0$.
Following [10], the solution to this problem is given by:
$P=Z(T-S)\cdot\max(\Lambda(T-S),0)\cdot Z(T-S)^{\intercal}$ (8)
While this provides a nice closed-form solution, it is unclear whether the
Frobenius norm really captures the discrepancy in terms of covariance
matrices. This solution still has interesting properties: we indeed have
$S+P\succeq S$ and $S+P\succeq T$, since from (8) we easily see that
$P\succeq 0$ and $P-(T-S)\succeq 0$.
#### A geometric view.
We are looking for an SPSD matrix $P$ that minimizes the volume of $S+P$ (i.e.
$\det(S+P)$) under the constraints that $P\succeq 0$, $S+P\succeq S$ (which is
trivially obtained from $P\succeq 0$) and $S+P\succeq T$.
Let us first generalize the problem. We now simply look for an SPD matrix $Q$
that minimizes $\det(Q)$ subject to the conditions $Q\succeq S$ and $Q\succeq
T$. Let us consider the spectral decomposition of $S$ and use the
corresponding spectral coordinate system:
$X^{\prime}=Z(S)^{\intercal}\cdot X\cdot Z(S)$ (9)
Note that we have $S^{\prime}=\Lambda(S)$. Our problem is equivalent to
minimizing $\det(Q^{\prime})=\det(Q)$ subject to the conditions
$Q^{\prime}\succeq S^{\prime}=\Lambda(S)$ and $Q^{\prime}\succeq T^{\prime}$.
Figure 5: Covariances in coordinate system of $S$.
Let us now rescale the space with a volume-preserving transformation to turn
$\Lambda(S)$ into a scalar matrix. Let
$\Lambda_{N}(S)=\det(S)^{1/(2\,Dim)}\cdot\operatorname{diag}(\lambda_{i}^{-1/2}(S))$
(10)
where $Dim$ is the dimension of the space and $\det(\Lambda_{N}(S))=1$. Let
$X^{\prime\prime}=\Lambda_{N}(S)\cdot X^{\prime}\cdot\Lambda_{N}(S)=\Lambda_{N}(S)\cdot Z(S)^{\intercal}\cdot X\cdot Z(S)\cdot\Lambda_{N}(S)$ (11)
Note that we have $S^{\prime\prime}=\det(S)^{1/Dim}\operatorname{Id}$. Our
problem becomes equivalent to that of minimizing
$\det(Q^{\prime\prime})=\det(Q)$ subject to the constraints
$Q^{\prime\prime}\succeq S^{\prime\prime}=\det(S)^{1/Dim}\operatorname{Id}$
and $Q^{\prime\prime}\succeq T^{\prime\prime}$.
Figure 6: Volume-preserving rescaling to make $S$ isotropic.
Let us now consider the spectral decomposition of $T^{\prime\prime}$ and use
the corresponding spectral coordinate system:
$X^{\prime\prime\prime}=Z(T^{\prime\prime})^{\intercal}\cdot
X^{\prime\prime}\cdot Z(T^{\prime\prime})$ (12)
Note that we have $T^{\prime\prime\prime}=\Lambda(T^{\prime\prime})$. Our
problem becomes equivalent to that of minimizing
$\det(Q^{\prime\prime\prime})=\det(Q)$ subject to the constraints
$Q^{\prime\prime\prime}\succeq
S^{\prime\prime\prime}=\det(S)^{1/Dim}\operatorname{Id}$ and
$Q^{\prime\prime\prime}\succeq
T^{\prime\prime\prime}=\Lambda(T^{\prime\prime})$. Note that both
$\det(S)^{1/Dim}\operatorname{Id}$ and $\Lambda(T^{\prime\prime})$ are axis-
aligned, i.e. diagonal.
Figure 7: Rotation to make both covariances axis-aligned.
For symmetry reasons, and because of the uniqueness of the solution [11], the
solution also has to be axis-aligned, i.e. diagonal:
$Q^{\prime\prime\prime}=\operatorname{diag}(\lambda_{i}(Q^{\prime\prime\prime}))$.
We find that our problem is now equivalent to minimizing
$\det(Q^{\prime\prime\prime})=\prod\lambda_{i}(Q^{\prime\prime\prime})$
subject to the conditions
$\lambda_{i}(Q^{\prime\prime\prime})\geq\det(S)^{1/Dim}$ and
$\lambda_{i}(Q^{\prime\prime\prime})\geq\lambda_{i}(T^{\prime\prime})$. The
solution to this problem is obtained by taking:
$\lambda_{i}(Q^{\prime\prime\prime})=\max(\det(S)^{1/Dim},\lambda_{i}(T^{\prime\prime}))$
(13)
Figure 8: Best covariance in this space?
Now we need to roll back the transformations and get
$Q=Z(S)\cdot\Lambda_{N}^{-1}(S)\cdot Z(T^{\prime\prime})\\\
\cdot\max(\det(S)^{1/Dim}\operatorname{Id},\Lambda(T^{\prime\prime}))\cdot
Z(T^{\prime\prime})^{\intercal}\cdot\Lambda_{N}^{-1}(S)\cdot Z(S)^{\intercal}$
(14)
Equivalently, focusing on $T$ first, we get:
$Q=Z(T)\cdot\Lambda_{N}^{-1}(T)\cdot Z(S^{\prime\prime})\\\
\cdot\max(\det(T)^{1/Dim}\operatorname{Id},\Lambda(S^{\prime\prime}))\cdot
Z(S^{\prime\prime})^{\intercal}\cdot\Lambda_{N}^{-1}(T)\cdot Z(T)^{\intercal}$
(15)
##### Axis-aligned $T$.
If $T$ is axis aligned, (15) simplifies to:
$\displaystyle Q=T^{1/2}\cdot
Z(S^{\prime\prime\prime})\cdot\max(\operatorname{Id},\Lambda(S^{\prime\prime\prime}))\cdot
Z(S^{\prime\prime\prime})^{\intercal}\cdot T^{1/2}$ (16)
with
$\displaystyle S^{\prime\prime\prime}=T^{-1/2}\cdot S\cdot T^{-1/2}$ (17)
#### Relating the matrix norm and the geometric views.
At this stage it is unclear how (8) and (14) relate to each other. Numerical
simulations show a difference, but the results are quite close to each other.
Figure 9 shows an example with a rather large difference. The geometric method
indeed leads to smaller volume but this comes at the cost of increasing the
anisotropy. The matrix norm method produces more intuitive results.
Figure 9: Summary of all ellipses
Some interesting references include [10, 11] and
http://mathoverflow.net/questions/120925.
|
# Energy-transfer Quantum Dynamics of HeH+ with He atoms: Rotationally
Inelastic Cross Sections and Rate Coefficients
F. A. Gianturco Institut für Ionenphysik und Angewandte Physik, Universität
Innsbruck
Technikerstr. 25 A-6020, Innsbruck, Austria K. Giri Department of
Computational Sciences, Central University of Punjab,
Bathinda 151001 India L. González-Sánchez<EMAIL_ADDRESS>Departamento de
Química Física, University of Salamanca
Plaza de los Caídos sn, 37008, Salamanca, Spain E. Yurtsever Department of
Chemistry, Koç University
Rumelifeneriyolu, Sariyer TR 34450, Istanbul, Turkey N. Sathyamurthy Indian
Institute of Science Education and Research Mohali
SAS Nagar, Manauli 140306 India R. Wester Institut für Ionenphysik und
Angewandte Physik, Universität Innsbruck
Technikerstr. 25 A-6020, Innsbruck, Austria
###### Abstract
Two different $ab$ $initio$ potential energy surfaces are employed to
investigate the efficiency of the rotational excitation channels for the polar
molecular ion HeH+ interacting with He atoms. We further use them to
investigate the quantum dynamics of both the proton-exchange reaction and the
purely rotational inelastic collisions over a broad range of temperatures. In
current modeling studies, this cation is considered to be one of the possible
cooling sources under early universe conditions after the recombination era
and has recently been found to exist in the Interstellar Medium. Results from
the present calculations are able to show the large efficiency of the state-
changing channels involving rotational states of this cation. In fact, we find
them to be similar in size and behaviour to the inelastic and to the reaction
rate coefficients obtained in previous studies where H atoms were employed as
projectiles. The same rotational excitation processes, occurring when free
electrons are the collision partners of this cation, are also compared with
the present findings. The relative importance of the reactive, proton-exchange
channel and the purely inelastic channels is also analysed and discussed. The
rotational de-excitation processes are also investigated for the cooling
kinetics of the present cation under cold trap conditions with He as the
buffer gas. The implications of the present results for setting up more
comprehensive numerical models to describe the chemical evolution networks in
different environments are briefly discussed.
molecular processes — HeH+/He — rate coefficients — Interstellar Medium— cold
ion trap kinetics
## I Introduction
Since the detection of HeH+ in the planetary nebula NGC 7027 by Güsten _et
al._ (2019) and its confirmation by Neufeld _et al._ (2020), there has been a
renewed interest in the mechanism of formation and destruction of HeH+ in
stellar and interstellar conditions. Novotný _et al._ (2019) have recently
measured the recombination rate for HeH+ in the ion storage ring and found
such rates, which lead to the destruction of HeH+, to be much smaller than
what was expected from earlier studies, thus suggesting that this important
cation should be more abundant than previously expected in the astrochemical
environments: from circumstellar envelopes to the stage of the recombination
era in the early universe. Hence, we have witnessed a revival of the interest
in discussing and modeling its collision efficiency in operating as an energy
dissipation partner with other chemical species like He, H, H+ and H2, all
partners considered by several modeling studies to be present in similar
environments.(Galli and Palla, 2013; Lepp, Stancil, and Dalgarno, 2002)
The molecular cation HeH+ was detected in the laboratory using a mass
spectrograph as early as 1925. (Hogness and Lunn, 1925) The infrared spectrum
was predicted by Dabrowski and Herzberg in 1977. (Dabrowski and Herzberg,
1977) But HeH+ eluded detection in interstellar conditions until rather
recently, as mentioned earlier. (Güsten _et al._ , 2019)
Our understanding so far is that soon after the nucleosynthesis was over and
conditions were conducive to recombination processes (red shift z $\sim$1000
and temperature $\sim$3000-4000 K), helium, hydrogen and, to a lesser extent,
lithium atoms were formed. (Galli and Palla, 2013; Lepp, Stancil, and
Dalgarno, 2002) Although earlier studies (Bates, 1951) had proposed the
formation of H${}_{2}^{+}$ by radiative association (RA) from H and H+,
subsequent studies suggested the formation of HeH+ as the first molecular
species by the following route:
$\textrm{He}+\textrm{H}^{+}\longrightarrow\textrm{HeH}^{+}+\textrm{h}\nu$ (1)
However, the formation of HeH+ under interstellar conditions is attributed to
another RA channel. (Forrey _et al._ , 2020) That is,
$\textnormal{He}^{+}+\textnormal{H}\longrightarrow\textnormal{HeH}^{+}+\textrm{h}\nu$
(2)
HeH+ could react readily with the abundant H atoms (He : H = 1 : 10 in the
early universe) and form H${}_{2}^{+}$:
$\textnormal{HeH}^{+}+\textnormal{H}\longrightarrow\textnormal{He}+\textnormal{H}_{2}^{+}.$
(3)
The interaction of HeH+ with H2 to give HeH${}_{3}^{+}$ and also possibly He
and H${}_{3}^{+}$ has also been considered by Zicler et al. (Zicler _et al._
, 2017) but the corresponding inelastic collisions leading to rotational or
rovibrational energy transfer have not been included explicitly within that
chemical network.
Despite the difficulties in detecting HeH+ in the planetary nebula, Güsten
_et al._ (2019) pointed out that the recorded emission of HeH+ from the
rotational state $j$ = 1 to $j$ = 0 is (2-3 times) larger than what could be
accounted for by the available rate coefficients and astrophysical models.
Ravi _et al._ (2020) have proposed recently that the nonadiabatic coupling
terms between He, H and H+ could act like (astronomical) friction and form
[HeH${}_{2}^{+}$]*, which could in turn lead to the formation of HeH+ and H
and also of He + H${}_{2}^{+}$. All the above considerations point to the
relevance that the present molecular cation is currently expected to have
within the chemistry of the early universe and of the interstellar medium
(ISM), as we shall further illustrate below.
Inelastic collisions between HeH+ and H and the reactive events between the
two have also been investigated extensively over the years (Bovino _et al._ ,
2011; De Fazio, 2014; Desrousseaux and Lique, 2020), and the reverse reaction
$\textnormal{He}+\textnormal{H}_{2}^{+}\longrightarrow\textnormal{HeH}^{+}+\textnormal{H}$
(4)
has been studied extensively over the last several decades (for example, see
Ramachandran _et al._ (2009); Kolakkandy, Giri, and Sathyamurthy (2012); De
Fazio (2014)). Some of their results will be also discussed in the following.
More specifically, in the present work we intend to investigate in some
detail, and to our knowledge for the first time, how efficiently HeH+ could
also exchange internal energy (mainly rotational energy) when it interacts
with the neutral He atoms known to be present in the same environments, and
therefore enter the paths in energy dissipation networks by undergoing purely
inelastic (rotational) and H+-exchange processes with that neutral partner:
$\textnormal{HeH}^{+}{(\nu,j)}+\textnormal{He}^{{}^{\prime}}\longrightarrow\textnormal{HeH}^{+}{(\nu,j^{\prime})}+\textnormal{He}^{{}^{\prime}}.$
(5)
$\textnormal{HeH}^{+}+\textnormal{He}^{{}^{\prime}}\longrightarrow\textnormal{He}+\textnormal{H}^{+}\textnormal{He}^{{}^{\prime}}.$
(6)
We have labelled one of the two helium atoms as He′ in equations (5) and (6)
to distinguish between the two atoms when we shall also be considering the
H+-exchange collision events, as further discussed below.
The equilibrium geometry and the potential well depth for He2H+ has been
investigated over the years by several workers. (Poshusta, Haugen, and Zetik,
1969; Poshusta and Siems, 1971; Milleur, Matcha, and Hayes, 1974; Dykstra,
1983; Baccarelli, Gianturco, and Schneider, 1997; Filippone and Gianturco,
1998; Kim and Lee, 1999) Although a limited number of geometries near the
equilibrium geometry of He2H+ was investigated by Dykstra (1983) and an
analytical fit of the potential energy surface (PES) was given by Lee and
Secrest (1986) the first extensive $ab$ $initio$ PES for the system was
generated by Panda and Sathyamurthy (2003) using the coupled cluster singles
and doubles with perturbative triples (CCSD(T)) method employing
the d-aug-cc-pVTZ basis set. A deep potential well of depth 0.578 eV was
reported for the collinear geometry [He-H-He]+ with a He-H distance of 0.926 Å
and several bound and quasi-bound states (for total angular momentum $J$ = 0)
were determined in that work. In the present report, their potential function
is referred to as PS-PES.
Using the time-dependent quantum mechanical wave packet (TDQMWP) method
(Balakrishnan, Kalyanaraman, and Sathyamurthy, 1997), Bhattacharya and Panda
(2009) investigated the H+-exchange reaction in (HeH+/ He) collisions for
different vibrational ($v$) and rotational ($j$) states of HeH+ and found that
there were severe oscillations in the plots of the exchange reaction
probability as a function of the relative kinetic energy
($E_{\textrm{trans}}$) of the reactants. These oscillations could be traced to
the bound and quasi-bound vibrational states of HeH+…He complex. (Panda and
Sathyamurthy, 2003) Although the oscillations were considerably quenched in
the plots of partial reaction cross section as a function of
$E_{\textrm{trans}}$, some of the oscillations survived in the plots of the
corresponding excitation function. (Panda and Sathyamurthy, 2003)
Additionally, Liang _et al._ (2012) reported another $ab$ $initio$ PES for
the system, hereinafter referred to as the LYWZ PES, obtained using the multi-
reference configuration method and the d-aug-cc-pV5Z basis set. It was
comparable to the PS-PES in all its general and specific features, as noted in
(Liang _et al._ , 2012), so we shall be using the PS-PES in the present
study. These authors (Liang _et al._ , 2012) also carried out quasi-classical
trajectory calculations using their LYWZ PES and found that the resulting
excitation functions were also comparable in magnitude to those reported by
Panda and Sathyamurthy (2003), except for small oscillations in the latter
which were not present in (Liang _et al._ , 2012). Furthermore, by using the
TDQMWP approach, Xu and Zhang (2013) additionally computed the integral
reaction cross sections on the LYWZ PES over a range of $E_{\textrm{trans}}$
(0-0.5 eV) and pointed out the importance of including Coriolis coupling. The
work of Wu _et al._ (2014) extended the above study and computed state-to-
state differential and integral reaction cross sections on the same PES. They
also reported rate coefficients for the exchange reaction for $v$ = 0, $j$ = 0
of HeH+ ranging from $7\times 10^{-11}$ to $3\times 10^{-10}$ cm${}^{3}$ molecule${}^{-1}$ s${}^{-1}$ over a temperature
range of 0-200 K. Yao (Yao, 2014) studied the dynamics of the HeD+ \+ He
exchange reaction. Unfortunately, none of these studies investigated the
anisotropy of the potential around HeH+ and purely inelastic vib-rotational
processes in (HeH+/ He) collisions.
Liang et al. (Liang _et al._ , 2012) had shown that the two PESs are nearly
identical; therefore, we shall restrict our present analysis to rotationally
inelastic calculations using the PS-PES (Panda and Sathyamurthy, 2003),
together with the calculations which make use of the newly computed RR-PES
discussed in the following Section.
The best available theoretical value for the dipole moment is $\mu$ = 1.66 D
as given by Pavanello _et al._ (2005) while the rotational constant is B =
33.526 cm${}^{-1}$ as quoted in Mueller _et al._ (2005). These are the values
employed in the present work.
We have also carried out a different set of $ab$ $initio$ calculations by
generating a new PES which is focused on the purely inelastic collisions
without considering the H+-exchange channel mentioned above. The target
molecule was therefore taken to be at a fixed internuclear distance given by
its equilibrium value (see below) and will be called a rigid-rotor (RR)
potential energy surface. The purely inelastic cross sections generated by
using this new PES will also be compared with those obtained from the dynamics
where the H+-exchange channels were also included when using the earlier PS-
PES. The following section will present in more details the features of both
$ab$ $initio$ calculations, while later on their dynamics will be discussed
and compared.
One of the reasons for the present study is linked to the notion that is of
direct interest to have quantitative information on the relative efficiency of
a variety of energy-changing processes involving the internal level structure
of the HeH+ polar cation when it is made to interact with other ”chemical”
partners considered to be present in a variety of interstellar environments.
Thus, it is important to know how possible partners like H and He neutral
atoms, or the free electrons, are affecting internal energy redistribution in
the molecular cation to make it a significant partner for the general cooling
channels deemed important following the recombination era (Galli and Palla,
2013). Hence, the above question is central to the conclusions of our present
investigation reported in the last Section.
Our present results will therefore be compared with those already available
for (HeH+/ H) (Desrousseaux and Lique, 2020) and for (HeH+/ e-) (Hamilton,
Faure, and Tennyson, 2016; Ayouz and Kokoouline, 2019) collisions leading to
rotational excitations of the molecular cation. As we shall show below, one of
the important findings of our present study is that the neutral helium atoms
turn out to be as efficient as, if not more than, hydrogen atoms in causing
rotational excitation in HeH+ and therefore their corresponding inelastic rate
coefficients should be included in the kinetics modeling the chemical
evolution in early universe and ISM environments.
The newly constructed $ab$ $initio$ PES for the rigid rotor HeH+-He
interaction is described in section II and compared there with the available
PS-PES. The methodology adopted for the investigation of the inelastic
processes is described in sections III and IV, while the computed results of
inelastic cross sections and rate coefficients are presented, discussed and
compared with available results involving other collision partners in section
V. Einstein spontaneous emission coefficients are also presented and discussed
in relation to generating critical density values under different early
universe conditions to estimate the relative importance of the collisional and
radiative decays, and the results are reported in section VI. A brief
description of the quantum rotational kinetics of HeH+ in a cold ion trap with
He as a buffer gas will be given in VII. A summary of our findings and their
importance for chemical network modelings of the cooling role of the title
cation will finally be presented in section VIII.
## II Ab initio interaction potentials
For the HeH+/He system, new and extensive ab initio calculations were carried
out in the present work using the MOLPRO suite of quantum chemistry codes: see
Werner _et al._ (2012, 2019).
Figure 1: 3D perspective plots of the two PESs for rigid rotor HeH+ ($r$ =
0.774 Å) interacting with He, and potential energy contours in ($R$, $\theta$)
space. Top panel: results from the RR-PES computed in this work; bottom panel:
the same rigid-rotor PES obtained from the reactive PS-PES discussed both
earlier and in this Section.
The HeH+ bond distance was kept fixed at 0.774 Å throughout the calculation of
the potential energy points, thereby producing the RR-PES. The post-Hartree-
Fock treatment was carried out using the CCSD(T) method as implemented in
MOLPRO (Hampel, Peterson, and Werner, 1992; Deegan and Knowles, 1994) and
complete basis set (CBS) extrapolation using the aug-cc-pVTZ, aug-cc-pVQZ and
aug-cc-pV5Z basis sets (Woon and Dunning Jr, 1993, 1994) was carried out. The
basis-set-superposition-error (BSSE) (Boys and Bernardi, 1970) was also
included for all the calculated points so that the full interaction was
obtained with the inclusion of the BSSE correction.
Figure 2: Energy spacings between the lower rotational levels of the present
cation (left) and steady-state distribution of the relative populations among
rotational levels as a function of temperature (right in figure).
The two dimensional (2D) RR-PES ($R$, $\theta$) was calculated using 76 points
from 1.0 to 10.0 Å along $R$ and 19 values from 0 to 180${}^{\circ}$ in $\theta$ for a
total of 1,444 grid points. We report in Figure 1 a pictorial representation
of the new RR-PES, given in 3D space and also projected below it as potential
energy contours, in the upper panel of the figure. The lower panel reports the
results for the same rigid-rotor two-dimensional reduction of the more
complete 3D reactive potential given by the PS-PES already discussed in the
previous Section. It is interesting to note that both interactions exhibit a
deep attractive well on the linear geometry forming the (He-H-He)+ complex, as
already discussed in much of the earlier work mentioned in the Introduction
and analysed in the more recent studies on the stable HeHHe+ molecule
(Fortenberry and Wiesenfeld, 2020; Stephan and Fortenberry, 2017).
This pictorial comparison of the two PESs that will be employed to generate
rotationally inelastic cross sections and rate coefficients in the next
Section clearly reveals that they are largely identical, a feature which will
be further discussed below. The He-end of the cation is located on the
positive side of the X coordinate in the figure panels.
It is also useful at this point to provide a pictorial view of the structural
properties of the HeH+ molecular ion in terms of the energy spacing between
its lower rotational levels and the resulting steady-state distributions of
their relative populations over a range of temperatures relevant for the
present discussion. They are reported in Figure 2 where one clearly sees how
only relatively few states would be populated under equilibrium conditions up
to 1000 K. These features will play a role when discussing the inelastic rate
coefficients from the quantum dynamics in the following Section.
The original data points of the new RR-PES calculations were used by
interpolation to generate a grid with 601 radial points (between 1 and 10 Å)
and 37 angular values (from 0 to 180${}^{\circ}$).
Figure 3: Multipolar expansion coefficients computed for the RR-PES
interaction of the present study and from the 2D-reduced representation of the
earlier PS-PES also employed in the present study. See main text for further
details.
For a more direct, and quantitative evaluation of the spatial anisotropy of
this PES, it is also useful to expand this extensive 2D grid for the fixed-
bond (RR) target geometry in terms of Legendre polynomials as given below:
$\textnormal{V}(R,\theta)=\sum_{\lambda}V_{\lambda}(R)P_{\lambda}(\cos\theta)$
(7)
We have initially obtained 25 multipolar coefficients for each of the involved
rigid-rotor PESs, although only the first 10 were actually included in the
scattering calculations discussed later since they turned out to be sufficient
to reach numerical convergence of the cross section values. The calculated
radial coefficients exhibited a root-mean-square error to the initial RR-PES
points of about 0.64 cm${}^{-1}$ along the radial range. The obtained radial
coefficients were interpolated during their usage in our in-house scattering
code (see below) and further extrapolated using the asymptotic representations
of the lower coefficients, thus ensuring that the overall interaction includes
asymptotically the leading dipole polarizability of the neutral atomic partner
plus the dipole-polarization terms:
$V_{lr}(R,\theta)\sim-\frac{\alpha_{0}}{2R^{4}}-\frac{2\alpha_{0}\mu}{R^{5}}\cos{\theta}$
(8)
where $\alpha_{0}$=1.41 a${}^{3}_{0}$ is the polarizability of the helium atom
and $\mu$ is the permanent dipole of the HeH+ partner reported earlier.
The resulting radial coefficients from the above expansion, for both the new
RR-PES and for the 2D-reduction of the fuller 3D PES that includes the
reactive channels and which we shall analyse in the next subsection, the PS-
PES, are compared in the panels of Figure 3, where only the first six of them
are actually shown.
Here we must mention specifically that Panda and Sathyamurthy(Panda and
Sathyamurthy, 2003) had reported a global analytical potential energy function
for the PS-PES. Therefore, keeping the equilibrium bond distance ($r_{e}$) of
HeH+ fixed at 0.774 Å, we compute $V_{\lambda}(R)$ numerically by integrating
$\textnormal{V}(R,r_{e},\theta)$ over $\theta$:
$\textnormal{V${}_{\lambda}$}(R)=\frac{(2\lambda+1)}{2}\int_{-1}^{1}V(R,r_{e},\theta)P_{\lambda}(\cos\theta)\,d(\cos\theta)$
(9)
One clearly sees from that comparison that, in terms of the 2D description
which will be employed to calculate the purely rotationally inelastic dynamics
in the present work, the two potential functions behave very closely and are
therefore likely to yield very similar inelastic cross sections and
corresponding inelastic rate coefficients as will be discussed in detail in
the following Sections.
## III Rotational Inelastic dynamics on the RR-PES
We briefly report below the computational method employed in this work to
obtain purely rotationally inelastic cross sections and rate coefficients for
the scattering of HeH+ with He using the two-dimensional RR-PES discussed
in the preceding subsection. The standard time-independent formulation of the
Coupled-Channel (CC) approach to quantum scattering has been known for many
years already (see for example Taylor (2006) for a general text-book
formulation) while the more recent literature on the actual computational
methods has been also very large. For a selected set of references over the
more recent years see for instance refs. Arthurs and Dalgarno, 1960; Secrest,
1979; Kouri and Hoffman, 1997; Hutson, 1994; Gianturco, 1979. However, since
we have already discussed our specific computational methodology in many of
our earlier publications (Martinazzo, Bodo, and Gianturco, 2003; López-Durán,
Bodo, and Gianturco, 2008; González-Sánchez _et al._ , 2015), only a short
outline of our approach will be given in the present discussion.
For the case where no chemical modifications are induced in the molecule by
the impinging projectile, the total scattering wave function can be expanded
in terms of asymptotic target rotational eigenfunctions (within the rigid
rotor approximation) which are taken to be spherical harmonics and whose
eigenvalues are given by $Bj(j+1)$, where $B$ is the rotational constant for
the closed-shell HeH+ ion mentioned already in the previous Section: 33.526
cm${}^{-1}$ and $j$ is the rotational quantum number. The channel components for the
CC equations are therefore expanded into products of total angular momentum
$J$ eigenfunctions and of radial functions to be determined via the solutions
of the CC equations (Martinazzo, Bodo, and Gianturco, 2003; López-Durán, Bodo,
and Gianturco, 2008) i.e. the familiar set of coupled, second order
homogeneous differential equations:
$\left(\frac{d^{2}}{dR^{2}}+\mathbf{K}^{2}-\mathbf{V}-\frac{\mathbf{l}^{2}}{R^{2}}\right)\mathbf{\psi}^{J}=0.$
(10)
In the above Coupled Equations, the $\mathbf{K}^{2}$ matrix contains the
wavevector values for all the coupled channels of the problem and the
$\mathbf{V}$ matrix contains the full matrix of the anisotropic coupling
potential. The required scattering observables are obtained in the asymptotic
region where the Log-Derivative matrix has a known form in terms of free-
particle solutions and unknown mixing coefficients. Therefore, at the end of
the propagation one can use the Log-Derivative matrix to obtain the K-matrix
by solving the following linear system:
$(\mathbf{N}^{\prime}-\mathbf{Y}\mathbf{N})\mathbf{K}=\mathbf{J}^{\prime}-\mathbf{Y}\mathbf{J}$
(11)
where the prime signs indicate radial derivatives, $\mathbf{J}(R)$ and
$\mathbf{N}(R)$ are matrices of Riccati-Bessel and Riccati-Neumann functions.
(López-Durán, Bodo, and Gianturco, 2008) The matrix $\mathbf{Y}(R)$ collects
the eigensolutions along the radial region of interest, out of which the Log
derivative matrix is then constructed. (López-Durán, Bodo, and Gianturco,
2008) From the K-matrix produced by solving the coupled radial equations the
S-matrix is then easily obtained and from it the state-to-state cross
sections. (López-Durán, Bodo, and Gianturco, 2008) We have already published
an algorithm that modifies the variable phase approach to solve that problem,
specifically addressing the latter point and we defer the interested reader to
that reference for further details. (Martinazzo, Bodo, and Gianturco, 2003;
López-Durán, Bodo, and Gianturco, 2008)
In the present calculations we have generated a broad range of state-to-state
rotationally inelastic cross sections. The number of rotational states coupled
within the dynamics was up to $j$ = 15 and the expansion over the $J$ values
to converge the individual cross sections went up to $J$ = 100 at the highest
energies. The radial range of integration during the propagation of the
coupled equations covered radial values from 1.0 to 1000.0 Å using a variable
number of points which went up to 5000. The range of $E_{\textrm{trans}}$
went from $10^{-4}$ cm${}^{-1}$ to $10^{4}$ cm${}^{-1}$ with 1500-2000 points for each considered
transition.
Once the state-to-state inelastic integral cross sections
($\sigma_{j\rightarrow j^{\prime}}$) are known, the rotationally inelastic
rate coefficients $k_{j\rightarrow j^{\prime}}(T)$ can be evaluated as the
convolution of the cross sections over a Boltzmann distribution of the
$E_{\textrm{trans}}$ values:
$k_{j\rightarrow j^{\prime}}(T)=\left(\frac{8}{\pi\mu k_{\text{B}}^{3}T^{3}}\right)^{1/2}\int_{0}^{\infty}E_{\textrm{trans}}\,\sigma_{j\rightarrow j^{\prime}}(E_{\textrm{trans}})\,e^{-E_{\textrm{trans}}/k_{\text{B}}T}\,dE_{\textrm{trans}}$
(12)
The reduced mass value for the HeH+/He system was taken to be 2.224975 a.u.
The individual rate coefficients were obtained at intervals of 1K, starting
from 5K and going up to 500K. The interplay between the changes in the reduced
mass values, appearing in the denominator in the equation above, and the
structural strength within their corresponding PES will be further discussed
in the later Sections when the dynamical outcomes will be analysed.
## IV Quantum Dynamics using both the RR-PES and the PS-PES
Since the full 3D PS-PES discussed above allows for non-rigid rotor
interaction as well as the H+-exchange reaction, it is interesting at this
point to evaluate the relative flux distributions going into the reactive and
purely inelastic channels, so that a comparison can be made with the results
produced by using the new RR-PES interaction. Hence, the (HeH+/He) dynamics
was first investigated using the ABC code (Skouteris, Castillo, and
Manolopoulos, 2000) for $v$ = 0, $j$ = 0 of HeH+ over a range of
$E_{\textrm{trans}}$ and including the presence of the H+-exchange reactive
channels. It is important to mention the relevant code parameters used in our
investigations, which cover a range of total energy (0.197 - 0.746 eV), with
the maximum energy (Emax) = 2.15 eV in any channel and a maximum (Jmax) of 11
rotational states. We have considered the maximum of hyperradius (Rmax) to be
24 $a_{0}$, with the number of log derivative propagation sectors (MTR) = 250,
the total angular momentum (Jtotmax) going up to 100 and the helicity
truncation parameter (Kmax) = 1.
From the same set of calculations we have extracted the flux components going
into the purely rotationally inelastic channels and we will discuss them in
comparison with the results for the same processes obtained using the RR-PES
described in the previous Section.
The purely inelastic transition probability, as well as the H+-exchange
reaction probability, has been computed for the $j$ = 0 $\rightarrow$ $j^{\prime}$ = 1 process as an example of their behaviour. They turned
out, as expected, to markedly change as a function of $E_{\textrm{trans}}$ and
also of the contributing $J$. The tests of their convergence behaviour have
been reported in Figure S1 in the Supplementary Material, where it is also
shown that the oscillations which we have mentioned in the previous Section
become markedly quenched when the partial cross sections and integral cross
sections are plotted as a function of $E_{\textrm{trans}}$.
We now report in Figure 4 the computed cross sections for the H+-exchange
reactions with $j^{\prime}$ $>$ 0: they rapidly rise starting from the same
threshold (67 cm${}^{-1}$) at which the inelastic channels open, while leveling off
quickly with the increasing of the relative $E_{\textrm{trans}}$.
Figure 4: Integral inelastic cross section for (a) the H+-exchange reaction
leading to ${j^{\prime}>}$ 0, for ${v^{\prime}}$ = 0 and ${j^{\prime}\geq}$ 0,
for ${v^{\prime}}>$ 0, (b) the purely rotationally inelastic process
(${j^{\prime}>}$ 0) and (c) the sum of the two processes as a function of
$E_{\textrm{trans}}$ for HeH+ ($v$ = 0, $j$ = 0) + He collisions.
The purely rotationally inelastic cross sections reported in Figure 4 turn out
to be significantly larger in magnitude than those obtained for the
H+-exchange channel. This may come as a surprise initially as the dynamics of
the (HeH+, He) system has a deep potential well of 0.578 eV and the dynamical
outcomes are expected to be statistical (probability of exchange reaction
$\sim$ probability of inelastic events $\sim$ 0.5). However, it is known that
the mere presence of a deep potential well does not necessarily lead to
statistical outcomes. Although there have been studies in the past (Smith,
1976) on the role of exchange reactions in vibrationally inelastic collisions
in systems like (Cl, HCl), we are not aware of any such studies on the role of
exchange reaction in rotationally inelastic scattering processes. Since the
two helium atoms involved in the collision are indistinguishable, it follows
that the actual observable would be a sum of the cross section values for the
exchange and the inelastic processes. Therefore, when discussing the inelastic processes in comparison with the data from the RR-PES, we shall add the H+-exchange channel (${j^{\prime}>}$ 0, for ${v^{\prime}}$ = 0) to the inelastic channels obtained from the PS-PES, since the rigid-rotor treatment does not include the H+-exchange option within its dynamics of a purely rigid molecular partner. The possible differences between the final rates obtained with the two methods are indeed among the interesting results of the present study.
Figure 5: Rotational excitation cross sections for $\Delta j$ = 1 and for
$\Delta j$ = 2 transitions for HeH+ ($j$ = 0-3) collisions with He. The
results in panels a) and b) are obtained via the PS-PES using the 3D
interaction within the reactive code ABC as discussed in the main text. The
purely rotational inelastic results in panels c) and d) were obtained via the
newly constructed $ab$ $initio$ RR-PES using the present 2D interaction within
the non-reactive code ASPIN also discussed in the main text. The energy range
of $E_{\textrm{trans}}$ values is the same in all four panels. The energy
position for the openings of the first excited vibrational level is
pictorially marked in panels a) and b) to better clarify the relative
energetics.
The state-to-state inelastic cross sections involving different rotational
states are given in Figure 5, where we show the results for different $j$
(initial) and $j^{\prime}$ (final) states. The two upper panels in that Figure
report the cross section results obtained using the ABC code and the PS-PES
discussed in the previous Section, while the two lower panels report the same
results when using the RR-PES of this work, also described in the previous
Section, which follows the rigid rotor quantum dynamics.
We can see that the energy threshold for the different inelastic channels
increases with an increase in $j^{\prime}$, as is to be expected from the
behaviour of the multipolar anisotropic coefficients discussed earlier.
Furthermore, the magnitude of the cross sections decreases with an increase in
the energy gap between $j$ and $j^{\prime}$, as has been found for similar
systems in many of our earlier studies (see, for example, refs. Martinazzo,
Bodo, and Gianturco, 2003; López-Durán, Bodo, and Gianturco, 2008). The
following additional comments can be made from a comparison of the two sets of
calculations for the rotationally inelastic channels reported in Figure 5:
$(i)$ the different inelastic cross sections produced by the two methods, which employ different but largely similar PESs as shown in the previous Section, turn out to be fairly close to each other in size and to exhibit very similar energy dependence over the observed range of energies. The inclusion of the H+-exchange channels is therefore shown to make a fairly minor difference to the size of the final cross sections, apart from the energy regions just above the thresholds;
$(ii)$ the purely inelastic cross sections (lower two panels) show a marked series of resonant structures in the energy range up to about 500 cm-1. Such resonant structures are also reproduced with similar features by the calculations using the PS-PES interaction (upper two panels), where the inelastic flux also includes the H+-exchange contribution as discussed earlier. These similarities reflect the comparable strength of the anisotropic multipolar coefficients of the two PESs, as exhibited by the comparison given in Figure 3, while indicating the fairly minor role of the exchange channels with respect to the purely inelastic ones;
$(iii)$ the dominance of the anisotropy coupling induced by the $V_{\lambda}$ = 1 multipolar coefficient is also visible when comparing the cross sections in panels a and c with those reported in panels b and d: the former are all larger than the latter over the whole energy range considered;
$(iv)$ as a general conclusion, we can say from the data in the panels of the
above figure that both sets of calculations confirm the similarities between
the anisotropy of the two PESs, and the fairly minor role played by the
explicit inclusion of the H+-exchange channels vs the rigid rotor dynamics. We
further see that all inelastic cross sections turn out to be rather large,
indicating that the strong angular coupling terms in the potentials overcome
the effects from having rather large energy gaps between the involved
rotational states, as shown in the previous Figure 2.
To make sure that our conclusions do not depend on the use of the two-dimensional (2D) rigid rotor (RR) model or on the choice of PES (RR-PES or PS-PES), we have carried out 2D calculations using the PS-PES and the inelastic scattering code (ASPIN). The results are shown in Table 1. Considering the fact that two different PESs and two different dynamical models are used, the agreement between the results for the different $\Delta j$ transitions can be considered excellent. The propensity for different $\Delta j$ transitions is also reflected very well in the two different models and the two different PESs used.
Table 1: Comparison of representative results of inelastic cross section values (in $\AA^{2}$ units) for HeH+($j$ = 0) + He collisions, obtained using the 2D (RR) model and the two PESs at Etrans = 1000 cm-1.

 | 2D RR-PES | 2D(RR) PS-PES | 3D PS-PES
---|---|---|---
$j^{\prime}=1$ | 14.90 | 18.14 | 15.94
$j^{\prime}=2$ | 7.47 | 7.42 | 6.90
$j^{\prime}=3$ | 4.10 | 4.79 | 5.29
In conclusion, the present calculations unequivocally indicate that the
present polar cation can efficiently exchange rotational energy by collisions
with the He atoms present in the ISM environment and therefore, as we shall
further show below, can radiatively dissipate in that same environment the
excess internal energy stored during those collisions.
In order to extend the present comparison to the inelastic rate coefficients,
and also to compare our findings for the He partner with what has been found
with other partners that transfer energy by collisions to the cation
rotational states, we present further results in the following Section where
the rate constants are also presented.
## V Comparing Helium-, Hydrogen- and electron-driven dynamics
To assess the relative importance of state-changing processes induced by He, we compare their dynamical outcomes first with recently reported rate coefficients ($k$'s) for rotational energy transfer in HeH+ by collision with neutral H, another important component of the Interstellar Medium.
Figure 6: Temperature variation of the rate coefficients for rotational
excitation of HeH+ with $\Delta j$ = 1 (upper four panels) and for $\Delta
j$=2 (lower four panels) obtained for HeH+ collisions with He, using the PS-
PES and the RR-PES of the present work. We include for comparison the earlier
results from the work of Desrousseaux and Lique (2020) on HeH+ ($v$ = 0, $j$ =
0, 1, 2 and 3) in collision with H atoms.
We have taken these rate coefficients as a function of temperature ($T$) from
the earlier calculations by Desrousseaux and Lique (2020), and have plotted
our own computed rate coefficients for $\Delta j$ = 1 and 2 transitions for a
direct comparison in Figure 6.
Figure 7: Rate coefficients for rotational excitation of HeH+ ($v$ = 0, $j$ = 0) going to HeH+($v^{\prime}$ = 0 and $j^{\prime}$ $\neq$ 0) in collision with He. The present data are from the quantum dynamics on the PS-PES and the RR-PES discussed in the main text and are shown for three different temperatures. Included for comparison are results from the earlier work of Desrousseaux and Lique (2020) for HeH+($v$ = 0, $j$ = 0) in collision with neutral H.
It is clear from the four upper panels of the Figure 6, which present the
$\Delta j$ = 1 transitions, that the $k$ values for $j$ = 0 and 1 obtained
using the rigid rotor dynamics on the RR-PES are fairly similar, and
comparable in size and behaviour, to those we have obtained via the 3D
reactive dynamics using the PS-PES. The panels reporting additional
transitions from higher initial $j$ states also confirm this similarity of
rate coefficient values and of temperature dependence for the calculations via
the two present PESs. Interestingly, we also note that the purely rotationally
inelastic results using the RR-PES actually yield slightly larger rate
coefficients over the entire temperature ($T$) range than those obtained from
the 3D reactive dynamics with the PS-PES. This is in spite of the fact that
the latter results also include the contributions from the experimentally
indistinguishable H+-exchange route to the inelastic processes. Furthermore,
the observables from either of the presently employed PES are clearly very
similar in size to those obtained from collisions of the present cation with
neutral H atoms, also reported in the same panels.
The results obtained for the state-changing processes with $\Delta j$ = 2
transitions, presented in the four lower panels of the same figure, show that
all state-to-state rate coefficients are uniformly smaller than those
involving $\Delta j$ = 1, as would be expected. Here again we see that all the
inelastic rate coefficients for He turn out to be similar in size, albeit
uniformly slightly larger than those where H is the collisional partner, thus
underscoring the fact that one should consider both atomic projectiles when
modeling the kinetics of energy transfer paths involving the present cation.
In other words, the interesting new result from the present study is clearly
the fact that He atoms should be considered at the same level of importance as
the H atoms when collisionally exciting rotational states of the title
molecule.
Another interesting way of looking at the relative efficiency of such
inelastic rates is shown by the panels of Figure 7. We see there that both our
sets of results for the He partner show again very similar excitation rate
coefficients to those reported earlier by Desrousseaux and Lique (2020) for
the H partner. The data shown are for the excitations from the $j$ = 0 state
going to $j^{\prime}$ =1 through 5 over the range of $T$ (0-500 K): the three
panels present three intermediate $T$-values as examples. The efficiency of
the rotational excitation by collisions with H are, in any case, of the same
order of magnitude as those we have found here for the He collision partner.
We also note that the rate coefficients for excitations ending in increasingly higher final $j^{\prime}$ states become dramatically smaller: the large energy gaps between the states of the ionic rotor ensure that their size decreases rapidly as the gap increases. We also see that moving to higher temperatures makes the excitation rates markedly larger for the same types of excitation. That the decline of the rate coefficient values with increasing energy gap is more marked at the lower temperatures turns out to be a common feature for both atomic partners and for all calculations involving the present molecular system.
Figure 8: Temperature variation of the rate coefficient for the $j$ = 0 to
$j^{\prime}$ = 1 inelastic process on the PS-PES and on the RR-PES for HeH+/He
collisions at low temperatures. Included for comparison are results from the
work of Desrousseaux and Lique (2020) for HeH+($v$ = 0, $j$ = 0) in collision
with H. See main text for additional details.
A further analysis is presented for a range of low temperatures up to 50 K in
Figure 8. We see that the values of the rate constants obtained for the (HeH+/He) inelastic collisions, using both the RR-PES and the 3D reactive dynamics via the PS-PES, are close to each other in size and behaviour over that range of temperatures, with the data from the PS-PES being slightly larger than those obtained via the RR-PES. We know, however (see Figure 9), that from around 100 K and up to 500 K the rate coefficients calculated via the RR-PES become larger than those produced via the PS-PES, despite the former PES not including the H+-exchange channel. The rate coefficients obtained via the two PESs of the present work show, in any case, fairly large values: they are clearly larger than those reported by Desrousseaux and Lique (2020) for the collisions involving the H atom, computed either with the same rigid rotor dynamics of the present work or with a dynamical treatment where the H2-formation channel is present. Therefore, we can safely
say that the rigid-rotor type of rotationally inelastic dynamics produces
somewhat larger inelastic rate coefficients in comparison to when reactive
channels are also considered in the dynamics. In any event, the latter
reactive flux is marginal in comparison with the size of the inelastic rate
coefficients. Additionally, we find that the efficiency of the rotational
excitation of the cation by He is larger than that obtained from collisions
with the H atom in the same environmental conditions (Desrousseaux and Lique, 2020). We also note that older results for the H atom as a collision partner, from Kulinich _et al._ (2019) for the purely rotational excitation channels, were shown for comparison in the earlier study by Desrousseaux and Lique (2020) and turned out to be markedly larger than the latter's values. That difference was there attributed to the lower quality of the PES employed in the earlier work. We have therefore omitted those data from the present analysis and suggest that the rate coefficients from the more accurate PES should preferably be taken into consideration for comparative studies.
Figure 9: Comparison of rate coefficients for $\Delta j$ = 1 and $\Delta j$ =2
transitions using either the PS-PES (lines with crosses) or the RR-PES (green
curves with x signs) for HeH+ ($j$ = 0) in collision with He. We also report
the same type of rate coefficients of Desrousseaux and Lique (2020) for HeH+
in collision with H and also those obtained by Hamilton, Faure, and Tennyson
(2016) for HeH+ in collision with free electrons.

Figure 10: Comparison of the rate coefficients for rotationally inelastic (HeH+($j$ = 0)/He) collisions summed over all state-to-state excitations into all $j^{\prime}$ final states. The calculations were performed via the RR-PES and the PS-PES discussed in the main text. Present results are compared with the same results reported earlier by Desrousseaux and Lique (2020) for HeH+/H collisions.
The present results dealing with the (HeH+/He) system are further compared with those reported by Hamilton, Faure, and Tennyson (2016), which involved HeH+/$e^{-}$ collisions in the $T$ range of 100-500 K. The comparison is shown in Figure 9 together with the results for the neutral H partner. We can see clearly that the electron as a projectile is able to produce rotational excitation rate coefficients which are about 3-4 orders of magnitude larger than those caused by collisions with either He or H. Consistent with our earlier observations (vide supra), the rates for the $\Delta j$ = 1 transition from $j$ = 0 are significantly larger than those starting from $j$ = 1. The relative efficiency of the individual collision processes naturally needs to be correctly weighted with the different densities estimated for these three projectiles, so their actual role within kinetic evolutionary models requires the further evaluation of relative densities, as we shall briefly discuss in the following Section for the case of He.
A further, perhaps more global, indicator is obtained by summing the purely rotational inelastic rate coefficients for the individual state-to-state excitation processes from the $j$ = 0 initial level over all the open excited levels, as a function of temperature. The results are shown in Figure 10. The present calculations for the He partner are given for the two different PESs employed here, while the results for the H atom are marked as D&L and are from ref. Desrousseaux and Lique, 2020. The very similar excitation efficiency shown by both partners is clearly evident in Fig. 10.
## VI Einstein coefficients and critical densities
Another important process for the decay of the internal rotational states of
the ions in the astrophysical environments is their interaction with the
surrounding radiation field. The transition rates from an excited state $k$
can be written as a sum of stimulated and spontaneous emission rates as (Brown
and Carrington, 2003)
$\kappa^{em}_{k\to i}=\kappa^{sti}_{k\to i}+\kappa_{k\to i}^{spo}=A_{k\to
i}(1+\eta_{\gamma}(\nu,T))$ (13)
where $A_{k\to i}$ is the Einstein coefficient for spontaneous emission and $\eta_{\gamma}(\nu,T)=(e^{(h\nu/k_{B}T)}-1)^{-1}$ is the Bose-Einstein photon occupation number.
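For the reader's convenience, a minimal sketch (our own) of the total emission rate of eq. (13), including the Bose-Einstein enhancement factor, is given below.

```python
import numpy as np

H = 6.62607015e-34   # Planck constant, J s
KB = 1.380649e-23    # Boltzmann constant, J/K

def emission_rate(A, nu, T):
    """Total emission rate of eq. (13): A * (1 + eta_gamma(nu, T)),
    with eta_gamma the Bose-Einstein photon occupation number.
    A  : Einstein spontaneous-emission coefficient (s^-1)
    nu : transition frequency (Hz)
    T  : radiation-field temperature (K)
    """
    eta = 1.0 / np.expm1(H * nu / (KB * T))
    return A * (1.0 + eta)
```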
The Einstein coefficient for dipole transitions is given as (Brown and
Carrington, 2003)
$A_{k\rightarrow i}=\frac{2}{3}\frac{\omega_{k\rightarrow
i}^{3}S_{k\rightarrow i}}{\epsilon_{0}c^{3}h(2j_{k}+1)}$ (14)
where $\omega_{k\to i}\approx 2B_{0}(j_{i}+1)$ is the transition's angular frequency and $S_{k\rightarrow i}$ is the line strength. For pure rotational transitions, eq. 14 simplifies to
$A_{k\rightarrow i}=\frac{2}{3}\frac{\omega_{k\rightarrow
i}^{3}}{\epsilon_{0}c^{3}h}\mu_{0}^{2}\frac{j_{k}}{(2j_{k}+1)}$ (15)
where $\mu_{0}$ is the permanent electric dipole moment of the molecule. In the present case the calculated value of 1.66 D has been employed (Dabrowski and Herzberg, 1977).
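Since eq. 15 is fully determined by $B_{0}$ and $\mu_{0}$, a short numerical sketch (ours, in the rigid-rotor limit, with transition wavenumber $2B_{0}j$ for the $j\to j-1$ line) can be used to reproduce the 4HeH+ column of Table 2 below.

```python
import numpy as np

H = 6.62607015e-34        # Planck constant, J s
C = 2.99792458e8          # speed of light, m/s
EPS0 = 8.8541878128e-12   # vacuum permittivity, F/m
DEBYE = 3.33564e-30       # 1 Debye in C m

def einstein_A(j_up, B0_cm, mu_D):
    """Eq. 15 for the pure rotational transition j_up -> j_up - 1 of a
    linear rotor; B0_cm in cm^-1 is converted to an angular frequency."""
    omega = 2.0 * np.pi * C * (2.0 * B0_cm * 100.0 * j_up)  # rad/s
    mu = mu_D * DEBYE
    return (2.0 / 3.0) * omega**3 * mu**2 * j_up / (
        EPS0 * C**3 * H * (2 * j_up + 1))

for j in range(1, 6):
    print(f"A({j} -> {j - 1}) = {einstein_A(j, 33.526, 1.66):.3e} s^-1")
# A(1 -> 0) evaluates to about 8.7e-2 s^-1, consistent with Table 2.
```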
Table 2: Computed Einstein spontaneous emission coefficients $A_{j\to j^{\prime}}$ for 4HeH+ ($B_{0}$ = 33.526 cm-1, $\mu$ = 1.66 D), HD+ ($B_{e}$ = 22.5 cm-1, $\mu$ = 0.87 D) and C2H- ($B_{e}$ = 1.389 cm-1, $\mu$ = 3.09 D). All quantities in units of s-1. The data for HD+ and C2H- are taken from ref. Mant _et al._ , 2020.

Transition | 4HeH+ | HD+ | C2H-
---|---|---|---
$1\to 0$ | 8.68$\times 10^{-2}$ | 7.2$\times 10^{-3}$ | 2.14$\times 10^{-5}$
$2\to 1$ | 8.33$\times 10^{-1}$ | 6.9$\times 10^{-2}$ | 2.05$\times 10^{-4}$
$3\to 2$ | 30.14$\times 10^{-1}$ | 2.5$\times 10^{-1}$ | 7.43$\times 10^{-4}$
$4\to 3$ | 74.09$\times 10^{-1}$ | 6.0$\times 10^{-1}$ | 1.83$\times 10^{-3}$
$5\to 4$ | 1.479$\times 10^{+1}$ | 1.2$\times 10^{0}$ | 3.65$\times 10^{-3}$
A sample of the present results is collected in the Table 2, where other
diatomic systems of interest in the ISM environment are also reported for
comparison. Their properties are all taken from ref. Mant _et al._ , 2020.
One should also note here that earlier calculations of the fuller range of rovibrational coefficients were presented by Engel et al. (Engel _et al._ , 2005). Their results are close to ours, differing at most by 10-15%. Such differences are chiefly due to the different potential curves employed in the calculations and do not substantially change our following
discussion. The present system, due to the large energy separations between
rotational states, produces by far the largest values for the Einstein rate
coefficients of spontaneous emission in the comparison shown in our Table 2.
As we shall discuss further below, such large differences play an important
role in the radiative dissipation of its internal energy in the low-density
astrophysical environments of interest for our discussion.
The local thermodynamic equilibrium (LTE) assumption holds whenever the population of the excited levels is given by Boltzmann's law. This happens when the rate of spontaneous emission is significantly smaller than the rate of collisional de-excitation and can therefore be neglected, i.e. when the gas density is significantly larger than some critical value, so that the LTE assumption remains valid. This analysis is made more specific by the concept of critical density, as recently discussed, for instance, in Lara-Moreno, Stoecklin, and Halvick (2019). The critical density is defined as:
$n_{\text{crit}}^{i}(T)=\frac{A_{ij}}{\sum_{j\neq i}k_{ij}(T)}$ (16)
where the critical density for the $i$th rotational level is obtained by giving equal weight to the effects of the collision-induced and the spontaneous emission processes. We have taken the rate coefficients discussed in the previous Sections, computed using the RR-PES for both the excitation and de-excitation processes and populating by collisions rotational levels up to $j$ = 9. We have also employed the computed spontaneous-decay Einstein coefficients discussed here, samples of which are given in Table 2 above. We report the results of the present evaluations in Figure 11, where the obtained critical density values are given along the Y-axis and the range of $T$ considered is along the X-axis. We clearly see there that the large values obtained for the critical densities in this case are mainly controlled by the very large spontaneous radiative coefficients discussed before.
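As a simple illustration of eq. 16, the sketch below evaluates a critical density from an $A$ coefficient of Table 2 and a placeholder collisional rate of the correct order of magnitude for the present system (about $10^{-10}$ cm3 s-1); the numbers are indicative only, not our computed rates.

```python
def critical_density(A_out, k_out):
    """Eq. 16: n_crit for level i as the ratio between the total
    spontaneous-emission rate out of i (s^-1) and the sum of the
    collisional rate coefficients out of i (cm^3 s^-1)."""
    return sum(A_out) / sum(k_out)

# Order-of-magnitude illustration for j = 1 of HeH+: A(1 -> 0) from
# Table 2 and a placeholder collisional rate of ~1e-10 cm^3 s^-1.
print(f"{critical_density([8.68e-2], [1.0e-10]):.1e} cm^-3")  # ~1e9 cm^-3
```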
Figure 11: Computed critical densities for the HeH+/He system, as defined in eq. 16, over a range of temperatures from 5 K up to 500 K. See main text for further details.
From the variety of estimated baryon densities in the early universe environments discussed earlier, we have already mentioned that the baryon density $n_{b}$ scales with the redshift value $z$ as $(1+z)^{3}$ (Galli and Palla, 2013). Hence, for values of $z$ varying between 200 and 5000, the corresponding $n_{b}$ values vary between about $10^{-1}$ cm-3 and about $10^{3}$ cm-3. Therefore, the critical densities associated with all the rotational levels we are considering are higher than these estimated baryon densities. This means that the populated states should not be considered to be under LTE conditions, since the critical density values are all large enough to allow the molecules to radiate before they can collisionally de-excite. Under such conditions, therefore, an accurate knowledge of the collision-driven rates calculated in our work becomes important, since the LTE conditions cannot be reliably assumed. More specifically, our present
modeling of the critical densities deals chiefly with those associated with the He atoms. However, since the densities of the latter partner are not very far from those of the H atom (about 90% for H and about 10% for He), our comparison between the two species, in relation to the more general baryon densities we are reporting, is still significant. It indicates that radiative decay of the rotationally excited states of HeH+ is going to be by far the dominant channel for internal energy release in the above environments.
To have shown that collisions can efficiently populate those excited
rotational levels, which in turn rapidly dissipate their stored energy by
radiative emission, is therefore a significant result from the present
calculations.
## VII Rotational Relaxation Kinetics in Ion Traps
Since the data discussed in the previous Sections indicate that the
collisional state-changing processes induced by He atoms among the rotational
states of the HeH+ cation are rather efficient processes, it is also
interesting to further analyse the role of such new inelastic rates in an
entirely different physical situation, i.e. that associated with the trapping
of the present cation under the much denser conditions of a cold ion trap
where the He partner plays the role of the buffer gas. We have studied such
processes many times in a variety of small molecular cations, so we shall not
repeat the general discussion here but simply refer to those earlier
publications Hernández Vera _et al._ (2017); González-Sánchez, Wester, and
Gianturco (2018a); Gianturco _et al._ (2018); Schiller _et al._ (2017),
while only a brief outline of the theory will be given below. In fact, given the information we have obtained from the calculations, we are now in a position to follow the microscopic evolution of the cation's rotational state populations in a cold ion trap environment by setting up the corresponding rate equations describing such an evolution, induced by collisional energy transfers with the uploaded He gas, as described in various of our earlier studies González-Sánchez, Wester, and Gianturco (2018b):
$\frac{d\mathbf{p}}{dt}=n_{He}\mathbf{k}(T)\cdot\mathbf{p}(t)$ (17)
where the quantity $n_{He}$ indicates the density of the buffer gas loaded into the trap, with the He partner this time playing the role of the collisional coolant. The vector $\mathbf{p}(t)$ contains the time-evolving fractional populations of the ion partner's rotational states, p${}_{j}(t)$, starting from the initial distribution at t = tinitial, and the matrix $\mathbf{k}(T)$ contains the individual k${}_{i\rightarrow j}(T)$ rate coefficients at the temperature of the trap's conditions. Both the p(tinitial) values and the collisional temperature T of the trap, corresponding to the mean collision energy between the partners, are quantities to be specifically selected in each computational run and will be discussed in detail in the modelling examples
presented below. In the present study we shall for the moment disregard the state-changing rates due to spontaneous radiative processes in the trap: contrary to what happens under the conditions described in the previous Section, the critical densities shown in Figure 11 are smaller than the more common buffer-gas densities selected in the traps, where collisions therefore control the populations, as we shall show below. Radiative processes are thus not expected to have a significant effect under the more usual trap conditions Hernández Vera _et al._ (2017).
We have chosen the initial rotational temperature of the trap’s ions to be at
400 K, so that the vector’s components at t=tinitial are given by a Boltzmann
distribution at that chosen temperature. This was done in order to follow the
kinetics evolution over an extended range of time and also test the physical
reliability of our computed state-changing collision rate constants.
If the rate coefficients of the $\mathbf{k}(T)$ matrix satisfy the detailed
balance between state-changing transitions, then as t$\rightarrow\infty$ the
initial Boltzmann distribution will approach that of the effective equilibrium
temperature of the uploaded buffer gas as felt by the ions in the trap. These
asymptotic solutions correspond to the steady-state conditions and can be
obtained by solving the corresponding homogeneous form of eq. 17 given as:
$d\mathbf{p}(t)/dt=0$. We solved the homogeneous equations by using the
singular-value decomposition technique (SVD) Schiller _et al._ (2017),
already employed by us in previous studies. The non-homogeneous equations 17, starting from the initial 400 K Boltzmann distribution at t = tinitial, were solved using the Runge-Kutta method for different translational temperatures of the trap Schiller _et al._ (2017); González-Sánchez, Wester, and Gianturco (2018b).
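A minimal sketch of this propagation step, assuming SciPy is available and that the rate matrix is supplied externally (with its diagonal carrying the negative total loss rates, so that the total population is conserved), could look as follows; the function names are our own.

```python
import numpy as np
from scipy.integrate import solve_ivp

def evolve_populations(K, n_he, p0, t_end):
    """Integrate eq. 17, dp/dt = n_He * K(T) @ p(t), with a Runge-Kutta
    method.

    K     : matrix of rate coefficients (cm^3 s^-1); K[j, i] = k_{i->j}
            for j != i and K[i, i] = -sum_j k_{i->j}, so columns sum to 0
    n_he  : buffer-gas density (cm^-3)
    p0    : initial fractional populations (e.g. a 400 K Boltzmann vector)
    t_end : final integration time (s)
    """
    rhs = lambda t, p: n_he * (K @ p)
    sol = solve_ivp(rhs, (0.0, t_end), p0, method="RK45", rtol=1e-8)
    return sol.t, sol.y

def boltzmann(E_rot, T):
    """Normalized Boltzmann populations for rotational energies E_rot,
    given in kelvin, with the (2j+1) rotational degeneracies."""
    j = np.arange(len(E_rot))
    w = (2 * j + 1) * np.exp(-np.asarray(E_rot) / T)
    return w / w.sum()
```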
Another useful indicator which could be extracted from the present
calculations is the definition of a characteristic time, $\tau$, which can be
defined as:
$\left\langle E_{rot}\right\rangle(\tau)-\left\langle E_{rot}\right\rangle(t=\infty)=\frac{1}{e}\left(\left\langle E_{rot}\right\rangle(t=0)-\left\langle E_{rot}\right\rangle(t=\infty)\right)$ (18)
The quantity $\left\langle E_{rot}\right\rangle$ represents the level-averaged rotational internal energy of the molecule in the trap after the characteristic time interval $\tau$ defined by equation 18. It obviously depends on the physical collision frequency and therefore on the $n_{He}$ value present in the trap Schiller _et al._ (2017); González-Sánchez, Wester, and Gianturco (2018b).
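Operationally, $\tau$ can be read off directly from the computed relaxation curve; the helper below is our own sketch, assuming a monotonically decaying $\left\langle E_{rot}\right\rangle(t)$ sampled densely enough in time.

```python
import numpy as np

def characteristic_time(t, E_rot):
    """Eq. 18: return the first time tau at which
    <E_rot>(tau) - <E_rot>(inf) = (1/e) * (<E_rot>(0) - <E_rot>(inf)),
    taking the last sampled value as the t -> infinity limit.
    Assumes the target value is reached within the sampled window."""
    E_rot = np.asarray(E_rot)
    target = E_rot[-1] + (E_rot[0] - E_rot[-1]) / np.e
    return t[np.argmax(E_rot <= target)]
```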
Figure 12: Computed time evolution of the rotational state fractional populations in the cold trap. The buffer gas density was chosen to be $10^{10}$ cm-3. The six panels show different choices for the trap temperature, and the vertical lines indicate when the steady-state population is reached in each situation. See main text for further details.
The results for the time evolution in a specific example of buffer gas density ($10^{10}$ cm-3) and for a temperature range between 10 K and 60 K are shown in the six panels of Figure 12. We present there the solutions of the kinetic equations started from an initial T value of 400 K, reporting in each panel the time lag needed to reach the steady-state populations at each temperature shown. The specific end time is marked by the vertical line, placed where the relative populations no longer change at the 4th significant figure. Our data clearly indicate the effects of dealing with a light-atom molecule, where the energy gaps between levels are rather large. At the lowest T values, in fact (e.g. for T at 10 K and 20 K), nearly all molecules reach the j = 0 ground state within 10 s, with only negligible populations of the excited states. As the temperature increases to 30 K, and then up to 60 K, we see an increase in the number of molecules in the j = 1 excited state, although the most populated state remains j = 0, in contrast to what was seen in other cations, where the ground state relative population did not dominate in the trap after reaching the steady state González-Sánchez, Wester, and Gianturco (2018a, b) due to the smaller energy gaps existing between the lower rotational states of those systems.
The behaviour of the computed $\tau$ values over a range of trap densities achievable in current cold trapped ion experiments, and over the range of temperatures also possible in those traps Hernández Vera _et al._ (2017); González-Sánchez, Wester, and Gianturco (2018a); Gianturco _et al._ (2018); Schiller _et al._ (2017), is obviously linked to the form of eq. 17: we clearly expect an inverse scaling of the $\tau$ values with increasing buffer-gas density, so that four orders of magnitude of change in $n_{He}$ produce a similar span of values for the characteristic times. The $\tau$ values are also expected to change very little over the chosen range of T values.
## VIII Present Conclusions
We have obtained a new potential energy surface from first principles, using quantum chemical methods with highly correlated functions, as described in Section II. The molecular target was treated as a rigid rotor within the 2D description of the new interaction potential (RR-PES), and our findings were further compared with those from a treatment that allowed the cation to vibrate and to undergo an H+-exchange reaction using the earlier PS-PES. Both
potential functions were therefore employed to generate a wide variety of
rotationally inelastic cross sections and to extract from them the
corresponding inelastic rate coefficients pertaining to either purely
rotational energy-transfer channels or the rotationally inelastic processes
combined with the contributions from the H+-exchange reactive channels. Our calculations found that the inelastic rates produced by the two different potential functions are very similar to each other, and clearly showed that the collisional excitation of the internal rotational states of HeH+ interacting with He atoms is an important process, efficiently yielding rotationally excited states of this cation under the ISM conditions modeled in this study.
We have further compared our present results with earlier calculations in the
literature which used other likely partners like neutral H atoms and free
electrons under the same external conditions we have employed for the He
partner. We compare a variety of different collision efficiency indicators,
clearly showing for the first time that He and H are inducing rotational
excitation processes with very similar efficiency, with He consistently
turning out to be the more efficient partner. These findings therefore suggest
that both neutral atoms have to be considered when including inelastic
processes within the chemistry of early universe kinetic models.
We have further calculated Einstein coefficients to estimate the spontaneous decay of the cation's excited rotational states into a range of lower levels.
These coefficients turn out to be among the largest found for simple molecular
cations with light atoms and therefore clearly indicate that, given the
expected baryonic densities at different redshift values from current models
(see ref. Galli and Palla, 2013), the critical densities required for the
collisional paths to compete with the radiative paths are not likely to be
present in the expected interstellar environments where this molecule has been
detected. This indicates that LTE conditions are not likely to be achieved for
the molecular internal temperatures and therefore the rapid radiative decay of
rotational states down to the ground level will be the dominant cooling path
for the present molecular ion.
To test an entirely different physical environment we have also run
simulations of the time evolution of the rotational states of the present
cation when confined in a cold trap where the He partner plays the role of the
cooling buffer gas. The density values of the latter are obviously much larger than under ISM conditions, so that we can obtain quantitative information on the efficiency of this buffer gas as a coolant under laboratory conditions and in a much denser environment. The results indicate the efficiency of the
collisional cooling channels and the selective role on the relative
populations played by the large energy gaps which exist in this light-atom
molecular cation.
The present work has thus provided from first principles a broad set of
dynamical data and rotationally inelastic rate coefficients which allow for a
more realistic modeling of the chemical and baryonic evolution kinetics in the
astrophysical environments and also for a quantitative evaluation of the
efficiency of the collisional cooling paths under cold ion trap laboratory
conditions.
## IX Supplementary Material
The multipolar coefficients for the Legendre expansion of the new RR-PES for
HeH${}^{+}-$He, the computed proton-exchange reaction cross sections, the
computed inelastic rate coefficients from both the PESs employed in this work,
and the convergence tests for the 3D reactive calculations (S1) are available
as Supplementary Material to the present publication.
## X Acknowledgements
FAG and RW acknowledge the financial support of the Austrian FWF agency through research grant n. P29558-N36. One of us (L.G-S) further thanks MINECO (Spain) for the award of grant PGC2018-09644-B-100. We are grateful to A. N. Panda for providing the potential energy subroutine for the PS-PES. We are also very grateful to F. Lique and B. Desrousseaux for generously providing us with all the numerical results published in their paper, ready for comparison with our present calculations.
## XI Data Availability
The data that support the findings of this study are available within the article and its supplementary material.
## References
* Güsten _et al._ (2019) R. Güsten, H. Wiesemeyer, D. Neufeld, K. M. Menten, U. U. Graf, K. Jacobs, B. Klein, O. Ricken, C. Risacher, J. Stutzki, and S. Arnouts, “Astrophysical detection of the helium hydride ion HeH.” Nature 568, 357 (2019).
* Neufeld _et al._ (2020) D. A. Neufeld, M. Goto, T. R. Geballe, R. Güsten, K. M. Menten, and H. Wiesemeyer, “Detection of Vibrational Emissions from the Helium Hydride Ion (HeH+) in the Planetary Nebula NGC 7027,” ApJ 894, 37 (2020).
* Novotný _et al._ (2019) O. Novotný, P. Wilhelm, D. Paul, A. Kálosi, S. Saurabhi, K. Becker, A. Blaum, S. George, J. Göck, M. Grieser, F. Grussie, R. von Hahn, C. Krantz, H. Kreckel, C. Meyer, P. M. Mishra, D. Muell, F. Nuesslein, M. Orlov, D. A. Rimmler, V. C. Schmidt, A. Shornikov, S. Terekhov, A. S. Vogel, D. Zajfman, and A. Wolf, “Quantum-state–selective Electron Recombination Studies Suggest Enhanced Abundance of Primordial HeH+,” Science 365, 676 (2019).
* Galli and Palla (2013) D. Galli and F. Palla, “The Dawn of Chemistry,” Annu. Rev. Astron. Astrophys. 51, 163 (2013).
* Lepp, Stancil, and Dalgarno (2002) S. Lepp, P. C. Stancil, and A. Dalgarno, “Atomic and Molecular Processes in The Early Universe,” J. Phys. B: At. Mol. Opt. Phys. 35, R57 (2002).
* Hogness and Lunn (1925) T. R. Hogness and E. G. Lunn, “The Ionization of Hydrogen by Electron Impact as Interpreted by Positive Ray Analysis,” Phys. Rev. 26, 44 (1925).
* Dabrowski and Herzberg (1977) I. Dabrowski and G. Herzberg, “The Predicted Infrared Spectrum of HeH+ and Its Possible Astrophysical Importance,” Ann. N. Y. Acad. Sci. 38, 14 (1977).
* Bates (1951) D. R. Bates, “Rate of Formation of Molecules by Radiative Association,” Mon. Not. Roy. Astron. Soc. 111, 303 (1951).
* Forrey _et al._ (2020) R. C. Forrey, J. F. Babb, E. D. S. Courtney, R. McArdle, and P. C. Stancil, “Revisiting the Formation of HeH+ in the Planetary Nebula NGC 7027,” ApJ arXiv: 2006.08716v1, submitted (2020).
* Zicler _et al._ (2017) E. Zicler, O. Parisel, F. Pauzat, Y. Ellinger, M.-C. Bacchus-Montabonel, and J.-P. Maillard, “Search for hydrogen-helium molecular species in space,” A&A 607, A61 (2017).
* Ravi _et al._ (2020) S. Ravi, S. Mukherjee, B. Mukherjee, S. Adhikari, N. Sathyamurthy, and M. Baer, “Topological Studies Related to Molecular Systems Formed Soon After the Big Bang: HeH${}_{2}^{+}$ as the Precursor for HeH+,” Mol. Phys. , in press (2020).
* Bovino _et al._ (2011) S. Bovino, M. Tacconi, F. A. Gianturco, and D. Galli, “Ion Chemistry in the Early Universe Revisiting the Role of HeH+ with New Quantum Calculations,” A&A 529, A140 (2011).
* De Fazio (2014) D. De Fazio, “The H + HeH+ $\longrightarrow$ He + H${}_{2}^{+}$ Reaction from the Ultra-cold Regime to the Three-body Breakup: Exact Quantum Mechanical Integral Cross Sections and Rate Constants,” Phys. Chem. Chem. Phys. 16, 11662 (2014).
* Desrousseaux and Lique (2020) B. Desrousseaux and F. Lique, “Collisional Energy Transfer in the HeH+-H reactive System,” J. Chem. Phys. 152, 074303 (2020).
* Ramachandran _et al._ (2009) C. N. Ramachandran, D. De Fazio, S. Cavalli, F. Tarantelli, and V. Aquilanti, “Isotopic Effect on Stereodynamics of the Reactions of H + HeH+/H + HeD+/H + HeT+,” Chem. Phys. Lett. 469, 26 (2009).
* Kolakkandy, Giri, and Sathyamurthy (2012) S. Kolakkandy, K. Giri, and N. Sathyamurthy, “Collision-Induced Dissociation in (He, H${}_{2}^{+}$ ($v$ = 0-2; $j$ = 0-3)) System: A Time-Dependent Quantum Mechanical Investigation,” J. Chem. Phys. 136, 244312 (2012).
* Poshusta, Haugen, and Zetik (1969) R. D. Poshusta, J. A. Haugen, and D. F. Zetik, “Ab Initio Predictions for Very Small Ions,” J. Chem. Phys. 51, 3343 (1969).
* Poshusta and Siems (1971) R. D. Poshusta and W. F. Siems, “Ab Initio Calculations on He2H+,” J. Chem. Phys. 55, 1995 (1971).
* Milleur, Matcha, and Hayes (1974) M. B. Milleur, R. L. Matcha, and E. F. Hayes, “Theoretical Studies of Hydrogen‐rare Gas Complexes: HenH and HenH+ Clusters,” J. Chem. Phys. 60, 674 (1974).
* Dykstra (1983) C. E. Dykstra, “The Strong Hydrogen Bond in HeHHe+ and Its Weak Counterpart in HeH${}_{3}^{+}$,” J. Mol. Struct. 103, 131 (1983).
* Baccarelli, Gianturco, and Schneider (1997) I. Baccarelli, F. A. Gianturco, and F. Schneider, “Stability and Fragmentation of Protonated Helium Dimers from ab Initio Calculations of Their Potential Energy Surfaces,” J. Phys. Chem. 101, 6054 (1997).
* Filippone and Gianturco (1998) F. Filippone and F. A. Gianturco, “Charged Chromophoric Units in Protonated Rare-gas Clusters: A Dynamical Simulation,” Eur. Lett. 44, 585 (1998).
* Kim and Lee (1999) S. T. Kim and J. S. Lee, “Ab Initio Study of He2H+ and Ne2H+: Accurate Structure and Energetics,” J. Chem. Phys. 110, 4413 (1999).
* Lee and Secrest (1986) J. S. Lee and D. Secrest, “A Calculation of the Rotation–vibration States of He2H+,” J. Chem. Phys. 85, 6565 (1986).
* Panda and Sathyamurthy (2003) A. N. Panda and N. Sathyamurthy, “Bound and Quasibound States of He2H+ and He2D+,” J. Phys. Chem. A 107, 7125 (2003).
* Balakrishnan, Kalyanaraman, and Sathyamurthy (1997) N. Balakrishnan, C. Kalyanaraman, and N. Sathyamurthy, “Time-dependent quantum mechanical approach to reactive scattering and related processes,” Phys. Rep. 280, 79 (1997).
* Bhattacharya and Panda (2009) S. Bhattacharya and A. N. Panda, “Time-dependent Quantum Dynamics of the He + H+He Reaction,” J. Phys. B: At. Mol. Opt. Phys. 42, 085201 (2009).
* Liang _et al._ (2012) J.-J. Liang, C. L. Yang, L. Z. Wang, and Q. G. Zhang, “A New Analytical Potential Energy Surface for the Singlet State of He2H+,” J. Chem. Phys. 136, 094307 (2012).
* Xu and Zhang (2013) W. Xu and P. Zhang, “Accurate Study on the Quantum Dynamics of the He + HeH+ (X${}^{1}\sum^{+}$) Reaction on A New ab Initio Potential Energy Surface for the Lowest 11A${}^{{}^{\prime}}$ Electronic Singlet State,” J. Phys. Chem. A 117, 1406 (2013).
* Wu _et al._ (2014) D. Wu, M. Guo, Y. Wang, S. Yin, Z. Sun, and M. R. Hoffmann, “Coriolis Coupling Effect of State‐to‐state Quantum Dynamics for He + HeH+,” Theor Chem Acc 133, 1552 (2014).
* Yao (2014) C. X. Yao, “Quantum and Quasi‐classical Studies of the He + HeD+ $\longrightarrow$ HeD+ \+ He Exchange Reaction and Its Isotopic Variant,” Theor Chem Acc 133, 1554 (2014).
* Pavanello _et al._ (2005) M. Pavanello, S. Bubin, M. M., and L. Adamowicz, “Non-Born–Oppenheimer calculations of the pure vibrational spectrum of HeH+,” J. Phys. Chem. 123, 104306 (2005).
* Mueller _et al._ (2005) H. S. P. Mueller, F. Schloder, J. Stutzki, and G. Winnewisser, “The Cologne Database for Molecular Spectroscopy, CDMS: a useful tool for astronomers and spectroscopists,” J. Mol. Struct. 742, 215 (2005).
* Hamilton, Faure, and Tennyson (2016) J. R. Hamilton, A. Faure, and J. Tennyson, “Electron-impact Excitation of Diatomic Hydride Cations–I. HeH+, CH+, ArH+,” Mon. Not. R. Astron. Soc. 455, 3281 (2016).
* Ayouz and Kokoouline (2019) M. Ayouz and V. Kokoouline, “Cross Sections and Rate Coefficients for Rovibrational Excitation of HeH+ Isotopologues by Electron Impact,” Atoms 7, 67 (2019).
* Werner _et al._ (2012) H.-J. Werner, P. J. Knowles, G. Knizia, F. R. Manby, and M. Schütz, “Molpro: a general-purpose quantum chemistry program package,” WIREs Comput. Mol. Sci. 2, 242–253 (2012).
* Werner _et al._ (2019) H.-J. Werner, P. J. Knowles, G. Knizia, F. R. Manby, M. Schütz, _et al._ , “Molpro, version 2019.2, a package of ab initio programs,” (2019), see https://www.molpro.net.
* Hampel, Peterson, and Werner (1992) C. Hampel, K. A. Peterson, and H.-J. Werner, “A comparison of the efficiency and accuracy of the quadratic configuration interaction (qcisd), coupled cluster (ccsd), and brueckner coupled cluster (bccd) methods,” Chem. Phys. Lett. 190, 1–12 (1992).
* Deega and Knowles (1994) M. J. O. Deega and P. J. Knowles, “Perturbative corrections to account for triple excitations in closed and open shell coupled cluster theories,” Chem. Phys. Lett. 227, 321–326 (1994).
* Woon and Dunning Jr (1993) D. E. Woon and T. H. Dunning Jr, “Gaussian basis sets for use in correlated molecular calculations. III. The atoms aluminum through argon,” J. Chem. Phys. 98, 1358 (1993).
* Woon and Dunning Jr (1994) D. E. Woon and T. H. Dunning Jr, “Gaussian basis sets for use in correlated molecular calculations. IV. Calculation of static electrical response properties,” J. Chem. Phys. 100, 2975 (1994).
* Boys and Bernardi (1970) S. F. Boys and F. Bernardi, “Calculation of small molecular interactions by differences of separate total energies - some procedures with reduced errors,” Mol. Phys. 19, 553 (1970).
* Fortenberry and Wiesenfeld (2020) C. Fortenberry and L. Wiesenfeld, “A Molecular Candle where few molecules shine: HeHHe+,” Molecules 25, 42183 (2020).
* Stephan and Fortenberry (2017) C. Stephan and R. Fortenberry, “The interstellar formation and spectra of the noble gas, proton-bound HeHHe+, HeHNe+ and HeHAr+ complexes,” Mon.Not.Roy.Astronom.Soc. 469, 339–346 (2017).
* Taylor (2006) J. R. Taylor, _Scattering Theory The Quantum Theory of Nonrelativistic Collisions_ (Dover, 2006).
* Arthurs and Dalgarno (1960) A. M. Arthurs and A. Dalgarno, “The theory of scattering by a rigid rotator,” Proc. R. Soc. A 256, 540 (1960).
* Secrest (1979) D. Secrest, “Rotational excitation-i: The quantal treatment.” Bernstein R.B. (eds) Atom - Molecule Collision Theory Plenum, New York (1979), https://doi.org/10.1007/978-1-4613-2913-8.
* Kouri and Hoffman (1997) D. Kouri and D. Hoffman, “A tutorial on computational approaches to quantum scattering.” Truhlar D.G., Simon B. (eds) Multiparticle Quantum Scattering With Applications to Nuclear, Atomic and Molecular Physics 89, Springer, New York, NY (1997).
* Hutson (1994) J. Hutson, “Coupled channel methods for solving the bound-state schroedinger equation,” Comp.Phys.Comm. 84, 1–18 (1994).
* Gianturco (1979) F. Gianturco, “The transfer of molecular energies by collisions: recent quantum treatments,” Lect.Notes Chem. Springer Verlag, Berlin (1979).
* Martinazzo, Bodo, and Gianturco (2003) R. Martinazzo, E. Bodo, and F. A. Gianturco, “A Modified Variable-Phase Algorithm for Multichannel Scattering with Long-range Potentials,” Comput. Phys. Commun. 151, 187 (2003).
* López-Durán, Bodo, and Gianturco (2008) D. López-Durán, E. Bodo, and F. A. Gianturco, “ASPIN: An All Spin Scattering Code for Atom-molecule Rovibrationally Inelastic Cross Sections,” Comput. Phys. Commun. 179, 821 (2008).
* González-Sánchez _et al._ (2015) L. González-Sánchez, F. A. Gianturco, F. Carelli, and R. Wester, “Computing Rotational Energy Transfers of OD-/OH- in Collisions with Rb: Isotopic Effects and Inelastic Rates at Cold Ion-trap Conditions,” New. J. Phys. 17, 123003 (2015).
* Skouteris, Castillo, and Manolopoulos (2000) D. Skouteris, J. F. Castillo, and D. E. Manolopoulos, “ABC: A Quantum Reactive Scattering Program,” Compu. Phys. Commun. 133, 128 (2000).
* Smith (1976) I. W. M. Smith, “Relaxations in Collisions of Vibrationally Excited Molecules with Potentially Reactive Atoms,” Acc. Chem. Res. 9, 161 (1976).
* Kulinich _et al._ (2019) Y. Kulinich, B. Novosyadlyj, V. Shulga, and W. Han, “Thermal and resonant emission of dark ages halos in the rotational lines of HeH+,” arXiv:1911.04832 [astro-ph] (2019).
* Brown and Carrington (2003) J. M. Brown and A. Carrington, _Rotational Spectroscopy of Diatomic Molecules_ (Cambridge University Press, Cambridge, 2003).
* Mant _et al._ (2020) B. P. Mant, F. A. Gianturco, R. Wester, E. Yurtsever, and L. González-Sánchez, “Ro-vibrational quenching of C${}_{2}^{-}$ anions in collisions with He, Ne and Ar atoms,” Phys. Rev. A 102, 062810 (2020).
* Engel _et al._ (2005) E. Engel, N. Doss, G. Harris, and J. Tennyson, “Calculated spectra for HeH+ and its effect on the opacity of cool metal-poor stars,” MNRAS 357, 471– 477 (2005).
* Lara-Moreno, Stoecklin, and Halvick (2019) M. Lara-Moreno, T. Stoecklin, and P. Halvick, “Rotational Transitions of C3N- Induced by Collision with H2,” MNRAS 486, 414 (2019).
* Hernández Vera _et al._ (2017) M. Hernández Vera, F. A. Gianturco, R. Wester, H. da Silva Jr., O. Dulieu, and S. Schiller, “Rotationally inelastic collisions of H${}_{2}^{+}$ ions with He buffer gas: Computing cross sections and rates,” J. Chem. Phys. 146, 124310 (2017).
* González-Sánchez, Wester, and Gianturco (2018a) L. González-Sánchez, R. Wester, and F. A. Gianturco, “Modeling Quantum Kinetics in Ion Traps: State-changing Collisions for OH+,” ChemPhysChem 19, 1866 (2018a).
* Gianturco _et al._ (2018) F. A. Gianturco, O. Y. Lakhmanskaya, M. Hernández Vera, E. Yurtsever, and R. Wester, “Collisional relaxation kinetics for ortho and para NH${}_{2}^{-}$ under photodetachment in cold ion traps,” Faraday Discuss. 212, 117–135 (2018).
* Schiller _et al._ (2017) S. Schiller, I. Kortunov, M. Hernández Vera, H. da Silva Jr., and F. Gianturco, “Quantum state preparation of homonuclear molecular ions enabled via cold buffer gas: an ab initio study for the H${}_{2}^{+}$ and D${}_{2}^{+}$ cases,” Phys. Rev.A 95, 043411 (2017).
* González-Sánchez, Wester, and Gianturco (2018b) L. González-Sánchez, R. Wester, and F. A. Gianturco, “Collisional cooling of internal rotation in MgH+ ions trapped with He atoms: Quantum modeling meets experiments in Coulomb crystals,” Phys. Rev.A 98, 053423 (2018b).
# Chains of Boson Stars
C. A. R. Herdeiro†, J. Kunz‡, I. Perapechka⋆, E. Radu† and Y. Shnir⋄
†Department of Mathematics, University of Aveiro and CIDMA, Campus de
Santiago, 3810-183 Aveiro, Portugal
‡ Institute of Physics, University of Oldenburg, Oldenburg D-26111, Germany
⋆ Department of Theoretical Physics and Astrophysics, Belarusian State
University, Minsk 220004, Belarus
⋄ BLTP, JINR, Dubna 141980, Moscow Region, Russia
(January 2021)
###### Abstract
We study axially symmetric multi-soliton solutions of a complex scalar field
theory with a sextic potential, minimally coupled to Einstein’s gravity. These
solutions carry no angular momentum and can be classified by the number of
nodes of the scalar field, $k_{z}$, along the symmetry axis; they are
interpreted as chains with $k_{z}+1$ boson stars, bound by gravity, but kept
apart by repulsive scalar interactions. Chains with an odd number of
constituents show a spiraling behavior for their ADM mass (and Noether charge)
in terms of their angular frequency, similarly to a single fundamental boson
star, as long as the gravitational coupling is small; for larger coupling,
however, the inner part of the spiral is replaced by a merging with the
fundamental branch of radially excited spherical boson stars. Chains with an
even number of constituents exhibit a truncated spiral pattern, with only two
or three branches, ending at a limiting solution with finite values of ADM
mass and Noether charge.
## 1 Introduction
Many non-linear physical systems support non-topological solitons, which
represent spatially localized field configurations. One of the simplest
examples in flat space is given by $Q$-balls, which are particle-like
configurations in a model with a complex scalar field possessing a harmonic
time dependence and a suitable self-interaction potential [1, 2, 3]. When
$Q$-balls are coupled to gravity, the so-called Boson Stars (BSs) emerge,
which represent solitonic solutions with a topologically trivial and globally
regular geometry. The simplest such configurations are static and spherically
symmetric, the scalar field possessing a mass term only, without self-
interaction [4, 5]. These solutions are usually dubbed mini-Boson Stars (mBS),
being regarded as macroscopic quantum states, which are prevented from
gravitationally collapsing by Heisenberg’s uncertainty principle; also, they
do not have a regular flat spacetime limit.
Both $Q$-balls and BSs carry a Noether charge associated with an unbroken
continuous global $U(1)$ symmetry. The charge $Q$ is proportional to the
angular frequency of the complex boson field and represents the boson particle
number of the configurations [2, 3].
In flat spacetime $Q$-balls exist only within a certain frequency range for
the scalar field: between a maximal value $\omega_{\rm max}$, which
corresponds to the mass of the scalar excitations, and some lower non-zero
critical value $\omega_{\rm min}$, that depends on the form of the potential.
On the one hand, as the frequency $\omega$ is approaching its extremal values,
both the mass $M$ and the Noether charge $Q$ of the configurations diverge. On
the other hand, $M$ and $Q$ attain a minimum at some critical value
$\omega_{\rm cr}\in[\omega_{\rm min},\omega_{\rm max}]$. Away from
$\omega_{\rm cr}$ the mass and the charge of the configurations increase
towards the divergent value at the boundary of the domain. Within
$[\omega_{\rm min},\omega_{\rm cr}]$ the configurations become unstable
against decay.
The situation is different for BSs: gravity modifies the critical behavior
pattern of the configurations. The fundamental branch of the BS solutions
starts off from the perturbative excitations at $\omega\sim\omega_{\rm max}$,
at which $M,Q$ trivialise (rather than diverge). Then, the BSs exhibit a
spiraling (or oscillating) pattern of the frequency dependence of both charge
and mass, where both tend to some finite limiting values at the centers of the
spirals. Qualitatively, the appearance of the frequency-mass spiral may be
related to oscillations in the force-balance between the repulsive scalar
interaction and the gravitational attraction in equilibria. Indeed, radially
excited rotating BSs do not exhibit a spiraling behavior; instead the second
branch extends all the way back to the upper critical value of the frequency
$\omega_{\rm max}$, forming a loop [6].
The main purpose of this paper is to report on the existence of a new type of solutions, which correspond to chains of BSs (similar chains, but for a scalar field without self-interactions, were recently studied in [7], in the context of multipolar BSs; here, we emphasise the interpretation of multiple BSs, rather than a single multipolar BS). These are static, axially symmetric equilibrium configurations interpreted as a number of BSs located symmetrically with respect to the origin, along the symmetry axis. We construct these solutions and investigate their physical properties for a choice of the scalar field potential with quartic and sextic self-interaction terms, as employed in most of the $Q$-ball literature; its generic form is recalled below. We note that
similar configurations of chains of constituents are known to exist both for
gravitating and flat space non-Abelian monopoles and dyons [8, 9, 10, 11, 12,
14, 15, 13, 16, 17, 18, 19, 20], Skyrmions [21, 22, 23], electroweak
sphalerons [24, 25, 26, 27, 28], $SU(2)$ non-self dual configurations [29, 30]
and Yang-Mills solitons in ADS4 spacetime [31, 32].
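For orientation, a typical potential of this type, widely used in the $Q$-ball literature, is $U(|\Phi|^{2})=m^{2}|\Phi|^{2}-\lambda|\Phi|^{4}+\beta|\Phi|^{6}$ (with $m$ the boson mass and $\lambda,\beta>0$ self-interaction couplings), whose sextic term allows non-topological solitons to exist already in flat spacetime.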
In these multi-component BS configurations the constituents form a chain along the symmetry axis and, consequently, the scalar field is required to possess 'nodes' (zeros of the scalar field amplitude). Configurations with $k_{z}$ nodes on the symmetry axis possess $k_{z}+1$ constituents.
Configurations with even and odd numbers of constituents can show a different
pattern, when their domain of existence is mapped out. In particular, we find
that the pattern exhibited by a mass-frequency diagram of the chains of BSs
can differ both from the typical spiraling picture and from the closed loop
scenario. For chains with an even number of constituents the pattern always
terminates at critical solutions. For chains with an odd number of
constituents, the pattern depends on the strength of the gravitational
interaction. The configurations then either merge with the corresponding
radial excitation of the fundamental solution, or the central constituent of
the configurations exhibits oscillations while retaining smaller satellite
constituents.
This paper is organized as follows. In Section II the theoretical setting is specified. This includes the action, the equations of motion, the Ansatz and the boundary conditions for the BS chains. The numerical results for these new equilibrium configurations are shown in Section III. We give our conclusions in Section IV.
## 2 The model
### 2.1 Action, field equations and global charges
We consider a self-interacting complex scalar field $\Phi$, which is minimally
coupled to Einstein’s gravity in a $(3+1)$-dimensional space-time. The
corresponding action of the system is
$\mathcal{S}=\int d^{4}x\sqrt{-g}\left[\frac{R}{16\pi
G}-\frac{1}{2}g^{\mu\nu}\left(\Phi_{,\,\mu}^{*}\Phi_{,\,\nu}+\Phi_{,\,\nu}^{*}\Phi_{,\,\mu}\right)-U(|\Phi|^{2})\right],$
(2.1)
where $R$ is the Ricci scalar curvature, $G$ is Newton’s constant, the
asterisk denotes complex conjugation, and $U$ denotes the scalar field
potential.
Variation of the action (2.1) with respect to the metric leads to the Einstein
equations
$E_{\mu\nu}\equiv R_{\mu\nu}-\frac{1}{2}g_{\mu\nu}R-8\pi G~{}T_{\mu\nu}=0\ ,$
(2.2)
where
$T_{\mu\nu}\equiv\Phi_{,\mu}^{*}\Phi_{,\nu}+\Phi_{,\nu}^{*}\Phi_{,\mu}-g_{\mu\nu}\left[\frac{1}{2}g^{\sigma\tau}(\Phi_{,\sigma}^{*}\Phi_{,\tau}+\Phi_{,\tau}^{*}\Phi_{,\sigma})+U(|\Phi|^{2})\right]\,,$
(2.3)
is the stress-energy tensor of the scalar field.
The corresponding equation of motion of the scalar field is the non-linear
Klein-Gordon equation
$\left(\Box-\frac{dU}{d|\Phi|^{2}}\right)\Phi=0\,,$ (2.4)
where $\Box$ represents the covariant d’Alembert operator.
The solutions considered in this work have a static line-element (with a
timelike Killing vector field $\xi$), being topologically trivial and globally
regular, $i.e.$ without an event horizon or conical singularities, while the
scalar field is finite and smooth everywhere. Also, they approach
asymptotically the Minkowski spacetime background. Their mass $M$ can be
obtained from the respective Komar expressions [33],
${M}=\frac{1}{{4\pi G}}\int_{\Sigma}R_{\mu\nu}n^{\mu}\xi^{\nu}dV~{}.$ (2.5)
Here $\Sigma$ denotes a spacelike hypersurface (with the volume element $dV$),
while $n^{\mu}$ is normal to $\Sigma$, with $n_{\mu}n^{\mu}=-1$ [33]. After
replacing in (2.5) the Ricci tensor by the stress-energy tensor, via the
Einstein equations (2.2), one finds the equivalent expression
$\displaystyle
M=\,2\int_{\Sigma}\left(T_{\mu\nu}-\frac{1}{2}\,g_{\mu\nu}\,T_{\gamma}^{\
\gamma}\right)n^{\mu}\xi^{\nu}dV\ .$ (2.6)
The action (2.1) is invariant with respect to the global $\mathrm{U}(1)$
transformations of the complex scalar field, $\Phi\to\Phi e^{i\alpha}$, where
$\alpha$ is a constant. The following Noether 4-current is associated with
this symmetry
$j_{\mu}=-i(\Phi\partial_{\mu}\Phi^{\ast}-\Phi^{\ast}\partial_{\mu}\Phi)\,.$
(2.7)
It follows that integrating the timelike component of this 4-current in a
spacelike slice $\Sigma$ yields a second conserved quantity – the Noether
charge:
$\displaystyle Q=\int_{\Sigma}j^{\mu}n_{\mu}dV.$ (2.8)
### 2.2 The ansatz and equations
Apart from being static, the configurations in this work are also axially
symmetric222The scalar field possesses a time dependence (with
$\partial_{t}\Phi=-i\omega\Phi$), which disappears at the level of the energy-
momentum tensor. However, the scalar field is axially symmetric,
$\partial_{\varphi}\Phi=0$. Thus, in a system of adapted coordinates, the
space-time possesses two commuting Killing vector fields
$\displaystyle\xi=\partial/\partial t~{}~{}{\rm
and}~{}~{}\eta=\partial/\partial\varphi,$ (2.9)
with $t$ and $\varphi$ the time and azimuthal coordinates, respectively. Line
elements with these symmetries are usually studied by employing a metric
ansatz [34]
$\displaystyle
ds^{2}=-e^{-2U(\rho,z)}dt^{2}+e^{2U(\rho,z)}\left[e^{2k(\rho,z)}(d\rho^{2}+dz^{2})+P(\rho,z)^{2}d\varphi^{2}\right],$
(2.10)
where $(\rho,z)$ correspond, asymptotically, to the usual cylindrical
coordinates333 In the Einstein-Maxwell theory, it is always possible to set
$P\equiv\rho$, such that only two independent metric functions appear in the
equations, with $(\rho,z)$ the canonical Weyl coordinates [34]. For a
(complex) scalar field matter content, however, the generic metric ansatz
(2.10) with three independent functions is needed. However, in the numerical
treatment of the Einstein-Klein-Gordon equations, it is useful to employ
‘quasi-isotropic’ spherical coordinates $(r,\theta)$, defined by the
coordinate transformation in (2.10)
$\displaystyle\rho=r\sin\theta,~{}~{}z=r\cos\theta~{},$ (2.11)
with the usual range $0\leqslant r<\infty$, $0\leqslant\theta\leq\pi$. The
metric can then be written in a Lewis-Papapetrou form, with
$ds^{2}=-fdt^{2}+\frac{m}{f}\left(dr^{2}+r^{2}d\theta^{2}\right)+\frac{l}{f}r^{2}\sin^{2}\theta
d\varphi^{2}~{}.$ (2.12)
The three metric functions $f$, $l$, and $m$ are functions of the variables
$r$ and $\theta$ only, chosen such that the trivial angular and radial
dependence of the (asymptotically flat) line element is already factorized.
The relation between the metric functions in the above line-element and those
in (2.10) is $f=e^{-2U}$, $lr^{2}\sin^{2}\theta=P^{2}$, $m=e^{2k}$. The
symmetry axis of the spacetime is located at the set of points with vanishing
norm of $\eta$, $||\eta||=0$; it corresponds to the $z$-axis in (2.10) or the
set with $\theta=0,\pi$ in (2.12). The Minkowski spacetime background is
approached for $r\to\infty$, with $f=l=m=1$.
For the scalar field we adopt an Ansatz with a harmonic time dependence, while
the amplitude depends on $(r,\theta)$,
$\Phi=\phi(r,\theta)e^{-i\omega t},$ (2.13)
where $\omega\geq 0$ is the angular frequency. The corresponding stress-energy
tensor is static, with the nonvanishing components
$\displaystyle
T_{r}^{r}=\frac{f}{m}\bigg{(}\phi_{,r}^{2}-\frac{\phi_{,\theta}^{2}}{r^{2}}\bigg{)}+\frac{\omega^{2}\phi^{2}}{f}-U(\phi^{2}),~{}~{}T_{\theta}^{\theta}=\frac{f}{m}\bigg{(}-\phi_{,r}^{2}+\frac{\phi_{,\theta}^{2}}{r^{2}}\bigg{)}+\frac{\omega^{2}\phi^{2}}{f}-U(\phi^{2}),~{}~{}T_{r}^{\theta}=\frac{2f}{r^{2}m}\phi_{,r}\phi_{,\theta}$
$\displaystyle
T_{\varphi}^{\varphi}=-\frac{f}{m}\bigg{(}\phi_{,r}^{2}+\frac{\phi_{,\theta}^{2}}{r^{2}}\bigg{)}+\frac{\omega^{2}\phi^{2}}{f}-U(\phi^{2}),~{}~{}T_{t}^{t}=-\frac{f}{m}\bigg{(}\phi_{,r}^{2}+\frac{\phi_{,\theta}^{2}}{r^{2}}\bigg{)}-\frac{\omega^{2}\phi^{2}}{f}-U(\phi^{2}).$
(2.14)
After inserting the ansatz (2.12), (2.13) into the field equations (2.2),
(2.4) we find a system of six coupled partial differential equations that
needs to be solved. There are three equations for the metric functions
$f,l,m$, found by taking the following suitable combinations of the Einstein
equations, $E_{r}^{r}+E_{\theta}^{\theta}=0$, $E_{\varphi}^{\varphi}=0$ and
$E_{t}^{t}=0$, yielding
$\displaystyle
f_{,rr}+\frac{f_{,\theta\theta}}{r^{2}}+\frac{2f_{,r}}{r}+\frac{\cot\theta
f_{,\theta}}{r^{2}}-\frac{1}{f}\bigg{(}f_{,r}^{2}+\frac{f_{,\theta}^{2}}{r^{2}}\bigg{)}+\frac{1}{2l}\bigg{(}f_{,r}l_{,r}+\frac{f_{,\theta}l_{,\theta}}{r^{2}}\bigg{)}+16\pi
G\left(U(\phi^{2})-\frac{2\omega^{2}\phi^{2}}{f}\right)m=0~{},$ $\displaystyle
l_{,rr}+\frac{l_{,\theta\theta}}{r^{2}}+\frac{3l_{,r}}{r}+\frac{2\cot\theta
l_{,\theta}}{r^{2}}-\frac{1}{2l}\bigg{(}l_{,r}^{2}+\frac{l_{,\theta}^{2}}{r^{2}}\bigg{)}+32\pi
G\left(U(\phi^{2})-\frac{\omega^{2}\phi^{2}}{f}\right)\frac{lm}{f}=0~{},$
(2.15) $\displaystyle
m_{,rr}+\frac{m_{,\theta\theta}}{r^{2}}+\frac{m_{,r}}{r}+\frac{m}{2f^{2}}\bigg{(}f_{,r}^{2}+\frac{f_{,\theta}^{2}}{r^{2}}\bigg{)}-\frac{1}{m}\bigg{(}m_{,r}^{2}+\frac{m_{,\theta}^{2}}{r^{2}}\bigg{)}$
$\displaystyle\ \ \ \ \ \ \ +16\pi
G\left[\frac{f}{m}\bigg{(}\phi_{,r}^{2}+\frac{\phi_{,\theta}^{2}}{r^{2}}\bigg{)}+U(\phi^{2})-\frac{\omega^{2}\phi^{2}}{f}\right]\frac{m^{2}}{f}=0~{},$
and one equation for the scalar field amplitude
$\displaystyle\phi_{,rr}+\frac{\phi_{,\theta\theta}}{r^{2}}+\left(\frac{2}{r}+\frac{l_{,r}}{2l}\right)\phi_{,r}+\left(\cot\theta+\frac{l_{,\theta}}{2l}\right)\frac{\phi_{,\theta}}{r^{2}}+(\frac{\omega^{2}}{f}-\frac{\partial
U}{\partial\phi^{2}})\frac{m}{f}\phi=0\ .$ (2.16)
Apart from these, there are two more Einstein equations,
$E_{\theta}^{r}=0,~{}E_{r}^{r}-E_{\theta}^{\theta}=0$, which are not solved in
practice, being treated as constraints and used to check the numerical
accuracy of the solutions.
The mass $M$ is computed from the Komar expression (2.5), where we insert the
metric ansatz (2.12), with unit vector $n=-\sqrt{f}dt$, the volume element
$dV=1/\sqrt{f}\,\sqrt{-g}\,dr\,d\theta\,d\varphi$, and
$\sqrt{-g}=r^{2}\sin\theta\frac{\sqrt{l}m}{f}$. In evaluating (2.5), we use
the fact that $R_{t}^{t}$ is a total derivative:
$\displaystyle\sqrt{-g}R_{t}^{t}$ $\displaystyle=$
$\displaystyle-\frac{\partial}{\partial
r}\left(\frac{r^{2}\sin\theta\sqrt{l}f_{,r}}{2f}\right)-\frac{\partial}{\partial\theta}\left(\frac{\sin\theta\sqrt{l}f_{,\theta}}{2f}\right).$
Then it follows that $M$ can be read off from the asymptotic expansion of the
metric function $f$
$\displaystyle f=1-\frac{2MG}{r}+\mathcal{O}\left(\frac{1}{r^{2}}\right)\ .\ \
\ $ (2.17)
Alternatively, the mass $M$ can be obtained by direct integration of (2.6),
$\displaystyle
M=\int\left(T_{t}^{t}-T_{r}^{r}-T_{\theta}^{\theta}-T_{\varphi}^{\varphi}\right)\,\sqrt{-g}\,dr\,d\theta\,d\varphi=4\pi\int_{0}^{\infty}dr\int_{0}^{\pi}d\theta~{}r^{2}\sin\theta\frac{\sqrt{l}m}{f}\left(U(\phi^{2})-\frac{2\omega^{2}\phi^{2}}{f}\right)~{}.$
(2.18)
In terms of the above Ansatz the Noether charge $Q$ is given by
$Q=4\pi\omega\int\limits_{0}^{\infty}dr\int\limits_{0}^{\pi}d\theta~{}\frac{\sqrt{l}m}{f^{2}}r^{2}\sin\theta~{}\phi^{2}~{}.$
(2.19)
Also, let us note that the solutions in this work have no horizon. Therefore
they are zero entropy objects, without an intrinsic temperature. Still, in
analogy to black holes, one may write a “first law of thermodynamics” [35],
albeit without the entropy term, which reads
$dM=\omega dQ~{}.$ (2.20)
This gives a relation between the mass and Noether charge of neighbouring BS
solutions which can be used to check the numerical accuracy of the solutions.
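As a minimal illustration (with hypothetical branch data, not output of our solver), the check amounts to comparing the finite difference $\Delta M/\Delta Q$ between neighbouring solutions with $\omega$ at the midpoint:

```python
import numpy as np

# Hypothetical (omega, M, Q) data along a branch of solutions; in practice
# these come from the numerical solver described in Section 2.5.
omega = np.array([0.90, 0.88, 0.86, 0.84])
M = np.array([1.10, 1.23, 1.38, 1.55])
Q = np.array([1.20, 1.35, 1.52, 1.72])

# First law dM = omega dQ, eq. (2.20): dM/dQ should match omega midway
dM_dQ = np.diff(M) / np.diff(Q)
omega_mid = 0.5 * (omega[1:] + omega[:-1])
print(np.max(np.abs(dM_dQ - omega_mid)))  # small for accurate solutions
```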
### 2.3 The potential and scaling properties
The solutions in this work were found for a potential originally proposed in
[36, 37],
$U(|\Phi|^{2})=\nu|\Phi|^{6}-\lambda|\Phi|^{4}+\mu^{2}|\Phi|^{2}$ (2.21)
which is chosen such that nontopological soliton solutions ($Q$-balls) exist
in the absence of the Einstein term in the action (2.1), $i.e.$ on a Minkowski
spacetime background [38, 39, 40, 41]. Also, at least in the spherically
symmetric case, this choice of the potential allows for the existence of very
massive and highly compact objects approaching the black hole limit [38].
The parameter $\mu$ in (2.21) corresponds to the mass of the scalar
excitations around the $|\Phi|=0$ vacuum. Apart from that, the potential
possesses two more parameters $(\nu,\lambda)>0$, determining the self-
interactions, which are chosen in such a manner that it possesses a local
minimum at some finite non-zero value of the field $|\Phi|$, besides the
global minimum at $|\Phi|=0$.
Two of the constants in (2.21) can actually be absorbed into a redefinition of
the radial coordinate together with a rescaling of the scalar field,
$\displaystyle r\to\frac{u_{0}}{\mu}r\
,~{}~{}~{}\phi\to\frac{\sqrt{\mu}}{\nu^{1/4}\sqrt{u_{0}}}\phi\ ,$ (2.22)
with $u_{0}>0$ an arbitrary constant. Note that the scalar field frequency
changes accordingly, $\omega\to\frac{u_{0}}{\mu}\omega.$
Then, the potential (2.21) becomes (up to an overall factor),
$\displaystyle U=\phi^{6}-\bar{\lambda}\phi^{4}+u_{0}^{2}\phi^{2},\qquad{\rm
with}~{}~{}\bar{\lambda}=\frac{\lambda u_{0}}{\mu\sqrt{\nu}}\ .$ (2.23)
The choice employed in most of the $Q$-ball literature is
$\displaystyle u_{0}^{2}=1.1\ ,~{}~{}{\rm and}~{}~{}\bar{\lambda}=2\ ,$ (2.24)
which are the values used also in this work.
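As a small consistency check (a sketch using sympy, not part of the original derivation), one can verify both the rescaling (2.22)-(2.23) and the double-well structure of the potential for the choice (2.24):

```python
import sympy as sp

phi, mu, nu, lam, u0 = sp.symbols('phi mu nu lam u0', positive=True)

# Rescaled field of eq. (2.22): Phi = sqrt(mu)/(nu^(1/4) sqrt(u0)) * phi
Phi = sp.sqrt(mu) / (nu**sp.Rational(1, 4) * sp.sqrt(u0)) * phi
U = nu*Phi**6 - lam*Phi**4 + mu**2*Phi**2          # potential (2.21)

overall = mu**3 / (sp.sqrt(nu) * u0**3)            # overall factor
lam_bar = lam * u0 / (mu * sp.sqrt(nu))            # as in eq. (2.23)
U_target = overall * (phi**6 - lam_bar*phi**4 + u0**2*phi**2)
assert sp.simplify(U - U_target) == 0              # (2.21) -> (2.23)

# With the choice (2.24), U = phi^6 - 2 phi^4 + 1.1 phi^2 has a local
# minimum at nonzero phi besides the global minimum at phi = 0
Us = phi**6 - 2*phi**4 + sp.Rational(11, 10)*phi**2
crit = [c for c in sp.solve(sp.diff(Us, phi), phi) if c.is_real and c > 0]
print([(float(c), float(Us.subs(phi, c))) for c in crit])
# -> roughly [(0.62, 0.18), (0.97, 0.10)]: a local max, then a local min, U > 0
```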
For completeness, let us mention that for the specific ansatz (2.12), (2.13)
with the above scalings, the equations for gravitating $Q$-balls (which are
effectively solved) can also be derived by extremizing the following reduced
action444In (2.26) we use the dimensionless radial variable $r$ and the
dimensionless frequency $\omega$ given in units set by $\mu$.
$\displaystyle S_{red}=\int drd\theta\left({\cal L}_{g}-4\alpha^{2}{\cal
L}_{s}\right),$ (2.25)
with [for ease of notation, we denote $(\nabla S)\cdot(\nabla T)\equiv
S_{,r}T_{,r}+\frac{1}{r^{2}}S_{,\theta}T_{,\theta}$]:
$\displaystyle{\cal L}_{g}$ $\displaystyle=$ $\displaystyle
r^{2}\sin\theta\sqrt{l}\bigg{[}\frac{1}{2lm}(\nabla l)\cdot(\nabla
m)-\frac{1}{2f^{2}}(\nabla f)^{2}$ (2.26)
$\displaystyle{~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}}-\frac{1}{rl}\left(l_{,r}+\frac{\cot\theta}{r}l_{,\theta}\right)+\frac{1}{rm}\left(m_{,r}+\frac{\cot\theta}{r}m_{,\theta}\right)\bigg{]},$
$\displaystyle{\cal L}_{s}$ $\displaystyle=$ $\displaystyle
r^{2}\sin\theta\frac{m\sqrt{l}}{f}\bigg{[}\frac{f}{m}(\nabla\phi)^{2}-\frac{\omega^{2}\phi^{2}}{f}+\phi^{6}-2\phi^{4}+1.1\phi^{2}\bigg{]}.$
The (dimensionless) coupling constant reads
$\displaystyle\alpha^{2}=\frac{4\pi G\mu}{\sqrt{\nu}u_{0}},$ (2.27)
which determines the strength of the gravitational coupling of the solutions.
As with the known spherically symmetric configurations, solutions exist for
$0\leqslant\alpha<\infty$. The limit $\alpha\to 0$ corresponds to the non-
backreacting case, $i.e.$ $Q$-balls on a fixed Minkowski background. To
understand the large $\alpha$ limit, we define $\hat{\phi}=\phi/\alpha$. Then
for large values of the effective gravitational coupling $\alpha$, the non-
linearity of the potential (2.21) becomes suppressed and the system approaches
the usual Einstein-Klein-Gordon model [4, 5], with its corresponding mBS
solutions. That is, the resulting equations are identical to those found for a
model with a non-self-interacting, massive complex scalar field $\hat{\phi}$.
However, in this work we shall restrict our study to the case of a finite,
nonzero $\alpha$. The basic properties of the mBS chains were studied (in a
more general context) in the recent work [7], while the issue of flat
spacetime $Q$-ball chains will be discussed elsewhere.
### 2.4 Boundary conditions
The solutions studied in this work are globally regular and asymptotically
flat, possessing finite mass and Noether charge. Appropriate boundary
conditions guarantee that these conditions are satisfied.
Starting with the boundary conditions for the metric functions, regularity of
the solutions at the origin requires
$\partial_{r}f\bigl{.}\bigr{|}_{r=0}=\partial_{r}m\bigl{.}\bigr{|}_{r=0}=\partial_{r}l\bigl{.}\bigr{|}_{r=0}=0\,.$
(2.28)
Demanding asymptotic flatness at spatial infinity yields
$f\bigl{.}\bigr{|}_{r\to\infty}=m\bigl{.}\bigr{|}_{r\to\infty}=l\bigl{.}\bigr{|}_{r\to\infty}=1.$
(2.29)
Axial symmetry and regularity impose on the symmetry axis at $\theta=0,\pi$
the conditions
$\partial_{\theta}\phi\bigl{.}\bigr{|}_{\theta=0,\pi}=\partial_{\theta}f\bigl{.}\bigr{|}_{\theta=0,\pi}=\partial_{\theta}m\bigl{.}\bigr{|}_{\theta=0,\pi}=\partial_{\theta}l\bigl{.}\bigr{|}_{\theta=0,\pi}=0\,.$
(2.30)
Further, the condition of the absence of a conical singularity requires that
the solutions should satisfy the constraint
$m\bigl{.}\bigr{|}_{\theta=0,\pi}=l\bigl{.}\bigr{|}_{\theta=0,\pi}$. In our
numerical scheme we explicitly verified (within the numerical accuracy) this
condition on the symmetry axis.
Turning now to the boundary conditions for the scalar field amplitude, we
mention first that $\phi$ approaches asymptotically the global minimum,
$\phi\bigl{.}\bigr{|}_{r\to\infty}=0~{},$ (2.31)
while on the symmetry axis we impose
$\partial_{\theta}\phi\bigl{.}\bigr{|}_{\theta=0,\pi}=0\ .$ (2.32)
The behaviour of the scalar field at the origin is more complicated, depending
on the considered parity ${\cal P}$. As mentioned in the Introduction, the
solutions split into two classes, distinguished by their behaviour $w.r.t.$ a
reflection along the equatorial plane $\theta=\pi/2$. The geometry is left
invariant under this transformation,
$\displaystyle
f(r,\pi-\theta)=f(r,\theta),~{}~{}l(r,\pi-\theta)=l(r,\theta),~{}~{}m(r,\pi-\theta)=m(r,\theta)~{};$
(2.33)
for the scalar field, however, there are two possibilities:
$\displaystyle{\cal
P}=1~{}~{}~{}{\rm(even~{}parity)}:~{}~{}\phi(r,\pi-\theta)={\phantom{-}}\phi(r,\theta)\
,$ (2.34) $\displaystyle{\cal
P}=-1~{}{\rm(odd~{}parity)}:~{}~{}\phi(r,\pi-\theta)=-\phi(r,\theta)~{}.$
(2.35)
We use this symmetry to reduce the domain of integration to
$[0,\infty)\times[0,\pi/2]$, with
$\partial_{\theta}f\bigl{.}\bigr{|}_{\theta=\pi/2}=\partial_{\theta}m\bigl{.}\bigr{|}_{\theta=\pi/2}=\partial_{\theta}l\bigl{.}\bigr{|}_{\theta=\pi/2}=0\,,$
(2.36)
and
${\cal
P}=1:~{}\partial_{\theta}\phi\bigl{.}\bigr{|}_{\theta=\pi/2}=0,~{}~{}~{}{\cal
P}=-1:~{}\phi\bigl{.}\bigr{|}_{\theta=\pi/2}=0,~{}~{}$ (2.37)
together with
$\displaystyle{\cal P}=1:~{}\partial_{r}\phi\bigl{.}\bigr{|}_{r=0}=0~{}\
,~{}~{}~{}{\cal P}=-1:~{}\phi\bigl{.}\bigr{|}_{r=0}=0\,.$ (2.38)
Finally, let us mention that, for both ${\cal P}=\pm 1$, one can formally
construct an approximate expression of the solutions compatible with the above
boundary conditions, $e.g.$ by assuming the existence of a power series form
close to $r=0$. The resulting expressions are of little help in understanding
the properties of the solutions, and a numerical approach is necessary555We
recall that, more than fifty years after their discovery [4, 5], the static
mBS are still not known in closed form. However, one important result of
this study is the bound-state condition
$\displaystyle\omega\leqslant\mu,$ (2.39)
which emerges from the finite mass requirement. No similar result is found for
the minimal value of the frequency.
### 2.5 Numerical method
The set of four coupled non-linear elliptic partial differential equations for
the functions $(f,l,m;\phi)$ has been solved numerically subject to the
boundary conditions defined above. In a first stage, a new compactified radial
variable $x$ is introduced, replacing $r$, with
$\displaystyle x\equiv\frac{r}{c+r},$ (2.40)
with $c$ an arbitrary parameter, typically of order one. With this choice, the
semi-infinite region $[0,\infty)$ is mapped to the finite interval $[0,1]$.
This avoids the use of a cutoff radius.
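For orientation, a minimal sketch of this mapping (a few lines of Python, with an illustrative grid size):

```python
import numpy as np

c = 1.0                                # arbitrary parameter of order one
x = np.linspace(0.0, 1.0, 129)[:-1]    # grid in x, dropping x = 1 (r = infinity)
r = c * x / (1.0 - x)                  # inverse of eq. (2.40), x = r/(c + r)
assert np.allclose(x, r / (c + r))     # round trip
```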
The numerical calculations have been performed by using a sixth-order
finite difference scheme. The system of equations is discretized on a grid
with a typical number of points $129\times 89$. The underlying linear system
is solved with the Intel MKL PARDISO sparse direct solver [50] using the
Newton-Raphson method. Calculations are performed with the packages
FIDISOL/CADSOL [51] and CESDSOL666Complex Equations – Simple Domain partial
differential equations SOLver, a C++ library being developed by one of us
(I.P.). In all cases, the typical errors are of order $10^{-4}$.
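To convey the flavour of the scheme, here is a minimal sketch (a toy one-dimensional boundary-value problem, not the actual Einstein-Klein-Gordon system) of a Newton-Raphson iteration on a finite-difference discretization with a sparse linear solve:

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import spsolve

# Toy problem: u'' = u^3 - u on [0, 1], u(0) = 0, u(1) = 1,
# discretized with second-order central differences.
N = 201
x = np.linspace(0.0, 1.0, N)
h = x[1] - x[0]
u = x.copy()                                  # initial guess obeying the BCs

for it in range(50):
    F = np.zeros(N)                           # residual of the discrete system
    F[1:-1] = (u[2:] - 2*u[1:-1] + u[:-2]) / h**2 - (u[1:-1]**3 - u[1:-1])
    F[0], F[-1] = u[0] - 0.0, u[-1] - 1.0     # Dirichlet boundary rows

    main = np.full(N, 1.0)                    # tridiagonal Jacobian dF/du
    main[1:-1] = -2.0 / h**2 - (3*u[1:-1]**2 - 1.0)
    lower = np.full(N - 1, 1.0 / h**2)
    upper = np.full(N - 1, 1.0 / h**2)
    lower[-1], upper[0] = 0.0, 0.0            # decouple the boundary rows
    J = diags([lower, main, upper], [-1, 0, 1], format="csc")

    du = spsolve(J, -F)                       # sparse direct solve
    u += du                                   # Newton update
    if np.max(np.abs(du)) < 1e-12:
        break
```

In the actual computations the role of the sparse direct solve is played by PARDISO, and the discretization is sixth-order in both variables.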
For the choice (2.24) of the potential’s parameters, the only input parameters
are the gravitational coupling constant $\alpha$ together with the scalar
field’s frequency $\omega$. The parity of the solutions is imposed via the
boundary conditions (2.34), (2.38). The number of individual constituents
results from the numerical output. Also, in all plots below, quantities are
given in natural units set by $\mu,G$.
## 3 The solutions
### 3.1 Nodal structure and energy distribution
The choice of the parity ${\cal P}$ is related to the number of distinct
constituents of the solutions (as resulting $e.g.$ from the spatial
distribution of the energy density). Moreover, the solutions can be classified
by the number $k_{z}$ of nodes of the scalar field amplitude on the symmetry
axis. The number of constituents of the chains is given by $k_{z}+1$.
The even-parity configurations (${\cal P}=1$) have an even $k_{z}$, while the
solutions with an odd $k_{z}$ are found for ${\cal P}=-1$. For example, the
spherically symmetric solutions have $k_{z}=0$ and one single constituent
localized at the origin, $r=0$. The simplest non-spherical configuration has
$k_{z}=1$ and represents a pair of static BSs777 In principle, the $k_{z}=1$
solutions can be thought of as the static limit of negative parity spinning
configurations considered in [41]. with opposite phases: the inversion of the
sign of the scalar field function $\phi$ under reflections
$\theta\to\pi-\theta$ corresponds to the shift of the phase $\omega t\to\omega
t+\pi$.
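As an aside, classifying a computed solution by $k_{z}$ amounts to counting sign changes of the amplitude along the axis; a minimal sketch (with a toy odd-parity profile, not solver output):

```python
import numpy as np

def count_nodes(phi_axis, tol=1e-8):
    """Count nodes (sign changes) of phi sampled along the full z-axis."""
    s = np.sign(phi_axis[np.abs(phi_axis) > tol])  # drop near-zero samples
    return int(np.sum(s[1:] != s[:-1]))

z = np.linspace(-10.0, 10.0, 401)
phi_axis = z * np.exp(-0.5 * z**2)                 # toy odd profile, one node
print(count_nodes(phi_axis) + 1, "constituents")   # prints "2 constituents"
```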
It was pointed out that the character of the interaction between Q-balls in
Minkowski spacetime depends on their relative phase [42, 43]. If the Q-balls
are in phase, the interaction is attractive; if they are out of phase, there
is a repulsive force between them. Thus, in an axially symmetric $k_{z}=1$
solution this repulsion can be balanced by the gravitational attraction.
Solutions with $k_{z}>1$ exist as well; the maximal value we have reached so
far is $k_{z}=5$; however, they are likely to exist for an arbitrarily large
$k_{z}$.
To illustrate these aspects, we display in Fig. 1 several functions of
interest for five representative configurations. Both odd and even parity
configurations are shown there, with node numbers $k_{z}=0-4$; all solutions
have the same values of the input parameters, $\alpha=0.25$ and
$\omega/\mu=0.8$, and are located on the first branch in the
$(\omega,M)$-diagram (as described below). These chains possess one to five
constituents (from top to bottom), as seen by the number of peaks of the
charge density, shown in the left panels. The middle panels represent the
scalar field amplitude $\phi$, and the right panels show the metric function
$f=-g_{tt}$. For the sake of clarity we have chosen to exhibit these figures
in polar coordinates $(\rho,z)$, as given by eq. (2.11).
Figure 1: Chains of BSs with one to five constituents (from top to bottom) on
the first (a.k.a. fundamental) branch for $\alpha=0.25$ at $\omega/\mu=0.80$:
$3d$ plots of the $U(1)$ scalar charge distributions (left plots), the scalar
field functions $\phi$ (middle plots) and the metric functions $f$ (right
plots) versus the coordinates $\rho=r\sin\theta$ and $z=r\cos\theta$.
The first row shows a single spherically symmetric BS for comparison. The
second row exhibits the pair of BSs. The charge density has two symmetric
peaks, the metric function has two symmetric troughs, while the scalar field
function is anti-symmetric, featuring a peak and a trough. The triplet,
quartet and quintet in the next few rows feature $k_{z}+1$ very similar (in
size and shape) peaks for the charge density and troughs for the metric
function, while the scalar field shows alternating peaks and troughs, all
located symmetrically along the $z$-axis. Thus on the fundamental branch we
basically encounter a chain consisting of $k_{z}+1$ BSs, all possessing
similar size, shape and distance from their next neighbors.
This picture partially changes as we move along the domain of solutions, for a
given coupling $\alpha$. This is seen in Fig. 2, where these chains
($k_{z}=0-4$) are now shown for illustrative solutions sitting on the second
branch of the $M$ $vs.$ $\omega$ domain of existence (see the discussion in
the next subsection), with $\alpha=0.25$ for $k_{z}=0-3$, $\alpha=0.5$ for
$k_{z}=4$, and $\omega/\mu=\{0.43,0.47,0.57,0.7,0.7\}$ for $k_{z}=0-4$,
respectively. As we look at the charge density of the configurations, we see a
dominant peak at the center for odd chains and a dominant inner pair for even
chains, while the other peaks of the triplet, quartet and quintet have turned
into slight elevations, hardly visible in the figures. In the metric functions
the troughs at the center of the odd chains dominate, and likewise the (almost
merged) inner pair of troughs of the even chains. All other troughs are
weakened substantially. The scalar field itself, however, retains the outer
peaks and troughs to a somewhat greater extent, still reflecting clearly the
number of constituents of the chains.
Figure 2: Chains of BSs with one to five constituents (from top to bottom) on
the second branch for differing values of $\alpha$ and $\omega$: $3d$ plots of
the $U(1)$ scalar charge distributions (left plots), the scalar field
functions $\phi$ (middle plots) and the metric functions $f$ (right plots)
versus the coordinates $\rho=r\sin\theta$ and $z=r\cos\theta$.
### 3.2 The $\omega$-dependence and the branch structure
Recall the frequency dependence for the single BSs and for fixed coupling
$\alpha$ [38]. The set of BSs emerges from the vacuum at the maximal
frequency, given by the boson mass $\omega_{\text{max}}=\mu$. Thus, unlike the
case of $Q$-balls in flat space, where mass and charge diverge, these
quantities vanish in this limit. Decreasing the frequency $\omega$ spans the
first or fundamental branch, which terminates at the first backbending of the
curve, at which point it moves towards larger frequencies. The curve then
follows a spiraling/oscillating pattern, with successive backbendings.
The mass and charge form a spiral as $\omega$ is varied, while the minimum of
the metric function $f$ and the maximum of the scalar function $\phi$ show
damped oscillations. The set of solutions tends to a limiting solution at the
center of the spiral which has finite values of the mass and charge. However,
the values of the scalar field function $\phi$ and the metric function $f$ at
the center of the star, which represent the maximal and minimal values of
these functions, $\phi_{\text{max}}$ and $f_{\text{min}}$, respectively, do
not seem to be finite, with $\phi_{\text{max}}$ diverging and $f_{\text{min}}$
vanishing in the limit.
Let us now consider the frequency dependence of the BS chains, when the
coupling $\alpha$ is kept fixed. Like the single BSs, all chains emerge
similarly from the vacuum at the maximal frequency, given by the boson mass
$\omega_{\text{max}}=\mu$, where their mass and charge vanish in the limit.
When $\omega$ is decreased, mass and charge rise and the chains follow along
their fundamental branch. As for the single BS, for all chains this
fundamental branch ends at a minimal value of the frequency, from where a
second branch arises. But then even and odd chains will in general exhibit
different patterns, which will also depend on the coupling strength $\alpha$.
#### 3.2.1 Even chains
Figure 3: Comparison of the $k_{z}=0$ single BSs (dashed curves), the
$k_{z}=1$ pair of BSs (solid curves), and the $k_{z}=3$ quartet of BSs (dash-
dotted curves): scaled ADM mass $M$ (upper left panel), scaled charge $Q$
(upper right panel), minimal value of the metric function $f_{\text{min}}$
(middle left panel), maximal value of the scalar field $\phi_{\text{max}}$ of
the $k_{z}=1$ pair only (middle right panel), and separation $z_{\text{d}}$
between the two components of the $k_{z}=1$ pair only (lower panel) $vs.$
frequency $\omega$ for several values of the coupling $\alpha$. Note the
quadratic scale for $f_{\text{min}}$ and $z_{\text{d}}$.
Let us consider the BS pair and the higher even chains in more detail. We
illustrate the $\omega$-dependence for even chains in Fig. 3, selecting the BS
pair ($k_{z}=1$) and the BS quartet ($k_{z}=3$), and comparing with the single
BS. In the upper panels we show the $k_{z}=1$ pair (solid curves), the
$k_{z}=3$ quartet (dash-dotted curves), as well as the $k_{z}=0$ single BS
(dashed curves). For the latter we only show the first few branches. In the
two upper panels we exhibit the scaled ADM mass $M$ (left) and the scaled
charge $Q$ (right). The different colors refer to different values of the
coupling $\alpha$. The middle left panel shows in an analogous manner the
minimal value $f_{\text{min}}$ of the metric function $f$. Restricting to the
$k_{z}=1$ pair, we then exhibit the maximal value $\phi_{\rm max}$ of the
scalar field function $\phi$ in the right middle panel, and the separation
$z_{\text{d}}$ between the components of the $k_{z}=1$ pair in the lower
panel. All quantities are shown versus the frequency $\omega$.
Whereas the single BSs constitute an infinite set of branches that form a
spiral or a damped oscillation, depending on the quantity of interest [38],
the even chains seem to end in a limiting configuration quite abruptly
somewhere in the middle of a branch. The number of branches before this
limiting configuration is encountered depends on the strength of the
gravitational coupling $\alpha$. For small $\alpha$ there are more branches,
for larger $\alpha$, the limiting configuration is already encountered on the
second branch, as illustrated in Fig. 3 for the BS pair.
The mass and the charge exhibit only the onset of a spiraling behavior with
two branches for the larger $\alpha$ values ($\alpha=0.5,1$), and three
branches for the smaller ones ($\alpha=0.15,0.2,0.25$), while the minimum
$f_{\text{min}}$ of the metric function (middle left) and the maximum
$\phi_{\text{max}}$ of the scalar function (middle right) exhibit only two or
three oscillations. The coordinate distance $z_{\text{d}}$ between the two
components of the pair, as given by twice the value of the $z$-coordinate of
the maximum $\phi_{\text{max}}$, exhibits both types of behavior (lower). For
small $\alpha$ ($\alpha=0.15$), $z_{\text{d}}$ shows three oscillations while
decreasing overall. For larger $\alpha$ ($\alpha=0.25$) it exhibits the
onset of a spiral, with again larger values on the third branch.
Let us now address the limiting behavior of the pairs in more detail, and its
dependence on the coupling $\alpha$. To that end, we illustrate in Fig. 4 the
profiles of the metric function $f$ (left panels) and the scalar field
function $\phi$ (right panels) on the $z$-axis, choosing a large (upper
panels) and a small (lower panels) value of the coupling $\alpha$. Since for
large $\alpha$ the coordinate distance $z_{\text{d}}$ does not decrease
monotonically, but increases again towards the limiting solution, we retain
two well-separated components in the limit. The minimum of the metric function
$f_{\text{min}}$ tends to zero in the limit, while the maximum (minimum) of
the scalar field function $\phi_{\text{max}}$ ($\phi_{\text{min}}$) becomes
extremely sharp.
Figure 4: Profiles on the symmetry axis of the metric function $f$ (left) and
the scalar field function $\phi$ of the $k_{z}=1$ (almost) critical solution
on the second branch at $\omega=0.85$ and $\alpha=1$ (upper plots), and on the
third branch at $\omega=0.52$ and $\alpha=0.5$ (lower plots). Also,
$z=r\cos\theta$, with $\theta=0,\pi$.
In fact, the scalar field amplitude $\phi$ acquires the shape of two
antisymmetric peak(on)s associated with the locations of the minima of the
metric function $f$, where the name peakons refers to field configurations
with extremely large absolute values of the second derivative at the maxima
[52, 53]. Further numerical investigation of such singular solutions is,
unfortunately, plagued by severe numerical problems. We remark that in the
case of odd chains the choice of spherical coordinates and the presence of the
major peak(on) at the origin alleviate the numerical problems considerably.
For smaller $\alpha$, the coordinate distance $z_{\text{d}}$ between the two
components of the pair decreases monotonically towards the limiting solution.
Fig. 4 shows that the closeness of the extrema of the scalar field function
coincides with a very steep rise of the scalar field function at the origin.
The metric function, however, retains only a small peak at the origin. Here
the numerical grid allows for better resolution of this extremal behavior, and
thus a closer approach to the limiting solution. Still, the approach towards
the limiting solution is restricted by numerical accuracy.
The $k_{z}=3$ quartet represents a bound state of four BSs, located
symmetrically along the symmetry axis. It may also be viewed as a bound state
of two bound pairs of BSs, since the configuration is not fully symmetric:
the two inner BSs are slightly larger than the two outer BSs, though there is
not much distinction between the stars as long as the configuration
resides on the fundamental branch. As seen in Fig. 3, these quartet
configurations share most of the properties with $k_{z}=1$ pairs. For the
chosen values of the coupling $\alpha$ we find two branches of solutions, the
fundamental branch connected to the perturbative excitations at
$\omega\to\omega_{\text{max}}$, and the second branch leading to a limiting
solution. While approaching this limiting solution the outer extrema of the
metric and of the scalar field function become less pronounced, leaving
basically the two inner extrema, which evolve completely analogously to the
extrema of the pair towards a limiting solution. We expect this scenario to
represent a generic pattern seen for all chains with odd $k_{z}$ (although so
far we have checked it for $k_{z}=1,3,5$ only).
#### 3.2.2 Odd chains
As noticed above, the odd chains always possess a BS centered at the origin,
with the remaining BSs located symmetrically with respect to the origin on
the symmetry axis. The presence of the central BS constituent suggests that
the $(\omega,M)$-pattern of the odd chains could resemble that found for a
single BS. To scrutinize this conjecture, let us consider the branch structure
for the $k_{z}=2$ triplet of BSs, exhibited in Fig. 5 for four values of the
coupling $\alpha$ (with different colors). Analogously to the even chains, we
show in the two upper panels the scaled ADM mass $M$ (left) and the scaled
charge $Q$ (right). The middle panels show the minimal value $f_{\text{min}}$
of the metric function (left) and the maximal value $\phi_{\rm max}$ of the
scalar field function (right), while the lower panel shows the separation
$z_{\text{d}}$ between the neighboring components of the $k_{z}=2$ triplet.
Again, all quantities are shown as a function of the frequency $\omega$.
Figure 5: $k_{z}=2$ triplet of BSs: scaled ADM mass $M$ (upper left panel),
scaled charge $Q$ (upper right panel), minimal value of the metric function
$f_{\text{min}}$ (middle left panel), maximal value of the scalar field
function $\phi_{\text{max}}$ (middle right panel), and separation
$z_{\text{d}}$ between neighboring components of the $k_{z}=2$ triplet (lower
panel) $vs.$ frequency $\omega$ for several values of the coupling $\alpha$.
Note the quadratic scale for $Q$, $f_{\text{min}}$ and $z_{\text{d}}$.
Let us first consider small values of the coupling $\alpha$. Then our
expectation is borne out: we observe the triplet forming spirals in the
mass and charge, and damped oscillations in the extremal values
$f_{\text{min}}$ and $\phi_{\rm max}$. The extremal values always reside at
the center of the configurations, $i.e.$, at the origin. Also the separation
$z_{\text{d}}$ between the neighboring components of the $k_{z}=2$ triplet
forms a spiral. Here, along this spiral, the contribution of the central BS to
the full configurations becomes dominant, while the outer BSs contributions
diminish.
When increasing the coupling constant beyond $\alpha\gtrsim 0.45$, however,
the scenario changes completely. While the configurations still follow a
similar pattern along the fundamental branch, the further part of the
$(\omega,M)$-diagram does not involve a spiraling/oscillating behavior.
Instead there is a single second branch, that leads all the way back to the
vacuum configuration. Thus all physical quantities exhibit loops, beginning
and ending at $\omega_{\text{max}}$. This may be interpreted as follows. Along
the second branch the configurations again feature a dominant central BS, but
the ’satellite’ BSs dissolve into a (sort of) boson shell. Moreover, the
system tends more and more towards spherical symmetry. Now we recall that a
central BS surrounded by a boson shell is precisely what constitutes a
radially excited spherically symmetric BS with a single radial node,
$n_{r}=1$. This suggests that the $k_{z}=2$ triplet might merge with a
$n_{r}=1$ single BS.
Let us therefore compare the $M-\omega$ diagram of the $k_{z}=2$ triplets at
large coupling $\alpha$ with that of the radially excited $n_{r}=1$ single
BSs. Such a comparison is shown in Fig. 6, where the $k_{z}=2$ BS triplets are
marked by solid curves and the radially excited $n_{r}=1$ single BSs by dashed
curves for two values of $\alpha$. The upper panels show the scaled ADM mass
$M$ (left) and the scaled charge $Q$ (right), while the lower panels show the
minimal value $f_{\text{min}}$ of the metric function $f$ (left) and the
maximal value $\phi_{\rm max}$ of the scalar field function $\phi$ (right).
Figure 6: Comparison of the radially excited $n_{r}=1$ single BSs (dashed
curves), and $k_{z}=2$ triplet of BSs (solid curves): scaled ADM mass $M$
(upper left panel), scaled charge $Q$ (upper right panel), minimal value of
the metric function $f_{\text{min}}$ (lower left panel), maximal value of the
scalar field function $\phi_{\text{max}}$ (lower right panel) $vs.$ frequency
$\omega$ for two values of the coupling $\alpha$. Note the quadratic scale for
$f_{\text{min}}$.
These figures are indeed very telling. The radially excited $n_{r}=1$ single
BSs exhibit the typical curve pattern of spherically symmetric BSs. They
emerge from the vacuum, form the fundamental branch and end in a
spiraling/oscillating pattern. The $k_{z}=2$ triplet likewise emerges from the
vacuum, forms a fundamental branch, and a second branch, but this second
branch of the triplet nicely overlaps with the fundamental branch of the
$n_{r}=1$ single BSs at some critical value of the frequency
$\omega_{\text{cr}}$. The overlap happens when mass and charge of the
$n_{r}=1$ single BSs have already passed their maximal values and the radially
excited stars are descending into the spiral.
It is now clear what the domain of existence of odd chains with more
constituents should look like for sufficiently large values of the coupling
$\alpha$. Let us consider a chain with $k_{z}$ nodes and thus $k_{z}+1$
constituents. Then this chain will feature on its fundamental branch $k_{z}+1$
more or less equal BSs, located on the symmetry axis. Subsequently the central
BS will start to dominate while the satellite BSs will dissolve into boson
shells. As the system tends towards spherical symmetry, it will overlap with an
$n_{r}=k_{z}/2$ single BS at some critical value of the frequency. We have
checked this behavior for the $k_{z}=4$ BS quintet, which indeed overlaps with
the radially excited $n_{r}=2$ single BS.
## 4 Conclusions
The main purpose of this paper was to report the existence of a new type of
solitonic configurations for a model with a gravitating self-interacting
complex scalar field. These configurations represent chains of BSs, with
$k_{z}+1$ constituents located symmetrically along the symmetry axis. The
number $k_{z}\geqslant 0$ represents the number of nodes on the symmetry axis
of the scalar field amplitude $\phi$.
The chains emerge from the vacuum $\phi=0$ at a maximal value of the boson
field frequency $\omega$, which is given by the field’s mass. For any $k_{z}$,
a fundamental branch of solutions is found emerging from this vacuum, in a
($\omega,M$)-diagram (with $M$ the ADM mass), ending at a minimal value of the
frequency $\omega_{\rm min}$, whose value is determined by the self-
interaction potential and the gravitational coupling strength $\alpha$. The
subsequent solution curve depends on the number of constituents and the
coupling $\alpha$. A single spherical BS has been argued to form an infinite
number of branches, leading to spirals or damped oscillations (depending on
the quantities considered) as its limiting configuration is approached. For
even chains we do not see such an endless spiraling/oscillating pattern.
Instead we observe only two to three branches, depending on the coupling
$\alpha$.
As the limiting configuration is approached, the even chains retain basically
two of their $k_{z}+1$ constituents, whose metric function $f=-g_{tt}$ exhibits
two sharp troughs, reaching very small values, while the scalar field features
two sharp extrema of opposite sign located right at these troughs. The
resulting configurations then feature huge second derivatives of the
functions, which impede further numerical analysis towards the limiting
solution.
The odd chains show a pattern similar to that of the single BS when the coupling
$\alpha$ is small. This may be interpreted as the central BS dominating the
configurations on the higher branches. For larger $\alpha$, however, the
pattern changes totally, and the chains overlap on their second branch with a
radially excited spherical single BS with $n_{r}=k_{z}/2$ (radial) nodes. In
this case the central dominant BS will be surrounded by $n_{r}$ ‘boson
shells’.888 Boson shells without a central BS are also known [54, 55].
These solutions can be generalized in various directions. The most obvious
generalization is to include rotation. For the scalar field this means to
include an explicit harmonic dependence on the azimuthal angle $\varphi$. The
rotating single BSs were obtained long ago [56, 57, 58, 40]. The rotating BSs
with odd parity and $k_{z}=1$, representing the rotating generalizations of
the pair of BSs, were also discussed in the literature [41]. We predict the
existence of rotating generalizations for the triplet and the higher chains
discussed in this work.
Single non-rotating BSs cannot be endowed with a black hole at the center; the
no-hair theorems forbid their existence [59]. This result is, however,
circumvented in the presence of spin, hairy generalizations of the Kerr black
hole (BH) with a complex scalar field being reported in the literature [44, 60,
61, 62, 63, 64, 45, 65]. These hairy BHs obey a synchronization condition
relating the angular velocity of the event horizon and the boson field
frequency. Most studies so far considered only an even parity scalar field,
see $e.g.$ [66, 67, 68, 69, 70, 71, 72, 73, 74, 47, 75, 46, 76]. Synchronized
hairy BHs with an odd parity scalar field were obtained in [47, 46, 76]. While
these solutions represent only the simplest type of generalizations,
containing a single black hole at the center of configurations with one
(parity even) or two (parity odd) constituents, one can easily imagine
placing a black hole either at the center of rotating configurations with
more than two components, or at the center of each of the components along
the symmetry axis. Such configurations should correspond to hairy double Kerr
solutions or hairy multi-Kerr solutions. It would be interesting to see
whether the presence of the scalar hair can regularize such solutions, so
that no conical singularity would be needed to hold them in equilibrium.
Along similar lines, but replacing rotation by an electric charge, one could
investigate chains of electromagnetically charged BSs, generalizing the
results for a single charged BS in the Einstein-Klein-Gordon-Maxwell model [78,
54, 79]. Some results in this direction were reported in the recent work [82],
where (flat space) $Q$-chains were constructed in a model with a $U(1)$ gauged
scalar field, for a particular choice of the constants in the potential
(2.21). Gravitating generalizations of these solutions should also exist,
sharing some of the properties of the (ungauged) BS chains in this work. In
this context, we mention the recent results in [79, 80], showing that the no-
hair theorem in [81] does not apply to a single charged static BS in a model
with a $Q$-ball type potential (2.21), and that BH generalizations
exist. For chains of charged BSs, it would again be of particular
interest to see whether they would support regular static multi-BH solutions.
## Acknowledgements
We gratefully acknowledge support by the DFG funded Research Training Group
1620 “Models of Gravity” and by the DAAD. We would also like to acknowledge
networking support by the COST Actions CA16104 and CA15117. Ya.S. gratefully
acknowledges the support by the Ministry of Science and Higher Education of
the Russian Federation, project FEWF-2020-0003, and by the BLTP JINR Heisenberg-
Landau program 2020. This work is also supported by the Center for Research
and Development in Mathematics and Applications (CIDMA) through the Portuguese
Foundation for Science and Technology (FCT - Fundacao para a Ciência e a
Tecnologia), references UIDB/04106/2020 and UIDP/04106/2020 and by national
funds (OE), through FCT, I.P., in the scope of the framework contract foreseen
in the numbers 4, 5 and 6 of the article 23, of the Decree-Law 57/2016, of
August 29, changed by Law 57/2017, of July 19. We acknowledge support from the
projects PTDC/FIS-OUT/28407/2017, CERN/FIS-PAR/0027/2019 and PTDC/FIS-
AST/3041/2020. This work has further been supported by the European Union’s
Horizon 2020 research and innovation (RISE) programme H2020-MSCA-RISE-2017
Grant No. FunFiCO-777740.
## References
* [1] G. Rosen, J. Math. Phys. 9 (1968) 996, 999
* [2] R. Friedberg, T. D. Lee and A. Sirlin, Phys. Rev. D 13 (1976) 2739
* [3] S. R. Coleman, Nucl. Phys. B 262 (1985) 263 Erratum: [Nucl. Phys. B 269 (1986) 744].
* [4] D. J. Kaup, Phys. Rev. 172 (1968) 1331.
* [5] R. Ruffini and S. Bonazzola, Phys. Rev. 187 (1969) 1767.
* [6] L. G. Collodel, B. Kleihaus and J. Kunz, Phys. Rev. D 96 (2017) no.8, 084066
* [7] C. A. R. Herdeiro, J. Kunz, I. Perapechka, E. Radu and Y. Shnir, Phys. Lett. B 812 (2021), 136027 [arXiv:2008.10608 [gr-qc]].
* [8] B. Kleihaus and J. Kunz, Phys. Rev. D 61 (2000) 025003
* [9] B. Kleihaus and J. Kunz, Phys. Rev. Lett. 85 (2000), 2430-2433
* [10] B. Kleihaus, J. Kunz and Y. Shnir, Phys. Lett. B 570 (2003) 237
* [11] B. Kleihaus, J. Kunz and Y. Shnir, Phys. Rev. D 68 (2003) 101701
* [12] B. Kleihaus, J. Kunz and Y. Shnir, Phys. Rev. D 70 (2004) 065010
* [13] B. Kleihaus, J. Kunz and Y. Shnir, Phys. Rev. D 71 (2005) 024013
* [14] R. Teh and K. Wong, J. Math. Phys. 46 (2005), 082301
* [15] V. Paturyan, E. Radu and D. Tchrakian, Phys. Lett. B 609 (2005), 360-366
* [16] B. Kleihaus, J. Kunz and U. Neemann, Phys. Lett. B 623 (2005) 171
* [17] J. Kunz, U. Neemann and Y. Shnir, Phys. Lett. B 640 (2006) 57
* [18] J. Kunz, U. Neemann and Y. Shnir, Phys. Rev. D 75 (2007) 125008
* [19] K. Lim, R. Teh and K. Wong, J. Phys. G 39 (2012), 025002
* [20] R. Teh, A. Soltanian and K. Wong, Phys. Rev. D 89 (2014) no.4, 045018
* [21] S. Krusch and P. Sutcliffe, J. Phys. A 37 (2004) 9037
* [22] Y. Shnir and D. H. Tchrakian, J. Phys. A 43 (2010) 025401
* [23] Y. Shnir, Phys. Rev. D 92 (2015) no.8, 085039
* [24] B. Kleihaus, J. Kunz and M. Leissner, Phys. Lett. B 663 (2008), 438-444
* [25] R. Ibadov, B. Kleihaus, J. Kunz and M. Leissner, Phys. Lett. B 663 (2008), 136-140
* [26] R. Ibadov, B. Kleihaus, J. Kunz and M. Leissner, Phys. Lett. B 686 (2010), 298-306
* [27] R. Ibadov, B. Kleihaus, J. Kunz and M. Leissner, Phys. Rev. D 82 (2010), 125037
* [28] R. Teh, B. Ng and K. Wong, Annals Phys. 362 (2015), 170-195
* [29] E. Radu and D. H. Tchrakian, Phys. Lett. B 636 (2006) 201
* [30] Y. M. Shnir, EPL 77 (2007) no.2, 21001.
* [31] O. Kichakova, J. Kunz, E. Radu and Y. Shnir, Phys. Rev. D 86 (2012) 104065.
* [32] O. Kichakova, J. Kunz, E. Radu and Y. Shnir, Phys. Rev. D 90 (2014) no.12, 124012.
* [33] R. M. Wald, General Relativity, (University of Chicago Press, Chicago, 1984).
* [34] D. Kramer, H. Stephani, E. Herlt, and M. MacCallum, Exact Solutions of Einstein’s Field Equations, Cambridge University Press, Cambridge, (1980).
* [35] T. D. Lee and Y. Pang, Phys. Rept. 221 (1992) 251.
* [36] W. Deppert and E. W. Mielke, Phys. Rev. D 20 (1979) 1303.
* [37] E. W. Mielke and R. Scherzer, Phys. Rev. D 24, 2111 (1981).
* [38] R. Friedberg, T. Lee and Y. Pang, Phys. Rev. D 35 (1987), 3658
* [39] M.S. Volkov and E. Wohnert, Phys. Rev. D 66 (2002) 085003.
* [40] B. Kleihaus, J. Kunz and M. List, Phys. Rev. D 72 (2005) 064002 .
* [41] B. Kleihaus, J. Kunz, M. List and I. Schaffer, Phys. Rev. D 77 (2008) 064025
* [42] R. Battye and P. Sutcliffe, Nucl. Phys. B 590, (2000) 329
* [43] P. Bowcock, D. Foster and P. Sutcliffe, J. Phys. A 42 (2009), 085403
* [44] S. Hod, Phys. Rev. D 86 (2012) 104026 Erratum: [Phys. Rev. D 86 (2012) 129902]
* [45] C. Herdeiro, E. Radu and H. Runarsson, Phys. Lett. B 739 (2014) 302
* [46] J. Kunz, I. Perapechka and Y. Shnir, Phys. Rev. D 100 (2019) no.6, 064032
* [47] Y. Q. Wang, Y. X. Liu and S. W. Wei, Phys. Rev. D 99 (2019) no.6, 064036
* [48] Y. Brihaye and B. Hartmann, Phys. Rev. D 79 (2009) 064013
* [49] A. Bernal, J. Barranco, D. Alic and C. Palenzuela, Phys. Rev. D 81 (2010) 044031
* [50] N.I.M. Gould, J.A. Scott and Y. Hu, ACM Transactions on Mathematical Software 33 (2007) 10;
O. Schenk and K. Gärtner Future Generation Computer Systems 20 (3) (2004) 475.
* [51] W. Schönauer and R. Weiß, J. Comput. Appl. Math. 27 (1989) 279;
M. Schauder, R. Weiß and W. Schönauer, Universität Karlsruhe, Interner Bericht
Nr. 46/92 (1992).
* [52] R. Camassa and D. D. Holm, Phys. Rev. Lett. 71 (1993), 1661
* [53] J. Lis, Phys. Rev. D 84 (2011), 085030
* [54] B. Kleihaus, J. Kunz, C. Lammerzahl and M. List, Phys. Lett. B 675 (2009), 102-115
* [55] B. Kleihaus, J. Kunz, C. Lammerzahl and M. List, Phys. Rev. D 82 (2010), 104050
* [56] F. E. Schunck and E. W. Mielke, Phys. Lett. A 249, 389 (1998).
* [57] S. Yoshida and Y. Eriguchi, Phys. Rev. D 56, 762 (1997).
* [58] F. D. Ryan, Phys. Rev. D 55, 6081 (1997).
* [59] C. A. R. Herdeiro and E. Radu, Int. J. Mod. Phys. D 24 (2015) no.09, 1542014
* [60] C. A. R. Herdeiro and E. Radu, Phys. Rev. Lett. 112 (2014) 221101
* [61] C. Herdeiro and E. Radu, Phys. Rev. D 89 (2014) no.12, 124018
* [62] C. Herdeiro and E. Radu, Class. Quant. Grav. 32 (2015) no.14, 144001
* [63] S. Hod, Phys. Rev. D 90 (2014) no.2, 024051
* [64] C. L. Benone, L. C. B. Crispino, C. Herdeiro and E. Radu, Phys. Rev. D 90, no. 10, 104024 (2014)
* [65] C. A. R. Herdeiro and E. Radu, Int. J. Mod. Phys. D 23 (2014) no.12, 1442014
* [66] B. Kleihaus, J. Kunz and S. Yazadjiev, Phys. Lett. B 744 (2015) 406
* [67] C. A. R. Herdeiro, E. Radu and H. Runarsson, Phys. Rev. D 92 (2015) no.8, 084059
* [68] C. Herdeiro, J. Kunz, E. Radu and B. Subagyo, Phys. Lett. B 748, 30 (2015)
* [69] C. Herdeiro, E. Radu and H. Runarsson, Class. Quant. Grav. 33 (2016) no.15, 154001
* [70] Y. Brihaye, C. Herdeiro and E. Radu, Phys. Lett. B 760 (2016) 279
* [71] S. Hod, Phys. Lett. B 751 (2015) 177
* [72] C. Herdeiro, J. Kunz, E. Radu and B. Subagyo, Phys. Lett. B 779, 151 (2018)
* [73] C. Herdeiro, I. Perapechka, E. Radu and Y. Shnir, JHEP 1810 (2018) 119
* [74] C. Herdeiro, I. Perapechka, E. Radu and Y. Shnir, JHEP 02 (2019), 111
* [75] J. F. M. Delgado, C. A. R. Herdeiro and E. Radu, Phys. Lett. B 792 (2019), 436-444
* [76] J. Kunz, I. Perapechka and Y. Shnir, JHEP 07 (2019), 109
* [77] B. Kleihaus and J. Kunz, Phys. Lett. B 494 (2000), 130-134
* [78] P. Jetzer and J. van der Bij, Phys. Lett. B 227 (1989), 341-346
* [79] C. A. R. Herdeiro and E. Radu, Eur. Phys. J. C 80 (2020) no.5, 390 [arXiv:2004.00336 [gr-qc]].
* [80] J. P. Hong, M. Suzuki and M. Yamada, Phys. Rev. Lett. 125 (2020) no.11, 111104 [arXiv:2004.03148 [gr-qc]].
* [81] A. E. Mayo and J. D. Bekenstein, Phys. Rev. D 54 (1996), 5059-5069 [arXiv:gr-qc/9602057 [gr-qc]].
* [82] V. Loiko, I. Perapechka and Y. Shnir, [arXiv:2012.01052 [hep-th]].
# Predicting Hyperkalemia in the ICU and
Evaluation of Generalizability and Interpretability
Gloria Hyunjung Kwak (The Hong Kong University of Science and Technology, Hong Kong, China); Christina Chen (Massachusetts Institute of Technology, Cambridge, MA, USA; Beth Israel Deaconess Medical Center, Boston, MA, USA; Harvard Medical School, Boston, MA, USA); Lowell Ling (The Chinese University of Hong Kong, Hong Kong, China); Erina Ghosh (Philips Research North America, Cambridge, MA, USA); Leo Anthony Celi (Massachusetts Institute of Technology, Cambridge, MA, USA; Beth Israel Deaconess Medical Center, Boston, MA, USA; Harvard T.H. Chan School of Public Health, Boston, MA, USA); Pan Hui (The Hong Kong University of Science and Technology, Hong Kong, China; University of Helsinki, Helsinki, Finland)
###### Abstract
> Hyperkalemia is a potentially life-threatening condition that can lead to
> fatal arrhythmias. Early identification of high risk patients can inform
> clinical care to mitigate the risk. While hyperkalemia is often a
> complication of acute kidney injury (AKI), it also occurs in the absence of
> AKI. We developed predictive models to identify intensive care unit (ICU)
> patients at risk of developing hyperkalemia by using the Medical Information
> Mart for Intensive Care (MIMIC) and the eICU Collaborative Research Database
> (eICU-CRD). Our methodology focused on building multiple models, optimizing
> for interpretability through model selection, and simulating various
> clinical scenarios.
>
> In order to determine if our models perform accurately on patients with and
> without AKI, we evaluated the following clinical cases: (i) predicting
> hyperkalemia after AKI within 14 days of ICU admission, (ii) predicting
> hyperkalemia within 14 days of ICU admission regardless of AKI status; we
> also compared different lead times for (i) and (ii). Both clinical scenarios were
> modeled using logistic regression (LR), random forest (RF), and XGBoost.
>
> Using observations from the first day in the ICU, our models were able to
> predict hyperkalemia with an AUC of (i) 0.79, 0.81, 0.81 and (ii) 0.81,
> 0.85, 0.85 for LR, RF, and XGBoost respectively. We found that 4 out of the
> top 5 features were consistent across the models. AKI stage was significant
> in the models that included all patients with or without AKI, but not in the
> models which only included patients with AKI. This suggests that while AKI
> is important for hyperkalemia, the specific stage of AKI may not be as
> important. Our findings require further investigation and confirmation.
## Introduction
Hyperkalemia, or high serum potassium levels, is a rare but potentially life-
threatening condition that may lead to fatal cardiac arrhythmias. Identifying
patients at high risk for hyperkalemia may allow providers to adjust clinical
management, such as avoiding potassium repletion and potassium-containing or
potassium-sparing medications. Previous studies define hyperkalemia in
hospitalized patients with thresholds ranging from K$>$5.3 to K$>$6.0 mEq/L, but for this paper,
we chose the more restrictive potassium cutoff of 6.0 mEq/L (?; ?). Previous
literature investigating hyperkalemia in hospitalized patients mostly focused
on evaluating the association of clinical features with the development of
hyperkalemia. The only predictive models that we are aware of are based on
medication administration and electrocardiograms (?; ?; ?; ?).
Our goal is to present a methodology to build predictive models to identify
patients at high risk of developing hyperkalemia using observations from the
first day of ICU admission. Models were selected to maximize clinician
interpretability: LR, RF, and XGBoost. Features were selected based on
literature and verified by clinical expertise. While most patients with
hyperkalemia in the ICU also have AKI, it is important to capture those with
normal kidney function because their hyperkalemia is easier to miss.
## Methods and Materials
### Databases
Experiments were conducted on two publicly available databases of critical
care patients: the Medical Information Mart for Intensive Care III, IV (MIMIC-
III, MIMIC-IV) and the eICU Collaborative Research Database (eICU-CRD) (?; ?;
?). We incorporated 61,532 ICU admissions from MIMIC-III (2001 to
2012), 25,769 from MIMIC-IV (2014 to 2019), and 200,859 from eICU-CRD
(2014 and 2015).
### Definitions
#### Hyperkalemia
We used the American Heart Association’s definition of moderate hyperkalemia,
K$\geq$6 mEq/L, which is more restrictive than many studies but associated
with much higher mortality (?). We filtered out erroneous lab values, such as
hemolyzed specimens, by requiring one of the following: (i) a single potassium
result within 6 hours, with that result $\geq$6, (ii) two potassium results
within 6 hours, both $\geq$6, or (iii) one potassium level $\geq$6 accompanied
by calcium gluconate administration.
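A minimal sketch of this filtering logic (in Python with pandas; column names and the 6-hour window applied to criterion (iii) are illustrative assumptions):

```python
import pandas as pd

def confirmed_hyperkalemia(k_df, ca_gluconate_times, window="6h"):
    """Return times of K >= 6 results that satisfy constraints (i)-(iii).

    k_df: frame with columns charttime (datetime) and potassium (mEq/L);
    ca_gluconate_times: Series of calcium gluconate administration times.
    """
    k_df = k_df.sort_values("charttime").reset_index(drop=True)
    events = []
    for i, row in k_df.iterrows():
        if row["potassium"] < 6:
            continue
        near = k_df[(k_df["charttime"] - row["charttime"]).abs()
                    <= pd.Timedelta(window)]
        others = near.drop(i)
        if others.empty:                            # (i) lone result >= 6
            events.append(row["charttime"])
        elif (others["potassium"] >= 6).all():      # (ii) corroborated result
            events.append(row["charttime"])
        elif ((ca_gluconate_times - row["charttime"]).abs()
              <= pd.Timedelta(window)).any():       # (iii) treated clinically
            events.append(row["charttime"])
    return events
```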
#### AKI
AKI stage was calculated each time a creatinine or urine output was measured,
according to KDIGO guidelines (?). Baseline creatinine is defined as the
lowest creatinine within the past 7 days. We used this definition because eICU
does not include pre-admission labs and we wanted to be consistent in our
definition.
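A simplified sketch of this staging step (creatinine-ratio criteria only; the 0.3 mg/dL absolute-rise and urine-output criteria of KDIGO are omitted here, and column names are illustrative):

```python
import pandas as pd

def stage_from_ratio(ratio):
    """Map creatinine / baseline ratio to a KDIGO-style stage (ratio part only)."""
    if ratio >= 3.0:
        return 3
    if ratio >= 2.0:
        return 2
    if ratio >= 1.5:
        return 1
    return 0

def kdigo_stage(labs):
    """labs: frame with columns patient_id, charttime (datetime), creatinine."""
    labs = labs.sort_values(["patient_id", "charttime"])
    staged = []
    for _, g in labs.groupby("patient_id"):
        g = g.set_index("charttime")
        # baseline = lowest creatinine within the past 7 days
        baseline = g["creatinine"].rolling("7D", min_periods=1).min()
        staged.append((g["creatinine"] / baseline).map(stage_from_ratio))
    return pd.concat(staged)
```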
#### Scenarios
We have two clinical scenarios as follows: (i) Case 1: AKI within 7 days of
admission to the ICU, followed by hyperkalemia within the next 7 days, (ii)
Case 2: Hyperkalemia within 14 days of admission to the ICU, with or without
AKI. (See Appendix Figure 3)
For both clinical cases, the training set was composed of patients who
developed hyperkalemia between days 1 and 14 of admission. For the test set,
we selected patients who developed hyperkalemia between days 1 and 14. To
investigate increasing lead times, we also used subgroups of patients who
developed hyperkalemia between day $n$ and day 14 as test sets
($n=2,\dots,4$).
### Cohort Selection
We included the first ICU admission for all patients between the ages of 18
and 90. The exclusion criteria are as follows: (i) patients who had chronic
kidney disease stage V or end-stage renal disease based on ICD-9 codes, (ii)
patients who had end-of-life discussions within 24 hours of ICU admission,
(iii) patients who had peritoneal dialysis at any time, (iv) patients
who had hemodialysis prior to admission to the ICU, (v) patients who had a
potassium level $\geq$6 at ICU admission.
After applying the exclusion criteria, the study included 43,798 patients for
Case 1 and 83,565 for Case 2. We collected
demographic data (gender, age), laboratory variables (creatine kinase,
glucose, lactate, pH, WBC, chloride, bilirubin, platelets, alanine
transaminase, phosphate, hemoglobin, serum potassium), AKI stage, fluid
balance, IV fluid use (saline, Hartmann's, Plasma-Lyte, dextrose 5%, dextrose
10%) and medication use (ACEi/ARB, diuretics, NSAIDs, beta blockers, steroids,
potassium chloride, nitroglycerin, vasopressors, etc.; see Appendix Table 2).
Features were selected based on literature review and clinician expert
opinion. We selected laboratory values drawn within 24h of admission that were
closest to admission time. Fluid balance was calculated for the first 24h.
Drug use was recorded as positive if the drug was administered within a day of admission.
Missing values were estimated based on data from 12h before and 48h after
admission (closest to admission time) and then interpolated with k-nearest
neighbor (n$=$3).
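A minimal sketch of this imputation step (the matrix below is a hypothetical stand-in for the extracted lab values; the real feature set is listed above):

```python
import numpy as np
from sklearn.impute import KNNImputer

# Hypothetical lab-value matrix: rows are admissions, columns are features;
# np.nan marks values not observed in the -12h to +48h window.
X = np.array([[4.1, 3.2, np.nan],
              [5.0, np.nan, 1.1],
              [3.8, 2.9, 0.9],
              [np.nan, 3.5, 1.0]])

# k-nearest-neighbor imputation with k = 3, as described above.
imputer = KNNImputer(n_neighbors=3)
X_imputed = imputer.fit_transform(X)
```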
### Model
For each of the two clinical cases, we built three models. We started with a
baseline LR, which is commonly used in medical literature and well understood
by clinicians. We also used RF and XGBoost, which are good options for our
sparse data based on a single time point with the added benefit of being
easier to interpret.
After data normalization, the entire cohort comprised 43,798 patients in
Case 1 (hyperkalemia: 1,048) and 83,565 in Case 2 (hyperkalemia: 1,821).
Random shuffling
and splitting were repeated for the training (60%) and test (40%) sets to
evaluate stability using k-fold cross validation (k$=$4). Models were trained
with balancing the class frequency, and parameters (number of estimators and
maximum depth of trees) were chosen based on convergence of error rates. The
area under the receiver operating characteristic (ROC) curve (AUC) was used
to assess the performance of LR, RF, and XGBoost over different lead times
for the test sets across the scenarios. The importance of features in this project
is interpreted with local model-agnostic SHAP (SHapley Additive exPlanation)
values (?). SHAP values attribute to each feature the change in the expected
model prediction when conditioning on that feature. The baseline
characteristics of each cohort are shown in Appendix Table 3.
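A minimal, non-authoritative sketch of this pipeline follows; a synthetic stand-in replaces the real feature matrix, and all hyperparameter values are illustrative assumptions rather than the study's settings:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import StratifiedShuffleSplit
from sklearn.preprocessing import StandardScaler
from xgboost import XGBClassifier
import shap

# Synthetic stand-in for the day-1 feature matrix and hyperkalemia labels
# (the event is rare, so the classes are heavily imbalanced).
X, y = make_classification(n_samples=2000, n_features=20, weights=[0.97],
                           random_state=0)
X = StandardScaler().fit_transform(X)        # data normalization

models = {
    "LR": LogisticRegression(class_weight="balanced", max_iter=1000),
    "RF": RandomForestClassifier(n_estimators=300, max_depth=8,
                                 class_weight="balanced", random_state=0),
    "XGB": XGBClassifier(n_estimators=300, max_depth=4,
                         scale_pos_weight=(y == 0).sum() / (y == 1).sum(),
                         eval_metric="logloss", random_state=0),
}

# Repeated random shuffling into 60% training / 40% test (k = 4 repeats),
# scored by the area under the ROC curve.
splitter = StratifiedShuffleSplit(n_splits=4, test_size=0.4, random_state=0)
for name, model in models.items():
    aucs = []
    for train_idx, test_idx in splitter.split(X, y):
        model.fit(X[train_idx], y[train_idx])
        proba = model.predict_proba(X[test_idx])[:, 1]
        aucs.append(roc_auc_score(y[test_idx], proba))
    print(name, round(float(np.mean(aucs)), 3))

# Local, model-agnostic feature attributions with SHAP on a tree model.
explainer = shap.TreeExplainer(models["RF"])
shap_values = explainer.shap_values(X)
```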
Table 1: Model performance (AUC) comparison for machine learning classifiers. LR, Logistic Regression; RF, Random Forest; XGB, XGBoost; Testdate, test set-up date; AUC, area under the curve.

Testdate | Model | AKI Cohort (Case 1) | General Cohort (Case 2)
---|---|---|---
1st$\sim$14th | LR | 0.79 (0.77$-$0.81) | 0.81 (0.80$-$0.82)
 | RF | 0.81 (0.80$-$0.82) | 0.85 (0.84$-$0.85)
 | XGB | 0.81 (0.79$-$0.82) | 0.85 (0.85$-$0.86)
2nd$\sim$14th | LR | 0.75 (0.74$-$0.76) | 0.72 (0.71$-$0.74)
 | RF | 0.78 (0.77$-$0.79) | 0.80 (0.79$-$0.81)
 | XGB | 0.78 (0.76$-$0.79) | 0.80 (0.78$-$0.81)
3rd$\sim$14th | LR | 0.70 (0.69$-$0.72) | 0.71 (0.70$-$0.72)
 | RF | 0.73 (0.72$-$0.74) | 0.80 (0.79$-$0.81)
 | XGB | 0.73 (0.72$-$0.74) | 0.80 (0.78$-$0.81)
4th$\sim$14th | LR | 0.70 (0.69$-$0.71) | 0.72 (0.71$-$0.73)
 | RF | 0.74 (0.71$-$0.76) | 0.80 (0.78$-$0.82)
 | XGB | 0.73 (0.70$-$0.75) | 0.80 (0.78$-$0.82)
Figure 1: Model performance (AUC) comparison for machine learning
classifiers. LR, Logistic Regression; RF, Random Forest; AUC, area under the
curve.
## Results
LR had an AUC of 0.79 (95% confidence interval: 0.77$-$0.81) for Case 1 (AKI
cohort) and 0.81 (0.80$-$0.82) for Case 2 (general cohort) with a test set-up
date spanning the 1st to 14th day from ICU admission. RF and XGBoost
performed better, with AUCs of 0.81 (0.80$-$0.82, 0.79$-$0.82) and 0.85
(0.84$-$0.85, 0.85$-$0.86) for Case 1 and Case 2 over the same test date
range (see Table 1, Figure 1). The performance of RF and XGBoost in Case 2 is
consistently higher than in Case 1. In both cases, performance decreased when
hyperkalemia occurred later in the hospitalization.
We ran our models with and without the AKI stage as a feature, and found that
all of our models were in close agreement. Analysis of feature importance is
shown in Figure 2. Top features in RF and XGBoost models in both clinical
cases include high phosphate, high admission potassium, high fluid balance,
and vasopressor use. In addition, AKI stage was also an important feature in
Case 2.
Figure 2: Top 10 SHAP values from RF and XGBoost for Cases 1 and 2. (a) RF
(Case 1) (b) XGBoost (Case 1) (c) RF (Case 2) (d) XGBoost (Case 2)
## Discussion
Using MIMIC and eICU-CRD, we built models that may be used to predict risk of
hyperkalemia in critically ill patients with and without AKI. The models
require parameters from the first day in the ICU to predict development of
hyperkalemia within the first two weeks from admission.
In both clinical scenarios, the performance of our RF and XGBoost models
decreased with increasing intervals from admission time. This is likely due to
the relatively longer duration of forward prediction compared to using only
admission parameters for prediction, suggesting that data from subsequent,
time-varying clinical states after admission day may be required to improve
model prediction.
Top features in RF and XGBoost models in both clinical cases include high
phosphate, high admission potassium, high fluid balance, and vasopressor use.
Vasopressor use and positive fluid balance are suggestive of hemodynamic
instability, and overaggressive volume resuscitation is associated with
increased morbidity and mortality. While these patients likely had
hemodynamic instability and aggressive volume resuscitation due to higher
severity of illness, it is interesting that these factors remain important
even when AKI staging is featured in the model. More specifically, a positive balance is
associated with hyperkalemia. This suggests treatment of shock with
vasopressor and fluid therapy may alter risk of hyperkalemia beyond that of
AKI as part of multiorgan dysfunction. For example, patients who require
vasopressors for cardiogenic shock due to heart failure will usually be given
diuretics to achieve negative fluid balance and eliminate potassium through
the urine, whereas hemodynamically unstable patients (e.g., septic patients)
often require fluid loading to improve cardiac output. In contrast,
high serum phosphate and serum potassium levels on admission suggest renal
dysfunction prior to admission since the kidney is largely responsible for
regulating both. High phosphate is often seen in chronic kidney disease, which
can impair the kidney’s ability to excrete potassium even in the absence of
AKI.
The AKI stage is not important in Case 1, where all patients have AKI, but it
is important in Case 2. This could mean that AKI, regardless of stage,
increases risk of hyperkalemia, but the difference in AKI stage does not have
a large impact on risk of hyperkalemia. There are many potential reasons for
this, including small cohort size and the KDIGO definition of AKI severity
(?). This requires further investigation and confirmation using larger
datasets.
Medications have been shown to have strong associations with hyperkalemia, but
this was not the case for our study (?; ?; ?). This could potentially be due
to the severity of illness in the ICU population and the existence of other
powerful causes of hyperkalemia.
As emphasized in a recent commentary (?), a deeper understanding of the
patterns discovered in clinical datasets to infer causation is necessary prior
to adoption rather than simply evaluating algorithms on multiple datasets. In
addition, an algorithm requires validation and re-calibration using local data
before implementation; generalizability should never be inferred. We used
multicenter datasets to create a large patient cohort of more than 83,000
patients and to minimize overfitting by using broad inclusion criteria. The
models might help elucidate causes of hyperkalemia in the ICU, especially
those that are actionable.
## Conclusion
We developed models to predict hyperkalemia in critically ill patients, with a
focus on applicability to various clinical scenarios and interpretability. We
used multi-center databases, compared multiple models optimized for
interpretability, and performed sensitivity analyses using multiple use cases.
## References
* [Acker et al. 1998] Acker, C. G.; Johnson, J. P.; Palevsky, P. M.; and Greenberg, A. 1998\. Hyperkalemia in hospitalized patients: causes, adequacy of treatment, and results of an attempt to improve physician compliance with published therapy guidelines. Archives of internal medicine 158(8):917–924.
* [Eschmann et al. 2017] Eschmann, E.; Beeler, P. E.; Schneemann, M.; and Blaser, J. 2017\. Developing strategies for predicting hyperkalemia in potassium-increasing drug-drug interactions. Journal of the American Medical Informatics Association 24(1):60–66.
* [Futoma et al. 2020] Futoma, J.; Simons, M.; Panch, T.; Doshi-Velez, F.; and Celi, L. A. 2020\. The myth of generalisability in clinical research and machine learning in health care. The Lancet Digital Health 2(9):e489–e492.
* [Henz et al. 2008] Henz, S.; Maeder, M. T.; Huber, S.; Schmid, M.; Loher, M.; and Fehr, T. 2008\. Influence of drugs and comorbidity on serum potassium in 15 000 consecutive hospital admissions. Nephrology Dialysis Transplantation 23(12):3939–3945.
* [Johnson et al. 2016] Johnson, A. E.; Pollard, T. J.; Shen, L.; Li-Wei, H. L.; Feng, M.; Ghassemi, M.; Moody, B.; Szolovits, P.; Celi, L. A.; and Mark, R. G. 2016\. MIMIC-III, a freely accessible critical care database. Scientific Data 3(1):1–9.
* [Johnson et al. 2020] Johnson, A. E.; Bulgarelli, L.; Pollard, T. J.; Horng, S.; Celi, L. A.; and Mark, R. G. 2020\. MIMIC-IV. PhysioNet, Last accessed 1 September 2020.
* [Khanagavi et al. 2014] Khanagavi, J.; Gupta, T.; Aronow, W. S.; Shah, T.; Garg, J.; Ahn, C.; Sule, S.; and Peterson, S. 2014\. Hyperkalemia among hospitalized patients and association between duration of hyperkalemia and outcomes. Archives of medical science: AMS 10(2):251.
* [Khwaja 2012] Khwaja, A. 2012\. KDIGO clinical practice guidelines for acute kidney injury. Nephron Clinical Practice 120(4):c179–c184.
* [Lin et al. 2020] Lin, C.-S.; Lin, C.; Fang, W.-H.; Hsu, C.-J.; Chen, S.-J.; Huang, K.-H.; Lin, W.-S.; Tsai, C.-S.; Kuo, C.-C.; Chau, T.; et al. 2020\. A deep-learning algorithm (ECG12Net) for detecting hypokalemia and hyperkalemia by electrocardiography: Algorithm development. JMIR Medical Informatics 8(3):e15931.
* [Lundberg and Lee 2017] Lundberg, S. M., and Lee, S.-I. 2017\. A unified approach to interpreting model predictions. In Advances in neural information processing systems, 4765–4774.
* [Nyirenda et al. 2009] Nyirenda, M. J.; Tang, J. I.; Padfield, P. L.; and Seckl, J. R. 2009\. Hyperkalaemia. Bmj 339\.
* [Ostermann et al. 2020] Ostermann, M.; Bellomo, R.; Burdmann, E. A.; Doi, K.; Endre, Z. H.; Goldstein, S. L.; Kane-Gill, S. L.; Liu, K. D.; Prowle, J. R.; Shaw, A. D.; et al. 2020\. Controversies in acute kidney injury: conclusions from a Kidney Disease: Improving Global Outcomes (KDIGO) conference. Kidney International 98(2):294–309.
* [Pollard et al. 2018] Pollard, T. J.; Johnson, A. E.; Raffa, J. D.; Celi, L. A.; Mark, R. G.; and Badawi, O. 2018\. The eICU Collaborative Research Database, a freely available multi-center database for critical care research. Scientific Data 5:180178.
* [Uijtendaal et al. 2011] Uijtendaal, E. V.; Zwart-van Rijkom, J. E.; van Solinge, W. W.; and Egberts, T. C. 2011\. Frequency of laboratory measurement and hyperkalaemia in hospitalised patients using serum potassium concentration increasing drugs. European journal of clinical pharmacology 67(9):933–940.
* [Vanden Hoek et al. 2010] Vanden Hoek, T. L.; Morrison, L. J.; Shuster, M.; Donnino, M.; Sinz, E.; Lavonas, E. J.; Jeejeebhoy, F. M.; and Gabrielli, A. 2010\. Part 12: cardiac arrest in special situations: 2010 american heart association guidelines for cardiopulmonary resuscitation and emergency cardiovascular care. Circulation 122(18_suppl_3):S829–S861.
## Appendix A Appendix
Figure 3: Data timeframe for predicting hyperkalemia. (a) Case 1 (b) Case 2

Table 2: List of medications

Variable | Category | Medications
---|---|---
med$\\_$ace$\\_$yn | ACEi/ARB | benazepril, monopril, captopril, enalapril,
| | enalaprilat, lisinopril, moexipril, quinapril,
| | ramipril, trandolapril, valsartan,
| | losartan, irbesartan
med$\\_$loop$\\_$yn | Loop/Thiazide Diuretics | torsemide, furosemide, chlorothiazide,
| | indapamide, hydrochlorothiazide
med$\\_$nsaid$\\_$yn | NSAID | celecoxib, celebrex, diclofenac, ibuprofen,
| | indomethacin, ketorolac + toradol,
| | naproxen
med$\\_$beta$\\_$yn | Beta Blockers | carvedilol, esmolol, metoprolol, nadolol,
| | propranolol, sotalol, acebutolol, atenolol,
| | bisoprolol
med$\\_$steroids$\\_$yn | Steroids | hydrocortisone na succ., methylprednisolone,
| | prednisone
med$\\_$beta$\\_$ag$\\_$yn | Beta Agonist | albuterol, salmeterol, levalbuterol
med$\\_$k$\\_$sparing$\\_$yn | K sparing Diuretics | spironolactone, amiloride
med$\\_$carbonic$\\_$yn | Carbonic Anhydrase | acetazolamide, methazolamide
| Inhibitors |
med$\\_$dig$\\_$yn | Digoxin |
med$\\_$hep$\\_$yn | Heparin |
med$\\_$pot$\\_$chl$\\_$yn | Potassium Chloride |
med$\\_$succ$\\_$yn | Succinylcholine |
med$\\_$ins$\\_$yn | Insulin |
med$\\_$sod$\\_$bic$\\_$yn | Sodium Bicarbonate |
med$\\_$cal$\\_$yn | Calcium Gluconate |
med$\\_$nitrog$\\_$yn | Nitroglycerin |
med$\\_$labet$\\_$yn | Labetalol |
med$\\_$vasop$\\_$yn | Vasopressor | vasopressin, dopamine, phenylephrine,
| | epinephrine, norepinephrine
Table 3: Baseline characteristics. All values are expressed as median and interquartile range unless specified. HyperK, Hyperkalemia; CKD, Chronic Kidney Disease; Comorbidities, Baseline Comorbidities; COPD, Chronic Obstructive Pulmonary Disease; SOFA, Sequential Organ Failure Assessment; CK, Creatine Kinase; Potassium, Admission potassium; Dialysis, Any form of renal replacement therapy during ICU stay; LOS, Length of Stay.

 | AKI Cohort (Case 1) | General Cohort (Case 2)
---|---|---
| n=43,798 | n=83,565
| HyperK | Non-HyperK | HyperK | Non-HyperK
| n=1,048 | n=42,750 | n=1,821 | n=81,744
Age (years) | 64 (52.1-74.8) | 66 (55-76.3) | 64 (53-89) | 64 (52-75)
Male (%) | 672 (64.1%) | 24,442 (57.2%) | 1,205 (66.2%) | 46,160 (56.5%)
SOFA score | 8 (5-11) | 4 (2-7) | 6 (4-9) | 3 (2-6)
Comorbidities | | | |
CKD (%) | 203 (19.4) | 4,256 (10.0) | 297 (16.3) | 6,074 (7.4)
COPD (%) | 96 (9.2) | 3,945 (9.2) | 156 (8.6) | 6,129 (7.5)
Diabetes (%) | 172 (16.4) | 6,307(14.8) | 314 (17.2) | 10,876 (13.3)
Hypertension (%) | 169 (16.1) | 10,272 (24.0) | 368 (20.2) | 16,937 (20.7)
Stroke (%) | 26 (2.5) | 1,489 (3.5) | 43 (2.4) | 3,012 (3.7)
Specialty | | | |
Medical (%) | 582 (55.5) | 23,029 (53.9) | 784 (43.1) | 41,518 (50.8)
Surgery (%) | 356 (34.0) | 14,321 (33.5) | 860 (47.2) | 27,257 (33.3)
Others (%) | 110 (10.5) | 5,400 (12.6) | 177 (9.7) | 12,969 (15.9)
CK (U/L) | 245 (83-1,125) | 150 (67-447) | 231 (84-939) | 146 (68-434)
Creatinine (mg/dL) | 1 (0.7–1.5) | 1 (0.7–1.4) | 0.9 (0.7-1.3) | 0.9 (0.7-1.3)
Phosphate (mg/dL) | 4.5 (3.4-6.1) | 3.5 (2.8-4.2) | 4.1 (3.2-5.3) | 3.4 (2.7-4)
Potassium (mEq/L) | 4.5 (3.9-5.2) | 4.1 (3.7-4.5) | 4.5 (0-1) | 4.0 (0-2)
Calcium (mg/dL) | 8.4 (7.8-8.9) | 8.4 (7.8-8.9) | 8.4 (7.9-8.9) | 8.4 (7.9-8.9)
Vasopressor | 564 (53.8%) | 9,702 (22.7%) | 820 (45.0%) | 13,848 (16.9%)
Dialysis | 433 (41.3%) | 2,805 (6.6%) | 478 (26.3%) | 3,088 (3.8%)
ICU LOS (days) | 3 (1-9) | 2 (1-4) | 3 (2-7) | 1 (1-3)
Hospital mortality (%) | 326 (31.1%) | 3,366 (7.9%) | 367 (20.2%) | 4,532 (5.5%)
# Blue Holes and Froude Horizons:
Circular Shallow Water Profiles for Astrophysical Analogs
Amilcare Porporato <EMAIL_ADDRESS>
Department of Civil and Environmental Engineering and High Meadows Environmental Institute, Princeton University, Princeton, New Jersey 08540, USA

Luca Ridolfi <EMAIL_ADDRESS>
Department of Environment, Land and Infrastructure Engineering, Politecnico di Torino, Corso Duca degli Abruzzi 24, 10129 Turin, Italy

Lamberto Rondoni <EMAIL_ADDRESS>
Dipartimento di Scienze Matematiche, Politecnico di Torino, Corso Duca degli Abruzzi 24, 10129 Torino, Italy
INFN, Sezione di Torino, Via P. Giuria 1, 10125 Torino, Italy
###### Abstract
Interesting analogies between shallow water dynamics and astrophysical
phenomena have offered valuable insight from both the theoretical and
experimental points of view. To help organize these efforts, here we analyze
systematically the hydrodynamic properties of backwater profiles of the
shallow water equations with 2D radial symmetry. In contrast to the more
familiar 1D case typical of hydraulics, even in isentropic conditions, a
solution with minimum-radius horizon for the flow emerges, similar to the
black hole and white hole horizons, where the critical conditions of unitary
Froude number provide a unidirectional barrier for surface waves. Beyond these
time-reversible solutions, a greater variety of cases arises, when allowing
for dissipation by turbulent friction and shock waves (i.e., hydraulic jumps)
for both convergent and divergent flows. The resulting taxonomy of the base-
flow cases may serve as a starting point for a more systematic analysis of
higher-order effects linked, e.g., to wave propagation and instabilities,
capillarity, variable bed slope, and rotation.
“(l’Idraulica), sempre da me tenuta per difficilissima e piena di oscurità.”
[“(Hydraulics), which I have always held to be most difficult and full of obscurities.”]
Galileo Galilei [letter to R. Staccoli, January 16, 1630]
## I Introduction
Browsing the recent literature on fluid analogies in astrophysics [1, 2],
Galileo may object that his quote about the ’obscurities of hydraulics’
actually does not refer to the darkness of the black holes, but to the
mysteries of fluid dynamics, which still persist today. Indeed, it was not
until the early 1970s that striking analogies between the properties of
black holes and thermodynamics were pointed out, associating the area of a
black hole event horizon with the thermodynamic entropy. A kind of second law
was proven to hold [3, 4, 5, 6] (cf. Ref. [7] for a review) and a
correspondence with hydrodynamic systems was observed [8]. Whether such
analogies are to be taken as expressions of bona fide thermodynamic properties
was an issue at the time, and so remains today [9, 10, 11].
In 1981, Unruh [1] noted that there exist well understood acoustic phenomena,
reproducible in the laboratory, which formally enjoy the same properties as
black holes, as far as the quantum thermal radiation is concerned (the
acoustic metric is conformal, but not identical, to the Painlevé–Gullstrand
form of the Schwarzschild geometry [2]). Independently, acoustic analogies
were considered also by other authors, notably by Moncrief [12] and Matarrese
[13]. In particular, Unruh’s work implies that exotic phenomena, such as black
hole evaporation, may be tested in controllable experiments on Earth! The way
was paved for a research line now known as analogue gravity [2], which has led
to numerous experimental analogues of cosmological gravitational phenomena.
Most notable for their beauty and variety are the hydraulic experiments, which
in turn are connected by analogy to polytropic gasdynamics [14, 15]. For
instance, Ref. [16] reports on a shallow water experiment mimicking the
development of a shock instability in the collapse of a stellar core, which
renders the birth of a neutron star highly non-spherical, in spite of the
underlying spherical symmetry. The element of interest here is the
2-dimensional hydraulic jump, in a convergent radial flow, which corresponds
to the spherical accretion shock that arises above the surface of the neutron
star as it is being generated. The diverging radial case is instead
associated with a white hole, a kind of time-reversed black hole [17, 18], which
is experimentally realized even with superfluids [19, 20]. In Ref. [21], the
scattering of surface waves on an accelerating transcritical 1-dimensional
flow is investigated, as corresponding to the space time of a black hole
horizon, while in Ref. [22], gravity waves have been used to investigate
instabilities of both black and white holes. These are but a few examples (see
also [23, 24, 25]) out of the many one can find in the specialized literature.
Using the same mathematical framework to describe very diverse physical
situations has repeatedly proven successful. Laplace considered Newtonian
mechanics equally suitable to describe the ’movements of the greatest bodies
of the universe and those of the tiniest atom’ [26]. Classical mechanics is
effective not only when dealing with several macroscopic objects, under non-
relativistic conditions, but it also constitutes the basis of statistical
mechanics, which deals with huge numbers of atoms and molecules. The planets
orbiting the Sun and the molecules in a glass of water are separated by 52
orders of magnitude in mass, and by 23 orders of magnitude in numbers. The
shallow water experiment and the collapse leading to a neutron star [16] are
separated, after all, by only six orders of magnitude in size.
Within this context, our contribution aims at providing a systematic
classification of the so-called backwater profiles (i.e., the elevation of the
free surface as a function of the streamwise coordinate), possibly connected
by shock waves (i.e., the hydraulic jumps), as solutions of the shallow water
equations in circular symmetry. A simple but systematic discussion of this
type may have the merit of connecting apparently different flow
configurations. While most of the shallow water literature refers to 1D
streams, here we focus on the role of the circular symmetry enforced by the
continuity equation, which opens the possibility for convergent or divergent
flows; we also pay attention to the role of dissipation, provided not only by
friction but also, when present, by shock waves in the form of hydraulic
jumps.
The astrophysics literature has mainly dealt with cases with circular
hydraulic jumps, either convergent [16] or divergent [17, 18], as well as with
1D currents with critical transitions over obstacles [23, 24]. While these
dissipative cases are characterized by strong energy losses due to the
presence of hydraulic jumps, and therefore are not symmetric with respect to
time or velocity inversion, here we also emphasize the presence of an inviscid
solution, which obeys this symmetry. The corresponding analytical solution is
of particular interest, as it represents a close analog of the black hole;
herein, the subcritical (i.e., subluminal, in the analogy) convergent current
accelerates towards a ’Froude horizon’, where the velocity becomes critical
(i.e., equal to the speed of the surface waves). The white hole analogy
emerges naturally, when the flow velocity is reversed.
Turbulent friction, acting in the direction opposite to the flow, allows for
the appearance of a critical point in the dynamical system describing the free
surface profiles, and thus it indirectly allows for the presence of stable
hydraulic jumps, represented as sharp discontinuities in the surface profiles.
Depending on the boundary conditions, both converging and diverging cases with
no jumps (called here dissipative black and white holes, respectively) are
also found, along with the corresponding cases with shock waves.
We confine our discussion to steady state profiles, in both inviscid and
turbulent conditions, noticing however that the laminar case presents no
qualitative differences. While the interesting effects of capillarity and
rotation are left for future work, we hope that the present analysis may be
useful to provide a classification of base flows to analyze systematically the
links between shallow water profiles and their astrophysical analogs.
## II Governing Equations
The starting point of the analogy is the well-known set of shallow water
equations [14, 15], namely the continuity equation
$\partial_{t}h=-\bm{\nabla}\cdot(h\bm{v})\,,$ (1)
where $h$ is the water depth, and $\bm{v}$ is the depth-averaged velocity,
which obeys the momentum equation
$\partial_{t}\bm{v}+\bm{v}\cdot\bm{\nabla}\bm{v}+g\bm{\nabla}h=-g\bm{\nabla}z_{b}-\bm{j},$
(2)
where $g$ is the gravitational acceleration, considered constant, $z_{b}$ is
the bed elevation, and $\bm{j}$ is the frictional force.
Friction is modeled as
$\bm{j}=\frac{|\bm{v}|^{\alpha}}{C^{2}h^{\beta}}\bm{v},$ (3)
where $C$ is the Chézy coefficient, so that $C^{2}$ is inversely proportional
to the friction coefficient.
In what follows, we will only consider horizontal stream beds,
$\bm{\nabla}z_{b}=0$, and will focus on the case of $\alpha=1$ and $\beta=1$,
typical of fully developed turbulent flows. For simplicity, we assume $C^{2}$
to be constant, according to Bresse’s hypothesis [27]. More complicated
formulations [28, 29, 30, 31], including the laminar case with $\alpha=0$,
$\beta=2$ [32], do not change the picture qualitatively. Other effects, such
as rotation and capillarity, although potentially very interesting, are
neglected here. We will return to a discussion of their effects and to
possible extensions of this work in the concluding section.
## III Isentropic case
Figure 1: Isentropic (inviscid) case. Top left panel: stream profiles as given
by Eq. (8); the red point marks the critical condition
$\xi_{min}={3\sqrt{3}}/{2}$, $y(\xi_{min})={2}/{3}$. Top right panel: behavior
of the velocity (dashed lines) and Froude number (solid lines) in the
subcritical (black lines) and supercritical (red lines) conditions. Bottom
panel: 3D view of the dimensionless water-level profile for the subcritical
branch of the inviscid solution (8), corresponding to the black hole. A
qualitative rendering of the ’light cones’ for the propagation of small waves
is also shown.
When reduced to a circularly symmetric problem, in steady state and in the
absence of friction, the continuity equation becomes
$2\pi rvh=Q,$ (4)
where $Q$ is the total volumetric flowrate, while the momentum equation takes
the form
$\frac{d}{dr}\left(h+\frac{v^{2}}{2g}\right)=0,$ (5)
and it can be integrated as
$h+\frac{v^{2}}{2g}=H,$ (6)
where $H$ is the constant head (energy per unit weight of fluid).
Eliminating the velocity from the previous equations,
$h+\frac{Q^{2}}{2g(2\pi r)^{2}h^{2}}=H,$ (7)
consistently with [33, 34, 32], suggests a normalization with $y=h/H$ and
$\xi=\frac{r}{R}$, where $R=\frac{Q}{2\pi H\sqrt{2gH}}$ is the radius at which
the given discharge passes with Torricellian velocity $\sqrt{2gH}$ and height
$H$. As a result,
$\xi=\frac{1}{y\sqrt{1-y}},$ (8)
where, by definition, one has $\xi>0$ and $y\in(0,1)$. The solutions are
reported in Fig. 1, along with the corresponding velocity, obtained using
$\nu=\frac{1}{\xi y}.$ (9)
Note that $\xi_{min}=\frac{3\sqrt{3}}{2}$ and $y(\xi=\xi_{min})=\frac{2}{3}$
(see Eq. (40) in [22]). Each branch of the solution represents two
possibilities, since the equations are invariant under inversion of the flow
direction. In the astrophysical
analogy, the top branch represents a black hole, when the flow is convergent,
and a white hole, when the flow is divergent, both with no dissipation.
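For concreteness, the two branches at a given radius can be obtained by numerically inverting Eq. (8); a minimal sketch (the bracketing intervals follow from the critical depth $y=2/3$ discussed below):

```python
import numpy as np
from scipy.optimize import brentq

xi_min = 3 * np.sqrt(3) / 2          # minimum (horizon) radius from Eq. (8)

def xi_of_y(y):
    """Inviscid profile, Eq. (8): xi as a function of the depth y."""
    return 1.0 / (y * np.sqrt(1.0 - y))

def depths(xi):
    """Sub- and supercritical depths at a given radius xi > xi_min."""
    f = lambda y: xi_of_y(y) - xi
    y_sub = brentq(f, 2.0 / 3.0, 1.0 - 1e-12)   # subcritical branch
    y_sup = brentq(f, 1e-9, 2.0 / 3.0)          # supercritical branch
    return y_sub, y_sup

print(depths(5.0))   # the two branches at an illustrative radius xi = 5
```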
The occurrence of a minimum radius, $\xi_{min}$, can be understood by
considering the radian-specific discharge, $Q_{u}=Q/2\pi r$, or $q_{u}=1/\xi$
in dimensionless form, which combined with the stream profile (8) yields
$q_{u}=y\sqrt{1-y},$ (10)
showing that $q_{u}=q_{u}(y)$ is a non-monotonic relation, with a maximum at
$y=2/3$, corresponding to $q_{u}=1/\xi_{min}$. This limit behavior arises
because a stream approaching smaller radii (flowing according to one of the
two branches) can carry that flow per radian only until the maximum value of
$q_{u}$ is attained, and the stream reaches the depth $y=2/3$ at $\xi_{min}$.
For smaller radii, the stream cannot carry that much energy $H$ and discharge
$Q$, while conserving them. The condition of maximum $q_{u}$ is called
critical (i.e., $y=2/3\equiv y_{c}$).
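The location of this maximum follows directly by differentiating Eq. (10):

$\frac{dq_{u}}{dy}=\sqrt{1-y}-\frac{y}{2\sqrt{1-y}}=\frac{2-3y}{2\sqrt{1-y}}=0\quad\Longrightarrow\quad y_{c}=\frac{2}{3},\qquad q_{u}^{max}=\frac{2}{3\sqrt{3}}=\frac{1}{\xi_{min}}.$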
By introducing the Froude number
$Fr=\frac{v}{\sqrt{gh}}=\frac{1}{\xi}\sqrt{\frac{2}{y^{3}}},$ (11)
the critical condition corresponds to $Fr=1$, which is plotted as the red line
$y_{c}=(\sqrt{2}/\xi)^{\frac{2}{3}}$ in Fig. 1. Accordingly, the upper (lower)
branches shown in Fig. 1 are called subcritical (supercritical) conditions.
Notice that the Froude number is the ratio between the stream velocity, $v$,
and the propagation celerity of small-amplitude surface waves, $\sqrt{gh}$
(first obtained by Lagrange [35]). It follows that subcritical streams are
characterized by surface waves that can propagate against the stream ($Fr<1$);
in contrast, waves can only propagate in the direction of the current if streams are
supercritical ($Fr>1$). This is also reflected in the ’light cones’ drawn in
Fig. 1 (lower panel).
The interplay between the stream and the wave velocities and the existence of
a minimum radius are analogous to the conditions of the Schwarzschild solution
of the black hole. Just as the latter corresponds to a mass confined in a
region whose escape velocity equals that of light, the minimum radius
$\xi_{min}$ corresponds to the threshold at which surface waves are no longer
able to go up the current. As a subcritical current flows towards the central
hole, the water depth $h$ decreases, the stream velocity $v$ increases, and
the wave celerity $\sqrt{gh}$ decreases. It follows that the Froude number gradually
approaches 1, and the stream reaches this critical value precisely at a
’Froude horizon’, inside which the surface waves are no longer able to go
upstream, namely to run away from the hole. In this sense, the surface waves
resemble the light in the Schwarzschild problem. However, while in the black
hole the velocity of the falling observer decreases proportionally to the
square root of the radius and the light speed remains constant, here (see top-
right panel in Fig. 1) the surface-wave speed changes in space depending on
the water depth; as mentioned in the Introduction, the correspondence between
the black hole and the shallow water metrics is only conformal [2].
Note that by modifying the dependence of the bottom slope on the radius, the
analogy could possibly be made tighter, as also briefly noted in [22], but
this will not be pursued here. Another interesting point for future work regards
the conditions beyond the minimum radius, namely inside the Froude horizon,
where different solutions, similar to the so-called interior Schwarzschild
solution [36], might account for the fact that the flow cannot take place with
the same discharge and energy.
## IV Backwater profiles due to friction
Allowing for the effects of turbulent friction, several additional
configurations become possible. These can be obtained considering, in terms of
water depth, the combined continuity and momentum equation as
$\frac{dH}{dr}=\frac{d}{dr}\left(h+\frac{v^{2}}{2g}\right)=-\frac{v|v|}{C^{2}h}.$
(12)
Using the normalization $v/\sqrt{2gH_{0}}=\nu=1/(\xi y)$ (where $H_{0}$ is the
stream head at the boundary) and the other reference scales introduced before
yields
$\frac{dy}{d\xi}\left(1-\frac{2}{\xi^{2}y^{3}}\right)=\frac{2}{\xi^{3}y^{2}}-\frac{\alpha}{\xi^{2}y^{3}},$
(13)
where $\alpha={\rm sgn}(v)2g/C^{2}$. Thus $\alpha$ is positive for turbulent
flows proceeding along $\xi$ (divergent) and negative for flows that go
against $\xi$ (convergent). As a result, the slope of the water depth is
$\frac{dy}{d\xi}=\frac{2y-\alpha\xi}{\xi^{3}y^{3}-2\xi}=\frac{N(\xi,y,\alpha)}{D(\xi,y)}.$
(14)
Figure 2 shows the phase plots for some values of $\alpha$. In these plots,
three curves are highlighted: the profiles (8) corresponding to the inviscid
case (black lines), and the solutions of equations $N(\xi,y,\alpha)=0$ (green
lines) and $D(\xi,y)=0$ (red lines). Since the physical domain is bound to
$\xi>0$, green lines occur only if $\alpha>0$.
In the $\alpha=0$ case (i.e., no energy dissipation), the inviscid solution
reproduces the only possible stream profiles compatible with energy
conservation: depending on the boundary condition, the subcritical or the
supercritical reach is selected (notice that the critical condition, where the
reaches join, lies on the curve $D(\xi,y)=0$). In contrast, when dissipation
occurs (i.e., $\alpha\neq 0$), the curve representing the flow starts from the
boundary with energy $H_{0}$ and proceeds dissipating energy according to Eq.
(12). Since $H_{0}$ was chosen as the reference energy to normalize the
problem, the boundary condition lies on the inviscid solution, where
$H(\xi_{bc})/H_{0}=1$, with $\xi_{bc}$ the radial position of the boundary.
The remaining part of the stream profile follows the curve of the phase space,
departing from the boundary condition. In other words, the black curve
corresponding to the solution (8) now becomes the locus of the initial
conditions of the backwater profiles. As expected, the phase trajectories are
tangent to the inviscid solution only for $\alpha=0$.
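A minimal numerical sketch of such a backwater profile follows; the value of $\alpha$, the boundary radius, and the integration range are illustrative assumptions, not the settings behind Fig. 2:

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import brentq

alpha = 0.4    # illustrative friction parameter (alpha > 0: divergent flow)

def slope(xi, y):
    """Backwater-profile ODE, Eq. (14): dy/dxi = N/D."""
    return (2 * y[0] - alpha * xi) / (xi**3 * y[0]**3 - 2 * xi)

# The boundary condition lies on the inviscid curve (H/H0 = 1): here the
# subcritical depth of Eq. (8) at an illustrative boundary radius xi_bc = 5.
xi_bc = 5.0
y_bc = brentq(lambda y: 1 / (y * np.sqrt(1 - y)) - xi_bc, 2 / 3, 1 - 1e-12)

# Integrate downstream (increasing xi for a divergent stream); the profile
# stays subcritical over this range, away from the critical divide D = 0.
sol = solve_ivp(slope, (xi_bc, 12.0), [y_bc], rtol=1e-8, atol=1e-10)
print(sol.t[-1], sol.y[0, -1])
```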
Because of dissipation, the flows are characterized by $(h+v^{2}/2g)\leq
H_{0}$, which in dimensionless form reads
$y+\frac{1}{\xi^{2}y^{2}}\leq 1.$ (15)
Such a condition is satisfied only in the region of the phase space enclosed
by the inviscid solution (8), that is the area inside the black curve in Fig.
2. Therefore, physically meaningful current profiles correspond to phase lines
in this region. The curve $D(\xi,y)=0$ marks the critical condition and
separates the upper reach of the phase lines corresponding to subcritical
streams ($Fr<1$) from the lower one that refers to supercritical streams
($Fr>1$). It is interesting to note that this critical divide does not depend
on the friction parameter $\alpha$. In the case of $\alpha>0$ (i.e., diverging
streams, flowing along increasing $\xi$), the phase space contains a focus
where $N=D=0$ – with coordinates
$\\{\xi_{f}=\sqrt[5]{2^{4}/\alpha^{3}},y_{f}=\sqrt[5]{\alpha^{2}/2}\\}$. The
latter falls within the physically meaningful domain only if
$\alpha<8/(9\sqrt{3})$.
The transition from supercritical to subcritical streams takes place through
the formation of a hydraulic jump. Typically, these appear as turbulent bores,
although a series of standing waves (the so-called undular jump) can occur
when the two streams before and after the jump are close to the critical
condition (not considered here). The radial position of the hydraulic jump is
obtained by applying the momentum principle along the flow direction to a
short reach of circular sector of stream. This yields
$\frac{\gamma}{2}h^{2}\phi r+\beta\rho(\phi rq)v=const.,$ (16)
where the two addends refer to the hydrostatic and dynamical force,
respectively, $\gamma$ is the fluid specific weight, $\phi$ is the central
angle of the circular sector, $\beta$ is the momentum coefficient, and $\rho$
is the fluid density. In the previous relation, bed friction and lateral
hydrostatic components have been neglected.
Figure 2: Phase space for $\alpha$ increasing in steps of 0.4, starting from
-0.4 (top left). The red line is $D=0$, while the green is $N=0$; the black
line is the solution of the inviscid case, Eq (8).
By introducing the dimensionless quantities and assuming turbulent motion
($\beta\simeq 1$), the previous relation (16) gives
$y^{2}+\frac{4}{\xi^{2}y}\equiv F(\xi,y)=\mathrm{const.},$ (17)
namely the (dimensionless) specific force $F$ that the two streams have to
balance immediately before ($Fr>1$) and after ($Fr<1$) the hydraulic jump
[37], i.e., $F(\xi_{j},y_{1})=F(\xi_{j},y_{2})$, where the subscripts 1 and 2
refer to the supercritical and subcritical depths at the radial position,
$\xi_{j}$, of the hydraulic jump. A minimum of the specific force,
$F=F_{min}$, occurs at $y=\sqrt[3]{2/\xi^{2}}$, in correspondence to the
critical condition, $D=0$.
It is important to note that the hydraulic jump is a strongly dissipative
phenomenon, related to the formation of turbulence and vorticity. In
dimensionless terms, the energy dissipation of the hydraulic jump can be
calculated as the difference in total energy across the jump,
$[\Delta(\Delta+2y_{1})/(y_{1}^{2}y_{2}^{2}\xi_{j}^{2})-\Delta]$, where
$\Delta=(y_{2}-y_{1})$ is the depth difference across the hydraulic jump.
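In practice, the conjugate depths can be found by solving $F(\xi_{j},y_{1})=F(\xi_{j},y_{2})$ numerically; a minimal sketch with illustrative values of $\xi_{j}$ and $y_{1}$:

```python
import numpy as np
from scipy.optimize import brentq

def F(y, xi):
    """Dimensionless specific force, Eq. (17)."""
    return y**2 + 4.0 / (xi**2 * y)

def conjugate_depth(y1, xi_j):
    """Subcritical depth y2 balancing the supercritical y1 at radius xi_j."""
    y_c = (2.0 / xi_j**2) ** (1.0 / 3.0)     # critical depth, where D = 0
    target = F(y1, xi_j)
    # F decreases down to its minimum at y_c and then increases, so the
    # subcritical root is bracketed between y_c and the free surface y = 1.
    return brentq(lambda y: F(y, xi_j) - target, y_c, 1.0)

print(conjugate_depth(0.1, 8.0))   # illustrative supercritical depth at xi_j = 8
```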
Figure 3: Dissipative solutions. The columns (see solid and dashed lines)
refer to neutron star ($\alpha=-0.1$) and white hole ($\alpha=0.1$) cases,
respectively. The first row shows stream profiles (solid lines; blue arrows
indicate flow direction); the central row reports behaviors of velocity
(dashed lines) and Froude number; the lower row displays behaviors of specific
force (dashed lines) and stream head (referred to the initial stream energy,
$H_{0}$). In the lower row, the red and black lines refer to the supercritical
and subcritical streams, respectively. In all panels, dotted lines refer to
profiles with no hydraulic jumps: the drain case (left column) and the spring
case (right column).
In terms of astrophysical analogs, the introduction of friction in the shallow
water dynamics brings about additional cases, exemplified in Fig. 3: a neutron
star (left column) and an analog of the dissipative white hole (right column);
two other analogs are depicted in the same figure (with dotted lines) and
correspond to dissipative forms of the black and white holes. In the case of
the neutron-star analog, shown on the left of Fig. 3, the flow proceeds
towards the center. Practically speaking, the fluid enters the domain from a
circular sluice gate, placed along the external radius, and is drained through
a central hole (of dimensions larger than $\xi_{min}$). The flow is initially
supercritical and becomes subcritical after a hydraulic jump. The central and
bottom panels show that the Froude number exhibits a non-monotonic behavior,
as it first decreases in the supercritical reach and then increases when the
stream becomes subcritical. The reason for this lies in the hydraulic
constraint that the current must become critical at the edge of the central
hole, as shown in the right central panel, where the stream reaches $Fr=1$ on
the hole edge placed at $\xi=7.5$. Accordingly, the velocity also shows a
non-monotonic behavior. Finally, the left lower panel highlights that the
hydraulic jump entails an abrupt energy dissipation, which occurs where the
specific forces of supercritical and subcritical streams are equal.
In the case of the dissipative white hole, illustrated in the right panels of
Fig. 3, the fluid flows along increasing values of $\xi$ (e.g., as if coming
from a vertical jet impinging the bed close to $\xi=0$) and it is initially in
supercritical conditions; a hydraulic jump then connects the profile to the
subcritical one downstream. The central and bottom panels show that both the
Froude number and the stream velocity decrease monotonically along the radius
(although they would start increasing again, if the profile were to be
continued), with a step change at the hydraulic jump. The condition of
equality of the specific force dictates the radial position of the hydraulic
jump, where a localized energy dissipation occurs. The subcritical profile
$y(\xi)$ can be non-monotonic, since a maximum can occur depending on whether
the subcritical reach intersects or does not intersect the line $N=0$, which
is the green line in Fig. 2.
Fig. 3 also reports the profiles occurring when water drains or flows from a
central hole or spring, without hydraulic jumps. Such profiles, corresponding
to the dissipative black and white holes, are characterized by subcritical streams shown as
dotted lines. In the drain case (left panels), the flow originates from an
external circular reservoir, then it accelerates converging towards the center
and finally enters the hole in critical conditions. In the spring case (right
panels), a profile, analogous to a dissipative white hole, starts from the
critical condition ($Fr=1$, where water emerges), gradually slows down and
joins the subcritical profile previously described in the case of white hole
with shock.
These last two subcritical analogs with no hydraulic jumps are similar to
those discussed in the isentropic case. However, the presence of a maximum in
the curve $y=y(\xi)$ of the spring case, which does not occur in isentropic
situations, highlights an interesting class of profiles, connecting two
critical horizons, that is peculiar to the viscous case. An example of a white
hole confined between two horizons (see [22], Sec. XI) is shown in Fig. 4,
where the flow springs from critical conditions, reaches a maximum and then
decreases, returning to the critical condition before jumping off from the
outer edge of the circular plate.
Figure 4: 2-horizon spring: dissipative, subcritical profile connecting two
horizons ($\alpha=0.4$, black line). Blue arrow indicates flow direction,
while red lines refer to the critical conditions (solid line) and Froude
number along the profile (dashed line), respectively.
The stability of the hydraulic jumps occurring in both analogs of Fig. 3 is an
interesting matter. If one only considers the specific forces $F$, they both
appear spatially stable: perturbations of their radial position are absorbed
by the consequent imbalance between the upstream and downstream specific
forces, so that eventually the jumps return to their original position.
However, a more detailed momentum balance across the jump, one that includes
lateral hydrostatic pressures and bed friction, could alter this picture (see
also [38, 31]), especially in convergent cases [16], as in one-dimensional
streams in convergent or upward sloping channels [39, 40]. Finally, it is
worth mentioning that hydraulic jumps connecting supercritical to subcritical
streams are possible also in the isentropic case. However, unlike the
dissipative cases, their spatial position is undetermined, being marginally
stable [41].
## V Conclusions
The solutions of the shallow water equations present a variety of
configurations, which besides their direct fluid dynamic interest may also
have useful implications as analogs of specific astrophysical phenomena. For
conditions of circular symmetry, the resulting steady state solutions have
been discussed with particular attention to the transition between subcritical
and supercritical conditions. The main cases are organized in Table I. These
steady state solutions may be realized in the laboratory and can be used as
base solutions to explore the modes of propagation of disturbances and
instabilities.
Starting from these configurations, several avenues for future research are
suggested by the astrophysical analogies. Of particular interest is the
stability of the hydraulic jumps. As already mentioned, this analysis is
complicated by the presence of bottom friction and, in particular, by the
pressure forces along the circumference of the shock, whose quantification
depends on the specific geometry of the hydraulic jump [39, 40]. Moreover,
they may include oscillation and symmetry breaking instabilities [38, 31],
including those nicely documented in the neutron star analogue [16].
Along a similar line, one could conjecture the appearance of roll waves
(i.e., pulsing and breaking waves, see [42, 15]) that could be realized in
supercritical conditions with variable bottom slope. Apparently similar star-
pulsation phenomena are well known in the literature [43, 44, 45]. In general,
modifications of the bed slope (both downward and upward) introduce a degree
of freedom, which would allow for the interplay between energy dissipation by
friction and potential energy gain/loss to widen the gamut of hydraulic
profiles and shock behaviors.
Finally, including rotations would be of interest for both Kerr-Newman black
holes and for exploring wave generation in vorticity-shock interactions [46,
47], while capillarity effects are known to generate lower wave-number
disturbances in the upstream reach of obstacles [15], which have been linked
to the Hawking radiation of black hole evaporation [24]. Extended
thermodynamic formalism for turbulent flows, shocks and waves might also
provide avenues to more concretely link black hole entropy to classical
thermodynamics [48, 49].
Table 1: List of shallow water analogs in circular symmetry (SUB=Subcritical flow; SUP=Supercritical flow; HJ=Hydraulic Jump; SFH=Smooth Froude Horizon).

Shallow Water | Astr. Analog | Flow Dir. | Flow Types | Energetics | Eq./Fig. | Ref.
---|---|---|---|---|---|---
Circular Jump | Neutron Star | Convergent | SUP$>$HJ$>$SUB | Dissipative | Eq. (14), Fig. 3 | [16]
Drain | Turbulent Black Hole | Convergent | SUB$>$SFH | Dissipative | Eq. (14), Fig. 3 |
Inviscid Drain | Black Hole | Convergent | SUB$>$SFH | Isentropic | Eq. (8), Fig. 1 | [22]
Inviscid Spring | White Hole | Divergent | SUB$>$SFH | Isentropic | Eq. (8), Fig. 1 | [22]
Spring | Turbulent White Hole | Divergent | SUB$>$SFH | Dissipative | Eq. (14), Fig. 3 |
2-Horizon Spring | Confined White Hole | Divergent | SFH$>$SUB$>$SFH | Dissipative | Eq. (14), Fig. 4 | [22], Sect. XI
Circular Jump | White Hole with Shock | Divergent | SUP$>$HJ$>$SUB | Dissipative | Eq. (14), Fig. 3 | [18, 19]
Acknowledgements. – A.P. acknowledges the US National Science Foundation
(NSF) grants EAR-1331846 and EAR-1338694. L.R. acknowledges that the present
research has been partially supported by MIUR grant Dipartimenti di Eccellenza
2018-2022 (E11G18000350001).
## References
* Unruh [1981] W. G. Unruh, Experimental black-hole evaporation?, Physical Review Letters 46, 1351 (1981).
* Barcelò _et al._ [2005] C. Barcelò, S. Liberati, and M. Visser, Analogue gravity, Living Reviews in Relativity 8 (2005).
* Bekenstein [1972] J. Bekenstein, Black holes and the second law, Lettere al Nuovo Cimento 4, 737 (1972).
* Bekenstein [1973] J. Bekenstein, Black holes and entropy, Physical Review D 7, 2333 (1973).
* Bardeen _et al._ [1973] J. Bardeen, B. Carter, and S. Hawking, The four laws of black hole mechanics, Communications in Mathematical Physics 31, 161 (1973).
* Hawking [1975] S. Hawking, Particle creation by black holes, Communications in Mathematical Physics 43, 199 (1975).
* Page [2005] D. Page, Hawking radiation and black hole thermodynamics, New Journal of Physics 7 (2005).
* Smarr [1973] L. Smarr, Mass formula for Kerr black holes, Physical Review Letters 30, 71 (1973).
* Wallace [2010] D. Wallace, Gravity, entropy, and cosmology: in search of clarity, British Journal for the Philosophy of Science 61, 513 (2010).
* Wallace [2018] D. Wallace, The case for black hole thermodynamics part i: Phenomenological thermodynamics, Studies in History and Philosophy of Modern Physics 64, 52 (2018).
* Callender [2019] C. Callender, Are we all wrong about black holes?, Quanta Magazine (2019).
* Moncrief [1974] V. Moncrief, Gravitational perturbations of spherically symmetric systems. i. the exterior problem, Annals of Physics 88, 323 (1974).
* Matarrese [1985] S. Matarrese, On the classical and quantum irrotational motions of a relativistic perfect fluid i. classical theory, Proceedings of the Royal Society A 401, 53 (1985).
* Landau and Lifshitz [1987] L. Landau and E. Lifshitz, Theoretical physics, vol. 6, fluid mechanics (1987).
* Whitham [2011] G. B. Whitham, _Linear and nonlinear waves_ , Vol. 42 (John Wiley & Sons, 2011).
* Foglizzo _et al._ [2012] T. Foglizzo, F. Masset, J. Guilet, and G. Durand, Shallow water analogue of the standing accretion shock instability: Experimental demonstration and a two-dimensional model, Physical Review Letters 108, 051103 (2012).
* Jannes _et al._ [2011] G. Jannes, R. Piquet, P. Maïssa, C. Mathis, and G. Rousseaux, Experimental demonstration of the supersonic-subsonic bifurcation in the circular jump: A hydrodynamic white hole, Physical Review E 83, 1 (2011).
* Bhattacharjee [2017] J. Bhattacharjee, Tunneling of the blocked wave in a circular hydraulic jump, Physics Letters A 381, 733 (2017).
* Volovik [2005] G. Volovik, Hydraulic jump as a white hole, JETP Letters 82, 624 (2005).
* Volovik [2006] G. Volovik, Horizons and ergoregions in superfluids, Journal of Low Temperature Physics 145, 337 (2006).
* Euvé _et al._ [2020] L. P. Euvé, S. Robertson, N. James, A. Fabbri, and G. Rousseaux, Scattering of co-current surface waves on an analogue black hole, Physical Review Letters 124, 1 (2020).
* Schützhold and Unruh [2002] R. Schützhold and W. G. Unruh, Gravity wave analogues of black holes, Physical Review D 66, 044019 (2002).
* Rousseaux _et al._ [2008] G. Rousseaux, C. Mathis, P. Maïssa, T. G. Philbin, and U. Leonhardt, Observation of negative-frequency waves in a water tank: a classical analogue to the Hawking effect?, New Journal of Physics 10, 053015 (2008).
* Weinfurtner _et al._ [2011] S. Weinfurtner, E. W. Tedford, M. C. Penrice, W. G. Unruh, and G. A. Lawrence, Measurement of stimulated Hawking emission in an analogue system, Physical Review Letters 106, 021302 (2011).
* Euvé _et al._ [2016] L.-P. Euvé, F. Michel, R. Parentani, T. G. Philbin, and G. Rousseaux, Observation of noise correlated by the Hawking effect in a water tank, Physical Review Letters 117, 121301 (2016).
* Laplace [1829] P. Laplace, _Essai philosophique sur les probabilités_ (H. Remy, 1829).
* Bresse [1860] J. A. C. Bresse, _Cours de mecanique appliquee: 2: Hydraulique_ (Mallet-Bachelier, 1860).
* Bonetti _et al._ [2017] S. Bonetti, G. Manoli, C. Manes, A. Porporato, and G. G. Katul, Manning’s formula and strickler’s scaling explained by a co-spectral budget model, Journal of Fluid Mechanics 812, 1189 (2017).
* Bohr _et al._ [1997] T. Bohr, V. Putkaradze, and S. Watanabe, Averaging theory for the structure of hydraulic jumps and separation in laminar free-surface flows, Physical Review Letters 79, 1038 (1997).
* Luchini and Charru [2010] P. Luchini and F. Charru, The phase lead of shear stress in shallow-water flow over a perturbed bottom, Journal of fluid mechanics 665, 516 (2010).
* Ivanova and Gavrilyuk [2019] K. Ivanova and S. Gavrilyuk, Structure of the hydraulic jump in convergent radial flows, Journal of Fluid Mechanics 860, 441 (2019).
* Bohr _et al._ [1993] T. Bohr, P. Dimon, and V. Putkaradze, Shallow-water approach to the circular hydraulic jump, Journal of Fluid Mechanics 254, 635 (1993).
* Watson [1964] E. Watson, The radial spread of a liquid jet over a horizontal plane, Journal of Fluid Mechanics 20, 481 (1964).
* Hager [1985] W. Hager, Hydraulic jump in non-prismatic rectangular channels, Journal of Hydraulic Research 23, 21 (1985).
* Lagrange [1783] I. Lagrange, Mémoire sur la théorie du mouvement des fluides, Bulletin de la Classe des Sciences Academie Royal de Belique , 151 (1783).
* Hughston and Tod [1990] L. P. Hughston and K. P. Tod, _An introduction to general relativity_ , Vol. 5 (Cambridge University Press, 1990).
* Chow [1959] V. T. Chow, _Open-channel hydraulics_ (McGraw-Hill, 1959).
* Ellegaard _et al._ [1998] C. Ellegaard, A. E. Hansen, A. Haaning, K. Hansen, A. Marcussen, T. Bohr, J. L. Hansen, and S. Watanabe, Creating corners in kitchen sinks, Nature 392, 767 (1998).
* Marchi and Rubatta [2004] E. Marchi and A. Rubatta, _Meccanica dei Fluidi(in Italian)_ (UTET, 2004).
* Defina _et al._ [2008] A. Defina, F. Susin, and D. Viero, Bed friction effects on the stability of a stationary hydraulic jump in a rectangular upward sloping channel, Physics of Fluids 20, 036601 (2008).
* Valiani and Caleffi [2016] A. Valiani and V. Caleffi, Free surface axially symmetric flows and radial hydraulic jump, Journal of Hydraulic Engineering 142, 06015025 (2016).
* Dressler [1949] R. F. Dressler, Mathematical solution of the problem of roll-waves in inclined open channels, Communications on Pure and Applied Mathematics 2, 149 (1949).
* Balmforth and Gough [1990] N. Balmforth and D. Gough, Effluent stellar pulsation, The Astrophysical Journal 362, 256 (1990).
* Balmforth [1992] N. Balmforth, Solar pulsational stability–i. pulsation-mode thermodynamics, Monthly Notices of the Royal Astronomical Society 255, 603 (1992).
* Andersson and Kokkotas [1996] N. Andersson and K. D. Kokkotas, Gravitational waves and pulsating stars: What can we learn from future observations?, Physical review letters 77, 4134 (1996).
* Klein _et al._ [1994] R. I. Klein, C. F. McKee, and P. Colella, On the hydrodynamic interaction of shock waves with interstellar clouds. 1: Nonradiative shocks in small clouds, The Astrophysical Journal 420, 213 (1994).
* Ellzey _et al._ [1995] J. L. Ellzey, M. R. Henneke, J. M. Picone, and E. S. Oran, The interaction of a shock with a vortex: shock distortion and the production of acoustic waves, Physics of Fluids 7, 172 (1995).
* Biró _et al._ [2020] T. S. Biró, V. G. Czinner, H. Iguchi, and P. Ván, Volume dependent extension of Kerr-Newman black hole thermodynamics, Physics Letters B 476, 135344 (2020).
* Porporato _et al._ [2020] A. Porporato, M. Hooshyar, A. Bragg, and G. Katul, Fluctuation theorem and extended thermodynamics of turbulence, Proceedings of the Royal Society A 476, 20200468 (2020).
# Optimal Reinsurance: A Ruin-Related Uncertain Programming Approach
Wrya Vakili and Alireza Ghaffari-Hadigheh
Dept. of Applied Math.
Azarbaijan Shahid Madani University, Tabriz, Iran
<EMAIL_ADDRESS>
<EMAIL_ADDRESS>
###### Abstract
We investigate the role of reinsurance in maximizing the wealth of an
insurance company. We use Liu’s uncertainty theory (B. Liu, 2007) for
modeling the problem and for the subsequent computations. The uncertain
measure of ruin for the insurance company is considered as the optimization
criterion. Since calculating the ruin index is very difficult, we introduce a
simple computational method to identify the uncertain measure of ruin for an
insurance company. Finally, a generalized model is presented to make the
approach more practical.
## 1 Introduction
Risk is an inevitable feature of any business. Analyzing the behavior of the
risks involved and their consequences has always been the subject of theories
that deal with unpredictable environments. Risk theory investigates the
effects of potential deviations of outcomes from their expected values, how
to prevent them from occurring, and how to be prepared to take action when
they are inescapable. It mostly concerns modeling the cash flow as a surplus
process, e.g., the business’s wealth during the activity period [10].
Using insurance, people aim to hedge their property against diverse sorts of
risks they might encounter. Insurance companies take their customers’ risks in
exchange for a specified amount of money referred to as the premium. The
collective risk model, also called the Cramer-Lundberg model, was initiated by
Lundberg and later developed by Cramer [5]. It is usually regarded as the
standard representation of an insurance company’s risk process concerning the
risks it takes on by underwriting insurance.
In the non-life insurance industry, consider an insurance company that sells
a large but finite number of fire insurance contracts. To compete against
other companies, it may offer a lower premium, which means enduring a higher
risk in each contract. Notice that in such contracts, the chance of fire
damage to the property is quite low. However, the policyholder’s loss in an
incident is far higher than the premium amount. Carrying a mass of such
risks, the company would prefer to hedge itself by purchasing reinsurance to
transfer some of the risk to another company. Here, we model such a
situation as an optimization problem in which the optimal decision is the
amount of risk it should share.
Consider this classical model as appeared in most of the standard actuarial
literature [5]
$U_{t}=u+ct-R_{t},$ (1)
where $u$ is the initial capital of the company and $c>0$ is a constant
premium rate, indicating that the company receives $c$ units of money per
unit time. The process $R_{t}=\sum_{i=1}^{N_{t}}\eta_{i}$ is the so-called
compound process, where $N_{t}$ counts the number of claims that the company
might have to pay during $(0,t]$. Further, the $\eta_{i}$’s are random
variables denoting the severity of claims. Finally, $U_{t}$ is the wealth of
the company regarding its contracts over $(0,t]$.
Observe that the interval $(0,t]$ is punctuated by the separate instants
$s_{1},s_{2},\ldots$ at which losses take place. We refer to
$\xi_{i}=s_{i+1}-s_{i}$ as the interarrival times. In the Cramer-Lundberg
model, these random variables are iid with exponential distribution. It is
also assumed that $N_{t}$ and the $\eta_{i}$’s are independent, and that the
$\eta_{i}$’s are iid random variables. Anderson et al. [3] generalized the
classic model by letting $N_{t}$ be any renewal process, not just the
Poisson process. In these models, the only non-deterministic component is
$R_{t}$, but calculating its distribution is a very costly task and, most of
the time, almost impossible. However, an acceptable approximation of the
distribution can be obtained by discretizing the variables to make them more
tractable.
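Since explicit distributions are rarely available, the behavior of model (1) is often explored by simulation; the following minimal Monte Carlo sketch (the function name and all parameter values are our illustrative choices) estimates a finite-horizon probability of ruin under exponential interarrival times and exponential severities:

```python
import numpy as np

rng = np.random.default_rng(0)

def ruin_probability(u, c, lam, mean_claim, horizon, n_paths=20_000):
    """Monte Carlo estimate of the probability of ruin before `horizon`
    for the surplus process (1), with Poisson(lam) claim arrivals (i.e.,
    iid exponential interarrival times) and exponential claim severities."""
    ruined = 0
    for _ in range(n_paths):
        t, surplus = 0.0, float(u)
        while True:
            dt = rng.exponential(1.0 / lam)      # interarrival time xi_i
            t += dt
            if t > horizon:
                break
            surplus += c * dt - rng.exponential(mean_claim)
            if surplus < 0:          # ruin can only occur at claim instants
                ruined += 1
                break
    return ruined / n_paths

# Illustrative 25% safety loading: c = 1.25 * lam * mean_claim.
print(ruin_probability(u=10.0, c=1.25, lam=1.0, mean_claim=1.0, horizon=200.0))
```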
Ruin is an essential and primary concept of mathematical approaches in the
risk management of insurance activities. It is defined as the company’s
surplus becoming negative at some future time, concerning the active
contracts of some policies [5]. Observe that ruin happens when the collected
premiums do not cover the losses claimed by policyholders. Only by
restricting the model with some simplifying assumptions is it possible to
obtain explicit results. For example, Panjer [38] proved that when the
severity takes only positive integer values, and $N_{t}$ belongs to some
special cases, the distribution of $R_{t}$, and consequently the ruin, can be
calculated through a recursive algorithm. In another study, the diffusion
approximation was proposed [24] and followed up by Grandell [22] as another
approach to make the computation of ruin possible. Here, the main idea is to
use the central limit theorem to approximate the risk process by a Wiener
process. Another approximation was proposed by De Vylder et al. [17], where
the first three moments of $R_{t}$ were used to mimic its behavior and derive
a recursive algorithm for calculating the probability distribution of ruin.
Thus, directly calculating the probability of ruin is far from being a
practical approach.
Some generalized versions of the classical models have been developed, in
which more details of the real-world problem are taken into account. Dufresne
and Gerber [18] used a diffusion process to perturb the risk process, allowing
more details in the model, such as the effect of interest rates or dependency
between the number and severity of the claims. More features, such as taxes
and dividends, have been added to the models, enriching the literature and
making the models more practical. We refer to [2, 42], as well as to the
classic textbooks in the field [5, 19], for more details.
Reinsurance is a framework for protecting insurance companies against
potentially catastrophic high losses [1]. Just as an insurance company
charges its policyholders premiums, the reinsurer also sets a premium
adapted to the amount of uncertain compensation cover that it offers the
cedent party. Reinsurance's role is to limit the primary insurer's liability
to a predefined level of risk and to increase its risk-taking capacity when
the likely loss is predicted to be overwhelming.
There are two basic forms of reinsurance contracts: proportional and non-
proportional. In the former, both parties take a predetermined share of all
premiums and potential losses. In the latter, the reinsurer's liability is
not fixed in advance but depends on the number or severity of the claims
incurred before the reinsurance contract's expiration [1].
One of the most important concerns about reinsurance contracts is
determining the optimal retention, i.e., the optimal amount of risk that the
primary insurer holds. Since the pioneering works by Borch [9], Kahn [26],
and Arrow [4], there have been many contributions to the subject. Following
these fundamental works, other researchers added a great deal of knowledge to
the literature, e.g., [10, 11, 20, 23, 40, 41].
The special case of proportional reinsurance has been the subject of several
other studies [12, 13, 14, 27]. Common objectives include minimizing the
probability of ruin [6, 16, 39], maximizing the expected utility of terminal
wealth [7, 21, 29, 37], and mean-variance portfolio optimization [8, 27].
For a comprehensive review of optimal reinsurance approaches, see [15].
Models based on probability theory are practical and useful, but they should
not be considered the perfect or only way to model every problem. As noted
by several authors [35, 45, 46], probability theory has shortcomings in
dealing with problems for which there is no sufficient data, or for which the
basic information is based on experts' opinions.
Liu [30] introduced uncertainty theory to deal with problems related to
belief degrees in expert-based systems. Analogous to the stochastic point of
view, Liu [31] introduced the concepts of renewal and renewal reward processes
for systems driven by suddenly arriving events. To deal with ruin
theory problems, the concept of the ruin index has been proposed within the
uncertainty theory framework as the counterpart of the ruin probability in
probability theory [36]. Li et al. [28] proposed a premium calculation
principle in an uncertain environment. Later, Yao et al. [43, 44] modified and
extended the results on the ruin index under different assumptions.
Uncertainty theory has two crucial features that make it a reliable tool for
dealing with expert-based models. Firstly, it enjoys a well-defined axiomatic
structure similar to that of probability theory. Secondly, the theory is
designed to be tractable for subjective modeling in fields like actuarial
science where distributions are based largely on belief rather than data.
Here we investigate the impact of proportional reinsurance on an insurance
company's surplus process and its survival, which we define as keeping the
uncertain measure of ruin (UMR) below a fair bound. We assume a quota share
reinsurance contract, through which the primary insurer cedes premiums and
losses with an unknown cession rate to the reinsurer. We need to consider
safety loadings for both the reinsurance and insurance premiums so that ruin
does not become almost certain in the long run. We study the optimal retention
of proportional reinsurance under ruin-related optimization constraints. The
model determines the parameters that identify the best possible scenario for
entering a reinsurance contract. Based on a proportional type of reinsurance
contract, we study the problem only from the insurer's point of view,
assuming that reinsurers are willing and able to take on the whole committed
risk if required.
The rest of the paper is organized as follows. Since uncertainty theory is
relatively new, in the next section we present some basics of the theory which
will be used in the main parts of the paper. Section 3 includes three
subsections. First, we introduce a new way of calculating the uncertain
measure of ruin, followed by a detailed example. We then discuss optimal
reinsurance as an uncertain optimization problem and illustrate the results
through an example in non-life insurance. The section ends with a
generalization of the proposed model. The conclusion suggests some possible
extensions that might make the model more reliable.
## 2 Preliminaries
Let $\Gamma$ be a nonempty set and $\mathscr{L}$ a $\sigma$-algebra over
$\Gamma$. Each element $\Lambda\in\mathscr{L}$ is called an event. A set
function $\mathscr{M}$ from $\mathscr{L}$ to $[0,1]$ that satisfies the
following axioms is called an uncertain measure [30]:
###### Axiom 1.
(Normality Axiom) $\mathscr{M}\\{\Gamma\\}=1$.
###### Axiom 2.
(Duality Axiom) $\mathscr{M}\\{\Lambda\\}+\mathscr{M}\\{\Lambda^{c}\\}=1$ for
any event $\Lambda$.
###### Axiom 3.
(Subadditivity Axiom) For every countable sequence of events
$\Lambda_{1},\Lambda_{2},\ldots$,
$\mathscr{M}\\{\bigcup_{i=1}^{\infty}\Lambda_{i}\\}\leq\sum_{i=1}^{\infty}\mathscr{M}\\{\Lambda_{i}\\}.$
(2)
The triplet $(\Gamma,\mathscr{L},\mathscr{M})$ is called an uncertainty space.
The Product axiom, distinguishing the probability theory from the uncertainty
theory, is defined as follows [32].
###### Axiom 4.
(Product Axiom) Let $(\Gamma_{k},\mathscr{L}_{k},\mathscr{M}_{k})$ be
uncertainty spaces for $k=1,2,\ldots$. The product uncertain measure
$\mathscr{M}$ is a measure on the product $\sigma$-algebra
$\mathscr{L}_{1}\times\mathscr{L}_{2}\times\cdots$
satisfying
$\mathscr{M}\\{\prod_{k=1}^{\infty}\Lambda_{k}\\}=\bigwedge_{k=1}^{\infty}\mathscr{M}_{k}\\{\Lambda_{k}\\}.$
(3)
An uncertain variable has been defined to ease the quantitative modeling of
phenomena in uncertainty theory [30]. It is a measurable function from an
uncertainty space $(\Gamma,\mathscr{L},\mathscr{M})$ to the set of real
numbers, such that for any Borel set $B$, the set $\\{\xi\in
B\\}=\\{\gamma\in\Gamma|\xi(\gamma)\in B\\}$ is an event. The uncertainty
distribution of an uncertain variable $\xi$ is defined as [30]
$\Phi(x)=\mathscr{M}\\{\xi\leq x\\},\qquad\forall x\in\Re.$ (4)
There are several uncertain variables in the literature of the uncertainty
theory. The simplest one is the linear uncertain variable with the following
distribution,
$\displaystyle\Phi(x)=\begin{cases}0&~{}{\rm if}~{}x\leq a\\\
\displaystyle\frac{x-a}{b-a}&~{}{\rm if}~{}a\leq x\leq b\\\ 1&~{}{\rm
if}~{}x\geq b,\end{cases}$ (5)
denoted by $\mathscr{L}(a,b)$ where $a$ and $b$ are real numbers with $a<b$.
An uncertain variable $\xi$ is called normal and denoted by
$\mathscr{N}(e,\sigma)$ [30], if it has the normal uncertainty distribution
$\Phi(x)=\left(1+\exp\left(\frac{\pi(e-x)}{\sqrt{3}\sigma}\right)\right)^{-1},\quad x\in\Re,$
(6)
where $e$ and $\sigma$ are real numbers with $\sigma>0$. Moreover, an
uncertain variable $\xi$ is called lognormal if $\ln\xi$ is a normal uncertain
variable. Its distribution reads as
$\Phi(x)=\left(1+\exp\left(\frac{\pi(e-\ln x)}{\sqrt{3}\sigma}\right)\right)^{-1},\quad x\geq 0.$
(7)
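These distributions are straightforward to evaluate numerically; the short sketch below (function names ours) implements eqs. (5)-(7).

```python
import math

def linear_cdf(x, a, b):
    """Linear uncertainty distribution L(a, b), eq. (5)."""
    if x <= a:
        return 0.0
    if x >= b:
        return 1.0
    return (x - a) / (b - a)

def normal_cdf(x, e, sigma):
    """Normal uncertainty distribution N(e, sigma), eq. (6)."""
    return 1.0 / (1.0 + math.exp(math.pi * (e - x) / (math.sqrt(3) * sigma)))

def lognormal_cdf(x, e, sigma):
    """Lognormal uncertainty distribution, eq. (7)."""
    return 0.0 if x <= 0 else normal_cdf(math.log(x), e, sigma)

print(linear_cdf(2.0, 1, 3))                # 0.5, the midpoint of L(1,3)
print(normal_cdf(2.0, 2, 1))                # 0.5 at x = e
print(lognormal_cdf(math.exp(2), 2, 1))     # 0.5 at ln(x) = e
```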
Uncertain variables $\xi_{1},\xi_{2},\dots,\xi_{n}$ are independent if [32]
$\mathscr{M}\left\\{\bigcap_{i=1}^{n}(\xi_{i}\in
B_{i})\right\\}=\bigwedge_{i=1}^{n}\mathscr{M}\left\\{\xi_{i}\in
B_{i}\right\\},$ (8)
for arbitrary Borel sets $B_{1},B_{2},\dots,B_{n}$. An uncertain distribution
$\Phi(x)$ is called regular if it is a continuous and strictly increasing
function w.r.t. $x$ for all $0<\Phi(x)<1$, and
$\lim_{x\rightarrow-\infty}\Phi(x)=0,\qquad\lim_{x\rightarrow+\infty}\Phi(x)=1.$
(9)
Let $\xi_{1},\xi_{2},\ldots,\xi_{n}$ be independent uncertain variables with
regular uncertainty distributions $\Phi_{1},\Phi_{2},\ldots,\Phi_{n}$,
respectively. If $f(\xi_{1},\xi_{2},\ldots,\xi_{n})$ is strictly increasing
w.r.t. $\xi_{1},\xi_{2},\ldots,\xi_{m}$ and strictly decreasing w.r.t.
$\xi_{m+1},\xi_{m+2},\ldots,\xi_{n}$, then $f$ is an uncertain variable with
the uncertainty distribution [34, Theorem 1.26]
$\Psi(x)=\sup_{f(x_{1},x_{2},\ldots,x_{n})=x}\left(\min_{1\leq i\leq
m}\Phi_{i}(x_{i})\wedge\min_{m+1\leq i\leq n}(1-\Phi_{i}(x_{i}))\right).$ (10)
###### Definition 1.
Let $\xi$ be an uncertain variable with regular uncertainty distribution
$\Phi(x)$. The inverse function $\Phi^{-1}(\alpha)$ is called the inverse
uncertainty distribution of $\xi$.
###### Theorem 2.
If $f(\xi_{1},\xi_{2},\ldots,\xi_{n})$ is strictly increasing w.r.t.
$\xi_{1},\xi_{2},\ldots,\xi_{m}$ and strictly decreasing w.r.t.
$\xi_{m+1},\xi_{m+2},\ldots,\xi_{n}$, then $f$ is an uncertain variable with
the inverse uncertainty distribution
$\Psi^{-1}(\alpha)=f(\Phi_{1}^{-1}(\alpha),\ldots,\Phi_{m}^{-1}(\alpha),\Phi_{m+1}^{-1}(1-\alpha),\ldots,\Phi_{n}^{-1}(1-\alpha)).$
(11)
###### Theorem 3.
[33] Let $\xi_{1},\xi_{2},\ldots,\xi_{n}$ be independent uncertain variables
with regular uncertainty distributions $\Phi_{1},\ldots,\Phi_{n}$,
respectively. If $f(\xi_{1},\ldots,\xi_{n})$ is strictly increasing with
respect to $\xi_{1},\xi_{2},\ldots,\xi_{m}$ and strictly decreasing with
respect to $\xi_{m+1},\xi_{m+2},\ldots,\xi_{n}$, then
$\mathscr{M}\left\\{f(\xi_{1},\xi_{2},\ldots,\xi_{n})\leq 0\right\\},$ (12)
is the root $\alpha$ of the equation
$f(\Phi_{1}^{-1}(\alpha),\ldots,\Phi_{m}^{-1}(\alpha),\Phi_{m+1}^{-1}(1-\alpha),\ldots,\Phi_{n}^{-1}(1-\alpha))=0.$
(13)
If
$f(\Phi_{1}^{-1}(\alpha),\ldots,\Phi_{m}^{-1}(\alpha),\Phi_{m+1}^{-1}(1-\alpha),\ldots,\Phi_{n}^{-1}(1-\alpha))<0,$
for all $\alpha$, then we set the root $\alpha=1$; and if
$f(\Phi_{1}^{-1}(\alpha),\ldots,\Phi_{m}^{-1}(\alpha),\Phi_{m+1}^{-1}(1-\alpha),\ldots,\Phi_{n}^{-1}(1-\alpha))>0$
for all $\alpha$, then we set the root $\alpha=0$.
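Theorem 3 reduces computing $\mathscr{M}\\{f\leq 0\\}$ to a one-dimensional root search, and the monotonicity assumptions on $f$ make the left side of (13) increasing in $\alpha$, so bisection suffices. The sketch below is our own illustration, applied to $f(\eta_{1},\xi_{1})=\eta_{1}-c\xi_{1}$ with the lognormal and linear distributions used later in Example 1.

```python
import math

def measure_f_leq_zero(g, tol=1e-10):
    """Root alpha of g(alpha) = f(Phi_1^{-1}(alpha), ..., Phi_n^{-1}(1-alpha))
    as in eq. (13); g is increasing in alpha under Theorem 3's assumptions."""
    lo, hi = tol, 1.0 - tol
    if g(lo) > 0:                 # g > 0 for all alpha: root is set to 0
        return 0.0
    if g(hi) < 0:                 # g < 0 for all alpha: root is set to 1
        return 1.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if g(mid) < 0 else (lo, mid)
    return 0.5 * (lo + hi)

# inverse distributions of L(1,3) and LOGN(2,1)
lin_inv = lambda a: 1 + 2 * a
logn_inv = lambda a: math.exp(2 + math.sqrt(3) / math.pi * math.log(a / (1 - a)))

c = 26.0
# M{eta_1 - c*xi_1 <= 0}: eta enters increasingly, xi decreasingly
print(measure_f_leq_zero(lambda a: logn_inv(a) - c * lin_inv(1 - a)))
```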
The expected value of an uncertain variable $\xi$ is defined as [30]
$E[\xi]=\int_{0}^{+\infty}\mathscr{M}\\{\xi>r\\}dr-\int_{-\infty}^{0}\mathscr{M}\\{\xi\leq
r\\}dr,$ (14)
provided that at least one of the two integrals is finite. Having an uncertain
distribution $\Phi(x)$ of an uncertain variable $\xi$, one can calculate its
expected value by [30]
$E[\xi]=\int_{0}^{\infty}(1-\Phi(x))dx-\int_{-\infty}^{0}\Phi(x)dx,$ (15)
provided that at least one of these integrals is finite. It is proved that for
an uncertain variable $\xi$ with regular uncertain distribution $\Phi(x)$, it
holds
$E[\xi]=\int_{0}^{1}\Phi^{-1}(\alpha)d\alpha.$ (16)
Variance of an uncertain variable $\xi$ with finite expected value $E[\xi]$ is
also defined as [30]
$V[\xi]=E\left[(\xi-E[\xi])^{2}\right].$ (17)
Let $\xi$ and $\eta$ be independent uncertain variables with finite expected
values. Then, for any real numbers $a$ and $b$ [34]
$E\left[a\xi+b\eta\right]=aE\left[\xi\right]+bE\left[\eta\right].$ (18)
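A quick numeric check of eq. (16), with names of our own choosing:

```python
def expected_value(inv_dist, n=100_000):
    """E[xi] = integral of Phi^{-1}(alpha) over (0,1), eq. (16),
    approximated by the midpoint rule for a regular distribution."""
    return sum(inv_dist((i + 0.5) / n) for i in range(n)) / n

# linear L(1,3): inverse is 1 + 2*alpha, so the expected value is 2
print(expected_value(lambda a: 1 + 2 * a))
```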
### 2.1 Uncertain Process
Let $(\Gamma,\mathscr{L},\mathscr{M})$ be an uncertainty space and $T$ a
totally ordered set (e.g., time). For the sake of simplicity, we use the term
“time” for each member of this set. An uncertain process is a function
$X_{t}(\gamma)$ from $T\times\Gamma$ to the set of real numbers such that
$\\{X_{t}\in B\\}$ is an event for any Borel set $B$ of real numbers at each
time $t$. An uncertain process $X_{t}$ has independent increments if
$X_{t_{0}},X_{t_{1}}-X_{t_{0}},X_{t_{2}}-X_{t_{1}},\ldots,X_{t_{k}}-X_{t_{k-1}},$
(19)
are independent uncertain variables, where $t_{0}$ is the initial time and
$t_{1},t_{2},\ldots,t_{k}$ are any times with
$t_{0}<t_{1}<t_{2}<\cdots<t_{k}$.
An uncertain process has stationary increments if its increments are
independent identically distributed (iid) uncertain variables whenever the
time intervals have equal lengths. For an uncertain process $X_{t}$, fixing
$\gamma\in\Gamma$, the resulting function $X_{t}(\gamma)$ is called a sample
path of $X_{t}$.
###### Definition 4.
[32] An uncertain process $C(t)$ is called a Liu process if
(a) $C(0)=0$ and almost all sample paths are Lipschitz continuous,
(b) $C(t)$ has stationary and independent increments,
(c) Every increment $C(t+s)-C(s)$ is a normal uncertain variable with expected
value zero and variance $t^{2}$.
###### Definition 5.
[31] Let $\xi_{1},\xi_{2},\ldots$ be iid uncertain interarrival times. Define
$S_{0}=0$ and $S_{n}=\xi_{1}+\xi_{2}+\cdots+\xi_{n}$ for $n\geq 1$. The
uncertain process
$N_{t}=\max_{n\geq 0}\\{n|S_{n}\leq t\\}$ (20)
is called an uncertain renewal process.
###### Theorem 6.
[34] Let $N_{t}$ be a renewal process with iid uncertain interarrival times
$\xi_{1},\xi_{2},\ldots$. Further, let $\Phi$ denote their common uncertainty
distribution. Then, $N_{t}$ has an uncertainty distribution
$\Upsilon_{t}(x)=1-\Phi\left(\frac{t}{\lfloor x\rfloor+1}\right),\qquad\forall
x\geq 0,$ (21)
where $\lfloor x\rfloor$ represents the floor function.
###### Theorem 7.
[34] Let $N_{t}$ be a renewal process with iid uncertain interarrival times
$\xi_{1},\xi_{2},\ldots$ Then the average renewal number
$\frac{N_{t}}{t}\rightarrow\frac{1}{\xi_{1}}$ (22)
in the sense of convergence in distribution as $t\rightarrow\infty$.
###### Theorem 8.
[34] Let $N_{t}$ be a renewal process with iid uncertain interarrival times
$\xi_{1},\xi_{2},\ldots$ Then
$\lim_{t\rightarrow\infty}\frac{E[N_{t}]}{t}=E\left[\frac{1}{\xi_{1}}\right].$
(23)
If $\Phi$ is regular, then
$\lim_{t\rightarrow\infty}\frac{E[N_{t}]}{t}=\int_{0}^{1}\frac{1}{\Phi^{-1}(\alpha)}d\alpha.$
(24)
###### Definition 9.
[34] Let $\xi_{1},\xi_{2},\ldots$ be iid uncertain interarrival times.
Further, let $\eta_{1},\eta_{2},\ldots$ be iid uncertain rewards or costs
(losses, in the insurance case), with $\eta_{i}$ associated with the $i$-th
interarrival time $\xi_{i}$ for $i=1,2,\ldots$. Then
$R_{t}=\sum_{i=1}^{N_{t}}\eta_{i},$ (25)
is called a renewal reward process, where $N_{t}$ is the renewal process with
uncertain interarrival times $\xi_{1},\xi_{2},\ldots$
###### Theorem 10.
[34] Let $R_{t}$ be a renewal reward process with iid uncertain interarrival
times $\xi_{1},\xi_{2},\ldots$ and iid uncertain rewards
$\eta_{1},\eta_{2},\ldots$ Assume $(\xi_{1},\xi_{2},\ldots)$ and
$(\eta_{1},\eta_{2},\ldots)$ are independent uncertain vectors, and those
interarrival times and rewards have uncertainty distributions $\Phi$ and
$\Psi$, respectively. Then, $R_{t}$ has uncertainty distribution
$\Upsilon_{t}(x)=\max_{k\geq
0}\left\\{\left(1-\Phi\left(\frac{t}{k+1}\right)\right)\bigwedge\Psi(\frac{x}{k})\right\\}.$
(26)
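As an illustration of Theorem 10, the sketch below (ours) evaluates $\Upsilon_{t}(x)$ numerically, truncating the unbounded max over $k$ at a finite bound and using linear interarrival times with lognormal claims.

```python
import math

def renewal_reward_cdf(x, t, Phi, Psi, kmax=1000):
    """Uncertainty distribution Upsilon_t(x) of R_t, eq. (26).
    The k = 0 term is 1 - Phi(t), taking Psi(x/0) = 1 for x > 0;
    the unbounded max over k is truncated at kmax."""
    best = 1.0 - Phi(t)
    for k in range(1, kmax + 1):
        best = max(best, min(1.0 - Phi(t / (k + 1)), Psi(x / k)))
    return best

# linear interarrival times L(1,3) and lognormal claims LOGN(2,1)
Phi = lambda z: min(1.0, max(0.0, (z - 1) / 2))
Psi = lambda z: 0.0 if z <= 0 else \
    1.0 / (1.0 + math.exp(math.pi * (2 - math.log(z)) / math.sqrt(3)))
print(renewal_reward_cdf(x=500.0, t=100.0, Phi=Phi, Psi=Psi))
```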
###### Theorem 11.
Let $R_{t}$ be a renewal reward process with iid uncertain interarrival times
$\xi_{1},\xi_{2},\ldots$ and iid uncertain rewards $\eta_{1},\eta_{2},\ldots$
Assume $(\xi_{1},\xi_{2},\ldots)$ and $(\eta_{1},\eta_{2},\ldots)$ are
independent uncertain vectors. Then the reward rate
$\frac{R_{t}}{t}\rightarrow\frac{\eta_{1}}{\xi_{1}},$ (27)
in the sense of convergence in distribution as $t\rightarrow\infty$.
###### Definition 12.
[36] Let $U_{t}$ be an insurance risk process. Then the ruin index is defined
as the uncertain measure that $U_{t}$ eventually becomes negative, i.e.,
${\rm Ruin}=\mathscr{M}\left\\{\inf_{t\geq 0}U_{t}<0\right\\}.$ (28)
## 3 The Optimization Model
Here, we consider the process of variation in the wealth of the primary
insurer as
$U_{t}^{x}=u+\left[x(1+\rho)-(\rho-\theta)\right]ct-xR_{t},$ (29)
where $x\in(0,1]$ is a control parameter, and $\rho$ and $\theta$ are the
reinsurance and insurance safety loadings, respectively. It is worth recalling
that the term $c$, the insurance premium, should be chosen so as to ensure
that ruin does not happen almost surely for any amount of initial capital or
any reinsurance program.
When a company considers taking a reinsurance contract, solving the following
optimization problem identifies the ruin index:
${\rm Ruin}=\max_{k\geq 1}\sup_{z\geq 0}\bigg\{\Phi\Big(\frac{z}{k+1}\Big)\bigwedge\Big(1-\Psi\Big(\frac{u+\big[x(1+\rho)-(\rho-\theta)\big]cz}{xk}\Big)\Big)\bigg\}.$
(30)
Note that solving this problem is very costly; for most uncertainty
distributions it is practically impossible, and only approximations can be
obtained. To overcome this difficulty, we propose a simpler method for the UMR
and construct a constraint to prevent ruin. For the ultimate ruin, i.e., ruin
over an infinite time horizon, it is sensible to model an optimization problem
depending on $t$, taking as the objective function the asymptotic expected
value of the insurer's risk process. Formally, we consider the following
constrained optimization problem:
$\begin{array}[]{rl}\displaystyle\max_{0\leq x\leq 1}&\displaystyle\lim_{t\rightarrow\infty}\frac{E[U_{t}^{x}]}{t}\\\
\textrm{s.t.}&{\rm Ruin}\leq\varepsilon,\end{array}$ (31)
where $\varepsilon$ is a small positive number chosen by the management as the
maximum tolerable ruin.
### 3.1 Calculating the Uncertain Measure of Ruin
First, we need to compute the UMR for the primary insurer. During the contract
duration, claim arrivals coincide with points in the interval $(0,t]$.
Since ruin can only happen at one of these arrivals, we may regard
$t=\sum_{i=1}^{N_{t}}\xi_{i}$, where the $\xi_{i}$'s are the uncertain
interarrival times. Therefore, the risk process becomes
$U_{t}=u+\sum_{i=1}^{N_{t}}(c\xi_{i}-\eta_{i}).$ (32)
Let $\xi_{i}$ and $\eta_{i}$ be independent uncertain variables. Then, by
Theorem 2, the inverse uncertainty distribution of $\eta_{1}-c\xi_{1}$ is
$\Upsilon^{-1}_{1}(\alpha)=\Psi^{-1}_{1}(\alpha)-c\Phi^{-1}_{1}(1-\alpha)=\Psi^{-1}(\alpha)-c\Phi^{-1}(1-\alpha).$
(33)
Let $\Theta_{i}=\eta_{i}-c\xi_{i}$ and $M_{k}=\Theta_{1}+\cdots+\Theta_{k}$.
It can be easily understood that the inverse uncertainty distribution of
$M_{k}$ for all $k$ is
$\displaystyle L^{-1}_{k}(\alpha)=k[\Psi^{-1}(\alpha)-c\Phi^{-1}(1-\alpha)],$
(34)
where $L_{k}$ is the uncertainty distribution of the uncertain variable
$M_{k}$.
Notice that ruin happens when $\max_{k\geq 1}\left\\{M_{k}\right\\}\geq u.$
Recall that direct calculation of this quantity through its uncertainty
distribution is a very costly task, especially when distributions like the
lognormal are involved in the model. Taking
$f=\max_{k}\left\\{x_{1},\ldots,x_{k}\right\\}-u$, the $M_{k}$'s and $f$
satisfy the requirements of Theorem 3. Therefore, the UMR is
$\mathscr{M}\left\\{f(M_{1},M_{2},\ldots,M_{k})>0\right\\}.$ (35)
For $k$ large enough, the root $\alpha$ of
$\displaystyle\max_{k}\left\\{L^{-1}_{1}(1-\alpha),L^{-1}_{2}(1-\alpha),\ldots,L^{-1}_{k}(1-\alpha)\right\\}-u=0$
(36)
identifies the UMR. Equivalently, we have to solve
$\displaystyle\max\bigg{\\{}\Psi^{-1}(1-\alpha)-c\Phi^{-1}(\alpha),\ldots,k(\Psi^{-1}(1-\alpha)-c\Phi^{-1}(\alpha))\bigg{\\}}=u.$
(37)
In the selection of the value $c$, we need the assumption
$cE[\xi_{i}]>E[\eta_{i}],$ (38)
because over each interarrival period one expects a loss of $E[\eta_{i}]$
while collecting a premium of $cE[\xi_{i}]$, so the premium income must exceed
the expected loss. Since the $\eta_{i}$'s are iid, these expected values are
identical for all $i$, and likewise for the $\xi_{i}$'s. This criterion on
premiums prevents ruin from becoming a certain event in the long run.
To illustrate the practicality of this approach in calculating the ruin,
consider the following concrete example.
###### Example 1.
Consider an insurance company that takes on some risk by signing a particular
class of underwriting. It invites some experts to build two uncertain
distributions, one for the potential number of claims and one for their
severity. Suppose their opinions on the severity of claims and on the
interarrival times are $\mathscr{LOGN}(2,1)$ and $\mathscr{L}(1,3)$,
respectively.
A way to measure the possible bankruptcy caused by these contracts is to
identify the company's UMR. As discussed above, it is the root $\alpha$ of the
following equation, which we computed using Mathematica [25].
$\displaystyle\max_{k\geq 1}\bigg\{\exp\Big(2+\frac{\sqrt{3}}{\pi}\ln\frac{1-\alpha}{\alpha}\Big)-26(2\alpha+1),\ldots,k\bigg(\exp\Big(2+\frac{\sqrt{3}}{\pi}\ln\frac{1-\alpha}{\alpha}\Big)-26(2\alpha+1)\bigg)\bigg\}=u.$
(40)
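To make this concrete, the following bisection sketch of ours solves the above equation numerically (rather than with Mathematica); here $K$ plays the role of the largest number of claims considered, and the helper names are our own.

```python
import math

def umr(u, c, K, psi_inv, phi_inv, tol=1e-12):
    """Root alpha of max_{1<=k<=K} k*(Psi^{-1}(1-a) - c*Phi^{-1}(a)) = u,
    i.e. eq. (37) truncated at K claims; the left side decreases in alpha."""
    def h(a):
        d = psi_inv(1 - a) - c * phi_inv(a)
        return max(d, K * d) - u     # the max over k sits at k=1 or k=K
    lo, hi = tol, 1 - tol
    if h(lo) < 0:
        return 0.0                   # the maximum never reaches u: UMR ~ 0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if h(mid) > 0 else (lo, mid)
    return 0.5 * (lo + hi)

# Example 1 inputs: interarrivals ~ L(1,3), claims ~ LOGN(2,1), c = 26
phi_inv = lambda a: 1 + 2 * a
psi_inv = lambda a: math.exp(2 + math.sqrt(3) / math.pi * math.log(a / (1 - a)))
print(umr(u=10000.0, c=26.0, K=10000, psi_inv=psi_inv, phi_inv=phi_inv))
```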
Fig. 1(a) shows the UMR as a function of the number of claims $k$ with fixed
$u=10000$. Notice that this measure becomes stable once $k$ gets big enough
(for $k=10000$ and higher), after which it does not change considerably.
This is a completely rational outcome, because we expect the ultimate ruin
measure to rise as the number of claims increases; beyond a certain point
there is no significant fluctuation in the UMR, mostly because of the growth
of the premiums. We also illustrate the UMR for different amounts of the
initial capital $u$, with fixed $k=100$, in Fig. 1(b). As depicted,
increasing the initial capital reduces the measure of ruin substantially, to
the point where it is almost zero.
(a) Number of Claims $k$ for $u=10000$.
(b) Different amounts of $u$ for $k=100$.
Figure 1: The sample UMR in Example 1.
### 3.2 Optimal Retention
Applying these results to the case of a quota share reinsurance contract, we
devise a model describing the primary insurer's situation. Solving the
optimization problem provides the optimal retention value $x$. First, we
need to calculate the UMR when reinsurance is involved. After sharing its
risk with the reinsurer, the primary insurer has the wealth process
$U_{t}^{x}=u+\sum_{i=1}^{N_{t}}\Big(\left[x(1+\rho)-(\rho-\theta)\right]c\xi_{i}-x\eta_{i}\Big),$
(41)
under the assumption $\rho>\theta$, meaning that the safety loading in
reinsurance is larger than the one in primary insurance. Notice that if we
take $x=1$, the wealth process (41) describes the primary insurer without
reinsurance.
Let $\left[x(1+\rho)-(\rho-\theta)\right]c$ be denoted by $\beta$. Similar to
Sec. 3.1, each $x\eta_{i}-\beta\xi_{i}$ has the inverse uncertainty
distribution
$\hat{\Upsilon}^{-1}(\alpha)=x\Psi^{-1}(\alpha)-\beta\Phi^{-1}(1-\alpha).$
(42)
Moreover, for $\hat{\Theta}_{i}=x\eta_{i}-\beta\xi_{i}$, the uncertain
variable $\hat{M}_{k}=\hat{\Theta}_{1}+\cdots+\hat{\Theta}_{k}$ has the
inverse uncertainty distribution
$\hat{L}_{k}^{-1}(\alpha)=k\left(x\Psi^{-1}(\alpha)-\beta\Phi^{-1}(1-\alpha)\right),$
(43)
where $\hat{L}_{k}$ is the uncertain distribution of $\hat{M}_{k}$. To reflect
the profitability of the contracts for the primary insurance company, we
consider
$\Bigg{[}x(1+\rho)-(\rho-\theta)\Bigg{]}c-x\int_{0}^{1}\left[\frac{\Psi^{-1}(\alpha)}{\Phi^{-1}(1-\alpha)}\right]d\alpha,$
(44)
as the objective function, which is the asymptotic expected value of the
wealth per unit time: in the long run, the company makes a profit equal to the
objective function in each unit of time. Our purpose is to make this amount
positive and as large as possible through an optimal value of $x$.
Considering the profitability of the company over each time interval motivates
optimizing the expected value of the primary insurer's wealth asymptotically.
The problem identifies the optimal retention rate that maximizes the objective
function (the company's wealth), subject to keeping the UMR below a
predefined value. Thus, we have the following optimization problem:
$\begin{array}[]{l}\displaystyle\max_{0\leq x\leq
1}\Big{[}x(1+\rho)-(\rho-\theta)\Big{]}c-x\int_{0}^{1}\frac{\Psi^{-1}(\alpha)}{\Phi^{-1}(1-\alpha)}d\alpha\\\\[8.53581pt]
\textrm{s.t.}\quad\textrm{Root}\Big{\\{}\max_{k\geq
1}\big{\\{}\hat{L}^{-1}_{1}(1-\alpha),\ldots,\hat{L}^{-1}_{k}(1-\alpha)\big{\\}}-u=0\Big{\\}}\leq\varepsilon.\end{array}$
(45)
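One simple way of solving (45) numerically is a grid search over $x$, with the UMR constraint checked by bisection at each grid point. The sketch below is ours; for illustration it plugs in the inputs of Example 2, which follows.

```python
import math

def umr_x(u, x, beta, K, psi_inv, phi_inv, tol=1e-12):
    """UMR with retention x: root alpha of
    max_{1<=k<=K} k*(x*Psi^{-1}(1-a) - beta*Phi^{-1}(a)) = u,
    i.e. the constraint of (45); the left side decreases in alpha."""
    def h(a):
        d = x * psi_inv(1 - a) - beta * phi_inv(a)
        return max(d, K * d) - u      # max over k is at k=1 (d<0) or k=K (d>0)
    lo, hi = tol, 1 - tol
    if h(lo) < 0:
        return 0.0                    # the maximum never reaches u
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if h(mid) > 0 else (lo, mid)
    return 0.5 * (lo + hi)

def optimal_retention(u, c, rho, theta, eps, K, psi_inv, phi_inv, grid=1000):
    """Grid search over x for problem (45)."""
    n = 20000                          # midpoint rule for the integral in (44)
    ratio = sum(psi_inv((i + .5) / n) / phi_inv(1 - (i + .5) / n)
                for i in range(n)) / n
    best = None
    for i in range(grid + 1):
        x = i / grid
        beta = (x * (1 + rho) - (rho - theta)) * c
        if umr_x(u, x, beta, K, psi_inv, phi_inv) <= eps:
            value = beta - x * ratio   # objective of (45)
            if best is None or value > best[1]:
                best = (x, value)
    return best

phi_inv = lambda a: 1 + 2 * a                                   # L(1,3)
psi_inv = lambda a: math.exp(2 + math.sqrt(3) / math.pi
                             * math.log(a / (1 - a)))           # LOGN(2,1)
print(optimal_retention(u=100000.0, c=26.0, rho=0.9, theta=0.8,
                        eps=0.005, K=7000, psi_inv=psi_inv, phi_inv=phi_inv))
```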
###### Remark 1.
Observe that in this optimization problem, $x=0$ means that the insurance
company transfers the whole risk to the reinsurer and is left with no premium.
On the other hand, $x=1$ means that the primary insurer takes the whole risk
by itself and does not enter the reinsurance contract. The optimal value of
$x$ depends entirely on the distributions of the severity and the number of
claims, which, we recall, are provided by experts. It also depends on the
initial capital, on parameters such as $\rho$ and $\theta$, and on the amount
of the premium $c$, which is set according to management policy.
###### Example 2.
Suppose an insurance company is deciding on arranging a large number of non-
life insurance contracts; say it aims to underwrite many houses and businesses
in a new area. Since it does not have enough experience in this region, the
management may ask some experts to contribute their opinions, via uncertainty
distributions, on the number and severity of claims based on their experience.
Consider the case where the interarrival times of the renewal process have a
linear distribution, $\mathscr{L}(1,3)$, and the claim variables follow the
lognormal distribution $\mathscr{LOGN}(2,1)$. This means that the experts
believe, overall, that claims keep arriving in the long run, with the time
between consecutive claims being at least 1 and at most 3 units. They also
believe that the loss produced by each claim follows a lognormal
distribution, so that a considerable uncertain measure is always devoted to
rare events.
Now suppose $\rho=0.9$, $\theta=0.8$, $c=26$, and $\varepsilon=0.005$. With
these assumptions, the optimization problem reads as
$\begin{array}[]{ll}\displaystyle\max_{0\leq x\leq 1}&39.6x-2.6\\\
\textrm{s.t.}&\textrm{Root}\Big\{\max_{k\geq 1}k\big[x\exp\big(2+\frac{\sqrt{3}}{\pi}\ln\frac{1-\alpha}{\alpha}\big)-(49.4x-2.6)(2\alpha+1)\big]-u\Big\}\leq 0.005.\end{array}$
(46)
The optimal retentions of the company for $k=100$, a fixed number of possible
claims, and for different amounts of initial capital are represented in
Fig. 2(a), which shows how the retention rate behaves as this value varies. To
analyze the effect of the number of claims on the retention rate, we fix the
initial capital at $u=100000$.
When $u$ is small, the UMR is significantly high; consequently, the retention
needs to be small to keep the UMR under the assumed limit. Raising the initial
capital gives the insurance company more room to take on risk while still
keeping the UMR under the limit, Fig. 2(a). Fig. 2(b) demonstrates that
growing the number $k$ increases the UMR, which forces the retention rate to
decrease so that the problem's constraint stays satisfied. By roughly $k=7000$
(with the initial capital fixed) and beyond, the amount of retention
stabilizes.
(a) Different Amount of the Initial Capital.
(b) Different Possible Amount of the Claims.
Figure 2: Optimal Retention Rate for the Insurance Company Constrained to Keep
UMR Under Preassumed Number in Example 2.
Notice that the number of contracts sold by the primary insurer can be
regarded as the number of potential claims. Under this assumption, a
catastrophic situation would occur if all these contracts suffered a loss and
filed a claim.
This optimization problem can also be used more practically by incorporating
other optimal control parameters to design contracts that reduce the risk
while raising the potential gain. It should also be noticed that increasing
the initial capital $u$ lessens the risk of ruin too; however, we do not use
it as a control parameter since it is absent from the objective function.
### 3.3 A Perturbed Extension
Following the idea of perturbing the risk process by a Brownian motion [18],
one can perturb the uncertain risk process by a Liu process. By perturbation
we mean adding an uncertain process that can take positive and negative
values. This generalized model allows more practical considerations to be
included: it can be interpreted as capturing the imperfect human reckoning in
the experts' contributions when only little data is available, so that adding
a potential fluctuation lets the model mimic this shortcoming. It can also be
viewed as the variation caused by dependencies between the uncertain
variables that the independent model ignores, recalling that they are mostly
dependent in real life.
The perturbed problem would be modeled as
$U_{t}=u+ct-R_{t}+C_{t},$ (47)
where $C_{t}$ is a Liu process having the following inverse uncertainty
distribution.
$\tilde{\Phi}_{t}^{-1}(\alpha)=t\frac{\sqrt{3}}{\pi}\ln(\frac{\alpha}{1-\alpha}).$
(48)
We assume that the Liu process, at each point in time, is independent of the
other uncertain variables in the model. Following the derivation for the non-
perturbed case, we consider the uncertain variables
$\tilde{\Theta}_{i}=\eta_{i}-c\xi_{i}-C_{i},$ (49)
as perturbed versions of the $\Theta_{i}$. The inverse uncertainty
distribution of this uncertain variable is
$\tilde{\Upsilon}^{-1}_{i}(\alpha)=\Psi^{-1}_{i}(\alpha)-c\Phi^{-1}_{i}(1-\alpha)-\tilde{\Phi}_{i}^{-1}(1-\alpha).$
(50)
Defining
$\tilde{M}_{k}=\tilde{\Theta}_{1}+\tilde{\Theta}_{2}+\cdots+\tilde{\Theta}_{k},$
the UMR becomes the uncertain measure of the following event
$\max_{k\geq 1}\left\\{\tilde{M}_{k}\right\\}\geq u,$ (51)
with the inverse uncertainty distribution
$\displaystyle\tilde{L}^{-1}_{k}(\alpha)$ $\displaystyle=$ $\displaystyle
k(\Psi^{-1}(\alpha)-c\Phi^{-1}(1-\alpha)-\tilde{\Phi}_{k}^{-1}(1-\alpha)),\qquad
k\geq 1.$ (52)
Using Theorem 3, for $k$ big enough, the UMR is the root $\alpha$ of
$\displaystyle\max_{k}\left\\{\tilde{L}^{-1}_{1}(1-\alpha),\tilde{L}^{-1}_{2}(1-\alpha),\ldots,\tilde{L}^{-1}_{k}(1-\alpha)\right\\}-u=0,$
(53)
or equivalently the root of
$\begin{array}[]{l}\max_{k\geq
1}\Big{\\{}\Psi^{-1}(1-\alpha)-c\Phi^{-1}(\alpha)-\tilde{\Phi}_{k}^{-1}(\alpha),\ldots,\\\
\hskip
42.67912ptk(\Psi^{-1}(1-\alpha)-c\Phi^{-1}(\alpha)-\tilde{\Phi}_{k}^{-1}(\alpha))\Big{\\}}=u.\end{array}$
(54)
Similarly, when reinsurance is involved in the model, we have
$U_{t}^{x}=u+\left[x(1+\rho)-(\rho-\theta)\right]ct-xR_{t}+C_{t}.$ (55)
Defining $\tilde{\Theta}_{i}=x\eta_{i}-\beta\xi_{i}-C_{i}$, and
$\tilde{M}_{k}=\tilde{\Theta}_{1}+\tilde{\Theta}_{2}+\cdots+\tilde{\Theta}_{k},$
with the inverse uncertainty distribution
$\tilde{L}_{k}^{-1}(\alpha)=k\left(x\Psi^{-1}(\alpha)-\beta\Phi^{-1}(1-\alpha)-\tilde{\Phi}_{k}^{-1}(1-\alpha))\right),$
(56)
the UMR is the root $\alpha$ of
$\max_{k\geq
1}\left\\{\tilde{L}^{-1}_{1}(1-\alpha),\tilde{L}^{-1}_{2}(1-\alpha),\ldots,\tilde{L}^{-1}_{k}(1-\alpha)\right\\}-u=0.$
(57)
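The perturbed root can be computed just as before; the only change is the extra Liu term. The sketch below is ours, reading $\tilde{\Phi}_{k}$ in eq. (56) as the Liu process distribution at time $k$ and reusing the Example 1 distributions for illustration.

```python
import math

def perturbed_umr(u, x, beta, K, psi_inv, phi_inv, tol=1e-10):
    """Root alpha of eq. (57), with Ltilde_k^{-1} as in eq. (56);
    every term is decreasing in alpha, so bisection applies."""
    liu_inv = lambda t, a: t * math.sqrt(3) / math.pi * math.log(a / (1 - a))
    def h(a):
        return max(k * (x * psi_inv(1 - a) - beta * phi_inv(a)
                        - liu_inv(k, a))        # Liu term at "time" k
                   for k in range(1, K + 1)) - u
    lo, hi = tol, 1 - tol
    if h(lo) < 0:
        return 0.0                              # the maximum never reaches u
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if h(mid) > 0 else (lo, mid)
    return 0.5 * (lo + hi)

phi_inv = lambda a: 1 + 2 * a                                   # L(1,3)
psi_inv = lambda a: math.exp(2 + math.sqrt(3) / math.pi
                             * math.log(a / (1 - a)))           # LOGN(2,1)
print(perturbed_umr(u=10000.0, x=1.0, beta=26.0, K=100,
                    psi_inv=psi_inv, phi_inv=phi_inv))
```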
## 4 Conclusions
In this paper, we investigated a model designed for an insurance company that
wants to share a part of its risk with a reinsurer. Uncertainty theory was
applied, for the first time, to model the uncertainty involved in this
problem. Comparisons between the initial capital, the maximum number of
possible claims, and the optimal retention were presented. Further research
could model other kinds of reinsurance contracts as uncertain optimization
problems with similar or different sorts of constraints.
## Conflict of interest
The authors declare that they have no conflict of interest.
## References
* [1] Albrecher H, Beirlant J and Teugels J, Reinsurance: Actuarial and statistical aspects. (2017) (John Wiley & Sons)
* [2] Albrecher H and Hipp C, Lundberg's risk process with tax, Blätter der DGVFM 28(1) (2007) 13-28
* [3] Andersen S E, On the collective theory of risk in case of contagion between claims, Bulletin of the Institute of Mathematics and its Applications 12(2) (1957) 275-279
* [4] Arrow K J, Uncertainty and the welfare economics of medical care, American Economic Review 53 (1963) 941-973
* [5] Asmussen S and Albrecher H, Ruin probabilities, (2010) (World Scientific Singapore)
* [6] Bai L, Cai J and Zhou M, Optimal reinsurance policies for an insurer with a bivariate reserve risk process in a dynamic setting, Insurance: Mathematics and Economics 53(3) (2013) 664-670
* [7] Bai L and Guo Y J, Optimal dynamic excess-of-loss reinsurance and multidimensional portfolio selection. Science China Mathematics 53(7)(2010) 1787-1804
* [8] Bauerle N, Benchmark and mean-variance problems for insurers, Mathematical Methods of Operations Research 62(1) (2005) 159-165
* [9] Borch K, The safety loading of reinsurance premiums, Scandinavian Actuarial Journal 1960 (1960) 163-184
* [10] Bowers L N, Gerber U H, Hickman C J, Jones A D and Nesbitt J C, Actuarial Mathematics, Transactions of the Faculty of Actuaries 41 (1987) 91-94
* [11] Buhlmann H, Mathematical methods in risk theory, (2007) (Springer Science & Business Media)
* [12] Cai J and Tan S K, Optimal retention for a stop-loss reinsurance under the VaR and CTE risk measures, ASTIN Bulletin: The Journal of the IAA 37(1) (2007) 93-112
* [13] Cai J, Tan S K, Weng C and Zhang Y, Optimal reinsurance under VaR and CTE risk measures, Insurance: Mathematics and Economics 43(1) (2008) 185-196
* [14] Centeno L M , On combining quota-share and excess of loss, ASTIN Bulletin: The Journal of the IAA 15(1) (1985) 49-63
* [15] Centeno L M and Simoes O, Optimal reinsurance, RACSAM-Revista de la Real Academia de Ciencias Exactas, Fisicas Y Naturales. Serie A. Mathematics 103(2) (2009) 387-404
* [16] Chen S, Li Z and Li K, Optimal investment reinsurance policy for an insurance company with var constraint, Insurance: Mathematics and Economics 47(2) (2010) 144-153
* [17] De Vylder F E, A practical solution to the problem of ultimate ruin probability, Scandinavian Actuarial Journal 2 (1978) 114-119
* [18] Dufresne F and Gerber U H, Risk theory for the compound Poisson process that is perturbed by diffusion, Insurance: Mathematics and Economics 10(1) (1991) 51-59
* [19] Embrechts P, Kluppelberg C and Mikosch T, Modelling extremal events: for insurance and finance (2013) (Springer Science & Business Media)
* [20] Gerber U H, An introduction to Mathematical risk theory, (1979) (S. S. Huebner Foundation, R. D. Irwin Inc. Homeward Illinois)
* [21] Gerber U H, Shiu S E and Yang H, A constraint free approach to optimal reinsurance, Scandinavian Actuarial Journal 2019(1) (2019) 62-79
* [22] Grandell J, A class of approximations of ruin probabilities, Scandinavian Actuarial Journal 1977(1) (1977) 37-52
* [23] Heerwaarden V A, Kaas R and Goovaerts M, Optimal reinsurance in relation to ordering of risks, Insurance: Mathematics and Economics 8(1) (1989) 11-17
* [24] Iglehart L D, Diffusion approximations in collective risk theory, Journal of Applied Probability 6(2) (1969) 285-292
* [25] Wolfram Research, Mathematica, Version 12.1, Champaign, IL (2020)
* [26] Kahn M P, Some remarks on a recent paper by Borch, ASTIN Bulletin: The Journal of the IAA 1(5) (1961) 265-272
* [27] Kaluszka M, Optimal reinsurance under mean-variance premium principles, Insurance: Mathematics and Economics 28(1) (2001) 61-67
* [28] Li S, Peng J and Zhang B, The uncertain premium principle based on the distortion function, Insurance: Mathematics and Economics 53(2) (2013) 317-324
* [29] Liang Z and Bayraktar E, Optimal reinsurance and investment with unobservable claim size and intensity, Insurance: Mathematics and Economics 55 (2014) 156-166
* [30] Liu B, Uncertainty Theory, (2007) (Springer)
* [31] Liu B, Fuzzy process, Hybrid process and uncertain process, Journal of Uncertain systems 2(1) (2008) 3-16
* [32] Liu B, Some research problems in uncertainty theory, Journal of uncertain systems 3(1) (2009) 3-10
* [33] Liu B, Uncertain risk analysis and uncertain reliability analysis, Journal of Uncertain systems 4(3) (2010) 163-170
* [34] Liu B, Uncertainty Theory: A branch of mathematics for modelling human uncertainty (2010) (Springer-Verlag)
* [35] Liu B, Why is there a need for uncertainty theory, Journal of Uncertain Systems 6(1) (2012) 3-10
* [36] Liu B, Extreme value theorems of uncertain process with application to insurance risk model, Soft Computing 17(4) (2013) 549-556
* [37] Liu Y and Ma J, Optimal reinsurance/investment problems for general insurance models, The Annals of Applied Probability 19(4) (2009) 1495-1528
* [38] Panjer H H, Recursive evaluation of a family of compound distributions, ASTIN Bulletin: The Journal of the IAA 12(1) (1981) 22-26
* [39] Promislow S D and Young R V, Unifying framework for optimal insurance, Insurance: Mathematics and Economics 36(3) (2005) 347-364
* [40] Waters R H, Excess of loss reinsurance limits, Scandinavian Actuarial Journal 1979(1) (1979) 37-43
* [41] Waters R H, Some mathematical aspects of reinsurance, Insurance: Mathematics and Economics 2(1) (1983) 17-26
* [42] Yang H and Zhang L, Optimal investment for insurer with jump-diffusion risk process, Insurance: Mathematics and Economics 37(3)(2005) 615-634
* [43] Yao K and Qin Z, A modified insurance risk process with uncertainty, Insurance: Mathematics and Economics 62 (2015) 227-233
* [44] Yao K and Zhou J, Ruin time of uncertain insurance risk process, IEEE Transactions on Fuzzy Systems 26(1) (2016) 19-28
* [45] Zadeh L, Discussion: Probability theory and fuzzy logic are complementary rather than competitive, Technometrics 37(3) (1995) 271-276
* [46] Zadeh L, Is there a need for fuzzy logic?, Information Sciences 178(13) (2008) 2751-2779
# Slot Machines: Discovering Winning Combinations of Random Weights in Neural
Networks
Maxwell Mbabilla Aladago Lorenzo Torresani
###### Abstract
In contrast to traditional weight optimization in a continuous space, we
demonstrate the existence of effective random networks whose weights are never
updated. By selecting a weight among a fixed set of random values for each
individual connection, our method uncovers combinations of random weights that
match the performance of traditionally-trained networks of the same capacity.
We refer to our networks as “slot machines” where each reel (connection)
contains a fixed set of symbols (random values). Our backpropagation algorithm
“spins” the reels to seek “winning” combinations, i.e., selections of random
weight values that minimize the given loss. Quite surprisingly, we find that
allocating just a few random values to each connection (e.g., $8$ values per
connection) yields highly competitive combinations despite being dramatically
more constrained compared to traditionally learned weights. Moreover,
finetuning these combinations often improves performance over the trained
baselines. A randomly initialized VGG-19 with $8$ values per connection
contains a combination that achieves $91\%$ test accuracy on CIFAR-10. Our
method also achieves an impressive performance of $98.2\%$ on MNIST for neural
networks containing only random weights.
## 1 Introduction
Innovations in how deep networks are trained have played an important role in
the remarkable success deep learning has produced in a variety of application
areas, including image recognition (He et al., 2016), object detection (Ren et
al., 2015; He et al., 2017), machine translation (Vaswani et al., 2017) and
language modeling (Brown et al., 2020). Learning typically involves either
optimizing a network from scratch (Krizhevsky et al., 2012), finetuning a pre-
trained model (Yosinski et al., 2014) or jointly optimizing the architecture
and weights (Zoph & Le, 2017). Against this predominant background, we pose
the following question: can a network instantiated with only random weights
achieve competitive results compared to the same model using optimized
weights?
For a given task, an untrained, randomly initialized network is unlikely to
produce good performance. However, we demonstrate that given sufficient random
weight options for each connection, there exist selections of these random
weight values that have generalization performance comparable to that of a
traditionally-trained network with the same architecture. More importantly, we
introduce a method that can find these high-performing randomly weighted
configurations consistently and efficiently. Furthermore, we show empirically
that a small number of random weight options (e.g., $2-8$ values per
connection) are sufficient to obtain accuracy comparable to that of the
traditionally-trained network. Instead of updating the weights, the algorithm
simply selects for each connection a weight value from a fixed set of random
weights.
We use the analogy of “slot machines” to describe how our method operates.
Each reel in a slot machine has a fixed set of symbols. The reels are jointly
spun in an attempt to find winning combinations. In our context, each
connection has a fixed set of random weight values. Our algorithm “spins the
reels” in order to find a winning combination of symbols, i.e., selects a
weight value for each connection so as to produce an instantiation of the
network that yields strong performance. While in physical Slot Machines the
spinning of the reels is governed by a fully random process, in our Slot
Machines the selection of the weights is guided by a method that optimizes the
given loss at each spinning iteration.
More formally, we allocate $K$ fixed random weight values to each connection.
Our algorithm assigns a quality score to each of these $K$ possible values. In
the forward pass a weight value is selected for each connection based on the
scores. The scores are then updated in the backward pass via stochastic
gradient descent. However, the weights are never changed. By evaluating
different combinations of fixed randomly generated values, this extremely
simple procedure finds weight configurations that yield high accuracy.
We demonstrate the efficacy of our algorithm through experiments on MNIST and
CIFAR-10. On MNIST, our randomly weighted Lenet-300-100 (Lecun et al., 1998)
obtains a $97.0\%$ test set accuracy when using $K=2$ options per connection
and $98.2\%$ with $K=8$. On CIFAR-10 (Krizhevsky, 2009), our six layer
convolutional network outperforms the traditionally-trained network when
selecting from $K=8$ fixed random values at each connection.
Finetuning the models obtained by our procedure generally boosts performance
over networks with optimized weights albeit at an additional compute cost (see
Figure 5). Also, compared to traditional networks, our networks are less
memory efficient due to the inclusion of scores. That said, our work casts
light on some intriguing phenomena about neural networks:
* •
First, our results suggest a performance comparability between selection from
multiple random weights and traditional training by continuous weight
optimization. This underscores the effectiveness of strong initializations.
* •
Second, this paper further highlights the enormous expressive capacity of
neural networks. Maennel et al. (2020) show that contemporary neural networks
are so powerful that they can memorize randomly generated labels. This work
builds on that revelation and demonstrates that current networks can model
challenging non-linear mappings extremely well even by simple selection from
random weights.
* •
This work also connects to recent observations (Malach et al., 2020; Frankle &
Carbin, 2019) suggesting strong performance can be obtained by utilizing
gradient descent to uncover effective subnetworks.
* •
Finally, we are hopeful that our novel model, consisting in the introduction
of multiple weight options for each edge, will inspire other initialization
and optimization strategies.
Figure 1: Our method assigns a set of $K$ random weight options to each
connection (we use $K=3$ in this illustration). During the forward pass, one
of the $K$ values is selected for each connection, based on a quality score
computed for each weight value. On the backward pass, the quality scores of
all weights are updated using a straight-through gradient estimator (Bengio et
al., 2013), enabling the network to sample better weights in future passes.
Unlike the scores, the weights are never changed.
## 2 Related Work
Supermasks and the Strong Lottery Ticket Conjecture. The lottery ticket
hypothesis was articulated in (Frankle & Carbin, 2019) and states that a
randomly initialized neural network contains sparse subnetworks which when
trained in isolation from scratch can achieve accuracy similar to that of the
trained dense network. Inspired by this result, Zhou et al. (2019) present a
method for identifying subnetworks of randomly initialized neural networks
that achieve better than chance performance without training. These
subnetworks (named “supermasks”) are found by assigning a probability value to
each connection. These probabilities are used to sample the connections to use
and are updated via stochastic gradient descent. Without ever modifying the
weights, Zhou et al. (2019) find subnetworks that perform impressively across
multiple datasets.
Follow up work by Ramanujan et al. (2020) finds supermasks that match the
performance of a dense network. On ImageNet (Russakovsky et al., 2009), they
find subnetworks within a randomly weighted ResNet-50 (Zagoruyko & Komodakis,
2016) that match the performance of a smaller, trained ResNet-34 (He et al.,
2016).
Accordingly, they propose the strong lottery ticket conjecture: a sufficiently
overparameterized, randomly weighted neural network contains a subnetwork that
performs as well as a traditionally-trained network with the same number of
parameters. Ramanujan et al. (2020) adopts a deterministic protocol in their
so-called “edge-popup” algorithm for finding supermasks instead of the
stochastic algorithm of Zhou et al. (2019).
These empirical results as well as recent theoretical ones (Malach et al., 2020;
Pensia et al., 2020) suggest that pruning a randomly initialized network is
just as good as optimizing the weights, provided a good pruning mechanism is
used. Our work corroborates this intriguing phenomenon but differs from these
prior methods in a significant aspect. We eliminate pruning completely and
instead introduce multiple weight values per connection. Thus, rather than
selecting connections to define a subnetwork, our method selects weights for
all connections in a network of fixed structure. Although our work has
interesting parallels with pruning, it is different from pruning as all
connections remain active in every forward pass.
Pruning at Initialization. The lottery ticket hypothesis also inspired
several recent works aimed at pruning (i.e., predicting “winning” tickets)
at initialization (Lee et al., 2019, 2020; Tanaka et al., 2020; Wang et al.,
2020).
Our work is different in motivation from these methods and those that train
only a subset of the weights (Hoffer et al., 2018; Rosenfeld & Tsotsos, 2019).
Our aim is to find neural networks with random weights that match the
performance of traditionally-trained networks with the same number of
parameters.
Weight Agnostic Neural Networks. Gaier & Ha (2019) build neural network
architectures with high performance in a setting where all the weights have
the same shared random value. The optimization is instead performed over the
architecture (Stanley & Miikkulainen, 2002).
They show empirically that the network performance is indifferent to the
shared value but defaults to random chance when all the weights assume
different random values. Although we do not perform weight training, the
weights in this work have different random values. Further, we build our
models using fixed architectures.
Low-bit Networks and Quantization Methods. As in binary networks (Hubara et
al., 2016; Rastegari et al., 2016) and network quantization (Hubara et al.,
2017; Wang et al., 2018), the parameters in slot machines are drawn from a
finite set. However, whereas the primary objective in quantized networks is
mostly compression and computational speedup, the motivation behind slot
machines is recovering good performance from randomly initialized networks.
Accordingly, slot machines use real-valued weights as opposed to the binary
(or integer) weights used by low-bit networks. Furthermore, the weights in
low-bit networks are usually optimized directly whereas only associated scores
are optimized in slot machines.
Random Decision Trees. Our approach is inspired by the popular use of random
subsets of features in the construction of decision trees (Breiman et al.,
1984). Instead of considering all possible choices of features and all
possible splitting tests at each node, random decision trees are built by
restricting the selection to small random subsets of feature values and
splitting hypotheses. We adapt this strategy to the training of neural
networks by restricting the optimization of each connection to a random
subset of weight values.
## 3 Slot Machines: Networks with Fixed Random Weight Options
Our goal is to construct non-sparse neural networks that achieve high accuracy
by selecting a value from a fixed set of completely random weights for each
connection. We start by providing an intuition for our method in Section 3.1,
before formally defining our algorithm in Section 3.2.
### 3.1 Intuition
An untrained, randomly initialized network is unlikely to perform better than
random chance. Interestingly, the impressive advances of Ramanujan et al.
(2020) and Zhou et al. (2019) demonstrate that networks with random weights
can in fact do well if pruned properly. In this work, instead of pruning we
explore weight selection from fixed random values as a way to obtain effective
networks. To provide an intuition for our method, consider an untrained
network $N$ with one weight value for each connection, as typically done. If
the weights of $N$ are drawn randomly from an appropriate distribution
$\mathcal{D}$ (e.g., Glorot & Bengio (2010) or He et al. (2015)), there is an
extremely small but nonzero probability that ${N}$ obtains good accuracy (say,
greater than a threshold $\tau$) on the given task. Let $q$ denote this
probability. Also consider another untrained network $N_{K}$ that has the same
architectural configuration as $N$ but with $K>1$ weight choices per
connection. If $n$ is the number of connections in $N$, then $N_{K}$ contains
within it $K^{n}$ different network instantiations that are architecturally
identical to $N$ but that differ in weight configuration. If the weights of
$N_{K}$ are sampled from $\mathcal{D}$, then the probability that none of the
$K^{n}$ networks obtains good accuracy is essentially $(1-q)^{K^{n}}$. This
probability decays quickly as either $K$ or $n$ increases. Our method finds
randomly weighted networks that achieve very high accuracy even with small
values of $K$. For instance, a six layer convolutional network with $2$ random
values per connection obtains $85.1\%$ test accuracy on CIFAR-10.
But how do we select a good network from these $K^{n}$ different networks?
Brute-force evaluation of all possible configurations is clearly not feasible
due to the massive number of different hypotheses. Instead, we present an
algorithm, shown in Figure 1, that iteratively searches the best combination
of connection values for the entire network by optimizing the given loss. To
do this, the method learns a real-valued quality score for each weight option.
These scores are used to select the weight value of each connection during the
forward pass. The scores are then updated in the backward pass based on the
loss value in order to improve training performance over iterations.
### 3.2 Learning in Slot Machines
Here we introduce our algorithm for the case of fully-connected networks but
the description extends seamlessly to convolutional networks. A fully-
connected neural network is an acyclic graph consisting of a stack of $L$
layers $[1,\cdots,L]$ where the $\ell$th layer has $n_{\ell}$ neurons. The
activation $h(x)^{(\ell)}_{i}$ of neuron $i$ in layer $\ell$ is given by
$h(x)^{(\ell)}_{i}=g\left(\sum_{j=1}^{n_{\ell-1}}h(x)_{j}^{(\ell-1)}W^{(\ell)}_{ij}\right)$
(1)
where $W^{(\ell)}_{ij}$ is the weight of the connection between neuron $i$ in
layer $\ell$ and neuron $j$ in layer $\ell-1$, $x$ represents the input to the
network, and $g$ is a non-linear activation function. Traditionally,
$W^{(\ell)}_{ij}$ starts off as a random value drawn from an appropriate
distribution before being optimized with respect to a dataset and a loss
function using gradient descent. In contrast, our method does not ever update
the weights. Instead, it associates a set of $K$ possible weight options with
each connection (for simplicity, we use the same number of weight options $K$
for all connections in a network), and then it optimizes the selection of
weights to use from these predefined sets for all connections.
Forward Pass. Let $\\{W_{ij1},\ldots,W_{ijK}\\}$ be the set of the $K$
possible weight values for connection $(i,j)$ (for brevity, from now on we
omit the superscript denoting the layer) and let $s_{ijk}$ be the “quality
score” of value $W_{ijk}$, denoting the preference for this value over the
other possible $K-1$ values. We define a selection function $\rho$ which
takes as input the scores $\\{s_{ij1},\ldots,s_{ijK}\\}$ and returns an index
between $1$ and $K$. In the forward pass, we set the weight of ($i,j$) to
$W_{ijk^{*}}$ where $k^{*}=\rho(s_{ij1},\ldots,s_{ijK})$.
In our work, we set $\rho$ to be either the $\arg\max$ function (returning the
index corresponding to the largest score) or the sampling from a Multinomial
distribution defined by $\\{s_{ij1},\ldots,s_{ijK}\\}$. We refer to the former
as Greedy Selection (GS). We name the latter Probabilistic Sampling (PS) and
implement it as
$\rho\sim\text{Mult}\left(\frac{e^{s_{ij1}}}{\sum_{k=1}^{K}e^{s_{ijk}}},\cdots,\frac{e^{s_{ijK}}}{\sum_{k=1}^{K}e^{s_{ijk}}}\right)$
(2)
where Mult is the multinomial distribution. The empirical comparison between
these two selection strategies is given in Section 4.4.
We note that, although $K$ values per connection are considered during
training (as opposed to the infinite number of possible values in traditional
training), only one value per connection is used at test time. The final
network is obtained by selecting for each connection the value corresponding
to the highest score (for both GS and PS) upon completion of training. Thus,
the effective capacity of the network at inference time is the same as that of
a traditionally-trained network.
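As a concrete illustration, the following PyTorch sketch (our own, not the authors' code; all shapes and names are illustrative) performs the forward-pass selection for a single fully-connected layer, with both GS and PS.

```python
import torch

K, fan_in, fan_out = 8, 784, 300
W = torch.empty(fan_out, fan_in, K).uniform_(-0.1, 0.1)  # fixed random options
scores = torch.rand(fan_out, fan_in, K)                  # quality scores

def select(scores, greedy=True):
    """rho: returns one index in {0, ..., K-1} per connection.  Greedy
    Selection takes the argmax of the scores; Probabilistic Sampling
    draws from the softmax of the scores, as in eq. (2)."""
    if greedy:
        return scores.argmax(dim=-1)
    return torch.distributions.Categorical(logits=scores).sample()

idx = select(scores, greedy=False)                # (fan_out, fan_in)
w = W.gather(-1, idx.unsqueeze(-1)).squeeze(-1)   # one weight per connection
h = torch.relu(torch.randn(32, fan_in) @ w.t())   # eq. (1) for one layer
print(h.shape)                                    # torch.Size([32, 300])
```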
Backward Pass. In the backward pass, all the scores are updated with
straight-through gradient estimation since $\rho$ has a zero gradient almost
everywhere. The straight-through gradient estimator (Bengio et al., 2013)
treats $\rho$ essentially as the identity function in the backward pass by
setting the gradient of the loss with respect to $s_{ijk}$ as
$\nabla_{s_{ijk}}\leftarrow\frac{\partial\mathcal{L}}{\partial
a(x)^{(\ell)}_{i}}h(x)^{(\ell-1)}_{j}W^{(\ell)}_{ijk}\ $ (3)
for $k\in\\{1,\cdots,K\\}$ where $\mathcal{L}$ is the objective function.
$a(x)_{i}^{(\ell)}$ is the pre-activation of neuron $i$ in layer $\ell$. Given
$\alpha$ as the learning rate, and ignoring momentum, we update the scores via
stochastic gradient descent as
$\tilde{s}_{ijk}=s_{ijk}-\alpha\nabla_{s_{ijk}}$ (4)
where $\tilde{s}_{ijk}$ is the score after the update. Our experiments
demonstrate that this simple algorithm learns to select effective
configurations of random weights resulting in impressive results across
different datasets and models.
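The straight-through update of eqs. (3)-(4) can be implemented by pairing the hard selection with a detached linear surrogate, so that the forward value is the selected weight while the gradient with respect to each score is $\partial\mathcal{L}/\partial w_{ij}\cdot W_{ijk}$. A self-contained sketch of ours, using GS:

```python
import torch

K, fan_in, fan_out = 8, 784, 10
W = torch.empty(fan_out, fan_in, K).uniform_(-0.1, 0.1)   # fixed random options
scores = torch.empty(fan_out, fan_in, K).uniform_(0.0, 0.01).requires_grad_()

def effective_weight(W, scores):
    """Forward value = hard-selected weight (argmax of the scores);
    backward gradient w.r.t. s_ijk = dL/dw_ij * W_ijk, matching eq. (3),
    via the straight-through trick."""
    idx = scores.argmax(dim=-1)
    hard = W.gather(-1, idx.unsqueeze(-1)).squeeze(-1)
    soft = (scores * W).sum(dim=-1)        # linear in the scores
    return hard.detach() + soft - soft.detach()

opt = torch.optim.SGD([scores], lr=0.2, momentum=0.9)
x = torch.randn(32, fan_in)
y = torch.randint(0, fan_out, (32,))
loss = torch.nn.functional.cross_entropy(x @ effective_weight(W, scores).t(), y)
opt.zero_grad(); loss.backward(); opt.step()   # eq. (4): scores move, W does not
```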
Table 1: Architecture specifications of the networks used in our experiments.
The Lenet network is trained on MNIST. The CONV-$x$ models are the same VGG-
like architectures used in (Frankle & Carbin, 2019; Zhou et al., 2019;
Ramanujan et al., 2020). All convolutions use $3\times 3$ filters and pool
denotes max pooling.
| _Network_ | Lenet | CONV-2 | CONV-4 | CONV-6 | VGG-19 |
|---|---|---|---|---|---|
| _Convolutional Layers_ | | 64, 64, pool | 64, 64, pool; 128, 128, pool | 64, 64, pool; 128, 128, pool; 256, 256, pool | 2x64, pool; 2x128, pool; 2x256, pool; 4x512, pool; 4x512, avg-pool |
| _Fully-connected Layers_ | 300, 100, 10 | 256, 256, 10 | 256, 256, 10 | 256, 256, 10 | 10 |
| _Epochs: Slot Machines_ | 200 | 200 | 200 | 200 | 220 |
| _Epochs: Learned Weights_ | 200 | 200 | 330 | 330 | 320 |
Figure 2: Selecting from only $K=2$ weight options per connection already
dramatically improves accuracy compared to an untrained network that performs
at random chance ($10\%$) on both (a) MNIST and (b) CIFAR-10. The first bar in
each plot shows the performance of an untrained randomly initialized network
and the second bar shows the results of selecting random weights with GS using
$K=2$ options per connection.
## 4 Experiments
### 4.1 Experimental Setup
The weights of all our networks are sampled uniformly at random from a Glorot
Uniform distribution (Glorot & Bengio, 2010),
$\mathbb{U}(-\sigma_{x},\sigma_{x})$ where $\sigma_{x}$ is the standard
deviation of the Glorot Normal distribution. We ignore $K$, the number of
options per connection, when computing the standard deviation since it does
not affect the network capacity in the forward pass. Like for the weights, we
initialize the scores independently from a uniform distribution
$\mathbb{U}(0,\lambda\sigma_{x})$ where $\lambda$ is a small constant. We use
$\lambda=0.1$ for all fully-connected layers and set $\lambda$ to $1$ when
initializing convolutional layers. We use $15\%$ and $10\%$ of the training
sets of MNIST and CIFAR-10, respectively, for validation. We report
performance on the separate test set. On MNIST, we experiment with the
Lenet-300-100 (Lecun et al., 1998) architecture following the protocol in
Frankle & Carbin (2019). We also use the VGG-like architectures used therein
and in Zhou et al. (2019) and Ramanujan et al. (2020). We denote these
networks as CONV-2, CONV-4, and CONV-6. These architectures are provided in
Table 1 for completeness. All our plots show the averages of four different
independent trials. Error bars whenever shown are the minimum and maximum over
the trials.
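Under our reading of this setup, the initialization can be sketched as follows (function and variable names are ours; we assume the usual Glorot Normal standard deviation $\sigma_{x}=\sqrt{2/(\mathrm{fan_{in}}+\mathrm{fan_{out}})}$):

```python
import math
import torch

def init_slot_layer(fan_in, fan_out, K, lam):
    sigma_x = math.sqrt(2.0 / (fan_in + fan_out))  # Glorot Normal std
    # Weights: U(-sigma_x, sigma_x), ignoring K as described above.
    weight_options = torch.empty(fan_out, fan_in, K).uniform_(-sigma_x, sigma_x)
    # Scores: U(0, lam * sigma_x); lam = 0.1 (fully-connected) or 1.0 (conv).
    scores = torch.empty(fan_out, fan_in, K).uniform_(0.0, lam * sigma_x)
    return weight_options, scores

W_opts, s = init_slot_layer(fan_in=300, fan_out=100, K=8, lam=0.1)
```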
Figure 3: Comparison with traditional training on CIFAR-10 and MNIST.
Performance of slot machines improves as $K$ increases (here we consider
$K\in\\{2,4,8,16,32,64\\}$), although the improvements after $K\geq 8$ are
small. For CONV-6 (the deepest model considered here), our approach using GS
achieves accuracy superior to that obtained with trained weights, while for
CONV-4 it produces performance only slightly inferior to that of the optimized
network. Furthermore, as illustrated by the error bars in these plots, the
accuracy variances of slot machines are much smaller than those of networks
traditionally trained by optimizing weights. Accuracies are measured on the
test set over four different trials using early stopping on the validation
accuracy.
Figure 4: Test accuracy versus training cost. For the same training compute
budget, Slot Machines achieve performance comparable to that of traditionally
optimized models.
All models use a batch size of $128$ and stochastic gradient descent with warm
restarts (Loshchilov & Hutter, 2017), with restarts at epochs 25 and 75, a
momentum of 0.9, and an $\ell_{2}$ penalty of $0.0001$ (PS models do not use
weight decay). When training GS slot machines, we set the learning rate to
$0.2$ for $K\leq 8$ and $0.1$ otherwise. We set the learning rate to $0.01$
when directly optimizing the weights (training from scratch and finetuning),
except when training VGG-19, where we set the learning rate to $0.1$. We
find that a high learning rate is required when sampling the network
probabilistically, a behaviour also observed in Zhou et al. (2019).
Accordingly, we use a learning rate of $25$ for all PS models. We did not
train VGG-19 using PS.
We use data augmentation and dropout (with a rate of $p=0.5$) when
experimenting on CIFAR-10 (Krizhevsky, 2009). We use batch normalization in
VGG-19 but the affine parameters are never updated throughout training.
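A plausible PyTorch rendering of this optimization setup (our reconstruction, not the authors' code; the `Linear` model below is only a stand-in for a slot-machine network) is:

```python
import torch

model = torch.nn.Linear(10, 2)  # stand-in for a slot-machine network
optimizer = torch.optim.SGD(model.parameters(), lr=0.2,  # 0.2 for GS, K <= 8
                            momentum=0.9, weight_decay=1e-4)
# SGDR-style warm restarts: T_0=25 with T_mult=2 restarts at epochs 25 and 75.
scheduler = torch.optim.lr_scheduler.CosineAnnealingWarmRestarts(
    optimizer, T_0=25, T_mult=2)

for epoch in range(200):
    # ... one epoch of training with batch size 128 would go here ...
    scheduler.step()
```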
### 4.2 Slot Machines versus Traditionally-Trained Networks
We compare the networks using random weights selected from our approach with
two different baselines: (1) randomly initialized networks with one weight
option per connection, and (2) traditionally-trained networks whose continuous
weights are iteratively updated. These baselines are off-the-shelf modules
from PyTorch (Paszke et al., 2019) which we train in the standard way as
explained in Section 4.1. For this first set of experiments we use GS to
optimize Slot Machines, since it tends to provide better performance than PS
(the two methods will be compared in subsection 4.4).
As shown in Figure 2, untrained networks with only one random weight per edge
perform at chance. However, methodically selecting the parameters from
just two random values for each connection greatly enhances performance across
different datasets and networks. Even better, as shown in Figure 3, as the
number of random weight options per connection increases, the performance of
these networks approaches that of traditionally-trained networks with the same
number of parameters, despite containing only random values. Malach et al.
(2020) proved that any “ReLU network of depth $\ell$ can be approximated by
finding a weighted-subnetwork of a random network of depth $2\ell$ and
sufficient width.” Without pruning, our selection method finds within the
superset of fixed random weights a $6$ layer configuration that outperforms a
$6$ layer traditionally-trained network. Furthermore, Figure 4 shows that the
overall cost of training Slot Machines is comparable to that of traditional
optimization.
Figure 5: Finetuning Slot Machines. For the same total training cost (shown on
the horizontal axis), CONV-4 and CONV-6 Slot Machines finetuned with
traditional optimization achieve better accuracy compared to the same networks
learned from scratch.
### 4.3 Finetuning Slot Machines
Our approach can also be viewed as a strategy to provide a better
initialization for traditional training. To assess the value of such a scheme,
we finetune the networks obtained after training slot machines for $100$
epochs to match the cost of learned weights. Figure 5 summarizes the results
in terms of training time (including both selection and finetuning) vs test
accuracy. It can be noted that for the CONV-4 and CONV-6 architectures,
finetuned slot machines achieve higher accuracy compared to the same models
learned from scratch, at no additional training cost. For VGG-19, finetuning
improves accuracy ($92.1\%$ instead of $91.7\%$) but the resulting model still
does not match the performance of the model trained from scratch ($92.6\%$).
To show that the weight selection in slot machines does in fact impact
performance of finetuned models, we finetune from different slot machine
checkpoints. If the selection is beneficial, then finetuning from later
checkpoints will show improved performance. As shown in Figure 6, this is
indeed the case as finetuning from later checkpoints results in higher
performance on the test set.
Figure 6: Finetuning from different Slot Machine checkpoints. Slot Machine
checkpoint shows the number of training epochs used for weight selection
before switching to finetuning (performed for 100 epochs). Performance is
measured on the test set using early stopping determined by the maximum
validation accuracy during finetuning.
### 4.4 Greedy Selection Versus Probabilistic Sampling
As detailed in Section 3.2, we consider two different methods for sampling our
networks in the forward pass: a greedy selection where the weight
corresponding to the highest score is used and a stochastic selection which
draws from a proper distribution over the weights.
To fully comprehend the behavior differences between these two strategies, it
is instructive to look at Figure 7, which reports the percentage of weights
changed every 5 epochs by the two strategies. PS keeps changing a large
percentage of weights even in late stages of the optimization, due to its
probabilistic sampling.
As seen in Figure 8, GS performs better than PS. Despite the network changing
considerably, PS still manages to obtain decent accuracy indicating that there
are potentially many good random networks within a slot machine. However, as
hypothesized in Ramanujan et al. (2020), the high variability due to
stochastic sampling means that the same network is likely never or rarely
observed more than once in any training run. This makes learning extremely
challenging and consequently adversely impacts performance. Conversely, GS is
less exploratory and converges fairly quickly to a stable set of weights.
Figure 7: Weight exploration in Slot Machines. The vertical axis shows (on a
log scale) the percentage of weights changed after every five epochs as
training progresses. Compared to PS, GS is much less exploratory and converges
rapidly to a preferred configuration of weights. On the other hand, due to its
probabilistic sampling, PS keeps changing the weight selections even in late
stages of training.
Figure 8: Performance of GS vs PS. Training slot machines
via greedy selection (GS) yields better accuracy than optimizing them with
probabilistic sampling (PS) for all values of $K$ considered. The reason is
that PS is a lot more exploratory and tends to produce slower convergence, as
shown in Figure 7.
From Figure 8 we can also notice that the accuracy of GS improves or remains
stable as the value of $K$ is increased. This is not always the case for PS
when $K\geq 8$. We claim this behavior is expected since GS is more restricted
in terms of the choices it can take. Thus, GS benefits more from large values
of $K$ compared to PS.
### 4.5 Sharing Random Weights
Inspired by quantized networks (Hubara et al., 2016; Rastegari et al., 2016;
Hubara et al., 2017; Wang et al., 2018), we consider slot machines under two
new settings. The first constrains the connections in a layer to share the
same set of $K$ random weights. The second setting is even more restrictive,
as it requires all connections in the network to share the same set of $K$
random weights. Under the first setting, at each layer the weights are drawn
from the uniform distribution $\mathbb{U}(-\sigma_{\ell},\sigma_{\ell})$,
where $\sigma_{\ell}$ is
the standard deviation of the Glorot Normal distribution for layer $\ell$.
When using a single set of weights for the entire network, we sample the
weights independently from $\mathbb{U}(-\hat{\sigma},\hat{\sigma})$.
$\hat{\sigma}$ is the mean of the standard deviations of the per layer Glorot
Normal distributions.
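A sketch of the two sharing schemes, under our own naming and using the Glorot Normal standard deviation $\sigma=\sqrt{2/(\mathrm{fan_{in}}+\mathrm{fan_{out}})}$:

```python
import math
import torch

def layer_sigma(fan_in, fan_out):
    return math.sqrt(2.0 / (fan_in + fan_out))  # Glorot Normal std

def shared_options_per_layer(fan_in, fan_out, K):
    s = layer_sigma(fan_in, fan_out)
    return torch.empty(K).uniform_(-s, s)       # K values shared by the layer

def shared_options_global(layer_shapes, K):
    # sigma_hat: mean of the per-layer Glorot Normal stds.
    sigma_hat = sum(layer_sigma(i, o) for i, o in layer_shapes) / len(layer_shapes)
    return torch.empty(K).uniform_(-sigma_hat, sigma_hat)
```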
Each of the weights is still associated with a score. The slot machine with
shared weights is then trained as before. This approach has the potential of
compressing the model, although the full set of scores is still needed.
As shown in Figure 9, these models continue to do well when $K$ is large
enough. However, unlike conventional slot machines, these models do not work
when $K$ is very small, e.g., $K=2$. Furthermore, the accuracy exhibits a
large variance from run to run, as evidenced by the large error bars in the
plot. This is understandable, as the slot machine with shared weights is
restricted to search in a much smaller space of parameter combinations and
thus the probability of finding a winning combination is much reduced.
Difference between Slot Machines (SMs) and quantized networks. Although in
the experiment above we evaluate SMs under the shared-weight setting commonly
adopted in network quantization, we would like to point out that several
differences separate our approach from prior work in network quantization. (1)
Goal. The aim of network quantization is model compression and increased
efficiency. Conversely, the goal of SMs is to achieve accuracy comparable or
better than traditional training by means of weight selection instead of
continuous optimization. We also demonstrate that finetuning SMs via
continuous optimization results in higher accuracy compared to training from
scratch, as shown in Figure 5. (2) Shared vs unshared weights. Quantized networks
use shared weights across all connections. For example, BinaryNets (Hubara et
al., 2016) use only weight values $+1/-1$. While this makes sense for the
purpose of reducing the model footprint and increasing efficiency, sharing
weights causes a drop in accuracy (e.g., compare performance of Globally-
shared vs Unshared in Figure 9). SMs use distinct sets of weights for
different connections (i.e., unshared weights) in order to retain high
accuracy. (3) Optimization. While quantized networks typically involve a
continuous optimization over the weights, our approach involves discrete
selection of one out of $K$ fixed weights for each connection.
Figure 9: Sharing random weights. Slot Machines (GS) using the same set of $K$
random weights for all connections in a layer or even in the entire network
perform quite well. However, they do not match the performance of Slot
Machines (GS) that use different sets of weights for different connections.
The benefit of sharing weights is that the memory requirements and space
needed to store these networks is substantially smaller compared to the
storage footprint of slot machines with unshared weights. As an example, a
Lenet model with unshared weights needs $\sim 1$MB of storage whereas the same
model using shared weights in each layer needs $\sim 0.02$MB of storage.
### 4.6 Sparse Slot Machines
We conducted experiments where one of the $K$ weight options is constrained to
always be $0$, which induces sparse networks. However, the resulting sparsity
is low when $K$ is large. For CONV-6 on CIFAR-10, the sparsity is $49\%$ when
$K=2$, and $1.1\%$ when $K=64$. If $K$ is small, the selected sparse network
has a lower performance compared to the corresponding standard Slot Machine
where all the $K$ weight options are initialized randomly (76% versus 83% test
accuracy for CONV-6 on CIFAR-10 with $K=2$). However, when $K$ is large (e.g.,
$K=64$), the sparse network has performance comparable to a standard Slot
Machine. This is because when $K$ is small, restricting one of the weights to
be $0$ effectively removes a possible non-zero value from the already few
options.
### 4.7 Experimental Comparison with Prior Pruning Approaches
The models learned by our algorithm could in principle be found by applying
pruning to a bigger network representing the multiple weight options in the
form of additional connections. One way to achieve this is by introducing
additional “dummy” layers after every layer $\ell$ except the output layer.
Each “dummy” layer will have $K*c$ identity units where
$c=n_{\ell}*n_{\ell+1}$ and $n_{\ell}$ is the number of neurons in layer
$\ell$. The addition of these layers has the effect of separating out the $K$
random values for each connection in our network into distinct connection
weights. It is important that the neurons of the “dummy” layer encode the
identity function to ensure that the random values can pass through it
unmodified. Finally, in order to obtain the model learned by our system, all
connections between a layer and its associated “dummy” layer must be pruned
except for the weights which would have been selected by our algorithm as
shown in Figure 10. This procedure requires allocating a bigger network and is
clearly more costly compared to our algorithm.
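As a rough illustration of this cost (our arithmetic, using the Lenet layer sizes from Table 1 and a 784-dimensional MNIST input), the construction adds over half a million identity units even for $K=2$:

```python
K = 2
n = [784, 300, 100, 10]  # Lenet neuron counts, input through output
dummy_units = sum(K * n[l] * n[l + 1] for l in range(len(n) - 1))
print(dummy_units)       # 532400 extra identity units
```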
Figure 10: In principle, it is possible to obtain our network (_left_) by
pruning a bigger network constructed ad-hoc (_right_). In this example, our
slot machine uses $K=2$ options per connection ($i,j$). The green-colored
connections represent the selected weights. The square boxes in the bigger
network implement identity functions. The circles designate vanilla neurons
with non-linear activation functions. Red dashed lines in our network represent
unchosen weights. These lines in the bigger network would correspond to pruned
weights.
In this section, we compare slot machines with pruning techniques in prior
works. Our approach is similar to the pruning technique of Ramanujan et al.
(2020) as their method too does not update the weights of the networks after
initialization. Furthermore, their strategy selects the weights greedily, as
in our GS. However, they use one weight per connection and employ pruning to
uncover good subnetworks within the random network whereas we use multiple
random values per edge. Additionally, we do not ever prune any of the
connections. We compare the results of our networks to this prior work in
Table 2. We also compare with supermasks (Zhou et al., 2019). Supermasks
employ a probability distribution during the selection which makes them
reminiscent of our PS models. However, they use a Bernoulli distribution at
each weight while PS uses a Multinomial distribution at each connection. Also,
like Ramanujan et al. (2020), supermasks have one weight per connection and
perform pruning rather than weight selection. Table 2 shows that GS achieves
accuracy comparable to that of Ramanujan et al. (2020) while PS matches the
performance of supermasks. These results suggest an interesting empirical
performance equivalency among these related but distinct approaches.
Table 2: Comparison with (Ramanujan et al., 2020) and (Zhou et al., 2019). The
results of the first two rows are from the respective papers.
Method | Lenet | CONV-2 | CONV-4 | CONV-6
---|---|---|---|---
Ramanujan et al. (2020) | - | 77.7 | 85.8 | 88.1
Supermask (Zhou et al., 2019) | 98.0 | 66.0 | 72.5 | 76.5
Slot Machines (GS) | 98.2 | 78.2 | 86.3 | 88.4
Slot Machines (PS) | 98.0 | 71.7 | 80.2 | 81.7
### 4.8 Distribution of Selected Weights
In Figure 11 we study the distribution of selected weights at different
training points in order to understand why certain weights are chosen and
others are not. We observe that both GS and PS tend to prefer weights having
large magnitudes as learning progresses. This propensity for large weights
might help explain why methods such as magnitude-based pruning of
traditionally-trained networks work as well as they do. We provide further
analyses of this phenomenon in the supplementary material.
Figure 11: The distributions of the selected weights in the first two
convolutional and the first fully-connected layers of CONV-6 on CIFAR-10.
Starting from purely uniform distributions, Slot Machines progressively choose
large magnitude weights as training proceeds. See supplementary material for
additional analyses.
## 5 Conclusion and Future Work
This work shows that neural networks with random weights perform
competitively, provided that each connection is given multiple weight options
and that a good selection strategy is used. We introduce a simple selection
procedure that is remarkably effective and consistent in producing strong
weight configurations from few random options per connection. We also
demonstrate that these selected configurations can be used as starting
initializations for finetuning, which often produces accuracy gains over
training the network from scratch, at comparable computational cost. Our study
suggests that our method tends to naturally select large magnitude weights as
training proceeds. Future work will be devoted to further analyze what other
properties differentiate selected weights from those that are not selected, as
knowing such properties may pave the way for more effective initializations
for neural networks. More work is also needed to reduce the memory
requirements of these networks so they can be scaled to bigger networks.
## References
* Bengio et al. (2013) Bengio, Y., Léonard, N., and Courville, A. Estimating or propagating gradients through stochastic neurons for conditional computation, 2013.
* Breiman et al. (1984) Breiman, L., Friedman, J., Olshen, R. A., and Stone, C. J. _Classification and regression trees._ Wadsworth & Brooks/Cole Advanced Books & Software, Monterey, CA, 1984. ISBN 978-0-412-04841-8.
* Brown et al. (2020) Brown, T. B., Mann, B., Ryder, N., Subbiah, M., Kaplan, J., Dhariwal, P., Neelakantan, A., Shyam, P., Sastry, G., Askell, A., Agarwal, S., Herbert-Voss, A., Krueger, G., Henighan, T., Child, R., Ramesh, A., Ziegler, D. M., Wu, J., Winter, C., Hesse, C., Chen, M., Sigler, E., Litwin, M., Gray, S., Chess, B., Clark, J., Berner, C., McCandlish, S., Radford, A., Sutskever, I., and Amodei, D. Language models are few-shot learners. In _Advances in Neural Information Processing Systems_ , volume 33, 2020. URL https://papers.nips.cc/paper/2020/file/1457c0d6bfcb4967418bfb8ac142f64a-Paper.pdf.
* Frankle & Carbin (2019) Frankle, J. and Carbin, M. The lottery ticket hypothesis: Finding sparse, trainable neural networks. In _International Conference on Learning Representations_ , 2019. URL https://openreview.net/forum?id=rJl-b3RcF7.
* Gaier & Ha (2019) Gaier, A. and Ha, D. Weight agnostic neural networks. In Wallach, H., Larochelle, H., Beygelzimer, A., d'Alché-Buc, F., Fox, E., and Garnett, R. (eds.), _Advances in Neural Information Processing Systems_ , volume 32, pp. 5364–5378. Curran Associates, Inc., 2019. URL https://proceedings.neurips.cc/paper/2019/file/e98741479a7b998f88b8f8c9f0b6b6f1-Paper.pdf.
* Glorot & Bengio (2010) Glorot, X. and Bengio, Y. Understanding the difficulty of training deep feedforward neural networks. In Teh, Y. W. and Titterington, M. (eds.), _Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics_ , volume 9 of _Proceedings of Machine Learning Research_ , pp. 249–256, Chia Laguna Resort, Sardinia, Italy, 13–15 May 2010. PMLR.
* He et al. (2015) He, K., Zhang, X., Ren, S., and Sun, J. Delving deep into rectifiers: Surpassing human-level performance on imagenet classification. In _2015 IEEE International Conference on Computer Vision (ICCV)_ , pp. 1026–1034, 2015.
* He et al. (2016) He, K., Zhang, X., Ren, S., and Sun, J. Deep residual learning for image recognition. In _2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR)_ , pp. 770–778, 2016.
* He et al. (2017) He, K., Gkioxari, G., Dollár, P., and Girshick, R. Mask r-cnn. In _2017 IEEE International Conference on Computer Vision (ICCV)_ , pp. 2980–2988, 2017. doi: 10.1109/ICCV.2017.322.
* Hoffer et al. (2018) Hoffer, E., Hubara, I., and Soudry, D. Fix your classifier: the marginal value of training the last weight layer. In _International Conference on Learning Representations_ , 2018. URL https://openreview.net/forum?id=S1Dh8Tg0-.
* Hubara et al. (2016) Hubara, I., Courbariaux, M., Soudry, D., El-Yaniv, R., and Bengio, Y. Binarized neural networks. In Lee, D., Sugiyama, M., Luxburg, U., Guyon, I., and Garnett, R. (eds.), _Advances in Neural Information Processing Systems_ , volume 29. Curran Associates, Inc., 2016. URL https://proceedings.neurips.cc/paper/2016/file/d8330f857a17c53d217014ee776bfd50-Paper.pdf.
* Hubara et al. (2017) Hubara, I., Courbariaux, M., Soudry, D., El-Yaniv, R., and Bengio, Y. Quantized neural networks: Training neural networks with low precision weights and activations. _The Journal of Machine Learning Research_ , 18(1):6869–6898, 2017.
* Krizhevsky (2009) Krizhevsky, A. Learning multiple layers of features from tiny images. Technical report, University of Toronto, 2009.
* Krizhevsky et al. (2012) Krizhevsky, A., Sutskever, I., and Hinton, G. E. Imagenet classification with deep convolutional neural networks. In Pereira, F., Burges, C. J. C., Bottou, L., and Weinberger, K. Q. (eds.), _Advances in Neural Information Processing Systems_ , volume 25, pp. 1097–1105. Curran Associates, Inc., 2012. URL https://proceedings.neurips.cc/paper/2012/file/c399862d3b9d6b76c8436e924a68c45b-Paper.pdf.
* Lecun et al. (1998) Lecun, Y., Bottou, L., Bengio, Y., and Haffner, P. Gradient-based learning applied to document recognition. _Proceedings of the IEEE_ , 86(11):2278–2324, 1998.
* Lee et al. (2019) Lee, N., Ajanthan, T., and Torr, P. SNIP: Single-shot Network Pruning based on connection sensitivity. In _International Conference on Learning Representations_ , 2019. URL https://openreview.net/forum?id=B1VZqjAcYX.
* Lee et al. (2020) Lee, N., Ajanthan, T., Gould, S., and Torr, P. H. S. A signal propagation perspective for pruning neural networks at initialization. In _International Conference on Learning Representations_ , 2020. URL https://openreview.net/forum?id=HJeTo2VFwH.
* Loshchilov & Hutter (2017) Loshchilov, I. and Hutter, F. Sgdr: Stochastic gradient descent with warm restarts. In _International Conference on Learning Representations (ICLR)_ , 2017. URL https://openreview.net/pdf?id=Skq89Scxx.
* Maennel et al. (2020) Maennel, H., Alabdulmohsin, I., Tolstikhin, I., Baldock, R. J. N., Bousquet, O., Gelly, S., and Keysers, D. What do neural networks learn when trained with random labels?, 2020. URL https://arxiv.org/pdf/2006.10455.pdf.
* Malach et al. (2020) Malach, E., Yehudai, G., Shalev-Schwartz, S., and Shamir, O. Proving the lottery ticket hypothesis: Pruning is all you need. In III, H. D. and Singh, A. (eds.), _Proceedings of the 37th International Conference on Machine Learning_ , volume 119 of _Proceedings of Machine Learning Research_ , pp. 6682–6691. PMLR, 13–18 Jul 2020. URL http://proceedings.mlr.press/v119/malach20a.html.
* Paszke et al. (2019) Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., Desmaison, A., Kopf, A., Yang, E., DeVito, Z., Raison, M., Tejani, A., Chilamkurthy, S., Steiner, B., Fang, L., Bai, J., and Chintala, S. Pytorch: An imperative style, high-performance deep learning library. In Wallach, H., Larochelle, H., Beygelzimer, A., d'Alché-Buc, F., Fox, E., and Garnett, R. (eds.), _Advances in Neural Information Processing Systems_ , volume 32, pp. 8026–8037. Curran Associates, Inc., 2019. URL https://proceedings.neurips.cc/paper/2019/file/bdbca288fee7f92f2bfa9f7012727740-Paper.pdf.
* Pensia et al. (2020) Pensia, A., Rajput, S., Nagle, A., Vishwakarma, H., and Papailiopoulos, D. Optimal lottery tickets via subsetsum: Logarithmic over-parameterization is sufficient. In _Advances in Neural Information Processing Systems_ , volume 33, 2020. URL https://papers.nips.cc/paper/2020/file/1b742ae215adf18b75449c6e272fd92d-Paper.pdf.
* Ramanujan et al. (2020) Ramanujan, V., Wortsman, M., Kembhavi, A., Farhadi, A., and Rastegari, M. What’s hidden in a randomly weighted neural network? In _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)_ , June 2020. URL https://openaccess.thecvf.com/content_CVPR_2020/papers/Ramanujan_Whats_Hidden_in_a_Randomly_Weighted_Neural_Network_CVPR_2020_paper.pdf.
* Rastegari et al. (2016) Rastegari, M., Ordonez, V., Redmon, J., and Farhadi, A. Xnor-net: Imagenet classification using binary convolutional neural networks. In _Computer Vision – ECCV 2016_ , pp. 525–542, 2016. URL http://arxiv.org/abs/1603.05279.
* Ren et al. (2015) Ren, S., He, K., Girshick, R., and Sun, J. Faster r-cnn: Towards real-time object detection with region proposal networks. In Cortes, C., Lawrence, N., Lee, D., Sugiyama, M., and Garnett, R. (eds.), _Advances in Neural Information Processing Systems_ , volume 28, pp. 91–99. Curran Associates, Inc., 2015. URL https://proceedings.neurips.cc/paper/2015/file/14bfa6bb14875e45bba028a21ed38046-Paper.pdf.
* Rosenfeld & Tsotsos (2019) Rosenfeld, A. and Tsotsos, J. K. Intriguing properties of randomly weighted networks: Generalizing while learning next to nothing. In _2019 16th Conference on Computer and Robot Vision (CRV)_ , pp. 9–16, 2019.
* Russakovsky et al. (2009) Russakovsky, O., Deng, J., Su, H., Krause, J., Satheesh, S., Ma, S., Huang, Z., Karpathy, A., Khosla, A., Bernstein, M., Berg, A. C., and Fei-Fei, L. ImageNet: A Large-Scale Hierarchical Image Database. In _CVPR_ , 2009.
* Stanley & Miikkulainen (2002) Stanley, K. O. and Miikkulainen, R. Evolving neural networks through augmenting topologies. _Evolutionary Computation_ , 10(2):99–127, 2002.
* Tanaka et al. (2020) Tanaka, H., Kunin, D., Yamins, D. L. K., and Ganguli, S. Pruning neural networks without any data by iteratively conserving synaptic flow. In _Advances in Neural Information Processing Systems_ , volume 33, 2020. URL https://proceedings.neurips.cc/paper/2020/file/46a4378f835dc8040c8057beb6a2da52-Paper.pdf.
* Vaswani et al. (2017) Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A. N., Kaiser, L. u., and Polosukhin, I. Attention is all you need. In Guyon, I., Luxburg, U. V., Bengio, S., Wallach, H., Fergus, R., Vishwanathan, S., and Garnett, R. (eds.), _Advances in Neural Information Processing Systems_ , volume 30, pp. 5998–6008. Curran Associates, Inc., 2017. URL https://proceedings.neurips.cc/paper/2017/file/3f5ee243547dee91fbd053c1c4a845aa-Paper.pdf.
* Wang et al. (2020) Wang, C., Zhang, G., and Grosse, R. Picking winning tickets before training by preserving gradient flow. In _International Conference on Learning Representations_ , 2020. URL https://openreview.net/forum?id=SkgsACVKPH.
* Wang et al. (2018) Wang, P., Hu, Q., Zhang, Y., Zhang, C., Liu, Y., and Cheng, J. Two-step quantization for low-bit neural networks. In _2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition_ , pp. 4376–4384, 2018. doi: 10.1109/CVPR.2018.00460.
* Yosinski et al. (2014) Yosinski, J., Clune, J., Bengio, Y., and Lipson, H. How transferable are features in deep neural networks? In Ghahramani, Z., Welling, M., Cortes, C., Lawrence, N., and Weinberger, K. Q. (eds.), _Advances in Neural Information Processing Systems_ , volume 27, pp. 3320–3328. Curran Associates, Inc., 2014. URL https://proceedings.neurips.cc/paper/2014/file/375c71349b295fbe2dcdca9206f20a06-Paper.pdf.
* Zagoruyko & Komodakis (2016) Zagoruyko, S. and Komodakis, N. Wide residual networks. In Richard C. Wilson, E. R. H. and Smith, W. A. P. (eds.), _Proceedings of the British Machine Vision Conference (BMVC)_ , pp. 87.1–87.12. BMVA Press, 2016.
* Zhou et al. (2019) Zhou, H., Lan, J., Liu, R., and Yosinski, J. Deconstructing lottery tickets: Zeros, signs, and the supermask. In Wallach, H., Larochelle, H., Beygelzimer, A., d'Alché-Buc, F., Fox, E., and Garnett, R. (eds.), _Advances in Neural Information Processing Systems_ , volume 32, pp. 3597–3607. Curran Associates, Inc., 2019. URL https://proceedings.neurips.cc/paper/2019/file/1113d7a76ffceca1bb350bfe145467c6-Paper.pdf.
* Zoph & Le (2017) Zoph, B. and Le, Q. V. Neural architecture search with reinforcement learning. In _International Conference on Learning Representations_ , 2017. URL https://openreview.net/pdf?id=r1Ue8Hcxg.
## Appendix A Distribution of Selected Weights and Scores
As discussed in Section 4.8 in the main paper, we observe that slot machines
tend to choose increasingly large magnitude weights as learning proceeds. In
Figures 12, 14, and 15 of this appendix, we provide additional plots
demonstrating this phenomenon for other architectures. It may be argued that
the observed behavior might be due to the Glorot Uniform distribution from
which the weights are sampled. Accordingly, we performed ablations for this
where we used a Glorot Normal distribution for the weights as opposed to the
Glorot Uniform distribution used throughout the paper. As shown in Figure
14(a), the initialization distribution does indeed contribute to the observed
pattern of preference for large magnitude weights. However, initialization may
not be the only reason as the models continue to choose large magnitude
weights even when the weights are sampled from a Glorot Normal distribution.
This is shown more clearly in the third layer of Lenet which has relatively
fewer weights compared to the first two layers. We also observed a similar
behavior in normally distributed convolutional layers.
Different from the weights, notice that the selected scores are distributed
normally as shown in Figure 12. The scores in PS move much further away from
the initial values compared to those in GS. This is largely due to the large
learning rates used in PS models.
Figure 12: Distribution of selected scores. Different from the selected
weights, the selected scores tend to be normally distributed for both GS and
PS. We show only the scores for layer $3$ of Lenet because it is the layer
with the fewest number of weights. However, the other layers show a similar
trend except that the selected scores in them have very narrow distributions
which makes them uninteresting. Notice that although we sample the scores
uniformly from the non-negative range $\mathbb{U}(0,0.1\sigma_{x})$, where
$\sigma_{x}$ is the standard deviation of the Glorot Normal distribution,
gradient descent is able to drive them into the negative region. The scores in
PS slot machines move much farther away from the initialization compared to
those in GS due to the large learning rates used in PS models.
Figure 13:
Scores Initialization. The models are sensitive to the range of the sampling
distribution. As discussed in Section 4.1 of the main paper, the initial
scores are sampled from the uniform distribution
$\mathbb{U}(\gamma,\gamma+\lambda\sigma_{x})$. The value of $\gamma$ does not
affect performance and so we always set it to $0$. These plots are averages of
$5$ different random initializations of Lenet on MNIST.
(a) Glorot Normal Initialization
(b) Glorot Uniform Initialization
Figure 14: Distribution of selected weights on MNIST. As noted above, both
sampling methods tend to choose larger magnitude weights as opposed to small
values. This behavior is more evident when the values are sampled from a
Glorot Uniform distribution (bottom) as opposed to a Glorot Normal
distribution (top). However, layer $3$, which has the fewest weights of any
layer in this work, continues to select large magnitude weights even when
using a normal distribution.
Figure 15: Distribution of selected weights on CIFAR-10. Similar to the plots
shown in Figure 11 in the paper, both CONV-2 and CONV-4 on CIFAR-10 tend to
choose weights of increasingly large magnitude as training progresses. Here,
we show the distributions of the selected weights in the first two
convolutional layers and the first fully-connected layer of the above networks,
but all the layers in all slot machines show a similar pattern.
## Appendix B Scores Initialization
We initialize the quality scores by sampling from a uniform distribution
$\mathbb{U}(\gamma,\gamma+\lambda\sigma_{x})$. As shown in Figure 13, we
observe that our networks are sensitive to the range of the uniform
distribution the scores are drawn from when trained using GS. However, as
expected we found them to be insensitive to the position of the distribution
$\gamma$. Generally, narrow uniform distributions, e.g., $\mathbb{U}(0,0.1)$,
lead to higher test set accuracy compared to wide distributions e.g.,
$\mathbb{U}(0,1)$. This matches intuition since the network requires
relatively little effort to drive a very small score across a small range
compared to a large range. To concretize this intuition, take for example a
weight $\tilde{w}$ that gives the minimum loss for connection ($i,j$). If its
associated score $\tilde{s}$ is initialized poorly to a small value, and the
range is small, the network will need little effort to push it to the top to
be selected. However, if the range is large, the network will need much more
effort to drive $\tilde{s}$ to the top so that $\tilde{w}$ is selected. We
believe that this
sensitivity to the distribution range could be compensated by using higher
learning rates for wider distributions of scores and vice-versa.
H. Jelodar, e-mail: <EMAIL_ADDRESS>
R. Orji, e-mail: <EMAIL_ADDRESS>
S. Matwin, e-mail: <EMAIL_ADDRESS>
S. Weerasinghe, e-mail: <EMAIL_ADDRESS>
O. Oyebode, e-mail: <EMAIL_ADDRESS>
Y. Wang, e-mail: <EMAIL_ADDRESS>
1 School of Computer Science and Technology, Nanjing University of Science and Technology, Nanjing 210094, China
2 Faculty of Computer Science, Dalhousie University, Halifax, NS, Canada
3 Faculty of Medicine, Dalhousie University, Halifax, NS, Canada
4 Institute of Computer Science, Polish Academy of Sciences, Warsaw, Poland
# Artificial Intelligence for Emotion-Semantic Trending and People Emotion
Detection During COVID-19 Social Isolation
Hamed Jelodar1,2, Rita Orji2, Stan Matwin2,4, Swarna Weerasinghe3, Oladapo Oyebode2, Yongli Wang1
(Received: date / Accepted: date)
###### Abstract
Taking advantage of social media platforms, such as Twitter, this paper
provides an effective framework for emotion detection among those who are
quarantined. Early detection of emotional feelings and their trends help
implement timely intervention strategies. Given the limitations of medical
diagnosis of early emotional change signs during the quarantine period,
artificial intelligence models provide effective mechanisms in uncovering
early signs, symptoms and escalating trends. The novelty of the approach
presented herein is a multi-task methodological framework for text data
processing, implemented as a pipeline for meaningful emotion detection and
analysis, based on the Plutchik/Ekman approach to emotion detection and trend
detection. We present an evaluation of the framework and a pilot system. The
results confirm the effectiveness of the proposed framework for topic trend
and emotion detection in COVID-19 tweets. Our findings reveal that
Stay-At-Home restrictions led people to express both negative and positive
emotional semantics (feelings) on Twitter, where the negative ones are “Anger”
(8.5% of tweets), followed by “Fear” (5.2%) and “Anticipation” (53.6%), and
the positive ones are “Joy” (14.7%) and “Trust” (11.7%). Semantic trends of
safety issues related to staying at home decreased rapidly within the 28 days,
while negative feelings related to friends dying and quarantined life
increased on some days. These findings have the potential to inform public
health policy decisions through monitoring trends of the emotional feelings of
those who are quarantined. The framework presented here can assist in such
monitoring by serving as an online emotion detection toolkit.
###### Keywords:
Twitter, NLP, Deep Learning, COVID-19, Emotion
## 1 Introduction
Over 73 million people have been affected by COVID-19 across the globe [3].
This more-than-a-year-long outbreak is likely to have a significant impact on
the mental health of many individuals who lost loved ones or who lost personal
contact with others due to strictly enforced public health guidelines of
mandatory social segregation. Complex psychological reactions to COVID-19
regulatory mechanisms of mandatory quarantine and the related emotional
reactions have been recognized as hard to disentangle [1] – [4]. A study
conducted in
Belgium found social media being positively associated with constructive
coping for adolescents with anxious feelings during the quarantine period of
COVID-19 [4]. Another study conducted among social media users during COVID-19
pandemic in Spain was able to capture added stress placed on people’s
emotional health during the pandemic period [5]. However, with social media
providing a platform for risk communication and for exchanging feelings and
emotions to curb social isolation, its text data offers a wealth of
information on the natural flow of people’s emotional feelings and expressions
[6]. This rich source of data can be utilized to overcome the data collection
barriers during the pandemic. The goal of this research was to use AI to
uncover the hidden, implicit signals related to the emotional health of people
subject to mandatory quarantine, embedded in a latent manner in their Twitter
messages.
Within the context of this paper, an NLP-based emotion detection system aims
to provide useful information by examining unstructured text data used in
social media. The purpose of the NLP system used herein is to show the meaning
and emotions of users’ expressions related to a particular topic, which can be
used to understand their psychological health and emotional wellbeing. In this
regard, the use of an NLP-based approach for emotion detection from complex
textual structures such as social media (e.g., Twitter) remains a challenge in
biomedical applications of AI.
The goal of this paper is to contribute an AI-based methodological framework
that can uncover emotion-semantic trends to better understand the impact and
inform the design of quarantine regulations. The two-fold objectives of this
paper are: (a) to develop an AI framework based on machine learning models for
emotion detection, and (b) to pilot this model on unstructured tweets that
followed the quarantine regulation of stay-at-home messaging during the first
wave of COVID-19. We investigate emotions and semantic structures to discover
knowledge from general public Twitter exchanges. We analyze the structure of
vocabulary patterns used on Twitter with a specific focus on the impact of the
Stay-At-Home public health order during the first wave of COVID-19. The AI
framework is described herein, and the pipeline piloted with it can be used
for emotion detection in social media information exchanges during the second
wave of COVID-19 and beyond, to investigate the impact of any future public
health guideline.
We aim to demonstrate the effectiveness of deep learning models for detecting
emotions from COVID-19 tweets. In addition, our findings will provide
direction for public health decision making based on emotion trend detection
over a four-week period in relation to the “Stay at Home” order. The
contributions of this paper can be summarized as follows:
* •
A cleaned and standardized tweet dataset of COVID-19 issues is built in this
research, and a new database of emotion-annotated COVID-19 tweets is
presented, which could be used for future comparisons and implementations of
detection systems based on machine learning models.
* •
We design a triple-task framework to investigate the emotions across the eight
standard categories of Plutchik’s model (explained in Section 4.1) using the
COVID-19 tweets, in which all three tasks are complementary to each other
towards a common goal.
* •
We discover semantic word trends via models such as latent Dirichlet
allocation (LDA) and probabilistic latent semantic analysis (PLSA). We aim at
semantic knowledge discovery based on topic trends and semantic structures
during the first wave of the pandemic, which provides an effective mechanism
for managing future waves.
* •
A deep learning model based on a convolutional neural network (CNN) is
presented for emotion detection from the COVID-19 tweets. To the best of our
knowledge, this is the first attempt to automatically detect emotions in
people’s reactions to staying at home during the pandemic based on online
comments, especially for #StayAtHome.
This paper is organized into sections: (a) a review of literature on existing
models for emotion detection in social media pertaining to online health
communities (section 3), (b) introduction of a multi-task framework for
COVID-19 emotion detection (section 4), (c) description of the Twitter data
collection and research experiments (section 5), (d) discussion of the
effectiveness of the presented AI framework and future research directions
(section 6), with a final section on the conclusions from the findings of
emotion detection during stay-at-home (section 7).
## 2 NOVELTY OF THE PROPOSED FRAMEWORK
Although machine learning based emotion detection approaches have been
proposed for social media text analysis in the context of COVID-19, many
challenges remain to be addressed. Most of the existing studies related to
COVID-19 on Twitter and other social media platforms analyzed general public
opinion; no research has specifically investigated emotions related to the
quarantine “stay at home” order, a public health policy of social segregation
that is widely used across the globe. The novelty of the methods used in this
paper lies in a multi-task framework that can be directly applied to
COVID-19-related mood discovery, using eight types of emotional reaction, and
in the design of a deep learning model to uncover emotions during the first
wave of the pandemic’s public health restriction of mandatory social
segregation. We argue that the framework can discover semantic trends in
COVID-19 tweets during the first wave of the pandemic to predict new concerns
that may arise in subsequent waves of COVID-19 quarantine orders and other
related public health regulations. Our novel approach can help future public
health crisis management in the new waves of the Coronavirus pandemic.
Moreover, public health decision makers need to understand the temporal
patterns of emotional reactions in the population when these public health
regulatory measures are continued. To fulfill this need, we investigate the
semantic topic and emotion trends to better understand people’s reactions
during the initial wave of the pandemic.
## 3 RELATED WORK
NLP and machine learning have been used within the context of identifying the
types of emotions in Twitter texts. In this section, we provide a review of
literature on recent emotion detection studies with a focus on: emotion
detection in online health communities, emotion-based lexical models, deep
learning and machine learning, and directions for public health decision
making using social media during COVID-19 related text analytics.
Emotion detection analytics through information retrieval and NLP as a
mechanism have been used to explore large text corpora of online health
community communications in psychiatry, dentistry, cancer and health and
fitness. For example, a communication tool was introduced for mental health
care to understand counseling content based on emotion detection and natural
language processing using chat assistants [7] – [12]. Similar to the approach
proposed in our work, one study analyzed messages in online health communities
(OHCs) to understand the most prominent emotions in health-related posts and
proposed a computational model that can exploit the semantic information from
the text data [9]. The authors presented a dataset from a cancer forum
annotated with the six common emotions of the Ekman model and investigated the
most prominent emotions in OHCs. We propose to use a broader set of emotions
using Plutchik’s model, which contains eight emotions.
In our previous work [13], the application of sentiment and latent-topic
techniques to COVID-19 comments in social media shed light on the usefulness
of NLP methods in uncovering issues directly related to COVID-19 public health
decision making. In this study, we extend that methodology toward our goal of
extracting meaningful knowledge of emotional expression words from people’s
reactions during mandatory quarantine, using the #StayAtHome hashtag on
Twitter. This knowledge is essential as it can help decision makers take
necessary actions to control the adverse emotional effects of various public
health policies, especially during the emerging waves of the pandemic.
Clearly, negative emotional effects such as anger and fear can lead to
negative social reactions. To the best of the authors’ knowledge, little
research has been done to understand emotional expression during mandatory
quarantine, partly due to difficulties in collecting such personal-level data
during the pandemic. The authors of a study in India analyzed real-
time posts on Twitter during COVID-19, and they were able to identify the
expression of mood of the nation through their analysis [14]. Also, they
developed a platform for viewing the trends in emotional change across the
country during a specific interval. As a result, their system allowed users to
view the mood of India on specific events happening in the country during
COVID-19. Development of such a platform for Canada may be a far-reaching goal
of this study. This study presents the first step towards the development of
such online tools to monitor moods and emotions to inform public health
decision makers.
## 4 METHODS
This paper’s methods provide a step-by-step approach to text data processing,
emotion detection and intensity scoring, emotion-semantic trend calculation,
and finally evaluation of the deep learning algorithm using training and
testing data.
### 4.1 Multi-Task Framework
In this section, we present a multi-task framework based on Plutchik’s emotion
model [15] and deep learning techniques to address the aforementioned research
objectives of this paper. Our approach includes three main tasks. Plutchik’s
model is an operationalization of Ekman [16]. The first task is to create
models to investigate emotional reactions to the mandatory Stay-At-Home
restrictions using tweets. The second task is to show how to discover semantic
and emotional trends to obtain the patterns depicted in the first wave of the
pandemic for the 30-day period from April 28th to June 1st, 2020. Finally, in
the third task, a deep neural network is built as an emotion detection system
that can be used for analyzing social media exchange data during the
quarantine period. Our framework, including these three tasks, is presented in
Fig. 1.
1) #StayAtHome-Related Tweets and Data-Dimension Reduction:
Our inclusion criteria cover only tweets related to COVID-19 and the
Stay-At-Home order. Application of these inclusion criteria is a critical step
because the quality of the input data directly affects the output. Lexical
text analysis, data-dimension reduction, and NLP preprocessing of the data are
necessary to clean the data by removing noisy and inconsistent tweets, and to
identify relevant and appropriate information related to the topic of
interest. For this purpose, four NLP techniques are used: sentence splitting
and word tokenization, removal of stop-words and hashtags, HTML cleaning to
remove unnecessary contents, and stemming to remove prefixes and suffixes,
hence returning each word to its root. The main purpose of the text data
cleaning process is to eliminate all tweets unrelated to the subject of our
stay-at-home public health order. Each tweet passed through a set of filters
created based on the points described above. Since, as stated in the
objectives of this paper, we want to detect the emotions expressed in
English-language tweets, non-English tweets are eliminated from the dataset
and the classifier is trained using English tweets only. A sketch of this
preprocessing pipeline is given below.
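The following is an illustrative version of these cleaning steps using NLTK (the library choice, regular expressions, and example are ours; the paper does not specify its tooling):

```python
# Requires: nltk.download("punkt"); nltk.download("stopwords")
import re
from nltk.corpus import stopwords
from nltk.stem import PorterStemmer
from nltk.tokenize import word_tokenize

stemmer = PorterStemmer()
stop_words = set(stopwords.words("english"))

def clean_tweet(text):
    text = re.sub(r"<[^>]+>", " ", text)                   # residual HTML tags
    text = re.sub(r"(#\w+|@\w+|https?://\S+)", " ", text)  # hashtags, mentions, URLs
    tokens = word_tokenize(text.lower())
    return [stemmer.stem(t) for t in tokens
            if t.isalpha() and t not in stop_words]

print(clean_tweet("Staying positive at home! #StayAtHome"))  # e.g. ['stay', 'posit', 'home']
```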
2) Task I: Emotion-detection of #StayAtHome tweets:
To achieve our research goal, Task I is the most important process: the
automatic detection of emotions from #StayAtHome tweets. It is the first step
towards the initial determination of the type of COVID-19 emotions, and it
also has a direct influence on Tasks II and III. We take advantage of the NRC
Word-Emotion lexicon [17], based on Plutchik’s Wheel of Emotions, to perform
this task, which is beneficial for standard scoring of word-emotion
associations. In this research, three sub-processing steps are carried out on
every annotated tweet: (a) identifying the type of emotion using Plutchik’s
theory, hence warranting reproducibility, (b) assigning the emotion score
obtained from the National Research Council Canada (NRC) Emotion Lexicon [17],
and (c) identifying the emotion and the maximum association score based on the
scores computed according to the following rule. In step (b), we calculate the
total emotion association score as the sum of the scores of the terms, with
higher scores indicating higher intensity of the emotion in the tweet (see
Table 1 for an example). The scores denote the intensity of the emotion in the
COVID-19 tweets. By default, every emotion in the tweet receives a value of 1.
This value is increased or decreased by the intensifier words used in the
tweet. However, if there are multiple mentions of an emotion, then the
intensity receives a higher score, as shown in Table 1. In step (c), the
maximum association score of a tweet represents the maximum score noted for
any of the eight emotions. A tweet that does not associate with any emotion
receives a score of zero (0), as shown in Table 1. This AI-based emotion
detection task uncovers emotion semantics with an emotion valuation (strength)
attached to each emotion lexicon entry in each of the tweets. A simplified
sketch of this scoring procedure follows.
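The following toy sketch illustrates steps (a)–(c); the four-entry lexicon is hypothetical and merely stands in for the NRC Word-Emotion lexicon:

```python
from collections import Counter

EMOTIONS = ["anger", "anticipation", "disgust", "fear",
            "joy", "sadness", "surprise", "trust"]

# Hypothetical lexicon fragment: stemmed token -> associated emotions.
LEXICON = {"challeng": {"anticipation", "fear"}, "enjoy": {"joy"},
           "positiv": {"joy"}, "healthi": {"joy", "trust"}}

def score_tweet(tokens):
    scores = Counter({e: 0 for e in EMOTIONS})
    for t in tokens:                       # (b) sum per-emotion term scores
        for e in LEXICON.get(t, ()):
            scores[e] += 1
    # (c) the label is the emotion with the maximum association score;
    # a tweet matching no lexicon entry gets all zeros and no label.
    label, top = max(scores.items(), key=lambda kv: kv[1])
    return dict(scores), (label if top > 0 else None)

print(score_tweet(["enjoy", "healthi", "challeng"]))  # 'joy' scores highest (2)
```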
Figure 1: Research framework and pipeline for COVID-19 tweet emotion capture
and analysis.
Figure 2: Example of the process of determining the score for a pure tweet
related to #StayAtHome.
Since the labelling is done automatically and no human tagging is used, we
will have consistent data annotation. This model defines eight basic emotions
and makes it possible to provide consistent classification of texts to uncover
the trend in the data to reach the objectives of this research. Fig. 2
provides an example of selecting the score for COVID-19 tweets. For example,
the tweet that after text processing reads “Sad man friend whos livin skin
cant stand company” is assigned the emotion SAD together with FEAR, as this
emotional expression received the highest score from the NRC-based lexicon
when the process detects the predominant emotion.
Table 1: Example of tweets with various emotions
Tweet ID. | Tweet without stop-words | Label
---|---|---
A | Tweet | Today has been a challenging day, here’s to tomorrow | Anticipation
Score | Anger=0$\sim$Anticipation=1$\sim$Disgust=0$\sim$Fear=0 $\sim$Joy=0 $\sim$Sadness=0$\sim$Surprise=0$\sim$Trust=0
B | Tweet | A day is a long time in the coronavirus pandemic. | Anticipation
Score | Anger=0$\sim$Anticipation=2$\sim$Disgust=0$\sim$Fear=0 $\sim$Joy=0 $\sim$Sadness=0$\sim$Surprise=0$\sim$Trust=0
C | Tweet | Looking forward to those summer days when I can enjoy the beach and the ocean breeze again????. Stay positive and healthy everyone. | Joy
Score | Anger=0$\sim$Anticipation=1$\sim$Disgust=0$\sim$Fear=0$\sim$Joy=3$\sim$Sadness=0$\sim$Surprise=0$\sim$Trust=1
3) Task II: Emotion/Semantic-Trends of #StayAtHome
Researchers have identified the timing of emotional progression and noted that
positive emotions arose significantly earlier while negative emotions took
longer [18] – [20]. Identification of emotion and semantic trends over time
can be helpful and effective for understanding temporal changes in opinions
related to human behavior. In fact, understanding mood changes and awareness
of emotion trends can have practical applications for public health decision
making. We use the semantic topics [21] discovered in the entire dataset to
detect and describe semantic trends.
In order to obtain semantic topics, we need to design a topic model. In fact,
we consider two popular methods for evaluating and determining an optimal
approach to obtain semantic trends from #StayAtHome tweets of the online
community during the stay-at-home period. For this task, the PLSA [21] and LDA
[22] models are employed to obtain the best semantically related words and to
discover the semantic structures of the COVID-19 tweets, as described below:
\- The probabilistic latent semantic analysis (PLSA) model is used as an NLP
technique that can display topical similarities between words.
\- The latent Dirichlet allocation (LDA) model has proven very useful for
semantic extraction and generating trends over time. LDA has been successfully
applied in several applications such as topic discovery, temporal semantic
trends, document classification, and finding relations between documents.
Another advantage of this method is the identification of semantic trends over
time, which we use in this research as a means to discover unusual semantic
trends in #StayAtHome tweets during the first wave of the pandemic (a sketch
of this step appears at the end of this subsection). Overall, the main aim of
Task II is to capture two kinds of trends, based on the emotional and semantic
aspects of the COVID-19 tweets. The LDA model discovers ‘semantically related
words’ from the semantic structure of the text; then, by investigating the
distributions of these semantic topics across different days, we obtain the
semantic trends.
As a second part of this task, we compute the counts of each emotion type to
identify the trends among different emotions based on Task I. Considering the
length and strength of the stay-at-home public health order in the first wave,
we believe it is necessary to examine the changes in people’s emotions by
monitoring the time trends and fluctuations in direction using Twitter data.
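A hedged sketch of the LDA part of this task using gensim (the paper does not name an implementation; function names and parameters here are our assumptions):

```python
from gensim.corpora import Dictionary
from gensim.models import LdaModel

def semantic_trends(daily_token_lists, num_topics=10):
    """daily_token_lists: {day: [list of stemmed tokens per tweet, ...]}."""
    docs = [toks for tweets in daily_token_lists.values() for toks in tweets]
    vocab = Dictionary(docs)
    corpus = [vocab.doc2bow(d) for d in docs]
    lda = LdaModel(corpus, num_topics=num_topics, id2word=vocab, passes=5)
    # Average per-topic weight over each day's tweets -> one trend per topic.
    trends = {}
    for day, tweets in daily_token_lists.items():
        weights = [0.0] * num_topics
        for toks in tweets:
            for topic, p in lda.get_document_topics(vocab.doc2bow(toks)):
                weights[topic] += p
        trends[day] = [w / max(len(tweets), 1) for w in weights]
    return lda, trends
```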
4) Task III: Modeling Sentence and COVID-19 Emotion-Detection
Machine learning offers the advantage of automatic emotion detection beyond
the use of existing lexical dictionaries for emotion analysis. In particular,
deep learning models have proven successful in many NLP applications for
emotion detection from health and medical text data [23] – [25]. We use a
convolutional neural network (CNN) [26] – [27] to implement an emotion
detection system based on emotion vectors of #StayAtHome tweets. Our network
involves embedding layers, convolution layers, dropout layers, max-pooling
layers, and filters.
In the following, we discuss the details of the designed deep-learning model,
the number of convolutional layers, and dense layers to build our COVID-19
emotion detection framework; since the input layer is the sentence
representation, a convolutional layer is then deployed to obtain the sequence
level feature from the sentence sequences. Moreover, the convolutional layer
is considered as the core functional block and includes a collection of
filters. These filters serve as a learner in the network act when they find
some certain type of feature at a determined input. Overall, we consider three
flatten layers for the designed method. Finally, we concatenate the output of
all three learned features by considering dense layers to generate scores and
recognize the type of emotion of the COVID-19 tweets.
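A hedged Keras sketch of such a multi-channel CNN is shown below. The vocabulary size, sequence length and the three filter sizes (3, 4, 5) are illustrative assumptions rather than the paper's exact configuration; Section 6 reports that two convolutional layers with filter sizes 3 and 4 performed well.

```python
from tensorflow.keras import layers, Model

# Illustrative sizes: 20k-word vocabulary, 100-d embeddings (word2vec),
# tweets padded to 50 tokens, 8 emotion classes.
VOCAB, DIM, LEN, CLASSES = 20000, 100, 50, 8

inp = layers.Input(shape=(LEN,), dtype="int32")
emb = layers.Embedding(VOCAB, DIM)(inp)

# Three parallel channels with different filter sizes, each followed by
# max-pooling, dropout and a flatten layer, then concatenated.
channels = []
for size in (3, 4, 5):
    c = layers.Conv1D(128, size, activation="relu")(emb)
    c = layers.MaxPooling1D(pool_size=2)(c)
    c = layers.Dropout(0.5)(c)
    channels.append(layers.Flatten()(c))

merged = layers.concatenate(channels)
hidden = layers.Dense(64, activation="relu")(merged)
out = layers.Dense(CLASSES, activation="softmax")(hidden)

model = Model(inp, out)
model.compile(optimizer="adam", loss="categorical_crossentropy",
              metrics=["accuracy"])
```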
## 5 EXPERIMENTAL EVALUATION
This section describes the dataset, generates emotion/semantic trends, and
reports informative results from various experiments to evaluate the
performance of the research model. To show the value of the framework, we
conduct experiments on the collected COVID-19 tweets and generate informative
emotion/semantic trends. We provide a comparison with a standard baseline to
demonstrate the superiority of our CNN model for emotion detection, using F1
score and accuracy as standard metrics. In this research, we use 90% of the
data for training and 10% for testing in all experiments. We focus on the
tweets from the first wave of the COVID-19 pandemic based on #StayAtHome. NLP
pre-processing methods (such as stop-word removal and stemming) are used to
reduce noise and improve the quality of the task output. Table 2 shows the
time periods along with the number of tweets based on pure content and the
status of the data after removing duplicate content.
Table 2: DETAILS OF THE #STAYATHOME DATASET
#StayAtHome Days | Pure Tweets | Obviate Duplicate Tweets | After Data-Reduction
---|---|---|---
28-4 To 6-5 | 409,000 | 145,000 | 92,000
7-5 To 15-5 | 289,000 | 112,000 | 12,000
16-5 To 24-5 | 226,000 | 88,000 | 10,000
25-5 To 2-6 | 122,000 | 54,000 | 6,000
Total | 1,046,000 | - | -
### 5.1 Informative Trends of the first wave: Emotion and Semantic
Trending topics describe, to a certain extent, the opinion of a community and
provide the means to analyze it; knowing where public attention lies at a
given time point has become a matter of interest for researchers and health
professionals. Regarding Task II, we need to predict the trend of topics and
provide some explanation for the important variations in COVID-19 trends. To
test our machine learning approach with respect to this task, we randomly
split our dataset into 90% for training and the remaining 10% for testing.
### 5.2 Relationship between semantic trends and StayAtHome Tweets
It is difficult to identify the key concepts discussed by users in a million
tweets using traditional means, so we examine NLP methods (LDA and PLSA) to
extract topics based on semantic aspects and better understand people’s
behaviors and reactions during stay-at-home. Then, we investigate the
distribution of the generated topics over the different days of the initial
wave of the outbreak; the results of this process can be helpful for managing
public health in the community. First, we investigate the PLSA and LDA models
to analyze and validate the relationship between the semantic topics
extracted from COVID-19 tweets and related issues of the pandemic. For this
purpose, we use the MALLET package. We generate 100 topics and focus only on
the 5 top topics of all COVID-19 tweets as the result of topic modeling for
discovering semantic related-words. Fig. 3 compares the performance of the
two topic modeling methods we considered for identifying emotional trends in
our Twitter data. It shows that LDA captures semantic topics better than the
PLSA model for extracting the semantic topics of #StayAtHome tweets.
Therefore, we use an LDA model for performing Task II.
Figure 3: The precision metric of word clustering for semantic-topic trending
based on LDA and PLSA
To implement our analytic framework’s detection of semantic trends during the
initial wave of the pandemic, we investigate the five top topics (i.e., S1,
S2, S3, S4, and S5; Fig. 4) to better understand how the online community’s
reactions change over time. These topics are distributed over different days,
and we were able to isolate the time-varying nature of the semantic trends of
#StayAtHome tweets labeled by the automatic process described in Task I.
According to Fig. 4, the highest ranked (most frequent) topic is
characterized by the words Home, Staysafe, Lockdown, Love, and Family. These
correspond to the safety issues related to staying at home. We label this
topic S1. It decreases rapidly over time at a rate of 0.11 (p=0.04) within
the 28 days, and the decline was greater within the last 14 days at 0.28
(p=0.001) (Fig. 4), with some day-to-day fluctuations shown in the graph.
Topic S2 (words Live, Free) shows a decline within the first 14 days and then
an increasing trend within the last 14 days. We would need to confirm this
with data related to quarantine; it is possible that after the 14-day
quarantine period individuals feel free to live. It is important to note that
negative feelings grow over time: e.g., S4 (words Friends, Die, Virus)
increases at a rate of 0.14 (p=0.0001) and S5 (Home, Remote, Quarantine,
Health life) increases at a rate of 0.06 (p=0.0005) over the 28-day tweet
collection period. Herein we show the dynamic behavior of statistically
significant topic trends from April 27th to June 1st.
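The paper does not state how these rates and p-values were computed; a plausible minimal sketch is an ordinary least-squares trend fitted to the daily topic shares, as below (toy data).

```python
import numpy as np
from scipy import stats

# Daily share of a topic over the 28-day window (synthetic example with
# a built-in downward trend, standing in for the S1 series).
days = np.arange(28)
rng = np.random.default_rng(0)
s1_share = 0.5 - 0.011 * days + rng.normal(0, 0.02, 28)

res = stats.linregress(days, s1_share)
print(f"rate={abs(res.slope):.3f}, p={res.pvalue:.4f}")
```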
Figure 4: Semantic trends of the initial waves of COVID-19 pandemic by
#StayAtHome
### 5.3 Relationship Between Emotion Trends and StayAtHome Tweets
In Tasks I and II of the framework, we take advantage of the NRC emotion
lexicon, which is grounded in Plutchik’s theory and covers about 14,000
words, to find the eight primary emotions: anger, anticipation, joy,
surprise, sadness, disgust, trust, and fear. The NRC dictionary has been
widely validated for emotion analysis on social media such as Twitter;
therefore, we use this lexicon for the #StayAtHome tweets in this research.
Applying the Task II analysis at this stage, we identify the relationships
among emotion trends in COVID-19 tweets based on Plutchik’s classification
theory.
Figure 5: Distribution of emotion trends in #StayAtHome tweets over time in
the initial wave of COVID-19 pandemic. Figure 6: Emotion Trends of the
COVID-19 tweets in line graphs.
According to Fig. 5, the emotion most frequently depicted in tweets across
time is “Anticipation”. As shown in Fig. 6, the mean percentage of
“Anticipation” detected per day across the 30-day period is 53.6% (95% CI:
52.4-55.0%), with the least shown by “Surprise” (mean 1.5%, CI: 1.4-1.6%) and
“Sadness” (mean 2.2%, CI: 2.1-2.3%). Among negative emotions, “Anger” is the
highest (mean 8.5%, CI: 8.2-8.7%), followed by “Fear” (mean 5.2%, CI:
5.0-5.5%), “Disgust” (mean 2.3%, CI: 2.2-2.4%) and “Sadness” (mean 2.2%, CI:
2.1-2.3%). Among positive feelings, “Joy” (mean 14.7%, CI: 14.4-15.4%) is the
second highest emotion.
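As a short sketch of how such daily means and 95% confidence intervals can be obtained (the interval method is not stated in the paper, so a normal approximation over the daily percentages is assumed; the data below are synthetic):

```python
import numpy as np

# Synthetic daily percentages of tweets labelled "Anticipation" over 30 days.
rng = np.random.default_rng(1)
daily_pct = rng.normal(53.6, 3.0, 30)

mean = daily_pct.mean()
half = 1.96 * daily_pct.std(ddof=1) / np.sqrt(len(daily_pct))
print(f"mean={mean:.1f}%, 95% CI: {mean - half:.1f}-{mean + half:.1f}%")
```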
According to the psychology literature [36], Anticipation and Surprise can be
related to positive or negative health emotional outcomes. Nevertheless, in
this study anticipation stemmed from the hashtag “stay-at-home”, a
restriction on a socially undesirable action, and therefore one can assume
anticipation is mostly directed towards a negative emotional feeling of
perceived susceptibility. It is important to consider that these tweets were
exchanged during the early pandemic period of April 28th to June 1st, 2020 of
the first wave, and North America reported its peak in May 2020. Anger may be
directed towards missing summer outdoor activities due to stay-at-home
restrictions, whereas fear may be expressed by those living in high-risk
clusters of the elderly and those with chronic conditions. It is important to
note that the negative feeling of disgust was minimal, suggesting people may
be aware of the importance of quarantine regulations. These emotion
expressions and trend detections over time provide important signals that
public health decision makers can take into account in future public health
regulation ordering. Though people trust the public health measures, their
anticipation of negative emotions needs to be considered in the way public
health regulations are ordered and imposed. Negative connotations of
anticipation and fear can be overcome with public education using social
media.
### 5.4 Deep learning model configurations and Training details
The objective of Task III of this work is to automatically detect emotions
from #StayAtHome tweets by using a Multi-Channel CNN methodology as the
computational model for emotion detection in COVID-19 tweets. The model is
trained on tweets from the COVID-19 emotion dataset created in Task II.
First, taking the input data from Task I, we leverage our COVID-19 tweet data
to train our own word embeddings with the Word2Vec technique [13], which
provides a much richer text representation than classical, word-based
approaches. The corpus for training word2vec is generated by selecting all
the necessary words, preprocessing the data and removing all stop words. The
output of word2vec is a set of real-valued vectors (we used 100 as the
dimension). For our implementation, we used the Keras library. Dense layers
are then used to output the result of the deep learning model for COVID-19
emotion detection. As already discussed, every training tweet is
automatically annotated (labelled) with one of eight different emotions,
including fear, joy, trust, sadness, anger, surprise, disgust, and
anticipation. To the best of our knowledge, this is one of the few studies of
automatic emotion detection from COVID-19 tweets focusing on Stay-At-Home
issues.
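A minimal sketch of this embedding step, assuming the gensim implementation of Word2Vec (the paper does not name the word2vec library used):

```python
from gensim.models import Word2Vec

# Preprocessed, tokenized tweets (stop-words already removed).
corpus = [["stay", "home", "save", "lives"],
          ["looking", "forward", "summer", "days"]]

# 100-dimensional vectors, matching the dimension stated above.
w2v = Word2Vec(sentences=corpus, vector_size=100, window=5, min_count=1)
print(w2v.wv["home"].shape)  # (100,)
```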
### 5.5 Validation and comparison of deep learning model
As Long Short-Term Memory (LSTM) [28] – [29] is a standard baseline in this
research area, we also use this deep model in our study. We compare the CNN
model discussed in Sec. 4 with an LSTM-Softmax model for COVID-19 emotion
detection on the #StayAtHome dataset. The LSTM network used in this research
consists of 64 units. We train our models with different numbers of epochs
(10, 20, 30, 40 and 50) to ensure the significance of the obtained results.
For each COVID-19 tweet, we have 8 labels as targets for detection; the
output of the deep-learning model therefore determines the emotion type of
the COVID-19 tweets.
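A hedged sketch of this LSTM-Softmax baseline with the stated 64 units and 8 output labels is given below; the embedding sizes are assumptions carried over from the CNN sketch above.

```python
from tensorflow.keras import layers, models

# Baseline: embedding -> 64-unit LSTM -> 8-way softmax over emotion labels.
model = models.Sequential([
    layers.Embedding(20000, 100),
    layers.LSTM(64),
    layers.Dense(8, activation="softmax"),
])
model.compile(optimizer="adam", loss="categorical_crossentropy",
              metrics=["accuracy"])
# model.fit(X_train, y_train, epochs=30, validation_split=0.1)
```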
Figure 7: The F1-score for COVID-19 emotion detection by comparing the CNN
model and the LSTM model
### 5.6 Evaluation of the effectiveness of the framework
We evaluated the performance of the research model on the emotion classes.
Fig. 7 provides a clear view of the variation with different parameters using
the trained word embeddings. In particular, our preliminary results indicate
that the multi-channel CNN outperforms the LSTM-Softmax across the various
epoch settings, based on a multi-class F-score as the standard metric. The
advantage of the CNN model for detecting emotion types in COVID-19 tweets is
that it avoids overfitting while still finding the complex patterns needed
for emotion detection in the introduced data.
## 6 DISCUSSION, LIMITATION AND FUTURE WORK
Our results, in general, suggest that the machine learning methods we use are
appropriate for the emotion detection of COVID-19 tweets. The study results
clearly demonstrated anticipation as the prominent emotional semantic. Among
various definitions relevant to this emotion-semantic analysis, anticipation
is considered one of “the mature ways of dealing with real stress” [30].
Regarding this definition, people can lower their stress during the COVID-19
pandemic by anticipating and preparing for how they are going to deal with
it. Anticipation can be directed at either positive or negative future
events, according to [31], and is aligned with hope and fear, the typical
anticipatory feelings that arise in response to possibilities of such future
events. A study in which multiple unigrams and bi-grams related to COVID-19
Twitter feeds were analyzed using machine learning approaches reported
findings similar to ours, in that the dominant theme identified was
anticipation with a mix of feelings of trust, anger and fear [32].
Developing a framework that can understand the types of standard emotions
contained in COVID-19 sentences on social media is among the challenging
topics of NLP for public health and mental healthcare delivery [33] – [35].
Therefore, in this paper, a multi-task framework is presented to build a
smart emotion detection system based on the Stay-At-Home aspects of COVID-19
tweets. All the experiments are performed using different parameter settings.
The results suggest that the CNN model with two convolutional layers with
filter sizes of 3 and 4 achieves good performance on various metrics for
emotion detection and classification.
The use of online social network text data to understand users’ health
behaviors and emotions has become an emerging research area in NLP and health
informatics [36] – [49]. COVID-19 introduced an unprecedented global threat
for which the public health planning and policy making community is still
struggling to find best practices to curb the pandemic. As the pandemic
evolved, public health guidelines became strict measures imposed on the
general public. This one-track-minded approach to combating the spread,
better known as flattening the curve, neglected the emotional and mental
health of the individuals subject to those strict public health orders. The
findings of this study showed a mechanism by which the emotions and semantic
trends of people’s reactions to COVID-19 public health restrictions can be
obtained for knowledge discovery and can inform related decision making. The
advantage of such an approach is that identifying these online trends
provides easy and helpful information about public reactions to particular
issues, and it has therefore recently attracted the attention of medical and
computer science researchers. The framework proposed in this research covers
three practical tasks that are related to each other, with the common goal of
developing a deep-learning system for emotion detection and analysis of
informative trends from COVID-19 tweets of people’s reactions during
stay-at-home. Our final results uncovered important directions for public
health policy makers [40] and decision makers [41] to pay attention to
emotional issues that stemmed from those strict public health restrictions.
This research has some limitations, e.g., the size of the dataset and data
inclusion limited to text-based emotions on COVID-19 issues. Currently, our
data consist of 1,047,968 tweets based on #StayAtHome from 28 April to 2 June
2020. Although more tweets could be extracted based on #StayAtHome, we
believe that the number of current tweets is sufficient to draw reasonable
conclusions about the possibilities of uncovering the consequences of public
health orders and restrictions. We acknowledge that the imbalanced
representation of the different emotions in the dataset is also a limitation
of our work. We did not consider slang or emoticons when computing emotions
in the tweet contents; it would be useful to build a new emotion lexicon
covering slang words related to COVID-19 issues. A longitudinal dataset with
a significantly longer temporal horizon would allow us to use LSTMs on
sequences of tweets, as well as to replace the cross-validation train/test
approach with a time-stamp-based split. A further advantage of a temporally
larger dataset is the opportunity for a longitudinal study combining
geographically based Twitter-detected emotions with COVID-19 incidence and
expanded public health regulations, to enable geographic-area-targeted public
health decision making. The framework we developed showed the potential to
accurately uncover emotional responses and to detect temporal trends in mood
changes due to quarantine-related public health orders.
## 7 CONCLUSION
This paper presented a novel framework for emotion detection using COVID-19
tweets in relation to the “stay-at-home” public health guidelines. Within
this framework, a multi-task approach to COVID-19 emotion detection via a CNN
model was presented. The research further shows that the framework is
effective in capturing the emotion and semantic trends in social media
messages during the pandemic. Moreover, it provides a more insightful
understanding of COVID-19 tweets by automatically identifying the types of
emotions, including both negative and positive reactions, and the magnitude
of their presentation. The framework can be applied to uncover reactions to
similar public health policies that affect people’s well-being. We identified
ways to improve the findings in future research and discussed potentially
significant, realistic future work, such as extending the longitudinal
character of the results and including geography-based public health orders
and spatially annotated COVID-19 case loads.
## Ethical Approval
All procedures performed in studies involving human participants were in
accordance with the ethical standards of the institutional and/or national
research committee and with the 1964 Helsinki declaration and its later
amendments or comparable ethical standards.
Declaration of Conflict of Interest : All authors declare no conflict of
interest directly related to the submitted work.
## References
* [1] Becker, M., et al., Natural language processing of German clinical colorectal cancer notes for guideline-based treatment evaluation. International Journal of Medical Informatics, 2019. 127: p. 141-146.
* [2] Ong, D., et al., Modeling emotion in complex stories: the Stanford Emotional Narratives Dataset. IEEE Transactions on Affective Computing, 2019.
* [3] World Health Organization, Coronavirus disease (COVID-19) pandemic. 2020: https://www.who.int/emergencies/diseases/novel-coronavirus-2019.
* [4] Cauberghe, V., et al., How Adolescents Use Social Media to Cope with Feelings of Loneliness and Anxiety During COVID-19 Lockdown. Cyberpsychology, Behavior, and Social Networking, 2020.
* [5] de Las Heras-Pedrosa, C., P. Sánchez-Núñez, and J.I. Peláez, Sentiment analysis and emotion understanding during the COVID-19 pandemic in Spain and its impact on digital ecosystems. IJERPH, 2020. 17(15): p. 5542.
* [6] Li, Q., et al., Tracking and Analyzing Public Emotion Evolutions During COVID-19: A Case Study from the Event-Driven Perspective on Microblogs. IJERPH, 2020. 17(18): p. 6888.
* [7] Yu, S., et al., Emoticon analysis for Chinese health and fitness topics. International Conference on Smart Health. 2014. Springer.
* [8] Johnsen, J.-A.K., et al., Differences in Emotional and Pain-Related Language in Tweets About Dentists and Medical Doctors: Text Analysis of Twitter Content. JMIR Public Health and Surveillance, 2019. 5(1): p. e10432.
* [9] Khanpour, H. and C. Caragea, Fine-grained emotion detection in health-related online posts. Empirical Methods in Natural Language Processing. 2018.
* [10] Plaza-del-Arco, F.M., et al., Improved emotion recognition in Spanish social media through incorporation of lexical knowledge. Future Generation Computer Systems, 2020. 110: p. 1000-1008.
* [11] Hasan, M., E. Rundensteiner, and E. Agu, Automatic emotion detection in text streams by analyzing Twitter data. International Journal of Data Science and Analytics, 2019. 7(1): p. 35-51.
* [12] Jelodar, H., et al., Deep sentiment classification and topic discovery on novel coronavirus or COVID-19 online discussions: NLP using LSTM recurrent neural network approach. IEEE JBHI, 2020.
* [13] Aslam, F., et al., Sentiments and emotions evoked by news headlines of coronavirus disease (COVID-19) outbreak. Humanities and Social Sciences Communications, 2020. 7(1): p. 1-9.
* [14] Venigalla, A.S.M., S. Chimalakonda, and D. Vagavolu, Mood of India During COVID-19 - An Interactive Web Portal Based on Emotion Analysis of Twitter Data. Conference Companion Publication of the 2020 Computer Supported Cooperative Work and Social Computing. 2020.
* [15] Plutchik, R., A general psychoevolutionary theory of emotion, in Theories of Emotion. 1980, Elsevier. p. 3-33.
* [16] Ekman, P., Basic emotions. Handbook of Cognition and Emotion, 1999. 98(45-60): p. 16.
* [17] Mohammad, S., S. Kiritchenko, and X. Zhu, NRC-Canada: Building the State-of-the-Art in Sentiment Analysis of Tweets. Proceedings of the Seventh International Workshop on Semantic Evaluation. 2013.
* [18] Kwon, S., MLT-DNet: Speech emotion recognition using 1D dilated CNN based on multi-learning trick approach. Expert Systems with Applications, 2020: p. 114177.
* [19] Sun, X., Y. Song, and M. Wang, Towards sensing emotion with deep visual analysis: A long-term psychological modeling approach. IEEE MultiMedia, 2020.
* [20] Gautam, R. and M. Sharma, Prevalence and Diagnosis of Neurological Disorders Using Different Deep Learning Techniques: A Meta-Analysis. Journal of Medical Systems, 2020. 44(2): p. 49.
* [21] Hofmann, T., Unsupervised learning by probabilistic latent semantic analysis. Machine Learning, 2001. 42(1-2): p. 177-196.
* [22] Blei, D.M., A.Y. Ng, and M.I. Jordan, Latent Dirichlet allocation. Journal of Machine Learning Research, 2003. 3(Jan): p. 993-1022.
* [23] Li, H. and H. Xu, Deep reinforcement learning for robust emotional classification in facial expression recognition. Knowledge-Based Systems, 2020. 204: p. 106172.
* [24] Lee, R.Y., et al., Identifying Goals of Care Conversations in the Electronic Health Record Using Natural Language Processing and Machine Learning. Journal of Pain and Symptom Management, 2020.
* [25] Wu, P., et al., Social media opinion summarization using emotion cognition and convolutional neural networks. International Journal of Information Management, 2020. 51: p. 101978.
* [26] Kim, Y., Convolutional Neural Networks for Sentence Classification. Empirical Methods in Natural Language Processing (EMNLP). 2014.
* [27] Kim, J.-C. and K. Chung, Discovery of knowledge of associative relations using opinion mining based on a health platform. Personal and Ubiquitous Computing, 2019: p. 1-11.
* [28] Hochreiter, S. and J. Schmidhuber, Long short-term memory. Neural Computation, 1997. 9(8): p. 1735-1780.
* [29] Uddin, M.Z. and E.G. Nilsson, Emotion recognition using speech and neural structured learning to facilitate edge intelligence. Engineering Applications of Artificial Intelligence, 2020. 94: p. 103775.
* [30] Conte, H.R. and R. Plutchik, Ego Defenses: Theory and Measurement. 1995: John Wiley & Sons.
* [31] MacLeod, A., Prospection, Well-being, and Mental Health. 2017: Oxford University Press.
* [32] Xue, J., et al., Twitter Discussions and Emotions About the COVID-19 Pandemic: Machine Learning Approach. Journal of Medical Internet Research, 2020. 22(11): p. e20550.
* [33] Ta, N., et al., Evaluating Public Anxiety for Topic-based Communities in Social Networks. IEEE Transactions on Knowledge and Data Engineering, 2020.
* [34] Yang, X., et al., A big data analytics framework for detecting user-level depression from social networks. IJIM, 2020. 54: p. 102141.
* [35] Xue, J., et al., Twitter Discussions and Emotions About the COVID-19 Pandemic: Machine Learning Approach. JMIR, 2020. 22(11): p. e20550.
* [36] Pérez-Rodríguez, G., M. Pérez-Pérez, F. Fdez-Riverola, and A. Lourenço, Mining the sociome for Health Informatics: Analysis of therapeutic lifestyle adherence of diabetic patients in Twitter. Future Generation Computer Systems, 2020.
* [37] Puppala, M., T. He, S. Chen, R. Ogunti, X. Yu, F. Li, R. Jackson, and S.T.C. Wong, METEOR: an enterprise health informatics environment to support evidence-based medicine. IEEE Transactions on Biomedical Engineering, 2015. 62(12): p. 2776-2786.
* [38] Fang, R., S. Pouyanfar, Y. Yang, S.C. Chen, and S.S. Iyengar, Computational health informatics in the big data age: a survey. ACM Computing Surveys (CSUR), 2016. 49(1): p. 1-36.
* [39] Roberts, K., M.R. Boland, L. Pruinelli, J. Dcruz, A. Berry, M. Georgsson, … and Y. Jiang, Biomedical informatics advancing the national health agenda: the AMIA 2015 year-in-review in clinical and consumer informatics. Journal of the American Medical Informatics Association, 2017. 24(e1): p. e185-e190.
* [40] Zon, H., M. Pavlova, and W. Groot, Exploring decision makers’ knowledge, attitudes and practices about decentralisation and health resources transfer to local governments in Burkina Faso. Global Public Health, 2020: p. 1-14.
* Karmegam, D., T. Ramamoorthy, and B. Mappillairajan, A systematic review of techniques employed for determining mental health using social media in psychological surveillance during disasters. Disaster Medicine and Public Health Preparedness, 2020. 14(2): p. 265-272.
* [41] Stead, W.W. and N.M. Lorenzi, Health informatics: linking investment to value. Journal of the American Medical Informatics Association, 1999. 6(5): p. 341-348.
Further author information: Send correspondence to <EMAIL_ADDRESS>
# TIPTOP: A NEW TOOL TO EFFICIENTLY PREDICT YOUR FAVORITE AO PSF
Benoit Neichel$^{a}$, Olivier Beltramo-Martin$^{a}$, Cédric Plantet$^{b}$, Fabio Rossi$^{b}$, Guido Agapito$^{b}$, Thierry Fusco$^{c,a}$, Elena Carolo$^{d}$, Giulia Carlà$^{b}$, Michele Cirasuolo$^{d}$, Remco van der Burg$^{e}$
$^{a}$Aix Marseille Univ, CNRS, CNES, LAM, Marseille, France;
$^{b}$INAF - Osservatorio Astrofisico di Arcetri, Largo E. Fermi 5, 50125 Firenze, Italy;
$^{c}$ONERA, B.P. 72, F-92322 Châtillon, France;
$^{d}$INAF - Osservatorio Astrofisico di Padova, Vicolo dell’Osservatorio 5, 35122 Padova, Italy;
$^{e}$European Southern Observatory, Karl-Schwarzschild-Str. 2, 85748 Garching, Germany
###### Abstract
The Adaptive Optics (AO) performance significantly depends on the available
Natural Guide Stars (NGSs) and a wide range of atmospheric conditions
(seeing, Cn2, windspeed, …). In order to easily predict the AO performance,
we have developed a fast algorithm - called TIPTOP - producing the expected
AO Point Spread Function (PSF) for any of the existing AO observing modes
(SCAO, LTAO, MCAO, GLAO) and any atmospheric conditions. The TIPTOP tool
takes its roots in an analytical approach, where the simulations are done in
the Fourier domain. This allows reaching very fast computation times (a few
seconds per PSF) and efficiently exploring the wide parameter space. TIPTOP
has been developed in Python, taking advantage of previous work developed in
different languages and unifying it in a single framework. The TIPTOP app is
available on GitHub at: https://github.com/FabioRossiArcetri/TIPTOP, and will
serve as one of the bricks of the ELT Exposure Time Calculator.
###### keywords:
Adaptive Optics, Point Spread Function, Telescope, ELT
## 1 INTRODUCTION
### 1.1 Adaptive Optics Everywhere
Adaptive Optics (AO) aims at compensating the quickly varying aberrations
induced by the Earth’s atmosphere. It overcomes the natural “seeing”
frontier: the blurring of images imposed by atmospheric turbulence, which
limits the angular resolution of ground-based telescopes to that achievable
by a 10 to 50cm telescope. Over the past 20 years, AO for astronomy went from
a demonstration phase to a well-proven and operational technique. Today, all
8/10m telescopes are equipped with AO and are progressively being turned into
adaptive telescopes. The next step forward will come from the so-called
Extremely Large Telescopes (39m diameter for the ELT[1], 30m for the TMT[2],
24m for the GMT[3]) that will see
first light in less than a decade. The scientific potential of these giants
fully relies on complex AO systems, often integrated inside the telescope
itself, and providing high-resolution images to all the instrumentation
downstream. One crucial aspect for the science observations assisted by AO is
the knowledge of the Point Spread Function (PSF). The PSF delivered by AO
systems has a complex shape, combining spatial, spectral and temporal
variability, such that it is difficult to predict. The AO PSF also highly
depends on the atmospheric parameters and the Natural Guide Stars (NGSs)
selected. Finally, the AO-PSF can also have a very different behavior
depending on the AO flavor. The goal of this paper is to present a simple tool
- called TIPTOP - which aims at simulating the expected AO PSF for any sort of
AO system. In particular, this tool will be used in the frame of the ELT
observation preparation, where the users will need to know the AO performance
in order to properly design their observation strategies.
### 1.2 The ESO Working Groups
ESO has initiated a series of Working Groups (WGs) to address the general
problematic of “preparing observations with the ELT”. A detailed description
of the WG can be found on the new ELT website: https://elt.eso.org/about/.
The goal of these WGs is to provide the necessary infrastructure to prepare
and execute observations with the ELT. Under this thematic, 4 WGs are
respectively addressing the issue of:
* •
Providing relevant star catalogs, down to H=21, which may represent the
faintest stars to be used by AO instruments
* •
Providing tools to select the optimal asterism of NGSs and predicting the
Adaptive Optics (AO) PSF based on the selected stars and the atmospheric
conditions
* •
Providing realistic numerical models of the instruments
* •
Providing an Exposure Time Calculator
Those four WGs are (logically) chained, each one getting information from the
previous one, and providing inputs to the next one.
In this work, we focus on defining an algorithm capable of choosing the best
combination of star(s) available in the field of view, and generating the PSF
expected for the observation for a given AO system, a given observing mode and
a given set of environmental conditions. In the case of the ELT, the exact
requirements for this tool are defined in Section 2, but before going in these
details, let us first define what the AO PSF is.
### 1.3 The AO PSF
The 2D (x,y) AO-PSF formed at the focal plane of the scientific instrument is
a function of wavelength ($\lambda$), time (t) and field position (r). To
first order, it can be described by the convolution of three contributors: the
telescope, the AO and the science instrument:
$\mathrm{PSF}(x,y,\lambda,t,r)=\mathrm{PSF}_{Telescope}*\mathrm{PSF}_{Atmosphere/AO}*\mathrm{PSF}_{Instrument}$
(1)
Figure 1: From left to right: the telescope PSF for an aperture including
spiders and segments; a seeing limited PSF, a perfect AO-corrected PSF, a
standard AO-PSF, the final PSF including pixel sampling and instrument
aberrations.
Telescope PSF: The first term includes all the telescope specificities, the
first one being the diffraction pattern imposed by the telescope aperture. For
circular apertures, this would be the well-known Airy pattern, with a FWHM
equal to $\lambda$/D, where D is the telescope diameter. This first term also
includes the effect of central obstruction, spiders and all the telescope
aberrations that will not (or partially) be corrected by the AO system, as for
instance vibrations, windshake, field aberrations or phasing errors in case of
segmented mirrors. Those aberrations are field, time and wavelength dependent
and may affect all the PSF focal positions (i.e. all x,y – see Figure 1, left
inset). As a first approach, these effects are encoded in TIPTOP via a phase
mask. This phase mask includes the telescope pupil and a static shape to
encode the telescope phasing issues. Vibrations and windshake can be added as
a convolutional kernel. Some standard aberrations are proposed by default
within TIPTOP, and users can also load their own if needed.
Atmosphere/AO PSF: The second term depends on the atmospheric and AO system
characteristics. To first order, the AO system can be seen as a transfer
function filtering the atmospheric perturbations. If there were no atmosphere,
this PSF contributor would become a Dirac, and the resulting PSF would be
independent of the AO system characteristics. This is the case for space
missions, where the final PSF only depends on the telescope and instrument
aberrations. At the other end of the range of the limiting cases, if the AO
system is turned-off, this PSF becomes seeing-limited with a FWHM equal to
$\lambda$/r0, where r0 (Fried parameter) encodes the atmospheric turbulence
strength. Typical values of r0 are on the order of tens of centimeter,
therefore, this atmospheric PSF is fully dominating the final PSF shape when
compared to the telescope PSF (Figure 1 – 2nd inset). The seeing-limited PSF
is strongly time dependent, with variations faster than seconds, and with FWHM
variations spanning 0.3 to 2arcsecs for typical astronomical sites.
The AO system partially compensates for the aberrations induced by the
atmosphere and the telescope. It is first important to understand that,
because of the limited number of actuators on the AO deformable mirror, only a
limited number of spatial frequencies can be corrected by the AO system. For
instance, if the AO system were perfectly correcting all the aberrations
within the range of its deformable mirror, the final PSF would be the
combination of the Airy function near the optical axis and remains of the
extended seeing-limited wings for focal positions above the correction range
(Figure 1 \- middle inset). In reality, the AO system is not perfect and
suffers from measurement noise, temporal or aliasing errors among others.
Those error terms impact the PSF shape within the correction range and
strongly depend on wavelength, field position and time (Figure 1, 4th inset).
This is where TIPTOP will implement different filters for different AO
systems. Some standard configurations corresponding to the different ELT
instruments (see Section 1.4) are proposed, and the users will also have the
possibility to play with the AO system parameters if they want to.
Instrument PSF: This last term includes all the instrument characteristics,
the first one being the sampling of the PSF by the detector pixels (Figure 1 –
right inset). But the scientific instruments may also carry their own
aberrations, called NCPA (for Non-Common Path Aberrations). As for the
telescope aberrations, part of those NCPA can be compensated by the AO system,
and if these aberrations are static, they can be calibrated during day-time. A
particular case applies to Integral Field Spectrographs, which can produce
differential aberrations over the wavelength range, and for which the NCPA
compensation can only be performed for a specific wavelength. Within TIPTOP,
the users have the possibility to change the PSF sampling, or add static NCPAs
maps if required.
### 1.4 Adaptive Optics for the ELT
Different spatial angular performance, hence different archetypes of AO-PSFs,
are reached with different implementation of the AO modules. In the case of
the ELT, all the instruments will implement AO. Indeed, one specificity of the
ELT is to include a deformable mirror in its optical train: the fourth mirror
(a.k.a. M4[4]). This mirror has almost 6000 actuators that can be controlled
at high temporal speed (up to 1000Hz), and all the instruments will make use
of it. Depending on the science cases addressed by each ELT instrument, each
is implementing a different AO flavor.
Figure 2: Illustration of the different AO flavors to be implemented for the
ELT. Top-Left: SCAO will be implemented by HARMONI, MOSAIC, MICADO and HIRES.
Top-Right: LTAO will be implemented by HARMONI. Bottom-Left: MCAO will be
implemented by MAORY to feed MICADO. Bottom-Right: GLAO and MOAO will be
implemented by MOSAIC.
More specifically, HARMONI, MICADO and HIRES will implement Single Conjugate
Adaptive Optics (SCAO - see figure 2 top left) systems [5, 6, 7]. SCAO
provides the best performance, brings the images to the diffraction-limit of
the 39m telescope, but requires bright and close enough reference stars.
Typically, a NGS with a magnitude brighter than R=14, and within a radius of
$\sim$15arcsec should be used. For SCAO the shape of the PSF will mostly
depend on integrated atmospheric parameters like the seeing, or the overall
wind speed, and of course on the magnitude and off-axis distance of the NGS.
The PSF shape also depends on the nature of the Wave-Front Sensors (WFS) ; all
the SCAO systems planned for the ELT are using Pyramid WFSs. This is included
within TIPTOP, and each instrument will have a specific configuration file to
integrate its own specificity.
In order to tackle the sky-coverage issue of SCAO, the ELT will implement 6
Laser Guide Stars (LGSs) to allow for Laser Tomography AO (LTAO) (see figure 2
top right). An LTAO system provides performance almost similar to a SCAO
system, but over a fraction of the sky that is now almost complete. The sky
coverage is not 100%, because at least one NGS is still required, at minimum
to compensate for image motion. But this NGS may be fainter (typically
H$<$19), and could be picked at a larger distance from the scientific target
(typically 1 arcmin). HARMONI will implement an LTAO system [7]. In this case,
the shape of the PSF depends on the vertical structure of the turbulence,
including seeing and wind speed, and also the position and magnitude of the
NGS. This is all encoded into TIPTOP.
An LTAO system solves the sky-coverage limitation, but the correction provided
is only optimized for a small Field of View (FoV - typically less than 10
arcsec). To increase the corrected FoV, it is necessary to implement post-
focal deformable mirrors, that are used in conjunction with M4. With more
deformable mirrors, and the several LGSs, an MCAO system can deliver
diffraction limited performance over a field that can reach 1 or 2 arcminutes
(see figure 2 bottom left). MAORY is the MCAO module of the ELT [8, 9]. It
will feed MICADO. The PSF shape also depends on the vertical structure of the
atmosphere, and the magnitude / location of the required NGSs.
If one wants to significantly increase the corrected FoV, trade-offs are to be
made on the level of correction provided by the AO system. With a single
deformable mirror, as is M4, but combining WFSs measurements from far off-axis
LGSs or NGSs, the system is called Ground Layer AO (GLAO). The level of
correction provided by a GLAO system will be partial, but uniform over a large
field (see figure 2 bottom right). A GLAO system only compensates for the
atmospheric turbulence in the first hundreds of meters above the telescope,
but those are usually the most energetic ones. As such, a GLAO correction will
not provide diffraction limited images, but typically shrink the seeing PSF
image by a factor 2 to 5. It can be seen as seeing-reducer, shifting the
median seeing of Armazones from $\sim$0.65 arcsec down to $\sim$0.2 or
0.3arcsec. MOSAIC intends to use a GLAO correction for its High-Multiplex Mode
(HMM[10]). In this case, the shape of the PSF mostly depends on the fraction
of the turbulence near the ground[11].
Finally, one way to improve the performance over a very large FoV is to
provide local corrections, with dedicated deformable mirrors. This is called
Multi-Object AO (MOAO). MOAO systems are mostly driven by extra-galactic
science cases, where it is not needed to have a full corrected FoV, but only
focus on specific directions: where the galaxies are. MOSAIC intends to
implement an MOAO correction for its High-Definition Mode (HDM[10]).
The goal of TIPTOP is to provide the estimated AO-PSFs for all these AO
configurations, in a fast enough way so that users can predict the performance
for as many configurations as needed. The exact requirements for TIPTOP are
described in the next section.
## 2 Top Level Requirements for an AO-Prediction tool
Mostly motivated by the needs of the ESO WG, we have defined what the TIPTOP
tool shall (or not) deliver. This is summarized below.
First, we recall that the deliverables of TIPTOP are twofold:
1. Starting from a given catalog of stars, including star positions and
magnitudes, TIPTOP shall rank the possible asterisms by their expected
performance.
2. Based on a given set of system, atmospheric and NGS parameters (set to be
defined), TIPTOP shall provide the expected AO-PSF.
The AO-PSFs are provided over a grid, and at the wavelengths provided by the
users. The PSF spatial and spectral sampling is also a free parameter adjusted
by the user. By default, we should be able to provide any sampling,
wavelength, field position, if we assume the right inputs are provided. By
default, the PSFs produced are long (infinite) exposure PSFs. It will be
possible to also generate short exposure PSFs, but this will come as a second
step.
One important requirement for TIPTOP is to be able to generate PSFs quickly.
Typically, the goal is to have a tool that can generate AO-PSFs on-the-fly,
with an output produced in less than a few seconds. This directly impacts the
choice of the algorithms, and the final implementation, as explained in
Section 6.
In terms of final performance, it is important to note that the main
motivation for this work is to provide AO-PSFs for an Exposure Time Calculator
(ETC), and not to do science analysis. Hence, if the final AO-PSF accuracy is
on the order of a few percent, it should be considered a satisfactory result.
The exact requirement on accuracy is still under construction, and for that
some detailed simulations are carried out to understand the impact of the PSF
shape on the final astrophysical SNR. Of course this depends on each and every
science case, so this task will be an on-going work in the following years.
Typical performance achieved with the current version is described in Section
7.
If the TIPTOP algorithm is fast enough, then it will be possible to provide a
range of PSFs around the observational inputs, spanning the multi-dimensional
parameter space of environmental conditions (r0, Tau0, Theta0, L0, Sodium
content, etc.). This may be used to provide error bars on the estimated PSFs
and/or to assess the feasibility of an Observing Block.
Finally, if the tool can be fast enough, it may be used during the night for
queue planning. Based on the current atmospheric conditions, or based on the
predicted conditions of a weather forecast algorithm, the night-time operator
may be able to predict the associated AO performance, and select the best
instrument accordingly.
## 3 Basic Strategy
There are several ways of computing AO PSFs, but as the main requirement was
to be fast, we focused our strategy around analytical tools, computing the PSF
from a residual phase Power Spectral Density (PSD) in the Fourier domain. For
that, we recycled the work from Neichel et al. 2008[12] and Plantet et al.
2018[13].
The strategy is to decouple the High-Order (HO) part of the PSF, which only
depends on the LGS constellation and atmospheric conditions, from the Low-
Order (LO) part (a.k.a. the jitter), which strongly depends on the chosen NGS
asterism. From a schematic point of view, this strategy is described by Figure
3. Both parts are computed in parallel: the HO produces a PSF including all
the tomography/telescope aspects. The LO produces a map of jitter across the
field, for each of the NGS asterism combinations. The user can then select the
best NGSs for their observation. The final PSF is then produced by convolving
the HO PSF with the jitter kernel.
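As a minimal sketch of this final convolution step (a simple elliptical Gaussian jitter kernel is assumed here; the function is illustrative and not TIPTOP's actual API):

```python
import numpy as np
from scipy.signal import fftconvolve

def apply_jitter(psf_ho, sigma_x, sigma_y, pixel_scale):
    """Convolve a high-order PSF with an elliptical Gaussian jitter kernel.

    sigma_x, sigma_y: jitter in the same angular units as pixel_scale."""
    n = psf_ho.shape[0]
    ax = (np.arange(n) - n // 2) * pixel_scale
    xx, yy = np.meshgrid(ax, ax)
    kernel = np.exp(-0.5 * ((xx / sigma_x) ** 2 + (yy / sigma_y) ** 2))
    kernel /= kernel.sum()
    return fftconvolve(psf_ho, kernel, mode="same")
```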
Figure 3: Schematic description of the TIPTOP strategy. On the one hand,
based on a star catalog, the expected jitter is computed. This output can be
used to test different NGS asterisms and select the best one, based on user
criteria (e.g. most uniform jitter, best peak performance, etc.). In
parallel, the high-order part of the PSF is computed, which is mostly fixed
by the system and atmospheric inputs. The two parts are eventually convolved
to form the final PSFs, which can be estimated over the field and at
different wavelengths.
In fact, the path is slightly more complex, and is described by Figure 4. The
high-order part computes PSDs of the residual phase, for each field direction,
but it also computes it for the specific NGS directions. This is required as
one of the inputs for the Low-Order computation is the residual phase variance
(in fact the PSF shape with its Strehl Ratio SR and FWHM) in the NGSs
directions. This grid of HO PSFs can be computed at several wavelengths, and
any directions.
Figure 4: Block diagram of the different steps computed within TIPTOP. White
boxes are the inputs, and grey boxes are the algorithm bricks. Each part can
be called independently, or the full sequence can be run at once.
## 4 High-Order part of the PSF
The high-order part of the PSF is computed from the PSD of the AO-corrected
phase $\mathrm{PSD}_{Atmosphere/AO}$, following the scheme presented in Neichel et
al. 2008[12]:
$\mathrm{PSF}_{Atmosphere/AO}\propto\mathcal{F}^{-1}(\exp(\mathcal{F}\left[\mathrm{PSD}_{Atmosphere/AO}(\mathbf{k})\right])),$
(2)
where $\mathcal{F}(\mathbf{x})$ is the 2D Fourier transform of $\mathbf{x}$
and $\mathbf{k}$ the spatial frequencies domain, and
$\mathrm{PSD}_{Atmosphere/AO}$ is derived from a combination of PSDs of
specific AO errors that are assumed to be independent:
$\mathrm{PSD}_{Atmosphere/AO}=\mathrm{PSD}_{Fitting}+\mathrm{PSD}_{Noise}+\mathrm{PSD}_{Aliasing}+\mathrm{PSD}_{Spatio-temporal},$ (3)
where each contribution is detailed below:
* •
Fitting: refers to the uncorrected high spatial frequencies above the AO
correction radius, i.e. $1/(2\times pitch_{act})$ in the PSF domain, where
$pitch_{act}$ is the DM actuator pitch in meters. The transition is generally
assumed to be circular to account for filtered modes, though the user can set
up a squared transition if desired. This term depends only on the seeing
value at the reference wavelength (it is poorly sensitive to L0), both of
which are user-defined parameters.
* •
Noise: refers to the WFS noise (detector, shot noise, background) that
creates a signal that propagates through the AO loop and affects the PSF. The
model accounts for the (tomographic) wavefront reconstruction and the AO loop
temporal model. The user can either provide WFS characteristics (pixels per
sub-aperture, read-out noise, total throughput, pixel scale) that feed a
noise variance calculator following the formulas of [14], or directly provide
the noise variance in $\mathrm{rad}^{2}$.
* •
Aliasing: refers to the high-spatial frequencies that are aliased owing to the
WFS spatial sampling and that propagate through the AO loop as well. The
calculation of this term is particularly demanding for tomographic systems,
and in order to speed it up, the code systematically computes the aliasing in
a SCAO scenario and does not account for the propagation through the
tomographic reconstructor and the projector. In practice, the PSF shape is
weakly sensitive to this approximation.
* •
Spatio-temporal: refers to the spatial error (wavefront reconstruction,
tomography, DM projections, anisoplanatism for SCAO systems) that is combined
with the temporal error (loop bandwidth, delays) into a single term. The exact
calculation of this term is complex and given in Ref. 12. The user must define
the positions and altitude (for LGSs) of guide stars, as well as the altitude
conjugations/actuators pitch of the DM and the optimization directions. The
tomographic error is calculated in the context of pseudo-open-loop command
(POLC) and Minimum Mean Square Error (MMSE) reconstruction only. If LGSs are
considered, the atmospheric layers are stretched to account for the cone
effect.
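Putting equations (2) and (3) together, the snippet below is a minimal NumPy sketch of the PSD-to-PSF step via the phase structure function (normalization constants and the telescope OTF are omitted; the function name and FFT conventions are illustrative, not TIPTOP's internals):

```python
import numpy as np

def psf_from_psd(psd, dk):
    """Long-exposure PSF from a residual-phase PSD (sketch of Eq. 2).

    psd: centered 2D PSD in rad^2 per (1/m)^2; dk: frequency step in 1/m."""
    # Phase covariance is the Fourier transform of the PSD
    cov = np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(psd))).real * dk**2
    # Structure function D(rho) = 2 [Cov(0) - Cov(rho)]
    center = tuple(s // 2 for s in cov.shape)
    dphi = 2.0 * (cov[center] - cov)
    # Long-exposure atmospheric OTF, then PSF by inverse Fourier transform
    otf = np.exp(-0.5 * dphi)
    psf = np.abs(np.fft.fftshift(np.fft.ifft2(np.fft.ifftshift(otf))))
    return psf / psf.sum()
```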
## 5 Low Order part of the PSF
The approach to compute the low-order residuals is mostly based on the method
presented in Plantet et al. 2018 [13], which was designed for MAORY. The
residual jitter is considered as the quadratic sum of 3 independent terms:
* •
Windshake/vibrations: The wind on the telescope and/or the instrumentation
itself can produce strong vibrations, that will mostly be seen as a jitter.
The correction of these vibrations needs to be fast, as they might have
significant power at high frequencies. On the other hand, since everything
happens at the level of the telescope, this jitter is isoplanatic. We can thus
consider a SCAO-like case on the brightest NGS to compute this error term. The
residual is derived from the expected temporal PSD of the vibrations (for now
purely theoretical) on which we apply a temporal filter corresponding to an AO
control law, e. g. a double integrator. The temporal filter parameters (gain,
loop frequency…) are optimized with respect to the SNR expected on the
brightest NGS.
* •
Tomographic error: This error is due to the difference between the turbulence
volume on the line of sight of the science camera and the ones on the lines of
sight of the NGSs. It only depends on the NGS asterism geometry. The
contribution of the tomographic error is computed from the formulas given in
Plantet et al. 2018 [13]. The formulas can easily be adapted to any direction
in the scientific FoV and/or to the case of a single NGS, for which the
residual jitter becomes the classical anisoplanatism error.
* •
Noise error: The noise error corresponds to the propagation of the NGS
sensors’ noise (photon noise, detector noise…) through the LO loop. The jitter
error is analytically computed for each sensor in the same way as it is done
for the windshake in Plantet et al. 2018 [13] (see section 2.1 and appendix
B): we compute the slope error from a simple Gaussian model of the PSF, and
then propagate the noise through a simple integrator loop. If there is only
one NGS, then the noise error is directly the result of that computation. If
several NGSs are used, then we also need to propagate the error through the
tomographic reconstructor. This latter propagation is detailed in appendix C
of Plantet et al. 2018 [13], together with the tomographic error.
## 6 Python implementation
We developed the simulation software in Python, making an effort in its design
and implementation to provide the option to run its computationally intensive
parts either on CPU or GPU (Nvidia CUDA enabled). We followed the approach
described in Rossi 2020 [15]: mathematical formulas were at first specified as
SymPy expressions, allowing easy verification and preliminary checks on their
correctness. Then, such symbolic expressions were translated automatically to
the Array Programming backend of choice (NumPy for CPU or CuPy for GPU in our
case) by the SEEING (Sympy Expressions Evaluation Implemented oN the GPU)
library, to be finally used in the backend agnostic numerical code. The final
simulations were produced on a machine equipped with an NVIDIA TITAN X
graphics card, providing a speedup between one and two orders of magnitude
compared to execution on CPU only. All simulation parameters are configurable
and stored in a .ini file. The software is available on GitHub at:
https://github.com/FabioRossiArcetri/TIPTOP.
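The snippet below illustrates the general idea of this SymPy-to-backend workflow with a toy expression; it uses plain `sympy.lambdify` rather than the SEEING library itself, so the details are only indicative.

```python
import sympy as sp
import numpy as np

# Symbolic definition (here a toy Gaussian), which can be inspected and
# checked symbolically before compilation.
x, y, s = sp.symbols('x y sigma', positive=True)
expr = sp.exp(-(x**2 + y**2) / (2 * s**2))

# Compile for NumPy (CPU); passing modules='cupy' instead would target
# the GPU with the same symbolic source.
gauss = sp.lambdify((x, y, s), expr, modules='numpy')

xx, yy = np.meshgrid(np.linspace(-1, 1, 64), np.linspace(-1, 1, 64))
print(gauss(xx, yy, 0.3).shape)  # (64, 64)
```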
## 7 TIPTOP Performance
We have started to test the accuracy of the outputs produced by TIPTOP. For
that, we compared the PSFs produced by TIPTOP with PSFs obtained from End-to-
End (E2E) codes, as for instance PASSATA[16] or OOMAO[17]. A first result is
shown in Figure 5, in this case for a MAORY-like configuration. In this
configuration, 3 NGSs of magnitude H=18, 19 and 21, located around the
scientific field are considered. The associated jitter map then shows the
typical expected elongation. Once convolved with the grid of HO-PSF, the final
result is the grid of PSF shown on the right of Figure 5. A zoom-in on the
output, and a comparison with the PASSATA PSF (used as a reference “true” PSF)
is also shown. In this example it can be seen that results are quite close,
and that the overall PSF morphology is well modelized by TIPTOP. The main
difference between the E2E and the analytical PSF mostly comes from the
limited exposure time simulated with PASSATA, and the associated speckles.
Figure 5: Illustration of an output produced by TIPTOP, as a grid of PSFs
over a 1 arcmin FoV. In this specific case, we simulate a MAORY
configuration, with 3 NGSs of magnitude H=18, 19 and 21 located around the
scientific field. The lower inset shows the comparison between a PSF obtained
with the PASSATA E2E code, considered as the “true” PSF here, and the TIPTOP
output. It can be seen that the main features of the PSFs are properly
reproduced, and the main differences come from the finite exposure time of
the E2E code. The big advantage of TIPTOP is that it only took a few seconds
to compute all the PSFs, whereas an E2E tool would have taken several hours.
We have performed more comparisons between E2E and TIPTOP PSFs, shown in
Figures 6 and 7. In Figure 6 we considered a MAVIS configuration, to speed up
the E2E simulations. MAVIS is a 3rd generation MCAO instrument for the VLT,
working in the visible. It will use 8 LGSs, 3 DMs and 3 NGSs. The results of
Figure 6 show the relationship between two PSF metrics - the FWHM and the SR
- when computed for the E2E PSFs and for TIPTOP. This is done at different
wavelengths. The agreement is quite good, with errors below a few percent.
The bottom row of Figure 6 shows some example radial cuts of the PSFs, along
with the residuals; again, the agreement between the E2E results and the fast
analytical TIPTOP is within a few percent. These first tests are very
encouraging and validate our approach.
Figure 6: Comparison between PASSATA E2E and TIPTOP analytical outputs, for a
MAVIS configuration. The top row shows the comparison for the FWHM (left) and
SR (right), and the bottom row shows radial cuts of the PSFs over the field,
respectively at the center of the science field (0,0), 15 arcsec off-axis
along the X-direction (15,0), and 15 arcsec off-axis along both axes (15.0,
15.0). The overall agreement between the two tools is better than a few
percent, and fulfills the performance requirement.
Finally, a last example of the good behavior of the TIPTOP output is shown in
Figure 7, for HARMONI SCAO and LTAO configurations. In this case, we compared
the phase variance (strongly related to the SR) produced by the OOMAO E2E
tool and by TIPTOP, as a function of the WFS sub-aperture size. Again, the
agreement with this other E2E tool is quite impressive, and validates the
analytical approach.
Figure 7: Comparison between OOMAO E2E and TIPTOP outputs for an HARMONI SCAO
and LTAO configuration. The residual variance vs. WFS sub-aperture size is
used as the metric in this case. The agreement between the 2 codes is again
very good.
Finally, it is important to note that the TIPTOP tool can easily be tweaked
to adjust the output performance, and once the first AO systems are on-sky,
it will be possible to calibrate the software to obtain the best possible
match with real observations.
## 8 Conclusions
TIPTOP is a new tool able to quickly yet accurately simulate any AO-PSF. Its
main purpose is to be integrated into a pipeline for observation preparation
for the ELT. As such, TIPTOP produces any kind of AO-PSF (SCAO, LTAO, MCAO,
GLAO, MOAO) in only a few seconds. These PSFs can be computed at any
sampling, field position and wavelength. It is thus a very flexible tool,
which may also be useful for other applications. The overall implementation
has been done in Python, making use of GPU computing power, and the software
is available on GitHub at: https://github.com/FabioRossiArcetri/TIPTOP.
###### Acknowledgements.
This work has been prepared as part of the activities of the French National
Research Agency (ANR) through the ANR APPLY (grant ANR-19-CE31-0011). Authors
also acknowledge the support of OPTICON H2020 (2017-2020) Work Package 1
(Calibration and test tools for AO assisted E-ELT instruments). OPTICON is
supported by the Horizon 2020 Framework Programme of the European Commission
(Grant number 730890). The authors also acknowledge the support of the Action
Spécifique Haute Résolution Angulaire (ASHRA) of CNRS/INSU co-funded by CNES,
and the support of LABEX FOCUS ANR-11-LABX-0013.
# Meissner-London state in anisotropic superconductors of cuboidal shape
Ruslan Prozorov<EMAIL_ADDRESS>Ames Laboratory, Ames, Iowa 50011, USA
Department of Physics & Astronomy, Iowa State University, Ames, Iowa 50011,
USA
(22 February 2021)
###### Abstract
A simple procedure to extract anisotropic London penetration depth components
from the magnetic susceptibility measurements in realistic samples of cuboidal
shape is described.
## I Anisotropic magnetic susceptibility
The total magnetic moment of an arbitrarily shaped magnetic sample is given by [1]:
$\bm{m}=\int\left(\mu_{0}^{-1}\bm{B}\left(\bm{r}\right)-\bm{H}_{0}\right)dV$
(1)
where $H_{0}$ is the applied magnetic field and the integration is carried
over volume that completely encloses the sample. This equation is trivial in
the infinite geometries (without demagnetization effects), but for arbitrary
samples requires a rigorous derivation from the general equation for the
magnetic moment via the integration of shielding currents,
$\bm{m}=\int{\left[\bm{r}\times\bm{j}(\bm{r})\right]dV}/2$ [1]. In fact,
Eq.(1) includes the demagnetization effects and can be used to define the
effective demagnetization factors [1]. Initial magnetic flux penetration into
a superconductor is linear in $\lambda/R$ if this ratio is much smaller than
one. When penetrating flux sweeps about 1/3 of the sample dimension in the
direction of penetration, highly non-linear (hyperbolic) functions take over
so that diverging $\lambda/R$ results in zero magnetic susceptibility. For
example, for an infinite slab of width $2w$,
$\chi=(\lambda/w)\tanh(w/\lambda)-1$. However, for small $\lambda/R$, the
penetration is linear and can be quite generally written as [2]:
$\chi\left(1-N\right)=D\frac{\lambda}{R}-1$ (2)
where $D$ is the dimensionality of the flux penetration and $N$ is the
demagnetizing factor in the direction of the applied magnetic field. In
particular, $D=1$ for an infinite slab in a parallel magnetic field, $D=2$ for
an infinite cylinder in a parallel field, and $D=3$ for a sphere.
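As a quick numerical sanity check (our own illustration), the exact slab result above reduces to Eq. (2) with $D=1$ and $N=0$ when $\lambda/w\ll 1$:

```python
import numpy as np

lam_over_w = np.array([0.01, 0.03, 0.1])                  # lambda/w, assumed small
chi_exact = lam_over_w * np.tanh(1.0 / lam_over_w) - 1.0  # infinite slab of width 2w
chi_linear = lam_over_w - 1.0                             # Eq. (2): D = 1, N = 0
print(np.abs(chi_exact - chi_linear).max())               # negligible for small lambda/w
```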
Moreover, the linear approximation means that we can simply calculate the
magnetic susceptibility as the ratio of the shielded volume to the total
volume. Owing to
the exponential attenuation of the magnetic field from the surface, one can
roughly assume a complete field penetration into the layer of depth $\lambda$
and no magnetic field at all beyond that. Then, the volume penetrated by the
magnetic flux is determined by the corresponding components of the London
penetration depth as shown in Fig.1.
Figure 1: Definitions of symbols and directions in a cuboidal sample with
different components of the London penetration depths.
Magnetic susceptibilities can be measured in three principal directions, so
that, for example, $\chi_{a}$ is magnetic susceptibility measured with a
magnetic field applied along the $a-$axis. These definitions of the magnetic
susceptibility include the demagnetization correction. This means that for any
sample, the components of the magnetic susceptibility are normalized so that,
for example, $\chi_{a}=-1$ when $\lambda_{c}=\lambda_{b}=0$. For example, if
the total magnetic moment, $m(\lambda(T))$, is measured, then the normalized
magnetic susceptibility is given by
$\chi(T)=m(\lambda(T))/|m(\lambda(T)=0)|$ where the
denominator is the theoretical magnetic moment of a perfect diamagnetic sample
of the same shape, $m(0)=-VH_{0}/(1-N)$. Therefore, for any measurement, we
need to know the value of the measured quantity for this ideal case of perfect
screening.
For solid bulk superconductors showing nearly perfect diamagnetism at low
temperatures, $\lambda_{i}(T_{min})/R_{i}\ll 1$, one can simply normalize the
measured magnetization by the total variation of magnetization from zero
screening to (almost) perfect screening. The details of the procedure depend
on the measurement technique.
In the case of DC magnetometry, one can simply take the value of the magnetic
moment at the lowest temperature and use it for normalization. However, the
sensitivity and dynamic range are the main issues here, because it should be
possible to detect the change of the magnetic moment when the magnetic field
penetrates to a depth of $\lambda$, which is about three orders of magnitude
less than the total signal. To resolve the temperature variation of
$\lambda(T)$ one needs to resolve fractions of that signal. Moreover, in order
to estimate the London penetration depths, the sample must be in the London-
Meissner state, in a magnetic field much smaller than $H_{c1}$. Therefore,
while theoretically possible, direct measurements of $\lambda(T)$ from DC
magnetization are not practical, at least with commercial magnetometers.
Figure 2: Measured signal, resonator frequency shift in this case, is used to
calculate normalized magnetic susceptibility. See details in the text.
In the case of frequency-domain AC susceptibility measurements, such as the
tunnel-diode resonator used in this work, microwave cavity perturbation, or
even amplitude-domain AC susceptibility, sufficient sensitivity and dynamic
range can be achieved (although this requires significant effort) for
precision measurements of $\lambda(T)$. However, one can no longer take the
signal difference from above $T_{c}$ to the lowest temperature, because of the
screening of the applied AC fields by the induced eddy currents. This
situation is shown in Fig.2, where panel (a) shows a typical raw measurement
of the frequency shift variation upon sweeping temperature. The calibrated
susceptibility, needed for our analysis, is shown in panel (b). Basically, the
London penetration depth dictates the depth of magnetic field penetration
below $T_{c}$, whereas the normal-metal skin depth, $\delta$, takes over above
$T_{c}$. In order to determine the total screening, one can estimate the
normal-state screening from independent resistivity measurements through
$T_{c}$ and the known frequency. This is not the most precise approach.
Alternatively, the measurement device must allow for the extraction of the
sample from the sensing coil in situ. This requires substantial modification
of the experimental setup, but once built it gives the ability to calibrate
every measured sample. This technique was used in the present work. Details of
the calibration of the tunnel-diode resonator are given elsewhere [3].
## II London penetration depth
Assuming that we now have all three measurements of the normalized magnetic
susceptibility in small magnetic fields, represented by a vector,
$\bm{X}=\left(1+\chi_{a},1+\chi_{b},1+\chi_{c}\right)$ (3)
it is straightforward to show that, counting the penetrated volume in each
direction with the designations of Fig.1 and keeping only terms linear in
$\lambda$, we readily obtain:
$\bm{X}=\bm{L}\cdot\bm{\Lambda}$ (4)
where vector $\bm{\Lambda}=\left(\lambda_{a},\lambda_{b},\lambda_{c}\right)$
and the coupling London matrix is:
$\bm{L}=\left(\begin{array}[]{ccc}0&\frac{1}{c}&\frac{1}{b}\\\
\frac{1}{c}&0&\frac{1}{a}\\\ \frac{1}{b}&\frac{1}{a}&0\end{array}\right)$ (5)
The solution follows,
$\bm{\Lambda}=\bm{L}^{-1}\cdot\bm{X}$ (6)
where the inverse of the London matrix is given by:
$\bm{L}^{-1}=\frac{1}{2}\left(\begin{array}[]{ccc}-\frac{bc}{a}&c&b\\\
c&-\frac{ac}{b}&a\\\ b&a&-\frac{ab}{c}\end{array}\right)$ (7)
The resulting solution is:
$\left\\{\begin{array}[]{l}2\lambda_{a}=-\frac{bc}{a}\left(1+\chi_{a}\right)+c\left(1+\chi_{b}\right)+b\left(1+\chi_{c}\right)\\\
2\lambda_{b}=c\left(1+\chi_{a}\right)-\frac{ac}{b}\left(1+\chi_{b}\right)+a\left(1+\chi_{c}\right)\\\
2\lambda_{c}=b\left(1+\chi_{a}\right)+a\left(1+\chi_{b}\right)-\frac{ab}{c}\left(1+\chi_{c}\right)\end{array}\right.$
(8)
This allows evaluating three principal components of the London penetration
depths from three independent measurements of the magnetic susceptibility.
Admittedly, it is hard to find perfect samples with ideal geometry; therefore,
errors in the amplitudes are inevitable, and this is only a rough
approximation. However, as we demonstrate in this work, this procedure allows
finding non-trivial temperature dependencies of the penetration depth,
especially when they are supported by independent measurements of, for
example, the anisotropic normal-state resistivity and upper critical fields
that are tied together by thermodynamic Rutgers relations.
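As a minimal numerical sketch of this inversion (an illustration of Eqs. (6)-(8), not a published code), one may write:

```python
import numpy as np

def london_depths(a, b, c, chi_a, chi_b, chi_c):
    """Return (lambda_a, lambda_b, lambda_c) from Eq. (8).

    a, b, c are the sample dimensions of Fig. 1 and chi_i are the normalized,
    demagnetization-corrected susceptibilities, so that 1 + chi_i measures
    the flux-penetrated volume fraction for the field along the i-axis.
    """
    X = np.array([1.0 + chi_a, 1.0 + chi_b, 1.0 + chi_c])
    L_inv = 0.5 * np.array([[-b * c / a, c, b],
                            [c, -a * c / b, a],
                            [b, a, -a * b / c]])  # inverse of Eq. (5), cf. Eq. (7)
    return L_inv @ X
```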
## III Simplification for tetragonal crystals
In the typical case of a tetragonal (or nearly tetragonal) system, often used
to describe the majority of iron pnictides and high-$T_{c}$ cuprates, there
are two principal values of the London penetration depth: in-plane,
$\lambda_{a}=\lambda_{b}=\lambda_{ab}$, and out-of-plane, $\lambda_{c}.$ Note,
however, that the sample still has three different dimensions, $a,b,c$. This
means that all three components of the magnetic susceptibility will be
different. In this case the above general solutions simplify to:
$\left\\{\begin{array}[]{l}\lambda_{ab}=\frac{ab}{a+b}\left(1+\chi_{c}\right)\\\
\lambda_{c}=b\left(1+\chi_{a}\right)-\frac{ab^{2}}{c\left(a+b\right)}\left(1+\chi_{c}\right)\\\
\phantom{\lambda_{c}}=a\left(1+\chi_{b}\right)-\frac{ba^{2}}{c\left(a+b\right)}\left(1+\chi_{c}\right)\\\
\phantom{\lambda_{c}}=\frac{ab}{b-a}\left(\chi_{b}-\chi_{a}\right)\end{array}\right.$ (9)
These are very useful formulas, as they show that in order to evaluate the
in-plane London penetration depth one needs to measure only $\chi_{c}$, which
is what most experiments do. To obtain the $c-$axis penetration depth one
needs to measure the perpendicular components $\chi_{a}$ and/or $\chi_{b}$.
Having both will improve the accuracy of the estimate. An important quantity
in determining the microscopic mechanisms behind unconventional
superconductors is the temperature dependence of the penetration depth
anisotropy, and the described procedure enables just that. The main text shows
the step-by-step application of the described procedure in the case of the
SrPt3P superconductor.
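A corresponding sketch of Eq. (9) (again our own illustration) can average the two equivalent expressions for $\lambda_{c}$, exploiting both perpendicular measurements as suggested above:

```python
def tetragonal_depths(a, b, c, chi_a, chi_b, chi_c):
    """In-plane and c-axis London penetration depths from Eq. (9)."""
    lam_ab = a * b / (a + b) * (1.0 + chi_c)
    lam_c1 = b * (1.0 + chi_a) - a * b**2 / (c * (a + b)) * (1.0 + chi_c)
    lam_c2 = a * (1.0 + chi_b) - b * a**2 / (c * (a + b)) * (1.0 + chi_c)
    return lam_ab, 0.5 * (lam_c1 + lam_c2)  # averaging improves the estimate
```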
###### Acknowledgements.
I thank Takasada Shibauchi and Kota Ishihara for very useful discussions of
the presented model. This work was supported by the U.S. Department of Energy
(DOE), Office of Science, Basic Energy Sciences, Materials Science and
Engineering Division. Ames Laboratory is operated for the U.S. DOE by Iowa
State University under contract DE-AC02-07CH11358.
## References
* Prozorov and Kogan [2018] R. Prozorov and V. G. Kogan, Phys. Rev. Applied 10, 014030 (2018).
* Prozorov _et al._ [2000a] R. Prozorov, R. W. Giannetta, A. Carrington, and F. M. Araujo-Moreira, Phys. Rev. B 62, 115 (2000a).
* Prozorov _et al._ [2000b] R. Prozorov, R. W. Giannetta, A. Carrington, P. Fournier, R. L. Greene, P. Guptasarma, D. G. Hinks, and A. R. Banks, Appl. Phys. Lett. 77, 4202 (2000b).
# Electro-Optic Lithium Niobate Metasurfaces
Bofeng Gao The Key Laboratory of Weak-Light Nonlinear Photonics, Ministry of
Education, School of Physics and TEDA Applied Physics Institute, Nankai
University, Tianjin 300071, P.R. China Mengxin Ren
<EMAIL_ADDRESS>The Key Laboratory of Weak-Light Nonlinear
Photonics, Ministry of Education, School of Physics and TEDA Applied Physics
Institute, Nankai University, Tianjin 300071, P.R. China Collaborative
Innovation Center of Extreme Optics, Shanxi University, Taiyuan, Shanxi
030006, P.R. China Wei Wu The Key Laboratory of Weak-Light Nonlinear
Photonics, Ministry of Education, School of Physics and TEDA Applied Physics
Institute, Nankai University, Tianjin 300071, P.R. China Wei Cai The Key
Laboratory of Weak-Light Nonlinear Photonics, Ministry of Education, School of
Physics and TEDA Applied Physics Institute, Nankai University, Tianjin 300071,
P.R. China Jingjun Xu<EMAIL_ADDRESS>The Key Laboratory of Weak-Light
Nonlinear Photonics, Ministry of Education, School of Physics and TEDA Applied
Physics Institute, Nankai University, Tianjin 300071, P.R. China
###### Abstract
Many applications of metasurfaces require an ability to dynamically change
their properties in time domain. Electrical tuning techniques are of
particular interest, since they pave a way to on-chip integration of
metasurfaces with optoelectronic devices. In this work, we propose and
experimentally demonstrate an electro-optic lithium niobate (EO-LN)
metasurface that shows dynamic modulation of the phase retardation of
transmitted light. Quasi-bound states in the continuum (QBIC) are observed in
our metasurface. By applying external electric voltages, the refractive index
of the LN is changed via the Pockels EO nonlinearity, leading to efficient
phase modulation of the transmitted light around the QBIC wavelength. Our EO-LN
metasurface opens up new routes for potential applications in the fields of
display, pulse shaping, and spatial light modulation.
Formed by artificial subwavelength building blocks known as meta-atoms,
metasurfaces have demonstrated the ability to control optical waves with
unprecedented flexibility and have recently opened up new prospects for
realizing a new generation of flat optical components outperforming their
conventional bulky counterparts.zheludev2012metamaterials Despite these
impressive advances, current metasurfaces are mostly static in nature, with
optical properties set in stone after the fabrication process. Realizing
modulation of the metasurfaces’ properties in time domain can provide new
opportunities to manipulate light and facilitate a transition to dynamic
optical devices.li2017nonlinear ; shaltout2019spatiotemporal ; he2019tunable ;
ren2020tailorable For this purpose, different dynamic tuning mechanisms such
as optical pumping,wang2016optically ; taghinejad2018ultrafast ;
ren2017reconfigurable ; nicholls2017ultrafast ; guan2019pseudospin thermal
heating,ou2011reconfigurable chemical reaction,di2016nanoscale ;
duan2017dynamic and electrical stimulationsamson2010metamaterial ;
karvounis2020electro have been implemented. Among all these tuning
mechanisms, electrical tuning techniques are of particular interest, because
they hold a promise to integrating the metasurfaces with other on-chip
optoelectronic devices. The most common electrical methods reported so far are
based on triggering free carrier modulations,salary2020tunable ;
chen2006active ; dabidian2015electrical molecular
reorientations,decker2013electro and phase transitionsdriscoll2009memory ;
jia2018dynamically in active materials integrated in the meta-atoms. However,
the above approaches rely on relatively slow physical processes, and the
switching time is normally limited to the nanosecond scale.lee2014ultrafast
Lithium niobate (LN) is one of the most appealing materials to overcome this
challenge. LN shows an outstanding Pockels electro-optic (EO) effect, and its
refractive index can be changed by an electrical voltage within a femtosecond
timescale.gaborit2013nonperturbative Thus LN enables optical
modulators with much higher switching rates.el2008optical Recently, thin-film
LN on insulator (LNOI)levy1998fabrication ; rabiei2004optical emerges as a
promising platform for ultracompact photonic devices.poberaj2012lithium ;
fang2017monolithic ; qi2020integrated ; kong2020recent Thanks to the large
refractive index contrast between the LN film and the substrate (such as
silica), optical modes are tightly confined within the nanometer-thin LN
layer, leading to improved EO modulation efficiency. A variety of on-chip
EO modulator units with tens to hundreds of GHz modulation speeds have been
demonstrated using different LNOI microstructures, for example Mach-Zehnder
interferometric waveguide,wang2018integrated ; he2019high photonic
crystals,ding2019integration ; li2020lithium micro-ringsguarino2007electro ;
zhang2019broadband or micro-disks.bo2015lithium In recent years, there have
been significant advances in fabricating LN metasurfacesgao2019lithium ;
fang2020second , which led to the demonstration of intriguing tunable second
harmonic properties.ma2020resonantly ; fedotova2020second However, the EO
modulation by the LN metasurface, to the best of our knowledge, has not been
experimentally explored.
A large EO modulation strength essentially implies that the metasurface
properties (such as the phase retardation) are sensitively tunable by the
EO-induced refractive index changes. For this purpose, an efficient way is to
utilize high-quality-factor (high-Q) resonant modes with a narrow spectral
linewidth, which significantly elongate the effective optical path and the
photon lifetime in the meta-atoms, yielding enhanced local fields that
experience the EO refractive index changes. An attractive approach for
extremely high
Such concept was first proposed in quantum systems where the electron wave
function exhibits localization within the continuous spectrum of propagating
waves.von1929remarkable ; friedrich1985interfering Recently, BICs have also
attracted considerable attentions in photonics.marinica2008bound ;
hsu2016bound Mathematically, the BIC states show infinite Q-factors where the
optical energy is trapped without leakage.koshelev2018asymmetric ;
xiang2020tailoring The ideal BICs have a vanishing spectral linewidth and are
not observable in the electromagnetic spectra. However, in practice,
introducing a structural asymmetry or oblique excitation could break the ideal
BIC conditions.liu2019high ; huang2020highly Consequently the perfect BIC
modes will convert to quasi-BIC (QBIC) states that manifest themselves as
extremely narrow spectral resonances with large Q-factors.azzam2018formation ;
han2019all Such QBICs have been observed in extended photonic systems and
hold great promise for various applications including vortex beam
generation,wang2020generating nonlinear enhancement,carletti2018giant ;
krasikov2018nonlinear ; anthur2020continuous low threshold
lasingkodigala2017lasing ; huang2020ultrafast and sensitive
sensing.romano2018label ; leitis2019angle
Figure 1: Design of EO-LN metasurface. (a) Schematic illustration of the LN
metasurface. The properties of the meta-atoms are actively modulated by
applying an external electric voltage V(t). The geometrical parameters of the
metasurface are lattice constant $D$, ridge width $d$, and thickness $h$. (b)
Scanning electron microscope images of the fabricated sample. Scale bars are 5
$\mu$m (up panel) and 500 nm (down panel), respectively.
In this paper, we numerically and experimentally demonstrate a LN metasurface
offering EO phase modulation to transmitted light in the visible frequency
regime. To yield an obvious phase modulation, we utilize a nanograting array
under oblique incidence in which the EO effect induced by applied bias voltage
is significantly enhanced by leveraging the QBIC states, resulting in a 1.46
times larger EO modulation strength compared with the unstructured LN film. To
the best of our knowledge, this is the first experimental demonstration of the
EO modulation by LN metasurfaces. Our results establish a novel dynamic EO
platform for wavefront engineering, pulse shaping, polarization control, etc.
A schematic of the designed EO-LN metasurface is shown in Fig. 1. The
metasurface is composed of an array of LN nanogratings residing on a fused
quartz substrate. The LN ridge width is denoted by $d$ and the grating period
by $D$. The height $h$ is 200 nm, determined by the thickness of the LN film
(LNOI by NANOLN corporation) used for the metasurface fabrication. In our
design, the orientation of the nanogratings is parallel to the $e$-axis of the
LN crystal, and the external electric field is applied also along the
$e$-axis to take advantage of the largest EO-coefficient element
$\gamma_{33}$. We fabricated the metasurfaces by the focused ion beam
technique (FIB, Ga+, 30kV, 24pA) following our previous
procedure.gao2019lithium The footprint of the metasurface array is
10$\times$10 $\mu$m2. The Au electrodes with a 10 $\mu$m gap were fabricated
via a standard UV-lithography procedure. Figure 1 (b) gives the scanning
electron microscope (SEM) images of the total footprint and a zoomed-in view
of the fabricated LN-EO metasurface with $D$=390 nm and $d$=290 nm.
Figure 2: BIC states in LN metasurface. (a) Incident-angle resolved
transmission spectra of the LN metasurface. (b) Comparison of transmission $T$
between 0∘ (red line) and 4∘ (blue and black lines) incidence shows a collapse
of BIC to sharp Fano shaped QBIC resonance. The incident wave is polarized
along $x$-axis and located within $yoz$-plane. The LN is assumed to be
lossless for red and blue lines, while $n_{i}$ is set as 0.002 to consider the
loss caused by FIB fabrication (black line). (c) Spectra for transmitted phase
$\phi$. (d) Electric field distributions of eigenmodes for 0∘ and 4∘
incidence. The electric fields are tightly confined within the LN layer at 0∘
and show clear leakage into the substrate for 4∘ incidence.
Figure 2(a) shows a full map of the transmission spectrum (T) of the
metasurfaces with $D$ = 390 nm as a function of the incident angle ($\theta$)
under $x$-polarized incident light. The spectra were calculated using a finite
element method (COMSOL Multiphysics). Ellipsometrically measured refractive
indices of the birefringent LN and the fused quartz were used in the
simulations. Such grating structures are expected to support the symmetry-
protected BIC modes at normal incidence.hsu2016bound And as shown in Fig.
2(a), resonant modes indicated by white dashed rectangles clearly emerge for
oblique incidence (nonzero $\theta$). In order to clarify the behavior of the
BIC modes, we plot the transmission T and phase retardation ($\phi$) spectra
in Fig. 2(b) and (c), respectively. It can be observed that ultra-narrow
asymmetric Fano-shaped transmission dips and abrupt phase slips occur around
the wavelength of 667 nm for incident angle of $\theta$=4∘ (blue lines). These
resonances vanish from the spectra at $\theta$=0∘ (red curves). Such a
characteristic is a clear manifestation of the occurrence of the BIC resonant
modes. The left panel of Fig. 2(d) demonstrates the eigenmode distribution of the
$x$-component of electric field ($E_{x}$) at $\theta$=0∘ in the $yoz$ cross
section of the meta-atom. The mode exhibits an antisymmetric profile along the
horizontal direction with a node formed at the center, corresponding to an odd
mode parity symmetry. The electromagnetic fields are tightly confined in the
LN layer and decoupled from the free-space propagating waves. However, for
$\theta$=4∘ the electric energy clearly leaks out into the SiO2 substrate, and
the magnitude of the electric field in the LN becomes 4 times weaker than in
the ideal BIC mode. Such phenomena further confirm the presence of the true
BIC for normal incidence, which collapses into the QBIC modes for oblique
excitation. Although the LN is ideally lossless within the studied spectral
range, the Ga+ contamination and the lattice damage during the FIB milling
inevitably deteriorate the optical performance of the
metasurface.geiss2014photonic Such influence was taken into account in the
simulation by setting the imaginary part of the LN refractive index $n_{i}$ to
0.002. The calculated results are shown by the black curves. It can be clearly
seen that the ultra-sharp dip in $T$ and the abrupt phase slip in $\phi$ are
preserved, although the resonance strength in both $T$ and $\phi$ is reduced
in the presence of optical loss.
Such a high-Q QBIC resonance leads to an increased lifetime of photons and
strong localization of the fields within the meta-atoms, which would
significantly enhance the light-matter interaction at nanoscale and boost the
spectral tunability resulting from the EO induced refractive index change in
the LN. And the extremely sharp QBIC phase resonance can yield a substantial
phase modulation in transmission through small EO spectral shifts of the
modes. Figure 3 demonstrates the simulated phase spectra of the transmitted
light through the metasurface for different variations in the real part of
refractive index of the LN while maintaining $n_{i}$=0.002. It can be clearly
observed that the phase spectrum shifts by 0.6 nm to shorter wavelengths for
the reduced refractive index, while it redshifts for the increased refractive
index. The operating wavelength is chosen at the QBIC resonance wavelength of
667 nm, denoted by the vertical dashed line in Fig. 3(a). The phase modulation
$\Delta\phi$ at this wavelength is calculated as a function of $\Delta n$, and
the results are plotted in Fig. 3(b). A modulation span $\Delta\phi$ of
$\pm$0.42 rad in the transmitted-light phase is obtained by tuning the QBIC
resonance via a $\Delta n$ modulation from -0.0025 to 0.0025.
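This $\Delta n$ span is consistent with a back-of-envelope Pockels estimate for the drive voltage used in the experiment below; note that the bulk LN values $n_{e}\approx 2.2$ and $\gamma_{33}\approx 31$ pm/V assumed here are standard handbook numbers, not quoted in the text:

```python
n_e, gamma_33 = 2.2, 30.8e-12     # assumed bulk LN values: index, EO coefficient [m/V]
E = 150.0 / 10e-6                 # +/-150 V across the 10 um electrode gap [V/m]
dn = 0.5 * n_e**3 * gamma_33 * E  # Pockels shift: dn = n_e^3 * gamma_33 * E / 2
print(f"{dn:.1e}")                # ~2.5e-3, matching the +/-0.0025 span of Fig. 3(b)
```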
Figure 3: Simulated EO-phase modulation by metasurface. (a) Transmission
phase spectra of the LN metasurface with $D$=390 nm around QBIC wavelength
(indicated by vertical dashed line) for 4∘ oblique incidence corresponding to
different refractive index modulations. (b) Relation between transmitted phase
modulation and refractive index modulation for the LN metasurface at the QBIC
wavelength.
To experimentally evaluate the transmission spectrum of the fabricated LN
metasurface, we built a micro-spectrometer. The output of a supercontinuum laser
(NKT EXR-15) was focused onto the LN metasurfaces by a 10$\times$ objective.
The transmitted light was analyzed using a spectrometer (Horiba MicroHR). The
measured transmission spectrum under $x$-polarized incidence is given in Fig.
4(b). A QBIC resonance dip clearly appears around 633 nm, as indicated by a
vertical black arrow. The experimentally measured QBIC resonance is much
shallower and broader, leading to a smaller Q-factor compared with the
simulation results shown in Fig. 2. Such a discrepancy could be explained by
fabrication imperfections. Furthermore, both normal and oblique incidence
components were included in the experiment, thus the QBIC dip may be partially
averaged out by the normally incident component.
Figure 4: Experimental EO phase modulation by metasurface. (a) A schematic
diagram of a laser heterodyne detection system. An acoustic-optic frequency
shifter (AOFS) is used to divide an input 633 nm laser (with optical frequency
of $f_{o}$) into two parts. The 0th order light is used as probe light to
excite the LN metasurface. And the frequency of the 1st order light is
downshifted by $f_{c}$=110 MHz and used as reference light. The metasurface is
modulated by a sinusoidal electric voltage at $f_{m}$=1.0 MHz. (b)
Experimentally measured transmission spectrum of the metasurface with $D$=390
nm excited by focused light. A clear QBIC resonance dip is observed around 633
nm as indicated by a vertical black arrow. (c) Power spectrum of optical beats
recorded by a RF spectrum analyzer. Three distinct peaks are observed at
$f_{c}-f_{m}$=109 MHz, $f_{c}$=110 MHz and $f_{c}+f_{m}$=111 MHz. (d) Phase
modulation magnitude $m$ measured from the LN metasurface and the unstructured
LN film. Heights of histograms are average value of multiple measurements, and
error bars are their standard deviations.
The EO phase modulation by the metasurfaces was characterized by a home-built
laser heterodyne detection system [shown in Fig 4(a)]. An $x$-polarized 633 nm
continuous wave laser (CNILaser, MGL-III-532) was launched into an acoustic-
optic frequency shifter (AOFS) and was divided into two parts, i.e. the 0th
order transmitted and the 1st order diffracted light beams. The 0th order
light without frequency shift was used as probe light and was focused onto the
LN metasurface by the 10$\times$ objective. The 1st order light with a
frequency downshift of $f_{c}$=110 MHz was used as reference light. An
arbitrary waveform generator (Agilent 33250A) was used to generate a
sinusoidal driving voltage signal at $f_{m}$=1.0 MHz, which was further
amplified to be 300 Vpp (peak-to-peak magnitude, -150V to +150V output
voltage) using a high-voltage amplifier (Falco Corp.), and then was fed into
the electrodes. As a consequence, the phase of the probe light was changed by
the EO response of the LN metasurface. After interfering the probe light with
the reference beam, optical beats were generated which were further recorded
by a photodetector and an RF spectrum analyzer. In our measurements, the
visibility of the optical beats was optimized by equilibrating the powers and
optical paths of the two beams.
Assume that the probe and reference beams at the photodetector are
$E_{p}=e^{i[2\pi f_{o}t+m\sin(2\pi f_{m}t)]}$ and
$E_{r}=e^{i2\pi(f_{o}-f_{c})t}$, respectively, where $f_{o}$ is the optical
frequency of the 633 nm laser and $m$ is the EO phase modulation magnitude.
The optical beats can then be described as $I=|E_{p}+E_{r}|^{2}=2+2\cos[2\pi
f_{c}t+m\sin(2\pi f_{m}t)]$. When $m\ll 1$, such a beat signal can be expanded
in standard Bessel functions as
$I\approx 2+2\\{J_{0}(m)\cos(2\pi
f_{c}t)+J_{1}(m)[\cos(2\pi(f_{c}+f_{m})t)-\cos(2\pi(f_{c}-f_{m})t)]\\}$. The
corresponding Fourier frequency spectrum can be expressed as
$\mathcal{F}(f)=2\delta(f)+J_{0}(m)\delta(f-f_{c})+J_{1}(m)[\delta(f-(f_{c}+f_{m}))-\delta(f-(f_{c}-f_{m}))]$,
in which $\delta(f)$ is the Dirac delta function.zhang2015optical This
equation indicates that the phase modulation signal results in three discrete
frequency components at $f_{c}-f_{m}$, $f_{c}$ and $f_{c}+f_{m}$, respectively.
And as shown in Fig.4(c), indeed three distinct peaks are observed at 109 MHz,
110 MHz and 111 MHz in the experimental power spectrum. The magnitudes of the
frequency components at $f_{c}$ and ${f_{c}}\pm{f_{m}}$ are proportional to
$J_{0}^{2}\left(m\right)$ and $J_{1}^{2}\left(m\right)$, respectively. The
phase modulation magnitude $m$ can then be mathematically demodulated from the
experimental ratio $J_{0}^{2}\left(m\right)/J_{1}^{2}\left(m\right)$. The
deduced $m$ for the metasurface and the unstructured LN film are shown in Fig.
4(d). An average $m$ value of 0.0041 rad is achieved for the metasurface,
which is larger than the 0.0028 rad for the unstructured LN film. This
explicitly shows that the adoption of the metasurface is a valid route to
stronger EO modulation.
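A minimal sketch of this demodulation step (our own illustration, using SciPy's Bessel functions and root finder) inverts the measured peak-power ratio for $m$:

```python
from scipy.optimize import brentq
from scipy.special import j0, j1

def demodulate_m(power_ratio):
    """Solve J0(m)^2 / J1(m)^2 = P(f_c) / P(f_c +/- f_m) for m, with m << 1."""
    return brentq(lambda m: (j0(m) / j1(m)) ** 2 - power_ratio, 1e-6, 1.0)

# Self-consistency check: for m = 0.0041 rad the ratio is ~(2/m)^2 ~ 2.4e5.
print(demodulate_m((j0(0.0041) / j1(0.0041)) ** 2))  # recovers ~0.0041
```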
In conclusion, we have numerically and experimentally demonstrated an
EO-tunable LN metasurface that provides dynamic control over the phase
retardation of transmitted light in the visible spectral regime. The oblique
incidence enables the collapse of symmetry-protected BICs into Fano-resonant
QBIC modes with ultrahigh Q-factors, which significantly increases the
lifetime of photons and the field confinement within the resonators, leading
to improved modulation sensitivity. The proposed EO-LN metasurface is of great
interest for developing multifunctional and tunable optical components, such
as ultracompact spatial light modulators and optical switches, which would
find use in various applications including displaying, optical wavefront
shaping, and so on.
###### Acknowledgements.
This work was supported by National Key R&D Program of China (2017YFA0305100,
2017YFA0303800, 2019YFA0705000); National Natural Science Foundation of China
(92050114, 91750204, 61775106, 11904182, 12074200, 11774185); Guangdong Major
Project of Basic and Applied Basic Research (2020B0301030009); 111 Project
(B07013); PCSIRT (IRT0149); Open Research Program of Key Laboratory of 3D
Micro/Nano Fabrication and Characterization of Zhejiang Province; Fundamental
Research Funds for the Central Universities (010-63201003, 010-63201008, and
010-63201009); Tianjin youth talent support program. We thank Nanofabrication
Platform of Nankai University for fabricating samples.
## References
* (1) N. I. Zheludev, Y. S. Kivshar, _Nat. Mater._ 2012, _11_ , 917–924.
* (2) G. Li, S. Zhang, T. Zentgraf, _Nat. Rev. Mater._ 2017, _2_ , 1–14.
* (3) A. M. Shaltout, V. M. Shalaev, M. L. Brongersma, _Science_ 2019, _364_.
* (4) Q. He, S. Sun, L. Zhou, _Research_ 2019, _2019_ , 1849272.
* (5) M. Ren, W. Cai, J. Xu, _Adv. Mater._ 2020, _32_ , 1806317.
* (6) Q. Wang, E. T. Rogers, B. Gholipour, C.M. Wang, G. Yuan, J. Teng, N. I. Zheludev, _Nature Photon._ 2016, _10_ , 60–65.
* (7) M. Taghinejad, H. Taghinejad, Z. Xu, K.T. Lee, S. P. Rodrigues, J. Yan, A. Adibi, T. Lian, W. Cai, _Nano Lett._ 2018, _18_ , 5544–5551.
* (8) M.X. Ren, W. Wu, W. Cai, B. Pi, X.Z. Zhang, J.-J. Xu, _Light-Sci. Appl._ 2017, _6_ , e16254–e16254.
* (9) L. H. Nicholls, F. J. Rodríguez-Fortuño, M. E. Nasir, R. M. Córdova-Castro, N. Olivier, G. A. Wurtz, A. V. Zayats, _Nature Photonics_ 2017, _11_ , 628–633.
* (10) C. Guan, J. Shi, J. Liu, H. Liu, P. Li, W. Ye, S. Zhang, _Laser Photonics Rev._ 2019, _13_ , 1800242.
* (11) J.Y. Ou, E. Plum, L. Jiang, N. I. Zheludev, _Nano Lett._ 2011, _11_ , 2142–2144.
* (12) G. Di Martino, S. Tappertzhofen, S. Hofmann, J. Baumberg, _Small_ 2016, _12_ , 1334–1341.
* (13) X. Duan, S. Kamin, N. Liu, _Nat. Commun._ 2017, _8_ , 1–9.
* (14) Z. Sámson, K. MacDonald, F. De Angelis, B. Gholipour, K. Knight, C. Huang, E. Di Fabrizio, D. Hewak, N. Zheludev, _Appl. Phys. Lett._ 2010, _96_ , 143105.
* (15) A. Karvounis, V. V. Vogler-Neuling, F. U. Richter, E. Dénervaud, M. Timofeeva, R. Grange, _Adv. Opt. Mater._ 2020, _8_ , 2000623\.
* (16) M. M. Salary, H. Mosallaei, _ACS Photonics_ 2020.
* (17) H.-T. Chen, W. J. Padilla, J. M. Zide, A. C. Gossard, A. J. Taylor, R. D. Averitt, _Nature_ 2006, _444_ , 597–600.
* (18) N. Dabidian, I. Kholmanov, A. B. Khanikaev, K. Tatar, S. Trendafilov, S. H. Mousavi, C. Magnuson, R. S. Ruoff, G. Shvets, _ACS Photonics_ 2015, _2_ , 216–227.
* (19) M. Decker, C. Kremers, A. Minovich, I. Staude, A. E. Miroshnichenko, D. Chigrin, D. N. Neshev, C. Jagadish, Y. S. Kivshar, _Opt. Express_ 2013, _21_ , 8879–8885.
* (20) T. Driscoll, H.-T. Kim, B.-G. Chae, B.-J. Kim, Y.-W. Lee, N. M. Jokerst, S. Palit, D. R. Smith, M. Di Ventra, D. N. Basov, _Science_ 2009, _325_ , 1518–1521.
* (21) Z.-Y. Jia, F.-Z. Shu, Y.-J. Gao, F. Cheng, R.-W. Peng, R.-H. Fan, Y. Liu, M. Wang, _Phys. Rev. Appl._ 2018, _9_ , 034009.
* (22) J. Lee, S. Jung, P.-Y. Chen, F. Lu, F. Demmerle, G. Boehm, M.-C. Amann, A. Alù, M. A. Belkin, _Adv. Opt. Mater._ 2014, _2_ , 1057–1063.
* (23) G. Gaborit, J. Dahdah, F. Lecoche, P. Jarrige, Y. Gaeremynck, E. Duraz, L. Duvillaret, _IEEE T. Plasma Sci._ 2013, _41_ , 2851–2857.
* (24) T. S. El-Bawab, _Optical switching_ , Springer Science & Business Media, 2008.
* (25) M. Levy, R. Osgood Jr, R. Liu, L. Cross, G. Cargill III, A. Kumar, H. Bakhru, _Appl. Phys. Lett._ 1998, _73_ , 2293–2295.
* (26) P. Rabiei, P. Gunter, _Appl. Phys. Lett._ 2004, _85_ , 4603–4605.
* (27) G. Poberaj, H. Hu, W. Sohler, P. Guenter, _Laser Photonics Rev._ 2012, _6_ , 488–503.
* (28) Z. Fang, Y. Xu, M. Wang, L. Qiao, J. Lin, W. Fang, Y. Cheng, _Sci. Rep._ 2017, _7_ , 45610.
* (29) Y. Qi, Y. Li, _Nanophotonics_ 2020, _1_.
* (30) Y. Kong, F. Bo, W. Wang, D. Zheng, H. Liu, G. Zhang, R. Rupp, J. Xu, _Adv. Mater._ 2020, _32_ , 1806452.
* (31) C. Wang, M. Zhang, X. Chen, M. Bertrand, A. Shams-Ansari, S. Chandrasekhar, P. Winzer, M. Lončar, _Nature_ 2018, _562_ , 101–104.
* (32) M. He, M. Xu, _et al._ , _Nature Photon._ 2019, _13_ , 359–364.
* (33) T. Ding, Y. Zheng, X. Chen, _Opt. Lett._ 2019, _44_ , 1524–1527.
* (34) M. Li, J. Ling, Y. He, U. A. Javid, S. Xue, Q. Lin, _Nat. Commun._ 2020, _11_ , 1–8.
* (35) A. Guarino, G. Poberaj, D. Rezzonico, R. Degl’Innocenti, P. Günter, _Nature Photon._ 2007, _1_ , 407–410.
* (36) M. Zhang, B. Buscaino, C. Wang, A. Shams-Ansari, C. Reimer, R. Zhu, J. M. Kahn, M. Lončar, _Nature_ 2019, _568_ , 373–377.
* (37) F. Bo, J. Wang, J. Cui, S. K. Ozdemir, Y. Kong, G. Zhang, J. Xu, L. Yang, _Adv. Mater._ 2015, _27_ , 8075–8081.
* (38) B. Gao, M. Ren, W. Wu, H. Hu, W. Cai, J. Xu, _Laser Photonics Rev._ 2019, _13_ , 1800312.
* (39) B. Fang, H. Li, S. Zhu, T. Li, _Photonics Res._ 2020, _8_ , 1296–1300.
* (40) J. Ma, M. Ren, W. Wu, W. Cai, J. Xu, _arXiv preprint arXiv:2002.06594_ 2020.
* (41) A. Fedotova, M. Younesi, J. Sautter, A. Vaskin, F. J. Löchner, M. Steinert, R. Geiss, T. Pertsch, I. Staude, F. Setzpfandt, _Nano Lett._ 2020, _20_ , 8608–8614.
* (42) J. Von Neuman, E. Wigner, _Phys. Z_ 1929, _30_ , 465.
* (43) H. Friedrich, D. Wintgen, _Phys. Rev. A_ 1985, _32_ , 3231.
* (44) D. Marinica, A. Borisov, S. Shabanov, _Phys. Rev. Lett._ 2008, _100_ , 183902.
* (45) C. W. Hsu, B. Zhen, A. D. Stone, J. D. Joannopoulos, M. Soljačić, _Nat. Rev. Mater._ 2016, _1_ , 1–13.
* (46) K. Koshelev, S. Lepeshov, M. Liu, A. Bogdanov, Y. Kivshar, _Phys. Rev. Lett._ 2018, _121_ , 193903.
* (47) J. Xiang, Y. Xu, J.-D. Chen, S. Lan, _Nanophotonics_ 2020, _9_ , 133–142.
* (48) Z. Liu, Y. Xu, Y. Lin, J. Xiang, T. Feng, Q. Cao, J. Li, S. Lan, J. Liu, _Phys. Rev. Lett._ 2019, _123_ , 253901.
* (49) Z. Huang, M. Wang, Y. Li, J. Shang, K. Li, W. Qiu, J. Dong, H. Guan, Z. Chen, H. Lu, _arXiv preprint arXiv:2006.10908_ 2020.
* (50) S. I. Azzam, V. M. Shalaev, A. Boltasseva, A. V. Kildishev, _Phys. Rev. Lett._ 2018, _121_ , 253901.
* (51) S. Han, L. Cong, Y. K. Srivastava, B. Qiang, M. V. Rybin, A. Kumar, R. Jain, W. X. Lim, V. G. Achanta, S. S. Prabhu, _Adv. Mater._ 2019, _31_ , 1901921.
* (52) B. Wang, W. Liu, M. Zhao, J. Wang, Y. Zhang, A. Chen, F. Guan, X. Liu, L. Shi, J. Zi, _Nature Photon._ 2020, _14_ , 623–628.
* (53) L. Carletti, K. Koshelev, C. De Angelis, Y. Kivshar, _Phys. Rev. Lett._ 2018, _121_ , 033903.
* (54) S. Krasikov, A. Bogdanov, I. Iorsh, _Phys. Rev. B_ 2018, _97_ , 224309.
* (55) A. P. Anthur, H. Zhang, R. Paniagua-Dominguez, D. A. Kalashnikov, S. T. Ha, T. W. Maß, A. I. Kuznetsov, L. Krivitsky, _Nano Lett._ 2020, _20_ , 8745–8751.
* (56) A. Kodigala, T. Lepetit, Q. Gu, B. Bahari, Y. Fainman, B. Kanté, _Nature_ 2017, _541_ , 196–199.
* (57) C. Huang, C. Zhang, S. Xiao, Y. Wang, Y. Fan, Y. Liu, N. Zhang, G. Qu, H. Ji, J. Han, _Science_ 2020, _367_ , 1018–1021.
* (58) S. Romano, G. Zito, S. Torino, G. Calafiore, E. Penzo, G. Coppola, S. Cabrini, I. Rendina, V. Mocella, _Photonics Res._ 2018, _6_ , 726–733.
* (59) A. Leitis, A. Tittl, M. Liu, B. H. Lee, M. B. Gu, Y. S. Kivshar, H. Altug, _Sci. Adv._ 2019, _5_ , eaaw2871.
* (60) R. Geiss, S. Diziain, M. Steinert, F. Schrempel, E.-B. Kley, A. Tünnermann, T. Pertsch, _Phys. Status Solidi A_ 2014, _211_ , 2421–2425.
* (61) W. Zhang, W. Gao, L. Huang, D. Mao, B. Jiang, F. Gao, D. Yang, G. Zhang, J. Xu, J. Zhao, _Opt. Express_ 2015, _23_ , 17576–17583.
# Smoothing effect and Derivative formulas for Ornstein–Uhlenbeck processes
driven by subordinated cylindrical Brownian noises
Alessandro Bondi
###### Abstract
We investigate the concept of cylindrical Wiener process subordinated to a
strictly $\alpha$–stable Lévy process, with $\alpha\in\left(0,1\right)$, in an
infinite dimensional, separable Hilbert space, and consider the related
stochastic convolution. We then introduce the corresponding Ornstein–Uhlenbeck
process, focusing on the regularizing properties of the Markov transition
semigroup defined by it. In particular, we provide an explicit, original
formula –which is not of Bismut–Elworthy–Li’s type– for the Gateaux
derivatives of the functions generated by the operators of the semigroup, as
well as an upper bound for the norm of their gradients. In the case
$\alpha\in\left(\frac{1}{2},1\right)$, this estimate represents the starting
point for studying the Kolmogorov equation in its mild formulation.
###### keywords:
Subordinated cylindrical Wiener process , Isotropic $\alpha$–stable processes,
Markov transition semigroup, Derivative formulas , Gradient estimates.
## 1 Introduction
The aim of the paper is to analyze the Ornstein–Uhlenbeck processes
$Z^{x},\,x\in H$, where $H$ is an infinite dimensional, separable Hilbert space.
They are defined as the $H$–valued, mild solutions of the linear stochastic
differential equations
$dZ^{x}_{t}=AZ_{t}^{x}dt+\sqrt{Q}\,dW_{L_{t}},\quad Z^{x}_{0}=x\in H,$
where $A\colon\mathcal{D}\left(A\right)\subset H\to H$ is a linear,
selfadjoint, negative definite, unbounded operator, and $Q\colon H\to H$ is a
linear, bounded, nonnegative definite operator. By construction, $A$ and $Q$
share a common CONS of eigenvectors for $H$: it is denoted by
$\left(e_{n}\right)_{n}$. The main novelty of our work consists in the
structure of the noise $W_{L}$. Intuitively speaking, it can be thought of as
$W_{L_{t}}=\sum_{n=1}^{\infty}\beta^{n}_{L_{t}}e_{n},\quad t\geq 0,$
where $\left(\beta^{n}\right)_{n}$ is a sequence of independent Brownian
motions and $L=\left(L_{t}\right)_{t}$ is an independent, strictly
$\alpha$–stable subordinator representing the random time change, for
$\alpha\in\left(0,1\right)$. Therefore $W_{L}$ is nothing else than a
subordinated cylindrical Wiener process, even if, of course, the convergence
of the series needs to be formally investigated.
In the literature, the canonical case is the Gaussian one, which involves a
cylindrical Wiener process
$W_{t}=\sum_{n=1}^{\infty}\beta^{n}_{t}e_{n},\,t\geq 0.$ There is a
well–established theory concerning this setting, and we may refer to the book
[5] for an extensive collection of results on the subject. Another important
framework is the one proposed by [11], where the authors deal with a
cylindrical, $\alpha$–stable Lévy process
$Z_{t}=\sum_{n=1}^{\infty}\zeta^{n}_{t}e_{n},\,t\geq 0$: here
$\left(\zeta^{n}\right)_{n}$ are independent, real–valued, symmetric
$\alpha$–stable Lévy processes, for $\alpha\in\left(0,2\right)$. Despite the
interesting generalization offered by this approach, the structure of the
noise could be questionable in some applications, especially in physics. In
fact, fixing $t>0$ and $N\in\mathbb{N}$, the corresponding Galerkin projection
of $Z_{t}$ has characteristic function
$\mathbb{E}\left[e^{i\left\langle
h,\sum_{n=1}^{N}\zeta^{n}_{t}e_{n}\right\rangle}\right]=e^{-t\gamma^{\alpha}\sum_{n=1}^{N}\left|\left\langle
h,e_{n}\right\rangle\right|^{\alpha}},\quad h\in H,$
for some constant $\gamma>0$. Therefore, with respect to the Brownian case we
lose the isotropy, that is, the rotational stability of the noise, which is a
property as desirable as it is realistic for a random perturbation.
Motivated by this argument, it is worth studying the results contained in the
aforementioned works also for the subordinated process $W_{L}$, since its
Galerkin projections are $2\alpha$–stable, isotropic Lévy processes, as we
shall discuss in Section 2. With this purpose in mind, the present paper
focuses on the linear case, i.e., the Ornstein–Uhlenbeck (henceforth
abbreviated as _OU_) one. A number of complications arise from the approach
that we suggest, the most evident being the lack of independence of the
processes $\left(\beta^{n}_{L}\right)_{n}$, which in general makes the
techniques used in the other cases unfeasible. Nevertheless, the structure of
the noise allows us to construct the objects of our interest and to carry out our
arguments with the intuition that, conditioning on the $\sigma$–algebra
generated by the subordinator $L$, we are dealing with time–shifted Brownian
motions.
The paper is structured as follows. In Section 2 we carefully describe the
theoretical framework of our analysis and suggest a natural procedure
–essentially relying on _Markov’s inequality_ – to construct both the
subordinated cylindrical Wiener process $W_{L}$, or, more precisely,
$\sqrt{Q}W_{L}$, which in general takes values in a Hilbert space bigger than
$H$, and the stochastic convolution $\tilde{Z}_{A,Q}$, which is a $H$–valued
random process instead.
In Section 3 we are concerned with the smoothing effect of the Markov
transition semigroup $R=\left(R_{t}\right)_{t\geq 0}$ associated with
$\left(Z^{x}\right)_{x\in H}$, defined by
$R_{t}\phi\left(x\right)\coloneqq\mathbb{E}\left[\phi\left(Z^{x}_{t}\right)\right],\quad
x\in H,\,\phi\in\mathcal{B}_{b}\left(H\right),\,t\geq 0.$
We first study the finite–dimensional case, starting with a deterministic time
change (see Theorem 4) and subsequently recovering the random time shift in
Theorem 6. This way of proceeding is customary when working with subordinated
Brownian motions (see, e.g., [9, 15]). Taking advantage of the linear
structure of our model, we are able to get a derivation formula for
$R_{t}\phi$ (see Equation (18)) with a density argument, avoiding an
application of the Bismut–Elworthy–Li’s type formula provided by [15]. This is
a remarkable fact, also because it is consistent with the Gaussian framework,
where it is preferable to use the _Bismut–Elworthy–Li’s formula_ only in the
nonlinear case. Then in Theorem 7 we pass to the general, infinite–dimensional
setting under suitable assumptions. A subtle difference between the finite–
and infinite–dimensional cases is that in the former we get an expression for
the Gateaux derivative of $R_{t}\phi$ for every
$\phi\in\mathcal{B}_{b}\left(H\right)$, whereas in the latter such a formula
(see Equation (26)) holds true only for $\phi\in C_{b}\left(H\right)$. In
addition, in Corollary 8 we provide a gradient estimate that, for
$\alpha\in\left(\frac{1}{2},1\right)$, represents the starting point for the
analysis of the Kolmogorov equation in its mild form with fixed–point
arguments, an analysis which will be the topic of future research.
Each of the previous two sections is closed by an example which studies a
concrete framework, namely $H=L^{2}_{0}\left(\mathbb{T}^{d}\right)$, with
$\mathbb{T}^{d}=\mathbb{R}^{d}/\mathbb{Z}^{d}$ being the $d$–dimensional
torus. Herein we discuss the hypotheses required by the several theorems of
the paper, with explicit computations that offer a parallel with the
corresponding, well–known results of the Gaussian setting.
## 2 Subordinated Cylindrical Wiener Process and Stochastic Convolution
Let $H$ be a separable Hilbert space and $\left(e_{n}\right)_{n}$ be a
complete orthonormal system. We consider a complete probability space
$\left(\Omega,\mathcal{F},\mathbb{P}\right)$ and introduce a sequence of
independent Brownian motions $\left(\beta^{n}\right)_{n}$ on it. Let
$L=\left(L_{t}\right)_{t}$ be a strictly $\alpha$–stable subordinator, i.e.,
an increasing Lévy process where the distribution of $L_{1}\sim\mu$ is
characterized by
$\hat{\mu}\left(u\right)=\exp\left\\{-\bar{c}\left|u\right|^{\alpha}\left(1-i\tan\frac{\pi\alpha}{2}\,\text{sign}\left(u\right)\right)\right\\},\quad
u\in\mathbb{R},$ (1)
with $\bar{c}>0,\,\alpha\in\left(0,1\right)$. The Laplace transform of $\mu$
is given by
$L_{\mu}\left(u\right)=\mathbb{E}\left[e^{-uL_{1}}\right]=e^{-c^{\prime}u^{\alpha}},\quad
u\geq 0,$ (2)
where $c^{\prime}$ is a constant depending on $\bar{c}$ (for an expression of
$c^{\prime}$ we refer to [13, Example $24.12$], but it is of no use in our
work). Let us introduce the subordinated Brownian motions
$\left(\beta^{n}_{L_{t}}\right)_{t},n\in\mathbb{N}$: assuming $L$ to be
independent from $\left(\beta^{n}\right)_{n}$, [13, Theorem $30.1$] implies
that $\left(\beta^{n}_{L}\right)_{n}$ are real–valued Lévy processes.
Denoting by $\mathcal{N}$ the family of $\mathcal{F}$–negligible sets, we
introduce the augmented $\sigma$–algebra
$\mathcal{F}^{L}\coloneqq\sigma\left(\mathcal{F}^{L}_{0}\cup\mathcal{N}\right)$,
where $\mathcal{F}^{L}_{0}$ is the natural $\sigma$–algebra generated by the
subordinator. Analogously, we consider the augmented $\sigma$–algebras
$\mathcal{F}^{\beta^{n}}$ generated by the Brownian motions. Thanks to the
hypotheses of independence that we have assumed on the processes, we have that
$\mathcal{F}^{L},\,\left(\mathcal{F}^{\beta^{n}}\right)_{n}$ are mutually
independent. In our context, it is natural to deal with different filtrations.
Specifically, for every $n\in\mathbb{N}$ let
$\mathbb{F}^{n}=\left(\mathcal{F}^{\beta^{n}}_{t}\right)_{t}$ be the minimal
augmented filtration generated by $\beta^{n}$, that is,
$\mathcal{F}^{\beta^{n}}_{t}\coloneqq\sigma\left(\left({\mathcal{F}_{0}^{\beta^{n}}}\right)_{t}\cup\mathcal{N}\right)$
for every $t\geq 0$, where
$\left(\left({\mathcal{F}_{0}^{\beta^{n}}}\right)_{t}\right)_{t}$ is the
natural filtration of the process. According to [12, Theorem I$.31$],
$\mathbb{F}^{n}$ satisfies the usual hypotheses. Then we construct a complete
filtration associated with the subordinated Brownian motions. It is denoted by
$\mathbb{F}_{L}=\left(\mathcal{F}_{t}\right)_{t}$, where we define
$\mathcal{F}_{t}\coloneqq\sigma\left(\bigcup_{n\in\mathbb{N}}\mathcal{F}^{\beta_{L}^{n}}_{t}\right),\quad
t\geq 0,$
with $\mathbb{F}^{n}_{L}=\left(\mathcal{F}^{\beta_{L}^{n}}_{t}\right)_{t}$
being the minimal augmented filtration associated with $\beta^{n}_{L}$.
###### Remark 1.
In the finite dimensional case, we denote by
$W_{L}^{N}=\left(W^{N}_{L_{t}}\right)_{t}$ the subordinated,
$\mathbb{R}^{N}$–valued Brownian motion, meaning that
$W^{N}_{L_{t}}=\left[\begin{matrix}\beta^{1}_{L_{t}}&\cdots&\beta^{N}_{L_{t}}\end{matrix}\right]^{T},\quad
t\geq 0.$
By [13, Theorem $30.1$], $W^{N}_{L}$ is an $\mathbb{R}^{N}$–valued Lévy
process, and it is easy to verify that its minimal augmented filtration
$\left(\mathcal{F}^{W_{L}^{N}}_{t}\right)_{t}$ coincides with
$\mathbb{F}_{L}$. This fact shows that the construction that we have carried
out for $\mathbb{F}_{L}$ is natural.
Using the notation we have just introduced, in the general case the
$\sigma$–algebras constituting $\mathbb{F}_{L}$ can be expressed as follows:
$\mathcal{F}_{t}=\sigma\left(\bigcup_{N\in\mathbb{N}}\mathcal{F}^{W_{L}^{N}}_{t}\right),\quad
t\geq 0.$
### 2.1 Subordinated Cylindrical Wiener Process
The aim of this section is to give a rigorous meaning to the formal notation
$W_{L_{t}}=\sum_{n=1}^{\infty}\beta^{n}_{L_{t}}e_{n},t>0$.
First, fix $h\in H,\,t>0$ and notice that the series
$\sum_{n=1}^{\infty}\beta^{n}_{L_{t}}\left\langle h,e_{n}\right\rangle$
converges in distribution. Indeed, even if the random variables
$\left(\beta^{n}_{L_{t}}\right)_{n\in\mathbb{N}}$ are not independent due to
the presence of the subordinator, we can still exploit the mutual independence
of the $\sigma$–algebras $\left(\mathcal{F}^{\beta^{n}}\right)_{n}$ by
conditioning with respect to $\mathcal{F}^{L}$, which in turn is independent
from the previous ones. In order to do so, we use the law of total expectation
together with (2) to get, for every $u\in\mathbb{R}$,
$\displaystyle\mathbb{E}\left[\exp\left\\{iu\sum_{n=1}^{N}\beta^{n}_{L_{t}}\left\langle
h,e_{n}\right\rangle\right\\}\right]$
$\displaystyle=\mathbb{E}\left[\mathbb{E}\left[\exp\left\\{iu\sum_{n=1}^{N}\beta^{n}_{r}\left\langle
h,e_{n}\right\rangle\right\\}\Big{|}_{r=L_{t}}\,\Big{|}\,\mathcal{F}^{L}\right]\right]$
$\displaystyle=\mathbb{E}\left[\mathbb{E}\left[\exp\left\\{iu\sum_{n=1}^{N}\beta^{n}_{r}\left\langle
h,e_{n}\right\rangle\right\\}\right]\Big{|}_{r=L_{t}}\right]=\mathbb{E}\left[\prod_{n=1}^{N}\exp\left\\{-\frac{1}{2}L_{t}\left|u\right|^{2}\left|\left\langle
h,e_{n}\right\rangle\right|^{2}\right\\}\right]$
$\displaystyle=\exp\left\\{-tc^{\prime}\frac{1}{2^{\alpha}}\left|u\right|^{2\alpha}\left(\sum_{n=1}^{N}\left|\left\langle
h,e_{n}\right\rangle\right|^{2}\right)^{\alpha}\right\\}\underset{N\to\infty}{\longrightarrow}\exp\left\\{-t\frac{c^{\prime}}{2^{\alpha}}\left\lVert
h\right\rVert^{2\alpha}_{H}\left|u\right|^{2\alpha}\right\\}.$ (3)
Hence applying Lévy’s continuity theorem we see that the series
$\sum_{n=1}^{\infty}\beta^{n}_{L_{t}}\left\langle h,e_{n}\right\rangle$
converges in distribution to a symmetric, $2\alpha$–stable random variable.
Moreover, for every $n\in\mathbb{N}$, choosing $h=e_{n}$ and $N>n$, the
computations in (3) provide the distribution of the Lévy process
$\beta^{n}_{L}$, namely
$\mathbb{E}\left[e^{iu\beta^{n}_{L_{t}}}\right]=\exp\left\\{-t\frac{c^{\prime}}{2^{\alpha}}\left|u\right|^{2\alpha}\right\\},\quad
u\in\mathbb{R},\,\text{ for any }t>0.$ (4)
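As a quick numerical sanity check of (4), the following Python sketch compares the empirical characteristic function of $\beta^{1}_{L_{t}}$ with the right–hand side of (4). It is purely illustrative: it assumes the normalization $c^{\prime}=1$, i.e. $\mathbb{E}\left[e^{-\lambda L_{t}}\right]=e^{-t\lambda^{\alpha}}$, and samples the one–sided stable law through Kanter's representation, an ingredient not used in the text.

```python
import numpy as np

# Monte Carlo check of (4), assuming the normalization c' = 1, i.e.
# E[exp(-lam * L_t)] = exp(-t * lam**alpha).
rng = np.random.default_rng(0)
alpha, t, n = 0.7, 1.0, 400_000

U = rng.uniform(0.0, np.pi, n)
W = rng.exponential(1.0, n)
# Kanter: S = sin(aU)/sin(U)^(1/a) * (sin((1-a)U)/W)^((1-a)/a), E[e^{-uS}] = e^{-u^a}
S = (np.sin(alpha * U) / np.sin(U) ** (1.0 / alpha)
     * (np.sin((1.0 - alpha) * U) / W) ** ((1.0 - alpha) / alpha))
L_t = t ** (1.0 / alpha) * S                 # self-similarity: L_t ~ t^(1/alpha) L_1
X = np.sqrt(L_t) * rng.standard_normal(n)    # beta^1_{L_t}: N(0, L_t) given L_t

for u in (0.5, 1.0, 2.0):
    empirical = np.cos(u * X).mean()         # the characteristic function is real
    predicted = np.exp(-t * u ** (2 * alpha) / 2 ** alpha)
    print(f"u = {u}: empirical {empirical:.4f}, predicted by (4) {predicted:.4f}")
```

The two printed columns should agree up to Monte Carlo error.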
The process $W_{L}=\left(W_{L_{t}}\right)_{t}$ is a _subordinated cylindrical
Wiener process_, but we might also call it a _cylindrical, $2\alpha$–stable
isotropic process_. In fact, for every $N\in\mathbb{N}$ and $t>0$, if we
denote by $\pi_{N}$ the projection onto the first $N$ Fourier components and
by $H_{N}$ its range, an argument analogous to the one in (3) yields:
$\mathbb{E}\left[\exp\left\\{i\left\langle
z,\sum_{n=1}^{N}\beta^{n}_{L_{t}}e_{n}\right\rangle\right\\}\right]=\exp\left\\{-t\frac{c^{\prime}}{2^{\alpha}}\left(\sum_{n=1}^{N}\left|\left\langle
z,e_{n}\right\rangle\right|^{2}\right)^{\alpha}\right\\},\quad z\in{H}.$
Hence canonically identifying $H_{N}$ with $\mathbb{R}^{N}$, the Galerkin
projection $\left(\sum_{n=1}^{N}\beta^{n}_{L_{t}}e_{n}\right)_{t}$ can be viewed
as an $\mathbb{R}^{N}$–valued, $2\alpha$–stable, isotropic Lévy process.
Secondly, we consider a linear, bounded, nonnegative definite operator $Q:H\to
H$ such that $e_{n}$ is one of its eigenvectors corresponding to the
eigenvalue $\sigma_{n}^{2}\geq 0$, $n\in\mathbb{N}$. We study the convergence
in probability, on an appropriate space, of the series:
$\sqrt{Q}W_{L_{t}}=\sum_{n=1}^{\infty}\sigma_{n}\beta^{n}_{L_{t}}e_{n},\quad
t>0.$
Let us introduce a bounded sequence $\left(\rho_{n}\right)_{n}$ of strictly
positive numbers such that
$\sum_{n=1}^{\infty}\rho_{n}^{2r}\sigma_{n}^{2r}<\infty$ for some
$r\in\left(0,\alpha\right)$, and consider the corresponding Hilbert space
$\left(V,\left\langle\cdot,\cdot\right\rangle_{V}\right)$, where
$V\coloneqq\left\\{h\in H:\sum_{n=1}^{\infty}\rho_{n}^{-2}\left|\left\langle
h,e_{n}\right\rangle\right|^{2}<\infty\right\\}\quad\text{and}\quad\left\langle
v,w\right\rangle_{V}\coloneqq\sum_{n=1}^{\infty}\rho_{n}^{-2}\left\langle
v,e_{n}\right\rangle\left\langle w,e_{n}\right\rangle,\quad v,w\in V.$ (5)
Evidently $V\subset H$ with dense and continuous embedding; therefore, using
the concept of a _Gelfand triple_, we can regard a generic $h\in H$ as an
element of $V^{\prime}$, namely
$\left\langle h,v\right\rangle_{V^{\prime},V}=\sum_{n=1}^{\infty}\left\langle
h,e_{n}\right\rangle\left\langle v,e_{n}\right\rangle,\quad v\in V.$
Noticing that $\left\langle
h,\cdot\right\rangle_{V^{\prime},V}=\left\langle\tilde{v},\cdot\right\rangle_{V}$,
where $\tilde{v}\coloneqq\sum_{n=1}^{\infty}\rho_{n}^{2}\left\langle
h,e_{n}\right\rangle e_{n}\in V$, we can apply the _Riesz representation theorem_
to get $\left\lVert
h\right\rVert^{2}_{V^{\prime}}=\sum_{n=1}^{\infty}\rho_{n}^{2}\left|\left\langle
h,e_{n}\right\rangle\right|^{2}.$ Now we fix $t>0$ and show that
$\left(\sum_{n=1}^{N}\sigma_{n}\beta^{n}_{L_{t}}e_{n}\right)_{N}\subset
V^{\prime}$ is a Cauchy sequence in probability. Indeed, applying _Markov’s
inequality_ and using the fact that the function
$\phi\left(x\right)=x^{r},\,x\geq 0$, is subadditive and strictly increasing
since $0<r<\alpha<1$, for every $\epsilon>0$ we get:
$\displaystyle\mathbb{P}$
$\displaystyle\left(\left\lVert\sum_{n=p}^{q}\sigma_{n}\beta^{n}_{L_{t}}e_{n}\right\rVert_{V^{\prime}}>\epsilon\right)\leq\mathbb{P}\left(\phi\left(\left\lVert\sum_{n=p}^{q}\sigma_{n}\beta^{n}_{L_{t}}e_{n}\right\rVert_{V^{\prime}}^{2}\right)>\phi\left(\epsilon^{2}\right)\right)\leq\frac{1}{\epsilon^{2r}}\mathbb{E}\left[\phi\left(\left\lVert\sum_{n=p}^{q}\sigma_{n}\beta^{n}_{L_{t}}e_{n}\right\rVert_{V^{\prime}}^{2}\right)\right]$
$\displaystyle=\\!\epsilon^{-2r}\mathbb{E}\left[\phi\left(\sum_{n=p}^{q}\sigma_{n}^{2}\rho_{n}^{2}\left|\beta^{n}_{L_{t}}\right|^{2}\right)\right]\\!\leq\epsilon^{-2r}\sum_{n=p}^{q}\mathbb{E}\left[\left(\sigma_{n}^{2r}\rho_{n}^{2r}\left|\beta^{n}_{L_{t}}\right|^{2r}\right)\right]\\!=\\!\epsilon^{-2r}\mathbb{E}\left[\left|\beta^{1}_{L_{t}}\right|^{2r}\right]\left(\sum_{n=p}^{q}\sigma_{n}^{2r}\rho_{n}^{2r}\right)\\!\underset{p,q\to\infty}{\longrightarrow}0,$
where we use that, by construction,
$\beta^{n}_{L_{t}}\sim\beta^{1}_{L_{t}},n\in\mathbb{N},$ and that by (4) they
all have a $2\alpha$–stable distribution, which has a finite moment of order
$2r$ (see also Remark 2). By completeness, we can conclude the existence of an
a.s. unique, $V^{\prime}$–valued random variable $\sqrt{Q}W_{L_{t}}$ such that
$\sqrt{Q}W_{L_{t}}=\mathbb{P}-\lim_{N\to\infty}\sum_{n=1}^{N}\sigma_{n}\beta^{n}_{L_{t}}e_{n}\quad\text{in
}V^{\prime}.$
In fact, this convergence in probability also holds in the
$\mathbb{P}-$a.s. sense, as the following elementary and general lemma shows.
###### Lemma 1.
Let $\left(X^{n}\right)_{n}$ be a sequence of real–valued random variables
defined on a probability space $\left(\Omega,\mathcal{F},\mathbb{P}\right)$
and $H$ be a separable Hilbert space admitting $\left(e_{n}\right)_{n}$ as
CONS. If $\sum_{n=1}^{\infty}X^{n}e_{n}$ converges in probability, then it
converges $\mathbb{P}-$a.s.
###### Proof.
Let
$S\coloneqq\mathbb{P}-\lim_{N\to\infty}\sum_{n=1}^{N}X^{n}e_{n}\colon\Omega\to
H$. Obviously
$S\left(\omega\right)=\sum_{n=1}^{\infty}\left\langle
S\left(\omega\right),e_{n}\right\rangle
e_{n}=H-\\!\\!\lim_{N\to\infty}\sum_{n=1}^{N}\left\langle
S\left(\omega\right),e_{n}\right\rangle e_{n},\quad\omega\in\Omega.$ (6)
Convergence in measure implies a.s. convergence along a subsequence, hence we
have
$S\left(\omega\right)=H-\\!\\!\lim_{k\to\infty}\sum_{n=1}^{N_{k}}X^{n}\left(\omega\right)e_{n}\quad\text{for
}\mathbb{P}-\text{a.e. }\omega\in\Omega.$
Therefore, for $\mathbb{P}-$a.e. $\omega\in\Omega$, we see that the Fourier
components of $S$ are
$\left\langle
S\left(\omega\right),e_{\bar{n}}\right\rangle=\lim_{k\to\infty}\left\langle\sum_{n=1}^{N_{k}}X^{n}\left(\omega\right)e_{n},e_{\bar{n}}\right\rangle=X^{\bar{n}}\left(\omega\right)\quad\text{for
every }\bar{n}\in\mathbb{N}.$
Substituting in (6) we conclude
$S\left(\omega\right)=H-\\!\\!\lim_{N\to\infty}\sum_{n=1}^{N}X^{n}\left(\omega\right)e_{n}\quad\text{for
}\mathbb{P}-\text{a.e. }\omega\in\Omega,$
as we stated. ∎
Going back to $\sqrt{Q}W_{L_{t}}$, since $\left(\rho_{n}e_{n}\right)_{n}$ is a
CONS for the Hilbert space $V$, Lemma 1 allows us to write
$\sqrt{Q}W_{L_{t}}=\lim_{N\to\infty}\left\langle\sum_{n=1}^{N}\sigma_{n}\beta^{n}_{L_{t}}e_{n},\cdot\right\rangle_{V^{\prime},V}=\lim_{N\to\infty}\sum_{n=1}^{N}\rho_{n}\sigma_{n}\beta^{n}_{L_{t}}\left\langle\left(\rho_{n}e_{n}\right),\cdot\right\rangle_{V}\quad\mathbb{P}-\text{a.s.}$
It then follows that
$\left\langle\sqrt{Q}W_{L_{t}},v\right\rangle_{V^{\prime},V}=\lim_{N\to\infty}\sum_{n=1}^{N}\sigma_{n}\beta^{n}_{L_{t}}\left\langle
v,e_{n}\right\rangle$ for every $v\in V,\,\mathbb{P}-$a.s. Combining this with
(3), we can see that
$\left\langle\sqrt{Q}W_{L_{t}},v\right\rangle_{V^{\prime},V}$ has a symmetric,
$2\alpha$–stable distribution. We collect the previous results in the next
theorem.
###### Theorem 2.
1. Given $h\in H$ and $t>0$, the series
$\sum_{n=1}^{\infty}\beta^{n}_{L_{t}}\left\langle h,e_{n}\right\rangle$
converges in distribution to a real–valued, symmetric, $2\alpha$–stable random
variable $X_{t}$ whose characteristic function is
$\mathbb{E}\left[e^{iuX_{t}}\right]=\exp\left\\{-t\frac{c^{\prime}}{2^{\alpha}}\left\lVert
h\right\rVert^{2\alpha}_{H}\left|u\right|^{2\alpha}\right\\},\quad
u\in\mathbb{R}.$
2. Consider a linear, bounded, nonnegative definite operator $Q:H\to H$ such that
$\left(e_{n}\right)_{n}$ is a basis of its eigenvectors corresponding to the
eigenvalues
$\left(\sigma_{n}^{2}\right)_{n}\left(\subset\mathbb{R}_{+}\right)$. Let
$\left(\rho_{n}\right)_{n}$ be a bounded sequence of strictly positive weights
such that $\sum_{n=1}^{\infty}\rho_{n}^{2r}\sigma_{n}^{2r}<\infty$ for some
$0<r<{\alpha}$. Then the corresponding Hilbert space
$\left(V,\left\langle\cdot,\cdot\right\rangle_{V}\right)$ defined in (5) is
densely and continuously embedded in $H$ and, for every $t>0$, the random
variable $\sqrt{Q}W_{L_{t}}\colon\Omega\to V^{\prime}$ is defined as
$\sqrt{Q}W_{L_{t}}\coloneqq\lim_{N\to\infty}\sum_{n=1}^{N}\sigma_{n}\beta^{n}_{L_{t}}e_{n}\quad\mathbb{P}-\text{a.s.}$
In particular, for every $v\in V$,
$\sum_{n=1}^{N}\beta^{n}_{L_{t}}\left\langle\sqrt{Q}v,e_{n}\right\rangle\underset{N\to\infty}{\longrightarrow}\left\langle\sqrt{Q}W_{L_{t}},v\right\rangle_{V^{\prime},V}\quad\mathbb{P}-\text{a.s.}$
###### Remark 2.
We can state the finiteness of the moment of order $2r$ of the random variable
$\beta^{1}_{L_{t}}$ without explicitly knowing its distribution, i.e., without
using (4). In fact, we can proceed as follows:
$\mathbb{E}\left[\left|\beta^{1}_{L_{t}}\right|^{2r}\right]=\mathbb{E}\left[\mathbb{E}\left[\left|\beta^{1}_{L_{t}}\right|^{2r}\Bigr{|}\mathcal{F}^{L}\right]\right]=\frac{2^{r}}{\sqrt{\pi}}\Gamma\left(\frac{2r+1}{2}\right)\mathbb{E}\left[L_{t}^{r}\right]<\infty,$
since we are dealing with $0<r<\alpha.$ For the second equality we refer to
[14, Equation $\left(17\right)$].
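As a numerical complement to Remark 2, the sketch below checks the displayed identity by Monte Carlo for one value $0<r<\alpha$; the subordinator sampler (Kanter's representation) and the normalization $\mathbb{E}\left[e^{-\lambda L_{t}}\right]=e^{-t\lambda^{\alpha}}$ are illustrative assumptions of the sketch, not constructions taken from the text.

```python
import numpy as np
from math import gamma, pi, sqrt

# Checks E[|beta^1_{L_t}|^{2r}] = 2^r / sqrt(pi) * Gamma((2r+1)/2) * E[L_t^r].
rng = np.random.default_rng(1)
alpha, t, r, n = 0.7, 1.0, 0.3, 400_000        # requires 0 < r < alpha

U = rng.uniform(0.0, np.pi, n)
W = rng.exponential(1.0, n)
S = (np.sin(alpha * U) / np.sin(U) ** (1.0 / alpha)
     * (np.sin((1.0 - alpha) * U) / W) ** ((1.0 - alpha) / alpha))
L_t = t ** (1.0 / alpha) * S                   # Kanter sampler + self-similarity
X = np.sqrt(L_t) * rng.standard_normal(n)      # beta^1_{L_t}

lhs = np.mean(np.abs(X) ** (2 * r))
rhs = 2 ** r / sqrt(pi) * gamma((2 * r + 1) / 2) * np.mean(L_t ** r)
print(f"E|X|^(2r) = {lhs:.4f},  conditional-Gaussian formula = {rhs:.4f}")
```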
### 2.2 Stochastic Convolution
Let $A:\mathcal{D}\left(A\right)\subset H\to H$ be a linear, selfadjoint,
negative definite, unbounded operator that shares with $Q$ a common basis of
eigenvectors $\left(e_{n}\right)_{n}$. We denote by
$\left(-\lambda_{n}\right)_{n}$, with
$0<\lambda_{1}\leq\lambda_{2}\leq\dots\leq\lambda_{n}\leq\cdots$ the
corresponding eigenvalues, i.e., $Ae_{n}=-\lambda_{n}e_{n},\,n\in\mathbb{N}$.
Recalling that $\alpha\in\left(0,1\right)$ has been fixed at the beginning of
Section 2, it is convenient to introduce the shorthand notation
$X\sim\text{stable}\left(\alpha,\beta,\gamma,\delta\right)$ to denote a random
variable $X$ with characteristic function given by
$\mathbb{E}\left[e^{iuX}\right]=\exp\left\\{-\gamma^{\alpha}\left|u\right|^{\alpha}\left(1-i\beta\tan\frac{\pi\alpha}{2}\text{sign}\left(u\right)\right)+i\delta
u\right\\},\quad u\in\mathbb{R},$
where $\left|\beta\right|\leq 1,\gamma>0$ and $\delta\in\mathbb{R}$. Hence by
(4), for every $n\in\mathbb{N}$ the Lévy process $\beta^{n}_{L}$ has random
variables distributed as
$\beta^{n}_{L_{t}}\sim\text{stable}\left(2\alpha,0,\left(t\frac{c^{\prime}}{2^{\alpha}}\right)^{{1}/\left({2\alpha}\right)},0\right),\quad
t>0.$
We denote by $U^{n}=\left(U_{t}^{n}\right)_{t\geq 0}$ the OU–process
$U_{t}^{n}\coloneqq\int_{0}^{t}e^{-\lambda_{n}\left(t-s\right)}\sigma_{n}\,d\beta^{n}_{L_{s}},\,t\geq
0:$ this is the unique (up to evanescence) solution of the one dimensional
stochastic differential equation
$dU^{n}_{t}=-\lambda_{n}U^{n}_{t}dt+\sigma_{n}\,d\beta^{n}_{L_{t}},\quad
U^{n}_{0}=0.$ (7)
The processes $\left(U^{n}\right)_{n}$ are càdlàg and adapted to the
filtration $\mathbb{F}_{L}$, and direct computations (see, e.g., [1,
Proposition $3.2$]) show that
$U_{t}^{n}\sim\text{stable}(2\alpha,0,\gamma_{n}\left(t\right),0)$, where
$\gamma_{n}\left(t\right)\coloneqq\left(\frac{c^{\prime}}{2^{\alpha}}\right)^{{1}/\left({2\alpha}\right)}\\!\\!\left(\int_{0}^{t}\\!\\!e^{-2\alpha\lambda_{n}\left(t-s\right)}\sigma_{n}^{2\alpha}\,ds\right)^{{1}/\left({2\alpha}\right)}\\!\\!\\!=\sigma_{n}\left(\frac{c^{\prime}}{2^{\alpha+1}\alpha}\right)^{{1}/\left({2\alpha}\right)}\\!\\!\left(\frac{1-e^{-2\alpha\lambda_{n}t}}{\lambda_{n}}\right)^{{1}/\left({2\alpha}\right)}\\!\\!,\quad
t>0,\,n\in\mathbb{N}.$
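For illustration, (7) can be simulated directly once the subordinator increments are available. The sketch below, a left–point Euler discretization of the stochastic integral, compares the empirical characteristic function of $U^{n}_{t}$ with the stable$\left(2\alpha,0,\gamma_{n}\left(t\right),0\right)$ prediction; the normalization $c^{\prime}=1$ and Kanter's sampler are again illustrative assumptions.

```python
import numpy as np

# Left-point Euler discretization of U_t = int_0^t e^{-lam(t-s)} sigma dbeta_{L_s},
# compared with the stable(2*alpha, 0, gamma_n(t), 0) law (normalization c' = 1).
rng = np.random.default_rng(2)
alpha, lam, sigma, t = 0.7, 2.0, 1.0, 1.0
n_paths, n_steps = 100_000, 200
dt = t / n_steps

def pos_stable(size, a):
    """Kanter's representation: E[exp(-u * S)] = exp(-u**a) (an assumption here)."""
    U = rng.uniform(0.0, np.pi, size)
    W = rng.exponential(1.0, size)
    return (np.sin(a * U) / np.sin(U) ** (1.0 / a)
            * (np.sin((1.0 - a) * U) / W) ** ((1.0 - a) / a))

U_t = np.zeros(n_paths)
for k in range(n_steps):
    s = k * dt
    dL = dt ** (1.0 / alpha) * pos_stable(n_paths, alpha)   # subordinator increments
    dB = np.sqrt(dL) * rng.standard_normal(n_paths)         # BM increments over [L_s, L_{s+dt}]
    U_t += np.exp(-lam * (t - s)) * sigma * dB

gam = (sigma * (1.0 / (2 ** (alpha + 1) * alpha)) ** (1 / (2 * alpha))
       * ((1 - np.exp(-2 * alpha * lam * t)) / lam) ** (1 / (2 * alpha)))
for u in (0.5, 1.0, 2.0):
    print(f"u = {u}: empirical {np.cos(u * U_t).mean():.4f}, "
          f"predicted {np.exp(-(gam * u) ** (2 * alpha)):.4f}")
```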
We are now in a position to construct the _stochastic convolution_ and the
corresponding OU–process.
###### Theorem 3.
Assume that
$\sum_{n=1}^{\infty}\dfrac{\sigma_{n}^{2r}}{\lambda_{n}^{{r}/{\alpha}}}<\infty\quad\text{for
some }r\in\left(0,\alpha\right).$ (i)
Then, for all $t\\!>\\!0$, the series $\sum_{n=1}^{\infty}\\!U^{n}_{t}e_{n}$
converges $\mathbb{P}-$a.s. to a random variable
$\tilde{Z}_{A,Q}\\!\left(t\right)=\\!\int_{0}^{t}e^{\left(t-s\right)A}\sqrt{Q}\,dW_{L_{s}}$.
The resulting process
$\tilde{Z}_{A,Q}=\left(\tilde{Z}_{A,Q}\left(t\right)\right)_{t}$ is
$\mathbb{F}_{L}$–adapted and is called _stochastic convolution_.
The corresponding OU–process starting at $x\in H$, denoted by
$Z^{x}=\left(Z_{t}^{x}\right)_{t}$ and defined by
$Z_{t}^{x}\coloneqq
e^{tA}x+\int_{0}^{t}e^{\left(t-s\right)A}\sqrt{Q}\,dW_{L_{s}}=e^{tA}x+\tilde{Z}_{A,Q}\left(t\right),\quad
t\geq 0,$
is $\mathbb{F}_{L}$–adapted and is a time–homogeneous Markov process.
###### Proof.
Fix $t>0$. Thanks to the preceding discussion, we know that
$U^{n}_{t}\sim\gamma_{n}\left(t\right)X,\,n\in\mathbb{N}$, where $X$ is a
random variable such that $X\sim\text{stable}\left(2\alpha,0,1,0\right)$. Then
an application of Markov’s inequality entails:
$\mathbb{P}\left(\left\lVert\sum_{n=p}^{q}U^{n}_{t}e_{n}\right\rVert_{H}\\!\\!\\!>\epsilon\right)\\!\\!\leq\epsilon^{-2r}\mathbb{E}\left[\phi\left(\left\lVert\sum_{n=p}^{q}U^{n}_{t}e_{n}\right\rVert_{H}^{2}\right)\right]\\!\\!=\epsilon^{-2r}\mathbb{E}\left[\phi\left(\sum_{n=p}^{q}\left|U^{n}_{t}\right|^{2}\right)\right]\\!\\!\leq\epsilon^{-2r}\sum_{n=p}^{q}\mathbb{E}\left[\left(\left|U^{n}_{t}\right|^{2r}\right)\right]\\\
=\epsilon^{-2r}\mathbb{E}\left[\left|X\right|^{2r}\right]\left(\frac{c^{\prime}}{2^{\alpha+1}\alpha}\right)^{r/\alpha}\left(\sum_{n=p}^{q}\frac{\sigma_{n}^{2r}}{\lambda_{n}^{r/\alpha}}\left(1-e^{-2\alpha\lambda_{n}t}\right)^{r/\alpha}\right)\leq
c\left(\epsilon\right)\left(\sum_{n=p}^{q}\frac{\sigma_{n}^{2r}}{\lambda_{n}^{r/\alpha}}\right)\underset{p,q\to\infty}{\longrightarrow}0,\quad\epsilon>0,$
with
$c\left(\epsilon\right)\coloneqq\epsilon^{-2r}\mathbb{E}\left[\left|X\right|^{2r}\right]\left(\dfrac{c^{\prime}}{2^{\alpha+1}\alpha}\right)^{{r}/{\alpha}}$
and $\phi\left(x\right)=x^{r}$, as above. Therefore the series converges in
probability:
$\tilde{Z}_{A,Q}\left(t\right)=\int_{0}^{t}e^{\left(t-s\right)A}\sqrt{Q}\,dW_{L_{s}}\coloneqq\mathbb{P}-\lim_{N\to\infty}\sum_{n=1}^{N}U^{n}_{t}e_{n}.$
An application of Lemma 1 shows that such convergence is true in the
$\mathbb{P}-$a.s. sense, as well. Obviously $\tilde{Z}_{A,Q}$ is an
$\mathbb{F}_{L}$–adapted process, since the space
$\left(\Omega,\mathcal{F},\mathbb{P}\right)$ is complete by hypothesis,
$\mathbb{F}_{L}$ is complete by construction and the one dimensional
OU–processes $U^{n}$ are $\mathbb{F}_{L}$–adapted.
Concerning the OU–processes, for every $x\in H$ we can express the random
variables of $Z^{x}=\left(Z_{t}^{x}\right)_{t}$ as follows:
$Z^{x}_{t+h}\overset{a.s.}{=}e^{hA}Z^{x}_{t}+\int_{t}^{t+h}e^{\left(t+h-s\right)A}\sqrt{Q}\,dW_{L_{s}}=e^{hA}Z^{x}_{t}+\sum_{n=1}^{\infty}\left(\int_{t}^{t+h}e^{-\lambda_{n}\left(t+h-s\right)}\sigma_{n}\,d\beta_{L_{s}}^{n}\right)e_{n},\quad
t,h\geq 0.$
This immediately implies the Markovianity of the process, recalling the
independence of the increments of the Lévy processes
$\left(\beta^{n}_{L}\right)_{n}$. The time homogeneity is obtained by a
standard argument relying on the stationarity of the increments of the same
processes and the fact that the coefficients of the one–dimensional SDEs in
(7) are time–autonomous. The proof is then complete. ∎
We close this section with an example which analyzes a common framework in
applications (see, e.g., [6]).
###### Example 1.
Let $\mathbb{T}^{d}=\mathbb{R}^{d}/\mathbb{Z}^{d}$ be the $d$–dimensional
torus and denote by $e_{k}$ the functions
$e_{k}\left(x\right)\coloneqq\begin{cases}\sqrt{2}\cos\left(2\pi k\cdot
x\right),&k\in\mathbb{Z}^{d}_{+}\\\ \sqrt{2}\sin\left(2\pi k\cdot
x\right),&k\in\mathbb{Z}^{d}_{-}\end{cases},\quad x\in\mathbb{T}^{d},$
where $\mathbb{Z}^{d}_{+}\coloneqq\left\\{k\in\mathbb{Z}^{d}_{0}:\left(k_{1}>0\right)\text{ or
}\left(k_{1}=0\text{ and }k_{j}>0\text{ for }j=2,\dots,d\right)\right\\}$ and
$\mathbb{Z}^{d}_{-}\coloneqq-\mathbb{Z}^{d}_{+}$. Then
$\left\\{e_{k}:k\in\mathbb{Z}_{0}^{d}\right\\}$ constitute a complete
orthonormal system for the Hilbert space
$H=L^{2}_{0}\left(\mathbb{T}^{d};\mathbb{R}\right)\coloneqq\left\\{f\in
L^{2}\left(\mathbb{T}^{d};\mathbb{R}\right):\int_{\mathbb{T}^{d}}f\left(x\right)dx=0\right\\},$
where of course
$\mathbb{Z}_{0}^{d}\coloneqq\mathbb{Z}^{d}\setminus\left\\{0\right\\}.$ In
particular, for every $f\in H$, we have
$f=\sum_{k\in\mathbb{Z}^{d}_{0}}\hat{f}_{k}e_{k},\quad\hat{f}_{k}\coloneqq\int_{\mathbb{T}^{d}}f\left(x\right)e_{k}\left(x\right)\,dx,\quad
k\in\mathbb{Z}^{d}_{0}.$
We first introduce the Sobolev spaces
$W_{0}^{\beta,2}\left(\mathbb{T}^{d}\right)\coloneqq\left\\{f\in
H:\sum_{k\in\mathbb{Z}^{d}_{0}}\left|k\right|^{2\beta}\hat{f}_{k}^{2}<\infty\right\\},\quad\left\lVert
f\right\rVert_{W_{0}^{\beta,2}}^{2}\coloneqq\sum_{k\in\mathbb{Z}^{d}_{0}}\left|k\right|^{2\beta}\hat{f}_{k}^{2},$
and then define the linear operator $A$ as follows:
$A\colon W_{0}^{2,2}\left(\mathbb{T}^{d}\right)\to H\quad\text{ such that
}\quad Af=\Delta
f=-\left(2\pi\right)^{2}\sum_{k\in\mathbb{Z}^{d}_{0}}\left|k\right|^{2}\hat{f}_{k}e_{k}.$
In particular, the eigenvalues of $A$ corresponding to $e_{k}$ are
$-\lambda_{k}=-\left(2\pi\right)^{2}\left|k\right|^{2}$, hence $A$ is
unbounded and negative definite; it is selfadjoint as well. Now we
analyze Hypothesis (i) for two specifications of the linear, bounded, positive
semidefinite operator $Q\colon H\to H$.
* Let $Q=\text{Id}$. Then $\sigma_{k}=1,\,k\in\mathbb{Z}^{d}_{0}$, and (i) reads
$\frac{1}{\left(2\pi\right)^{2r/\alpha}}\sum_{k\in\mathbb{Z}^{d}_{0}}\frac{1}{\left|k\right|^{2r/\alpha}}<\infty\quad\text{for
some }r\in\left(0,\alpha\right),$
which is satisfied if and only if $d=1$. Hence the stochastic convolution is
defined only in dimension $d=1$.
* Set $Q=Q_{\eta}=\left(-\Delta\right)^{-\eta}$ for $\eta>0$, the negative
fractional power of the Laplacian, defined as an operator $Q_{\eta}\colon H\to
H$ such that
$Q_{\eta}f=\frac{1}{\left(2\pi\right)^{2\eta}}\sum_{k\in\mathbb{Z}^{d}_{0}}\frac{1}{\left|k\right|^{2\eta}}\hat{f}_{k}e_{k},\quad
f\in H.$
In this case the convergence of the infinite sum in (i) amounts to requiring
$\eta>\left(\frac{d}{2r}-\frac{1}{\alpha}\right)\vee 0$. Since $r$ is chosen
freely in the interval $\left(0,\alpha\right)$, Hypothesis (i) is satisfied if
and only if
$\eta>\left(\frac{d-2}{2\alpha}\right)\vee 0.$ (8)
This fact can be interpreted as follows: the higher the dimension $d$, the
weaker the effect of the noise on the high Fourier modes needs to be in order
to have the well–posedness of the stochastic convolution.
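The thresholds appearing in this example admit a mechanical cross–check. The short sketch below only re-evaluates the summability exponents (the generic term of the sum in (i) is $\left|k\right|^{-2r\left(\eta+1/\alpha\right)}$, summable over $\mathbb{Z}^{d}_{0}$ if and only if the exponent exceeds $d$) and confirms that optimizing over $r\in\left(0,\alpha\right)$ reproduces (8); the sampled parameter values are arbitrary.

```python
import numpy as np

# Hypothesis (i) for Q_eta: summable iff 2 r (eta + 1/alpha) > d for some
# r in (0, alpha); the supremum over the open interval gives the strict test below.
def hyp_i(d, alpha, eta):
    return 2 * alpha * (eta + 1.0 / alpha) > d

for d in (1, 2, 3):
    for alpha in (0.55, 0.75, 0.95):
        threshold = max((d - 2) / (2 * alpha), 0.0)   # the bound in (8)
        etas = np.linspace(0.01, 2.0, 400)            # eta > 0 by assumption
        agree = all(hyp_i(d, alpha, e) == (e > threshold) for e in etas)
        print(f"d={d}, alpha={alpha}: (i) iff eta > {threshold:.3f}; matches (8): {agree}")
```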
## 3 Smoothing effect of the Markov Transition Semigroup
Let us introduce the Markov transition semigroup $R=\left(R_{t}\right)_{t}$
associated with the OU–processes $\left(Z^{x}\right)_{x\in H}$, which is given
by
$R_{t}\phi\left(x\right)\coloneqq\mathbb{E}\left[\phi\left(Z^{x}_{t}\right)\right],\quad
x\in H,\,\phi\in\mathcal{B}_{b}\left(H\right),\,t\geq 0,$
where $\mathcal{B}_{b}\left(H\right)$ is the space of bounded, real–valued,
Borel–measurable functions in $H$. Evidently, each $R_{t}$ is linear and
bounded from $C_{b}\left(H\right)$ into itself and $R_{0}$ is the identity.
Our aim is to prove that, under suitable conditions, the operator $R_{t}$ has
a smoothing effect for every $t>0$. Specifically, given a function $\phi\in
\mathcal{B}_{b}\left(H\right)$, in the case $\alpha\in\left(\frac{1}{2},1\right)$ we are
going to show that $R_{t}\phi\in C^{1}_{b}\left(H\right)$ and that the
following gradient estimate holds:
$\sup_{x\in H}\left\lVert\nabla
R_{t}\phi\left(x\right)\right\rVert_{H}\leq\frac{C}{t^{{\gamma}}}\left\lVert\phi\right\rVert_{\infty}\quad\text{for
every $t>0$, for some }0<{\gamma}<1,\,C>0.$ (9)
### 3.1 Finite dimensional case $H=\mathbb{R}^{N}$
Let $H=\mathbb{R}^{N}$ and
$W^{N}=\left[\begin{matrix}\beta^{1}&\cdots&\beta^{N}\end{matrix}\right]^{T}$.
We start by presenting a theorem which allows us to obtain an original derivative
formula for the semigroup corresponding to the finite–dimensional OU processes
$Z_{t}^{\ell}\left(x\right)$. They are defined as the unique, càdlàg solutions
of the linear SDEs
$dZ^{\ell}_{t}\left(x\right)=AZ^{\ell}_{t}\left(x\right)dt+\sqrt{Q}\,dW^{N}_{\ell_{t}},\,Z^{\ell}_{0}\left(x\right)=x,$
and can be expressed by the variation of constants formula as follows:
$Z^{\ell}_{t}\left(x\right)=e^{tA}x+\int_{0}^{t}e^{\left(t-s\right)A}\sqrt{Q}\,dW^{N}_{\ell_{s}},\quad
t\geq 0,\,\mathbb{P}-\text{a.s.},$ (10)
where $x\in\mathbb{R}^{N}$ and $\ell\colon\mathbb{R}_{+}\to\mathbb{R}_{+}$ is
an increasing, càdlàg function such that $\ell_{0}=0$ and $\ell_{t}>0$ for
every positive $t$: the set of functions with these properties will be denoted
by $\mathbb{S}$. Note that, for every $\ell\in\mathbb{S}$,
$W^{N}_{\ell}=\left(W^{N}_{\ell_{t}}\right)_{t}$ is a càdlàg martingale with
respect to the filtration $\left(\mathcal{F}^{N}_{\ell_{t}}\right)_{t}$, where
$\left(\mathcal{F}^{N}_{t}\right)_{t}$ is the minimal augmented filtration
generated by $W^{N}$. Analogously, for every $\ell\in\mathbb{S}$, we introduce
the filtrations
$\mathbb{F}^{n}_{\ell}=\left(\mathcal{F}^{\beta^{n}}_{\ell_{t}}\right)_{t},\,n\in\mathbb{N}$,
and observe that $\beta^{n}_{\ell}=\left(\beta^{n}_{\ell_{t}}\right)_{t}$ is a
càdlàg, $\mathbb{F}^{n}_{\ell}$–martingale. The proof of this theorem is
essentially based on the deterministic time–change procedure described by
Zhang in [15, Section $2$], but exploits the linear nature of our setting to
avoid the application of the _Bismut–Elworthy–Li formula_ (see, e.g., [3,
Proposition $8.21$]). For the sake of completeness we report its main
steps.
###### Theorem 4.
Let $t>0,\,\phi\in C_{b}\left(\mathbb{R}^{N}\right),\,\ell\in\mathbb{S}$ and
assume that $\sigma_{n}^{2}>0,\,n=1,\dots,N$. Then the function
$\mathbb{E}\left[\phi\left(Z^{\ell}_{t}\left(\cdot\right)\right)\right]$ is
differentiable at any point $x\in\mathbb{R}^{N}$ in every direction
$h\in\mathbb{R}^{N}$, and
$\left\langle\nabla\mathbb{E}\left[\phi\left(Z^{\ell}_{t}\left(x\right)\right)\right],h\right\rangle=\mathbb{E}\left[\phi\left(Z_{t}^{\ell}\left(x\right)\right)\left(\sum_{n=1}^{N}\frac{1}{\sigma_{n}}\frac{e^{-\lambda_{n}t}\left\langle
h,e_{n}\right\rangle}{\int_{0}^{t}e^{-2\lambda_{n}\left(t-s\right)}d\ell_{s}}\int_{0}^{t}e^{-\lambda_{n}\left(t-s\right)}d\beta_{\ell_{s}}^{n}\right)\right].$
(11)
###### Proof.
For every $\epsilon>0$ denote by
$\ell^{\epsilon}\left(t\right)\coloneqq\frac{1}{\epsilon}\int_{t}^{t+\epsilon}\ell_{s}\,ds,\,t\geq
0$, the _Steklov’s averages_ of $\ell$. They are strictly increasing,
absolutely continuous functions such that, for every $t\geq 0$,
$\ell^{\epsilon}_{t}\downarrow\ell_{t}$ as $\epsilon\downarrow 0$. Let
$\gamma^{\epsilon}\coloneqq\left(\ell^{\epsilon}\right)^{-1}\colon\left[\ell_{0}^{\epsilon},\infty\right)\to\mathbb{R}_{+}$
and define $Z^{\ell^{\epsilon}}\left(x\right)$ as in (10), i.e., for every
$x\in\mathbb{R}^{N}$ the process $Z^{\ell^{\epsilon}}\left(x\right)$ is the
unique solution of the linear SDE
$dZ^{\ell^{\epsilon}}_{t}\left(x\right)=AZ^{\ell^{\epsilon}}_{t}\left(x\right)dt+\sqrt{Q}\,dW^{N}_{\ell^{\epsilon}_{t}},\,Z^{\ell^{\epsilon}}_{0}=x$.
Now introduce the time–shifted processes
$Y_{t}^{\ell^{\epsilon}}\left(x\right)\coloneqq
Z^{\ell^{\epsilon}}_{\gamma^{\epsilon}_{t}}\left(x\right),\,t\geq\ell^{\epsilon}_{0}$,
and observe that
$Y^{\ell^{\epsilon}}_{t}\left(x\right)=x+\int_{\ell_{0}^{\epsilon}}^{t}AY_{s}^{\ell^{\epsilon}}\left(x\right)\dot{\gamma^{\epsilon}_{s}}\,ds+\sqrt{Q}\left(W^{N}_{t}-W^{N}_{\ell_{0}^{\epsilon}}\right),\quad
t\geq\ell_{0}^{\epsilon},\,\mathbb{P}-\text{a.s.},$
which shows that
$dY_{t}^{\ell^{\epsilon}}\left(x\right)=AY_{t}^{\ell^{\epsilon}}\left(x\right)\dot{\gamma^{\epsilon}_{t}}dt+\sqrt{Q}\,dW^{N}_{t},\,Y_{\ell_{0}^{\epsilon}}^{\ell^{\epsilon}}\left(x\right)=x$.
Therefore,
$Y_{t}^{\ell^{\epsilon}}\left(x\right)=e^{A\gamma_{t}^{\epsilon}}x+\int_{\ell_{0}^{\epsilon}}^{t}e^{A\left(\gamma_{t}^{\epsilon}-\gamma_{s}^{\epsilon}\right)}\sqrt{Q}\,dW^{N}_{s},\quad
t\geq\ell_{0}^{\epsilon},\,\mathbb{P}-\text{a.s.}$
In particular, since
$\int_{\ell_{0}^{\epsilon}}^{\ell_{t}^{\epsilon}}e^{2A\left(t-\gamma^{\epsilon}_{s}\right)}Q\,ds=\int_{0}^{t}e^{2A\left(t-s\right)}Q\,d\ell^{\epsilon}_{s}$,
where the integral is to be interpreted entrywise, we have
$Z_{t}^{\ell^{\epsilon}}\left(x\right)=Y_{\ell_{t}^{\epsilon}}^{\ell^{\epsilon}}\left(x\right)\sim\mathcal{N}\left(e^{At}x,\int_{0}^{t}e^{2A\left(t-s\right)}Q\,d\ell^{\epsilon}_{s}\right).$
At this point, we fix a generic $t>0,\,x\in\mathbb{R}^{N}$ and use [15,
Equation ($2.6$)] (it is just an application of the _Gronwall lemma_) to get the
convergence, in the $L^{2}$–sense, of
$Z_{t}^{\ell^{\epsilon}}\left(x\right)\to Z_{t}^{\ell}\left(x\right)$ as
$\epsilon\downarrow 0$. Moreover, recalling that
$\ell^{\epsilon}_{t}\downarrow\ell_{t}$ as $\epsilon\downarrow 0$, we invoke
_Helly’s second theorem_ (see [10, Theorem $7.3$]) to get
$\int_{0}^{t}e^{2A\left(t-s\right)}Q\,d\ell^{\epsilon}_{s}\to\int_{0}^{t}e^{2A\left(t-s\right)}Q\,d\ell_{s}$
as $\epsilon\downarrow 0$. Whence,
$Z_{t}^{\ell}\left(x\right)\sim\mathcal{N}\left(e^{At}x,\int_{0}^{t}e^{2A\left(t-s\right)}Q\,d\ell_{s}\right).$
(12)
If we take $\phi\in C_{b}\left(\mathbb{R}^{N}\right)$, an explicit computation
based on differentiating the normal density function yields, for
every direction $h\in\mathbb{R}^{N}$,
$\left\langle\nabla\mathbb{E}\left[\phi\left(Z^{\ell}_{t}\left(x\right)\right)\right],h\right\rangle=\mathbb{E}\left[\phi\left(Z_{t}^{\ell}\left(x\right)\right)\left\langle\left(\int_{0}^{t}e^{2A\left(t-s\right)}Q\,d\ell_{s}\right)^{-1}\left(\int_{0}^{t}e^{A\left(t-s\right)}\sqrt{Q}\,dW^{N}_{\ell_{s}}\right),e^{tA}h\right\rangle\right],$
which coincides with (11) upon expanding the notation. ∎
###### Remark 3.
The previous proof does not need the continuity of the function $\phi$.
Therefore, _Theorem_ 4 holds true for every
$\phi\in\mathcal{B}_{b}\left(\mathbb{R}^{N}\right)$.
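A concrete instance of (11) is obtained for $N=1$ with the deterministic clock $\ell_{s}=s$ and $\phi=\cos$, for which the left–hand side is available in closed form through (12). The Monte Carlo sketch below, with hypothetical parameter values, compares the two sides.

```python
import numpy as np

# Check of (11) for N = 1 with the deterministic clock ell_s = s and phi = cos.
# Here int_0^t e^{-2 lam (t-s)} d ell_s = v, and (12) gives the closed form
# E[cos(Z_t(x))] = cos(e^{-lam t} x) exp(-sigma^2 v / 2).
rng = np.random.default_rng(3)
lam, sigma, t, x, n = 1.5, 0.8, 1.0, 0.3, 1_000_000

v = (1 - np.exp(-2 * lam * t)) / (2 * lam)          # variance of the Wiener integral
I = np.sqrt(v) * rng.standard_normal(n)             # int_0^t e^{-lam(t-s)} dbeta_s
Z = np.exp(-lam * t) * x + sigma * I                # Z_t^ell(x)

rhs = np.mean(np.cos(Z) * np.exp(-lam * t) * I / (sigma * v))   # RHS of (11), h = 1
lhs = -np.exp(-lam * t) * np.sin(np.exp(-lam * t) * x) * np.exp(-sigma ** 2 * v / 2)
print(f"Monte Carlo RHS of (11): {rhs:.5f};  exact derivative: {lhs:.5f}")
```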
Now we investigate the subordinated Brownian motion case. The intuition behind
the argument is to condition with respect to the $\sigma$–algebra
$\mathcal{F}^{L}$, so that it is possible to apply the deterministic
time–shift result we have just obtained in Theorem 4 upon changing the
reference probability space. Let us denote by $\mathbb{W}$ the space of
continuous functions from $\mathbb{R}_{+}$ to $\mathbb{R}^{N}$ vanishing at
$0$ and endow it with the Borel $\sigma$–algebra
$\mathcal{B}\left(\mathbb{W}\right)$ associated with the topology of locally
uniform convergence. The pushforward probability measure generated by
$W^{N}\left(\cdot\right)\colon\left(\Omega,\mathcal{F},\mathbb{P}\right)\to\left(\mathbb{W},\mathcal{B}\left(\mathbb{W}\right)\right)$
is denoted by $\mathbb{P}^{\mathbb{W}}$ and makes the canonical process
$\mathfrak{x}=\left(x_{t}\right)_{t}$ a Brownian motion, where by definition
$x_{t}\left(w\right)\coloneqq w_{t},\quad w\in\mathbb{W},\,t\geq 0.$
We work with the usual completion
$\left(\mathbb{W},\overline{\mathcal{B}\left(\mathbb{W}\right)},\overline{\mathbb{P}^{\mathbb{W}}}\right)$
of this probability space: by [8, Theorem $7.9$], $\mathfrak{x}$ is still a
Brownian motion with respect to its minimal augmented filtration, which in
turn satisfies the usual hypotheses and is denoted by
$\mathbb{F}^{\mathbb{W}}$. In particular, note that the completeness of the
space $\left(\Omega,\mathcal{F},\mathbb{P}\right)$ implies the measurability
of
$W^{N}\left(\cdot\right)\colon\left(\Omega,\mathcal{F},\mathbb{P}\right)\to\left(\mathbb{W},\overline{\mathcal{B}\left(\mathbb{W}\right)}\right)$
and the fact that $\overline{\mathbb{P}^{\mathbb{W}}}$ is still the
pushforward probability measure generated by $W^{N}\left(\cdot\right)$.
Obviously, $W^{N}\left(\cdot\right)$ is independent of $\mathcal{F}^{L}$: as
a consequence, a regular conditional distribution of $W^{N}\left(\cdot\right)$
given $\mathcal{F}^{L}$ is the probability kernel
$\mathbb{P}\left(W^{N}\\!\left(\cdot\right)\in\cdot\big{|}\mathcal{F}^{L}\right)\colon\Omega\times\overline{\mathcal{B}\left(\mathbb{W}\right)}\to\left[0,1\right]\,\,\text{
such that }\,\,\mathbb{P}\left(W^{N}\\!\left(\cdot\right)\in
A\big{|}\mathcal{F}^{L}\right)\left(\omega\right)\coloneqq\overline{\mathbb{P}^{\mathbb{W}}}\left(A\right),\,\omega\in{\Omega},\,A\in\overline{\mathcal{B}\left(\mathbb{W}\right)}.$
(13)
As regards the space $\mathbb{S}$, for every $t\geq 0$ we introduce the map
$y_{t}\colon\mathbb{S}\to\mathbb{R}$ defined by
$y_{t}\left(\ell\right)\coloneqq\ell_{t},\,\ell\in\mathbb{S}$, and consider
the $\sigma$–algebra
$\mathcal{F}^{\mathbb{S}}\coloneqq\sigma\left(y^{-1}_{t}\left(B\right),\,B\in\mathcal{B}\left(\mathbb{R}\right),\,t\geq
0\right)$. Since
$L\left(\cdot\right)\colon\left(\Omega,\mathcal{F}^{L},\mathbb{P}\right)\to\left(\mathbb{S},\mathcal{F}^{\mathbb{S}}\right)$
is measurable, we can construct the pushforward probability measure
$\mathbb{P}^{\mathbb{S}}$ on
$\left(\mathbb{S},\mathcal{F}^{\mathbb{S}}\right)$. At this point we take into
account the product space
$\left(\mathbb{W}\times\mathbb{S},\overline{\mathcal{B}\left(\mathbb{W}\right)}\otimes\mathcal{F}^{\mathbb{S}},\overline{\mathbb{P}^{\mathbb{W}}}\otimes\mathbb{P}^{\mathbb{S}}\right)$
and note that, thanks to the mutual independence of $W^{N}\left(\cdot\right)$
and $L\left(\cdot\right)$, the product measure
$\overline{\mathbb{P}^{\mathbb{W}}}\otimes\mathbb{P}^{\mathbb{S}}$ is indeed
the pushforward probability measure generated by
$\psi\colon\Omega\to\mathbb{W}\times\mathbb{S},\,\psi\left(\omega\right)\coloneqq\left(W^{N}_{\cdot}\left(\omega\right),L_{\cdot}\left(\omega\right)\right)$.
Finally, we take the process $z=\left(z_{t}\right)_{t}$ defined by
$z_{t}\left(w,\ell\right)\coloneqq
w_{\ell_{t}},\quad\left(w,\ell\right)\in\mathbb{W}\times\mathbb{S},\,t\geq 0,$
and denote by $\mathbb{F}^{z}=\left(\mathcal{F}^{z}_{t}\right)_{t}$ its
natural filtration. By construction, $W^{N}_{L_{t}}=z_{t}\circ\psi$ for every
$t\geq 0$. Putting together all these properties, we can conclude that $z$ is
a Lévy process with respect to the right–continuous filtration
$\mathbb{F}^{z}_{+}=\left(\mathcal{F}^{z}_{t+}\right)_{t}$, where
$\mathcal{F}^{z}_{t+}\coloneqq\bigcap_{\epsilon>0}\mathcal{F}^{z}_{t+\epsilon},\quad
t\geq 0.$
Endowing the product space with this filtration, the stochastic integral of
suitable processes with respect to $z$ is well defined. Let us consider then a
deterministic, continuous, bounded, $\mathbb{R}^{N}$–valued process
$\xi=\left(\xi_{t}\right)_{t}$: weaker assumptions could be imposed on it, but
these are sufficient in our framework. Clearly the subordinated Brownian motion
$W^{N}_{L}$ is adapted with respect to the right–continuous filtration
$\psi^{-1}\left(\mathbb{F}^{z}_{+}\right)$, therefore the usual rules of
change of probability space (see, e.g., [7, §X-$2$]) entail
$\int_{0}^{t}\xi_{s}\cdot dW^{N}_{L_{s}}=\left(\int_{0}^{t}\xi_{s}\cdot
dz_{s}\right)\circ\psi,\quad t\geq 0,\,\mathbb{P}-\text{a.s.}$ (14)
We conclude this preliminary discussion with an important substitution
formula.
###### Lemma 5.
Let $\xi=\left(\xi_{t}\right)_{t}$ be a deterministic, continuous, bounded,
$\mathbb{R}^{N}$–valued process. Then, for any $t>0$,
$\left(\int_{0}^{t}\xi_{s}\cdot
dz_{s}\right)\left(\cdot,\ell\right)=\int_{0}^{t}\xi_{s}\cdot
dx_{\ell_{s}}\quad\overline{\mathbb{P}^{\mathbb{W}}}-\text{a.s., for
}\mathbb{P}^{\mathbb{S}}-\text{a.e. }\ell\in\mathbb{S},$
where the integral on the right–hand side of the equality is intended in the
sense of stochastic integrals by càdlàg martingales on the filtered
probability space
$\left(\mathbb{W},\overline{\mathcal{B}\left(\mathbb{W}\right)},\overline{\mathbb{P}^{\mathbb{W}}};\mathbb{F}^{\mathbb{W}}_{\ell}\right).$
###### Proof.
Fix $t>0$ and introduce the elementary, predictable (with respect to both
$\mathbb{F}^{z}_{+}$ and $\mathbb{F}^{\mathbb{W}}_{\ell},\,\ell\in\mathbb{S}$)
processes
$\xi^{m}_{s}\coloneqq\xi_{0}1_{\left\\{0\right\\}}\left(s\right)+\sum_{i=0}^{m-1}\xi_{t_{i}}1_{]t_{i},t_{i+1}]}\left(s\right),$
where $t_{i}=\frac{t}{m}i,\,i=0,\dots,m$. The continuity of $\xi$ implies that
$\xi^{m}\to\xi$ pointwise; furthermore, since $\xi$ is bounded, the sequence
$\left(\xi^{m}\right)_{m}$ is uniformly bounded. This implies that
$\int_{0}^{t}\xi_{s}\cdot
dz_{s}=\left(\overline{\mathbb{P}^{\mathbb{W}}}\otimes\mathbb{P}^{\mathbb{S}}\right)\\!-\lim_{m\to\infty}\int_{0}^{t}\xi^{m}_{s}\cdot
dz_{s}.$
Now convergence in probability implies almost–sure convergence along a
subsequence, hence we can say that for $\mathbb{P}^{\mathbb{S}}-$a.e.
$\ell\in\mathbb{S}$,
$\left(\int_{0}^{t}\xi^{m_{k}}_{s}\cdot
dz_{s}\right)\left(\cdot,\ell\right)\underset{k\to\infty}{\longrightarrow}\left(\int_{0}^{t}\xi_{s}\cdot
dz_{s}\right)\left(\cdot,\ell\right)\quad\overline{\mathbb{P}^{\mathbb{W}}}-\text{a.s.}$
(15)
With the same argument as above, we have
$\int_{0}^{t}\xi_{s}\cdot
dx_{\ell_{s}}=\overline{\mathbb{P}^{\mathbb{W}}}\\!-\lim_{k\to\infty}\int_{0}^{t}\xi^{m_{k}}_{s}\cdot
dx_{\ell_{s}}\quad\text{for every $\ell\in\mathbb{S}$}.$ (16)
On the other hand, by the very definition of stochastic integral it is
immediate to notice that, for every
$\left(w,\ell\right)\in\mathbb{W}\times\mathbb{S},$
$\left(\int_{0}^{t}\xi^{m_{k}}_{s}\cdot
dz_{s}\right)\left(w,\ell\right)=\sum_{i=0}^{{m_{k}}-1}\xi_{t_{i}}\cdot\left(z_{t_{i+1}}-z_{t_{i}}\right)\left(w,\ell\right)=\sum_{i=0}^{{m_{k}}-1}\xi_{t_{i}}\cdot\left(x_{\ell_{t_{i+1}}}-x_{\ell_{t_{i}}}\right)\left(w\right)=\left(\int_{0}^{t}\xi^{m_{k}}_{s}\cdot
dx_{\ell_{s}}\right)\left(w\right).$
Combining the last equation with (15) and (16) we get
$\left(\int_{0}^{t}\xi_{s}\cdot
dz_{s}\right)\left(\cdot,\ell\right)=\int_{0}^{t}\xi_{s}\cdot
dx_{\ell_{s}}\quad\overline{\mathbb{P}^{\mathbb{W}}}-\text{a.s., for
}\mathbb{P}^{\mathbb{S}}-\text{a.e. }\ell\in\mathbb{S},$
proving the claim of the lemma. ∎
A useful result due to [2, Equation $\left(14\right)$] shows that there exists
a constant $c>0$ such that, for every $t>0$, the density $\eta_{t}$ of $L_{t}$
satisfies
$\eta_{t}\left(s\right)\leq c\,t\,s^{-1-\alpha}e^{-ts^{-\alpha}},\quad s>0.$
As a consequence, for every $p\geq 1$ we have that ${L_{t}}^{-1}\in L^{p}$,
with
$\mathbb{E}\left[\frac{1}{L_{t}^{p}}\right]^{1/p}\leq
c_{\alpha,p}\frac{1}{t^{1/\alpha}}\quad\text{for some }c_{\alpha,p}>0.$ (17)
We are now in a position to obtain the derivative formula for the Markov
transition semigroup, together with an estimate on its gradient, in the
subordinated Brownian motion case.
###### Theorem 6.
Let $t>0,\,\phi\in C_{b}\left(\mathbb{R}^{N}\right)$ and assume that
$\sigma_{n}^{2}>0,\,n=1,\dots,N$. Then the function
$\mathbb{E}\left[\phi\left(Z^{\cdot}_{t}\right)\right]$ is differentiable at
any point $x\in\mathbb{R}^{N}$ in every direction $h\in\mathbb{R}^{N}$, and
$\left\langle\nabla\mathbb{E}\left[\phi\left(Z_{t}^{x}\right)\right],h\right\rangle=\mathbb{E}\left[\phi\left(Z_{t}^{x}\right)\left(\sum_{n=1}^{N}\frac{1}{\sigma_{n}}\frac{e^{-\lambda_{n}t}\left\langle
h,e_{n}\right\rangle}{\int_{0}^{t}e^{-2\lambda_{n}\left(t-s\right)}dL_{s}}\int_{0}^{t}e^{-\lambda_{n}\left(t-s\right)}d\beta_{L_{s}}^{n}\right)\right].$
(18)
In addition, there exists $c_{\alpha}>0$ such that the following gradient
estimate holds:
$\sup_{x\in\mathbb{R}^{N}}\left|\nabla\mathbb{E}\left[\phi\left(Z_{t}^{x}\right)\right]\right|\leq
c_{\alpha}\left\lVert\phi\right\rVert_{\infty}\sup_{n=1,\dots,N}\left(\frac{1}{\sigma_{n}}\sqrt[2\alpha]{\frac{2\alpha\lambda_{n}}{1-e^{-2\alpha\lambda_{n}t}}}e^{-\lambda_{n}t}\right)\quad\text{for
every }t>0.$ (19)
###### Proof.
Fix $t>0$ and $\phi\in C_{b}\left(\mathbb{R}^{N}\right)$. In what follows, we
denote by $\mathbb{E}^{\mathbb{W}}\left[\cdot\right]$ the expected value of a
random variable defined on
$\left(\mathbb{W},\overline{\mathcal{B}\left(\mathbb{W}\right)},\overline{\mathbb{P}^{\mathbb{W}}}\right)$.
Bearing in mind that
$Z_{t}^{x}=e^{tA}x+\int_{0}^{t}e^{\left(t-s\right)A}\sqrt{Q}\,dW^{N}_{L_{s}}$,
by (14) we have, for every $x\in\mathbb{R}^{N}$,
$Z_{t}^{x}=\left(e^{tA}x+\int_{0}^{t}e^{\left(t-s\right)A}\sqrt{Q}\,dz_{s}\right)\circ\psi=\left(e^{tA}x+\int_{0}^{t}e^{\left(t-s\right)A}\sqrt{Q}\,dz_{s}\right)\left(W^{N}\left(\cdot\right),L\left(\cdot\right)\right)\quad\mathbb{P}-\text{a.s.}$
Therefore recalling the expression (13) for the regular conditional
distribution
$\mathbb{P}\left(W^{N}\\!\left(\cdot\right)\in\cdot\big{|}\mathcal{F}^{L}\right)$,
we apply the _disintegration formula_ for the conditional expectation to write
$\displaystyle\mathbb{E}\left[\phi\left(Z^{x}_{t}\right)\right]$
$\displaystyle=\mathbb{E}\left[\mathbb{E}\left[\phi\left(Z^{x}_{t}\right)\bigg{|}\mathcal{F}^{L}\right]\right]=\mathbb{E}\left[\int_{\mathbb{W}}\phi\left(\left(e^{tA}x+\int_{0}^{t}e^{\left(t-s\right)A}\sqrt{Q}\,dz_{s}\right)\left(w,L\left(\cdot\right)\right)\right)\overline{\mathbb{P}^{\mathbb{W}}}\left(dw\right)\right]$
$\displaystyle=\mathbb{E}\left[{\left.\kern-1.2pt\mathbb{E}^{\mathbb{W}}\left[\phi\left(e^{tA}x+\int_{0}^{t}e^{\left(t-s\right)A}\sqrt{Q}\,dx_{\ell_{s}}\right)\right]\vphantom{\big{|}}\right|_{\ell=L\left(\cdot\right)}}\right]=\mathbb{E}\left[{\left.\kern-1.2pt\mathbb{E}^{\mathbb{W}}\left[\phi\left(Z^{\ell}_{t}\left(x\right)\right)\right]\vphantom{\big{|}}\right|_{\ell=L\left(\cdot\right)}}\right],\quad
x\in\mathbb{R}^{N},$
where in the second–to–last equality we use Lemma 5 and the fact that
$\mathbb{P}^{\mathbb{S}}$ is the pushforward probability measure generated by
$L\left(\cdot\right)$ on $\mathbb{S}$. Take $x\in\mathbb{R}^{N}$ and a
direction $h\in\mathbb{R}^{N}$; if we can justify differentiation under the
expected value, an application of (11) immediately leads to (18), as the
following computations based on the previous argument show:
$\displaystyle\left\langle\nabla\mathbb{E}\left[\phi\left(Z_{t}^{x}\right)\right],h\right\rangle=\mathbb{E}\left[{\left.\kern-1.2pt\mathbb{E}^{\mathbb{W}}\left[\phi\left(Z_{t}^{\ell}\left(x\right)\right)\left(\sum_{n=1}^{N}\frac{1}{\sigma_{n}}\frac{e^{-\lambda_{n}t}\left\langle
h,e_{n}\right\rangle}{\int_{0}^{t}e^{-2\lambda_{n}\left(t-s\right)}d\ell_{s}}\int_{0}^{t}e^{-\lambda_{n}\left(t-s\right)}dx_{\ell_{s}}^{n}\right)\right]\vphantom{\big{|}}\right|_{\ell=L\left(\cdot\right)}}\right]$
(20)
$\displaystyle\quad\\!=\\!\mathbb{E}\\!\left[\sum_{n=1}^{N}\\!\frac{1}{\sigma_{n}}\frac{e^{-\lambda_{n}t}\left\langle
h,e_{n}\right\rangle}{\int_{0}^{t}\\!e^{-2\lambda_{n}\left(t-s\right)}dL_{s}}\\!\left\\{\\!\int_{\mathbb{W}}\\!\left(\\!\phi\\!\left(\\!e^{tA}x\\!+\\!\int_{0}^{t}\\!e^{\left(t-s\right)A}\sqrt{Q}\,dz_{s}\right)\\!\\!\times\\!\\!\left(\int_{0}^{t}\\!e^{-\lambda_{n}\left(t-s\right)}dz_{s}^{n}\right)\\!\\!\right)\\!\left(w,L\left(\cdot\right)\right)\overline{\mathbb{P}^{\mathbb{W}}}\\!\left(dw\right)\right\\}\\!\right]$
$\displaystyle\quad\\!=\mathbb{E}\left[\mathbb{E}\left[\phi\left(Z_{t}^{x}\right)\left(\sum_{n=1}^{N}\frac{1}{\sigma_{n}}\frac{e^{-\lambda_{n}t}\left\langle
h,e_{n}\right\rangle}{\int_{0}^{t}e^{-2\lambda_{n}\left(t-s\right)}dL_{s}}\int_{0}^{t}e^{-\lambda_{n}\left(t-s\right)}d\beta_{L_{s}}^{n}\right)\Bigg{|}\mathcal{F}^{L}\right]\right].$
Indeed, such a differentiation is legitimate, since _Jensen’s inequality_ and (12)
entail
$\displaystyle\left|{\left.\kern-1.2pt\mathbb{E}^{\mathbb{W}}\left[\phi\left(Z_{t}^{\ell}\left(x\right)\right)\left(\sum_{n=1}^{N}\frac{1}{\sigma_{n}}\frac{e^{-\lambda_{n}t}\left\langle
h,e_{n}\right\rangle}{\int_{0}^{t}e^{-2\lambda_{n}\left(t-s\right)}d\ell_{s}}\int_{0}^{t}e^{-\lambda_{n}\left(t-s\right)}dx_{\ell_{s}}^{n}\right)\right]\vphantom{\big{|}}\right|_{\ell=L\left(\cdot\right)}}\right|^{2}\leq\left\lVert\phi\right\rVert_{\infty}^{2}\sum_{n=1}^{N}\frac{1}{\sigma_{n}^{2}}\frac{e^{-2\lambda_{n}t}\left|\left\langle
h,e_{n}\right\rangle\right|^{2}}{\int_{0}^{t}e^{-2\lambda_{n}\left(t-s\right)}dL_{s}},$
(21)
where the right–hand side does not depend on $x$ and is integrable. In
fact, for every $n=1,\dots,N$, recalling that
$L_{1}\sim\text{stable}\left(\alpha,1,\bar{c}^{1/\alpha},0\right)$ by (1), we
have
$\int_{0}^{t}e^{-2\lambda_{n}\left(t-s\right)}dL_{s}\sim\text{stable}\left(\alpha,1,\bar{c}^{\frac{1}{\alpha}}\left(\frac{1-e^{-2\alpha\lambda_{n}t}}{2\alpha\lambda_{n}}\right)^{1/\alpha},0\right)\Longrightarrow\int_{0}^{t}e^{-2\lambda_{n}\left(t-s\right)}dL_{s}\sim\left(\frac{1-e^{-2\alpha\lambda_{n}t}}{2\alpha\lambda_{n}}\right)^{\frac{1}{\alpha}}L_{1},$
hence by (17) there exists $c_{\alpha}>0$ such that
$\mathbb{E}\left[\sum_{n=1}^{N}\frac{1}{\sigma_{n}^{2}}\frac{e^{-2\lambda_{n}t}\left|\left\langle
h,e_{n}\right\rangle\right|^{2}}{\int_{0}^{t}e^{-2\lambda_{n}\left(t-s\right)}dL_{s}}\right]\leq\mathbb{E}\left[\frac{1}{L_{1}}\right]\left(\sum_{n=1}^{N}\frac{e^{-2\lambda_{n}t}}{\sigma_{n}^{2}}\left(\frac{2\alpha\lambda_{n}}{1-e^{-2\alpha\lambda_{n}t}}\right)^{\frac{1}{\alpha}}\left|\left\langle
h,e_{n}\right\rangle\right|^{2}\right)\\\ \leq
c_{\alpha}\sum_{n=1}^{N}\frac{e^{-2\lambda_{n}t}}{\sigma_{n}^{2}}\left(\frac{2\alpha\lambda_{n}}{1-e^{-2\alpha\lambda_{n}t}}\right)^{\frac{1}{\alpha}}\left|\left\langle
h,e_{n}\right\rangle\right|^{2}.$ (22)
Concerning the gradient estimate, it is sufficient to combine (20), (21) and
(22) and to recall that the $L^{1}$–norm of a random variable is bounded by
its $L^{2}$–norm to get
$\left|\left\langle\nabla\mathbb{E}\left[\phi\left(Z_{t}^{x}\right)\right],h\right\rangle\right|\leq
c_{\alpha}\left\lVert\phi\right\rVert_{\infty}\sup_{n=1,\dots,N}\left(\frac{1}{\sigma_{n}}\sqrt[2\alpha]{\frac{2\alpha\lambda_{n}}{1-e^{-2\alpha\lambda_{n}t}}}e^{-\lambda_{n}t}\right)\left|h\right|,\quad
x,h\in\mathbb{R}^{N},$
where the constant $c_{\alpha}$ is allowed to be different from the one in
(22). The desired inequality (19) is then recovered taking the $\sup$ for
$\left|h\right|\leq 1$, and the proof is complete. ∎
As in Remark 3, note that Theorem 6 holds true for every
$\phi\in\mathcal{B}_{b}\left(\mathbb{R}^{N}\right)$.
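The same kind of check is possible for the random clock in (18). In the sketch below (again $N=1$ and $\phi=\cos$; Kanter's sampler and the normalization $\mathbb{E}\left[e^{-\lambda L_{t}}\right]=e^{-t\lambda^{\alpha}}$ are illustrative assumptions), the reference value is computed by conditioning on $L$: given the subordinator path, $Z_{t}^{x}$ is Gaussian with mean $e^{-\lambda t}x$ and variance $\sigma^{2}\int_{0}^{t}e^{-2\lambda\left(t-s\right)}\,dL_{s}$.

```python
import numpy as np

# Monte Carlo check of (18) for N = 1 and phi = cos.  Conditionally on L,
# Z_t^x is N(e^{-lam t} x, sigma^2 D) with D = int_0^t e^{-2 lam (t-s)} dL_s,
# so d/dx E[cos(Z_t^x)] = -e^{-lam t} sin(e^{-lam t} x) E[exp(-sigma^2 D / 2)].
rng = np.random.default_rng(4)
alpha, lam, sigma, t, x = 0.75, 1.0, 1.0, 1.0, 0.4
n_paths, n_steps = 100_000, 200
dt = t / n_steps

def pos_stable(size, a):
    """Kanter's representation: E[exp(-u * S)] = exp(-u**a) (an assumption here)."""
    U = rng.uniform(0.0, np.pi, size)
    W = rng.exponential(1.0, size)
    return (np.sin(a * U) / np.sin(U) ** (1.0 / a)
            * (np.sin((1.0 - a) * U) / W) ** ((1.0 - a) / a))

D = np.zeros(n_paths)          # int_0^t e^{-2 lam (t-s)} dL_s
M = np.zeros(n_paths)          # int_0^t e^{-lam (t-s)} dbeta_{L_s}
for k in range(n_steps):
    s = k * dt
    dL = dt ** (1.0 / alpha) * pos_stable(n_paths, alpha)
    D += np.exp(-2 * lam * (t - s)) * dL
    M += np.exp(-lam * (t - s)) * np.sqrt(dL) * rng.standard_normal(n_paths)

Z = np.exp(-lam * t) * x + sigma * M
rhs = np.mean(np.cos(Z) * np.exp(-lam * t) * M / (sigma * D))    # RHS of (18)
lhs = (-np.exp(-lam * t) * np.sin(np.exp(-lam * t) * x)
       * np.mean(np.exp(-sigma ** 2 * D / 2)))
print(f"Monte Carlo RHS of (18): {rhs:.5f};  conditioned exact gradient: {lhs:.5f}")
```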
### 3.2 Infinite dimensional case
In this subsection we analyze the general case where $H$ is infinite
dimensional. Assuming $\sigma_{n}^{2}>0,\,n\in\mathbb{N}$, let us introduce
the following Hypothesis:
$\sup_{n}\left(\frac{1}{\sigma_{n}}\sqrt[2\alpha]{\frac{2\alpha\lambda_{n}}{1-e^{-2\alpha\lambda_{n}t}}}e^{-\lambda_{n}t}\right)\leq
C_{t}\quad\text{ for every }t>0,\,\text{for some function }C_{t}>0.$ (ii)
In this setting, for every $h\in H$ and $t>0$, we can define the real–valued
random variable
$\sum_{n=1}^{\infty}\frac{1}{\sigma_{n}}\frac{e^{-\lambda_{n}t}\left\langle
h,e_{n}\right\rangle}{\int_{0}^{t}e^{-2\lambda_{n}\left(t-s\right)}dL_{s}}\int_{0}^{t}e^{-\lambda_{n}\left(t-s\right)}d\beta_{L_{s}}^{n}\coloneqq
L^{2}-\lim_{N\to\infty}\left(\sum_{n=1}^{N}\frac{1}{\sigma_{n}}\frac{e^{-\lambda_{n}t}\left\langle
h,e_{n}\right\rangle}{\int_{0}^{t}e^{-2\lambda_{n}\left(t-s\right)}dL_{s}}\int_{0}^{t}e^{-\lambda_{n}\left(t-s\right)}d\beta_{L_{s}}^{n}\right).$
Indeed, with the same argument as the one in (22), Hypothesis (ii) yields
$\displaystyle\mathbb{E}\left[\left|\sum_{n=m}^{M}\frac{1}{\sigma_{n}}\frac{e^{-\lambda_{n}t}\left\langle
h,e_{n}\right\rangle}{\int_{0}^{t}e^{-2\lambda_{n}\left(t-s\right)}dL_{s}}\int_{0}^{t}e^{-\lambda_{n}\left(t-s\right)}d\beta_{L_{s}}^{n}\right|^{2}\right]$
$\displaystyle\leq
c_{\alpha}\sum_{n=m}^{M}\frac{e^{-2\lambda_{n}t}}{\sigma_{n}^{2}}\left(\frac{2\alpha\lambda_{n}}{1-e^{-2\alpha\lambda_{n}t}}\right)^{\frac{1}{\alpha}}\left|\left\langle
h,e_{n}\right\rangle\right|^{2}$ $\displaystyle\leq
c_{\alpha}\,C_{t}^{2}\,\left(\sum_{n=m}^{M}\left|\left\langle
h,e_{n}\right\rangle\right|^{2}\right)\underset{m,M\to\infty}{\longrightarrow}0,$
where $c_{\alpha}>0$. In particular,
$\mathbb{E}\left[\left|\sum_{n=1}^{\infty}\frac{1}{\sigma_{n}}\frac{e^{-\lambda_{n}t}\left\langle
h,e_{n}\right\rangle}{\int_{0}^{t}e^{-2\lambda_{n}\left(t-s\right)}dL_{s}}\int_{0}^{t}e^{-\lambda_{n}\left(t-s\right)}d\beta_{L_{s}}^{n}\right|^{2}\right]^{\frac{1}{2}}\leq\sqrt{c_{\alpha}}\,C_{t}\,\left\lVert
h\right\rVert_{H}.$ (23)
Hence the following useful property holds:
$\sum_{n=1}^{\infty}\frac{1}{\sigma_{n}}\frac{e^{-\lambda_{n}t}\left\langle
h_{m},e_{n}\right\rangle}{\int_{0}^{t}e^{-2\lambda_{n}\left(t-s\right)}dL_{s}}\int_{0}^{t}e^{-\lambda_{n}\left(t-s\right)}d\beta_{L_{s}}^{n}\overset{L^{2}}{\longrightarrow}\sum_{n=1}^{\infty}\frac{1}{\sigma_{n}}\frac{e^{-\lambda_{n}t}\left\langle
h,e_{n}\right\rangle}{\int_{0}^{t}e^{-2\lambda_{n}\left(t-s\right)}dL_{s}}\int_{0}^{t}e^{-\lambda_{n}\left(t-s\right)}d\beta_{L_{s}}^{n}\quad\text{as
}h_{m}\to h.$ (24)
At this point we can present the main theorem of the paper.
###### Theorem 7.
Assume $\sigma_{n}^{2}>0,\,n\in\mathbb{N}$, together with Hypotheses (i) and (ii).
Then for every $\phi\in\mathcal{B}_{b}\left(H\right)$ and $t>0$ the function
$R_{t}\phi\in C^{1}_{b}\left(H\right)$ and there exists $c_{\alpha}>0$ such
that
$\sup_{x\in H}\left\lVert\nabla R_{t}\phi\left(x\right)\right\rVert_{H}\leq
c_{\alpha}\,C_{t}\left\lVert\phi\right\rVert_{\infty}\quad\text{for every
}t>0.$ (25)
Moreover, given $\phi\in C_{b}\left(H\right)$ and $t>0$, for every $x,h\in H$
the Gâteaux derivative of $R_{t}\phi$ at $x$ along the direction $h$ is given
by
$\left\langle\nabla
R_{t}\phi\left(x\right),h\right\rangle=\mathbb{E}\left[\phi\left(Z_{t}^{x}\right)\left(\sum_{n=1}^{\infty}\frac{1}{\sigma_{n}}\frac{e^{-\lambda_{n}t}\left\langle
h,e_{n}\right\rangle}{\int_{0}^{t}e^{-2\lambda_{n}\left(t-s\right)}dL_{s}}\int_{0}^{t}e^{-\lambda_{n}\left(t-s\right)}d\beta_{L_{s}}^{n}\right)\right].$
(26)
###### Proof.
Fix $t>0$ and a function $\phi\in C_{b}\left(H\right)$.
We first consider the case $\dim H=N,$ identifying $H$ with $\mathbb{R}^{N}$,
as usual. Evidently (26) coincides with (18) and the map $x\mapsto\nabla
R_{t}\phi\left(x\right)$ is a continuous function from $\mathbb{R}^{N}$ into
itself: this follows from dominated convergence, together with $\phi\in
C_{b}\left(\mathbb{R}^{N}\right)$ and $Z_{t}^{x_{n}}\to Z_{t}^{x}$ a.s. as
$x_{n}\to x$. Moreover, Hypothesis (ii) applied to (19) directly entails (25),
therefore $R_{t}\phi\in C_{b}^{1}\left(\mathbb{R}^{N}\right)$. In order to
pass to infinite dimension it is convenient to write
$\displaystyle R_{t}\phi\left(x+h\right)-R_{t}\phi\left(x\right)$
$\displaystyle=\int_{0}^{1}\left\langle\nabla
R_{t}\phi\Big{(}\left(1-\rho\right)x+\rho\left(x+h\right)\Big{)}\,,\,h\right\rangle
d\rho$ $\displaystyle=\int_{0}^{1}\mathbb{E}\left[\phi\left(Z_{t}^{x+\rho
h}\right)\left(\sum_{n=1}^{N}\frac{1}{\sigma_{n}}\frac{e^{-\lambda_{n}t}\left\langle
h,e_{n}\right\rangle}{\int_{0}^{t}e^{-2\lambda_{n}\left(t-s\right)}dL_{s}}\int_{0}^{t}e^{-\lambda_{n}\left(t-s\right)}d\beta_{L_{s}}^{n}\right)\right]d\rho.$
(27)
We now consider the general case $\dim H=\infty$. Let $\pi_{N}$ be the
projection onto the first $N$ Fourier components and $H_{N}$ be its range. Due
to the diagonal structure of our model, the projections $\pi_{N}Z_{t}^{x}$ of
the OU–process are, $\mathbb{P}-$a.s.,
$\pi_{N}Z^{x}_{t}=\sum_{n=1}^{N}e^{-\lambda_{n}t}\left\langle
x,e_{n}\right\rangle
e_{n}+\sum_{n=1}^{N}\left(\int_{0}^{t}e^{-\lambda_{n}\left(t-s\right)}\sigma_{n}d\beta_{L_{s}}^{n}\right)e_{n},\quad
N\in\mathbb{N}.$
Therefore introducing the operators
$A_{N}\coloneqq{\left.\kern-1.2ptA\vphantom{\big{|}}\right|_{H_{N}}}$ and
$Q_{N}\coloneqq{\left.\kern-1.2ptQ\vphantom{\big{|}}\right|_{H_{N}}}$, which
map $H_{N}$ into itself, we can write
$\pi_{N}Z^{x}_{t}=e^{tA_{N}}\left(\pi_{N}x\right)+\tilde{Z}_{A_{N},Q_{N}}\left(t\right)$:
this shows that such projections are OU–processes in $H_{N}$. Thus, the
dominated convergence theorem together with the expression in (27) and the
continuity of $\phi$ give
$\displaystyle R_{t}\phi\left(x+h\right)-R_{t}\phi\left(x\right)$
$\displaystyle=\lim_{N\to\infty}\mathbb{E}\left[\phi\left(\pi_{N}Z_{t}^{x+h}\right)-\phi\left(\pi_{N}Z_{t}^{x}\right)\right]$
$\displaystyle=\lim_{N\to\infty}\int_{0}^{1}\mathbb{E}\left[\phi\left(\pi_{N}Z_{t}^{x+\rho
h}\right)\left(\sum_{n=1}^{N}\frac{1}{\sigma_{n}}\frac{e^{-\lambda_{n}t}\left\langle
h,e_{n}\right\rangle}{\int_{0}^{t}e^{-2\lambda_{n}\left(t-s\right)}dL_{s}}\int_{0}^{t}e^{-\lambda_{n}\left(t-s\right)}d\beta_{L_{s}}^{n}\right)\right]d\rho$
$\displaystyle=\int_{0}^{1}\mathbb{E}\left[\phi\left(Z_{t}^{x+\rho
h}\right)\left(\sum_{n=1}^{\infty}\frac{1}{\sigma_{n}}\frac{e^{-\lambda_{n}t}\left\langle
h,e_{n}\right\rangle}{\int_{0}^{t}e^{-2\lambda_{n}\left(t-s\right)}dL_{s}}\int_{0}^{t}e^{-\lambda_{n}\left(t-s\right)}d\beta_{L_{s}}^{n}\right)\right]d\rho.$
Now we can define
$D_{t,x}\left(h\right)\coloneqq\mathbb{E}\left[\phi\left(Z_{t}^{x}\right)\left(\sum_{n=1}^{\infty}\frac{1}{\sigma_{n}}\frac{e^{-\lambda_{n}t}\left\langle
h,e_{n}\right\rangle}{\int_{0}^{t}e^{-2\lambda_{n}\left(t-s\right)}dL_{s}}\int_{0}^{t}e^{-\lambda_{n}\left(t-s\right)}d\beta_{L_{s}}^{n}\right)\right]$:
it is the Fréchet differential of $R_{t}\phi$ at $x$ (hence, in particular,
(26) is verified). To see this, it is sufficient to note that the linear
operator $D_{t,x}\left(\cdot\right)$ is continuous by the property in (24) and
to apply Hölder’s inequality, the dominated convergence theorem and (23) to
get, for a positive constant $c_{\alpha}$,
$\displaystyle\left|R_{t}\phi\left(x+h\right)-R_{t}\phi\left(x\right)-D_{t,x}\left(h\right)\right|\leq
c_{\alpha}\,C_{t}\left\lVert
h\right\rVert_{H}\int_{0}^{1}\mathbb{E}\left[\left|\phi\left(Z_{t}^{x+\rho
h}\right)-\phi\left(Z_{t}^{x}\right)\right|^{2}\right]^{1/2}d\rho=o\left(\left\lVert
h\right\rVert_{H}\right).$
The upper bound (25) for the norm of the gradient is then obtained by (23)
from the following straightforward computation:
$\left\lVert\nabla R_{t}\phi\left(x\right)\right\rVert_{H}=\sup_{\left\lVert
h\right\rVert_{H}\leq 1}\left|\left\langle\nabla
R_{t}\phi\left(x\right),h\right\rangle\right|=\sup_{\left\lVert
h\right\rVert_{H}\leq 1}\left|D_{t,x}\left(h\right)\right|\leq
c_{\alpha}\,C_{t}\,\left\lVert\phi\right\rVert_{\infty},\quad x\in H.$
We also note that
$\sup_{\left\lVert h\right\rVert_{H}\leq
1}\left|\left(D_{t,x_{n}}-D_{t,x}\right)\left(h\right)\right|\leq
c_{\alpha}\,C_{t}\,\mathbb{E}\left[\left|\phi\left(Z_{t}^{x_{n}}\right)-\phi\left(Z_{t}^{x}\right)\right|^{2}\right]^{1/2}\to
0\quad\text{as }x_{n}\to x:$
this proves the continuity of the map $x\mapsto D_{t,x}$, hence
$R_{t}\phi\in C^{1}_{b}\left(H\right)$.
Finally, we need to study the case where $\phi$ is just Borel measurable and
bounded, without the hypothesis of continuity. In order to do this, it is
sufficient to observe that by the _mean value theorem_ and (25) we have, for
every $\phi\in C_{b}^{2}\left(H\right),$
$\left|R_{t}\phi\left(x\right)-R_{t}\phi\left(y\right)\right|\leq
c_{\alpha}\,C_{t}\,\left\lVert\phi\right\rVert_{\infty}\left\lVert
x-y\right\rVert_{H},\quad x,y\in H.$ (28)
Since $R_{t}$ is Markovian, [4, Lemma $7.1.5$] implies that the same holds true
for every $\phi\in\mathcal{B}_{b}\left(H\right)$. In particular, $R_{t}$ maps bounded,
Borel measurable functions into bounded, Lipschitz continuous functions. The semigroup
law lets us write $R_{t}\phi=R_{s}\left(R_{t-s}\phi\right)$ for some $0<s<t$,
which proves $R_{t}\phi\in C_{b}^{1}\left(H\right)$. The bound (25) follows
from (28), hence the proof is complete. ∎
We now focus on the gradient estimate (9). We need to replace Hypothesis
(ii) with the following stronger one:
$\sup_{n}\left(\frac{1}{\sigma_{n}}\sqrt[2\alpha]{\frac{2\alpha\lambda_{n}}{1-e^{-2\alpha\lambda_{n}t}}}e^{-\lambda_{n}t}\right)\leq
C_{0}\frac{1}{t^{\gamma}},\quad\text{for every }t>0,\,\text{for some
}C_{0}>0,\,0<\gamma<1.$ (iii)
In other words, in Hypothesis (ii) we take $C_{t}\coloneqq
C_{0}\,t^{-\gamma},\,t>0$, for some $C_{0}>0,\,\gamma\in\left(0,1\right)$.
###### Remark 4.
Observe that, for every $n\in\mathbb{N}$, the term
$\frac{1}{\sigma_{n}}\sqrt[2\alpha]{\frac{2\alpha\lambda_{n}}{1-e^{-2\alpha\lambda_{n}t}}}e^{-\lambda_{n}t}\sim\frac{1}{\sigma_{n}}\frac{1}{t^{1/\left(2\alpha\right)}}\quad\text{as
}t\downarrow 0.$
Therefore, Hypothesis (iii) can only be satisfied in the case
$\alpha\in\left(\frac{1}{2},1\right)$, and then necessarily with
$\gamma\in\left[\frac{1}{2\alpha},1\right)$.
It is also worth noticing that Hypothesis (iii) is equivalent to the next
condition:
$\sigma_{n}\geq C_{1}\,\lambda_{n}^{\frac{1}{2\alpha}-\gamma},\quad
n\in\mathbb{N},$ (iii′)
for some $C_{1}>0$ and $\gamma\in\left[\frac{1}{2\alpha},1\right)$. A short
argument proving the latter fact is shown in [11, _Hypothesis (N)_].
At this point the next result is immediate.
###### Corollary 8.
Consider $\alpha\in\left(\frac{1}{2},1\right)$ and assume
$\sigma_{n}^{2}>0,\,n\in\mathbb{N}$, together with Hypotheses (i) and (iii).
Then for every $\phi\in\mathcal{B}_{b}\left(H\right)$ the function
$R_{t}\phi\in C_{b}^{1}\left(H\right),\,t>0,$ and the gradient estimate (9)
holds, namely there exists a constant $C>0$ such that
$\sup_{x\in H}\left\lVert\nabla
R_{t}\phi\left(x\right)\right\rVert_{H}\leq\frac{C}{t^{\gamma}}\left\lVert\phi\right\rVert_{\infty}\quad\text{for
every }t>0,$
where $\gamma\in\left[\frac{1}{2\alpha},1\right)$ is the one appearing in
Hypothesis (iii).
###### Example 2.
We investigate Hypothesis (iii), in its equivalent formulation (iii′) provided
by Remark 4, in the same framework as in Example 1. So we take $A=\Delta$
(hence
$-\lambda_{k}=-\left(2\pi\right)^{2}\left|k\right|^{2},\,k\in\mathbb{Z}_{0}^{d}$)
and study two possible choices for $Q$.
* If $Q=\text{Id}$, then
$1\geq\frac{1}{\left(2\pi\left|k\right|\right)^{2\left(\gamma-\frac{1}{2\alpha}\right)}},\quad
k\in\mathbb{Z}_{0}^{d}$
for every $\gamma\in\left[\frac{1}{2\alpha},1\right)$. Therefore, in dimension
$d=1$ both conditions (i) and (iii) are satisfied. In particular, motivated by
the fact that $R_{t}$ is a regularization operator with $R_{0}=\text{Id}$, we
are interested in the behavior of $\nabla R_{t}\phi$ around $t=0$, where
$\phi\in\mathcal{B}_{b}\left(H\right)$. Therefore we choose
$\gamma=\frac{1}{2\alpha}$ and Corollary 8 provides the next estimate:
$\sup_{x\in H}\left\lVert\nabla R_{t}\phi\left(x\right)\right\rVert_{H}\leq
C\,\frac{1}{t^{1/\left(2\alpha\right)}}\left\lVert\phi\right\rVert_{\infty}\quad\text{for
every }t>0,$
for a positive constant $C$.
* If $Q=Q_{\eta}=\left(-\Delta\right)^{-\eta}$ for $\eta>0$, then
$\sigma^{\left(\eta\right)}_{k}=\lambda_{k}^{-\eta/2},\,k\in\mathbb{Z}_{0}^{d}$,
and (iii′) holds true if and only if $\eta\leq 2\gamma-\frac{1}{\alpha}$.
Since we can take any $\gamma\in\left[\frac{1}{2\alpha},1\right)$, the
aforementioned condition holds as soon as $\eta<2-\frac{1}{\alpha}.$ Combining
this result with (8) obtained in Example 1, we conclude that Hypotheses (i)
and (iii) simultaneously hold if and only if
$\eta\in\left(\max\left\\{\frac{d-2}{2\alpha},0\right\\}\\!,\,2-\frac{1}{\alpha}\right).$
It then follows that there exist negative fractional powers of the Laplacian
$Q_{\eta}=\left(-\Delta\right)^{-\eta}$ meeting the requirements of Corollary
8 up to dimension $d=3$. Specifically, for $d=1,2$ there is a $Q_{\eta}$ with
the desired properties for every $\alpha\in\left(\frac{1}{2},1\right)$,
whereas in dimension $d=3$ we can find such a $Q_{\eta}$ only for
$\alpha\in\left(\frac{3}{4},1\right)$.
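The admissible window for $\eta$ can be tabulated directly; the short sketch below (with an arbitrary grid of $\alpha$ values) confirms the dimension count above: a nonempty window up to $d=3$, and for $d=3$ only when $\alpha>\frac{3}{4}$.

```python
import numpy as np

# Example 2: Hypotheses (i) and (iii) hold simultaneously for Q_eta iff
# max((d - 2)/(2*alpha), 0) < eta < 2 - 1/alpha; scan alpha over (1/2, 1).
for d in (1, 2, 3, 4):
    alphas = np.linspace(0.52, 0.98, 24)
    lo = np.maximum((d - 2) / (2 * alphas), 0.0)
    hi = 2.0 - 1.0 / alphas
    ok = alphas[lo < hi]
    if ok.size:
        print(f"d = {d}: nonempty window for alpha in about [{ok.min():.2f}, {ok.max():.2f}]")
    else:
        print(f"d = {d}: no admissible eta for any alpha in (1/2, 1)")
```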
## Acknowledgment
I thank Professor Franco Flandoli for useful discussions and valuable insight
into the subject.
## References
* [1] Benth, F. E., Benth, J. S., & Koekebakker, S. (2008). _Stochastic modelling of electricity and related markets_ (Vol. 11). World Scientific.
* [2] Bogdan, K., Stós, A., & Sztonyk, P. (2003). Harnack inequality for stable processes on d-sets. _Studia Math, 158_(2), 163–198.
* [3] Da Prato, G. (2014). _Introduction to stochastic analysis and Malliavin calculus_ (Vol. 13). Springer.
* [4] Da Prato, G., & Zabczyk, J. (1996). _Ergodicity for infinite dimensional systems_ (Vol. 229). Cambridge University Press.
* [5] Da Prato, G., & Zabczyk, J. (2014). _Stochastic equations in infinite dimensions_. Cambridge University Press.
* [6] Flandoli, F., & Luo, D. (2020). Convergence of transport noise to Ornstein–Uhlenbeck for 2D Euler equations under the enstrophy measure. _The Annals of Probability, 48_(1), 264–295.
* [7] Jacod, J. (2006). _Calcul stochastique et problemes de martingales_ (Vol. 714). Springer.
* [8] Karatzas, I., & Shreve, S. E. (1998). Brownian motion. In _Brownian Motion and Stochastic Calculus_ (pp. 47–127). Springer, New York, NY.
* [9] Kusuoka, S. (2010). Malliavin calculus for stochastic differential equations driven by subordinated Brownian motions. _Kyoto Journal of Mathematics, 50_(3), 491–520.
* [10] Natanson, I. P. (2016)._Theory of functions of a real variable_. Courier Dover Publications.
* [11] Priola, E., & Zabczyk, J. (2011). Structural properties of semilinear SPDEs driven by cylindrical stable processes. _Probability theory and related fields, 149_(1-2), 97–137.
* [12] Protter, P. E. (2005). _Stochastic integration and differential equations_. Springer, Berlin, Heidelberg.
* [13] Ken-Iti, S. (1999). _Lévy processes and infinitely divisible distributions_. Cambridge university press.
* [14] Winkelbauer, A. (2012). Moments and absolute moments of the normal distribution. _arXiv preprint arXiv:1209.4340_
* [15] Zhang, X. (2013). Derivative formulas and gradient estimates for SDEs driven by $\alpha$-stable processes. _Stochastic Processes and their Applications, 123_(4), 1213–1228.
|
# Equilibrium of an Arbitrary Bunch Train in the
Presence of Multiple Resonator Wake Fields
Robert Warnock<EMAIL_ADDRESS>
SLAC National Accelerator Laboratory, Stanford University, Menlo Park, CA 94025, USA
Department of Mathematics and Statistics, University of New Mexico, Albuquerque, NM 87131, USA
###### Abstract
A higher harmonic cavity (HHC), used to cause bunch lengthening for an
increase in the Touschek lifetime, is a feature of several fourth generation
synchrotron light sources. The desired bunch lengthening is complicated by the
presence of required gaps in the bunch train. In a recent paper the author and
Venturini studied the effect of various fill patterns by calculating the
charge densities in the equilibrium state, through coupled Haïssinski
equations. We assumed that the only collective force was from the beam loading
(wake field) of the harmonic cavity in its lowest mode. The present paper
improves the notation and organization of the equations so as to allow an easy
inclusion of multiple resonator wake fields. This allows one to study the
effects of beam loading of the main accelerating cavity, higher order modes of
the cavities, and short range geometric wakes represented by low-$Q$
resonators. As an example these effects are explored for ALS-U. The
compensation of the induced voltage in the main cavity, achieved in practice
by a feedback system, is modeled by adjustment of the generator voltage
through a new iterative scheme. Except in the case of a complete fill, the
compensated main cavity beam loading has a substantial effect on the bunch
profiles and the Touschek lifetimes. A $Q=6$ resonator, approximating the
effect of a realistic short range wake, is also consequential for the bunch
forms.
## I Introduction
This is a sequel to Ref. [1], in which we explored the action of a higher
harmonic cavity (HHC), a standard component of 4th generation synchrotron
light sources, employed to lengthen the bunch and reduce the effect of
Touschek scattering. In that work we introduced an effective scheme to compute
the equilibrium state of charge densities in an arbitrary bunch train. The
train is allowed to have arbitrary gaps and bunch charges. We chose the
simplest possible physical model, in which the only induced voltage (wake
field) is due to the lowest mode of the HHC. We write $V_{r3}$ for this
voltage, the notation designating “resonator, 3rd harmonic”. We recognized,
however, that excitation of the main accelerating cavity (MC) by the bunch
train produces an induced voltage $V_{r1}$ of comparable magnitude, the effect
described as beam loading. Our excuse for omitting $V_{r1}$ was that in
practice it is largely cancelled by adjusting the rf generator voltage $V_{g}$
through a feedback system. The sum of $V_{r1}$ and $V_{g}$ should closely
approximate $V_{rf}$, the desired accelerating voltage.
In real machines there are always gaps in the bunch train, and that leads to
varying bunch profiles and centroid displacements along the train. At first
sight this would suggest that $V_{r1}$ would be different for different
bunches, so that compensation could only be partial, perhaps only manifest in
some average sense. On the contrary, we shall calculate an equilibrium state
in which the compensation is essentially perfect for all bunches. This happens
by an adjustment of the charge densities of all bunches, to new forms that
sometimes differ substantially from those without the MC. The adjustment is
achieved automatically through a new algorithm presented here. This iterative
procedure minimizes a mean square deviation of $V_{r1}+V_{g}$ from $V_{rf}$,
summed over all bunches, as a function of two generator parameters, which are
equivalent to amplitude and phase.
It is not clear that this is a faithful model of the feedback mechanism, which
could conceivably amount to a weaker constraint on the bunch profiles.
Nevertheless, this study clarifies the mathematical structure of the problem,
and appears to be a worthwhile preliminary to a full time-dependent model of
the system including a realistic description of feedback.
Besides the main cavity, we also assess the role of the short range wake field
from geometric aberrations in the vacuum chamber, and higher order modes in
the HHC. These effects are added with the help of improvements in notation and
organization of the equations.
Extensive numerical results are reported with parameters for ALS-U. For
consistency with the previous work the parameters chosen are partially out of
date as the machine design stands at present, but that will not greatly affect
our principal conclusions. Although the qualitative picture of the model with
HHC alone is still in place, there are large quantitative changes. Even then,
we underestimate the full effects, because we can only get convergence of our
iterative method when the current is a few percent less than the design
current.
In Section II we briefly recall our previous algorithm for solving the coupled
Haïssinski equations. Section III introduces the improved notation and
organization which allows an easy inclusion of multiple resonator wake fields.
Section IV enlarges the system of equations to provide a calculation of the
diagonal terms in the potential, thus overcoming a limitation of the previous
formulation. Section V describes the method for determining the generator
parameters so as to compensate the induced voltage in the main cavity. Section
VI, with several subsections, reports numerical results for the case of ALS-U
[2, 3], always making comparisons to results with only the HHC in
place. Subsection VI.1 treats the case of a complete fill, illustrating the
compensation of the main cavity in the simplest instance. Subsection VI.2
considers a partial fill with distributed gaps, as proposed for the machine.
Subsection VI.3 is concerned with over-stretching by reduction of the HHC
detuning. Subsection VI.4 explores the effect of the short range wake field
with a realistic wake potential. Subsection VI.5 checks the effect of the
principal higher order mode of the HHC. Subsection VI.6 presents our closest
approach to a realistic model, including the harmonic cavity, the compensated
main cavity, and the short range wake, altogether. Subsection VI.7 examines
the effect of the main cavity when there is only a single gap in the bunch
train. Section VII reviews our conclusions and possibilities for further work.
Appendix A derives the expression for the diagonal terms in the potential, for
resonators of arbitrary $Q$.
## II Summary of method to compute the equilibrium charge densities
In prabI we derived a set of equations to determine the equilibrium charge
densities of $n_{b}$ bunches, which may be stated succinctly as follows:
$F(\hat{\rho},I)=0\ .$ (1)
Here $I$ is the average current and $\hat{\rho}$ is a vector with $2n_{b}$
real components, consisting of the real and imaginary parts of
$\hat{\rho}_{i}(k_{r3})$, where $k_{r3}$ is the wave number of the lowest
resonant mode of the 3rd harmonic cavity. These quantities are defined in
terms of the beam frame charge densities $\rho_{i}(z)$, each normalized to 1 on its
region of support $[-\Sigma,\Sigma]$, as
$\hat{\rho}_{i}(k)=\frac{1}{2\pi}\int_{-\Sigma}^{\Sigma}\exp\big{(}-ikz(1+i/2Q)\big{)}\rho_{i}(z)dz\
,$ (2)
where $Q$ is the quality factor of the cavity. The vector in (1) is arranged
as follows:
$\hat{\rho}=\big{[}~{}{\rm Re}\hat{\rho}_{1}(k_{r3}),\cdots,{\rm
Re}\hat{\rho}_{n_{b}}(k_{r3}),~{}{\rm Im}\hat{\rho}_{1}(k_{r3}),\cdots,{\rm
Im}\hat{\rho}_{n_{b}}(k_{r3})~{}\big{]}\ .$ (3)
Accordingly, $F$ in (1) is a real vector with $2n_{b}$ components, so that we
have $2n_{b}$ nonlinear algebraic equations in $2n_{b}$ unknowns, depending on
the parameter $I$.
For the high $Q$ of a typical HHC the quantity (2) is very close to the
Fourier transform, but we have persistently written all equations for general
$Q$ for later applications involving low-$Q$ resonators.
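As a concrete illustration, the quantity (2) can be approximated by
straightforward quadrature. The following Python sketch (with an illustrative
Gaussian density and arbitrary values of $\Sigma$, $k$ and $Q$, none taken
from this paper) shows how the low-$Q$ weight $\exp(kz/2Q)$ distinguishes (2)
from the plain Fourier transform.

```python
import numpy as np

def rho_hat(rho, z, k, Q):
    """Trapezoidal approximation of (2) for a sampled density rho on the mesh z."""
    return np.trapz(np.exp(-1j * k * z * (1 + 1j / (2 * Q))) * rho, z) / (2 * np.pi)

Sigma = 0.1                               # half-support [-Sigma, Sigma] (illustrative)
z = np.linspace(-Sigma, Sigma, 2001)
rho = np.exp(-z**2 / (2 * 0.01**2))       # Gaussian bunch, rms 0.01 (illustrative)
rho /= np.trapz(rho, z)                   # normalize to 1 on the support

print(abs(rho_hat(rho, z, 300.0, 1e4)))   # high Q: essentially the Fourier transform
print(abs(rho_hat(rho, z, 300.0, 6.0)))   # low Q: the exp(kz/2Q) weight is significant
```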
In (1) the diagonal terms of the induced voltage have been dropped, i.e. the
effects on a bunch of its own excitation of the cavity. This omission is
justified for the typical high $Q$ of an HHC. Our method to handle the
diagonal terms in the general case is introduced in Section IV.
A solution $\hat{\rho}$ of (1) determines the charge densities by the formula
of Eq. (50) in Ref. [1],
$\rho_{i}(z_{i})=\frac{1}{A_{i}}\exp\big{[}-\mu U_{i}(z_{i})\big{]}\ ,$ (4)
where $U_{i}$ is the potential felt by the $i$-th bunch, defined in Eq.(51) of
Ref. [1]. Here $\mu$ and $A_{i}$ are constants, and $z_{i}$ is the beam frame
longitudinal coordinate of the $i$-th bunch. The potential $U_{i}$ depends on
all components of $\hat{\rho}$, on the mean energy loss per turn $U_{0}$, and
on the parameters of the applied voltage $V_{rf}$ which we write as
$V_{rf}(z)=V_{1}\sin(k_{1}z+\phi_{1})=V_{1}\big{(}\cos\phi_{1}\sin(k_{1}z)+\sin\phi_{1}\cos(k_{1}z)\big{)}\
.$ (5)
We solve (1) by the matrix version of Newton’s iteration, defined in (67) of
Ref. [1]. We begin at small current $I$, taking all components of the first
guess for $\hat{\rho}$ to be the transform (2) of a Gaussian with the natural
bunch length. We then continue step-wise to the desired current, making a
linear extrapolation in current to provide a starting guess for the next
Newton iteration at incremented current. The extrapolation is accomplished by
solving for $\partial\hat{\rho}/\partial I$ from the $I$-derivative of (1):
$\frac{\partial F}{\partial\hat{\rho}}\frac{\partial\hat{\rho}}{\partial
I}+\frac{\partial F}{\partial I}=0\ .$ (6)
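A schematic rendering of this continuation strategy is sketched below in
Python; the residual $F$, Jacobian $J$ and derivative $\partial F/\partial I$
are user-supplied placeholders, not implementations of the actual equations.

```python
import numpy as np

def newton(F, J, rho0, I, tol=1e-10, maxit=50):
    """Solve F(rho, I) = 0 by Newton's method from the starting guess rho0."""
    rho = rho0.copy()
    for _ in range(maxit):
        r = F(rho, I)
        if np.linalg.norm(r) < tol:
            return rho
        rho -= np.linalg.solve(J(rho, I), r)
    raise RuntimeError("Newton iteration did not converge")

def continue_in_current(F, J, dFdI, rho_start, I_values):
    """Step through I_values; after each solve, extrapolate linearly via (6)."""
    rho = rho_start
    for i, I in enumerate(I_values):
        rho = newton(F, J, rho, I)
        if i + 1 < len(I_values):
            drho_dI = np.linalg.solve(J(rho, I), -dFdI(rho, I))
            rho = rho + drho_dI * (I_values[i + 1] - I)
    return rho
```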
## III Formalism for Multiple Resonators
The scheme allows the inclusion of any number of resonator wake fields, but to
do that conveniently requires some care in notation and organization of the
equations. With $n_{r}$ resonators there are $2n_{b}n_{r}=n_{u}$ unknowns,
which we assemble in one long vector $\tilde{\rho}$ :
$\displaystyle\tilde{\rho}=\big{[}~{}\tilde{\rho}(k)\ ,\
k=1,\cdots,n_{u}~{}\big{]}=$ $\displaystyle\big{[}~{}{\rm
Re}\hat{\rho}_{1}(k_{r1}),\cdots,~{}{\rm Re}\hat{\rho}_{n_{b}}(k_{r1}),~{}{\rm
Im}\hat{\rho}_{1}(k_{r1}),\cdots,~{}{\rm
Im}\hat{\rho}_{n_{b}}(k_{r1}),\cdots,$ $\displaystyle~{}~{}{\rm
Re}\hat{\rho}_{1}(k_{r,n_{r}}),\cdots,~{}{\rm
Re}\hat{\rho}_{n_{b}}(k_{r,n_{r}})~{},~{}{\rm
Im}\hat{\rho}_{1}(k_{r,n_{r}}),\cdots,~{}{\rm
Im}\hat{\rho}_{n_{b}}(k_{r,n_{r}})~{}\big{]}\ .$ (7)
Here $k_{r,n}$ is the resonant wave number of the $n$-th resonator, and the
subscript of $\hat{\rho}$ denotes as usual the bunch number.
To identify the bunch number and the resonator number for the $k$-th component
of the vector, we define two index maps: $\iota(k)$ which gives the bunch
number and $r(k)$ which gives the resonator number. Namely,
$\displaystyle\iota(k)=\left\\{\begin{array}[]{l}\mod(k,n_{b})~{}~{}{\rm
if}\mod(k,n_{b})\neq 0\\\ ~{}~{}~{}n_{b}~{}~{}{\rm if}\mod(k,n_{b})=0\\\
\end{array}\right\\}$ (10) $\displaystyle
r(k)=\bigg{\lceil}\frac{k}{2n_{b}}\bigg{\rceil}\ .$ (11)
Here $\lceil x\rceil$, the ceiling of $x$, is the least integer greater than
or equal to $x$. We also need two projection operators: $P_{re}(k)$ which is
equal to 1 if $k$ corresponds to a ${\rm Re}\hat{\rho}$ and is zero otherwise,
and $P_{im}(k)$ which is equal to 1 if $k$ corresponds to a ${\rm
Im}\hat{\rho}$ and is zero otherwise. These are expressed in terms of the
ceiling of $k/n_{b}$ as follows:
$\displaystyle P_{re}(k)=\frac{1}{2}\bigg{[}1-(-1)^{\lceil
k/n_{b}\rceil}\bigg{]}\ ,$ $\displaystyle
P_{im}(k)=\frac{1}{2}\bigg{[}1+(-1)^{\lceil k/n_{b}\rceil}\bigg{]}\ .$ (12)
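These maps transcribe directly into code. The following Python sketch
implements (10)-(12) with the one-based component index $k$ used in the text;
the printed table for a toy case $n_{b}=3$, $n_{r}=2$ is illustrative only.

```python
from math import ceil

def iota(k, n_b):
    """Bunch number of component k, Eq. (10); k is one-based."""
    m = k % n_b
    return m if m != 0 else n_b

def r(k, n_b):
    """Resonator number of component k, Eq. (11)."""
    return ceil(k / (2 * n_b))

def P_re(k, n_b):
    """1 if component k is a Re(rho_hat), else 0; Eq. (12)."""
    return (1 - (-1) ** ceil(k / n_b)) // 2

def P_im(k, n_b):
    return (1 + (-1) ** ceil(k / n_b)) // 2

n_b, n_r = 3, 2                       # toy sizes, n_u = 2 * n_b * n_r = 12
for k in range(1, 2 * n_b * n_r + 1):
    print(k, iota(k, n_b), r(k, n_b), P_re(k, n_b), P_im(k, n_b))
```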
The potential $U_{j}(z)$ for bunch $j$, generalizing Eq.(51) of prabI to
allow $n_{r}$ resonators, is stated as
$\displaystyle
U_{j}(z)=\frac{eV_{1}}{k_{1}}\big{[}x_{1}\cos(k_{1}z)-x_{2}\sin(k_{1}z)-x_{1}\big{]}+U_{0}z$
(13)
$\displaystyle+\sum_{n=1}^{n_{r}}U^{d}_{jn}(z)+\sum_{k=1}^{n_{u}}M(z)_{j,k}~{}\tilde{\rho}_{k}\
,\quad j=1,\cdots,n_{b}\ ,\quad-\Sigma~{}\leq~{}z~{}\leq~{}\Sigma\ .$ (14)
The first term in (13) is $e$ times the integral of the applied voltage, now
called the generator voltage and written as
$V_{g}(z)=V_{1}\big{[}x_{1}\sin(k_{1}z)+x_{2}\cos(k_{1}z)\big{]}\ .$ (15)
At $x_{1}=\cos\phi_{1},\ x_{2}=\sin\phi_{1}$ this reduces to the desired
$V_{rf}$ of (5). In an amplitude-phase representation we have
$V_{g}(z)=\tilde{V}_{1}\sin(k_{1}z+\tilde{\phi}_{1})\
,\quad\tilde{V}_{1}=(x_{1}^{2}+x_{2}^{2})^{1/2}V_{1}\
,\quad\tilde{\phi}_{1}=\tan^{-1}(x_{2}/x_{1})\ .$ (16)
The first term in (14) represents the diagonal contributions, the effect on
bunch $j$ of its own excitation of the resonators, as opposed to excitation by
the other bunches which is described by the second term. By writing the latter
as a simple matrix-vector product we greatly simplify the calculation of the
Jacobian of the system, making it formally the same for any number of
resonators.
Referring to Eqs. (28), (51), (55)-(58) of Ref. [1], we can write
down the matrix elements $M(z)_{i,k}$ in the second term of (14). For this we
introduce a notation appropriate for labeling by the index $k$ of (7).
Functions of $k$, defined via the index maps, are labeled with a tilde:
$\displaystyle\tilde{k}_{r,k}=k_{r,r(k)}\ ,$
$\displaystyle\tilde{\xi}_{k}=\xi_{\iota(k)}\
,\quad\tilde{A}_{k}=A_{\iota(k)}$ $\displaystyle\tilde{R}_{sk}=R_{s,r(k)}\
,\quad\tilde{Q}_{k}=Q_{r(k)}\ ,$
$\displaystyle\tilde{\eta}_{k}=\eta_{r(k)},\quad\tilde{\psi}_{k}=\psi_{r(k)}\
,$
$\displaystyle\tilde{\phi}_{j,k}=\tilde{k}_{r,k}\big{[}(m_{\iota(k)}-m_{j})\lambda_{1}+\theta_{j-1,\iota(k)}C\big{]}\
,$ $\displaystyle\sigma_{j,k}(z)=\mathcal{S}\big{(}\tilde{k}_{r,k}z,\
\tilde{Q}_{k},\ \tilde{\phi}_{j,k}+\tilde{\psi}_{k}\big{)}\ ,$
$\displaystyle\gamma_{j,k}(z)=\mathcal{C}\big{(}\tilde{k}_{r,k}z,\
\tilde{Q}_{k},\ \tilde{\phi}_{j,k}+\tilde{\psi}_{k}\big{)}\ ,$ (17)
where
$\displaystyle\mathcal{S}(k_{r}z,Q,\phi)=\frac{1}{1+(1/2Q)^{2}}\bigg{[}\exp(-k_{r}z/2Q)\bigg{(}\sin(k_{r}z+\phi)-\frac{1}{2Q}\cos(k_{r}z+\phi)\bigg{)}\bigg{]}_{0}^{z}\
,$
$\displaystyle\mathcal{C}(k_{r}z,Q,\phi)=\frac{1}{1+(1/2Q)^{2}}\bigg{[}\exp(-k_{r}z/2Q)\bigg{(}\cos(k_{r}z+\phi)+\frac{1}{2Q}\sin(k_{r}z+\phi)\bigg{)}\bigg{]}_{0}^{z}\
.$ (18)
The result for the matrix from (51) and (57) of Ref. [1] is seen to be (noting
that $\omega_{r}/k_{r}=c$)
$M(z)_{j,k}=2\pi
ce^{2}N\frac{\tilde{\eta}_{k}\tilde{R}_{sk}}{\tilde{Q}_{k}}(1-\delta_{j,\iota(k)})\tilde{\xi}_{k}\exp(-\tilde{\phi}_{j,k}/2\tilde{Q}_{k})\big{[}P_{re}(k)\sigma_{j,k}(z)+P_{im}(k)\gamma_{j,k}(z)\big{]}\
.$ (19)
In the present notation the system of coupled Haïssinski equations,
generalizing (66) of Ref. [1], takes the form
$\displaystyle
F_{j}(\tilde{\rho})=\tilde{A}_{j}\tilde{\rho}_{j}-\frac{1}{2\pi}\int_{-\Sigma}^{\Sigma}\big{[}P_{re}(j)\cos(\tilde{k}_{r,j}\zeta)-P_{im}(j)\sin(\tilde{k}_{r,j}\zeta)\big{]}$
$\displaystyle\hskip
56.9055pt\cdot\exp\big{[}\tilde{k}_{r,j}\zeta/2\tilde{Q}_{j}-\mu\
U_{\iota(j)}(\zeta)\big{]}d\zeta=0\ ,\quad j=1,\cdots,n_{u}\ .$ (20)
The normalization integral appearing in the first term is
$\tilde{A}_{j}=\int_{-\Sigma}^{\Sigma}\exp\big{[}-\mu\
U_{\iota(j)}(\zeta)\big{]}d\zeta\ .$ (21)
We require the Jacobian matrix $[\partial F_{j}/\partial\tilde{\rho}_{k}]$ for
the solution of (20) by Newton’s method, assuming that the diagonal terms are
fixed. This is found immediately from (14), (20), and (21) as
$\displaystyle\frac{\partial
F_{j}}{\partial\tilde{\rho}_{k}}=\tilde{A}_{j}\delta_{j,k}-\mu\int_{-\Sigma}^{\Sigma}\exp\big{[}-\mu
U_{\iota(j)}(\zeta)\big{]}M(\zeta)_{\iota(j),k}$
$\displaystyle\cdot\bigg{[}\tilde{\rho}_{j}-\frac{1}{2\pi}\big{[}P_{re}(j)\cos(\tilde{k}_{r,j}\zeta)-P_{im}(j)\sin(\tilde{k}_{r,j}\zeta)\big{]}\exp\big{[}\tilde{k}_{r,j}\zeta/2\tilde{Q}_{j}\big{]}\bigg{]}d\zeta\
.$ (22)
The compact expressions in (19), (20), and (22) are quite convenient for
coding, and lead to a short program to solve the Haïssinski equations with any
number of resonators. For $\zeta$ at $n_{p}$ mesh points $z_{i}$ used in the
integrals we have the array $M(i,j,k)=M(z_{i})_{j,k}$ of manageable dimension
$n_{p}\times n_{b}\times n_{u}$, which can be computed and stored once at the
start, outside the Newton iteration.
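As an illustration of this organization, the following Python sketch assembles
the residual (20) and the Jacobian (22) from such a precomputed array; all
array arguments are placeholders with the indicated shapes, zero-based
indexing is used for $\iota$, and the quadrature is a plain trapezoid rule.

```python
import numpy as np

def assemble(z, M, U, rho_t, k_r, Q, iota, P_re, P_im, mu):
    """z: (n_p,) mesh; M: (n_p, n_b, n_u) stored matrix M(z_i)_{j,k};
    U: (n_u, n_p) rows U_{iota(j)}(z); rho_t: (n_u,) unknowns;
    k_r, Q: (n_u,) per-component resonator data; iota: (n_u,) zero-based
    bunch map; P_re, P_im: (n_u,) 0/1 projections; mu: scalar."""
    n_u = rho_t.size
    w = np.exp(-mu * U)                                   # exp(-mu U), (n_u, n_p)
    A = np.trapz(w, z, axis=1)                            # normalization (21)
    g = ((P_re[:, None] * np.cos(k_r[:, None] * z)
          - P_im[:, None] * np.sin(k_r[:, None] * z))
         * np.exp(k_r[:, None] * z / (2 * Q[:, None])) / (2 * np.pi))
    F = A * rho_t - np.trapz(g * w, z, axis=1)            # residual (20)
    J = np.zeros((n_u, n_u))
    for j in range(n_u):
        Mj = M[:, iota[j], :]                             # M(z)_{iota(j),k}, (n_p, n_u)
        integrand = (w[j] * (rho_t[j] - g[j]))[:, None] * Mj
        J[j] = -mu * np.trapz(integrand, z, axis=0)       # off-diagonal part of (22)
    J[np.diag_indices(n_u)] += A                          # add A_j * delta_{jk}
    return F, J
```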
For the work of the following section we also need the induced voltage from
the main cavity, which we designate as the first resonator in the list
$(n=1)$. For the $j$-th bunch this takes the form
$\displaystyle V_{r1j}(z)=-2\pi
ceN\frac{k_{r1}R_{s1}\eta_{1}}{Q_{1}}\bigg{[}\sum_{k=1}^{2n_{b}}(1-\delta_{j,\iota(k)})\tilde{\xi}_{k}\exp(-(\tilde{k}_{r,k}z+\tilde{\phi}_{j,k})/2\tilde{Q}_{k})$
$\displaystyle\bigg{(}P_{re}(k)\cos(\tilde{k}_{r,k}z+\tilde{\phi}_{j,k}+\tilde{\psi}_{k})-P_{im}(k)\sin(\tilde{k}_{r,k}z+\tilde{\phi}_{j,k}+\tilde{\psi}_{k})\bigg{)}\tilde{\rho}_{k}+v^{d}_{1j}(z)\bigg{]}\
.$ (23)
The diagonal term $v^{d}_{1j}$ can be evaluated in terms of integrals derived
in Appendix A.
## IV The full system of equations with diagonal terms
Through (20) we have a system of $n_{u}$ algebraic equations for determination
of $\tilde{\rho}$, provided that the diagonal terms in $U_{i}$ are given. The
latter are functionals of the charge densities $\rho_{i}(z_{i})$, from which
it follows that (20) can be stated in vector notation as
$\tilde{\rho}=\mathcal{A}(\tilde{\rho},\rho,I)\ .$ (24)
On the other hand, the $\rho_{i}(z_{i})$ are determined in turn as solutions
of integral equations provided that $\tilde{\rho}$ is given. The integral
equations are like normal single-bunch Haïssinski equations, but with a
background potential determined by $\tilde{\rho}$, namely
$\rho_{i}(z_{i})=\frac{1}{A_{i}}\exp\bigg{[}-\mu
U_{i}(z_{i},~{}\rho_{i},~{}\tilde{\rho})\bigg{]}\ ,\quad i=1,\cdots,n_{b}\ .$
(25)
In vector notation
$\rho=\mathcal{B}(\rho,\tilde{\rho},I)\ .$ (26)
The potential $U_{i}$ depends on the $\rho_{i}$ through its diagonal terms, in
the first sum in (14). Our procedure will be to interleave the solution of
(24) at fixed $\rho$, by the usual Newton method, with the solution of
(26) at fixed $\tilde{\rho}$. If this algorithm converges we shall have
consistency between $\rho$ and $\tilde{\rho}$ and a solution of the full
system.
It turns out, most fortunately, that the solution of (26) is obtained by plain
iteration as would be applied to a contraction mapping,
$\rho^{(n+1)}=\mathcal{B}(\rho^{(n)},\tilde{\rho},I)\ .$ (27)
In our application this usually converges to adequate accuracy in just one
step, or three at most, and takes negligible time.
This scheme based on (24) and (26) is used in all calculations reported below.
It replaces the method used in Ref. [1], which was to evaluate the diagonal
terms from the value of $\rho$ from the previous Newton iterate. That works
only for high-$Q$ resonators, so is not adequate for handling the short range
machine wake.
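The interleaving itself is simple to express. The sketch below, in Python,
shows the control flow only; solve_A (Newton on (24)) and B_map (one step of
(27)) are placeholders for the operators $\mathcal{A}$ and $\mathcal{B}$.

```python
import numpy as np

def interleaved_solve(solve_A, B_map, rho_tilde, rho, I, tol=1e-9, maxit=100):
    """Alternate Newton solves of (24) at fixed rho with plain iteration (27)."""
    for _ in range(maxit):
        rho_tilde_new = solve_A(rho_tilde, rho, I)   # Newton on (24), densities frozen
        rho_new = B_map(rho, rho_tilde_new, I)       # one step of (27), amplitudes frozen
        drift = (np.linalg.norm(rho_tilde_new - rho_tilde)
                 + np.linalg.norm(rho_new - rho))
        rho_tilde, rho = rho_tilde_new, rho_new
        if drift < tol:
            return rho_tilde, rho
    raise RuntimeError("interleaved iteration did not converge")
```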
## V Algorithm to adjust the generator parameters $(x_{1},x_{2})$
We wish to choose $(x_{1},x_{2})$ so as to minimize, in some sense, the
difference
$V_{rf}(z_{i})-V_{g}(z_{i},x_{1},x_{2})-V_{r1i}(z_{i},x_{1},x_{2})\ ,$ (28)
for all $i=1,\cdots,n_{b}$. A reasonable and convenient choice for an
objective function to minimize is the sum of the squared $L^{2}$ norms of the
quantities (28). With a normalizing factor to make it dimensionless and of
convenient magnitude that is
$\displaystyle f(x_{1},x_{2})=$ $\displaystyle\frac{1}{2\Sigma
V_{1}^{2}}\sum_{i=1}^{n_{b}}\int_{-\Sigma}^{\Sigma}\bigg{[}V_{1}(\cos\phi_{1}-x_{1})\sin(k_{1}z)+V_{1}(\sin\phi_{1}-x_{2})\cos(k_{1}z)-V_{r1i}(z,x_{1},x_{2})\bigg{]}^{2}dz\
.$ (29)
The region of integration $[-\Sigma,\Sigma]$ is the same as that used in the
definition of the potential $U_{i}$.
Note that the minimum of $f$ cannot be strictly zero, since $V_{r1}$ is
sinusoidal with wave number $k_{r1}$, whereas the other terms are sinusoidal
with a slightly different wave number $k_{1}$.
Let us adopt the vector notation $x=(x_{1},x_{2})$ with norm
$|x|=|x_{1}|+|x_{2}|$. The equations to solve now depend on $x$, having the
form
$F(\tilde{\rho},I,x)=0\ .$ (30)
To avoid notational clutter we suppress reference to the diagonal terms,
leaving it understood that a solution of (30) for $\tilde{\rho}$ actually
involves the scheme of the previous section. As usual we solve for
$\tilde{\rho}$, for an increasing sequence of $I$-values. The scheme will be
to minimize $f(x)$ at each $I$, thus providing a new $x=\arg\min f$ to be used
at the next value of $I$. As will now be explained, the minimization will also
be done iteratively, so that we have an $x$-iteration embedded in the
$\tilde{\rho}$-iteration.
We wish to zero $\nabla_{x}f$, that is, to find $x$ solving the equations
$\displaystyle\sum_{i=1}^{n_{b}}\int_{-\Sigma}^{\Sigma}\bigg{[}V_{1}(\cos\phi_{1}-x_{1})\sin(k_{1}z)+V_{1}(\sin\phi_{1}-x_{2})\cos(k_{1}z)-V_{r1i}(z,x)\bigg{]}$
$\displaystyle\times\left[\begin{array}[]{c}V_{1}\sin(k_{1}z)+\partial_{x_{1}}V_{r1i}(z,x)\\\
V_{1}\cos(k_{1}z)+\partial_{x_{2}}V_{r1i}(z,x)\end{array}\right]dz=\left[\begin{array}[]{c}0\\\
0\end{array}\right]\ .$ (36)
To solve (36) a first thought might be to apply Newton’s method, starting at
some low current and choosing the zero current solution
$(\cos\phi_{1},\sin\phi_{1})$ as the first guess. This would be awkward,
however, since it would involve the second derivatives of $V_{r1i}$ with
respect to $(x_{1},x_{2})$. The first derivatives must already be done by an
expensive numerical differentiation, and the second numerical derivative would
be error prone and even more expensive. Instead, let us assume that we have a
first guess $(x_{10},x_{20})$ and suppose that in a small neighborhood of that
point the first derivatives of $V_{r1i}$ can be regarded as constant. Then
second derivatives are zero and the Taylor expansion of $V_{r1i}$ gives two
linear equations to solve for $(x_{1},x_{2})$, namely
$\left[\begin{array}[]{cc}a_{11}&a_{12}\\\
a_{21}&a_{22}\end{array}\right]\left[\begin{array}[]{c}x_{1}\\\
x_{2}\end{array}\right]=\left[\begin{array}[]{c}b_{1}\\\
b_{2}\end{array}\right]\ ,$ (37)
where
$\displaystyle a_{11}=\sum_{i}\int\alpha_{1i}(z,x_{0})^{2}dz\ ,\quad
a_{22}=\sum_{i}\int\alpha_{2i}(z,x_{0})^{2}dz\ ,$ $\displaystyle
a_{12}=a_{21}=\sum_{i}\int\alpha_{1i}(z,x_{0})\alpha_{2i}(z,x_{0})dz\ ,$
$\displaystyle b_{1}=\sum_{i}\int\alpha_{1i}(z,x_{0})\beta_{i}(z,x_{0})dz\
,\quad b_{2}=\sum_{i}\int\alpha_{2i}(z,x_{0})\beta_{i}(z,x_{0})dz\ ,$ (38)
with
$\displaystyle\alpha_{1i}(z,x_{0})=V_{1}\sin(k_{1}z)+\partial_{x_{1}}V_{r1i}(z,x_{0})\
,$
$\displaystyle\alpha_{2i}(z,x_{0})=V_{1}\cos(k_{1}z)+\partial_{x_{2}}V_{r1i}(z,x_{0})\
,$
$\displaystyle\beta_{i}(z,x_{0})=-V_{r1i}(z,x_{0})+\nabla_{x}V_{r1i}(z,x_{0})\cdot
x_{0}+V_{1}\sin(k_{1}z+\phi_{1})\ .$ (39)
By (37) we have an update $x_{0}\rightarrow x$ which establishes the pattern
of the general iterate $x^{(k)}\rightarrow x^{(k+1)}$. This will be carried to
convergence in the sense $|x^{(k+1)}-x^{(k)}|<\epsilon_{x}$, with a suitable
$\epsilon_{x}$ to be determined by experiment. Each iterate requires a value
for $V_{r1i}$ and for $\nabla_{x}V_{r1i}$, which we compute numerically by a
divided difference,
$\frac{\partial V_{r1i}}{\partial
x_{1}}(z,x)\approx\frac{V_{r1i}(z,x_{1}+\Delta
x,x_{2})-V_{r1i}(z,x_{1},x_{2})}{\Delta x}\ .$ (41)
Thus one $x$-iteration requires three $\tilde{\rho}$-iterations to provide the
necessary values of $V_{r1i}$ (which are constructed from $\tilde{\rho}$). The
first $\tilde{\rho}$ iteration to find $V_{r1i}(z,x_{1},x_{2})$ produces a
$\tilde{\rho}$ which is a very good guess to start the remaining two
iterations to make the derivatives, which then converge quickly.
The choice of $\Delta x$ in (41) requires a compromise between accuracy and
avoiding round-off error. We found that $\Delta x=10^{-4}$ was satisfactory in
a wide range of cases, whereas success with smaller values depended on the
circumstances.
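One update $x^{(k)}\rightarrow x^{(k+1)}$ can be summarized as in the
following Python sketch, where V_r1 is a placeholder returning the induced
voltage of each bunch on the $z$ mesh; the body transcribes (37)-(38) with the
divided difference (41).

```python
import numpy as np

def x_update(V_r1, x, z, V1, k1, phi1, dx=1e-4):
    """One least-squares update (37)-(38); V_r1(z, x) -> array (n_b, n_z)."""
    a = np.zeros((2, 2)); b = np.zeros(2)
    base = V_r1(z, x)                                  # V_r1i at the current x
    d1 = (V_r1(z, (x[0] + dx, x[1])) - base) / dx      # divided difference (41)
    d2 = (V_r1(z, (x[0], x[1] + dx)) - base) / dx
    for i in range(base.shape[0]):
        a1 = V1 * np.sin(k1 * z) + d1[i]               # alpha_1i of (39)
        a2 = V1 * np.cos(k1 * z) + d2[i]               # alpha_2i of (39)
        beta = (-base[i] + d1[i] * x[0] + d2[i] * x[1]
                + V1 * np.sin(k1 * z + phi1))          # beta_i of (39)
        a[0, 0] += np.trapz(a1 * a1, z)
        a[1, 1] += np.trapz(a2 * a2, z)
        a[0, 1] += np.trapz(a1 * a2, z)
        b[0] += np.trapz(a1 * beta, z)
        b[1] += np.trapz(a2 * beta, z)
    a[1, 0] = a[0, 1]
    return np.linalg.solve(a, b)                       # the new (x1, x2)
```

Consistent with the text, each call of x_update costs three evaluations of
$V_{r1i}$, i.e. three $\tilde{\rho}$-iterations.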
## VI Numerical results with and without the main cavity
As in Ref. [1] we illustrate with parameters for ALS-U [2, 3], the
forthcoming Advanced Light Source Upgrade. Although the machine design is not
yet final, one provisional set of parameters for our main cavity (actually the
effect of two cavities together) is as follows:
$R_{s}=0.8259~{}{\rm M\Omega}\ ,\quad Q=3486\ ,\quad\delta
f=f_{r1}-f_{1}=-82.54~{}{\rm kHz}\ .$ (42)
Here the shunt impedance $R_{s}$ and quality factor $Q$ are loaded values, the
unloaded values divided by $1+\beta$, with coupling parameter $\beta=7.233$.
We take these parameters for the main cavity, otherwise keeping the same
parameters as in Ref. [1], Table I. Thus we take $U_{0}=217$ keV, even though a
value of 330 keV may be contemplated for the set (42).
### VI.1 Complete Fill
We first take the case of a complete fill, thus $n_{b}=h=328$. The average
current is to be 500 mA, which we reach in 8 steps starting from 200 mA. The
CPU time is 15 minutes, rather than 20 seconds for the calculation without the
main cavity. The increase is mostly due to a much slower convergence of the
$\tilde{\rho}$-iteration, the $x$-iteration being a minor factor in CPU time.
To save time we gave $\epsilon_{x}$ the rather large value of 0.05, but then
made a refinement to $\epsilon_{x}=10^{-6}$ at the final current, in an extra
2 minutes. The steepness of the objective function $f(x_{1},x_{2})$ of (29) is
extraordinary, having values around $10^{4}$ in the sequence with
$\epsilon_{x}=0.05$ while falling to a value close to 1 after the refinement.
An interesting question is how this steepness would be reflected in a feedback
system.
The result for the charge density, shown in the blue curve of Fig.1, is quite
close to the result without the main cavity, shown in red. It should be
emphasized that there is no explicit constraint requiring all bunches to be
the same. We have computed 328 bunches separately, and have found that they
all come out to be the same. This constitutes a good check on the correctness
of the equations and the code.
Figure 1: Charge density for complete fill at 500 mA, with compensated main
cavity (blue) and without main cavity (red).
In Fig.2 we show the compensation mechanism. The sum of the generator voltage
$V_{g}$ and the induced voltage $V_{r1}$ from the main cavity is the orange
curve. The latter deviates from the desired effective voltage $V_{rf}$ by less
than 2%, as is seen in Fig. 3.
Figure 2: MC induced voltage $V_{r1}$, generator voltage $V_{g}$ and their
sum. Figure 3: Relative deviation of $V_{r1}+V_{g}$ from $V_{rf}$.
The phasor of the generator voltage moves closer to $\pi/2$ and its magnitude
$\sqrt{x_{1}^{2}+x_{2}^{2}}$ increases from 1 to 1.0245, in comparison to the
phasor of $V_{rf}$. The corresponding values of $(x_{1},x_{2})$ are
$(x_{10},x_{20})=(\cos\phi_{1},\sin\phi_{1})=(~{}-0.93231,~{}0.36167~{})\quad\rightarrow\quad(x_{1},x_{2})=(~{}-0.29098,~{}0.98018~{})\
.$ (43)
### VI.2 Partial fill C2 with distributed gaps
Next we take a partial fill with distributed gaps, labeled as fill C2; see
Section XIII-C of Ref. [1]. There are 284 bunches in 11 trains, with 4 empty
buckets between trains. There are 9 trains of 26 and 2 of 25, with the latter
positioned at opposite sides of the ring. All bunches have the same charge. As
in the preceding example we start the calculation at low average current and
advance in steps trying to reach the desired 500 mA. The convergence of
iterations is at first similar to that of the preceding case, but begins to
falter around 430 mA average current, at which point the convergence of the
$\tilde{\rho}$-iteration becomes problematic. By taking smaller and smaller
steps in current we can reach 496 mA, but beyond that point the Jacobian
matrix of the system appears to approach a singularity, as is indicated by its
estimated condition number having a precipitous increase, from 700 at the last
good solution to 2900 at a slightly higher current. Nevertheless, the
$x$-iterations continue to converge as long as the $\tilde{\rho}$-iterations
do. In the following, graphs are plotted for the maximum achievable current,
stated in figure captions.
Now the plots of $V_{g}$ and $V_{r1}$ and their sum look exactly the same as
in Fig.2, for every bunch. The minimization of $f(x_{1},x_{2})$ has caused the
bunch forms to rearrange themselves so that the compensation is essentially
perfect for every bunch. The deviation of $V_{r1}+V_{g}$ from $V_{rf}$,
scarcely visible on the scale of Fig.2, varies from bunch to bunch, but is
still less than 3% for all bunches.
Fig. 4 shows 9 bunch profiles in one train, to be compared with the
corresponding results without the main cavity in Fig. 5. The main cavity
causes considerably more bunch distortion along the train, and also a bigger
variation in the rms bunch lengths, as is seen in Fig.6. The plots show the
ratio of bunch length to the natural bunch length. The head of the train is on
the right, with the highest bunch number.
The corresponding results for the bunch centroids are seen in Fig. 7. Again the
deviation from the case without the main cavity is quite substantial.
Figure 4: Charge densities in a train of 26,
surrounded by gaps of 4 buckets, fill C2,
MC beam loading included, $I_{\rm av}=496$ mA.
Figure 5: Charge densities in a train of 26,
surrounded by gaps of 4 buckets, fill C2,
MC beam loading omitted, $I_{\rm av}=496$ mA.
Figure 6: Bunch length increases in a train of 26, surrounded by gaps of 4
buckets, with main cavity beam loading (blue) and without (red). $I_{\rm
av}=496$ mA. The plot is the ratio of bunch length $\sigma$ to the natural
bunch length $\sigma_{0}$. Figure 7: Centroids $<z>$ in a train of 26,
surrounded by gaps of 4 buckets, with cavity beam loading (blue) and without
(red). $I_{\rm av}=496$ mA
The main point of practical interest is the increase in Touschek lifetime
achieved through the bunch stretching caused by the HHC. Again, the MC has a
sizeable effect in reducing the lifetime and in causing a larger variation
along a train. This is shown in Fig.8 which gives the ratio of the lifetime
$\tau$ to the lifetime $\tau_{0}$ without the MC.
Figure 8: Touschek lifetime increase along a train, with compensated MC (blue)
and without (red). $I_{\rm av}=496$ mA.
We next consider the same fill pattern with 11 trains, but with a taper in the
bunch charges putting more charge at the ends, according to a power law as
shown in Fig. 15 of Ref. [1]. This is an example of invoking guard bunches to
reduce the effect of gaps. As is seen in Figs. 9 and 10, the guarded inner
bunches, which resemble those of the complete fill, are little affected by the
MC. The strong asymmetry between the front and back of the train is perhaps
surprising, but it should be noticed that Fig.10 already shows an appreciable
front-back asymmetry. The strong amplification of this asymmetry by the MC is
in line with its big effects seen generally.
Figure 9: Case of tapered bunch charges,
MC beam loading included, $I_{\rm av}=496$ mA.
Figure 10: Case of tapered bunch charges,
MC beam loading omitted, $I_{\rm av}=496$ mA.
### VI.3 Decrease of HHC detuning for over-stretching
There is practical interest in the possibility of over-stretching for an
additional increase in the Touschek lifetime. This entails a decrease in the
detuning of the HHC, which produces a larger r.m.s. bunch length but a bunch
profile with a dip in the middle, thus a double peak. In our case a decrease
from $df=250.2$ kHz to $df=235$ kHz produces a double peak in the model
without the MC at full current, as is seen in Fig. 4 of Ref. [1]. We would like
to know how this setup looks with the compensated main cavity in play. Not
surprisingly, the convergence of our iterative solution breaks down at a lower
current than in the case of the normal detuning; the stronger the bunch
distortions the poorer the convergence. With $df=235$ kHz and the MC we can
only reach 474.5 mA, which is not enough to see a double peak. Nevertheless it
is useful to compare the result at that current with the result in the absence
of the MC, as displayed for 9 bunches in a train of 27 in Figs. 11 and 12.
Figure 11: Fill C2 with HHC + MC,
detuning $df=235$ kHz, $I_{\rm av}=474.5$ mA.
Figure 12: Fill C2 with HHC only,
detuning $df=235$ kHz, $I_{\rm av}=474.5$ mA.
Even at a current significantly less than the 500 mA design current the
distortion due to the main cavity is quite large, which leads to the
conclusion that the main cavity must be included in a realistic simulation of
over-stretching.
### VI.4 Effect of the short range wake field
The short range wake field from various unavoidable corrugations in the vacuum
chamber retains importance in the latest storage rings, in spite of the best
efforts to reduce it. Since it can cause substantial bunch distortion in the
absence of an HHC, we would like to know how much it affects the operation of
the HHC. A result for the longitudinal wake potential at ALS-U, from a
detailed computation by Dan Wang [4], is shown in Fig. 13.
Figure 13: Wake potential (pseudo - Green function) for the ALS-U storage
ring, computed with a 1 mm driving bunch.
The corresponding impedance,
$Z(f)=\frac{1}{c}\int_{-\infty}^{\infty}e^{-ikz}W(z)dz\ ,\quad f=kc/2\pi\ ,$
(44)
is plotted in Fig.14.
Figure 14: Longitudinal impedance $Z(f)$ for ALS-U.
In Ref. [1] we suggested that a low-$Q$ resonator wake could be treated on
the same footing as the high-$Q$ resonators, and for that reason we wrote all
equations for a general value of $Q$. We recognized, however, that the
diagonal term in the potential would now be dominant, while being nearly
negligible in the high-$Q$ case. It could not be treated by the method used in
Ref. [1], but is easily handled by the presently adopted method of Section IV.
For the equilibrium state, the impedance at $f>20$ GHz is irrelevant, even
though it could have a role out of equilibrium. This assertion follows from
the fact that the frequency spectrum of our calculated charge densities never
extends beyond 15 GHz, no matter which wake fields are included. Consequently,
a reasonable step is to concentrate on the first big peak at 11.5 GHz. The
wake potential in our equations (defined in (19) of Ref. [1]) is based on an
impedance as follows, which is of Lorentzian form with half-width $\Gamma/2$:
$Z(f)=iR_{s}\frac{\Gamma}{2}\bigg{[}\frac{1}{f-f_{r}+i\Gamma/2}+\frac{1}{f+f_{r}+i\Gamma/2}\bigg{]}=Z(-f)^{*}\
,\quad\Gamma/2=f_{r}/2Q\ .$ (45)
Figures 15 and 16 show a fit to (45) with parameters as follows:
$f_{r}=11.549~{}{\rm GHz}\ ,\quad R_{s}=5730~{}\Omega\ ,\quad Q=6\ .$ (46)
The fit is rough in the imaginary part, but probably good enough to estimate
the magnitude of the effect of the short range wake.
Figure 15: Fit of ${\rm Re}Z$ to Lorentzian and
LRC circuit formulas.
Figure 16: Fit of ${\rm Im}Z$ to Lorentzian and
LRC circuit formulas.
Discussions of low-$Q$ resonator models in the literature usually invoke the
impedance of an LRC circuit, $Z(f)=R/(1+iQ(f_{r}/f-f/f_{r}))$, often with $Q$
near 1. As is illustrated in Figures 15 and 16, in our case with $Q=6$ the LRC
model does not give a better fit than the simpler Lorentzian, except for
enforcing $Z(0)=0$. At the expense of some complication our equations could be
modified to accommodate the LRC form, but that appears to be unnecessary, at
least in the present example.
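For reference, the two impedance forms compared in Figures 15 and 16 can be
evaluated as in the following Python sketch, which uses the fitted parameters
(46) and is purely illustrative.

```python
import numpy as np

f_r, R_s, Q = 11.549e9, 5730.0, 6.0          # fitted parameters (46)
Gamma = f_r / Q                              # full width, so Gamma/2 = f_r / 2Q

def Z_lorentz(f):
    """Lorentzian impedance (45); satisfies Z(-f) = Z(f)*."""
    return 1j * R_s * (Gamma / 2) * (1.0 / (f - f_r + 1j * Gamma / 2)
                                     + 1.0 / (f + f_r + 1j * Gamma / 2))

def Z_lrc(f):
    """LRC-circuit impedance quoted in the text; enforces Z(0) = 0."""
    return R_s / (1 + 1j * Q * (f_r / f - f / f_r))

f = np.linspace(2e9, 24e9, 6)                # sample frequencies, Hz
print(np.round(Z_lorentz(f)))
print(np.round(Z_lrc(f)))
```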
Henceforth, the impedance from (45) and (46) will be referred to as SR (short
range). Taking first a complete fill, and including just the HHC and SR, we
get the result of Fig.17.
Figure 17: Charge density for a complete fill, with HHC plus the first peak in
the short range impedance (blue), and with HHC alone (red). $I_{\rm av}=500$
mA.
Next we consider the partial fill C2 with distributed gaps as treated in the
previous section. Figures 18 and 19 show the results for HHC+SR and HHC alone.
Figure 18: Fill C2, HHC + SR, $I_{\rm av}=476.2$ mA.
Figure 19: Fill C2, HHC alone, $I_{\rm av}=476.2$ mA.
As expected, the effects of SR are more pronounced in the partial fill than in
the complete fill. Correspondingly, the maximum current achieved is 472.6 mA.
As in previous cases we expect a substantially larger effect at the design
current of 500 mA.
### VI.5 Higher order mode (HOM) of the harmonic cavity
At the present stage of design the most prominent longitudinal HOM of the HHC
for ALS-U is a TM011 mode with the following parameters [5]:
$R_{s}=3000~{}\Omega\ ,\quad Q=80\ ,\quad f_{r}=2.29~{}{\rm GHz}\ .$ (47)
A calculation for fill C2 with the HHC and this HOM gave the result of Fig.20.
The effect of the HOM on the charge densities is less than 2%, in a small
shift at the top of the distributions.
At least for the equilibrium state in ALS-U, it appears that the HOM can be
neglected. The role of HOM’s in longitudinal coupled-bunch instabilities is
discussed in Ref. [6].
Figure 20: Two bunches in fill C2 with HHC and its higher order mode. $I_{\rm
av}=500$ mA.
### VI.6 The full model: HHC+MC+SR.
We are now prepared to include the harmonic cavity, the compensated main
cavity, and the short range wake, altogether. The convergence of the Newton
sequence suffers even more than in the previous cases, and the continuation in
current reaches only $I_{\rm av}=471.9$ mA. Effects seen at this current must
severely underestimate what can be expected at 500 mA, because of the strong
variation near the design current that we have observed in every case.
For fill C2 we see the charge densities in Figures 22 and 22.
Figure 21: HHC+MC+SR, $I_{\rm av}=471.9$ mA.
Figure 22: HHC+MC, $I_{\rm av}=471.9$ mA.
### VI.7 The case of a single gap, with main cavity beam loading
It is worthwhile to examine the effect of main cavity beam loading when there
is only a single gap in the fill pattern, even though this is not directly
relevant to the ALS-U design. With 284 bunches, a gap of 44 buckets, and
HHC+MC we get the result of Fig.24 for charge densities, to be compared with
the case of HHC alone in Fig.24. This result could be obtained with the full
current of $500$ mA. The graphs show 6 bunches at the head of the train
(right), middle of the train (middle), and end of the train (left).
Figure 23: HHC+MC, single gap, $I_{\rm av}=500$ mA.
Figure 24: HHC, single gap, $I_{\rm av}=500$ mA.
The bunch lengthening is smaller and the centroid displacement greater when
the MC is included. The comparison of bunch lengthenings is shown in Fig.25.
Figure 25: Single gap, bunch lengthening ratio, for HHC+MC (blue) and with HHC
alone (red).
## VII Conclusions and outlook
Continuing the investigation of Ref. [1] we have extended the physical model
to include the effect of the main accelerating cavity in its fundamental mode,
previously omitted. We introduced a new algorithm to adjust the parameters of
the rf generator voltage so as to compensate the voltage induced in the cavity
by the beam, thus putting the net accelerating voltage at a desired value.
When the cavity is excited by a bunch train with gaps this compensation
implies a modification of the bunch profiles, which is produced automatically
in our scheme.
We illustrated the outcome for parameters of the forthcoming ALS-U storage
ring, revisiting examples treated in prabI without the main cavity. The
results are similar in modo grosso, but there are significant quantitative
differences, especially in cases of overstretching of bunches. Generally
speaking there is more bunch distortion and less symmetrical patterns in the
bunch trains, and the rms bunch lengthening is a bit smaller and much more
variable along the train. Correspondingly, the Touschek lifetime increase is
smaller and more variable over a train.
We have not tried to model the feedback system that compensates the beam
loading in practice. Our aim was only to show the theoretical existence of an
equilibrium state with precise compensation in place.
It was disappointing, and somewhat surprising, to find that the Newton
iteration to solve the coupled Haïssinski equations encounters convergence
difficulties at large current (near the design current) when either the main
cavity wake or the short range wake is added to the HHC wake.
A colleague suggested that the failure of convergence might hint at an
instability. One should be cautious about such an idea. The issue here is just
the existence of an equilibrium. An equilibrium may or may not be stable under
time evolution, so stability is a different issue.
Our failure to find an equilibrium in some cases of high current may be due to
a failure of technique, not necessarily an indication that no equilibrium
exists. At high current we are trying to achieve convergence of the Newton
iteration close to a singularity of the Jacobian, but not squarely on the
singularity. In this case it is crucial to have a starting guess sufficiently
close to a solution, but in practice the required degree of closeness is
unknown. We made some efforts to improve the guess by a seemingly careful
continuation in current from the last good solution, but there was no clear
success.
A likely remedy for the convergence failure is to return to the conventional
formulation of the Haïssinski equations as integral equations for the charge
densities, in place of the present formulation as algebraic equations for
Fourier amplitudes. For a single Haïssinski integral equation discretized on a
mesh in $z$-space, the Newton iterative solution is ultra-robust, converging
at currents far beyond realistic values [7]. It seems likely that similar
good behavior will hold for the coupled integral equations. The size of the
discretized system does not grow with the number of resonator wakes, in
contrast to the present system, and the full $z$-space description of the
short range wake could be invoked in place of the low-$Q$ resonator model.
To make this $z$-space formulation feasible on modest computer resources we
can assume that all bunch sub-trains are identical, and all separated by
identical gaps. For the ALS-U this would mean artificially increasing the
harmonic number from 328 to 330, and having 11 trains of 26 separated by gaps
of 4 buckets. Then we have 26 independent charge densities, which can
adequately be described by 100 mesh points each. Thus the Jacobian of the
Newton iteration is $2600\times 2600$, a modest size that will yield a very
quick computation.
Moreover, this identical train model would make it feasible to do a time-
domain solution of the coupled Vlasov-Fokker-Planck equations by the method of
local characteristics (discretizing the Perron-Frobenius operator) [8]. This
could answer the urgent question of stability of the equilibria, and
provide a window to the dynamics out of equilibrium. Incidentally,
determination of the instability threshold by the linearized Vlasov system
could also be attempted.
It was gratifying to find that the sub-iteration to enforce the main cavity
compensation converged very quickly whenever the main iteration converged. It
can be employed in the same way in the proposed $z$-space system. Also, the
formalism for multiple resonator wakes will still be advantageous in the
$z$-space scheme.
## VIII Acknowledgments
I thank Teresia Olsson for a helpful correspondence, Dan Wang for her wake
potential, and Tianhuan Luo for information on the HHC design. Marco Venturini
posed the main cavity compensation problem in general terms. Karl Bane
encouraged the study of the short range wake. This work was supported in part
by the U. S. Department of Energy, Contract No. DE-AC03-76SF00515. My work is
aided by an affiliation with Lawrence Berkeley National Laboratory as Guest
Senior Scientist.
## Appendix A Diagonal terms in the potential.
Here we find the formula for a generic term in the first sum of (14). For this
we revert to the notation used in the case of a single resonator.
The term in question is the last term of (51) in Ref. [1], defined through (60)
of that paper, as follows:
$\displaystyle
U_{i}^{d}(z_{i})=\frac{e^{2}N\omega_{r}R_{s}\eta\xi_{i}}{Q}\bigg{[}\int_{0}^{z_{i}}d\zeta\int_{-\Sigma}^{\zeta}\exp(-k_{r}(\zeta-u)/2Q)\cos(k_{r}(\zeta-u)+\psi)\rho_{i}(u)du$
$\displaystyle+\int_{0}^{z_{i}}d\zeta\int_{\zeta}^{\Sigma}\exp(-k_{r}(\zeta-u+C)/2Q)\cos(k_{r}(\zeta-u+C)+\psi)\rho_{i}(u)du\bigg{]}\
.$ (48)
The repeated integrals can be replaced by single integrals through integration
by parts. First apply the double angle formula to the cosine, so as to bring
out factors of $\cos(k_{r}u)$ and $\sin(k_{r}u)$. The $u$-integrals involving
those factors are functions of $\zeta$, which are to be differentiated in the
partial integration with respect to $\zeta$. The corresponding integration
with respect to $\zeta$ is done with the help of (55) and (56) (as indefinite
integrals) in Ref. [1]. The result is
$\displaystyle
U^{d}_{i}(z_{i})=\frac{ce^{2}NR_{s}\eta\xi_{i}}{Q(1+(1/2Q)^{2})}\big{[}~{}I_{1}+I_{2}~{}\big{]}\
,$ $\displaystyle
I_{1}=\int_{-\Sigma}^{z_{i}}\exp(k_{r}u/2Q)\bigg{[}a(z_{i})\cos(k_{r}u)+b(z_{i})\sin(k_{r}u)\bigg{]}\rho_{i}(u)du$
$\displaystyle\hskip
22.76228pt-\big{(}\sin\psi-\frac{1}{2Q}\cos\psi\big{)}\int_{-\Sigma}^{z_{i}}\rho_{i}(u)du\
,$ $\displaystyle
I_{2}=\int^{\Sigma}_{z_{i}}\exp(k_{r}u/2Q)\bigg{[}a(z_{i}+C)\cos(k_{r}u)+b(z_{i}+C)\sin(k_{r}u)\bigg{]}\rho_{i}(u)du$
$\displaystyle\hskip
22.76228pt+\exp(-k_{r}C/2Q)\big{(}\sin(k_{r}C+\psi)-\frac{1}{2Q}\cos(k_{r}C+\psi)\big{)}\int_{-\Sigma}^{z_{i}}\rho_{i}(u)du\
,$ $\displaystyle
a(z)=\exp\big{(}-k_{r}z/2Q\big{)}\big{(}\sin(k_{r}z+\psi)-\frac{1}{2Q}\cos(k_{r}z+\psi)\big{)}\
,$ $\displaystyle
b(z)=-\exp\big{(}-k_{r}z/2Q\big{)}\big{(}\cos(k_{r}z+\psi)+\frac{1}{2Q}\sin(k_{r}z+\psi)\big{)}\
.$ (49)
Here we have dropped and added terms independent of $z_{i}$, which only affect
the normalization (21), and have used the double angle formula in reverse to
consolidate some terms. Writing
$\int_{z_{i}}^{\Sigma}=\int_{-\Sigma}^{\Sigma}-\int_{-\Sigma}^{z_{i}}$, we see
that there are three different integrals to evaluate,
$\int_{-\Sigma}^{z_{i}}\big{[}~{}1,~{}\exp(k_{r}u/2Q)\cos(k_{r}u),~{}\exp(k_{r}u/2Q)\sin(k_{r}u)~{}\big{]}\rho_{i}(u)du\
,$ (50)
which can be built up stepwise on a mesh in $z_{i}$. Thus we can compute and
store the diagonal terms on the mesh in negligible time. Note that $I_{2}$ is
totally negligible for the small $Q$ that we encounter in representing the
geometric wake, owing to the tiny prefactor $\exp(-k_{r}C/2Q)$.
Summing (49) over the $n_{r}$ choices of the resonator parameters
$k_{r},R_{s},Q,\eta,\psi$ we obtain the first term of (14).
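A possible realization of the stepwise construction of the integrals (50) is
sketched below in Python; the running trapezoid rule and the argument names
are ours, chosen for illustration only.

```python
import numpy as np

def stepwise_integrals(z, rho, k_r, Q):
    """Cumulative values of int_{-Sigma}^{z} w(u) rho(u) du on the mesh z,
    for the three weights of (50): 1, e^{k_r u/2Q} cos(k_r u), e^{k_r u/2Q} sin(k_r u)."""
    w = np.exp(k_r * z / (2 * Q))
    integrands = [rho, w * np.cos(k_r * z) * rho, w * np.sin(k_r * z) * rho]
    dz = np.diff(z)
    def cum(y):   # running trapezoid rule, one pass over the mesh
        return np.concatenate(([0.0], np.cumsum(0.5 * (y[1:] + y[:-1]) * dz)))
    return [cum(y) for y in integrands]
```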
## References
* (1) R. Warnock and M. Venturini, Equilibrium of an arbitrary bunch train in presence of a passive harmonic cavity: Solution through coupled Haïssinski equations, Phys. Rev. Accel. Beams 23, 064403 (2020).
* (2) C. Steier, A. Anders, J. Byrd, K. Chow, R. Duarte, J. Jung, T. Luo, H. Nishimura, T. Oliver, J. Osborn et al., “R+D progress towards a diffraction limited upgrade of the ALS”, Proc. IPAC2016, Busan, Kroea.
* (3) C. Steier, A. Allézy, A. Anders, K. Baptiste, J. Byrd, K. Chow, G. Cutler, R. Donahue, R. Duarte, J.-Y. Jung et al., “Status of the conceptual design of ALS-U”, Proc. IPAC2017, Copenhagen, Denmark.
* (4) Dan Wang, Lawrence Berkeley National Laboratory, private communication. This is from work in progress.
* (5) Tianhuan Luo, Lawrence Berkeley National Laboratory, private communication.
* (6) F. J. Cullinan, Å. Andersson, and P. F. Tavares, Harmonic-cavity stabilization of longitudinal coupled-bunch instabilities with a nonuniform fill, Phys. Rev. Accel. Beams 23, 074402 (2020).
* (7) R. Warnock and K. Bane, Numerical Solution of the Haïssinski Equation for the Equilibrium State of a Stored Electron Beam, Phys. Rev. Accel. Beams 21, 124401 (2018).
* (8) R. Warnock, Study of Bunch Instabilities by the Nonlinear Vlasov-Fokker-Planck Equation, Nucl. Instrum. Methods Phys. Res. A 561, 186 (2006).
Anastassia M. Makarieva1,2 and Andrei V. Nefiodov1
1 Theoretical Physics Division, Petersburg Nuclear Physics Institute, Gatchina 188300, St. Petersburg, Russia
2 Institute for Advanced Study, Technical University of Munich, Garching bei München 85748, Germany
Correspondence: A. M. Makarieva<EMAIL_ADDRESS>
# Alternative expression for the maximum potential intensity of tropical
cyclones
###### Abstract
Emanuel’s concept of maximum potential intensity (E-PI) estimates the maximum
velocity of tropical cyclones from environmental parameters. At the point of
maximum wind, E-PI’s key equation relates proportionally the centrifugal
acceleration (squared maximum velocity divided by radius) to the radial
gradient of saturated moist entropy. The proportionality coefficient depends
on the outflow temperature. Here it is shown that a different relationship
between the same quantities derives straightforwardly from the gradient-wind
balance and the definition of entropy, with the proportionality coefficient
depending on the radial gradient of local air temperature. The robust
alternative reveals a previously unexplored constraint: for E-PI to be valid,
E-PI to be valid, the outflow temperature should be a function of the
gradient at the point of maximum wind. When the air is horizontally isothermal
(which, as we argue, is not an uncommon condition), this constraint cannot be
satisfied, and E-PI’s key equation underestimates the squared maximum velocity
by approximately twofold. This explains “superintensity” (maximum wind speeds
exceeding E-PI). The new formulation predicts less superintensity at higher
temperatures, corroborating recent numerical simulations. Previous analyses
are re-evaluated to reveal inconsistent support for the explanation of
superintensity by supergradient winds alone. In Hurricane Isabel 2003, maximum
superintensity is found to be associated with minimal gradient-wind imbalance.
Modified to diagnostically account for supergradient winds, the new
formulation shows that air temperature increasing towards the storm center can
mask the effect of gradient-wind imbalance, thus reducing “superintensity” and
formally bringing E-PI closer to observations. The implications of these
findings for assessing real storms are discussed.
Tropical storms threaten human lives and livelihoods. Numerical models can
simulate a wide range of storm intensities under the same environmental
conditions (e.g., Tao et al., 2020a). Thus it is desirable to have a reliable
theoretical framework that would, from the first principles, confine model
outputs to the domain of reality (Emanuel, 2020). The theoretical formulation
for maximum potential intensity of tropical cyclones by Emanuel (1986) (E-PI)
has been long considered as an approximate upper limit on storm intensity (see
discussions by Garner (2015), Kieu and Moon (2016) and Kowaleski and Evans
(2016)). At the same time, the phenomenon of “superintensity”, when the
observed or modelled storm velocities exceed E-PI, has been perceived as an
important research challenge (e.g., Persing and Montgomery, 2003; Montgomery
et al., 2006; Bryan and Rotunno, 2009a; Rousseau-Rizzi and Emanuel, 2019; Li
et al., 2020). Since the strongest storms are the most dangerous ones, it is
important to understand when and why the theoretical limits can be exceeded.
The principal way of approaching the superintensity problem was to reveal how
the E-PI assumptions can be modified to yield greater intensities. For
example, Montgomery et al. (2006) suggested that superintensity could result
from an additional heat source provided by the storm eye (a source of energy
not considered in E-PI). Bryan and Rotunno (2009b) evaluated this mechanism in
a numerical modelling study and found it to be small. In another numerical
simulation, Bryan and Rotunno (2009a) investigated how superintensity could
result from the flow being supergradient (while E-PI assumed the gradient-wind
balance) and found this mechanism to be more significant than the eye heat
source. For a recent overview of superintensity assessments in modelling
studies see Rousseau-Rizzi and Emanuel (2019).
Here we present a different approach. We show that, even in the case when all
the E-PI assumptions hold, E-PI will systematically underestimate storm
intensities provided the air is horizontally isothermal at the point of
maximum wind. This conclusion follows straightforwardly from the definition of
saturated moist entropy. At the point of maximum wind E-PI relates the radial
gradients $\partial s^{*}/\partial r$ and $\partial p/\partial r$ of saturated
moist entropy $s^{*}$ and air pressure $p$ via an external parameter (the
outflow temperature $T_{o}$). However, $s^{*}$ being a state variable, its
radial gradient is a local function of the radial gradients of air pressure
$p$ and temperature $T$. Thus, specifying a relationship between $\partial
s^{*}/\partial r$ and $\partial p/\partial r$ uniquely sets $\partial
T/\partial r$. Conversely, setting $\partial T/\partial r=0$ relates $\partial
s^{*}/\partial r$ and $\partial p/\partial r$ in a specific way that, under
common atmospheric conditions, is shown here to be incompatible with E-PI. For
E-PI to match observations, $\partial T/\partial r$ at the point of maximum
wind must be a function of the outflow temperature $T_{o}$.
The assumption of horizontal isothermy is not required for the derivation of
E-PI’s expression for maximum velocity (see, e.g., Eqs. (1)-(22) of Emanuel
and Rotunno (2011) and Eq. (5) below). Probably because of that, and since the
above constraint on E-PI remained unknown, it became common, in diverse E-PI
developments, to treat the horizontal temperature gradient as a free parameter
to be specified at one’s discretion, with a common assumption of $\partial
T/\partial r=0$ at the top of the boundary layer. In particular, Emanuel and
Rotunno (2011, p. 2245) assumed temperature $T_{b}$ of the top of the boundary
layer to be constant in their attempt to modify E-PI’s expression for maximum
velocity to account for a possible dependence of the outflow temperature on
angular momentum. Emanuel (1986, p. 588), see also Emanuel (1988), likewise
assumed a constant $T_{b}$ while assessing storm intensity in terms of the
minimum central pressure. The same assumption was used in the many E-PI
representations of tropical cyclones as Carnot heat engines (e.g., Emanuel
(1986, Fig. 13), Emanuel (1991, Eq. (7)), Emanuel (2006)). More recently,
Rousseau-Rizzi and Emanuel (2019) extended the E-PI concept to describe
surface winds and also assumed horizontal isothermy. Thus, $\partial
T/\partial r=0$ appears to be an a priori plausible and widely used
approximation, which we will therefore consider in greater detail.
To clarify the logic of the foregoing analyses for our readers, we believe
that it could be useful to contrast the following two viewpoints. The first
one is from one of our reviewers: E-PI does not make any assumptions about the
horizontal temperature gradient; E-PI conforms to observations, and where it
does not, the mismatch has been explained by supergradient winds. As far as
E-PI does not assume horizontal isothermy, it is not valid to consider the
case of horizontal isothermy and to infer from this, as we do based on our
alternative formulation, that E-PI underestimates maximum winds.
Our alternative viewpoint is as follows. While we agree that E-PI does not
assume anything about the horizontal temperature gradient, here we show that
it predicts it. Thus, assessing the value of $\partial T/\partial r$ is a test
of E-PI’s validity. We will give examples and argue that in many cases a
negligible or small temperature gradient at the point of maximum wind is a
plausible approximation. In those cases E-PI does not conform to observations
and does underestimate maximum winds. We have re-evaluated the study of
supergradient winds by Bryan and Rotunno (2009a) and found that their analysis
is not self-consistent and does not explain superintensity either in Hurricane
Isabel 2003 or in their own numerical model (see, respectively, section 2 and
Appendix C). At this point, this leaves the new formulation, which is easily
modified to diagnostically account for supergradient winds, as the only
available explanation of “superintensity”.
We derive the alternative expression for maximum potential intensity and
discuss how the conventional and alternative expressions relate to each other
as dependent on temperature in section 1. We illustrate the obtained
relationships with the data for Hurricane Isabel 2003 in section 2 and discuss
their implications in section 3.
## 1 Different expressions for maximum intensity
### 1.a Conventional E-PI
E-PI has three blocks, with distinct sets of assumptions applied to each
block: the free troposphere including the top of the boundary layer, the
interior of the boundary layer and the ocean-atmosphere interface (Table 1).
Here we focus on the first block.
For the free troposphere, the key relationship of E-PI is between saturated
moist entropy $s^{*}$ and angular momentum $M$ (for a compact derivation see
Emanuel and Rotunno, 2011, Eq. (11)):
$-(T_{1}-T_{2})\frac{ds^{*}}{dM}=\frac{v_{1}}{r_{1}}-\frac{v_{2}}{r_{2}},\quad
z\geq z_{b}.$ (1)
Here $T_{1}$, $T_{2}$ and $v_{1}$, $v_{2}$ are, respectively, air temperatures
and tangential wind speeds at arbitrary distances $r_{1}$ and $r_{2}$ from the
storm center on a surface defined by the given value of $ds^{*}/dM$, $z_{b}$
is the height of the boundary layer, and
$M=vr+\frac{1}{2}fr^{2}.$ (2)
The Coriolis parameter $f\equiv 2\Omega\sin\varphi$ is assumed constant
($\varphi$ is latitude, $\Omega$ is the angular velocity of Earth’s rotation).
For the definition of saturated moist entropy see Eq. (A1) in Appendix A.
Relationship (1) is derived assuming that for $z\geq z_{b}$ hydrostatic and
gradient-wind balances hold and surfaces of constant $s^{*}$ and $M$ coincide
(Emanuel, 1986; Emanuel and Rotunno, 2011).
One can choose $r_{1}$ at the top of the boundary layer ($z=z_{b}$) and
$r_{2}$ in the outflow in the free troposphere, where $v_{2}=0$. Then, since
$s^{*}=s^{*}(M)$, multiplying Eq. (1) by $\partial M/\partial r$ gives (cf.
Emanuel, 1986, Eq. (12)):
$\varepsilon T_{b}\frac{\partial s^{*}}{\partial r}=-\frac{v}{r}\frac{\partial
M}{\partial r},\quad z=z_{b}.$ (3)
Here $\varepsilon\equiv(T_{b}-T_{o})/T_{b}$ is the Carnot efficiency,
$T_{b}=T_{1}$ is the local temperature at the top of the boundary layer,
$T_{o}=T_{2}$ is the outflow temperature. Note that Eq. (3) does not assume
$\partial T_{b}/\partial r=0$.
If the radial gradients of $s^{*}$ and $M$ relate as their respective surface
fluxes $\tau_{s}$ and $\tau_{M}$, see Table 1,
$\frac{\partial s^{*}/\partial r}{\partial M/\partial
r}=\frac{\tau_{s}}{\tau_{M}}=-\frac{C_{k}}{C_{D}}\frac{k_{s}^{*}-k_{0}}{T_{s}rv_{s}},$
(4)
Eq. (3) yields the E-PI expression for maximum intensity
$v_{E}^{2}=\varepsilon\frac{C_{k}}{C_{D}}(k_{s}^{*}-k_{0}).$ (5)
All the notations are given in Table 1. The local difference between saturated
$k_{s}^{*}$ and actual $k_{0}$ enthalpies at the air-sea interface is a priori
unknown. To relate it to environmental parameters, yet another set of
assumptions is required, see block E-III in Table 1.
Equation (5) assumes that $v(z_{b})=v_{s}$ and $T_{b}=T_{s}$, where the
subscript $s$ refers to $z=0$. Nuances stemming from $v(z_{b})\neq v_{s}$ and
$T_{b}\neq T_{s}$ (for their discussion, see Emanuel and Rotunno, 2011) do not
matter for the results below, as these nuances equally affect $v_{E}$ and
the alternative estimate $v_{A}$ to be derived in the next section.
### 1.b Alternative expression for maximum potential intensity
Since saturated moist entropy $s^{*}$ is a state variable, its radial gradient
can be expressed in terms of the radial gradients of air pressure and
temperature (see Eq. (A9)):
$\frac{1}{1+\zeta}T\frac{\partial s^{*}}{\partial r}=-\alpha_{d}\frac{\partial
p}{\partial r}\left(1-\frac{1}{\Gamma}\frac{\partial T/\partial r}{\partial
p/\partial r}\right),$ (6)
where $p$ is air pressure, $\zeta\equiv L\gamma_{d}^{*}/(RT)$, $R=8.3$ J
mol$^{-1}$ K$^{-1}$ is the universal gas constant, $\gamma_{d}^{*}\equiv p_{v}^{*}/p_{d}$,
$p_{v}^{*}$ is the partial pressure of saturated water vapor, $p_{d}$ is the
partial pressure of dry air, $L\simeq 45$ kJ mol$^{-1}$ is the latent heat of
vaporization, $\Gamma$ (K Pa$^{-1}$) is the moist adiabatic lapse rate of air
temperature (see its definition (A10)), and $\alpha_{d}\equiv 1/\rho_{d}$ is
the inverse dry air density. Below we assume $\alpha_{d}\simeq\alpha$, where
$\alpha\equiv 1/\rho$ is the inverse air density. Equation (6) does not
contain any assumptions but follows directly from the definition of saturated
moist entropy.
In gradient-wind balance,
$\alpha\frac{\partial p}{\partial r}=\frac{v^{2}}{r}+fv,$ (7)
and at the radius of maximum wind $r=r_{m}$, where $\partial v/\partial r=0$
and $\partial M/\partial r=v+fr$, we have
$\alpha\frac{\partial p}{\partial r}=\frac{v}{r}\frac{\partial M}{\partial
r},\,\,\,r=r_{m}.$ (8)
Introducing
$\mathcal{C}\equiv 1-\frac{1}{\Gamma}\frac{\partial T/\partial r}{\partial
p/\partial r}$ (9)
and combining Eqs. (6)-(9) we obtain an alternative version of Eq. (3),
$\frac{1}{\mathcal{C}(1+\zeta)}T_{b}\frac{\partial s^{*}}{\partial
r}=-\frac{v}{r}\frac{\partial M}{\partial r},\quad r=r_{m},z=z_{b},$ (10)
from which, using Eq. (4), our alternative expression for maximum intensity
results:
$v_{A}^{2}=\frac{1}{\mathcal{C}(1+\zeta)}\frac{C_{k}}{C_{D}}(k_{s}^{*}-k_{0}).$
(11)
Equation (4) has been used for deriving maximum intensities (5) and (11) from,
respectively, Eqs. (3) and (10). The assumptions yielding Eq. (4) pertain to
the boundary layer. They are independent of the E-PI assumptions behind Eq.
(3) that pertain to the free troposphere, see Table 1. The difference in
maximum intensities $v_{E}$ (5) and $v_{A}$ (11),
$\left(\frac{v_{A}}{v_{E}}\right)^{2}=\frac{1}{\varepsilon\mathcal{C}(1+\zeta)},$
(12)
stems from Eqs. (3) and (10). Both equations assume gradient-wind balance.
Equation (3) is valid at $z=z_{b}$, Eq. (10) is valid at the point of maximum
wind $r=r_{m}$, $z=z_{b}$. (We assume, as does E-PI, that the point of maximum
wind is at $z=z_{b}$.) Equation (3) assumes hydrostatic balance and
$s^{*}=s^{*}(M)$ for $z\geq z_{b}$. Equation (10) does not require these
assumptions. Therefore, at the point of maximum wind, Eq. (10) is more general
than Eq. (3). As such, Eq. (10) can be used to test the validity of Eq. (3)
and, hence, of Eq. (5) versus Eq. (11).
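To make these relationships concrete, the following minimal Python sketch evaluates Eq. (12). The input values ($T_{b}=293$ K, $T_{o}=213$ K, $p_{v}^{*}=23$ hPa, $p_{d}=850$ hPa) are illustrative choices on our part, corresponding roughly to the Hurricane Isabel conditions discussed in section 2.

```python
# A minimal sketch of Eq. (12), assuming illustrative input values.
R = 8.3       # universal gas constant, J mol-1 K-1
L = 45.0e3    # latent heat of vaporization, J mol-1

def zeta(T, p_v_star, p_d):
    """zeta = L*gamma_d*/(R*T) with gamma_d* = p_v*/p_d, Eq. (A10)."""
    return (L / (R * T)) * (p_v_star / p_d)

def ratio_squared(T_b, T_o, p_v_star, p_d, C=1.0):
    """(v_A/v_E)^2 = 1/(epsilon*C*(1+zeta)), Eq. (12)."""
    eps = (T_b - T_o) / T_b   # Carnot efficiency
    return 1.0 / (eps * C * (1.0 + zeta(T_b, p_v_star, p_d)))

# T_b = 293 K, T_o = 213 K, p_v* = 23 hPa, p_d = 850 hPa:
print(ratio_squared(293.0, 213.0, 23.0, 850.0))  # ~2.4, i.e., v_A/v_E ~ 1.6
```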
Table 1: Alternative formulation of maximum intensity (A-I), three logical
blocks of E-PI (E-I, E-II and E-III) and the resulting E-PI and alternative
upper limits on maximum velocity. The alternative estimate assumes that the
E-PI assumptions E-II and E-III are valid.
Atmospheric region | Assumptions | Key relationship | References
---|---|---|---
A-I. Point of maximum wind | The air is in gradient-wind balance; $v_{m}/r_{m}\gg f/2$ | $\displaystyle\frac{v_{m}}{r_{m}}=-\frac{1}{(1+\zeta)\mathcal{C}}T_{b}\frac{\partial s^{*}/\partial r}{\partial M/\partial r}$ | Present work
E-I. Upper atmosphere and the top of boundary layer ($z\geq z_{b}$) | The air is in hydrostatic and gradient-wind balance; surfaces of constant saturated moist entropy $s^{*}$ and angular momentum $M$ coincide; $v_{m}/r_{m}\gg f/2$ | $\displaystyle\frac{v_{m}}{r_{m}}=-\varepsilon T_{b}\frac{\partial s^{*}/\partial r}{\partial M/\partial r}$ | Emanuel (1986, Eqs. (12), (13))
E-II. Boundary layer near the radius of maximum wind ($0<z\leq z_{b}$) | Horizontal turbulent fluxes of $s^{*}$ and $M$ are negligible compared to vertical ones; surfaces of constant $s^{*}$ and $M$ are approximately vertical; turbulent fluxes of $s^{*}$ and $M$ vanish at $z=z_{b}$ | $\displaystyle\frac{\partial s^{*}/\partial r}{\partial M/\partial r}=\frac{\tau_{s}}{\tau_{M}}=-\frac{C_{k}}{C_{D}}\frac{k_{s}^{*}-k_{0}}{T_{s}rv_{s}}$ | Emanuel (1986, Eqs. (32), (33)), Emanuel and Rotunno (2011, Eqs. (17), (19), (20))
E-III. Air-sea interface near the radius of maximum wind | The upper limit for the air-sea disequilibrium is set by the ambient relative humidity $\mathcal{H}_{a}$ | $k_{s}^{*}-k_{0}\simeq L_{v}(q_{s}^{*}-q_{0})$, $q_{s}^{*}-q_{0}\lesssim(1-\mathcal{H}_{a})q^{*}_{a}$ | Emanuel (1995, p. 3971), Emanuel (1989, Eq. (38))
E-PI estimate | $T_{b}\simeq T_{s}$, $r=r_{m}$, $v_{s}(r_{m})\simeq v_{m}$ | $\displaystyle v_{m}^{2}\lesssim\hat{v}_{E}^{2}\equiv\varepsilon\frac{C_{k}}{C_{D}}L_{v}(1-\mathcal{H}_{a})q^{*}_{a}$ | Emanuel (1989, Eq. (38) and Table 1)
Alternative PI estimate | $T_{b}\simeq T_{s}$, $r=r_{m}$, $v_{s}(r_{m})\simeq v_{m}$ | $\displaystyle v_{m}^{2}\lesssim\hat{v}_{A}^{2}\equiv\frac{1}{(1+\zeta)\mathcal{C}}\frac{C_{k}}{C_{D}}L_{v}(1-\mathcal{H}_{a})q^{*}_{a}$ | Present work
* Notes: $v_{m}$ and $r_{m}$ are the maximum velocity and the radius where it is observed; $\tau_{s}$ and $\tau_{M}$ are the surface fluxes of, respectively, entropy and angular momentum; $C_{k}$ and $C_{D}$ are exchange coefficients for enthalpy and momentum; $r$ is local radius; $k_{s}^{*}$ is saturated enthalpy at sea surface temperature $T_{s}$; $k_{s}^{*}-k_{0}=c_{p}(T_{s}-T_{0})+L_{v}(q_{s}^{*}-q_{0})$, where $c_{p}$ is the specific heat capacity of air at constant pressure, $L_{v}$ is the latent heat of vaporization, $q_{s}^{*}$ is the saturated mixing ratio at $T_{s}$; $v_{s}$, $k_{0}$, $q_{0}$ and $T_{0}$ are, respectively, the tangential wind speed, enthalpy, water vapor mixing ratio and air temperature at a reference height (usually about 10 m above the sea level); $\mathcal{H}_{a}$ and $q^{*}_{a}$ are the relative humidity and saturated mixing ratio at the sea surface temperature in the ambient environment outside the storm core. For $C_{k}/C_{D}\simeq 1$, $\mathcal{H}_{a}=0.8$ and $T_{b}\simeq T_{s}=300$ K, we obtain the E-PI upper limit $\hat{v}_{E}=60$ m s$^{-1}$ for $\varepsilon\simeq 0.3$ and the alternative upper limit $\hat{v}_{A}=85$ m s$^{-1}$ for $\mathcal{C}=1$ (isothermal case).
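The arithmetic behind the numerical estimates in the notes can be sketched as follows; the Magnus approximation for $p_{v}^{*}$ and the surface pressure of $1000$ hPa are our own illustrative assumptions, not specified in the text.

```python
# A sketch of the upper limits quoted in the Table 1 notes: C_k/C_D = 1,
# H_a = 0.8, T_b = T_s = 300 K, epsilon = 0.3, C = 1 (isothermal case).
import math

R, L, L_v = 8.3, 45.0e3, 2.5e6   # J mol-1 K-1, J mol-1, J kg-1

def p_v_star(T):
    """Magnus approximation for saturation vapor pressure, hPa (T in K)."""
    t = T - 273.15
    return 6.112 * math.exp(17.62 * t / (243.12 + t))

T_s, p_s, H_a, eps = 300.0, 1000.0, 0.8, 0.3
pv = p_v_star(T_s)                  # ~35 hPa
p_d = p_s - pv
q_a_star = 0.622 * pv / p_d         # ambient saturated mixing ratio, ~0.023
dk = L_v * (1.0 - H_a) * q_a_star   # bound on k_s* - k_0, J kg-1
zeta = (L / (R * T_s)) * (pv / p_d)

print(round(math.sqrt(eps * dk)))           # ~58 m/s, cf. hat{v}_E = 60 m/s
print(round(math.sqrt(dk / (1.0 + zeta))))  # ~83 m/s, cf. hat{v}_A = 85 m/s
```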
## 2 Comparison of conventional and alternative maximum intensities
### 2.a Horizontal isothermy
Equation (3) (key to E-PI) is valid, i.e., $v_{E}=v_{A}$, if
$\varepsilon=\frac{1}{\mathcal{C}(1+\zeta)}.$ (13)
Since $\zeta$ (A10) is proportional to the saturated partial pressure of water
vapor and increases exponentially with temperature, Eq. (13) implies that,
for a given $\mathcal{C}$, the value of $\varepsilon$ must decline with
increasing temperature for $v_{E}=v_{A}$ to hold.
While the assumption of an isothermal top of the boundary layer ($\mathcal{C}=1$)
is not required for deriving $v_{E}$ (5), Emanuel (1986, cf. his Eqs. (13),
(17) and (26)) did use this assumption to derive the central surface
pressure.[^1] Temperature $T_{b}$ of the top of the boundary layer was taken “to
be constant as is generally observed (e.g., see Frank, 1977)” (Emanuel, 1986,
p. 588). Emanuel and Rotunno (2011, p. 2245) also assumed $T_{b}=\rm const$.
This apparently plausible assumption deserves special consideration.

[^1]: Dr. Steve Garner noted that a factor approximately equal to
$1-\varepsilon(1+\zeta)$ is present in the denominator of Emanuel (1986)’s Eq.
(26) for central pressure. This singularity appeared there because, to derive
his Eq. (26), Emanuel (1986) simultaneously used $\varepsilon T_{b}\partial
s^{*}/\partial r=-\alpha\partial p/\partial r$ and a version of our Eq. (6)
for unsaturated isothermal air, Emanuel (1986)’s Eqs. (21) and (25),
respectively, as if they were independent constraints. Since, in view of Eq.
(12), they are not, Emanuel (1986)’s Eq. (26) is an identity. Hypercanes
introduced based on this equation are a misconception, see Appendix B for
details.
With $\mathcal{C}=1$, Eq. (13) is not satisfied under common atmospheric
conditions (Fig. 1a). The maximum Carnot efficiency estimated from the
temperatures observed in the outflow and at the top of the boundary layer is
$\varepsilon=0.35$ (DeMaria and Kaplan, 1994). Assuming that $T_{b}$ does not
usually exceed $303$ K ($30$℃), the minimum value of $1/(1+\zeta)=0.5$ is
$1.4$-fold larger than this maximum $\varepsilon$. It corresponds to the
largest $\gamma_{d}^{*}\simeq 0.05$ for $T_{b}=303$ K and $p_{d}\simeq p=800$
hPa. The partial pressure $p_{v}^{*}$ of saturated water vapor and, hence,
$\gamma_{d}^{*}$ depend exponentially on air temperature. Realistic
temperatures at the top of the boundary layer are commonly well below $303$ K,
so the discrepancy between $v_{E}$ and $v_{A}$ should commonly be considerably
larger (Fig. 1a). The mismatch diminishes when $\mathcal{C}>1$, i.e., when
$T_{b}$ increases towards the storm center.
Since $1/(1+\zeta)$ declines with $T$, the discrepancy between $v_{E}$ and
$v_{A}$ diminishes with increasing temperature (Fig. 1). This explains why,
beyond a certain temperature, superintensity $v_{A}/v_{E}>1$ becomes
impossible and changes to what Li et al. (2020), who established this pattern
in a numerical modelling study, termed “sub-MPI intensity”. The particular
temperature at which this shift occurs depends on the value of $\mathcal{C}$.
Figure 1b shows that $(v_{L}/v_{E})^{2}$, where $v_{L}$ is the maximum
intensity produced by the model of Li et al. (2020), declines with growing
temperature faster than does
$(v_{A}/v_{E})^{2}=1/[\varepsilon\mathcal{C}(1+\zeta)]$, Eq. (12), if the
outflow temperature $T_{o}$, $\mathcal{C}$ and $p$ (the latter enters the
definition of $\zeta$) are assumed to be temperature-independent. This
indicates that $\mathcal{C}$ should increase, and/or $p$ and/or $T_{o}$ should
decrease, at higher temperatures, propositions that could be tested in future
studies.[^2]

[^2]: Note that Li et al. (2020) applied Eq. (5) to surface wind, so $p$ and
$T$ for their data shown in our Fig. 1b refer to surface pressure and
temperature at the point of maximum wind. Since they observed an increased
storm intensity at higher surface temperatures, a decrease of surface pressure
is quite plausible.
Figure 1: Parameters $\varepsilon$ versus $1/(1+\zeta)$ (a) and
$(v_{A}/v_{E})^{2}$ (b) as dependent on temperature $T$. In (a), the
$\varepsilon\equiv(T_{b}-T_{o})/T_{b}$, with $T_{b}=T$ curves correspond to
different outflow temperatures $T_{o}$; the $1/(1+\zeta)$ curves correspond to
$p_{d}$ values of $800$ and $900$ hPa, see Eq. (A10). In (b), the solid curve
and the dashed curves correspond to $\mathcal{C}=1$ and $\mathcal{C}=2$,
respectively, in Eq. (12); $T_{o}$ and $p_{d}$ used to calculate $\varepsilon$
and $\zeta$ are shown at the curves. Open circles and solid squares correspond
to $(v_{L}/v_{E})^{2}$ from, respectively, Fig. 2a and Fig. 11c of Li et al.
(2020), where $v_{L}$ is the maximum surface wind speed derived from their
model and $v_{E}$ is the maximum intensity calculated by Li et al. (2020) from
Eq. (5) with sea surface temperature used instead of $T_{b}$ in $\varepsilon$.
### 2.b Supergradient wind and Hurricane Isabel 2003
Figure 2: Moist $\Gamma$ and dry $\Gamma_{d}$ adiabatic temperature gradients
as dependent on temperature. They are calculated for $p=850$ hPa, see Eq.
(A10). The circles indicate the mean temperature gradients (K hPa$^{-1}$) observed
in Hurricane Isabel 2003 on September 13 between the eyewall and the outer
core at the surface (i) and at the top of the boundary layer (ii) and between
the eye and the eyewall at the top of the boundary layer (iii); the squares
indicate the local temperature gradients (K hPa$^{-1}$) at the point of maximum
wind calculated from Eq. (15) and the data of Table 2 for Hurricane Isabel
2003 on September 12, 13 and 14. Note that negative values correspond to
temperature increasing towards the storm center. See section 2.b for
calculation details.
In the general case, instead of the gradient-wind balance (7), we can write
$\alpha\frac{\partial p}{\partial r}\equiv\mathcal{B}\frac{v^{2}}{r},$ (14)
where $\mathcal{B}$ defines the degree to which the flow is radially
unbalanced: $\mathcal{B}<1$ for the supergradient flow when the outward-
directed centrifugal force is larger than the inward-pulling pressure
gradient. For example, in the numerical experiment of Bryan and Rotunno
(2009a, Fig. 8), $\mathcal{B}\simeq 0.5$ at the point of maximum wind, but
$\mathcal{B}\simeq 1$ at the radius of maximum wind at the surface (see Fig. 8
of Bryan and Rotunno (2009a) and Appendix C). (Notably, if the gradient-wind
imbalance is indeed minimal at the surface, it could not explain
superintensity of surface winds in the simulations of Li et al. (2020).)
If, under supergradient conditions, $v/r\gg\partial v/\partial r\simeq 0$ at
the point of maximum wind, we have $\partial M/\partial r\simeq v+fr\simeq v$
(assuming $v/r\gg f$) and Eq. (6) can be written as
$\frac{1}{(1+\zeta)}\frac{T_{b}}{\mathcal{B}\mathcal{C}}\frac{\partial
s^{*}}{\partial r}=-\frac{v_{m}^{2}}{r_{m}}.$ (15)
In comparison, Eq. (3) at the point of maximum wind becomes (see, e.g., Bell
and Montgomery, 2008, Eq. (5)):
$\varepsilon T_{b}\frac{\partial s^{*}}{\partial r}=-\frac{v_{E}^{2}}{r_{m}}.$
(16)
Comparison of Eq. (15) with Eq. (16) reveals that the flow being supergradient
($\mathcal{B}<1$) and air temperature declining towards the storm center
($\mathcal{C}<1$) cause $v_{E}$ to underestimate maximum velocity $v_{m}$ (15)
more than it does in the radially balanced isothermal case ($\mathcal{B}=1$,
$\mathcal{C}=1$). Conversely, for E-PI’s Eq. (16) to be consistent with
observations for a radially balanced ($\mathcal{B}=1$) or supergradient
($\mathcal{B}<1$) flow, the temperature at the point of maximum wind must
increase towards the hurricane center ($\mathcal{C}>1$).
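Dividing Eq. (15) by Eq. (16) makes the combined effect of $\mathcal{B}$ and $\mathcal{C}$ explicit:

$\left(\frac{v_{m}}{v_{E}}\right)^{2}=\frac{1}{\varepsilon\mathcal{B}\mathcal{C}(1+\zeta)},$

which reduces to Eq. (12) for $\mathcal{B}=1$ and underlies the numerical estimates below and in Table 2.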
When, as is the case in the stronger storms, the pressure gradient is
sufficiently steep and the radial motion sufficiently rapid, the radial
expansion of air is accompanied by a drop of temperature. In the well-studied
Hurricane Isabel 2003 the surface air cooled by about 4 K while moving from
the outer core (150-250 km) to the eyewall (40-50 km) (Montgomery et al.,
2006, Fig. 4c). Over the same distance, the surface pressure fell by less than
50 hPa (from less than 1013 hPa to 960 hPa) (Aberson et al., 2006, Fig. 4).
(Air pressure at the outermost closed isobar $\sim 465$ km from the center was
1013 hPa, hence at 150-250 km from the center it should have been smaller.)
With $\Delta p\simeq-45$ hPa and $\Delta T\simeq-4$ K, at $T\simeq 300$ K and
$p\simeq 10^{3}$ hPa, the horizontal temperature gradient at the surface
$\Delta T/\Delta p=0.09$ K hPa$^{-1}$ approaches the dry adiabatic gradient
$\Gamma_{d}=\mu T/p=0.1$ K hPa$^{-1}$ (see Fig. 2).
At the top of the boundary layer the radial flow is weaker than it is on the
surface, and the mean horizontal temperature gradient is smaller (Montgomery
et al., 2006, Fig. 4b,c). At the level of maximum wind $z_{m}=1$ km in
Hurricane Isabel 2003 the temperature difference between the eyewall and the
outer core was $\Delta T\simeq-2$ K (Montgomery et al., 2006, Fig. 4c).
Assuming that the pressure difference at this level is about $0.9$ of its
value at the surface, see Eq. (A13), $\Delta p\simeq-0.9\times 45$ hPa, we
have $\Delta T/\Delta p=0.05$ K hPa$^{-1}$. The mean horizontal temperature
gradient between the outer core and the eyewall at the top of the boundary
layer approaches the moist adiabatic gradient $\Gamma=0.04$ K hPa$^{-1}$ for
$T=293$ K and $p=850$ hPa (Fig. 2).
In the eye, the surface heat fluxes and the descending air motion work to
elevate the air temperature above that at the eyewall and in the outer core.
The air temperature in the eye rises towards the storm center and $\partial
T/\partial r<0$. For Hurricane Isabel 2003, with pressure and temperature
differences at $z_{m}=1$ km between the eye and the eyewall $\Delta
p\simeq-30$ hPa (Aberson et al., 2006, Fig. 4) and $\Delta T=3$ K (Montgomery
et al., 2006, Fig. 4c), for $p=850$ hPa and $T=293$ K we have $\Delta T/\Delta
p=-0.1$ K hPa$^{-1}$ $\simeq-2.5\Gamma$ (Fig. 2).
That the horizontal temperature gradient changes its sign somewhere in the
eyewall suggests that $\partial T/\partial r=0$ at the point of maximum wind
is a plausible assumption (see section 3.a). However, the magnitudes of
horizontal temperature gradients on both sides of the eyewall are large enough
to significantly impact the maximum velocity estimates (Fig. 2).
For example, if at the point of maximum wind the horizontal temperature
gradient were close to $\Gamma$ (as it was on average between the eyewall and
the outer core in Hurricane Isabel 2003), then $\mathcal{C}\to 0$ and
$\mathcal{B}\mathcal{C}(1+\zeta)\varepsilon\to 0$. In this case E-PI’s $v_{E}$
would formally underestimate $v_{m}$ (15) by an arbitrarily large factor.
Physically, this limit
corresponds to the situation when the moist adiabat is locally horizontal,
$\partial s^{*}/\partial r\to 0$, such that the dependence between saturated
moist entropy and maximum velocity vanishes, see Eq. (15).
If, on the other hand, at the point of maximum wind the horizontal temperature
gradient were equal to $-2.5\Gamma$ (as it was on average between the eye and
the eyewall in Hurricane Isabel 2003), then $\mathcal{C}=3.5$. In this case,
E-PI’s $v_{E}^{2}$ would overestimate $v_{m}^{2}$ by
$\mathcal{B}\mathcal{C}(1+\zeta)\varepsilon=1.4$-fold for a balanced flow
($\mathcal{B}=1$). With $\mathcal{B}=0.8$ as discussed above, the overestimate
reduces to $1.1$ (about $10\%$).
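These two scenarios amount to simple arithmetic, sketched below with the Hurricane Isabel values used elsewhere in the text ($\varepsilon=0.27$ and $\zeta=0.49$, see Table 2 and its notes).

```python
# A quick check of the two scenarios above.
eps, zeta = 0.27, 0.49

# C = 3.5 (temperature rising towards the center), balanced flow (B = 1):
print(1.0 * 3.5 * (1.0 + zeta) * eps)  # ~1.41: v_E^2 overestimates ~1.4-fold

# Same C but supergradient flow, B = 0.8:
print(0.8 * 3.5 * (1.0 + zeta) * eps)  # ~1.13: overestimate of about 10%
```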
For a known $\mathcal{B}$, the value of $\mathcal{C}$ can be derived from the
observed values of variables entering Eq. (15). The data for Hurricane Isabel
2003 suggest that at the point of maximum wind the air temperature increases
towards the center ($\mathcal{C}>1$), but not enough to bring E-PI into
agreement with observations: on September 12 and 14, E-PI’s Eq. (16) underestimates the
observed squared maximum velocity $v_{m}^{2}$ by about $50\%$ and $30\%$,
respectively (Table 2).
The closest agreement is observed on September 13, when $\mathcal{C}$ is the
largest (Fig. 2). Given that the flow at this date was supergradient with
$\mathcal{B}\simeq 0.8$ (Bell and Montgomery, 2008), this agreement does not
indicate that the storm is in thermal wind balance (cf. Montgomery et al.,
2006, p. 1345). Rather, it suggests that the large value of $\mathcal{C}=2.9$
nearly compensated the underestimate that would have otherwise resulted from
$\mathcal{B}<1$. The underestimate is greatest on September 12, when the local
temperature gradient is closest to zero (Fig. 2) and $\mathcal{C}$ is close to
unity (Table 2). The data in Table 2 indicate that superintensity cannot be
explained by supergradient winds alone (see Appendix C), because maximum
superintensity observed on September 12 corresponds to minimal gradient-wind
imbalance ($\mathcal{B}=0.95$).
Table 2: Parameters of Eqs. (15) and (16) estimated from observations for
Hurricane Isabel 2003.
Date | $r_{m}$, km | $T_{o}$, ℃ | $\theta_{e}$, K | $\partial\theta_{e}/\partial r$, K km$^{-1}$ | $v_{m}$, m s$^{-1}$ | $v_{E}$, m s$^{-1}$ | $\varepsilon$ | $\mathcal{B}$ | $(v_{m}/v_{E})^{2}$ | $\mathcal{C}$
---|---|---|---|---|---|---|---|---|---|---
12 September | $25$ | $-65$ | $360$ | $-0.5$ | $80$ | $54$ | $0.29$ | $0.95$ | $2.2$ | $1.1$
13 September | $45$ | $-58$ | $357$ | $-0.6$ | $76$ | $74$ | $0.27$ | $0.72$ | $1.1$ | $3.2$
14 September | $50$ | $-56$ | $357$ | $-0.35$ | $74$ | $61$ | $0.26$ | $0.85$ | $1.5$ | $2.2$
* Observed values of the radius of maximum wind $r_{m}$, outflow temperature $T_{o}$, maximum velocity $v_{m}$ and $\partial\theta_{e}/\partial r$ are taken from, respectively, the first and the third columns of Table 2 of Bell and Montgomery (2008), and equivalent potential temperature $\theta_{e}$ from their Fig. 5 at $r=r_{m}$ and $z=1$ km; $\mathcal{B}$ is calculated from Eq. (C3) using data from Bryan and Rotunno (2009a)’s Table 1 (see Appendix C); $\mathcal{C}$ is calculated from Eq. (15). At the top of the boundary layer, temperature $T_{b}=293$ K ($20$℃) corresponding to $z_{b}=1$ km is assumed for all three days based on Fig. 4c of Montgomery et al. (2006), $\zeta=L\gamma_{d}^{*}/(RT_{b})=0.49$ for $p_{d}=850$ hPa and $p_{v}^{*}=23$ hPa. The values of $v_{E}$ are E-PI estimates of maximum velocity obtained from Eq. (16), where $\partial s^{*}/\partial r=(c_{p}/\theta_{e})\partial\theta_{e}/\partial r$, $c_{p}=1$ kJ kg$^{-1}$ K$^{-1}$ (see Montgomery et al., 2006, Eq. (A2)).
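The computations behind Table 2 can be sketched as follows; deviations of the output from the printed values (largest on September 13) reflect rounding of the tabulated inputs.

```python
# A sketch of the Table 2 computations: v_E from Eq. (16) and C from Eq. (15),
# with ds*/dr = (c_p/theta_e) dtheta_e/dr and T_b = 293 K, zeta = 0.49.
import math

T_b, zeta, c_p = 293.0, 0.49, 1.0e3   # K, -, J kg-1 K-1

def ds_dr(theta_e, dtheta_dr):
    return (c_p / theta_e) * dtheta_dr       # J kg-1 K-1 m-1

def v_E(eps, r_m, theta_e, dtheta_dr):       # Eq. (16)
    return math.sqrt(-eps * T_b * ds_dr(theta_e, dtheta_dr) * r_m)

def C(B, v_m, r_m, theta_e, dtheta_dr):      # Eq. (15) solved for C
    return -T_b * ds_dr(theta_e, dtheta_dr) * r_m / ((1.0 + zeta) * B * v_m**2)

# date: (r_m [m], theta_e [K], dtheta_e/dr [K/m], v_m [m/s], eps, B)
isabel = {
    "12 Sep": (25e3, 360.0, -0.50e-3, 80.0, 0.29, 0.95),
    "13 Sep": (45e3, 357.0, -0.60e-3, 76.0, 0.27, 0.72),
    "14 Sep": (50e3, 357.0, -0.35e-3, 74.0, 0.26, 0.85),
}
for day, (r_m, th, dth, v_m, eps, B) in isabel.items():
    print(day, round(v_E(eps, r_m, th, dth)), round(C(B, v_m, r_m, th, dth), 1))
# 12 Sep: v_E ~ 54, C ~ 1.1; 13 Sep: v_E ~ 77, C ~ 3.6; 14 Sep: v_E ~ 61, C ~ 2.1
```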
## 3 Discussion
### 3.a The nature and magnitude of the horizontal temperature gradient
The alternative expression for maximum potential intensity, Eq. (11), shows
how the magnitude of the radial temperature gradient in the cyclone core can
be used to assess E-PI’s validity. Until now it has not received much
attention in such assessments (e.g., Montgomery et al., 2006; Bryan and
Rotunno, 2009a; Emanuel and Rotunno, 2011; Wang and Lin, 2020, 2021). There
have been some discussions of possible changes of outflow temperature (e.g.,
Emanuel and Rotunno, 2011; Montgomery et al., 2019; Montgomery and Smith,
2020), but regarding the characteristic magnitude, or even sign, of $\partial
T/\partial r$ at the point of maximum wind, the literature offers no clues.
Here we provide a few initial considerations.
The energy budget of an air parcel at the sea surface that does not contain
liquid water and moves horizontally with radial velocity $u$ and total
velocity $V$ can be written as
$c_{p}\frac{dT}{dt}=\alpha_{d}\frac{dp}{dt}+\frac{\delta Q}{dt}\simeq
u\alpha\frac{\partial p}{\partial r}\left[1+\left(\frac{\partial p}{\partial
r}\right)^{-1}\frac{\rho C_{k}c_{p}}{z_{m}}(T_{s}-T_{0})\frac{V}{u}\right].$
(17)
Here we neglected horizontal diffusion, as did Emanuel (1986), and assumed
that all heat $\delta Q/dt$ (W kg$^{-1}$) that the air parcel receives comes from
the local surface flux of sensible heat $J_{S}=\rho C_{k}c_{p}(T_{s}-T_{0})V$ (W
m$^{-2}$). We assumed that all this surface flux of sensible heat is absorbed below
a certain level $z_{J}$, and, in the second equality of Eq. (17), approximated
the volume-specific heat input $\rho\delta Q/dt$ (W m$^{-3}$) by
$\rho\frac{\delta Q}{dt}=\frac{J_{S}}{z_{J}}=\frac{\rho
C_{k}c_{p}}{z_{m}}(T_{s}-T_{0})V,$ (18)
under the assumption that $z_{J}=z_{m}$ ($z_{m}$ is the level of maximum wind,
for other notations see Table 1). With common values of $c_{p}=1$ kJ kg$^{-1}$ K$^{-1}$,
$C_{k}\simeq 10^{-3}$, $\rho=1$ kg m$^{-3}$, and with $\partial p/\partial r=0.03$
Pa m$^{-1}$, $T_{s}-T_{0}=3$ K and $z_{m}=1$ km as in Hurricane Isabel 2003
(Montgomery et al., 2006), the factor in front of the fraction $V/u$ inside
the square brackets in Eq. (17) is numerically equal to $0.1$. Since in
Hurricane Isabel 2003 at the surface $V/u\simeq-2$ both in the outer core and
in the eyewall (Montgomery et al., 2006, Fig. 4a,b), Eq. (17) predicts a
$20\%$ reduction from dry adiabaticity for the horizontal temperature gradient
of the surface flow. The actual reduction is only $10\%$ (see Fig. 2, point
i). This suggests that $z_{J}=z_{m}$ could be an underestimate or that some
minor additional cooling is provided by other mechanisms (e.g., by subcloud
evaporation not accounted for in Eq. (17)).
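A back-of-the-envelope check of this estimate, with the quoted Hurricane Isabel values, is given below; the only assumption beyond the text is that all quantities are uniform below $z_{m}$.

```python
# A sketch of the bracketed factor in Eq. (17) for Hurricane Isabel 2003.
c_p   = 1.0e3    # J kg-1 K-1
C_k   = 1.0e-3   # enthalpy exchange coefficient
rho   = 1.0      # kg m-3
dp_dr = 0.03     # Pa m-1
dT    = 3.0      # T_s - T_0, K
z_m   = 1.0e3    # level of maximum wind, m

factor = rho * C_k * c_p * dT / (z_m * dp_dr)  # coefficient of V/u in Eq. (17)
print(factor)                                   # 0.1

print(1.0 + factor * (-2.0))  # V/u = -2 at the surface: 0.8, a 20% reduction
```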
At the level of maximum wind $z_{m}=1$ km, the ratio $V/|u|$ in Hurricane
Isabel increases up to $\sim 4$ in the outer core and up to $\sim 20$ in the
eyewall. The impact of the surface heat flux should, however, be diminished by
this altitude. The resulting mean horizontal temperature gradient at $z=z_{m}$
between the eyewall and the outer core (half of its value at the surface,
Fig. 2, point ii) should reflect both processes.
Equation (17) indicates that in intense cyclones the air-sea temperature
disequilibrium in the eyewall can be largely a product of the cyclone’s
activity. As the air moves over an isothermal oceanic surface and cools due to
expansion, the air-sea temperature disequilibrium increases. In Hurricane
Isabel 2003, the air cooled by $4$ K as it moved from the outer core to the
eyewall, which compares well to the estimated $T_{s}-T_{0}\simeq 3$ K in the
inner core. The disequilibrium is $1$ K smaller than the adiabatic cooling
of the air because the oceanic surface was also $1$ K colder in the inner core
than in the outer core ($27.5^{\rm o}$C versus $28.5^{\rm o}$C, Montgomery et
al., 2006). This cooling of the oceanic surface in the eyewall can be
attributed to causes such as turbulent mixing of the upper oceanic layer by
hurricane winds (Montgomery et al., 2006) or a smaller flux of solar radiation
in the eyewall as compared to the storm’s outskirts (Zhou et al., 2017).
Furthermore, if the surface air in the eyewall has cooled appreciably as
compared to its ambient environment, it must have a high relative humidity, up
to saturation. Thus, saturated air near the radius of maximum wind is also a
product of the cyclone’s activity. Once the air reaches saturation, it cannot
easily cool further, as there appears an additional source of heat (latent
heat) not reflected in the right-hand side of Eq. (17).
A stronger air-sea disequilibrium due to the drop of air temperature at the
radius of maximum wind in the stronger storms is visible in models. For
example, in Fig. 9c of Wang and Lin (2020) in the strongest tropical cyclone
(maximum wind speed over 90 m s$^{-1}$) there is a pronounced local maximum of
air-sea disequilibrium in the vicinity of maximum wind. In contrast, the weaker
cyclones (maximum wind speeds about 50 m s$^{-1}$) in Figs. 9a and 9b of Wang and
Lin (2020), as well as all cyclones in Fig. 5 of Wang and Lin (2021) (maximum
wind speeds below $65$ m s$^{-1}$), display a more monotonic decline of the air-sea
disequilibrium from the outer environment towards the center. (Notably, while
reporting these distinct patterns of air-sea disequilibrium, Wang and Lin
(2020) and Wang and Lin (2021) did not analyze the magnitude of either radial
velocity or radial gradient of air temperature.) For observed cyclones, the
horizontal flow was still isothermal in Hurricane Earl 2010 (maximum wind
speed 64 m s$^{-1}$) but approached dry adiabatic in Hurricane Isabel 2003 (maximum
wind speed 75 m s$^{-1}$) (Smith and Montgomery, 2013; Bell and Montgomery, 2008).
This suggests, as a hypothesis for further studies, that adiabaticity is
approached in very intense cyclones only.
Between the eyewall and the eye the radial velocity changes its sign, so at a
certain point, specifically when $V/u\simeq-10$ for the case of Hurricane
Isabel 2003, the temperature gradient at the surface must turn to zero. This
happens somewhere within the eyewall, i.e., close to the radius of maximum
wind. If the surface air at the point where $\partial T_{s}/\partial r=0$ is
saturated, as it approximately is in the eyewall in Hurricane Isabel 2003
(Montgomery et al., 2006, Fig. 4d), the radial temperature gradient at the
level of maximum wind $z=z_{m}$, and the corresponding value of $\mathcal{C}$,
will be approximately the same as they are at the surface (see Eq. (A14)).
These arguments justify the plausibility of horizontal isothermy
($\mathcal{C}=1$) at the point of maximum wind.
As the surface air moves further towards the center beyond the point where
$\partial T_{s}/\partial r=0$, it begins to warm. As it is warmed by the
surface heat flux, the temperature disequilibrium across the air-sea interface
should diminish. Thus, an increase in $\mathcal{C}$ that reduces $v_{A}$ is
accompanied by a decrease in $T_{s}-T_{0}$ that should reduce $v_{A}$ even
further, see Eq. (11). The complex interplay of these influences, and their
profound dependence on the storm’s dynamic structure and the relation between
the primary and secondary circulations ($v/u$), may help understand why
maximum velocities, for a given environment, strongly depend on the
characteristics of the initial vortex (Tao et al., 2020a).
### 3.b External and internal parameters in the maximum potential intensity formulations
While in our alternative formulation $v_{A}$ (11) the value of $\mathcal{C}$
is determined by the internal structure of the cyclone, E-PI has been
characterized as a closed theory that allows the estimation of storm’s maximum
speed from environmental parameters alone (compare $\hat{v}_{A}$ and
$\hat{v}_{E}$ in Table 1). In this interpretation, the outflow temperature
$T_{o}$ (which corresponds to the point where $v=0$ and is, strictly speaking,
a property of the cyclonic flow itself rather than of its environment) is
assumed to approximately coincide with the temperature at which an air parcel
saturated at ambient surface temperature and raised moist adiabatically in an
environmental sounding becomes neutrally buoyant (e.g., Rotunno and Emanuel,
1987; Wang et al., 2014).
The environmental soundings assumed to be representative of the outflow
location are commonly measured at a distance of $300$-$700$ km from the storm
center (e.g., Montgomery et al., 2006). At these radii the environment
experiences a strong influence of the cyclone. For example, in North Atlantic
hurricanes at a distance of $400$ km from the center the column water vapor
content and precipitation rate are, respectively, $15\%$ and two times higher
than they are in hurricane absence (Makarieva et al., 2017, Fig. 4a,g). In
Hurricane Isabel 2003 on September 12, mean precipitation rate at the radius
of the outermost closed isobar ($\sim 400$ km) was ten times the local
climatological mean in hurricane absence (Makarieva et al., 2017, Fig. 11).
Tropical storms can also perturb the tropopause temperature by up to 3 K
(Venkat Ratnam et al., 2016).
These empirical findings have two implications. First, calculating
$\hat{v}_{E}$ (Table 1) for an arbitrary environment may not be very
informative: if the cyclone modifies its outflow region, there should exist
“unmodified” environments where cyclonic outflows may never happen. On the
other hand, these observations indicate that the cyclonic flow emanating to
the free troposphere from the point of maximum wind with a given $\mathcal{C}$
could, in principle, transform the downstream environment to such a degree
that the equality $v_{A}=v_{E}$ will hold. In this case, Eq. (12) will define
the outflow temperature $T_{o}$ as follows:
$T_{o}=T_{b}\left[1-\frac{1}{\mathcal{C}(1+\zeta)}\right].$ (19)
For Lilly’s analytical model, which is closely related to E-PI, it was
established that the outflow temperature (which Lilly himself believed was
environmentally prescribed) is controlled by the interior flow of the cyclone
(Tao et al., 2020b). Equations (13) and (19) are probably more transparent
than, but have a similar physical meaning as, Tao et al. (2020b)’s Eq. (2),
although explicitly demonstrating the equivalence of different maximum
potential intensity formulations may require considerable space (cf. Makarieva
et al., 2021).
For $\mathcal{C}\simeq 1$ and a realistic $T_{b}$, Eq. (19) predicts that the
outflow temperature should be about one half of $T_{b}$, i.e., around $150$ K.
Such temperatures are well below the tropopause temperature and cannot be
realized. When the flow approaches the tropopause, it ceases to be adiabatic,
thus violating one of E-PI’s key assumptions for the free troposphere. For
the flow to be adiabatic, as the first equality in Eq. (17) indicates, the
external heat input must be significantly smaller than the change of internal
energy due to expansion. But closer to the tropopause, far from the cyclone
core, the horizontal pressure gradient vanishes, while the vertical motion
cannot be adiabatic because stratospheric warming increases
rapidly as $T_{o}$ diminishes below the tropopause temperature $T_{t}$ (e.g.,
Wang et al., 2014, Eq. (1)). Generally, $\mathcal{C}>1$ at the point of
maximum wind can result from the surface air warming, and/or from the surface
relative humidity increasing, towards the center (see Eqs. (A14) and (A17)).
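For illustration, with the warm boundary layer values from section 2.a ($T_{b}=303$ K, $p_{d}=800$ hPa, $p_{v}^{*}\simeq 42$ hPa, hence $\zeta\simeq 0.9$), Eq. (19) can be evaluated as follows.

```python
# A sketch of Eq. (19) with C = 1; input values as in section 2.a.
R, L = 8.3, 45.0e3
T_b, p_d, pv = 303.0, 800.0, 42.0

zeta = (L / (R * T_b)) * (pv / p_d)
T_o = T_b * (1.0 - 1.0 / (1.0 + zeta))   # Eq. (19) with C = 1
print(round(zeta, 2), round(T_o))        # ~0.94, ~147 K (around 150 K)
```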
Makarieva et al. (2021) showed that conventional E-PI at the point of maximum
wind corresponds to an infinitely narrow thermodynamic cycle, where the finite
changes of temperature and pressure are adiabatic and where total work in the
free troposphere is zero. Work of this cycle is equal to work at the top of
the boundary layer $z_{m}=z_{b}$ and to heat input (that also occurs at
$z_{m}=z_{b}$) multiplied by Carnot efficiency. The cycle being infinitely
narrow, its efficiency does not depend on the infinitely small change of
temperature and/or relative humidity at the boundary layer (that is why E-PI
did not require any assumptions about the horizontal temperature gradient). But it
does depend on the assumed adiabaticity of the finite parts of the cycle. The
stratospheric warming violates this E-PI assumption and makes the formula for
$v_{E}$ (5) invalid.
The interpretation of E-PI as a cycle with zero work in the free troposphere
helps understand this fact. For brevity, we consider a saturated isothermal
boundary layer (an isothermal path $\rm B^{\prime}b$ in Fig. 1b of Makarieva
et al., 2020), but extension for an arbitrary temperature gradient and
unsaturated conditions is straightforward. At the boundary layer (path $\rm
B^{\prime}b$) we have heat input $Q_{\rm in}=L_{v}dq^{*}-\alpha_{d}dp$ and
work $A=-\alpha_{d}dp=\varepsilon Q_{\rm in}$. In the free troposphere (path
$\rm bcC^{\prime}B^{\prime}$ in Fig. 1b of Makarieva et al., 2020) from
$\delta Q=c_{p}dT-\alpha_{d}dp+L_{v}dq^{*}$ (A8) we find that, as far as the
integrals of $c_{p}dT$ and $\alpha_{d}dp$ are zero, the cycle’s heat output
equals $Q_{\rm out}=-L_{v}dq^{*}<0$. Since $A=Q_{\rm in}+Q_{\rm out}$, we have
$-\alpha_{d}dp=[\varepsilon/(1-\varepsilon)]L_{v}dq^{*}$. On the other hand,
for an isothermal saturated case from the Clausius-Clapeyron law we have
$dq^{*}/q^{*}=-dp/p_{d}$ (see, e.g., Eq. (3) of Makarieva et al. (2017) or Eq.
(9) of Tao et al., 2020b). Combining this[^3] with the previous
expression and noting that $\zeta\equiv L_{v}q^{*}/(\alpha_{d}p_{d})$, see Eq.
(A10), we obtain Eq. (13) with $\mathcal{C}=1$. In other words, the difference
between $v_{A}$ and $v_{E}$ results from E-PI relating $L_{v}dq^{*}$ to
$\alpha_{d}dp$ via the Carnot efficiency $\varepsilon$, while the same
quantities are distinctly related by the Clausius-Clapeyron law.

[^3]: To our knowledge, the first mention of the discrepancy between the E-PI
formulation and the alternative one stemming from the definition of $dq$, as
illustrated by Eqs. (13) and (19), was by Makarieva et al. (2019, Eqs. (40)
and (43)).
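Spelling out the algebra of the last step: combining $-\alpha_{d}dp=[\varepsilon/(1-\varepsilon)]L_{v}dq^{*}$ with $dq^{*}=-q^{*}dp/p_{d}$ gives

$\frac{\varepsilon}{1-\varepsilon}=\frac{\alpha_{d}p_{d}}{L_{v}q^{*}}=\frac{1}{\zeta},\quad\text{i.e.,}\quad\varepsilon=\frac{1}{1+\zeta},$

which is Eq. (13) with $\mathcal{C}=1$.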
When the adiabaticity in the free troposphere is perturbed by stratospheric
warming, such that heat output is no longer $-L_{v}dq^{*}$, the relationship
between work and heat input is perturbed as well, and the E-PI formulation no
longer holds. The implication is that in those cases when $\mathcal{C}$ is
sufficiently small, while the observed outflow temperature approaches the
tropopause temperature, $T_{o}\simeq T_{t}$, $v_{E}$ will underestimate
$v_{m}$ due to the violation of adiabaticity. Such “superintensity”
(unexplained by supergradient winds, see Appendix C) was found by Wang et al.
(2014) who forced the tropopause temperature to be constant. When, on the
other hand, $T_{o}\ll T_{t}$ such that Eq. (19) could hold, the potential of
E-PI to predict $v_{m}$ from environmental parameters (e.g., from $T_{t}$) is
diminished.
### 3.c Why $v_{E}$ underestimates, while $\hat{v}_{E}$ overestimates, maximum winds
If horizontal isothermy is a common condition under which $v_{E}$ (5)
underestimates $v_{m}$, one has to explain why in most cases the maximum wind
speeds observed in real cyclones are well below the environmental version of
E-PI, $\hat{v}_{E}$ (Table 1). Since the underestimate $v_{E}<v_{m}$ results
from E-PI assumptions pertaining to the free troposphere (block E-I in Table
1), the overestimate $\hat{v}_{E}>v_{m}$ indicates that a certain
overcompensation should occur in the assumptions pertaining to the remaining
two E-PI blocks, the boundary layer interior and the air-sea interface (Table
1, blocks E-II and E-III).
One compensating overestimate should result from E-PI’s assumptions concerning
the disequilibrium $\Delta k=k_{s}^{*}-k_{0}=c_{p}\Delta T+L_{v}\Delta q$ at
the air-sea interface at the radius of maximum wind. Since the local enthalpy
difference $\Delta k$ is unknown, E-PI limits it from above by assuming that
the local difference in mixing ratios $\Delta q$ is less than the water vapor
deficit $(1-\mathcal{H}_{a})q_{a}^{*}$ in the ambient environment (Table 1,
block E-III). However, as Emanuel (1986) and Emanuel and Rotunno (2011)
pointed out, in reality $\Delta k$ tends to decline from the outer core
towards the storm center. Indeed, if the radial inflow is sufficiently slow,
as it is in the weaker storms, the surface air can remain in approximate
thermal equilibrium with the oceanic surface. In his original evaluations of
E-PI Emanuel (1986, p. 591) assumed $\Delta T=0$. On the other hand,
evaporation into the air parcels that are spiraling inward increases the
relative humidity and diminishes $\Delta q$. As a result, in the weaker
cyclones the actual $\Delta k$ at the radius of maximum wind can be much lower
than its ambient constraint $(1-\mathcal{H}_{a})q_{a}^{*}$. This would
overcompensate the underestimate of the observed $v_{m}$ at the top of the
boundary layer by E-PI’s Eq. (16) and, provided the assumptions in block E-II
hold, explain why in most cases the E-PI upper limit $\hat{v}_{E}$ (Table 1)
goes above the observed maximum velocities.
In the stronger storms, as we discussed, the air streams so quickly towards
the center that it cools significantly compared to the isothermal oceanic
surface it moves above. As pointed out by Camp and Montgomery (2001) and
Montgomery et al. (2006), this cooling tends to offset the increase in
relative humidity, such that the mixing ratio $q$ does not considerably grow,
and $\Delta q$ does not diminish significantly, towards the center. In this
case E-PI’s assumption $\Delta q\simeq(1-\mathcal{H}_{a})q_{a}^{*}$
becomes valid. No overcompensation occurs in the third block of E-PI. As a
result, in the strongest storms the underestimate stemming from the first
block of E-PI becomes explicit and $\hat{v}_{E}<v_{m}$.
A distinct type of compensation can occur between the temperature gradient and
the supergradient wind (coefficients $\mathcal{C}$ and $\mathcal{B}$ in Eq.
(15)). We have shown that supergradient winds in the formulation of Bryan and
Rotunno (2009a) are not sufficient to explain the mismatch between E-PI and
actual maximum velocities in either numerical simulations or real cyclones
(Table 2 and Appendix C). Our new formulation explains that, despite E-PI is
assumed to underpredict supergradient winds ($\mathcal{B}<1$), it can
nevertheless match the observations when $\mathcal{C}>1$, i.e., when the winds
are supergradient but the temperature at the point of maximum wind rises
towards the center. This appears to be the case in Hurricane Isabel 2003 on
September 13 (Table 2). In other words, the question why E-PI in most cases
overestimates observed intensities, is directly relevant to why it sometimes
underestimates them, i.e., to the superintensity problem.
Finally, there is a major uncertainty pertaining to the second block of E-PI,
i.e., to the transition from the volume to surface fluxes of entropy and
momentum, Eq. (4). This transition, while key to both conventional E-PI, to
its recent modification for the surface winds, and to any local formulation of
maximum intensity based on Eq. (4), including $v_{A}$ (11), has not been
rigorously justified. Assessing the validity of E-PI, Bryan and Rotunno
(2009a, p. 3049) chose not to evaluate the underlying assumptions in the
derivations that yield Eq. (4). Therefore, even when this equation, or its
modifications, are reported to match numerical simulations (it is not always
the case, see Fig. 6 of Bryan and Rotunno, 2009a), the nature and generality
of the agreement remain unclear. Zhou et al. (2017) in a modelling study
showed that with a pronounced sea surface cooling in the eyewall the E-PI
assumptions underlying Eq. (4) (Table 1, second column) do not hold and the
cyclone intensity is to a large degree insensitive to the magnitude of the
air-sea disequilibrium. Likewise, in Hurricane Isabel 2003 we observe that its
intensity ($v_{m}^{2}$ varies by $17$% during the three days of observations)
is largely insensitive to the corresponding variation in $v_{E}^{2}$ (varies
by two times) and $\mathcal{BC}$ (varies by two times as well), see Table 2.
In what we consider to be the most transparent discussion of Eq. (4) to date,
Emanuel (2004, p. 173) justified Eq. (4) by assuming that at the point of
maximum wind $ds/dt$ and $dM/dt$ relate as $\tau_{s}$ and $\tau_{M}$, while
requiring that $\partial s/\partial z=0$. However, since at the point of
maximum wind $u=0$ (Bryan and Rotunno, 2009a), with $\partial s/\partial z=0$
this means that $ds/dt=0$, an obstacle not discussed in E-PI derivations (cf.
Makarieva et al., 2020).
In their recent extension of E-PI to surface winds, Rousseau-Rizzi and Emanuel
(2019) did not justify the transition from volume to surface energy fluxes.
Montgomery and Smith (2020) and Makarieva et al. (2020) pointed out this
omission, i.e., the need to explain how Rousseau-Rizzi and Emanuel (2019)’s
Eq. (15) can be derived from their Eq. (14). In their responses, Rousseau-
Rizzi and Emanuel (2020) and Emanuel and Rousseau-Rizzi (2020) did not provide
the required derivation. To obtain surface fluxes from volume fluxes, Emanuel
and Rousseau-Rizzi (2020) equated fluxes of different dimensions in their Eq.
(6) (and related). This undermines their respective derivations and
conclusions. In particular, as demonstrated by Makarieva et al. (2020),
without an explicit representation of how surface and volume fluxes relate, it
is not possible to address the dissipative heating issue, which, as we discuss
in Appendix C, appears to be responsible for the unexplained singularity in
the superintensity account by Bryan and Rotunno (2009a) (see also discussions
by Makarieva et al., 2010; Bister et al., 2011; Kieu, 2015; Bejan, 2019;
Emanuel and Rousseau-Rizzi, 2020).
A systemic problem for the second block of E-PI (Table 1) is that the freedom
to make assumptions about the ratios of surface-to-volume fluxes is limited.
It is the same problem that causes a mismatch between $v_{A}$ and $v_{E}$: in
E-PI, one cannot freely specify $\partial T/\partial r$. The E-PI assumptions
and resulting formulations should be checked for compatibility with two
fundamental relationships.[^4] The first one is the Clausius-Clapeyron law,
which dictates how $dq$, $dT$, $dp$ and $d\mathcal{H}$ relate both on a
streamline (where $dp\neq 0$) and across the air-sea interface (where $dp=0$)
(see Makarieva et al., 2017, Eq. (3)). The second one is the equality between
work and turbulent dissipation at the point of maximum wind (see Makarieva et
al., 2020, Eq. (9)). Besides, as Eq. (17) demonstrates, there are less obvious
non-local constraints on the magnitude of the air-sea disequilibrium at the
radius of maximum wind that result from the storm’s radial and tangential
velocity profiles. While a detailed analysis of this subject is beyond the
scope of the present paper, we emphasize that the discussion of the numerical
validity of E-PI would be more meaningful if a comprehensive theoretical
justification of Eq. (4) (and its modifications, including the transition
between Rousseau-Rizzi and Emanuel (2019)’s Eqs. (14) and (15) and Emanuel and
Rousseau-Rizzi (2020)’s Eqs. (1) and (2)) were provided.

[^4]: Dr. Steve Garner suggested that Emanuel and Rousseau-Rizzi (2020)’s Eq.
(6) could be fixed if the dimensionless $C_{D}$ and $C_{k}$ are re-defined by
dividing them by an arbitrary scale height $h$ (cf. Emanuel, 2004, Eqs. (8.19)
and (8.20)). Such a replacement indeed does not change anything in Emanuel and
Rousseau-Rizzi (2020)’s Eqs. (2)-(11), but it explicates the incompatibility
between their Eqs. (1) and (2), thus illustrating our argument about
conflicting assumptions. Indeed, if we accept Emanuel and Rousseau-Rizzi
(2020)’s Eq. (6) with $C_{D}$ understood as $C_{D}/h$, then at the surface
$-\mathbf{F}\cdot\mathbf{V}=V^{3}C_{D}/h$. Emanuel and Rousseau-Rizzi (2020)’s
Eq. (1) for the isothermal saturated surface flow takes the form of Makarieva
et al. (2020)’s Eq. (15), which, in the present notations, gives
$[\varepsilon/(1-\varepsilon)]L_{v}dq^{*}/dt=V^{3}C_{D}/h$ after
$-\mathbf{F}\cdot\mathbf{V}$ is replaced with $V^{3}C_{D}/h$. Comparing this
with Emanuel and Rousseau-Rizzi (2020)’s Eq. (2), we find that the two
equations can be reconciled only if
$L_{v}dq^{*}/dt=V(k_{s}^{*}-k_{0})C_{k}/h$, which makes no sense, since the
former is a flux of latent heat and the latter a flux of enthalpy. Another
manifestation of conflicting assumptions related to dissipative heating is the
singularity in Eq. (C4) in the superintensity analysis of Bryan and Rotunno
(2009a), see Appendix C.
In the meantime, we conclude by outlining a strategically different
perspective on maximum winds. In our view, due to the gross uncertainties
surrounding Eq. (4) and its underlying assumptions, it is difficult to expect
that either E-PI, our alternative formulation or any other local formulation
like that of Lilly’s model will stand up to future scrutiny as an informative
estimate of storm intensity. Storm intensity is an integral property of the
entire storm’s energetics, whereby the power released over a large area is
concentrated in the eyewall to generate maximum wind. It cannot be a local
function of the highly variable heat input at the radius of maximum wind. The
perceived success of E-PI, namely that it produces a plausible if not fully
robust upper limit on maximum intensity (despite deriving from assumptions
that systematically underestimate and overestimate intensities), can be
explained by the fact that the quantitative parameters in the final expression
for kinetic energy incidentally combine into an integral storm parameter, the
partial pressure of water vapor (Appendix D). Not only do the characteristic
rates of hurricane intensification coincide, in order of magnitude, with
precipitation rates (Lackmann and Yablonsky, 2004), and not only is the
steady-state hurricane wind power proportional to the condensation rate
(Makarieva et al., 2015), but the partial pressure of water vapor is indeed a
characteristic scale for the maximum kinetic energies observed in real storms.
Its exponential temperature dependence would explain that of observed maximum
intensities.
## Acknowledgments
The authors are grateful to Dr. Steve Garner and two anonymous reviewers for
their constructive criticisms and suggestions. Our response to the reviewers
can be found in Appendix E. Work of A.M. Makarieva is partially funded by the
Federal Ministry of Education and Research (BMBF) and the Free State of
Bavaria under the Excellence Strategy of the Federal Government and the
Länder, as well as by the Technical University of Munich – Institute for
Advanced Study.
## 4 Appendices
### Appendix A Deriving the alternative formulation
Moist entropy $s$ per unit mass of dry air is defined as (e.g., Eq. (2) of
Emanuel (1988), Eq. (A4) of Pauluis (2011))
$s=(c_{pd}+q_{t}c_{l})\ln\frac{T}{T^{\prime}}-\frac{R}{M_{d}}\ln\frac{p_{d}}{p^{\prime}}+q\frac{L_{v}}{T}-q\frac{R}{M_{v}}\ln\mathcal{H}.$
(A1)
Here, $L_{v}=L_{v}(T^{\prime})+(c_{pv}-c_{l})(T-T^{\prime})$ is the latent
heat of vaporization (J kg$^{-1}$);
$q\equiv\rho_{v}/\rho_{d}\equiv\mathcal{H}q^{*}$ is the water vapor mixing
ratio; $\rho_{v}$ is water vapor density; $\mathcal{H}$ is relative humidity;
$q^{*}=\rho_{v}^{*}/\rho_{d}$, $q_{l}=\rho_{l}/\rho_{d}$, and $q_{t}=q+q_{l}$
are the mixing ratio for saturated water vapor, liquid water, and total water,
respectively; $\rho_{d}$, $\rho_{v}^{*}$, and $\rho_{l}$ are the density of
dry air, saturated water vapor and liquid water, respectively; $c_{pd}$ and
$c_{pv}$ are the specific heat capacities of dry air and water vapor at
constant pressure; $c_{l}$ is the specific heat capacity of liquid water;
$R=8.3$ J mol$^{-1}$ K$^{-1}$ is the universal gas constant; $M_{d}$ and $M_{v}$ are the
molar masses of dry air and water vapor, respectively; $p_{d}$ is the partial
pressure of dry air; $T$ is the temperature; $p^{\prime}$ and $T^{\prime}$ are
reference air pressure and temperature.
For saturated moist entropy $s^{*}$ ($q=q^{*}$, $\mathcal{H}=1$) we have
$\displaystyle
Tds^{*}=(c_{pd}+q_{t}c_{l})dT-\frac{RT}{M_{d}}\frac{dp_{d}}{p_{d}}+L_{v}dq^{*}+q^{*}dL_{v}-q^{*}L_{v}\frac{dT}{T}=\left(c_{p}-\frac{q^{*}L_{v}}{T}\right)dT-\frac{RT}{M_{d}}\frac{dp_{d}}{p_{d}}+L_{v}dq^{*},$
(A2)
where $c_{p}\equiv c_{pd}+q^{*}c_{pv}+q_{l}c_{l}$ and
$dL_{v}=(c_{pv}-c_{l})dT$. Equation (A2) additionally assumes
$q_{t}=\mathrm{const}$ (reversible adiabat).
The ideal gas law for the partial pressure $p_{v}$ of water vapor is
$p_{v}=N_{v}RT,\quad N_{v}=\frac{\rho_{v}}{M_{v}},$ (A3)
where $M_{v}$ and $\rho_{v}$ are the molar mass and density of water vapor.
Using Eq. (A3) with $p_{v}=p_{v}^{*}$ in the definition of $q^{*}$
$q^{*}\equiv\frac{\rho_{v}^{*}}{\rho_{d}}=\frac{M_{v}}{M_{d}}\frac{p_{v}^{*}}{p_{d}}\equiv\frac{M_{v}}{M_{d}}\gamma_{d}^{*},\quad\gamma_{d}^{*}\equiv\frac{p_{v}^{*}}{p_{d}},$
(A4)
and applying the Clausius-Clapeyron law
$\frac{dp_{v}^{*}}{p_{v}^{*}}=\frac{L}{RT}\frac{dT}{T},\quad L\equiv
L_{v}M_{v},$ (A5)
we obtain for the last term in Eq. (A2)
$L_{v}dq^{*}=L_{v}\frac{M_{v}}{M_{d}}\left(\frac{dp_{v}^{*}}{p_{d}}-\frac{p_{v}^{*}}{p_{d}}\frac{dp_{d}}{p_{d}}\right)=L_{v}\frac{M_{v}}{M_{d}}\left(\frac{p_{v}^{*}}{p_{d}}\frac{dp_{v}^{*}}{p_{v}^{*}}-\frac{p_{v}^{*}}{p_{d}}\frac{dp_{d}}{p_{d}}\right)=L_{v}q^{*}\left(\frac{L}{RT}\frac{dT}{T}-\frac{dp_{d}}{p_{d}}\right).$
(A6)
Using the Clausius-Clapeyron law (A5), the ideal gas law $p_{d}=N_{d}RT$,
where $N_{d}=\rho_{d}/M_{d}$, and noting that $p=p_{v}^{*}+p_{d}$, we obtain
for the last but one term in Eq. (A2)
$\frac{RT}{M_{d}}\frac{dp_{d}}{p_{d}}=\frac{RT}{M_{d}}\left(\frac{dp}{p_{d}}-\frac{dp_{v}^{*}}{p_{d}}\right)=\frac{dp}{M_{d}N_{d}}-\frac{RTp_{v}^{*}}{M_{d}p_{d}}\frac{dp_{v}^{*}}{p_{v}^{*}}=\frac{dp}{\rho_{d}}-L_{v}\frac{M_{v}p_{v}^{*}}{M_{d}p_{d}}\frac{dT}{T}=\frac{dp}{\rho_{d}}-q^{*}L_{v}\frac{dT}{T}.$
(A7)
Taking into account Eq. (A7), Eq. (A2) reads
$\displaystyle Tds^{*}=c_{p}dT-\alpha_{d}dp+L_{v}dq^{*}.$ (A8)
Putting Eqs. (A6) into Eq. (A8) yields
$\displaystyle
Tds^{*}=\left(c_{p}+\frac{L_{v}q^{*}}{T}\frac{L(1+\gamma_{d}^{*})}{RT}\right)dT-\left(1+\frac{L\gamma_{d}^{*}}{RT}\right)\frac{dp}{\rho_{d}}=-(1+\zeta)\alpha_{d}dp\left(1-\frac{1}{\Gamma}\frac{dT}{dp}\right).$
(A9)
Here
$\Gamma\equiv\frac{\alpha_{d}}{c_{p}}\frac{1+\zeta}{1+\mu\zeta(\xi+\zeta)},\quad\xi\equiv\frac{L}{RT},\quad\zeta\equiv\xi\gamma_{d}^{*}\equiv\frac{L}{RT}\frac{p_{v}^{*}}{p_{d}}\equiv\frac{L_{v}q^{*}}{\alpha_{d}p_{d}},\quad\mu\equiv\frac{R}{C_{p}}=\frac{2}{7},$
(A10)
where $\alpha_{d}\equiv 1/\rho_{d}$ is the volume per unit mass of dry air and
$C_{p}\simeq c_{p}M_{d}$ is the molar heat capacity of air at constant
pressure.
Approximating air molar mass by molar mass $M_{d}$ of dry air and $c_{p}$ by
$c_{pd}$, we can conveniently express $\Gamma$ as
$\Gamma\simeq\frac{T}{p}\frac{\mu(1+\zeta)}{1+\mu\zeta(\xi+\zeta)}\simeq\frac{T}{p}\frac{\mu(1+\xi\gamma_{d}^{*})}{1+\mu\xi^{2}\gamma_{d}^{*}}.$
(A11)
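A direct numerical evaluation of Eq. (A11) reproduces the values used in section 2.b; the sketch below assumes $p_{v}^{*}=23$ hPa for $T=293$ K, as in the Table 2 notes.

```python
# A sketch of the moist adiabatic lapse rate, Eq. (A11), in K/hPa.
R, L, mu = 8.3, 45.0e3, 2.0 / 7.0   # J mol-1 K-1, J mol-1, R/C_p

def gamma_moist(T, p, pv):
    """Eq. (A11); T in K, p and pv (= p_v*) in hPa."""
    p_d = p - pv
    xi = L / (R * T)        # ~18 at tropical temperatures
    zeta = xi * pv / p_d    # zeta = xi * gamma_d*
    return (T / p) * mu * (1.0 + zeta) / (1.0 + mu * zeta * (xi + zeta))

print(round(gamma_moist(293.0, 850.0, 23.0), 3))  # ~0.04, as in section 2.b
print(round(mu * 300.0 / 1000.0, 2))              # dry value mu*T/p, ~0.09
```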
E-PI’s assumption that entropy is well mixed in the boundary layer ($\partial
s^{*}/\partial z=0$) (block E-II in Table 1) implies a tight link between
radial gradients of temperature at a reference height at the surface (the
subscript $0$ for temperature-related variables, see Table 1, and $s$ for
density and pressure) and at the top of the boundary layer (the subscript $b$).
When at the radius of maximum wind the surface air is saturated, as it was,
for example, in Hurricane Earl 2010 (Smith and Montgomery, 2013), we have
$\partial s^{*}_{0}/\partial r=\partial s^{*}_{b}/\partial r$ and obtain from
Eq. (A9)
$(1+\zeta_{b})\mathcal{C}_{b}\frac{\alpha_{d}}{T_{b}}\frac{\partial
p}{\partial
r}\bigg{|}_{z=z_{b}}=(1+\zeta_{0})\mathcal{C}_{0}\frac{\alpha_{d}}{T_{0}}\frac{\partial
p}{\partial r}\bigg{|}_{z=0},$ (A12)
where $\mathcal{C}$ is defined in Eq. (9).
In hydrostatic equilibrium $\partial p/\partial z\simeq-p/h_{d}$, $h_{d}\equiv
RT/(gM_{d})\sim 9$ km, we have for $z_{b}\ll h_{d}$
$\frac{\partial p}{\partial r}\bigg{|}_{z=z_{b}}=\frac{\partial p}{\partial
r}\bigg{|}_{z=0}+z_{b}\frac{\partial^{2}p}{\partial z\partial
r}\bigg{|}_{z=0}=\frac{\partial p}{\partial
r}\bigg{|}_{z=0}+z_{b}\frac{\partial^{2}p}{\partial r\partial
z}\bigg{|}_{z=0}\simeq\left(1-\frac{z_{b}}{h_{d}}\right)\frac{\partial
p}{\partial r}\bigg{|}_{z=0}.$ (A13)
In the last approximation we have taken into account that $h_{d}$ can be
assumed constant in the boundary layer $z\leq z_{b}\sim 1$ km, since, in the
vertical, the relative change of temperature ($\sim 1\%$) is much less than
the change of pressure ($\sim 10\%$).
Using Eq. (A13) and taking into account that in hydrostatic equilibrium
$p_{b}=(1-z_{b}/h_{d})p_{s}$, we have from Eq. (A12) with
$\alpha_{d}\simeq\alpha$
$\mathcal{C}_{b}=\mathcal{C}_{0}\frac{1+\zeta_{0}}{1+\zeta_{b}}\frac{\rho_{b}}{\rho_{s}}\frac{T_{b}}{T_{0}}\frac{h_{d}}{h_{d}-z_{b}}=\mathcal{C}_{0}\frac{1+\zeta_{0}}{1+\zeta_{b}}\frac{p_{b}}{p_{s}}\frac{h_{d}}{h_{d}-z_{b}}=\mathcal{C}_{0}\frac{1+\zeta_{0}}{1+\zeta_{b}}=1.1\mathcal{C}_{0},$
(A14)
where
$\zeta_{0}=\dfrac{L}{RT_{0}}\frac{p_{v0}^{*}}{p_{s}},\quad\zeta_{b}=\dfrac{L}{RT_{b}}\frac{p_{vb}^{*}h_{d}}{p_{s}(h_{d}-z_{b})}.$
(A15)
For characteristic values observed in Hurricane Isabel 2003, $T_{0}=297$ K and
$T_{0}-T_{b}=4$ K, we have $p_{v0}^{*}=30$ hPa and $p_{vb}^{*}=23$ hPa, such that
with $p_{s}=10^{3}$ hPa and $z_{b}/h_{d}\simeq 0.1$ the coefficient at
$\mathcal{C}_{0}$ equals $1.1$. (Note that in this evaluation the difference
$T_{0}-T_{b}$ is not arbitrary but should correspond to the assumed moist
adiabatic lapse rate for $z\leq z_{b}$.) When the surface air is isothermal,
$\mathcal{C}_{0}=1$ and $\mathcal{C}_{b}=1.1$. This shows that the air
temperature at the top of the boundary layer does increase towards the storm
center (this is due to the fact that the water vapor mixing ratio at the
surface increases towards the center). However, this increase, and the
corresponding $\mathcal{C}_{b}$ value, are too small to fix the approximately
twofold mismatch between $\varepsilon$ and $1/(1+\zeta)$, see Eq. (13) and
Fig. 1a.
For an isothermal process with $q_{l}=0$ and variable relative humidity, we
have from Eq. (A1) and $dq/q=(p/p_{d})d\mathcal{H}/\mathcal{H}-dp/p_{d}$
$Tds=-\alpha_{d}dp+L_{v}dq=-\alpha_{d}dp\left(1+\frac{L_{v}q}{\alpha_{d}p_{d}}-\frac{L_{v}q^{*}p}{\alpha_{d}p_{d}}\frac{d\mathcal{H}}{dp}\right)=-\alpha_{d}dp\left[1+\zeta\mathcal{H}\left(1-\frac{p}{\mathcal{H}}\frac{d\mathcal{H}}{dp}\right)\right].$
(A16)
Here we omitted the term $-(RT/M_{v})\ln\mathcal{H}dq$, which is at least
$\xi=L/RT\sim 18$ times less than $L_{v}dq$. For $\mathcal{C}_{0}=1$
(horizontally isothermal air at the sea surface), we obtain from Eq. (A16) and
$\partial s_{0}/\partial r=\partial s^{*}_{b}/\partial r$ by analogy with Eq.
(A14)
$\mathcal{C}_{b}(1+\zeta_{b})=1+\zeta_{0}\mathcal{H}\left(1-\frac{p_{s}}{\mathcal{H}}\frac{\partial\mathcal{H}/\partial
r}{\partial p_{s}/\partial r}\right).$ (A17)
For an adiabatic process with $T_{b}<T_{0}$ we have $q^{*}_{b}\leq
q_{0}=\mathcal{H}q^{*}_{0}$. With $\partial\mathcal{H}/\partial r\geq 0$
(relative humidity at the surface increasing towards the storm center) it
follows that $\mathcal{C}_{b}\geq 1$. For $\partial\mathcal{H}/\partial r=0$,
the maximum value of $\mathcal{C}_{b}$ corresponds to saturation
$\mathcal{H}=1$ and is given by Eq. (A14). (For a dry adiabat that only
reaches saturation at $z=z_{b}$ we would have $\mathcal{C}_{b}\simeq 1$ as
$T_{b}\simeq T_{0}$). Therefore, E-PI’s assumptions that the surface air is
isothermal (Emanuel, 1986, p. 589), while $\partial\mathcal{H}/\partial r=0$
at the radius of maximum wind (where $\mathcal{H}$ is assumed to be equal to
its undisturbed ambient value, Emanuel, 1995, p. 3971), are equivalent to
assuming $\mathcal{C}_{b}\simeq 1$ at the point of maximum wind. If
$\partial\mathcal{H}/\partial r\neq 0$, by varying its value it is possible to
satisfy Eq. (12) at the point of maximum wind. Then a check of E-PI’s validity
would be not the value of $\partial T_{b}/\partial r$, but
$\partial\mathcal{H}/\partial r$ at the surface, which, in this context,
cannot be freely specified in E-PI. However, for stronger storms that reach
saturation at the radius of maximum wind, we would have
$\partial\mathcal{H}/\partial r=0$ (section 3.a).
### Appendix B Hypercanes
Hypercanes were introduced as winds with theoretically infinite velocities
that should occur at sea surface temperatures exceeding approximately $40^{\circ}$C
(Emanuel, 1988), i.e., at those temperatures where $1/\varepsilon=1+\zeta$ and
the solid and dashed lines in Fig. 1a begin to intersect. This is not a
coincidence.
The singularity responsible for hypercanes first appeared in Emanuel (1986)’s
Eq. (26) for the central pressure drop. This equation derives from combining
two equations. The first one is Eq. (A16) for the horizontally isothermal air
at the sea surface, which corresponds to Emanuel (1986)’s Eq. (25). The second
one is $\varepsilon T_{b}ds^{*}=-\alpha_{d}dp$ for the top of the boundary
layer $z=z_{b}$, which corresponds to Emanuel (1986)’s Eq. (21). From these
two equations, assuming as before that $\partial s^{*}_{0}/\partial r=\partial
s^{*}_{b}/\partial r$ and using Eq. (A13) (cf. the unnumbered equation after
Eq. (25) on p. 589 of Emanuel, 1986), we obtain
$\frac{1}{\varepsilon}=1+\zeta_{0}\mathcal{H}\left(1-\frac{p_{s}}{\mathcal{H}}\frac{\partial\mathcal{H}/\partial
r}{\partial p_{s}/\partial r}\right).$ (B1)
Solving this for $(1/p_{s})\partial p_{s}/\partial r$ gives
$\frac{\partial\ln p_{s}}{\partial
r}=\frac{-\varepsilon\zeta_{0}\partial\mathcal{H}/\partial
r}{1-\varepsilon(1+\zeta_{0}\mathcal{H})}.$ (B2)
Linearizing this yields Emanuel (1986)’s Eq. (26) (where, in the numerator, the
last term proportional to the squared outflow radius is omitted for simplicity).
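As an algebraic cross-check (our sketch; the symbol names are ours), the step from Eq. (B1) to Eq. (B2) can be verified with SymPy:

```python
import sympy as sp

# eps = epsilon, z0 = zeta_0, H = relative humidity, ps = surface pressure;
# dH and dps stand for the radial derivatives dH/dr and dp_s/dr
# (the radius r enters only through them).
eps, z0, H, ps, dH, dps = sp.symbols('eps z0 H ps dH dps', positive=True)

# Eq. (B1): 1/eps = 1 + z0*H*(1 - (ps/H)*(dH/dps))
b1 = sp.Eq(1 / eps, 1 + z0 * H * (1 - (ps / H) * dH / dps))

# Solve for dps and form d(ln p_s)/dr = dps/ps
dlnps = sp.simplify(sp.solve(b1, dps)[0] / ps)
print(dlnps)  # eps*z0*dH/(eps*(H*z0 + 1) - 1), equivalent to Eq. (B2)
```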
The singularity corresponding to $\partial p_{s}/\partial r\to\infty$ arises
at $1/\varepsilon=1+\zeta_{0}\mathcal{H}$ under the assumption that
$\varepsilon$ is independent of $\zeta_{0}$. This assumption is incorrect,
since for E-PI to be valid, Eq. (13) must hold, such that
$1/\varepsilon=\mathcal{C}_{b}(1+\zeta_{b})$. In view of Eq. (A17), the latter
expression is equal to the right-hand part of Eq. (B1), which means that
Emanuel (1986)’s Eq. (26) is an identity, from which nothing can be deduced.
Hypercanes do not exist.
### Appendix C The superintensity analysis of Bryan and Rotunno (2009a)
Bryan and Rotunno (2009a, Eq. (24)) derived the following diagnostic
expression to quantify how supergradient winds influence the potential
intensity estimate:
$v_{\rm B}^{2}=v_{E}^{*2}+ar_{m}\eta_{m}w_{m}.$ (C1)
Here $v_{E}^{*}$ is an E-PI velocity estimate that involves dissipative heating
($v_{E}^{*2}=av_{E}^{2}$ with $a\equiv T_{s}/T_{o}>1$), $w$ is vertical
velocity, $\eta\equiv\partial u/\partial z-\partial w/\partial r$ is the
azimuthal component of absolute vorticity, and the subscript $m$ indicates
that the variables are evaluated at the point of maximum (tangential) wind.
(Since at this point $u=0$ and $w\ll v$, the points of maximum wind and
maximum tangential wind practically coincide.)
Bryan and Rotunno (2009a, Eq. (22)) assumed that
$\eta_{m}w_{m}=\frac{v_{m}^{2}}{r_{m}}-\alpha\frac{\partial p}{\partial
r}\equiv(1-\mathcal{B})\frac{v_{m}^{2}}{r_{m}}.$ (C2)
The identity uses our definition of $\mathcal{B}$ (14). From Eq. (C2) we have
$\mathcal{B}=1-\frac{r_{m}\eta_{m}w_{m}}{v_{m}^{2}}.$ (C3)
Based on the data of Bell and Montgomery (2008) for Hurricane Isabel 2003,
Bryan and Rotunno (2009a) compiled terms from the right-hand part of Eq. (C3)
in their Table 1. Using that table, we estimate $\mathcal{B}$ for September
12, 13 and 14 as, respectively, $0.95$, $0.72$ and $0.85$ (these values are
used in our Table 2). For September 13, $\mathcal{B}=0.72$ obtained from Eq.
(C3) approximately agrees with the statement of Bell and Montgomery (2008, p.
2037) that on this day “the boundary layer tangential wind was $\sim 15\%$
supergradient” near the radius of maximum wind (this corresponds to
$\mathcal{B}=1/1.15^{2}=0.76$).
For their control simulation, Bryan and Rotunno (2009a, p. 3055) report the
following values: $v_{m}=109$ m s$^{-1}$, $r_{m}=17.5$ km, $\eta_{m}=0.03$ s$^{-1}$,
$w_{m}=8$ m s$^{-1}$. Using these values, we estimate $\mathcal{B}=\mathcal{B}_{\rm
B1}=0.65$ from Eq. (C3).
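This estimate is simple arithmetic; a minimal sketch (ours) with the values quoted above:

```python
# Control-simulation values reported by Bryan and Rotunno (2009a, p. 3055)
v_m   = 109.0   # maximum wind speed, m/s
r_m   = 17.5e3  # radius of maximum wind, m
eta_m = 0.03    # azimuthal absolute vorticity, 1/s
w_m   = 8.0     # vertical velocity, m/s

B = 1.0 - r_m * eta_m * w_m / v_m**2  # Eq. (C3)
print(round(B, 2))                    # 0.65, i.e. B_B1
```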
Using Eq. (C3) in Eq. (C1), while assuming that $v_{\rm B}=v_{m}$, we find
$v_{\rm B}^{2}=\frac{v_{E}^{*2}}{1-a(1-\mathcal{B})}.$ (C4)
Putting $\mathcal{B}=\mathcal{B}_{\rm B1}=0.65$ into Eq. (C4) with $a=1.5$ and
$v_{E}^{*}=72$ m s$^{-1}$ (the values Bryan and Rotunno (2009a) report for their
control simulation), we find $v_{\rm B1}=105$ m s$^{-1}$. This is in good agreement
with $v_{\rm B}=107$ m s$^{-1}$ obtained by Bryan and Rotunno (2009a) from their
original formulation given by Eq. (C1). The minor discrepancy is due to the
assumption $v_{\rm B}=v_{m}$ that we made to obtain Eq. (C4).
From the fact that their estimated $v_{\rm B}=107$ m s$^{-1}$ is sufficiently close
to the actual maximum velocity $v_{m}=109$ m s$^{-1}$ in their control simulation,
Bryan and Rotunno (2009a, p. 3055) concluded that “the neglect of unbalanced
flow effects is mostly responsible for the systematic underprediction by
E-PI”.
Figure 3: Part of Bryan and Rotunno (2009a)’s Fig. 8, with the thick curve
showing the streamline to which the point of maximum wind (black circle)
belongs, and the thin contours showing $(1-\mathcal{B})/\mathcal{B}$ with an
interval of $0.2$, in Bryan and Rotunno (2009a)’s control simulation. We have
added two red squares to indicate where the point of maximum wind should have
been located if the value $\mathcal{B}=\mathcal{B}_{\rm B1}=0.65$, i.e.,
$(1-\mathcal{B})/\mathcal{B}=0.55$, corresponding to the data from Bryan and
Rotunno (2009a)’s section 6b (see Eq. (C3)), were correct.
However, a closer inspection of these results reveals that they are not self-
consistent. In their Fig. 8, Bryan and Rotunno (2009a) analyzed the degree to
which the gradient-wind balance is broken in their control simulation by
plotting contours of $(1-\mathcal{B})/\mathcal{B}$ (Fig. 3). That figure shows
that the point of maximum wind at $r_{m}=17.5$ km corresponds to
$(1-\mathcal{B})/\mathcal{B}\gtrsim 1$ and $\mathcal{B}=\mathcal{B}_{\rm
B2}\lesssim 0.5$. This is confirmed in the text, which says that at the point
of maximum wind the sum $v_{m}^{2}/r_{m}+fv_{m}$ (here the second term is
negligibly small) is twice the absolute magnitude of $\alpha\partial
p/\partial r$ (see Bryan and Rotunno, 2009a, p. 3050 and Eq. (12)).
Given that Eq. (C4) has a singularity at $\mathcal{B}=1-1/a\simeq\varepsilon$,
the estimate of $v_{\rm B}$ is sensitive to minor variations around
$\mathcal{B}=1-1/a=0.33$, to which the value $\mathcal{B}_{\rm B2}\lesssim 0.5$
retrieved from Fig. 8 of Bryan and Rotunno (2009a) is sufficiently close.
Putting $\mathcal{B}=\mathcal{B}_{\rm B2}=0.5$ into Eq. (C4) we obtain $v_{\rm
B2}=144$ m s$^{-1}$, which is far beyond $v_{m}=109$ m s$^{-1}$ in their control
simulation.
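To make the sensitivity near the singularity explicit, here is a minimal sketch (ours) evaluating Eq. (C4) for both $\mathcal{B}$ values:

```python
import math

a, v_E_star = 1.5, 72.0  # control-simulation values quoted above

def v_B(B):
    """Eq. (C4): v_B = v_E* / sqrt(1 - a*(1 - B)); diverges as B -> 1 - 1/a."""
    return v_E_star / math.sqrt(1.0 - a * (1.0 - B))

print(v_B(0.65))  # ~104.5 m/s (cf. v_B1 = 105 m/s with the unrounded B_B1)
print(v_B(0.50))  # 144.0 m/s, i.e. v_B2
```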
One can hypothesize either that the data reported by Bryan and Rotunno (2009a)
in their section 6b and Fig. 8 refer, for an undisclosed reason, to different
simulations (a hint that this could be the case is that throughout their paper
Bryan and Rotunno (2009a) report slightly different $v_{m}$ values for their
control simulation: $v_{m}=108$ m s$^{-1}$ on p. 3046 versus $v_{m}=109$ m s$^{-1}$
on p. 3055), or that some of the assumptions behind their derivations do not
hold. In either case, until this major discrepancy,
$\mathcal{B}_{\rm B1}\neq\mathcal{B}_{\rm B2}$ and $v_{\rm B1}\ll v_{\rm B2}$,
is resolved (Fig. 3), Bryan and Rotunno (2009a)’s conclusion, that their
numerical simulations support the hypothesis of superintensity being largely
due to supergradient winds, lacks a solid ground.
Furthermore, the infinite winds at $\mathcal{B}=1-1/a>0$, implied by their
derivations and exposed by Eq. (C4), require an explanation. While we
ultimately leave it to the authors to discuss yet another E-PI
singularity (the first one being $1/[1-\varepsilon(1+\zeta)]$ in Emanuel
(1986)’s Eq. (26), see section 2 and Appendix B), here we can offer one
thought. That $v_{\rm B}\to\infty$ at a finite $\mathcal{B}\to 1-1/a$ is
solely due to $a>1$. The latter inequality is a consequence of “dissipative
heating” assumed to recirculate within the cyclone to make it stronger.
Without this dissipative heating, $a=1$ in Eq. (C4) (see footnote 2 of Bryan
and Rotunno, 2009a), and the unphysical singularity disappears. The
“superintensity” effect of supergradient winds is then accounted for by
dividing the conventional (“balanced”) $v_{E}^{2}$ (5) by $\mathcal{B}<1$, a
straightforward procedure identical to what we applied in our alternative
formulation in Eq. (15).
We conclude by demonstrating how our alternative expression (15) can be
applied to the control simulation of Bryan and Rotunno (2009a). To exclude
dissipative heating, we note that $v_{E}^{2}=v_{E}^{*2}/a$. The same factor,
$1/1.5$, characterizes the ratio between the left-hand and right-hand parts of
Bryan and Rotunno (2009a)’s Eq. (8), where the second term in the right-hand
part accounts for dissipative heating (see Fig. 6 and p. 3049 of Bryan and
Rotunno, 2009a). This means that Bryan and Rotunno (2009a)’s Eq. (8) without
dissipative heating – i.e., our relationship (4) – for their control
simulation is exact.
With $(v_{m}/v_{E})^{2}=a(v_{m}/v_{E}^{*})^{2}=1.5(109/72)^{2}=3.4$, we have a
significant superintensity. With $\mathcal{B}=0.5$, from Eqs. (15) and (16) we
find that
$[(1+\zeta)\varepsilon\mathcal{C}]^{-1}=\mathcal{B}(v_{m}/v_{E})^{2}=1.7$.
This means that supergradient winds account for one half of the superintensity
factor $3.4$, leaving the remaining $1.7$ unexplained. For $\varepsilon\simeq
1-1/a=0.33$ and assuming $T_{b}=293$ K as in Hurricane Isabel 2003, whose
surface air temperature (297.5 K) was similar to that in Bryan and Rotunno
(2009a)’s control simulation (298 K), we have $1/(1+\zeta)=0.67$ at $850$ hPa
and $\mathcal{C}=[\mathcal{B}(v_{m}/v_{E})^{2}(1+\zeta)\varepsilon]^{-1}=1.2$.
This situation is comparable to Hurricane Isabel 2003 on September 12 (Table
2), i.e., the air at the point of maximum wind in Bryan and Rotunno (2009a)’s
control simulation is close to horizontal isothermy. This explains why in
Bryan and Rotunno (2009a)’s control simulation E-PI strongly underestimates
$v_{m}$ even after the account of supergradient winds. Wang et al. (2014), who
followed Bryan and Rotunno (2009a)’s approach, similarly found that the effect
of supergradient winds was insufficient to explain superintensity in their 3D
model. With no other quantitative explanation at hand, they hypothesized that
the discrepancy could be attributed to the neglect of turbulent mixing or to
cyclones not being truly in steady state, but did not examine $\partial
T_{b}/\partial r$.
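The arithmetic of this decomposition can be recapped in a few lines (our sketch, using the values quoted above for Bryan and Rotunno (2009a)'s control simulation):

```python
a, v_m, v_E_star = 1.5, 109.0, 72.0  # control-simulation values
B, inv_1pz = 0.5, 0.67               # B from Fig. 8; 1/(1+zeta) at 850 hPa
eps = 1.0 - 1.0 / a                  # epsilon ~ 0.33

super_total = a * (v_m / v_E_star)**2  # (v_m/v_E)^2 ~ 3.4
unexplained = B * super_total          # [(1+zeta)*eps*C]^(-1) ~ 1.7, Eqs. (15)-(16)
C = inv_1pz / (unexplained * eps)      # ~ 1.2, close to horizontal isothermy
print(round(super_total, 1), round(unexplained, 1), round(C, 1))  # 3.4 1.7 1.2
```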
### Appendix D E-PI and the partial pressure of water vapor
Using Eq. (A4) and $q\equiv\mathcal{H}q^{*}$, where $\mathcal{H}\equiv
p_{v}/p_{v}^{*}$ is relative humidity, the E-PI upper limit $\hat{v}_{E}$ on
maximum velocities (Table 1) can be re-written as follows:
$\hat{v}_{E}^{2}\sim\left(\frac{1}{2}\varepsilon\frac{C_{k}}{C_{D}}\frac{L}{RT_{s}}\frac{1-\mathcal{H}_{a}}{\mathcal{H}_{a}}\right)\frac{2p_{va}}{\rho_{a}}.$
(D1)
Here $p_{va}=\mathcal{H}_{a}p_{va}^{*}$ is the actual partial pressure of
water vapor in surface air in the ambient environment, $p_{va}^{*}$ is the
partial pressure of saturated water vapor at sea surface temperature in the
ambient environment, and $\rho_{a}$ is the ambient air density at the surface.
Using typical tropical values $T_{s}=300$ K, $\mathcal{H}_{a}=0.8$,
$\rho_{a}\simeq 1.2$ kg m$^{-3}$, $\varepsilon=0.32$ (Table 1 of Emanuel, 1989)
and $C_{k}/C_{D}=1$, we have $p_{va}=28$ hPa and $v_{m}\simeq 60$ m s$^{-1}$, in
agreement with Table 1 of Emanuel (1989).
The coefficient in parentheses in Eq. (D1) for the same typical parameters is
close to unity and depends only weakly on air temperature:
$\frac{1}{2}\varepsilon\frac{C_{k}}{C_{D}}\frac{L}{RT_{s}}\frac{1-\mathcal{H}_{a}}{\mathcal{H}_{a}}\simeq
1.$ (D2)
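These numbers can be reproduced as follows (a minimal sketch; the constants $L_{v}=2.5\times 10^{6}$ J kg$^{-1}$ and $M_{v}=0.018$ kg mol$^{-1}$ are our assumed values, not given in the text):

```python
import math

R, M_v, L_v = 8.314, 0.018, 2.5e6  # SI units; assumed constants
L = L_v * M_v                      # molar heat of vaporization, J/mol

T_s, H_a, rho_a = 300.0, 0.8, 1.2  # typical tropical values from the text
eps, CkCD = 0.32, 1.0
p_va = 28e2                        # Pa, = H_a * p_va^*

coef = 0.5 * eps * CkCD * (L / (R * T_s)) * (1.0 - H_a) / H_a  # factor in Eq. (D1)
v_hat = math.sqrt(coef * 2.0 * p_va / rho_a)                   # Eq. (D1)
print(round(coef, 2), round(v_hat, 1))  # ~0.72 and ~58 m/s (cf. ~1 and ~60 m/s)
```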
This means that numerically the scaling of maximum velocity in E-PI
practically coincides with the scaling
$\rho_{a}\frac{\hat{v}_{E}^{2}}{2}=p_{va}$ (D3)
proposed within the concept of condensation-induced atmospheric dynamics (for
a more detailed discussion see Makarieva et al., 2019, section 5). Introducing
dissipative heating leads to an additional factor $1/(1-\varepsilon)\sim 1$
in Eq. (D2) and in $\hat{v}_{E}^{2}$.
### Appendix E Response to the reviewers
Jul 28, 2021
Ref.: JAS-D-21-0149
Editor Decision
Dear Dr. Makarieva,
I am now in receipt of all reviews of your manuscript “Alternative expression
for the maximum potential intensity of tropical cyclones”, and an editorial
decision of Major Revision has been reached. The reviews are included below or
attached.
We invite you to submit a revised paper by Sep 26, 2021. If you anticipate
problems meeting this deadline, please contact me as soon as possible at
<EMAIL_ADDRESS>
Along with your revision, please upload a point-by-point response that
satisfactorily addresses the concerns and suggestions of each reviewer. To
help the reviewers and Editor assess your revisions, our journal recommends
that you cut-and-paste the reviewer and Editor comments into a new document.
As you would conduct a dialog with someone else, insert your responses in a
different font, different font style, or different color after each comment.
If you have made a change to the manuscript, please indicate where in the
manuscript the change has been made. (Indicating the line number where the
change has been made would be one way, but is not the only way.)
Although our journal does not require it, you may wish to include a tracked-
changes version of your manuscript. You will be able to upload this as
“additional material for reviewer reference”. Should you disagree with any of
the proposed revisions, you will have the opportunity to explain your
rationale in your response.
Please go to www.ametsoc.org/PUBSrevisions and read the AMS Guidelines for
Revisions. Be sure to meet all recommendations when revising your paper to
ensure the quickest processing time possible.
When you are ready to submit your revision, go to
https://www.editorialmanager.com/amsjas/ and log in as an Author. Click on the
menu item labeled “Submissions Needing Revision” and follow the directions for
submitting the file.
Thank you for submitting your manuscript to the Journal of the Atmospheric
Sciences. I look forward to receiving your revision.
Sincerely,
Dr. Zhuo Wang
Editor
Journal of the Atmospheric Sciences
September 26, 2021
Ref.: JAS-D-21-0149
Resubmission of revised manuscript (manuscript JAS-D-21-0149 with line
numbers can be found at https://bioticregulation.ru/ab.php?id=alt)
Dear Editors,
Thank you for your consideration of our work “Alternative expression for the
maximum potential intensity of tropical cyclones”, of which a major revision
we are now resubmitting. We are grateful for this opportunity to further
clarify our arguments while addressing the rigorous criticisms and insightful
suggestions from our two reviewers.
We appreciate that one of our reviewers has changed his characterization of
our work from “fundamentally flawed” to “good scholarship that deserves
publication”, and that our second reviewer, despite having downgraded the same
work from “thought-provoking” to “fundamentally flawed”, now also believes
that its publication could nevertheless be allowed.
The viewpoint of our second reviewer, as we understood it, is as follows: (1)
E-PI does not make any assumptions about the horizontal temperature gradient;
(2) E-PI conforms to observations, and (3) where it does not, the mismatch has
been explained by supergradient winds. Insofar as E-PI does not assume
horizontal isothermy, it is not valid to consider the case of horizontal
isothermy and to infer from this, as we do based on our alternative
formulation, that E-PI underestimates maximum winds.
Our work presents a different picture. First, while we agree that E-PI does
not assume anything about the horizontal temperature gradient, we show that it
unambiguously predicts it. This is a novel result. It is explicitly recognized
as such by our first reviewer; our second reviewer never commented upon it,
but likewise never challenged it. Assessing the value of $\partial
T/\partial r$ is a direct test of E-PI’s validity.
Second, we give examples and argue that in many cases a negligible or small
temperature gradient at the point of maximum wind is a plausible
approximation. In those cases E-PI does not conform to observations and does
underestimate maximum winds.
Third, prompted by our second reviewer, we have carefully re-evaluated the
study of supergradient winds by Bryan and Rotunno (2009a), to reveal that
their analysis is not self-consistent and does not explain superintensity
either in their own numerical model or in Hurricane Isabel 2003. (Notably,
Wang et al. (2014) similarly found that supergradient winds did not explain
superintensity in their model; however, their results apparently did not raise
concerns and did not urge a re-evaluation of Bryan and Rotunno (2009a)’s
conclusions.) This leaves our new formulation, easily modified to
diagnostically account for supergradient winds, the only available coherent
explanation of “superintensity”.
Addressing our two reviewers’ shared concern, we have re-written the
concluding section to explicitly discuss the difference between our
alternative formulation that relies on the inner core parameters, versus the
conventional E-PI interpreted as estimating intensity from environmental
parameters alone. We draw the parallel between our work and the recent study
by Tao et al. (2020b), who showed that in Lilly’s model the outflow
temperature is controlled by the inner core structure. Furthermore, to
corroborate our previous qualitative discussion, we now provide a quantitative
estimate of the horizontal temperature gradient that results from the storm’s
secondary circulation (Eq. (17) in the concluding section). Judging from the
reviewers’ feedback received so far, these relationships are novel and the
underlying mechanisms previously unrecognized.
We hope that, in this revised form, the readers could now indeed be allowed to
make their own judgement about our work and possibly advance the discussion of
this important and complex subject even further. Once again, we are grateful
to our reviewers for their constructive inputs that we perceive as even more
valuable in view of several disagreements.
Sincerely,
Anastassia Makarieva
Andrei Nefiodov
RESPONSE TO THE REVIEWERS
We are thankful to our reviewers for their time and constructive criticisms
and suggestions. We have deepened our understanding of the subject during this
discussion. The reviewers’ inputs have been very valuable for us, which we
gratefully acknowledge.
REVIEWER COMMENTS
Reviewer #1: General Remarks
I no longer have a problem with the introduction, which has been rewritten to
de-emphasize adjustments in the wind-pressure relationship. An inflow
temperature variation (the new emphasis) is one of the two ways to get a non-
trivial $dp/dr$ (or $\Delta p$ in E86), the other being an $RH$ variation, which
has been the only consideration in the literature to now. The adjustment
represented by the factor $B$ in Eq. 14 does not accomplish this; hence my
problem with the emphasis in the first ms. Moving from E86’s bulk changes in
$p$, $s$ and $T$ to radial derivatives allows familiar substitutions from the
aerodynamic laws. I agree that this is novel. The reader is now reassured by
Eq. 4 that the analysis retains the same source and sink of energy as in the
established theory.
Thank you for this evaluation. Regarding Eq. 4, as we discuss in the revised
concluding section, it is the most problematic part of E-PI. The study of Zhou
et al. (2017) provides a good illustration.
The title is strictly accurate, but like one of the other reviewers, I find it
misleading. The alternative expression is robust but not well constrained.
After all, the vortex is (at a minimum) a two-dimensional system while the new
analysis uses a one-dimensional model. The inflow temperature gradient at the
outer eyewall is not known even approximately from environmental parameters,
even if one can often say qualitatively what it might look like (as is done in
the new concluding section). One cannot produce maps of MPI using the new
expression. But we know that the established MPI has problems with outflow
boundary conditions, outflow energy dissipation and non-conservation of
entropy. These have been looked at in various papers. The “robust” new
expression can be a useful way to quantify the errors, but I don’t think it
explains them or even sorts them out. To say it explains them is to say that
the radial $T$ gradient is controlling the outflow conditions, dissipation and
updraft entrainment. My impression from the modeling study of Zhou et al (JAS
2017) is that the 2D constraints can conspire to ignore $T(r)$. To avoid
confusing the research and observations communities, the paper must include a
broader discussion of the practical meaning of the new expression.
We agree with the majority of the above concerns and evaluations and have
fully re-written the concluding section to explicitly discuss relevant issues.
Now the concluding section has three subsections: “The nature and magnitude of
the horizontal temperature gradient”, “External and internal parameters in the
maximum potential intensity formulations” and “Why $v_{E}$ underestimates,
while $\hat{v}_{E}$ overestimates, maximum winds”. We highlight several points
below.
…The inflow temperature gradient at the outer eyewall is not known even
approximately…
To say that it is not constrained at all would be too big a leap from the
previous convenient assumption that it is universally zero. We now present a
quantitative estimate for this gradient as dependent on the ratio between
radial and total wind speeds, Eq. (17), and show on the example of Hurricane
Isabel 2003 that it agrees well with observations. One of the main
implications is that the air-sea disequilibrium in the eyewall of intense
cyclones is largely a product of the cyclone’s velocity structure. It cannot
be freely specified. Once these patterns receive more attention, it can well
be hoped that the radial temperature gradient at the point of maximum wind
will turn out to be better constrained than it seems now, unstudied as it is.
…One cannot produce maps of MPI using the new expression…
We discuss why such maps may not be very informative in any case, given the
exposed MPI limitations.
…but I don’t think it explains them or even sorts them out…
Not alone, but together with the recently proposed interpretation of E-PI as a
thermodynamic cycle with zero work in the free troposphere, it does. In
particular, it allows one to quantify how the deviation from adiabaticity at
the tropopause causes the mismatch between $v_{A}$ and $v_{E}$.
…After all, the vortex is (at a minimum) a two-dimensional system while the
new analysis uses a one-dimensional model…
This is a point that we don’t quite take. The alternative formulation $v_{A}$
is valid for any model, 1D, 2D or 3D, provided that the air is saturated at
the point of maximum wind. Its extension to the unsaturated case is
straightforward. In this case, E-PI will predict not the radial temperature
gradient, but a relationship between $\partial T/\partial r$ and
$\partial\mathcal{H}/\partial r$. However, unsaturated air at the point of
maximum wind ($z_{m}\sim 0.5$–$1$ km) is, in our view, an unrealistic
assumption that demonstrably conflicts with the other E-PI assumptions (e.g.
with the assumption that entropy is well mixed in the vertical).
The established MPI doesn’t depend as much on the restriction to $RH=1$ as
this paper does. Extending the trajectories into the sub-cloud layer where
$s<s^{*}$ only impacts the angular momentum conservation, which does not seem
like a new problem. Thermal-wind balance and conservation of $s$ are not
affected by diving into the sub-cloud layer, at least in the idealized setup
(you don’t need the Maxwell relations if you integrate around a closed circuit
– made up of “adjacent heat cycles” in the Rousseau-Rizzi terminology). An
alternative expression could also be developed for $d(RH)/dr>0$ and $dT/dr=0$,
which is probably a bit more appropriate for the inflowing air at the surface.
If observations are available, this could be a test of the new expression
without depending on assumptions about the height of cloud base.
Indeed, E-PI does not depend on $\mathcal{H}$ at $z=z_{m}$, because it is
based on the consideration of an infinitely narrow cycle. Whatever
irreversibilities occur in the infinitely small parts of the cycle, they do
not affect the cycle’s efficiency provided that the finite parts remain
adiabatic. We discuss this in section 3.b. However, for real cyclones with the
point of maximum wind located at around 1 km, $\mathcal{H}=1$ is a valid
assumption. Regarding the surface, we have derived this alternative
relationship for isothermal air, see Eq. (A17), but it is not very realistic
in view of the fact that the surface air cools towards the eyewall and
approaches saturation, see Eq. (17).
Let me go through some of the authors’ replies to my review:
Fundamentally flawed is, for example, Eq. (6) of Emanuel and Rousseau-Rizzi
(2020), where the units of the right-hand and left-hand sides do not match.
That is indeed wrong.
I was intrigued to know what the authors consider “fundamental.”
Unfortunately, the reference doesn’t have an equation 6. The original 2019
paper does have an equation 6, but the units match. For the record, I don’t
consider a typo to be a fundamental flaw. I mention my example of a
fundamental flaw below.
Emanuel and Rousseau-Rizzi (2020) was referred to as Rousseau-Rizzi and
Emanuel (2020) in the manuscript, hence the difficulties in locating Eq. (6),
our fault. We appreciate the additional post-review discussions after Eq. (6)
was located. Our extended comment on this equation is provided in Footnote 4
in the revised text.
These are interesting observations, but they do not appear to be relevant to
our work or represent a criticism of it. Indeed, unlike our original Eq. (6)
(Eq. (3) in the revised ms), Eq. (26) of Emanuel (1986) is not a local
differential equation applicable at the point of maximum wind. The E-PI
formula for maximum velocity is derived from Eq. (13) of Emanuel (1986) and
not from his Eq. (26), the latter containing additional assumptions not
applicable in the differential form.
The relevance is that with $dT/dr=0$ everywhere (along with $dRH/dr=0$), you
don’t have a vortex. After that, the manipulation of the wind-pressure
relationship via the factor $B$ is meaningless. There are dangers in using a
one-dimensional model for a two-dimensional system and I think this identifies
the biggest one.
Once again, we don’t quite understand this concern. We don’t assume $\partial
T/\partial r=0$ and $\partial\mathcal{H}/\partial r=0$ everywhere. We only
assume, as does Emanuel (1986), that $\partial\mathcal{H}/\partial r=0$
(saturated air) at the point of maximum wind. Our formulation simply
represents the radial gradient of one state variable, $s^{*}$, via the radial
gradients of two others, $p$ and $T$. It is just the definition of $s^{*}$, it
is independent of the system dimensionality.
As an aside, factor $1-\varepsilon(1+\zeta)$ appeared in the denominator of
Emanuel (1986)’s Eq. (26) because, to derive his Eq. (26), Emanuel (1986) used
his Eq. (13) (our Eq. 3) together with $\partial T_{b}/\partial r=0$. Since
generally Emanuel (1986)’s Eq. (13) is incompatible with $\partial
T_{b}/\partial r=0$ (Fig. 1a), Emanuel (1986)’s Eq. (26) is incorrect. This
singularity does not exist. Hypercanes introduced based on this equation are a
misconception.
Emanuel doesn’t assume $\partial T_{b}/\partial r=0$ in this context. He
assumes that $T_{b}$ is the same as in the environment. This leaves room for
non-monotonic fluctuations of $T_{b}$ and even monotonic fluctuations that are
small enough in aggregate.
It is a valid argument. We could have been more precise in pinpointing the
error. The problem with Eq. 26 is as follows. To derive this equation, Emanuel
(1986) simultaneously used $\varepsilon T_{b}\partial s^{*}/\partial
r=-\alpha\partial p/\partial r$ and a version of our Eq. (6) for unsaturated
isothermal air, see Emanuel (1986)’s Eqs. 21 and 25, respectively, as if they
were independent constraints. Since they are not, Eq. (26) represents an
identity, from which nothing can be deduced. Hypercanes do not exist. We dwell
on this in some detail in Appendix B, since this problem illustrates the
pervasive influence of the exposed constraint Eq. (13) on all E-PI constructs.
The 1995 JGR paper does a pretty good job of establishing that the hypercane
is not a misconception. The denominator comes from linearizing the exponential
curve in Fig. 1 of the 1988 paper. One can also work with the logarithm of Eq.
22 in that paper.
This just shows that, without a robust theory to check them against, numerical
simulations are always available to illustrate any desired pattern, including
those that cannot exist in reality.
In terms of 1) and 2) above, what is said in our work is, simplifying, that
$a=a(T_{o})$ and $b=b(\partial T/\partial r)$. Thus requiring $a=b$ sets a
relationship between the outflow temperature $T_{o}$ and the radial gradient
of temperature, $\partial T/\partial r$, at the point where $\partial
v/\partial r=0$. This is a novel, previously unexplored constraint on E-PI.
Agreed, as I mentioned in my general remarks.
The literature is clear enough to deduce that Eq. (26) of Emanuel (1986) is
incorrect.
Again, I disagree.
Please see Appendix B.
This is generally true, but it can be shown that if the atmosphere is
saturated at the radius of maximum wind (as it usually is), the local radial
gradient of $T_{b}$ will be too small to appreciably change the mismatch
between Eqs. (3) and (10) in the revised manuscript if $T_{s}$ is constant,
see Eq. (A13) in the Appendix.
This seems right if we are talking about cloud base. But I am puzzled by the
implications of using the surface air instead of cloud base for this 1-D
analysis, as I mentioned in my general remarks. If the analysis with
$dRH/dr<0$ turns out to be substantially different, I would conclude that
variation of the cloud base height cannot be small.
We have provided an expression for $\mathcal{C}_{b}$ for the case of
isothermal surface air and $\partial\mathcal{H}/\partial r\neq 0$, see Eq.
(A17). Obviously, by varying $\partial\mathcal{H}/\partial r$ it is possible
to arrange any value of $\partial T_{b}/\partial r$ and $\mathcal{C}_{b}$ at
the point of maximum wind. The implication is then that
$\partial\mathcal{H}/\partial r$ in E-PI cannot be freely prescribed; it is
predicted. In particular, it is not possible to set
$\partial\mathcal{H}/\partial r=0$ at the radius of maximum wind as assumed by
Emanuel (1986) and Emanuel (1995), who postulated that beginning from this
radius the relative humidity does not change. This assumption together with
constant $T_{s}$ is equivalent to constant $T_{b}$, see Eq. (A17). We,
however, believe that a more realistic assumption, at least for intense
storms, is constant $\mathcal{H}=1$ at the surface at the radius of maximum
wind and variable $T_{s}$, as discussed in section 3.a.
What is shown in our analysis of Hurricane Isabel is that the observed degree
$B$, to which the wind is supergradient, is not sufficient to explain the
mismatch between E-PI and observations. It is necessary to assume a
significant radial gradient of $T_{b}$.
I take this point. My interpretation (perhaps consistent with the
considerations of the range and distribution of $dT/dr$ in the Conclusions) is
that this analysis smokes out inconsistencies in the outflow assumptions (as I
mentioned above) and also in the assumptions of azimuthal symmetry and
steadiness, without sorting them out or explaining them. I would like to see
more awareness that the reasons for the inconsistency are complicated and
require a 2D and possibly 3D model and that there are practical difficulties
in diagnosing or quantifying them from $dT/dr$.
We believe that we have outlined a major reason for the inconsistency and
discuss it in considerable detail in section 3.b. The inconsistency results
from the violation of adiabaticity at the tropopause, i.e., indeed it has to
do with the outflow region. Again, we are not sure what is meant by a 2D model
(isn’t E-PI itself a 2D model?), but 3D model is not required to conceptualize
it. Regarding the radial temperature gradient, we also discuss it
quantitatively in a separate section (section 3.a) and explicitly recognize
the multitude of complex factors that determine its value.
More detailed comments:
I spent some time going through the Appendix. If some of my reviewers had done
the same for me in the past, I might have been spared some embarrassment. The
present Appendix is unnecessarily complicated. The basic exercise is to make
substitutions into the energy equation, basically
$Tds+R_{v}T\log(RH)\,dq=-\alpha\,dp+dk$. Instead, the Appendix first integrates
the equation to obtain an expression for $s$, then differentiates it. That is
circuitous. Another problem is the sprinkling of factors of $R_{v}/R$ (or
$L_{v}/L$ or $q/\gamma$) throughout. Please choose one set or the other.
Thank you for your time spent on examining the derivations. We agree that it
is a precious part of reviewers’ work which sometimes goes unappreciated. We
understand your stance on our Appendix. However, we prefer to retain it as it
is in hope that, since no factual errors appear to have been spotted, its form
will do minimal harm. Writing out a definition of entropy seems to be the
convention in the literature on the subject, see, e.g., Eq. (6) of Emanuel
(1991) and Eq. (23) of Bryan and Rotunno 2009
https://doi.org/10.1175/2008MWR2709.1. While debates are on-going in the
Journal of the Atmospheric Sciences about what moist entropy actually is
(e.g., https://doi.org/10.1175/JAS-D-18-0126.1), it is better to be explicit.
Regarding the factors, their duality reflects the interplay between the
gravity law (that acts on $\rho$) and the ideal gas law that operates with
molar densities. We prefer $R$, $L$ and $\gamma$, but it is the tradition to
write $\alpha dp$. Again, we hope that this compromise will do little harm.
Equations 3-5: somewhere in here you are using $T_{b}\sim T_{s}$.
This is recognized in the paragraph following Eq. (5).
line 195: Is 850 hPa an approximation of the cloud base? What are the
implications of misdiagnosing this level and its variation with radius?
It is an approximation of the level $z_{m}$ of maximum wind. Provided that the
air is saturated at this level, which, to our understanding, is usually the
case, a possible inaccuracy in its value has only a minor impact on $v_{A}$,
see Fig. 1a.
Paragraph starting with line 196: be careful not to compare VA and VE in the
“core.” VE only applies at the base of outflowing trajectories.
We agree and have never compared $v_{A}$ and $v_{E}$ in the core.
Paragraphs starting with lines 273 and 294. The tone is not collegial and
sounds like an unrebutted trial summation. I strongly advise the authors to
rewrite both paragraphs when they are in a better mood. I won’t weigh in on
the disagreements except to say that I don’t know what units mismatch is
referred to, and I don’t see any recognition of E86’s long discussion about
the transition from a ventilated to a non-ventilated boundary layer across the
RMW.
We have re-written the entire concluding section. The disagreement is now
spelt out in Footnote 4, thank you for the post-review discussion. The
ventilated and non-ventilated boundaries were introduced by Emanuel (1986) to
specify how relative humidity at the surface changes with radius. We discuss
in the end of Appendix A that there is no freedom in E-PI to make such an
arbitrary specification.
To summarize, this work does present a novel “fact-check” for the established
MPI formulation, but there is not enough recognition of the limitations of
using a 1-D model. It is good scholarship that deserves to be published but
the improvements that I think are necessary would still amount to a major
revision.
Thank you for this evaluation. We have done our best to address your comments.
Steve Garner 7/15/2021
Review of “Alternative Expression for the Maximum Potential Intensity of
Tropical Cyclones”, by Anastassia M. Makarieva and Andrei V. Nefiodov.
Manuscript Number: JAS-D-21-0149
Recommendation: Major Revisions
Summary: I thank the authors for their detailed responses to the comments on
my original review. I do think that this resubmitted manuscript is an
improvement upon the original, but I also still believe that this work is
fundamentally flawed from a conceptual and physical standpoint, and is reliant
on substituting arguments concerning frameworks other than the specific one
being evaluated (E86). I appreciate that the authors have clarified that
radially constant Tb is not actually used in the derivation of the maximum
potential wind speed in E86, and so E-PI does not require this assumption. I
agree that the authors here provide an alternative diagnostic expression for
evaluating some of the constraints of potential intensity theory, but they do
not provide an actual alternative to the E86 prediction of the maximum
achievable intensity for a given environment, i.e., the authors do not present
a closed formulation of PI (this itself is not a critical flaw, but should be
clarified). I don’t agree with the authors’ argument that the radial gradient
of Tb is a key to understanding superintensity, as again (and as the authors
concede), E-PI does not actually assume this quantity. Showing that adding on
such an assumption to E-PI can render the outflow temperature inconsistent
with the respective specified value does not mean that this somehow results in
superintensity, or that observed or simulated TCs that exceed E-PI do so for
the reasons argued by the authors. My initial instinct is to again recommend
rejection of this manuscript, because I don’t think it is possible to
circumvent what I see as its fundamental flaws, which are that the authors
continue to treat E-PI as having made an assumption that it does not (despite
in some places acknowledging this), and that imposing this assumption onto the
theory results in an implied outflow temperature that would be unrealistic.
Because E86 don’t make the assumption of constant Tb that the authors add on
to it, it isn’t valid to draw conclusions about the importance of the radial
temperature gradient within the context of that theory of potential intensity.
In spite of my inclination to recommend rejection, given the complicated
nature of potential intensity theories and their continued debate within the
literature, I could still see it being reasonable to allow readers to make
their own judgement as to the validity of this study, and so I therefore am
instead recommending Major Revisions. However, in order for this study to be
published, I think that the authors need to make adequate changes sufficient
to address my concerns below. Most importantly, they need to remove (or
appropriately modify) arguments that imply that E-PI is reliant on an
assumption it doesn’t make or that they are demonstrating that E-PI generally
underestimates TC intensity as a result of this supposed assumption, and they
need to properly deal with previous studies that have been able to show that
superintensity in numerical simulations can be explained by supergradient
flow.
We appreciate the reviewer’s detailed and thought-provoking comments on our
work. We have added the following two paragraphs at the end of the
Introduction, which we hope will minimize the probability of the readers
getting confused:
To clarify the logic of the foregoing analyses for our readers, we believe
that it could be useful to contrast the following two viewpoints. The first
one is from one of our reviewers: E-PI does not make any assumptions about
the horizontal temperature gradient; E-PI conforms to observations, and where
it does not, the mismatch has been explained by supergradient winds. Insofar as
E-PI does not assume horizontal isothermy, it is not valid to consider the
case of horizontal isothermy and to infer from this, as we do based on our
alternative formulation, that E-PI underestimates maximum winds.
Our alternative viewpoint is as follows. While we agree that E-PI does not
assume anything about horizontal temperature gradient, here we show that it
predicts it. Thus, assessing the value of $\partial T/\partial r$ is a test of
E-PI’s validity. We will give examples and argue that in many cases a
negligible or small temperature gradient at the point of maximum wind is a
plausible approximation. In those cases E-PI does not conform to observations
and does underestimate maximum winds. We have re-evaluated the study of
supergradient winds by Bryan and Rotunno (2009a) to reveal that their analysis
is not self-consistent and does not explain superintensity either in Hurricane
Isabel 2003 or in their own numerical model (see, respectively, section 2 and
Appendix C). At this point, this leaves the new formulation, easily modifiable
to diagnostically account for supergradient winds, as the only available
explanation of “superintensity”.
Major Comments:
A. Can this explain superintensity when the isothermal assumption isn’t
actually required by E-PI?
Yes, it can, when the atmosphere is actually isothermal – irrespective of what
E-PI assumes about it.
The authors agree that E-PI (as formulated in E86, and which they rely on in
this study) does not actually require that Tb=constant. Because calculating
this theoretical bound on intensity doesn’t use such an assumption, I think
the framing of this study is not valid. What the authors are doing is applying
an unrequired additional constraint and then showing that when this is done,
this imposes a constraint on the outflow temperature, and requires the outflow
temperature to be much colder than was initially assumed in the calculation of
E-PI, and thereby requiring the potential intensity to be much greater than in
E-PI (where the “inconsistent” outflow temperature is specified).
Our study is framed around the isothermal case not because E-PI assumes it,
but because it is relevant to the real atmosphere and not uncommon in models.
For example, as we demonstrate in our analysis of Bryan and Rotunno (2009a)’s
results, their control simulation is fairly close to horizontal isothermy
(with $\mathcal{C}=1.2$), as was Hurricane Isabel on September 12 (with
$\mathcal{C}=1.1$).
The authors dismiss the critique that superintensity has been shown to be due
to the effects of unbalanced flow as in their view “too optimistic”, but they
don’t explain why. Several studies (e.g., Bryan and Rotunno 2009, Rousseau-
Rizzi and Emanuel 2019) have carefully investigated this and shown that when
accounting for (in diagnostic form) unbalanced flow, the discrepancy between
the simulated maximum wind speed and the diagnostic PI is small. I think that
in order for this study to be published, the authors must discuss these
studies with respect to their prior evaluations of superintensity and
explicitly acknowledge their respective findings that superintensity is
explained by supergradient flow. The authors must also explain why they
believe that those authors’ conclusions in this respect are incorrect, and/or
attempt to reconcile the current study with these existing studies.
We have added a new Appendix C “The superintensity analysis of Bryan and
Rotunno (2009a)”, where we re-evaluate their results, which Rousseau-Rizzi and
Emanuel (2019) build on, in considerable detail. We show that their analysis
is self-inconsistent and does not support their conclusions. Specifically, the
values of $\mathcal{B}$ (a measure of gradient-wind imbalance) for their
control simulation, shown in their Fig. 8 and reported in their section 6b, do
not match. This discrepancy has profound implications for superintensity
assessments. Besides, we have shown that Bryan and Rotunno (2009a)’s
formulation contains an unphysical (and unattended) singularity implying
infinite winds at finite $\mathcal{B}$. Finally, for Hurricane Isabel 2003,
using the data provided by Bryan and Rotunno (2009a), we show that maximum
superintensity is associated with minimum gradient-wind imbalance. We hope
that these grounds could be sufficient for our reviewer to ultimately welcome
our self-coherent explanation of “superintensity”.
It is not sufficient to simply point to Li et al. (2020), because that study
did not examine unbalanced flow, and it also evaluated only the “surface PI”,
rather than the traditional gradient wind PI of E86.
As the reviewer correctly points out (see Comment 4 below), “the formulation
of surface PI differed from that of the original “gradient wind PI” of E86
only by Ts replacing Tb in the numerator of the thermodynamic efficiency
term”. Our alternative formulation is also equally valid for the surface
winds. Therefore, the surface version of E-PI inherits from the conventional
boundary layer E-PI all the limitations revealed by the alternative
formulation.
The authors presume that the relationship between superintensity and SST found
by Li et al. (2020) has no relationship to supergradient flow, and is instead
“readily explained by our alternative formulation”, but this is speculative. I
note that the authors now have added a discussion of why they believe the
theory of Rousseau-Rizzi (2019) is invalid, but this does not actually address
the fact that RE19 also found good correspondence between traditional gradient
wind PI (as in E86) and numerical simulations.
In view of our own peer-review process, how Li et al. (2020) could have
published their superintensity analysis without checking the influential
supergradient wind explanation, is a valid question to the authors, reviewers
and handling editor. We can only hypothesize that this could have happened
because the explanation did not work. (Not disclosing negative results is a
common plague across all scientific disciplines.) At least in the control
simulation of Bryan and Rotunno (2009a, Fig. 8) the cyclostrophic imbalance at
the surface is minimal, so it could hardly explain superintensity of the
surface wind. (We added this note at the end of section 2.a). We are puzzled
that the reviewer does not welcome our explanation and calls it “speculative”.
In fact, it was to address the reviewer’s previous concern, that “the
framework by which [we] are able to validate [our] hypothesis is somewhat
unclear”, that we provided this explanation. The framework is clear now. Our
formulation, and no other, predicts less superintensity at higher
temperatures, Li et al. (2020) establish this pattern in their numerical
simulations, we outline which parameters should be paid attention to in order
to understand the underlying physics.
B. What is the physical interpretation of Va exceeding Ve?
Fig. 1 is interpreted by the authors to show that when Tb=constant (C=1), the
theoretical potential intensity from their alternative formulation (Va)
greatly exceeds that predicted by Emanuel’s PI (Ve), for realistic values of
Tb. I don’t think this is a physically meaningful interpretation.
We respectfully disagree. It is quite meaningful when $T_{b}$ is actually
constant, as it approximately was in Hurricane Isabel 2003 on September 12 and
in the control simulation of Bryan and Rotunno (2009a).
What this actually shows is that if you constrain Tb=constant while specifying
Tb (and p at the same location) and Tout, then this requires Tout to be much
colder than has been specified for the purposes of calculating E-PI. And so
for example, for the solid curve in Fig. 1, where Tout=-60 C, p=850mb, and
C=1, at Tb=295 K, $(Va/Ve)^{2}$ is about 2.3, and so the authors would
argue that this means that E-PI underestimates the “true” potential intensity
by about 50%. What this would mean is that Tout=-60 C as assumed for
calculating E-PI is inconsistent, and the supposedly “true” Tout must be much
colder, so as to satisfy Eq. 13. But Tout is a physical parameter
corresponding to the temperature at which outflow occurs in the upper
troposphere and/or lower stratosphere, and so we aren’t really free to vary it
arbitrarily as is implied by the authors argument. In other words, for a given
environmental sounding in the framework of E86, Tout is effectively pre-
determined, and so a calculation that requires Tout to be much colder than
given by the sounding is incorrect (alternatively, within an observed or
simulated TC, Tout can be diagnosed and so again we aren’t free to arbitrarily
change Tout to be much colder than this assessed value). Therefore, I think it
is wrong on physical grounds to conclude for the reasons of the authors that
E86 underestimates the maximum achievable intensity of TCs for a given
thermodynamic environment. I think the authors need to acknowledge this
implication of their argument regarding the outflow temperature, and address
how it relates to their conclusions. I appreciate the authors’ argument that
their findings are more general than this implication with respect to outflow
temperatures, but I think this inconsistency is a critical flaw of the
analysis of this study.
In the revised manuscript, we discuss the reviewer’s perspective on our
results in considerable detail and devote a specific equation, Eq. (19), to
quantify the unrealistically low outflow temperature implied by Eq. (13). We
explain why E-PI does not work (due to the violation of the adiabaticity) when
the outflow temperatures are close to the temperature of the tropopause.
The reviewer first states that for an isothermal atmosphere our argument
implies a colder than observed outflow temperature, and then states that such
an implication is invalid as “we aren’t really free to vary [the outflow
temperature] arbitrarily” because it is set by observations. But, precisely
because the outflow temperature is set by observations and we are not free to
vary it, our argument for an isothermal atmosphere unambiguously indicates
that some of the E-PI assumptions simply do not hold. (We are not alone in
hinting at this possibility. For example, Tao et al. (2020a) opined that E-PI
does not properly account for realistic horizontal turbulent diffusion
mixing.)
The authors argue in their response to my original Major Comment A that “if it
appears from the diagnostic Eq. 6 that E-PI underestimates actual intensity,
the actual E-PI intensity formula will indeed be an underestimate unless the
other assumptions of E-PI are also incorrect and compensate the error in Eq.
6…” But again, E86 does not make this assumption (of constant Tb) in deriving
the equation for maximum wind speed. So it seems that the authors are still in
some respects treating this as if it is an actual constraint and cause of
underestimates in E-PI, but it is not, and so this argument is incorrect.
Again, irrespective of what E-PI assumes, the atmosphere can be horizontally
isothermal, in which case E-PI underestimates maximum velocity. This said, we
have made every effort to make it clear that E-PI does not make any
assumptions about horizontal temperature gradient in general and horizontal
isothermy in particular. Specifically, talking about horizontal isothermy, we
removed the words “as commonly assumed in various E-PI applications” from the
abstract and replaced it with “which, as we argue, is not an uncommon
condition”.
C. Is there a closed expression for potential intensity in the alternative
formulation?
The authors’ formulation of PI as presented in this manuscript is diagnostic,
and I don’t think the authors make this sufficiently clear. Except in the case
where Tb is assumed to be radially constant, the factor “C” depends directly
on the radial pressure gradient, which is unknown (and also could not really
be defined a priori for a given environment, independent of the maximum wind
speed itself, which is what one is trying to solve for). The moist adiabatic
lapse rate is also dependent on the unknown pressure, as is the factor “zeta”.
We have added a special section 3.b “External and internal parameters in the
maximum potential intensity formulations”, where we explicitly discuss these
issues, including the limitations of E-PI as a closed theory.
In their response to my original review where I point out that this manuscript
does not present an alternative closed theory for potential intensity, the
authors simply state: “An alternative expression for maximum intensity is
given in Eq. (11)”. Although I do appreciate that the authors have now
expressed this as an equation for the wind speed, this isn’t a closed theory,
in the sense that (as stated above), “C” and “zeta” are both unknown a priori.
To be fair, the formulation of E-PI that the authors are comparing to here is
also not closed (and perhaps I wasn’t sufficiently clear in my original review
in this respect), in that $k_{s}^{*}$ and $k$ are unknown (as also stated by the
authors). But E-PI also has been developed further into a closed form where
the maximum potential intensity can be calculated based solely on
environmental parameters. And so I am concerned that the authors are not
really presenting a true alternative expression for maximum intensity. I see
that in Table 1, the authors “convert” their expression into one that
resembles the closed form of E-PI. But because C is unknown, the authors’
formulation is not actually usable for making a priori predictions of maximum
potential intensity for a given environment. And so I think the authors should
more explicitly distinguish between diagnostic and environmental forms of PI,
and make clear that while E86 also developed a closed theory where potential
intensity can be predicted from environmental parameters alone, the authors’
“alternative formulation” is not such a theory.
We agree that it is not. Our alternative formulation, and the E-PI limitations
that it exposes, provide an indication that there cannot be any valid theory
of potential intensity based on the consideration of local heat input at the
radius of maximum wind. Our view is that moving beyond this conceptual
approach is necessary, if considerable progress in improving TC intensity
predictions is to be achieved.
D. Discussion of inflow temperature
In my original comments, I noted that a horizontal gradient being nearly
adiabatic might imply that surface heat fluxes are negligible, which seemed
unlikely. The authors respond that this is a “fundamental misconception”, but
I don’t find their explanation convincing.
In the revised text, we have attempted to clarify our arguments to make them
more convincing. We have added a new section 3.a “The nature and magnitude of
the horizontal temperature gradient”, where we provide a quantitative support
for our reasoning.
If there are no sources of heat (from surface sensible heat fluxes, turbulent
mixing, or diabatic heating), then radial flow towards lower pressure will
result in adiabatic expansion and a radial gradient that is consistent with
the dry adiabatic lapse (and as I stated, in this specific case, the magnitude
of the inflow is irrelevant, only the radial pressure gradient matters).
We agree. We note that by emphasizing this specific case, the reviewer now
implicitly recognizes that in other cases the magnitude of the inflow can be
relevant. Indeed, it is.
But if any of those heat sources exist (and neglecting any other offsetting
evaporative cooling), then the radial gradient will not be consistent with the
dry adiabatic lapse rate.
We disagree, please see below.
To be honest, I don’t understand the point of the authors’ argument about
units.
For example, velocity and acceleration have different units and, therefore,
cannot be compared with each other. In the presence of an unbalanced force,
according to Newton’s second law of motion, there is always a non-zero
acceleration, while local velocity can be zero.
If sensible heat is transferred from the ocean to the atmosphere, then this
source will necessarily offset some portion of the cooling tendency from
adiabatic expansion of the inflow.
We disagree. Transfer of sensible heat $J_{S}$ from the ocean to the
atmosphere has the units of W m$^{-2}$. It is a surface flux. The cooling tendency
from adiabatic expansion of the inflow has the units of W m$^{-3}$. It is a volume
flux: it describes how an air volume cools as it moves inward. Without
introducing a relevant length scale for the spatial change of $J_{S}$ (in our
case – for how it changes with altitude), it is not possible to say anything
about how these fluxes relate. In particular, air inflow at the surface can be
strictly adiabatic if at the surface $\partial J_{S}/\partial z=0$. This means
that the surface air moves and expands too fast for any absorbed surface heat
flux to appreciably perturb its temperature change.
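For concreteness, here is a minimal dimensional sketch (our shorthand for the comparison underlying Eq. (17), assuming a steady radial inflow $u$ and the dry-adiabatic relation $c_{p}\,dT=\alpha\,dp$ with $\alpha=1/\rho$): per unit volume, the adiabatic cooling rate and the heating rate from the vertical divergence of the surface flux are

$\rho c_{p}\frac{dT}{dt}\Big|_{\mathrm{ad}}=u\,\frac{\partial p}{\partial r}\ \ [\mathrm{W\,m^{-3}}],\qquad -\frac{\partial J_{S}}{\partial z}\ \ [\mathrm{W\,m^{-3}}],$

so local adiabaticity requires $|\partial J_{S}/\partial z|\ll|u\,\partial p/\partial r|$. The surface value of $J_{S}$ (W m$^{-2}$) never enters this comparison by itself; only its vertical derivative does, which is why the magnitude of the inflow $u$ matters.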
The main point of my original comments concerning the radial inflow is that
the authors imply that the weaker inflow at the top of the boundary layer as
compared to the surface results in less (radial) adiabatic expansion at the
top of the boundary layer, and that this is why the radial temperature
gradient was estimated to be smaller at this level than at the surface in
Hurricane Isabel, and I don’t think this explanation is correct. The pressure
gradient at the top of the boundary layer is nearly the same as at the
surface, and so the net radial adiabatic expansion that would occur (again,
absent other sources of heat) is nearly the same; the rate of inflow itself
doesn’t matter (when assuming that the adiabatic expansion is the only
contributing process, as the authors do). Therefore, the explanation for the
different radial temperature gradients at the top of the boundary layer and at
the surface as proposed by the authors cannot be correct; there must be other
processes that contribute substantially to the temperature budget, and it
isn’t simply a consequence of different inflow velocities.
We respectfully disagree with this logic and ensuing conclusions. The reviewer
first states that adiabaticity of the inflow presumes no surface sources of
heat (this is incorrect, see above, the flow can be locally adiabatic in the
presence of surface heat fluxes), then states that in this case the radial
velocity does not matter (correct in the absence of surface heat sources, but
they are present), then concludes that our explanation of the zero temperature
gradient at the top of the boundary layer due to zero radial inflow cannot be
correct – because radial velocity does not matter.
However, as we clarify in the discussion following Eq. (17), in the general
case, adiabaticity depends on the relationship between the radial velocity and
the vertical change of the surface flux. If the surface flux is zero, the flow
is adiabatic at any negligibly small radial velocity. But if the surface flux
is not zero (and it is not zero in the real atmosphere, as the reviewer
certainly agrees), then the faster the air expands radially for any given
$\partial J_{S}/\partial z$, the closer to adiabaticity it will be.
Conversely, when the air does not move at all along the radial pressure
gradient ($u=0$, as it is at the point of maximum wind), then there is no
expansion and no reason for a horizontal change of temperature – irrespective
of the magnitude of the radial pressure gradient. These are our grounds to
expect a change in the sign of $\partial T/\partial r$ in the vicinity of
maximum wind, which, judging from the reviewer’s feedback, appear to be novel.
The authors’ response to my comment within the revised manuscript leads into a
digression towards a separate dispute they have with the study of Rousseau-
Rizzi and Emanuel (2019) and their associated comment and reply articles.
While this is tangentially relevant to the current study in that it concerns
potential intensity, this newly added discussion seems to me to be a
distraction from the main arguments of this manuscript, and so I recommend
that it be removed.
The E-PI limitations exposed by the alternative formulation are not confined
to the free troposphere, but likewise affect the transition from volume to
surface fluxes as formulated by Eq. 4. These are very general limitations:
their nature is that, while developing one’s theory, one cannot arbitrarily
specify (by direct assumption or by prediction) the ratios between changes of
pressure, temperature and water vapor mixing ratio, as these are already
related by fundamental constraints that must be respected. An analogue of Eq.
(4) was not specified by Rousseau-Rizzi and Emanuel (2019) for their
derivation of surface E-PI, which, as the reviewer points out elsewhere,
differs only insignificantly from the boundary layer PI. The transition from
volume to surface fluxes was then demonstrably incorrectly specified by
Emanuel and Rousseau-Rizzi (2020) (see Footnote 4). This provides another
illustration (the first one being the hypercanes) of the pervasiveness of the
limitations exposed by the alternative formulation. That is why we believe
that it is relevant to the present discussion.
Further, their characterization of the comment and reply articles appears to
me to be misleading. The authors refer to Montgomery and Smith (2020) as
having pointed out a supposed omission in the derivation of RE19 related to
converting from volume to surface energy fluxes, but as far as I can tell,
MS20 make no such assertion; instead, MS20 dispute an unrelated aspect of the
derivation of RE19 (related to outflow temperature).
Montgomery and Smith (2020, p. 1890) wrote: “…the derivations on p. 1701 would
have been easier to follow had the authors written down the variable of
integration. In particular, it is hard to see how Eq. (15) is obtained from
Eq. (14), since it would appear that the integrals on each side of the
equation have been effectively canceled and replaced by a point evaluation of
the integrands.” This step from Eq. 14 to Eq. 15 is precisely the transition
from the volume to the surface fluxes. In their response, Rousseau-Rizzi and
Emanuel (2020) neither wrote down the variable of integration nor explicitly
derived Eq. 15 from Eq. 14.
The authors then refer to their own comment article as having also pointed out
this omission, which is true, but in my reading of that comment, the authors
don’t actually dispute the validity of RE19 related to that aspect of their
derivation. Instead, in their comment article, the authors conclude that RE19
made an implicit assumption in their derivation, and state “We consider it
reasonable”.
But Emanuel and Rousseau-Rizzi (2020) disagreed with our characterization of
their work and referred to it as being a contradiction.
The authors in their comment then transition to an argument about dissipative
heating, which doesn’t seem relevant to the present study.
Dissipative heating involves one more constraint, which is incompatible with
the other E-PI constraints, see Footnote 4. It is an illustration of the
conflicting assumptions/predictions in E-PI that all have a single cause.
Now, in the revised text, the dissipative heating is more relevant than in the
previous versions, because we show that dissipative heating is responsible for
the unexplained singularity in the superintensity account of Bryan and Rotunno
(2009a), see Appendix C.
So the authors appear to be mischaracterizing RE20 as having not responded to
a critique of MS20 that they do not appear to actually make, and not
responding to a critique of M20 that doesn’t actually challenge the derivation
of RE19 in the manner portrayed here. So if the authors choose to keep this
section (which I recommend removing), they must ensure that their
characterization of the relevant literature is accurate.
We have revised and clarified our arguments on this topic. They are now
confined to three paragraphs at the end of the concluding section plus
Footnote 4. We believe that this discussion is useful and could urge more
clarity in important issues that continue to be obfuscated. We conclude that
“While a detailed analysis of this subject is beyond the scope of the present
paper, we emphasize that the discussion of the numerical validity of E-PI
could be more meaningful if a comprehensive theoretical justification of Eq.
(4) (and its modifications, including the transition between Rousseau-Rizzi
and Emanuel (2019)’s Eqs. 14 and 15 and Emanuel and Rousseau-Rizzi (2020)’s
Eqs. 1 and 2) were provided.”
Minor Comments (and specific examples relating to Major Comments):
1\. P2 l21, p9 l150
Change “grows” to “increases”.
The first sentence was removed from the revised version. The second sentence
was corrected as suggested.
2\. P4 l51
But since the E-PI prediction of maximum wind speed does not actually require
assuming that the air is horizontally isothermal (as the authors now concede),
I think it is misleading to suggest that E-PI underestimates the maximum
achievable TC intensity in a given environment. I think the authors’ argument
here is essentially a “strawman”, because in practice, E-PI is not calculated
using this imposed constraint.
We respectfully disagree that our argument is a “strawman”, because, as we
have already noted, the real atmosphere can be isothermal at the point of
maximum wind irrespective of what E-PI assumes about it. In particular, it is
nearly isothermal in Hurricane Isabel on September 12 and in the control
simulation of Bryan and Rotunno (2009a), which we discuss in detail in,
respectively, section 2 and Appendix C. Horizontal isothermy of surface air
together with $\partial\mathcal{H}/\partial r=0$ at $z=0$ at the radius of
maximum wind, as assumed by Emanuel (1986) and Emanuel (1995), also
corresponds to horizontal isothermy at the point of maximum wind, see Appendix
A, Eq. (A17). In the concluding section we argue why this should generally be
a valid approximation in intense cyclones.
3\. P4 l65-66
Reading over ER11, those authors make the approximation that Tb is constant
when solving for the full radius-height structure of the theoretical wind
field. I’m not certain if they actually make this approximation or not in
order to solve for their approximate analytical asymptotic formulation (their
Eq. 41) of the maximum wind speed (which agrees very closely with their
numerical solution of the theory; their Fig. 9). As discussed in ER11, those
authors make several other approximations that are possibly more severe than
assuming constant Tb, such as neglecting the pressure dependence of saturation
entropy at the SST. Also note that ER11 compare their numerical solutions of
their theory to simulations using the RE87 model (their Fig. 10), and find
very good agreement in the prediction of the maximum wind speed, for different
values of Ck/Cd. Though these were “tuned” (through adjusting the outer
radius) to match the RMW, as far as I can tell, they were not tuned to match
the peak wind speed. And though the RE87 model also neglected the pressure
dependence of s0* to be consistent with the approximation in the theory, I
don’t think it is possible for the effective Tb in the RE87 model to be tuned,
and so presumably any deviations from constant Tb in that model would affect
the simulated TC, were it actually very sensitive to this, as argued by the
authors in the current manuscript. Note that as ER11 intentionally use large
values of both horizontal and vertical mixing lengths in their simulations, it
is unlikely that supergradient winds are a significant factor in that study.
Based on my reading of ER11 discussed above, I think the authors need to
address the apparent contradiction between their own arguments in this
manuscript that assuming Tb is constant results in large underestimation of TC
potential intensity, and the results of ER11, which indicate that an excellent
agreement between theory and numerical simulations is found for the maximum
wind speed, under an assumption of constant Tb in the theoretical analysis.
We appreciate this comprehensive defense of Emanuel and Rotunno (2011)’s
findings, especially in view of their critique by Montgomery et al. (2019) who
demonstrated their inconsistency using a numerical model. But we respectfully
disagree that we should delve into this topic. A numerical model can be tuned
to produce the desired outcome in a variety of ways, such that pinpointing
from the outside what has been done wrong can be impossible in principle. For
example, if Bryan and Rotunno (2009a) had been just slightly less transparent
in reporting their results, to expose the inconsistency in their
superintensity analyses – not supported by their model but presented as if
they were – would have been nearly impossible. Emanuel and Rotunno (2011)’s
assumption of $\partial T_{b}/\partial r=0$ was unambiguously, from the first
principles that are superior to any model, shown to be incompatible with the
other E-PI assumptions – we emphasize that the reviewer does not question this
fact. The burden of proof is now on Emanuel and Rotunno – to persuade the
readers, likewise from the first principles, that this incompatibility is
inconsequential for their conclusions. Our present results will remain
unaffected whatever the outcome.
4\. P4 l72
The authors point out that Rousseau-Rizzi and Emanuel (2019) assume constant
T, in their derivation of a form of PI that applies to the surface wind speed.
However, that study showed that the formulation of surface PI differed from
that of the original “gradient wind PI” of E86 only by Ts replacing Tb in the
numerator of the thermodynamic efficiency term. Since E86 did not need to
assume constant temperature to arrive at the equation for gradient wind PI,
this implies that the assumption of constant T necessary to arrive at the
equation for surface PI is quantitatively unimportant, since surface and
gradient wind PI do not differ that much. As Rousseau-Rizzi and Emanuel (2019)
show that surface PI acts well as a bound on the surface wind speed of
numerically simulated TCs, I think that the authors of the current manuscript
must deal with this contradiction to their arguments. If the authors’ argument
is correct, then why does surface PI (which uses the assumption of constant
temperature) agree well with numerically simulated TCs and with gradient wind
PI (which crucially does not use the assumption of constant temperature)?
The surface E-PI appears to “agree well with numerically simulated TCs” and
“act well as a bound on the surface wind speed of numerically simulated TCs”
only in the study of Rousseau-Rizzi and Emanuel (2019), who proposed it.
Indeed, the very next, at least partially independent, study that examined the
surface E-PI found that it does not. We are puzzled by the fact that the
reviewer is aware of the study of Li et al. (2020) but nevertheless makes the
above statements.
At the same time, the reviewer asks a pertinent question, which can be
reframed as follows: Why does the conventional E-PI formulation, which does
not assume horizontal isothermy, end up with a Carnot efficiency that is only
valid for reversible cycles? Our guess is that Rousseau-Rizzi and Emanuel
(2019) invoked horizontal isothermy to justify the use of Carnot efficiency in
their formulation of surface PI. In reality, E-PI is based on a consideration
of an infinitely narrow thermodynamic cycle with efficiency insensitive to
infinitely small irreversibilities. Therefore, as we discuss in section 3.b,
the assumption of isothermy is not needed for the surface PI either, and,
moreover, it conflicts with the other E-PI assumptions, an obstacle which the
authors do not appear to be taking into consideration. It is another
illustration of the novelty of our formulation and its implications.
5\. P4 l73
Insert “to be” after “appears”.
6\. P6 l98
Spell out “constant”.
7\. P6 l104
Change “low index” to “the subscript”.
8\. P7 l114
Insert “and” before “$\alpha_{d}$”.
9\. P8 l131
Change “that” to “than”.
Comments 5, 7, 8, and 9: Revised as suggested. Comment 6: according to the new
AMS rules, "constant" should be abbreviated as "const", see
https://www.ametsoc.org/index.cfm/ams/publications/author-information/formatting-and-manuscript-components/.
10\. P8 l135
The authors agree that Tb=constant is not actually assumed in the derivation
of the E86 potential intensity (wind speed). Although this clarity in the
resubmitted manuscript is welcome, I remain concerned that this poses a major
issue for the presumed relevance of this study. Since calculations of the
environmentally-determined PI (which are generally based on some formulation
related to that of E86) do not as far as I’m aware actually use this
assumption, then it isn’t really true that the theoretical maximum intensity
for a given environment is actually a large underestimate of the actual
intensity of real TCs or simulated TCs as a result.
“Environmentally-determined PI” corresponds to $\hat{v}_{E}$ in the revised
text (see Table 1). First, regarding simulated TCs, this closed form of E-PI
is based on the assumption of an isothermal surface air and relative humidity
being constant at the surface everywhere beyond the radius of maximum wind
(Emanuel, 1995, p. 3971). We show in Appendix A that these assumptions
correspond to $\partial T_{b}/\partial r\simeq 0$. So, yes, the
environmentally-determined PI does indeed implicitly assume horizontal
isothermy at the point of maximum wind. And indeed, the control simulation in
the study of Bryan and Rotunno (2009a) is close to horizontal isothermy (see
Appendix C).
Second, for a real and well-studied TC, Hurricane Isabel 2003, it was shown
precisely that the theoretical maximum intensity for a given environment is
actually a large underestimate of the actual intensity. This underestimate
cannot be due to supergradient winds, since they are minimal on the day when
the underestimate is most significant (September 12, see Table 2). It is a
result of the horizontal isothermy at $z=z_{m}$.
11\. P8 l137
Although E86 assumed constant Tb in deriving an equation for the central
surface pressure, I’m pretty sure that in the revised Emanuel (1995)
formulation, the central surface pressure is no longer solved independently of
the maximum wind speed, and so this assumption isn’t really used anymore in
that framework (i.e., the pressure equations of E86 are no longer considered
valid in the present day applications of the theory, as discussed in E95).
The assumption of constant relative humidity at the surface at the radius of
maximum wind and the assumption of horizontally isothermal surface air, both of
which are used by Emanuel (1995), are equivalent to implicitly assuming an
approximately constant $T_{b}$, see Eq. (A17). Generally, the alternative
formulation makes it clear that there is no freedom to specify surface
relative humidity in E-PI, provided horizontal isothermy at the surface is
assumed.
12\. P8 l141
Insert “therefore” before “C=1”.
Revised.
13\. P9 l152
I think the authors need to clarify what they mean by “superintensity” here,
as I don’t think this is how it is commonly used in the literature. In its
conventional usage, superintensity refers to the exceedance of observed or
simulated wind speeds beyond that of some theoretical bound. But here, the
authors are instead using superintensity to refer to a comparison between two
different theoretical bounds, without respect to real or simulated TCs. This
distinction matters, because the implication of the authors’ statement is that
they have shown that there is some temperature beyond which the theoretical
potential intensity can’t be exceeded by observed or simulated TCs, but I
don’t think the analysis here actually demonstrates this. Note the fact that
the authors refer to the simulations of Li et al. (2020) isn’t relevant to
this concern, because here, the authors are implicitly defining superintensity
as the ratio of two theoretical quantities, not the ratio of a
simulated/observed intensity to the theoretical bound.
Here, in our view, as also below in Comment 17, an important point has
apparently escaped the reviewer’s attention: $v_{A}$ can be used to test the
validity of $v_{E}$, and not vice versa, because $v_{A}$ is more robust
(requires fewer assumptions) than $v_{E}$. This is explained after Eq. (12). In
this sense $v_{A}$ evaluated at the point of maximum wind is not a theoretical
estimate, since it solely derives from the definition of moist entropy. By
analogy, molar density of air diagnosed from pressure and temperature is not a
theoretical estimate; this “estimate” solely derives from the ideal gas law,
which as we know cannot be very wrong.
Further, Li et al. (2020) use the surface PI of Rousseau-Rizzi and Emanuel
(2019) for evaluation of the wind speed in their simulations, and so what they
show is that the surface wind speed no longer exceeds the surface-PI in
simulations with high SST. But the present manuscript concerns the theoretical
bound on the maximum wind speed at the top of the boundary layer, and so the
authors’ comparison to Li et al. (2020) here isn’t really appropriate in my
view. I don’t think Li et al. (2020) examined the maximum wind speed at any
height, but it is likely that in their simulations, that maximum still exceeds
the respective theoretical maximum, in association with supergradient winds.
We have shown that the explanation of superintensity by supergradient winds as
proposed by Bryan and Rotunno (2009a) is not valid, see Appendix C.
In other words, for a given simulation, it is likely that the maximum surface
wind speed will be closer to surface PI than the maximum wind speed at the top
of the boundary layer is to the traditional gradient wind PI, and so a lack of
surface PI superintensity at high SST does not necessarily imply a lack of
“traditional” superintensity at high SST. As the authors’ analysis here in
this manuscript largely concerns the traditional superintensity whereas Li et
al. are evaluating surface PI superintensity, I don’t think the results of Li et al.
can be used to support the authors’ argument. In any case, I don’t think the
authors of this manuscript have actually shown that real or simulated TCs
cannot exceed E86 PI at high SSTs.
We respectfully disagree with the statement that the results of Li et al.
(2020) do not support our argument, for three reasons. First, while it is true
that our analysis largely concerns the traditional superintensity, our robust
alternative formulation is more general and equally applies to all E-PI’s
modifications where $\partial V/\partial r=0$ holds (including surface winds).
Second, at the surface the gradient-wind imbalance can be small (as it is in
the control simulation of Bryan and Rotunno (2009a)), so supergradient winds
cannot explain superintensity. Finally, given that the explanation of
superintensity by supergradient winds alone does not hold, see above, even if
the conventional E-PI demonstrates superintensity at high SST, our formulation
will nevertheless have to be involved to explain it.
14\. P10 l178-187
As I mentioned in my original review, although air flowing inwards towards
lower pressure will expand and thereby tend to cool adiabatically, this is not
the only process going on here. Depending on the location, surface (and
turbulent) heat fluxes will act to warm the air, as will condensational
heating in convection (e.g., within the eyewall), while evaporation of
precipitation falling into the air parcel will yield a cooling tendency.
Although it’s possible that the radial expansion is dominant, the authors need
to acknowledge and discuss these other contributions to the temperature budget
of an air parcel, and not simply assume that adiabatic expansion is the only
important term.
We have presented a quantitative analysis comparing the relative contributions
of adiabatic cooling and surface heat flux, Eq. (17), and discussed in
considerable detail the nature and magnitude of the horizontal temperature
gradient at different locations, see the new section 3.a.
15\. P11 l196
Although it’s true that air within the eye is warmer than elsewhere in
association with subsidence, this subsidence generally only occurs above the
boundary layer. The boundary layer eye of a well-developed hurricane is
typically warm and moist, and its relative warmth is largely the result of
surface sensible heat fluxes.
Revised as follows: “In the eye, the surface heat fluxes and the descending
air motion work to elevate the air temperature above that at the eyewall and
in the outer core.”
16\. P11 l207
The authors state that when the radial temperature gradient approaches the
moist adiabatic lapse rate, E-PI would “infinitely underestimate” the true
potential intensity. This doesn’t seem physically meaningful to me, and I
think underscores that the framework developed here doesn’t really show that
E-PI is underestimating intensity for the reasons given by the authors. And
once again, E-PI does not actually require specifying the radial temperature
gradient, and so I don’t think this is a valid manner of interpreting its
predictions.
The moist adiabatic lapse rate in the radial direction is a theoretical limit
in the considered problem, which corresponds to a disappearing link between
$ds^{*}$ and $dp$, since $ds^{*}$ tends to zero. We believe that discussing it
is informative. Again, it is not relevant whether E-PI assumes anything about
it or not. The question is whether such a gradient can be realized in nature.
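Schematically (our shorthand, treating $s^{*}=s^{*}(T,p)$): the radial gradient of saturated moist entropy is

$\frac{\partial s^{*}}{\partial r}=\left(\frac{\partial s^{*}}{\partial T}\right)_{p}\frac{\partial T}{\partial r}+\left(\frac{\partial s^{*}}{\partial p}\right)_{T}\frac{\partial p}{\partial r},$

and the moist adiabatic value of $\partial T/\partial r$ is precisely the one at which the right-hand side vanishes. In that limit $ds^{*}\to 0$ while $dp\neq 0$, so any relation inferring intensity from $ds^{*}$ versus $dp$ degenerates; this is the sense in which the underestimate grows without bound.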
17\. P12 l220-225
The authors state that “the large value of C=2.9 nearly compensated the
underestimate that would have otherwise resulted from $B<1$”. But as far as I
can tell, they are inferring C by substituting the observed maximum wind speed
into Eq. 15, not directly estimating C from observations. Therefore, I think
this argument is kind of circular, and we don’t really know that C=2.9, and
therefore we don’t know that a large C compensated for supergradient winds and
brought the observed wind speed into better agreement with E-PI.
The laws of nature are useful as they obviate the need to measure each and
every variable. For example, for an ideal gas you don’t need to measure
simultaneously $T$, $p$ and molar density $N$. From the ideal gas law $p=NRT$
and a known combination of any two variables, you unambiguously know the third
one. Similarly, $\mathcal{C}$ is unambiguously derived from our Eq. (15),
provided all the other variables are known. It cannot be wrong unless the
first law of thermodynamics fails. This is a manifestation of our alternative
formulation being more robust than E-PI (which contains additional
assumptions), an essential point that appears to have escaped the reviewer’s
attention. Therefore, if we trust the observations for Hurricane Isabel 2003,
we do know that a large $\mathcal{C}$ compensated for supergradient winds.
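As a worked instance of this diagnostic logic (with round numbers of our own choosing, purely for illustration): for an ideal gas at $p=10^{5}$ Pa and $T=300$ K,

$N=\frac{p}{RT}=\frac{10^{5}}{8.31\times 300}\approx 40\ \mathrm{mol\,m^{-3}}.$

Nobody would call this value of $N$ “circular” for not having been measured directly: it is fixed by the gas law once $p$ and $T$ are trusted. The value $\mathcal{C}=2.9$ diagnosed from Eq. (15) and the observed maximum wind speed has exactly the same status.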
18\. P12 l227
Again, expansion as air flows inwards towards lower pressure is not the only
factor determining the radial temperature gradient near the RMW. Contrary to
the authors’ assertion otherwise, sensible heat fluxes will always contribute
to the temperature budget (unless the air temperature equals SST).
The input of the sensible heat fluxes can be arbitrarily small for any finite
temperature difference, as Eq. (17) demonstrates. It all depends on how fast
the flux of sensible heat declines with altitude relative to how fast the air
moves along the pressure gradient.
19\. P13 l233
The authors refer to a positive temperature gradient at the location of
maximum wind speed being associated with descending motion in the eye. But at
the RMW itself, which is typically located within the eyewall, there is
ascent, not descent, and so other processes are responsible for the local
positive radial temperature gradient.
We agree. This paragraph was removed. Instead, we mention that “Generally,
$\mathcal{C}>1$ at the point of maximum wind can result from the surface air
warming, and/or from the surface relative humidity increasing, towards the
center (see Eqs. (A14) and (A17) in Appendix A).”
20\. P13 l240
“the E-PI’s” should be “the E-PI”.
This line was removed from the text.
21\. P13 l241-243
The authors need to make clearer here that what they are referring to is a
diagnostic PI, and not the traditional concept of PI from E86 of an a priori
environmental thermodynamic limit on achievable intensity. A weak TC (over
SSTs that would support a much stronger TC) is well below its potential
intensity as traditionally defined. It has become common (e.g., Persing and
Montgomery 2003, Bryan and Rotunno 2009) in some studies to instead
diagnostically evaluate whether the conditions of a given PI theory are
satisfied in a numerical simulation, and to compare the simulated intensity at
a given time to the diagnostically calculated PI using data within the TC.
This is ok, but when doing so, the distinction between this diagnostic and
predictions of the maximum intensity for a given environment needs to be made
clear.
We have introduced a special variable $\hat{v}_{E}$ for the “environmental”
version of E-PI as compared to the diagnostic $v_{E}$ (5). In section 3.c we
discuss why these two estimates behave differently when compared with the
observed maximum wind speeds.
22\. P13 l244
This sentence as written is confusing, and I’m not sure what the authors are
trying to say. I think perhaps the authors are intending to state that while
E-PI provides an upper limit on the maximum achievable intensity of TCs, in
its diagnostic form, it underestimates the intensity of a given TC when it is
weak (at least in the view of the authors).
We have clarified as follows: “If horizontal isothermy is a common condition
under which $v_{E}$ (5) underestimates $v_{m}$, one has to explain why in most
cases the maximum wind speeds observed in real cyclones are well below the
environmental version of E-PI, $\hat{v}_{E}$ (Table 1). Since the
underestimate $v_{E}<v_{m}$ results from E-PI assumptions pertaining to the
free troposphere (block E-I in Table 1), the overestimate $\hat{v}_{E}>v_{m}$
indicates that a certain overcompensation should occur in the assumptions
pertaining to the remaining two E-PI blocks, the boundary layer interior and
the air-sea interface (Table 1, blocks E-II and E-III).”
23\. P14 l262 Change “In the result” to “As a result”.
24\. P16 l303 Change “monotonous” to “monotonic”.
25\. P16 l308 Change “adiabat” to “adiabatic”.
Comments 23, 24, and 25: Revised as suggested.
26\. P16 l312-318
Once again, E-PI does not assume anything about the radial temperature
gradient at the top of the boundary layer to obtain the relationship that the
authors are investigating in this study, as the authors now acknowledge. Given
that fact, it is not surprising that this gradient has “not received much
attention in the assessment of E-PI’s validity”, because it has no bearing on
E-PI’s validity (again, I am referring to the formulation in E86, which is the
same form that the authors are examining).
We believe that if there had been a general understanding that, without
assuming anything about the radial temperature gradient, E-PI actually
predicts it, the radial temperature gradient, as a straightforward check on
E-PI’s validity, would have received much more attention than it did while
the relevant knowledge was missing. Indeed, the radial temperature gradient
has a direct bearing on E-PI’s validity.
27\. P322-323
The discussion in this paragraph is very misleading. The authors are
concluding that E-PI can’t provide an upper limit on TC intensity, as a result
of an assumption that E-PI does not make. It isn’t appropriate to substitute
assumptions made elsewhere (even if by the same authors) to invalidate a
framework that does not make that assumption.
28\. P16 l323-327
Remove the parentheses surrounding these sentences.
29\. P17 l348
“bottomline” should be “bottom line”.
Comments 27, 28, and 29: As the concluding section was considerably re-
written, the corresponding lines were removed.
30\. P18 l352
The authors state that “There were no data utilized in this study.” I don’t
think this is really true, as Table 2 uses data from Hurricane Isabel, as does
Fig. 2, and Fig. 1 uses data from the simulations of Li et al. (2020).
Revised as “There were no raw data utilized in this study”.
31\. Fig. 2
“See section b” should say “See section 3b”.
Corrected.
## References
* Aberson et al. (2006) Aberson, S. D., Montgomery, M. T., Bell, M., and Black, M.: Hurricane Isabel (2003): New insights into the physics of intense storms. Part II: Extreme localized wind, Bull. Amer. Meteor. Soc., 87, 1349–1354, 10.1175/BAMS-87-10-1349, 2006.
* Bejan (2019) Bejan, A.: Thermodynamics of heating, Proc. Roy. Soc. A, 475, 20180820, 10.1098/rspa.2018.0820, 2019.
* Bell and Montgomery (2008) Bell, M. M. and Montgomery, M. T.: Observed structure, evolution, and potential intensity of category 5 Hurricane Isabel (2003) from 12 to 14 September, Mon. Wea. Rev., 136, 2023–2046, 10.1175/2007MWR1858.1, 2008.
* Bister et al. (2011) Bister, M., Renno, N., Pauluis, O., and Emanuel, K.: Comment on Makarieva et al. ‘A critique of some modern applications of the Carnot heat engine concept: The dissipative heat engine cannot exist’, Proc. Roy. Soc. A, 467, 1–6, 10.1098/rspa.2010.0087, 2011.
* Bryan and Rotunno (2009a) Bryan, G. H. and Rotunno, R.: Evaluation of an analytical model for the maximum intensity of tropical cyclones, J. Atmos. Sci., 66, 3042–3060, 10.1175/2009JAS3038.1, 2009a.
* Bryan and Rotunno (2009b) Bryan, G. H. and Rotunno, R.: The influence of near-surface, high-entropy air in hurricane eyes on maximum hurricane intensity, J. Atmos. Sci., 66, 148–158, 10.1175/2008JAS2707.1, 2009b.
* Camp and Montgomery (2001) Camp, J. P. and Montgomery, M. T.: Hurricane maximum intensity: Past and present, Mon. Wea. Rev., 129, 1704–1717, 10.1175/1520-0493(2001)129<1704:HMIPAP>2.0.CO;2, 2001.
* DeMaria and Kaplan (1994) DeMaria, M. and Kaplan, J.: Sea surface temperature and the maximum intensity of Atlantic tropical cyclones, J. Climate, 7, 1324–1334, 10.1175/1520-0442(1994)007<1324:SSTATM>2.0.CO;2, 1994.
* Emanuel (2004) Emanuel, K.: Tropical cyclone energetics and structure, pp. 165–192, Cambridge University Press, 10.1017/CBO9780511735035.010, 2004.
* Emanuel (2006) Emanuel, K.: Hurricanes: Tempests in a greenhouse, Physics Today, 59, 74–75, 10.1063/1.2349743, 2006.
* Emanuel (2020) Emanuel, K.: The relevance of theory for contemporary research in atmospheres, oceans, and climate, AGU Advances, 1, 10.1029/2019AV000129, 2020.
* Emanuel and Rotunno (2011) Emanuel, K. and Rotunno, R.: Self-stratification of tropical cyclone outflow. Part I: Implications for storm structure, J. Atmos. Sci., 68, 2236–2249, 10.1175/JAS-D-10-05024.1, 2011.
* Emanuel and Rousseau-Rizzi (2020) Emanuel, K. and Rousseau-Rizzi, R.: Reply to “Comments on ‘An evaluation of hurricane superintensity in axisymmetric numerical models’”, J. Atmos. Sci., 77, 3977–3980, 10.1175/JAS-D-20-0199.1, 2020.
* Emanuel (1986) Emanuel, K. A.: An air-sea interaction theory for tropical cyclones. Part I: Steady-state maintenance, J. Atmos. Sci., 43, 585–604, 10.1175/1520-0469(1986)043<0585:AASITF>2.0.CO;2, 1986.
* Emanuel (1988) Emanuel, K. A.: The maximum intensity of hurricanes, J. Atmos. Sci., 45, 1143–1155, 10.1175/1520-0469(1988)045<1143:TMIOH>2.0.CO;2, 1988.
* Emanuel (1989) Emanuel, K. A.: The finite-amplitude nature of tropical cyclogenesis, J. Atmos. Sci., 46, 3431–3456, 10.1175/1520-0469(1989)046<3431:TFANOT>2.0.CO;2, 1989.
* Emanuel (1991) Emanuel, K. A.: The theory of hurricanes, Annu. Rev. Fluid Mech., 23, 179–196, 10.1146/annurev.fl.23.010191.001143, 1991.
* Emanuel (1995) Emanuel, K. A.: Sensitivity of tropical cyclones to surface exchange coefficients and a revised steady-state model incorporating eye dynamics, J. Atmos. Sci., 52, 3969–3976, 10.1175/1520-0469(1995)052<3969:SOTCTS>2.0.CO;2, 1995.
* Frank (1977) Frank, W. M.: The structure and energetics of the tropical cyclone I. Storm structure, Mon. Wea. Rev., 105, 1119–1135, 10.1175/1520-0493(1977)105<1119:TSAEOT>2.0.CO;2, 1977.
* Garner (2015) Garner, S.: The relationship between hurricane potential intensity and CAPE, J. Atmos. Sci., 72, 141–163, 10.1175/JAS-D-14-0008.1, 2015.
* Kieu (2015) Kieu, C.: Revisiting dissipative heating in tropical cyclone maximum potential intensity, Quart. J. Roy. Meteor. Soc., 141, 2497–2504, 10.1002/qj.2534, 2015.
* Kieu and Moon (2016) Kieu, C. Q. and Moon, Z.: Hurricane intensity predictability, Bull. Amer. Meteor. Soc., 97, 1847–1857, 10.1175/BAMS-D-15-00168.1, 2016.
* Kowaleski and Evans (2016) Kowaleski, A. M. and Evans, J. L.: A reformulation of tropical cyclone potential intensity theory incorporating energy production along a radial trajectory, Mon. Wea. Rev., 144, 3569–3578, 10.1175/MWR-D-15-0383.1, 2016.
* Lackmann and Yablonsky (2004) Lackmann, G. M. and Yablonsky, R. M.: The importance of the precipitation mass sink in tropical cyclones and other heavily precipitating systems, J. Atmos. Sci., 61, 1674–1692, 2004.
* Li et al. (2020) Li, Y., Wang, Y., Lin, Y., and Fei, R.: Dependence of superintensity of tropical cyclones on SST in axisymmetric numerical simulations, Mon. Wea. Rev., 148, 4767–4781, 10.1175/MWR-D-20-0141.1, 2020.
* Makarieva et al. (2010) Makarieva, A. M., Gorshkov, V. G., Li, B.-L., and Nobre, A. D.: A critique of some modern applications of the Carnot heat engine concept: The dissipative heat engine cannot exist, Proc. Roy. Soc. A, 466, 1893–1902, 10.1098/rspa.2009.0581, 2010.
* Makarieva et al. (2015) Makarieva, A. M., Gorshkov, V. G., and Nefiodov, A. V.: Empirical evidence for the condensational theory of hurricanes, Phys. Lett. A, 379, 2396–2398, 10.1016/j.physleta.2015.07.042, 2015.
* Makarieva et al. (2017) Makarieva, A. M., Gorshkov, V. G., Nefiodov, A. V., Chikunov, A. V., Sheil, D., Nobre, A. D., and Li, B.-L.: Fuel for cyclones: The water vapor budget of a hurricane as dependent on its movement, Atmos. Res., 193, 216–230, 10.1016/j.atmosres.2017.04.006, 2017.
* Makarieva et al. (2019) Makarieva, A. M., Gorshkov, V. G., Nefiodov, A. V., Chikunov, A. V., Sheil, D., Nobre, A. D., Nobre, P., and Li, B.-L.: Hurricane’s maximum potential intensity and surface heat fluxes, URL https://arxiv.org/abs/1810.12451, eprint arXiv: 1810.12451v2 [physics.ao-ph], 2019.
* Makarieva et al. (2020) Makarieva, A. M., Nefiodov, A. V., Sheil, D., Nobre, A. D., Chikunov, A. V., Plunien, G., and Li, B.-L.: Comments on “An evaluation of hurricane superintensity in axisymmetric numerical models”, J. Atmos. Sci., 77, 3971–3975, 10.1175/JAS-D-20-0156.1, 2020.
* Makarieva et al. (2021) Makarieva, A. M., Gorshkov, V. G., Nefiodov, A. V., Chikunov, A. V., Sheil, D., Nobre, A. D., Nobre, P., Plunien, G., and Molina, R. D.: Water lifting and outflow gain of kinetic energy in tropical cyclones, URL https://arxiv.org/abs/2106.12544v1, eprint arXiv: 2106.12544v1 [physics.ao-ph], 2021.
* Montgomery and Smith (2020) Montgomery, M. T. and Smith, R. K.: Comments on “An evaluation of hurricane superintensity in axisymmetric numerical models”, J. Atmos. Sci., 77, 1887–1892, 10.1175/JAS-D-19-0175.1, 2020.
* Montgomery et al. (2006) Montgomery, M. T., Bell, M. M., Aberson, S. D., and Black, M. L.: Hurricane Isabel (2003): New insights into the physics of intense storms. Part I: Mean vortex structure and maximum intensity estimates, Bull. Amer. Meteor. Soc., 87, 1335–1347, 10.1175/BAMS-87-10-1335, 2006.
* Montgomery et al. (2019) Montgomery, M. T., Persing, J., and Smith, R. K.: On the hypothesized outflow control of tropical cyclone intensification, Quart. J. Roy. Meteor. Soc., 145, 1309–1322, 10.1002/qj.3479, 2019.
* Pauluis (2011) Pauluis, O.: Water vapor and mechanical work: A comparison of Carnot and steam cycles, J. Atmos. Sci., 68, 91–102, 10.1175/2010JAS3530.1, 2011.
* Persing and Montgomery (2003) Persing, J. and Montgomery, M. T.: Hurricane superintensity, J. Atmos. Sci., 60, 2349–2371, 10.1175/1520-0469(2003)060<2349:HS>2.0.CO;2, 2003.
* Rotunno and Emanuel (1987) Rotunno, R. and Emanuel, K. A.: An air-sea interaction theory for tropical cyclones. Part II: Evolutionary study using a nonhydrostatic axisymmetric numerical model, J. Atmos. Sci., 44, 542–561, 1987.
* Rousseau-Rizzi and Emanuel (2019) Rousseau-Rizzi, R. and Emanuel, K.: An evaluation of hurricane superintensity in axisymmetric numerical models, J. Atmos. Sci., 76, 1697–1708, 10.1175/JAS-D-18-0238.1, 2019.
* Rousseau-Rizzi and Emanuel (2020) Rousseau-Rizzi, R. and Emanuel, K.: Reply to “Comments on ‘An evaluation of hurricane superintensity in axisymmetric numerical models’”, J. Atmos. Sci., 77, 1893–1896, 10.1175/JAS-D-19-0248.1, 2020.
* Smith and Montgomery (2013) Smith, R. K. and Montgomery, M. T.: How important is the isothermal expansion effect in elevating equivalent potential temperature in the hurricane inner core?, Quart. J. Roy. Meteor. Soc., 139, 70–74, 10.1002/qj.1969, 2013.
* Tao et al. (2020a) Tao, D., Bell, M., Rotunno, R., and van Leeuwen, P. J.: Why do the maximum intensities in modeled tropical cyclones vary under the same environmental conditions?, Geophys. Res. Lett., 47, 10.1029/2019GL085980, 2020a.
* Tao et al. (2020b) Tao, D., Rotunno, R., and Bell, M.: Lilly’s model for steady-state tropical cyclone intensity and structure, J. Atmos. Sci., 77, 3701–3720, 10.1175/JAS-D-20-0057.1, 2020b.
* Venkat Ratnam et al. (2016) Venkat Ratnam, M., Ravindra Babu, S., Das, S. S., Basha, G., Krishnamurthy, B. V., and Venkateswararao, B.: Effect of tropical cyclones on the stratosphere–troposphere exchange observed using satellite observations over the north Indian Ocean, Atmos. Chem. Phys., 16, 8581–8591, 10.5194/acp-16-8581-2016, 2016.
* Wang and Lin (2020) Wang, D. and Lin, Y.: Size and structure of dry and moist reversible tropical cyclones, J. Atmos. Sci., 77, 2091–2114, 10.1175/JAS-D-19-0229.1, 2020.
* Wang and Lin (2021) Wang, D. and Lin, Y.: Potential role of irreversible moist processes in modulating tropical cyclone surface wind structure, J. Atmos. Sci., 78, 709 – 725, 10.1175/JAS-D-20-0192.1, 2021.
* Wang et al. (2014) Wang, S., Camargo, S. J., Sobel, A. H., and Polvani, L. M.: Impact of the tropopause temperature on the intensity of tropical cyclones: An idealized study using a mesoscale model, J. Atmos. Sci., 71, 4333–4348, 10.1175/JAS-D-14-0029.1, 2014.
* Zhou et al. (2017) Zhou, W., Held, I. M., and Garner, S. T.: Tropical cyclones in rotating radiative-convective equilibrium with coupled SST, J. Atmos. Sci., 74, 879–892, 10.1175/JAS-D-16-0195.1, 2017.
# Filters on a countable vector space
Iian B. Smythe Department of Mathematics, University of Michigan, East Hall,
530 Church Street, Ann Arbor, MI 48109 www.iiansmythe.com<EMAIL_ADDRESS>
(Date: October 3, 2021)
###### Abstract.
We study various combinatorial properties, and the implications between them,
for filters generated by infinite-dimensional subspaces of a countable vector
space. These properties are analogous to selectivity for ultrafilters on the
natural numbers and stability for ordered-union ultrafilters on
$\mathrm{FIN}$.
###### 2010 Mathematics Subject Classification:
Primary 03E05, Secondary 15A03.
The author would like to thank Andreas Blass for many insightful conversations
which contributed to this work.
## 1\. Introduction
Throughout, we fix a countably infinite-dimensional vector space $E$ over a
countable (possibly finite) field $F$, with distinguished basis $(e_{n})$; one
may take $E=\bigoplus_{n}F$ and $e_{n}$ the $n$th unit coordinate vector. We
will use the term “subspace” in reference to $E$ exclusively to mean linear
subspace. Our primary objects of study here are filters of subsets of
$E\setminus\\{0\\}$ (we abuse terminology and call these filters on $E$),
generated by infinite-dimensional subspaces. All such filters will be assumed
to be proper and contain all subspaces of finite codimension.
We follow the terminology and notation of [21]. A sequence $(x_{n})$ of
nonzero vectors in $E$ is called a _block sequence_ (and its span, a _block
subspace_) if for all $n$,
$\max(\mathrm{supp}(x_{n}))<\min(\mathrm{supp}(x_{n+1})),$
where the _support_ of a nonzero vector $v$, $\mathrm{supp}(v)$, is the finite
set of those $i$’s such that $e_{i}$ has a nonzero coefficient in the basis
expansion of $v$. By taking linear combinations of basis vectors and thinning
out, we can see that every infinite-dimensional subspace contains an infinite
block sequence. Note that $\mathrm{supp}(v)$ is an element of $\mathrm{FIN}$,
the set of nonempty finite subsets of $\omega$.
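For illustration (a hypothetical example of our own; take $F=\mathbb{Q}$, say): the vectors

$x_{0}=e_{0}+2e_{1},\qquad x_{1}=e_{3}-e_{4},\qquad x_{2}=e_{6}$

have $\mathrm{supp}(x_{0})=\\{0,1\\}$, $\mathrm{supp}(x_{1})=\\{3,4\\}$ and $\mathrm{supp}(x_{2})=\\{6\\}$, so $\max(\mathrm{supp}(x_{n}))<\min(\mathrm{supp}(x_{n+1}))$ holds and $(x_{0},x_{1},x_{2},\dots)$ begins a block sequence.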
The set of infinite block sequences in $E$ is denoted by ${E^{[\infty]}}$ and
inherits a Polish topology from $E^{\omega}$, where $E$ is discrete. We denote
the set of finite block sequences by ${E^{[<\infty]}}$. Block sequences are
ordered by their spans: we write $X\preceq Y$ if $\langle
X\rangle\subseteq\langle Y\rangle$, where $\langle\cdot\rangle$ denotes the
span (with $0$ removed), or equivalently, each entry of $X$ is a linear
combination of entries from $Y$. We write $X/n$ (or $X/\vec{x}$, for
$\vec{x}\in{E^{[<\infty]}}$) for the tail of $X$ consisting of those vectors
with supports entirely above $n$ (or the supports of $\vec{x}$, respectively),
and $X\preceq^{*}Y$ if $X/n\preceq Y$ for some $n$.
###### Definition 1.1.
A filter $\mathcal{F}$ on $E$ is a _block filter_ if it has a base of sets of
the form $\langle X\rangle$ for $X\in{E^{[\infty]}}$.
From now on, whenever we use the notation $\langle X\rangle$, it will be
understood that $X\in{E^{[\infty]}}$.
In [21], we considered _families_¹ in ${E^{[\infty]}}$, i.e., subsets of
${E^{[\infty]}}$ which are upwards closed with respect to $\preceq^{*}$, with
filters in $({E^{[\infty]}},\preceq)$ being those families in
${E^{[\infty]}}$ which are $\preceq$-downwards directed, as an important
special case. As remarked there, one can go back and forth between filters in
$({E^{[\infty]}},\preceq)$ and the block filters on $E$ they generate by
taking spans and their inverse images, respectively.

¹Those readers familiar with [21] should be cautioned that many of the
definitions for families therein simplify in the case of filters, and that
some of the results in the present article may no longer hold when “filter”
is replaced by “family”. This relationship is similar to that between
ultrafilters and the more general class of coideals on $\omega$.
###### Definition 1.2.
Given a block filter $\mathcal{F}$ on $E$:
1. (a)
a set $D\subseteq E$ is _$\mathcal{F}$ -dense_ if for every $\langle
X\rangle\in\mathcal{F}$, there is an infinite-dimensional subspace
$V\subseteq\langle X\rangle$ such that $V\subseteq D$.
2. (b)
$\mathcal{F}$ is _full_ if whenever $D\subseteq E$ is $\mathcal{F}$-dense, we
have that $D\in\mathcal{F}$.
Fullness is the analogue in this setting to being an ultrafilter: A filter
$\mathcal{U}$ on $\omega$ is an ultrafilter if and only if whenever
$d\subseteq\omega$ has infinite intersection with each element of
$\mathcal{U}$, $d\in\mathcal{U}$.
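(A sketch of one direction of this standard fact, for the reader’s convenience: if $\mathcal{U}$ is an ultrafilter and $d\subseteq\omega$ meets every element of $\mathcal{U}$ in an infinite set, then $\omega\setminus d\notin\mathcal{U}$, as $d\cap(\omega\setminus d)=\emptyset$ is finite, so $d\in\mathcal{U}$ by maximality; the converse direction uses that $\mathcal{U}$ contains all cofinite sets.)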
Already, the notion of a full block filter has substantial content: While they
can be constructed using the Continuum Hypothesis $(\mathsf{CH})$, Martin’s
Axiom $(\mathsf{MA})$, or by forcing directly with $(E^{[\infty]},\preceq)$,
they project to ordered-union ultrafilters on $\mathrm{FIN}$ via supports and
thus cannot be proved to exist in $\mathsf{ZFC}$ alone (see §5 and §6 of [21]
for details). The following additional properties can also be obtained by the
same methods:
###### Definition 1.3.
A block filter $\mathcal{F}$ on $E$ is:
1. (a)
a _$(p)$ -filter_ (or has the _$(p)$ -property_) if whenever $\langle
X_{n}\rangle\in\mathcal{F}$ for all $n$, there is an $\langle
X\rangle\in\mathcal{F}$ such that $X\preceq^{*}X_{n}$ for all $n$.
2. (b)
_spread_ if whenever $I_{0}<I_{1}<I_{2}<\cdots$ is a sequence of intervals in
$\omega$, there is an $\langle X\rangle\in\mathcal{F}$, where $X=(x_{n})$,
such that for every $n$, there is an $m$ for which
$I_{0}<\mathrm{supp}(x_{n})<I_{m}<\mathrm{supp}(x_{n+1})$.
3. (c)
a _strong $(p)$-filter_ (or has the _strong $(p)$-property_) if whenever
$\langle X_{\vec{x}}\rangle\in\mathcal{F}$ for all
$\vec{x}\in{E^{[<\infty]}}$, there is an $\langle X\rangle\in\mathcal{F}$ such
that $X/\vec{x}\preceq X_{\vec{x}}$ for all $\vec{x}\sqsubseteq X$.
In (a) and (c), the $X$ described is called a _diagonalization_ of $(X_{n})$
or $(X_{\vec{x}})$, respectively. If $\mathcal{F}$ is both full and a (strong)
$(p)$-filter, then we refer to it as a _(strong) $(p^{+})$-filter_.
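To unpack (b) with a hypothetical instance of our own: take $I_{m}=\\{2m,2m+1\\}$. A block sequence $X=(x_{n})$ with $\mathrm{supp}(x_{n})=\\{4^{n+1}\\}$ witnesses the required separation, since $I_{0}=\\{0,1\\}<\\{4\\}=\mathrm{supp}(x_{0})$, and between $\mathrm{supp}(x_{n})=\\{4^{n+1}\\}$ and $\mathrm{supp}(x_{n+1})=\\{4^{n+2}\\}$ lies the whole interval $I_{m}$ with $m=4^{n+1}$, because $4^{n+1}<2\cdot 4^{n+1}$ and $2\cdot 4^{n+1}+1<4^{n+2}$. Spreadness thus asks that $\mathcal{F}$ contain such widely separated block sequences for every choice of intervals.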
Much of [21] is devoted to showing that filters with these properties
“localize” a Ramsey-theoretic dichotomy for block sequences in $E$, due to
Rosendal [17], and one in Banach spaces, due to Gowers [8]. Such dichotomies
are phrased in terms of games:
Given $X\in{E^{[\infty]}}$, the _asymptotic game_ played below $X$, $F[X]$, is
a two player game where the players alternate, with I going first and playing
natural numbers $n_{k}$, and II responding with nonzero vectors $y_{k}$
forming a block sequence and such that $n_{k}<\min(\mathrm{supp}(y_{k}))$.
$\begin{matrix}\text{I}&n_{0}&&n_{1}&&n_{2}&&\cdots\\\
\text{II}&&y_{0}&&y_{1}&&y_{2}&\cdots\end{matrix}$
Likewise, the _Gowers game_ played below $X$, $G[X]$, is defined with I going
first and playing infinite block sequences $Y_{k}\preceq X$, and II responding
with nonzero vectors $y_{k}$ forming a block sequence and such that
$y_{k}\in\langle Y_{k}\rangle$.
$\begin{matrix}\text{I}&Y_{0}&&Y_{1}&&Y_{2}&&\cdots\\\
\text{II}&&y_{0}&&y_{1}&&y_{2}&\cdots\end{matrix}$
In both of these games, the _outcome_ of a round of the game is the block
sequence $(y_{k})$ consisting of II’s moves. The notion of a _strategy_ for
one of the players is defined in a natural way. Given a set
$\mathbb{A}\subseteq E^{[\infty]}$, we say that a player has a strategy for
_playing into (or out of)_ $\mathbb{A}$ if they possess a strategy such that
all the resulting outcomes lie in (or out of) $\mathbb{A}$.
The following is the local form of Rosendal’s dichotomy; Rosendal’s original
result can be recovered by simply omitting any mention of $\mathcal{F}$.
###### Theorem 1.4 (Theorem 1.1 in [21]).
Let $\mathcal{F}$ be a $(p^{+})$-filter on $E$.² If
$\mathbb{A}\subseteq{E^{[\infty]}}$ is analytic, then there is an $\langle
X\rangle\in\mathcal{F}$ such that either

1. (i)

I has a strategy in $F[X]$ for playing out of $\mathbb{A}$, or

2. (ii)

II has a strategy in $G[X]$ for playing into $\mathbb{A}$.

²As mentioned in [21], an apparent weakening of the $(p)$-property akin to
semiselectivity, namely that a sequence of dense open subsets of
$\mathcal{F}$ must possess a diagonalization in $\mathcal{F}$, is all that is
used in the proof of this result. However, it will be a consequence of
Theorem 4.1 below that, for block filters, this is equivalent to the
$(p)$-property.
While we won’t deal explicitly with Banach spaces here, the spread condition,
along with being $(p^{+})$, was used to obtain the local form of Gowers’ result
for Banach spaces (Theorem 1.4 in [21]). These results are analogous to the
way selective ultrafilters on $\omega$ and stable ordered-union ultrafilters
on $\mathrm{FIN}$ localize the respective dichotomies for analytic partitions
of $[\omega]^{\omega}$ and $\mathrm{FIN}^{[\infty]}$ (see [14] and [2]).
One apparent deficiency in Theorem 1.4 is that it is not obvious whether
either conclusion, (i) or (ii), guarantees that $\mathcal{F}$ meets the
complement of $\mathbb{A}$ or $\mathbb{A}$ itself, respectively. This is
rectified by the following assumption:
###### Definition 1.5.
A block filter $\mathcal{F}$ on $E$ is _strategic_ if whenever $\alpha$ is a
strategy for II in $G[X]$, where $\langle X\rangle\in\mathcal{F}$, there is an
outcome $Y$ of $\alpha$ such that $\langle Y\rangle\in\mathcal{F}$.
[Figure 1: a diagram of implications among the properties “complete
combinatorics”, “strategic $(p^{+})$-filter”, “strong $(p^{+})$-filter”,
“spread $(p^{+})$-filter”, “$(p^{+})$-filter”, “local Rosendal dichotomy”,
and “full block filter”, with edges labeled Theorem 1.2, Proposition 4.6,
Lemma 8.13, Theorem 1.1, and Proposition 3.6.] Figure 1. The implications for
block filters proved in [21].
Under large cardinal assumptions, if $\mathcal{F}$ is a strategic
$(p^{+})$-filter, then Theorem 1.4 can be extended to all “reasonably
definable” subsets $\mathbb{A}$ and moreover, being “strategic $(p^{+})$”
exactly characterizes genericity of a block filter over the inner-model
$\mathbf{L}(\mathbb{R})$ (Theorems 1.2 and 1.3 in [21]). This latter property
is known in the literature as having _complete combinatorics_.³ Originally,
“complete combinatorics” was used to describe genericity over
$\mathrm{HOD}(\mathbb{R})^{\mathbf{V}[G]}$, where $G$ is $\mathbf{V}$-generic
for the Lévy collapse of a Mahlo cardinal [12], a property proved
(implicitly) for selective ultrafilters in [14]. The contemporary usage avoids
passing to a Lévy collapse extension at the expense of stronger large cardinal
hypotheses. The $\mathsf{ZFC}$ content of “complete combinatorics”, in all
examples of which the author is aware, is that the filter meets all dense open
analytic sets, in the relevant $\sigma$-distributive Suslin partial order.
That this characterizes strategic $(p^{+})$-filters in $\mathsf{ZFC}$ is
implicit in [21].
Figure 1 shows the implications between these various properties of block
filters, as proved in [21], with references to the relevant propositions
therein (the implication from “spread $(p^{+})$” to “$(p^{+})$” is trivial).
The unidirectional double arrow $\Longrightarrow$ indicates a strict
implication; when $|F|>2$, it is consistent with $\mathsf{ZFC}$ that there is
a strong $(p^{+})$-filter which is not strategic (this is Corollary 8.9 in the
recent preprint [19]).
The goal of the present article is to investigate the combinatorics of block
filters having the above properties, as well as the possibility of reversing the
remaining arrows in Figure 1. We begin by considering the special case of the
finite field of order $2$ and its relationship to $\mathrm{FIN}$, where a
complete analysis is possible. In general, we will see that the cardinality of
the field $F$, in so far as it is either $2$, finite, or infinite, plays an
important role. Those interested in spoilers may skip ahead to Figure 2. We
also prove alternate characterizations of strong $(p^{+})$-filters (Theorem
5.1) and strategic $(p^{+})$-filters (Theorem 5.5) using a restricted version
of the Gowers game.
The results below are inspired by the various equivalent characterizations of
selectivity for ultrafilters, originally proved by Booth and Kunen in [3] (see
Chapter 11 of [9] for a modern treatment), and of stability for ordered-union
ultrafilters, proved by Blass in [2]. Some of our results originate in the
author’s PhD thesis [20], but have remained otherwise unpublished, while
others are making their first appearance here.
## 2\. $\mathrm{FIN}$ and the finite field of order two
A sequence $(a_{n})$ in $\mathrm{FIN}$ is a _block sequence_ if
$\max(a_{n})<\min(a_{n+1})$ for all $n$. The set of all infinite block
sequences in $\mathrm{FIN}$ is denoted by $\mathrm{FIN}^{[\infty]}$ and
inherits a Polish topology from $\mathrm{FIN}^{\omega}$. We write
$\mathrm{FIN}^{[<\infty]}$ for the set of finite block sequences in
$\mathrm{FIN}$. Given $A,B\in\mathrm{FIN}^{[\infty]}$, we write $\langle
A\rangle$ for the set of all finite unions of entries from $A$ and $A\preceq
B$ if $\langle A\rangle\subseteq\langle B\rangle$ (likewise for $A/n$ and
$A\preceq^{*}B$) to agree with our notation above. We reserve the notation
$\langle A\rangle$ for when $A$ is a block sequence. For
$A\in\mathrm{FIN}^{[\infty]}$, we write $A^{[\infty]}$ for the set of those
$B\in\mathrm{FIN}^{[\infty]}$ such that $B\preceq A$.
The Ramsey theory for $\mathrm{FIN}$ is largely a consequence of the finite-
unions form of Hindman’s Theorem [10]:
###### Theorem 2.1.
For any $C\subseteq\mathrm{FIN}$, there is an $A\in\mathrm{FIN}^{[\infty]}$
such that either $\langle A\rangle\subseteq C$ or $\langle A\rangle\cap
C=\emptyset$.
The relevant notions for ultrafilters on $\mathrm{FIN}$ were defined by Blass
in [2]: An ultrafilter $\mathcal{F}$ of subsets of $\mathrm{FIN}$ is _ordered-
union_ if it has a base of sets of the form $\langle A\rangle$. $\mathcal{F}$
is _stable_ if whenever $\langle A_{n}\rangle\in\mathcal{F}$ for all
$n\in\omega$, there is a $\langle B\rangle\in\mathcal{F}$ such that
$B\preceq^{*}A_{n}$ for all $n$. These are connected to block filters on $E$
via the support map:
###### Theorem 2.2 (Theorem 6.3 in [21]).
If $\mathcal{F}$ is a full block filter on $E$, then
$\mathrm{supp}(\mathcal{F})=\\{A\subseteq\mathrm{FIN}:\exists
X\in\mathcal{F}(A\supseteq\\{\mathrm{supp}(v):v\in X\\})\\}$
is an ordered-union ultrafilter on $\mathrm{FIN}$. If, moreover, $\mathcal{F}$
is a $(p)$-filter, then $\mathrm{supp}(\mathcal{F})$ is stable.
In the case when $|F|=2$, nonzero vectors in $E$ can be identified with their
supports in $\mathrm{FIN}$, addition of vectors with disjoint supports
corresponds to their union, and scalar multiplication trivializes. Thus, the
study of block sequences in $E$ reduces to the study of block sequences in
$\mathrm{FIN}$, and $\mathcal{F}$ is a full ($(p^{+})$, respectively) block
filter if and only if it is a (stable) ordered-union ultrafilter: One
direction is Theorem 2.2, while the converse follows from the fact that if
$D\subseteq E$ is $\mathcal{U}$-dense and $\mathcal{U}$ is an _ultra_ filter,
then $D\in\mathcal{U}$.
We will see in Theorem 4.1 below that the second to last implication in Figure 1 reverses: For block filters, being a $(p^{+})$-filter is equivalent to satisfying the conclusion of Theorem 1.4, regardless of the field. The $|F|=2$ case highlights a
difficulty in understanding the last implication in Figure 1; whether it
reverses in this case is equivalent to whether every ordered-union ultrafilter
is stable, a long-standing open problem (see, e.g., [11]). We will not attempt
to shed any additional light on this question here.
While Theorem 1.4 can be rephrased for stable ordered-union ultrafilters and
$\mathrm{FIN}$, a stronger result holds; stable ordered-union ultrafilters
localize the infinite-dimensional form of Hindman’s Theorem [10] due to
Milliken and Taylor [16, 22]. This is one of several equivalents to stability
proved in [2]:
###### Theorem 2.3 (Theorem 4.2 in [2]).
Let $\mathcal{U}$ be an ordered-union ultrafilter on $\mathrm{FIN}$. The
following are equivalent:
1. (i)
$\mathcal{U}$ is stable.
2. (ii)
For any analytic set $\mathbb{A}\subseteq\mathrm{FIN}^{[\infty]}$, there is an
$\langle A\rangle\in\mathcal{U}$ such that either
$A^{[\infty]}\subseteq\mathbb{A}$ or $A^{[\infty]}\cap\mathbb{A}=\emptyset$.
Assuming large cardinal hypotheses, the methods of [5] can be used to extend
(ii) to all subsets $\mathbb{A}$ in $\mathbf{L}(\mathbb{R})$ and prove
complete combinatorics for stable ordered-union ultrafilters (see also the
discussion on p. 121-122 of [2]). Consequently, when $|F|=2$, all but possibly
the last of the conditions in Figure 1 are equivalent. To see this directly:
###### Corollary 2.4.
If $\mathcal{U}$ is a stable ordered-union ultrafilter, then $\mathcal{U}$ is
strategic.
###### Proof.
Let $\langle A\rangle\in\mathcal{U}$ and $\alpha$ be a strategy for II in
$G[A]$. By Lemma 4.7 in [21], there is an analytic set $\mathbb{A}$ of
outcomes of $\alpha$ which is dense below $A$, in the sense of forcing with
$(\mathrm{FIN}^{[\infty]},\preceq)$. By Theorem 2.3 applied to $A^{[\infty]}$,
there is a $B\preceq A$ with $\langle B\rangle\in\mathcal{U}$ such that either
$B^{[\infty]}\subseteq\mathbb{A}$ or $B^{[\infty]}\cap\mathbb{A}=\emptyset$.
Since $\mathbb{A}$ is dense below $B$, the latter is impossible. In
particular, $B\in\mathbb{A}\subseteq[\alpha]$, so $\mathcal{U}$ contains an
outcome of $\alpha$. Thus, $\mathcal{U}$ is strategic. ∎
Another variation on stability appears in the literature [15, 23]: An
ultrafilter $\mathcal{U}$ on $\mathrm{FIN}$ is _selective_ if it is ordered-
union and whenever $\langle A_{a}\rangle\in\mathcal{U}$ for all
$a\in\mathrm{FIN}^{[<\infty]}$, there is a $\langle B\rangle\in\mathcal{U}$
such that $B/a\preceq A_{a}$ for all $a\preceq B$. Note the resemblance to our
strong $(p)$-property. While this property appears stronger than stability, it
is, again, equivalent:
###### Corollary 2.5.
If $\mathcal{U}$ is a stable ordered-union ultrafilter on $\mathrm{FIN}$, then
$\mathcal{U}$ is selective.
###### Proof.
Suppose we are given $\langle A_{a}\rangle\in\mathcal{U}$ for all
$a\in\mathrm{FIN}^{[<\infty]}$. Define
$\mathbb{D}_{0}=\\{B\in\mathrm{FIN}^{[\infty]}:\text{ $B/a\preceq A_{a}$ for all $a\preceq B$}\\}$ and
$\mathbb{D}_{1}=\\{B\in\mathrm{FIN}^{[\infty]}:\text{ $\langle B\rangle$ and the $\langle A_{a}\rangle$'s do not generate a filter}\\}.$
Let $\mathbb{D}=\mathbb{D}_{0}\cup\mathbb{D}_{1}$. It is straightforward to
verify that $\mathbb{D}$ is analytic and dense open in
$(\mathrm{FIN}^{[\infty]},\preceq)$. By Theorem 2.3, there is a $\langle
B\rangle\in\mathcal{U}$ such that $B\in\mathbb{D}$. Clearly,
$B\notin\mathbb{D}_{1}$, so $B\in\mathbb{D}_{0}$ and thus witnesses
selectivity. ∎
Both of the previous corollaries are instances of complete combinatorics at
work in the $\mathsf{ZFC}$ context.
Returning to the setting of an arbitrary countable field, we have seen that
every $(p^{+})$-filter produces a stable ordered-union ultrafilter. In Theorem
2.8, we prove a converse. We’ll need some notation: For
$X=(x_{n})\in{E^{[\infty]}}$, let
$\mathrm{supp}(X)=(\mathrm{supp}(x_{n}))\in\mathrm{FIN}^{[\infty]}$. Part (a)
of the following lemma implies that
$\mathrm{supp}:{E^{[\infty]}}\to\mathrm{FIN}^{[\infty]}$ is a projection, in
the sense of forcing.
###### Lemma 2.6.
1. (a)
Suppose that $X\in{E^{[\infty]}}$ and $A\in\mathrm{FIN}^{[\infty]}$ are such
that $A\preceq\mathrm{supp}(X)$. Then, there is a $Y\in{E^{[\infty]}}$ such
that $Y\preceq X$ and $\mathrm{supp}(Y)=A$.
2. (b)
Suppose that $(X_{n})$ is a $\preceq^{*}$-decreasing sequence in
${E^{[\infty]}}$ and $A\in\mathrm{FIN}^{[\infty]}$ is such that
$A\preceq^{*}\mathrm{supp}(X_{n})$ for all $n$. Then, there is a
$Y\in{E^{[\infty]}}$ such that $Y\preceq^{*}X_{n}$ for all $n$ and
$\mathrm{supp}(Y)=A$.
###### Proof.
Part (a) is easier, so we prove only (b) here: Write $A=(a_{k})$,
with each $a_{k}\in\mathrm{FIN}$. For notational convenience, let
$X_{-1}=(e_{n})$ and $m_{-1}=-1$. For each $n\geq 0$, let $m_{n}$ be such that
$A/m_{n}\preceq\mathrm{supp}(X_{n})$ and $X_{n}/m_{n}\preceq X_{n-1}$. We may
assume that each $m_{n}=\max(\mathrm{supp}(a_{i}))$ for some $i$ and that
$m_{n}<m_{n+1}$. For each $n\geq-1$, and each of the finitely many $a_{k}$’s
with $\mathrm{supp}(a_{k})\subseteq(m_{n},m_{n+1}]$, choose $y_{k}\in\langle X_{n}\rangle$ such that $\mathrm{supp}(y_{k})=a_{k}$. Let $Y=(y_{k})$. Clearly, $\mathrm{supp}(Y)=A$. Moreover, for each $n$ and all $k$ with $\min(\mathrm{supp}(y_{k}))>m_{n}$, we have $y_{k}\in\langle X_{n}/m_{n}\rangle$, and so $Y\preceq^{*}X_{n}$. ∎
###### Lemma 2.7.
Let $\mathcal{F}$ be a block filter on $E$ such that
$\mathrm{supp}(\mathcal{F})$ is a stable ordered-union ultrafilter. If
$D\subseteq E$ is $\mathcal{F}$-dense and $\langle Y\rangle\in\mathcal{F}$,
then there is a $Z\preceq Y$ such that $\langle Z\rangle\subseteq D$ and
$\mathrm{supp}(Z)\in\mathrm{supp}(\mathcal{F})$.
###### Proof.
Let
$\mathbb{D}=\\{A\in\mathrm{FIN}^{[\infty]}:\exists Z\in{E^{[\infty]}}(Z\preceq
Y\land\mathrm{supp}(Z)=A\land\langle Z\rangle\subseteq D)\\},$
an analytic subset of $\mathrm{FIN}^{[\infty]}$. By Theorem 2.3, there is a
$\langle B\rangle\in\mathrm{supp}(\mathcal{F})$ with
$B\in\mathrm{FIN}^{[\infty]}$ such that either
$B^{[\infty]}\subseteq\mathbb{D}$ or $B^{[\infty]}\cap\mathbb{D}=\emptyset$.
We claim the latter cannot happen: As $\langle
B\rangle\in\mathrm{supp}(\mathcal{F})$, there is a $\langle
Y^{\prime}\rangle\in\mathcal{F}$ such that $\mathrm{supp}(Y^{\prime})\preceq
B$. Since $\mathcal{F}$ is a block filter, we may further assume that
$Y^{\prime}\preceq Y$. As $D$ is $\mathcal{F}$-dense, there is a $V\preceq
Y^{\prime}$ such that $\langle V\rangle\subseteq D$, and so
$\mathrm{supp}(V)\in B^{[\infty]}\cap\mathbb{D}$. Thus,
$B^{[\infty]}\subseteq\mathbb{D}$, and in particular, $B\in\mathbb{D}$. Any
$Z\in{E^{[\infty]}}$ which witnesses $B\in\mathbb{D}$ will satisfy the desired
conclusion. ∎
###### Theorem 2.8.
$(\mathsf{CH})$⁴ If $\mathcal{U}$ is a stable ordered-union ultrafilter on $\mathrm{FIN}$, then there is a $(p^{+})$-filter $\mathcal{F}$ on $E$ such that $\mathrm{supp}(\mathcal{F})=\mathcal{U}$.
⁴$\mathsf{CH}$ is only used here in so far as it allows us to avoid diagonalizing uncountable-length sequences in $\mathcal{U}$. If, instead, $\mathsf{MA}$ holds and $\mathcal{U}$ is closed under diagonalizations of length $<2^{\aleph_{0}}$ (such stable ordered-union ultrafilters can be easily constructed using $\mathsf{MA}$ and Lemma 5 of [6]), then our proof would go through mutatis mutandis.
###### Proof.
Using $\mathsf{CH}$, we can enumerate all subsets of $E$ as $D_{\xi}$, and all
elements $A\in\mathrm{FIN}^{[\infty]}$ such that $\langle
A\rangle\in\mathcal{U}$ as $A_{\eta}$, for $\xi,\eta<\aleph_{1}$. We will
construct, via transfinite recursion, a $\preceq^{*}$-decreasing sequence
$(X_{\alpha})_{\alpha<\aleph_{1}}$ in ${E^{[\infty]}}$ that will generate the
promised $(p^{+})$-filter $\mathcal{F}$.
$\alpha=0$: Choose $X_{0}^{\prime}\in{E^{[\infty]}}$ such that
$\mathrm{supp}(X_{0}^{\prime})=A_{0}$. If $D_{0}$ is such that there is some
$Y\preceq X_{0}^{\prime}$ with $\langle Y\rangle\subseteq D_{0}$ and
$\langle\mathrm{supp}(Y)\rangle\in\mathcal{U}$, then choose $X_{0}$ to be such
a $Y$. If not, take $X_{0}=X_{0}^{\prime}$.
$\alpha=\beta+1$: Suppose we have defined $X_{\gamma}$ for $\gamma\leq\beta$
such that $\langle\mathrm{supp}(X_{\gamma})\rangle\in\mathcal{U}$. There is
some $\langle B\rangle\in\mathcal{U}$ such that $\langle
B\rangle\subseteq\langle\mathrm{supp}(X_{\beta})\rangle\cap\langle
A_{\beta+1}\rangle$. Apply Lemma 2.6(a) to obtain an
$X_{\beta+1}^{\prime}\in{E^{[\infty]}}$ such that $X_{\beta+1}^{\prime}\preceq
X_{\beta}$ and $\mathrm{supp}(X_{\beta+1}^{\prime})=B$. If $D_{\beta+1}$ is
such that there is some $Y\preceq X_{\beta+1}^{\prime}$ with $\langle
Y\rangle\subseteq D_{\beta+1}$ and
$\langle\mathrm{supp}(Y)\rangle\in\mathcal{U}$, then choose $X_{\beta+1}$ to
be such a $Y$. If not, take $X_{\beta+1}=X_{\beta+1}^{\prime}$.
$\alpha=\beta$ for limit $\beta$: Suppose we have defined $X_{\gamma}$ for
$\gamma<\beta$ such that
$\langle\mathrm{supp}(X_{\gamma})\rangle\in\mathcal{U}$. Let $(\gamma_{n})$ be
a strictly increasing cofinal sequence in $\beta$. Since $\mathcal{U}$ is
stable, there is some $A\in\mathrm{FIN}^{[\infty]}$ such that $\langle
A\rangle\in\mathcal{U}$ and $A\preceq^{*}\mathrm{supp}(X_{\gamma_{n}})$ for
all $n$. We may, moreover, assume that $A\preceq A_{\beta}$. By Lemma 2.6(b),
there is an $X_{\beta}^{\prime}\in{E^{[\infty]}}$ such that
$X_{\beta}^{\prime}\preceq^{*}X_{\gamma_{n}}$ for all $n$ and
$\mathrm{supp}(X_{\beta}^{\prime})=A$. If $D_{\beta}$ is such that there is
some $Y\preceq X_{\beta}^{\prime}$ with $\langle Y\rangle\subseteq D_{\beta}$
and $\langle\mathrm{supp}(Y)\rangle\in\mathcal{U}$, then choose $X_{\beta}$ to
be such a $Y$. If not, take $X_{\beta}=X_{\beta}^{\prime}$. This completes the
construction.
Let $\mathcal{F}$ be the block filter on $E$ generated by the $X_{\alpha}$’s.
Our construction has ensured that $\mathcal{F}$ has the $(p)$-property and
that $\mathrm{supp}(\mathcal{F})\supseteq\mathcal{U}$, and hence
$\mathrm{supp}(\mathcal{F})=\mathcal{U}$, since $\mathcal{U}$ is an
ultrafilter. It remains to verify that $\mathcal{F}$ is full. Suppose that
$D=D_{\xi}$ is $\mathcal{F}$-dense. By Lemma 2.7 applied to the
$X_{\xi}^{\prime}$ (for which we’ve ensured $\langle
X_{\xi}^{\prime}\rangle\in\mathcal{F}$) found in stage $\xi$ of the above
construction, there must be some $Y\preceq X_{\xi}^{\prime}$ such that
$\langle Y\rangle\subseteq D$ and
$\langle\mathrm{supp}(Y)\rangle\in\mathcal{U}$, meaning that $X_{\xi}$ was
chosen so that $\langle X_{\xi}\rangle\subseteq D$. Thus, $\mathcal{F}$ is a
$(p^{+})$-filter. ∎
It was shown in [2] that if $\mathcal{U}$ is an ordered-union ultrafilter,
then
$\min(\mathcal{U})=\\{\\{\min(\mathrm{supp}(a)):a\in A\\}:A\in\mathcal{U}\\}$ and
$\max(\mathcal{U})=\\{\\{\max(\mathrm{supp}(a)):a\in A\\}:A\in\mathcal{U}\\}$
are nonisomorphic selective ultrafilters on $\omega$, and conversely, if
$\mathcal{V}_{0}$ and $\mathcal{V}_{1}$ are nonisomorphic selective
ultrafilters, then (assuming $\mathsf{CH}$) there is a stable ordered-union
ultrafilter $\mathcal{U}$ on $\mathrm{FIN}$ such that
$\min(\mathcal{U})=\mathcal{V}_{0}$ and $\max(\mathcal{U})=\mathcal{V}_{1}$.
This can now be combined with the previous theorem to get a similar conclusion
for $(p^{+})$-filters on $E$.
## 3\. Fullness and maximality
A full block filter $\mathcal{F}$ on $E$ is always maximal amongst block
filters, and in fact is maximal with respect to all filters generated by
infinite-dimensional subspaces of $E$. That is, for any infinite-dimensional
subspace $V$ of $E$, if $V\cap X\neq\\{0\\}$ for all $X\in\mathcal{F}$, then
$V\in\mathcal{F}$ (to see this, just let $D=V$ in the definition of “full”).
Filters of subspaces with the latter property were studied by Bergman and
Hrushovski in [1], where they were called _linear ultrafilters_ ; we will
instead call them _subspace maximal_.
###### Proposition 3.1.
Let $\mathcal{F}$ be a filter generated by infinite-dimensional subspaces of
$E$. The following are equivalent:
1. (i)
$\mathcal{F}$ is subspace maximal.
2. (ii)
For every subspace $V\subseteq E$, either $V\in\mathcal{F}$ or there is some
direct complement $V^{\prime}$ of $V$ (i.e., $V\cap V^{\prime}=\\{0\\}$ and
$V\oplus V^{\prime}=E$) such that $V^{\prime}\in\mathcal{F}$.
3. (iii)
For every linear transformation $T$ on $E$ (to any $F$-vector space), either
$\ker(T)\in\mathcal{F}$ or there is a subspace $X\in\mathcal{F}$ such that
$T\\!\upharpoonright\\!X$ is injective.
###### Proof.
The equivalence of (i) and (ii) is part of Lemma 3 of [1].
(i $\Rightarrow$ iii) Given $T$, suppose that $T$ is not injective on any
subspace $Y\in\mathcal{F}$. This means that $\ker(T)$ has nontrivial
intersection with every such $Y$. Hence, by subspace maximality,
$\ker(T)\in\mathcal{F}$.
(iii $\Rightarrow$ i) Let $Y$ be an infinite-dimensional subspace of $E$ which
has nontrivial intersection with every subspace $X\in\mathcal{F}$. Let
$Y^{\prime}$ be a direct complement to $Y$ in $E$. Take $T:E\to E$ to be the
unique linear transformation determined by
$T(y+y^{\prime})=y^{\prime}$
for $y\in Y$ and $y^{\prime}\in Y^{\prime}$. So, $\ker(T)=Y$. If there was a
subspace $X\in\mathcal{F}$ such that $T\\!\upharpoonright\\!X$ was injective,
then by assumption, $X\cap Y$ is nontrivial and $T\\!\upharpoonright\\!X\cap
Y$ is injective, a contradiction. Thus, $Y=\ker(T)\in\mathcal{F}$. ∎
We mention here a result from [1] about the relationship between selective
ultrafilters on $\omega$ and filters of subspaces of $E$: Proposition 18 in
[1] says that, given an ultrafilter $\mathcal{U}$ on $\omega$, the set
$\\{\langle(e_{i})_{i\in A}\rangle:A\in\mathcal{U}\\}$, together with the
finite-codimensional subspaces of $E$, generates (via finite intersections and
supersets) a subspace maximal filter on $E$ if and only if $\mathcal{U}$ is
selective. However, it is not clear if the resulting filter on $E$ can be a
block filter. Moreover, as it is consistent with $\mathsf{ZFC}$ that there is
a unique (up to isomorphism) selective ultrafilter, and hence no ordered-union
ultrafilters (cf. VI.5 in [18] and the comments at the end of the previous
section), one cannot obtain a full block filter on $E$ from a selective
ultrafilter alone.
In contrast to the above forms of maximality, unless $|F|=2$, a block filter
on $E$ is _never_ an ultrafilter (of subsets). This is a consequence of the
existence of asymptotic pairs:
###### Definition 3.2.
1. (a)
A set $A\subseteq E\setminus\\{0\\}$ is _asymptotic_ if for every infinite-
dimensional subspace $V$ of $E$, $V\cap A\neq\emptyset$.
2. (b)
An _asymptotic pair_ is a pair of disjoint asymptotic sets.
A standard construction of an asymptotic pair uses the _oscillation_ of a
nonzero vector $v=\sum a_{n}e_{n}$, defined by
$\mathrm{osc}(v)=|\\{i\in\mathrm{supp}(v):a_{i}\neq a_{i+1}\\}|.$
It is shown in the proof of Theorem 7 in [13] that if $|F|>2$, then on any
infinite-dimensional subspace of $E$, the range of $\mathrm{osc}$ contains
arbitrarily long intervals (i.e., is a _thick_ set), and thus the sets
$A_{0}=\\{v\in E\setminus\\{0\\}:\mathrm{osc}(v)\text{ is even}\\}$ and $A_{1}=\\{v\in E\setminus\\{0\\}:\mathrm{osc}(v)\text{ is odd}\\}$
form an asymptotic pair. Note that $\mathrm{osc}$, and thus the $A_{i}$, are
invariant under multiplication by nonzero scalars.
Given a block filter $\mathcal{F}$, a set $D\subseteq E$ is
$\mathcal{F}$-dense if and only if $A=E\setminus D$ fails to be asymptotic
below every $\langle X\rangle\in\mathcal{F}$. This immediately implies the
following alternate characterization of fullness:
###### Proposition 3.3.
A block filter $\mathcal{F}$ on $E$ is full if and only if for every
$A\subseteq E\setminus\\{0\\}$, there is an $\langle X\rangle\in\mathcal{F}$
such that either $\langle X\rangle\cap A=\emptyset$ or $A$ is asymptotic below
$\langle X\rangle$. ∎
## 4\. The $(p)$-property and its relatives
We begin this section by showing that if a block filter witnesses the local
form of Rosendal’s dichotomy, then it must have the $(p)$-property.
###### Theorem 4.1.
Let $\mathcal{F}$ be a block filter on $E$. If all clopen subsets of ${E^{[\infty]}}$ satisfy the conclusion of Theorem 1.4, then $\mathcal{F}$ has the $(p)$-property.
###### Proof.
Let $\langle X_{n}\rangle\in\mathcal{F}$ for each $n$. Define
$\mathbb{A}=\\{(x_{n})\in{E^{[\infty]}}:\text{ if $m\leq\max(\mathrm{supp}(x_{0}))$, then $x_{1}\in\langle X_{m}\rangle$}\\}.$
Clearly, $\mathbb{A}$ is a clopen subset of ${E^{[\infty]}}$. By our
assumption, applied to $\mathbb{A}^{c}$, there is an $\langle
X\rangle\in\mathcal{F}$ such that either (i) I has a strategy in $F[X]$ for
playing into $\mathbb{A}$ or (ii) II has a strategy in $G[X]$ for playing into
$\mathbb{A}^{c}$.
We claim that (ii) cannot happen. Suppose otherwise, denote II’s strategy by
$\alpha$, and consider the following round of $G[X]$: In the first inning, I
plays $X$ and II responds with $\alpha(X)$. In the second inning, I plays some
$Y\preceq X$ such that $\langle
Y\rangle\subseteq\bigcap_{m\leq\max(\mathrm{supp}(\alpha(X)))}\langle
X_{m}\rangle$,⁵ which defeats any possible next move by II, contrary to what we know about $\alpha$.
⁵Note that here, we could take $\langle Y\rangle\in\mathcal{F}$. This shows that block filters witnessing Theorem 5.2 below, while not necessarily full, must still have the $(p)$-property.
Thus, (i) holds. Denote by $\sigma$ the resulting strategy for I. Let
$Y=X/\sigma(\emptyset)$, so $\langle Y\rangle\in\mathcal{F}$. Let $m$ be
given. In the first inning of $F[X]$, let I play $\sigma(\emptyset)$, and let
II play any $y\in\langle Y\rangle$ such that $m\leq\max(\mathrm{supp}(y))$. In
the second inning, I plays $\sigma(y)$, which ensures that for any
$z\in\langle Y/\sigma(y)\rangle$, $z\in\langle X_{m}\rangle$. In other words,
$Y/\sigma(y)\preceq X_{m}$. Since $m$ was arbitrary, this shows that
$Y\preceq^{*}X_{m}$ for all $m$, verifying the $(p)$-property. ∎
Next, we show that the $(p)$-property implies something which resembles the
strong $(p)$-property, except that the family of elements of the filter which
we diagonalize is indexed by finite sequences in $\mathrm{FIN}$ instead of in
$E$.
###### Theorem 4.2.
Let $\mathcal{F}$ be a $(p^{+})$-filter on $E$. Then, whenever $(\langle
X_{\vec{a}}\rangle)_{\vec{a}\in\mathrm{FIN}^{[<\infty]}}$ is contained in
$\mathcal{F}$, there is an $\langle X\rangle\in\mathcal{F}$ such that
$X/\vec{a}\preceq X_{\vec{a}}$ whenever $\vec{a}\sqsubseteq\mathrm{supp}(X)$.
###### Proof.
Let $(\langle X_{\vec{a}}\rangle)_{\vec{a}\in\mathrm{FIN}^{[<\infty]}}$ be
given as described. Since $\mathrm{FIN}^{[<\infty]}$ is countable and
$\mathcal{F}$ is a $(p)$-filter, there is an $\langle X\rangle\in\mathcal{F}$
such that $X\preceq^{*}X_{\vec{a}}$ for all
$\vec{a}\in\mathrm{FIN}^{[<\infty]}$. Writing $\mathrm{supp}(X)^{[<\infty]}$
for those finite block sequences in $\mathrm{FIN}$ coming from $\langle
X\rangle$, let
$\mathcal{B}=\\{\vec{a}^{\smallfrown}b\in\mathrm{supp}(X)^{[<\infty]}:\forall
v\in\langle X\rangle(\mathrm{supp}(v)=b\rightarrow v\in\bigcap\\{\langle
X_{\vec{c}}\rangle:\vec{c}\sqsubseteq\vec{a}\\})\\}$
and
$\mathbb{B}=\\{A\in\mathrm{supp}(X)^{[\infty]}:\forall
n(A\\!\upharpoonright\\!n\in\mathcal{B})\\}.$
Clearly, $\mathbb{B}$ is a Borel subset of $\mathrm{supp}(X)^{[\infty]}$. By
Theorem 2.3 applied to the stable ordered-union ultrafilter (by Theorem 2.2)
$\mathrm{supp}(\mathcal{F})$, there is a $\langle Y\rangle\in\mathcal{F}$ with
$Y=(y_{n})\preceq X$, such that either
$\mathrm{supp}(Y)^{[\infty]}\subseteq\mathbb{B}$ or
$\mathrm{supp}(Y)^{[\infty]}\cap\mathbb{B}=\emptyset$. Note, however, that the
latter is impossible: Since $Y\preceq^{*}X_{\vec{a}}$ for all
$\vec{a}\in\mathrm{FIN}^{[<\infty]}$, we can thin $Y=(y_{n})$ out to a
subsequence $Y^{\prime}=(y_{n_{k}})$ such that
$\mathrm{supp}(Y^{\prime})\in\mathbb{B}$: take
$y_{n_{0}}\in\langle X_{\emptyset}\rangle$, $y_{n_{1}}\in\langle X_{\emptyset}\rangle\cap\langle X_{(\mathrm{supp}(y_{n_{0}}))}\rangle$, $y_{n_{2}}\in\langle X_{\emptyset}\rangle\cap\langle X_{(\mathrm{supp}(y_{n_{0}}))}\rangle\cap\langle X_{(\mathrm{supp}(y_{n_{0}}),\mathrm{supp}(y_{n_{1}}))}\rangle$,
and so on. Thus, $\mathrm{supp}(Y)^{[\infty]}\subseteq\mathbb{B}$, and in
particular, $\mathrm{supp}(Y)\in\mathbb{B}$, so $Y/\vec{a}\preceq X_{\vec{a}}$
whenever $\vec{a}\sqsubseteq\mathrm{supp}(Y)$. ∎
###### Corollary 4.3.
Let $\mathcal{F}$ be a $(p^{+})$-filter on $E$. Then, whenever $\langle
X_{n}\rangle$ is in $\mathcal{F}$ for all $n$, there is an $\langle
X\rangle\in\mathcal{F}$, with $X=(x_{n})$, such that $X/x_{n}\preceq
X_{\max(\mathrm{supp}(x_{n}))}$ for all $n$.
###### Proof.
Given $\langle X_{n}\rangle$ as described, let $X_{\vec{a}}=X_{\max(\vec{a})}$
for all $\vec{a}\in\mathrm{FIN}^{[<\infty]}$ and apply Theorem 4.2. ∎
###### Corollary 4.4.
Every $(p^{+})$-filter on $E$ is spread.
###### Proof.
Let $\mathcal{F}$ be a $(p^{+})$-filter and $I_{0}<I_{1}<\cdots$ be an
increasing sequence of nonempty intervals in $\omega$. Let $X=(e_{n})$. For
each $k\in\omega$, let $m_{k}$ be the least integer such that
$k\leq\max(I_{m_{k}})$ and let $X_{k}=X/\max(I_{m_{k}+1})$. Let $Y=(y_{n})$,
with $\langle Y\rangle\in\mathcal{F}$, be as in Corollary 4.3. We may assume
$Y\preceq X/\max(I_{0})$. Then, for any $n$, if
$k=\max(\mathrm{supp}(y_{n}))$, then $Y/k\preceq X/\max(I_{m_{k}+1})$, and so
$I_{0}<\mathrm{supp}(y_{n})<I_{m_{k}+1}<\mathrm{supp}(y_{n+1})$. ∎
When $F$ is a finite field, we can go one step further:
###### Corollary 4.5.
Assume $|F|<\infty$. Every $(p^{+})$-filter on $E$ is a strong
$(p^{+})$-filter.
###### Proof.
Let $\mathcal{F}$ be a $(p^{+})$-filter and $(\langle
X_{\vec{x}}\rangle)_{\vec{x}\in{E^{[<\infty]}}}$ in $\mathcal{F}$. Note that
since $|F|<\infty$, for each $a\in\mathrm{FIN}$, there are only finitely many
vectors $v\in E$ having support contained in $a$. For each
$\vec{a}\in\mathrm{FIN}^{[<\infty]}$, let $\langle
X_{\vec{a}}\rangle\in\mathcal{F}$ be such that
$\langle X_{\vec{a}}\rangle\subseteq\bigcap\\{\langle
X_{\vec{x}}\rangle:\mathrm{supp}(\vec{x})\sqsubseteq\vec{a}\\}.$
By Theorem 4.2, there is an $\langle X\rangle\in\mathcal{F}$ such that
$X/\vec{a}\preceq X_{\vec{a}}$ for all $\vec{a}\sqsubseteq\mathrm{supp}(X)$.
So, if $\vec{x}\sqsubseteq X$, then
$X/\vec{x}=X/\mathrm{supp}(\vec{x})\preceq X_{\mathrm{supp}(\vec{x})}\preceq
X_{\vec{x}}.$
This verifies the strong $(p)$-property. ∎
We do not know if Corollary 4.5 holds for infinite fields.
We note here that the spread condition is analogous to another property for
ultrafilters on $\omega$: Recall that an ultrafilter $\mathcal{U}$ on $\omega$
is a _q-point_ if for every partition $\bigcup_{m}I_{m}$ of $\omega$ into
finite sets, there exists an $x\in\mathcal{U}$ such that $\forall m(|x\cap
I_{m}|\leq 1)$. It is well-known that every selective ultrafilter is a
q-point, though the converse (consistently) fails. Let’s say (temporarily)
that an ultrafilter $\mathcal{U}$ on $\omega$ is _spread_ if for every
sequence of finite intervals $I_{0}<I_{1}<I_{2}<\cdots$ in $\omega$, there
exists an $x\in\mathcal{U}$ such that for every $n$, there is an $m$ such that
$I_{0}<x_{n}<I_{m}<x_{n+1}$, where $(x_{n})$ is the increasing enumeration of
$x$.
###### Proposition 4.6.
Let $\mathcal{U}$ be an ultrafilter on $\omega$. The following are equivalent:
1. (i)
$\mathcal{U}$ is a q-point
2. (ii)
For every sequence of finite sets $I_{0}<I_{1}<I_{2}<\cdots$ in $\mathbb{N}$, there exists an $x\in\mathcal{U}$ such that $\forall m(|x\cap I_{m}|\leq 1)$.
3. (iii)
$\mathcal{U}$ is spread.
###### Proof.
(i $\Rightarrow$ ii): This is trivial.
(ii $\Rightarrow$ iii): Let $I_{0}<I_{1}<I_{2}<\cdots$ be a sequence of
intervals in $\omega$. Let $x\in\mathcal{U}$ be as in (ii). We may assume that
$I_{0}<x_{0}$. We partition $x=u\cup v$ as follows: $u_{n}=x_{2n}$ and $v_{n}=x_{2n+1}$ for all $n$, where $(x_{n})$ is the increasing enumeration of $x$. For every $n$, since $u_{n}=x_{2n}$, $v_{n}=x_{2n+1}$, and
$u_{n+1}=x_{2n+2}$ must be contained in three distinct $I_{k}$’s, the middle
interval must separate $u_{n}$ and $u_{n+1}$, that is, there is an $m$ such
that $I_{0}<u_{n}<I_{m}<u_{n+1}$. Similarly for the $v_{n}$. Since
$\mathcal{U}$ is an ultrafilter, one of $u$ or $v$ must be in $\mathcal{U}$.
(iii $\Rightarrow$ i): Let $\bigcup_{m}I_{m}$ be a partition of $\omega$ into
finite sets. We define an interval partition $\omega=\bigcup_{k}J_{k}$ as
follows: $J_{0}=[0,\max I_{0}]$. Let $J_{1}$ be the smallest interval
immediately above $J_{0}$ such that $J_{0}\cup J_{1}$ covers $I_{1}$ and all
$I_{m}$ for which $I_{m}\cap J_{0}\neq\emptyset$. Continue in this fashion,
letting $J_{k+1}$ be the smallest interval immediately above $J_{k}$ such that
$J_{0}\cup\cdots\cup J_{k}\cup J_{k+1}$ covers $I_{k+1}$ and all $I_{m}$ for
which $I_{m}\cap(J_{0}\cup\cdots\cup J_{k})\neq\emptyset$. Let
$x\in\mathcal{U}$ be as in the definition of spread applied to
$J_{0}<J_{1}<\cdots$. Towards a contradiction, suppose that $x_{i}<x_{j}$ are
both in some $I_{m}$. Let $n$ be the least such that $I_{m}\subseteq
J_{0}\cup\cdots\cup J_{n}$. We may assume $n>1$ (otherwise, we are done). By
minimality of $n$, $I_{m}\cap(J_{0}\cup\cdots\cup J_{n-2})=\emptyset$. Thus,
$I_{m}\subseteq J_{n-1}\cup J_{n}$. But then, $x_{i}$ and $x_{j}$ fail to be
separated by one of the $J_{k}$’s, contrary to $x$ witnessing that
$\mathcal{U}$ is spread. ∎
## 5\. The restricted Gowers game and strategic filters
Given a block filter $\mathcal{F}$ and $\langle X\rangle\in\mathcal{F}$, we
define the _restricted Gowers game_ $G_{\mathcal{F}}[X]$ below $X$ exactly
like $G[X]$ except that player I is restricted to playing $Y\preceq X$ such
that $\langle Y\rangle\in\mathcal{F}$. Since all subspaces spanned by tails of
$X$ are automatically in $\mathcal{F}$, we may think of $G_{\mathcal{F}}[X]$
as an intermediate between the games $F[X]$ and $G[X]$. Throughout this
section, we will say that an outcome $Y$ of one of the games is “in
$\mathcal{F}$” if $\langle Y\rangle$ is. Our first result here relates
strategies for I in $G_{\mathcal{F}}[X]$ to the strong $(p)$-property, and is
based on a characterization of selective ultrafilters (Theorem 11.17(b) in
[9]).
###### Theorem 5.1.
Let $\mathcal{F}$ be a block filter on $E$. $\mathcal{F}$ has the strong
$(p)$-property if and only if for every $\langle X\rangle\in\mathcal{F}$ and every strategy
$\sigma$ for I in $G_{\mathcal{F}}[X]$, there is an outcome of $\sigma$ in
$\mathcal{F}$.
###### Proof.
($\Rightarrow$) Towards a contradiction, suppose that $\sigma$ is a strategy
for I in $G_{\mathcal{F}}[X]$, $\langle X\rangle\in\mathcal{F}$, and no
outcome of $\sigma$ is in $\mathcal{F}$. Define sets
$\mathcal{A}_{\vec{x}}\subseteq\mathcal{F}$ as follows:
$\mathcal{A}_{\emptyset}=\\{\langle\sigma(\emptyset)\rangle\\}$ and in
general, $\mathcal{A}_{\vec{x}}$ is the set of all $\langle
Y\rangle\in\mathcal{F}$ such that $Y$ is played by I, when I follows $\sigma$
and $\vec{x}=(x_{0},\ldots,x_{n-1})$ are the first $n$ moves by II. Some
$\vec{x}$ may not be valid moves for II against $\sigma$, in which case we let
$\mathcal{A}_{\vec{x}}=\mathcal{A}_{\vec{x}^{\prime}}$ where
$\vec{x}^{\prime}$ is the maximal initial segment of $\vec{x}$ consisting of
valid moves. Then, for all $\vec{x}$, $\mathcal{A}_{\vec{x}}$ is finite, and
$\mathcal{A}_{\vec{x}}\subseteq\mathcal{A}_{\vec{y}}$ whenever
$\vec{x}\sqsubseteq\vec{y}$.
For each $\vec{x}$, pick $\langle Y_{\vec{x}}\rangle\in\mathcal{F}$ such that
for all $\langle Y\rangle\in\mathcal{A}_{\vec{x}}$, $Y_{\vec{x}}\preceq Y$. By the strong
$(p)$-property, there is a $\langle Y\rangle\in\mathcal{F}$, say
$Y=(y_{n})\preceq X$, such that $Y/\vec{y}\preceq Y_{\vec{y}}$ for all
$\vec{y}\sqsubseteq Y$.
Consider the play of $G_{\mathcal{F}}[X]$ wherein I follows $\sigma$ and II
plays $y_{0}$, $y_{1}$, etc. This is a valid play by II by our choice of $Y$:
$y_{0}\in\langle
Y_{\emptyset}\rangle\subseteq\langle\sigma(\emptyset)\rangle$,
$y_{1}\in\langle Y/(y_{0})\rangle\subseteq\langle
Y_{(y_{0})}\rangle\subseteq\langle\sigma(y_{0})\rangle$, etc. The resulting
outcome is $Y$, and $\langle Y\rangle\in\mathcal{F}$, a contradiction to our
assumption about $\sigma$.
($\Leftarrow$) Suppose that $\mathcal{F}$ does not have the strong
$(p)$-property, so there are $\langle X_{\vec{x}}\rangle\in\mathcal{F}$ for
all $\vec{x}\in{E^{[<\infty]}}$ such that for no $\langle X\rangle\in\mathcal{F}$ is it the case that $X/\vec{x}\preceq X_{\vec{x}}$ for all $\vec{x}\sqsubseteq X$. Take $\langle X\rangle\in\mathcal{F}$ arbitrary.
We define a strategy $\sigma$ for I in $G_{\mathcal{F}}[X]$ as follows: Start
by playing $Y_{\emptyset}\preceq X,X_{\emptyset}$. If II plays $y_{0}\in\langle Y_{\emptyset}\rangle$, respond by playing some $Y_{(y_{0})}\preceq
Y_{\emptyset},X_{(y_{0})}$. In general, if II has played
$(y_{0},\ldots,y_{k})$, respond by playing some
$Y_{(y_{0},\ldots,y_{k})}\preceq
Y_{(y_{0},\ldots,y_{k-1})},X_{(y_{0},\ldots,y_{k})}$. Note that in each move,
we can always find such a $\langle Y_{\vec{y}}\rangle\in\mathcal{F}$ since
$\mathcal{F}$ is a block filter. If $Y$ is an outcome of a round of
$G_{\mathcal{F}}[X]$ where I followed $\sigma$, then for every
$\vec{y}\sqsubseteq Y$, $Y/\vec{y}\preceq X_{\vec{y}}$. In other words, $Y$ is
a diagonalization of $\langle X_{\vec{x}}\rangle_{\vec{x}\in{E^{[<\infty]}}}$,
and thus by assumption, cannot be in $\mathcal{F}$.∎
Since every strategy for I in $F[X]$ is also a strategy for I in
$G_{\mathcal{F}}[X]$, it follows that if $\mathcal{F}$ is a strong
$(p)$-filter, $\langle X\rangle\in\mathcal{F}$, and $\sigma$ a strategy for I
in $F[X]$, then there is an outcome of $\sigma$ in $\mathcal{F}$ (this is
Theorem 4.3 in [21]).
The restricted Gowers game can be used to prove a version of Theorem 1.4 for
$(p)$-filters without the extra assumption of fullness. This result is due
independently to the author and, in more generality, to Noé de Rancourt:
###### Theorem 5.2 (Theorem 3.11.5 in [20] and Theorem 3.3 in [4]).
Let $\mathcal{F}$ be a $(p)$-filter on $E$. If
$\mathbb{A}\subseteq{E^{[\infty]}}$ is analytic, then there is an $\langle
X\rangle\in\mathcal{F}$ such that either
1. (i)
I has a strategy in $F[X]$ for playing out of $\mathbb{A}$, or
2. (ii)
II has a strategy in $G_{\mathcal{F}}[X]$ for playing into $\mathbb{A}$.
The following is a version of being strategic for the restricted games.
###### Definition 5.3.
A block filter $\mathcal{F}$ on $E$ is _$+$-strategic_ if whenever $\alpha$
is a strategy for II in $G_{\mathcal{F}}[X]$, where $\langle
X\rangle\in\mathcal{F}$, there is an outcome of $\alpha$ which is in
$\mathcal{F}$.
What is the difference between being $+$-strategic and strategic? We will see
below that, at least for $(p)$-filters, it is exactly fullness.
We will need the following notion and a lemma: A _tree_ is a subset
$T\subseteq{E^{[<\infty]}}$ which is closed under initial segments. The set
$[T]$ of infinite branches through $T$ is a closed subset of ${E^{[\infty]}}$.
###### Lemma 5.4 (cf. Lemma 6.4 in [7]).
Let $\mathcal{F}$ be a filter on $E$, $\langle X\rangle\in\mathcal{F}$, and
$\alpha$ a strategy for II in $G_{\mathcal{F}}[X]$. Then, there is a tree
$T\subseteq{E^{[<\infty]}}$ such that:
1. (i)
$[T]\subseteq[\alpha]$, and
2. (ii)
whenever $(y_{0},\ldots,y_{n})\in T$ and $\langle Y\rangle\in\mathcal{F}$,
there is a $y\in\langle Y\rangle$ so that $(y_{0},\ldots,y_{n},y)\in T$.
###### Proof.
We will define a pair of trees $T\subseteq{E^{[<\infty]}}$ and
$S\subseteq\mathcal{F}^{<\infty}$ as follows: Put $\emptyset$ in both $T$ and $S$. The first level of $T$ consists of all $(y)\in{E^{[<\infty]}}$ such that $y$ is a “first move” by II according to $\alpha$. That is, there is some $Y\preceq X$ such that $\langle Y\rangle\in\mathcal{F}$ and $\alpha(Y)=y$. For each such
$y$, pick a corresponding $Y$ in its preimage under $\alpha$; these comprise
the first level of $S$.
We continue inductively. Having put $(y_{0},\ldots,y_{n})\in T$ and
$(Y_{0},\ldots,Y_{n})\in S$ with $\alpha(Y_{0},\ldots,Y_{i})=y_{i}$ for $i\leq
n$, we put $(y_{0},\ldots,y_{n},y)$ into $T$ if there is some $Y\preceq X$ such that
$\langle Y\rangle\in\mathcal{F}$ and $\alpha(Y_{0},\ldots,Y_{n},Y)=y$. Choose
some $Y$ with this property and put $(Y_{0},\ldots,Y_{n},Y)$ into $S$.
Clearly, $[T]\subseteq[\alpha]$. To see that $T$ satisfies (ii), let
$(y_{0},\ldots,y_{n})\in T$ and $\langle Y\rangle\in\mathcal{F}$ with
$Y\preceq X$. Let $(Y_{0},\ldots,Y_{n})\in S$ be such that
$\alpha(Y_{0},\ldots,Y_{i})=y_{i}$ for $i\leq n$, and put
$y_{n+1}=\alpha(Y_{0},\ldots,Y_{n},Y)\in\langle Y\rangle$. By construction,
there is some $Y_{n+1}$ with $(Y_{0},\ldots,Y_{n},Y_{n+1})\in S$ and
$y_{n+1}=\alpha(Y_{0},\ldots,Y_{n},Y_{n+1})$. ∎
###### Theorem 5.5.
Let $\mathcal{F}$ be a $(p)$-filter on $E$. Then, $\mathcal{F}$ is $+$-strategic
if and only if $\mathcal{F}$ is strategic and full.
###### Proof.
($\Rightarrow$) First observe that +-strategic implies strategic: Given any
$\langle X\rangle\in\mathcal{F}$ and strategy $\alpha$ for II in $G[X]$, let
$\alpha^{\prime}$ be the restriction of $\alpha$ to $G_{\mathcal{F}}[X]$ (in
the obvious sense). Since $\mathcal{F}$ is +-strategic, there is an outcome of
$\alpha^{\prime}$, and thus of $\alpha$, in $\mathcal{F}$.
To see that $\mathcal{F}$ is full, let $D\subseteq E$ be $\mathcal{F}$-dense
and put
$\mathbb{D}=\\{Y\in{E^{[\infty]}}:\langle Y\rangle\subseteq D\\},$
a closed subset of ${E^{[\infty]}}$. By Theorem 5.2, there is an $\langle
X\rangle\in\mathcal{F}$ such that either I has a strategy in $F[X]$ for
playing into $\mathbb{D}^{c}$, or II has a strategy in $G_{\mathcal{F}}[X]$
for playing in $\mathbb{D}$. However, the former is impossible: pick $Z\preceq
X$ in $\mathbb{D}$ and let II in $F[X]$ always play elements of $\langle Z\rangle$. As $\mathcal{F}$ is +-strategic, there is some outcome of II’s strategy in $G_{\mathcal{F}}[X]$ in $\mathcal{F}$, verifying fullness.
($\Leftarrow$) Assume that $\mathcal{F}$ is strategic and full, that is,
$\mathcal{F}$ is a strategic $(p^{+})$-filter. We must prove that
$\mathcal{F}$ is +-strategic. Let $\langle X\rangle\in\mathcal{F}$ and
$\alpha$ a strategy for II in $G_{\mathcal{F}}[X]$. Let
$T\subseteq{E^{[<\infty]}}$ be as in Lemma 5.4. By Theorem 1.4, there is a
$Y\preceq X$ such that $\langle Y\rangle\in\mathcal{F}$ and either I has a
strategy in $F[Y]$ for playing into $[T]^{c}$, or II has a strategy in $G[Y]$
for playing into $[T]$. The former is impossible as II has a strategy in
$F[Y]$ for playing into $[T]$: Inductively apply the property in Lemma 5.4(ii)
to the tail block sequences played by I in $F[Y]$. Thus, II has a strategy in
$G[Y]$ for playing into $[T]$. As $\mathcal{F}$ is strategic, there is some
outcome of this strategy, and thus some element of $[\alpha]$, in
$\mathcal{F}$. ∎
Theorem 5.5 is a crucial part of the consistency proof of the existence of a
strong $(p^{+})$-filter which is not strategic in [19].
## 6\. Summary and further questions
Figure 2 shows where the implications between the properties described in the
Introduction stand at the end of this article. The single arrows $\rightarrow$
indicate that the converse remains open for arbitrary fields (with
$\mathsf{ZFC}$ as a base theory). In addition to sorting out the remaining
implications in this diagram, there are a few other questions we wish to
highlight for further investigation (each when $|F|>2$):
###### Question 1.
Does the Continuum Hypothesis (or Martin’s Axiom) imply the existence of
$(p^{+})$-filters which are not strategic?
###### Question 2.
Is it consistent with $\mathsf{ZFC}$ that there are stable ordered-union
ultrafilters on $\mathrm{FIN}$, but not $(p^{+})$-filters on $E$?
###### Question 3.
Is there a meaningful version of the Rudin–Keisler ordering and its
accompanying theory for filters on vector spaces? If so, are $(p^{+})$-filters
minimal?
Figure 2. Updated implications from Figure 1, linking: complete combinatorics, strategic $(p^{+})$-filter, strong $(p^{+})$-filter, spread $(p^{+})$-filter, $(p^{+})$-filter, local Rosendal dichotomy, and full block filter, with the new arrows given by Corollary 4.4, Theorem 4.1, Corollary 2.4 (when $|F|=2$), and Corollary 4.5 (when $|F|<\infty$).
## References
* [1] G. M. Bergman and E. Hrushovski. Linear ultrafilters. Comm. Algebra, 26(12):4079–4113, 1998.
* [2] A. Blass. Ultrafilters related to Hindman’s finite-unions theorem and its extensions. In Logic and combinatorics (Arcata, Calif., 1985), volume 65 of Contemp. Math., pages 89–124. Amer. Math. Soc., Providence, RI, 1987.
* [3] D. Booth. Ultrafilters on a countable set. Ann. Math. Logic, 2(1):1–24, 1970/1971.
* [4] N. de Rancourt. Ramsey theory without pigeonhole principle and the adversarial Ramsey principle. Trans. Amer. Math. Soc., 373(7):5025–5056, 2020.
* [5] I. Farah. Semiselective coideals. Mathematika, 45(1):79–103, 1998.
* [6] V. Ferenczi and C. Rosendal. Ergodic Banach spaces. Adv. Math., 195(1):259–282, 2005.
* [7] V. Ferenczi and C. Rosendal. Banach spaces without minimal subspaces. J. Funct. Anal., 257(1):149–193, 2009.
* [8] W. T. Gowers. An infinite Ramsey theorem and some Banach-space dichotomies. Ann. of Math. (2), 156(3):797–833, 2002.
* [9] L. J. Halbeisen. Combinatorial set theory. Springer Monographs in Mathematics. Springer, Cham, 2017. With a gentle introduction to forcing, Second edition.
* [10] N. Hindman. Finite sums from sequences within cells of a partition of $\mathbb{N}$. J. Combinatorial Theory Ser. A, 17:1–11, 1974.
* [11] P. Krautzberger. On union ultrafilters. Order, 29(2):317–343, 2012.
* [12] C. Laflamme. Forcing with filters and complete combinatorics. Ann. Pure Appl. Logic, 42(2):125–163, 1989.
* [13] C. Laflamme, L. Nguyen Van Thé, M. Pouzet, and N. Sauer. Partitions and indivisibility properties of countable dimensional vector spaces. J. Combin. Theory Ser. A, 118(1):67–77, 2011.
* [14] A. R. D. Mathias. Happy families. Ann. Math. Logic, 12(1):59–111, 1977.
* [15] J. G. Mijares. A notion of selective ultrafilter corresponding to topological Ramsey spaces. MLQ Math. Log. Q., 53(3):255–267, 2007.
* [16] K. R. Milliken. Ramsey’s theorem with sums or unions. J. Combinatorial Theory Ser. A, 18:276–290, 1975.
* [17] C. Rosendal. An exact Ramsey principle for block sequences. Collect. Math., 61(1):25–36, 2010.
* [18] S. Shelah. Proper and improper forcing. Perspectives in Mathematical Logic. Springer-Verlag, Berlin, second edition, 1998.
* [19] I. B. Smythe. Parametrizing the Ramsey theory of block sequences I: Discrete vector spaces. Preprint. [arXiv:2108.00544].
* [20] I. B. Smythe. Set theory in infinite-dimensional vector spaces. PhD thesis, Cornell University, 2017.
* [21] I. B. Smythe. A local Ramsey theory for block sequences. Trans. Amer. Math. Soc., 370(12):8859–8893, 2018.
* [22] A. D. Taylor. A canonical partition relation for finite subsets of $\omega$. J. Combinatorial Theory Ser. A, 21(2):137–146, 1976.
* [23] Y. Y. Zheng. Selective ultrafilters on FIN. Proc. Amer. Math. Soc., 145(12):5071–5086, 2017.
# Stable Matching for Selection of Intelligent Reflecting Surfaces in
Multiuser MISO Systems
Jawad Mirza, Bakhtiar Ali, and Muhammad Awais Javed
J. Mirza, B. Ali and M. A. Javed are with the Department of Electrical and Computer Engineering, COMSATS University Islamabad, Islamabad, Pakistan (Emails: <EMAIL_ADDRESS>{bakhtiar_ali, awais.javed}@comsats.edu.pk).
###### Abstract
In this letter, we present an intelligent reflecting surface (IRS) selection
strategy for multiple IRSs aided multiuser multiple-input single-output (MISO)
systems. In particular, we pose the IRS selection problem as a stable matching
problem. A two-stage user-IRS assignment algorithm is proposed, where the main
objective is to carry out a stable user-IRS matching, such that the sum rate
of the system is improved. The first stage of the proposed algorithm employs a
well-known Gale-Shapley matching designed for the stable marriage problem.
However, due to interference in multiuser systems, the matching obtained after
the first stage may not be stable. To overcome this issue, one-sided (i.e.,
only IRSs) blocking pairs (BPs) are identified in the second stage of the
proposed algorithm, where the BP is a pair of IRSs which are better off after
exchanging their partners. Thus, the second stage validates the stable
matching in the proposed algorithm. Numerical results show that the proposed
assignment achieves better sum rate performance compared to distance-based and
random matching algorithms.
###### Index Terms:
MISO Systems, IRS, Stable matching
## I Introduction
Intelligent reflecting surface (IRS) is an artificial passive surface that
consists of a large number of low-cost reflecting elements. By introducing phase
shifts and/or amplitude variations, IRS can reflect the incident
electromagnetic wave towards the specified direction, thus enabling a
smart/programmable wireless environment [1]. IRS has been envisioned to
revolutionize high frequency wireless communication systems particularly when
combined with other promising technologies such as massive multiple-input
multiple-output (MIMO) and terahertz communications. More precisely, the
quality of the MIMO channel link can be improved, i.e., unfavourable
propagation conditions can be controlled by judiciously designing the phase
shifts of IRS reflecting elements.
Multiple IRSs aided communication systems have been shown to provide robust data
transmission and wide coverage area [2] compared to the single IRS deployment.
This motivates us to investigate the user-IRS association problem in multiple IRSs
aided multiuser multiple-input single-output (MISO) systems. There are few
studies that deal with the user-IRS association problem in single-input
single-output (SISO) systems; however, here we only discuss related studies investigating this problem in MISO systems. A distance-based user-IRS
association is performed in [3, 4], where an IRS is assigned to a nearby user.
To avoid the complicated inter-IRS interference, orthogonal IRS
channels are considered in [5] and the user-IRS assignment is based on a
greedy search algorithm.
In this paper, the user-IRS assignment problem in multiuser MISO systems is
modeled as a matching problem. We assume that each user can be matched to at
most one IRS, resulting in a one-to-one matching problem such as stable
marriage. Seminal studies in matching theory demonstrate that there exists at least one stable matching for general preferences in one-to-one games [6]. In this study, the user preference is based on locally available information, i.e., the user rate without interference. In contrast, the base station (BS)
controls and manages preferences of IRSs which are based on the user rate with
interference, as it is assumed that the BS has perfect knowledge of global
channel state information (CSI). With these two-sided preferences, we propose
a two-stage IRS-optimal stable matching algorithm, where the first stage consists of the well-known Gale-Shapley matching algorithm and the second stage
identifies blocking pairs (BPs) in the current matching. Due to interference,
a user's choice of IRS will impact the choices of the other IRSs in the network; therefore, it is important to identify BPs until a stable matching is
obtained. Note that in a stably matched association, there is no single user-
IRS pair which is better off, if allowed to change their assigned partners.
## II System Model
Consider a downlink multiuser MISO communication system assisted by multiple
IRSs as shown in Figure 1. The system consists of a BS equipped with $M$
transmit antennas. There are a total of $K$ single-antenna users being served by the BS. In addition, there are $L$ IRS units
deployed in the surrounding area, where each IRS consists of $N$ reflecting
elements. Let $\mathcal{U}$ and $\mathcal{R}$ denote the set of users and
IRSs, respectively. The main objective is to achieve a stable user-IRS one-
to-one matching, $\mu:\mathcal{U}\to\mathcal{R}$, such that the sum rate of
the network is improved.
Figure 1: Illustrative system model of the considered IRSs-assisted multiuser
MISO system with $K=L=2$.
The baseband equivalent direct channel from the BS to the $k^{\text{th}}$ user is denoted by $\mathbf{h}_{\text{d},k}\in\mathbb{C}^{M\times 1}$. The
channel from the BS to the $l^{\text{th}}$ IRS is represented by
$\mathbf{G}_{l}\in\mathbb{C}^{N\times M}$ and the reflected channel from the
$l^{\text{th}}$ IRS towards the user $k$, is denoted by
$\mathbf{f}_{k,l}\in\mathbb{C}^{N\times 1}$. In this study, we assume that
channels follow quasi-static flat fading, where channel values remain the same
within a coherence interval $T_{c}$. Moreover, it is assumed that perfect CSI
is available at the BS. The entries of the direct-link channel are assumed to
be independent and identically distributed (i.i.d.) complex Gaussian random variables with zero mean and unit variance, such that
$\mathbf{h}_{\text{d},k}\sim\mathcal{CN}({\bf 0}_{M},{\bf I}_{M})$. Due to
the close proximity of the IRSs to the BS, we assume that a line-of-sight (LoS) path exists between the BS and the $l^{\text{th}}$ IRS. Therefore, $\mathbf{G}_{l}$ can
be modeled as a Rician fading channel, given by
$\mathbf{G}_{l}=\sqrt{\kappa_{g}/(\kappa_{g}+1)}\mathbf{G}_{{l}}^{\rm
LoS}+\sqrt{1/(\kappa_{g}+1)}\mathbf{G}_{{l}}^{\rm NLoS},$ (1)
where $\kappa_{g}$ is the Rician factor and $\mathbf{G}_{{l}}^{\rm
LoS}\in\mathbb{C}^{N\times M}$ denotes the channel associated with the LoS
component, while $\mathbf{G}_{l}^{\rm NLoS}\in\mathbb{C}^{N\times M}$
represents the non-LoS (NLoS) channel matrix, whose entries are i.i.d and
follow the complex Gaussian distribution with zero mean and unit variance. In
(1), the fixed LoS channel component ${\bf G}_{l}^{\rm LoS}$ is modeled as¹
$\mathbf{G}_{l}^{\rm LoS}=\mathbf{a}_{N}\left(\theta_{l}^{\mathrm{AoA}}\right)\mathbf{a}_{M}^{H}\left(\theta_{l}^{\mathrm{AoD}}\right),$ (2)
¹We use $(\cdot)^{H}$, $(\cdot)^{*}$, $(\cdot)^{T}$ and $(\cdot)^{-1}$ to denote the conjugate transpose, the conjugate, the transpose, and the inverse operations, respectively. For any given matrix $\mathbf{A}$, the quantity $\mathbf{A}(i,j)$ denotes the entry of the matrix $\mathbf{A}$ corresponding to the $i^{\text{th}}$ row and $j^{\text{th}}$ column. Similarly, $\mathbf{A}(:,l)$ represents the $l^{\text{th}}$ column of the matrix $\mathbf{A}$. $\mathbf{a}(n)$ denotes the $n^{\text{th}}$ entry of the vector $\mathbf{a}$.
where $\theta_{l}^{\mathrm{AoA}}$ and $\theta_{l}^{\mathrm{AoD}}$ represent
the angle of arrival (AoA) and angle of departure (AoD) of the $l^{\text{th}}$
IRS, respectively. The $n$-dimensional uniform linear
array response vector, denoted by
$\mathbf{a}_{n}(\theta)\in\mathbb{C}^{n\times 1}$, can be expressed as
$\mathbf{a}_{n}(\theta)=\left[1,e^{j\frac{2{\pi}d}{\lambda}\sin(\theta)},e^{j\frac{4{\pi}d}{\lambda}\sin(\theta)},\ldots,e^{j\frac{2(n-1){\pi}d}{\lambda}\sin(\theta)}\right]^{T},$
where $\theta$ is the angle, $d$ is the distance between neighbouring
elements, and $\lambda$ denotes the wavelength of the carrier. The channel
between the $l^{\text{th}}$ IRS and the user $k$ is given by
$\mathbf{f}_{k,l}=\sqrt{\kappa_{f}/(\kappa_{f}+1)}\mathbf{f}_{k,l}^{\rm
LoS}+\sqrt{1/(\kappa_{f}+1)}\mathbf{f}_{k,l}^{\rm NLoS},$ (3)
where $\kappa_{f}$ is the Rician factor and $\mathbf{f}_{k,l}^{\rm
LoS}=\mathbf{a}_{N}\left(\theta_{k,l}\right)\in\mathbb{C}^{N\times 1}$ is the
fixed LoS channel, where $\theta_{k,l}$ represents the AoD from the
$l^{\text{th}}$ IRS to the $k^{\text{th}}$ user. $\mathbf{f}_{k,l}^{\rm
NLoS}\in\mathbb{C}^{N\times 1}$ is the NLoS channel vector, whose entries are
i.i.d. and follow the complex Gaussian distribution with zero mean and unit
variance. These channels are also multiplied by the square root of the
distance-dependent path loss, whose general form is given in Section IV.
Assuming that the $k^{\text{th}}$ user is paired with the IRS $\mu(k)$, where
$1\leq\mu(k)\leq L$, then we can define the desired and interfering channels at the
$k^{\text{th}}$ user as
$\mathbf{h}_{k,\mu(k)}=\mathbf{h}_{\text{d},k}^{H}+\mathbf{f}_{k,\mu(k)}^{H}\mathbf{\Phi}_{\mu(k)}\mathbf{G}_{\mu(k)}$
(4)
and
$\mathbf{z}_{k,l}=\mathbf{f}_{k,l}^{H}\mathbf{\Phi}_{l}\mathbf{G}_{l},\quad l\neq\mu(k),$
(5)
respectively, where $\mathbf{\Phi}_{\mu(k)}$ is the diagonal phase shift
matrix for the $\mu(k)^{\text{th}}$ IRS, given by
$\mathbf{\Phi}_{\mu(k)}=\text{diag}\\{\alpha_{1,\mu(k)}e^{j\phi_{1,\mu(k)}},\alpha_{2,\mu(k)}e^{j\phi_{2,\mu(k)}},\ldots,\alpha_{N,\mu(k)}e^{j\phi_{N,\mu(k)}}\\}$,
where $\phi_{n,\mu(k)}$ and $\alpha_{n,\mu(k)}$ denote the phase and ON/OFF
state of the $n^{\text{th}}$ element of the IRS $\mu(k)$. Here, a set of discrete phase shifts is considered for the reflecting elements. We can write the
received signal at the $k^{\text{th}}$ user assisted by the selected
$\mu(k)^{\text{th}}$ IRS as
$\displaystyle
y_{k,\mu(k)}=\sqrt{P_{k}}\big{(}\mathbf{h}_{k,\mu(k)}+\sum_{l\neq\mu(k)}^{L}\mathbf{z}_{k,l}\big{)}\mathbf{w}_{k}s_{k}+$
$\displaystyle\sum_{j=1,j\neq
k}^{K}\sqrt{P_{j}}\big{(}\mathbf{h}_{k,\mu(k)}+\sum_{l\neq\mu(k)}^{L}\mathbf{z}_{k,l}\big{)}\mathbf{w}_{j}s_{j}+n_{k},$
(6)
where $P_{k}$ denotes the transmit power allocated for the $k^{\text{th}}$
user. The transmit precoding matrix for the $k^{\text{th}}$ user at the BS is
represented by $\mathbf{w}_{k}$, where $\|\mathbf{w}_{k}\|^{2}=1$. The
transmitted symbol for the $k^{\text{th}}$ user is given by $s_{k}$. The
additive noise at the $k^{\text{th}}$ user is represented by $n_{k}$, which is
assumed to follow the i.i.d. Gaussian distribution with zero mean and variance
$\sigma_{k}^{2}$. Using (6), we can write the SINR of the $k^{\text{th}}$
user as
$\gamma_{k,\mu(k)}=\frac{P_{k}\left\|\left(\mathbf{h}_{k,\mu(k)}+\sum_{l\neq\mu(k)}^{L}\mathbf{z}_{k,l}\right)\mathbf{w}_{k}\right\|^{2}}{\sum_{j\neq
k}^{K}P_{j}\left\|\left(\mathbf{h}_{k,\mu(k)}+\sum_{l\neq\mu(k)}^{L}\mathbf{z}_{k,l}\right)\mathbf{w}_{j}\right\|^{2}+\sigma_{k}^{2}}.$
By treating multiuser interference as noise, we can express the achievable
rate $R_{k}$ (in bits/s/Hz) at the $k^{\text{th}}$ user as
$R_{k}=\log_{2}\left(1+\gamma_{k,\mu(k)}\right)$. Consequently, the overall
sum rate of the network is given by $R_{\text{sum}}=\sum_{k=1}^{K}R_{k}$.
Similar to [7], we employ fixed zero-forcing (ZF) precoding at the BS. The
concatenated channel matrix is given by
${\mathbf{H}}=[{\mathbf{h}}_{1,\mu{(1)}},\ldots,{\mathbf{h}}_{K,\mu{(K)}}]^{T}$.
The ZF precoding vector for the $k^{\textrm{th}}$ user is denoted by
${\mathbf{w}}_{k}$, which is the $k^{\text{th}}$ normalized column of the
matrix ${\mathbf{W}}$, where
${\mathbf{W}}={\mathbf{H}}^{H}({\mathbf{H}}{\mathbf{H}}^{H})^{-1}$. For the
phase shifts or passive beamforming design, we employ an instantaneous SNR
maximization approach [4] for the reflective link. For the $k^{\text{th}}$
user, the SNR maximization problem is
$\max_{\mathbf{\Phi}_{\mu(k)}}\ \ \frac{\|\mathbf{f}_{k,\mu(k)}^{H}\mathbf{\Phi}_{\mu(k)}\mathbf{G}_{\mu(k)}\|^{2}}{N_{0}},\ \ \text{s.t.}\ \ |\mathbf{\Phi}_{\mu(k)}(n,n)|=1,\ \forall n.$ (7)
The sub-optimal solution of the problem (7) is provided in Algorithm 1 [4],
where user and IRS indices are ignored for simplicity. It is based on a
discrete reflection phase set
$\mathcal{Z}=\\{-\pi,-\pi+(2\pi/2^{B}),\ldots,-\pi+(2\pi/2^{B})(2^{B}-1)\\}$,
where $B$ is the number of IRS control bits. In steps 4 and 5, we have
$\Gamma_{n,\hat{m}}=|\mathbf{f}(n)||\mathbf{G}(n,\hat{m})|$. Although,
Algorithm 1 provides a sub-optimal solution, but it exhibit low complexity,
i.e., $\mathcal{O}(N)$.
1 Input: $\mathbf{G}$, $\mathbf{f}$ and $\mathcal{Z}$
2 Initialize: Set $s_{0}=0$, select $\hat{m}=\mathop{\arg\max}\limits_{1\leq m\leq M}\left\|{{{\mathbf{G}}(:,m)}}\right\|$
3 for $n=1,\ldots,N$ do
4 find $\hat{\theta}=\mathop{\arg\max}\limits_{{\theta}\in\mathcal{Z}}\left|{{s_{{{n-1}}}}+\Gamma_{n,\hat{m}}e^{j(\angle\mathbf{f}^{*}(n)+\angle\mathbf{G}(n,\hat{m})+{\theta})}}\right|$
5 set ${s_{n}}={s_{n-1}}+\Gamma_{n,\hat{m}}e^{j(\angle\mathbf{f}^{*}(n)+\angle\mathbf{G}(n,\hat{m})+{\hat{\theta}})}$
6 set ${\phi_{{n}}}=\hat{\theta}$
7 end for
8 Output: ${{\boldsymbol{\Phi}}}={\rm{diag}}\left\\{{e^{j\phi_{1}},e^{j\phi_{2}},\cdots,e^{j\phi_{N}}}\right\\}$
Algorithm 1 Passive Beamforming Design [4]
## III Proposed IRS Selection Strategy
In this section, we present our proposed IRS-user assignment framework for the
system model discussed above. Here, we explain three important stages of the
selection strategy, namely; CSI acquisition, preference list setup and stable
matching. The proposed algorithm is designed for the case when the number of
users in the system is equal to the number of IRSs, i.e., $K=L$. Let
$\mathcal{U}=\\{u_{1},u_{2},\ldots,u_{K}\\}$ and
$\mathcal{R}=\\{r_{1},r_{2},\ldots,r_{L}\\}$ denote the set of users and IRSs,
respectively. The aim of the proposed method is to obtain a stable user-IRS
matching $\mu:\mathcal{U}\to\mathcal{R}$ that maximizes the overall sum rate of
the network, such that
$\max_{\mu}\ \sum_{k=1}^{K}\log_{2}\left(1+\gamma_{k,\mu(k)}\right),\
\text{s.t.}\ \mu\ \text{is a matching},$ (8)
where $\mu(k)$ denotes the index of the IRS which is matched to the
$k^{\text{th}}$ user. Each user will be matched with only one IRS.
CSI Acquisition: Although we assume perfect CSI in this study, for a
practical implementation of the proposed algorithm it is important to provide
the details of the CSI acquisition process at the BS and the users.
_a) Global CSI at BS:_ We divide the CSI acquisition stage into two main
categories: direct channel and reflected IRS channel estimation. During
direct channel estimation, we assume that all IRS units are turned OFF. Here,
a conventional multiuser MIMO training-based TDD channel estimation technique
can be employed by leveraging uplink/downlink channel reciprocity. For the
estimation of IRS-based reflected channels, each IRS is turned ON one-by-one,
i.e., for the $i^{\text{th}}$ IRS, we have $\alpha_{n,i}=1$, $\forall n$,
whereas $\alpha_{n,j}=0$ for $j\neq i$. When the $i^{\text{th}}$ IRS is
turned ON, each user sends an orthogonal training sequence of length $T$
(where $T<T_{c}$) to the BS in the same time-frequency resource. The BS
estimates downlink channels from the received observation using appropriate
criteria. For practical channel estimation, we refer the reader to a single
IRS based multiuser channel estimation in [8]. CSI acquisition continues at
the BS until all the IRSs have been turned ON and OFF one-by-one in a systematic
manner. At the end of the training process, the BS acquires all the downlink
user channels, i.e., both direct and reflected channels.
_b) Local CSI at users:_ In this study, we assume that each user only has
knowledge of its own channel. For this purpose, downlink training can be used
to acquire the local CSI, which is given by $\mathbf{h}_{k,l}$ for the
$k^{\text{th}}$ user assisted by the $l^{\text{th}}$ IRS.
Preference List Setup: After the completion of uplink and downlink CSI
acquisition process, each user generates its preference list based on the
offered rates from IRSs. The channel gain for the $k^{\text{th}}$ user when
served by the $l^{\text{th}}$ IRS is given by $|h_{k,l}|^{2}$. The user
computes the rate by using
$C_{k,l}=\log_{2}(1+(|h_{k,l}|^{2}/\sigma_{k}^{2}))$ $\forall\ l$. At the
user, the preference list consists of IRSs, which are ranked in a descending
order based on their offered rate. This means that the user’s preference list
is based on the local channel information (without interference) as users do
not have the information of other user channels. The preference list created
by the $k^{\text{th}}$ user is denoted by ${Pl\\_u}_{k}$.
On the other hand, the BS has perfect knowledge of global CSI, and therefore,
unlike the user preference list, the preference lists created at the BS for
IRSs are based on the users' rates with interference, i.e., for the
$k^{\text{th}}$ user $R_{k,\mu(k)}=\log_{2}(1+\gamma_{k,\mu(k)})$. This rate
can also be expressed with respect to the $l^{\text{th}}$ IRS as
$R_{\mu(l),l}=\log_{2}(1+\gamma_{\mu(l),l})$, where $\mu(l)$ denotes the index
of the user which is matched to the $l^{\text{th}}$ IRS. It is not feasible
for the BS to compute all the possible user-IRS permutations as this will
increase the computational overhead significantly at the BS. Therefore, to
compute preference lists at the BS for IRSs, a computationally less complex
strategy is designed in which the BS evaluates only a small number of random
user-IRS permutations, while ensuring that each user is paired with every
IRS. An example for the $K=L=3$ case is presented here to explain the
strategy. The BS generates a user-IRS association matrix consisting of only
$L$ rows, instead of the $L!$ rows required if all permutations were considered.
This random user-IRS association matrix for the $K=L=3$ case is given by
$\mathbf{A}=\begin{bmatrix}(u_{1},r_{1})&(u_{2},r_{2})&(u_{3},r_{3})\\\
(u_{1},r_{3})&(u_{2},r_{1})&(u_{3},r_{2})\\\
(u_{1},r_{2})&(u_{2},r_{3})&(u_{3},r_{1})\end{bmatrix}.$ (9)
The rows of the matrix $\mathbf{A}$ define three different user-IRS
associations for computing the preference lists of IRSs at the BS. In the
first row, the first user is paired with the first IRS, i.e.,
$u_{1}\leftrightarrow r_{1}$, while the other users have $u_{2}\leftrightarrow
r_{2}$ and $u_{3}\leftrightarrow r_{3}$ associations. The preference list of
the first IRS will be computed by sorting the rates of the users with
associations $\\{\mathbf{A}(1,1),\mathbf{A}(2,2),\mathbf{A}(3,3)\\}$ in the
descending order. The preference list for the $l^{\text{th}}$ IRS is
represented by ${Pl\\_r}_{l}$.
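The sketch below illustrates, in Python, how such cyclic association rows and the resulting IRS preference lists could be generated; the `rate` oracle, which returns the per-user rates $R_{k,\mu(k)}$ (with interference) under a given association row, is a hypothetical placeholder for the BS-side rate computation.

```python
import numpy as np

def association_rows(L):
    """L cyclic user-IRS association rows (instead of all L! permutations).
    Row t assigns user k to IRS (k - t) mod L; with 0-indexing this
    reproduces the matrix (9) for L = 3."""
    return np.array([[(k - t) % L for k in range(L)] for t in range(L)])

def irs_preference_lists(assoc, rate):
    """BS-side preference lists.  `rate(row)` is a hypothetical oracle that
    returns the per-user rates under association `row`."""
    L = assoc.shape[0]
    rates = np.empty((L, L))            # rates[k, l]: rate of user k via IRS l
    for row in assoc:                   # each (k, l) pair occurs exactly once
        R = rate(row)
        for k, l in enumerate(row):
            rates[k, l] = R[k]
    # preference list of IRS l: users sorted by descending rate through l
    return [list(np.argsort(-rates[:, l])) for l in range(L)]
```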
Input: Set of all users $\mathcal{U}$ and IRSs $\mathcal{R}$, user preference
lists ${Pl\\_u}_{k}$ $\forall$ $k$, IRS preference lists ${Pl\\_r}_{l}$
$\forall$ $l$
1
2
_Stage 1: Gale-Shapley_
3
4
5Initialize Each IRS $r_{l}\in\mathcal{R}$ to be free, $\mu=\emptyset$
6 while _IRS $r_{l}\in\mathcal{R}$ is free and ${Pl\\_r}_{l}\neq\emptyset$_
do
7 $u_{k}=$ first user on $r_{l}$’s list to whom $r_{l}$ has not proposed yet
8 if ($u_{k}$ is not assigned)
9 Assign $u_{k}$ and $r_{l}$ to be allocated to each other
10 $\mu\leftarrow\mu\cup(u_{k},r_{l})$
11
12 else if ($u_{k}$ prefers $r_{l}$ over previously assigned $r_{j}$)
13 Assign $r_{j}$ to be free $\mu\leftarrow\mu/(u_{k},r_{j})$
14 Assign $u_{k}$ and $r_{l}$ to be allocated to each other
$\mu\leftarrow\mu\cup(u_{k},r_{l})$
15 else
16 $u_{k}$ rejects $r_{l}$ and ($r_{l}$ remains unassigned)
17 end if
18 end while
19Output $\mu$: matched user-IRS pairs
20
21
_Stage 2: Stable Matching (identifying BPs)_
22
23
24Set $\mu_{t}=\mu$ and $t=0$
25 while _$\mu_{t}$ is not Pareto optimal_ do
26 for all IRSs pairs $(r_{i},r_{j})$ do
27 Switch users of pair: $(\mu_{t}(r_{j}),r_{i})$ and $(\mu_{t}(r_{i}),r_{j})$
28 Compute new user rates $R_{\mu_{t}(j),i}$ and $R_{\mu_{t}(i),j}$
29 if ($R_{\mu_{t}(j),i}>R_{\mu_{t}(i),i}$ and
$R_{\mu_{t}(i),j}>R_{\mu_{t}(j),j}$)
30 IRS pair $(r_{i},r_{j})$ is allowed to exchange users
31 else
32 IRS pair $(r_{i},r_{j})$ is not allowed to exchange users
33 end if
34 end for
35 Set $t=t+1$
36
37 end while
38Output $\mu_{s}=\mu_{t}$: stably matched user-IRS pairs
Algorithm 2 User-IRS Assignment Algorithm
Stable Matching: The studied problem is a bipartite matching problem with two-
sided preferences. The proposed user-IRS assignment algorithm comprises two
phases: 1) Gale-Shapley and 2) blocking pair (BP) identification for
stability. The pseudo code of the proposed algorithm is given in Algorithm 2.
Each user shares its preference list with the BS. Although the BS has global
CSI available and could compute the users' preference lists itself, due to
imperfect uplink/downlink channel reciprocity in practical systems we rely on
the users to compute and share their preferences.
After obtaining the preference list of the users, the BS performs the proposed
user-IRS assignment, which is IRS-optimal. For a given IRS, the BS assigns
its most preferred user to that IRS, provided the user is not already matched
with another IRS. If that preferred user is already matched to one of the other IRSs,
then it is re-assigned to the proposing IRS only if the user also prefers it
over the assigned IRS. The same process is repeated for all the IRSs until all
the IRSs are matched.
Since the IRS matched to any given user also affects the performance of the
other users, the matching output $\mu$ at step 14 may not be stable. This
instability is due to the interference caused by the IRSs assigned to other
users. The structure of the ZF precoding also affects stability at this phase.
A matching with such interdependence is called a matching game with
externalities [6]. Therefore, in the second stage
of Algorithm 2, the BS finds the BPs in the current matching $\mu$. A BP
is a user-IRS pair $(u,r)$ whose members prefer each other over their current
partners. The BS searches over all unstable IRS pairs: if the rates obtained
by exchanging users are beneficial for both IRSs, then the exchange is
allowed. This one-sided (IRS-only) stability is called Pareto optimality in
matching theory, where there is no other matching in which some IRS is better
off, while no IRS is worse off. The process continues until a trade-in-free
environment is reached, resulting in a stable matching $\mu_{s}$.
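A compact Python sketch of both stages of Algorithm 2 is given below. Preference lists are 0-indexed Python lists, and `rate_pair(k, l)` is a hypothetical oracle for the rate of user $k$ via IRS $l$; in the actual system these rates depend on the entire matching through interference, which the oracle abstracts away.

```python
def gale_shapley(pref_irs, pref_user):
    """Stage 1: IRS-proposing deferred acceptance (steps 1-19).
    pref_irs[l] ranks users for IRS l; pref_user[k] ranks IRSs for user k,
    most preferred first.  Returns a dict mapping IRS -> user."""
    L = len(pref_irs)
    match_user = {}                  # user -> IRS
    next_prop = [0] * L              # next index in each IRS's list
    free = list(range(L))
    while free:
        l = free.pop(0)
        if next_prop[l] >= len(pref_irs[l]):
            continue                 # IRS l has exhausted its list
        k = pref_irs[l][next_prop[l]]
        next_prop[l] += 1
        if k not in match_user:      # user k is unassigned
            match_user[k] = l
        elif pref_user[k].index(l) < pref_user[k].index(match_user[k]):
            free.append(match_user[k])   # displaced IRS becomes free again
            match_user[k] = l
        else:
            free.append(l)               # rejected; l proposes again later
    return {l: k for k, l in match_user.items()}

def stabilize(mu, rate_pair):
    """Stage 2 (steps 24-38): swap users between IRS pairs while both gain."""
    L = len(mu)
    improved = True
    while improved:                  # loop until a trade-in-free environment
        improved = False
        for i in range(L):
            for j in range(i + 1, L):
                ki, kj = mu[i], mu[j]
                if (rate_pair(kj, i) > rate_pair(ki, i) and
                        rate_pair(ki, j) > rate_pair(kj, j)):
                    mu[i], mu[j] = kj, ki
                    improved = True
    return mu
```

In this sketch, `gale_shapley` realizes the IRS-proposing Stage 1 and `stabilize` realizes the pairwise exchanges of Stage 2 that are kept only when both IRSs benefit.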
## IV Simulation Results
In this section, we run extensive simulations to assess the performance of our
proposed user-IRS matching algorithm. We consider a cellular communication
setup where a single BS is located at the origin $(0,0)$ and IRSs are
distributed equispaced on a circle of radius $d_{R}$ centered at the BS. Users
are deployed uniformly at random in each realization, with a maximum spread of
$d_{R}/2$ from the BS. Throughout the simulations, the value
of distance is set to $d_{R}=50$, unless stated otherwise. All simulation
results are obtained by statistically averaging over a large number of channel
realizations.
The path losses of the BS-IRS and IRS-user links are modelled as
$L_{bi}=K_{bi}({d_{bi}})^{-\alpha_{2}}=C_{\nu}\delta_{b}\delta_{i}({d_{bi}})^{-\alpha_{2}}$
and
$L_{iu}=K_{iu}({d_{iu}})^{-\alpha_{2}}=C_{\nu}\delta_{i}\delta_{u}({d_{iu}})^{-\alpha_{2}}$,
respectively, where $\alpha_{2}=2$ is the path loss exponent. $d_{bi}$ and
$d_{iu}$ are the distances of the BS to IRS and IRS to user, respectively. The
quantities $\delta_{b}$ and $\delta_{u}$ are antenna gains of the BS and user
antennas respectively, while $\delta_{i}$ is the reflection gain of the IRS.
The path loss of the IRS assisted composite link can be written as
$L_{biu}=L_{bi}L_{iu}=C_{\nu}^{2}\delta_{b}\delta_{u}\delta_{i}^{2}({d_{bi}d_{iu}})^{-\alpha_{2}}.$ (10)
The relative reflection gain is given by
$\zeta=\delta_{i}/(\sqrt{\delta_{b}\delta_{u}})\Rightarrow\delta_{i}^{2}=\zeta^{2}\delta_{b}\delta_{u}$.
We can also write (10) as
$L_{biu}=(C_{\nu})^{2}\zeta^{2}({d_{bi}d_{iu}})^{-\alpha_{2}}$, where we have
kept $\zeta=10$ dB. For the direct link between the BS and user, the path loss
exponent is taken as $\alpha=3.5$. Throughout this section, we set
$\kappa_{g}=\kappa_{f}=10$, $C_{\nu}=-30$ dB and $\delta_{b}=\delta_{u}=0$ dB.
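Since all gains here are specified in dB, the composite loss (10) reduces to a simple sum; a minimal sketch, assuming the parameter values above:

```python
import numpy as np

def composite_pathloss_db(d_bi, d_iu, C_nu_db=-30.0, zeta_db=10.0, alpha2=2.0):
    """Composite BS-IRS-user path loss (10) in dB, using
    L_biu = C_nu^2 zeta^2 (d_bi * d_iu)^(-alpha2) and delta_b = delta_u = 0 dB."""
    return 2 * C_nu_db + 2 * zeta_db - 10 * alpha2 * np.log10(d_bi * d_iu)
```

For instance, `composite_pathloss_db(50, 25)` gives the loss for an IRS at $d_{bi}=50$ serving a user at $d_{iu}=25$.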
The impact of increasing IRS reflecting elements on the sum rate performance
is shown in Fig. 2 with $L=K=M=10$. The total transmit power is kept at $10$
dB, which is equally distributed among the users. For comparison, we also plot
sum rate results for distance-based matching [3, 4], the original Gale-Shapley
matching (i.e., only Stage 1 of Algorithm 2), and random matching. It can
be seen from Fig. 2 that the proposed user-IRS assignment algorithm provides
the best sum rate performance as compared to the other schemes. The
performance of the standard Gale-Shapley matching is comparable with that of
the distance-based matching algorithm, in which each IRS is assigned to the
nearest user. Note that the distance-based matching is performed using
accurate distances, which may not be available in practical systems.
Figure 2: Sum rate versus number of Reflecting elements.
Fig. 3 shows the sum rate performance of the proposed algorithm for various
values of the total transmit power at the BS. Here, we use $L=K=M=8$ and
$N=50$. From Fig. 3, it is noticed that the increase in sum rate is more
evident in low transmit power regimes. The reason for this trend is that at
high transmit powers interference also rises in the network. Among the schemes
plotted, the proposed algorithm has the superior sum rate performance.
Figure 3: Sum rate versus BS total transmit power.
The effect of increasing the deployment radius $d_{R}$ on the achievable sum-
rate is captured in Table I. The parameters are the same as in Fig. 3, except
that the transmit power is now kept at $9$ dB. The sum rate initially improves
as $d_{R}$ increases, since inter-user interference is reduced. However, as
$d_{R}$ increases further, the higher path loss
results in a performance degradation. This suggests that more reflective
elements are needed to overcome this degradation.
TABLE I: Sum rate (bps/Hz) results for different $d_{R}$ values

Method | $d_{R}=50$ | $d_{R}=100$ | $d_{R}=150$ | $d_{R}=200$ | $d_{R}=250$
---|---|---|---|---|---
Proposed Matching | 37.0 | 38.3 | 38.6 | 38.0 | 35.8
Distance Matching | 29.2 | 29.9 | 29.6 | 29.1 | 27.0
Finally, Fig. 4 shows the sum rate performance against different numbers of
IRSs/users in the network. Here, we have $N=20$ and the total transmit power
at the BS is $5$ dB. The results are plotted for two different values of $M$,
i.e., $M=16$ and $M=25$. Intuitively, the sum rate increases as the number of
IRSs/users increases. It has been noticed that the performance with $M=16$
starts degrading as $K\to M$, suggesting that more transmit antennas or
reflecting elements are required to maintain the performance gain.
Figure 4: Sum rate versus number of IRSs/users $(K=L)$.
## V Conclusion
We have proposed an IRS selection strategy based on stable matching for
multiuser MISO communication systems. To achieve a user-IRS association that
improves the overall sum rate of the network, we rely on a two-stage matching
algorithm. In the first stage, the Gale-Shapley algorithm is used to find an
initial user-IRS matching. In the second stage, IRS BPs willing to exchange
their users are identified, and an exchange is performed whenever it is
beneficial for both IRSs. Through simulations, it is revealed that the proposed user-IRS
assignment outperforms distance-based and random matching algorithms. For
future work, it will be useful to investigate user-IRS assignment for MIMO
systems by jointly optimizing passive and active beamforming.
## References
* [1] Q. Wu and R. Zhang, “Towards smart and reconfigurable environment: Intelligent reflecting surface aided wireless network,” _IEEE Commun. Mag._ , vol. 58, no. 1, pp. 106–112, Nov 2019.
* [2] Z. Yang, M. Chen, W. Saad, W. Xu, M. Shikh-Bahaei, H. V. Poor, and S. Cui, “Energy-efficient wireless communications with distributed reconfigurable intelligent surfaces,” [Online]. Available: https://arxiv.org/abs/2005.00269, 2020.
* [3] Q. Wu and R. Zhang, “Joint active and passive beamforming optimization for intelligent reflecting surface assisted SWIPT under QoS constraints,” _IEEE J. Sel. Areas Commun._ , vol. 38, no. 8, pp. 1735–1748, Aug 2020.
* [4] M. Jung, W. Saad, M. Debbah, and C. S. Hong, “Asymptotic optimality of reconfigurable intelligent surfaces: Passive beamforming and achievable rate,” in _Proc. IEEE Int. Conf. Commun. (ICC’20)_ , Dublin, Ireland, Jun 2020, pp. 1–6.
* [5] X. Li, J. Fang, F. Gao, and H. Li, “Joint active and passive beamforming for intelligent reflecting surface-assisted massive MIMO systems,” [Online]. Available: https://arxiv.org/abs/1912.00728, 2019.
* [6] Y. Gu, W. Saad, M. Bennis, M. Debbah, and Z. Han, “Matching theory for future wireless networks: Fundamentals and applications,” _IEEE Commun. Mag._ , vol. 53, no. 5, pp. 52–59, May 2015.
* [7] C. Huang, A. Zappone, G. C. Alexandropoulos, M. Debbah, and C. Yuen, “Reconfigurable intelligent surfaces for energy efficiency in wireless communication,” _IEEE Trans. Wireless Commun._ , vol. 18, no. 8, pp. 4157–4170, June 2019.
* [8] H. Liu, X. Yuan, and Y.-J. A. Zhang, “Matrix-calibration-based cascaded channel estimation for reconfigurable intelligent surface assisted multiuser MIMO,” _IEEE J. Sel. Areas Commun._ , vol. 38, no. 11, pp. 2621–2636, Nov 2020.
# Convex Generalized Nash Equilibrium Problems and Polynomial Optimization
Jiawang Nie Jiawang Nie, Department of Mathematics, University of California
San Diego, 9500 Gilman Drive, La Jolla, CA, USA, 92093<EMAIL_ADDRESS>and
Xindong Tang Xindong Tang, Department of Applied Mathematics, The Hong Kong
Polytechnic University, Hung Hom, Kowloon, Hong Kong.
<EMAIL_ADDRESS>
###### Abstract.
This paper studies convex Generalized Nash Equilibrium Problems (GNEPs) that
are given by polynomials. We use rational and parametric expressions for
Lagrange multipliers to formulate efficient polynomial optimization for
computing Generalized Nash Equilibria (GNEs). The Moment-SOS hierarchy of
semidefinite relaxations are used to solve the polynomial optimization. Under
some general assumptions, we prove the method can find a GNE if there exists
one, or detect nonexistence of GNEs. Numerical experiments are presented to
show the efficiency of the method.
###### Key words and phrases:
Generalized Nash Equilibrium Problem, Convex polynomials, Polynomial
optimization, Moment-SOS relaxation, Lagrange multiplier expression.
###### 2010 Mathematics Subject Classification:
90C33, 91A10, 90C22, 65K05
## 1\. Introduction
The Generalized Nash Equilibrium Problem (GNEP) is a kind of game to find
strategies for a group of players such that each player’s objective function
is optimized, for given other players’ strategies. Suppose there are $N$
players and the $i$th player’s strategy is a vector
$x_{i}\in\mathbb{R}^{n_{i}}$ (the $n_{i}$-dimensional real Euclidean space).
We write that
$x_{i}:=(x_{i,1},\ldots,x_{i,n_{i}}),\quad x:=(x_{1},\ldots,x_{N}).$
The total dimension of all strategies is $n:=n_{1}+\ldots+n_{N}.$ The main
task of the GNEP is to find a tuple $u=(u_{1},\ldots,u_{N})$ of strategies
such that each $u_{i}$ is a minimizer of the $i$th player’s optimization
(1.1)
$\mbox{F}_{i}(u_{-i}):\,\left\\{\begin{array}[]{cl}\min\limits_{x_{i}\in\mathbb{R}^{n_{i}}}&f_{i}(u_{1},\ldots,u_{i-1},x_{i},u_{i+1},\ldots,u_{N})\\\
\mathit{s.t.}&g_{i,j}(u_{1},\ldots,u_{i-1},x_{i},u_{i+1},\ldots,u_{N})=0\,(j\in\mathcal{E}_{i}),\\\
&g_{i,j}(u_{1},\ldots,u_{i-1},x_{i},u_{i+1},\ldots,u_{N})\geq
0\,(j\in\mathcal{I}_{i}),\end{array}\right.$
where $u_{-i}:=(u_{1},\ldots,u_{i-1},u_{i+1},\ldots,u_{N})$, the $f_{i}$ and
$g_{i,j}$ are continuously differentiable functions in $x_{i}$, and the
$\mathcal{E}_{i}$, $\mathcal{I}_{i}$ are disjoint finite (possibly empty)
labeling sets. The point $u$ satisfying the above is called a Generalized Nash
Equilibrium (GNE). For notational convenience, when the $i$th player’s
strategy is considered, we use $x_{-i}$ to denote the subvector of all
players’ strategies except the $i$th one, i.e.,
$x_{-i}\,:=\,(x_{1},\ldots,x_{i-1},x_{i+1},\ldots,x_{N}),$
and write $x=(x_{i},x_{-i})$ accordingly.
This paper focuses on the Generalized Nash Equilibrium Problem of Polynomials
(GNEPP), i.e., all the functions $f_{i}$ and $g_{i,j}$ are polynomials in $x$.
For each $i=1,\ldots,N$, let $X_{i}$ be the point-to-set map such that
(1.2)
$X_{i}(x_{-i})\,:=\,\left\\{x_{i}\in\mathbb{R}^{n_{i}}\left|\begin{array}[]{l}g_{i,j}(x_{i},x_{-i})=0,\,j\in\mathcal{E}_{i},\\\
g_{i,j}(x_{i},x_{-i})\geq 0,\,j\in\mathcal{I}_{i}\end{array}\right.\right\\}.$
The $X_{i}(x_{-i})$ is the feasible strategy set of $\mbox{F}_{i}(x_{-i})$.
The domain of $X_{i}$ is
$\operatorname{dom}(X_{i}):=\\{x_{-i}\in\mathbb{R}^{n-n_{i}}:X_{i}(x_{-i})\neq\emptyset\\}.$
The tuple $x$ is said to be a feasible point of the GNEP if $x_{i}\in
X_{i}(x_{-i})$ for all $i$. Denote the set
(1.3)
$X:=\left\\{x\in\mathbb{R}^{n}\left|\begin{array}[]{l}g_{i,j}(x_{i},x_{-i})=0,\,j\in\mathcal{E}_{i},\,i=1,\ldots,N,\\\
g_{i,j}(x_{i},x_{-i})\geq
0,\,j\in\mathcal{I}_{i},\,i=1,\ldots,N\end{array}\right.\right\\}.$
Then $x$ is a feasible point for the GNEP if and only if $x\in X.$
###### Definition 1.1.
The GNEP given by (1.1) is called convex 111 In some literature, this is also
called player-convex, to distinguish from jointly-convex GNEPs; see [13]. if
for all $i=1,\ldots,N$ and for all given $x_{-i}\in\operatorname{dom}(X_{i})$,
the objective $f_{i}(x_{i},x_{-i})$ is convex in $x_{i}$ on $X_{i}(x_{-i})$,
all $g_{i,j}(x_{i},x_{-i})\,(j\in\mathcal{E}_{i})$ are affine linear in
$x_{i}$, and all $g_{i,j}(x_{i},x_{-i})\,(j\in\mathcal{I}_{i})$ are concave in
$x_{i}$.
For instance, consider the $2$-player GNEPP
(1.4)
$\begin{array}[]{lllll}\min\limits_{x_{1}\in\mathbb{R}^{3}}&\sum\limits_{j=1}^{3}(x_{1,j}-x_{2,j})^{2}&\vline&\min\limits_{x_{2}\in\mathbb{R}^{3}}&\sum\limits_{j=1}^{3}\Big{(}(x_{2,j})^{4}-x_{2,j}\prod\limits_{k=1}^{3}x_{1,k}\Big{)}\\\
\mathit{s.t.}&x_{2}^{T}x_{1}-1=0,&\vline&\mathit{s.t.}&\|x_{1}\|^{2}-\|x_{2}\|^{2}\geq
0.\\\ &(x_{1,1},x_{1,2},x_{1,3})\geq 0;&\vline&\end{array}$
In the above, the $\|\cdot\|$ denotes the Euclidean norm. For each $i$, the
Hessian of $f_{i}$ with respect to $x_{i}$ is positive semidefinite for all
$x_{-i}\in\operatorname{dom}(X_{i})$. All players have convex optimization
problems, so this is a convex GNEP. One can directly check that it has a
unique GNE $u=(u_{1},u_{2})$ with
$u_{1}=\left(\frac{\sqrt[3]{2}}{\sqrt{3}},\frac{\sqrt[3]{2}}{\sqrt{3}},\frac{\sqrt[3]{2}}{\sqrt{3}}\right),\
u_{2}=\left(\frac{1}{\sqrt[6]{108}},\frac{1}{\sqrt[6]{108}},\frac{1}{\sqrt[6]{108}}\right).$
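This can also be verified numerically. The following sketch (assuming SciPy's SLSQP solver is available) fixes $u_{-i}$ and checks that $u_{i}$ solves each player's convex problem $\mbox{F}_{i}(u_{-i})$; it is purely illustrative and not part of the method developed in this paper.

```python
import numpy as np
from scipy.optimize import minimize

u1 = np.full(3, 2.0**(1/3) / np.sqrt(3))
u2 = np.full(3, 108.0**(-1/6))

# Player 1: min ||x1 - u2||^2  s.t.  u2^T x1 = 1,  x1 >= 0
r1 = minimize(lambda x1: np.sum((x1 - u2)**2), np.ones(3),
              constraints=[{'type': 'eq',   'fun': lambda x1: u2 @ x1 - 1.0},
                           {'type': 'ineq', 'fun': lambda x1: x1}])

# Player 2: min sum_j (x2_j^4 - x2_j * prod_k u1_k)  s.t.  ||u1||^2 >= ||x2||^2
r2 = minimize(lambda x2: np.sum(x2**4 - x2 * np.prod(u1)), np.zeros(3),
              constraints=[{'type': 'ineq',
                            'fun': lambda x2: u1 @ u1 - x2 @ x2}])

print(np.allclose(r1.x, u1, atol=1e-4), np.allclose(r2.x, u2, atol=1e-4))
```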
GNEPs originated from economics in [9, 4]. Recently, they have been widely used
in many areas, such as economics, transportation, telecommunications and
pollution control. Convex GNEPs often appear in applications. We refer to [1,
3, 8, 54] for recent work on applications of GNEPs. Some application examples
are shown in Section 6.
For the classical Nash Equilibrium Problems (NEPs) of polynomials, there exist
semidefinite relaxation methods [2, 50]. Convex GNEPs can be reformulated as
variational inequality (VI) or quasi-variational inequality (QVI) problems
[14, 23, 38, 53, 22]. The Karush-Kuhn-Tucker (KKT) system for all player’s
optimization problems is considered in [12]. The penalty functions are used to
solve convex GNEPs in [17, 21, 18]. Some methods using the Nikaido-Isoda
function are given in [13, 27, 28]. Lemke's method is used to solve affine
GNEPs [56]. For general nonconvex GNEPs, we refer to [5, 11, 15, 29, 49]. It
is generally quite difficult to solve GNEPs, even if they are convex. This is
because the KKT system of a convex GNEP may still be difficult to solve. The
set of GNEs may be nonconvex, even for convex NEPs (see [50]). We refer to
[16, 19] for surveys on GNEPs.
### Contributions
This paper focuses on convex GNEPPs. Under some constraint qualifications, a
feasible point is a GNE if and only if it satisfies the KKT conditions. We
introduce rational and parametric expressions for Lagrange multipliers and
formulate polynomial optimization for computing GNEs. Our major results are:
* •
For GNEPPs, we introduce the rational expression for Lagrange multipliers and
study their properties. We prove the existence of rational expressions and
give a sufficient and necessary condition for positivity of denominators.
Moreover, we give parametric expressions for Lagrange multipliers for several
cases. For all GNEPs, parametric expressions always exist.
* •
Using rational and parametric expressions, we formulate polynomial
optimization and propose an algorithm for computing GNEs. Under some general
assumptions, we prove that the algorithm can compute a GNE if it exists, or
detect nonexistence of GNEs. This is the first numerical method that has these
properties, to the best of the authors’ knowledge.
* •
The Moment-SOS semidefinite relaxations are used to solve polynomial
optimization for finding and verifying GNEs. Numerical experiments are
presented to show the efficiency of the method.
The paper is organized as follows. Some preliminaries about polynomial
optimization are given in Section 2. We introduce rational expressions for
Lagrange multipliers in Section 3. The parametric expressions for Lagrange
multipliers are given in Section 4. We formulate polynomial optimization
problems for computing GNEs and show how to solve them using the Moment-SOS
hierarchy in Section 5. Numerical experiments and applications are given in
Section 6. Conclusions and some discussions are given in Section 7.
## 2\. Preliminaries
### Notation
The symbol $\mathbb{N}$ (resp., $\mathbb{R}$, $\mathbb{C}$) stands for the set
of nonnegative integers (resp., real numbers, complex numbers). For a positive
integer $k$, denote the set $[k]:=\\{1,\ldots,k\\}$. For a real number $t$,
$\lceil t\rceil$ (resp., $\lfloor t\rfloor$) denotes the smallest integer not
smaller than $t$ (resp., the biggest integer not bigger than $t$). We use
$e_{i}$ to denote the vector such that the $i$th entry is $1$ and all others
are zeros. By writing $A\succeq 0$ (resp., $A\succ 0$), we mean that the
matrix $A$ is symmetric positive semidefinite (resp., positive definite). For
the $i$th player’s strategy vector $x_{i}\in\mathbb{R}^{n_{i}}$, the $x_{i,j}$
denotes the $j$th entry of $x_{i}$, for $j=1,\ldots,n_{i}$. When we write
$(y,x_{-i})$, it means that the $i$th player’s strategy is
$y\in\mathbb{R}^{n_{i}}$, while the vector of all other players’ strategy is
fixed to be $x_{-i}$. Let $\mathbb{R}[x]$ denote the ring of polynomials with
real coefficients in $x$, and $\mathbb{R}[x]_{d}$ denote its subset of
polynomials whose degrees are not greater than $d$. For the $i$th player’s
strategy vector $x_{i}$, the notation $\mathbb{R}[x_{i}]$ and
$\mathbb{R}[x_{i}]_{d}$ are defined in the same way. For $i$th player’s
objective $f_{i}(x)$, the notation $\nabla_{x_{i}}f_{i}$,
$\nabla^{2}_{x_{i}}f_{i}$ respectively denote its gradient and Hessian with
respect to $x_{i}$.
In the following, we use the letter $z$ to represent either $x$, $x_{i}$ or
$(x,\omega)$ for some new variables $\omega$, for convenience of discussion.
Suppose $z:=(z_{1},\ldots,z_{l})$. For a polynomial $p(z)\in\mathbb{R}[z]$,
the $p=0$ means $p(z)$ is identically zero on $\mathbb{R}^{l}$. We say the
polynomial $p$ is nonzero if $p\neq 0$. Let
$\alpha:=(\alpha_{1},\ldots,\alpha_{l})\in\mathbb{N}^{l}$, and we denote
$z^{\alpha}:=z_{1}^{\alpha_{1}}\cdots
z_{l}^{\alpha_{l}},\quad|\alpha|:=\alpha_{1}+\ldots+\alpha_{l}.$
For an integer $d>0$, denote the monomial power set
${\mathbb{N}}_{d}^{l}\,:=\,\\{\alpha\in{\mathbb{N}}^{l}:\,\ |\alpha|\leq
d\\}.$
We use $[z]_{d}$ to denote the vector of all monomials in $z$ whose degree is
at most $d$, ordered in the graded alphabetical ordering. For instance, if
$z=(z_{1},z_{2})$, then
$[z]_{3}=(1,z_{1},z_{2},z_{1}^{2},z_{1}z_{2},z_{2}^{2},z_{1}^{3},z_{1}^{2}z_{2},z_{1}z_{2}^{2},z_{2}^{3}).$
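A small Python sketch of this graded ordering (monomials represented as strings; purely illustrative):

```python
from itertools import combinations_with_replacement

def monomial_vector(var_names, d):
    """All monomials in the given variables up to degree d, in graded order;
    within each degree, combinations_with_replacement follows the
    alphabetical order of the variables."""
    monos = ['1']
    for deg in range(1, d + 1):
        for combo in combinations_with_replacement(var_names, deg):
            monos.append('*'.join(combo))
    return monos

print(monomial_vector(['z1', 'z2'], 3))
# ['1', 'z1', 'z2', 'z1*z1', 'z1*z2', 'z2*z2', 'z1*z1*z1', ...]
```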
Throughout the paper, a property is said to hold generically if it holds for
all points in the space of input data except a set of Lebesgue measure zero.
### 2.1. Ideals and positive polynomials
Let $\mathbb{F}:=\mathbb{R}\ \mbox{or}\ \mathbb{C}$. For a polynomial
$p\in\mathbb{F}[z]$ and subsets $I,J\subseteq\mathbb{F}[z]$, define the
product and Minkowski sum
$p\cdot I:=\\{pq:\,q\in I\\},\quad I+J:=\\{a+b:\,a\in I,b\in J\\}.$
The subset $I$ is an ideal if $p\cdot I\subseteq I$ for all
$p\in\mathbb{F}[z]$ and $I+I\subseteq I$. For a tuple of polynomials
$q=(q_{1},\ldots,q_{m})$, the set
$\mbox{Ideal}[q]:=q_{1}\cdot\mathbb{F}[z]+\ldots+q_{m}\cdot\mathbb{F}[z]$
is the ideal generated by $q$, which is the smallest ideal containing each
$q_{i}$.
We review basic concepts in polynomial optimization. A polynomial
$\sigma\in\mathbb{R}[z]$ is said to be a sum of squares (SOS) if
$\sigma=p_{1}^{2}+\ldots+p_{k}^{2}$ for some polynomials
$p_{i}\in\mathbb{R}[z]$. The set of all SOS polynomials in $z$ is denoted as
$\Sigma[z]$. For a degree $d$, we denote the truncation
$\Sigma[z]_{d}\,:=\,\Sigma[z]\cap\mathbb{R}[z]_{d}.$
For a tuple $g=(g_{1},\ldots,g_{t})$ of polynomials in $z$, its quadratic
module is the set
$\mbox{Qmod}[g]\,:=\,\Sigma[z]+g_{1}\cdot\Sigma[z]+\ldots+g_{t}\cdot\Sigma[z].$
Similarly, we denote the truncation of $\mbox{Qmod}[g]$
$\mbox{Qmod}[g]_{2d}\,:=\,\Sigma[z]_{2d}+g_{1}\cdot\Sigma[z]_{2d-\deg(g_{1})}+\ldots+g_{t}\cdot\Sigma[z]_{2d-\deg(g_{t})}.$
The tuple $g$ determines the basic closed semi-algebraic set
(2.1) $\mathcal{S}(g)\,:=\,\\{z\in\mathbb{R}^{l}:g_{1}(z)\geq
0,\ldots,g_{t}(z)\geq 0\\}.$
For a tuple $h=(h_{1},\ldots,h_{s})$ of polynomials in $\mathbb{R}[z]$, its
real zero set is
$\mathcal{Z}(h):=\\{z\in\mathbb{R}^{l}:h_{1}(z)=\ldots=h_{s}(z)=0\\}.$
The set $\mbox{Ideal}[h]+\mbox{Qmod}[g]$ is said to be archimedean if there
exists $\rho\in\mbox{Ideal}[h]+\mbox{Qmod}[g]$ such that the set
$\mathcal{S}(\rho)$ is compact. If $\mbox{Ideal}[h]+\mbox{Qmod}[g]$ is
archimedean, then $\mathcal{Z}(h)\cap\mathcal{S}(g)$ must be compact.
Conversely, if $\mathcal{Z}(h)\cap\mathcal{S}(g)$ is compact, say,
$\mathcal{Z}(h)\cap\mathcal{S}(g)$ is contained in the ball $R-\|z\|^{2}\geq
0$, then $\mbox{Ideal}[h]+\mbox{Qmod}[g,R-\|z\|^{2}]$ is archimedean and
$\mathcal{Z}(h)\cap\mathcal{S}(g)=\mathcal{Z}(h)\cap\mathcal{S}(g,R-\|z\|^{2})$.
Clearly, if $f\in\mbox{Ideal}[h]+\mbox{Qmod}[g]$, then $f\geq 0$ on
$\mathcal{Z}(h)\cap\mathcal{S}(g)$. The reverse is not necessarily true.
However, when $\mbox{Ideal}[h]+\mbox{Qmod}[g]$ is archimedean, if $f>0$ on
$\mathcal{Z}(h)\cap\mathcal{S}(g)$, then $f\in\mbox{Ideal}[h]+\mbox{Qmod}[g]$.
This conclusion is known as Putinar's Positivstellensatz [55].
Interestingly, if $f\geq 0$ on $\mathcal{Z}(h)\cap\mathcal{S}(g)$, we also
have $f\in\mbox{Ideal}[h]+\mbox{Qmod}[g]$, under some standard optimality
conditions [42].
### 2.2. Localizing and moment matrices
Let $\mathbb{R}^{\mathbb{N}_{2d}^{l}}$ denote the space of all real vectors
that are labeled by $\alpha\in\mathbb{N}_{2d}^{l}$. A vector
$y\in\mathbb{R}^{\mathbb{N}_{2d}^{l}}$ is labeled as
$y\,=\,(y_{\alpha})_{\alpha\in\mathbb{N}_{2d}^{l}}.$
Such $y$ is called a truncated multi-sequence (tms) of degree $2d$. For a
polynomial
$f=\sum_{\alpha\in\mathbb{N}^{l}_{2d}}f_{\alpha}z^{\alpha}\in\mathbb{R}[z]_{2d}$,
define the operation
(2.2) $\langle
f,y\rangle\,:=\,{\sum}_{\alpha\in\mathbb{N}^{l}_{2d}}f_{\alpha}y_{\alpha}.$
The operation $\langle f,y\rangle$ is a bilinear function in $(f,y)$. For a
polynomial $q\in\mathbb{R}[z]$, with $\deg(q)\leq 2d$, and the integer
$t=d-\lceil\deg(q)/2\rceil$, the outer product $q\cdot[z]_{t}([z]_{t})^{T}$ is
a symmetric matrix polynomial in $z$, with length $\binom{l+t}{t}$. We write
the expansion as
$q\cdot[z]_{t}([z]_{t})^{T}\,=\,{\sum}_{\alpha\in\mathbb{N}_{2d}^{l}}z^{\alpha}Q_{\alpha},$
for some symmetric matrices $Q_{\alpha}$. Then we define the matrix function
(2.3)
$L_{q}^{(d)}[y]\,:=\,{\sum}_{\alpha\in\mathbb{N}_{2d}^{l}}y_{\alpha}Q_{\alpha}.$
It is called the $d$th localizing matrix of $q$ generated by $y$. For given
$q$, the matrix $L_{q}^{(d)}[y]$ is linear in $y$. Localizing and moment
matrices are important for getting semidefinite relaxations of solving
polynomial optimization [31, 40, 41]. They are also useful for solving
truncated moment problems [20, 45] and tensor decompositions [46, 47]. We
refer to [33, 34, 36, 37, 39, 44] for more references about polynomial
optimization and moment problems.
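Expanding $q\cdot[z]_{t}([z]_{t})^{T}$ entrywise shows that $\big(L_{q}^{(d)}[y]\big)_{\beta,\gamma}=\sum_{\alpha}q_{\alpha}\,y_{\alpha+\beta+\gamma}$ for $|\beta|,|\gamma|\leq t$. The following Python sketch assembles $L_{q}^{(d)}[y]$ directly from this identity; the dictionary encodings of $q$ and $y$ are our own conventions.

```python
import numpy as np
from itertools import combinations_with_replacement
from math import ceil

def exponents_upto(l, t):
    """All exponent tuples alpha in N^l with |alpha| <= t, in graded order."""
    out = [(0,) * l]
    for deg in range(1, t + 1):
        for pos in combinations_with_replacement(range(l), deg):
            a = [0] * l
            for p in pos:
                a[p] += 1
            out.append(tuple(a))
    return out

def localizing_matrix(q, y, l, d):
    """L_q^{(d)}[y] as in (2.3).  q maps exponent tuples to coefficients,
    y maps exponent tuples (|alpha| <= 2d) to moments y_alpha."""
    t = d - ceil(max(sum(a) for a in q) / 2)
    basis = exponents_upto(l, t)
    L = np.empty((len(basis), len(basis)))
    for i, b in enumerate(basis):
        for j, g in enumerate(basis):
            L[i, j] = sum(c * y[tuple(ak + bk + gk
                                      for ak, bk, gk in zip(a, b, g))]
                          for a, c in q.items())
    return L
```

Taking $q=1$, i.e., `q = {(0,)*l: 1.0}`, recovers the $d$th moment matrix generated by $y$.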
### 2.3. Lagrange multiplier expressions
We study optimality conditions for Generalized Nash Equilibrium Problems.
Consider the $i$th player’s optimization. For convenience, suppose
$\mathcal{E}_{i}\cup\mathcal{I}_{i}=[m_{i}]$ and
$g_{i}:=(g_{i,1},\ldots,g_{i,m_{i}})$. For a given $x_{-i}$, under some
suitable constraint qualifications (e.g., the linear independence constraint
qualification (LICQ), Mangasarian-Fromovite constraint qualification (MFCQ),
or the Slater’s Condition; see [7] for them), if $x_{i}$ is a minimizer of
$\mbox{F}_{i}(x_{-i})$, then there exists a Lagrange multiplier vector
$\lambda_{i}:=(\lambda_{i,1},\ldots,\lambda_{i,m_{i}})$ such that
(2.4)
$\left\\{\begin{array}[]{l}\nabla_{x_{i}}f_{i}(x)-\sum_{j=1}^{m_{i}}\lambda_{i,j}\nabla_{x_{i}}g_{i,j}(x)=0,\\\
\lambda_{i}\perp g_{i}(x),\,g_{i,j}(x)=0\,(j\in\mathcal{E}_{i}),\\\
\lambda_{i,j}\geq 0\,(j\in\mathcal{I}_{i}),\,g_{i,j}(x)\geq
0\,(j\in\mathcal{I}_{i}).\end{array}\right.$
This is called the first order Karush-Kuhn-Tucker system for
$\mbox{F}_{i}(x_{-i})$. Such $(x_{i},\lambda_{i})$ is called a critical pair
of $\mbox{F}_{i}(x_{-i})$. Therefore, if $x$ is a GNE, under constraint
qualifications, then (2.4) holds for all $i\in[N]$, i.e., there exist Lagrange
multiplier vectors $\lambda_{1},\ldots,\lambda_{N}$ such that
(2.5)
$\left\\{\begin{array}[]{l}\nabla_{x_{i}}f_{i}(x)-\sum_{j=1}^{m_{i}}\lambda_{i,j}\nabla_{x_{i}}g_{i,j}(x)=0\,(i\in[N]),\\\
\lambda_{i}\perp
g_{i}(x)\,(i\in[N]),\,g_{i,j}(x)=0\,(i\in[N],j\in\mathcal{E}_{i}),\\\
\lambda_{i,j}\geq 0\,(i\in[N],j\in\mathcal{I}_{i}),\,g_{i,j}(x)\geq
0\,(i\in[N],j\in\mathcal{I}_{i}).\\\ \end{array}\right.$
A point $x$ satisfying (2.5) is called a KKT point for the GNEP. For convex
GNEPs, each KKT point is a GNE [16, Theorem 4.6].
For each critical pair $(x_{i},\lambda_{i})$ of $\mbox{F}_{i}(x_{-i})$, the
equation (2.4) implies that
(2.6)
$\underbrace{\begin{bmatrix}\nabla_{x_{i}}g_{i,1}(x)&\nabla_{x_{i}}g_{i,2}(x)&\cdots&\nabla_{x_{i}}g_{i,m_{i}}(x)\\\
g_{i,1}(x)&0&\cdots&0\\\ 0&g_{i,2}(x)&\cdots&0\\\
\vdots&\vdots&\ddots&\vdots\\\
0&0&\cdots&g_{i,m_{i}}(x)\end{bmatrix}}_{G_{i}(x)}\underbrace{\begin{bmatrix}\lambda_{i,1}\\\
\lambda_{i,2}\\\ \vdots\\\
\lambda_{i,m_{i}}\end{bmatrix}}_{\lambda_{i}}=\underbrace{\begin{bmatrix}\nabla_{x_{i}}f_{i}(x)\\\
0\\\ \vdots\\\ 0\end{bmatrix}}_{\hat{f}_{i}(x)}.$
If there exists a matrix polynomial $L_{i}(x)$ such that
(2.7) $L_{i}(x)G_{i}(x)=I_{m_{i}},$
then the Lagrange multipliers $\lambda_{i}$ can be expressed as
$\lambda_{i}=L_{i}(x)\hat{f}_{i}(x).$
The vector of polynomials
$\lambda_{i}(x):=(\lambda_{i,1}(x),\ldots,\lambda_{i,m_{i}}(x))$ is called a
polynomial expression for Lagrange multipliers [48], where $\lambda_{i,j}(x)$
is the $j$th component of $L_{i}(x)\hat{f}_{i}(x)$. The matrix polynomial
$G_{i}(x)$ is said to be nonsingular if it has full column rank for all
$x\in\mathbb{C}^{n}$. It was shown that $G_{i}(x)$ is nonsingular if and only
if there exists $L_{i}(x)\in\mathbb{R}[x]^{(m_{i}+n_{i})\times m_{i}}$ such
that (2.7) holds [48, Proposition 5.1]. The nonsingularity of $G_{i}(x)$ is
independent of the objective functions and of the other players' constraints.
For example, consider the GNEP given by (1.4). The first player’s optimization
has a polynomial expression of Lagrange multipliers
(2.8)
$\lambda_{1,1}=x_{1}^{T}\nabla_{x_{1}}f_{1},\,\lambda_{1,j+1}=\frac{\partial{f_{1}}(x)}{\partial{x_{1,j}}}-\lambda_{1,1}x_{2,j}\,(j=1,2,3).$
For the second player, the matrix polynomial $G_{2}(x)$ is not nonsingular,
and polynomial expressions do not exist. In section 6, we give a rational
expression for the second player’s Lagrange multipliers.
## 3\. Rational expressions for Lagrange Multipliers
In Section 2.3, a polynomial expression for the $i$th player’s Lagrange
multipliers exists if and only if the matrix $G_{i}(x)$ is nonsingular. For
classical NEPs of polynomials, the nonsingularity holds generically [48, 50].
However, this is often not the case for GNEPs. Let
$g_{i}=(g_{i,1},\ldots,g_{i,m_{i}})$ be the tuple of constraining polynomials
in $\mbox{F}_{i}(x_{-i})$ and $G_{i}(x)$ be the matrix polynomial as in (2.7).
If there exists a matrix polynomial $\hat{L}_{i}(x)$ and a nonzero scalar
polynomial $q_{i}(x)$ such that
(3.1) $\hat{L}_{i}(x)G_{i}(x)\,=\,q_{i}(x)\cdot I_{m_{i}},$
then $q_{i}(x)\lambda_{i}=\hat{L}_{i}(x)\hat{f}_{i}(x)$ for all critical pairs
$(x_{i},\lambda_{i})$ of $\mbox{F}_{i}(x_{-i})$. Let
(3.2) $\hat{\lambda}_{i}(x)\,:=\,\hat{L}_{i}(x)\hat{f}_{i}(x).$
Denote by $\hat{\lambda}_{i,j}(x)$ the $j$th entry of $\hat{\lambda}_{i}(x)$.
###### Definition 3.1.
For the $i$th player’s optimization $\mbox{F}_{i}(x_{-i})$, if there exist
polynomials $\hat{\lambda}_{i,1},\ldots,\hat{\lambda}_{i,m_{i}}$ and a nonzero
polynomial $q_{i}$ such that $q_{i}(x)\geq 0$ for all $x\in X$, and
$\hat{\lambda}_{i,j}(x)=q_{i}(x)\lambda_{i,j}$ holds for all critical pairs
$(x_{i},\lambda_{i})$, then we call the tuple
$\hat{\lambda}_{i}/q_{i}:=(\hat{\lambda}_{i,1}(x)/q_{i}(x),\ldots,\hat{\lambda}_{i,m_{i}}(x)/q_{i}(x))$
a rational expression for Lagrange multipliers.
The following is an example of rational expression.
###### Example 3.2.
Consider the 2-player convex GNEP
(3.3)
$\begin{array}[]{lllll}\min\limits_{x_{1}\in\mathbb{R}^{2}}&f_{1}(x_{1},x_{2})&\vline&\min\limits_{x_{2}\in\mathbb{R}^{1}}&f_{2}(x_{1},x_{2})\\\
\mathit{s.t.}&2-x_{1}^{T}x_{1}-x_{2}\geq
0;&\vline&\mathit{s.t.}&3x_{2}-x_{1}^{T}x_{1}\geq 0,\,1-x_{2}\geq
0.\end{array}$
The matrices of polynomials $G_{1}(x)$ and $G_{2}(x)$ are
$G_{1}(x):=\left[\begin{array}[]{c}-2x_{1,1}\\\ -2x_{1,2}\\\
2-x_{1}^{T}x_{1}-x_{2}\end{array}\right],\quad
G_{2}(x):=\left[\begin{array}[]{cc}3&-1\\\ 3x_{2}-x_{1}^{T}x_{1}&0\\\
0&1-x_{2}\end{array}\right].$
For $x_{1}=(0,0)$ and $x_{2}=2$, the $G_{1}(x)$ is the zero vector. For
$x_{1}=(\sqrt{3},0)$ and $x_{2}=1$, $\mbox{rank}(G_{2}(x))=1$. Neither
$G_{1}(x)$ nor $G_{2}(x)$ is nonsingular, so there are no polynomial
expressions for Lagrange multipliers. However, (3.1) holds for
(3.4)
$\begin{array}[]{rl}q_{1}(x)=2-x_{2},&q_{2}(x)=1-\frac{1}{3}x_{1}^{T}x_{1},\\\
\hat{L}_{1}(x)=\left[\begin{array}[]{ccc}-\frac{x_{1,1}}{2}&-\frac{x_{1,2}}{2}&1\end{array}\right],&\hat{L}_{2}(x)=\left[\begin{array}[]{ccc}\frac{1}{3}-\frac{1}{3}x_{2}&\frac{1}{3}&\frac{1}{3}\\\
\frac{1}{3}x_{1}^{T}x_{1}-x_{2}&1&1\end{array}\right].\end{array}$
The Lagrange multiplier expressions are
(3.5)
$\lambda_{1}=\frac{-x_{1}^{T}\nabla_{x_{1}}f_{1}}{2q_{1}},\,\lambda_{2,1}=\frac{(1-x_{2})}{3q_{2}}\cdot\frac{\partial{f_{2}}}{\partial{x_{2}}},\,\lambda_{2,2}=\frac{x_{1}^{T}x_{1}-3x_{2}}{3q_{2}}\cdot\frac{\partial{f_{2}}}{\partial{x_{2}}}.$
In section 3.2, we show that if none of the $g_{i,j}$ is identically zero,
then a rational expression for $\lambda_{i}$ always exists.
### 3.1. Optimality conditions and rational expressions
Suppose for each $i$, there exists a rational expression
$\hat{\lambda}_{i}/q_{i}$ for the $i$th player’s Lagrange multiplier vector.
Since $q_{i}(x)\lambda_{i,j}=\hat{\lambda}_{i,j}(x)$ and $q_{i}(x)\geq 0$ for
all $x\in X$, the following holds for all KKT points:
(3.6)
$\left\\{\begin{array}[]{l}q_{i}(x)\nabla_{x_{i}}f_{i}(x)-\sum\nolimits_{j=1}^{m_{i}}\hat{\lambda}_{i,j}(x)\nabla_{x_{i}}g_{i,j}(x)=0\,(i\in[N]),\\\
\hat{\lambda}_{i}(x)\perp
g_{i}(x),g_{i,j}(x)=0\,(j\in\mathcal{E}_{i},i\in[N]),\\\ g_{i,j}(x)\geq
0,\hat{\lambda}_{i,j}(x)\geq
0\,(j\in\mathcal{I}_{i},i\in[N]).\end{array}\right.$
Under some constraint qualifications, if $x$ is a GNE, then it satisfies
(3.6). For convex GNEPs, if $x$ satisfies (3.6) and $q_{i}(x)>0$, then $x$
must be a GNE, since it satisfies (2.5) with $\lambda_{i,j}$ given by
$\lambda_{i,j}=\hat{\lambda}_{i,j}(x)/{q_{i}(x)}.$ This leads us to consider
the following optimization problem
(3.7) $\left\\{\begin{array}[]{rll}\min\limits_{x\in
X}&[x]_{1}^{T}\Theta[x]_{1}\\\
\mathit{s.t.}&q_{i}(x)\nabla_{x_{i}}f_{i}(x)-\sum\nolimits_{j=1}^{m_{i}}\hat{\lambda}_{i,j}(x)\nabla_{x_{i}}g_{i,j}(x)=0\,(i\in[N]),\\\
&\hat{\lambda}_{i,j}(x)\perp
g_{i,j}(x)\,(j\in\mathcal{E}_{i}\cup\mathcal{I}_{i},i\in[N]),\\\
&\hat{\lambda}_{i,j}(x)\geq
0\,(j\in\mathcal{I}_{i},i\in[N]).\end{array}\right.$
In the above, $\Theta$ is a generically chosen positive definite matrix. The
following proposition is straightforward.
###### Proposition 3.3.
For the GNEPP given by (1.1), suppose for each $i\in[N]$, the Lagrange
multiplier vector $\lambda_{i}$ has the rational expression as in Definition
3.1.
1. (i)
If (3.7) is infeasible, then the GNEP has no KKT points. Therefore, if every
GNE is a KKT point, then the infeasibility of (3.7) implies the nonexistence
of GNEs.
2. (ii)
Assume the GNEP is convex. If $u$ is a feasible point of (3.7) and
$q_{i}(u)>0$ for all $i\in[N]$, then $u$ must be a GNE.
In Proposition 3.3 (ii), if $q_{i}(u)=0$, then $u$ may not be a GNE. The
following is such an example.
###### Example 3.4.
[17, Example A.8] Consider the 3-player convex GNEP
$\begin{array}[]{llllllll}\min\limits_{x_{1}\in\mathbb{R}^{1}}&-x_{1}&\vline&\min\limits_{x_{2}\in\mathbb{R}^{1}}&(x_{2}-0.5)^{2}&\vline&\min\limits_{x_{3}\in\mathbb{R}^{1}}&(x_{3}-1.5x_{1})^{2}\\\
\mathit{s.t.}&x_{3}\leq x_{1}+x_{2}\leq 1,&\vline&\mathit{s.t.}&x_{3}\leq
x_{1}+x_{2}\leq 1,&\vline&\mathit{s.t.}&0\leq x_{3}\leq 2.\\\ &x_{1}\geq
0;&\vline&&x_{2}\geq 0;&\vline&\end{array}$
For the first two players ($i=1,2$), the equation (3.1) holds for
$\hat{L}_{i}(x):=\left[\begin{array}[]{cccc}x_{i}(1-x_{1}-x_{2})&x_{i}&x_{i}&x_{1}+x_{2}-1\\\
x_{i}(x_{3}-x_{1}-x_{2})&x_{i}&x_{i}&x_{1}+x_{2}-x_{3}\\\
0&0&0&1-x_{3}\end{array}\right],\ q_{i}(x):=x_{i}(1-x_{3}).$
For the third player ($i=3$), the equation (3.1) holds for
$\hat{L}_{3}(x):=\frac{1}{2}\cdot\left[\begin{array}[]{cccc}2-x_{3}&1&1\\\
-x_{3}&1&1\end{array}\right],\quad q_{3}:=1.$
The Lagrange multiplier expressions can be obtained by letting
$\hat{\lambda}_{i}(x):=\hat{L}_{i}(x)\hat{f}_{i}(x)$. It is clear that
$u_{1}=0,u_{2}=0.5,u_{3}=0$ satisfy (3.6) with $q_{1}(u)=0$. However,
$u_{1}=0$ is not a minimizer for the first player’s optimization
$\mbox{F}_{1}(u_{-1})$. It is interesting to note that for
$u_{1}=\frac{2}{3},u_{2}=\frac{1}{3},u_{3}=1$, the tuple
$u=(u_{1},u_{2},u_{3})$ satisfies (3.6) with $q_{1}(u)=q_{2}(u)=0$, but $u$ is
still a GNE [17].
We would like to remark that for some special GNEPs, the equality $q_{i}(u)=0$
may imply that $u_{i}$ is a minimizer of $\mbox{F}_{i}(u_{-i})$. See Example
3.8 for such a case.
### 3.2. Existence of rational expressions
We study the existence of rational expressions with nonnegative $q_{i}(x)$.
The following is a useful lemma.
###### Lemma 3.5.
For the $i$th player’s optimization $\mbox{F}_{i}(x_{-i})$, if every
$g_{i,j}(x)$ is not identically zero, then a rational expression exists for
$\lambda_{i}$.
###### Proof.
Let $H_{i}(x)=G_{i}(x)^{T}G_{i}(x)$, where $G_{i}(x)$ is the matrix polynomial
in (2.6). If every $g_{i,j}(x)$ is not identically zero, then the determinant
$\det H_{i}(x)$ is also not identically zero. Let $\mbox{adj}\,H_{i}(x)$
denote the adjoint matrix of $H_{i}(x)$, then
$H_{i}(x)\cdot\mbox{adj}\,H_{i}(x)\,=\,\det H_{i}(x)\cdot I_{m_{i}}.$
For $\hat{L}_{i}(x):=\mbox{adj}\,H_{i}(x)\cdot G_{i}(x)^{T}$, we get the
rational expression
(3.8) $\lambda_{i}(x)\,=\,\frac{1}{\det
H_{i}(x)}\hat{L}_{i}(x)\cdot\hat{f}_{i}(x).$
Moreover, $q_{i}(x):=\det H_{i}(x)\geq 0$ for all $x$, since $H_{i}(x)$ is
positive semidefinite everywhere. ∎
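The adjugate construction in this proof is easy to carry out symbolically. A SymPy sketch for the first player of Example 3.2 (the variable names are ours):

```python
import sympy as sp

x11, x12, x2 = sp.symbols('x11 x12 x2')
G1 = sp.Matrix([[-2*x11], [-2*x12], [2 - x11**2 - x12**2 - x2]])
H1 = G1.T * G1                       # here a 1x1 matrix, so adj(H1) = [1]
L1_hat = H1.adjugate() * G1.T        # satisfies L1_hat * G1 = det(H1) * I
q1 = H1.det()                        # nonnegative denominator det H_1(x)
print(sp.expand(q1))                 # a degree-4 polynomial in (x11, x12, x2)
```

Here $q_{1}=\det H_{1}=4x_{1,1}^{2}+4x_{1,2}^{2}+(2-x_{1,1}^{2}-x_{1,2}^{2}-x_{2})^{2}$ has degree four, whereas (3.4) gives the degree-one denominator $q_{1}=2-x_{2}$ by hand; this illustrates the following remark.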
The rational expression in (3.8) may not be very practical, because the
determinantal polynomials often have high degrees. In practice, we usually
have rational expressions with low degrees. If each $q_{i}(x)>0$ for all $x\in
X$, then every solution of (3.7) is a GNE. One wonders when a rational
expression exists with $q_{i}(x)>0$ on $X$. The matrix polynomial $G_{i}$ is
said to be nonsingular on $X$ if $G_{i}(x)$ has full column rank for all $x\in
X$. For the GNEP given in Example 3.2, both $G_{1}(x)$ and $G_{2}(x)$ are
nonsingular on $X$. The following proposition is useful.
###### Proposition 3.6.
The matrix $G_{i}(x)$ is nonsingular on $X$ if and only if there exists a
matrix polynomial $\hat{L}_{i}(x)$ satisfying (3.1) with $q_{i}(x)>0$ on $X$.
###### Proof.
First, if the matrix polynomial $G_{i}(x)$ has full column rank for all $x\in
X$, let $H_{i}(x):=G_{i}(x)^{T}G_{i}(x)$, then $H_{i}(x)$ is positive definite
and the determinant $\det H_{i}(x)>0$ for all $x\in X$. Therefore, for
$\hat{L}_{i}(x):=\mbox{adj}\,H_{i}(x)\cdot G_{i}(x)^{T}$, the equation (3.1) is satisfied with
$q_{i}(x):=\det H_{i}(x)>0$ over $X$. Second, if (3.1) holds with $q_{i}(x)>0$
on $X$, then $G_{i}(x)$ is clearly nonsingular on $X$. ∎
###### Remark.
If $G_{i}(x)$ is nonsingular on $X$, then the LICQ must hold for the $i$th
player’s optimization. Furthermore, if this holds for all $i\in[N]$, then all
GNEs are KKT points.
### 3.3. A numerical method for finding rational expressions
We give a numerical method for finding rational expressions for Lagrange
multipliers. It was introduced in [51] for solving bilevel optimization
problems. Let $G_{i}(x)$ be the matrix polynomial defined in (2.6). For
convenience, denote the tuples
$g_{\mathcal{E}}:=(g_{i,j})_{i\in[N],j\in\mathcal{E}_{i}},\quad
g_{\mathcal{I}}:=(g_{i,j})_{i\in[N],j\in\mathcal{I}_{i}}.$
For a priori degree $d$, consider the following linear convex optimization:
(3.9)
$\left\\{\begin{array}[]{rl}\max\limits_{\hat{L}_{i},q_{i},\gamma}&\gamma\\\
\mathit{s.t.}&\hat{L}_{i}\cdot G_{i}=q_{i}\cdot I_{m_{i}},\,q_{i}(v)=1,\\\
&q_{i}-\gamma\in\mbox{Ideal}[g_{\mathcal{E}}]_{2d}+\mbox{Qmod}[g_{\mathcal{I}}]_{2d},\\\
&\hat{L}_{i}\in(\mathbb{R}[x]_{2d-\deg{G_{i}}})^{m_{i}\times(m_{i}+n_{i})}.\end{array}\right.$
In the above, the first equality is the same as (3.1). The second equality
ensures that $q_{i}$ is not identically zero, where $v$ is a priori chosen
point in $X$. The constraint
$q_{i}-\gamma\in\mbox{Ideal}[g_{\mathcal{E}}]_{2d}+\mbox{Qmod}[g_{\mathcal{I}}]_{2d}$
forces $q_{i}(x)\geq\gamma$ on $X$. Therefore, if the maximum $\gamma$ is
positive, then $q_{i}(x)>0$ on $X$. By Lemma 3.5, one can always find a
feasible $\gamma\geq 0$ satisfying (3.9), for some $d\leq\deg(H_{i}(x))$, if none
of $g_{i,j}(x)$ is identically zero. By Proposition 3.6, if each $G_{i}(x)$ is
nonsingular on $X$ and the archimedeanness holds for $X$, then there must
exist $\gamma>0$ satisfying (3.9) for some $d$. If
$(\hat{L}_{i},q_{i},\gamma)$ is a feasible point of (3.9), then one can get a
rational expression for Lagrange multipliers by letting
$\hat{\lambda}_{i,j}(x)=\hat{L}_{i}(x)\hat{f}_{i}(x)$.
###### Example 3.7.
Consider the GNEP in Example 3.2. We have
$g_{\mathcal{E}}=\emptyset,\
g_{\mathcal{I}}=(2-x_{1}^{T}x_{1}-x_{2},3x_{2}-x_{1}^{T}x_{1},1-x_{2}).$
Let $\hat{L}_{1}(x)$ and $\hat{L}_{2}(x)$ be the matrix polynomials in (3.4),
and $q_{1}(x)=2-x_{2},q_{2}(x)=1-\frac{1}{3}x_{1}^{T}x_{1}$. Let $v:=(0,0,1)$
for both players, and $\gamma_{1}=1$, $\gamma_{2}=1/2$. Then, the
$(\hat{L}_{i}(x),q_{i}(x),\gamma_{i})$ is a feasible point of (3.9), for each
$i=1,2$. In fact, we have
$\begin{array}[]{l}q_{1}(v)=q_{2}(v)=1,\quad
q_{1}(x)-\gamma_{1}=1-x_{2}=0+1\cdot(1-x_{2})\in\mbox{Qmod}[g_{\mathcal{I}}]_{2},\\\
q_{2}(x)-\gamma_{2}=\frac{1}{2}-\frac{1}{3}x_{1}^{T}x_{1}=\frac{1}{4}(2-x_{1}^{T}x_{1}-x_{2})+\frac{1}{12}(3x_{2}-x_{1}^{T}x_{1})\in\mbox{Qmod}[g_{\mathcal{I}}]_{2}.\end{array}$
The rational expressions for Lagrange multipliers are given by (3.5).
###### Example 3.8.
Consider the following GNEP
$\begin{array}[]{lllll}\min\limits_{x_{1}\in\mathbb{R}^{3}}&f_{1}(x_{1},x_{2})&\vline&\min\limits_{x_{2}\in\mathbb{R}^{3}}&f_{2}(x_{1},x_{2})\\\
\mathit{s.t.}&1-x_{1}^{T}x_{1}-x_{2}^{T}x_{2}\geq
0;&\vline&\mathit{s.t.}&1-x_{1}^{T}x_{1}-x_{2}^{T}x_{2}\geq 0.\end{array}$
The constraining tuples $g_{\mathcal{E}}:=\emptyset,\
g_{\mathcal{I}}:=(1-x_{1}^{T}x_{1}-x_{2}^{T}x_{2}).$ Let $v:=(0,0,0)$,
$\gamma_{1}=\gamma_{2}=0$, $q_{1}(x)=1-x_{2}^{T}x_{2}$,
$q_{2}(x)=1-x_{1}^{T}x_{1}$, and
$\hat{L}_{1}=\left[-\frac{1}{2}x_{1,1},\,-\frac{1}{2}x_{1,2},\,-\frac{1}{2}x_{1,3},\,1\right],\
\hat{L}_{2}=\left[-\frac{1}{2}x_{2,1},\,-\frac{1}{2}x_{2,2},\,-\frac{1}{2}x_{2,3},\,1\right].$
One can verify that $q_{1}(v)=q_{2}(v)=1$ and
$\begin{array}[]{l}q_{1}(x)-\gamma_{1}=1-x_{2}^{T}x_{2}=x_{1}^{T}x_{1}+1\cdot(1-x_{1}^{T}x_{1}-x_{2}^{T}x_{2})\in\mbox{Qmod}[g_{\mathcal{I}}]_{2},\\\
q_{2}(x)-\gamma_{2}=1-x_{1}^{T}x_{1}=x_{2}^{T}x_{2}+1\cdot(1-x_{1}^{T}x_{1}-x_{2}^{T}x_{2})\in\mbox{Qmod}[g_{\mathcal{I}}]_{2}.\end{array}$
It follows that $(\hat{L}_{1}(x),q_{1}(x),\gamma_{1})$ and
$(\hat{L}_{2}(x),q_{2}(x),\gamma_{2})$ are feasible points of (3.9) for $i=1,2$
respectively. Therefore, we get the rational expression
(3.10) $\lambda_{1}=\frac{-x_{1}^{T}\nabla_{x_{1}}f_{1}}{2\cdot q_{1}(x)},\
\lambda_{2}=\frac{-x_{2}^{T}\nabla_{x_{2}}f_{2}}{2\cdot q_{2}(x)}.$
For each $i=1,2$, if $q_{i}(x)=0$, then $0\leq{x_{i}}^{T}x_{i}\leq
1-{x_{-i}}^{T}x_{-i}=0.$ This implies $x_{i}=(0,0,0)$ is the only feasible
point of the $i$th player’s optimization and hence it is the minimizer.
Therefore, each feasible point of (3.7) is a GNE.
One can solve (3.9) numerically for getting rational expressions. This is done
in Example 6.6.
## 4\. Parametric expressions for Lagrange multipliers
For some GNEPs, it may be difficult to find convenient rational expressions
for Lagrange multipliers. Sometimes, the denominators may have high degrees.
This is the case especially when $m_{i}>n_{i}$. If some $q_{i}$ has high
degree, the polynomial optimization (3.7) also has a high degree, which makes
the resulting moment SDP relaxations (see Subsections 5.1 and 5.2) very
difficult to solve. To fix such issues, we introduce parametric
Lagrange multipliers.
###### Definition 4.1.
For the $i$th player’s optimization $\mbox{F}_{i}(x_{-i})$, a parametric
expression for the Lagrange multipliers is a tuple of polynomials
$\hat{\lambda}_{i}(x,\omega_{i}):=(\hat{\lambda}_{i,1}(x,\omega_{i}),\ldots,\hat{\lambda}_{i,m_{i}}(x,\omega_{i})),$
in $x$ and in a parameter $\omega_{i}:=(\omega_{i,1},\ldots,\omega_{i,s_{i}})$
with $s_{i}\leq m_{i}$, such that $(x_{i},\lambda_{i})$ is a critical pair if
and only if there is a value of $\omega_{i}$ such that (2.4) is satisfied for
$\lambda_{i,j}=\hat{\lambda}_{i,j}(x,\omega_{i})$ with $j\in[m_{i}]$.
The following is an example of parametric expressions.
###### Example 4.2.
Consider the 2-player convex GNEP
$\begin{array}[]{lllll}\min\limits_{x_{1}\in\mathbb{R}^{2}}&f_{1}(x_{1},x_{2})&\vline&\min\limits_{x_{2}\in\mathbb{R}^{2}}&f_{2}(x_{1},x_{2})\\\
\mathit{s.t.}&x_{1,1}-2x_{1,2}+x_{2,2}\geq
0,&\vline&\mathit{s.t.}&x_{1,2}+x_{2,2}-x_{2,1}^{2}+1\geq 0,\\\
&1-x_{2,1}\cdot x_{1}^{T}x_{1}\geq 0,&\vline&&2-x_{2,2}\geq 0,1+x_{2,2}\geq
0,\\\ &x_{1,1}\geq 0,x_{1,2}\geq 0;&\vline&&x_{2,1}\geq 0.\\\ \end{array}$
The Lagrange multipliers can be expressed as
(4.1) $\left\\{\begin{array}[]{l}\lambda_{1,1}=\omega_{1,1},\\\
\lambda_{1,2}=\frac{1}{2}x_{1,1}(\frac{\partial{f_{1}}}{\partial{x_{1,1}}}-\omega_{1,1})+\frac{1}{2}x_{1,2}(\frac{\partial{f_{1}}}{\partial{x_{1,2}}}+2\omega_{1,1}),\\\
\lambda_{1,3}=\frac{\partial{f_{1}}}{\partial{x_{1,1}}}-\omega_{1,1}+2x_{2,1}x_{1,1}\lambda_{1,2},\\\
\lambda_{1,4}=\frac{\partial{f_{1}}}{\partial{x_{1,2}}}+2\omega_{1,1}+2x_{2,1}x_{1,2}\lambda_{1,2};\\\
\lambda_{2,1}=\omega_{2,1},\\\
\lambda_{2,2}=-\frac{1}{3}\cdot\left[(\frac{\partial{f_{2}}}{\partial{x_{2,1}}}+2x_{2,1}\omega_{2,1})x_{2,1}+(\frac{\partial{f_{2}}}{\partial{x_{2,2}}}-\omega_{2,1})(x_{2,2}+1)\right],\\\
\lambda_{2,3}=\frac{\partial{f_{2}}}{\partial{x_{2,2}}}+\lambda_{2,2}-\omega_{2,1},\\\
\lambda_{2,4}=\frac{\partial{f_{2}}}{\partial{x_{2,1}}}+2x_{2,1}\omega_{2,1}.\end{array}\right.$
Parametric expressions are quite useful for solving the GNEPs. The following
are some useful cases.
* (i)
Suppose the $i$th player’s optimization $\mbox{F}_{i}(x_{-i})$ contains the
nonnegative constraints, i.e., its constraints are
$x_{i,1}\geq 0,\ldots,x_{i,n_{i}}\geq 0,\quad g_{i,j}(x)\geq
0\,(j=n_{i}+1,\ldots,m_{i}).$
Let $s_{i}:=m_{i}-n_{i}$, then a parametric expression is
(4.2)
$\boxed{\begin{array}[]{l}(\lambda_{i,1},\ldots,\lambda_{i,n_{i}})=\nabla_{x_{i}}f_{i}-\sum_{k=1}^{s_{i}}\omega_{i,k}\cdot\nabla_{x_{i}}g_{i,k+n_{i}},\\\
(\lambda_{i,n_{i}+1},\ldots,\lambda_{i,m_{i}})=(\omega_{i,1},\ldots,\omega_{i,s_{i}}).\end{array}}$
* (ii)
Suppose the $i$th player’s optimization $\mbox{F}_{i}(x_{-i})$ contains box
constraints, i.e., its constraints are
$\begin{array}[]{rl}x_{i,j}-a_{i,j}\geq 0,\,b_{i,j}-x_{i,j}\geq
0,&j=1,\ldots,n_{i}\\\ g_{i,j}(x)\geq 0.&j=n_{i}+1,\ldots,m_{i}\end{array}$
Let $s_{i}:=m_{i}-2n_{i}$, then a parametric expression is
(4.3)
$\boxed{\begin{array}[]{ll}\lambda_{i,j}=\frac{b-x_{i,j}}{b-a}\cdot\left(\frac{\partial{f_{i}}}{\partial{x_{i,j}}}-\sum_{k=1}^{s_{i}}\omega_{i,k}\cdot\frac{\partial{g_{i,k+2n_{i}}}}{\partial{x_{i,j}}}\right),&j=1,3,\ldots,2n_{i}-1\\\
\lambda_{i,j}=\frac{a-x_{i,j}}{b-a}\cdot\left(\frac{\partial{f_{i}}}{\partial{x_{i,j}}}-\sum_{k=1}^{s_{i}}\omega_{i,k}\cdot\frac{\partial{g_{i,k+2n_{i}}}}{\partial{x_{i,j}}}\right),&j=2,4,\ldots,2n_{i}\\\
\lambda_{i,j}=\omega_{i,j-2n_{i}}.&j=2n_{i}+1,\ldots,m_{i}\end{array}}$
* (iii)
Suppose the $i$th player’s optimization $\mbox{F}_{i}(x_{-i})$ contains
simplex constraints, i.e., its constraints are
$1-e^{T}x_{i}\geq 0,x_{i,1}\geq 0,\ldots,x_{i,n_{i}}\geq 0,\,g_{i,j}(x)\geq
0,\,j=n_{i}+2,\ldots,m_{i}.$
Let $s_{i}:=m_{i}-n_{i}-1$, then a parametric expression is
(4.4)
$\boxed{\begin{array}[]{ll}\lambda_{i,j}=(\nabla_{x_{i}}f_{i}-\sum_{k=1}^{s_{i}}\omega_{i,k}\cdot\nabla_{x_{i}}g_{i,k+n_{i}+1})^{T}x_{i},&j=1\\\
\lambda_{i,j}=\frac{\partial{f_{i}}}{\partial{x_{i,j-1}}}-\sum_{k=1}^{s_{i}}\omega_{i,k}\cdot\frac{\partial{g_{i,k+n_{i}+1}}}{\partial{x_{i,j-1}}}-\lambda_{i,1},&j=2,\ldots,n_{i}+1\\\
\lambda_{i,j}=\omega_{i,j-n_{i}-1}.&j=n_{i}+2,\ldots,m_{i}\end{array}}$
* (iv)
Suppose the $i$th player’s optimization $\mbox{F}_{i}(x_{-i})$ contains linear
constraints, i.e., its constraints are
$a_{j}^{T}x_{i}-b_{j}(x_{-i})\geq 0,\,j=1,\ldots,r,\quad g_{i,j}(x)\geq
0,\,j=r+1,\ldots,m_{i},$
where each $b_{j}$ is a polynomial in $x_{-i}$. Let
$A=\begin{bmatrix}a_{1}&\cdots&a_{r}\end{bmatrix}^{T}$. Assume
$\mbox{rank}{A}=r$. If we let $s_{i}:=m_{i}-r$, then a parametric expression
is
$\boxed{\begin{array}[]{l}(\lambda_{i,1},\ldots,\lambda_{i,r})=(AA^{T})^{-1}A(\nabla_{x_{i}}f_{i}-\sum_{k=1}^{s_{i}}\omega_{i,k}\cdot\nabla_{x_{i}}g_{i,k+r}),\\\
(\lambda_{i,r+1},\ldots,\lambda_{i,m_{i}})=(\omega_{i,1},\ldots,\omega_{i,s_{i}}).\end{array}}$
* (v)
Suppose there exists a labeling subset
$\mathcal{T}_{i}:=(t_{1},\ldots,t_{r})\subseteq[m_{i}]$ such that
$\hat{G}_{i}(x):=\left[\begin{array}[]{rrr}\nabla_{x_{i}}g_{i,t_{1}}(x)&\ldots&\nabla_{x_{i}}g_{i,t_{r}}(x)\\\
g_{i,t_{1}}(x)&&\\\ &\ddots&\\\ &&g_{i,t_{r}}(x)\end{array}\right]$
is nonsingular for all $x\in\mathbb{C}^{n}$. By [48, Proposition 5.1], there
exists a matrix polynomial $D_{i}(x)$ such that
$D_{i}(x)\cdot\hat{G}_{i}(x)=I_{r}$. Let $s_{i}:=m_{i}-r$, then a parametric
expression is
$\boxed{\begin{array}[]{l}(\lambda_{i,1},\ldots,\lambda_{i,r})=D_{i}(x)(\nabla_{x_{i}}f_{i}-\sum_{k=1}^{s_{i}}\omega_{i,k}\cdot\nabla_{x_{i}}g_{i,k+r}),\\\
(\lambda_{i,r+1},\ldots,\lambda_{i,m_{i}})=(\omega_{i,1},\ldots,\omega_{i,s_{i}}).\end{array}}$
We would like to remark that parametric expressions for Lagrange multipliers
always exist. For instance, one can get a parametric expression by letting
$\omega_{i,j}=\lambda_{i,j}$ for all $j$. Such expression is called a trivial
parametric expression. However, it is preferable to have small $s_{i}$, to
save computational costs.
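As a sanity check, the expression in case (i) can be verified symbolically: substituting (4.2) into the stationarity equation of (2.4) makes the residual vanish identically. The following SymPy sketch uses an assumed two-variable instance with constraints $x_{1}\geq 0$, $x_{2}\geq 0$ and one extra constraint $g_{3}$; the objective and $g_{3}$ are arbitrary choices.

```python
import sympy as sp

x1, x2, w = sp.symbols('x1 x2 w')      # strategy variables and parameter omega
f = x1**4 + x1*x2 + x2**2              # an arbitrary smooth objective
g3 = 1 - x1**2 - x2**2                 # the extra constraint g_{i,3}

grad_f = sp.Matrix([f.diff(x1), f.diff(x2)])
grad_g3 = sp.Matrix([g3.diff(x1), g3.diff(x2)])

lam12 = grad_f - w * grad_g3           # (lambda_1, lambda_2) from (4.2)
lam3 = w                               # lambda_3 = omega
# gradients of x1 >= 0 and x2 >= 0 are the unit vectors, so the
# stationarity residual of (2.4) is:
residual = grad_f - lam12 - lam3 * grad_g3
print(sp.simplify(residual))           # Matrix([[0], [0]]) identically
```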
### 4.1. Optimality conditions and parametric expressions
Suppose all players have parametric expressions for their Lagrange multipliers
as in Definition 4.1. Let $s:=s_{1}+\ldots+s_{N}$, and denote
$\mathbf{x}\,:=\,(x,\omega_{1},\ldots,\omega_{N}).$
The optimality conditions (2.5) can be equivalently expressed as
(4.5)
$\left\\{\begin{array}[]{l}\nabla_{x_{i}}f_{i}(x)-\sum_{j=1}^{m_{i}}\hat{\lambda}_{i,j}(\mathbf{x})\nabla_{x_{i}}g_{i,j}(x)=0\,(i\in[N]),\\\
\hat{\lambda}_{i}(\mathbf{x})\perp
g_{i}(x),g_{i,j}(x)=0\,(j\in\mathcal{E}_{i},i\in[N]),\\\ g_{i,j}(x)\geq
0,\hat{\lambda}_{i,j}(\mathbf{x})\geq
0\,(j\in\mathcal{I}_{i},i\in[N]).\end{array}\right.$
For convex GNEPs, a point $x$ is a GNE if and only if there exists
$\omega:=(\omega_{1},\ldots,\omega_{N})$ such that $\mathbf{x}$ satisfies
(4.5). Therefore, we consider the optimization
(4.6) $\left\\{\begin{array}[]{rll}\min\limits_{\mathbf{x}\in
X\times\mathbb{R}^{s}}&\left[\mathbf{x}\right]_{1}^{T}\Theta\left[\mathbf{x}\right]_{1}\\\
\mathit{s.t.}&\nabla_{x_{i}}f_{i}(x)-\sum\nolimits_{j=1}^{m_{i}}\hat{\lambda}_{i,j}(\mathbf{x})\nabla_{x_{i}}g_{i,j}(x)=0\,(i\in[N]),\\\
&\hat{\lambda}_{i,j}(\mathbf{x})\perp
g_{i,j}(x)\,(j\in\mathcal{E}_{i}\cup\mathcal{I}_{i},i\in[N]),\\\
&\hat{\lambda}_{i,j}(\mathbf{x})\geq
0\,(j\in\mathcal{I}_{i},i\in[N]).\end{array}\right.$
In the above, the $\Theta$ is a generically chosen positive definite matrix.
The following proposition is straightforward.
###### Proposition 4.3.
For the GNEPP given by (1.1), suppose each player’s optimization has a
parametric expression for their Lagrange multipliers as in Definition 4.1.
1. (i)
If (4.6) is infeasible, then the GNEP has no KKT points. If every GNE is a KKT
point, then the infeasibility of (4.6) implies nonexistence of GNEs.
2. (ii)
Assume the GNEP is convex. If $(u,w)$ is a feasible point of (4.6), then $u$
is a GNE.
## 5\. The polynomial optimization reformulation
In this section, we give an algorithm for solving convex GNEPs. We assume each
$\lambda_{i}$ has either a rational or parametric expression, as in Definition
3.1 or 4.1. If $\lambda_{i}$ has a polynomial or parametric expression, we let
$q_{i}(x):=1$. If $\lambda_{i}$ has a polynomial or rational expression, then
we let $s_{i}=0$. Recall the notation
$\mathbf{x}\,:=\,(x,\omega_{1},\ldots,\omega_{N}).$
Choose a generic positive definite matrix $\Theta$. Then solve the following
polynomial optimization
(5.1)
$\left\\{\begin{array}[]{rll}\min\limits_{\mathbf{x}}&\left[\mathbf{x}\right]_{1}^{T}\Theta\left[\mathbf{x}\right]_{1}\\\
\mathit{s.t.}&q_{i}(x)\nabla_{x_{i}}f_{i}(x)-\sum\nolimits_{j=1}^{m_{i}}\hat{\lambda}_{i,j}(\mathbf{x})\nabla_{x_{i}}g_{i,j}(x)=0\,(i\in[N]),\\\
&\hat{\lambda}_{i,j}(\mathbf{x})\perp
g_{i,j}(x)\,(j\in\mathcal{E}_{i}\cup\mathcal{I}_{i},i\in[N]),\\\
&g_{i,j}(x)=0\,(j\in\mathcal{E}_{i},i\in[N]),\\\ &g_{i,j}(x)\geq
0\,(j\in\mathcal{I}_{i},i\in[N]),\\\ &\hat{\lambda}_{i,j}(\mathbf{x})\geq
0\,(j\in\mathcal{I}_{i},i\in[N]).\end{array}\right.$
If (5.1) is infeasible, then there are no KKT points. Since $\Theta$ is
positive definite, if (5.1) is feasible, then it must have a minimizer, say,
$(u,w)\in X\times\mathbb{R}^{s}$. For convex GNEPs, if $q_{i}(u)>0$ for all
$i$, then $u$ must be a GNE. If $q_{i}(u)\leq 0$ for some $i$, then $u$ may or
may not be a GNE. To check this, we solve the following optimization problem
for those $i$ with $q_{i}(u)\leq 0$
(5.2)
$\left\\{\begin{array}[]{llll}\delta_{i}:=&\min\limits_{x_{i}}&f_{i}(x_{i},u_{-i})-f_{i}(u_{i},u_{-i})&\\\
&\mathit{s.t.}&g_{i,j}(x_{i},u_{-i})=0\,(j\in\mathcal{E}_{i}),\
g_{i,j}(x_{i},u_{-i})\geq 0\,(j\in\mathcal{I}_{i}).\end{array}\right.$
This is a polynomial optimization in $x_{i}$. Since $u\in X$, the point
$u_{i}$ is feasible for (5.2), so $\delta_{i}\leq 0$. If $\delta_{i}\geq 0$
for all $i$, then $u$ must be a GNE. The following is an algorithm for solving
the GNEP.
###### Algorithm 5.1.
For the convex GNEP given by (1.1), do the following:
Step 0:
Choose a generic positive definite matrix $\Theta$ of length $n+s+1$.
Step 1:
Solve the polynomial optimization (5.1). If it is infeasible, then there are
no KKT points and stop; otherwise, solve it for a minimizer $(u,w)$.
Step 2:
If all $q_{i}(u)>0$, then $u$ is a GNE. Otherwise, for those $i$ with
$q_{i}(u)\leq 0$, solve the optimization (5.2) for the minimum value
$\delta_{i}$. If $\delta_{i}\geq 0$ for all such $i$, then $u$ is a GNE;
otherwise, it is not.
In Step 0, we can choose $\Theta=R^{T}R$ for a randomly generated square
matrix $R$ of length $n+s+1$. When $\Theta$ is a generic positive definite
matrix, the optimization (5.1) must have a unique minimizer if its feasible
set is nonempty; this is shown in Theorem 5.4(ii). Since the objective
$f_{i}(x_{i},u_{-i})$ is assumed to be convex in $x_{i}$, if it is bounded
from below on $X_{i}(u_{-i})$, then (5.2) must have a minimizer (see [6,
Theorem 3]). In applications, we are mostly interested in cases where (5.2)
has a minimizer, since this is tied to the existence of a GNE. In subsections
5.1 and 5.2, we will discuss how to solve the polynomial optimization
problems in Algorithm 5.1 by the Moment-SOS hierarchy of semidefinite
relaxations. The convergence of Algorithm 5.1 is stated in Theorem 5.2 below.
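As a small illustration of Step 0, the following Python sketch (our own illustration; the paper's experiments use GloptiPoly and SeDuMi in MATLAB, and all names here are ours) samples such a generic $\Theta=R^{T}R$ and evaluates the objective of (5.1):

```python
import numpy as np

def generic_theta(n, s, seed=None):
    # Sample Theta = R^T R for a random square R of length n+s+1.
    # A Gaussian R is nonsingular with probability one, so Theta is
    # almost surely positive definite, i.e., a "generic" choice.
    rng = np.random.default_rng(seed)
    R = rng.standard_normal((n + s + 1, n + s + 1))
    return R.T @ R

def objective(theta, xx):
    # Evaluate [x]_1^T Theta [x]_1, where [x]_1 = (1, xx) is the
    # degree-one monomial vector of the point xx = (x, omega).
    v = np.concatenate(([1.0], np.asarray(xx, dtype=float)))
    return float(v @ theta @ v)
```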
###### Theorem 5.2.
For the convex GNEPP given by (1.1), suppose each Lagrange multiplier vector
$\lambda_{i}$ has a rational expression as in Definition 3.1 or a parametric
expression as in Definition 4.1.
1. (i)
If $(u,w)$ is a feasible point of (5.1) such that $q_{i}(u)>0$ for all $i$,
then $u$ is a GNE.
2. (ii)
Assume every GNE is a KKT point. If (5.1) is infeasible, then the GNEP has no
GNEs. If $\Theta$ is positive definite and every $q_{i}(x)>0$ for all feasible
$x$ of (5.1), then Algorithm 5.1 will find a GNE if it exists.
###### Proof.
(i) This is directly implied by Propositions 3.3 and 4.3.
(ii) If (5.1) is infeasible, then there is no GNE, because every GNE is
assumed to be a KKT point and it must be feasible for (5.1). Next, assume
(5.1) is feasible. Since $\Theta$ is positive definite, the optimization (5.1)
has a minimizer, say, $(u,w)$. By the given assumption, we have $q_{i}(u)>0$
for all $i$. So $u$ is a GNE, by (i). ∎
###### Remark.
For convex GNEPs, we can choose not to use nontrivial expressions for Lagrange
multipliers, i.e., we consider the polynomial optimization (5.1) with
$s_{i}=m_{i}$ and $\lambda_{i,j}=\omega_{i,j}$ for all $i$ and $j$. By doing
this, we can get an algorithm like Algorithm 5.1 to get GNEs. However, this
approach is usually very inefficient computationally, because it results in
more variables for the polynomial optimization (5.1). Note that when Lagrange
multiplier expressions (LMEs) are not used, each Lagrange multiplier is
treated as a new variable. Moreover, solving (5.1) without LMEs may require
higher order Moment-SOS relaxations. This is shown in the numerical
experiments in Section 6: in Example 6.1(i-ii), we compare the performance of
Algorithm 5.1 with and without LMEs, and the computational results show the
advantage of using them.
In Theorem 5.2(ii), if $q_{i}(x)>0$ for all $x\in X$, then we must have
$q_{i}(x)>0$ for all feasible $x$ of (5.1). Suppose $(u,w)$ is a computed
minimizer of (5.1). If $u$ is not a GNE, i.e., $\delta_{i}<0$ for some $i$, we
can let $\mathcal{N}\subseteq[N]$ be the index set of those $i$ with
$\delta_{i}<0$. By Theorem 5.2, we know $q_{i}(u)=0$ for all
$i\in\mathcal{N}$. For an a priori small $\varepsilon>0$, we can add the
inequalities $q_{i}(x)\geq\varepsilon\,(i\in\mathcal{N})$ to the optimization
(5.1), to exclude $u$ from the feasible set. Then we solve the following new
optimization
(5.3) $\left\\{\begin{array}[]{cll}\min\limits_{\mathbf{x}\in
X\times\mathbb{R}^{s}}&\left[\mathbf{x}\right]_{1}^{T}\Theta\left[\mathbf{x}\right]_{1}\\\
\mathit{s.t.}&q_{i}(x)\nabla_{x_{i}}f_{i}(x)-\sum\nolimits_{j=1}^{m_{i}}\hat{\lambda}_{i,j}(\mathbf{x})\nabla_{x_{i}}g_{i,j}(x)=0\,(i\in[N]),\\\
&\hat{\lambda}_{i,j}(\mathbf{x})\perp
g_{i,j}(x)\,(j\in\mathcal{E}_{i}\cup\mathcal{I}_{i},i\in[N]),\\\
&\hat{\lambda}_{i,j}(\mathbf{x})\geq 0\,(j\in\mathcal{I}_{i},i\in[N]),\\\
&q_{i}(x)\geq\varepsilon\,(i\in\mathcal{N}).\end{array}\right.$
If $\varepsilon>0$ is not small enough, the constraint
$q_{i}(x)\geq\varepsilon$ may also exclude some GNEs. If the new optimization
(5.3) is infeasible, one can heuristically get a candidate GNE by choosing a
different generic positive definite $\Theta$ in (5.1). In computational
practice, when a GNE exists, it is very likely that we can get one by doing
this. However, how to detect nonexistence of GNEs when (5.1) is feasible can
be theoretically difficult. The theoretical side of this problem is mostly
open, to the best of the authors’ knowledge.
### 5.1. The optimization for all players
We discuss how to solve the polynomial optimization problems in Algorithm 5.1,
by using the Moment-SOS hierarchy of semidefinite relaxations [31, 33, 34, 36,
37]. We refer to the notation in subsections 2.1 and 2.2.
First, we discuss how to solve the optimization (5.1). Denote the polynomial
tuples
(5.4)
$\Phi_{i}:=\Big{\\{}q_{i}(x)\nabla_{x_{i}}f_{i}(x)-\sum_{j=1}^{m_{i}}\hat{\lambda}_{i,j}(\mathbf{x})\nabla_{x_{i}}g_{i,j}(x)\Big{\\}}\cup\Big{\\{}g_{i,j}(x):j\in\mathcal{E}_{i}\Big{\\}}\\\
\cup\Big{\\{}\hat{\lambda}_{i,j}(\mathbf{x})\cdot
g_{i,j}(x):j\in\mathcal{I}_{i}\Big{\\}},$ (5.5)
$\Psi_{i}:=\Big{\\{}g_{i,j}(x):j\in\mathcal{I}_{i}\Big{\\}}\cup\Big{\\{}\hat{\lambda}_{i,j}(\mathbf{x}):j\in\mathcal{I}_{i}\Big{\\}}.\qquad\qquad\qquad\qquad\qquad\qquad$
For notational convenience, for a vector $p=(p_{1},\ldots,p_{s})$, the set
$\\{p\\}$ stands for $\\{p_{1},\ldots,p_{s}\\}$, in the above. Denote the
unions
$\Phi\,:=\bigcup_{i=1}^{N}\Phi_{i},\quad\Psi\,:=\,\bigcup_{i=1}^{N}\Psi_{i}.$
They are both finite sets of polynomials. Then, the optimization (5.1) can be
equivalently written as
(5.6)
$\left\\{\begin{array}[]{rl}\vartheta_{\min}:=\min\limits_{\mathbf{x}}&\theta(\mathbf{x}):=[\mathbf{x}]_{1}^{T}\Theta[\mathbf{x}]_{1}\\\
\mathit{s.t.}&p(\mathbf{x})=0\,\,(\forall\,p\in\Phi),\\\ &q(\mathbf{x})\geq
0\,\,(\forall\,q\in\Psi).\end{array}\right.$
Denote the degree
$d_{0}\,:=\,\max\\{\lceil\deg(p)/2\rceil:\,p\in\Phi\cup\Psi\\}.$
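In code, $d_{0}$ is simply a maximum of half degrees; a minimal sketch (our illustration, with hypothetical names):

```python
from math import ceil

def min_relaxation_order(degrees):
    # d_0 = max over the polynomials in Phi and Psi of ceil(deg/2);
    # degrees is the list of their total degrees.
    return max(ceil(d / 2) for d in degrees)

# E.g., polynomials of degrees 3, 2 and 5 give the initial order d_0 = 3.
assert min_relaxation_order([3, 2, 5]) == 3
```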
For a degree $k\geq d_{0}$, consider the $k$th order moment relaxation for
solving (5.6)
(5.7)
$\left\\{\begin{array}[]{rl}\vartheta_{k}\,:=\,\min\limits_{y}&\left<\theta,y\right>\\\
\mathit{s.t.}&y_{0}=1,\,L_{p}^{(k)}[y]=0\,(p\in\Phi),\\\ &M_{k}[y]\succeq
0,\,L_{q}^{(k)}[y]\succeq 0\,(q\in\Psi),\\\
&y\in\mathbb{R}^{\mathbb{N}^{n+s}_{2k}}.\end{array}\right.$
Its dual optimization problem is the $k$th order SOS relaxation
(5.8) $\left\\{\begin{array}[]{ll}\max&\gamma\\\
\mathit{s.t.}&\theta-\gamma\in\mbox{Ideal}[\Phi]_{2k}+\mbox{Qmod}[\Psi]_{2k}.\\\
\end{array}\right.$
For relaxation orders $k=d_{0},d_{0}+1,\ldots$, we get the Moment-SOS
hierarchy of semidefinite relaxations (5.7)-(5.8). This produces the following
algorithm for solving the polynomial optimization problem (5.6).
###### Algorithm 5.3.
Let $\theta,\Phi,\Psi$ be as in (5.6). Initialize $k:=d_{0}$.
Step 1:
Solve the semidefinite relaxation (5.7). If it is infeasible, then (5.6) has
no feasible points and stop; otherwise, solve it for a minimizer $y^{*}$.
Step 2:
Let $\mathbf{u}=(u,w):=(y^{*}_{e_{1}},\ldots,y^{*}_{e_{n+s}})$. If
$\mathbf{u}$ is feasible for (5.6) and $\vartheta_{k}=\theta(u)$, then
$\mathbf{u}$ is a minimizer of (5.6). Otherwise, let $k:=k+1$ and go to Step
1.
In Step 2, $e_{i}$ denotes the labeling vector such that its $i$th entry
is $1$ while all other entries are $0$. For instance, when $n=s=2$,
$y_{e_{3}}=y_{0010}$. The optimization (5.7) is a relaxation of (5.6). This is
because if $\mathbf{x}$ is a feasible point of (5.6), then
$y=[\mathbf{x}]_{2k}$ must be feasible for (5.7). Hence, if (5.7) is
infeasible, then (5.6) must be infeasible, which also implies the nonexistence
of KKT points. Moreover, the optimal value $\vartheta_{k}$ of (5.7) is a lower
bound for the minimum value of (5.6), i.e.,
$\vartheta_{k}\leq\theta(\mathbf{x})$ for all $\mathbf{x}$ that is feasible
for (5.6). In Step 2, if $\mathbf{u}$ is feasible for (5.6) and
$\vartheta_{k}=\theta(\mathbf{u})$, then $\mathbf{u}$ must be a minimizer of
(5.6). Algorithm 5.3 can be implemented in GloptiPoly [26]. The
convergence of Algorithm 5.3 is stated in Theorem 5.4 below.
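The extraction of $\mathbf{u}$ from the first-order moments can be sketched as follows (our illustration; it assumes the moment vector is stored as a dictionary keyed by exponent tuples, which differs from how solvers such as GloptiPoly store it):

```python
import numpy as np

def extract_point(y, dim):
    # Recover u = (y_{e_1}, ..., y_{e_dim}) from a truncated moment
    # vector y, stored as a dict from exponent tuples to moment values.
    u = np.empty(dim)
    for i in range(dim):
        e_i = tuple(int(j == i) for j in range(dim))  # i-th labeling vector
        u[i] = y[e_i]
    return u

# For n = s = 2, y_{e_3} = y_{0010}:
y = {(0, 0, 0, 0): 1.0, (1, 0, 0, 0): 0.7, (0, 1, 0, 0): 0.7,
     (0, 0, 1, 0): 0.1, (0, 0, 0, 1): -0.2}
print(extract_point(y, 4))  # [ 0.7  0.7  0.1 -0.2]
```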
###### Theorem 5.4.
Assume the set
$\mbox{Ideal}[\Phi]+\mbox{Qmod}[\Psi]\subseteq\mathbb{R}[\mathbf{x}]$ is
archimedean.
* (i)
If (5.6) is infeasible, then the moment relaxation (5.7) must be infeasible
when the order $k$ is big enough.
* (ii)
Suppose (5.6) is feasible and $\Theta$ is a generic positive definite matrix.
Then (5.6) has a unique minimizer. Let $\mathbf{u}^{(k)}$ be the point
$\mathbf{u}$ produced in the Step 2 of Algorithm 5.3 in the $k$th loop. Then
$\mathbf{u}^{(k)}$ converges to the unique minimizer of (5.6). In particular,
if the real zero set of $\Phi$ is finite, then $\mathbf{u}^{(k)}$ is the
unique minimizer of (5.6), when $k$ is sufficiently large.
###### Proof.
(i) If (5.6) is infeasible, the constant polynomial $-1$ can be viewed as a
positive polynomial on the feasible set of (5.6). Since
$\mbox{Ideal}[\Phi]+\mbox{Qmod}[\Psi]$ is archimedean, we have
$-1\in\mbox{Ideal}[\Phi]_{2k}+\mbox{Qmod}[\Psi]_{2k}$, for $k$ big enough, by
the Putinar Positivstellensatz [55]. For such a big $k$, the SOS relaxation
(5.8) is unbounded from above, hence the moment relaxation (5.7) must be
infeasible.
(ii) When the optimization (5.6) is feasible, it must have a unique minimizer,
say, $\mathbf{x}^{*}$. To see this, let $\theta$ be defined as in (5.6), $K$
be the feasible set of (5.6), and $\mathcal{R}_{2}(K)$ be the set of truncated
moment sequences (tms's) in $\mathbb{R}^{\mathbb{N}^{n+s}_{2}}$ admitting
$K$-representing measures. Consider
the linear conic optimization problem
(5.9) $\left\\{\begin{array}[]{ll}\min&\left<\theta,y\right>\\\
\mathit{s.t.}&y_{0}=1,y\in\mathcal{R}_{2}(K).\end{array}\right.$
If $\Theta$ is generic in the cone of positive definite matrices, the objective
$\left<\theta,y\right>$ is a generic linear function in $y$. By [43,
Proposition 5.2], the optimization (5.9) has a unique minimizer. The minimum
value of (5.9) is equal to $\vartheta_{\min}$. Therefore, (5.6) has a unique
minimizer when $\Theta$ is generic. The convergence of $\mathbf{u}^{(k)}$ to
$\mathbf{x}^{*}$ is shown in [57] or [40, Theorem 3.3]. For the special case
that $\Phi(\mathbf{x})=0$ has finitely many real solutions, the point
$\mathbf{u}^{(k)}$ must be equal to $\mathbf{x}^{*}$, when $k$ is large
enough. This is shown in [35] (also see [41]). ∎
The archimedeanness of the set $\mbox{Ideal}[\Phi]+\mbox{Qmod}[\Psi]$
essentially requires that the feasible set of (5.6) is compact. The
archimedeanness is sufficient but not necessary for Algorithm 5.3 to converge.
Even if the archimedeanness fails to hold, Algorithm 5.3 is still applicable
for solving (5.1). If the point $\mathbf{u}^{(k)}$ is feasible and
$\vartheta_{k}=\theta(\mathbf{u}^{(k)})$, then $\mathbf{u}^{(k)}$ must be a
minimizer of (5.1), regardless of whether the archimedeanness holds or not.
Moreover, without archimedeanness, the infeasibility of (5.7) still implies
that (5.1) is infeasible. In our computational practice, Algorithm 5.3 almost
always has finite convergence.
The polynomial optimization (5.3) can be solved in the same way by the Moment-
SOS hierarchy of semidefinite relaxations. The convergence property is the
same. For neatness of the paper, we omit the details.
### 5.2. Checking Generalized Nash Equilibria
Suppose $\mathbf{u}=(u,w)\in\mathbb{R}^{n}\times\mathbb{R}^{s}$ is a minimizer
of (5.1). For convex GNEPPs, if all $q_{i}(u)>0$, then $u$ is a GNE, by
Theorem 5.2(i). If $q_{i}(u)\leq 0$ for some $i$, we need to solve the
optimization (5.2), to check if $u=(u_{i},u_{-i})$ is a GNE or not. Note that
(5.2) is a convex polynomial optimization problem in $x_{i}$. For given
$u_{-i}$, if it is bounded from below, then (5.2) achieves its optimal value
at a minimizer.
Consider the $i$th player’s optimization with $q_{i}(u)\leq 0$. For notational
convenience, we denote the polynomial tuples
(5.10)
$H_{i}(u):=\big{\\{}g_{i,j}(x_{i},u_{-i}):j\in\mathcal{E}_{i}\big{\\}}\cup\big{\\{}\hat{\lambda}_{i,j}(x_{i},u_{-i})\cdot
g_{i,j}(x_{i},u_{-i}):j\in\mathcal{I}_{i}\big{\\}}\\\
\cup\big{\\{}q_{i}(x_{i},u_{-i})\nabla_{x_{i}}f_{i}(x_{i},u_{-i})-\sum_{j=1}^{m_{i}}\hat{\lambda}_{i,j}(x_{i},u_{-i})\nabla_{x_{i}}g_{i,j}(x_{i},u_{-i})\big{\\}},$
(5.11)
$J_{i}(u):=\big{\\{}g_{i,j}(x_{i},u_{-i}):j\in\mathcal{I}_{i}\big{\\}}\cup\big{\\{}\hat{\lambda}_{i,j}(x_{i},u_{-i}):j\in\mathcal{I}_{i}\big{\\}}.$
Like in (5.4)-(5.5), the set $\\{p\\}$ stands for $\\{p_{1},\ldots,p_{s}\\}$,
when $p=(p_{1},\ldots,p_{s})$ is a vector of polynomials. The sets
$H_{i}(u),J_{i}(u)$ are finite collections.
Under some suitable constraint qualification conditions (e.g., Slater's
condition), when (5.2) has a minimizer, it is equivalent to
(5.12)
$\left\\{\begin{array}[]{rrl}\eta_{i}:=&\min\limits_{x_{i}\in\mathbb{R}^{n_{i}}}&\zeta_{i}(x_{i}):=f_{i}(x_{i},u_{-i})-f_{i}(u_{i},u_{-i})\\\
&\mathit{s.t.}&p(x_{i})=0\,(p\in H_{i}(u)),\\\ &&q(x_{i})\geq 0\,(q\in
J_{i}(u)).\end{array}\right.$
Denote the maximum half degree, in the variables $x_{i}$, of the objective
and constraining polynomials
(5.13)
$d_{i}:=\max\big{\\{}\lceil\deg(\zeta_{i}(x_{i}))/2\rceil,\,\lceil\deg(p(x_{i}))/2\rceil,\,\lceil\deg(q(x_{i}))/2\rceil:p\in H_{i}(u),q\in J_{i}(u)\big{\\}}.$
For a degree $k\geq d_{i}$, the $k$th order moment relaxation for (5.12) is
(5.14)
$\left\\{\begin{array}[]{rrl}\eta_{i}^{(k)}:=&\min\limits_{y}&\left<\zeta_{i}(x_{i}),y\right>\\\
&\mathit{s.t.}&y_{0}=1,\,L_{p}^{(k)}[y]=0\,(p\in H_{i}(u)),\\\
&&M_{k}[y]\succeq 0,\,L_{q}^{(k)}[y]\succeq 0\,(q\in J_{i}(u)),\\\
&&y\in\mathbb{R}^{\mathbb{N}^{n_{i}}_{2k}}.\end{array}\right.$
The dual optimization problem of (5.14) is the $k$th order SOS relaxation
(5.15) $\left\\{\begin{array}[]{ll}\max&\gamma\\\
\mathit{s.t.}&\zeta_{i}(x_{i})-\gamma\in\mbox{Ideal}[H_{i}(u)]_{2k}+\mbox{Qmod}[J_{i}(u)]_{2k}.\\\
\end{array}\right.$
By solving the above relaxations for $k=d_{i},d_{i}+1,\ldots$, we get the
Moment-SOS hierarchy of relaxations (5.14)-(5.15). This gives the following
algorithm.
###### Algorithm 5.5.
For a minimizer $u=(u_{i},u_{-i})$ of (5.1) with $q_{i}(u)\leq 0$, solve the
$i$th player’s optimization (5.12). Initialize $k:=d_{i}$.
Step 1:
Solve the moment relaxation (5.14) for the minimum value $\eta_{i}^{(k)}$ and
a minimizer $y^{*}$. If $\eta_{i}^{(k)}\geq 0$, then $\eta_{i}=0$ and stop;
otherwise, go to the next step.
Step 2:
Let $t:=d_{i}$ as in (5.13). If $y^{*}$ satisfies the rank condition
(5.16)
$\operatorname{rank}{M_{t}[y^{*}]}\,=\,\operatorname{rank}{M_{t-d_{i}}[y^{*}]},$
then extract a set $U_{i}$ of $r:=\operatorname{rank}{M_{t}[y^{*}]}$
minimizers for (5.12) and stop.
Step 3:
If (5.16) fails to hold and $t<k$, let $t:=t+1$ and then go to Step 2;
otherwise, let $k:=k+1$ and go to Step 1.
We would like to remark that the optimization (5.12) is always feasible,
because $u_{i}$ is a feasible point when $u$ is a minimizer of (5.1); hence
the moment relaxation (5.14) is also feasible. Since $\eta_{i}^{(k)}$ is a
lower bound for $\eta_{i}$ and $\eta_{i}\leq\zeta_{i}(u_{i})=0$, if
$\eta_{i}^{(k)}\geq 0$, then $\eta_{i}$ must be $0$. In Step 2, the rank
condition (5.16) is called flat truncation [40]. It is a sufficient (and
almost necessary) condition to check convergence of moment relaxations. When
(5.16) holds, the method in [25] can be used to extract $r$ minimizers for
(5.12). Algorithm 5.5 can also be implemented in GloptiPoly [26]. If
$\mbox{Ideal}[H_{i}(u)]+\mbox{Qmod}[J_{i}(u)]$ is archimedean, then
$\eta_{i}^{(k)}\to\eta_{i}$ as $k\to\infty$ [31]. It is interesting to remark
that
$I_{1}:=\mbox{Ideal}[g_{i,j}(x_{i},u_{-i}):j\in\mathcal{E}_{i}]\subseteq\mbox{Ideal}[H_{i}(u)],$
$I_{2}:=\mbox{Qmod}[g_{i,j}(x_{i},u_{-i}):j\in\mathcal{I}_{i}]\subseteq\mbox{Qmod}[J_{i}(u)].$
If $I_{1}+I_{2}$ is archimedean, then
$\mbox{Ideal}[H_{i}(u)]+\mbox{Qmod}[J_{i}(u)]$ must also be archimedean.
Furthermore, we have the convergence theorem below (Theorem 5.6) for Algorithm 5.5.
###### Remark.
To check the flat truncation (5.16), we need to evaluate the ranks of
$M_{t}[y^{*}]$ and $M_{t-d_{i}}[y^{*}]$. Evaluating matrix ranks is a
classical problem in numerical linear algebra. When a matrix is near to be
singular, it may be difficult to determine its rank accurately, due to round-
off errors. In computational practice, we often determine the rank of a matrix
as the number of its singular values larger than a tolerance (say, $10^{-6}$).
We refer to [10] for determining matrix ranks numerically. Moreover, when
(5.12) has a unique minimizer, the ranks of $M_{t}[y^{*}]$ and
$M_{t-d_{i}}[y^{*}]$ are one, the flat truncation (5.16) is relatively easy to
check by looking at the largest singular value.
###### Theorem 5.6.
For the convex polynomial optimization (5.2), assume its optimal value is
achieved at a KKT point. If either one of the following conditions holds,
* (i)
The set $I_{1}+I_{2}$ is archimedean, and the Hessian
$\nabla_{x_{i}}^{2}\zeta_{i}(x_{i}^{*},u_{-i})\succ 0$ for a minimizer
$x_{i}^{*}$ of (5.12); or
* (ii)
The real zero set of polynomials in $H_{i}(u)$ is finite,
then Algorithm 5.5 must terminate within finitely many loops.
###### Proof.
Since its optimal value is achieved at a KKT point, the optimization problem
(5.2) is equivalent to (5.12).
(i) If $I_{1}+I_{2}$ is archimedean and
$\nabla_{x_{i}}^{2}\zeta_{i}(x_{i}^{*},u_{-i})\succ 0$ for a minimizer
$x_{i}^{*}$ of (5.12), then $\zeta_{i}(x_{i})-\eta_{i}\in I_{1}+I_{2}$, by [30,
Corollary 3.3]. Since
$I_{1}+I_{2}\subseteq\mbox{Ideal}[H_{i}(u)]+\mbox{Qmod}[J_{i}(u)],$
we have
$\zeta_{i}(x_{i})-\eta_{i}\in\mbox{Ideal}[H_{i}(u)]_{2k}+\mbox{Qmod}[J_{i}(u)]_{2k}$
for all $k$ big enough. Therefore, Algorithm 5.5 must terminate within
finitely many loops, by the duality theory.
(ii) If the real zero set of polynomials in $H_{i}(u)$ is finite, then the
conclusion is implied by [41, Theorem 1.1] and [40, Theorem 2.2]. ∎
###### Remark.
If the objective polynomial in (5.2) is SOS-convex and its constraining ones
are SOS-concave (see [24] for the definition of SOS-convex polynomials), then
Algorithm 5.5 must terminate in the first loop (see [32]). If the optimal
value of (5.2) is not achieved at a KKT point, the classical Moment-SOS
hierarchy of semidefinite relaxations can be used to solve it. We refer to
[30, 32, 31, 33, 34, 36, 37] for work on solving general polynomial
optimization.
## 6\. Numerical experiments
In this section, we apply Algorithm 5.1 to solve convex GNEPs. To use it, we
need Lagrange multiplier expressions. This can be done as follows.
* •
When polynomial expressions exist, we always use them. In particular, we use
polynomial expressions for the first player of the GNEP given by (1.4), the
first player in Example 6.1(ii), the third player in Examples 3.4 and
6.7(i-ii), the production unit and market players in Example 6.9.
* •
We use rational expressions for all players in Examples 6.3, 6.4 and 6.6.
Moreover, rational expressions are used for the second player of the GNEP
given by (1.4), the first two players in Examples 3.4 and 6.7(i-ii), and the
consumer players in Example 6.9. For Example 6.6, the rational expression is
obtained by solving (3.9) numerically.
* •
When it is difficult to find convenient polynomial or rational expressions, we
use parametric expressions for Lagrange multipliers. For all players in
Examples 6.5, 6.8, we use parametric expressions.
We apply the software GloptiPoly 3 [26] and SeDuMi [58] to solve the Moment-
SOS relaxations for the polynomial optimization (5.6) and (5.12). We use the
software YALMIP for solving (3.9). The computation is implemented in an
Alienware Aurora R8 desktop, with an Intel® Core(TM) i7-9700 CPU at
3.00GHz$\times$8 and 16GB of RAM, in a Windows 10 operating system. For
neatness of the paper, only four decimal digits are shown for the
computational results.
In Step 2 of Algorithm 5.1, if the optimal values $\delta_{i}\geq 0$ for each
$i$ such that $q_{i}(u)\leq 0$, then the computed minimizer of (5.1) is a GNE.
In numerical computations, we may not have $\delta_{i}\geq 0$ exactly due to
round-off errors. Typically, when $\delta_{i}$ is near zero, say,
$\delta_{i}\geq-10^{-6}$, we regard the computed solution as an accurate GNE.
In the following, all the GNEPs are convex.
###### Example 6.1.
(i) For the GNEP given by (1.4), the first player has a polynomial expression
for Lagrange multipliers given by (2.8), and the second player has a rational
expression given as
$\lambda_{2,1}=\frac{-x_{2}^{T}\nabla_{x_{2}}f_{2}}{2q_{2}(x)},\quad
q_{2}(x)=x_{1}^{T}x_{1}.$
For each $i$, we have $q_{i}(x)>0$ for all $x\in X$. We ran Algorithm 5.1 and
obtained the GNE $u=(u_{1},u_{2})$ with
(6.1) $u_{1}\approx(0.7274,0.7274,0.7274),\quad
u_{2}\approx(0.4582,0.4582,0.4582).$
It took around $2.83$ seconds.
(ii) If the first player’s objective is changed to
$f_{1}(x)=(x_{2,1}+x_{2,2}-2x_{2,3})(x_{1,1}+x_{1,2}-2x_{1,3})^{2}+x_{1,1}+x_{1,2}-2x_{1,3},$
then the GNEP has no GNE, detected by Algorithm 5.1. It took around $70.31$
seconds to detect the nonexistence. The matrix polynomials $G_{1}(x)$ and
$G_{2}(x)$ are nonsingular on $X$, so all GNEs must be KKT points if they
exist.
In the following, we compare the performance of Algorithm 5.1 with the method
of solving the optimization (5.1) without using Lagrange multiplier
expressions, i.e., each Lagrange multiplier is treated as a new variable for
polynomials. The comparison for Example 6.1(i) is given in Table 1. The
computational results for the method using Lagrange multiplier expressions
(i.e., for Algorithm 5.1) are given in the column labeled “Algorithm 5.1”. The
results for the method without using Lagrange multiplier expressions are given
in the column labeled “Without LME”. In the rows, the value $k$ is the
relaxation order for solving (5.1). The subcolumn “time” lists the consumed
time (in seconds) for solving the moment relaxation of order $k$, and the
subcolumn “GNE” shows if a GNE is obtained or not. When $k=2$ for Algorithm
5.1, the relaxation order is smaller than the minimum order $d_{0}$ required
by the appearing polynomials, so we display “not applicable (n.a.)”.
Table 1. The computational results for Example 6.1(i).
 | Algorithm 5.1 | | Without LME |
---|---|---|---|---
 | time | GNE | time | GNE
$k=2$ | n.a. | n.a. | 4.03 | no
$k=3$ | 2.83 | yes | 1350.09 | no
For Example 6.1(ii), the comparison is given in Table 2. This GNEP does not
have a GNE. When no LMEs are used, the $4$th order moment relaxation cannot be
solved because the computation runs out of memory. However, it can be solved
by using nontrivial LMEs.
Table 2. The computational results for Example 6.1(ii).
 | Algorithm 5.1 | | Without LME |
---|---|---|---|---
 | time | nonexistence of GNE | time | nonexistence of GNE
$k=2$ | n.a. | n.a. | 3.67 | not detected
$k=3$ | 2.77 | not detected | 1201.75 | not detected
$k=4$ | 70.31 | detected | out of memory |
###### Example 6.2.
Consider the GNEP in Example 3.4. We use Lagrange multiplier expressions given
there. By Algorithm 5.1, we obtained a feasible point $\hat{u}\approx
10^{-4}\cdot(0.1274,0.4102,0.3219)$ of (5.1) with $q_{1}(\hat{u})\approx
0.1274\cdot 10^{-4}$ and $q_{2}(\hat{u})\approx 0.4102\cdot 10^{-4}$. We
solved (5.2), for $i=1,2$, to check if $\hat{u}$ is a GNE or not, and got
$\delta_{1}\approx-1.0000$, $\delta_{2}\approx-1.8996\cdot 10^{-10}$.
Therefore, we solved (5.3) with $\mathcal{N}=\\{1\\}$ and $\varepsilon=0.1$,
and obtained a GNE $u=(u_{1},u_{2},u_{3})$ with
$u_{1}\approx 0.5000,\ u_{2}\approx 0.5000,\ u_{3}\approx 0.7500,\
q_{1}(u)\approx q_{2}(u)\approx 0.1250.$
It took around $0.89$ seconds.
###### Example 6.3.
Consider the GNEP in Example 3.2 with objectives
$f_{1}(x)=\sum\limits_{j=1}^{2}(x_{1,j}-1)^{2}+x_{2}(x_{1,1}-x_{1,2}),\
f_{2}(x)=(x_{2})^{3}-x_{1,1}x_{1,2}x_{2}-x_{2}.$
The rational expressions for both players are given by (3.5). For each $i$,
we have $q_{i}(x)>0$ for all $x\in X$. We ran Algorithm 5.1 and got the GNE
$u=(u_{1},u_{2})$ with
$u_{1}\approx(0.4897,1.0259),\,u_{2}\approx 0.7077.$
It took around 0.20 seconds.
###### Example 6.4.
Consider the GNEP in Example 3.8 with objectives
$f_{1}(x)=10x_{1}^{T}x_{2}-\sum_{j=1}^{3}x_{1,j},\
f_{2}(x)=\sum_{j=1}^{3}(x_{1,j}x_{2,j})^{2}+(3\prod_{j=1}^{3}x_{1,j}-1)\sum_{j=1}^{3}x_{2,j}.$
We use rational expressions as in (3.10). From Example 3.8, we know all
feasible points of (5.1) are GNEs. By Algorithm 5.1, we got the GNE
$u=(u_{1},u_{2})$ with
$u_{1}\approx(0.9864,0.0088,0.0088),\quad u_{2}\approx(0.0836,0.0999,0.0999).$
It took around 2.03 seconds.
###### Example 6.5.
Consider the GNEP in Example 4.2 with objectives
$\begin{array}[]{l}f_{1}(x)=x_{2,1}(x_{1,1})^{3}+(x_{1,2})^{3}-\sum\nolimits_{j=1}^{2}x_{1,j}\cdot\sum\nolimits_{j=1}^{2}x_{2,j},\\\
f_{2}(x)=(x_{1,1}+x_{1,2})(x_{2,1})^{3}-3x_{2,1}+(x_{2,2})^{2}+x_{1,1}x_{1,2}x_{2,2}.\end{array}$
We use parametric expressions as in (4.1). For each $i$, we have $q_{i}(x)>0$
for all $x\in X$. By Algorithm 5.1, we got the GNE $u=(u_{1},u_{2})$ with
$u_{1}\approx(0.6475,0.2786),\quad u_{2}\approx(1.0391,-0.0902).$
It took around 63.97 seconds.
###### Example 6.6.
Consider the $2$-player GNEP
$\begin{array}[]{cllcl}\min\limits_{x_{1}\in\mathbb{R}^{2}}&(x_{1,1})^{2}+2(x_{1,2})^{2}&\vline&\min\limits_{x_{2}\in\mathbb{R}^{2}}&\|x_{1}\|^{2}\cdot\|x_{2}\|^{2}+3x_{1}^{T}x_{2}\\\
&\qquad\quad+3\sum\nolimits_{j=1}^{2}x_{1,j}(x_{2,j})^{2}&\vline&&\qquad\qquad\qquad\qquad+x_{2,1}-x_{2,2}\\\
\mathit{s.t.}&x_{1,1}+2x_{1,2}-x_{2,1}\leq
1,&\vline&\mathit{s.t.}&(x_{2,1})^{2}+x_{1,2}x_{2,1}\leq 2,\\\
&(x_{1,2})^{2}+(x_{2,1})^{2}\leq 3,&\vline&&(x_{1,1})^{2}+(x_{2,2})^{2}\leq
3,\\\ &x_{1,1}\geq 0,&\vline&&x_{2,2}\geq 0.\\\ \end{array}$
We solve (3.9) numerically for $i=1,2$ with $v=(0,0,0,0),d=2$ to get rational
expressions for $\lambda_{i}$’s. By Algorithm 5.1, we got the GNE
$u=(u_{1},u_{2})$ with
$\begin{array}[]{ll}u_{1}\approx(0.0000,-1.3758),&\quad q_{1}(u)\approx
6.7538;\\\ u_{2}\approx(-0.2641,1.3544),&\quad q_{2}(u)\approx
2.3227.\end{array}$
It took around $0.41$ seconds to solve (3.9) for both players, and $6.40$
seconds to find the GNE. For neatness of the paper, we do not
display Lagrange multiplier expressions obtained by solving (3.9).
###### Example 6.7.
(i) Consider the 3-player GNEP
$\mbox{1st
player:}\left\\{\begin{array}[]{ll}\min\limits_{x_{1}\in\mathbb{R}^{2}}&x_{2,1}(x_{1,1})^{2}+x_{2,2}(x_{1,2})^{2}-(x_{3,1})^{2}x_{1,1}-(x_{3,2})^{2}x_{1,2}\\\
\mathit{s.t.}&x_{1}^{T}x_{1}\leq 1+x_{2}^{T}x_{2};\end{array}\right.$
$\mbox{2nd
player:}\left\\{\begin{array}[]{ll}\min\limits_{x_{2}\in\mathbb{R}^{2}}&(x_{2,1})^{3}+(x_{2,2})^{3}-x_{1,1}x_{2,1}x_{3,1}-x_{1,2}x_{2,2}x_{3,2}\\\
\mathit{s.t.}&x_{2,1}+x_{2,2}\leq 1+x_{3}^{T}x_{3},\,x_{2,1}\geq
0,\,x_{2,2}\geq 0;\\\ \end{array}\right.$ $\mbox{3rd
player:}\left\\{\begin{array}[]{ll}\min\limits_{x_{3}\in\mathbb{R}^{2}}&\big{(}\sum_{i=1}^{3}(x_{i,1}+x_{i,2})\big{)}^{2}-x_{3,1}-x_{3,2}\\\
\mathit{s.t.}&x_{3,1}\geq x_{1,1},\,x_{3,2}\geq x_{1,2}.\end{array}\right.$
The first player’s Lagrange multipliers have a rational expression:
$\lambda_{1}=\frac{-x_{1}^{T}\nabla_{x_{1}}f_{1}}{2q_{1}(x)},\quad
q_{1}(x)=1+x_{2}^{T}x_{2}.$
For the second player, we let $q_{2}(x)=1+x_{3}^{T}x_{3}$, and there exists a
rational expression for $\lambda_{2}$:
$\lambda_{2,1}=\frac{-x_{2}^{T}\nabla_{x_{2}}f_{2}}{q_{2}(x)},\quad\lambda_{2,2}=\frac{\partial
f_{2}}{\partial x_{2,1}}+\lambda_{2,1},\quad\lambda_{2,3}=\frac{\partial
f_{2}}{\partial x_{2,2}}+\lambda_{2,1}.$
For $\lambda_{3}$, we use the polynomial expression
$\lambda_{3,1}=\frac{\partial f_{3}}{\partial
x_{3,1}},\quad\lambda_{3,2}=\frac{\partial f_{3}}{\partial x_{3,2}}.$
For each $i$, we have $q_{i}(x)>0$ for all $x\in X$. By Algorithm 5.1, we got the
GNE $u=(u_{1},u_{2},u_{3})$ with
$\begin{array}[]{c}u_{1}\approx(0.1097,0.0750),\ u_{2}\approx(0.0663,0.0458),\
u_{3}\approx(0.1205,0.0828).\end{array}$
It took around $3.23$ seconds.
(ii) If the third player’s objective function becomes
$\begin{array}[]{c}\big{(}\sum_{i=1}^{3}(x_{i,1}-x_{i,2})\big{)}^{2}-x_{3,1}-x_{3,2},\end{array}$
then Algorithm 5.1 took around $2.86$ seconds to detect nonexistence of GNEs.
Note that all the matrix polynomials $G_{i}(x)\,(i=1,2,3)$ are nonsingular
on $X$, so all GNEs must be KKT points if they exist.
###### Example 6.8.
[17, Example A.3] Consider the GNEP of $3$ players. For $i=1,2,3$, the $i$th
player aims to minimize the quadratic function
$f_{i}(x)=\frac{1}{2}x_{i}^{T}A_{i}x_{i}+x_{i}^{T}(B_{i}x_{-i}+b_{i}).$
All variables have box constraints $-10\leq x_{i,j}\leq 10$, for all $i,j$. In
addition, the first player has the linear constraints
$x_{1,1}+x_{1,2}+x_{1,3}\leq 20,\,x_{1,1}+x_{1,2}-x_{1,3}\leq
x_{2,1}-x_{3,2}+5$; the second player has $x_{2,1}-x_{2,2}\leq
x_{1,2}+x_{1,3}-x_{3,1}+7$; and the third player has $x_{3,2}\leq
x_{1,1}+x_{1,3}-x_{2,1}+4.$ The values of parameters are set as follows
$\begin{array}[]{cccc}A_{1}=\left[\begin{array}[]{ccc}20&5&3\\\ 5&5&-5\\\
3&-5&15\end{array}\right],\ A_{2}=\left[\begin{array}[]{ccc}11&-1\\\
-1&9\end{array}\right],\ A_{3}=\left[\begin{array}[]{ccc}48&39\\\
39&53\end{array}\right],\\\ B_{1}=\left[\begin{array}[]{cccc}-6&10&11&20\\\
10&-4&-17&9\\\ 15&8&-22&21\end{array}\right],\
B_{2}=\left[\begin{array}[]{ccccc}20&1&-3&12&1\\\
10&-4&8&16&21\end{array}\right],\\\
B_{3}=\left[\begin{array}[]{ccccc}10&-2&22&12&16\\\
9&19&21&-4&20\end{array}\right],\ b_{1}=\left[\begin{array}[]{ccc}1\\\ -1\\\
1\end{array}\right],\ b_{2}=\left[\begin{array}[]{ccc}1\\\
0\end{array}\right],\ b_{3}=\left[\begin{array}[]{ccc}-1\\\
2\end{array}\right].\end{array}$
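For concreteness, the $i$th player's quadratic objective can be evaluated as in the following sketch (our own illustration; the function name is ours):

```python
import numpy as np

def quad_objective(A, B, b, x_i, x_minus_i):
    # f_i(x) = 0.5 * x_i^T A_i x_i + x_i^T (B_i x_{-i} + b_i)
    x_i = np.asarray(x_i, dtype=float)
    return 0.5 * x_i @ A @ x_i + x_i @ (B @ np.asarray(x_minus_i) + b)

# For the second player: A_2 is 2x2, B_2 is 2x5 (x_{-2} stacks the three
# variables of player 1 and the two of player 3), and b_2 has length 2.
A2 = np.array([[11.0, -1.0], [-1.0, 9.0]])
B2 = np.array([[20.0, 1.0, -3.0, 12.0, 1.0],
               [10.0, -4.0, 8.0, 16.0, 21.0]])
b2 = np.array([1.0, 0.0])
print(quad_objective(A2, B2, b2, [0.39, 1.16],
                     [-0.38, -0.12, -0.99, 0.05, 0.02]))
```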
We use parametric expressions for Lagrange multipliers as in (4.3). It is
clear $q_{i}(x)=1$ for all $x\in X$ and for all $i=1,2,3.$ By Algorithm 5.1,
we got the GNE $u=(u_{1},u_{2},u_{3})$ with
$\begin{array}[]{c}u_{1}\approx(-0.3805,-0.1227,-0.9932),\quad
u_{2}\approx(0.3903,1.1638),\\\ u_{3}\approx(0.0504,0.0176).\end{array}$
It took around 8.18 seconds.
###### Example 6.9.
Consider the GNEP based on the Arrow and Debreu model of a competitive economy
[4, 17]. The first $N_{1}$ players are consumers, the second $N_{2}$ players
are production units, and the last player is the market, so $N=N_{1}+N_{2}+1$.
In this GNEP, all players have the same number of variables, i.e., $n_{1}=\dots=n_{N}$. Let
$Q_{i}\in\mathbb{R}^{n_{i}\times
n_{i}},b_{i}\in\mathbb{R}^{n_{i}},\xi_{i}\in\mathbb{R}^{n_{i}}_{+}$ and
$a_{i,k}\in\mathbb{R}_{+}$ be parameters. These players’ optimization problems
are:
$\mbox{The $i$th player (a
consumer):}\left\\{\begin{array}[]{ll}\min\limits_{x_{i}\in\mathbb{R}^{n_{i}}_{+}}&\frac{1}{2}x_{i}^{T}Q_{i}x_{i}-b_{i}^{T}x_{i}\\\
\mathit{s.t.}&x_{N}^{T}x_{i}\leq
x_{N}^{T}\xi_{i}+\sum_{k=N_{1}+1}^{N-1}a_{i,k}x_{N}^{T}x_{k}.\end{array}\right.$
$\mbox{The $i$th player (a production
unit):}\left\\{\begin{array}[]{ll}\min\limits_{x_{i}\in\mathbb{R}^{n_{i}}_{+}}&-x_{N}^{T}x_{i}\\\
\mathit{s.t.}&x_{i}^{T}x_{i}\leq i-N_{1}.\end{array}\right.\qquad\qquad\qquad$
$\mbox{The $N$th player (the
market):}\left\\{\begin{array}[]{ll}\min\limits_{x_{N}\in\mathbb{R}^{n_{i}}_{+}}&x_{N}^{T}\left(\sum_{k=N_{1}+1}^{N-1}x_{k}-\sum_{k=1}^{N_{1}}(x_{k}-\xi_{k})\right)\\\
\mathit{s.t.}&\sum_{j=1}^{{n_{i}}}x_{N,j}=1.\end{array}\right.$
For each $i\in[N_{1}]$, the Lagrange multipliers have rational expressions as
$\lambda_{i,1}=\frac{-x_{i}^{T}\nabla_{x_{i}}f_{i}}{q_{i}(x)},\
\lambda_{i,j}=\frac{\partial f_{i}}{\partial
x_{i,j}}+x_{N,j}\cdot\lambda_{i,1}\,(j=1,\ldots,{n_{i}}),$
where
$q_{i}(x)=x_{N}^{T}\xi_{i}+\sum_{k=N_{1}+1}^{N-1}a_{i,k}x_{N}^{T}x_{k}>0$ for
all $x\in X$. For each $i=N_{1}+1,\ldots,N_{1}+N_{2}$, the $i$th player (a
production unit) has polynomial expressions
$\lambda_{i,1}=\frac{-x_{i}^{T}\nabla_{x_{i}}f_{i}}{2(i-N_{1})},\
\lambda_{i,j}=\frac{\partial f_{i}}{\partial
x_{i,j}}+2x_{i,j}\cdot\lambda_{i,1}\,(j=1,\ldots,n_{i}).$
For the last player (the market), we substitute $x_{N,n_{i}}$ by
$1-\sum_{j=1}^{n_{i}-1}x_{N,j}$, then the constraints become
$1-\sum_{j=1}^{n_{i}-1}x_{N,j}\geq 0,\,x_{N,1}\geq 0,\ldots,x_{N,n_{i}-1}\geq
0,$ and hence
$\lambda_{N,1}=-\sum\nolimits_{j=1}^{n_{i}-1}\frac{\partial f_{N}}{\partial
x_{N,j}}\cdot x_{N,j},\ \lambda_{N,j+1}=\frac{\partial f_{N}}{\partial
x_{N,j}}+\lambda_{N,1}\,(j=1,\ldots,n_{i}-1).$
For each $i=1,\dots,N_{1}$, when $n_{i}=2$, the parameters are given as
$\begin{array}[]{lccc}Q_{i}=\left[\begin{array}[]{ccc}0.75+0.25i&1.5-0.5i\\\
1.5-0.5i&i\end{array}\right],\quad b_{i}=\left[\begin{array}[]{ccc}0.4+0.1i\\\
0.9+0.1i\end{array}\right],\quad\xi_{i}=\left[\begin{array}[]{ccc}i\\\
i\end{array}\right],\\\
a_{i,j}=0.2+0.1i\quad(j=N_{1}+1,\dots,N_{1}+N_{2}).\end{array}$
When $n_{i}=3$, the parameters are given as:
$\begin{array}[]{lccc}Q_{i}=\left[\begin{array}[]{ccc}-1+2i&-i&i\\\
-i&1+i&1-i\\\ i&1-i&1+i\end{array}\right],\quad
b_{i}=\left[\begin{array}[]{ccc}0.4+0.1i\\\ 0.9+0.1i\\\
1.4+0.1i\end{array}\right],\quad\xi_{i}=\left[\begin{array}[]{ccc}i\\\ i\\\
i\end{array}\right],\\\
a_{i,j}=0.2+0.1i\quad(j=N_{1}+1,\dots,N_{1}+N_{2}).\end{array}$
The numerical results are presented in Table 3. The “$N$” is the total number
of all players, the “$N_{1}$” and “$N_{2}$” are the number of consumers and
production units respectively, the “$n$” (resp., “$n_{i}$”) is the dimension
of “$x$” (resp., “$x_{i}$”), the “$u$” is the GNE obtained by Algorithm 5.1,
the “$q(u)$” gives the value of the denominator vector
$q(u):=(q_{1}(u),\ldots,q_{N_{1}}(u))$, and “time” shows the consumed time (in
seconds).
Table 3. Numerical results of Example 6.9.
$\begin{array}[]{c}\mbox{Number}\\\ \mbox{of}\\\ \mbox{players}\end{array}$ | dimension | $u$ | $q(u)$ | time
---|---|---|---|---
$\begin{array}[]{l}N\ =5\\\ N_{1}=2\\\ N_{2}=2\end{array}$ | $\begin{array}[]{l}n\ =10\\\ n_{i}=2\end{array}$ | $\begin{array}[]{l}(0.0000,1.0000,\quad 0.2889,0.4778,\\\ \ 0.4166,0.9091,\quad 0.5892,1.2856,\\\ \qquad\qquad 0.3143,0.6857)\end{array}$ | $\begin{array}[]{c}1.4907\\\ 2.6543\end{array}$ | 1.37
$\begin{array}[]{l}N\ =6\\\ N_{1}=3\\\ N_{2}=2\end{array}$ | $\begin{array}[]{l}n\ =12\\\ n_{i}=2\end{array}$ | $\begin{array}[]{l}(0.0000,1.0000,\quad 0.2889,0.4778,\\\ \ 0.4667,0.4000,\quad 0.4354,0.9002,\\\ \ 0.6157,1.2731,\quad 0.3260,0.6740)\end{array}$ | $\begin{array}[]{c}1.5423\\\ 2.7230\\\ 3.9038\end{array}$ | 3.82
$\begin{array}[]{l}N\ =7\\\ N_{1}=3\\\ N_{2}=3\end{array}$ | $\begin{array}[]{l}n\ =14\\\ n_{i}=2\end{array}$ | $\begin{array}[]{l}(0.0000,1.0000,\quad 0.2889,0.4778,\\\ \ 0.4667,0.4000,\quad 0.5587,0.8294,\\\ \ 0.7901,1.1729,\quad 0.9677,1.4365,\\\ \qquad\qquad 0.4025,0.5975)\end{array}$ | $\begin{array}[]{c}1.8961\\\ 3.1948\\\ 4.4935\end{array}$ | 21.26
$\begin{array}[]{l}N\ =8\\\ N_{1}=4\\\ N_{2}=3\end{array}$ | $\begin{array}[]{l}n\ =16\\\ n_{i}=2\end{array}$ | $\begin{array}[]{l}(0.0000,1.0000,\quad 0.2889,0.4778,\\\ \ 0.4667,0.4000,\quad 0.5704,0.3963,\\\ \ 0.5835,0.8121,\quad 0.8251,1.1485,\\\ \ 1.0106,1.4067,\quad 0.4181,0.5819)\end{array}$ | $\begin{array}[]{c}1.8913\\\ 3.1884\\\ 4.4855\\\ 5.7826\end{array}$ | 106.78
$\begin{array}[]{l}N\ =9\\\ N_{1}=4\\\ N_{2}=4\end{array}$ | $\begin{array}[]{l}n\ =18\\\ n_{i}=2\end{array}$ | $\begin{array}[]{l}(0.0000,1.0000,\quad 0.2889,0.4778,\\\ \ 0.4667,0.4000,\quad 0.5704,0.3963,\\\ \ 0.6258,0.7800,\quad 0.8850,1.1031,\\\ \ 1.0838,1.3510,\quad 1.2515,1.5600,\\\ \qquad\qquad 0.4451,0.5549)\end{array}$ | $\begin{array}[]{c}2.3116\\\ 3.7489\\\ 5.1861\\\ 6.6233\end{array}$ | 465.71
$\begin{array}[]{l}N\ =3\\\ N_{1}=1\\\ N_{2}=1\end{array}$ | $\begin{array}[]{l}n\ =9\\\ n_{i}=3\end{array}$ | $\begin{array}[]{l}(1.3076,1.0871,0.0962,\\\ \ 0.8087,0.5882,0.0000,\\\ \ 0.5789,0.4211,0.0000)\end{array}$ | $1.2148$ | 0.50
$\begin{array}[]{l}N\ =4\\\ N_{1}=2\\\ N_{2}=1\end{array}$ | $\begin{array}[]{l}n\ =12\\\ n_{i}=3\end{array}$ | $\begin{array}[]{l}(1.3696,1.0886,0.0652,\\\ \ 0.3500,0.7875,0.5625,\\\ \ 0.6245,0.7810,0.0000,\\\ \ 0.4443,0.5557,0.0000)\end{array}$ | $\begin{array}[]{c}1.2134\\\ 2.2846\end{array}$ | 3.76
$\begin{array}[]{l}N\ =5\\\ N_{1}=2\\\ N_{2}=2\end{array}$ | $\begin{array}[]{l}n\ =15\\\ n_{i}=3\end{array}$ | $\begin{array}[]{l}(1.7172,1.3109,0.0000,\\\ \ 0.3500,0.7875,0.5625,\\\ \ 0.7006,0.7135,0.0097,\\\ \ 0.9908,1.0091,0.0006,\\\ \ 0.4953,0.5044,0.0003)\end{array}$ | $\begin{array}[]{c}1.5121\\\ 2.6829\end{array}$ | 42.66
$\begin{array}[]{l}N\ =6\\\ N_{1}=3\\\ N_{2}=2\end{array}$ | $\begin{array}[]{l}n\ =18\\\ n_{i}=3\end{array}$ | $\begin{array}[]{l}(1.7734,1.3398,0.0000,\\\ \ 0.3500,0.7875,0.5625,\\\ \ 0.2250,0.7958,0.6542,\\\ \ 0.5780,0.8160,0.0001,\\\ \ 0.8174,1.1541,0.0040,\\\ \ 0.4146,0.5854,0.0000)\end{array}$ | $\begin{array}[]{c}1.5192\\\ 2.6923\\\ 3.8653\end{array}$ | 473.84
### 6.1. Comparison with other methods
We compare our method (i.e., Algorithm 5.1) with some classical methods for
solving convex GNEPPs, such as the two-step method in [22] based on the
quasi-variational inequality (QVI) formulation, the penalty method in [17],
the exact version of the interior point method based on the KKT system in
[12], and the Augmented-Lagrangian method in [29]. All examples in Section 6
are tested for the comparison. For Example 6.9, we test the case
$N_{1}=N_{2}=1$, $n_{i}=3$.
For a computed tuple $u:=(u_{1},\ldots,u_{N})$, we use the value
$\xi\,:=\,\max\big{\\{}\max_{i\in[N],j\in\mathcal{I}_{i}}\\{-g_{i,j}(u)\\},\max_{i\in[N],j\in\mathcal{E}_{i}}\\{|g_{i,j}(u)|\\}\big{\\}}$
to measure the feasibility violation. Clearly, the point $u$ is feasible if
and only if $\xi\leq 0$. If we solve (5.2) for all $i\in[N]$, the accuracy
parameter of $u$ is $\delta:=\max_{i\in[N]}|\delta_{i}|$. For these methods,
we use the following stopping criterion: For each time we get a new iterate
$u$, if its feasibility violation $\xi<10^{-6}$, then we compute the accuracy
parameter $\delta$. If $\delta<10^{-6}$, then we stop the iteration.
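The feasibility violation $\xi$ and the stopping test can be coded directly (our sketch; the constraint functions are assumed to be given as callables):

```python
def feasibility_violation(u, ineqs, eqs):
    # xi = max( max over inequalities of -g(u),
    #           max over equalities of |g(u)| );
    # the point u is feasible if and only if xi <= 0.
    return max([-g(u) for g in ineqs] + [abs(g(u)) for g in eqs])

def should_stop(u, ineqs, eqs, deltas, tol=1e-6):
    # Stop when the iterate is feasible up to tol and the accuracy
    # parameter delta = max_i |delta_i| is below tol.
    delta = max(abs(d) for d in deltas)
    return feasibility_violation(u, ineqs, eqs) < tol and delta < tol
```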
For these classical methods, the parameters are the same as given in [22, 17,
12, 27, 29]. When implementing the QVI method, we use Moment-SOS relaxations
to find projections onto the given sets (the maximum number of iterations for line
search is set to be $100$). For the penalty method, the MATLAB function fsolve
is used to implement the Levenberg-Marquardt Algorithm for solving all
equations involved (the maximum number of iterations is set to be $100$). The
full penalization is used when we implement the Augmented-Lagrangian method,
and a Levenberg-Marquardt type method (see [29, Algorithm 24]) is exploited to
solve penalized subproblems. We let $1000$ be the maximum number of iterations
for the QVI method, let $1000$ be the maximum number of outer iterations for
the penalty method and the Augmented-Lagrangian method, and let $10,000$ be
the maximum number of iterations for the interior point method. For initial
points, we use $(1,0,0,1,0,0)$ for Example 6.1(i-ii), $(0,0,0,0,0,0,0,0,1)$
for Example 6.9, and the zero vectors for other GNEPs. If the maximum number
of iterations is reached but the stopping criterion is not met, we still solve
(5.2) to check whether the latest iterate is a GNE or not.
Table 4. Comparison with some methods.
Example | | QVI | Penalty | IPM | A-L | Algorithm 5.1
---|---|---|---|---|---|---
6.1(i) | time | Fail | Fail | Fail | Fail | 2.83
 | error | | | | | $4\cdot 10^{-9}$
6.1(ii) | time | Fail | Fail | Fail | Fail | 70.31
 | error | | | | | no GNE
6.2 | time | Fail | 3.45 | 0.19 | Fail | 0.89
 | error | | $2\cdot 10^{-6}$ | $3\cdot 10^{-7}$ | | $7\cdot 10^{-7}$
6.3 | time | 2.63 | 8.46 | 0.12 | 0.08 | 0.20
 | error | $8\cdot 10^{-7}$ | $3\cdot 10^{-6}$ | $2\cdot 10^{-7}$ | $2\cdot 10^{-7}$ | $1\cdot 10^{-8}$
6.4 | time | Fail | 4.51 | 0.29 | Fail | 2.03
 | error | | $3\cdot 10^{-5}$ | $8\cdot 10^{-7}$ | | $4\cdot 10^{-7}$
6.5 | time | 185.29 | 4.02 | 37.7 | 0.03 | 63.97
 | error | $9\cdot 10^{-5}$ | $2\cdot 10^{-6}$ | $5\cdot 10^{-4}$ | $3\cdot 10^{-7}$ | $4\cdot 10^{-7}$
6.6 | time | 7.78 | Fail | 0.17 | Fail | 6.40
 | error | $6\cdot 10^{-7}$ | | $3\cdot 10^{-7}$ | | $1\cdot 10^{-7}$
6.7(i) | time | 72.18 | 0.39 | 0.16 | 0.05 | 3.23
 | error | $4\cdot 10^{-7}$ | $8\cdot 10^{-8}$ | $5\cdot 10^{-7}$ | $1\cdot 10^{-10}$ | $7\cdot 10^{-9}$
6.7(ii) | time | Fail | Fail | Fail | Fail | 2.86
 | error | | | | | no GNE
6.8 | time | Fail | 0.38 | 0.16 | 0.01 | 8.18
 | error | | $9\cdot 10^{-8}$ | $1\cdot 10^{-8}$ | $1\cdot 10^{-8}$ | $3\cdot 10^{-8}$
6.9 | time | 1.223 | 6.26 | 0.14 | Fail | 0.50
 | error | $3\cdot 10^{-5}$ | $8\cdot 10^{-6}$ | $3\cdot 10^{-7}$ | | $7\cdot 10^{-7}$
The numerical results are presented in Table 4, and the comparison is
summarized in the following.
1. (1)
The QVI method failed to find a GNE for Example 6.1(i), because the projection
set in Step 2 is empty. Therefore the line-search could not finish (see [22,
Algorithm 4.1]). This is also the case for Examples 6.1(ii) and 6.7(ii), for
which the GNEs do not exist. For Examples 6.2 and 6.4, the sequence generated
by QVI is alternating between several points and none of them is a GNE. For
Example 6.8, the sequence does not converge.
2. (2)
The penalty method failed to find a GNE for Examples 6.1(i) and 6.6, because
the equation $F_{\varepsilon_{k}}(x)=0$ cannot be solved for some $k$ (see
[17, Algorithm 3.3]). This is also the case for Examples 6.1(ii) and 6.7(ii),
for which the GNEs do not exist.
3. (3)
The interior-point method failed to find a GNE for Examples 6.1(i), 6.1(ii)
and 6.7(ii), because the step-length is too small to efficiently decrease the
violation of KKT conditions. Note that for Examples 6.1(ii) and 6.7(ii), the
GNEs do not exist, so the Newton type directions usually do not satisfy the
sufficient descent conditions.
4. (4)
The Augmented-Lagrangian method failed to find a GNE for Example 6.1(i),
because the maximum penalty parameter ($10^{12}$) is reached before a GNE is
obtained. This is also the case for Example 6.1(ii), for which the GNEs do not
exist. For Examples 6.2, 6.4, 6.6, 6.7(ii) and 6.9, the Augmented-Lagrangian
method failed to find a GNE, because the penalization subproblems cannot be
efficiently solved.
## 7\. Conclusions and Discussions
This paper studies convex GNEPs given by polynomials. The rational and
parametric expressions for Lagrange multipliers are used. Based on these
expressions, Algorithm 5.1 is proposed for computing a GNE. The Moment-SOS
hierarchy of semidefinite relaxations is used to solve the appearing
polynomial optimization problems. Under some general assumptions, we show
that Algorithm 5.1 is able to find a GNE if one exists, or to detect
nonexistence of GNEs if there is none.
For future work, it is interesting to solve nonconvex GNEPPs. Under some
constraint qualifications, the KKT system (2.5) is necessary but not
sufficient for GNEs. A solution $u$ of (2.5) may not be a GNE for nonconvex
GNEPPs. If $u$ is not a GNE, one needs to find an efficient method to obtain a
different candidate. Such a method is proposed for solving NEPs [50]. For
GNEPs, it is not clear how to generalize the method in [50]. When the point
$u$ is not a GNE, how can we exclude it and find a better candidate? When
(5.1) is feasible, how do we detect nonexistence of GNEs? These questions are
mostly open, to the best of the authors’ knowledge.
Acknowledgements The authors would like to thank Christian Kanzow and Daniel
Steck for sharing the code for solving GNEPs. They also thank the editors and
anonymous referees for fruitful suggestions.
## References
* [1] J. Anselmi, D. Ardagna and M. Passacantando, Generalized Nash equilibria for SaaS/PaaS clouds, _European Journal of Operational Research_, 236.1: 326-339, 2014.
* [2] A.A. Ahmadi and J. Zhang, Semidefinite programming and Nash equilibria in bimatrix games, _INFORMS Journal on Computing_ , 33.2: 607-628, 2020.
* [3] D. Ardagna, M. Ciavotta and M. Passacantando, Generalized Nash equilibria for the service provisioning problem in multi-cloud systems, _IEEE Transactions on Services Computing_ , 10: 381-395, 2017.
* [4] K. Arrow and G. Debreu, Existence of an equilibrium for a competitive economy, _Econometrica: Journal of the Econometric Society_ , 22: 265-290, 1954.
* [5] Q. Ba and J. Pang, Exact penalization of generalized Nash equilibrium problems, _Operations Research_ , 2020. doi.org/10.1287/opre.2019.1942
* [6] E.G. Belousov and D. Klatte, A Frank–Wolfe type theorem for convex polynomial programs, _Computational Optimization and Applications_ , 22.1: 37-48, 2002.
* [7] D. Bertsekas. _Nonlinear programming_ , second edition, Athena Scientific, 1995.
* [8] M. Breton, G. Zaccour, and M. Zahaf, A game-theoretic formulation of joint implementation of environmental projects, _European Journal of Operational Research_ , 168: 221-239, 2006.
* [9] G. Debreu, A social equilibrium existence theorem, _Proceedings of the National Academy of Sciences_ , 38: 886-893, 1952.
* [10] J. Demmel, _Applied Numerical Linear Algebra_, SIAM, 1997.
* [11] A. Dreves, F. Facchinei, A. Fischer, and M. Herrich, A new error bound result for Generalized Nash Equilibrium Problems and its algorithmic application, _Computational Optimization and Applications_ , 59: 63-84, 2014.
* [12] A. Dreves, F. Facchinei, C. Kanzow, and S. Sagratella, On the solution of the KKT conditions of Generalized Nash Equilibrium Problems, _SIAM Journal on Optimization_ , 21: 1082-1108, 2011.
* [13] A. Dreves, C. Kanzow, and O. Stein, Nonsmooth optimization reformulations of player convex generalized Nash equilibrium problems, _Journal of Global Optimization_ , 53.4: 587-614, 2012.
* [14] F. Facchinei, A. Fischer, and V. Piccialli, On generalized Nash games and variational inequalities, _Operations Research Letters_, 35: 159-164, 2007.
* [15] F. Facchinei, A. Fischer, and V. Piccialli, Generalized Nash Equilibrium Problems and Newton methods, _Mathematical Programming_ , 117: 163-194, 2009.
* [16] F. Facchinei and C. Kanzow, Generalized Nash Equilibrium Problems, _Annals of Operations Research_ , 175.1: 177-211, 2010.
* [17] F. Facchinei and C. Kanzow, Penalty methods for the solution of Generalized Nash Equilibrium problems, _SIAM Journal on Optimization_ , 20: 2228-2253, 2010.
* [18] F. Facchinei and L. Lampariello, Partial penalization for the solution of Generalized Nash Equilibrium Problems, _Journal of Global Optimization_ , 50.1: 39-57, 2011.
* [19] F. Facchinei and J. Pang, Nash equilibria: the variational approach, _Convex optimization in signal processing and communications_ (D. Palomar, Y. Eldar, eds.), 443-493, Cambridge University Press, England, 2010.
* [20] L. Fialkow and J. Nie, The truncated moment problem via homogenization and flat extensions, _Journal of Functional Analysis_ , 263.6: 1682-1700, 2012.
* [21] M. Fukushima, Restricted generalized Nash equilibria and controlled penalty algorithm, _Computational Management Science_ , 8: 201-208, 2010.
* [22] D. Han, H. Zhang, G. Qian, and L. Xu, An improved two-step method for solving Generalized Nash Equilibrium Problems, _European Journal of Operational Research_ , 216.3: 613-623, 2012.
* [23] P. Harker, Generalized Nash games and quasi-variational inequalities, _European Journal of Operational Research_, 54: 81-94, 1991.
* [24] J.W. Helton and J. Nie, Semidefinite representation of convex sets, _Mathematical Programming_ , 122.1: 21-64, 2010.
* [25] D. Henrion and J. Lasserre, Detecting global optimality and extracting solutions in GloptiPoly, _Positive polynomials in control_, 293-310, Lecture Notes in Control and Inform. Sci., 312, Springer, Berlin, 2005.
* [26] D. Henrion, J. Lasserre, and J. Löfberg, Gloptipoly 3: moments, optimization and semidefinite programming, _Optimization Methods and Software_ , 24.4-5:761-779, 2009.
* [27] A. von Heusinger and C. Kanzow, Relaxation methods for Generalized Nash Equilibrium Problems with inexact line search, _Journal of Optimization Theory and Applications_ , 143: 159-183, 2009.
* [28] A. von Heusinger and C. Kanzow, Optimization reformulations of the Generalized Nash Equilibrium Problem using Nikaido-Isoda-type functions, _Computational Optimization and Applications_ , 43: 353-377, 2009.
* [29] C. Kanzow and D. Steck, Augmented Lagrangian methods for the solution of Generalized Nash Equilibrium Problems, _SIAM Journal on Optimization_ , 26: 2034-2058, 2016.
* [30] E. de Klerk and M. Laurent, On the Lasserre hierarchy of semidefinite programming relaxations of convex polynomial optimization problems, _SIAM Journal on Optimization_, 21.3: 824-832, 2011.
* [31] J. Lasserre, Global optimization with polynomials and the problem of moments, _SIAM Journal on Optimization_ , 11: 796-817, 2001.
* [32] J. Lasserre, Convexity in semialgebraic geometry and polynomial optimization, _SIAM Journal on Optimization_ 19.4: 1995-2014, 2009.
* [33] J. Lasserre, _An introduction to polynomial and semi-algebraic optimization_ , Volume 52, Cambridge University Press, 2015.
* [34] J. Lasserre, _The Moment-SOS Hierarchy_ , Proceedings of the International Congress of Mathematicians (ICM 2018), vol. 3, B. Sirakov, P. Ney de Souza and M. Viana (Eds.), 3761-3784, World Scientific, 2019.
* [35] J. Lasserre, M. Laurent and P. Rostalski, Semidefinite characterization and computation of zero-dimensional real radical ideals, _Foundations of Computational Mathematics_ , 8.5: 607-647, 2008.
* [36] M. Laurent, Sums of squares, moment matrices and optimization over polynomials, _Emerging Applications of Algebraic Geometry of IMA Volumes in Mathematics and its Applications_ , 149: 157-270, Springer, 2009.
* [37] M. Laurent, Optimization over polynomials: Selected topics, _Proceedings of the International Congress of Mathematicians_ , S. Jang, Y. Kim, D-W. Lee, and I. Yie (eds.), ICM 2014, 843-869, 2014.
* [38] K. Nabetani, P. Tseng, and M. Fukushima, Parametrized variational inequality approaches to Generalized Nash Equilibrium Problems with shared constraints, _Computational Optimization and Applications_ , 48: 2011, 423-452.
* [39] J. Nie and B. Sturmfels, Matrix cubes parameterized by eigenvalues, _SIAM Journal on Matrix Analysis and Applications_, 31.2: 755-766, 2009.
* [40] J. Nie, Certifying convergence of Lasserre’s hierarchy via flat truncation, _Mathematical Programming_ , 142.1-2: 485-510, 2013.
* [41] J. Nie, Polynomial optimization with real varieties, _SIAM Journal On Optimization_ 23.3: 1634-1646, 2013.
* [42] J. Nie, Optimality conditions and finite convergence of Lasserre’s hierarchy. _Mathematical programming_ , 146.1-2:97-121, 2014.
* [43] J. Nie, The ${\mathcal{A}}$-Truncated ${\mathcal{K}}$-Moment Problem, _Foundations of Computational Mathematics_ , 14.6, 1243-1276, 2014.
* [44] J. Nie, The hierarchy of local minimums in polynomial optimization, _Mathematical Programming_ 151.2: 555-583, 2015.
* [45] J. Nie, Linear optimization with cones of moments and nonnegative polynomials, _Mathematical Programming_ , 153.1: 247-274, 2013.
* [46] J. Nie, Generating polynomials and symmetric tensor decompositions, _Foundations of Computational Mathematics_ 17.2: 423-465, 2017.
* [47] J. Nie, Low rank symmetric tensor approximations, _SIAM Journal on Matrix Analysis and Applications_, 38.4: 1517-1540, 2017.
* [48] J. Nie, Tight relaxations for polynomial optimization and Lagrange multiplier expressions, _Mathematical Programming_ 178.1-2: 1-37, 2019.
* [49] J. Nie, X. Tang and L. Xu, The Gauss-Seidel method for generalized Nash equilibrium problems of polynomials, _Computational Optimization and Applications_ , 78.2: 529-557, 2021.
* [50] J. Nie and X. Tang, Nash equilibrium problems of polynomials, _Preprint_ , 2020. arXiv:2006.09490
* [51] J. Nie, L. Wang, J. Ye and S. Zhong, A Lagrange multiplier expression method for bilevel polynomial optimization, _SIAM Journal on Optimization_ 31.3: 2368-2395, 2021.
* [52] J. Nie, Z. Yang and G. Zhou, The saddle point problem of polynomials, _Foundations of Computational Mathematics_ , 2021. doi.org/10.1007/s10208-021-09526-8
* [53] J. Pang and M. Fukushima, Quasi-variational inequalities, generalized Nash equilibria, and multi-leader-follower games, _Computational Management Science_, 2: 21-56, 2005.
* [54] J. Pang, G. Scutari, F. Facchinei and C. Wang, Distributed power allocation with rate constraints in Gaussian parallel interference channels, _IEEE Transactions on Information Theory_, 54.8: 3471-3489, 2008.
* [55] M. Putinar, Positive polynomials on compact semi-algebraic sets, _Indiana University Mathematics Journal_, 42.3: 969-984, 1993.
* [56] D. Schiro, J. Pang, and U. Shanbhag, On the solution of affine generalized Nash equilibrium problems with shared constraints by Lemke’s method, _Mathematical Programming_ , 142.1: 1-46, 2013.
* [57] M. Schweighofer, Optimization of polynomials on compact semialgebraic sets, _SIAM J. Optim._ , 15.3: 805-825, 2005.
* [58] J. Sturm, Using SeDuMi 1.02, a MATLAB toolbox for optimization over symmetric cones, _Optimization Methods and Software_, 11.1-4: 625-653, 1999.
What was the river Ister in the time of Strabo? A mathematical approach
Karol Mikula [1], Martin Ambroz [1], Renáta Mokošová [2]

[1] Department of Mathematics and Descriptive Geometry, Faculty of Civil Engineering, Slovak University of Technology, Radlinského 11, 810 05 Bratislava, Slovakia

[2] Department of Classical and Semitic Philology, Faculty of Arts, Comenius University, Gondova 2, 811 02 Bratislava, Slovakia
Mathematics Subject Classification: 00A06, 00A09, 00A69, 35J05, 65N06, 65K10, 68U10.
Supported by the Grants APVV-19-0460 and VEGA 1/0436/20.
In this paper, we introduce a novel method for map registration and apply it to the transformation of the river Ister from Strabo's map of the World to the current map in the World Geodetic System. This transformation leads to the surprising but convincing result that Strabo's river Ister best coincides with the present-day Tauernbach-Isel-Drava-Danube course, and not with the Danube river, as is commonly assumed. This result is supported by carefully designed mathematical measurements, and it resolves all related controversies otherwise appearing in the understanding and translation of Strabo's original text. Based on this result, we also show that Strabo's Suevi in the Hercynian Forest correspond to the Slavic people in the Carpathian-Alpine basin, and thus that a compact Slavic settlement existed there already at the beginning of the first millennium AD.
§ INTRODUCTION
[Figure: Strabo's map of the World.]
In this paper, we present a novel mathematical model and numerical method for the transformation of geographic maps onto each other. To be more precise, we are interested in finding the transformation of a historical map to a current one in the World Geodetic System (WGS) [43]. Such a problem is also called map registration. Our mathematical model and numerical method for map registration are based on two main principles and steps. First, we design and compute locally optimal affine transformations of one map to the other. In this step, every locally optimal affine transformation is found by means of the least squares method using a set of clearly identified corresponding points. Then, in the second step, the locally optimal affine transformations are smoothly interpolated/extrapolated to all other points of the map by solving the Laplace equation with suitable boundary conditions. The solution of the Laplace equation is obtained numerically by using the finite difference method on a background grid discretizing the selected rectangular region of interest on the map. After obtaining the final transformation, which we call the Locally Affine Globally Laplace (LAGL) map transformation, we can transform any point of the historical map to the current map and see which places on the current map correspond to geographic objects on the historical map.
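To make the two steps concrete, the following Python sketch (our illustration, not the authors' code) fits one locally optimal affine map by least squares from corresponding point pairs and then harmonically blends grid values with plain Jacobi sweeps; the simplistic wrap-around border handling is our shortcut, whereas the actual method uses suitable boundary conditions:

```python
import numpy as np

def fit_affine(src, dst):
    # Least-squares affine map with dst_k ~ A @ src_k + b, fitted from
    # k >= 3 pairs of corresponding points (arrays of shape (k, 2)).
    k = src.shape[0]
    X = np.hstack([src, np.ones((k, 1))])           # rows [x, y, 1]
    coef, *_ = np.linalg.lstsq(X, dst, rcond=None)  # 3 x 2 coefficients
    return coef[:2].T, coef[2]                      # A (2 x 2), b (2,)

def laplace_blend(values, fixed, n_sweeps=5000):
    # Harmonic interpolation/extrapolation on a grid: Jacobi sweeps for
    # the Laplace equation, keeping entries fixed where fixed is True.
    u = values.copy()
    for _ in range(n_sweeps):
        avg = 0.25 * (np.roll(u, 1, 0) + np.roll(u, -1, 0) +
                      np.roll(u, 1, 1) + np.roll(u, -1, 1))
        u = np.where(fixed, values, avg)
    return u
```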
The motivation to study this problem mathematically and numerically came to us from reading Strabo's Geographica (Στράβωνος Γεωγραφικά)
and studying the related Strabo's map of the World published in the Encyclopaedia Biblica [10], in the section "Geography" on page 1691, see Figure <ref>. Strabo's map of the World is authored by Karl Müller, a famous 19th-century historian, classical philologist, geographer and cartographer, who translated Strabo's Geographica from Greek to Latin in 1853 [19]. This book is one of our reading sources because it contains the Greek text without the "purposeful" changes sometimes found in other sources and translations. A further very useful Greek text of Strabo's Geographica, allowing direct translation of Greek words into English, is available in the Perseus Digital Library [35] of Tufts University. It contains the Geographica edition by August Meineke from 1877 [20], but one has to be careful because at some points it deviates from [19] and other sources, e.g. by exchanging river names or their transcription with respect to the original. The most recent and very useful source for reading is the English translation of Strabo's work by Duane W. Roller [30], in which he aims to respect Strabo's original geographic names and does not translate them into the terms commonly used nowadays. Concerning the translation and understanding of Strabo's work, it is worth citing Roller's book, Section 5: "there is still the problem of many rare or unique words, extensive paraphrases of earlier authors who are themselves obscure, ambiguities of style and sheer length of the work". Some further useful translations, suitable for an introductory reading, can be found on the internet, e.g. [36].
Strabo was a Greek geographer who lived from around 63 BC to around AD 24 [30]. He lived in Asia Minor, Rome and Alexandria and travelled a lot during his lifetime to collect the information for his work, not only by reading preceding sources such as Eratosthenes and others, mainly in the famous Alexandria library. Strabo's Geographica was first published in 7 BC, collecting all the knowledge from the previous years, and he continued the work until approximately AD 23, during the reigns of the emperors Augustus and Tiberius. Strabo's Geographica is considered one of the rare ancient scientific works in human history that have survived to modern times; it is not a historical narrative, but it gives a huge amount of useful quantified information about the known world at the beginning of the first millennium AD. It is also very important to note that we must not apply any later knowledge of the Romans about Europe and the World when reading Geographica.
As announced in the title of the paper, we investigate how the river Ister (Ἴστρος) is transformed from the historical Strabo's map of the World to the current map of the world.
We discovered the astonishing fact that Strabo's river Ister, or better said the river Ister in the times of Strabo, does not correspond to the river Danube on its entire course, but perfectly fits the nowadays Tauernbach-Isel-Drava-Danube course (or, simplified, the Drava-Danube course). This result is surprising but convincing, supported by carefully designed quantitative mathematical measurements: first, by computing the distance of the sources of the current Danube, Drava, Isel and Tauernbach rivers from the source of the transformed Strabo's Ister; further, by computing the Hausdorff distance of curves representing the respective river courses; and finally, by computing the common length of the compared river courses within prescribed narrow bands. From all these comparisons, and from Strabo's writing itself, it is clear that the current Tauernbach-Isel-Drava-Danube course gives the best correspondence with the Ister in Strabo's times. Moreover, this result removes several, if not all, contradictions which otherwise occur when carefully reading Geographica and its later translations, which used to consider the Ister to be the Danube river in its whole course, i.e. with the source in the Schwarzwald, Germany. In the sequel, we briefly mention just a few of the most important contradictions and their resolution given by our mathematical result.
Distances to the recess of Adriatic (Monfalcone). Left up: geodesic line distance from the source of Tauernbach in the High Tauern (175 km). Right up: geodesic distance from the source of the Tauernbach through the passable valleys (200 km). Left down: distance from the source of the Tauernbach by local roads (214 km). Right down: distance from the source of the Drava river by local roads (189 km). Upper images were created by the Google Earth application while bottom images by the Google Maps application.
The first, hardly explainable contradictions concern the location of the Ister source, its distance from the recess of the Adriatic and the direction of the Ister course itself, as described by Strabo in Book 7, Part 1, Section 1 of Geographica, which we denote by (7-1-1); other sections are denoted in the same manner. In (7-1-1) Strabo says:
3mm"Ἴστρος ... ῥέων πρὸς νότον κατ᾽ ἀρχάς, εἶτ᾽ ἐπιστρέφων εὐθὺς ἀπὸ τῆς δύσεως ἐπὶ τὴν ἀνατολὴν καὶ τὸν Πόντον. ἄρχεται μὲν οὖν ἀπὸ τῶν Γερμανικῶν ἄκρων τῶν ἑσπερίων, πλησίον δὲ καὶ τοῦ μυχοῦ τοῦ Ἀδριατικοῦ, διέχων αὐτοῦ περὶ χιλίους σταδίους".
First of all, at the end of the second sentence, Strabo clearly says that the distance of the Ister source from the recess of the Adriatic is about 1000 stadia (χιλίους σταδίους). The stadium corresponds to 177.7 - 197.3 m, see [30] page 33, so 1000 stadia amounts to roughly 178 - 197 km.
Our mathematical result, presented in the next sections of the paper, shows that the source of the Ister corresponds either to the source of the Drava in Sorgenti della Drava in the Val di Pusteria or to the Isel or Tauernbach sources in the High Tauern, see Figure <ref> (this Figure and some further ones were created with the help of the software [42]). We note that the source of the Isel river was in the past considered to lie approximately at the place of the source of the nowadays Tauernbach creek, and that such an interconnected stream was called the Isola flu on historical maps [3, 21], see Figure <ref>. But let us consider the nowadays situation and measure distances from the recess of the Adriatic, placed at Monfalcone close to the ancient Roman city Aquileia, to those three river sources. First, let us measure the distance to the Tauernbach source. We placed the Tauernbach source at the highest possible point below the Großvenediger (called the Windisch Taurn in Figure <ref>) on its north-east side. The direct geodesic distance (the shortest path on the Earth surface) to Monfalcone is about 175 km and the geodesic distance through the passable valleys is about 200 km, see Figure <ref> upper row. We also measured the distance by using the local roads to the nearest place to the source, in Schildalm, and we got a distance equal to 214 km, see Figure <ref> left bottom. With a high probability, such travel may fit very well with the way to those places in Strabo's time through the Alpine valleys. And we see that all these distances are in perfect agreement with Strabo's information about approximately 1000 stadia from the recess of the Adriatic! When we considered the distances by the local roads to the other two sources, of the Isel in Hinterbichl at the end of the valley just south of the Großvenediger, and of the Drava in the Val di Pusteria, we got 216 km and 189 km, respectively, see Figure <ref>. They are again very good estimates of 1000 stadia. This cannot happen in any case when considering the source of the Ister at the source of the Danube in the Schwarzwald, Germany, whose distance to the recess of the Adriatic is about 640 km, approximately 3500 stadia, highly exceeding the information in Strabo's Geographica.
Further, in the first part of the second sentence of the cited text, Strabo says that the Ister makes its beginning from the western highest summits (ἄκρων) of the "Germani" people. Indeed, the Tauernbach has its spring exactly between the Großvenediger and Großglockner, the two highest peaks of the High Tauern, see Figure <ref>, and of all the north-eastern Alps, thus it again perfectly fits Strabo's description. Here we note that Strabo also explicitly writes that "germani" means "genuine" (γνησίους) in the Roman language of his time. This is clearly stated in the translation of the last sentence of section (7-1-2), "γνήσιοι γὰρ οἱ Γερμανοὶ κατὰ τὴν Ῥωμαίων διάλεκτον", by Roller [30], and also from the further context of Geographica it is clear that in general Strabo uses the term "Germani" to denote all genuine people east of the Rhine (Ῥῆνος) and north and east of the Alps (including the north-eastern Alps themselves). The term "Germani" represents a much wider notion than the 19th-century and nowadays concept of Germans and their language.
All in all, from the above facts it is clear that the location of the Ister source in a close neighbourhood of the High Tauern is the only possibility consistently fulfilling both of Strabo's requirements - to be near the highest summits of the Alps east of the Rhine and to be about 1000 stadia from the recess of the Adriatic. Moreover, in the first sentence of the cited Greek text above, Strabo writes that the Ister first flows to the south and soon changes direction from the west to the east, up to the Black Sea. This also perfectly corresponds to the Tauernbach-Isel-Drava-Danube course: it flows about 40 km to the south and then at Lienz, after the confluence with the Drava, the course direction changes from the west to the east. None of these facts can be derived for the Danube river from its source in the Schwarzwald, Germany.
Detail of the map by Joan Blaeu from 1665 [3] where we see the High Tauern and the Isola flu corresponding to the Tauernbach-Isel stream. The Großvenediger corresponds to the Windisch Taurn and the Großglockner to the Kalser Taurn.
Detail of river courses of Rienza, Isarco and Adige (white-cyan-blue) and the river Ister by our result, corresponding to the Tauernbach-Isel-Drava-Danube course (red-orange-yellow), plotted thicker.
The second, very important contradiction between placing the Ister source in the Schwarzwald, Germany and Strabo's Geographica occurs in section (4-6-9). The whole Book 4, Part 6 (4-6) is devoted to a detailed description of the region of the Alps from Savona (Σαβάτα) in Liguria, Italy up to the Nanos plateau (Ὄκρᾳ) in Inner Carniola, Slovenia. Strabo's description of the Alps (Ἄλπεις, Ἄλπεια), many times also called Albia (Ἄλβια) in (4-6), first follows the direction from the south to the north, i.e. from Savona up to the Alpine part of the river Rhine.
Then the description turns to the east, see (4-6-8) and (4-6-9), and going very consistently through the countries (even nowadays federal states) and mountainous regions of the Ellvettians (Switzerland - Swiss - Helvetica), Boians (Bavaria - Bayern), Rhaetians (Tyrol and South Tyrol), Noricians (Salzburg and Upper Austria), Tauriskians (Styria - Steiermark) up to the Karnians (Carinthia - Kärnten and Carniola - Krain), not far from the recess of the Adriatic, above the territory of the Karnians, Strabo arrives at the mountainous places where the source of the Ister is located. In this part of section (4-6-9), Strabo describes the river Isaras (Ἰσάρας) which, after joining with the river Atagis (Ἄταγις), empties into the Adriatic. Clearly, Atagis corresponds to the river Adige and Isaras corresponds to the nowadays river Isarco - or most probably - to the course of the Isarco continued at Bressanone upstream by its (larger) tributary Rienza, stemming from the Dolomites and flowing through the Val di Pusteria, see Figure <ref>. And in these places also the Ister (Ἴστρος) takes its beginning, Strabo says explicitly. It is very clear that all the sources, of the Tauernbach, Isel or Drava, fulfil this geographic requirement. On the other hand, placing the source of the Ister in the Schwarzwald, Germany cannot in any way resolve such a geographic situation, with the Ister flowing from its source to the Black Sea and the neighbouring rivers flowing to the Adriatic. We see that our mathematical result brings a straightforward solution to this tedious and long-lasting controversy which caused many troubles in translations of Geographica, even leading to exchanging the names of the rivers in Section (4-6-9) compared to the original, see [19] where such possible "purposeful" changes were only indicated in brackets in the Latin translation and [20] where such changes of river names were even performed in the Greek text.
Just as a curiosity we mention that our result also simply solves the otherwise unexplainable paradox in the voyage of the Argonauts, where, by Apollonius of Rhodes (3rd century BC) in his epic poem Argonautica, "the Argonauts had been obliged to abandon their regular course from Colchis homeward, and had gone from Euxine Sea (Black Sea) up the Ister and then passing down the other branch of that river, they had entered into Adriatic" [13]. By our result this is possible: the Argonauts could follow the Danube-Drava upstream up to the Val di Pusteria and continue to the Adriatic by the stream of Rienza-Isarco-Adige. The stream of the Rienza is only 3 km away from the source of the Drava in the Val di Pusteria, where the watershed between the Black Sea and the Adriatic is located, see Figure <ref>. Of course, we do not want to claim that the Argonauts made such a voyage :-) but to explain why "such story was accepted even by so able geographer as Eratosthenes who seems to have been a firm believer in the reality of the Argonautic voyage" [13]. There is no controversy, and it may indicate that Eratosthenes was aware of the watershed in this Alpine region. Moreover, if we adopt the hypothesis that the notion of the Ister evolved in time as the Greeks and Romans explored Europe between the Adriatic and the Black Sea from the south, we see also a further "earlier" possibility of the Argonauts' voyage, following upstream the Danube-Sava course up to Zelenci and continuing near Tarvisio to the Adriatic by the Fella-Tagliamento stream.
After the above explanations resolving the main geographical controversies that appear when reading Geographica and its later translations, let us begin our mathematical story.
It will be presented with all mathematical and computational details in Sections 2 and 3. Moreover, since we have to shift the source of the Ister from the Schwarzwald to the High Tauern and the upper course of the Ister from the upper Danube to the Drava course, many historical facts from the beginning of the first millennium should be "shifted" as well. This opens many questions, which will be discussed in Section 4. The paper is finished by our conclusions in Section 5.
§ LOCALLY AFFINE GLOBALLY LAPLACE (LAGL) MAP TRANSFORMATION
To register the map $M_1$ to map $M_2$, we use an affine transformation.
In general, an affine transformation is given by the formula
\begin{equation}
\mathbf{y}=\bm{A}\,\mathbf{x}+\mathbf{b},
\end{equation}
where $\mathbf{x}=\left(x_1,x_2\right)$ is a point on the map $M_1$, $\mathbf{y}=\left(y_1,y_2\right)$ is a point on the map $M_2$, and
\begin{equation}
\bm{A}=\begin{pmatrix}
a_1 & a_2 \\
a_3 & a_4
\end{pmatrix}
\end{equation}
is a $2 \times 2$ matrix and
\begin{equation}
\mathbf{b}= \begin{pmatrix}
b_1 \\
b_2
\end{pmatrix}
\end{equation}
is a translation vector. For simplicity, we can write the affine transformation componentwise as follows
\begin{equation}
\begin{aligned}
y_1 &= a_1 x_1 + a_2 x_2 + b_1, \\
y_2 &= a_3 x_1 + a_4 x_2 + b_2.
\end{aligned}
\end{equation}
Our goal is to find the matrix $\bm{A}$ and the vector $\mathbf{b}$ such that
\begin{equation}
\left|\mathbf{y}-\left(\bm{A}\,\mathbf{x}+\mathbf{b}\right)\right|^2
\label{eq:error_func}
\end{equation}
is minimal for a chosen set of corresponding points $\mathbf{x} \in M_1$ and $\mathbf{y} \in M_2$. Let us have corresponding points $\mathbf{x_1},\dots,\mathbf{x_n}$ and $\mathbf{y_1},\dots,\mathbf{y_n}$, respectively. Such minimization for all corresponding points is equivalent to minimizing
\begin{equation}
\underset{i=1}{\overset{n}\sum}\left(\left(y_{i_1}-a_1x_{i_1}-a_2x_{i_2}-b_1\right)^2+
\left(y_{i_2}-a_3x_{i_1}-a_4x_{i_2}-b_2\right)^2
\right)\label{eq:ucel_fcia}
\end{equation}
with respect to the matrix and translation vector elements. In order to minimize (<ref>) we compute the derivatives with respect to $a_1,a_2,b_1,a_3,a_4$ and $b_2$, and set them to $0$. For example, for the element $a_1$ we get
\begin{equation}
\underset{i=1}{\overset{n}\sum}2\left(y_{i_1}-a_1x_{i_1}-a_2x_{i_2}-b_1\right)\left(-x_{i_1}\right)=0 \label{eq:deriv_ucel_fcia}
\end{equation}
and similarly for other elements. Equation (<ref>) can be written in the form
\begin{equation}
a_1\sum_{i=1}^{n} x_{i_1}^2 + a_2\sum_{i=1}^{n} x_{i_1}x_{i_2} + b_1\sum_{i=1}^{n} x_{i_1} = \sum_{i=1}^{n} y_{i_1}x_{i_1},
\end{equation}
and from there we can see that for all elements we get the system of linear equations
\begin{equation}
\begin{bmatrix}
\sum x_{i_1}^2 & \sum x_{i_1} x_{i_2} & \sum x_{i_1} & 0 & 0 & 0\\
\sum x_{i_1} x_{i_2} & \sum x_{i_2}^2 & \sum x_{i_2} & 0 & 0 & 0\\
\sum x_{i_1} & \sum x_{i_2} & n & 0 & 0 & 0\\
0 & 0 & 0 & \sum x_{i_1}^2 & \sum x_{i_1} x_{i_2} & \sum x_{i_1}\\
0 & 0 & 0 & \sum x_{i_1} x_{i_2} & \sum x_{i_2}^2 & \sum x_{i_2}\\
0 & 0 & 0 & \sum x_{i_1} & \sum x_{i_2} & n
\end{bmatrix}
\begin{bmatrix}
a_1\\ a_2\\ b_1\\ a_3\\ a_4\\ b_2
\end{bmatrix}
=
\begin{bmatrix}
\sum y_{i_1} x_{i_1}\\
\sum y_{i_1} x_{i_2}\\
\sum y_{i_1}\\
\sum y_{i_2} x_{i_1}\\
\sum y_{i_2} x_{i_2}\\
\sum y_{i_2}
\end{bmatrix},
\label{eq:transf_matrix}
\end{equation}
where all sums run over $i=1,\dots,n$.
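To make this step concrete, here is a minimal Python sketch (our illustration, not the authors' code; the helper name fit_affine is ours) that fits the optimal affine transformation to corresponding points with np.linalg.lstsq, which minimizes the same sum of squared residuals as the normal equations above.

```python
import numpy as np

def fit_affine(X, Y):
    """Least-squares affine map y = A x + b from corresponding points.

    X, Y: (n, 2) arrays of corresponding points on maps M1 and M2.
    """
    n = X.shape[0]
    # Design matrix [x1 x2 1]; lstsq solves the same least-squares
    # problem as the 6x6 normal equations written out above.
    D = np.hstack([X, np.ones((n, 1))])
    coef, *_ = np.linalg.lstsq(D, Y, rcond=None)
    # Column 0 of coef holds (a1, a2, b1), column 1 holds (a3, a4, b2).
    A = coef[:2, :].T
    b = coef[2, :]
    return A, b

# Tiny self-check with a known transformation: recovers A = 2*I, b = (1, -1).
X = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
Y = 2.0 * X + np.array([1.0, -1.0])
A, b = fit_affine(X, Y)
```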
Regarding our application of transforming the Strabo's map of the World to the current map in WGS, we now present the affine transformations $T^{p_k}$ found by (<ref>) for the selected corresponding points sets $p_k$, $k=1,2,3$, with the corresponding points given at the Adriatic coast ($k=1$), in the Greece and Albania region ($k=2$) and at the Black Sea coast ($k=3$); for more geographic information about the corresponding points sets see the beginning of Section <ref>. The accuracy of the affine transformation $T^{p_k}$ on the corresponding points set $p_j=\{\mathbf{x}_i,\mathbf{y}_i;\, i=1,\dots,n_{p_j}\}$ is measured by the mean error
\begin{equation}
\varepsilon_{mean}^{p_j}\left(T^{p_k}\right) = \sqrt{\frac{1}{n_{p_j}}\sum_{\mathbf{x}_i \in p_j} D_E\left(\mathbf{y}_i, T^{p_k}\left(\mathbf{x}_i\right)\right)^2},
\end{equation}
where $D_E\left(\mathbf{y}_i,T^{p_k}\left(\mathbf{x}_i\right)\right)$ is the geodesic distance between the point $\mathbf{y}_i$ and the transformed point $T^{p_k}\left(\mathbf{x}_i\right)$, computed by the GeographicLib::Geodesic class [15]. We also measure the maximal error of the transformation by
\begin{equation}
\varepsilon_{max}^{p_j}\left(T^{p_k}\right) = \max_{\mathbf{x}_i \in p_j} D_E\left(\mathbf{y}_i,T^{p_k}\left(\mathbf{x}_i\right)\right).
\end{equation}
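As an illustration of this error measurement, here is a hedged sketch using the Python port of GeographicLib (the paper uses the C++ GeographicLib::Geodesic class); the function name transformation_errors and the (lat, lon)-in-degrees convention are our assumptions.

```python
import numpy as np
from geographiclib.geodesic import Geodesic

def transformation_errors(Y, TX):
    """Mean and maximal geodesic errors of a map transformation.

    Y, TX: (n, 2) arrays of (lat, lon) in degrees: the target points y_i
    and the transformed points T(x_i) on the WGS84 ellipsoid.
    """
    d = np.array([Geodesic.WGS84.Inverse(y[0], y[1], t[0], t[1])["s12"]
                  for y, t in zip(Y, TX)])          # distances in metres
    eps_mean = np.sqrt(np.mean(d ** 2)) / 1000.0    # root mean square, in km
    eps_max = d.max() / 1000.0                      # maximal error, in km
    return eps_mean, eps_max
```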
The mean errors of the transformation $T^{p_k}$ for the corresponding points sets $p_j$ (in km).

k | $p_k$ - region | $\varepsilon_{mean}^{p_1}\left(T^{p_k}\right)$ | $\varepsilon_{mean}^{p_2}\left(T^{p_k}\right)$ | $\varepsilon_{mean}^{p_3}\left(T^{p_k}\right)$
1 | Adriatic coast | 7.527 | 256.856 | 723.619
2 | Greece and Albania | 197.655 | 24.259 | 126.697
3 | Black Sea coast | 155.672 | 189.790 | 22.981
The maximal errors of the transformation $T^{p_k}$ for the corresponding points sets $p_j$ (in km).

k | $p_k$ - region | $\varepsilon_{max}^{p_1}\left(T^{p_k}\right)$ | $\varepsilon_{max}^{p_2}\left(T^{p_k}\right)$ | $\varepsilon_{max}^{p_3}\left(T^{p_k}\right)$
1 | Adriatic coast | 12.201 | 383.310 | 917.010
2 | Greece and Albania | 235.104 | 32.042 | 178.451
3 | Black Sea coast | 198.112 | 232.691 | 35.989
The mean and maximal errors of the transformation $T^{p_G}$ computed by using the global corresponding points set $p_G = p_1 \cup p_2 \cup p_3$. In the first three lines, the errors of $T^{p_G}$ for the corresponding points just from the sets $p_k$, $k=1,2,3$, are presented. Comparing these errors with the errors on the diagonals of Tables <ref> - <ref> we see the error increase. The last line presents the errors of $T^{p_G}$ for all corresponding points in the global set $p_G$.

k | $p_k$ - region | $\varepsilon_{mean}^{p_k}\left(T^{p_G}\right)$ [km] | $\varepsilon_{max}^{p_k}\left(T^{p_G}\right)$ [km]
1 | Adriatic coast | 45.730 | 65.453
2 | Greece and Albania | 62.224 | 92.509
3 | Black Sea coast | 43.519 | 79.303
G | global | 52.543 | 92.509
The transformations presented in Tables <ref> - <ref> come with acceptable errors for the corresponding points sets $p_k$ if the same corresponding points are also used for finding the transformation $T^{p_k}$, see the diagonals of the tables. Unfortunately, the error rises dramatically for the corresponding points not used for finding the optimal transformation, see the off-diagonal entries in the tables and Figures <ref> - <ref>. Thus these local transformations are not suitable for transforming farther points not included in the minimization procedure. Of course, one can create a common global corresponding points set $p_G = p_1 \cup p_2 \cup p_3$ and find by (<ref>) a common global transformation $T^{p_G}$. Using such an optimal affine transformation $T^{p_G}$ we get visually better results, see Fig <ref>. However, the errors $\varepsilon_{mean}^{p_k}\left(T^{p_G}\right)$ and $\varepsilon_{max}^{p_k}\left(T^{p_G}\right)$, $k=1,2,3$, see Table <ref>, increase significantly compared to the corresponding errors on the diagonals of Tables <ref> - <ref>. Also visually we see quite large differences compared to the local transformations. For example, for the Adriatic coast region the mean error rises from 7.5 km to 45.7 km and the maximal error from 12.2 km to 65.4 km, which is expressed visually in Figures <ref> and <ref>. A similar result is observed near Istanbul and in other localities as well. Since we want to keep the accuracy of the local affine transformations in the neighbourhood of the selected polygonal regions and not to pollute the transformations by the globally increasing errors, we arrive at the following idea.
Optimal local affine transformation $T^{p_1}$ obtained by using the corresponding points set $p_1$ given on the Adriatic coast. The top images show the corresponding points, on the left in red on the Strabo's map and on the right in green on the map in WGS. The bottom image shows the transformed points of the set $p_1$ in red, the transformed points of the sets $p_2$ and $p_3$ in orange and their corresponding points in WGS in green. One can see that the points on the Adriatic coast are transformed accurately while the points in other regions are distant from their corresponding points, see also the first rows of Tables <ref> - <ref>.
Optimal local affine transformation $T^{p_2}$ obtained by using the corresponding points set $p_2$ given in the Greece and Albania region. The top images show the corresponding points, on the left in red on the Strabo's map and on the right in green on the map in WGS. The bottom image shows the transformed points of the set $p_2$ in red, the transformed points of the sets $p_1$ and $p_3$ in orange and their corresponding points in WGS in green. One can see that the points in the Greece and Albania region are transformed accurately while the points in other regions are distant from their corresponding points, see also the second rows of Tables <ref> - <ref>.
Optimal local affine transformation $T^{p_3}$ obtained by using the corresponding points set $p_3$ given on the Black Sea coast. The top images show the corresponding points, on the left in red on the Strabo's map and on the right in green on the map in WGS. The bottom image shows the transformed points of the set $p_3$ in red, the transformed points of the sets $p_1$ and $p_2$ in orange and their corresponding points in WGS in green. One can see that the points on the Black Sea coast are transformed accurately while the points in other regions are distant from their corresponding points, see also the third rows of Tables <ref> - <ref>.
Optimal global affine transformation $T^{p_G}$ obtained by using the corresponding points set $p_G=p_1 \cup p_2 \cup p_3$. The image shows the transformed points from the Strabo's map in red and their corresponding points in WGS in green. One can see that all the points are transformed relatively closely. However, in the previous local transformations, see Figures <ref> - <ref>, the points from the sets used for finding the optimal local affine transformations were transformed more accurately, see also Tables <ref> - <ref>.
By using the points $\mathbf{x}_i \in p_k$ we create a polygon, denoted again without any confusion by $p_k$; this is done for all $k=1,\dots,n_p$, where $n_p$ is the number of corresponding points sets and thus also of the created polygons. At all points of the map $M_1$ which lie inside a polygon $p_k$, we set the parameters of the affine transformation to the values of the locally optimal affine transformation $T^{p_k}$ found by (<ref>) using the corresponding points set $p_k$. For all other points of the map $M_1$ we use an interpolation/extrapolation approach. In the 1D case, when we want to interpolate between two given function values, the natural approach is linear interpolation, i.e. connecting the two given values by a straight line. However, this is not so straightforward in higher dimensions. Fortunately, there exists an analogy of linear interpolation in higher dimensions, given by a solution of the Laplace equation with Dirichlet boundary conditions prescribed on the boundary of the interpolation domain. It is widely used in data processing, such as filling in the missing parts of photographs or other image inpainting problems. However, in our case we have not only to interpolate the values to the regions between the polygons but also to extrapolate these values to the whole map $M_1$. To that goal, we suggest the following mathematical model. Let us consider the Laplace equation
\begin{equation}
- \Delta u\left(\mathbf{x}\right) =0
\label{eq:laplace}
\end{equation}
where the solution $u$ represents any one of the transformation matrix and translation vector elements $a_1,a_2,b_1,a_3,a_4,b_2$, together with the Dirichlet conditions prescribed in the polygons $p_k$. The Laplace equation with such Dirichlet conditions is solved in a domain $\Omega$, and zero Neumann boundary conditions are prescribed on its boundary $\partial\Omega$. The domain $\Omega$ is chosen as a rectangular subset of the map $M_1$, see e.g. the pictures in Figure <ref>, where we choose as the domain $\Omega$ a rectangle surrounding Europe on the Strabo's map of the World. We note that the minus sign in equation (<ref>) is chosen to make the operator on the left-hand side positive and the arising system matrix positive definite.
To discretize the partial differential equation (<ref>) in the domain $\Omega$ we use the finite difference method on a uniform grid with the grid size 1, see Fig. <ref>. The grid nodes $\mathbf{x}_{i,j}, i=1,\dots,N_1, j=1,\dots,N_2$ correspond to centers of pixels in the map $M_1$. The Dirichlet conditions are prescribed in $E(p_k)$, the outer discrete envelope of the polygons $p_k$, see Fig. <ref> case A.
In other grid nodes except the boundary, case B in Fig. <ref>, the Laplace operator in equation (<ref>) is approximated by
\begin{align}
\Delta u\left(\mathbf{x}\right) & = \frac{\partial^2u}{\partial{x_1}^2 }\left(\mathbf{x}\right)+ \frac{\partial^2u}{\partial{x_2}^2 }\left(\mathbf{x}\right)\\
& \approx \left(u_{i-1,j}-2u_{i,j}+u_{i+1,j}\right)
+ \left(u_{i,j-1}-2u_{i,j}+u_{i,j+1}\right)\\
& = u_{i-1,j}+u_{i+1,j}+u_{i,j-1}+u_{i,j+1}-4u_{i,j},
\end{align}
where $u_{i,j}$ is an approximate value of $u$ at the grid node $\mathbf{x}_{i,j}$. Such an approximation needs to be adjusted for the grid points on the boundary $\partial\Omega$. To approximate the zero Neumann boundary condition we use the reflection of values along the boundary, e.g. in case J in Fig. <ref> we set $u_{i-1,j}=u_{i+1,j}$, which leads to the following approximation on that part of the boundary
\begin{align}
\Delta u\left(\mathbf{x}\right) &\approx \left(u_{i+1,j}-2u_{i,j}+u_{i+1,j}\right)
+ \left(u_{i,j-1}-2u_{i,j}+u_{i,j+1}\right)\\
& = 2u_{i+1,j}+u_{i,j-1}+u_{i,j+1}-4u_{i,j},
\end{align}
and it is done similarly for other boundary grid nodes.
We summarize all discrete equations, for cases A-J, representing the numerical discretization of our model for the LAGL map transformation as follows:
A: $\mathbf{x}_{i,j} \in E(p_k)$: $u_{i,j} = v\left(p_k\right)$, where $v$ is any of the elements of $\bm{A}$ and $\mathbf{b}$ given by the locally optimal affine transformation $T^{p_k}$ found by (<ref>),
B: $\mathbf{x}_{i,j} \not\in E(p_k) \;\land\; \mathbf{x}_{i,j} \not\in \partial\Omega$: $-u_{i-1,j}-u_{i,j-1} + 4u_{i,j} - u_{i+1,j} - u_{i,j+1} = 0$,
C: $\mathbf{x}_{i,j} \not\in E(p_k) \;\land\; i=1, j=1$: $4u_{i,j}-2u_{i+1,j} - 2u_{i,j+1}=0$,
D: $\mathbf{x}_{i,j} \not\in E(p_k) \;\land\; i=1, j=N_2$: $- 2u_{i,j-1}+ 4u_{i,j} -2u_{i+1,j}=0$,
E: $\mathbf{x}_{i,j} \not\in E(p_k) \;\land\; i=N_1, j=1$: $-2u_{i-1,j}+ 4u_{i,j} - 2u_{i,j+1}=0$,
F: $\mathbf{x}_{i,j} \not\in E(p_k) \;\land\; i=N_1, j=N_2$: $-2u_{i-1,j} - 2u_{i,j-1}+ 4u_{i,j}=0$,
G: $\mathbf{x}_{i,j} \not\in E(p_k) \;\land\; i=N_1, j=2,\dots,N_2-1$: $-2u_{i-1,j}-u_{i,j-1} + 4u_{i,j} - u_{i,j+1}=0$,
H: $\mathbf{x}_{i,j} \not\in E(p_k) \;\land\; i=2,\dots,N_1-1, j=N_2$: $-u_{i-1,j} - 2u_{i,j-1}+ 4u_{i,j}-u_{i+1,j}=0$,
I: $\mathbf{x}_{i,j} \not\in E(p_k) \;\land\; i=2,\dots,N_1-1, j=1$: $-u_{i-1,j} + 4u_{i,j}-u_{i+1,j} - 2u_{i,j+1}=0$,
J: $\mathbf{x}_{i,j} \not\in E(p_k) \;\land\; i=1, j=2,\dots,N_2-1$: $-u_{i,j-1} + 4u_{i,j} -2u_{i+1,j}- u_{i,j+1}=0$.
For solving the above system of equations we use the Eigen::SparseLU class of [11], which solves linear systems with sparse matrices directly and efficiently by LU decomposition.
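For readers who prefer Python, the following sketch assembles and directly solves the same sparse system A-J with SciPy's sparse solver (playing the role of Eigen::SparseLU in the authors' C++ code); the function lagl_interpolate and its mask/value inputs are our illustrative assumptions, solving for one of the six parameters at a time.

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import spsolve

def lagl_interpolate(dirichlet_mask, dirichlet_values, N1, N2):
    """Interpolate/extrapolate one transformation parameter over the grid.

    dirichlet_mask: (N1, N2) bool array, True in the envelopes E(p_k).
    dirichlet_values: (N1, N2) array with the locally optimal parameter
    value (one of a1, a2, b1, a3, a4, b2) where the mask is True.
    Solves the discrete Laplace equation with zero Neumann conditions
    on the grid boundary, mirroring the discrete cases A-J above.
    """
    idx = lambda i, j: i * N2 + j
    rows, cols, vals = [], [], []
    rhs = np.zeros(N1 * N2)
    for i in range(N1):
        for j in range(N2):
            k = idx(i, j)
            if dirichlet_mask[i, j]:                  # case A: identity row
                rows.append(k); cols.append(k); vals.append(1.0)
                rhs[k] = dirichlet_values[i, j]
                continue
            rows.append(k); cols.append(k); vals.append(4.0)
            # Neumann reflection: a neighbour outside the grid is replaced
            # by the inner one, whose coefficient then doubles (cases C-J).
            for di, dj in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                ii, jj = i + di, j + dj
                if ii < 0: ii = i + 1
                if ii >= N1: ii = i - 1
                if jj < 0: jj = j + 1
                if jj >= N2: jj = j - 1
                rows.append(k); cols.append(idx(ii, jj)); vals.append(-1.0)
    # Duplicate (row, col) entries are summed, producing the -2 weights.
    M = sp.csr_matrix((vals, (rows, cols)), shape=(N1 * N2, N1 * N2))
    return spsolve(M, rhs).reshape(N1, N2)
```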
The solution of the linear system of equations gives us the desired result: the locally optimal affine transformations smoothly interpolated/extrapolated from the polygons $p_k$ to every point of the (historical) map $M_1$. As we see in Figure <ref>, we reproduce the optimal errors (the diagonals in Tables <ref> - <ref>) of all local affine transformations thanks to the Dirichlet conditions in the polygons $p_k$. These optimal values are smoothly interpolated/extrapolated to the whole historical map $M_1$, as seen in Fig. <ref>. Such a locally optimal, smoothly varying transformation can be used to transform any point from the historical map $M_1$ to the current map $M_2$.
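A possible final step, again only a sketch under our assumed data layout: having the six parameter fields from lagl_interpolate above, the spatially varying affine map is evaluated at the grid node nearest to a point of $M_1$ (the grid nodes coincide with pixel centers in the text).

```python
import numpy as np

def apply_lagl(params, x):
    """Apply the smoothly varying affine transformation at point x = (i, j).

    params: dict with six (N1, N2) arrays "a1", "a2", "b1", "a3", "a4", "b2",
    each computed by lagl_interpolate above (our assumed layout).
    """
    i, j = int(round(x[0])), int(round(x[1]))   # nearest grid node / pixel
    A = np.array([[params["a1"][i, j], params["a2"][i, j]],
                  [params["a3"][i, j], params["a4"][i, j]]])
    b = np.array([params["b1"][i, j], params["b2"][i, j]])
    return A @ np.asarray(x, dtype=float) + b
```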
Illustration of the discretization of the computational domain $\Omega$ and approximation of our mathematical model in cases $\rm{A}-\rm{J}$. By pink color we plot the grid nodes in the outer discrete envelope $E(p_k)$ of the polygon $p_k$ (plotted in red) where we consider Dirichlet conditions. In case B we consider the standard approximation of the Laplace equation while in cases $\rm{C}-\rm{J}$ we consider its adjustment at the boundary $\partial \Omega$.
Example of a transformation found by using the LAGL method. The top image illustrates the result for the corresponding points sets $p_1$, $p_2$ and $p_3$, which are transformed as accurately as by their locally optimal affine transformations $T^{p_1}$, $T^{p_2}$ and $T^{p_3}$, see Figures <ref> - <ref>.
Example of transformation found by using the LAGL method.
The images show one of the transformation parameters computed by the LAGL method and presented in Figure <ref>. The upper image is plotted with the texture of the Strabo's map of the World, the lower image is plotted without the texture emphasizing the smooth transition of locally optimal affine transformations.
§ STRABO'S ISTER (ἼΣΤΡΟΣ) TRANSFORMATION
Now we present the application of the LAGL method to the transformation of the river Ister from the Strabo's map of the World to the current map in WGS. The corresponding points on Strabo's map are chosen in the east, south and west directions from the Ister river on the Strabo's map of the World, and in such a way that they are reliably identifiable also on the current map of the world. It is clearly observable that the Strabo's map of the World by Karl Müller is less realistic in the north-western and northern directions from the Alps than in other directions. But this fact correctly respects the uncertainty of Strabo's description of this part of Europe, and Karl Müller's map does not introduce any artificial information of the kind that can be found in other 19th-century map reconstructions of the Geographica. While southern Europe, west of the Alps up to Lyon (Λούγδουνον) and east of the Alps up to the Ister, is described in Geographica with a lot of quantitative geographic details including distances, the northern part contains almost no such information. Almost the only quantitative information is what Strabo says about the length of the Rhine: "Asinius (historian) says that length of the river (Rhine) is six thousand stadia, but it is not, for it would be only a little more than half that in a straight line, and adding a thousand would be sufficient for bending.", see [30], section (4-3-3). The estimate of six thousand stadia (by Asinius) is however the right one, since the current length of the Rhine is about 1230 km, but Strabo did not accept it and almost halved it. This shows how uncertain the knowledge about the river Rhine on the northern plains was in the time of Strabo, and due to that the Rhine is also much shorter on Karl Müller's map than in reality. About the northern part of Europe Strabo also writes that the region "near the Ocean is totally unknown to us", see [30], section (7-2-4), and by the "Ocean" he means the nowadays North and Baltic Seas. That is the reason why Karl Müller plotted the northern border of Europe (and of the World) with a dashed line. Due to the above reasons, we are not allowed to choose corresponding points north-west and north of the Ister.
Having all these requirements in mind, we defined the first three sets of corresponding points on the Mediterranean and Black Sea coasts - in the south-west, south and east directions from the Strabo's Ister, see Figures <ref> - <ref> and Figure <ref>. Since the Mediterranean world up to the Ister river and along it, as well as the regions along the Black Sea, were already well known to the Greeks and Romans in the time of Strabo, the chosen points are reliable regarding location, distances and directions. The first three corresponding points sets defined below fulfil all the above assumptions, but we also created a fourth one containing points on the Italian coastline and in the Alps. The points in the Alps - the sources of the Rhone (Ῥοδανὸς) and Rhine rivers seen on the Strabo's map - carry some uncertainty in identifying them correctly on the current map. We put the "source" of the Rhine below the Rheinwaldhorn, since in section (4-6-6) Strabo writes that it flows from the mount Adula (Ἀδούλας), and the "source" of the Rhone at Lake Geneva (lac Léman), because in (4-6-6) Strabo writes that it issues from Λημέννα λίμνη. We use this fourth corresponding points set mainly to show that considering the western Alpine region, although a bit uncertain, does not change the results significantly. At this place, we would also like to note (probably the only) discrepancy between Karl Müller's map and Strabo's descriptions in Geographica. It is the sketch of the Alps; namely, the "second ridge" should encompass the sources of all of the Ister, the Rhine and the Rhone, because all these river sources are within the Alps by Strabo. We think that Karl Müller's sketch of the "second ridge" is related to the 19th - 20th-century standard assumption of the correspondence of the Strabo's Ister and the nowadays Danube river.
Here are the corresponding points sets:
Adriatic coast region:
* south of Istria (Premantura)
* Opatija
* Jablanac
* Split area (Ražanj)
Black Sea coast region:
* Istanbul (north of Bosporus)
* mouth of Dniester (Zatoka)
* cape of Tendrivska gulf
* cape of Dzharylhatska gulf
* Kerč
Greece and Albania region:
* Vlorë
* Koufasaratsia
* north-east cape of Crete (Kyriamadi)
* cape of Kassandra peninsula
* Thessaloniki
Alps and Italy region:
* Ancona (Conero)
* cape south of Venice
* Trieste
* "source" of Rhine (Rheinwaldhorn)
* "source" of Rhone (Lake Geneva)
In the numerical experiments presented in this section we vary the following combinations of the above-defined regions:
Experiment 3.1: Adriatic coast, Black Sea coast regions,
Experiment 3.2: Adriatic coast, Greece and Albania, Black Sea coast regions,
Experiment 3.3: Adriatic coast, Black Sea coast, Alps and Italy regions.
We will use the following abbreviations:
* D - Danube
* D1 - the Danube from its source up to the confluence with the Drava
* D2 - the Danube from the confluence with the Drava up to the outlet into the Black Sea
* DD - the interconnected courses of the Drava and the Danube
* TID - the courses of the Tauernbach, Isel and Drava up to the confluence with the Danube
* TIDD - the interconnected courses of the Tauernbach, Isel, Drava and Danube
* I1 - the transformed Ister from its source up to the intersection with the Danube
* I2 - the transformed Ister from the intersection with the Danube up to the outlet into the Black Sea
To evaluate the results of the transformations, we compute the maximal and the mean Hausdorff distances (defined below) of two discrete curves - one representing the real river course, precisely digitized on the current map, and one representing the discrete transformed Ister from the Strabo's map on the current map. To get the transformed Ister, first the Ister on the Strabo's map, see Figure <ref>, was digitized into a contiguous pixel set and then the center of every pixel was transformed to the current map by the LAGL map transformation. All necessary distances on the current map in WGS are computed by means of the GeographicLib::Geodesic class [15]. In Figures <ref> - <ref>, the cyan curve always represents the transformed Strabo's Ister, while the white curve represents the Danube river, the yellow curve the Drava river, the orange curve the Isel river and the red curve the Tauernbach. We also measure the length over which two curves match within a prescribed narrow band by the so-called matching length defined below.
Let us have a discrete curve $A=\left\{\mathbf{a}_1,\dots,\mathbf{a}_{n_A}\right\}$. By using the points $\mathbf{a}_i, i=1,\dots, n_A$, we create the piecewise linear segments as follows
\begin{align}
\widehat{\mathbf{a}}_1 &= \overline{\mathbf{a}_1,\tfrac{\mathbf{a}_1+\mathbf{a}_2}{2}},\\
\widehat{\mathbf{a}}_i &= \overline{\tfrac{\mathbf{a}_{i-1}+\mathbf{a}_{i}}{2},\mathbf{a}_i}\,
\cup\,
\overline{\mathbf{a}_i,\tfrac{\mathbf{a}_{i}+\mathbf{a}_{i+1}}{2}}, \quad i=2,\dots,n_A-1,\\
\widehat{\mathbf{a}}_{n_A} &= \overline{\tfrac{\mathbf{a}_{n_A-1}+\mathbf{a}_{n_A}}{2},\mathbf{a}_{n_A}},
\end{align}
where $\overline{\mathbf{u},\mathbf{v}}$ represents the line segment connecting points $\mathbf{u}$ and $\mathbf{v}$. Let $\widehat{a}_i=\left|\widehat{\mathbf{a}}_i\right|$ be the length of the piecewise linear segment and let $L_A$ be the length of the overall discrete curve $A$, given by the sum of the segment lengths.
The so-called directed mean Hausdorff distance $\overline{\delta}_{H}(A,B) $ of two discrete curves $A=\left\{\mathbf{a}_1,\dots,\mathbf{a}_{n_A}\right\}$ and $B=\left\{\mathbf{b}_1,\dots,\mathbf{b}_{n_B}\right\}$ with segments
$\widehat{A}=\left\{\widehat{\mathbf{a}}_1,\dots,\widehat{\mathbf{a}}_{n_A}\right\}$ and $\widehat{B}=\left\{\widehat{\mathbf{b}}_1,\dots,\widehat{\mathbf{b}}_{n_B}\right\}$ is given by
\begin{equation}
\overline{\delta}_{H}(A,B) =\frac{1}{L_A}
\underset{i=1}{\overset{n_A}{\sum}}\widehat{a}_i\underset{\widehat{\mathbf{b}}_j\in \widehat{B}}\min \;D_E(\mathbf{a}_i,\widehat{\mathbf{b}}_j),
\end{equation}
where $D_E(\mathbf{a}_i,\widehat{\mathbf{b}}_j)$ is the geodesic distance of the point $\mathbf{a}_i$ and the segment $\widehat{\mathbf{b}}_j\in \widehat{B}$.
Then the mean Hausdorff distance $\overline{\text{d}}_{H}(A,B) $ is given by the following formula
\begin{equation}
\overline{\text{d}}_{H}(A,B) = \frac{L_A\overline{\delta}_{H}(A,B)+L_B\overline{\delta}_{H}(B,A)}{L_A+L_B}.
\end{equation}
The so-called directed maximal Hausdorff distance $\delta_{H}(A,B)$ is given by
\begin{equation}
\delta_{H}(A,B) =
\max_{\mathbf{a}_i\in A}\,
\min_{\widehat{\mathbf{b}}_j\in \widehat{B}}
D_E(\mathbf{a}_i,\widehat{\mathbf{b}}_j)
\end{equation}
and then the maximal Hausdorff distance $\text{d}_{H}(A,B)$ is given by
\begin{equation}
\text{d}_{H}(A,B) =
\max
\left\{
\delta_{H}\left(A,B\right),\delta_{H}\left(B,A\right)\right\}.
\end{equation}
Finally, the sum of length of segments $\widehat{a}_i \in \widehat{A}$ within the given threshold distance $d_t$ from the segments of the curve $B$ gives the matching length $L_{m}\left(A,B,d_t\right)$ by
\begin{equation}
L_{m}\left(A,B,d_t\right) =
\sum_{\substack{\widehat{\mathbf{a}}_i\in \widehat{A}\,:\; \min\limits_{\widehat{\mathbf{b}}_j \in \widehat{B}} D_E(\mathbf{a}_i,\widehat{\mathbf{b}}_j)<d_t}}\widehat{a}_i.
\end{equation}
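A simplified Python sketch of these curve metrics follows; it uses the planar point-to-segment distance as a stand-in for the geodesic $D_E$ and treats consecutive point pairs of $B$ as its segments, so it only approximates the half-segment construction above.

```python
import numpy as np

def point_segment_dist(p, s0, s1):
    """Planar point-to-segment distance (a stand-in for the geodesic D_E)."""
    v, w = s1 - s0, p - s0
    t = np.clip(np.dot(w, v) / max(np.dot(v, v), 1e-12), 0.0, 1.0)
    return np.linalg.norm(p - (s0 + t * v))

def curve_metrics(A, B, d_t):
    """Directed mean Hausdorff distance of A to B and matching length.

    A, B: (n, 2) arrays of curve points; the segments of B are taken as
    its consecutive point pairs (a simplification of the definitions).
    """
    segs = list(zip(B[:-1], B[1:]))
    # Distance of every point of A to the closest segment of B.
    d = np.array([min(point_segment_dist(a, s0, s1) for s0, s1 in segs)
                  for a in A])
    # Half-segment weights playing the role of the lengths |a_i^| above.
    edge = np.linalg.norm(np.diff(A, axis=0), axis=1)
    w = np.zeros(len(A))
    w[:-1] += edge / 2.0
    w[1:] += edge / 2.0
    L_A = w.sum()
    mean_hausdorff = np.sum(w * d) / L_A      # directed mean HD
    matching = np.sum(w[d < d_t])             # L_m(A, B, d_t)
    return mean_hausdorff, matching
```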
In the following Figures and Tables, we present the results of Experiments 3.1 - 3.3, representing three different LAGL transformations of the Strabo's Ister to the current map in WGS. In the Figures we evaluate the results visually and in the Tables quantitatively.
In Figures 3.x.1 (x=1,2,3) we visualize the polygons used for finding the locally optimal affine transformations employed in the LAGL method, the transformed river Ister (cyan) and the rivers Danube, Drava, Isel and Tauernbach (various colors). In all these Figures, the upper course of the Ister is really close to the Drava/Tauernbach-Isel-Drava (TID) river courses.
In Figures 3.x.2 we compare visually the river sources with the source of the transformed Ister. We see that all sources of Drava, Isel and Tauernbach are geographically very close to the source of the transformed Ister.
Tables 3.x.1 show that the sources of the Danube and the transformed Ister are very distant, around $300\;km$ in all transformations, while the sources of the Drava, Isel and Tauernbach are all much closer to the source of the transformed Ister in all transformations; e.g. in Table <ref> all distances are in the range from around 20 km to around 45 km, and they are only slightly larger in the other Tables.
Now, let us look at Tables 3.x.2. Both the maximal and the mean Hausdorff distances (HD) are larger when comparing the Danube and the Ister than when comparing the Ister to the other river courses in their full length, see the first part of the Tables (first three rows). This difference is significantly emphasized in the second part of the Tables (fourth to sixth rows), where only the partial upper river courses are compared. The Hausdorff distances of the Danube river and the reconstructed Ister on the upper part of their courses (HD of D1 and I1) are really high - the maximal HD is about 300 km and the mean HD is about 150 km. In contrast, the maximal and the mean HD of the transformed Ister and the Drava/Tauernbach-Isel-Drava (TID) course are much lower. For example, in Table <ref> the maximal Hausdorff distances are about 40 km and the mean Hausdorff distances are only about 20 km, which quantitatively expresses the visual similarity of the transformed Ister and the Drava/TID courses in Figure <ref>.
Tables 3.x.3 show the matching lengths in three different narrow bands of 10, 50 and 100 km. It is another way to show how close the river courses are along their length: the longer the common length in a narrow band, the better the correspondence of the river courses. As we see again in the second parts of these Tables (fifth and sixth rows), the matching length of the upper courses of the transformed Ister and the Drava/TID rivers is very high in the 100 km narrow band (close or equal to 100%) for all three LAGL transformations. The matching length is also very high in the 50 km narrow band for the first two experiments, and there is a similarity of the river courses also in the 10 km narrow band in the first experiment, which again shows the perfect correspondence of the upper Ister and the Drava/TID courses. On the other hand, there is almost no similarity between the transformed Ister and the Danube river in its upper course, as seen in the fourth row of the Tables.
The third parts (the seventh rows) of Tables 3.x.2 and 3.x.3 evaluate the quality of the LAGL transformation of the Ister river. Since there is no doubt that the lower course of the Danube and the transformed Ister should correspond to each other, the transformation which gives the lowest Hausdorff distances and the highest matching length on these partial river courses is the most reliable concerning the accuracy of the Strabo's Ister reconstruction. As one can see, from this point of view the most accurate is the LAGL transformation from Experiment 3.1, using just the Adriatic and the Black Sea coast regions. From the above discussion, we see that it also gives the best correspondence between Strabo's Ister and the Tauernbach-Isel-Drava-Danube course.
Transformation using the Adriatic and the Black Sea coast regions
Visual comparison of the current rivers (various colors) and the river Ister (cyan) transformed using the Adriatic and Black Sea coast regions; the polygons used for the transformations are highlighted in grey.
Detail of the real river sources (various colors) compared to the source of the transformed river Ister (cyan). We plot the Tauernbach-Isel-Drava course thicker since we consider it to be the Ister.
The distance of river sources.
Danube - Ister Drava - Ister Isel - Ister Tauernbach - Ister
317.781 km 29.849 km 21.776 km 45.109 km
The table contains three parts. In the first part, the Hausdorff distances (HD) of the river courses in their full length are presented. Then the partial river courses are compared.
Curve $A$ | Curve $B$ | ${\delta}_{H}(A,B)$ [km] | $\text{d}_{H}(A,B)$ [km] | $\overline{\delta}_{H}(A,B)$ [km] | $\overline{\text{d}}_{H}(A,B)$ [km]
D | Ister | 317.781 | 317.781 | 112.802 | 100.898
Ister | D | 206.829 | 317.781 | 79.573 | 100.898
DD | Ister | 154.611 | 154.611 | 44.903 | 42.686
Ister | DD | 149.693 | 154.611 | 39.621 | 42.686
TIDD | Ister | 154.611 | 154.611 | 44.717 | 42.524
Ister | TIDD | 149.693 | 154.611 | 39.484 | 42.524
D1 | I1 | 317.781 | 317.781 | 177.243 | 163.057
I1 | D1 | 206.829 | 317.781 | 130.218 | 163.057
Drava | I1 | 39.020 | 39.020 | 21.831 | 21.246
I1 | Drava | 38.809 | 39.020 | 20.547 | 21.246
TID | I1 | 39.020 | 40.628 | 21.383 | 20.833
I1 | TID | 40.628 | 40.628 | 20.173 | 20.833
D2 | I2 | 154.611 | 154.611 | 65.344 | 62.953
I2 | D2 | 149.693 | 154.611 | 59.130 | 62.953
(The symmetric distances $\text{d}_{H}$ and $\overline{\text{d}}_{H}$ are shared by both rows of each pair.)
The table contains three parts. In the first part, the matching lengths of the river courses in their full length are presented. Then the partial river courses are compared.
$A$ | $L_A$ [km] | $B$ | $L_m$, $d_t = 10$ km | $L_m$, $d_t = 50$ km | $L_m$, $d_t = 100$ km
D | 2714.887 | Ister | 228.127 (8.4%) | 633.263 (23.3%) | 1114.154 (41.0%)
Ister | 1421.117 | D | 185.952 (13.0%) | 493.414 (34.7%) | 782.470 (55.0%)
Average | | | 207.040 (10.0%) | 563.339 (27.2%) | 948.312 (45.8%)
DD | 2021.995 | Ister | 299.190 (14.7%) | 1285.049 (63.5%) | 1693.837 (83.7%)
Ister | 1421.117 | DD | 267.783 (18.8%) | 1011.383 (71.1%) | 1247.252 (87.7%)
Average | | | 283.486 (16.4%) | 1148.216 (66.6%) | 1470.545 (85.4%)
TIDD | 2026.547 | Ister | 320.930 (15.8%) | 1289.600 (63.6%) | 1698.389 (83.8%)
Ister | 1421.117 | TIDD | 287.667 (20.2%) | 1011.383 (71.1%) | 1247.252 (87.7%)
Average | | | 304.299 (17.6%) | 1150.492 (66.7%) | 1472.821 (85.4%)
D1 | 1430.501 | I1 | 16.857 (1.1%) | 85.824 (5.9%) | 157.926 (11.0%)
I1 | 617.966 | D1 | 24.146 (3.9%) | 99.998 (16.1%) | 153.185 (24.7%)
Average | | | 20.501 (2.0%) | 92.911 (9.0%) | 155.555 (15.1%)
Drava | 737.609 | I1 | 87.920 (11.9%) | 737.609 (100.0%) | 737.609 (100.0%)
I1 | 617.966 | Drava | 105.976 (17.1%) | 617.966 (100.0%) | 617.966 (100.0%)
Average | | | 96.948 (14.3%) | 677.788 (100.0%) | 677.788 (100.0%)
TID | 742.161 | I1 | 109.660 (14.7%) | 742.161 (100.0%) | 742.161 (100.0%)
I1 | 617.966 | TID | 125.860 (20.3%) | 617.966 (100.0%) | 617.966 (100.0%)
Average | | | 117.760 (17.3%) | 680.063 (100.0%) | 680.063 (100.0%)
D2 | 1284.386 | I2 | 211.270 (16.4%) | 547.439 (42.6%) | 956.228 (74.4%)
I2 | 803.151 | D2 | 161.806 (20.1%) | 393.416 (48.9%) | 629.285 (78.3%)
Average | | | 186.538 (17.8%) | 470.428 (45.0%) | 792.757 (75.9%)
Transformation using the Adriatic coast, the Black Sea coast and the Greece and Albania regions
Visual comparison of the current rivers (various colors) and the river Ister (cyan) transformed using the Adriatic, Black Sea, and Greece and Albania regions; the polygons used for the transformations are highlighted in grey.
Detail of the real river sources (various colors) compared to the source of the transformed river Ister (cyan). We plot the Tauernbach-Isel-Drava course thicker since we consider it to be the Ister.
The distance of river sources.
Danube - Ister Drava - Ister Isel - Ister Tauernbach - Ister
278.962 km 67.198 km 51.800 km 72.642 km
The table contains three parts. In the first part, the Hausdorff distances (HD) of the river courses in their full length are presented. Then the partial river courses are compared.
Curve $A$ | Curve $B$ | ${\delta}_{H}(A,B)$ [km] | $\text{d}_{H}(A,B)$ [km] | $\overline{\delta}_{H}(A,B)$ [km] | $\overline{\text{d}}_{H}(A,B)$ [km]
D | Ister | 278.962 | 278.962 | 116.695 | 108.088
Ister | D | 194.446 | 278.962 | 92.584 | 108.088
DD | Ister | 198.848 | 198.848 | 66.131 | 62.729
Ister | DD | 189.915 | 198.848 | 57.998 | 62.729
TIDD | Ister | 198.848 | 198.848 | 65.625 | 61.889
Ister | TIDD | 189.915 | 198.848 | 56.683 | 61.889
D1 | I1 | 278.962 | 278.962 | 157.633 | 147.751
I1 | D1 | 194.446 | 278.962 | 125.898 | 147.751
Drava | I1 | 54.656 | 66.735 | 35.270 | 35.435
I1 | Drava | 66.735 | 66.735 | 35.623 | 35.435
TID | I1 | 54.656 | 70.863 | 33.857 | 33.083
I1 | TID | 70.863 | 70.863 | 32.194 | 33.083
D2 | I2 | 198.848 | 198.848 | 96.661 | 93.564
I2 | D2 | 189.915 | 198.848 | 88.555 | 93.564
(The symmetric distances $\text{d}_{H}$ and $\overline{\text{d}}_{H}$ are shared by both rows of each pair.)
The table contains three parts. In the first part, the matching lengths of the river courses in their full length are presented. Then the partial river courses are compared.
$A$ | $L_A$ [km] | $B$ | $L_m$, $d_t = 10$ km | $L_m$, $d_t = 50$ km | $L_m$, $d_t = 100$ km
D | 2714.887 | Ister | 61.260 (2.2%) | 481.784 (17.7%) | 948.962 (34.9%)
Ister | 1441.085 | D | 55.447 (3.8%) | 323.687 (22.4%) | 641.770 (44.5%)
Average | | | 58.353 (2.8%) | 402.736 (19.3%) | 795.366 (38.2%)
DD | 2021.995 | Ister | 26.097 (1.2%) | 1082.523 (53.5%) | 1508.358 (74.5%)
Ister | 1441.085 | DD | 26.768 (1.8%) | 852.692 (59.1%) | 1144.834 (79.4%)
Average | | | 26.433 (1.5%) | 967.608 (55.8%) | 1326.596 (76.6%)
TIDD | 2026.547 | Ister | 41.406 (2.0%) | 1087.075 (53.6%) | 1512.910 (74.6%)
Ister | 1441.085 | TIDD | 50.989 (3.5%) | 855.911 (59.3%) | 1144.834 (79.4%)
Average | | | 46.197 (2.6%) | 971.493 (56.0%) | 1328.872 (76.6%)
D1 | 1430.501 | I1 | 34.437 (2.4%) | 112.681 (7.8%) | 178.212 (12.4%)
I1 | 646.869 | D1 | 25.465 (3.9%) | 90.754 (14.0%) | 143.805 (22.2%)
Average | | | 29.951 (2.8%) | 101.717 (9.7%) | 161.008 (15.5%)
Drava | 737.609 | I1 | 0.000 (0.0%) | 713.419 (96.7%) | 737.609 (100.0%)
I1 | 646.869 | Drava | 0.000 (0.0%) | 619.759 (95.8%) | 646.869 (100.0%)
Average | | | 0.000 (0.0%) | 666.589 (96.2%) | 692.239 (100.0%)
TID | 742.161 | I1 | 15.308 (2.0%) | 717.971 (96.7%) | 742.161 (100.0%)
I1 | 646.869 | TID | 24.221 (3.7%) | 622.978 (96.3%) | 646.869 (100.0%)
Average | | | 19.764 (2.8%) | 670.474 (96.5%) | 694.515 (100.0%)
D2 | 1284.386 | I2 | 26.097 (2.0%) | 369.103 (28.7%) | 770.749 (60.0%)
I2 | 794.215 | D2 | 26.768 (3.3%) | 232.933 (29.3%) | 497.964 (62.6%)
Average | | | 26.433 (2.5%) | 301.018 (28.9%) | 634.357 (61.0%)
Transformation using the Adriatic coast, the Black Sea coast and the Alps and Italy regions
Visual comparison of the current rivers (various colors) and the river Ister (cyan) transformed using the Adriatic, Black Sea, and Alps and Italy regions; the polygons used for the transformations are highlighted in grey.
Detail of the real river sources (various colors) compared to the source of the transformed river Ister (cyan). We plot the Tauernbach-Isel-Drava course thicker since we consider it to be the Ister.
The distance of river sources.
Danube - Ister Drava - Ister Isel - Ister Tauernbach - Ister
296.586 km 50.094 km 35.885 km 58.252 km
The table contains three parts. In the first part, the Hausdorff distances (HD) of the river courses in their full length are presented. Then the partial river courses are compared.
Curve $A$ | Curve $B$ | ${\delta}_{H}(A,B)$ [km] | $\text{d}_{H}(A,B)$ [km] | $\overline{\delta}_{H}(A,B)$ [km] | $\overline{\text{d}}_{H}(A,B)$ [km]
D | Ister | 296.586 | 296.586 | 117.455 | 108.467
Ister | D | 205.818 | 296.586 | 92.198 | 108.467
DD | Ister | 219.255 | 219.255 | 84.164 | 80.301
Ister | DD | 205.818 | 219.255 | 74.903 | 80.301
TIDD | Ister | 219.255 | 219.255 | 83.610 | 79.679
Ister | TIDD | 205.818 | 219.255 | 74.175 | 79.679
D1 | I1 | 296.586 | 296.586 | 145.940 | 137.085
I1 | D1 | 194.015 | 296.586 | 115.984 | 137.085
Drava | I1 | 101.197 | 101.197 | 65.823 | 63.574
I1 | Drava | 94.308 | 101.197 | 60.811 | 63.574
TID | I1 | 101.197 | 101.197 | 64.181 | 61.763
I1 | TID | 94.308 | 101.197 | 58.773 | 61.763
D2 | I2 | 219.255 | 219.255 | 112.879 | 107.675
I2 | D2 | 205.818 | 219.255 | 99.975 | 107.675
(The symmetric distances $\text{d}_{H}$ and $\overline{\text{d}}_{H}$ are shared by both rows of each pair.)
The table contains three parts. In the first part, the matching lengths of the river courses in their full length are presented. Then the partial river courses are compared.
$A$ | $L_A$ [km] | $B$ | $L_m$, $d_t = 10$ km | $L_m$, $d_t = 50$ km | $L_m$, $d_t = 100$ km
D | 2714.887 | Ister | 46.134 (1.6%) | 327.386 (12.0%) | 911.606 (33.5%)
Ister | 1468.316 | D | 40.573 (2.7%) | 248.261 (16.9%) | 674.137 (45.9%)
Average | | | 43.354 (2.0%) | 287.824 (13.7%) | 792.871 (37.9%)
DD | 2021.995 | Ister | 22.798 (1.1%) | 374.882 (18.5%) | 1390.515 (68.7%)
Ister | 1468.316 | DD | 20.561 (1.4%) | 301.285 (20.5%) | 1130.088 (76.9%)
Average | | | 21.679 (1.2%) | 338.083 (19.3%) | 1260.301 (72.2%)
TIDD | 2026.547 | Ister | 48.427 (2.3%) | 379.434 (18.7%) | 1395.067 (68.8%)
Ister | 1468.316 | TIDD | 48.975 (3.3%) | 296.829 (20.2%) | 1130.088 (76.9%)
Average | | | 48.701 (2.7%) | 338.131 (19.3%) | 1262.577 (72.2%)
D1 | 1430.501 | I1 | 19.581 (1.3%) | 122.200 (8.5%) | 257.106 (17.9%)
I1 | 600.305 | D1 | 10.404 (1.7%) | 57.143 (9.5%) | 144.355 (24.0%)
Average | | | 14.992 (1.4%) | 89.671 (8.8%) | 200.730 (19.7%)
Drava | 737.609 | I1 | 0.000 (0.0%) | 191.060 (25.9%) | 727.479 (98.6%)
I1 | 600.305 | Drava | 0.000 (0.0%) | 178.483 (29.7%) | 600.305 (100.0%)
Average | | | 0.000 (0.0%) | 184.771 (27.6%) | 663.892 (99.2%)
TID | 742.161 | I1 | 25.628 (3.4%) | 195.612 (26.3%) | 732.031 (98.6%)
I1 | 600.305 | TID | 28.413 (4.7%) | 174.027 (28.9%) | 600.305 (100.0%)
Average | | | 27.021 (4.0%) | 184.819 (27.5%) | 666.168 (99.2%)
D2 | 1284.386 | I2 | 22.798 (1.7%) | 183.821 (14.3%) | 652.906 (50.8%)
I2 | 868.010 | D2 | 20.561 (2.3%) | 122.801 (14.1%) | 529.782 (61.0%)
Average | | | 21.679 (2.0%) | 153.311 (14.2%) | 591.344 (54.9%)
§ DISCUSSION ON SOME HISTORICAL ISSUES
We could finish our work here, having developed the mathematical model and the numerical algorithm and having shown the correspondence of the Strabo's Ister with the nowadays Tauernbach-Isel-Drava-Danube course. But, since any new result in mathematical and computational modelling tends to bring new insight into the related pure or applied science problem, we are going to do the same for the history which forms our application background.
As we have already stated in the Introduction, many historical claims assuming that in the time of Strabo the upper course of the river Ister corresponded to the nowadays upper course of the Danube river must be revisited. Regarding this fact, many interesting open questions arise, but two of them are the most interesting for us: (i) where was then located the so-called Hercynian Forest/Hercynia silva/Herkynian Forest (Ἑρκυνίου δρυμος) [36, 19, 30], the seat of the Suevi/Soebians (Σοήβων) [36, 19, 30]? And (ii) who were Strabo's Suevi?
In the sequel, we will use the terms Suevi and Hercynian Forest because they seem to be the most widespread in the English and Latin literature. We also note that in the Slovak literature terms like Svébi/Suavi are used as well [34]. First of all, at the end of section (4-6-9) Strabo writes that the Ister source is near the seats of the Suevi and the Hercynian Forest:
"ὅπου αἱ τοῦ Ἴστρου πηγαὶ πλησίον Σοήβων καὶ τοῦ Ἑρκυνίου δρυμοῦ".
Since we have shown above where the Ister source is located by means of Strabo's Geographica, we can clearly state that Strabo's Suevi near the Ister source have no relation to Swabia (and the Swabians) in Bavaria, Germany; rather, we can claim that Strabo is speaking here about a settlement in the south-east Alpine region, around the boundaries of nowadays Carinthia, Tyrol, north-east Italy and Slovenia. With a high probability this settlement was Slavic in that time and before, which is supported by the many geographic names of Slavic origin around the Strabo's Ister source.
The origin of local geographic names in the neighbourhood of the Val di Pusteria, in the valleys of the upper Drava, Villgraten, Gail and Isel rivers, was studied in [25], where almost 200 names of Slavic origin from this local area, including settlements, rivers and creeks, hills, forests and meadows, were presented. This study is based on the works [23, 24] by Franc Miklošič (Franz Miklosich), one of the most respected philologists of the Habsburg empire in the second half of the 19th century. In two volumes, Miklošič presented all the important rules for creating Slavic geographic names (Vol. I) and collected a comprehensive set of 789 bases of Slavic geographic names (Vol. II) from the whole Habsburg empire. He also gave the most common rules for changing Slavic names to German (and Hungarian), such as the change of the Slavic "B" to the German "F", etc. Now, we present just a few examples of geographic names of Slavic origin around the Ister source. Here, and also in further paragraphs of this section, we give the meaning of these geographic names in English and also in Slovak, because it is the most familiar to the authors and there does not exist any common Slavic language, and where it makes sense we give it also in Slovenian due to [25].
The first interesting geographic name is Val di Pusteria (Pustertal in German, Puster Valley in English), which would be "Pusté údolie" or "Pustá dolina" in Slovak and "Pusta dolina" or "Pustodol" in Slovenian, see [25] and [24]-512, meaning "Deserted Valley" in English. At the eastern end of the western (lower) part of the Puster Valley, there is a castle hill, nowadays called Heinfels, below which the river Villgraten(bach) empties into the Drava. By [25], the river name has the Slavic origin "Velegrad", and there are two other creeks nearby, Gradenbach and Gratzbach (mentioned as Gradiz in [25]), with the same Slavic base "grad", see [24]-122. A bit to the west, there is the village Versciaco (Vierschach in German), meaning "Vŕšok" in Slovak and "hillock" in English, see also [25].
Going further north-east, in the Defereggen Valley there is the settlement Feistritz, nowadays part of St. Jacob in Defereggen, and also the creek Feistritz(bach). Feistritz is a German spelling of the Slavic name, "Bystrica" in Slovak and "Bistrica" in Slovenian, see [25], [24]-45. As a further nice example we mention Proßegg(-klamm) [25], the village and the gorge on the Tauernbach creek north of the town Matrei in Osttirol. The name Proßegg has an exact analogy in the Prosiek village and gorge in Chočské vrchy, Slovakia, since "priesek" in Slovak simply means "gorge" in English and "Klamm" in German. Another interesting fact is given by the historical names of the town Matrei in Osttirol and the mountain Großvenediger. They were called Windisch Matray als Mauter and Windisch Taurn, respectively, on the map in Figure <ref>, and it is generally accepted that "Windisch" meant Slavic in German dialects. Further north, there is the valley and the long creek Frosnitz(bach) [25], with its source just below the Großvenediger glaciers, emptying into the Tauernbach near the Gruben village. The name Frosnitz is a German spelling of the Slavic name "Brusnica", meaning "cranberry" in English, see [25] and [24]-33. Interestingly, another creek with the same name meaning, the Fruschnitz(bach), stems from the western side of Großglockner [25].
But the most astonishing is the name of the longest glacier in the Eastern Alpine region, Pasterze, in Slovak pronunciation "Pastierce", which means the place of shepherds or the pastureland. This form of the place name has the classical Slavic suffix -ce (-ze in German spelling), see [23] - Chapter 2, Section V, points 18 and 17.
And why is it so astonishing? Recent results studying peat samples from the area of the retreating Pasterze glacier indicate grass-like pastureland vegetation (Cyperaceae-Carex, Bidens alba - shepherd's needles) and human impact on the vegetation during the Subboreal Chronozone (3780-800 BC), a warmer period of the Holocene [16]. Only in such a period could the name Pasterze, with its Slavic pastureland meaning, have been given to that place by the inhabitants of this Alpine region - with a very high probability by the Suevi people. They lived there in Strabo's times and there is no reason to assume that they had not settled there before. We adopt here the hypothesis of continuity of settlement if no change is recorded in any historical source, which is in agreement, e.g., with the Palaeolithic Continuity Theory of Mario Alinei, declaring the stability and continuity of the Romance-Celtic/Germanic/Slavic ethnic and language geographic distribution in Europe from the Upper Palaeolithic period [1, 2]. It is also worth noting that such an assumption, positing a stable Slavic sedentary population in this part of Europe already in Strabo's times, does not exclude immigration of the same or different ethnic origin and/or local acculturation in later periods.
Detail of the map by Gerhard Mercator from 1639 [22]; in the middle of the map one can see the Sirmione island in Lago di Garda, nowadays connected to the land, where Tiberius built his military camp in 15 BC during the campaign against the Rhaeti.
The next important mention of the Suevi and the Hercynian Forest is in section (7-1-5) of Geographica, where the fights of Tiberius against the Rhaetians around 15 BC are mentioned, see also Cassius Dio [4], sections (54-22-1)-(54-22-5), and [8]. For that military campaign, Tiberius even built a military camp on the island of Sirmione in Lago di Garda, see the map in Figure <ref> and [8]. By Strabo, the lake is located south of the Ister source and close to the Rhaetian territories (which are around the Tridentine Alps by Cassius Dio [4]), matching the Lago di Garda and our location of the source of the Ister. Then the journey to the Hercynian Forest is described by:
"ὥστ᾽ ἀνάγκη τῷ ἐκ τῆς Κελτικῆς ἐπὶ τὸν Ἑρκύνιον δρυμὸν ἰόντι πρῶτον μὲν διαπερᾶσαι τὴν λίμνην, ἔπειτα τὸν Ἴστρον, εἶτ᾽ ἤδη δι᾽ εὐπετεστέρων χωρίων ἐπὶ τὸν δρυμὸν τὰς προβάσεις ποιεῖσθαι δι᾽ ὀροπεδίων."
So, from the (Cisalpine) Keltike, which is for Strabo the northern part of present-day Italy up to the base of the Alps (see section (5-1-3) of Geographica), one has to go along that lake (Lago di Garda), then continue up to and along the course of the Ister (taking simply the route along the Adige-Isarco-Rienza-Drava courses), and then continue straightforwardly through more favourable upland plains to end up in the Hercynian Forest. Just looking at any map, e.g. Figure <ref>, one clearly sees that this journey must finish in the Carpathian-Alpine basin, or rather in the large region between the Eastern Alps on the west and the Carpathian mountain ridge on the east and north. The last sentence of the section (7-1-5) is also very interesting: "ἔστι δὲ καὶ ἄλλη ὕλη μεγάλη Γαβρῆτα ἐπὶ τάδε τῶν Σοήβων, ἐπέκεινα δ᾽ ὁ Ἑρκύνιος δρυμός: ἔχεται δὲ κἀκεῖνος ὑπ᾽ αὐτῶν." It is not easy to translate, but in any case it says that the whole Hercynian Forest, together with another large forest, Gabreta (Γαβρῆτα), belongs to and is the seat of the Suevi. In the case that the Gabreta is on the west side of the Hercynian Forest, it should correspond to the mountainous forested regions below the main Alpine ridges in the east and north-east directions, such as the lower parts of Carinthia, Styria and the present-day Vienna Forest. In the case that the Gabreta is on the east and north sides of the Hercynian Forest, it should correspond to the Carpathian mountains - Karpaty in the Slovak language.
A further indication of the correctness of locating the Hercynian Forest between the Alps and the Carpathians is given in section (7-3-1) of Geographica, where Strabo mentions the land of the Getians, who were assumed to be Thracians by the Hellenes, see (7-3-2): "οἱ τοίνυν Ἕλληνες τοὺς Γέτας Θρᾷκας ὑπελάμβανον"; Thracia was generally considered to be the land north of Greece up to the Ister (lower Danube). And in (7-3-1) Strabo explicitly says that the land of the Getians extends along the southern side of the Istros, present-day northern Bulgaria, and also on the opposite side, on the mountain slopes of the Hercynian Forest:
"εἶτ᾽ εὐθὺς ἡ τῶν Γετῶν συνάπτει γῆ, κατ᾽ ἀρχὰς μὲν στενή, παρατεταμένη τῷ Ἴστρῳ κατὰ τὸ νότιον μέρος, κατὰ δὲ τοὐναντίον τῇ παρωρείᾳ τοῦ Ἑρκυνίου δρυμοῦ".
From this, it is obvious that the Getian territory north of the Ister corresponds to the present-day southern part of Romania behind the Carpathian ridges, adjoining the Hercynian Forest. These facts indicate that the Hercynian Forest, the seat of the Suevi, corresponds (at least) to the Carpathian-Alpine basin, including the south-eastern Alps and the Carpathian mountains as well.
From the above facts, we can conclude that the Hercynian Forest - the Carpathian-Alpine basin in a broad sense - was in Strabo's times settled by the Suevi people, a large ethnic group living in this compact area encompassed by mountain ranges, as he says in sections (7-1-3) and (7-1-5) of Geographica. There are many remains of such a compact settlement, mainly in the geographic names of rivers, mountains, towns and villages in the whole region, see e.g. [23, 24, 32, 33].
For the interested reader, we will touch on some of them, concentrating our description mainly on the west (Slovenia, Carinthia, Styria), north (Slovakia) and east (Romania) nodes of an imaginary triangle in the Carpathian-Alpine basin.
Probably the most widespread geographic names of rivers and settlements are Bystrica/Bystrá (in Slovakia), Bistrica/Bistra (in Slovenia), Bistrița/Bistra (in Romania) and Feistritz (in Austria), meaning "quickly flowing" in the Slavic languages, see also [24]-45. In Slovakia, there are five Bystrica and one Bystrá settlements, e.g. the towns Banská Bystrica, Považská Bystrica, etc., two river streams with the name Bystrica, one mountain peak Bystrá in the West Tatra mountains, and there is also the saddle Bystré sedlo in the High Tatra mountains. In Slovenia, there are at least ten towns and villages with the name Bistrica, e.g. Ilirska Bistrica, Slovenska Bistrica, etc., the river Kamniška Bistrica and the stream Bistra. In Austria, there are at least eight settlements with the name Feistritz, mainly in Carinthia and Styria, and there is a saddle, Feistritz-Sattel, on the border of Styria and Lower Austria, below which is the spring of the long Styrian river Feistritz. We note that by [24, 25], around 1870-1880 Feistritz was reported 15 times in Carinthia and 40 times in Styria. Surprisingly, also in Romania there are at least seven Bistrița towns and villages and five rivers with that name, and even more, nine Bistra rivers and three such settlements, distributed all around the country, in the counties Alba, Bacău, Bihor, Bistrița-Năsăud, Caraș-Severin, Gorj, Maramureș, Mehedinţi, Mureș, Neamț, Olt, Sibiu, Suceava, Vâlcea; there is also a mountain range called Bistrița in northern central Romania. It is worth noting that there are many further such examples of common geographic names, e.g. "Slatina", with the meaning "mineralized water" ([24]-585) and with exactly the same spelling at all the places, or "Trnava" and its analogy "Târnava" in Romania, with the meaning of an adjective related to "thorn" ([24]-696). Interestingly, in Romania there are whole regions where the names of villages are based almost exclusively on Slavic words, or even copy almost exactly the village names used e.g. in Slovakia. For example, in Caraș-Severin county, near the Valea Cernei (Údolie Čiernej (rieky) in Slovak), are the villages Camena (Kamenná in Slovak), Cozia (Kozia), Dobraia (Dobrá), Dolina (Dolina), Gruni (Grúň), Hora Mare (Veľká Hora), Hora Mică (Malá Hora), Iablanița (Jablonica), Ilova (Ilava), Obița (Obyce), Rusca (Ruská), Ruștin (Hruštín), Sadova Nouă (Nová Sadová), Sadova Veche (Stará Sadová), Slatina-Timiș (Timišská Slatina), Studena (Studená), Topla (Teplá), Zbegu (Zbehy), Plugova (Pluhová), Zoina (Zolná), etc., and such examples exist all around Romania.
Another interesting example is the usage of the geographic name "Studena", with the meaning "cold" ([24]-636), in practically the same form in Slovakia, Romania, Serbia, Croatia, Slovenia and even in Italy (the Studena Alta and Studena Bassa villages in the province of Udine, close to the Carinthian border), although nowadays in the majority of these languages the word "studena" is not used to express coldness. After this general overview, we turn to further examples of Slavic geographic names which interest us because they appear in Roman and Greek writings from the beginning of the first millennium.
A detail of Tabula Peutingeriana with the Trajan's roads to the province of Dacia and two stations, "Bersouia" (on the top road) and "Tierua" (on the second road from the top). Source: Wikipedia.
A detail of the estimated Trajan's road to "Tiuisco" (Timișoara) on Tabula Peutingeriana (left image) with four stops indicated: "Centu Putea" (E70 mark on the bottom), "Bersouia" (Denta), "Azizis" (E70 mark on the top) and "Caput bubali" (Jebel). The station "Bersouia" is at the crossing of the road with the Bârzava river in the village Denta (right image). The village Berzovia, which is also located on the river Bârzava, lies more to the east, and a road going through it would not fulfil the distances indicated on the Tabula Peutingeriana. That is why we think that the station called "Bersouia" was at the place where the road crossed the river of the same name.
First, let us take the geographic names derived from the base of the Slavic word "breza", which means the "birch" tree ([24]-29). In Slovakia, we have the towns Brezová pod Bradlom and Brezno and the villages Brezovica (twice), Brezovička, Brezov, Rimavské Brezovo and České Brezovo, and one river stream, Brezovský potok (Brezovka). In Slovenia, there are the villages Brezova, Brezova Reber pri Dvoru, Brezno (twice), Breza and Brezovo. In Romania, there are several forms of this name: seven towns or villages and three rivers with the name Breaza, and two villages and two river streams with the name Breazova, one of which is a tributary of the river Bârzava, which is another form of the same name. The river Bârzava, flowing through the historical regions of Banat (Romania) and Vojvodina (Serbia), has the further tributaries Bârzăvița and Berzovița; there is the village Berzovia on the river Bârzava and, very near, another village, Brezon. It used to be claimed that the village Berzovia on the river Bârzava is the station noted on the Tabula Peutingeriana [40] as "Bersouia", see e.g. the seminal work of Pavol Jozef Šafárik [37]. The Tabula Peutingeriana shows the Roman road system in the first centuries AD and was probably last revised in the 4th or early 5th century. The "Bersouia" is one of the stations on the northernmost of Trajan's roads to the province of Dacia, which crosses the Ister (lower Danube) near Banatska Palanka and continues in the direction of "Tiuisco" (Timișoara), see Fig. <ref>. In fact, the "Bersouia" is mentioned also in Trajan's work Dacica, from around 100 AD, as "inde Berzobim, deinde Aizi processimus", meaning going from "Bersouia" to "Azizis", see Fig. <ref>. Taking into account the Tabula Peutingeriana distances given in Roman miles, we claim that "Bersouia" or "Bersobis" is not directly the Berzovia village but (with high probability) the crossing point of that Trajan's road with the Bârzava river at a place near (or in) the Denta village, see Fig. <ref>. This claim, however, does not change the interesting fact that a geographic name of Slavic origin, widespread in the Carpathian basin, is mentioned on a map from the first centuries AD and in the work of the emperor Trajan from around 100 AD.
There is a mountainous river Belá, with the meaning "white", rising just below one of the highest peaks of the High Tatra mountains and thus of the whole Carpathians. Belá probably got its name from the white smooth rocks in its river basin. The same is true for the northern Italian, Carnic Alpine river Fella, with the Slovenian name Bela. The Fella has its spring close to the Friuli-Carinthia-Slovenia border, and its current name went through the standard change of the Slavic "B" to the German "F", now recorded in the official Italian name, see [23] and [24]-12. This twin Belá-Bela is another clear example of the same geographic name given to mountain rivers by the common Slavic population living in the Alps and the Carpathians. Moreover, very near the Fella (Bela) spring, there is the Carinthian town Villach - Beljak in Slovenian. And reading the Tabula Peutingeriana carefully, one can find it there as well, under the name "Beliandro", see Fig. <ref>. We see again a Slavic name of a settlement recorded on the map describing the Roman road system at the beginning of the first millennium AD. We can simply check our claim that Villach (Beljak) corresponds to the "Beliandro" by computing the distance from Ptuj ("Petauione" on Tabula Peutingeriana) through Celje ("Celeia") to Villach ("Beliandro"). The whole distance is 137 Roman miles. To calibrate the relation between the Roman mile and the Google maps distances in kilometres in this area of the Tabula Peutingeriana, we use two clearly identified towns with known distances both in kilometres and in Roman miles. From Ptuj to Celje we have 58.5 km on Google maps and 36 miles on the Tabula Peutingeriana, which gives an approximate correspondence of 1.62 km per Roman mile. Now, 137 x 1.62 km ≈ 222 km, and indeed, see Fig. <ref>, the distance from Ptuj to Villach is 220 km on Google maps, which matches the Tabula Peutingeriana distance of 137 Roman miles almost perfectly. Of course, one can check the Villach - "Beliandro" correspondence also in different ways, e.g. by estimating forward distances from "Beliandro" to other stations on the Roman road, but we leave that to the reader.
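This calibration takes only a few lines to reproduce; the following Python sketch (our own, using only the numbers quoted above) recomputes the kilometre-per-mile ratio and the predicted Ptuj-Villach distance:

    # Calibrate the Roman mile on the known Ptuj-Celje pair, then check Villach.
    ptuj_celje_km = 58.5        # Google maps distance
    ptuj_celje_miles = 36       # Tabula Peutingeriana distance
    km_per_mile = ptuj_celje_km / ptuj_celje_miles   # ~1.62 km per Roman mile
    print(round(137 * km_per_mile, 1))               # ~222.6 km vs. 220 km on Google maps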
A detail of Tabula Peutingeriana with the road from "Petauione" through "Celeia" to "Beliandro" and farther away. Also stations "Tergeste" and "fonte timaui" can be seen on this map detail in the left bottom part. Source: Wikipedia.
The distance from Ptuj to Villach through Celje is 220 km by Google maps, while by the Tabula Peutingeriana it should be 137 Roman miles, which is approximately 222 km. Such an accurate correspondence of distances shows that present-day Villach (Beljak) corresponds to the "Beliandro" on the Tabula Peutingeriana.
After finding the Belá-Bela twin in the High Tatras and the Carnic Alps, one can try to find a corresponding name also in Romania. In the Caraș-Severin county, there is a river Belareka, meaning "Biela rieka" in Slovak and "White river" in English, but written as one word. It is again a mountainous river, which joins the river Cerna ("Čierna" in Slovak, [24]-71, "Black" in English) near the spa town Baile Herculane.
The Cerna river then flows into the Ister, and their confluence, in present-day Orșova, got the name "Tierua" on the Tabula Peutingeriana, see Fig. <ref>. It represents the place where the second Trajan's road to the province of Dacia crossed the Ister (lower Danube). Interestingly, the place name was also recorded by Klaudios Ptolemaios in Greek as "Δίερνα", see [37, 9]. Since the Greek alphabet has no direct representation of the letter "č", Ptolemaios replaced it by "Δ" due to the pronunciation similarity of Čierna-Δίερνα. This record had to be written between AD 85-165, during Ptolemaios' life, and after approximately AD 100, when Trajan's roads to Dacia were constructed. We also know from Pavol Jozef Šafárik [37] that there exists an inscription on marble in Mehadia, a former Roman camp on the Belareka, from AD 157, "Valerius Felix miles coh. IV. stationis TSIERNEN", where "Tsiernen" represents a very accurate transcription of the river name Cerna (Čierna) into Latin.
All these written records of Slavic names date from a few decades later than Strabo's Geographica. Since by Strabo we know that the Suevi inhabited that region at the beginning of the first millennium AD, we clearly see the correspondence between Strabo's Suevi in the Hercynian Forest and the Slavs or Slavic settlements in the Carpathian-Alpine basin. We may also claim that the name Suevi (with the Latin "v") gave rise to two present-day ethnic names of Slavic nations from the former Hercynian Forest, the Slovenes and the Slovaks, and the name form Soebi (with the Greek "b") to a third one, the Serbians, who are called "Srbi" in Slovak (in pronunciation quite similar to "Soebi"). And in fact, in medieval Latin sources up to the end of the 18th century, the Slovak people were called Slavi, Sclavi or gentis Slavae (in pronunciation again very similar to Suevi or Suavi, when considering the term "Suavia" used in [34], page 33), see e.g. "Privilegium pro Slavis solnensis" by Ľudovít I. Veľký (Louis the Great of Anjou) from 1381, where the Slovaks are called Sclavi [17, 18], or "Historia gentis Slavae" by Juraj (Georgius) Papánek from 1780 [27].
Next to last, we mention the geographic name "Tergeste", used by Strabo in Geographica and appearing also on the Tabula Peutingeriana, see Fig. <ref>, bottom left. The name "Tergeste" has correspondences in Romania, such as Târgovişte in Dâmbovița County, in the Muntenia historical region of Romania, which was also the capital of Wallachia between the early 15th century and the 16th century. An exact correspondence exists also in Slovakia, the village Trhovište. The common meaning of all these places is "marketplace", and the name is derived from the Slavic equivalent "trh", "trg" or "targ", see [24]-694. There are several further examples, such as Târgu Jiu, Târgu Neamț, Târgu Mureș, Târgu Frumos or Târgu Secuiesc in Romania. Based on the similarity of the current name, "Tergeste" used to be related to present-day Trieste, but with some probability it may also correspond to the city of Monfalcone, which is called Tržič in Slovenian, with the marketplace meaning, [24]-694. Concerning the Tabula Peutingeriana, we can find several further names of clear Slavic origin in the Slavonia and Bosnia regions, see Figure <ref>. There is the station "Vrbate" at the crossing of the Roman road with the present-day river Vrbas, with the Slavic base "vŕba" (in Slovak) meaning the "willow" tree, see [24]-746. Then there is the station "pont Vlcae" close to the present-day town Vukovar on the Vuka river, with the Slavic base "vlk" (in Slovak) or "vuk" (in Croatian and Serbian) meaning the "wolf", see [24]-733. A further example is the station "Drinum fl" at the place of the present-day Drina river, with the Slavic base "drieň" (in Slovak) meaning the "bunchberry", see [24]-87.
Last but not least, we mention the karst river Timavo, which re-emerges near Monfalcone and empties into the Adriatic. Strabo calls it Timavus in section (5-1-8) of Geographica and says it has seven sources by which it re-emerges after 130 stadia underground. The Slovenian name of the river is Timava, with the standard Slavic suffix -ava, see [23] - Chapter 2, Section V, point 37. And the meaning can be "Tmavá" (in Slovak), translated as "dark". This meaning of the name corresponds nicely to the re-emergence of the Timava from the "dark underground". This Slavic name of the river appears also on the Tabula Peutingeriana as "fonte timaui", see Fig. <ref>, bottom left, and it again represents a clear example of the Slavic presence in the south-eastern Alpine region at the beginning of the first millennium AD.
A detail of Tabula Peutingeriana with stations "Vrbate" (left), "pont Vlcae" (middle), "Cerne" (middle) and "Drinum fl." (right) in Bosnia and Slavonia. Source: Wikipedia.
§ CONCLUSIONS
All in all, we have shown the correspondence of Strabo's Suevi of the Hercynian Forest with the Slavs in the Carpathian-Alpine basin. In other words, we confirmed the presence of a compact Slavic settlement in this region already at the beginning of the first millennium AD. Interestingly, such a conclusion comes from our mathematical results on the transformation of Strabo's river Ister to the current map of the world. To perform the transformation, we developed a new method for map registration, combining locally optimal affine transformations with interpolation/extrapolation by numerically solving the Laplace equation with zero Neumann boundary conditions. The new method keeps the optimality of the local transformations and smoothly interpolates/extrapolates the transformation parameters to the whole computational domain. The method was applied to the reconstruction of Strabo's river Ister and yielded interesting historical conclusions.
The conclusions are in accordance with the opinions and claims of historians and linguists such as Pavol Jozef Šafárik [37], Ľudovít Štúr [38, 39, 7], Oleg Trubačev [41] and Mario Alinei [1, 2], who all declared the ancientness of the Slavs on the middle Danube, i.e. in the Carpathian-Alpine basin. It supports in some aspects also the narrative of the Primary Chronicle by Saint Nestor the Chronicler [26], which sees the middle Danube even as the homeland of all the Slavs - which is, however, too strong and hardly acceptable an assumption for us. More likely, we are in accordance with several aspects of the Palaeolithic Continuity Paradigm of Mario Alinei [1, 2], declaring the stability of the European population and its ethnic distribution from very ancient (Upper Palaeolithic or Mesolithic) times. Such claims are also in accordance with the results of [29], where the authors concluded that the large majority of extant human mitochondrial DNA (mtDNA) lineages entered Europe in several waves during the Late Upper Palaeolithic, mainly around the Last Glacial Maximum, 20000 years ago. After that, the bearers of the mtDNA - the sedentary population - adapted in general to a new way of life, and the Neolithic demic diffusion, see e.g. [28], and further immigration waves seem to have contributed only about 10-20% of mtDNA lineages. Also, recent work [5] of Slovak and Hungarian experts in archaeology and archaeogenetics has shown that a large majority of mtDNA haplogroups in the medieval (9th-12th century AD) population around Nitra (Western Slovakia) belong to the Late Upper Palaeolithic period of migration to Europe, and that the medieval "Slovak" population is similar to the present-day inhabitants of the Carpathian-Alpine basin such as Croats and Romanians.
We hope that the results of this paper can help geographers and cartographers with historical map registration, and also historians to localize the "Suavia" at the beginning of the first millennium AD [34], or to explain how it was possible that the Slavs appeared so fast in the Carpathian basin and north of the Istros (lower Danube) at the beginning of the second half of the first millennium AD [6, 14]. In fact, the Slavs did not appear; they were there "since time immemorial".
[1]
Alinei, M.
Origini delle lingue d'Europa.
Vol. I: La teoria della continuità.
Vol. II: Continuità dal Mesolitico all'età del Ferro nelle principali aree etnolinguistiche.
Il Mulino,
[2]
Alinei, M.
Interdisciplinary and linguistic evidence for Palaeolithic continuity of Indo-European,
Uralic and Altaic populations in Eurasia, with an excursus on Slavic ethnogenesis,
(expanded version of a paper read at the Conference Ancient Settlers in Europe),
Kobarid, May 29–30, 2003,
Quaderni di Semantica 24 (2003).
[3]
Blaeu, J.
Saltzbvrg Archiepiscopatvs et Carinthia Dvcatvs.
1665. (Available in David Rumsey Historical Map Collection.)
[4]
Cassius, D.
Historiae Romanae.
In Greek, available at the Perseus Digital Library (Editor-in-Chief Gregory R. Crane, Tufts University), based on Dio's Roman History: Cassius Dio Cocceianus, Earnest Cary, Herbert Baldwin Foster; William Heinemann, Harvard University Press, London, New York, 1914.
[5]
Csákyová, V.—Szécsényi-Nagy, A.—Csösz, A.—Nagy, M.—Fusek, G.—Langó, P., et al.
Maternal Genetic Composition of a Medieval Population from a
Hungarian-Slavic Contact Zone in Central Europe.
PLoS ONE 11
(2016), e0151206.
[6]
Curta, F.
The Making of the Slavs.
Cambridge University Press, Cambridge,
[7]
Czambel, S.
Slováci a ich reč,
Elibron Classic, 2007.
(Reproduction of the original edition.)
[8]
Dando-Collins, S.
Legions of Rome: The definitive history of every Roman legion.
[9]
Dictionary of Greek and Roman Geography, available at Perseus Digital Library,
Editor-in-Chief Gregory R. Crane, Tufts University, based on edition.
[10]
Encyclopaedia Biblica, Vol II,
(T. K. Cheyne and J.Sutherland Black, eds.)
The Macmillan company,
New York,
pp. E–K.
[11]
Guennebaud, G.—Jacob, B. et al.,
Eigen v3 (2010).
[12]
Google Earth Pro
Version 7.3,
[13]
Grote, G.
A History of Greece,
Cambridge University Press,
New York,
(First published in 1846.)
[14]
Homza, M.
Stredná Európa I. Na začiatku stredoveku,
Univerzita Komenského,
[15]
Karney, C. F. F.
Version 1.51 (2019).
[16]
Kellerer-Pirklbauer, A.—Drescher-Schneider, R.
Glacier fluctuation and Vegetation history during the Holocene at the largest glacier of the Eastern Alps
(Pasterze Glacier, Austria: New insight based on recent peat findings).
In: 4th Symposium of the Hohe Tauern National Park,
Castle of Kaprun, Austria, September 17th–19th, 2009,
pp. 151–155.
[17]
Marsina, R.
Žilina v listinách 1208–1438,
EDIS, Žilina,
pp. 37–38.
[18]
Marsina, R.
Právne postavenie slovenských mešťanov v Žiline koncom 14. a začiatkom 15. storočia .
In: Vlastivedný zborník Považia, Vol. XI,
pp. 13–14.
[19]
Müllero, C.—Dübnero, F.
Strabonis Geographica.
[20]
Meineke, A. ed.
Strabo Geographica.
[21]
Mercator, G.—Hondius, I.
Salzburg Carinthia.
(Available in David Rumsey Historical Map Collection.)
[22]
Mercator, G.
The second table of Lombardy.
Tarvisina Marchia et Triolis Comitatus.
(Available in David Rumsey Historical Map Collection.)
[23]
Miklosich, F.
Die slavischen Ortsnamen aus Appellativen, Vol. I.
[24]
Miklosich, F.
Die slavischen Ortsnamen aus Appellativen, Vol. II.
[25]
Mitterrutzner, J. Ch.—Malovrh, M.
Slovani, v iztočni Pustriški dolini, na Tirolskem.
J. Krajec,
[26]
Nestor the Chronicler,
Primary Chronicle/Tale of Bygone Years/Povesť vremennych let,
around 1100.
[27]
Papánek, J. (Georgius)
Historia gentis Slavae/(Prvé) dejiny slovenského národa,
[28]
Renfrew, C.
Archaeology and Language: The Puzzle of Indo-European Origins.
Cambridge University Press,
[29]
Richards, M. et al.
Tracing European founder lineages in the Near Eastern mtDNA pool,
The American Journal of Human Genetics 67 (2000),
[30]
Roller, D. W.
The Geography of Strabo:
An English Translation with Introduction and Notes.
Cambridge University Press,
[32]
Stanislav, J.
Slovenský juh v stredoveku I.
Matica Slovenská,
Turčiansky Sv. Martin,
[33]
Stanislav, J.
Slovenský juh v stredoveku II $+$ mapová príloha.
Matica slovenská,
Turčiansky Sv. Martin,
[34]
Steinhübel, J.
Nitrianske kniežatstvo.
Vydavateľstvo Rak,
[35]
Strabo Geography.
Available in Greek at Perseus Digital Library, Editor-in-Chief Gregory R. Crane, Tufts University
[36]
Strabo: The Geography.
The Text of Strabo on LacusCurtius.
(In English),
[37]
Šafárik, P. J.
Slovanské starožitnosti I,
Oriens, Košice, 1999,
first published in Prague,
[38]
Štúr, L.
Najstaršie príhody na zemi Uhorskej a jej základy,
Orol Tatranský, 1 (1845),
p. 2.
[39]
Štúr, L.
Nárečja slovenskuo alebo potreba písaňja v tomto nárečí,
Prešporok (Bratislava),
[40]
Tabula Peutingeriana: Codex Vindobonensis 324.
[41]
Trubachev, O. N.
Etnogenez i kultura drevneyshikh Slavian:
lingvistichesskiye issledovaniya. 2nd ed.
(In Russian)
[42]
Mathematica, Version 12.2,
Wolfram Research, Inc.,
Champaign, IL,
[43]
Defense Mapping Agency, Washington, DC:
World Geodetic System 1984 (WGS-84). (Manual, 2nd ed.)
Report Accession Number ADA147409,
personal author M. M. Macomber,
International Civil Aviation Organization,
US Naval Observatory,
Washington, D.C.
|
# Diffeomorphic shape evolution coupled with a reaction-diffusion PDE on a
growth potential
Dai-Ni Hsieh, Department of Applied Mathematics and Statistics, Johns Hopkins University, Baltimore, MD, USA
Sylvain Arguillère, Laboratoire Paul Painlevé, University of Lille, France
Nicolas Charon, Department of Applied Mathematics and Statistics, Johns Hopkins University, Baltimore, MD, USA
Laurent Younes, Department of Applied Mathematics and Statistics, Johns Hopkins University, Baltimore, MD, USA
###### Abstract
This paper studies a longitudinal shape transformation model in which shapes
are deformed in response to an internal growth potential that evolves
according to an advection reaction diffusion process. This model extends prior
works that considered a static growth potential, i.e., the initial growth
potential is only advected by diffeomorphisms. We focus on the mathematical
study of the corresponding system of coupled PDEs describing the joint
dynamics of the diffeomorphic transformation together with the growth
potential on the moving domain. Specifically, we prove the uniqueness and long
time existence of solutions to this system with reasonable initial and
boundary conditions as well as regularization on deformation fields. In
addition, we provide a few simple simulations of this model in the case of
isotropic elastic materials in 2D.
## 1 Introduction
We study in this paper a system of coupled evolution equations describing
shape changes (e.g., growth, or atrophy) for a free domain in
$\mathbb{R}^{d}$. A first equation, modeled as a diffusion-convection-reaction
equation, defines the evolution of a scalar function defined on the domain,
this scalar function being roughly interpreted as a “growth potential” that
determines shape changes. The second equation describes the relationship
between this potential and a smooth Eulerian velocity field and is modeled as
a linear, typically high-order, partial differential equation (PDE). The free-
form evolution of the domain then follows the flow associated with this
velocity field.
There is a significant amount of literature on growth models and shape
changes, where the dominant approach (in 3D) uses finite elasticity, modeling
the strain tensor as a product of a growth tensor (not necessarily achievable
by a 3D motion) and a correction term that makes it achievable, using this
correction term as a replacement of the strain tensor in a hyper-elastic
energy [Rodriguez et al., 1994]. Minimizing the energy of this residual stress
then leads to PDEs describing the displacement from the original domain
(assumed to be at rest) into the final one. We refer to several survey papers
such as Menzel and Kuhl [2012], Humphrey [2003], Ambrosi et al. [2011] for
references.
In this paper, we tackle the shape change problem with a different approach.
First, we use a dynamical model of time-dependent shapes, which allows us to
analyze each infinitesimal step using linear models. Second, our model
includes no residual stress but rather assumes that a new elastic equilibrium
is reached at each instant. Our focus is indeed on slow evolution of living
tissues, in which shape changes occur over several years and tissues can be
assumed to remain constantly at rest. More precisely, our model assumes that
at each time, an infinitesimal force places the shape into a new equilibrium,
which becomes the new reference configuration. The force is assumed to derive
from a potential, itself associated to the solution of a reaction-convection-
diffusion equation, while the new equilibrium is obtained as the solution of a
linear equation that characterizes the minimizer of a regularized deformation
energy. Figure 1 provides an example of such an evolution.
Figure 1: Example of shape change in two dimensions relative to a negative
growth potential (to be read from left to right and top to bottom). The potential
is initialized locally, and spreads while affecting the shape of the domain
until reaching saturation.
In our main result (see Theorem 3), we will prove that the full system has
solutions over arbitrary time intervals, and that the shape domain evolves
according to a diffeomorphic flow. This result opens the possibility to
formulate optimal control and inverse problems in which one determines initial
growth potentials (within a parametrized family) transforming a given initial
shape into a target shape. Such inverse problems were considered in the
multiplicative strain tensor framework and for thin-plate models in Lewicka et
al. [2011], and for crystal growth control, albeit not in a free-form setting,
in Trifkovic et al. [2009], Bajcinca [2013]. The optimal control of free-form
surfaces modeling the interface between two phases was considered in Bernauer
and Herzog [2011]. In Bressan and Lewicka [2018], tissue growth is modeled
through a control system evolving as an elastic body experiencing local volume
changes controlled by the concentration of a “morphogen”, which itself evolves
according to a linear elliptic equation. Our model can be seen as a
regularization of an extension of the model in that paper (we make, in
particular, non-isotropic assumptions, and our growth potential evolves
according to a non-linear equation); this regularization allows us to obtain
long-term existence and uniqueness results that were not available in Bressan
and Lewicka [2018]. Finally, Kulason et al. [2020] introduce a reaction-
diffusion model to analyze disease propagation and thickness changes in brain
cortical surfaces, where, unlike in the model studied in our paper, changes
happen within a fixed domain.
This paper follows and completes Hsieh et al. [2020] (see also Hsieh et al.
[2019]), which adopts a similar approach with a functional dependency of the
growth potential on the diffeomorphic flow. This assumption is relaxed here,
since the potential follows its own PDE, with an evolution coupled with that
of the shape. This extension will, as we will see, significantly complicate
the theoretical study of the equations, as well as their numerical
implementation.
## 2 General framework and main theorems.
### 2.1 Notation
##### Ambient space, vector fields and diffeomorphisms.
We will work in the Euclidean space $\mathbb{R}^{d}$. For an integer $s\geq
0$, and open subset $U$ of $\mathbb{R}^{d}$, we let $H^{s}(U)$ be the Hilbert
space of all real functions on $U$ of Sobolev class $H^{s}=W^{2,s}$. Recall
that $H^{0}(U)=L^{2}(U).$
We denote by $\mathcal{C}_{0}^{m}(\mathbb{R}^{d},\mathbb{R}^{d})$ the set of
all $m$-times continuously differentiable vector fields whose $k$-th
derivative $D^{k}v$ goes to zero at infinity for every $k$ between 0 and $m$. It
is a Banach space under the usual norm
$\|v\|_{m,\infty}=\sum_{k=0}^{m}\max_{x\,\in\,\mathbb{R}^{d}}|D^{k}v(x)|,\quad
v\in\mathcal{C}_{0}^{m}(\mathbb{R}^{d},\mathbb{R}^{d}).$
For a generic function $f:[0,T]\times\mathbb{R}^{d}\rightarrow\mathbb{R}^{d}$,
we will use the notation $f(t):\mathbb{R}^{d}\rightarrow\mathbb{R}^{d}$
defined by $f(t)(x)=f(t,x)$. We will use ${{\mathfrak{C}}}$ to denote a
generic constant and ${{\mathfrak{C}}_{a}}$ to show a generic constant
depending on $a$. The value of such constants may change from equation to
equation while keeping the same notation.
Now, assume $m\geq 1$. Let $\mathcal{D}\mathit{iff}^{m}(\mathbb{R}^{d})$ be
the space of $\mathcal{C}^{m}$-diffeomorphisms of $\mathbb{R}^{d}$ that go to
the identity at infinity, that is, the space of all diffeomorphisms $\varphi$
such that $\varphi-
id_{\mathbb{R}^{d}}\in\mathcal{C}_{0}^{m}(\mathbb{R}^{d},\mathbb{R}^{d})$,
with $id_{\mathbb{R}^{d}}:\mathbb{R}^{d}\rightarrow\mathbb{R}^{d}$ the
identity map $id_{\mathbb{R}^{d}}(x)=x$. Do note that
$\mathcal{D}\mathit{iff}^{m}(\mathbb{R}^{d})$ is an open subset of the Banach
affine space
$id_{\mathbb{R}^{d}}+\mathcal{C}_{0}^{m}(\mathbb{R}^{d},\mathbb{R}^{d})$, with
the induced topology. It is also known to be a topological group for the law
of composition [Bruveris and Vialard, 2016], so we also have
$\varphi^{-1}\in\mathcal{D}\mathit{iff}^{m}(\mathbb{R}^{d})$. We can then
define on $\mathcal{D}\mathit{iff}^{m}(\mathbb{R}^{d})$ the distance
$d_{m,\infty}$ by
$d_{m,\infty}(\varphi,\psi)=\max\big{(}\|\varphi-\psi\|_{m,\infty},\|\varphi^{-1}-\psi^{-1}\|_{m,\infty}\big{)},\quad\varphi,\psi\in\mathcal{D}\mathit{iff}^{m}(\mathbb{R}^{d}),$
(1)
whose open balls will be denoted $B_{r}(\varphi),\ r>0,\
\varphi\in\mathcal{D}\mathit{iff}^{m}(\mathbb{R}^{d}).$ This is easily checked
to be a complete distance, and it does not change the topology of
$\mathcal{D}\mathit{iff}^{m}(\mathbb{R}^{d})$. We introduce it because we will
often need to assume bounds on diffeomorphisms and their inverse at the same
time.
##### Operators and controlled curves in Banach spaces.
If $B$ and $\widetilde{B}$ are separable Banach spaces,
$\mathscr{L}(B,\widetilde{B})$ will denote the vector space of bounded linear
operators from $B$ to $\widetilde{B}$. Weak convergence of sequences $(x_{n})$
in $B$ will be denoted by $x_{n}\rightharpoonup x$. Denoting the topological
dual of $B$ by $B^{*}$, we will use the notation $(\mu\mid v)$ rather than
$\mu(v)$ to denote the evaluation of $\mu\in B^{*}$ at $v\in B$. We say that a
linear operator $A\in\mathscr{L}(B,B^{*})$ is symmetric if the corresponding
bilinear form $(v,w)\mapsto(Av\mid w)$ is symmetric.
For a given $T>0$ and open subset $U$ of a Banach space ${B}$, we will denote
by $L^{p}([0,T],U),\ p\in[1,+\infty]$ the space of measurable maps
$f:[0,T]\rightarrow U$ such that $t\mapsto\|f(t)\|_{B}^{p}$ is integrable. One
can then define the Sobolev space $W^{p,1}([0,T],U)$ whose elements $f$ are
differentiable almost everywhere, i.e. the differential $\displaystyle
t\mapsto\frac{df}{dt}(t)=\lim_{t^{\prime}\rightarrow
t}\frac{f(t^{\prime})-f(t)}{t^{\prime}-t}$ exists almost everywhere and
$\forall t_{0},t_{1}\in[0,T],\quad
f(t_{1})-f(t_{0})=\int_{[t_{0},t_{1}]}\frac{df}{dt}(t)dt.$
For $p=2,$ we will simply write $H^{1}$ instead of $W^{2,1}.$
Case in point, for a time-dependent vector field $v\in
L^{1}([0,T],\mathcal{C}^{m}_{0}(\mathbb{R}^{d},\mathbb{R}^{d}))$, there is a
unique $\varphi:t\mapsto\varphi(t)$ in
$W^{1,1}([0,T],\mathcal{D}\mathit{iff}^{m}(\mathbb{R}^{d}))$ that satisfies
$\varphi(0)=id_{\mathbb{R}^{d}}$ and
$\frac{d\varphi}{dt}(t)=v(t)\circ\varphi(t)$ for almost every $t$.
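As a concrete illustration (a minimal sketch of our own, not part of the paper's theory), this flow can be approximated by an explicit Euler scheme applied to a cloud of points, assuming the time-dependent field $v$ is available as a Python function:

    import numpy as np

    def flow(v, points, T=1.0, steps=1000):
        # Euler approximation of d(phi)/dt = v(t) o phi(t), phi(0) = id,
        # applied to an (n, d) array of points; v(t, x) returns an (n, d) array.
        dt = T / steps
        x = points.copy()
        for k in range(steps):
            x += dt * v(k * dt, x)
        return x

    # Example: a rigid rotation field in the plane; (1, 0) maps close to (0, 1).
    def v_rot(t, x):
        return np.stack([-x[:, 1], x[:, 0]], axis=1)

    print(flow(v_rot, np.array([[1.0, 0.0]]), T=np.pi / 2))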
##### RKHS (Reproducing Kernel Hilbert Spaces) of vector fields.
Throughout this paper, $V$ is a Hilbert space of vector fields on
$\mathbb{R}^{d}$ that is continuously embedded in
$C_{0}^{m}(\mathbb{R}^{d},\mathbb{R}^{d})$ for some $m\geq 1$ (we will write
$V\hookrightarrow C_{0}^{m}(\mathbb{R}^{d},\mathbb{R}^{d})$), with inner
product $\big{\langle}{\cdot}\,,\,{\cdot}\big{\rangle}_{V}$ and norm
$\|\cdot\|_{V}$. Since $V\hookrightarrow
C_{0}^{m}(\mathbb{R}^{d},\mathbb{R}^{d})$, there exists a constant $c_{V}$
such that
$\|v\|_{m,\infty}\leq c_{V}\|v\|_{V}.$ (2)
The duality map $L_{V}:V\to V^{*}$ is given by
$\big{(}{L_{V}\hskip
1.0ptv}\,|\,{w}\big{)}=\big{\langle}{v}\,,\,{w}\big{\rangle}_{V}$
and provides an isometry from $V$ onto $V^{*}$. We denote the inverse of
$L_{V}$ by $K_{V}\in\mathscr{L}(V^{*},V)$, which, because of the embedding
assumption, is a kernel operator [Aronszajn, 1950]. Note that
$\|v\|_{V}^{2}=(L_{V}\hskip 1.0ptv\mid v)=(K_{V}^{-1}\hskip 1.0ptv\mid v).$
As an example, the space $V$ can be the reproducing kernel Hilbert space
(RKHS) associated with a Matérn kernel of some order $s$, and some width
$\sigma$, which, in three dimensions, implies that $V$ is a Sobolev space
$H^{s+2}$. For the specific value $s=3$, which we will use in our experiments,
the kernel operator (when applied to a vector measure $\mu\in V^{*}$) takes
the form
$(K_{V}\hskip 1.0pt\mu)(x)=\int_{\mathbb{R}^{d}}\kappa(|x-y|/\sigma)\,d\mu(y)$
with $\kappa(t)=(1+t+2t^{2}/5+t^{3}/15)e^{-t}$.
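For a purely discrete vector measure $\mu=\sum_{i}a_{i}\delta_{y_{i}}$, applying $K_{V}$ reduces to a kernel summation. A small numerical sketch of our own (the points and vectors are made up; the profile is the Matérn one above):

    import numpy as np

    def kappa(t):
        # Matern profile of order s = 3 (i.e., nu = 7/2)
        return (1 + t + 2 * t**2 / 5 + t**3 / 15) * np.exp(-t)

    def apply_KV(x, y, a, sigma=1.0):
        # Evaluate (K_V mu)(x) for mu = sum_i a_i delta_{y_i}:
        # x: (m, d) evaluation points, y: (n, d) support points, a: (n, d) vectors.
        r = np.linalg.norm(x[:, None, :] - y[None, :, :], axis=-1)   # (m, n) distances
        return kappa(r / sigma) @ a                                  # (m, d) field values

    y = np.random.randn(5, 3)
    a = np.random.randn(5, 3)
    print(apply_KV(np.zeros((1, 3)), y, a))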
##### Weak derivatives for Hilbert-valued functions.
Following [Lions and Magenes, 1972, Chapter 1, Section 1.3], we define
generalized derivatives of functions $u:(0,T)\to H$, where $T$ is a positive
number and $H$ a Hilbert space as follows. Let
$\mathscr{D}\big{(}(0,T)\big{)}$ denote the Schwartz space of compactly
supported infinitely differentiable real-valued functions defined on $(0,T)$.
The space of $H$-valued distributions is
$\mathscr{D}^{*}\big{(}(0,T),H\big{)}\colonequals\mathscr{L}(\mathscr{D}\big{(}(0,T)\big{)},H).$
If $u\in\mathscr{D}^{*}\big{(}(0,T),H\big{)}$, its generalized derivative,
denoted $\partial_{t}u$, is the element of
$\mathscr{D}^{*}\big{(}(0,T),H\big{)}$ defined by
$\partial_{t}u(\varphi)\colonequals-u\Big{(}\frac{d\varphi}{dt}\Big{)}\,\in\,H\
\ \ \mbox{ for all }\varphi\in\mathscr{D}\big{(}(0,T)\big{)}.$ (3)
We can identify any $u\in L_{\mathrm{loc}}^{1}([0,T],H)$ (i.e., $u\in
L^{1}([a,b],H)$ for all $[a,b]\subset(0,T)$), with the corresponding
$\widetilde{u}\in\mathscr{D}^{*}\big{(}(0,T),H\big{)}$ given by
$\widetilde{u}(\varphi)\colonequals\int_{0}^{T}u(t)\,\varphi(t)\,dt\,\in\,H\ \
\ \mbox{ for all }\varphi\in\mathscr{D}\big{(}(0,T)\big{)}$
and show that $\widetilde{u}\in\mathscr{D}^{*}\big{(}(0,T),H\big{)}$. We can
therefore see $L_{\mathrm{loc}}^{1}([0,T],H)$ as a subset of
$\mathscr{D}^{*}\big{(}(0,T),H\big{)}$.
In what follows, we will use the following two results, both taken from Lions
and Magenes [1972].
###### Theorem 1.
Let $T$ be a positive number and $\Omega$ an open subset of $\mathbb{R}^{d}$.
Assume that $u\in L^{2}([0,T],H^{1}(\Omega))$ and that $\partial_{t}u\in
L^{2}([0,T],H^{1}(\Omega)^{*})$. Then $u\in C([0,T],L^{2}(\Omega))$. (See
Lions and Magenes [1972, Chapter 1, Proposition 2.1 and Theorem 3.1].)
We will also use the following general result on weak solutions of parabolic
equations. A bounded linear map $\mathcal{L}:L^{2}([0,T],H^{1}(\Omega))\to
L^{2}([0,T],H^{1}(\Omega)^{*})$ is coercive if there exists $\alpha>0$ such
that $(\mathcal{L}\hskip 1.0ptu\mid
u)\geq\alpha\,\int_{0}^{T}\|u\|_{H^{1}(\Omega)}^{2}\,dt$ for all $u\in
L^{2}([0,T],H^{1}(\Omega))$.
###### Theorem 2.
Given a coercive bounded linear mapping
$\mathcal{L}:L^{2}([0,T],H^{1}(\Omega))\to L^{2}([0,T],H^{1}(\Omega)^{*})$, a
function $f$ in $L^{2}([0,T],H^{1}(\Omega)^{*})$ and an initial condition
$u_{0}\in L^{2}(\Omega)$, there exists a unique solution $u\in
L^{2}([0,T],H^{1}(\Omega))$ of the parabolic initial value problem
$\left\\{\begin{array}[]{l}\partial_{t}\hskip 1.0ptu+\mathcal{L}\hskip
1.0ptu=f\\\ u(0)=u_{0}\,.\end{array}\right.$ (4)
(See Lions and Magenes [1972, Chapter 3, Theorem 1.1, Section 4.3, and Remark
4.3].)
We will also need the following technical lemma. Its proof is a simple
application of functional approximation theorems in $L^{2}$ and we shall omit
it for brevity.
###### Lemma 1.
Let $\mathcal{H}$ be a Banach space and suppose that $w\in
L^{2}([0,T],\mathcal{H})$ with $\partial_{t}\hskip 1.0ptw\in
L^{2}([0,T],\mathcal{H}^{*})$. Then the derivative in the sense of
distributions $\partial_{t}\hskip 1.0pt\|w(\cdot)\|_{\mathcal{H}}^{2}$ is a
function in $L^{1}([0,T])$ and equals $t\mapsto
2\,\big{(}(\partial_{t}\hskip 1.0ptw)(t)\mid w(t)\big{)}$ for almost every
$t$.
### 2.2 Control systems for shapes
We want to refine the system introduced in Hsieh et al. [2020], that was
designed as a mathematical model representing possibly pathological shape
changes in human organs or tissues. The control system starts with an initial
volume, and exhibits a time-dependent deformation induced by a vector field on
the domain, where this vector field results from auxiliary variables defined
on the volume (e.g., a scalar field) that one can loosely interpret as a
manifestation of a “disease”.
##### Mixed diffeomorphic-elastic model with fixed potential.
We therefore start with an open domain $M_{0}$ of $\mathbb{R}^{d}$ and model a
deformation $t\mapsto M_{t}$. We first introduce the “diffeomorphic” model,
which is the foundation of the LDDMM algorithm (for large deformation
diffeomorphic metric mapping [Beg et al., 2005]) in shape analysis. In this
model, the deformation is tracked through a time-dependent diffeomorphism
$\varphi(\cdot)\in W^{1,1}([0,T],\mathcal{D}\mathit{iff}^{m}(\mathbb{R}^{d}))$
which is also the flow of a time-dependent vector field $v(\cdot)\in
L^{1}([0,1],V)$ that belongs to our RKHS $V$, so that for all $t$ in $[0,T],$
$M_{t}=\varphi(t,M_{0})$, with
$\varphi(0)=id_{\mathbb{R}^{d}}\quad\text{and}\quad\frac{d}{dt}{\varphi}(t)=v(t)\circ\varphi(t)\
\text{almost everywhere.}$ (5)
The vector field $v$ is preferably represented in the form
$v(\cdot)=K_{V}u(\cdot)$ for some $u(\cdot)\in L^{1}([0,T],V^{*})$, which then
acts as a control. This control can be left unspecified and estimated as part
of an optimal control problem (as done in Beg et al. [2005], Joshi and Miller
[2000], Dupuis et al. [1998], Trouvé [1995], Arguillere et al. [2014]), or
modeled as an element of some parametrized class of time-dependent linear
forms on $V$ [Younes, 2011, 2014, Gris et al., 2018, Younes et al., 2020]. We
note that the relation $v(t)=K_{V}u(t)$ is equivalent to the variational
formulation
$v(t)=\underset{v^{\prime}\,\in\,V}{\arg\min}\
\frac{1}{2}\big{(}{L_{V}v^{\prime}}\,|\,{v^{\prime}}\big{)}-(u(t)\mid
v^{\prime}).$
Alternatively, we may consider $M_{t}$ as a deformable solid, with
infinitesimal deformation energy quantified by a linear tensor
$\mathscr{A}(t)$ which is required to be a symmetric, positive semi-definite
operator in
$\mathscr{L}(H^{1}(M_{t},\mathbb{R}^{d}),H^{1}(M_{t},\mathbb{R}^{d})^{*})$.
Now, exert an infinitesimal force density $\mathfrak{j}(t)dt$ on $M_{t}$
($\mathfrak{j}(t)$ is a time derivative of a force, also called a yank).
Assuming this yank belongs to $H^{1}(M_{t},\mathbb{R}^{d})^{*}$, the
infinitesimal deformation $v(t)dt$ that brings $M_{t}$ to equilibrium is
given, when it exists, by
$v(t)=\underset{v^{\prime}\,\in\,H^{1}(M_{t},\mathbb{R}^{d})}{\arg\min}\
\frac{1}{2}(\mathscr{A}(t)v^{\prime}\mid v^{\prime})-(\mathfrak{j}(t)\mid
v^{\prime}).$
In this paper, following Hsieh et al. [2020], we fix a weight $\omega>0$ to
combine the LDDMM model and the deformable solid model, and define
$v(t)=\underset{v^{\prime}\,\in\,V}{\arg\min}\
\frac{\omega}{2}\big{(}{L_{V}v^{\prime}}\,|\,{v^{\prime}}\big{)}+\frac{1}{2}(\mathscr{A}(t)v^{\prime}\mid
v^{\prime})-(\mathfrak{j}(t)\mid v^{\prime}),$ (6)
using $\mathfrak{j}\in L^{1}([0,T],V^{*})$ as a control. Here, we make the
abuse of notation identifying $\mathscr{A}(t)v$ with its restriction to
$M_{t}$, the assumption that $V\hookrightarrow
C_{0}^{m}(\mathbb{R}^{d},\mathbb{R}^{d})$ ensuring that this restriction maps
$V$ into $H^{1}(M_{t},\mathbb{R}^{d})$. Here, the term
$(\omega/2)\big{(}{L_{V}v^{\prime}}\,|\,{v^{\prime}}\big{)}$ can be seen as an
internal energy causing permanent change to the shape, or simply as a
regularization term ensuring the existence of $v(t)\in V$. Indeed, since
$\mathscr{A}(t)$ is positive semi-definite, $v(t)$ always exists, and belongs
to $L^{1}([0,T],V)$, so that it generates a well-defined flow $\varphi(\cdot)$
on $\mathbb{R}^{d}$.
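The first-order optimality condition of (6) is the linear equation $(\omega L_{V}+\mathscr{A}(t))\,v(t)=\mathfrak{j}(t)$ (with $\mathscr{A}(t)v$ restricted as above). In any finite-dimensional (e.g., Galerkin or particle) discretization, with $L_{V}$ represented by a symmetric positive definite matrix, $\mathscr{A}(t)$ by a symmetric positive semi-definite matrix and the yank by a vector, this becomes a plain linear solve; a minimal sketch of our own, with made-up matrices:

    import numpy as np

    def equilibrium_velocity(L, A, j, omega):
        # Minimizer of (omega/2) v^T L v + (1/2) v^T A v - j^T v.
        return np.linalg.solve(omega * L + A, j)

    n = 10
    L = np.eye(n)                 # discretized L_V (symmetric positive definite)
    B = np.random.randn(n, n)
    A = B @ B.T                   # discretized A(t) (symmetric positive semi-definite)
    j = np.random.randn(n)
    v = equilibrium_velocity(L, A, j, omega=0.5)
    print(np.allclose((0.5 * L + A) @ v, j))   # True: optimality condition holds

Since $\omega L_{V}$ is positive definite and $\mathscr{A}(t)$ is positive semi-definite, the system matrix is invertible for any $\omega>0$, which mirrors the existence argument above.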
We point out important differences between the operators $L_{V}\doteq
K_{V}^{-1}$ (such that $\|v\|_{V}^{2}=\big{(}{L_{V}v}\,|\,{v}\big{)}$) and
$\mathscr{A}(t)$. The former, $L_{V}:V\to V^{*}$, is defined on a fixed space
of vector fields ($V$) themselves defined on the whole ambient space
$\mathbb{R}^{d}$. In contrast, $\mathscr{A}(t)$ is defined on
$H^{1}(M_{t},\mathbb{R}^{d})$, and therefore applies to vector fields defined
on $M_{t}$. It is, by definition, shape-dependent. The global nature of $L_{V}$
(and the higher-order assumption ensuring the embedding of $V$ in a space of
differentiable functions) is of course what makes possible the diffeomorphic
property of the evolving flow over all time intervals.
Although we will work with general assumptions on the deformation energy
tensor $\mathscr{A}$, the main example of such a tensor in three dimensions
comes from linear elasticity [Ciarlet, 1988, Marsden and Hughes, 1994].
Generally, such a tensor
$\mathscr{A}(t)\in\mathscr{L}(H^{1}(M_{t},\mathbb{R}^{3}),H^{1}(M_{t},\mathbb{R}^{3})^{*})$
is defined so that
$(\mathscr{A}(t)v\mid
w)=\int_{M_{t}}\Big{(}{\mathscr{E}_{t}(x,\epsilon_{v}(x))}\,\big{|}\,{\epsilon_{w}(x)}\Big{)}dx,$
with $\epsilon_{v}=(Dv+Dv^{T})/2$. Here, $G^{T}$ is the transpose of a matrix
$G$, and for every $x$, $\mathscr{E}_{t}(x)$, is a symmetric positive definite
tensor on $3\times 3$ symmetric matrices. In particular, one can favor at each
point $x$ of $M_{t}$ specific directions of deformations by appropriately
choosing $\mathscr{E}_{t}(x)$. Examples of such tensors that could be used in
applications of our model are those given in Hsieh et al. [2020, Section 4].
In the simplest situation, one can assume that the material is homogeneous,
isotropic and that its elastic properties are also constant in time, in which
case for all $x\in M_{t}$:
$\Big{(}{\mathscr{E}_{t}(x,\epsilon_{v})}\,\big{|}\,{\epsilon_{w}}\Big{)}=\lambda\,\mathrm{tr}(\epsilon_{v})\,\mathrm{tr}(\epsilon_{w})+2\mu\,\mathrm{tr}(\epsilon_{v}^{T}\epsilon_{w})$
(7)
where $\lambda$ and $\mu$ are known as the Lamé parameters of the elastic
material.
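As a pointwise sanity check of (7), the isotropic integrand can be evaluated directly from the Jacobian matrices of two vector fields at a point (a sketch of our own; Dv and Dw below stand for $Dv(x)$ and $Dw(x)$):

    import numpy as np

    def isotropic_density(Dv, Dw, lam, mu):
        # lambda tr(eps_v) tr(eps_w) + 2 mu tr(eps_v^T eps_w),
        # with eps_v = (Dv + Dv^T) / 2 the symmetrized Jacobian.
        ev = 0.5 * (Dv + Dv.T)
        ew = 0.5 * (Dw + Dw.T)
        return lam * np.trace(ev) * np.trace(ew) + 2 * mu * np.trace(ev.T @ ew)

    Dv = np.random.randn(3, 3)
    # For v = w and positive Lame parameters the density is nonnegative:
    print(isotropic_density(Dv, Dv, lam=1.0, mu=0.5) >= 0)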
While not necessarily restricting to elasticity operators such as those above,
we will make the additional assumption that $\mathscr{A}(t)$ is fully
specified by the transformation $\varphi$ (defined by (5)) applied to the
initial volume. More precisely, we will assume that we are given a mapping
$A:\mathcal{D}\mathit{iff}^{m}(\mathbb{R}^{d})\rightarrow\mathscr{L}\left(H^{1}(\varphi(M_{0}),\mathbb{R}^{d}),H^{1}(\varphi(M_{0}),\mathbb{R}^{d})^{*}\right),$
such that for every diffeomorphism $\varphi,$ $A_{\varphi}$ is a deformation
energy tensor on the domain $\varphi(M_{0})$ and take
$\mathscr{A}(t)=A_{\varphi(t)}$ in (6).
##### Yank model.
It remains to model a yank $\mathfrak{j}(\cdot)$ that induces the deformation
on $M_{t}$. The model considered in Hsieh et al. [2020] starts with a fixed
positive function $g_{0}\in\mathcal{C}^{1}(M_{0},\mathbb{R})$, with negligible
values on the boundary $\partial M_{0}$. In the diseased tissue analogy, this
function may be thought of as describing an initial physiological impact of a
disease, for example the density of dying cells, or of some protein
responsible for tissue remodeling. Then, for a deformation
$\varphi\in\mathcal{D}\mathit{iff}^{m}(\mathbb{R}^{d})$, [Hsieh et al., 2020]
defines the corresponding yank $j_{\varphi}$ along $\varphi(M_{0})$ to be the
negative gradient of some function
$Q:\mathbb{R}_{+}\rightarrow\mathbb{R}_{+}$ of the transported function
$g_{0}\circ\varphi^{-1}$, so that the yank pulls the shape towards places
where $g_{0}\circ\varphi^{-1}$ is highest. Formally, this gives:
$\forall\ v^{\prime}\in V,\quad(j_{\varphi}\mid
v^{\prime})=\int_{\varphi(M_{0})}\nabla\left[Q(g_{0}\circ\varphi^{-1})\right]^{T}v^{\prime}=\int_{\varphi(M_{0})}Q(g_{0}\circ\varphi^{-1})\hskip
1.0pt(-\mathrm{div}\,v^{\prime}),$
where the boundary term is negligible thanks to our assumptions on $g_{0}$.
The resulting dynamical system uses $\mathfrak{j}(t)=j_{\varphi(t)}$, yielding
$\left\\{\begin{array}[]{l}\partial_{t}\hskip
1.0pt\varphi(t,x)=v(t,\varphi(t,x)),\quad\varphi(0,x)=x,\\\\[5.0pt]
\displaystyle v(t)=\underset{v^{\prime}\,\in\,V}{\arg\min}\
\frac{\omega}{2}\,\big{(}{L_{V}v^{\prime}}\,|\,{v^{\prime}}\big{)}+\frac{1}{2}\,(A_{\varphi(t)}\hskip
1.0ptv^{\prime}\mid v^{\prime})-(j_{\varphi(t)}\mid v^{\prime}),\\\\[5.0pt]
\displaystyle(j_{\varphi(t)}\mid
v^{\prime})=\int_{M_{t}}Q(g_{0}\circ\varphi^{-1}(t))\hskip
1.0pt(-\mathrm{div}\,v^{\prime}),\quad v^{\prime}\in V,\quad
M_{t}=\varphi(t)(M_{0}).\end{array}\right.$ (8)
It was studied in Hsieh et al. [2020] and proved to have solutions over
arbitrary time intervals. In the same paper, the issue of identifying $g_{0}$
among a parametrized family of candidates given the initial state $M_{0}$ and
a deformed state at time $T$, $M_{T}=\varphi(T,M_{0})$, was also considered.
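To make the dynamics of (8) concrete, the following is a deliberately simplified, runnable one-dimensional toy of our own (not the scheme of Hsieh et al. [2020]): we take $\mathscr{A}\equiv 0$, $Q=\mathrm{id}$, a scalar Gaussian kernel for $V$, and track the material points of $M_{0}$ as particles:

    import numpy as np

    def gauss_kernel(x, y, sig=0.3):
        return np.exp(-0.5 * ((x[:, None] - y[None, :]) / sig) ** 2)

    n, steps, dt, omega = 200, 100, 0.01, 1.0
    x0 = np.linspace(-1.0, 1.0, n)       # material points of M_0
    g0 = np.exp(-20.0 * x0 ** 2)         # initial potential, negligible at the boundary
    phi = x0.copy()                      # phi(t, .) sampled at the material points
    w = (x0[-1] - x0[0]) / (n - 1)       # quadrature weight on M_0

    for _ in range(steps):
        # g0 o phi(t)^{-1} evaluated at the particles phi(x_i) is just g0(x_i),
        # so its spatial gradient can be estimated by differences along phi:
        grad = np.gradient(g0, phi)      # d/dx [Q(g0 o phi^{-1})] with Q = id
        jac = np.gradient(phi, x0)       # |D phi|, from the change of variables to M_0
        # With A = 0 the minimization in (8) reduces to v = (1/omega) K_V j_phi:
        v = gauss_kernel(phi, phi) @ (grad * jac) * w / omega
        phi += dt * v                    # Euler step of d(phi)/dt = v o phi
    print("endpoints of M_t:", phi[0], phi[-1])

As expected from the model, the yank pulls the particles towards the maximum of the transported potential, so the toy domain contracts towards the bump of $g_{0}$.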
The assumption that shape change is driven by a strict advection of the
function $g_{0}$ can be seen as overly restrictive, as it does not allow for
independent transformations and external factors possibly affecting this
function. In this paper, we consider a reaction-diffusion equation on the
moving domain $M_{t}$ whose solution also controls the shape motion. This kind
of coupling, as far as we are aware, has not appeared in the literature.
##### Reaction-diffusion model.
Let us start with a fixed domain $U$ in $\mathbb{R}^{d}$, and consider
$p:[0,T]\rightarrow\mathcal{C}^{2}(U)$. One can think of $p(t,x)$ as some
measure of the “density of a disease” at time $t$ and location $x$ with
respect to the Lebesgue measure. A reaction-diffusion equation on $p$ in the
fixed domain $U$ is given by
$\partial_{t}p(t,x)=\mathrm{div}\big{(}S(t,x)\nabla
p(t,x)\big{)}+R(p(t,x)),\quad a.e.\ t\in[0,T],\ x\in U,$ (9)
with given initial value $p(0)=p_{0}:U\rightarrow\mathbb{R}$, and the Neumann
boundary condition $S(t,x)\nabla p(t,x)=0$ for all time $t$ and $x$ in the
boundary $\partial U$. It is understood that the gradient $\nabla$ and the
divergence are taken with respect to the $x$ coordinates.
On the right-hand side, $S(t,x)$ is a 3-by-3 symmetric positive definite
matrix for each $t$ and $x$. For example, for $S(t,x)=S=I_{3}$, the identity
matrix, we get $\mathrm{div}(S\nabla p(t,x))=\Delta p(t,x)$, the Laplacian of
$p$. More generally, for a diffusion at $x$ with rate $r_{i}>0$ in the $i$-th
direction, $i=1,2,3$, we have
$S(t,x)=r_{1}e_{1}e_{1}^{T}+r_{2}e_{2}e_{2}^{T}+r_{3}e_{3}e_{3}^{T}=\mathrm{diag}(r_{1},r_{2},r_{3})=(e_{1}\
e_{2}\ e_{3})\mathrm{diag}(r_{1},r_{2},r_{3})(e_{1}\ e_{2}\ e_{3})^{T}$
where $e_{i}$ is a unit vector pointing to the $i$-th direction and
$\mathrm{diag}(r_{1},r_{2},r_{3})$ is the diagonal matrix with corresponding
entries. In this paper, we will work under the following general assumption on
$S$ and how it is affected by shape change. We consider a time-dependent field
of frames $(t,x)\in[0,T]\times U\mapsto F(t,x)\in\mathrm{GL}_{3}(\mathbb{R})$
of unit vectors, and let
$S(t,x)=F(t,x)\,\mathrm{diag}(r_{1},r_{2},r_{3})\,F(t,x)^{T}.$
Finally, $R:\mathbb{R}\rightarrow\mathbb{R}$ is the reaction function, and
models external factors affecting the function $p$. It typically satisfies
$R(0)=0$ so that $p\equiv 0$ is a solution of the PDE initialized with
$p_{0}\equiv 0$. It may have a sigmoidal shape (such that $R(s)=0$ if $s\leq
0$, and $R$ increases on $[0,+\infty)$ with a finite limit at $+\infty$), which
results in a growth/atrophy model in which change accelerates until reaching a
limit speed. Alternatively, in order to model a growth/atrophy phase over a
finite time interval, $R$ may increase on $[0,+\infty)$ to a maximal value
before decreasing again to 0 (this is the model chosen in Figure 1).
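For concreteness, here are two simple candidates with these qualitative shapes
(illustrative choices of ours, not the functions used in the experiments):

```python
import numpy as np

def R_sigmoidal(s, vmax=1.0, scale=1.0):
    """Vanishes for s <= 0 and increases to the finite limit vmax:
    change accelerates until it reaches a limiting speed."""
    return vmax * np.tanh(np.maximum(np.asarray(s, dtype=float), 0.0) / scale)

def R_pulse(s, vmax=1.0, s_peak=1.0):
    """Vanishes for s <= 0, rises to vmax at s_peak, then decays back
    to 0, modeling a growth/atrophy phase of finite extent."""
    s = np.maximum(np.asarray(s, dtype=float), 0.0)
    return vmax * (s / s_peak) * np.exp(1.0 - s / s_peak)
```

Both choices satisfy $R(0)=0$ and are Lipschitz and bounded, which matches the
assumptions used in the analysis below.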
##### Integral formulation, and reaction-diffusion on a moving domain.
Equation (9) can be written in integral form leading to the weak formulation
that we will study specifically. After integrating the equation on a smaller
domain, and using the divergence theorem, we can say that the density
$(t,x)\mapsto p(t,x)$ is a solution of Equation (9) if and only if, for every
domain $U^{\prime}\subset U$,
$\frac{d}{dt}\int_{U^{\prime}}p(t,x)dx=\int_{\partial U^{\prime}}(S(t,x)\nabla
p(t,x))^{T}n_{\partial U^{\prime}}(x)d\sigma_{\partial
U^{\prime}}(x)+\int_{U^{\prime}}R(p(t,x))dx,$
with $n_{\partial U^{\prime}}$ the outer normal to the boundary of
$U^{\prime}$ and $\sigma_{\partial U^{\prime}}$ the surface measure on
$\partial U^{\prime}$. In other words, the rate of change of the total amount
of $p$ within $U^{\prime}$ is equal to the flux of its gradient across the
boundary, modified by the diffusion tensor, which takes into account the
directions and speeds of diffusion. To this is added the total amount created
in $U^{\prime}$ by the reaction $R$.
From our fixed initial volume $M_{0}$, we can give a corresponding formulation
for the evolution of a density on a moving domain $t\mapsto
M_{t}=\varphi(t)(M_{0})$, with $\varphi\in
H^{1}([0,T],\mathcal{D}\mathit{iff}^{m}(\mathbb{R}^{d}))$. We need, however,
to account for changes in the directions of diffusion as the shape is
deformed.
First, we define, along each deformation $\varphi(M_{0})$ of $M_{0}$ with
$\varphi\in\mathcal{D}\mathit{iff}^{m}(\mathbb{R}^{d})$, a frame $F_{\varphi}$
of unit vectors along $\varphi(M_{0})$. In other words, $F_{\varphi}$ is a
mapping from $\varphi(M_{0})$ into $\mathrm{GL}_{3}(\mathbb{R})$ whose values
have columns of length 1. We define a corresponding diffusion tensor
$S_{\varphi}=F_{\varphi}\,\mathrm{diag}(r_{1},r_{2},r_{3})F_{\varphi}^{T}.$
Then, we say that the time-dependent density
$p(t):M_{t}=\varphi(t,M_{0})\rightarrow\mathbb{R}$ with respect to the
Lebesgue measure is a solution of the reaction-diffusion equation along the
moving domain $t\mapsto M_{t}$ if, for every open subset $U_{0}\subset M_{0}$,
and almost every $t$ in $[0,T]$, the equation
$\displaystyle\frac{d}{dt}\int_{\varphi(t,U_{0})}p(t,x)dx=$
$\displaystyle\int_{\varphi(t,\partial U_{0})}(S_{\varphi(t)}(x)\nabla
p(t,x))^{T}n_{\varphi(t,\partial U_{0})}(x)d\sigma_{\varphi(t,\partial
U_{0})}(x)$ (10) $\displaystyle+$
$\displaystyle\int_{\varphi(t,U_{0})}R(p(t,x))dx,$
is satisfied with Neumann boundary conditions $(S_{\varphi(t)}(x)\nabla
p(t,x))^{T}n_{\partial\varphi(t,M_{0})}(x)=0$ for every $t$ in $[0,T]$ and $x$
in $\partial M_{t}=\varphi(t,\partial M_{0}).$
Turning this integral formulation into a pointwise one is difficult, because
the support of $p(t)$ changes as $t$ increases. This results from considering
$p$ in spatial (i.e., Eulerian) coordinates. It is easier to deduce the
correct PDE for the corresponding density, denoted $\mathbb{p}$, in material
(i.e., Lagrangian) coordinates. This density is the pull-back
$\varphi(t)^{*}p(t)$ of $p$ through $\varphi(t)$:
$\forall f\in
L^{1}(M_{0}),\quad\int_{M_{0}}\mathbb{p}(t)fdx=\int_{M_{0}}[\varphi(t)^{*}p(t)]fdx=\int_{\varphi(t)(M_{0})}p(t)f\circ\varphi(t)^{-1}dx.$
In other words, $\mathbb{p}(t)=p(t)\circ\varphi(t)\,J\varphi(t)$, with
$J\varphi(t)=\det(D\varphi(t))$, the Jacobian of $\varphi(t).$
Note that we get
$\nabla\mathbb{p}(t)=D\varphi(t)^{T}\,\nabla
p(t)\circ\varphi(t)\,J\varphi(t)+\mathbb{p}(t)\frac{\nabla
J\varphi(t)}{J\varphi(t)},$
so that
$\nabla
p(t)\circ\varphi(t)=D\varphi(t)^{-T}\left(\nabla\mathbb{p}(t)-\mathbb{p}(t)\frac{\nabla
J\varphi(t)}{J\varphi(t)}\right).$ (11)
Performing in Equation (10) the change of variable
$x=\varphi(t,y),$
so that
$dx=J\varphi(t,y)dy,\quad y\in M_{0},$
and
$n_{{\varphi(t)}(\partial U_{0})}(x)d\sigma_{{\varphi(t)}(\partial
U_{0})}(x)=J\varphi(t,y)D\varphi(t,y)^{-T}n_{\partial
U_{0}}(y)d\sigma_{\partial U_{0}}(y),\quad y\in\partial M_{0},$
we obtain an identity on the fixed domain $U_{0}$:
$\displaystyle\frac{d}{dt}\int_{U_{0}}\mathbb{p}(t)=$ $\displaystyle\
\int_{\partial
U_{0}}J\varphi(t)\left[D\varphi(t)^{-1}S_{\varphi(t)}\circ\varphi(t)D\varphi(t)^{-T}\left(\nabla\mathbb{p}(t)-\mathbb{p}(t)\frac{\nabla
J\varphi(t)}{J\varphi(t)}\right)\right]^{T}n_{\partial U_{0}}d\sigma_{\partial U_{0}}$
$\displaystyle+\int_{U_{0}}R\left(\frac{\mathbb{p}(t)}{J\varphi(t)}\right)J\varphi(t).$
Since the pull-back of a vector field $v:x\mapsto v(x)$ by $\varphi(t)$ is
$\varphi(t)^{*}v:x\mapsto D\varphi(t,x)^{-1}v(\varphi(t,x))$, the pull-back of
the frame field $F_{\varphi(t)}$ is
$\mathbb{F}_{\varphi(t)}=\varphi(t)^{*}F_{\varphi(t)}=D\varphi(t)^{-1}\,F_{\varphi(t)}\circ\varphi(t)$,
which means that the pull-back of the diffusion tensor is
$\mathbb{S}_{\varphi(t)}=\varphi(t)^{*}S_{\varphi(t)}$, given by
$\mathbb{S}_{\varphi(t)}=D\varphi(t)^{-1}\,S_{\varphi(t)}\circ\varphi(t)\,D\varphi(t)^{-T}.$
Note that this formula is valid even when replacing $\varphi(t)$ by any
diffeomorphism $\phi\in\mathcal{D}\mathit{iff}^{m}(\mathbb{R}^{d})$.
With this new notation, the integral equation reads
$\frac{d}{dt}\int_{U_{0}}\mathbb{p}(t)=\ \int_{\partial
U_{0}}J\varphi(t)\left[\mathbb{S}_{\varphi(t)}\left(\nabla\mathbb{p}(t)-\mathbb{p}(t)\frac{\nabla
J\varphi(t)}{J\varphi(t)}\right)\right]^{T}n_{\partial
U_{0}}d\sigma_{\partial U_{0}}+\int_{U_{0}}R\left(\frac{\mathbb{p}(t)}{J\varphi(t)}\right)J\varphi(t).$
The boundary conditions are then
$\left[\mathbb{S}_{\varphi(t)}(x)\left(\nabla\mathbb{p}(t)-\mathbb{p}(t)\frac{\nabla
J\varphi(t)}{J\varphi(t)}\right)\right]^{T}n_{\partial M_{0}}(x)=0,\quad
t\in[0,T],\ x\in\partial M_{0}.$
From there, the divergence theorem yields
$\frac{d}{dt}\int_{U_{0}}\mathbb{p}(t,x)dx=\int_{U_{0}}\mathrm{div}\left(J\varphi(t)\,\mathbb{S}_{\varphi(t)}\left(\nabla\mathbb{p}(t)-\mathbb{p}(t)\frac{\nabla
J\varphi(t)}{J\varphi(t)}\right)\right)dx+\int_{U_{0}}R\left(\frac{\mathbb{p}(t)}{J\varphi(t)}\right)J\varphi(t)dx,$
with boundary condition
$\left[\mathbb{S}_{\varphi(t)}(x)\left(\nabla\mathbb{p}(t,x)-\mathbb{p}(t,x)\frac{\nabla
J\varphi(t,x)}{J\varphi(t,x)}\right)\right]^{T}n_{0}(x)=0,\quad
t\in[0,T],\ x\in\partial M_{0}.$
Since this should be true for every open $U_{0}\subset M_{0}$, we get the PDE
$\frac{d}{dt}\mathbb{p}(t,x)=\mathrm{div}\left(J\varphi(t,x)\,\mathbb{S}_{\varphi}(t,x)\left(\nabla\mathbb{p}(t,x)-\mathbb{p}(t,x)\frac{\nabla
J\varphi(t,x)}{J\varphi(t,x)}\right)\right)+R\left(\frac{\mathbb{p}(t,x)}{J\varphi(t,x)}\right)J\varphi(t,x),$
(12)
with boundary condition
$\left[\mathbb{S}_{\varphi}(t,x)\left(\nabla\mathbb{p}(t,x)-\mathbb{p}(t,x)\frac{\nabla
J\varphi(t,x)}{J\varphi(t,x)}\right)\right]^{T}n_{0}(x)=0,\quad(t,x)\in[0,T]\times\partial
M_{0}.$
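To make the structure of Equation (12) concrete, the following
one-dimensional finite-difference sketch performs one explicit Euler step for
a given deformation (the discretization and all names are ours, and stability
constraints on the time step are ignored):

```python
import numpy as np

def lagrangian_rd_step(pp, phi_x, S_of_phi, R, dx, dt):
    """One explicit Euler step of the Lagrangian reaction-diffusion
    equation (12) in dimension one, on a fixed grid discretizing M0.

    pp       : material density (the pull-back of p) on the grid
    phi_x    : J(phi) = d(phi)/dx on the grid (assumed positive)
    S_of_phi : diffusion coefficient S evaluated at phi(x)
    R        : reaction function (vectorized)
    """
    J = phi_x
    SS = S_of_phi / J**2              # 1D pull-back D(phi)^-1 S o phi D(phi)^-T
    grad_J_over_J = np.gradient(J, dx) / J
    flux = J * SS * (np.gradient(pp, dx) - pp * grad_J_over_J)
    flux[0] = flux[-1] = 0.0          # zero-flux (Neumann) boundary condition
    return pp + dt * (np.gradient(flux, dx) + R(pp / J) * J)
```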
##### PDE-controlled diffeomorphic equation and main result.
Combining the various paragraphs of this section, we obtain a formulation of
our model. We start by redefining our various functions and operators.
We fix an initial domain $M_{0}\subset\mathbb{R}^{d}$ and diffusion speeds
$r_{1},r_{2},r_{3}>0$. For every
$\varphi\in\mathcal{D}\mathit{iff}^{m}(\mathbb{R}^{d})$, we define
* •
A frame field of unit vectors
$F_{\varphi}:\varphi(M_{0})\rightarrow\mathrm{GL}_{3}(\mathbb{R})$ along
$\varphi(M_{0})$, and the corresponding field in material (Lagrangian)
coordinates $\mathbb{F}_{\varphi}=D\varphi^{-1}F_{\varphi}\circ\varphi$.
* •
A diffusion tensor $S_{\varphi}:\varphi(M_{0})\rightarrow M_{3}(\mathbb{R}),$
with
$S_{\varphi}(x)=F_{\varphi}(x)\,\mathrm{diag}(r_{1},r_{2},r_{3})\,F_{\varphi}(x)^{T}$, a
symmetric positive definite matrix at each point. The corresponding operator
in Lagrangian coordinates is
$\mathbb{S}_{\varphi}=D\varphi^{-1}\,S_{\varphi}\circ\varphi\,D\varphi^{-T}.$
* •
A symmetric, positive-definite tensor
$A_{\varphi}\in\mathcal{L}(H^{1}(\varphi(M_{0}),\mathbb{R}^{d}),H^{1}(\varphi(M_{0}),\mathbb{R}^{d})^{*})$.
The PDE-controlled diffeomorphic model with initial condition
$\mathbb{p}_{0}:M_{0}\rightarrow\mathbb{R}$ is the system of coupled equations
on $\varphi:[0,T]\rightarrow\mathcal{D}\mathit{iff}^{m}(\mathbb{R}^{d})$ and
$\mathbb{p}:[0,T]\times M_{0}\rightarrow\mathbb{R}$ that follows: for all
$(t,x)\in[0,T]\times M_{0}$,
$\left\\{\begin{aligned}
&\frac{d}{dt}\mathbb{p}(t,x)=\mathrm{div}\left(J\varphi(t,x)\,\mathbb{S}_{\varphi}(t,x)\left(\nabla\mathbb{p}(t,x)-\mathbb{p}(t,x)\frac{\nabla
J\varphi(t,x)}{J\varphi(t,x)}\right)\right)+R\left(\frac{\mathbb{p}(t,x)}{J\varphi(t,x)}\right)J\varphi(t,x),\\\
&\frac{d}{dt}\varphi(t,x)=v(t,\varphi(t,x)),\\\
&v(t)=\underset{v^{\prime}\,\in\,V}{\arg\min}\
\frac{\omega}{2}\,\big{(}{L_{V}v^{\prime}}\,|\,{v^{\prime}}\big{)}+\frac{1}{2}\,(A_{\varphi(t)}\hskip
1.0ptv^{\prime}\mid v^{\prime})-(j(t)\mid v^{\prime}),\\\ &(j(t)\mid
v^{\prime})=\int_{M_{t}}Q(p(t))\hskip 1.0pt(-\mathrm{div}\,v^{\prime}),\quad
v^{\prime}\in V,\quad M_{t}=\varphi(t)(M_{0}),\quad
t\in[0,T],\end{aligned}\right.$ (13)
where $p(t)=\left(\mathbb{p}(t)/J\varphi(t)\right)\circ\varphi(t)^{-1}$, with boundary
conditions
$\left\\{\begin{aligned}
&\left(\mathbb{S}_{\varphi}(t,x)\left(\nabla\mathbb{p}(t,x)-\mathbb{p}(t,x)\frac{\nabla
J\varphi(t,x)}{J\varphi(t,x)}\right)\right)^{T}n_{0}(x)=0,&\quad(t,x)\in[0,T]\times\partial
M_{0}\\\ &\mathbb{p}(0,x)=p(0,x)=p_{0}(x)&\quad x\in M_{0}\\\
&\varphi(0,x)=x,&\quad x\in M_{0}\end{aligned}\right.$ (14)
By a solution of the above system of differential equations and boundary
conditions, we mean a couple $(\varphi,\mathbb{p})\in
H^{1}([0,T],\mathcal{D}\mathit{iff}^{m}(\mathbb{R}^{d}))\times
L^{2}([0,T],H^{1}(M_{0}))$ such that $\mathbb{p}$ is a weak solution of the
reaction-diffusion PDE (cf. the next section for the precise definition) with
the first two boundary conditions in (14) and, for almost all $t\in[0,T]$,
$\varphi$ satisfies the last three equations in (13) with the last boundary
condition in (14). Our main result is the following existence and uniqueness
of the solution under adequate assumptions:
###### Theorem 3.
Assume that $V\hookrightarrow C_{0}^{m+1}(\mathbb{R}^{d},\mathbb{R}^{d})$ with
$m\geq 2$, that $R$ and $Q$ are Lipschitz and bounded, and that on every
bounded subset $B$ of $\mathcal{D}\mathit{iff}^{m}(\mathbb{R}^{d})$ for the
distance $d_{m,\infty}$, we have:
1. 1.
The linear tensor $\varphi\mapsto A_{\varphi}$ is Lipschitz on $B$.
2. 2.
$\|F_{\varphi}^{-1}\|_{\infty}=\sup_{x\in\varphi(M_{0})}\|F_{\varphi}(x)^{-1}\|$ is
bounded on $B$.
Then, for every $p_{0}$ in $L^{2}(M_{0}),$ there is a unique solution
$(\varphi,\mathbb{p})\in
H^{1}([0,T],\mathcal{D}\mathit{iff}^{m}(\mathbb{R}^{d}))\times
L^{2}([0,T],H^{1}(M_{0}))$ to (13).
Most of the rest of the paper is devoted to the proof of this result,
decomposed into the following steps. In section 3, we fix a time-dependent
deformation $t\mapsto\varphi(t)$ and show the local weak existence and
uniqueness of solutions to the reaction-diffusion equation on the
corresponding moving domain. Then in section 4, we derive a number of
necessary estimates on $\varphi$ which, combined with section 3, lead to the
result of Theorem 3 by a fixed point argument.
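Before moving to the proofs, the following schematic time loop illustrates how
the components of (13) interact numerically (a sketch with hypothetical
stand-in interfaces of our own naming; it is not the discretization used or
advocated in this paper):

```python
import numpy as np

def simulate(pp0, phi0, step_density, solve_velocity, T, dt):
    """Schematic explicit loop for the coupled system (13)-(14).
    Both operators are user-supplied stand-ins (hypothetical interfaces):
      step_density(pp, phi, dt): one step of the Lagrangian
        reaction-diffusion PDE (12), returning the updated density;
      solve_velocity(pp, phi): the minimizer v(t) of the quadratic
        energy in (13), returned as a callable on point arrays.
    phi0 is an array of particle positions sampling M0."""
    pp, phi = np.copy(pp0), np.copy(phi0)
    for _ in range(int(round(T / dt))):
        v = solve_velocity(pp, phi)     # velocity driven by j(t), i.e. by Q(p(t))
        pp = step_density(pp, phi, dt)  # density update on the moving domain
        phi = phi + dt * v(phi)         # Euler step of d/dt phi = v o phi
    return pp, phi
```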
## 3 Analysis with prescribed moving domain
Before studying the fully coupled system (13), we first restrict to the
simpler situation of a reaction-diffusion equation on a moving domain for
which the deformation is fixed, and prove preliminary results of local and
global existence of weak solutions for this case. Note that, in the Lagrangian
formulation we consider, this amounts to a system of reaction-diffusion
equations with time-dependent diffusion tensor and boundary condition, for
which several existence and regularity results have been shown in the past,
see e.g. Ladyženskaja et al. [1988], Burdzy et al. [2004], Goudon and Vasseur
[2010]. These are however derived in slightly different settings and under
different sets of assumptions than in the present work; thus, for the sake of
completeness, we provide detailed proofs of the weak existence results, as
well as of the bounds on the solutions that we will need for the proof of our
main theorem.
### 3.1 Weak existence for the reaction-diffusion PDE on a moving domain
Throughout this section, we assume that $m\geq 2$ and we slightly extend our
general notation. We let $t_{0}$ denote the initial time and take $\eta>0$,
$[t_{0},t_{0}+\eta]\subset[0,T]$. We assume that an initial deformation
$\varphi_{t_{0}}\in\mathcal{D}\mathit{iff}^{m}(\mathbb{R}^{d})$ is given at
$t_{0}$, together with a time-dependent deformation $\varphi\in
H^{1}([t_{0},t_{0}+\eta],\mathcal{D}\mathit{iff}^{m}(\mathbb{R}^{d}))$ with
$\varphi(t_{0})=id_{\mathbb{R}^{d}}$ (so that $\varphi_{t_{0}}$ and
$\varphi(t_{0})$ denote different objects). Both $\varphi_{t_{0}}$ and
$\varphi$ are assumed to be fixed in this section. For convenience, we shift
the reference domain to $M_{t_{0}}=\varphi_{t_{0}}(M_{0})$. The diffusion
tensor $\mathbb{S}_{\varphi}$ is then fully specified and given by, for all
$t\in[t_{0},t_{0}+\eta]$ and all $x\in M_{t_{0}}$:
$\mathbb{S}_{\varphi}(t,x)=(D\varphi(t,x))^{-1}\
\left(\vphantom{\sum}F_{\varphi(t)\,\circ\,\varphi_{t_{0}}}\circ\varphi(t,x)\right)\,\mathrm{diag}(r_{1},r_{2},r_{3})\,\left(\vphantom{\sum}F_{\varphi(t)\,\circ\,\varphi_{t_{0}}}\circ\varphi(t,x)\right)^{T}D\varphi(t,x)^{-T}$
(15)
#### 3.1.1 Preliminary results
As a first step we consider the simplified setting in which the reaction term
is replaced by a time-dependent function $f(t)$, introducing the following
system:
$\left\\{\begin{array}[]{ll}\displaystyle\frac{d}{dt}\mathbb{p}(t,x)=\mathrm{div}\left(J\varphi(t,x)\mathbb{S}_{\varphi}(t,x)\left(\nabla\mathbb{p}(t,x)-\mathbb{p}(t,x)\frac{\nabla
J\varphi(t,x)}{J\varphi(t,x)}\right)\right)+f(t)\\\\[10.0pt]
\left[\mathbb{S}_{\varphi}(t,x)\left(\nabla\mathbb{p}(t,x)-\mathbb{p}(t,x)\frac{\nabla
J\varphi(t,x)}{J\varphi(t,x)}\right)\right]^{T}n_{0}(x)=0,\quad(t,x)\in[t_{0},t_{0}+\eta]\times\partial
M_{t_{0}}\\\\[10.0pt] \mathbb{p}(t_{0},x)=p_{t_{0}}(x),\quad x\in
M_{t_{0}}\end{array}\right.$ (16)
We will assume that $f\in L^{2}([t_{0},t_{0}+\eta],H^{1}(M_{t_{0}})^{*})$ and
rewrite (16) in a weak form. We will look for a solution $\mathbb{p}\in
L^{2}([t_{0},t_{0}+\eta],H^{1}(M_{t_{0}}))$. Introduce the operator
$\mathcal{L}_{\varphi,\,0}:L^{2}([t_{0},t_{0}+\eta],H^{1}(M_{t_{0}}))\rightarrow
L^{2}([t_{0},t_{0}+\eta],H^{1}(M_{t_{0}})^{*})$
defined by
$(\mathcal{L}_{\varphi,\,0}\hskip 1.0pth_{1}\mid
h_{2})=\int_{t_{0}}^{t_{0}+\eta}\left(\vphantom{\frac{1}{2}}\langle\mathbb{S}_{\varphi(t)}\,\nabla
h_{1}(t),\nabla h_{2}(t)\rangle_{L^{2}}-\left\langle
h_{1}(t)\,\mathbb{S}_{\varphi(t)}\,\frac{\nabla
J\varphi(t,x)}{J\varphi(t,x)},\nabla h_{2}(t)\right\rangle_{L^{2}}\right)dt$
With this notation, the first equation in (16) can be rewritten as
$\partial_{t}\mathbb{p}+\mathcal{L}_{\varphi,\,0}\,\mathbb{p}=f$ (recall that
the notation $\partial_{t}\mathbb{p}$ refers to the weak derivative of
$\mathbb{p}$ with respect to time), and the second one is recovered by
identifying boundary terms after integration by parts. This yields the
new formulation
$\left\\{\begin{aligned}
&\partial_{t}\mathbb{p}+\mathcal{L}_{\varphi,\,0}\,\mathbb{p}=f\\\
&\mathbb{p}(t_{0},x)=p_{t_{0}}(x)\end{aligned}\right.$ (17)
Note that the first equation implies, in particular, that
$\partial_{t}\mathbb{p}\in L^{2}([t_{0},t_{0}+\eta],H^{1}(M_{t_{0}})^{*})$,
and Theorem 1 ensures that prescribing an initial condition at $t=t_{0}$ is
meaningful. For technical reasons, it will be convenient to make the change of
function $\mathbb{q}(t,x)=e^{-\lambda t}\mathbb{p}(t,x)$ (for some $\lambda>0$
to be specified later) and rewrite (17) in terms of $\mathbb{q}$, yielding:
$\left\\{\begin{array}[]{ll}\displaystyle\partial_{t}\mathbb{q}(t,x)+\mathcal{L}_{\varphi,\,\lambda}\,\mathbb{q}=e^{-\lambda
t}f(t)\\\\[5.0pt] \mathbb{q}(t_{0},x)=e^{-\lambda t_{0}}p_{t_{0}}(x),\quad
x\in M_{t_{0}}\end{array}\right.$ (18)
where
$\mathcal{L}_{\varphi,\,\lambda}:L^{2}([t_{0},t_{0}+\eta],H^{1}(M_{t_{0}}))\rightarrow
L^{2}([t_{0},t_{0}+\eta],H^{1}(M_{t_{0}})^{*})$ is defined by
$\mathcal{L}_{\varphi,\,\lambda}\hskip 1.0pth=\lambda
h+\mathcal{L}_{\varphi,\,0}\hskip 1.0pth$. With this notation and these
assumptions, we can now state the main result of this section:
###### Proposition 1.
With the assumptions above, for all $p_{t_{0}}\in L^{2}(M_{t_{0}})$, the
system (18) has a unique weak solution on $[t_{0},t_{0}+\eta]$.
We first address the case of a homogeneous initial condition with the
following lemma.
###### Lemma 2.
Suppose that the frame field satisfies
$\sup_{t\,\in\,[t_{0},\,t_{0}+\eta]}\|F_{\varphi(t)\,\circ\,\varphi_{t_{0}}}^{-1}\|_{\infty}<\infty.$
Then there exists $\lambda(\varphi)>0$ such that for any $g\in
L^{2}([t_{0},t_{0}+\eta],H^{1}(M_{t_{0}})^{*})$, the problem
$\left\\{\begin{array}[]{ll}\partial_{t}\mathbb{q}+\mathcal{L}_{\varphi,\,\lambda}\mathbb{q}=g\\\
\mathbb{q}(t_{0})=0\end{array}\right.$
has a unique weak solution $\mathbb{q}$ that belongs to
$L^{2}([t_{0},t_{0}+\eta],H^{1}(M_{t_{0}}))$.
###### Proof.
The proof is mainly an application of Theorem 2. We only need to choose
$\lambda>0$ such that the operator $\mathcal{L}_{\varphi,\,\lambda}$ is
bounded and coercive. Since $\varphi\in
H^{1}([t_{0},t_{0}+\eta],\mathcal{D}\mathit{iff}^{m}(\mathbb{R}^{d}))$, we
have $\varphi\in
C([t_{0},t_{0}+\eta],\mathcal{D}\mathit{iff}^{m}(\mathbb{R}^{d}))$ and, as
$m\geq 2$,
$B_{\varphi}\colonequals\max\left\\{\max_{t\,\in\,[t_{0},\,t_{0}+\eta]}\|\varphi(t)-\mathit{id}\|_{2,\infty},\max_{t\,\in\,[t_{0},\,t_{0}+\eta]}\|\varphi^{-1}(t)-\mathit{id}\|_{1,\infty}\right\\}<\infty.$
Recall also that
$\mathbb{S}_{\varphi}(t,\cdot)=(D\varphi(t))^{-1}\
\left(\vphantom{\sum}F_{\varphi(t)\,\circ\,\varphi_{t_{0}}}\circ\varphi(t)\right)\,\mathrm{diag}(r_{1},r_{2},r_{3})\,\left(\vphantom{\sum}F_{\varphi(t)\,\circ\,\varphi_{t_{0}}}\circ\varphi(t)\right)^{T}D\varphi(t)^{-T}$
and the columns of $F_{\varphi(t)\,\circ\,\varphi_{t_{0}}}(x)$ are unit
vectors, so
$\|\mathbb{S}_{\varphi(t)}\|_{\infty}\leq{{\mathfrak{C}}}\,B_{\varphi}^{2}\ \
\mbox{ for all }t\in[t_{0},t_{0}+\eta]$
and
$\|\mathbb{S}_{\varphi(t)}^{-1}\|_{\infty}\leq{{\mathfrak{C}}}B_{\varphi}^{2}\left(\sup_{t\,\in\,[t_{0},\,t_{0}+\eta]}\|F_{\varphi(t)\,\circ\,\varphi_{t_{0}}}^{-1}\|_{\infty}\right)^{2}\
\ \mbox{ for all }t\in[t_{0},t_{0}+\eta].$
(Recall that ${{\mathfrak{C}}}$ is our notation for a generic constant.) It
follows that there exist two constants $\alpha_{\varphi}$ and
$\beta_{\varphi}$ (depending on $\varphi$) such that
$\alpha_{\varphi}|z|^{2}\leq
z^{T}\mathbb{S}_{\varphi}z\leq\beta_{\varphi}|z|^{2}$ for all
$t\in[t_{0},t_{0}+\eta]$ and $z\in\mathbb{R}^{d}$ and therefore we have
$\displaystyle\hskip 15.0pt|(\mathcal{L}_{\varphi,\,\lambda}\hskip
1.0pth_{1}\mid h_{2})|$
$\displaystyle=\left|\int_{t_{0}}^{t_{0}+\eta}\left(\vphantom{\sum}\lambda\,\langle
h_{1}(t),h_{2}(t)\rangle_{L^{2}}+\langle\mathbb{S}_{\varphi(t)}\,\nabla
h_{1}(t),\nabla h_{2}(t)\rangle_{L^{2}}-\left\langle
h_{1}(t)\,\mathbb{S}_{\varphi(t)}\,\frac{\nabla
J\varphi(t,x)}{J\varphi(t,x)},\nabla
h_{2}(t)\right\rangle_{L^{2}}\right)dt\right|$
$\displaystyle\leq(\lambda+{{\mathfrak{C}}_{\varphi}})\int_{t_{0}}^{t_{0}+\eta}\left(\vphantom{\sum}\|h_{1}(t)\|_{L^{2}}\,\|h_{2}(t)\|_{L^{2}}+\|\nabla
h_{1}(t)\|_{L^{2}}\,\|\nabla h_{2}(t)\|_{L^{2}}+\|h_{1}(t)\|_{L^{2}}\,\|\nabla
h_{2}(t)\|_{L^{2}}\right)dt$
$\displaystyle\leq(\lambda+{{\mathfrak{C}}_{\varphi}})\int_{t_{0}}^{t_{0}+\eta}3\,\|h_{1}(t)\|_{H^{1}}\,\|h_{2}(t)\|_{H^{1}}\,dt$
$\displaystyle\leq 3\,(\lambda+{{\mathfrak{C}}_{\varphi}})\
\|h_{1}\|_{L^{2}([t_{0},\,t_{0}+\eta],\,H^{1}(M_{t_{0}}))}\
\|h_{2}\|_{L^{2}([t_{0},\,t_{0}+\eta],\,H^{1}(M_{t_{0}}))}$
which shows the boundedness of the operator $\mathcal{L}_{\varphi,\,\lambda}$.
Moreover, for any $\varepsilon>0$:
$\displaystyle\hskip 15.0pt(\mathcal{L}_{\varphi,\,\lambda}\hskip 1.0pth\mid
h)$
$\displaystyle=\int_{t_{0}}^{t_{0}+\eta}\left(\lambda\,\|h(t)\|_{L^{2}}^{2}+\langle\mathbb{S}_{\varphi(t)}\,\nabla
h(t),\nabla h(t)\rangle_{L^{2}}-\left\langle
h(t)\,\mathbb{S}_{\varphi(t)}\,\frac{\nabla
J\varphi(t,x)}{J\varphi(t,x)},\nabla h(t)\right\rangle_{L^{2}}\right)dt$
$\displaystyle\geq\int_{t_{0}}^{t_{0}+\eta}\left(\lambda\,\|h(t)\|_{L^{2}}^{2}+\alpha_{\varphi}\,\|\nabla
h(t)\|_{L^{2}}^{2}-{{\mathfrak{C}}_{\varphi}}\,\|h(t)\|_{L^{2}}\,\|\nabla
h(t)\|_{L^{2}}\right)dt$
$\displaystyle\geq\int_{t_{0}}^{t_{0}+\eta}\left(\lambda\,\|h(t)\|_{L^{2}}^{2}+\alpha_{\varphi}\,\|\nabla
h(t)\|_{L^{2}}^{2}-{{\mathfrak{C}}_{\varphi}}\left(\frac{1}{2\varepsilon}\,\|h(t)\|_{L^{2}}^{2}+\frac{\varepsilon}{2}\,\|\nabla
h(t)\|_{L^{2}}^{2}\right)\right)dt$
$\displaystyle=\int_{t_{0}}^{t_{0}+\eta}\left(\left(\lambda-\frac{{{\mathfrak{C}}_{\varphi}}}{2\varepsilon}\right)\|h(t)\|_{L^{2}}^{2}+\left(\alpha_{\varphi}-\frac{{{\mathfrak{C}}_{\varphi}}\,\varepsilon}{2}\right)\|\nabla
h(t)\|_{L^{2}}^{2}\right)dt,$
where the first inequality uses the lower bound $\alpha_{\varphi}$ on
$\mathbb{S}_{\varphi}$ together with the uniform bounds derived above.
Now, choose $\varepsilon(\varphi)>0$ and $\lambda(\varphi)>0$ such that
$\alpha_{\varphi}-\frac{{{\mathfrak{C}}_{\varphi}}\,\varepsilon}{2}>0\ \ \mbox{
and }\ \ \lambda-\frac{{{\mathfrak{C}}_{\varphi}}}{2\varepsilon}>0$
and the above inequality leads to
$(\mathcal{L}_{\varphi,\,\lambda}\hskip 1.0pth\mid
h)\geq\min\left\\{\lambda-\frac{{{\mathfrak{C}}_{\varphi}}}{2\varepsilon},\,\alpha_{\varphi}-\frac{{{\mathfrak{C}}_{\varphi}}\,\varepsilon}{2}\right\\}\,\|h\|_{L^{2}([t_{0},\,t_{0}+\eta],\,H^{1}(M_{t_{0}}))}^{2}.$
This shows that $\mathcal{L}_{\varphi,\,\lambda}$ is coercive and concludes
the proof of Lemma 2. ∎
Existence and uniqueness in the case of a non-homogeneous initial condition
follows as a corollary.
###### Corollary 1.
Under the same assumptions as in Lemma 2, there exists $\lambda(\varphi)>0$
such that for all $g\in L^{2}([t_{0},t_{0}+\eta],H^{1}(M_{t_{0}})^{*})$ and
$q_{t_{0}}\in L^{2}(M_{t_{0}})$, the problem
$\left\\{\begin{array}[]{ll}\partial_{t}\mathbb{q}+\mathcal{L}_{\varphi,\,\lambda}\mathbb{q}=g\\\
\mathbb{q}(t_{0})=q_{t_{0}}\end{array}\right.$ (19)
has a unique weak solution $\mathbb{q}\in
L^{2}([t_{0},t_{0}+\eta],H^{1}(M_{t_{0}}))\cap
C([t_{0},t_{0}+\eta],L^{2}(M_{t_{0}}))$.
###### Proof.
According to Theorem 3.2, Chapter 1, in Lions and Magenes [1972], there exists
$w\in L^{2}([t_{0},t_{0}+\eta],H^{1}(M_{t_{0}}))$ such that
$\partial_{t}\hskip 1.0ptw\in L^{2}([t_{0},t_{0}+\eta],H^{1}(M_{t_{0}})^{*})$
and $w(t_{0})=q_{t_{0}}$. Using the previous lemma, let $\tilde{\mathbb{q}}$ be
the solution to
$\left\\{\begin{array}[]{ll}\partial_{t}\hskip
1.0pt\mathbb{q}+\mathcal{L}_{\varphi,\,\lambda}\hskip
1.0pt\mathbb{q}=g-\partial_{t}\hskip
1.0ptw-\mathcal{L}_{\varphi,\,\lambda}\hskip 1.0ptw\\\
\mathbb{q}(t_{0})=0\end{array}\right.,$
then $\tilde{\mathbb{q}}+w$ is a solution to the original problem. Uniqueness
is clear from the uniqueness in Lemma 2. ∎
#### 3.1.2 Existence of solutions on a moving domain
Still assuming that $t\mapsto\varphi(t)$ is given, we now consider the general
reaction-diffusion equation on the moving domain, still using a weak
formulation. Introducing the nonlinear operator
$\mathcal{R}_{\varphi,0}:\mathbb{p}(t,x)\mapsto
J\varphi(t,x)\,R\big{(}\mathbb{p}(t,x)/J\varphi(t,x)\big{)},$ the weak form becomes
$\left\\{\begin{aligned}
&\partial_{t}\mathbb{p}+\mathcal{L}_{\varphi,\,0}\,\mathbb{p}=\mathcal{R}_{\varphi,0}(\mathbb{p})\\\
&\mathbb{p}(t_{0},x)=\mathbb{p}_{t_{0}}(x)\end{aligned}\right.$ (20)
###### Theorem 4.
Suppose that the reaction function $R$ is Lipschitz continuous. Under the
conditions of Lemma 2, for all $p_{t_{0}}\in L^{2}(M_{t_{0}})$, the system
(20) has a unique weak solution on $[t_{0},t_{0}+\eta]$.
###### Proof.
Let us fix $p_{t_{0}}\in L^{2}(M_{t_{0}})$. Relying again on the change of
function $\mathbb{q}(t,x)=e^{-\lambda t}\mathbb{p}(t,x)$, the reaction-
diffusion system (20) becomes:
$\left\\{\begin{aligned}
&\partial_{t}\mathbb{q}+\mathcal{L}_{\varphi,\,\lambda}\,\mathbb{q}=\mathcal{R}_{\varphi,\lambda}(\mathbb{q})\\\
&\mathbb{q}(t_{0},x)=\mathbb{q}_{t_{0}}(x)\end{aligned}\right.$ (21)
with $\mathbb{q}_{t_{0}}=e^{-\lambda t_{0}}\mathbb{p}_{t_{0}}$ and
$\mathcal{R}_{\varphi,\lambda}(\mathbb{q})(t,x)=e^{-\lambda
t}\mathcal{R}_{\varphi,0}(e^{\lambda t}\mathbb{q})(t,x).$
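Indeed, if $\mathbb{p}$ solves (20), then, by linearity of
$\mathcal{L}_{\varphi,\,0}$,
$\partial_{t}\mathbb{q}=-\lambda\,e^{-\lambda t}\,\mathbb{p}+e^{-\lambda
t}\,\partial_{t}\mathbb{p}=-\lambda\,\mathbb{q}-\mathcal{L}_{\varphi,\,0}\,\mathbb{q}+e^{-\lambda
t}\,\mathcal{R}_{\varphi,0}(e^{\lambda
t}\mathbb{q})=-\mathcal{L}_{\varphi,\,\lambda}\,\mathbb{q}+\mathcal{R}_{\varphi,\lambda}(\mathbb{q}),$
so that $\mathbb{p}$ solves (20) if and only if $\mathbb{q}$ solves (21).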
Let $0<\delta\leq\eta$, to be fixed later in the proof, and consider the space
$X\colonequals C([t_{0},t_{0}+\delta],L^{2}(M_{t_{0}}))$ equipped with the
norm
$\|h\|_{X}\colonequals\max_{t\,\in\,[t_{0},\,t_{0}+\delta]}\,\|h(t)\|_{L^{2}(M_{t_{0}})}.$
Given $h\in X$, let
$g_{\lambda,\,h}(t)\colonequals e^{-\lambda t}R\left(\frac{e^{\lambda
t}h(t)}{J\varphi(t)}\right)J\varphi(t),$
then $g_{\lambda,\,h}\in L^{2}([t_{0},t_{0}+\delta],L^{2}(M_{t_{0}}))$, since
$J\varphi$ and $1/J\varphi$ are uniformly bounded on
$[t_{0},t_{0}+\delta]\times M_{t_{0}}$, and $|R(z)|\leq C\,(1+|z|)$ by
Lipschitz continuity of $R$. From Corollary 1, we know that there exists
$\lambda>0$ that may depend on $\varphi$ but not on $h$ such that
$\left\\{\begin{array}[]{ll}\partial_{t}\hskip
1.0pt\mathbb{q}+\mathcal{L}_{\varphi,\,\lambda}\hskip
1.0pt\mathbb{q}=g_{\lambda,\,h}\\\ \mathbb{q}(t_{0})=e^{-\lambda
t_{0}}p_{t_{0}}\end{array}\right.$
has a unique weak solution for all $h\in X$. Denote this weak solution by
$\mathbb{q}=\mathcal{S}(h)\in L^{2}([t_{0},t_{0}+\delta],H^{1}(M_{t_{0}}))\cap
X$.
We now show that $\mathcal{S}:X\rightarrow X$ is a contraction when $\delta$
is chosen small enough. Let $h_{1},h_{2}\in X$ and
$\mathbb{q}_{1}=\mathcal{S}(h_{1})$, $\mathbb{q}_{2}=\mathcal{S}(h_{2})$, so
that
$\partial_{t}\hskip 1.0pt\mathbb{q}_{1}+\mathcal{L}_{\varphi,\,\lambda}\hskip
1.0pt\mathbb{q}_{1}=g_{\lambda,\,h_{1}}\,\in\,L^{2}([t_{0},t_{0}+\delta],H^{1}(M_{t_{0}})^{*})$
and
$\partial_{t}\hskip 1.0pt\mathbb{q}_{2}+\mathcal{L}_{\varphi,\,\lambda}\hskip
1.0pt\mathbb{q}_{2}=g_{\lambda,\,h_{2}}\,\in\,L^{2}([t_{0},t_{0}+\delta],H^{1}(M_{t_{0}})^{*}).$
It follows that for almost all $t\in[t_{0},t_{0}+\delta]$
$\left(\vphantom{\sum}\partial_{t}\hskip
1.0pt(\mathbb{q}_{1}-\mathbb{q}_{2})\right)\\!(t)+\left(\vphantom{\sum}\mathcal{L}_{\varphi,\,\lambda}\hskip
1.0pt(\mathbb{q}_{1}-\mathbb{q}_{2})\right)\\!(t)=(g_{\lambda,\,h_{1}}-g_{\lambda,\,h_{2}})(t)\in
L^{2}(M_{t_{0}})\subset H^{1}(M_{t_{0}})^{*}.$
Evaluating at $(\mathbb{q}_{1}-\mathbb{q}_{2})(t)\in H^{1}(M_{t_{0}})$ gives
$\displaystyle\begin{split}&\hskip
13.0pt\left(\left(\vphantom{\sum}\partial_{t}\hskip
1.0pt(\mathbb{q}_{1}-\mathbb{q}_{2})\right)\\!(t)\
\rule[-5.0pt]{0.5pt}{15.0pt}\
(\mathbb{q}_{1}-\mathbb{q}_{2})(t)\right)+\left(\left(\vphantom{\sum}\mathcal{L}_{\varphi,\,\lambda}\hskip
1.0pt(\mathbb{q}_{1}-\mathbb{q}_{2})\right)\\!(t)\
\rule[-5.0pt]{0.5pt}{15.0pt}\ (\mathbb{q}_{1}-\mathbb{q}_{2})(t)\right)\\\
&=\left(\vphantom{\sum}(g_{\lambda,\,h_{1}}-g_{\lambda,\,h_{2}})(t)\
\rule[-5.0pt]{0.5pt}{15.0pt}\
(\mathbb{q}_{1}-\mathbb{q}_{2})(t)\right).\end{split}$ (22)
As a result of the coercivity of the operator
$\mathcal{L}_{\varphi,\,\lambda}$ shown earlier, for some constant
${{\mathfrak{C}}_{\varphi}}>0$:
$\displaystyle\begin{split}\left(\left(\vphantom{\sum}\mathcal{L}_{\varphi,\,\lambda}\hskip
1.0pt(\mathbb{q}_{1}-\mathbb{q}_{2})\right)\\!(t)\ \rule[-5.0pt]{0.5pt}{15.0pt}\
(\mathbb{q}_{1}-\mathbb{q}_{2})(t)\right)&\geq{{\mathfrak{C}}_{\varphi}}\,\|(\mathbb{q}_{1}-\mathbb{q}_{2})(t)\|_{H^{1}}^{2}.\end{split}$
(23)
We can now combine Lemma 1, (23), and (22) to obtain
$\displaystyle\hskip 13.0pt\frac{1}{2}\left(\partial_{t}\hskip
1.0pt\|(\mathbb{q}_{1}-\mathbb{q}_{2})(\cdot)\|_{L^{2}}^{2}\right)\\!(t)+{{\mathfrak{C}}_{\varphi}}\,\|(\mathbb{q}_{1}-\mathbb{q}_{2})(t)\|_{H^{1}}^{2}$
$\displaystyle\leq\left(\left(\vphantom{\sum}\partial_{t}\hskip
1.0pt(\mathbb{q}_{1}-\mathbb{q}_{2})\right)\\!(t)\
\rule[-5.0pt]{0.5pt}{15.0pt}\
(\mathbb{q}_{1}-\mathbb{q}_{2})(t)\right)+\left(\left(\vphantom{\sum}\mathcal{L}_{\varphi,\,\lambda}\hskip
1.0pt(\mathbb{q}_{1}-\mathbb{q}_{2})\right)\\!(t)\
\rule[-5.0pt]{0.5pt}{15.0pt}\ (\mathbb{q}_{1}-\mathbb{q}_{2})(t)\right)$
$\displaystyle\leq\|(g_{\lambda,\,h_{1}}-g_{\lambda,\,h_{2}})(t)\|_{L^{2}}\,\|(\mathbb{q}_{1}-\mathbb{q}_{2})(t)\|_{L^{2}}$
$\displaystyle\leq\frac{1}{2\varepsilon}\,\|(g_{\lambda,\,h_{1}}-g_{\lambda,\,h_{2}})(t)\|_{L^{2}}^{2}+\frac{\varepsilon}{2}\,\|(\mathbb{q}_{1}-\mathbb{q}_{2})(t)\|_{H^{1}}^{2}$
for any $\varepsilon>0$. Choosing a small enough $\varepsilon$, we can get
$\displaystyle\left(\partial_{t}\hskip
1.0pt\|(\mathbb{q}_{1}-\mathbb{q}_{2})(\cdot)\|_{L^{2}}^{2}\right)\\!(t)$
$\displaystyle\leq{{\mathfrak{C}}_{\varphi}}\,\|(g_{\lambda,\,h_{1}}-g_{\lambda,\,h_{2}})(t)\|_{L^{2}}^{2}$
$\displaystyle\leq{{\mathfrak{C}}_{\varphi,R}}\,\|(h_{1}-h_{2})(t)\|_{L^{2}}^{2}.$
Consequently, for all $t\in[t_{0},t_{0}+\delta]$,
$\displaystyle\|(\mathbb{q}_{1}-\mathbb{q}_{2})(t)\|_{L^{2}}^{2}$
$\displaystyle=\|(\mathbb{q}_{1}-\mathbb{q}_{2})(t_{0})\|_{L^{2}}^{2}+\int_{t_{0}}^{t}(\partial_{t}\hskip
1.0pt\|(\mathbb{q}_{1}-\mathbb{q}_{2})(\cdot)\|_{L^{2}}^{2})(s)\,ds$
$\displaystyle\leq{{\mathfrak{C}}_{\varphi,R}}\int_{t_{0}}^{t}\|(h_{1}-h_{2})(s)\|_{L^{2}}^{2}\,ds$
$\displaystyle\leq{{\mathfrak{C}}_{\varphi,R}}\ \delta\
\|h_{1}-h_{2}\|_{X}^{2},$
which further gives
$\|\mathbb{q}_{1}-\mathbb{q}_{2}\|_{X}\leq\sqrt{{{\mathfrak{C}}_{\varphi,R}}\,\delta}\,\|h_{1}-h_{2}\|_{X}.$
Picking $\delta>0$ such that $\sqrt{{{\mathfrak{C}}_{\varphi,R}}\,\delta}<1$
then makes the mapping $\mathcal{S}$ contractive and thus, by the Banach fixed
point theorem, there exists a unique weak solution $\mathbb{q}$ to (21) on
$[t_{0},t_{0}+\delta]$, which leads to the weak solution
$\mathbb{p}(t,x)=e^{\lambda t}\mathbb{q}(t,x)$ of (20) on the same interval.
Furthermore, we see that $\delta$ only depends on the deformation $\varphi$
and not the initial condition $p_{t_{0}}$. Therefore, applying the same
reasoning at $t_{0}+\delta$ with initial condition
$\mathbb{p}(t_{0}+\delta,\cdot)$, we can extend the solution to
$[t_{0},t_{0}+2\delta]$ and by extension to the whole interval
$[t_{0},t_{0}+\eta]$. Then the uniqueness of $\mathbb{p}$ is an immediate
consequence of the uniqueness of the solution of (21) on each subinterval of
length $\delta$, thus completing the proof of Theorem 4. ∎
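The contraction-and-concatenation pattern of this proof is generic; as a
schematic sketch (with an abstract solve map of our own naming, not tied to
the function spaces above):

```python
import numpy as np

def solve_by_contraction(S_map, h0, tol=1e-10, max_iter=10_000):
    """Iterate a contraction S_map until (numerical) convergence to its
    fixed point; mirrors the Banach fixed-point step in the proof above."""
    h = np.asarray(h0, dtype=float)
    for _ in range(max_iter):
        h_next = S_map(h)
        if np.linalg.norm(h_next - h) < tol:
            return h_next
        h = h_next
    raise RuntimeError("no convergence: is S_map a contraction?")
```

Extending the solution past $t_{0}+\delta$ then amounts to calling such a
routine repeatedly, restarting each time from the previously computed terminal
state.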
As a direct consequence of Theorem 4, we can finally obtain the following
result of existence of weak solutions to the reaction-diffusion PDE on the
full time interval $[0,T]$:
###### Corollary 2.
Assume that $m\geq 2$ and that $R$ is Lipschitz. Let $\varphi\in
H^{1}([0,T],\mathcal{D}\mathit{iff}^{m}(\mathbb{R}^{d}))$ with
$\varphi(0)=id_{\mathbb{R}^{d}}$ such that
$\sup_{t\,\in\,[0,\,T]}\|F_{\varphi(t)}^{-1}\|_{\infty}<\infty$. Then for all
$p_{0}\in L^{2}(M_{0})$, there exists a unique weak solution of (20) on
$[0,T]$.
### 3.2 Bounds on the solutions
We now derive some bounds on the solutions of (20) and (21) with respect to the
deformations $\varphi$ and $\varphi_{t_{0}}$ that will be needed in the next
section. Let us fix $r>0$ and denote $B_{r}=\mkern
1.5mu\overline{\mkern-1.5muB}\mkern 1.5mu_{r}(\mathit{id})$, where the closed
ball is for the distance defined in (1). We consider deformations $\varphi\in
C([t_{0},t_{0}+\eta],B_{r})$, i.e., such that for all $t\in[t_{0},t_{0}+\eta]$,
$\|\varphi(t)-\mathit{id}\|_{m,\infty}\leq r$ and
$\|\varphi(t)^{-1}-\mathit{id}\|_{m,\infty}\leq r$. Note that it follows from
the results and proofs above that we can find $\lambda_{r}>0$ and
$\alpha_{r}>0$ such that for all $\varphi\in C([t_{0},t_{0}+\eta],B_{r})$ and
$\mathbb{q}\in L^{2}([t_{0},t_{0}+\eta],H^{1}(M_{t_{0}}))$:
$((\mathcal{L}_{\varphi,\,\lambda_{r}}\hskip
1.0pt\mathbb{q})(t)\mid\mathbb{q}(t))\geq\alpha_{r}\,\|\mathbb{q}(t)\|_{H^{1}(M_{t_{0}})}^{2}$
(24)
and that the unique solution of (20) given by Theorem 4 will be written
$\mathbb{p}_{\varphi,\,\varphi_{t_{0}}}$. We also write
$\mathbb{q}_{\varphi,\,\varphi_{t_{0}}}=e^{-\lambda_{r}t}\mathbb{p}_{\varphi,\,\varphi_{t_{0}}}$.
###### Lemma 3.
Let $\mathbb{q}\in L^{2}([t_{0},t_{0}+\eta],H^{1}(M_{t_{0}}))$ and
$\varphi,\psi\in C([t_{0},t_{0}+\eta],B_{r})$. For almost every
$t\in[t_{0},t_{0}+\eta]$,
$\|(\mathcal{L}_{\varphi,\,\lambda_{r}}\hskip
1.0pt\mathbb{q})(t)-(\mathcal{L}_{\psi,\,\lambda_{r}}\hskip
1.0pt\mathbb{q})(t)\|_{H^{1}(M_{t_{0}})^{*}}\leq{{\mathfrak{C}}}_{r}\,\|\varphi(t)-\psi(t)\|_{m,\infty}\,\|\mathbb{q}(t)\|_{H^{1}(M_{t_{0}})}.$
###### Proof.
A direct computation gives for all $h\in H^{1}(M_{t_{0}})$
$\displaystyle\left|\left(\vphantom{\sum}(\mathcal{L}_{\varphi,\,\lambda_{r}}\hskip
1.0pt\mathbb{q})(t)-(\mathcal{L}_{\psi,\,\lambda_{r}}\hskip
1.0pt\mathbb{q})(t)\,\ \rule[-5.0pt]{0.5pt}{15.0pt}\,\ h\right)\right|$
$\displaystyle\leq\left|\left\langle(\mathbb{S}_{\varphi(t)}-\mathbb{S}_{\psi(t)})\,\nabla\mathbb{q}(t),\nabla
h\right\rangle_{L^{2}}\right|$
$\displaystyle+\left|\left\langle\mathbb{q}(t)\,\left(\mathbb{S}_{\varphi(t)}\,\frac{\nabla(J\varphi(t))}{J\varphi(t)}-\mathbb{S}_{\psi(t)}\,\frac{\nabla(J\psi(t))}{J\psi(t)}\right),\nabla
h\right\rangle_{L^{2}}\right|$
Since, for all $t\in[t_{0},t_{0}+\eta]$,
$\max\\{\|\varphi(t)-\mathit{id}\|_{m,\infty},\,\|\varphi(t)^{-1}-\mathit{id}\|_{m,\infty},\,\|\psi(t)-\mathit{id}\|_{m,\infty},\,\|\psi(t)^{-1}-\mathit{id}\|_{m,\infty}\\}\leq
r,$
and $m\geq 2$ we have
$\max\left\\{\|\mathbb{S}_{\varphi(t)}\|_{\infty},\,\|\mathbb{S}_{\psi(t)}\|_{\infty},\,\left\|\frac{\nabla(J\varphi(t))}{J\varphi(t)}\right\|_{\infty},\,\left\|\frac{\nabla(J\psi(t))}{J\psi(t)}\right\|_{\infty}\right\\}\leq{{\mathfrak{C}}_{r}}$
and
$\left\|\frac{\nabla(J\varphi(t))}{J\varphi(t)}-\frac{\nabla(J\psi(t))}{J\psi(t)}\right\|_{\infty}\leq{{\mathfrak{C}}_{r}}\,\|\varphi(t)-\psi(t)\|_{2,\infty}.$
The assumption made on the frame field further gives
$\|\mathbb{S}_{\varphi(t)}-\mathbb{S}_{\psi(t)}\|_{\infty}\leq{{\mathfrak{C}}_{r}}\,\|\varphi(t)-\psi(t)\|_{2,\infty}.$
Combining the previous estimates and using the Cauchy-Schwarz inequality, we
conclude that
$\displaystyle\left|\left(\vphantom{\sum}(\mathcal{L}_{\varphi,\,\lambda_{r}}\hskip
1.0pt\mathbb{q})(t)-(\mathcal{L}_{\psi,\,\lambda_{r}}\hskip
1.0pt\mathbb{q})(t)\,\ \rule[-5.0pt]{0.5pt}{15.0pt}\,\
h\right)\right|\leq{{\mathfrak{C}}_{r}}\,\|\varphi(t)-\psi(t)\|_{2,\infty}\,\|\mathbb{q}(t)\|_{H^{1}(M_{t_{0}})}\,\|h\|_{H^{1}(M_{t_{0}})}$
$\displaystyle\implies\|(\mathcal{L}_{\varphi,\,\lambda_{r}}\hskip
1.0pt\mathbb{q})(t)-(\mathcal{L}_{\psi,\,\lambda_{r}}\hskip
1.0pt\mathbb{q})(t)\|_{H^{1}(M_{t_{0}})^{*}}\leq{{\mathfrak{C}}_{r}}\,\|\varphi(t)-\psi(t)\|_{m,\infty}\,\|\mathbb{q}(t)\|_{H^{1}(M_{t_{0}})}\,.$
∎
From this result, we get the following estimates for the solution
$\mathbb{q}_{\varphi,\,\varphi_{t_{0}}}$:
###### Lemma 4.
$\|\mathbb{q}_{\varphi,\,\varphi_{t_{0}}}\\!(t)\|_{L^{2}}\leq{{\mathfrak{C}}_{r,\varphi_{t_{0}}}}\
\mbox{ for all }t\in[t_{0},t_{0}+\eta]\ \ \ \mbox{ and }\ \ \
\int_{t_{0}}^{t_{0}+\eta}\|\mathbb{q}_{\varphi,\,\varphi_{t_{0}}}(t)\|_{H^{1}(M_{t_{0}})}^{2}\,dt\leq{{\mathfrak{C}}_{r,\varphi_{t_{0}}}}.$
###### Proof.
From the definition of $\mathbb{q}_{\varphi,\,\varphi_{t_{0}}}$, we see that
for almost all $t\in[t_{0},t_{0}+\eta]$:
$\displaystyle\left(\vphantom{\sum}(\partial_{t}\hskip
1.0pt\mathbb{q}_{\varphi,\,\varphi_{t_{0}}})(t)\mid\mathbb{q}_{\varphi,\,\varphi_{t_{0}}}(t)\right)$
$\displaystyle+\left(\vphantom{\sum}(\mathcal{L}_{\varphi,\,\lambda_{r}}\,\mathbb{q}_{\varphi,\,\varphi_{t_{0}}})(t)\mid\mathbb{q}_{\varphi,\,\varphi_{t_{0}}}(t)\right)$
$\displaystyle=\int_{M_{t_{0}}}e^{-\lambda_{r}t}\,R\\!\left(\frac{e^{\lambda_{r}t}\,\mathbb{q}_{\varphi,\,\varphi_{t_{0}}}(t)}{J\varphi(t)}\right)J\varphi(t)\,\mathbb{q}_{\varphi,\,\varphi_{t_{0}}}(t)\,dx$
$\displaystyle\leq\frac{{{\mathfrak{C}}}_{r}}{2}\,\|R\|_{\infty}^{2}\,\mathrm{vol}(M_{t_{0}})+\frac{1}{2}\,\|\mathbb{q}_{\varphi,\,\varphi_{t_{0}}}(t)\|_{L^{2}}^{2}.$
(25)
Using Lemma 1 and the coercivity of $\mathcal{L}_{\varphi,\,\lambda_{r}}$ we
then get
$\displaystyle\frac{1}{2}\left(\partial_{t}\hskip
1.0pt\|\mathbb{q}_{\varphi,\,\varphi_{t_{0}}}(\cdot)\|_{L^{2}}^{2}\right)(t)$
$\displaystyle=\left(\vphantom{\sum}(\partial_{t}\hskip
1.0pt\mathbb{q}_{\varphi,\,\varphi_{t_{0}}})(t)\mid\mathbb{q}_{\varphi,\,\varphi_{t_{0}}}(t)\right)$
$\displaystyle\leq\frac{{{\mathfrak{C}}}_{r}}{2}\,\|R\|_{\infty}^{2}\,\mathrm{vol}(M_{t_{0}})+\frac{1}{2}\,\|\mathbb{q}_{\varphi,\,\varphi_{t_{0}}}(t)\|_{L^{2}}^{2}.$
(26)
It follows that
$\displaystyle\|\mathbb{q}_{\varphi,\,\varphi_{t_{0}}}(t)\|_{L^{2}}^{2}$
$\displaystyle=\|\mathbb{q}_{\varphi,\,\varphi_{t_{0}}}(t_{0})\|_{L^{2}}^{2}+\int_{t_{0}}^{t}\left(\partial_{t}\hskip
1.0pt\|\mathbb{q}_{\varphi,\,\varphi_{t_{0}}}(\cdot)\|_{L^{2}}^{2}\right)(s)\,ds$
$\displaystyle\leq\|\mathbb{q}(t_{0})\|_{L^{2}}^{2}+{{\mathfrak{C}}_{r}}\,T\,\|R\|_{\infty}^{2}\,\mathrm{vol}(M_{t_{0}})+\int_{t_{0}}^{t}\|\mathbb{q}_{\varphi,\,\varphi_{t_{0}}}(s)\|_{L^{2}}^{2}\,ds,$
so Gronwall’s lemma (in the form $u(t)\leq a+\int_{t_{0}}^{t}u(s)\,ds$ implies
$u(t)\leq a\,e^{t-t_{0}}$) gives
$\|\mathbb{q}_{\varphi,\,\varphi_{t_{0}}}(t)\|_{L^{2}}^{2}\leq\left(\|\mathbb{q}(t_{0})\|_{L^{2}}^{2}+{{\mathfrak{C}}_{r}}\,T\,\|R\|_{\infty}^{2}\,\mathrm{vol}(M_{t_{0}})\right)e^{T}\colonequals{{\mathfrak{C}}_{r,\varphi_{t_{0}}}},$
(27)
since $M_{t_{0}}=\varphi_{t_{0}}(M_{0})$.
Now, using (25) and (24), we obtain
$\alpha_{r}\,\|\mathbb{q}_{\varphi,\,\varphi_{t_{0}}}(t)\|_{H^{1}(M_{t_{0}})}^{2}\leq\frac{{{\mathfrak{C}}}_{r}}{2}\,\|R\|_{\infty}^{2}\,\mathrm{vol}(M_{t_{0}})+\frac{1}{2}\,\|\mathbb{q}_{\varphi,\,\varphi_{t_{0}}}(t)\|_{L^{2}}^{2}-\frac{1}{2}\left(\partial_{t}\hskip
1.0pt\|\mathbb{q}_{\varphi,\,\varphi_{t_{0}}}(\cdot)\|_{L^{2}}^{2}\right)(t).$
Integrating on $[t_{0},t_{0}+\eta]$, using Lemma 1 and (27), we obtain
$\displaystyle\hskip
14.0pt\alpha_{r}\,\int_{t_{0}}^{t_{0}+\eta}\|\mathbb{q}_{\varphi,\,\varphi_{t_{0}}}(t)\|_{H^{1}(M_{t_{0}})}^{2}\,dt$
$\displaystyle\leq\frac{{{\mathfrak{C}}_{r}}T}{2}\,\|R\|_{\infty}^{2}\,\mathrm{vol}(M_{t_{0}})+\frac{1}{2}\int_{t_{0}}^{t_{0}+\eta}\|\mathbb{q}_{\varphi,\,\varphi_{t_{0}}}(t)\|_{L^{2}}^{2}\,dt-\frac{1}{2}\left(\vphantom{\sum}\|\mathbb{q}_{\varphi,\,\varphi_{t_{0}}}(t_{0}+\eta)\|_{L^{2}}^{2}-\|\mathbb{q}_{\varphi,\,\varphi_{t_{0}}}(t_{0})\|_{L^{2}}^{2}\right)$
$\displaystyle\leq\frac{{{\mathfrak{C}}_{r}}T}{2}\,\|R\|_{\infty}^{2}\,\mathrm{vol}(M_{t_{0}})+\left(\frac{T}{2}+1\right){{\mathfrak{C}}_{r,\varphi_{t_{0}}}},$
that is
$\int_{t_{0}}^{t_{0}+\eta}\|\mathbb{q}_{\varphi,\,\varphi_{t_{0}}}(t)\|_{H^{1}(M_{t_{0}})}^{2}\,dt\leq{{\mathfrak{C}}_{r,\varphi_{t_{0}}}}.$
∎
This leads to the following Lipschitz regularity of
$\mathbb{p}_{\varphi,\,\varphi_{t_{0}}}(t)$ with respect to $\varphi$.
###### Lemma 5.
For almost all $t\in[t_{0},t_{0}+\eta]$,
$\|\mathbb{p}_{\varphi,\,\varphi_{t_{0}}}(t)-\mathbb{p}_{\psi,\,\varphi_{t_{0}}}(t)\|_{L^{2}(M_{t_{0}})}\leq{{\mathfrak{C}}_{r,\varphi_{t_{0}}}}\,\sup_{s\,\in\,[t_{0},t_{0}+\eta]}\|\varphi(s)-\psi(s)\|_{m,\infty}$
###### Proof.
Since
$\|\mathbb{p}_{\varphi,\,\varphi_{t_{0}}}(t)-\mathbb{p}_{\psi,\,\varphi_{t_{0}}}(t)\|_{L^{2}(M_{t_{0}})}=\left\|e^{\lambda_{r}t}\,\mathbb{q}_{\varphi,\,\varphi_{t_{0}}}\\!(t)-e^{\lambda_{r}t}\,\mathbb{q}_{\psi,\,\varphi_{t_{0}}}\\!(t)\right\|_{L^{2}(M_{t_{0}})},$
it suffices to show that
$\|\mathbb{q}_{\varphi,\,\varphi_{t_{0}}}(t)-\mathbb{q}_{\psi,\,\varphi_{t_{0}}}(t)\|_{L^{2}(M_{t_{0}})}\leq{{\mathfrak{C}}_{r,\varphi_{t_{0}}}}\,\sup_{s\,\in\,[t_{0},t_{0}+\eta]}\|\varphi(s)-\psi(s)\|_{m,\infty}.$
Recall that $\mathbb{q}_{\varphi,\,\varphi_{t_{0}}}$ and
$\mathbb{q}_{\psi,\,\varphi_{t_{0}}}$ satisfy
$\left\\{\begin{array}[]{l}\displaystyle(\partial_{t}\hskip
1.0pt\mathbb{q}_{\varphi,\,\varphi_{t_{0}}})(t)+(\mathcal{L}_{\varphi,\,\lambda_{r}}\,\mathbb{q}_{\varphi,\,\varphi_{t_{0}}})(t)=e^{-\lambda_{r}t}\,R\\!\left(\frac{e^{\lambda_{r}t}\,\mathbb{q}_{\varphi,\,\varphi_{t_{0}}}(t)}{J\varphi(t)}\right)J\varphi(t)\
\ \mbox{ for almost every }t\\\
\mathbb{q}_{\varphi,\,\varphi_{t_{0}}}(t_{0})=e^{-\lambda_{r}t_{0}}\,\mathbb{p}_{t_{0}}\end{array}\right.$
and
$\left\\{\begin{array}[]{l}\displaystyle(\partial_{t}\hskip
1.0pt\mathbb{q}_{\psi,\,\varphi_{t_{0}}})(t)+(\mathcal{L}_{\psi,\,\lambda_{r}}\,\mathbb{q}_{\psi,\,\varphi_{t_{0}}})(t)=e^{-\lambda_{r}t}\,R\\!\left(\frac{e^{\lambda_{r}t}\,\mathbb{q}_{\psi,\,\varphi_{t_{0}}}(t)}{J\psi(t)}\right)J\psi(t)\
\ \mbox{ for almost every }t\\\
\mathbb{q}_{\psi,\,\varphi_{t_{0}}}(t_{0})=e^{-\lambda_{r}t_{0}}\,\mathbb{p}_{t_{0}}\end{array}\right..$
Lemma 1 and coercivity of $\mathcal{L}_{\varphi,\,\lambda_{r}}$ again give
$\displaystyle\hskip 14.0pt\frac{1}{2}\left(\partial_{t}\hskip
1.0pt\|(\mathbb{q}_{\varphi,\,\varphi_{t_{0}}}-\mathbb{q}_{\psi,\,\varphi_{t_{0}}})(\cdot)\|_{L^{2}(M_{t_{0}})}^{2}\right)(t)+\alpha_{r}\,\|(\mathbb{q}_{\varphi,\,\varphi_{t_{0}}}-\mathbb{q}_{\psi,\,\varphi_{t_{0}}})(t)\|_{H^{1}(M_{t_{0}})}^{2}$
$\displaystyle\leq\left(\vphantom{\sum}\big{(}\partial_{t}\hskip
1.0pt(\mathbb{q}_{\varphi,\,\varphi_{t_{0}}}-\mathbb{q}_{\psi,\,\varphi_{t_{0}}})\big{)}(t)\mid(\mathbb{q}_{\varphi,\,\varphi_{t_{0}}}-\mathbb{q}_{\psi,\,\varphi_{t_{0}}})(t)\right)+\left(\vphantom{\sum}\big{(}\mathcal{L}_{\varphi,\,\lambda_{r}}\,(\mathbb{q}_{\varphi,\,\varphi_{t_{0}}}-\mathbb{q}_{\psi,\,\varphi_{t_{0}}})\big{)}(t)\mid(\mathbb{q}_{\varphi,\,\varphi_{t_{0}}}-\mathbb{q}_{\psi,\,\varphi_{t_{0}}})(t)\right)$
$\displaystyle=-\left(\vphantom{\sum}\big{(}(\mathcal{L}_{\varphi,\,\lambda_{r}}-\mathcal{L}_{\psi,\,\lambda_{r}})\,\mathbb{q}_{\psi,\,\varphi_{t_{0}}}\big{)}(t)\mid(\mathbb{q}_{\varphi,\,\varphi_{t_{0}}}-\mathbb{q}_{\psi,\,\varphi_{t_{0}}})(t)\right)$
$\displaystyle\hskip 14.0pt\phantom{}+e^{-\lambda_{r}t}\left\langle
R\\!\left(\frac{e^{\lambda_{r}t}\,\mathbb{q}_{\varphi,\,\varphi_{t_{0}}}(t)}{J\varphi(t)}\right)J\varphi(t)-R\\!\left(\frac{e^{\lambda_{r}t}\,\mathbb{q}_{\psi,\,\varphi_{t_{0}}}(t)}{J\psi(t)}\right)J\psi(t),(\mathbb{q}_{\varphi,\,\varphi_{t_{0}}}-\mathbb{q}_{\psi,\,\varphi_{t_{0}}})(t)\right\rangle_{L^{2}(M_{t_{0}})}$
and using Lemma 3, we find
$\displaystyle\hskip 14.0pt\frac{1}{2}\left(\partial_{t}\hskip
1.0pt\|(\mathbb{q}_{\varphi,\,\varphi_{t_{0}}}-\mathbb{q}_{\psi,\,\varphi_{t_{0}}})(\cdot)\|_{L^{2}(M_{t_{0}})}^{2}\right)(t)+\alpha_{r}\,\|(\mathbb{q}_{\varphi,\,\varphi_{t_{0}}}-\mathbb{q}_{\psi,\,\varphi_{t_{0}}})(t)\|_{H^{1}(M_{t_{0}})}^{2}$
$\displaystyle\leq{{\mathfrak{C}}_{r}}\ \|\varphi(t)-\psi(t)\|_{m,\infty}\
\|\mathbb{q}_{\psi,\,\varphi_{t_{0}}}(t)\|_{H^{1}(M_{t_{0}})}\
\|(\mathbb{q}_{\varphi,\,\varphi_{t_{0}}}-\mathbb{q}_{\psi,\,\varphi_{t_{0}}})(t)\|_{H^{1}(M_{t_{0}})}$
$\displaystyle\hskip
14.0pt\phantom{}+{{\mathfrak{C}}_{r}}\left(\vphantom{\sum}\|(\mathbb{q}_{\varphi,\,\varphi_{t_{0}}}-\mathbb{q}_{\psi,\,\varphi_{t_{0}}})(t)\|_{L^{2}(M_{t_{0}})}+\|\varphi(t)-\psi(t)\|_{m,\infty}\right)\|(\mathbb{q}_{\varphi,\,\varphi_{t_{0}}}-\mathbb{q}_{\psi,\,\varphi_{t_{0}}})(t)\|_{L^{2}(M_{t_{0}})}$
$\displaystyle\leq{{\mathfrak{C}}_{r}}\left(\frac{1}{2\varepsilon}\,\|\varphi(t)-\psi(t)\|_{m,\infty}^{2}\
\|\mathbb{q}_{\psi,\,\varphi_{t_{0}}}(t)\|_{H^{1}(M_{t_{0}})}^{2}+\frac{\varepsilon}{2}\,\|(\mathbb{q}_{\varphi,\,\varphi_{t_{0}}}-\mathbb{q}_{\psi,\,\varphi_{t_{0}}})(t)\|_{H^{1}(M_{t_{0}})}^{2}\right)$
$\displaystyle\hskip
14.0pt\phantom{}+{{\mathfrak{C}}_{r}}\left(\|(\mathbb{q}_{\varphi,\,\varphi_{t_{0}}}-\mathbb{q}_{\psi,\,\varphi_{t_{0}}})(t)\|_{L^{2}(M_{t_{0}})}^{2}+\frac{1}{2}\,\|\varphi(t)-\psi(t)\|_{m,\infty}^{2}+\frac{1}{2}\,\|(\mathbb{q}_{\varphi,\,\varphi_{t_{0}}}-\mathbb{q}_{\psi,\,\varphi_{t_{0}}})(t)\|_{L^{2}(M_{t_{0}})}^{2}\right).$
By choosing $\varepsilon>0$ such that
${{\mathfrak{C}}_{r}}\varepsilon/2<\alpha_{r}$, we obtain
$\displaystyle\hskip 13.0pt\left(\partial_{t}\hskip
1.0pt\|(\mathbb{q}_{\varphi,\,\varphi_{t_{0}}}-\mathbb{q}_{\psi,\,\varphi_{t_{0}}})(\cdot)\|_{L^{2}(M_{t_{0}})}^{2}\right)(t)$
$\displaystyle\leq\|\varphi(t)-\psi(t)\|_{m,\infty}^{2}\left({{\mathfrak{C}}_{r}}+\frac{{{\mathfrak{C}}_{r}}}{\varepsilon}\,\|\mathbb{q}_{\psi,\,\varphi_{t_{0}}}(t)\|_{H^{1}(M_{t_{0}})}^{2}\right)+3\,{{\mathfrak{C}}_{r}}\,\|(\mathbb{q}_{\varphi,\,\varphi_{t_{0}}}-\mathbb{q}_{\psi,\,\varphi_{t_{0}}})(t)\|_{L^{2}(M_{t_{0}})}^{2}$
$\displaystyle\leq{{\mathfrak{C}}_{r}}\left(\|\varphi(t)-\psi(t)\|_{m,\infty}^{2}\left(1+\|\mathbb{q}_{\psi,\,\varphi_{t_{0}}}(t)\|_{H^{1}(M_{t_{0}})}^{2}\right)+\|(\mathbb{q}_{\varphi,\,\varphi_{t_{0}}}-\mathbb{q}_{\psi,\,\varphi_{t_{0}}})(t)\|_{L^{2}(M_{t_{0}})}^{2}\right).$
Thus for almost all $t\in[t_{0},t_{0}+\eta]$
$\displaystyle\hskip
14.0pt\|(\mathbb{q}_{\varphi,\,\varphi_{t_{0}}}-\mathbb{q}_{\psi,\,\varphi_{t_{0}}})(t)\|_{L^{2}(M_{t_{0}})}^{2}$
$\displaystyle=\|(\mathbb{q}_{\varphi,\,\varphi_{t_{0}}}-\mathbb{q}_{\psi,\,\varphi_{t_{0}}})(t_{0})\|_{L^{2}(M_{t_{0}})}^{2}+\int_{t_{0}}^{t}\left(\partial_{t}\hskip
1.0pt\|(\mathbb{q}_{\varphi,\,\varphi_{t_{0}}}-\mathbb{q}_{\psi,\,\varphi_{t_{0}}})(\cdot)\|_{L^{2}(M_{t_{0}})}^{2}\right)(s)\,ds$
$\displaystyle\leq
0+{{\mathfrak{C}}_{r}}\,\|\varphi-\psi\|_{\infty}^{2}\left(\eta+\int_{[t_{0},t_{0}+\eta]}\|\mathbb{q}_{\psi,\,\varphi_{t_{0}}}(t)\|_{H^{1}(M_{t_{0}})}^{2}\,dt\right)+{{\mathfrak{C}}_{r}}\int_{t_{0}}^{t}\|(\mathbb{q}_{\varphi,\,\varphi_{t_{0}}}-\mathbb{q}_{\psi,\,\varphi_{t_{0}}})(s)\|_{L^{2}(M_{t_{0}})}^{2}\,ds.$
We conclude by Lemma 4 and Gronwall’s inequality that
$\|(\mathbb{q}_{\varphi,\,\varphi_{t_{0}}}-\mathbb{q}_{\psi,\,\varphi_{t_{0}}})(t)\|_{L^{2}}\leq{{\mathfrak{C}}_{r,\varphi_{t_{0}}}}\,\|\varphi-\psi\|_{\infty}.$
∎
## 4 Proof of Theorem 3
We now move on to the proof of the main result. We will first prove that a
unique solution to (13) and (14) exists locally using again a fixed-point
argument before finally showing that the solution is defined on $[0,T]$. As
done in the previous section, let us again consider $t_{0}\in[0,T)$ and
$\varphi_{t_{0}}\in\mathcal{D}\mathit{iff}^{m}(\mathbb{R}^{d})$, $p_{t_{0}}\in
L^{2}(M_{t_{0}})$. By the assumption on $A$, there exist $r>0$ and
$\ell_{A}>0$, both depending on $\varphi_{t_{0}}$, such that, letting
$B_{r}=\mkern 1.5mu\overline{\mkern-1.5muB}\mkern 1.5mu_{r}(\mathit{id})$, we
have $\\{\varphi\circ\varphi_{t_{0}}:\varphi\in
B_{r}\\}\subset\mathcal{D}\mathit{iff}^{m}(\mathbb{R}^{d})$ and
$\|A_{\varphi\,\circ\,\varphi_{t_{0}}}-A_{\psi\,\circ\,\varphi_{t_{0}}}\|_{\mathscr{L}(V,\,V^{*})}\leq\ell_{A}\,\|\varphi\circ\varphi_{t_{0}}-\psi\circ\varphi_{t_{0}}\|_{m,\infty}\
\ \mbox{ for all }\ \varphi,\psi\in B_{r}.$
Considering an arbitrary interval $[t_{0},t_{0}+\eta]\subset[0,T]$, let
$S_{\eta,\,\varphi_{t_{0}}}=C([t_{0},t_{0}+\eta],\,B_{r})$ and define
$\varGamma_{\eta}:S_{\eta,\,\varphi_{t_{0}}}\rightarrow
C([t_{0},t_{0}+\eta],\,\mathit{id}+C_{0}^{m}(\mathbb{R}^{d},\mathbb{R}^{d}))$
by
$\varGamma_{\eta}(\varphi)(t)=\mathit{id}+\int_{t_{0}}^{t}v_{\varphi,\,\varphi_{t_{0}}}(s)\circ\varphi(s)\,ds,$
(28)
where
$\left\\{\begin{array}[]{l}v_{\varphi,\,\varphi_{t_{0}}}(s)=(\omega\,K_{V}^{-1}+A_{\varphi(s)\,\circ\,\varphi_{t_{0}}})^{-1}\hskip
1.0ptj_{\varphi,\,\varphi_{t_{0}}}(s)\\\\[5.0pt]
\displaystyle(j_{\varphi,\,\varphi_{t_{0}}}(s)\mid
v^{\prime})=\int_{\varphi(s,\,\varphi_{t_{0}}(M_{0}))}\chi\
Q(\mathbb{p}_{\varphi,\,\varphi_{t_{0}}}(s)\circ\varphi^{-1}(s))\,(-\mathrm{div}\,v^{\prime})\,dx\end{array}\right..$
and $\mathbb{p}_{\varphi,\,\varphi_{t_{0}}}$ is the solution of (20) given by
Theorem 4.
For the mapping $\varGamma_{\eta}$ to be well-defined, one needs to show that
the integral in (28) is finite, which is justified by Lemma 7 below. For the
different proofs that follow, we shall recall first a few results on vector
fields, flows and diffeomorphisms.
###### Proposition 2.
Let $u,u^{\prime}\in V$ and $\varphi,\psi\in C([t_{0},t_{0}+\eta],B_{r})$. For
all $t\in[t_{0},t_{0}+\eta]$, it holds that
1. (i)
$\|u\circ\varphi(t)\|_{m,\infty}\leq{{\mathfrak{C}}_{r}}\|u\|_{m,\infty}$,
2. (ii)
$\|u\circ\varphi(t)-u\circ\psi(t)\|_{m,\infty}\leq{{\mathfrak{C}}_{r}}\|u\|_{m+1,\infty}\|\varphi(t)-\psi(t)\|_{m,\infty}$,
3. (iii)
$\|u\circ\varphi(t)-u^{\prime}\circ\varphi(t)\|_{m,\infty}\leq{{\mathfrak{C}}_{r}}\|u-u^{\prime}\|_{m,\infty}$.
###### Proof.
All these inequalities follow from Faà di Bruno’s formula for higher-order
derivatives of the composition of two functions. They can be found, e.g., in
Younes [2019], Section 7.1. ∎
Furthermore, one has the following controls on $j_{\varphi,\,\varphi_{t_{0}}}$
and $v_{\varphi,\,\varphi_{t_{0}}}$, which generalize the estimates of
Section 6.2 in Hsieh et al. [2020] (stated there for $m=2$):
###### Proposition 3.
Let $\varphi,\psi\in C([t_{0},t_{0}+\eta],B_{r})$. Then for all
$t\in[t_{0},t_{0}+\eta]$, we have
1. (i)
$\|j_{\varphi,\,\varphi_{t_{0}}}(t)\|_{V^{*}}\leq
c_{V}\,\|Q\|_{\infty}\,\|\chi\|_{L^{1}}\colonequals J$
2. (ii)
$\|v_{\varphi,\,\varphi_{t_{0}}}(t)\|_{m+1,\infty}\leq\frac{c_{V}}{\omega}\|j_{\varphi,\,\varphi_{t_{0}}}(t)\|_{V^{*}}\leq\frac{c_{V}}{\omega}J$
3. (iii)
$\|v_{\varphi,\,\varphi_{t_{0}}}(t)-v_{\psi,\,\varphi_{t_{0}}}(t)\|_{m,\infty}\leq\frac{J}{\omega^{2}}\ell_{A}\|\varphi(t)\circ\varphi_{t_{0}}-\psi(t)\circ\varphi_{t_{0}}\|_{m,\infty}+\frac{c_{V}}{\omega}\,\|j_{\varphi,\,\varphi_{t_{0}}}(t)-j_{\psi,\,\varphi_{t_{0}}}(t)\|_{V^{*}}$
(The constant $c_{V}$ was introduced in Equation (2).)
###### Proof.
1. (i)
For any $v^{\prime}\in V$, we see that:
$\displaystyle|(j_{\varphi,\,\varphi_{t_{0}}}(t)|v^{\prime})|\leq\int_{\varphi(t,\,\varphi_{t_{0}}(M_{0}))}|\chi|\
\|Q\|_{\infty}\,|\mathrm{div}\,v^{\prime}|dx$
$\displaystyle\leq\|Q\|_{\infty}\|v^{\prime}\|_{1,\infty}\left(\int_{\varphi(t,\,\varphi_{t_{0}}(M_{0}))}|\chi|dx\right)$
$\displaystyle\leq c_{V}\|Q\|_{\infty}\|v^{\prime}\|_{V}\|\chi\|_{L^{1}}.$
which thus leads to $\|j_{\varphi,\,\varphi_{t_{0}}}(t)\|_{V^{*}}\leq J$.
2. (ii)
We have
$v_{\varphi,\,\varphi_{t_{0}}}(t)=L^{-1}j_{\varphi,\,\varphi_{t_{0}}}(t)$
where $L\colonequals\omega\,K_{V}^{-1}+A_{\varphi(t)\,\circ\,\varphi_{t_{0}}}$
so to prove (ii), we first show that for all $v\in V$,
$\|v\|_{V}\leq(1/\omega)\left\|L\,v\right\|_{V^{*}}$. Indeed
$\displaystyle\left(\frac{1}{\omega}\left\|L\,v\right\|_{V^{*}}\right)^{2}$
$\displaystyle=\left(\frac{1}{\omega}\left\|K_{V}\left(\omega
K_{V}^{-1}+A_{\varphi(t)\,\circ\,\varphi_{t_{0}}}\right)v\right\|_{V}\right)^{2}$
$\displaystyle=\frac{1}{\omega^{2}}\left\|\omega\,v+K_{V}A_{\varphi(t)\,\circ\,\varphi_{t_{0}}}v\right\|_{V}^{2}$
$\displaystyle=\|v\|_{V}^{2}+\frac{1}{\omega^{2}}\,\|K_{V}A_{\varphi(t)\,\circ\,\varphi_{t_{0}}}\hskip
1.0ptv\|_{V}^{2}+\frac{2}{\omega}\,\langle
v,K_{V}A_{\varphi(t)\,\circ\,\varphi_{t_{0}}}\hskip 1.0ptv\rangle_{V}$
$\displaystyle=\|v\|_{V}^{2}+\frac{1}{\omega^{2}}\,\|K_{V}A_{\varphi(t)\,\circ\,\varphi_{t_{0}}}\hskip
1.0ptv\|_{V}^{2}+\frac{2}{\omega}\,(A_{\varphi(t)\,\circ\,\varphi_{t_{0}}}\hskip
1.0ptv\mid v)\,\geq\,\|v\|_{V}^{2},$
where the last inequality follows from the positive definiteness of the
operator $A_{\varphi(t)\,\circ\,\varphi_{t_{0}}}$. Together with the
assumption that $V\hookrightarrow C_{0}^{m+1}(\mathbb{R}^{d},\mathbb{R}^{d})$,
it follows that:
$\|v_{\varphi,\,\varphi_{t_{0}}}(t)\|_{m+1,\infty}\leq
c_{V}\|v_{\varphi,\,\varphi_{t_{0}}}(t)\|_{V}\leq\frac{c_{V}}{\omega}\|j_{\varphi,\,\varphi_{t_{0}}}(t)\|_{V^{*}}\leq\frac{c_{V}}{\omega}J.$
3. (iii)
Writing now
$L_{\varphi}\colonequals\omega\,K_{V}^{-1}+A_{\varphi(t)\,\circ\,\varphi_{t_{0}}}$
and
$L_{\psi}\colonequals\omega\,K_{V}^{-1}+A_{\psi(t)\,\circ\,\varphi_{t_{0}}}$,
we have:
$\displaystyle\|v_{\varphi,\,\varphi_{t_{0}}}(t)-v_{\psi,\,\varphi_{t_{0}}}(t)\|_{m,\infty}$
$\displaystyle=\|L_{\varphi}^{-1}j_{\varphi,\,\varphi_{t_{0}}}(t)-L_{\psi}^{-1}j_{\psi,\,\varphi_{t_{0}}}(t)\|_{m,\infty}$
$\displaystyle\leq\|L_{\varphi}^{-1}\left(j_{\varphi,\,\varphi_{t_{0}}}(t)-j_{\psi,\,\varphi_{t_{0}}}(t)\right)\|_{m,\infty}+\|(L_{\varphi}^{-1}-L_{\psi}^{-1})j_{\psi,\,\varphi_{t_{0}}}(t)\|_{m,\infty}.$
Note that, from the proof of (ii), we obtain in particular
$\|L_{\varphi}^{-1}\|_{\mathscr{L}(V^{*}\\!,\,V)}\leq 1/\omega$ and therefore:
$\displaystyle\|L_{\varphi}^{-1}\left(j_{\varphi,\,\varphi_{t_{0}}}(t)-j_{\psi,\,\varphi_{t_{0}}}(t)\right)\|_{m,\infty}$
$\displaystyle\leq\|L_{\varphi}^{-1}\left(j_{\varphi,\,\varphi_{t_{0}}}(t)-j_{\psi,\,\varphi_{t_{0}}}(t)\right)\|_{m+1,\infty}$
$\displaystyle\leq
c_{V}\|L_{\varphi}^{-1}\left(j_{\varphi,\,\varphi_{t_{0}}}(t)-j_{\psi,\,\varphi_{t_{0}}}(t)\right)\|_{V}$
$\displaystyle\leq\frac{c_{V}}{\omega}\|j_{\varphi,\,\varphi_{t_{0}}}(t)-j_{\psi,\,\varphi_{t_{0}}}(t)\|_{V^{*}}.$
Moreover, using (i),
$\|(L_{\varphi}^{-1}-L_{\psi}^{-1})j_{\psi,\,\varphi_{t_{0}}}(t)\|_{m,\infty}\leq\|L_{\varphi}^{-1}-L_{\psi}^{-1}\|_{\mathscr{L}(V^{*}\\!,\,V)}\,J$
and we also have:
$\displaystyle\|L_{\varphi}^{-1}-L_{\psi}^{-1}\|_{\mathscr{L}(V^{*}\\!,\,V)}$
$\displaystyle=\left\|L_{\varphi}^{-1}\left(L_{\psi}-L_{\varphi}\right)L_{\psi}^{-1}\right\|_{\mathscr{L}(V^{*}\\!,\,V)}$
$\displaystyle=\left\|L_{\varphi}^{-1}\left(A_{\psi(t)\,\circ\,\varphi_{t_{0}}}-A_{\varphi(t)\,\circ\,\varphi_{t_{0}}}\right)L_{\psi}^{-1}\right\|_{\mathscr{L}(V^{*}\\!,\,V)}$
$\displaystyle\leq\|L_{\varphi}^{-1}\|_{\mathscr{L}(V^{*}\\!,\,V)}\
\|A_{\psi(t)\,\circ\,\varphi_{t_{0}}}-A_{\varphi(t)\,\circ\,\varphi_{t_{0}}}\|_{\mathscr{L}(V,\,V^{*})}\
\|L_{\psi}^{-1}\|_{\mathscr{L}(V^{*}\\!,\,V)}\ $
$\displaystyle\leq\frac{1}{\omega^{2}}\,\|A_{\psi(t)\,\circ\,\varphi_{t_{0}}}-A_{\varphi(t)\,\circ\,\varphi_{t_{0}}}\|_{\mathscr{L}(V,\,V^{*})}$
$\displaystyle\leq\frac{\ell_{A}}{\omega^{2}}\,\|\varphi(t)\circ\varphi_{t_{0}}-\psi(t)\circ\varphi_{t_{0}}\|_{m,\infty}$
where the last inequality follows from the Lipschitz assumption on the
operator $A$.
∎
Using the estimates of the previous section, we can in addition show the
following Lipschitz property of $j_{\varphi,\,\varphi_{t_{0}}}$.
###### Lemma 6.
For all $t\in[t_{0},t_{0}+\eta]$,
$\|j_{\varphi,\,\varphi_{t_{0}}}(t)-j_{\psi,\,\varphi_{t_{0}}}(t)\|_{V^{*}}\leq{{\mathfrak{C}}_{r,\varphi_{t_{0}}}}\sup_{s\,\in\,[t_{0},t_{0}+\eta]}\|\varphi(s)-\psi(s)\|_{2,\infty}\leq{{\mathfrak{C}}_{r,\varphi_{t_{0}}}}\,\|\varphi-\psi\|_{\infty}.$
###### Proof.
From the definition of $j$, we make a change of variables to obtain
$\displaystyle\hskip
13.0pt\left|\left(j_{\varphi,\,\varphi_{t_{0}}}(t)-j_{\psi,\,\varphi_{t_{0}}}(t)\mid
v^{\prime}\right)\right|$
$\displaystyle=\Big{|}\int_{\varphi(t)(M_{t_{0}})}\chi\
Q(\mathbb{p}_{\varphi,\,\varphi_{t_{0}}}(t)\circ\varphi^{-1}(t))\,(-\mathrm{div}\,v^{\prime})\,dx$
$\displaystyle\hskip 20.0pt\phantom{}-\int_{\psi(t)(M_{t_{0}})}\chi\
Q(\mathbb{p}_{\psi,\,\varphi_{t_{0}}}(t)\circ\psi^{-1}(t))\,(-\mathrm{div}\,v^{\prime})\,dx\Big{|}$
$\displaystyle\leq\int_{M_{t_{0}}}\Big{|}\chi\circ\varphi(t)\
\,Q(\mathbb{p}_{\varphi,\,\varphi_{t_{0}}}(t))\
\,(-\mathrm{div}\,v^{\prime}\circ\varphi(t))\ \,J\varphi(t)$
$\displaystyle\hskip 60.0pt\phantom{}-\chi\circ\psi(t)\
\,Q(\mathbb{p}_{\psi,\,\varphi_{t_{0}}}(t))\
\,(-\mathrm{div}\,v^{\prime}\circ\psi(t))\ \,J\psi(t)\Big{|}dx$
$\displaystyle\leq\|\nabla\chi\|_{\infty}\ \|\varphi(t)-\psi(t)\|_{\infty}\
\|Q\|_{\infty}\ \|v^{\prime}\|_{1,\infty}\ \|J\varphi(t)\|_{\infty}\
\mathrm{vol}(M_{t_{0}})$ $\displaystyle\hskip
15.0pt\phantom{}+{{\mathfrak{C}}_{r,\varphi_{t_{0}}}}\sup_{s\,\in\,[t_{0},t_{0}+\eta]}\|\varphi(s)-\psi(s)\|_{2,\infty}\
\|v^{\prime}\|_{1,\infty}\ \|J\varphi(t)\|_{\infty}\ \mathrm{vol}(M_{t_{0}})$
$\displaystyle\hskip 15.0pt\phantom{}+\|Q\|_{\infty}\
\|v^{\prime}\|_{2,\infty}\ \|\varphi(t)-\psi(t)\|_{\infty}\
\|J\varphi(t)\|_{\infty}\ \mathrm{vol}(M_{t_{0}})$ $\displaystyle\hskip
15.0pt\phantom{}+{{\mathfrak{C}}_{r}}\ \|Q\|_{\infty}\
\|v^{\prime}\|_{1,\infty}\ \|\varphi(t)-\psi(t)\|_{\infty}\
\mathrm{vol}(M_{t_{0}})$
$\displaystyle\leq{{\mathfrak{C}}_{r,\varphi_{t_{0}}}}\,\|\varphi-\psi\|_{\infty}\
\|v^{\prime}\|_{V},$
where we have estimated term by term and used Lipschitz continuity of $Q$
together with Lemma 5. ∎
We can now go back to the definition of the mapping $\varGamma_{\eta}$.
###### Lemma 7.
For all $\varphi\in C([t_{0},t_{0}+\eta],B_{r})$,
$\varphi_{t_{0}}\in\mathcal{D}\mathit{iff}^{m}(\mathbb{R}^{d})$ and
$p_{t_{0}}\in L^{2}(\varphi_{t_{0}}(M_{0}))$, the Bochner integral in (28) is
uniformly bounded for $t\in[t_{0},t_{0}+\eta]$.
###### Proof.
Using Proposition 3 (ii), we find that for all $s\in[t_{0},t_{0}+\eta]$:
$\displaystyle\|v_{\varphi,\,\varphi_{t_{0}}}(s)\|_{m,\infty}\leq\|v_{\varphi,\,\varphi_{t_{0}}}(s)\|_{m+1,\infty}\leq\frac{c_{V}J}{\omega}$
which gives for all $t\in[t_{0},t_{0}+\eta]$:
$\displaystyle\int_{t_{0}}^{t}\|v_{\varphi,\,\varphi_{t_{0}}}(s)\circ\varphi(s)\|_{m,\infty}\,ds\leq\int_{t_{0}}^{t}{{\mathfrak{C}}_{r}}\,\|v_{\varphi,\,\varphi_{t_{0}}}(s)\|_{m,\infty}\,ds\leq\frac{{{\mathfrak{C}}_{r}}\,c_{V}}{\omega}\,J\,\eta<\infty,$
where the first inequality follows from Proposition 2 (i). ∎
Note that in addition, if $\eta$ is taken small enough such that
$\frac{{{\mathfrak{C}}_{r}}\,c_{V}}{\omega}\,J\,\eta\leq r$ then
$\varGamma_{\eta}$ maps $S_{\eta,\,\varphi_{t_{0}}}$ to itself. The goal is
now to show that $\varGamma_{\eta}$ is a contractive mapping on
$S_{\eta,\,\varphi_{t_{0}}}$. Indeed, for any $\varphi,\psi\in
S_{\eta,\,\varphi_{t_{0}}}$:
$\displaystyle\hskip
14.0pt\|\varGamma_{\eta}(\varphi)-\varGamma_{\eta}(\psi)\|_{\infty}=\sup_{t\,\in\,[t_{0},t_{0}+\eta]}\|\varGamma_{\eta}(\varphi)(t)-\varGamma_{\eta}(\psi)(t)\|_{m,\infty}$
$\displaystyle\leq\sup_{t\,\in\,[t_{0},t_{0}+\eta]}\int_{t_{0}}^{t}\|v_{\varphi,\,\varphi_{t_{0}}}(s)\circ\varphi(s)-v_{\psi,\,\varphi_{t_{0}}}(s)\circ\psi(s)\|_{m,\infty}\,ds$
$\displaystyle\leq\int_{t_{0}}^{t_{0}+\eta}\left(\|v_{\varphi,\,\varphi_{t_{0}}}(s)\circ\varphi(s)-v_{\varphi,\,\varphi_{t_{0}}}(s)\circ\psi(s)\|_{m,\infty}+\|v_{\varphi,\,\varphi_{t_{0}}}(s)\circ\psi(s)-v_{\psi,\,\varphi_{t_{0}}}(s)\circ\psi(s)\|_{m,\infty}\right)ds$
Using Proposition 2 (ii) and (iii), we get:
$\displaystyle\hskip
14.0pt\|\varGamma_{\eta}(\varphi)-\varGamma_{\eta}(\psi)\|_{\infty}$
$\displaystyle\leq{{\mathfrak{C}}_{r}}\int_{t_{0}}^{t_{0}+\eta}\left(\|v_{\varphi,\,\varphi_{t_{0}}}(s)\|_{m+1,\infty}\,\|\varphi(s)-\psi(s)\|_{m,\infty}+\|v_{\varphi,\,\varphi_{t_{0}}}(s)-v_{\psi,\,\varphi_{t_{0}}}(s)\|_{m,\infty}\right)ds$
$\displaystyle\leq{{\mathfrak{C}}_{r}}\int_{t_{0}}^{t_{0}+\eta}\left(\vphantom{\frac{J}{\omega}}\frac{c_{V}}{\omega}\,J\,\|\varphi(s)-\psi(s)\|_{m,\infty}\right.$
$\displaystyle\hskip
60.0pt\left.\phantom{}+c_{V}\left(\frac{J}{\omega^{2}}\,\ell_{A}\,\|\varphi(s)\circ\varphi_{t_{0}}-\psi(s)\circ\varphi_{t_{0}}\|_{m,\infty}+\frac{1}{\omega}\,\|j_{\varphi,\,\varphi_{t_{0}}}(s)-j_{\psi,\,\varphi_{t_{0}}}(s)\|_{V^{*}}\right)\right)ds$
Now using Lemma 6 above, we obtain the following inequalities:
$\displaystyle\|\varGamma_{\eta}(\varphi)-\varGamma_{\eta}(\psi)\|_{\infty}$
$\displaystyle\leq{{\mathfrak{C}}_{r}}\int_{t_{0}}^{t_{0}+\eta}\left(\left(\frac{c_{V}J}{\omega}+\frac{c_{V}J}{\omega^{2}}\,\ell_{A}\,{{\mathfrak{C}}_{\varphi_{t_{0}}}}\right)\|\varphi(s)-\psi(s)\|_{m,\infty}+\frac{c_{V}}{\omega}\,{{\mathfrak{C}}_{r,\varphi_{t_{0}}}}\,\sup_{s\,\in\,[t_{0},t_{0}+\eta]}\|\varphi(s)-\psi(s)\|_{m,\infty}\right)ds$
(29)
$\displaystyle\leq{{\mathfrak{C}}_{r,\varphi_{t_{0}}}}\,\eta\,\|\varphi-\psi\|_{\infty}.$
It follows that there exists a small enough $\eta>0$ depending on
$\varphi_{t_{0}}$ and $r$ such that $\varGamma_{\eta}$ is a well-defined
contraction on $S_{\eta,\,\varphi_{t_{0}}}$ and, by Banach fixed point
theorem, we get the local existence and uniqueness of a solution on
$[t_{0},t_{0}+\eta]$.
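In practice, the fixed point of such a contraction can be approximated by plain Picard iteration; below is a generic sketch, assuming some finite-dimensional discretization of $\varGamma_{\eta}$ (the tolerance and iteration cap are illustrative assumptions, not part of the proof):

```python
import numpy as np

def picard_iterate(Gamma, phi0, tol=1e-10, max_iter=1000):
    """Generic Banach fixed-point iteration: apply the contraction Gamma
    repeatedly until successive iterates agree within tol (sup norm)."""
    phi = np.asarray(phi0)
    for _ in range(max_iter):
        phi_next = Gamma(phi)
        if np.max(np.abs(phi_next - phi)) < tol:  # stopping criterion
            return phi_next
        phi = phi_next
    raise RuntimeError("no convergence: contraction constant may be >= 1")
```

The geometric convergence rate is exactly the contraction constant ${{\mathfrak{C}}_{r,\varphi_{t_{0}}}}\,\eta<1$ obtained above.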
By concatenating local solutions, we can construct a unique maximal solution
$\varphi$ defined on a maximal interval $I_{\max}$, and either
$I_{\max}=[0,T^{\prime})$ for some $T^{\prime}<T$ or $I_{\max}=[0,T]$. To show
that the solution is defined over the entire interval $[0,T]$, we first prove
that $\|\varphi(t)-\mathit{id}\|_{m,\infty}$ is bounded on $I_{\max}$. For all
$t\in I_{\max}$, a solution $\varphi$ satisfies
$\varphi(t,x)=x+\int_{0}^{t}v_{\varphi}(s,\varphi(s,x))\,ds,$
which gives for all $x\in\mathbb{R}^{d}$:
$|\varphi(t,x)-x|\leq\int_{0}^{t}\|v_{\varphi}(s)\|_{\infty}\,ds\leq\int_{0}^{t}\frac{c_{V}}{\omega}\,\|j_{\varphi}(s)\|_{V^{*}}\,ds\leq\frac{c_{V}}{\omega}\,JT.$
(30)
Moreover, we have:
$D\varphi(t,x)={\bm{I}}_{d}+\int_{0}^{t}Dv_{\varphi}(s,\varphi(s,x))\,D\varphi(s,x)\,ds,$
where $\bm{I}_{d}$ denotes the $d\times d$ identity matrix, which leads to
$\displaystyle|D\varphi(t,x)-\bm{I}_{d}|$
$\displaystyle=\left|\int_{0}^{t}\left(\vphantom{\sum}Dv_{\varphi}(s,\varphi(s,x))+Dv_{\varphi}(s,\varphi(s,x))(D\varphi(s,x)-\bm{I}_{d})\right)ds\right|$
$\displaystyle\leq\frac{c_{V}}{\omega}\,JT+\int_{0}^{t}\,\frac{c_{V}}{\omega}\,J\
|D\varphi(s,x)-\bm{I}_{d}|\,ds.$
By Grönwall’s inequality
$|D\varphi(t,x)-\bm{I}_{d}|\leq\frac{c_{V}}{\omega}\,JT\exp\\!\left(\frac{c_{V}}{\omega}\,JT\right).$
(31)
For the second order derivatives, we see that:
$\displaystyle\begin{split}|D^{2}\varphi(t,x)|&\leq\int_{0}^{t}\left(\vphantom{\sum}|D^{2}v_{\varphi}(s,\varphi(s,x))|\,|D\varphi(s,x)|^{2}\right.\\\
&\hskip
30.0pt\left.\vphantom{\sum}\phantom{}+|Dv_{\varphi}(s,\varphi(s,x))|\,|D^{2}\varphi(s,x)|\right)ds,\end{split}$
and inserting the bound (31) into the above (writing $B_{J}$ for the constant on the right-hand side of (31), so that $|D\varphi(s,x)|\leq 1+B_{J}$), we obtain
$|D^{2}\varphi(t,x)|\leq\left(1+B_{J}\right)^{2}\left(\frac{c_{V}}{\omega}\,JT\right)\,+\int_{0}^{t}\frac{c_{V}}{\omega}\,J\
|D^{2}\varphi(s,x)|\,ds.$
Using Grönwall’s inequality again, we get that $|D^{2}\varphi(t,x)|$ is
bounded by a constant depending on $J$, uniformly in $t$ and $x$. Then, by a
simple recursive argument, we show similarly that there exists a constant
(which, enlarging $B_{J}$ if necessary, we still denote $B_{J}$) such that for any $2\leq k\leq m$:
$|D^{k}\varphi(t,x)|\leq B_{J},\ \forall t\in[0,T^{\prime}),\ \forall
x\in\mathbb{R}^{d}.$ (32)
Finally, with (30), (31), and (32), we conclude that
$\|\varphi(t)-\mathit{id}\|_{m,\infty}\leq{{\mathfrak{C}}}_{J}.$
The same inequality holds for $\varphi^{-1}(t)$ for $t\in[0,T^{\prime})$.
Indeed, from standard results on flows (cf. for instance Younes [2019] Chap.
7), one has that for $t\in[0,T^{\prime})$, the inverse map
$\psi(t)\colonequals\varphi(t)^{-1}$ is obtained as the flow of the ODE
$dz/ds=\tilde{v}^{(t)}(s,z)$, with $\tilde{v}^{(t)}(s)=-v_{\varphi(t-s)}$, and
one can repeat the analysis above with $\tilde{v}$ in place of $v$.
Importantly, this tells us that we can choose $r={{\mathfrak{C}}_{J}}$
independently of $T^{\prime}$.
Now we can show that $\varphi(t)$ has a limit in
$\mathcal{D}\mathit{iff}^{m}(\mathbb{R}^{d})$ as $t\uparrow T^{\prime}$ by the
Cauchy criterion. Let $(t_{k})_{k=1}^{\infty}\subset I_{\max}$ be a sequence
such that $t_{k}\uparrow T^{\prime}$. For $k<l$, we have
$\displaystyle\|\varphi(t_{k})-\varphi(t_{l})\|_{m,\infty}$
$\displaystyle\leq\int_{t_{k}}^{t_{l}}\|v_{\varphi}(s)\circ\varphi(s)\|_{m,\infty}\,ds$
$\displaystyle\leq\int_{t_{k}}^{t_{l}}{{\mathfrak{C}}}_{J}\
c_{V}\,\|v_{\varphi}(s)\|_{V}\,ds$
$\displaystyle\leq\int_{t_{k}}^{t_{l}}{{\mathfrak{C}}}_{J}\
\frac{c_{V}}{\omega}\,\|j_{\varphi}(s)\|_{V^{*}}\,ds$
$\displaystyle\leq\frac{{{\mathfrak{C}}}_{J}\,c_{V}}{\omega}\,J\
(t_{l}-t_{k}),$
which shows that $(\varphi(t_{k}))_{k=1}^{\infty}$ is a Cauchy sequence in
$\mathcal{D}\mathit{iff}^{m}(\mathbb{R}^{d})$ for $\|\cdot\|_{m,\infty}$. It
follows that $t\mapsto\varphi(t)$ has a limit $\varphi(T^{\prime})$ as
$t\uparrow T^{\prime}$ in the complete space
$id_{\mathbb{R}^{d}}+\mathcal{C}_{0}^{m}(\mathbb{R}^{d},\mathbb{R}^{d})$.
Similarly, replacing $v$ by $\tilde{v}$, we find that $\varphi(t)^{-1}$ also
has a limit at $T^{\prime}$, which is necessarily $\varphi(T^{\prime})^{-1}$.
This shows that $\varphi(T^{\prime})\in B_{r}$, so that the solution can be
continued at $t=T^{\prime}$, which contradicts that $[0,T^{\prime})$ is the
maximal interval of existence.
From the above analysis, we also obtain that $t\mapsto\varphi(t)$ is bounded
on $[0,T]$ for $d_{m,\infty}$ and therefore we have
$\sup_{t\,\in\,[0,\,T]}\|F_{\varphi(t)}^{-1}\|_{\infty}<\infty$. Thus
Corollary 2 applies and it follows that we get a weak solution $\mathbb{p}$ to
the reaction-diffusion PDE that is also well-defined on $[0,T]$, which
concludes the proof of Theorem 3.
## 5 Discussion
We introduced a new general longitudinal model to describe the shape of a
material deforming through the action of an internal growth potential which
itself evolves according to an advection-reaction-diffusion process. This
model extends our previous work in Hsieh et al. [2020], which did not include
any dynamics on the growth potential beyond pure advection. The present paper
was mainly dedicated to proving the long time existence of solutions to the
resulting system of coupled PDEs on moving domains. In contrast with other
related reaction-diffusion systems on moving domains, which often only yield
short-time existence, global existence is here made possible in part
thanks to the use of a particular regularization energy on the deformation.
Figure 2: Synthetic initial shape and growth potential used in the numerical
simulations. (a) Mesh of the initial shape. (b) Initial potential centered at
$(-0.5,0.3)$.
Although this paper focuses on mathematical aspects, simple numerical
simulations of the evolution equations given by (13) and (14) can further
illustrate the potential interest of this model in future applications to the
study of growth or atrophy of biological tissues, which was the original
motivation behind our work. We present a few such preliminary simulations
using the simple synthetic 2D domain shown in Figure 2 (a) as initial shape
$M_{0}$. We choose the tensor $A_{\varphi}$ to be the isotropic elastic tensor
given by (7) with Lamé parameters $\lambda=0$ and $\mu=1$ on $\varphi(M_{0})$
as described earlier in Section 2.2. The initial potential $p_{0}$ is a
shifted radial function compactly supported in a ball centered at
$c_{\mathrm{true}}=(-0.5,0.3)$ as shown in Figure 2 (b). Specifically, it
takes the form
$p_{0}(x;c,r,h)=h\left(\frac{|x-c|^{2}}{r^{2}}-1\right)^{2}\mathbbm{1}_{B(c,r)}(x),$
(33)
with $c\in M_{0}$, $r>0$ and $h>0$ being the center, radius and height of the
potential function respectively. We also adopt simple reaction-diffusion and
yank models for the purpose of illustration. For the reaction-diffusion model,
we let the diffusion tensor be a constant
$S_{\varphi}(t,x)=\mathrm{diag}(0.025,0.005)$, which diffuses five times
faster along the $x$-direction than along the $y$-direction. The reaction and
yank functions $R$ and $Q$ are both $C^{2}$ piecewise polynomials supported on
$[p_{\min},p_{\max}]=[0.01,1]$. Their plots are displayed in Figure 3.
Figure 3: Plots of the functions $R$ and $Q$ used for the reaction and yank
expressions in the simulations.
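For concreteness, here is a minimal Python sketch of the initial potential in (33); only the center $c_{\mathrm{true}}=(-0.5,0.3)$ is specified above, so the default radius and height below are illustrative placeholders:

```python
import numpy as np

def p0(x, c=(-0.5, 0.3), r=0.4, h=1.0):
    """Initial growth potential of Eq. (33): a smooth bump of height h and
    radius r, compactly supported in the ball B(c, r)."""
    x = np.atleast_2d(x)                             # points, shape (n, 2)
    d2 = np.sum((x - np.asarray(c))**2, axis=1)      # squared distance to c
    return h * (d2 / r**2 - 1.0)**2 * (d2 < r**2)    # zero outside B(c, r)
```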
With the above selection of parameters and initial conditions, the evolution
of the growth potential and the resulting deformation of the domain’s shape
are shown in Figure 1. We note that the potential eventually becomes constant
over the whole domain, after which the deformation stops. One of our main
subjects of future investigation will be to tackle the inverse problem
associated with this longitudinal model, generalizing the work done in Hsieh et
al. [2020]. In other words, if we observe the initial and final (plus possibly
some intermediate) domain shapes and if a parametric representation of the
initial growth potential such as (33) is given, is it possible to recover this
initial potential, in particular its location? This issue relates to a long-
term goal, in medical imaging, to infer the early impact of neuro-degenerative
diseases based on later observations, allowing for a better understanding of
their pathogenesis.
Figure 4: Effect of the growth potential’s center $c$ on the deformed domain
at different times: (a) $T^{\prime}=10$, (b) $T^{\prime}=15$, (c)
$T^{\prime}=20$, (d) $T^{\prime}=25$. On the left column are the ground truth
domains obtained with $c=c_{\mathrm{true}}=(-0.5,0.3)$. The middle and right
columns are plots of the varifold distance to this ground truth domain when
varying $c$.
To give a hint at the feasibility of such an inverse problem in a simple
controlled setting, we consider the deformed domains obtained with the
simulation of Figure 1 at different times $T^{\prime}$ and for each
$T^{\prime}$, we run our evolution model up to $T^{\prime}$ but by varying the
center $c$ of the initial growth potential in (33) (all other parameters in
the model being kept the same). The shape of the domain’s boundary at
$T^{\prime}$ for the different choices of $c$ is then compared to the ground
truth (i.e. the one obtained for $c=c_{\mathrm{true}}$). To quantify this difference
between two boundary curves, we evaluate their distance for the varifold
metric introduced in Charon and Trouvé [2013] that is known to provide a
robust measure of proximity between curves. The results are shown in Figure 4
in which the left column displays the ground truth domains for the different
$T^{\prime}$ while the middle and right columns are plots of the varifold
energy with respect to the two coordinates of $c$ with bright colors
corresponding to lower values of the varifold distance, i.e., closer proximity
to the ground truth domain. As can be seen and expected, for each time
$T^{\prime}$, we obtain a minimum distance of $0$ at $c=c_{\mathrm{true}}$, but one can
further notice that the energy is relatively well behaved around that minimum:
for instance, we do not observe empirically the presence of additional local
minima. We also note that the global minima appear more pronounced at
intermediate times than at early or late times.
Although very preliminary, those results suggest that formulating the inverse
problem as the minimization of the varifold distance to the observed final
domain over the parameters of the initial potential is an a priori viable
approach for this problem. In future work, we therefore plan to analyze the
well-posedness of such a minimization problem and investigate efficient
methods for numerical optimization, in particular to evaluate the gradient of
the energy.
## Acknowledgements
Nicolas Charon acknowledges the support of the NSF through the grant
DMS-1945224.
## References
* Ambrosi et al. [2011] D. Ambrosi, G. A. Ateshian, E. M. Arruda, S. C. Cowin, J. Dumais, A. Goriely, G. A. Holzapfel, J. D. Humphrey, R. Kemkemer, E. Kuhl, J. E. Olberding, L. A. Taber, and K. Garikipati. Perspectives on biological growth and remodeling. _Journal of the Mechanics and Physics of Solids_ , 59(4):863–883, April 2011.
* Arguillere et al. [2014] Sylvain Arguillere, Emmanuel Trélat, Alain Trouvé, and Laurent Younes. Shape deformation and optimal control. _ESAIM: Proceedings and Surveys_ , 45:300–307, 2014. Publisher: EDP Sciences.
* Aronszajn [1950] Nachman Aronszajn. Theory of reproducing kernels. _Transactions of the American mathematical society_ , 68(3):337–404, 1950.
* Bajcinca [2013] Naim Bajcinca. Analytic solutions to optimal control problems in crystal growth processes. _Journal of Process Control_ , 23(2):224–241, February 2013.
* Beg et al. [2005] M Faisal Beg, Michael I Miller, Alain Trouvé, and Laurent Younes. Computing large deformation metric mappings via geodesic flows of diffeomorphisms. _International journal of computer vision_ , 61(2):139–157, 2005.
* Bernauer and Herzog [2011] Martin K. Bernauer and Roland Herzog. Optimal Control of the Classical Two-Phase Stefan Problem in Level Set Formulation. _SIAM Journal on Scientific Computing_ , 33(1):342–363, January 2011.
* Bressan and Lewicka [2018] Alberto Bressan and Marta Lewicka. A Model of Controlled Growth. _Archive for Rational Mechanics and Analysis_ , 227(3):1223–1266, March 2018.
* Bruveris and Vialard [2016] Martins Bruveris and François-Xavier Vialard. On Completeness of Groups of Diffeomorphisms. _arXiv:1403.2089 [math]_ , January 2016.
* Burdzy et al. [2004] Chris Burdzy, Zhen-Qing Chen, and John Sylvester. The heat equation in time dependent domains with insulated boundaries. _Journal of mathematical analysis and applications_ , 294(2):581–595, 2004.
* Charon and Trouvé [2013] Nicolas Charon and Alain Trouvé. The varifold representation of nonoriented shapes for diffeomorphic registration. _SIAM Journal on Imaging Sciences_ , 6(4):2547–2580, 2013.
* Ciarlet [1988] Philippe G Ciarlet. _Three-dimensional elasticity_ , volume 20. Elsevier, 1988.
* Dupuis et al. [1998] P Dupuis, U Grenander, and MI Miller. Variation Problems on Flows of Diffeomorphisms for Image Matching. _Quarterly of Applied Mathematics_ , LVI(4):587–600, 1998.
* Goudon and Vasseur [2010] Thierry Goudon and Alexis Vasseur. Regularity analysis for systems of reaction-diffusion equations. In _Annales scientifiques de l’Ecole normale supérieure_ , volume 43, pages 117–142, 2010.
* Gris et al. [2018] Barbara Gris, Stanley Durrleman, and Alain Trouvé. A Sub-Riemannian Modular Framework for Diffeomorphism-Based Analysis of Shape Ensembles. _SIAM Journal on Imaging Sciences_ , 11(1):802–833, January 2018.
* Hsieh et al. [2019] Dai-Ni Hsieh, Sylvain Arguillère, Nicolas Charon, Michael I. Miller, and Laurent Younes. A model for elastic evolution on foliated shapes. In Albert C. S. Chung, James C. Gee, Paul A. Yushkevich, and Siqi Bao, editors, _Information Processing in Medical Imaging_ , pages 644–655. Springer International Publishing, 2019.
* Hsieh et al. [2020] Dai-Ni Hsieh, Sylvain Arguillère, Nicolas Charon, and Laurent Younes. Mechanistic Modeling of Longitudinal Shape Changes: equations of motion and inverse problems. _arXiv:2003.05512 [math]_ , March 2020.
* Humphrey [2003] J.d. Humphrey. Review Paper: Continuum biomechanics of soft biological tissues. _Proceedings of the Royal Society of London. Series A: Mathematical, Physical and Engineering Sciences_ , 459(2029):3–46, January 2003. Publisher: Royal Society.
* Joshi and Miller [2000] S. Joshi and M. Miller. Landmark matching via large deformation diffeomorphisms. _IEEE transactions in Image Processing_ , 9(8):1357–1370, 2000.
* Kulason et al. [2020] Sue Kulason, Michael I. Miller, Alain Trouvé, and Alzheimer’s Disease Neuroimaging Initiative. Reaction-Diffusion Model of Cortical Atrophy Spread during Early Stages of Alzheimer’s Disease. _bioRxiv_ , page 2020.11.02.362855, November 2020.
* Ladyženskaja et al. [1988] Olga A Ladyženskaja, Vsevolod Alekseevich Solonnikov, and Nina N Uralceva. _Linear and quasi-linear equations of parabolic type_ , volume 23. American Mathematical Soc., 1988.
* Lewicka et al. [2011] Marta Lewicka, L. Mahadevan, and Mohammad Reza Pakzad. The Föppl-von Kármán equations for plates with incompatible strains. _Proceedings of the Royal Society A: Mathematical, Physical and Engineering Sciences_ , 467(2126):402–426, February 2011.
* Lions and Magenes [1972] Jacques Louis Lions and Enrico Magenes. _Non-Homogeneous Boundary Value Problems and Applications Vol. 1_ , volume 181 of _Grundlehren der mathematischen Wissenschaften_. Springer-Verlag Berlin Heidelberg, 1972.
* Marsden and Hughes [1994] Jerrold E Marsden and Thomas JR Hughes. _Mathematical foundations of elasticity_. Courier Corporation, 1994.
* Menzel and Kuhl [2012] Andreas Menzel and Ellen Kuhl. Frontiers in growth and remodeling. _Mechanics Research Communications_ , 42:1–14, June 2012\.
* Rodriguez et al. [1994] Edward K Rodriguez, Anne Hoger, and Andrew D McCulloch. Stress-dependent finite growth in soft elastic tissues. _Journal of biomechanics_ , 27(4):455–467, 1994\.
* Trifkovic et al. [2009] Milana Trifkovic, Mehdi Sheikhzadeh, and Sohrab Rohani. Multivariable real-time optimal control of a cooling and antisolvent semibatch crystallization process. _AIChE Journal_ , 55(10):2591–2602, 2009.
* Trouvé [1995] Alain Trouvé. An approach of pattern recognition through infinite dimensional group action. _Rapport de recherche du LMENS_ , 1995.
* Younes [2011] Laurent Younes. Constrained Diffeomorphic Shape Evolution. _Foundations of Computational Mathematics_ , 2011.
* Younes [2014] Laurent Younes. Gaussian diffeons for surface and image matching within a Lagrangian framework. _Geometry, Imaging and Computing_ , 1(1):141–171, 2014.
* Younes [2019] Laurent Younes. _Shapes and Diffeomorphisms_ , volume 171. Springer, 2019.
* Younes et al. [2020] Laurent Younes, Barbara Gris, and Alain Trouvé. Sub-Riemannian Methods in Shape Analysis. In Philipp Grohs, Martin Holler, and Andreas Weinmann, editors, _Handbook of Variational Methods for Nonlinear Geometric Data_ , pages 463–495. Springer International Publishing, Cham, 2020.
Phases of learning dynamics in artificial neural networks: with or without
mislabeled data
Yu Feng and Yuhai Tu
IBM T. J. Watson Research Center
Yorktown Heights, NY10598
Abstract
Despite the tremendous success of deep neural networks in machine learning, the
underlying reason for their superior learning capability remains unclear. Here,
we present a framework based on statistical physics to study dynamics of
stochastic gradient descent (SGD) that drives learning in neural networks. By
using the minibatch gradient ensemble, we construct order parameters to
characterize dynamics of weight updates in SGD. In the case without mislabeled
data, we find that the SGD learning dynamics transitions from a fast learning
phase to a slow exploration phase, which is associated with large changes in
order parameters that characterize the alignment of SGD gradients and their
mean amplitude. In the more complex case with randomly mislabeled samples, SGD
learning dynamics falls into four distinct phases. The system first finds
solutions for the correctly labeled samples in phase I; it then wanders around
these solutions in phase II until it finds a direction to learn the mislabeled
samples during phase III, after which it finds solutions that satisfy all
training samples during phase IV. Correspondingly, the test error decreases
during phase I and remains low during phase II; however, it increases during
phase III and reaches a high plateau during phase IV. The transitions between
different phases can be understood by changes of order parameters that
characterize the alignment of the mean gradients for the two datasets
(correctly and incorrectly labeled samples) and their (relative) strength
during learning. We find that individual sample losses for the two datasets
are most separated during phase II, which leads to a cleaning process to
eliminate mislabeled samples for improving generalization. Overall, we believe
that the approach based on statistical physics and stochastic dynamical
systems theory provides a promising framework to describe and understand
learning dynamics in neural networks, which may also lead to more efficient
learning algorithms.
## 1 Introduction: Learning as a stochastic dynamical system
Modern artificial neural network-based algorithms, in particular deep learning
neural network (DLNN) [1, 2], have enjoyed a long string of tremendous
successes in achieving human level performance in image recognition [3],
machine translation [4], games [5], and even solving longstanding grand
challenge scientific problems such as protein folding [6]. However, despite
DLNN’s successes, the underlying mechanism of how they work remains unclear.
For example, one key ingredient for the powerful DLNN is a relatively simple
iterative method called stochastic gradient descent (SGD) [7, 8]. However, the
reason why SGD is so effective in finding highly generalizable solutions in a
high dimensional nonconvex loss function landscape remains unclear. The random
elements due to subsampling in SGD seems key for learning, yet the inherent
noise in SGD also makes it difficult to understand.
From thermodynamics and statistical physics, we know that physical systems
with many degrees of freedom are subject to stochastic fluctuations, e.g.,
thermal noise that drives Brownian motion, and powerful tools have been
developed for understanding collective behaviors in stochastic processes [9].
In this paper, we propose to consider the SGD based learning process as a
stochastic dynamical system and to investigate the SGD-based learning dynamics
by using concepts and methods from statistical physics.
In an artificial neural network (ANN), the model is parameterized by its
weights represented as an $N_{p}$-dimensional vector:
$w=(w_{1},w_{2},.....,w_{N_{p}})$ where $N_{p}$ is the number of parameters
(weights). For supervised learning, there is a set of $N$ training samples
each with an input vector $X_{k}$ and a correct output vector $Z_{k}$ for
$k=1,2,...,N$. For each input $X_{k}$, the learning system predicts an output
vector $Y_{k}=G(X_{k},w)$, where the output function $G$ depends on the
architecture of the NN as well as its weights $w$. The goal of learning is to
find the weight parameters to minimize the difference between the predicted
and correct output characterized by an overall loss function (or energy
function):
$L(w)=N^{-1}\sum_{k=1}^{N}l_{k},$ (1)
where $l_{k}=d(Y_{k},Z_{k})$ is the loss for sample $k$ that measures the
distance between $Y_{k}$ and $Z_{k}$. A popular choice for $d$ is the cross-
entropy loss, which is what we use in this paper.
One learning strategy is to update the weights by following the gradient of
$L$ directly. However, this direct gradient descent (GD) scheme is
computationally prohibitive for large datasets and it also has the obvious
shortcoming of being trapped by local minima or saddle points. SGD was first
introduced to circumvent the large dataset problem by updating the weights
according to a subset (minibatch) of samples randomly chosen at each iteration
[7]. Specifically, the change of weight $w_{i}$ $(i=1,2,...,N_{p})$ for
iteration $t$ in SGD is given by:
$\Delta w_{i}(t)=-\alpha\frac{\partial L^{\mu(t)}(w)}{\partial w_{i}},$ (2)
where $\alpha$ is the learning rate and $\mu(t)$ represents the random
minibatch used for iteration $t$. The mini loss function (MLF) for minibatch
$\mu$ of size $B$ is defined as:
$L^{\mu}(w)=B^{-1}\sum_{l=1}^{B}d(Y_{\mu_{l}},Z_{\mu_{l}}),$ (3)
where $\mu_{l}$ ($l=1,2,..,B$) labels the $B$ randomly chosen training
samples.
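As a minimal illustration of Eqs. (2)-(3), here is a Python sketch of one SGD iteration; the per-sample gradient function `grad_sample` and the hyperparameter values are placeholders of this sketch, not part of the setup above:

```python
import numpy as np

def sgd_step(w, grad_sample, X, Z, alpha=0.01, B=25, rng=np.random):
    """One SGD iteration (Eq. 2): average the per-sample loss gradients over
    a random minibatch of size B (the MLF gradient of Eq. 3), then step."""
    idx = rng.choice(len(X), size=B, replace=False)  # random minibatch mu(t)
    g_mu = np.mean([grad_sample(w, X[i], Z[i]) for i in idx], axis=0)
    return w - alpha * g_mu
```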
Besides the computational advantage of SGD, the inherent noise due to random
subsampling in SGD allows the system to escape local traps. Noise in SGD comes
from the difference between the minibatch loss function $L^{\mu}$ and the whole
batch loss function $L$: $\delta L^{\mu}\equiv L^{\mu}-L$. By taking the
continuous time approximation in Eq. (2), the SGD learning dynamics can be
described by a Langevin equation:
$\frac{dw}{dt}=-\alpha\nabla_{w}L+\eta,$ (4)
where the first term on the right hand side (RHS) of Eq. 4 is the usual
deterministic gradient descent term, and the second term corresponds to the
SGD noise defined as: $\eta\equiv-\alpha\nabla\delta L^{\mu}$. The SGD noise
has zero mean $\langle\eta\rangle_{\mu}=0$ and its strength is characterized
by the noise matrix:
$\Delta_{ij}\equiv\langle\eta_{i}\eta_{j}\rangle=\alpha^{2}C_{ij}$, where the
co-variance matrix ${\bf C}$ can be written as:
$C_{ij}\equiv\langle\frac{\partial\delta L^{\mu}}{\partial
w_{i}}\frac{\partial\delta L^{\mu}}{\partial
w_{j}}\rangle_{\mu}=\langle\frac{\partial L^{\mu}}{\partial
w_{i}}\frac{\partial L^{\mu}}{\partial w_{j}}\rangle_{\mu}-\frac{\partial
L}{\partial w_{i}}\cdot\frac{\partial L}{\partial w_{j}}.$ (5)
According to Eq. 4, the SGD based learning dynamics can be considered as
stochastic motion of a “learning particle” ($w$) in the high-dimensional
weight space. In physical systems that are in thermal equilibrium, their
stochastic dynamics can also be described by Langevin equations with the same
deterministic term as in Eq. 4 but with a much simpler noise term that
describes the isotropic and homogeneous thermal fluctuations. Indeed, as first
pointed out by Chaudhari and Soatto [10], the SGD noise is neither isotropic
nor homogeneous in weight space. In this sense, the SGD noise is highly
nonequilibrium. As a result of the nonequilibrium SGD noise, the steady state
distribution of weights is not the Boltzmann distribution as in equilibrium
systems, and SGD dynamics exhibits much richer behaviors than simply
minimizing a global loss function (free energy).
How can we understand SGD-based learning in ANN? Here, we propose to bring
useful concepts and tools from statistical physics [11] and stochastic
processes [9] to bear on characterizing and investigating the SGD learning
process/dynamics. In the rest of this paper, we describe a systematic way to
characterize SGD dynamics based on order parameters that are defined over the
minibatch gradient ensemble. We show how this approach allows us to identify
and understand various phases in the learning process without and with
labeling noise, which may lead to useful algorithms to improve generalization
in the presence of mislabeled data. Throughout our study, we use realistic but
simple datasets to demonstrate the principles of our approach with less
attention paid to the absolute performance.
## 2 Characterizing SGD learning dynamics: the minibatch gradient ensemble
and order parameters
To characterize the stochastic learning dynamics in SGD, we introduce the
concept of minibatch ensemble $\\{\mu\\}$ where each member of the ensemble is
a minibatch with $B$ samples chosen randomly from the whole training dataset
(size $N$). Based on the minibatch ensemble, we can define an ensemble of
minibatch loss functions $L^{\mu}$ or equivalently an ensemble of gradients
$\\{g^{\mu}(\equiv-\nabla L^{\mu}(w))\\}$ at each weight vector $w$.
The SGD learning dynamics is fully characterized by statistical properties of
the gradient ensemble in weight space $\\{g^{\mu}(w)\\}$. At each point in
weight space, the ensemble average of the minibatch gradients is the gradient
over the whole dataset: $g(w)\equiv\langle g^{\mu}(w)\rangle_{\mu}(=-\nabla
L(w))$, and fluctuations of the gradients around their mean give rise to the
noise matrix (Eq. 5). To measure the alignment among the minibatch gradients,
we define an alignment parameter $R$:
$R(w)\equiv\langle\hat{g}^{\mu}(w)\cdot\hat{g}^{\nu}(w)\rangle_{\mu,\nu},$ (6)
where $\hat{g}^{\mu}=g^{\mu}/\|g^{\mu}\|$ is the unit vector in gradient
direction $g^{\mu}$. The alignment parameter is the cosine of the relative
angle between two gradients averaged over all pairs of minibatches $(\mu,\nu)$
in the ensemble.
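A minimal sketch of how $R$ can be estimated, assuming the sampled minibatch gradients have been stacked into a matrix (one row per minibatch); the trivial $\mu=\nu$ pairs are excluded from the average:

```python
import numpy as np

def alignment_R(G):
    """Alignment parameter R of Eq. (6): average cosine between pairs of
    minibatch gradients; G has shape (n_batches, N_p), one gradient per row."""
    Ghat = G / np.linalg.norm(G, axis=1, keepdims=True)  # unit gradient vectors
    cos = Ghat @ Ghat.T                                  # pairwise cosines
    n = len(G)
    return (cos.sum() - n) / (n * (n - 1))               # drop mu == nu pairs
```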
To analyze the gradient fluctuations in different directions, we can project
the minibatch gradient $g^{\mu}$ onto the mean $g$ and write it as:
$g^{\mu}=g^{\mu}_{\bot}+\lambda_{\mu}g,$ (7)
where $\lambda_{\mu}=(g^{\mu}\cdot g)/\|g\|^{2}$ is the projection constant
and $g^{\mu}_{\bot}$ is the residue gradient perpendicular to $g$:
$g^{\mu}_{\bot}\cdot g=0$. In analogy to kinetic energy, we use the square of
the gradient to measure the learning activity. The ensemble averaged activity
$(A)$ can be split into two parts:
$A\equiv\langle\|g^{\mu}\|^{2}\rangle_{\mu}=\langle\|g^{\mu}_{\bot}\|^{2}\rangle_{\mu}+\langle\lambda^{2}_{\mu}\rangle_{\mu}\|g\|^{2}\equiv
A_{\bot}+A_{\|},$ (8)
where $A_{\|}$ and $A_{\bot}$ represent activities along the mean gradient and
orthogonal to it, respectively.
The total variance $D$ of fluctuations in all directions is the trace of the
co-variance matrix ${\bf C}$:
$D\equiv Tr({\bf C})=\sum_{i}C_{ii}=A_{\bot}+D_{\|},$ (9)
where $D_{\|}=\sigma_{\lambda}^{2}\|g\|^{2}$ is the variance along the
direction of the batch gradient $g$ with
$\sigma^{2}_{\lambda}\equiv\langle\lambda^{2}_{\mu}\rangle_{\mu}-1$ the
variance of $\lambda_{\mu}$ (Note that $\langle\lambda_{\mu}\rangle_{\mu}=1$
by definition); $A_{\bot}$ is the total variance in the orthogonal directions.
The mean learning activity can be written as: $A=A_{0}+A_{\bot}+D_{\|}$, where
$A_{0}\equiv\|g\|^{2}$ represents the directed activity along the mean
gradient direction; $A_{\bot}$ and $D_{\|}$ represent the diffusive search
activities along the directions orthogonal and parallel to the mean gradient,
respectively.
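To make the decomposition concrete, here is a minimal sketch (with illustrative array shapes, not part of the original analysis) computing the order parameters of Eqs. (7)-(9) from the same stacked gradient ensemble:

```python
import numpy as np

def order_parameters(G):
    """Order parameters of Eqs. (7)-(9) from a sampled gradient ensemble G
    of shape (n_batches, N_p), one minibatch gradient per row."""
    g = G.mean(axis=0)                           # mean gradient over ensemble
    lam = G @ g / np.dot(g, g)                   # projection constants lambda_mu
    G_perp = G - np.outer(lam, g)                # residues orthogonal to g
    A0 = np.dot(g, g)                            # directed activity ||g||^2
    A_perp = np.mean(np.sum(G_perp**2, axis=1))  # activity orthogonal to g
    A = A_perp + np.mean(lam**2) * A0            # total activity (Eq. 8)
    sigma2_lam = np.mean(lam**2) - 1.0           # variance of lambda (mean is 1)
    D = A_perp + sigma2_lam * A0                 # total noise variance (Eq. 9)
    return A, A0, A_perp, D, sigma2_lam
```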
All these quantities ($A$, $A_{0}$, $R$, $\sigma^{2}_{\lambda}$) depend on the
weights ($w$). Along an SGD learning trajectory in weight space, we can
evaluate these order parameters and their relative values at any given time
$t$ to characterize different phases of the SGD learning dynamics. For
example, we use $A$ and $A_{0}$ to measure the total learning activity and the
activity along the mean gradient direction respectively. The alignment among
different minibatch gradients is measured by $R$, which is related to the
fractional aligned activity $A_{0}/A$. The fluctuations of the minibatch
gradients projected onto the mean gradient are measured by
$\sigma^{2}_{\lambda}$. In our previous work [12], we used time averaging to
approximate some of these order parameters for computational convenience.
However, properties of the SGD dynamics at any given point in weight space are
precisely defined by these ensemble averaged order parameters, which are used
hereafter.
As mentioned before, the SGD noise is anisotropic and varies in weight space.
Each nonnegative eigenvalue $e_{l}$ of the symmetric, positive semi-definite
co-variance matrix ${\bf C}$ gives the noise strength in the corresponding eigen-direction
($l=1,2,...,N_{p}$ with $N_{p}$ the number of weights or the dimension of the
weight space). The overall noise strength $D=Tr({\bf
C})=\sum_{l=1}^{N_{p}}e_{l}$ describes the total search activity, and the
eigenvalue spectrum $\\{e_{l},\;l=1,2,...,N_{p}\\}$ tells us how much of the
total search activity is spent in each eigen-direction. From the noise
spectrum, we can define an effective dimension of search activity $D_{s}(w)$
as the number of leading eigen-directions whose combined variance accounts
for a certain large percentage (e.g., $90\%$) of the total variance $D$.
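A minimal sketch of estimating the noise spectrum and $D_{s}$ from a sampled gradient ensemble; for large $N_{p}$ one would work with the smaller Gram matrix instead, but the direct computation below keeps the definition transparent:

```python
import numpy as np

def noise_spectrum(G, frac=0.90):
    """Rank-ordered eigenvalues of the empirical co-variance matrix C
    (Eq. 5) of a gradient ensemble G (n_batches, N_p), and the effective
    dimension D_s holding `frac` of the total variance D = Tr(C)."""
    dG = G - G.mean(axis=0)                    # gradient fluctuations
    C = dG.T @ dG / len(G)                     # empirical co-variance matrix
    e = np.sort(np.linalg.eigvalsh(C))[::-1]   # descending eigenvalues e_l
    D_s = int(np.searchsorted(np.cumsum(e) / e.sum(), frac) + 1)
    return e, D_s
```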
## 3 Phases of SGD learning dynamics without mislabeled data
We first study the learning dynamics without mislabeled data, e.g., the
original MNIST dataset. As shown in Fig. 1, dynamics of the overall loss
function $L$ suggests that there are two phases in learning. There is an
initial fast learning phase where $L$ decreases quickly followed by an
exploration phase where the training error $\epsilon_{tr}$ reaches $0$ (or
nearly $0$) while $L$ still decreases but much more slowly. These two learning
phases exist independent of hyperparameters (e.g., $\alpha$ and $B$) and
network architectures (all connected network or $CNN$) used for different
datasets (e.g., MNIST and CIFAR). The weights reached in the exploration phase
can be considered as solutions of the problem given that the training error
vanishes.
Figure 1: Two phases of learning without labeling noise. (A) Training loss
$L$, training error $\epsilon_{tr}$, and order parameters $A$, $R$, and
$\sigma^{2}_{\lambda}$ versus (training) time. The fast learning phase
corresponds to a directed (finite $R>0$, $\sigma^{2}_{\lambda}\sim 1$) and
fast (large $A$) motion in weight space; the exploration phase corresponds to
a diffusive ($R\approx 0$, $\sigma^{2}_{\lambda}\gg 1$) and slow (small $A$)
motion in weight space. The dotted line shows $R=0$. The green bar highlights
the transition region. MNIST data and a fully connected network with 2 hidden
layers ($30\times 30$) are used here. (B) Illustration of the normalized
minibatch gradient ensemble (blue dotted arrows) and their means (black solid
arrows) in the two learning phases.
Dynamics of the order parameters $A(t)$, $R(t)$, and $\sigma^{2}_{\lambda}$
along the trajectory can be used to characterize and understand the two
phases. As shown Fig. 1(A), in the beginning of the learning process, the
learning activity $A$ is relatively large and the alignment parameter $R$ is
finite. In this initial phase of learning, the minibatch gradients have a high
degree of alignment, resulting in a strongly directed motion of the weight
particle and a fast decrease of $L$ towards a solution region in the weight
space with low $L$ and zero training error $\epsilon_{tr}$. In the exploration
phase, the average learning activity $A$ becomes much smaller while the
average alignment parameter $R$ becomes close to zero. This means that the
motion of the weight particle becomes mostly diffusive (weakly directed) and
the decrease of $L$ slows. This diffusive motion of weights allows the system
to explore the solution space. The transition from a directed motion to a
diffusive motion is also reflected in the large increase of the variance
$\sigma^{2}_{\lambda}$ at the transition. Due to the finite size of the
system, the transition is not infinitely sharp, unlike phase transitions in physical
systems in the thermodynamic limit (infinite system limit). As shown in Fig. 1(A),
the training error $\epsilon_{tr}$ becomes zero during the transition regime
and it stays zero in the exploration phase. These results confirm our previous
study that used the time-averaged order parameters [12]. Key differences
between the two phases in terms of alignment of minibatch gradients and mean
gradient strength are illustrated in Fig. 1(B).
Figure 2: The noise spectra, i.e., rank ordered eigenvalues
$\\{e_{l},\;l=1,2,\ldots,N_{p}\\}$ in the fast learning phase (black) and the
exploration phase (red) . The inset shows the normalized accumulated variance
$D^{-1}\sum_{i=1}^{l}e_{i}$. The two spectra are similar except for their
total variance $D$. The effective dimension $D_{s}\sim 110$, which is much
smaller than the number of parameters ($N_{p}=900$), is roughly the same in
both phases. Data and network used here are the same as in Fig. 1.
We have also studied the noise spectra in the two phases. As shown in Fig. 2,
unlike isotropic thermal noise, the SGD noise has a highly anisotropic
structure with most of its variance (strength) concentrated in a relatively
small number of directions. The normalized noise spectra are similar in both
phases and the total noise strength (variance) $D$ is much higher in the fast
learning phase. The effective dimension defined as the number of directions
that contain $90\%$ of the total variance is $D_{s}\sim 110$, which is much
smaller than the number of weights (parameters) and remains roughly constant as
the number of parameters increases.
## 4 Phases of SGD learning dynamics in the presence of mislabeled data
There has been much interest in deep learning in the presence of mislabeled
data. This is triggered by a recent study [13] in which the authors showed
that random labels can be easily fitted by deep networks in the over-
parameterized regime and such overfitting destroys generalization. Here, we
report some new results by using the dynamical systems approach developed in
previous sections to study SGD learning dynamics with labeling noise.
In a dataset with $N_{c}$ correctly labeled training samples and $N_{w}$
incorrectly (randomly) labeled samples, the overall loss function $L$ consists
of two parts, $L_{c}$ and $L_{w}$, from the correctly-labeled samples and the
randomly labeled samples, respectively:
$L=(1-\rho)L_{c}+\rho
L_{w}=N^{-1}\Big{[}\sum_{k=1}^{N_{c}}l_{k}+\sum_{k=1}^{N_{w}}\tilde{l}_{k}\Big{]},$
(10)
where $N=N_{c}+N_{w}$ is the total number of training samples and
$\rho=N_{w}/N$ is the fraction of mislabeled samples. The loss function for a
correctly labeled sample is the cross entropy $l$ between the output
$Y_{k}(X_{k},w)$ of the network with weight vector $w$ and the correct label
vector $Z_{k}$: $l_{k}=l(Y_{k},Z_{k})$; whereas the loss function for a
mislabeled sample is: $\tilde{l}_{k}=l(Y_{k},Z^{r}_{k})$ where $Z^{r}_{k}$ is
a random label vector.
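A minimal sketch of the loss decomposition in Eq. (10), assuming per-sample losses and a boolean mask marking the mislabeled samples are available:

```python
import numpy as np

def split_loss(losses, mislabeled):
    """Decompose the overall loss (Eq. 10) into the contributions of the
    correctly labeled and mislabeled samples."""
    losses = np.asarray(losses)
    mislabeled = np.asarray(mislabeled, dtype=bool)
    rho = np.mean(mislabeled)              # fraction of mislabeled samples
    L_c = losses[~mislabeled].mean()       # mean loss, correctly labeled data
    L_w = losses[mislabeled].mean()        # mean loss, mislabeled data
    L = (1 - rho) * L_c + rho * L_w        # overall loss (Eq. 10)
    return L, L_c, L_w, rho
```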
We experimented on MNIST and CIFAR10 with different fractions of
mislabeled data ($\rho$). As shown in Fig. 3(A) for MNIST, the whole learning
process can be divided into 4 phases (study of the CIFAR10 dataset shows
similar results):
* •
Phase I: During this initial fast learning phase ($0-10$ epoch in Fig. 3(A)),
the test error $\epsilon_{te}$ decreases quickly as the system learns the
correctly labeled data. The error $\epsilon_{c}$ from the correctly labeled
training data follows the exact same trend as $\epsilon_{te}$ and the error
$\epsilon_{w}$ from the mislabeled training data actually increases slightly,
which indicates that learning in phase I is dominated by the correctly labeled
training data.
* •
Phase II: After the initial fast learning phase, the test error
$\epsilon_{te}$ stays roughly the same during phase II ($10-70$ epoch in Fig.
3(A)). Both $\epsilon_{w}$ and $\epsilon_{c}$ remain flat, which indicates
that learning activities for the correct and incorrect samples are balanced
during phase II. This can also be seen in the plateau in the total training
error $\epsilon_{tr}=\epsilon_{c}+\epsilon_{w}$.
* •
Phase III: At the end of phase II ($\sim 70$ epoch), the test error
$\epsilon_{te}$ starts to increase quickly while the training errors for both
the correct and the incorrect training data ($\epsilon_{c}$, $\epsilon_{w}$)
decrease to zero during phase III ($70-200$ epoch). During phase III, the
system finally manages to find (learn) a solution that satisfies both the
correct and incorrect training data.
* •
Phase IV: Phase IV corresponds to the slow exploration phase after the system
reaches the solution space for the whole dataset. The test error reaches a
high plateau in phase IV.
The four distinct phases in the presence of labeling noise and the
corresponding “U”-shaped behavior in test error are general for a wide range
of noise levels ($\rho$), see Fig. 3(B). Quantitatively, dynamics of the test
error $\epsilon_{te}(t)$ during these four phases can be characterized by two
timescales: $t_{m}$ – the time when the test error reaches its minimum and
$t_{f}$ – the time when the training loss function reaches its minimum, and
the two corresponding test errors: $\epsilon_{m}$ and $\epsilon_{f}$. All four
parameters depend on $\rho$. As shown in Fig. 3(C), $t_{m}$ is almost
independent of $\rho$, which means that learning the correctly labeled data is
independent of data size as long as the data size is large enough. However,
$t_{f}$ increases with $\rho$, which means that the network needs more time to
memorize the incorrectly labeled data as the number of mislabeled samples
increases. As shown in Fig. 3(D), the final test error $\epsilon_{f}$
increases with $\rho$ almost linearly, which is caused by the increased
fraction of mislabeled data. The minimum error $\epsilon_{m}$ remains roughly
the same when $\rho$ is small, but increases sharply after a threshold and
approaches $\epsilon_{f}$ when $\rho>0.85$. This also makes sense because when
$\rho$ is large, learning is dominated by mislabeled data and the correctly
labeled data no longer drives the learning dynamics.
Figure 3: Learning dynamics in the presence of labeling noise. (A) The
training error $\epsilon_{tr}$, the test error $\epsilon_{te}$, the training
error for correctly labeled data $\epsilon_{c}$, the training error for
mislabeled data $\epsilon_{w}$ are shown for a subset of MNIST data with 400
samples per digit and a fully connected network with two hidden layers (50
hidden units per layer). SGD hyper-parameters: $B=25$, $\alpha=0.01$. (B)
$\epsilon_{te}$ dynamics for different values of $\rho$. (C) The dependence of
the time scales ($t_{m}$ and $t_{f}$) on $\rho$. (D) The dependence of the
minimum and final test errors ($\epsilon_{m}$ and $\epsilon_{f}$) on $\rho$.
Here, we try to understand the different phases and the transitions between
them by using order parameters that are modified for the case with labeling
noise. In particular, each minibatch $\mu$ now consists of two smaller
minibatches $\mu_{c}$ and $\mu_{w}$ for the correctly and incorrectly labeled
data ($\mu=\mu_{c}+\mu_{w}$) with the average size $B_{c}=(1-\rho)B$ and
$B_{w}=\rho B$ respectively. The minibatch loss function can be decomposed into
two minibatch loss functions $L^{\mu_{c}}$ and $L^{\mu_{w}}$ defined for
$\mu_{c}$ and $\mu_{w}$ separately: $L^{\mu}=L^{\mu_{c}}+L^{\mu_{w}}$. At a
given point in weight space, the ensemble averaged gradient and activity for
the correctly and incorrectly labeled data can be defined separately:
$\displaystyle g_{c}$ $\displaystyle\equiv$
$\displaystyle\langle\frac{\partial L^{\mu_{c}}}{\partial
w}\rangle_{\mu_{c}}=\frac{\partial L_{c}}{\partial
w}\;\;,\;\;A_{c}\equiv\langle\|\frac{\partial L^{\mu_{c}}}{\partial
w}\|^{2}\rangle_{\mu_{c}},$ (11) $\displaystyle g_{w}$ $\displaystyle\equiv$
$\displaystyle\langle\frac{\partial L^{\mu_{w}}}{\partial
w}\rangle_{\mu_{w}}=\frac{\partial L_{w}}{\partial
w}\;\;,\;\;A_{w}\equiv\langle\|\frac{\partial L^{\mu_{w}}}{\partial
w}\|^{2}\rangle_{\mu_{w}}.$ (12)
The alignment of the two gradients $g_{c}$ and $g_{w}$ can be characterized by
the cosine of their relative angle:
$R_{cw}\equiv\frac{g_{c}\cdot g_{w}}{\|g_{c}\|\|g_{w}\|},$ (13)
from which we obtain the ensemble averaged gradient and activity for the whole
dataset:
$\displaystyle g$ $\displaystyle\equiv$ $\displaystyle\langle\frac{\partial
L^{\mu}}{\partial w}\rangle_{\mu}=(1-\rho)g_{c}+\rho g_{w},$ (14)
$\displaystyle A$ $\displaystyle\equiv$ $\displaystyle\langle\|\frac{\partial
L^{\mu}}{\partial
w}\|^{2}\rangle_{\mu}=(1-\rho)^{2}A_{c}+\rho^{2}A_{w}+2\rho(1-\rho)\|g_{c}\|\|g_{w}\|R_{cw}.$
(15)
From these basic ordered parameters defined above, we can define the directed
activity $A_{0,c}\equiv(1-\rho)^{2}\|g_{c}\|^{2}$,
$A_{0,w}\equiv\rho^{2}\|g_{w}\|^{2}$, and
$A_{0}\equiv\|g\|^{2}=A_{0,c}+A_{0,w}+2[A_{0,w}A_{0,c}]^{\frac{1}{2}}R_{cw}$;
and the alignments between $g$ and $g_{c}$, and between $g$ and $g_{w}$ are:
$R_{aw}\equiv\frac{g\cdot g_{w}}{\|g\|\|g_{w}\|}$, $R_{ac}\equiv\frac{g\cdot
g_{c}}{\|g\|\|g_{c}\|}$. We can also define alignment order parameters among
members within the different gradient ensembles ($\\{\mu_{c}\\}$,
$\\{\mu_{w}\\}$, and $\\{\mu\\}$).
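A minimal sketch computing $R_{cw}$ of Eq. (13) and the directed activities; the inputs $g_{c}$, $g_{w}$ (as arrays) and $\rho$ are assumed given:

```python
import numpy as np

def noisy_label_order_parameters(g_c, g_w, rho):
    """Alignment R_cw (Eq. 13) and directed activities A_{0,c}, A_{0,w},
    and A_0 for a dataset with a fraction rho of mislabeled samples."""
    R_cw = np.dot(g_c, g_w) / (np.linalg.norm(g_c) * np.linalg.norm(g_w))
    A0c = (1 - rho)**2 * np.dot(g_c, g_c)   # directed activity, correct data
    A0w = rho**2 * np.dot(g_w, g_w)         # directed activity, mislabeled data
    A0 = A0c + A0w + 2.0 * np.sqrt(A0c * A0w) * R_cw  # ||g||^2, g of Eq. 14
    return R_cw, A0c, A0w, A0
```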
We studied three groups of order parameters: the total activities ($A$,
$A_{c}$, $A_{w}$); the directed activities ($A_{0}$, $A_{0,c}$, $A_{0,w}$) and
their alignments ($R_{cw}$, $R_{aw}$, $R_{ac}$) to understand the learning
dynamics in the presence of labeling noise. As shown in Fig. 4(A)&(B), all
learning activity order parameters ($A$’s and $A_{0}$’s) show a consistent
trend of increasing during phases I, II, and III before decreasing during phase
IV. This is in contrast to the behavior of the learning activity $A$ in the
absence of labeling noise, which shows a relatively flat or slightly
decreasing trend during the fast learning phase (see Fig. 1). This
continuously elevated learning activity in phases I-III suggests an increasing
frustration between the two separate learning tasks (for learning the
correctly and the incorrectly labeled datasets) before a consistent solution
can be found in phase IV.
The difference among learning phases I, II, and III can be understood by
studying the relation between the two mean gradients $g_{w}$ and $g_{c}$
characterized by the alignment order parameter $R_{cw}$ (see Fig. 4(C)) and
the relative strength of the two directed activities $A_{0,c}$ and $A_{0,w}$.
* •
Phase I: $A_{0,c}\gg A_{0,w}$, $R_{cw}<0$. In phase I, the directed activity
from the correctly labeled data is much larger than that from the incorrectly
labeled data (see inset in Fig. 4(B)). This is due to the fact that samples
from the correctly labeled dataset are consistent with each other in terms of
their labels, which leads to a much larger mean gradient towards learning a
solution for the correctly labeled data. In phase I, $g_{c}$ and $g_{w}$ are
not aligned ($R_{cw}<0$). Since $A_{0,c}\gg A_{0,w}$, we have
$R_{aw}<0$, which means that there is an increase of $L_{w}$ during phase I as
observed in Fig. 3(A).
* •
Phase II: $A_{0,w}\approx A_{0,c}$, $R_{cw}<0$. As the system approaches a
solution for the correctly labeled data during the late stage of phase I, the
directed learning activity from the mislabeled data ($A_{0,w}$) increases
sharply and $A_{0,w}$ becomes comparable to $A_{0,c}$ in phase II (see the inset
in Fig. 4(B)). In addition, the two mean gradients ($g_{c}$
and $g_{w}$) are opposite to each other with $R_{cw}\approx-1$. As a result of
the balanced gradients between the two datasets, the overall directed activity
is small $A_{0}\ll A_{0,c(w)}$ and the loss functions ($L_{c}$, $L_{w}$, and
$L$) remain relatively flat during phase II (see Fig. 3(A)).
* •
Phase III: $A_{0,w}\approx A_{0,c}$, $R_{cw}>0$. The system enters phase
III when it finally finds a direction to decrease both loss functions ($L_{w}$
and $L_{c}$) as evidenced by the alignment of $g_{c}$ and $g_{w}$, which only
happens during phase III. This alignment ($R_{cw}>0$) means that the system
can finally learn a solution for all the training data.
* •
Phase IV: $A_{0,w}\approx A_{0,c}$, $R_{cw}<0$. Once the system finds a
solution for all data, learning slows down to explore other solutions nearby.
Phase IV is similar to the exploration phase without mislabeled data, where
the learning activity is much lower than in phases I-III.
Key differences of the four phases in terms of the strength and relative
direction of the two mean gradients ($g_{c}$ and $g_{w}$) are illustrated in
Fig. 4(D).
Figure 4: Dynamics of the order parameters during phases of learning with
mislabeled data. (A) The total activities ($A$, $A_{w}$, $A_{c}$). (B)
Directed activities ($A_{0}$, $A_{0,w}$, $A_{0,c}$), the inset shows the ratio
$A_{0,c}/A_{0,w}$. (C) Alignment parameters ($R_{cw}$, $R_{ac},R_{aw}$). The
dotted line shows $R=0$. (D) Illustration of the four different phases in
terms of the relative strength and direction of the two mean gradients
($g_{c}$ and $g_{w}$).
We have also analyzed the noise spectra in different learning phases in the
presence of labeling noise. As shown in Fig. 5, the normalized spectra remain
roughly the same in different learning phases and the effective dimensions are
$D_{I,II,III,IV}\approx 43,58,140,95$, which are much smaller than the number
of parameters. We note that both the noise spectra and the effective noise
dimensions are similar to those without labeling noise (Fig. 2).
Figure 5: The noise spectra, i.e., rank ordered eigenvalues
$\\{e_{l},\;l=1,2,\ldots,N_{p}\\}$ in different phases of learning with labeling
noise (same setting as in Fig. 4). The inset shows the normalized accumulated
variance $D^{-1}\sum_{i=1}^{l}e_{i}$. The spectra are similar except for their
total variance $D$. In different phases, the effective dimension $D_{s}$
varies in a range $(50-150)$, which is much smaller than the number of
parameters ($N_{p}=2500$).
## 5 Identifying and cleaning the mislabeled samples in phase II
Our study so far has used various ensemble averaged properties to demonstrate
the different phases of learning dynamics. We now investigate the distribution
of losses for individual samples and how the individual loss distribution
evolves with time. In Fig. 6(A), we show the probability distribution
functions (pdf’s), $P_{c}(l,t)$ and $P_{w}(l,t)$, for the individual losses
of the correctly labeled and incorrectly labeled samples at different times
during training. Starting with an identical distribution at time $0$, the two
distributions quickly separate during phase I as $P_{c}(l,t)$ moves to smaller
losses while $P_{w}(l,t)$ moves slightly to higher losses. The separation
between the two distributions increases during phase I and reaches its maximum
during phase II. After the system enters phase III, the gap between the two
distributions closes quickly as the system learns the mislabeled data and
$P_{w}(l,t)$ catches up with $P_{c}(l,t)$ at small losses. In phase IV, these
two distributions become indistinguishable again as they both become highly
concentrated at near-zero losses.
As a result of the different dynamics of the two distributions, the overall
individual loss distribution $P(l)=(1-\rho)P_{c}(l)+\rho P_{w}(l)$ exhibits a
bimodal behavior, which is most pronounced during phase II. In fact, we can
fit the overall distribution by a Gaussian mixture model:
$l\sim(1-r)\mathcal{N}(m_{c},s_{c}^{2})+r\mathcal{N}(m_{w},s_{w}^{2})$ with
fitting parameters: fraction $r$, means $m_{c,w}$, and variances
$s^{2}_{c,w}$. As shown in Fig. 6(B), the Gaussian mixture model fits $P(l)$
well, and furthermore, the fitted means $m_{c}$ and $m_{w}$ agree with the
mean losses ($L_{c}$, and $L_{w}$) obtained from the experiments.
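A minimal sketch of this fitting step using a standard two-component Gaussian mixture (here via scikit-learn, an implementation choice not specified in the text); the component with the smaller mean is identified with the correctly labeled data:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def fit_loss_mixture(losses):
    """Fit the bimodal individual-loss distribution P(l) with a
    two-component Gaussian mixture, as in Fig. 6(B)."""
    X = np.asarray(losses).reshape(-1, 1)
    gm = GaussianMixture(n_components=2, random_state=0).fit(X)
    means = gm.means_.ravel()
    order = np.argsort(means)               # smaller mean <-> correct labels
    m_c, m_w = means[order]
    s2_c, s2_w = gm.covariances_.ravel()[order]
    r = gm.weights_[order][1]               # fitted fraction of mislabeled data
    return m_c, m_w, s2_c, s2_w, r
```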
Figure 6: The individual loss distribution and the cleaning method. (A) The
loss distributions of correctly labeled samples (red) and mislabeled samples
(blue) in different learning phases. (B) The bimodal distribution in phase II
can be fitted by a Gaussian mixture model (red line), which is used to
determine a threshold $l_{c}$ for cleaning. (C) The mean losses (symbols)
predicted from the Gaussian mixture model agree with their true values from
experiments (lines). A cleaning time $t_{c}$ can be determined when $\Delta
L(\equiv m_{w}-m_{c})$ reaches its maximum. (D) The test accuracy without
cleaning ($a_{n}$), with cleaning ($a_{c}$), and with only the correctly
labeled training data ($a_{p}$) versus training time. The labeling noise level
$\rho=50\%$ for (A)-(D). (E) $a_{n}$, $a_{c}$, and $a_{p}$ versus $\rho$. The
slight decrease in $a_{p}$ as $\rho$ increases is due to the decreasing size
of the correctly labeled dataset. MNIST dataset and network used here are the
same as those in Fig. 3.
The separation of individual loss distribution functions has recently been
used to devise sophisticated methods to improve generalization such as those
reported in [14, 15]. Here, we demonstrate the basic idea by presenting a
simple method to identify and clean the mislabeled samples based on the
understanding of different learning phases. In particular, according to our
analysis, such a cleaning process is best done during phase II. For
simplicity, we set the cleaning time $t_{c}$ to the time when the difference $\Delta
L(\equiv m_{w}-m_{c})$ reaches its maximum. At $t=t_{c}$, we can set a
threshold $l_{c}$, which best separates the two distributions. For example, we
can set $l_{c}$ as the loss when the two pdf’s are equal or simply as the
average of $m_{c}$ and $m_{w}$ (we do not observe significant differences
between the two choices). We can then discard all samples whose loss is
larger than $l_{c}$ and continue training with the cleaned dataset.
Alternatively, we can stop the training altogether at $t=t_{c}$, i.e., early
stopping. We do not observe significant differences between these two choices
in our experiments. In Fig. 6(D), the test accuracy $a_{n}$ without cleaning,
$a_{c}$ with cleaning, and $a_{p}$ with only the correctly labeled data are
shown for MNIST data with $\rho=50\%$ labeling noise. Performance of the
cleaning algorithm can be measured by $Q=\frac{a_{c}-a_{n}}{a_{p}-a_{n}}$,
which depends on the noise level $\rho$. As shown in Fig. 6(E), the cleaning
method can achieve significant improvement in generalization ($Q>50\%$) for
noise levels up to $\rho=80\%$.
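A minimal sketch of the cleaning step described above, using the average of the two fitted means as the threshold $l_{c}$ (one of the two choices mentioned in the text):

```python
import numpy as np

def clean_dataset(losses, m_c, m_w):
    """Cleaning at t = t_c: keep only the samples whose individual loss
    falls below the threshold l_c = (m_c + m_w) / 2."""
    l_c = 0.5 * (m_c + m_w)              # threshold between the two modes
    keep = np.asarray(losses) < l_c      # samples presumed correctly labeled
    return keep, l_c
```

Training then either continues on the samples flagged by `keep` or stops altogether at $t_{c}$ (early stopping); as noted above, the two choices behave similarly.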
## 6 Summary
Deep learning neural networks have demonstrated tremendous capability in
learning and problem solving in diverse domains. Yet, the mechanism underlying
this seemingly magical learning ability is not well understood. For example,
modern DNNs often contain more parameters than training samples, which allows
them to interpolate (memorize) all the training samples, even if their labels
are replaced by pure noise [16, 17]. Remarkably, despite their huge capacity,
DNNs can achieve small generalization error on real data (this phenomenon has
been formalized in the so called “double descent” curve [18, 19, 20, 21, 22,
23]). The learning system/model seems to be able to self-tune its complexity
in accordance with the data to find the simplest possible solution in the
highly over-parameterized weight space. However, how the system adjusts
its complexity dynamically, and how SGD seeks out simple and more
generalizable solutions for realistic learning tasks, remain not well
understood.
In this paper, we demonstrate that the approach based on statistical physics
and stochastic dynamical systems provides a useful theoretical framework
(alternative to the traditional theorem proving approach) for studying SGD-
based machine learning by applying it to identify and characterize the
different phases in SGD-based learning with and without labeling noise. In an
earlier work [12], we have used this approach to study the relation between
SGD dynamics and the loss function landscape and discovered an inverse
relation between weight variance and the loss landscape flatness that is
opposite to the fluctuation-dissipation relation (the Einstein relation) in
equilibrium systems. We believe this framework may pave the way for a deeper
understanding of deep learning by bringing powerful ideas (e.g., phase
transitions in critical phenomena) and tools (e.g., renormalization group
theory and replica method) from statistical physics to bear on understanding
ANN. It would be interesting to use this general framework to address other
fundamental questions in machine learning such as generalization [24, 25, 26]
in particular the mechanism for the double descent behavior in learning as
described above; the relation between task complexity and network
architecture; information flow in DNN [27, 28]; as well as building a solid
theoretical foundation for important applications such as transfer learning
[29], curriculum learning [30], and continuous learning [31, 32, 33].
## References
* [1] LeCun, Y., Bengio, Y. & Hinton, G. Deep learning. _Nature_ 521, 436 EP – (2015). URL https://doi.org/10.1038/nature14539.
* [2] Goodfellow, I., Bengio, Y., Courville, A. & Bengio, Y. _Deep learning_ , vol. 1 (MIT Press, 2016).
* [3] He, K., Zhang, X., Ren, S. & Sun, J. Deep residual learning for image recognition. In _Proceedings of the IEEE conference on computer vision and pattern recognition_ , 770–778 (2016).
* [4] Wu, Y. _et al._ Google’s neural machine translation system: Bridging the gap between human and machine translation. _arXiv preprint arXiv:1609.08144_ (2016).
* [5] Silver, D. _et al._ Mastering the game of go with deep neural networks and tree search. _Nature_ 529, 484–489 (2016). URL https://doi.org/10.1038/nature16961.
* [6] Callaway, E. ‘it will change everything’: Deepmind’s ai makes gigantic leap in solving protein structures. _Nature_ 588, 203–204 (2020). URL https://doi.org/10.1038/d41586-020-03348-4.
* [7] Robbins, H. & Monro, S. A stochastic approximation method. _The Annals of Mathematical Statistics_ 22, 400–407 (1951). URL http://dx.doi.org/10.1214/aoms/1177729586.
* [8] Bottou, L. Large-scale machine learning with stochastic gradient descent. In Lechevallier, Y. & Saporta, G. (eds.) _Proceedings of COMPSTAT’2010_ , 177–186 (Physica-Verlag HD, Heidelberg, 2010).
* [9] Kampen, N. G. V. _Stochastic Processes in Physics and Chemistry_ (Elsevier, 2010).
* [10] Chaudhari, P. & Soatto, S. Stochastic gradient descent performs variational inference, converges to limit cycles for deep networks. _2018 Information Theory and Applications Workshop (ITA)_ (2018). URL http://dx.doi.org/10.1109/ita.2018.8503224.
* [11] Forster, D. _Hydrodynamic fluctuations, broken symmetry, and correlation functions_ (CRC Press, 2018).
* [12] Feng, Y. & Tu, Y. How neural networks find generalizable solutions: Self-tuned annealing in deep learning. _arXiv preprint arXiv:2001.01678_ (2020).
* [13] Zhang, Y., Saxe, A. M., Advani, M. S. & Lee, A. A. Energy–entropy competition and the effectiveness of stochastic gradient descent in machine learning. _Molecular Physics_ 116, 3214–3223 (2018). URL http://dx.doi.org/10.1080/00268976.2018.1483535.
* [14] Arazo, E., Ortego, D., Albert, P., O’Connor, N. E. & McGuinness, K. Unsupervised label noise modeling and loss correction. _arXiv preprint arXiv:1904.11238_ (2019).
* [15] Li, M., Soltanolkotabi, M. & Oymak, S. Gradient descent with early stopping is provably robust to label noise for overparameterized neural networks. _ArXiv_ abs/1903.11680 (2020).
* [16] Zhang, C., Bengio, S., Hardt, M., Recht, B. & Vinyals, O. Understanding deep learning requires rethinking generalization (2016). 1611.03530.
* [17] Arpit, D. _et al._ A closer look at memorization in deep networks (2017). 1706.05394.
* [18] Belkin, M., Hsu, D., Ma, S. & Mandal, S. Reconciling modern machine-learning practice and the classical bias–variance trade-off. _Proceedings of the National Academy of Sciences_ 116, 15849–15854 (2019). URL https://www.pnas.org/content/116/32/15849.
* [19] Brutzkus, A., Globerson, A., Malach, E. & Shalev-Shwartz, S. Sgd learns over-parameterized networks that provably generalize on linearly separable data (2017). 1710.10174.
* [20] Li, Y. & Liang, Y. Learning overparameterized neural networks via stochastic gradient descent on structured data. _Advances in Neural Information Processing Systems_ 31, 8157–8166 (2018).
* [21] Mei, S. & Montanari, A. The generalization error of random features regression: Precise asymptotics and double descent curve. _arXiv preprint arXiv:1908.05355_ (2019).
* [22] Geiger, M. _et al._ Scaling description of generalization with number of parameters in deep learning. _Journal of Statistical Mechanics: Theory and Experiment_ 2020, 023401 (2020).
* [23] Gerace, F., Loureiro, B., Krzakala, F., Mézard, M. & Zdeborová, L. Generalisation error in learning with random features and the hidden manifold model. _arXiv preprint arXiv:2002.09339_ (2020).
* [24] Neyshabur, B., Bhojanapalli, S., McAllester, D. & Srebro, N. Exploring generalization in deep learning. In _NIPS_ (2017).
* [25] Advani, M. S. & Saxe, A. M. High-dimensional dynamics of generalization error in neural networks (2017). 1710.03667.
* [26] Jiang, Y., Neyshabur, B., Mobahi, H., Krishnan, D. & Bengio, S. Fantastic generalization measures and where to find them. _arXiv preprint arXiv:1912.02178_ (2019).
* [27] Shwartz-Ziv, R. & Tishby, N. Opening the black box of deep neural networks via information. _arXiv preprint arXiv:1703.00810_ (2017).
* [28] Tishby, N. & Zaslavsky, N. Deep learning and the information bottleneck principle. _2015 IEEE Information Theory Workshop (ITW)_ (2015). URL http://dx.doi.org/10.1109/ITW.2015.7133169.
* [29] Yosinski, J., Clune, J., Bengio, Y. & Lipson, H. How transferable are features in deep neural networks? (2014). 1411.1792.
* [30] Bengio, Y., Louradour, J., Collobert, R. & Weston, J. Curriculum learning. In _Proceedings of the 26th annual international conference on machine learning_ , 41–48 (2009).
* [31] Ring, M. B. _Continual learning in reinforcement environments_. Ph.D. thesis, University of Texas at Austin, Austin, Texas (1994).
* [32] Lopez-Paz, D. & Ranzato, M. Gradient episodic memory for continuum learning. _NIPS_ (2017).
* [33] Riemer, M. _et al._ Learning to learn without forgetting by maximizing transfer and minimizing interference (2018). 1810.11910.
Magnetized Taub-NUT spacetime
Haryanto M. Siahaan<EMAIL_ADDRESS>
Center for Theoretical Physics,
Department of Physics, Parahyangan Catholic University,
Jalan Ciumbuleuit 94, Bandung 40141, Indonesia
We find an exact solution describing the spacetime outside a massive object
with NUT parameter embedded in an external magnetic field. To get the
solution, we employ the Ernst magnetization transformation to the Taub-NUT
metric as the seed solution. The massless limit of this new solution is the
Melvin-Taub-NUT spacetime. Some aspects of the magnetized Taub-NUT spacetime
are investigated, such as the surface geometry, the existence of closed
timelike curves, and the electromagnetic properties.
## 1 Introduction
Exact solutions in the Einstein-Maxwell theory are always fascinating objects
to study [1, 2, 3, 4], from the mathematical aspects of the solutions to
possible astrophysics-related phenomena. In Einstein-Maxwell theory the
spacetime can contain electromagnetic fields, and such a system is sometimes
known as an electrovacuum. The most general asymptotically flat spacetime
solution in Einstein-Maxwell theory that contains a black hole is the
Kerr-Newman solution, which describes a black hole with both rotation and
electric charge. Although it is very unlikely for a collapsing object to
maintain a significant amount of electric charge, this type of solution has
been discussed extensively in the literature [1, 3, 5].
Another interesting solution of a black hole spacetime containing
electromagnetic fields is the magnetized solution proposed by Wald [6].
Wald’s solution describes a black hole immersed in a uniform magnetic field,
where the Maxwell field is generated by the associated Killing symmetries of
the spacetime. However, in Wald’s prescription the Maxwell field is treated
as a perturbation on the spacetime. The nonperturbative treatment of a black
hole immersed in a homogeneous magnetic field was introduced by Ernst [7].
Ernst’s solution for a magnetized black hole can be viewed as embedding a
black hole in the Melvin magnetic universe, where the spacetime outside the
black hole is filled with a homogeneous magnetic field. Ernst’s method
applies a Harrison-type transformation [8] to a known seed solution of
Einstein-Maxwell theory. Nevertheless, the resulting magnetized spacetimes
are no longer asymptotically flat, even when the seed metric is. Various
aspects of black holes immersed in external magnetic fields have been studied
extensively in the literature [9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20,
21, 22].
Despite the loss of asymptotic flatness for a magnetized spacetime containing
a black hole, this solution has been considered to have astrophysical uses,
especially in modeling the spacetime near a rotating supermassive black hole
surrounded by hot moving plasma [9]. Indeed, a full comprehension of the
interaction between a black hole and its surrounding magnetic field due to the
accretion disc requires a sophisticated general relativistic treatment, at
least by employing a costly numerical approach. If the full general
relativistic or comprehensive numerical treatment is not necessary, in the
sense that we only seek approximate qualitative explanations, the picture of a
magnetized black hole by Wald, or even the non-perturbative model by Ernst,
can serve as an alternative. For example, these models can explain the charge
induction by a black hole immersed in a magnetic field, or the Meissner-like
effect that may occur for this type of black hole [23]. In particular, the
superradiant instability of a magnetized rotating black hole is studied in
[24]. Other aspects have been investigated as well, such as the Kerr/CFT
correspondence for some magnetized black holes in [25, 26, 27], and some
conserved quantity calculations proposed in [28].
In the vacuum Einstein system, the Taub-NUT solution generalizes the
Schwarzschild solution to include the so-called NUT (Newman-Unti-Tamburino)
parameter $l$. This parameter is interpreted as a “gravitomagnetic” mass, in
analogy with the magnetic monopole in electromagnetic theory. However, the
presence of the NUT parameter yields several new features compared to the
case of vanishing NUT parameter [5]. First is the loss of asymptotic
flatness, whereas the vanishing-NUT counterpart is asymptotically flat.
Indeed, this asymptotic behavior demands some delicate approaches in defining
the conserved charges of the spacetime. Second is the absence of a physical
singularity at the origin. This is interesting since it leads to the question
of defining a black hole in such a spacetime, which we normally understand as
a true singularity covered by a horizon. Instead of a true singularity at the
origin, a spacetime with NUT parameter possesses a conic singularity on its
symmetry axis, which gives rise to a problem in describing the black hole
horizon. Despite these issues regarding the black hole picture in a spacetime
with NUT parameter, discussions of the Kerr-Newman-Taub-NUT black hole family
are still among the active areas in gravitational studies [29, 30, 31, 32,
33, 34, 35, 36, 37, 38, 39, 40, 41, 42]. In particular, in the discussion of
circular motion in a spacetime with NUT parameter [43], the authors found
that the existence of the NUT parameter leads to a new nontrivial constraint
on the equatorial circular motion of a test body. This problem also occurs in
gravitational theories beyond Einstein-Maxwell, for example in low energy
string theory [44] and the braneworld scenario [45]. In a recent work [46],
the authors show that the Misner string contribution to the entropy of
Taub-NUT-AdS can be renormalized by introducing the Gauss-Bonnet term, and in
[47] the authors show how to embed the Taub-NUT solutions in general
scalar-tensor theories.
Magnetized versions of several well-known solutions in Einstein-Maxwell
theory exist, and their aspects have been studied extensively [5, 9]. In this
work, we introduce a new solution, namely the magnetized Taub-NUT spacetime.
The idea is straightforward, i.e. applying the magnetization transformation to
the Taub-NUT metric as the seed solution. One key aspect is the compatibility
of the Taub-NUT metric with the Lewis-Papapetrou-Weyl (LPW) line element. The
obtained solution describes an object with mass and NUT parameter embedded in
an external magnetic field. A similar idea, where a weak external magnetic
field exists outside an object with NUT parameter, was pursued in [48].
The properties of the event horizon under the influence of an external
magnetic field are also an interesting aspect to investigate [49]. It is
known that, for the magnetized Schwarzschild solution, the scalar curvature
of the horizon varies depending on the strength of the external magnetic
field. It can take positive, zero, or negative values, and these values are
associated with different physics. Recall that a “normal” horizon, such as
that of the Schwarzschild black hole, has positive curvature, as understood
from its spherical form. However, although the shape of the horizon changes
due to the presence of an external magnetic field, the total area of the
horizon does not vary.
The organization of this paper is as follows. In the next section, we review
the Ernst magnetization procedure using a complex differential operator. In
section 3, after employing the magnetization
procedure to the Taub-NUT spacetime, we obtain the magnetized Taub-NUT
solution. The surface geometry and closed timelike curve in this new spacetime
are discussed in section 4. In section 5, some of the electromagnetic
properties in the spacetime are studied. Finally, we give some conclusions and
discussions. We consider the natural units $c={\hbar}=k_{B}=G_{4}=1$.
## 2 Magnetization of a spacetime
Magnetization of a spacetime can be done by applying the Ernst magnetization
to a metric whose line element can be written in the Lewis-Papapetrou-Weyl
(LPW) form,
$ds^{2}=-f^{-1}\left({\rho^{2}dt^{2}-e^{2\gamma}d\zeta
d\zeta^{*}}\right)+f\left({d\phi-\omega dt}\right)^{2}\,.$ (2.1)
Here, $f$, $\gamma$, and $\omega$ are functions of the complex coordinate
$\zeta$. We use the $-+++$ sign convention for the spacetime, and the $*$
notation denotes complex conjugation. In Einstein-Maxwell theory,
the metric (2.1) together with a vector solution ${\bf
A}=A_{t}dt+A_{\phi}d\phi$ obey the field equations
${R_{\mu\nu}}=2{F_{\mu\alpha}}F_{\nu}^{\alpha}-\frac{1}{2}{g_{\mu\nu}}{F_{\alpha\beta}}{F^{\alpha\beta}}\,,$
(2.2)
where $R_{\mu\nu}$ is the Ricci tensor, and
${F_{\mu\nu}}={\partial_{\mu}}{A_{\nu}}-{\partial_{\nu}}{A_{\mu}}$ is the
Maxwell field-strength tensor.
Interestingly, Ernst [50, 51] showed that the last equations yield a set of
wave-like equations for the Ernst gravitational and electromagnetic
potentials. Using the metric functions in (2.1), we can construct the
gravitational Ernst potential
${\cal E}=f+\left|{\Phi}\right|^{2} -i\Psi\,,$ (2.3)
where the electromagnetic Ernst potential $\Phi$ is given by
$\Phi =A_{\phi} +i\tilde{A}_{\phi} \,.$ (2.4)
Note that the real part of $\Phi$ above is $A_{\phi}$ instead of $A_{t}$ as
it appears in [51], since the gravitational Ernst potential ${\cal E}$ is
defined with respect to the Killing vector $\partial_{\phi}$. The relation
between
these vector components is given by
$\nabla A_{t} = -\omega\nabla A_{\phi} -i\frac{\rho}{f}\nabla\tilde{A}_{\phi}
\,,$ (2.5)
where the twist potential $\Psi$ satisfies
$\nabla\Psi =\frac{{if^{2}}}{\rho}\nabla\omega+2i\Phi^{*}\nabla\Phi \,.$ (2.6)
Interestingly, as first shown by Ernst in [50, 51], the Einstein-Maxwell
equations (2.2) dictate that the Ernst potentials obey the nonlinear complex
equations
$\left({{\mathop{\rm Re}\nolimits}\left\\{{\cal E}\right\\}+{{\left|\Phi
\right|}^{2}}}\right)\nabla^{2}{\cal E}=\left({\nabla{\cal
E}+2{\Phi^{*}}\nabla\Phi}\right)\cdot\nabla{\cal E}\,,$ (2.7)
and
$\left({{\mathop{\rm Re}\nolimits}\left\\{{\cal E}\right\\}+{{\left|\Phi
\right|}^{2}}}\right)\nabla^{2}{\Phi}=\left({\nabla{\cal
E}+2{\Phi^{*}}\nabla\Phi}\right)\cdot\nabla{\Phi}\,.$ (2.8)
The magnetization procedure according to Ernst can be written as follows,
${\cal E}\to{\cal E}^{\prime}=\Lambda^{-1}{\cal E}\quad{\rm and}\quad\Phi\to\Phi^{\prime}=\Lambda^{-1}\left({\Phi-b{\cal E}}\right)\,,$ (2.9)
where
$\Lambda =1-2b\Phi +b^{2}{\cal E}\,.$ (2.10)
In the equations above, the constant $b$ represents the magnetic field
strength in the spacetime (for economy of notation, we express the magnetic
parameter as $b$ instead of the $B/2$ appearing in [7]; the relation is
$B=2b$). The transformation (2.9) leaves the two equations (2.7) and (2.8)
unchanged. In other words, the new fields
$\left\\{g^{\prime}_{\mu\nu},A^{\prime}_{\mu}\right\\}$ obtained from the
Ernst transformation (2.9) still satisfy the field equations (2.2).
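As a quick illustration, the transformation (2.9)-(2.10) can be implemented symbolically. The following minimal sketch (written in Python with sympy, purely as a convenience and not part of the derivation) also exhibits the vacuum-seed relation $\Phi^{\prime}=-b{\cal E}^{\prime}$ that appears later in (3.27).

```python
import sympy as sp

# Sketch of the Ernst magnetization (2.9)-(2.10) on abstract seed potentials.
b = sp.symbols('b', real=True)
E, Phi = sp.symbols('E Phi')            # seed Ernst potentials (complex)

Lam = 1 - 2*b*Phi + b**2*E              # eq. (2.10)
E_new = E / Lam                         # eq. (2.9), gravitational potential
Phi_new = (Phi - b*E) / Lam             # eq. (2.9), electromagnetic potential

# For a vacuum seed (Phi = 0), Phi' = -b E', as used in (3.27):
print(sp.simplify(Phi_new.subs(Phi, 0) + b*E_new.subs(Phi, 0)))  # 0
```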
In particular, the transformed line element (2.1) resulting from the
magnetization (2.9) has the components
$f^{\prime}={\mathop{\rm Re}\nolimits}\left\\{{{\cal
E}^{\prime}}\right\\}-\left|{\Phi^{\prime}}\right|^{2} =\left|\Lambda
\right|^{-2}f\,.$ (2.11)
The $\omega^{\prime}$ function obeys
$\nabla\omega^{\prime}=\left|\Lambda \right|^{2}\nabla\omega
-\frac{\rho}{f}\left({\Lambda^{*}\nabla\Lambda
-\Lambda\nabla\Lambda^{*}}\right)\,,$ (2.12)
and $\gamma$ in (2.1) remains unchanged. Since all the functions in the
metric (2.1) depend only on the $\rho$ and $z$ coordinates, the operator
$\nabla$ can be defined in the flat Euclidean space
$d\zeta d\zeta^{*} =d\rho^{2} +dz^{2}\,,$ (2.13)
where we have set the complex coordinate $\zeta$ such that $d\zeta=d\rho+idz$.
Accordingly, we have $\nabla=\partial_{\rho}+i\partial_{z}$.
As we know, spacetime solutions in Einstein-Maxwell theory are normally
expressed in Boyer-Lindquist type coordinates
$\left\\{{t,r,x=\cos\theta,\phi}\right\\}$. Consequently, the LPW type metric
(2.1) with stationary and axial Killing symmetries will have metric functions
that depend only on the $r$ and $x$ coordinates, and the corresponding flat
metric line element reads
$d\zeta d\zeta^{*}
=\frac{{dr^{2}}}{{\Delta_{r}}}+\frac{{dx^{2}}}{{\Delta_{x}}}\,,$ (2.14)
where $\Delta_{r}=\Delta_{r}\left(r\right)$ and
$\Delta_{x}=\Delta_{x}\left(x\right)$. Accordingly, the corresponding operator
$\nabla$ will read $\nabla =\sqrt{\Delta_{r}}\partial_{r}
+i\sqrt{\Delta_{x}}\partial_{x}$. Furthermore, we have
$\rho^{2}=\Delta_{r}\Delta_{x}$ which then allows us to write the components
of eq. (2.5) as
$\partial_{r}A_{t} = -\omega\partial_{r}A_{\phi}
+\frac{{\Delta_{x}}}{f}\partial_{x}\tilde{A}_{\phi} \,,$ (2.15)
and
$\partial_{x}A_{t} = -\omega\partial_{x}A_{\phi}
-\frac{{\Delta_{r}}}{f}\partial_{r}\tilde{A}_{\phi} \,.$ (2.16)
The last two equations are useful later in obtaining the $A_{t}$ component
associated with the magnetized spacetime according to (2.9). To complete the
details of the magnetization procedure, further equations for the magnetized
metric functions are
$\partial_{r}\omega^{\prime}=\left|\Lambda \right|^{2}\partial_{r}\omega
+\frac{{\Delta_{x}}}{f}{\mathop{\rm
Im}\nolimits}\left\\{{\Lambda^{*}\partial_{x}\Lambda
-\Lambda\partial_{x}\Lambda^{*}}\right\\}\,,$ (2.17)
and
$\partial_{x}\omega^{\prime}=\left|\Lambda \right|^{2}\partial_{x}\omega
-\frac{{\Delta_{r}}}{f}{\mathop{\rm
Im}\nolimits}\left\\{{\Lambda^{*}\partial_{r}\Lambda
-\Lambda\partial_{r}\Lambda^{*}}\right\\}\,.$ (2.18)
In the next section, we will employ this magnetization prescription to the
Taub-NUT spacetime.
## 3 Magnetizing the Taub-NUT spacetime
The Taub-NUT spacetime is a non-trivial extension of the Schwarzschild
solution where, in addition to the mass parameter $M$, the solution contains
an extra parameter $l$ known as the NUT parameter. However, unlike the mass
$M$, which can be considered a conserved quantity due to the timelike Killing
symmetry $\partial_{t}$ [3], the NUT parameter cannot be viewed in an
analogous way as a conserved charge associated with a symmetry of the
spacetime. The line element of the Taub-NUT spacetime can be expressed as [5]
$ds^{2}=-\frac{{\Delta_{r}}}{{r^{2}+l^{2}}}\left({dt-2lxd\phi}\right)^{2}+\left({r^{2}+l^{2}}\right)\left({\frac{{dr^{2}}}{{\Delta_{r}}}+\frac{{dx^{2}}}{{\Delta_{x}}}}\right)+\left({r^{2}+l^{2}}\right)\Delta_{x}d\phi^{2}\,,$
(3.19)
where $\Delta_{r}=r^{2}-2Mr-l^{2}$ and $\Delta_{x}=1-x^{2}$. Matching this
line element to the LPW form (2.1) gives us
$f=\frac{{\Delta_{x}\left({r^{2}+l^{2}}\right)^{2}-4\Delta_{r}l^{2}x^{2}}}{{r^{2}+l^{2}}}\,,$
(3.20)
$\omega=-\frac{{2\Delta_{r}lx}}{{4\Delta_{r}l^{2}x^{2}-\Delta_{x}\left({r^{2}+l^{2}}\right)^{2}}}\,,$
(3.21)
and
$e^{2\gamma}=\Delta_{x}\left({r^{2}+l^{2}}\right)^{2}-4\Delta_{r}l^{2}x^{2}\,.$
(3.22)
Note that from eqs. (2.13) and (2.14), one can find
$\rho^{2}=\Delta_{r}\Delta_{x}$ and $z=rx$. Furthermore, by using eq. (2.6)
one can show that the associated twist potential for Taub-NUT metric can be
written as
$\Psi=-{\frac{2l\left({r}^{3}+r{l}^{2}+{r}^{3}{x}^{2}-3r{l}^{2}{x}^{2}+M{l}^{2}{x}^{2}-3M{r}^{2}{x}^{2}\right)}{{r}^{2}+{l}^{2}}}\,.$
(3.23)
Thence, the gravitational Ernst potential (2.3), defined with respect to
$\partial_{\phi}$, for the Taub-NUT spacetime (3.19) is
${\cal
E}={\frac{6Mrl{x}^{2}-{r}^{2}l+{l}^{3}-3l{r}^{2}{x}^{2}+3{l}^{3}{x}^{2}+i\left(\Delta_{x}\left\\{r^{3}+3rl^{2}\right\\}+2M{l}^{2}{x}^{2}\right)}{l+ir}}\,,$
(3.24)
and the electromagnetic Ernst potential $\Phi$ vanishes. Also, from (2.10) we
can have
$\Lambda=\frac{\delta_{x}b^{2}{l}^{3}+\left(1-{b}^{2}{r}^{2}\delta_{x}+6{b}^{2}Mr{x}^{2}\right)l+i\left\\{\left(3{b}^{2}r\Delta_{x}+2{b}^{2}M{x}^{2}\right){l}^{2}+r+{b}^{2}{r}^{3}\Delta_{x}\right\\}}{l+ir}$
(3.25)
where $\delta_{x}=1+3x^{2}$. Recall that this $\Lambda$ function plays an
important role in the Ernst magnetization (2.9).
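Before proceeding, the identifications (3.20), (3.23), and (3.24) can be cross-checked symbolically. A minimal sympy sketch (an assumed convenience, as before) verifying that, with $\Phi=0$, eq. (2.3) gives ${\rm Re}\{{\cal E}\}=f$ and ${\rm Im}\{{\cal E}\}=-\Psi$ reads:

```python
import sympy as sp

r, x, M, l = sp.symbols('r x M l', real=True)
dx = 1 - x**2                                     # Delta_x
Dr = r**2 - 2*M*r - l**2                          # Delta_r

f   = (dx*(r**2 + l**2)**2 - 4*Dr*l**2*x**2)/(r**2 + l**2)        # (3.20)
Psi = -2*l*(r**3 + r*l**2 + r**3*x**2 - 3*r*l**2*x**2
            + M*l**2*x**2 - 3*M*r**2*x**2)/(r**2 + l**2)          # (3.23)

num = (6*M*r*l*x**2 - r**2*l + l**3 - 3*l*r**2*x**2 + 3*l**3*x**2
       + sp.I*(dx*(r**3 + 3*r*l**2) + 2*M*l**2*x**2))
E = num*(l - sp.I*r)/(l**2 + r**2)                # (3.24), rationalized

# With Phi = 0, eq. (2.3) gives E = f - i*Psi; both lines should print 0.
print(sp.simplify(sp.re(E) - f))
print(sp.simplify(sp.im(E) + Psi))
```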
Now, let us obtain the magnetized spacetime by using the Taub-NUT metric
(3.19) as the seed solution. Following (2.9), the corresponding magnetized
Ernst gravitational potential from (3.24) and (3.25) can be written as
${\cal
E}^{\prime}=\frac{6Mrl{x}^{2}-{r}^{2}l+{l}^{3}-3l{r}^{2}{x}^{2}+3{l}^{3}{x}^{2}+i\left(\Delta_{x}\left\\{r^{3}+3rl^{2}\right\\}+2M{l}^{2}{x}^{2}\right)}{\delta_{x}b^{2}{l}^{3}+\left(1-{b}^{2}{r}^{2}\delta_{x}+6{b}^{2}Mr{x}^{2}\right)l+i\left\\{\left(3{b}^{2}r\Delta_{x}+2{b}^{2}M{x}^{2}\right){l}^{2}+r+{b}^{2}{r}^{3}\Delta_{x}\right\\}}\,.$
(3.26)
On the other hand, the resulting electromagnetic Ernst potential simply reads
$\Phi^{\prime}=-b{\cal E}^{\prime}\,.$ (3.27)
This is obvious from (2.9) since the seed metric (3.19) has no associated
electromagnetic Ernst potential, i.e. $\Phi=0$. Consequently, the magnetized
metric function
$f^{\prime}={\mathop{\rm Re}\nolimits}\left\\{{{\cal
E}^{\prime}}\right\\}-\left|{\Phi^{\prime}}\right|^{2}$ (3.28)
which is related to the seed function $f$ as
$f^{\prime}=\left|\Lambda\right|^{-2}f$ can be written as
$f^{\prime}=\frac{{\Delta_{x}\left({r^{2}+l^{2}}\right)^{2}-4\Delta_{r}l^{2}x^{2}}}{\Xi}\,.$
(3.29)
In the last equation, we have used
$\Xi=d_{6}l^{6}+d_{4}l^{4}+d_{2}l^{2}+d_{0}$ where
$d_{0}={r}^{2}\left(1+{r}^{2}{b}^{2}\Delta_{x}\right)^{2}\,,\quad d_{6}=b^{4}\delta_{x}^{2}\,,$
$d_{4}=b^{2}\left(7{r}^{2}{b}^{2}+24{b}^{2}{x}^{4}Mr+24{b}^{2}Mr{x}^{2}+6{x}^{2}+2-30{r}^{2}{b}^{2}{x}^{2}+4{b}^{2}{M}^{2}{x}^{4}-9{b}^{2}{r}^{2}{x}^{4}\right)\,,$
and
$d_{2}=1+\left(36{M}^{2}{r}^{2}{x}^{4}-40{r}^{3}{x}^{4}M-6{r}^{4}{x}^{2}+15{x}^{4}{r}^{4}+7{r}^{4}-8{r}^{3}M{x}^{2}\right){b}^{4}+\left(16Mr{x}^{2}+4{r}^{2}-12{r}^{2}{x}^{2}\right){b}^{2}\,.$
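Since $f^{\prime}=\left|\Lambda\right|^{-2}f$ together with (3.20) and (3.29) implies $\Xi=\left|\Lambda\right|^{2}\left(r^{2}+l^{2}\right)$, the coefficients above can be cross-checked symbolically; a minimal sketch for $d_{2}$ reads:

```python
import sympy as sp

r, x, M, l, b = sp.symbols('r x M l b', real=True)
dx, ddx = 1 - x**2, 1 + 3*x**2                    # Delta_x, delta_x

# Real and imaginary parts of the numerator of Lambda in (3.25):
A = ddx*b**2*l**3 + (1 - b**2*r**2*ddx + 6*b**2*M*r*x**2)*l
B = (3*b**2*r*dx + 2*b**2*M*x**2)*l**2 + r + b**2*r**3*dx
Xi = sp.expand(A**2 + B**2)                       # = |Lambda|^2 (r^2 + l^2)

d2 = 1 + (36*M**2*r**2*x**4 - 40*r**3*x**4*M - 6*r**4*x**2
          + 15*x**4*r**4 + 7*r**4 - 8*r**3*M*x**2)*b**4 \
       + (16*M*r*x**2 + 4*r**2 - 12*r**2*x**2)*b**2

print(sp.simplify(Xi.coeff(l, 2) - d2))           # expected: 0
```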
Note that the new twist potential $\Psi^{\prime}$ associated with the
transformed Ernst potential ${\cal E}^{\prime}$ reads
$\Psi^{\prime}=\frac{-2l\left({r}^{3}+r{l}^{2}+{r}^{3}{x}^{2}-3r{l}^{2}{x}^{2}+M{l}^{2}{x}^{2}-3M{r}^{2}{x}^{2}\right)}{\Xi}\,.$
(3.30)
Furthermore, integrating out (2.17) and (2.18) gives us
${\omega^{\prime}}=\frac{2lx\Delta_{r}\left\\{c_{4}l^{4}+c_{2}l^{2}+c_{0}\right\\}}{{\Delta_{x}\left({r^{2}+l^{2}}\right)^{2}-4\Delta_{r}l^{2}x^{2}}}\,,$
(3.31)
where
$c_{4}=-b^{4}\delta_{x}\Delta_{x}\,,$ (3.32)
$c_{2}=2{b}^{4}\left(3{r}^{2}{x}^{4}-2{x}^{4}Mr+2{M}^{2}{x}^{4}+3{r}^{2}-6{r}^{2}{x}^{2}+2{x}^{2}Mr\right)\,,$
(3.33)
and
$c_{0}=1+b^{4}r^{3}\Delta_{x}\left(rx^{2}+3r-4Mx^{2}\right)\,.$ (3.34)
Obviously, $\omega^{\prime}$ reduces to (3.21) as one considers the limit
$b\to 0$. Using the obtained $\omega^{\prime}$ and $f^{\prime}$ functions
above, the metric after magnetization now becomes
$ds^{2}=\frac{1}{f^{\prime}}\left\\{-{\Delta_{r}\Delta_{x}}dt^{2}+\left({r^{2}+l^{2}}\right)\left({\frac{{dr^{2}}}{{\Delta_{r}}}+\frac{{dx^{2}}}{{\Delta_{x}}}}\right)\right\\}+f^{\prime}\left({d\phi-\omega^{\prime}dt}\right)^{2}\,.$
(3.35)
On the other hand, the accompanying vector field solving the Einstein-Maxwell
equations (2.2) can be obtained from the electromagnetic Ernst potential
$\Phi^{\prime}=A_{\phi}+i{\tilde{A}}_{\phi}$, where the vector component
$A_{t}$ is found by integrating (2.15) and (2.16). Explicitly, these vector
components read
$A_{\phi}=-\frac{b}{\Xi}\left\\{b^{2}\Delta_{x}r^{6}+\left(1+15{b}^{2}{l}^{2}{x}^{4}-{x}^{2}-6{b}^{2}{l}^{2}{x}^{2}+7{b}^{2}{l}^{2}\right)r^{4}-8{b}^{2}M{l}^{2}{x}^{2}\left(5{x}^{2}+1\right)r^{3}\right.$
$+{l}^{2}\left(2-6{x}^{2}-30{b}^{2}{l}^{2}{x}^{2}+36{b}^{2}{M}^{2}{x}^{4}+7{b}^{2}{l}^{2}-9{b}^{2}{l}^{2}{x}^{4}\right)r^{2}+8M{l}^{2}{x}^{2}\left(1+3{b}^{2}{l}^{2}\left\\{1+x^{2}\right\\}\right)r$
$\left.+{l}^{4}\left(4{b}^{2}{M}^{2}{x}^{4}+6{b}^{2}{l}^{2}{x}^{2}+3{x}^{2}+{b}^{2}{l}^{2}+1+9{b}^{2}{l}^{2}{x}^{4}\right)\right\\}$
(3.36)
and
$A_{t}=-\frac{2lbx\Delta_{r}}{\Xi}\left\\{b^{4}\Delta_{x}\left(3+x^{2}\right)r^{4}-4b^{4}x^{2}M\Delta_{x}r^{3}+2{b}^{2}\left(1+x^{2}+3b^{2}\Delta_{x}^{2}\right)r^{2}\right.$
$\left.-4b^{2}x^{2}M\left(1-b^{2}l^{2}\Delta_{x}\right)r+4l^{2}b^{4}M^{2}x^{4}-\left(1+{b}^{2}{l}^{2}\Delta_{x}\right)\left(3{b}^{2}{l}^{2}{x}^{2}+1+{b}^{2}{l}^{2}\right)\right\\}\,.$
(3.37)
It can be verified that this vector solution obeys the source-free condition,
$\nabla_{\mu}F^{\mu\nu}=0$. Moreover, considering the massless limit of
(3.35), (3.36), and (3.37) gives us the Melvin-Taub-NUT universe (the
solution is given in appendix A), i.e. the Taub-NUT extension of the Melvin
magnetic universe discovered in [52, 5].
Figure 1: Some numerical evaluations of the Kretschmann scalar at the equator in the absence of the NUT parameter, where $K_{s}^{*}=M^{-4}R_{\mu\nu\alpha\beta}R^{\mu\nu\alpha\beta}$.
Figure 2: Some numerical evaluations of the Kretschmann scalar in the magnetized Taub-NUT spacetime with $l=M/2$ at the equator, where $K_{s}^{*}=M^{-4}R_{\mu\nu\alpha\beta}R^{\mu\nu\alpha\beta}$.
Figure 3: Some numerical evaluations of the Kretschmann scalar in the magnetized Taub-NUT spacetime with $l=M$ at the equator, where $K_{s}^{*}=M^{-4}R_{\mu\nu\alpha\beta}R^{\mu\nu\alpha\beta}$.
## 4 Surface geometry and closed timelike curve
In this section, let us study the horizon surface deformation and the closed
timelike curve in the spacetime. Before we discuss these features, let us
examine the Kretschmann scalar, to test for the existence of a true
singularity in the spacetime. However, the complexity of the spacetime
solution (3.35) prevents us from expressing the Kretschmann scalar
explicitly. Therefore, we perform some numerical evaluations and check
whether the Kretschmann scalar is singular at the origin. As mentioned in the
introduction, a spacetime with NUT parameter has a conical singularity
instead of a true one at the origin; this is reflected in the fact that such
a spacetime typically has a non-singular Kretschmann scalar at $r=0$. In the
absence of the NUT parameter, the Kretschmann scalar blows up at the origin
even in the presence of an external magnetic field, as depicted in fig. 1. By
contrast, the typical plots of the Kretschmann scalar for a spacetime with
NUT parameter appear in figs. 2 and 3, which allow us to infer that the
magnetized Taub-NUT spacetime does not possess a true singularity at the
origin.
As one would expect for a magnetized spacetime obtained by the Ernst method,
the existence of an external magnetic field does not affect the radius of the
event horizon: it is the zero of $\Delta_{r}$, which is the same as in the
non-magnetized case, namely $r_{+}=M+\sqrt{M^{2}+l^{2}}$. Furthermore,
the total area of the horizon reads
$A=\int\limits_{0}^{2\pi}{\int\limits_{-1}^{1}{\sqrt{{g_{xx}}{g_{\phi\phi}}}dxd\phi}}
=4\pi\left({r_{+}^{2}+{l^{2}}}\right)\,,$ (4.38)
which is equal to the area of the generic Taub-NUT black hole. Consequently,
the entropy of a magnetized Taub-NUT black hole will be the same as that of a
non-magnetized one, namely $S=A/4$.
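A quick numeric sketch of eq. (4.38), which by construction is independent of the magnetic field strength $b$, reads:

```python
import numpy as np

# Horizon radius and area of the (magnetized) Taub-NUT black hole, eq. (4.38).
def horizon_area(M, l):
    r_plus = M + np.sqrt(M**2 + l**2)   # zero of Delta_r = r^2 - 2Mr - l^2
    return 4*np.pi*(r_plus**2 + l**2)

print(horizon_area(1.0, 0.0))   # 16*pi ~ 50.27, the Schwarzschild value
print(horizon_area(1.0, 0.5))   # the area grows with the NUT parameter l
```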
However, the external magnetic field can distort the horizon of the black
hole, as reported in [49]. To see this, one can study the Gaussian curvature
$K=\tfrac{1}{2}R$ of the two-dimensional surface of the horizon, where $R$ is
the scalar curvature. For the magnetized Taub-NUT black hole, the
corresponding two-dimensional surface of the horizon reads
$ds_{{\rm{hor}}}^{2}={{\left({r_{+}^{2}+{l^{2}}}\right)d{x^{2}}}\over{{f_{+}}^{\prime}{\Delta_{x}}}}+{f_{+}}^{\prime}d{\phi^{2}}\,,$
(4.39)
where $f_{+}^{\prime}$ is $f^{\prime}$ evaluated at $r_{+}$. Furthermore, the
Gaussian curvature at the equator can be found as
${K_{x=0}}={{\left\\{{3{b^{4}}{l^{8}}}+4r_{+}b^{4}\left(4M-9r_{+}\right)l^{6}-\left(2{r}_{+}^{4}{b}^{4}+32{r}_{+}^{3}M{b}^{4}+3\right){l}^{4}\right.}\over{{{\left({{r_{+}^{2}}+{l^{2}}}\right)}^{3}}\left({{b^{4}}{l^{4}}+2{l^{2}}{b^{2}}\left\\{{1+3{b^{2}}{r_{+}^{2}}}\right\\}+{{\left\\{{1+{b^{2}}{r_{+}^{2}}}\right\\}}^{2}}}\right)}}$
$\left.-2r_{+}\left(8{b}^{4}M{r}_{+}^{4}-2{r}_{+}^{5}{b}^{4}-3r_{+}+4M\right){l}^{2}+{{r_{+}^{4}}\left(1-b^{4}r_{+}^{4}\right)}\right\\}\,.$
(4.40)
Taking the limit $l\to 0$ of the equation above, we recover the Gaussian
curvature of the horizon at $x=0$ for a Schwarzschild black hole immersed in
a magnetic field [49]
${K_{l=0,x=0}}={{1-4{b^{2}}{M^{2}}}\over{4{M^{2}}{{\left({1+{b^{2}}{M^{2}}}\right)}^{3}}}}\,.$
(4.41)
Note that in the absence of an external magnetic field, this curvature takes
the form
${K_{{\rm{Taub-NUT}},x=0}}={1\over{2{r_{+}}\sqrt{{M^{2}}+{l^{2}}}}}\,,$ (4.42)
which is always positive, just as in the Schwarzschild case. In the presence
of a magnetic field, the equatorial scalar curvature at the horizon (for
$l=0$) vanishes for $2bM=1$, becomes negative for $2bM>1$, and positive for
$2bM<1$. More generally, zero equatorial scalar curvature occurs at the
magnetic field strength
$b=b_{0}=\left\\{{\frac{r_{+}^{2}}{4\left(2M\left\\{4{M}^{4}+6{l}^{2}{M}^{2}+3{l}^{4}\right\\}\sqrt{M^{2}+l^{2}}+8{M}^{6}+16{l}^{2}{M}^{4}+11{l}^{4}{M}^{2}+2{l}^{6}\right)}}\right\\}^{\frac{1}{4}}\,.$
(4.43)
Obviously, if we set $l\to 0$ in (4.43), we recover the condition for zero
scalar curvature at horizon for the magnetized Schwarzschild black hole [49].
Furthermore, positive curvature is obtained for $b<b_{0}$, and negative
curvature for $b>b_{0}$. Nevertheless, the expression (4.43) is quite
complicated, so we evaluate eq. (4.40) numerically to see how the curvature
varies with $b$ and $l$. The case of null NUT parameter is presented in figs.
4 and 5, where we can see that the curvature can take positive, negative, or
zero values depending on the external magnetic field strength. This is in
agreement with the results reported in [49]. Furthermore, the curvature
(4.40) vanishes as $l\to\infty$, in agreement with the plots presented in
fig. 4, and the plots in fig. 5 show how the curvature (4.40) changes as the
magnetic field increases.
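As a sanity check on eqs. (4.41) and (4.43), the following minimal numeric sketch verifies the sign change of the equatorial curvature and the Schwarzschild-limit threshold $b_{0}\to 1/(2M)$ as $l\to 0$.

```python
import numpy as np

def b0(M, l):
    # Zero-curvature field strength, eq. (4.43).
    rp = M + np.sqrt(M**2 + l**2)
    denom = 4*(2*M*(4*M**4 + 6*l**2*M**2 + 3*l**4)*np.sqrt(M**2 + l**2)
               + 8*M**6 + 16*l**2*M**4 + 11*l**4*M**2 + 2*l**6)
    return (rp**2/denom)**0.25

M = 1.0
print(b0(M, 0.0), 1/(2*M))           # both 0.5: the threshold 2bM = 1
for b in (0.2, 0.5, 0.8):            # below, at, and above the threshold
    K = (1 - 4*b**2*M**2)/(4*M**2*(1 + b**2*M**2)**3)   # eq. (4.41)
    print(f"b = {b}: K = {K:+.4f}")  # positive, zero, negative
```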
Figure 4: Gaussian curvature (4.40) evaluated for some values of the magnetic field strength $b$.
Figure 5: Gaussian curvature (4.40) evaluated for some values of the NUT parameter $l$.
Another way to see how the external magnetic field deforms the horizon is to
study the shape of the horizon, as done in [49] and [53]. The generic
Schwarzschild black hole horizon is, of course, a sphere. However, the
horizon of a magnetized Schwarzschild black hole can take an oval shape, or
even an hourglass shape for a sufficiently large magnetic field [53]. We show
here that the horizon in the spacetime with NUT parameter also exhibits this
effect, as can be anticipated from the previous finding that the equatorial
Gaussian curvature can be negative for $b>b_{0}$. To illustrate this
prolateness effect, let us compute the equatorial circumference of the
horizon $C_{e}$, as well as the polar one $C_{p}$. Since the integration is
over a full cycle, let us return to the standard Boyer-Lindquist type
coordinates $\left(t,r,\theta,\phi\right)$. The standard textbook definitions
for these circumferences are
${C_{p}}=\int\limits_{0}^{2\pi}{\sqrt{{g_{\theta\theta}}}d\theta}\,,$ (4.44)
and
${C_{e}}={\left.{\int\limits_{0}^{2\pi}{\sqrt{{g_{\phi\phi}}}d\phi}}\right|_{\theta
=\pi/2}}\,.$ (4.45)
Following [49], we can define the quantity $\delta$, which measures the
prolateness of the horizon as a function of the magnetic field $b$,
$\delta ={{{C_{p}}-{C_{e}}}\over{{C_{e}}}}\,.$ (4.46)
Note that for the seed solution (3.19) this quantity vanishes, implying a
spherical horizon. In terms of the $\gamma$ and $f^{\prime}$ functions,
expressed in $\theta$ instead of $x$, the metric functions entering $C_{p}$
and $C_{e}$ are
$g_{\theta\theta}=\frac{e^{2\gamma\left(r_{+},\theta\right)}}{f^{\prime}\left(r_{+},\theta\right)}\quad{\rm and}\quad g_{\phi\phi}=f^{\prime}\left(r_{+},\tfrac{\pi}{2}\right)\,.$ (4.47)
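A minimal numerical sketch of the diagnostic (4.44)-(4.46) is given below; the callables `g_thth` and `g_phph_eq` are hypothetical placeholders standing in for the metric functions (4.47), and the round-sphere test merely confirms that $\delta$ vanishes for a spherical horizon.

```python
import numpy as np
from scipy.integrate import quad

def prolateness(g_thth, g_phph_eq):
    # C_p and C_e as in (4.44)-(4.45), then delta as in (4.46).
    Cp, _ = quad(lambda th: np.sqrt(g_thth(th)), 0.0, 2*np.pi)
    Ce = 2*np.pi*np.sqrt(g_phph_eq())
    return (Cp - Ce)/Ce

# Sanity check on a round sphere of radius R (delta should vanish):
R = 2.0
print(prolateness(lambda th: R**2, lambda: R**2))  # ~0.0
```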
To illustrate the shape of the horizon, we provide numerical plots for some
values of the NUT parameter in figs. 6 and 7.
Figure 6: Plots of the deviation from a spherical surface, as measured by (4.46). We can observe that the deviation increases for larger NUT parameter.
Figure 7: Solid lines are for subscript $i=e$ and dashed ones are for $i=p$. Here $r^{*}=r/M$, and the red, black, and blue colors denote the cases of $l=0$, $l=M/2$, and $l=M$, respectively. The gap between $C_{p}$ and $C_{e}$ grows as the NUT parameter $l$ increases, confirming the results presented in fig. 6.
Now let us turn to another aspect of the magnetized Taub-NUT spacetime,
namely the existence of closed timelike curves (CTCs). It is well known that
the Taub-NUT spacetime possesses CTCs [5], since $g_{\phi\phi}$ in the seed
metric (3.19) becomes negative for
$x<\frac{4l^{2}\Delta_{r}-\left(r^{2}+l^{2}\right)^{2}}{4l^{2}\Delta_{r}+\left(r^{2}+l^{2}\right)^{2}}\,.$
(4.48)
It is then natural to ask whether a CTC can also occur in the magnetized
version of (3.19), and if so, how the external magnetic field influences it.
Under magnetization, the $\left(\phi,\phi\right)$ component of the metric
transforms as
${g_{\phi\phi}}\to g{{}^{\prime}_{\phi\phi}}={\left|\Lambda
\right|^{-2}}{g_{\phi\phi}}\,.$ (4.49)
Clearly, it is troublesome to express the condition for CTC occurrence in the
magnetized Taub-NUT spacetime analogously to eq. (4.48) for the
non-magnetized one. Therefore, we provide figs. 8 and 9, which show some
numerical evaluations of $g_{\phi\phi}$ for particular values of $b$ and $l$.
The curves in fig. 8 are results in the absence of a magnetic field, where
the existence of the CTC can be read off from the negative values of
$g_{\phi\phi}$. The plots in fig. 9 correspond to the case with a magnetic
field and show that CTCs can occur there as well.
Figure 8: Evaluation of $g_{\phi\phi}$ in the absence of external magnetic
field and for some values of $l$’s. Figure 9: Evaluation of $g_{\phi\phi}$ for
$l=M$ and some values of $b$, over the same angles as in fig. 8.
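For the seed metric (3.19), the CTC region near the axis can be exhibited with a compact numeric sketch:

```python
import numpy as np

# g_phi_phi of the seed Taub-NUT metric (3.19); it turns negative near the
# symmetry axis (x -> +/-1), signaling the closed-timelike-curve region.
def g_phph(r, x, M=1.0, l=1.0):
    Dr = r**2 - 2*M*r - l**2
    return (r**2 + l**2)*(1 - x**2) - 4*Dr*l**2*x**2/(r**2 + l**2)

for x in (0.0, 0.9, 0.99, 1.0):
    print(f"x = {x}: g_phph = {g_phph(3.0, x):+.3f}")  # negative near x = 1
```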
## 5 Electromagnetic properties
In this section, let us study some properties of electromagnetic fields around
the magnetized Taub-NUT black hole. The electric and magnetic fields are given
by
$E_{\alpha}=-F_{\alpha\beta}u^{\beta}\,,$ (5.50)
and
$B_{\alpha}=\frac{1}{2}\varepsilon_{\alpha\beta\mu\nu}F^{\mu\nu}u^{\beta}\,,$
(5.51)
respectively, where $u^{\alpha}=[1,0,0,0]$ is the stationary Killing vector.
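In practice, eqs. (5.50)-(5.51) can be evaluated mechanically; below is a generic sympy sketch (a convenience implementation under the stated conventions, with the Levi-Civita tensor $\varepsilon_{\alpha\beta\mu\nu}=\sqrt{-g}\,[\alpha\beta\mu\nu]$), taking the metric and field-strength tensor as $4\times 4$ matrices.

```python
import sympy as sp

def em_fields(g, F_dn):
    # E_a = -F_{ab} u^b and B_a = (1/2) eps_{abmn} F^{mn} u^b for the
    # stationary observer u^a = (1, 0, 0, 0); g and F_dn are 4x4 matrices.
    ginv = g.inv()
    u = sp.Matrix([1, 0, 0, 0])
    E = -F_dn * u
    F_up = ginv * F_dn * ginv.T               # F^{mn} = g^{ma} g^{nb} F_{ab}
    sg = sp.sqrt(-g.det())                    # sqrt(-det g)
    B = sp.zeros(4, 1)
    for a in range(4):
        for bb in range(4):
            for m in range(4):
                for n in range(4):
                    B[a] += sp.Rational(1, 2)*sg*sp.LeviCivita(a, bb, m, n) \
                            * F_up[m, n]*u[bb]
    return E.applyfunc(sp.simplify), B.applyfunc(sp.simplify)
```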
For the solutions presented in eqs. (3.35), (3.36), and (3.37), the non-zero
components of electric and magnetic fields are
$E_{r}=\frac{{4xbl\left({f_{10}l^{10}+f_{8}l^{8}+f_{6}l^{6}+f_{4}l^{4}+f_{2}l^{2}+f_{0}}\right)}}{{\Xi_{r}^{2}}}\,,$
(5.52)
and
$E_{x}=\frac{{2bl\left(r^{2}-2Mr-l^{2}\right)\left({h_{10}l^{10}+h_{8}l^{8}+h_{6}l^{6}+h_{4}l^{4}+h_{2}l^{2}+h_{0}}\right)}}{{\Xi_{x}^{2}}}\,.$
(5.53)
On the other hand, the non-zero components of magnetic field are
$B_{r}=-\frac{{2xb\left({{\tilde{f}}_{12}l^{12}+{\tilde{f}}_{10}l^{10}+{\tilde{f}}_{8}l^{8}+{\tilde{f}}_{6}l^{6}+{\tilde{f}}_{4}l^{4}+{\tilde{f}}_{2}l^{2}+{\tilde{f}}_{0}}\right)}}{{{\Upsilon}_{r}^{3}}}\,,$
(5.54)
and
$B_{x}=-\frac{{2b\left(r^{2}-2Mr-l^{2}\right)\left({{\tilde{h}}_{10}l^{10}+{\tilde{h}}_{8}l^{8}+{\tilde{h}}_{6}l^{6}+{\tilde{h}}_{4}l^{4}+{\tilde{h}}_{2}l^{2}+{\tilde{h}}_{0}}\right)}}{{\Upsilon_{x}^{3}}}\,.$
(5.55)
The $r$ and $x$ dependent functions $f_{j}$, $h_{j}$, ${\tilde{f}}_{j}$,
${\tilde{h}}_{j}$, $\Xi_{r}$, $\Xi_{x}$, ${\Upsilon}_{r}$, and
${\Upsilon}_{x}$ above are provided in the appendix B. Furthermore, the
asymptotic of each component above can be found as
$\mathop{\lim}\limits_{r\to\infty}E_{r}=0\,,\quad\mathop{\lim}\limits_{r\to\infty}B_{r}=0\,,\quad\mathop{\lim}\limits_{r\to\infty}B_{x}=0\,,$ (5.56)
and
$\mathop{\lim}\limits_{r\to\infty}E_{x}={\frac{2lb\left({x}^{4}-6{x}^{2}-3\right)}{1-2{x}^{2}+{x}^{4}}}\,.$
(5.57)
Obviously, the lengthy expressions for the electric and magnetic fields
(5.52), (5.53), (5.54), and (5.55) hinder us from extracting more qualitative
results about the field properties around the black hole. However, the
behavior of these fields can still be studied through numerical examples, as
shown in figs. 10, 11, 12, and 13. From fig. 10, we learn that the magnitude
of $E_{r}$ increases going from $x=-1$ to $x=1$. On the other hand, from fig.
11, $E_{x}$ increases in magnitude for larger $r$. Moreover, for $B_{r}$ as
depicted in fig. 12, the maximum value for the considered numerical setups is
at $x=-1$ and near the horizon.
Figure 10: Plot of dimensionless $E_{r}^{*}=ME_{r}$ for $b=0.01M$ and $l=0.1M$ in the range $2.5M\leq r\leq 5M$ and $-1\leq x\leq 1$.
Figure 11: Plot of dimensionless $E_{x}^{*}=E_{x}$ for $b=0.01M$ and $l=0.1M$ in the range $2.5M\leq r\leq 5M$ and $-1\leq x\leq 1$.
Figure 12: Plot of dimensionless $B_{r}^{*}=M^{3}B_{r}$ for $b=0.01M$ and $l=0.1M$ in the range $2.5M\leq r\leq 5M$ and $-1\leq x\leq 1$.
Figure 13: Plot of dimensionless $B_{x}^{*}=MB_{x}$ for $b=0.01M$ and $l=0.1M$ in the range $2.5M\leq r\leq 5M$ and $-1\leq x\leq 1$.
Now let us evaluate these fields at the equator. Obviously the
$r$-components of the fields vanish there, while the $x$-components evaluated
at $r=3M$ are depicted in figs. 14 and 15. From fig. 14 we observe that
$E_{x}$ increases as $b$ and $l$ grow, and from fig. 15 we learn that the
magnitude of $B_{x}$ grows as the magnetic field increases. However, the
presence of the NUT parameter leads to a smaller magnitude of $B_{x}$, as
illustrated in fig. 15.
Figure 14: Equatorial $E_{x}$ at $r=3M$ for $0\leq l\leq 0.5M$ and $0\leq
b\leq 0.2/M$. Figure 15: Equatorial dimensionless $B_{x}^{*}$ at $r=3M$ for
$0\leq l\leq 0.5M$ and $0\leq b\leq 0.2/M$.
## 6 Conclusion
In this paper, we have presented a novel solution in Einstein-Maxwell theory,
namely the Taub-NUT extension of the magnetized black hole reported in [7].
To obtain the solution, we employed the Ernst magnetization on the Taub-NUT
spacetime. As is typical for a spacetime with NUT parameter, the equatorial
Kretschmann scalar does not blow up at the origin. Moreover, we find that the
black hole surface in the magnetized Taub-NUT spacetime deforms due to the
presence of an external magnetic field, similar to the magnetized
Schwarzschild case reported in [49]. Furthermore, the existence of the NUT
parameter leads to the occurrence of a closed timelike curve in the
spacetime, as shown in fig. 9. In addition to the surface geometry and closed
timelike curve discussions, we have also studied some electromagnetic
properties of the spacetime.
There are several interesting future problems to investigate for the new
spacetime solution presented in this work, for example the conserved charges
and the related thermodynamics. In particular, these questions also apply to
the Melvin-Taub-NUT solution [52] given in the appendix. The distribution of
energy in the Melvin spacetime has been studied in [54, 55], and its
stability against some perturbations was investigated in [56]. Investigating
the energy distribution associated with the Melvin-Taub-NUT spacetime and its
stability against perturbations is worth consideration.
## Acknowledgement
This work is supported by Lembaga Penelitian Dan Pengabdian Kepada Masyarakat,
Parahyangan Catholic University. I thank Merry K. Nainggolan for her support
and encouragement.
## Appendix A Melvin-Taub-NUT spacetime
In this appendix we provide the solution describing the Taub-NUT extension of
the Melvin magnetic universe. The metric components are
$g_{rr}=\frac{\Upsilon}{{r^{2}-l^{2}}}=\frac{{\Delta_{x}g_{xx}}}{{r^{2}-l^{2}}}\,,$
(A.58)
$g_{tt}={\Upsilon}^{-1}\left\\{\left(1+b^{2}r^{2}\Delta_{x}\right)^{4}+b^{8}\Delta_{x}\delta_{x}\left(5x^{4}+10x^{2}+1\right)l^{8}-4b^{6}\Delta_{x}\left(15{b}^{2}{r}^{2}{x}^{4}\left\\{1+x^{2}\right\\}-9{x}^{4}\right.\right.$
$\left.+21{x}^{2}{b}^{2}{r}^{2}-6{x}^{2}-3{b}^{2}{r}^{2}-1\right)l^{6}+2b^{4}\Delta_{x}\left(5{b}^{4}{x}^{6}{r}^{4}+105{b}^{4}{r}^{4}{x}^{4}-18{r}^{2}{b}^{2}{x}^{4}\right.$
$\left.-33{b}^{4}{r}^{4}{x}^{2}-60{x}^{2}{b}^{2}{r}^{2}+5{x}^{2}+14{b}^{2}{r}^{2}+3+19{b}^{4}{r}^{4}\right)l^{4}-4b^{2}\Delta_{x}\left(7{r}^{6}{b}^{6}{x}^{6}-15{b}^{4}{r}^{4}{x}^{4}\right.$
$\left.\left.-{r}^{6}{b}^{6}{x}^{4}+13{r}^{6}{b}^{6}{x}^{2}+9{x}^{2}{b}^{2}{r}^{2}+6{b}^{4}{r}^{4}{x}^{2}-5{b}^{2}{r}^{2}-3{r}^{6}{b}^{6}-7{b}^{4}{r}^{4}-1\right)l^{2}\right\\}\,,$
(A.59)
$g_{t\phi}=\frac{2lx}{\Upsilon}\left(l^{2}-r^{2}\right)\left(1-{b}^{4}{r}^{4}{x}^{2}\left\\{2+x^{2}\right\\}+3{b}^{4}{r}^{4}+6l^{2}b^{4}r^{2}\Delta_{x}^{2}+3{l}^{4}{b}^{4}{x}^{4}-{l}^{4}{b}^{4}\left\\{1+2x^{2}\right\\}\right)\,,$
(A.60)
$g_{\phi\phi}=\Upsilon^{-1}\left({r}^{4}-{r}^{4}{x}^{2}+2{r}^{2}{l}^{2}+{l}^{4}-6{r}^{2}{l}^{2}{x}^{2}+3{l}^{4}{x}^{2}\right)\,,$
(A.61)
where
$\Upsilon=b^{4}\delta_{x}^{2}l^{6}+\left\\{1+{b}^{2}{r}^{2}\left(7{b}^{2}{r}^{2}+15{r}^{2}{b}^{2}{x}^{4}-6{x}^{2}{b}^{2}{r}^{2}+4-12{x}^{2}\right)\right\\}l^{2}$
$-b^{2}\left(9{r}^{2}{b}^{2}{x}^{4}+30{x}^{2}{b}^{2}{r}^{2}-7{b}^{2}{r}^{2}-2-6{x}^{2}\right)l^{4}+r^{2}\left(1+b^{2}r^{2}\Delta_{x}\right)^{2}\,,$
(A.62)
with $\delta_{x}$ as introduced in (3.25). The associated vector components
are
$A_{t}=2lbx\left(r^{2}-l^{2}\right)\Upsilon^{-1}\left\\{1+{b}^{4}{r}^{4}{x}^{4}+2{b}^{4}{r}^{4}{x}^{2}-3{b}^{4}{r}^{4}-6{l}^{2}{r}^{2}{b}^{4}{x}^{4}+12{l}^{2}{b}^{4}{r}^{2}{x}^{2}-2{x}^{2}{b}^{2}{r}^{2}\right.$
$\left.-6{l}^{2}{b}^{4}{r}^{2}-2{b}^{2}{r}^{2}+2{l}^{4}{b}^{4}{x}^{2}+2{b}^{2}{l}^{2}+{l}^{4}{b}^{4}+2{b}^{2}{l}^{2}{x}^{2}-3{l}^{4}{b}^{4}{x}^{4}\right\\}\,,$
(A.63)
and
$A_{\phi}=-b\Upsilon^{-1}\left\\{{r}^{6}{x}^{4}{b}^{2}-6{r}^{4}{b}^{2}{l}^{2}{x}^{2}-30{r}^{2}{b}^{2}{l}^{4}{x}^{2}+{l}^{6}{b}^{2}+7{r}^{4}{b}^{2}{l}^{2}+7{r}^{2}{b}^{2}{l}^{4}-2{r}^{6}{b}^{2}{x}^{2}+6{l}^{6}{b}^{2}{x}^{2}\right.$
$\left.+9{b}^{2}{l}^{6}{x}^{4}-9{r}^{2}{l}^{4}{x}^{4}{b}^{2}+15{b}^{2}{l}^{2}{r}^{4}{x}^{4}-6{r}^{2}{l}^{2}{x}^{2}+2{r}^{2}{l}^{2}-{r}^{4}{x}^{2}+3{l}^{4}{x}^{2}+{r}^{4}+{l}^{4}+{r}^{6}{b}^{2}\right\\}\,.$
(A.64)
This solution reduces to that of the Melvin universe [52] in the limit $l\to
0$.
## Appendix B Functions in the components of electric and magnetic fields
The following functions appear in (5.52):
$f_{10}={b}^{8}\Delta_{x}\left(1+3{x}^{2}\right)\left(9M{x}^{4}-18r{x}^{4}+8M{x}^{2}-12{x}^{2}r-M+14r\right)\,,$
(B.65)
$f_{8}=-{b}^{6}\left(10M{x}^{2}+24M{x}^{4}-25M{r}^{2}{b}^{2}+36r-8{r}^{3}{b}^{2}-72{b}^{2}{M}^{2}r{x}^{4}-102M{r}^{2}{b}^{2}{x}^{2}+168{b}^{2}r{x}^{8}{M}^{2}\right.$
$+72r{x}^{6}+232{b}^{2}M{r}^{2}{x}^{4}-16{b}^{2}{r}^{3}{x}^{4}-495{b}^{2}{r}^{2}{x}^{8}M+24{b}^{2}{x}^{6}{M}^{3}-30M{x}^{6}-108r{x}^{4}+216{b}^{2}{r}^{3}{x}^{8}$
$\left.-4M-152{b}^{2}{x}^{6}{M}^{2}r+390{b}^{2}{r}^{2}{x}^{6}M-8{b}^{2}{M}^{2}r{x}^{2}+8{b}^{2}{x}^{8}{M}^{3}-192{b}^{2}{r}^{3}{x}^{6}\right)\,,$
(B.66)
$f_{6}=2{b}^{4}\left(39{b}^{4}{r}^{4}M-284{b}^{4}{r}^{5}{x}^{4}-8{r}^{3}{b}^{2}-30M{r}^{2}{b}^{2}-24{b}^{2}{r}^{3}{x}^{4}+24{b}^{2}{r}^{3}{x}^{6}+12{b}^{2}{x}^{6}{M}^{3}\right.$
$+128{b}^{4}{M}^{2}{r}^{3}{x}^{2}+88{b}^{4}{M}^{3}{r}^{2}{x}^{4}-360{b}^{4}{M}^{2}{x}^{4}{r}^{3}-150{b}^{4}M{x}^{2}{r}^{4}+496{b}^{4}M{x}^{4}{r}^{4}-48M{r}^{2}{b}^{2}{x}^{2}-4{b}^{2}{M}^{2}r{x}^{2}$
$-38{b}^{4}{r}^{5}-8{x}^{2}r+16r-3M-36{b}^{4}{x}^{6}{M}^{3}{r}^{2}-96{b}^{4}{x}^{6}{M}^{2}{r}^{3}+16{b}^{4}{M}^{4}r{x}^{6}-96{b}^{4}{M}^{4}r{x}^{8}+264{b}^{4}{x}^{8}{M}^{2}{r}^{3}$
$-231{b}^{4}{x}^{8}M{r}^{4}+108{b}^{4}{x}^{8}{M}^{3}{r}^{2}-154{b}^{4}{r}^{4}{x}^{6}M-24{b}^{2}{M}^{2}r{x}^{4}+186{b}^{2}M{r}^{2}{x}^{4}+4{b}^{2}{x}^{6}{M}^{2}r-108{b}^{2}{r}^{2}{x}^{6}M$
$\left.-2M{x}^{4}+18{b}^{4}{r}^{5}{x}^{8}+192{b}^{4}{r}^{5}{x}^{6}+8{b}^{4}{M}^{5}{x}^{8}+8{b}^{2}{r}^{3}{x}^{2}+112{b}^{4}{r}^{5}{x}^{2}+M{x}^{2}\right)\,,$
(B.67)
$f_{4}=-2{b}^{2}\left(56{b}^{6}{r}^{7}{x}^{4}+32{b}^{6}{r}^{7}{x}^{6}+58{b}^{4}{r}^{4}M-128{b}^{6}{r}^{7}{x}^{2}-23M{r}^{2}{b}^{2}-48{b}^{4}{r}^{5}{x}^{4}-4{r}^{3}{b}^{2}\right.$
$+8{b}^{2}{r}^{3}{x}^{4}-294{x}^{6}{b}^{6}M{r}^{6}+220{x}^{4}{b}^{6}M{r}^{6}-88{b}^{6}{M}^{2}{r}^{5}{x}^{2}+112{b}^{4}{M}^{2}{r}^{3}{x}^{2}+24{b}^{4}{M}^{3}{r}^{2}{x}^{4}-272{b}^{4}{M}^{2}{x}^{4}{r}^{3}$
$-82{b}^{4}M{x}^{2}{r}^{4}+134{b}^{4}M{x}^{4}{r}^{4}-3M{r}^{2}{b}^{2}{x}^{2}+4{b}^{2}{M}^{2}r{x}^{2}+150{b}^{6}{x}^{2}M{r}^{6}+120{b}^{6}{M}^{2}{x}^{6}{r}^{5}+196{b}^{6}{M}^{3}{r}^{4}{x}^{6}$
$+32{b}^{6}{M}^{3}{r}^{4}{x}^{4}-280{b}^{6}{M}^{2}{r}^{5}{x}^{4}+12{b}^{6}{r}^{7}{x}^{8}-40{b}^{4}{r}^{5}+72{b}^{6}{M}^{5}{r}^{2}{x}^{8}-192{b}^{6}{M}^{4}{r}^{3}{x}^{8}+216{b}^{6}{x}^{8}{M}^{2}{r}^{5}$
$-63{b}^{6}{x}^{8}M{r}^{6}-84{b}^{6}{x}^{8}{M}^{3}{r}^{4}+28{b}^{6}{r}^{7}+6r-2M-13{b}^{6}M{r}^{6}+24{b}^{4}{x}^{6}{M}^{3}{r}^{2}+144{b}^{4}{x}^{6}{M}^{2}{r}^{3}-16{b}^{4}{M}^{4}r{x}^{6}$
$\left.-110{b}^{4}{r}^{4}{x}^{6}M+12{b}^{2}{M}^{2}r{x}^{4}-14{b}^{2}M{r}^{2}{x}^{4}+36{b}^{4}{r}^{5}{x}^{6}+12{b}^{2}{r}^{3}{x}^{2}+52{b}^{4}{r}^{5}{x}^{2}-M{x}^{2}\right)\,,$
(B.68)
$f_{2}=16{b}^{4}{r}^{5}{x}^{4}+32{b}^{6}{r}^{7}{x}^{4}+32{b}^{6}{r}^{7}{x}^{6}-20{b}^{8}{r}^{9}{x}^{4}+22{b}^{8}{r}^{9}{x}^{8}+62{b}^{4}{r}^{4}M-96{b}^{6}{r}^{7}{x}^{2}-12M{r}^{2}{b}^{2}$
$+16{b}^{8}{r}^{9}{x}^{2}+3{b}^{8}M{r}^{8}+128{b}^{8}{M}^{2}{r}^{7}{x}^{6}-168{x}^{6}{b}^{6}M{r}^{6}-4{x}^{4}{b}^{6}M{r}^{6}+102{b}^{8}M{r}^{8}{x}^{2}+36{b}^{8}M{r}^{8}{x}^{4}$
$-200{b}^{6}{M}^{2}{r}^{5}{x}^{2}+32{b}^{4}{M}^{2}{r}^{3}{x}^{2}-32{b}^{4}{M}^{3}{r}^{2}{x}^{4}+64{b}^{4}{M}^{2}{x}^{4}{r}^{3}-74{b}^{4}M{x}^{2}{r}^{4}-60{b}^{4}M{x}^{4}{r}^{4}-64{b}^{8}{M}^{2}{r}^{7}{x}^{2}$
$+336{b}^{8}{M}^{3}{r}^{6}{x}^{4}-12M{r}^{2}{b}^{2}{x}^{2}+288{b}^{8}{M}^{4}{r}^{5}{x}^{8}-288{b}^{8}{M}^{4}{r}^{5}{x}^{6}-6{b}^{8}{r}^{8}{x}^{6}M-135{b}^{8}{r}^{8}{x}^{8}M+336{b}^{8}{r}^{7}{x}^{8}{M}^{2}$
$+8{b}^{2}{M}^{2}r{x}^{2}+192{b}^{6}{x}^{2}M{r}^{6}+264{b}^{6}{M}^{2}{x}^{6}{r}^{5}-200{b}^{6}{M}^{3}{r}^{4}{x}^{6}-504{b}^{8}{M}^{3}{r}^{6}{x}^{8}+176{b}^{6}{M}^{3}{r}^{4}{x}^{4}-400{b}^{8}{M}^{2}{r}^{7}{x}^{4}$
$-48{b}^{6}{M}^{2}{r}^{5}{x}^{4}+168{b}^{8}{M}^{3}{r}^{6}{x}^{6}-32{b}^{4}{r}^{5}+32{b}^{6}{r}^{7}-18{b}^{8}{r}^{9}+2r-M-20{b}^{6}M{r}^{6}+8{b}^{2}{r}^{3}{x}^{2}+32{b}^{4}{r}^{5}{x}^{2}\,,$
(B.69)
$f_{0}=-{r}^{2}\left(1+{b}^{2}{r}^{2}{x}^{2}-{b}^{2}{r}^{2}\right)^{2}\left(3{b}^{4}{r}^{4}M+8{b}^{4}{M}^{2}{x}^{4}{r}^{3}-8{b}^{4}{M}^{2}{r}^{3}{x}^{2}-4{r}^{3}{b}^{2}+6M{r}^{2}{b}^{2}-M-3{b}^{4}M{x}^{4}{r}^{4}\right)\,,$
(B.70)
$\Xi_{r}=\left\\{12{b}^{2}{l}^{2}{r}^{2}{x}^{2}-4{b}^{2}{l}^{2}{r}^{2}-6{b}^{2}{l}^{4}{x}^{2}+2{b}^{2}{r}^{4}{x}^{2}-2{b}^{2}{l}^{4}-2{b}^{2}{r}^{4}-2{b}^{4}{r}^{6}{x}^{2}+{b}^{4}{r}^{6}{x}^{4}+9{l}^{6}{b}^{4}{x}^{4}\right.$
$-16{b}^{2}{l}^{2}Mr{x}^{2}+6{l}^{6}{b}^{4}{x}^{2}+7{l}^{4}{b}^{4}{r}^{2}+7{l}^{2}{b}^{4}{r}^{4}+{l}^{6}{b}^{4}+{b}^{4}{r}^{6}+24{l}^{4}{b}^{4}Mr{x}^{4}+24{l}^{4}{b}^{4}Mr{x}^{2}+36{l}^{2}{b}^{4}{M}^{2}{r}^{2}{x}^{4}$
$\left.-8{l}^{2}{b}^{4}M{r}^{3}{x}^{2}-40{l}^{2}{b}^{4}M{r}^{3}{x}^{4}+4{l}^{4}{b}^{4}{M}^{2}{x}^{4}-30{l}^{4}{b}^{4}{r}^{2}{x}^{2}-9{l}^{4}{r}^{2}{b}^{4}{x}^{4}-6{l}^{2}{b}^{4}{r}^{4}{x}^{2}+15{l}^{2}{b}^{4}{r}^{4}{x}^{4}+{r}^{2}+{l}^{2}\right\\}\,.$
(B.71)
On the other hand, the following functions appear in (5.53):
$h_{10}=-{b}^{8}\left(3{x}^{4}+6{x}^{2}-1\right)\left(1+3{x}^{2}\right)^{2}\,,$
(B.72)
$h_{8}=-{b}^{6}\left(28{b}^{2}{x}^{4}Mr-72{b}^{2}{r}^{2}{x}^{6}+80{x}^{6}{b}^{2}{M}^{2}-66{b}^{2}{r}^{2}{x}^{4}+32{b}^{2}{M}^{2}{x}^{4}+36{b}^{2}Mr{x}^{2}+48{x}^{8}{b}^{2}{M}^{2}\right.$
$\left.+36{b}^{2}Mr{x}^{8}-{b}^{2}{r}^{2}-144{b}^{2}{r}^{2}{x}^{2}-72{x}^{6}-60{x}^{4}+4+156{b}^{2}{x}^{6}Mr+27{b}^{2}{x}^{8}{r}^{2}\right)\,,$
(B.73)
$h_{6}=-2{b}^{4}\left(2{b}^{2}{r}^{2}+48{b}^{4}M{r}^{3}{x}^{8}+536{b}^{4}{x}^{6}M{r}^{3}-312{b}^{4}{x}^{6}{M}^{2}{r}^{2}+40{b}^{4}{x}^{8}{M}^{3}r-3+136{b}^{4}{M}^{3}{x}^{6}r+8{b}^{4}{M}^{4}{x}^{8}\right.$
$-9{b}^{4}{r}^{4}{x}^{8}-336{b}^{4}{x}^{6}{r}^{4}-12{b}^{2}{x}^{4}Mr-78{b}^{2}{x}^{6}Mr-38{b}^{2}Mr{x}^{2}-40{b}^{4}M{r}^{3}{x}^{2}-544{b}^{4}M{r}^{3}{x}^{4}-32{b}^{2}{M}^{2}{x}^{4}-40{x}^{6}{b}^{2}{M}^{2}$
$\left.-18{b}^{2}{r}^{2}{x}^{4}+18{b}^{2}{r}^{2}{x}^{6}+19{b}^{4}{r}^{4}-72{b}^{4}{r}^{4}{x}^{2}+398{b}^{4}{r}^{4}{x}^{4}+15{x}^{4}+136{b}^{4}{M}^{2}{r}^{2}{x}^{4}+126{b}^{2}{r}^{2}{x}^{2}\right)\,,$
(B.74)
$h_{4}=-2{b}^{2}\left(2-12{r}^{5}{b}^{6}M{x}^{2}+31{b}^{6}{r}^{6}-3{b}^{2}{r}^{2}-170{b}^{4}{x}^{6}M{r}^{3}+88{b}^{4}{x}^{6}{M}^{2}{r}^{2}-104{b}^{4}{M}^{3}{x}^{6}r-180{b}^{6}M{r}^{5}{x}^{8}\right.$
$+264{b}^{6}{M}^{2}{r}^{4}{x}^{8}-144{b}^{6}{M}^{3}{r}^{3}{x}^{8}+72{b}^{6}{M}^{4}{r}^{2}{x}^{8}+102{b}^{4}{x}^{6}{r}^{4}-2{b}^{2}{x}^{4}Mr+22{b}^{2}Mr{x}^{2}+110{b}^{4}M{r}^{3}{x}^{2}+252{b}^{4}M{r}^{3}{x}^{4}$
$+16{b}^{2}{M}^{2}{x}^{4}-516{b}^{6}M{r}^{5}{x}^{6}+324{b}^{6}M{r}^{5}{x}^{4}+51{b}^{6}{r}^{6}{x}^{8}+27{b}^{2}{r}^{2}{x}^{4}-14{b}^{4}{r}^{4}+34{b}^{4}{r}^{4}{x}^{2}-186{b}^{4}{r}^{4}{x}^{4}-120{b}^{4}{M}^{2}{r}^{2}{x}^{4}$
$\left.-336{r}^{4}{b}^{6}{M}^{2}{x}^{4}-112{b}^{6}{M}^{3}{x}^{6}{r}^{3}+504{b}^{6}{M}^{2}{x}^{6}{r}^{4}+64{b}^{6}{r}^{6}{x}^{6}+70{b}^{6}{r}^{6}{x}^{4}-88{b}^{6}{r}^{6}{x}^{2}-60{b}^{2}{r}^{2}{x}^{2}\right)\,,$
(B.75)
$h_{2}=1-96{b}^{8}{r}^{7}{x}^{8}M+48{b}^{8}M{r}^{7}{x}^{2}-512{b}^{8}{r}^{7}{x}^{4}M+48{b}^{8}{r}^{7}{x}^{6}M-140{r}^{5}{b}^{6}M{x}^{2}-27{b}^{8}{r}^{8}+36{b}^{6}{r}^{6}-4{b}^{2}{r}^{2}$
$+4{b}^{2}Mr{x}^{2}+88{b}^{4}M{r}^{3}{x}^{2}+40{b}^{4}M{r}^{3}{x}^{4}-28{b}^{6}M{r}^{5}{x}^{6}+296{b}^{6}M{r}^{5}{x}^{4}-6{b}^{4}{r}^{4}+16{b}^{4}{r}^{4}{x}^{2}-50{b}^{4}{r}^{4}{x}^{4}-64{b}^{4}{M}^{2}{r}^{2}{x}^{4}$
$-208{r}^{4}{b}^{6}{M}^{2}{x}^{4}+144{b}^{6}{M}^{3}{x}^{6}{r}^{3}-64{b}^{6}{M}^{2}{x}^{6}{r}^{4}-144{b}^{8}{M}^{3}{r}^{5}{x}^{8}+208{b}^{8}{r}^{6}{x}^{6}{M}^{2}+192{b}^{8}{r}^{6}{x}^{8}{M}^{2}+272{b}^{8}{r}^{6}{M}^{2}{x}^{4}$
$-144{b}^{8}{r}^{5}{M}^{3}{x}^{6}+122{b}^{8}{r}^{8}{x}^{4}-24{b}^{8}{r}^{8}{x}^{6}+48{b}^{8}{r}^{8}{x}^{2}+9{b}^{8}{r}^{8}{x}^{8}-28{b}^{6}{r}^{6}{x}^{6}-4{b}^{6}{r}^{6}{x}^{4}-4{b}^{6}{r}^{6}{x}^{2}-12{b}^{2}{r}^{2}{x}^{2}\,,$
(B.76)
$h_{0}=-{r}^{2}\left(1+{b}^{2}{r}^{2}{x}^{2}-{b}^{2}{r}^{2}\right)^{2}\left(6{b}^{4}{r}^{4}{x}^{2}+3{b}^{4}{r}^{4}+4{b}^{4}M{r}^{3}{x}^{4}-12{b}^{4}M{r}^{3}{x}^{2}-2{b}^{2}{r}^{2}{x}^{2}\right.$
$\left.-{b}^{4}{r}^{4}{x}^{4}-2{b}^{2}{r}^{2}+12{b}^{2}Mr{x}^{2}-1\right)\,,$
(B.77)
$\Xi_{x}=-16{b}^{2}{l}^{2}Mr{x}^{2}+12{b}^{2}{l}^{2}{r}^{2}{x}^{2}-4{b}^{2}{l}^{2}{r}^{2}-6{b}^{2}{l}^{4}{x}^{2}+2{b}^{2}{r}^{4}{x}^{2}-2{b}^{2}{l}^{4}-2{b}^{2}{r}^{4}-2{b}^{4}{r}^{6}{x}^{2}+{b}^{4}{r}^{6}{x}^{4}$
$+9{l}^{6}{b}^{4}{x}^{4}+6{l}^{6}{b}^{4}{x}^{2}+7{l}^{4}{b}^{4}{r}^{2}+7{l}^{2}{b}^{4}{r}^{4}+{l}^{6}{b}^{4}+{b}^{4}{r}^{6}+24{l}^{4}{b}^{4}Mr{x}^{4}+24{l}^{4}{b}^{4}Mr{x}^{2}+36{l}^{2}{b}^{4}{M}^{2}{r}^{2}{x}^{4}$
$-8{l}^{2}{b}^{4}M{r}^{3}{x}^{2}-40{l}^{2}{b}^{4}M{r}^{3}{x}^{4}+4{l}^{4}{b}^{4}{M}^{2}{x}^{4}-30{l}^{4}{b}^{4}{r}^{2}{x}^{2}-9{l}^{4}{r}^{2}{b}^{4}{x}^{4}-6{l}^{2}{b}^{4}{r}^{4}{x}^{2}+15{l}^{2}{b}^{4}{r}^{4}{x}^{4}+{r}^{2}+{l}^{2}\,.$
(B.78)
Furthermore, these functions appear in (5.54):
${\tilde{f}}_{12}=-{b}^{8}\Delta_{x}\left(3{x}^{2}+5\right)\left(1+3{x}^{2}\right)^{2}\,,$
(B.79)
${\tilde{f}}_{10}=-4{b}^{6}\left(20{b}^{2}{r}^{2}{x}^{2}+81{b}^{2}{x}^{8}{r}^{2}-117{b}^{2}Mr{x}^{8}-14{x}^{2}+88{b}^{2}{x}^{4}Mr+6{b}^{2}{x}^{6}Mr-54{b}^{2}{r}^{2}{x}^{4}-2{b}^{2}{M}^{2}{x}^{4}\right.$
$\left.+5{b}^{2}Mr-36{b}^{2}{r}^{2}{x}^{6}-2{x}^{6}{b}^{2}{M}^{2}-11{b}^{2}{r}^{2}+18{x}^{6}+18{x}^{8}{b}^{2}{M}^{2}+18{b}^{2}Mr{x}^{2}-4+2{b}^{2}{M}^{2}{x}^{2}\right)\,,$
(B.80)
${\tilde{f}}_{8}=-{b}^{4}\left(18+32{b}^{4}{M}^{4}{x}^{6}+92{b}^{2}{r}^{2}+128{b}^{4}{x}^{4}{M}^{3}r+256{b}^{4}{x}^{2}{M}^{2}{r}^{2}-1200{b}^{4}{M}^{2}{x}^{8}{r}^{2}+1584{b}^{4}M{r}^{3}{x}^{8}\right.$
$-544{b}^{4}{x}^{6}M{r}^{3}+288{b}^{4}{x}^{6}{M}^{2}{r}^{2}-96{b}^{4}{x}^{8}{M}^{3}r+32{b}^{4}{M}^{3}{x}^{6}r+80{b}^{4}{M}^{4}{x}^{8}-387{b}^{4}{r}^{4}{x}^{8}-84{b}^{4}{x}^{6}{r}^{4}$
$-256{b}^{2}{x}^{4}Mr+480{b}^{2}{x}^{6}Mr-160{b}^{2}Mr{x}^{2}-24{b}^{2}{M}^{2}{x}^{2}-224{b}^{4}M{r}^{3}{x}^{2}-832{b}^{4}M{r}^{3}{x}^{4}+16{b}^{4}M{r}^{3}$
$-64{b}^{2}Mr-8{x}^{6}{b}^{2}{M}^{2}+276{b}^{2}{r}^{2}{x}^{4}-252{b}^{2}{r}^{2}{x}^{6}-131{b}^{4}{r}^{4}+172{b}^{4}{r}^{4}{x}^{2}+430{b}^{4}{r}^{4}{x}^{4}+28{x}^{2}$
$\left.-30{x}^{4}+592{b}^{4}{M}^{2}{r}^{2}{x}^{4}-116{b}^{2}{r}^{2}{x}^{2}\right)\,,$
(B.81)
${\tilde{f}}_{6}=-8{b}^{2}\left(93{b}^{4}{x}^{6}{M}^{2}{r}^{2}-1-94{r}^{5}{b}^{6}M{x}^{2}-4{b}^{6}{r}^{6}-4{b}^{4}{M}^{4}{x}^{6}-9{b}^{2}{r}^{2}-24{b}^{4}{x}^{4}{M}^{3}r-63{b}^{4}{x}^{2}{M}^{2}{r}^{2}\right.$
$-92{b}^{4}{x}^{6}M{r}^{3}+16{b}^{4}{M}^{3}{x}^{6}r+24{b}^{6}{M}^{5}{x}^{8}r+168{b}^{6}{r}^{3}{M}^{3}{x}^{4}+46{b}^{6}{x}^{2}{M}^{2}{r}^{4}+72{b}^{6}{M}^{4}{x}^{6}{r}^{2}-75{b}^{6}M{r}^{5}{x}^{8}$
$+192{b}^{6}{M}^{2}{r}^{4}{x}^{8}-32{b}^{6}{M}^{3}{r}^{3}{x}^{8}-96{b}^{6}{M}^{4}{r}^{2}{x}^{8}+15{b}^{4}{x}^{6}{r}^{4}-16{b}^{2}{x}^{4}Mr+9{b}^{2}Mr{x}^{2}+3{b}^{2}{M}^{2}{x}^{2}+56{b}^{4}M{r}^{3}{x}^{2}$
$+42{b}^{4}M{r}^{3}{x}^{4}-9{b}^{6}M{r}^{5}-6{b}^{4}M{r}^{3}+9{b}^{2}Mr+{b}^{2}{M}^{2}{x}^{4}-282{b}^{6}M{r}^{5}{x}^{6}+460{b}^{6}M{r}^{5}{x}^{4}+3{b}^{2}{r}^{2}{x}^{4}+27{b}^{4}{r}^{4}$
$-55{b}^{4}{r}^{4}{x}^{2}+13{b}^{4}{r}^{4}{x}^{4}-22{b}^{4}{M}^{2}{r}^{2}{x}^{4}-420{r}^{4}{b}^{6}{M}^{2}{x}^{4}-120{b}^{6}{M}^{3}{x}^{6}{r}^{3}+174{b}^{6}{M}^{2}{x}^{6}{r}^{4}+104{b}^{6}{r}^{6}{x}^{6}$
$\left.-148{b}^{6}{r}^{6}{x}^{4}+48{b}^{6}{r}^{6}{x}^{2}+4{b}^{2}{r}^{2}{x}^{2}\right)\,,$
(B.82)
${\tilde{f}}_{4}=416{b}^{8}M{r}^{7}{x}^{2}-640{b}^{8}{r}^{7}{x}^{4}M+864{b}^{8}{r}^{7}{x}^{6}M-544{r}^{5}{b}^{6}M{x}^{2}+33{b}^{8}{r}^{8}-64{b}^{6}{r}^{6}-1-624{b}^{8}{r}^{7}{x}^{8}M$
$+912{b}^{8}{M}^{4}{r}^{4}{x}^{8}-1056{b}^{8}{r}^{4}{M}^{4}{x}^{6}-28{b}^{2}{r}^{2}-64{b}^{4}{x}^{4}{M}^{3}r-272{b}^{4}{x}^{2}{M}^{2}{r}^{2}+960{b}^{6}{r}^{3}{M}^{3}{x}^{4}+456{b}^{6}{x}^{2}{M}^{2}{r}^{4}$
$+96{b}^{6}{M}^{4}{x}^{6}{r}^{2}-16{b}^{2}Mr{x}^{2}+8{b}^{2}{M}^{2}{x}^{2}+272{b}^{4}M{r}^{3}{x}^{2}-256{b}^{4}M{r}^{3}{x}^{4}-96{b}^{6}M{r}^{5}-48{b}^{4}M{r}^{3}+32{b}^{2}Mr$
$-16{b}^{8}M{r}^{7}-864{b}^{6}M{r}^{5}{x}^{6}+1504{b}^{6}M{r}^{5}{x}^{4}+108{b}^{4}{r}^{4}-184{b}^{4}{r}^{4}{x}^{2}+92{b}^{4}{r}^{4}{x}^{4}+336{b}^{4}{M}^{2}{r}^{2}{x}^{4}$
$-1088{b}^{6}{M}^{3}{x}^{6}{r}^{3}+1560{b}^{6}{M}^{2}{x}^{6}{r}^{4}-2208{b}^{8}{M}^{3}{r}^{5}{x}^{8}-1504{b}^{8}{r}^{6}{x}^{6}{M}^{2}+1680{b}^{8}{r}^{6}{x}^{8}{M}^{2}-240{b}^{8}{r}^{6}{M}^{2}{x}^{4}$
$+2400{b}^{8}{r}^{5}{M}^{3}{x}^{6}+518{b}^{8}{r}^{8}{x}^{4}-372{b}^{8}{r}^{8}{x}^{6}-308{b}^{8}{r}^{8}{x}^{2}+129{b}^{8}{r}^{8}{x}^{8}+224{b}^{6}{r}^{6}{x}^{6}-576{b}^{6}{r}^{6}{x}^{4}$
$+416{b}^{6}{r}^{6}{x}^{2}+12{b}^{2}{r}^{2}{x}^{2}-1984{r}^{4}{b}^{6}{M}^{2}{x}^{4}\,,$
(B.83)
${\tilde{f}}_{2}=-4r\left(1+{b}^{2}{r}^{2}{x}^{2}-{b}^{2}{r}^{2}\right)\left(5{r}^{3}{b}^{2}+11{b}^{6}{r}^{7}{x}^{4}-9{b}^{6}{r}^{7}{x}^{6}-9{b}^{4}{r}^{4}M-15{b}^{4}{r}^{5}{x}^{4}\right.$
$-7{b}^{6}{r}^{7}{x}^{2}-3M{r}^{2}{b}^{2}+35{x}^{6}{b}^{6}M{r}^{6}-15{x}^{4}{b}^{6}M{r}^{6}+30{b}^{6}{M}^{2}{r}^{5}{x}^{2}+40{b}^{4}{M}^{2}{r}^{3}{x}^{2}-32{b}^{4}{M}^{2}{x}^{4}{r}^{3}$
$-40{b}^{4}M{x}^{2}{r}^{4}+37{b}^{4}M{x}^{4}{r}^{4}+11M{r}^{2}{b}^{2}{x}^{2}-6{b}^{2}{M}^{2}r{x}^{2}-15{b}^{6}{x}^{2}M{r}^{6}-66{b}^{6}{M}^{2}{x}^{6}{r}^{5}+48{b}^{6}{M}^{3}{r}^{4}{x}^{6}$
$\left.-48{b}^{6}{M}^{3}{r}^{4}{x}^{4}+36{b}^{6}{M}^{2}{r}^{5}{x}^{4}-{b}^{4}{r}^{5}+5{b}^{6}{r}^{7}-r+M-5{b}^{6}M{r}^{6}-7{b}^{2}{r}^{3}{x}^{2}+20{b}^{4}{r}^{5}{x}^{2}\right)\,,$
(B.84)
${\tilde{f}}_{0}={r}^{4}\left(1+{b}^{2}{r}^{2}{x}^{2}-{b}^{2}{r}^{2}\right)^{4}\,,$
(B.85)
$\Upsilon_{r}=-16{b}^{2}{l}^{2}Mr{x}^{2}+12{b}^{2}{l}^{2}{r}^{2}{x}^{2}-4{b}^{2}{l}^{2}{r}^{2}-6{b}^{2}{l}^{4}{x}^{2}+2{b}^{2}{r}^{4}{x}^{2}-2{b}^{2}{l}^{4}-2{b}^{2}{r}^{4}-2{b}^{4}{r}^{6}{x}^{2}+{b}^{4}{r}^{6}{x}^{4}$
$+9{l}^{6}{b}^{4}{x}^{4}+6{l}^{6}{b}^{4}{x}^{2}+7{l}^{4}{b}^{4}{r}^{2}+7{l}^{2}{b}^{4}{r}^{4}+{l}^{6}{b}^{4}+{b}^{4}{r}^{6}+24{l}^{4}{b}^{4}Mr{x}^{4}+24{l}^{4}{b}^{4}Mr{x}^{2}+36{l}^{2}{b}^{4}{M}^{2}{r}^{2}{x}^{4}$
$-8{l}^{2}{b}^{4}M{r}^{3}{x}^{2}-40{l}^{2}{b}^{4}M{r}^{3}{x}^{4}+4{l}^{4}{b}^{4}{M}^{2}{x}^{4}-30{l}^{4}{b}^{4}{r}^{2}{x}^{2}-9{l}^{4}{r}^{2}{b}^{4}{x}^{4}-6{l}^{2}{b}^{4}{r}^{4}{x}^{2}+15{l}^{2}{b}^{4}{r}^{4}{x}^{4}+{r}^{2}+{l}^{2}\,.$
(B.86)
Finally, these functions appear in (5.55)
${\tilde{h}}_{10}={b}^{8}\left(1+3{x}^{2}\right)\left(8M{x}^{4}-9r{x}^{4}-51{x}^{2}r+12M{x}^{2}+5r-9r{x}^{6}+12M{x}^{6}\right)\,,$
(B.87)
${\tilde{h}}_{8}={b}^{6}\left(192{b}^{2}r{x}^{8}{M}^{2}-36M{x}^{6}-16r+117{b}^{2}{r}^{3}{x}^{8}-36M{x}^{2}+416{b}^{2}{x}^{6}{M}^{2}r+60{b}^{2}{r}^{3}{x}^{2}-288{b}^{2}{r}^{2}{x}^{8}M\right.$
$+14{b}^{2}{r}^{3}{x}^{4}+16{b}^{2}{x}^{8}{M}^{3}-16{b}^{2}{x}^{6}{M}^{3}+96{x}^{2}r-88M{x}^{4}+96{b}^{2}{M}^{2}r{x}^{4}+540{b}^{2}{r}^{3}{x}^{6}$
$\left.-416{b}^{2}M{r}^{2}{x}^{4}-960{b}^{2}{r}^{2}{x}^{6}M+37{r}^{3}{b}^{2}+240r{x}^{4}\right)\,,$
(B.88)
${\tilde{h}}_{6}=2{b}^{4}\left(206{b}^{4}{r}^{5}{x}^{4}-34{r}^{3}{b}^{2}-134{b}^{2}{r}^{3}{x}^{4}-126{b}^{2}{r}^{3}{x}^{6}+8{b}^{2}{x}^{6}{M}^{3}+56{b}^{4}{M}^{2}{x}^{4}{r}^{3}+164{b}^{4}M{x}^{2}{r}^{4}\right.$
$-380{b}^{4}M{x}^{4}{r}^{4}+10M{r}^{2}{b}^{2}{x}^{2}+25{b}^{4}{r}^{5}-39r{x}^{4}-42{x}^{2}r+9r+336{b}^{4}{x}^{6}{M}^{3}{r}^{2}-1312{b}^{4}{x}^{6}{M}^{2}{r}^{3}+24{b}^{4}{M}^{4}r{x}^{8}$
$-408{b}^{4}{x}^{8}{M}^{2}{r}^{3}+300{b}^{4}{x}^{8}M{r}^{4}+176{b}^{4}{x}^{8}{M}^{3}{r}^{2}+1388{b}^{4}{r}^{4}{x}^{6}M-104{b}^{2}{M}^{2}r{x}^{4}+412{b}^{2}M{r}^{2}{x}^{4}-168{b}^{2}{x}^{6}{M}^{2}r$
$\left.+234{b}^{2}{r}^{2}{x}^{6}M+22M{x}^{4}-63{b}^{4}{r}^{5}{x}^{8}-444{b}^{4}{r}^{5}{x}^{6}+6{b}^{2}{r}^{3}{x}^{2}-108{b}^{4}{r}^{5}{x}^{2}+18M{x}^{2}\right)\,,$
(B.89)
${\tilde{h}}_{4}=2{b}^{2}\left(21{r}^{3}{b}^{2}-98{b}^{4}{r}^{5}{x}^{4}+214{b}^{6}{r}^{7}{x}^{4}-44{b}^{6}{r}^{7}{x}^{6}-124{b}^{6}{r}^{7}{x}^{2}+61{b}^{2}{r}^{3}{x}^{4}-224{x}^{6}{b}^{6}M{r}^{6}\right.$
$-272{x}^{4}{b}^{6}M{r}^{6}+136{b}^{4}{M}^{2}{x}^{4}{r}^{3}-110{b}^{4}M{x}^{2}{r}^{4}+76{b}^{4}M{x}^{4}{r}^{4}-12M{r}^{2}{b}^{2}{x}^{2}+128{b}^{6}{x}^{2}M{r}^{6}+592{b}^{6}{M}^{2}{x}^{6}{r}^{5}$
$-360{b}^{6}{M}^{3}{r}^{4}{x}^{6}+48{b}^{6}{M}^{2}{r}^{5}{x}^{4}+69{b}^{6}{r}^{7}{x}^{8}-38{b}^{4}{r}^{5}+12{x}^{2}r+216{b}^{6}{M}^{4}{r}^{3}{x}^{8}+672{b}^{6}{x}^{8}{M}^{2}{r}^{5}-336{b}^{6}{x}^{8}M{r}^{6}$
$-600{b}^{6}{x}^{8}{M}^{3}{r}^{4}+13{b}^{6}{r}^{7}-4r-216{b}^{4}{x}^{6}{M}^{3}{r}^{2}+504{b}^{4}{x}^{6}{M}^{2}{r}^{3}-462{b}^{4}{r}^{4}{x}^{6}M+56{b}^{2}{M}^{2}r{x}^{4}$
$\left.-148{b}^{2}M{r}^{2}{x}^{4}+126{b}^{4}{r}^{5}{x}^{6}-18{b}^{2}{r}^{3}{x}^{2}+106{b}^{4}{r}^{5}{x}^{2}-6M{x}^{2}\right)\,,$
(B.90)
${\tilde{h}}_{2}=r\left(1+{b}^{2}{r}^{2}{x}^{2}-{b}^{2}{r}^{2}\right)\left(25{b}^{6}{r}^{6}{x}^{6}-11{b}^{6}{r}^{6}{x}^{4}+59{b}^{6}{r}^{6}{x}^{2}-9{b}^{6}{r}^{6}-60{b}^{6}M{r}^{5}{x}^{6}-56{b}^{6}M{r}^{5}{x}^{4}\right.$
$-44{r}^{5}{b}^{6}M{x}^{2}+48{b}^{6}{M}^{2}{x}^{6}{r}^{4}+48{r}^{4}{b}^{6}{M}^{2}{x}^{4}+35{b}^{4}{r}^{4}{x}^{4}-38{b}^{4}{r}^{4}{x}^{2}+19{b}^{4}{r}^{4}-24{b}^{4}M{r}^{3}{x}^{4}$
$\left.+40{b}^{4}M{r}^{3}{x}^{2}-48{b}^{4}{M}^{2}{r}^{2}{x}^{4}+11{b}^{2}{r}^{2}{x}^{2}-11{b}^{2}{r}^{2}+4{b}^{2}Mr{x}^{2}+1\right)\,,$
(B.91)
${\tilde{h}}_{0}={r}^{3}\left(1+{b}^{2}{r}^{2}{x}^{2}-{b}^{2}{r}^{2}\right)^{4}\,,$
(B.92)
$\Upsilon_{x}=-16{b}^{2}{l}^{2}Mr{x}^{2}+12{b}^{2}{l}^{2}{r}^{2}{x}^{2}-4{b}^{2}{l}^{2}{r}^{2}-6{b}^{2}{l}^{4}{x}^{2}+2{b}^{2}{r}^{4}{x}^{2}-2{b}^{2}{l}^{4}-2{b}^{2}{r}^{4}-2{b}^{4}{r}^{6}{x}^{2}+{b}^{4}{r}^{6}{x}^{4}+9{l}^{6}{b}^{4}{x}^{4}$
$+6{l}^{6}{b}^{4}{x}^{2}+7{l}^{4}{b}^{4}{r}^{2}+7{l}^{2}{b}^{4}{r}^{4}+{l}^{6}{b}^{4}+{b}^{4}{r}^{6}+24{l}^{4}{b}^{4}Mr{x}^{4}+24{l}^{4}{b}^{4}Mr{x}^{2}+36{l}^{2}{b}^{4}{M}^{2}{r}^{2}{x}^{4}-8{l}^{2}{b}^{4}M{r}^{3}{x}^{2}$
$-40{l}^{2}{b}^{4}M{r}^{3}{x}^{4}+4{l}^{4}{b}^{4}{M}^{2}{x}^{4}-30{l}^{4}{b}^{4}{r}^{2}{x}^{2}-9{l}^{4}{r}^{2}{b}^{4}{x}^{4}-6{l}^{2}{b}^{4}{r}^{4}{x}^{2}+15{l}^{2}{b}^{4}{r}^{4}{x}^{4}+{r}^{2}+{l}^{2}\,.$
(B.93)
# A Two-Level Simulation-Assisted Sequential Distribution System Restoration
Model With Frequency Dynamics Constraints
Qianzhi Zhang, Zixiao Ma, Yongli Zhu, and Zhaoyu Wang This work was supported
in part by the U.S. Department of Energy Wind Energy Technologies Office under
Grant DE-EE00008956 (_Corresponding author: Zhaoyu Wang_). The authors are
with the Department of Electrical and Computer Engineering, Iowa State
University, Ames, IA 50011 USA (e-mail: EMAIL_ADDRESS; EMAIL_ADDRESS; EMAIL_ADDRESS; [email protected]).
###### Abstract
This paper proposes a service restoration model for unbalanced distribution
systems and inverter-dominated microgrids (MGs), in which frequency dynamics
constraints are developed to optimize the amount of load restoration and
guarantee the dynamic performance of system frequency response during the
restoration process. After extreme events, the damaged distribution systems
can be sectionalized into several isolated MGs to restore critical loads and
tripped non-black start distributed generations (DGs) by black start DGs.
However, the high penetration of inverter-based DGs reduces the system
inertia, which results in low-inertia issues and large frequency fluctuation
during the restoration process. To address this challenge, we propose a two-
level simulation-assisted sequential service restoration model, which includes
a mixed integer linear programming (MILP)-based optimization model and a
transient simulation model. The proposed MILP model explicitly incorporates
the frequency response into constraints, by interfacing with transient
simulation of inverter-dominated MGs. Numerical results on a modified IEEE
123-bus system have validated that the frequency dynamic performance of the
proposed service restoration model is indeed improved.
###### Index Terms:
Frequency dynamics, service restoration, network reconfiguration, inverter-
dominated microgrids, simulation-based optimization.
## Nomenclature
Sets
$\Omega_{\rm BK}$
Set of bus blocks.
$\Omega_{\rm G}$
Set of generators.
$\Omega_{\rm BS}$
Set of generators with black start capability.
$\Omega_{\rm NBS}$
Set of generators without black start capability.
$\Omega_{\rm K}$
Set of distribution lines.
$\Omega_{\rm SW_{K}}$
Set of switchable lines.
$\Omega_{\rm NSW_{K}}$
Set of non-switchable lines.
$\Omega_{\rm L}$
Set of loads.
$\Omega_{\rm SW_{L}}$
Set of switchable loads.
$\Omega_{\rm NSW_{L}}$
Set of non-switchable loads.
$\Omega_{\phi}$
Set of phases.
Indices
$BK$
Index of bus block.
$k$
Index of line.
$i,j$
Index of bus.
$t$
Index of time instant.
$\phi$
Index of three-phase $\phi_{a},\phi_{b},\phi_{c}$.
Parameters
$a_{\phi}$
Approximate relative phase unbalance.
$D_{\rm P},D_{\rm Q}$
$P-\omega$ and $Q-V$ droop gains.
$f_{0}$
Nominal steady-state frequency.
$f^{\rm min}$
Minimum allowable frequency during the transient simulation.
$M$
Big-M number.
$P_{i}^{\rm G,M},Q_{i}^{\rm G,M}$
Active and reactive power output maximum limits of generator at bus $i$.
$P_{k}^{\rm K,M},Q_{k}^{\rm K,M}$
Active and reactive power flow maximum limits of line $k$.
$p_{k,\phi}$
Phase identifier of line $k$.
$R,L$
Aggregate resistance and inductance of connections from the inverter
terminal’s point of view.
$\hat{R}_{k},\hat{X}_{k}$
Matrices of resistance and reactance of line $k$.
$T$
Length of rolling horizon.
$U_{i}^{\rm m},U_{i}^{\rm M}$
Minimum and maximum limit for squared nodal voltage magnitude of bus $i$.
$V_{\rm bus}$
Bus voltage.
$Z_{k},\hat{Z}_{k}$
Matrices of original impedance and equivalent impedance of line $k$.
$\alpha$
Hyper-parameter in frequency dynamics constraints.
$\Delta f^{\rm max}$
User-defined maximum allowable frequency drop limit.
$\Delta f^{\rm meas}$
Measured maximum transient frequency drop.
$w_{i}^{\rm L}$
Priority weight factor for load of bus $i$.
$\omega_{\rm c}$
Cut-off frequency of the low pass filter.
$\omega_{\rm set},V_{\rm set}$
Set points of frequency and voltage controllers.
$\omega_{0}$
Nominal angular frequency.
Variables
$f^{\rm nadir}$
Frequency nadir during the transient simulation.
$I_{\rm d},I_{\rm q}$
$dq$-axis current.
$P,Q$
Filtered terminal output active and reactive power.
$P^{\rm L},Q^{\rm L}$
Restored active and reactive loads.
$P_{i,\phi,t}^{\rm G}$
Three-phase active power output of generator at bus $i$, phase $\phi$, time
$t$.
$P_{i,t}^{\rm G,MLS}$
Maximum load step at bus $i$, time $t$.
$P_{k,\phi,t}^{\rm K}$
Three-phase active power flow of line $k$, phase $\phi$, time $t$.
$P_{i,\phi,t}^{\rm L}$
Restored active load at bus $i$, phase $\phi$, time $t$.
$Q_{i,\phi,t}^{\rm G}$
Three-phase reactive power output of generator at bus $i$, phase $\phi$, time
$t$.
$Q_{k,\phi,t}^{\rm K}$
Three-phase reactive power flow of line $k$, phase $\phi$, time $t$.
$U_{i,\phi,t}$
Square of the three-phase voltage magnitude.
$V$
Output voltage of the inverter.
$x_{i,t}^{\rm B}$
Binary energizing status of bus, if $x_{i,t}^{\rm B}=1$ then the bus $i$ is
energized at time $t$.
$x_{B,t}^{\rm BK}$
Binary energizing status of bus block, if $x_{B,t}^{\rm BK}=1$ then the bus
block $B$ is energized at time $t$.
$x_{i,t}^{\rm G}$
Binary switch on/off status of grid-following generator, if $x_{i,t}^{\rm
G}=1$ then the grid-following generator at bus $i$ is switched on at time $t$.
$x_{k,t}^{\rm K}$
Binary connection status of line, if $x_{k,t}^{\rm K}=1$ then the line $k$ is
connected at time $t$.
$x_{i,t}^{\rm L}$
Binary restoration status of load, if $x_{i,t}^{\rm L}=1$ then the load $i$ is
restored at time $t$.
$\Delta P_{i,t-1}^{G,MLS}$
Change of the maximum load step.
$\theta$
Output phase angle of the inverter.
$\omega$
Output angular frequency of the inverter.
## I Introduction
EXTREME events can cause severe damage to power distribution systems [1],
e.g. substation disconnection, line outage, generator tripping, load shedding,
and consequently large-scale system blackouts [2]. During the network and
service restoration, in order to isolate faults and restore critical loads, a
distribution system can be sectionalized into several isolated microgrids
(MGs) [3]. Through the MG formation, buses, lines and loads in outage areas
can be locally energized by distributed generations (DGs), where more outage
areas could be restored and the number of switching operations could be
minimized [4, 5, 6, 7, 8, 9]. In [4], the self-healing mode of MGs is
considered to provide reliable power supply for critical loads and restore the
outage areas. In [5], a networked MGs-aided approach is developed for service
restoration, which considers both dispatchable and non-dispatchable DGs. In
[6] and [7], the service restoration problem is formulated as a mixed integer
linear programming (MILP) to maximize the critical loads to be restored while
satisfying constraints for MG formation and remotely controlled devices. In
[8], the formation of adaptive multiple MGs is developed as part of the
critical service restoration strategy. In [9], a sequential service
restoration framework is proposed to generate restoration solutions for MGs in
the event of large-scale power outages. However, the previous methods mainly
use the conventional synchronous generators as the black start units, and only
consider steady-state constraints in the service restoration models, which
have limitations in the following aspects:
(1) An inverter-dominated MG can have low inertia: With the increasing
penetration of inverter-based DGs (IBDGs) in distribution systems, such as
distributed wind and photovoltaics (PVs) generations, the system inertia
becomes lower [10, 11]. When sudden changes happen, such as DG output
changing, load reconnecting, and line switching, the dynamic frequency
performance of such low-inertia distribution systems can deteriorate [12].
This issue becomes even worse when restoring low-inertia inverter-dominated
MGs. Without considering frequency dynamics constraints, the load and service
restoration decisions may not be implemented in practice.
(2) Frequency responses need to be considered: Previous studies [13, 14, 15,
16] have considered the impact of disturbances on frequency responses in the
service restoration problem using different approaches. In [13], the amount of
load restored by DGs is limited by a fixed frequency response rate and maximum
allowable frequency deviation. However, because the frequency response rate is
pre-determined in an off-line manner, the impacts of significant load
restoration, topology change, and load variations may not be fully captured by
the off-line model. In [14], the stability and security constraints are
incorporated into the restoration model. However, this model has to be solved
by meta-heuristic methods due to the nonlinearity of the stability
constraints, which may lead to large optimality gaps. In [15], even though the
transient simulation results of voltage and frequency are considered to
evaluate the potential MG restoration paths in an online manner, it adopts a
relatively complicated four-stage procedure to obtain the optimal restoration
path. In [16], a control strategy for real-time frequency regulation during network reconfiguration is developed; nonetheless, it is not co-optimized with the switch operations.
(3) Grid-forming IBDGs need to be considered: In previous studies on optimal
service restoration, IBDGs are usually modeled as grid-following sources
(i.e., PQ sources) to simply supply active and reactive power based on the
control commands. However, during the service restoration after a network
blackout and loss of connection to the upstream feeder, a grid-forming IBDG
will be needed to set up voltage and frequency references for the blackout
network [17]. During outages, the grid-following IBDGs will be switched off.
After outages, the grid-forming IBDGs have the black start capability, which
can restore loads after the faults are isolated. Because IBDGs are connected
with power electronics converters and have no rotating mass, there is no
conventional concept of “inertia” for IBDGs. Thus, control techniques such as
droop control [18, 19] and virtual synchronous generator (VSG) control [20,
21] are usually adopted to emulate the inertia property in IBDGs.
To alleviate the frequency fluctuations caused by service restoration, we establish an MILP-based optimization model with frequency dynamics constraints for sequential service restoration. The model generates sequential actions for remotely controlled switches, restoration statuses for buses, lines, and loads, and operation actions for grid-forming and grid-following IBDGs, and it interacts with the transient simulation of inverter-dominated MGs. Inspired by recent advances in
simulation-assisted methods [15, 22] and to incorporate the frequency dynamics
constraints explicitly in the optimization formulation, we associate the
frequency nadir of the transient simulation with respect to the maximum load
that a MG can restore. Although some previous works have considered the
transient simulation as well in finding the optimal restoration solution, they
either adopt a heuristic framework or merely use the transient simulation
to validate the feasibility of the obtained restoration solution after solving
an optimization problem. By contrast, the proposed two-level simulation-
assisted restoration model directly incorporates the transient simulation
module on top of a strict MILP optimization problem via explicit constraints,
thus its solving process is more tractable and straightforward.
The main contributions of this paper are twofold:
* •
We develop a two-level simulation-assisted sequential service restoration
model within a rolling horizon framework, which combines a MILP-based
optimization level of service restoration and a transient simulation level of
inverter-dominated MGs.
* •
Frequency dynamics constraints are developed and explicitly incorporated in
the optimization model, to associate the simulated frequency responses with
the decision variables of maximum load step at each stage. These constraints
help restrict the system frequency drop during the transient periods of
restoration. Thus, the generated restoration solution can be more secure and
practical.
The remainder of the paper is organized as follows: Section II presents the
overall framework of the proposed service restoration model. Section III
introduces frequency dynamics constrained MILP-based sequential service
restoration. Section IV describes transient simulation of inverter-dominated
MGs. Numerical results and conclusions are given in Section V and Section VI,
respectively.
## II Overview of the Proposed Service Restoration Model
The general framework of the proposed two-level simulation-assisted service
restoration is shown in Fig. 1, including an optimization level with an MILP-based sequential service restoration model and a transient simulation level with a $7$th-order electromagnetic inverter-dominated MG dynamic model. After
outages, the fault-affected areas of the distribution system will be isolated.
Consequently, each isolated sub-network can be considered as a MG [23], which
can be formed by the voltage and frequency supports from the grid-forming
IBDGs, and active and reactive power supplies from the grid-following IBDGs.
In the proposed optimization level, each MG will determine its restoration
solutions, including optimal service restoration status of loads, optimal
operation of remotely controlled switches and optimal active and reactive
power dispatches of IBDGs. To prevent large frequency fluctuation due to a
large load restoration, the maximum restorable load for a given period is
limited by the proposed frequency dynamics constraints. In this way, the whole
restoration process is divided into multiple stages. As shown in Fig. 1, the
information exchanged between the optimization level and the simulation level
are the restoration solution (obtained from optimization) and MG system
frequency nadir value (obtained from transient simulation): at each
restoration stage, the optimization level will obtain and send the optimal
restoration solution to the simulation level; then, after receiving the
restoration solution, the simulation level will begin to run transient
simulation by the proposed dynamic model of each inverter-dominated MG, and
send the frequency nadir value to the optimization level for next restoration
stage.
Figure 1: The overall framework of the proposed service restoration model with
optimization level and simulation level.
To accurately reflect the dynamic frequency-supporting capacities of grid-
forming IBDGs during the service restoration process, a rolling-horizon
framework is implemented in the proposed service restoration model, as shown
in Fig. 2. More specifically, we repeatedly run the MILP-based sequential
service restoration model by incorporating the network configuration from the
preceding stage as the initial condition, and then feed back the frequency
nadir value from the transient simulation to the frequency dynamics
constraints. For each stage: (1) the horizon length will be fixed; (2) then
only the restoration solution of the first horizon of the current stage is
retained and transferred to the simulation level, while the remaining horizons
are discarded; (3) this process will keep going until the maximum restored
load is reached in each MG. More details about the principles of rolling
horizon can be found in [24].
Figure 2: Implementation of rolling-horizon in the proposed restoration model.
## III Frequency Dynamics Constrained Service Restoration
This section presents the mathematical formulation for coordinating remotely
controlled switches, grid-forming and grid-following IBDGs, and the sequential
restoration status of buses, lines and loads. Here, we consider an unbalanced three-phase radial distribution system. The three phases $\phi_{a},\phi_{b},\phi_{c}$ are abbreviated as $\phi$. Define the set
$\Omega_{\rm L}=\Omega_{\rm SW_{L}}\cup\Omega_{\rm NSW_{L}}$, where
$\Omega_{\rm SW_{L}}$ and $\Omega_{\rm NSW_{L}}$ represent the set of
switchable loads and the set of non-switchable loads, respectively. Define the
set $\Omega_{\rm G}=\Omega_{\rm BS}\cup\Omega_{\rm NBS}$, where $\Omega_{\rm
BS}$ and $\Omega_{\rm NBS}$ represent the set of grid-forming IBDGs with black
start capability and the set of grid-following IBDGs without black start
capability, respectively. Define the set $\Omega_{\rm K}=\Omega_{\rm
SW_{K}}\cup\Omega_{\rm NSW_{K}}$, where $\Omega_{\rm SW_{K}}$ and $\Omega_{\rm NSW_{K}}$ represent the set of switchable lines and the set of non-switchable
lines, respectively. Define $\Omega_{\rm BK}$ as the set of bus blocks, where
bus block [9] is a group of buses interconnected by non-switchable lines and
those bus blocks are interconnected by switchable lines. It is assumed that
a bus block can be energized by grid-forming IBDGs. By forcing the related
binary variables of faulted lines to be zeros, each faulted area remains
isolated during the restoration process.
### III-A MILP-based Sequential Service Restoration Formulation
The objective function (1) aims to maximize the total restored loads with
priority factor $w_{i}^{L}$ over a rolling horizon $[t,t+T]$ as shown below:
$\max\sum_{t\in[t,t+T]}\sum_{i\in\Omega_{L}}\sum_{\phi\in\Omega_{\phi}}(w_{i}^{\rm
L}x_{i,t}^{\rm L}P_{i,\phi,t}^{\rm L})$ (1)
where $P_{i,\phi,t}^{\rm L}$ and $x_{i,t}^{\rm L}$ are the restored load and
restoration status of load at $t$. If the load demand $P_{i,\phi,t}^{\rm L}$
is restored, then $x_{i,t}^{\rm L}=1$. $T$ is the horizon length in the rolling
horizon optimization problem. In this work, the amount of restored load is
also bounded by frequency dynamics constraints with respect to frequency
response and maximum load step. More details of frequency dynamics constraints
are discussed in Section III-B.
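For concreteness, a minimal sketch of objective (1) in Python with the open-source PuLP modeling library is shown below (the paper's experiments use YALMIP with CPLEX in MATLAB; PuLP is an assumption chosen only for illustration). The data containers `loads`, `phases`, `horizon`, `w`, and `P_L` are hypothetical, and the per-phase load $P_{i,\phi,t}^{\rm L}$ is treated as fixed demand data so that the objective stays linear in the binary variables.

```python
import pulp

# Hypothetical data: two loads, three phases, a 4-step rolling horizon
loads, phases, horizon = ["L1", "L2"], ["a", "b", "c"], range(4)
w = {"L1": 1.0, "L2": 1.0}                      # priority weights w_i^L
P_L = {(i, ph, t): 10.0                         # per-phase demand (kW), assumed fixed
       for i in loads for ph in phases for t in horizon}

prob = pulp.LpProblem("service_restoration", pulp.LpMaximize)
x_L = pulp.LpVariable.dicts("x_L", (loads, horizon), cat="Binary")

# Objective (1): maximize the weighted restored load over the rolling horizon
prob += pulp.lpSum(w[i] * P_L[i, ph, t] * x_L[i][t]
                   for i in loads for ph in phases for t in horizon)
```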
Constraints (2)-(11) are defined by the unbalanced three-phase version of
linearized DistFlow model [25, 26] in each formed MG during the service
restoration process. Constraints (2) and (3) are the nodal active and reactive
power balance constraints, where $P_{k,\phi,t}^{\rm K}$ and $Q_{k,\phi,t}^{\rm
K}$ are the active and reactive power flows along line $k$, and $P^{\rm
G}_{i,\phi,t}$ and $Q^{\rm G}_{i,\phi,t}$ are the power outputs of the
generators. Constraints (4) and (5) represent the active and reactive power
limits of the lines, where the limits ($P_{k}^{\rm K,M}$ and $Q_{k}^{\rm
K,M}$) are multiplied by the line status binary variable $x_{k,t}^{\rm K}$.
Therefore, if a line is disconnected or damaged ($x_{k,t}^{\rm K}=0$), then constraints (4) and (5) force the line flows to zero, which means that power cannot flow
through this line. In the proposed model, there are two types of IBDGs, grid-
forming IBDGs with black start capability and grid-following IBDGs without
black start capability. On the one hand, the grid-forming IBDGs can provide
voltage and frequency references in the MG during the restoration process,
which can energize the bus and restore the part of the network that is not
damaged if the fault is isolated. Therefore, the grid-forming IBDGs are
considered to be connected to the network at the beginning of restoration. On
the other hand, the grid-following IBDGs are switched off at the beginning of
restoration. If the grid-following IBDGs are connected to an energized bus
during the restoration process, then they can be switched on to supply active
and reactive powers. In constraints (6) and (7), the active and reactive power
outputs of the grid-forming IBDGs are limited by the maximum active and
reactive capacities $P_{i}^{\rm G,M}$ and $Q_{i}^{\rm G,M}$, respectively.
Constraints (8) and (9) limit the active and reactive outputs of the grid-
following IBDGs. Note that the constraints (8) and (9) of grid-following IBDGs
are multiplied by binary variable $x_{i,t}^{\rm G}$. Consequently, if one
grid-following IBDG is not energized ($x_{i,t}^{\rm G}=0$) during the
restoration process, then constraints (8) and (9) of this grid-following IBDG
will be relaxed.
$\displaystyle\sum_{k\in\Omega_{\rm K}(i,.)}P_{k,\phi,t}^{\rm
K}-\sum_{k\in\Omega_{\rm K}(.,i)}P_{k,\phi,t}^{\rm K}=P^{\rm
G}_{i,\phi,t}-x_{i,t}^{\rm L}P_{i,\phi,t}^{\rm L},\forall i,\phi,t$ (2)
$\displaystyle\sum_{k\in\Omega_{\rm K}(i,.)}Q_{k,\phi,t}^{\rm
K}-\sum_{k\in\Omega_{\rm K}(.,i)}Q_{k,\phi,t}^{\rm K}$ $\displaystyle=Q^{\rm
G}_{i,\phi,t}-x_{i,t}^{\rm L}Q_{i,\phi,t}^{\rm L},\forall i,\phi,t$ (3)
$-x_{k,t}^{\rm K}P_{k}^{\rm K,M}\leq P_{k,\phi,t}^{\rm K}\leq x_{k,t}^{\rm
K}P_{k}^{\rm K,M},\forall k\in\Omega_{\rm K},\phi,t$ (4) $-x_{k,t}^{\rm
K}Q_{k}^{\rm K,M}\leq Q_{k,\phi,t}^{\rm K}\leq x_{k,t}^{\rm K}Q_{k}^{\rm
K,M},\forall k\in\Omega_{\rm K},\phi,t$ (5) $0\leq P_{i,\phi,t}^{\rm G}\leq
P_{i}^{\rm G,M},\forall i\in\Omega_{\rm BS},\phi,t$ (6) $0\leq
Q_{i,\phi,t}^{\rm G}\leq Q_{i}^{\rm G,M},\forall i\in\Omega_{\rm BS},\phi,t$
(7) $0\leq P_{i,\phi,t}^{\rm G}\leq x_{i,t}^{\rm G}P_{i}^{\rm G,M},\forall
i\in\Omega_{\rm NBS},\phi,t$ (8) $0\leq Q_{i,\phi,t}^{\rm G}\leq x_{i,t}^{\rm
G}Q_{i}^{\rm G,M},\forall i\in\Omega_{\rm NBS},\phi,t$ (9)
Constraints (10) and (11) calculate the voltage difference along line $k$
between bus $i$ and bus $j$, where $U_{i,\phi,t}$ is the square of voltage
magnitude of bus $i$. We use the big-M method [9] to relax constraints (10)
and (11): if lines are damaged or disconnected, then $x_{k,t}^{\rm K}=0$. The
$p_{k,\phi}$ represents the phase identifier for phase $\phi$ of line $k$. For
example, if line $k$ is a single-phase line on phase a, then
$p_{k,\phi_{a}}=1$, $p_{k,\phi_{b}}=0$ and $p_{k,\phi_{c}}=0$. Constraint (12)
guarantees that the voltage is limited within a specified region [$U^{\rm
m}_{i}$,$U^{\rm M}_{i}$], and forces it to 0 if the bus is in an outage area ($x_{i,t}^{\rm B}=0$).
$\begin{split}U_{i,\phi,t}-U_{j,\phi,t}\geq&2(\hat{R}_{k}P_{k,\phi,t}^{\rm
K}+\hat{X}_{k}Q_{k,\phi,t}^{\rm K})\\\ &+(x_{k,t}^{\rm
K}+p_{k,\phi}-2)M,\forall k,ij\in\Omega_{\rm K},\phi,t\end{split}$ (10)
$\begin{split}U_{i,\phi,t}-U_{j,\phi,t}\leq&2(\hat{R}_{k}P_{k,\phi,t}^{\rm
K}+\hat{X}_{k}Q_{k,\phi,t}^{\rm K})\\\ &+(2-x_{k,t}^{\rm
K}-p_{k,\phi})M,\forall k,ij\in\Omega_{\rm K},\phi,t\end{split}$ (11)
$x_{i,t}^{\rm B}U^{\rm m}_{i}\leq U_{i,\phi,t}\leq x_{i,t}^{\rm B}U^{\rm
M}_{i},\forall i,\phi,t$ (12)
where $\hat{R}_{k}$ and $\hat{X}_{k}$ are the unbalanced three-phase
resistance matrix and reactance matrix of line $k$. To model the unbalanced
three-phase network, we assume that the distribution network is not too
severely unbalanced and operates around the nominal voltage, so that the relative
phase unbalance can be approximated as $a_{\phi}=[1,{e}^{-{\bf
i}2\pi/3},{e}^{{\bf i}2\pi/3}]^{T}$ [25]. Therefore, the equivalent unbalanced
three-phase system line impedance matrix $\hat{Z}_{k}$ can be calculated based
on the original line impedance matrix $Z_{k}$ and $a_{\phi}$ in (13).
$\hat{R}_{k}$ and $\hat{X}_{k}$ are the real and imaginary parts of
$\hat{Z}_{k}$, as shown in (14). Note that the loads and IBDGs are also
modelled in a three-phase form. More details about the model of unbalanced three-phase distribution systems can be found in [26].
$\hat{Z}_{k}=a_{\phi}a_{\phi}^{H}\odot Z_{k}$ (13)
$\hat{R}_{k}=real(\hat{Z}_{k}),\hat{X}_{k}=imag(\hat{Z}_{k})$ (14)
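As a quick numeric check of (13)-(14), the NumPy sketch below forms $\hat{Z}_{k}=a_{\phi}a_{\phi}^{H}\odot Z_{k}$ and splits it into $\hat{R}_{k}$ and $\hat{X}_{k}$; the sample impedance matrix is made up purely for illustration.

```python
import numpy as np

# Relative phase unbalance a_phi = [1, e^{-i2pi/3}, e^{i2pi/3}]^T, per (13)
a = np.array([1.0, np.exp(-1j * 2 * np.pi / 3), np.exp(1j * 2 * np.pi / 3)])

def equivalent_impedance(Z_k: np.ndarray):
    """Return (R_hat, X_hat) for a 3x3 phase impedance matrix Z_k,
    using the Hadamard product Z_hat = (a a^H) . Z_k from (13)-(14)."""
    Z_hat = np.outer(a, a.conj()) * Z_k       # elementwise (Hadamard) product
    return Z_hat.real, Z_hat.imag

# Illustrative (made-up) impedance matrix in ohms
Z_k = np.array([[0.3 + 0.6j, 0.1 + 0.2j, 0.1 + 0.2j],
                [0.1 + 0.2j, 0.3 + 0.6j, 0.1 + 0.2j],
                [0.1 + 0.2j, 0.1 + 0.2j, 0.3 + 0.6j]])
R_hat, X_hat = equivalent_impedance(Z_k)
```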
Constraints (15)-(22) ensure the physical connections among buses, lines,
IBDGs and loads during restoration process. In constraint (15), the grid-
following IBDGs will be switched on $x_{i,t}^{\rm G}=1$, if the connected bus
is energized $x_{i,t}^{\rm B}=1$; otherwise, $x_{i,t}^{\rm G}=0$. Constraint
(16) implies a switchable line can only be energized when both end buses are
energized. Constraint (17) presents that a non-switchable line can be
energized once one of two end buses is energized. Constraint (18) ensures that
a switchable load can be energized $x_{i,t}^{\rm L}=1$, if the connected bus
is energized $x_{i,t}^{\rm B}=1$; otherwise, $x_{i,t}^{\rm L}=0$. Constraint
(19) allows that a non-switchable load can be immediately energized once the
connected bus is energized. Constraints (20)-(22) ensure that the grid-
following IBDGs, switchable lines and loads cannot be tripped again, if they
have been energized at the previous time $t-1$.
$x_{i,t}^{\rm G}\leq x_{i,t}^{\rm B},\forall i\in\Omega_{\rm NBS},t$ (15)
$x_{k,t}^{\rm K}\leq x_{i,t}^{\rm B},x_{k,t}^{\rm K}\leq x_{j,t}^{\rm
B},\forall k,ij\in\Omega_{\rm SW_{K}},t$ (16) $x_{k,t}^{\rm K}=x_{i,t}^{\rm
B},x_{k,t}^{\rm K}=x_{j,t}^{\rm B},\forall k,ij\in\Omega_{\rm NSW_{K}},t$ (17)
$x_{i,t}^{\rm L}\leq x_{i,t}^{\rm B},\forall i\in\Omega_{\rm SW_{L}},t$ (18)
$x_{i,t}^{\rm L}=x_{i,t}^{\rm B},\forall i\in\Omega_{\rm NSW_{L}},t$ (19)
$x_{i,t}^{\rm G}-x_{i,t-1}^{\rm G}\geq 0,\forall i\in\Omega_{\rm NBS},t$ (20)
$x_{k,t}^{\rm K}-x_{k,t-1}^{\rm K}\geq 0,\forall k\in\Omega_{\rm SW_{k}},t$
(21) $x_{i,t}^{\rm L}-x_{i,t-1}^{\rm L}\geq 0,\forall i\in\Omega_{\rm
SW_{L}},t$ (22)
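Continuing the PuLP sketch from above, the logical couplings (15), (16), and (20) translate directly into linear inequalities over binary variables; the sets and variable dictionaries below are hypothetical extensions of that sketch.

```python
# Hypothetical network data: bus ids, one switchable line (id, bus i, bus j),
# and the buses hosting grid-following IBDGs
buses, sw_lines, nbs_gens = ["B1", "B2"], [("K1", "B1", "B2")], ["B2"]
x_B = pulp.LpVariable.dicts("x_B", (buses, horizon), cat="Binary")
x_K = pulp.LpVariable.dicts("x_K", ([k for k, _, _ in sw_lines], horizon), cat="Binary")
x_G = pulp.LpVariable.dicts("x_G", (nbs_gens, horizon), cat="Binary")

for t in horizon:
    for i in nbs_gens:                 # (15): IBDG on only at an energized bus
        prob += x_G[i][t] <= x_B[i][t]
    for k, i, j in sw_lines:           # (16): switchable line needs both end buses
        prob += x_K[k][t] <= x_B[i][t]
        prob += x_K[k][t] <= x_B[j][t]
    if t >= 1:                         # (20): once energized, never tripped again
        for i in nbs_gens:
            prob += x_G[i][t] - x_G[i][t - 1] >= 0
```

Once all constraints are added, `prob.solve()` invokes PuLP's bundled CBC solver.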
Constraints (23)-(25) ensure that each formed MG remains isolated from each
other and each MG can maintain a tree topology during the restoration process.
Constraint (23) implies that if one bus $i$ is located in one bus block,
$i\in\Omega_{\rm BK}$, then the energization statuses of the bus and the corresponding bus block remain the same. Here $x_{B,t}^{\rm BK}$ represents the
energization status of bus block $BK$. To avoid forming loop topology,
constraint (24) guarantees that a switchable line cannot be closed at time $t$
if both of its end bus blocks are already energized at the previous time $t-1$. Note that the DistFlow model is valid for radial distribution networks; therefore, loop topology is not considered in this work. If one bus block is not
energized at previous time $t-1$, then constraint (25) makes sure that this
bus block can only be energized at time $t$ by at most one of the connected
switchable lines. Constraints (26) and (27) ensure that each formed MG has a
reasonable restoration and energization sequence of switchable lines and bus
blocks. Constraint (26) implies that energized switchable lines can energize the connected bus block. Constraint (27) requires that a switchable line can only be energized at time $t$ if at least one of its connected bus blocks is energized at the previous time $t-1$.
$x_{i,t}^{\rm B}=x_{i,t}^{\rm BK},\forall i\in\Omega_{\rm BK},t$ (23)
$\begin{split}(x_{i,t}^{\rm BK}-x_{i,t-1}^{\rm BK})&+(x_{j,t}^{\rm
BK}-x_{j,t-1}^{\rm BK})\\\ &\geq x^{\rm K}_{k,t}-x^{\rm K}_{k,t-1},\forall
k,ij\in\Omega_{\rm SW_{K}},t\geq 2\end{split}$ (24)
$\begin{split}\sum_{ki,k\in\Omega_{i}}(x^{\rm K}_{ki,t}-x^{\rm
K}_{ki,t-1})&+\sum_{ij,j\in\Omega_{i}}(x^{\rm K}_{ij,t}-x^{\rm K}_{ij,t-1})\\\
&\leq 1+x_{i,t-1}^{\rm BK}M,\forall k,ij\in\Omega_{\rm SW_{K}},t\geq
2\end{split}$ (25) $x_{i,t-1}^{\rm BK}\leq\sum_{ki,k\in\Omega_{i}}(x^{\rm
K}_{ki,t})+\sum_{ij,j\in\Omega_{i}}(x^{\rm K}_{ij,t}),\forall
k,ij\in\Omega_{\rm SW_{K}},t\geq 2$ (26) $x^{\rm K}_{ij,t}\leq x_{i,t-1}^{\rm
BK}+x_{j,t-1}^{\rm BK},\forall ij\in\Omega_{\rm SW_{K}},t\geq 2$ (27)
### III-B Simulation-based Frequency Dynamics Constraints
By considering the frequency dynamics of each isolated inverter-dominated MG
during the transitions of network reconfiguration and service restoration,
constraints (28) and (30) have been added here to avoid the potential large
frequency deviations caused by MG formation and oversized load restoration.
The variable of maximum load step $P_{i,t}^{G,MLS}$ has been applied in
constraint (28) to ensure that the restored load is limited by an upper bound
for each restoration stage, as follows:
$\begin{split}0\leq P_{i,t}^{\rm G,MLS}&\leq P_{i,t-1}^{\rm G,MLS}\\\
&+\alpha(\Delta f^{\rm max}-\Delta f^{\rm meas}),\forall i\in\Omega_{\rm
BS},t\geq 2\end{split}$ (28)
In constraint (28), the variable $P_{i,t}^{G,MLS}$ is restricted by three
items: a hyper-parameter $\alpha$ representing the virtual frequency-power
characteristic of IBDGs, a user-defined maximum allowable frequency drop limit
$\Delta f^{\rm max}$ and the measured maximum transient frequency drop from
the results of simulation level $\Delta f^{\rm meas}$. The hyper-parameter
$\alpha$ is used to keep the frequency nadir during transients from dropping too low.
This can be shown by the following expressions:
$\displaystyle\alpha(\Delta f^{\rm max}-\Delta f^{\rm meas})$
$\displaystyle=\alpha(f_{0}-f^{\rm min}-(f_{0}-f^{\rm nadir}))$
$\displaystyle=\alpha(f^{\rm nadir}-f^{\rm min})$
$\displaystyle\triangleq\Delta P_{i,t-1}^{\rm G,MLS}$ (29)
where $f_{0}$ is the nominal steady-state frequency, e.g., 60 Hz. $f^{\rm
nadir}$ is the lowest frequency reached during the transient simulation.
$f^{\rm min}$ is the minimum allowable frequency. $\Delta P_{i,t-1}^{G,MLS}$
is the incremental change of the maximum load step for the next step $t$
(estimated at step $t-1$). Finally, constraint (30) ensures the restored load
and frequency response of the IBDGs do not exceed the user-defined thresholds.
$\begin{split}-x_{i,t}^{\rm G}P_{i,t}^{\rm G,MLS}\leq P_{i,\phi,t}^{\rm
G}-P_{i,\phi,t-1}^{\rm G}\leq&x_{i,t}^{\rm G}P_{i,t}^{\rm G,MLS}\\\
&,i\in\Omega_{\rm BS},\phi,t\geq 2\end{split}$ (30)
Note that the generator ramp rate is no longer a constant as in the previous literature; it varies with the value of $P_{i,t}^{\rm G,MLS}$ from (28) during the optimization process, using the frequency-deviation information from the transient simulation. When $f^{\rm nadir}$ approaches $f^{\rm min}$, the potential amount of restored load in the next step must be reduced; accordingly, the incremental change of the maximum load step $\Delta P_{i,t}^{\rm G,MLS}$ shrinks.
During the restoration process, the restored load in each restoration stage is
determined by the maximum load step and the available DG power output through the power balance constraints (2), (3) and constraints (28), (30) in the optimization level; then, the frequency deviation in each restoration stage is determined by the restored load through the transient model in the simulation level, which is introduced in the next section.
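A minimal numeric sketch of the update implied by (28)-(29) is given below; the helper name and the values assumed for $f^{\rm min}$, $\alpha$, and the previous bound are illustrative, not the paper's settings.

```python
def update_max_load_step(p_mls_prev: float, f_nadir: float,
                         f_min: float = 59.5, alpha: float = 0.1) -> float:
    """Upper bound on the maximum load step for the next stage, per (28)-(29):
    the increment is alpha * (f_nadir - f_min) and the bound stays nonnegative."""
    return max(0.0, p_mls_prev + alpha * (f_nadir - f_min))

# If the last transient simulation reported a nadir of 59.70 Hz, the bound
# grows by 0.1 * (59.70 - 59.5) = 0.02 in the model's power units; a nadir
# below f_min would shrink the bound instead.
bound = update_max_load_step(p_mls_prev=300.0, f_nadir=59.70)
```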
## IV Transient Simulation of Inverter-Dominated MG Formation
In the optimization level, our target is to maximize the amount of restored load while satisfying a series of constraints. One of these constraints is the frequency dynamics constraint, which is derived from the simulation level. However, due to the different time scales and the nonlinearity, conventional dynamic security conditions, such as those based on Lyapunov theory or LaSalle’s theorem, cannot be directly embedded in the optimization problem. Therefore, we need a connection variable between the two levels.
For this purpose, we assume that the changes of topologies between each two
sequential stages can be represented by the change of restored loads $P^{\rm
L}$. The sudden load change of $P^{\rm L}$ results in a disturbance in MGs in
the time-scale of the simulation level. During the transient to the new equilibrium (operating point), the system states such as frequency will
deviate from their nominal values. Therefore, it is natural to estimate the
dynamic security margin with the allowed maximum range of deviations.
Figure 3: Diagram of studied MG control system.
Since the frequency of each inverter-dominated MG is mainly controlled by the
grid-forming IBDGs, we can approximate the maximum frequency deviation during
the transient by observing the dynamic response of the grid-forming IBDGs
under sudden load change. In this paper, the standard outer droop control
together with an inner double-loop control structure is adopted for each IBDG unit. As shown in Fig. 3, the three-phase output voltage $V_{0,abc}$ and
current $I_{0,abc}$ are measured from the terminal bus of the inverter and
first transformed into the $dq$ axis. Then, the filtered terminal output active
and reactive power $P$ and $Q$ are obtained by filtering the calculated power
measurements $P^{\rm meas}$ and $Q^{\rm meas}$ with cut-off frequency
$\omega_{\rm c}$. Finally, the voltage and frequency references for the inner
control loop are calculated with droop controller. Since the references can be
accurately tracked by inner control loop with properly tuned PID parameters in
the much faster time-scale, the output voltage $V$ and frequency $\omega$ can
be considered equivalently as the references generated by the droop
controller. Thus, the inverter can be modelled effectively modelled by using
the terminal states and line states of the inverter [18, 19]. In this work,
the transient simulation is conducted with the detailed mathematical MG model
(31)–(37) adopted from [18], where the droop equations (34) and (35) are
replaced by the ones proposed in [19] to consider the restored loads.
$\displaystyle\dot{P}$ $\displaystyle=\omega_{\rm c}(V\cos{\theta}I_{\rm
d}+V\sin{\theta}I_{\rm q}-P),$ (31) $\displaystyle\dot{Q}$
$\displaystyle=\omega_{\rm c}(V\sin{\theta}I_{\rm d}-V\cos{\theta}I_{\rm
q}-Q),$ (32) $\displaystyle\dot{\theta}$ $\displaystyle=\omega-\omega_{0},$
(33) $\displaystyle\dot{\omega}$ $\displaystyle=\omega_{\rm c}(\omega_{\rm
set}-\omega+D_{\rm P}(P-P^{\rm L})),$ (34) $\displaystyle\dot{V}$
$\displaystyle=\omega_{\rm c}(V_{\rm set}-V+D_{\rm Q}(Q-Q^{\rm L})),$ (35)
$\displaystyle\dot{I}_{\rm d}$ $\displaystyle=(V\cos{\theta}-V_{\rm
bus}-RI_{\rm d})/L+\omega_{0}I_{\rm q},$ (36) $\displaystyle\dot{I}_{\rm q}$
$\displaystyle=(V\sin{\theta}-RI_{\rm q})/L-\omega_{0}I_{\rm d},$ (37)
where $\omega_{\rm set}$ and $V_{\rm set}$ are the set points of frequency and
voltage controllers, respectively; $\omega_{\rm c}$ is cut-off frequency;
$D_{\rm P}$ and $D_{\rm Q}$ are $P-\omega$ and $Q-V$ droop gains,
respectively; $P^{\rm L}$ and $Q^{\rm L}$ are the restored active and reactive
loads, respectively; $\theta$ is phase angle; $\omega$ is angular frequency in
$rad/s$; $\omega_{0}$ is a fixed angular frequency; $V_{\rm bus}$ is bus
voltage; $I_{\rm d}$ and $I_{\rm q}$ are $dq$-axis currents; $R$ and $L$ are
aggregate resistance and inductance of connections from the inverter
terminal’s point of view, respectively. In (34), it can be observed that the
equilibrium can be achieved when $\omega=\omega_{\rm set}$ and $P=P^{\rm L}$,
which means that the output frequency tracks the frequency reference when the
output power of the simulation level tracks the obtained restored load of the
optimization level.
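For illustration, the sketch below integrates the model (31)-(37) with SciPy and reads off the frequency nadir after a step change in the restored load; every parameter value is a hypothetical placeholder rather than the paper's setting.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Hypothetical parameters (for illustration only)
wc, w0, wset = 2*np.pi*30.0, 2*np.pi*60.0, 2*np.pi*60.0
Vset, Vbus = 1.0, 1.0            # voltage set point and bus voltage
DP, DQ = 1e-5, 1e-5              # P-omega and Q-V droop gains
R, L = 0.1, 1e-3                 # aggregate line resistance and inductance
PL, QL = 0.5, 0.1                # restored load step (the disturbance)

def mg_dynamics(t, s):
    P, Q, th, w, V, Id, Iq = s
    dP  = wc*(V*np.cos(th)*Id + V*np.sin(th)*Iq - P)      # (31)
    dQ  = wc*(V*np.sin(th)*Id - V*np.cos(th)*Iq - Q)      # (32)
    dth = w - w0                                          # (33)
    dw  = wc*(wset - w + DP*(P - PL))                     # (34)
    dV  = wc*(Vset - V + DQ*(Q - QL))                     # (35)
    dId = (V*np.cos(th) - Vbus - R*Id)/L + w0*Iq          # (36)
    dIq = (V*np.sin(th) - R*Iq)/L - w0*Id                 # (37)
    return [dP, dQ, dth, dw, dV, dId, dIq]

s0 = [0.0, 0.0, 0.0, w0, Vset, 0.0, 0.0]                  # pre-step state guess
sol = solve_ivp(mg_dynamics, (0.0, 2.0), s0, max_step=1e-3)
f_nadir = sol.y[3].min() / (2*np.pi)                      # nadir in Hz, fed to (28)
```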
Note that constraint (28) is the connection between the optimization level and
simulation level in our proposed two-level simulation-assisted restoration
model, which incorporates the frequency response of the inverter-dominated MGs from
the simulation level into the optimization level. The variable $P_{i,t}^{\rm
G,MLS}$ is restricted by frequency response in constraint (28). Meanwhile,
$P_{i,t}^{\rm G,MLS}$ also limits the IBDG power output in constraint (30). In
constraints (2) and (3), the power balance is met between restored load and
power supply of IBDGs. Therefore, we associate the frequency nadir of the
transient simulation with respect to the restored load by incorporating the
frequency dynamics constraints explicitly in the optimization level.
After fault detection [27] and sub-grid isolation are
finished, the proposed service restoration model will begin to work. Each
isolated network will begin to form a MG depending on the location of the
nearest grid-forming IBDG with black start capability. The flowchart of the
proposed restoration method is shown in Fig. 4 and the interaction between the
proposed transient simulation and the established optimization problem of
service restoration is described as follows:
Figure 4: Flowchart of the proposed two-level simulation-assisted restoration
method.
(a) Solving the optimal service restoration problem: Given horizon length $T$
in each restoration stage, the MILP-based sequential service restoration
problem (1)–(28) and (30) is solved, and the restoration solution is obtained
for each formed MG.
(b) Transient simulation of inverter-dominated MGs: After receiving
restoration solutions of current stage from optimization level, the frequency
response is simulated by (31)–(37) and the frequency nadir is calculated for
each inverter-dominated MG.
(c) Check the progress of service restoration and stopping criteria: If the
maximum service restoration level is reached for all the MGs, then stop the
restoration process; otherwise, go back to (a) to generate the restoration
solution with the newly obtained frequency responses of all MGs for the next
restoration stage.
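These three steps can be summarized by a short rolling-horizon driver loop, sketched below; `solve_restoration_milp`, `simulate_frequency_nadir`, and `fully_restored` are hypothetical stand-ins for the MILP (1)-(28), (30) and the transient model (31)-(37), with toy bodies so the snippet runs.

```python
# Hypothetical stand-ins for the two levels (toy bodies for illustration)
def solve_restoration_milp(mg, f_nadir):   # MILP level, first-horizon decisions
    return {"restored_kw": 200.0}
def simulate_frequency_nadir(mg, plan):    # transient level, (31)-(37)
    return 59.7
def fully_restored(mg, plan):
    return True

def two_level_restoration(mgs, max_stages=10):
    f_nadir = {mg: None for mg in mgs}     # no frequency info before stage 1
    for stage in range(1, max_stages + 1):
        # (a) solve the MILP and keep only the first-horizon decisions
        plan = {mg: solve_restoration_milp(mg, f_nadir[mg]) for mg in mgs}
        # (b) simulate each MG's transient and record its frequency nadir
        f_nadir = {mg: simulate_frequency_nadir(mg, plan[mg]) for mg in mgs}
        # (c) stop once every MG has reached its maximum restoration level
        if all(fully_restored(mg, plan[mg]) for mg in mgs):
            return stage
    return max_stages

stages_used = two_level_restoration(["MG1", "MG2", "MG3", "MG4"])
```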
## V Numerical Results
### V-A Simulation Setup
A modified IEEE 123-bus test system [28] in Fig. 5 is used to test the
performance of the proposed frequency dynamics constrained service restoration
model. In Fig. 5, blue dotted line and blue dot stand for single-phase line
and bus, orange dashed line and orange dot stand for two-phase line and bus,
black line and black dot stand for three-phase line and bus, respectively. The
modified test system has been equipped with multiple remotely controlled
switches, as shown in Fig. 5. In Table I, the locations and capacities of
grid-following and grid-forming IBDGs are shown. Four line faults are detected on the lines between the substation and bus 1, between bus 14 and bus 19, between bus 14 and bus 54, and between bus 62 and bus 70, as shown by the red dotted lines in Fig. 5. The faults are assumed to persist during the restoration process until cleared, so as to maintain the radial topology and isolate the faulty areas.
Consequently, four MGs can be formed for service restoration with grid-forming
IBDGs and switches. For the sake of simplicity, we assume that the weight
factors for all loads are set to 1 during the restoration process. We
demonstrate the effectiveness of our proposed service restoration model
through numerical evaluations on the following experiments: (i) Comparison
between a base case (i.e., without the proposed frequency dynamics
constraints) and the case with the proposed restoration model. (ii) Cases with
the proposed restoration model under different values of hyper-parameters. All
the case studies are implemented using a PC with Intel Core i7-4790 3.6 GHz
CPU and 16 GB RAM hardware. The simulations are performed in MATLAB R2019b,
which integrates the YALMIP Toolbox with the IBM ILOG CPLEX 12.9 solver and an ordinary differential equation solver.
Figure 5: Modified IEEE 123 node test feeder.

TABLE I: Locations and Capacities of Grid-following and Grid-forming IBDGs in the modified IEEE 123 Node Test Feeder.

| Type | Locations | Capacities |
|---|---|---|
| Grid-following IBDG (1-$\phi$) | 5, 11, 16, 28, 40, 42, 47, 81, 83, 90, 97, 107, 110, 116 | 80 kW and 40 kVAr per single-phase unit |
| Grid-following IBDG (3-$\phi$) | 24, 33, 41, 48, 52, 59, 69, 91, 105, 109 | 100 kW and 50 kVAr per phase $\phi_{a},\phi_{b},\phi_{c}$ |
| Grid-forming IBDG (3-$\phi$) | 14, 19, 62, 72 | 100 kW and 50 kVAr per phase $\phi_{a},\phi_{b},\phi_{c}$ |
### V-B Sequential Service Restoration Results
As shown in (28), the relationship between the maximum load step and the
frequency nadir is influenced by the value of hyper-parameter $\alpha$ in the
frequency-dynamics constraints. Therefore, different $\alpha$ values may lead
to different service restoration results. In this case, the horizon length $T$
and the hyper-parameter $\alpha$ are set to 4 and 0.1, respectively.
As shown in Fig. 6, the system is partitioned into four MGs by energizing the
switchable lines sequentially, and the radial structure of each MG is
maintained at each stage. Inside each formed MG, the power balance is achieved
between the restored load and power outputs of IBDGs. The value in brackets
near each line switch in Fig. 6 represents the restoration stage at which it closes. In Table II, the restoration sequences for switchable IBDGs
and loads are shown, where the subscript and superscript are the bus index and
the MG index of grid-following IBDGs and loads, respectively. It can be
observed that MG2 only needs 3 stages to be fully restored, while MG1 and MG3
can restore in 4 stages. However, due to the heavy loading situation, MG4 is
gradually restored in 5 stages to ensure a relatively smooth frequency
dynamics.
Figure 6: Restoration solutions for the formed MG1-MG4, where the restoration stage when line switch closes is shown in red. TABLE II: Restored Grid-following IBDGs and Loads at Each Restoration Stage. | Restoration
---
stage
| Restored
---
grid-following IBDGs
| Restored
---
loads
1 | | $G_{11}^{1},G_{5}^{1},G_{24}^{2}$
---
$G_{28}^{2},G_{40}^{2},G_{41}^{2}$
$G_{42}^{2},G_{97}^{4}$
| $L_{14}^{1},L_{8}^{1},L_{9}^{1},L_{10}^{1},L_{11}^{1},L_{12}^{1}$
---
$L_{13}^{1},L_{1}^{1},L_{2}^{1},L_{3}^{1},L_{4}^{1},L_{5}^{1},L_{6}^{1}$
$L_{7}^{1},L_{19}^{2},L_{20}^{2},L_{21}^{2},L_{22}^{2},L_{23}^{2}$
$L_{24}^{2},L_{25}^{2},L_{26}^{2},L_{27}^{2}L_{28}^{2},L_{29}^{2}$
$L_{30}^{2},L_{31}^{2},L_{36}^{2},L_{37}^{2}L_{38}^{2},L_{39}^{2}$
$L_{40}^{2},L_{41}^{2},L_{42}^{2},L_{43}^{2},L_{44}^{2},L_{62}^{3}$
$L_{65}^{3},L_{63}^{3},L_{64}^{3},L_{72}^{4},L_{71}^{4},L_{93}^{4}$
$L_{94}^{4},L_{95}^{4},L_{70}^{4},L_{96}^{4},L_{97}^{4},L_{98}^{4}$
$L_{99}^{4},L_{100}^{4},L_{101}^{4},L_{102}^{4},L_{103}^{4}$
$L_{104}^{4}$
2 | | $G_{33}^{2},G_{47}^{2},G_{48}^{2}$
---
$G_{69}^{3},G_{105}^{4},G_{107}^{4}$
$G_{109}^{4},G_{110}^{4},G_{116}^{4}$
| $L_{32}^{2},L_{33}^{2},L_{34}^{2},L_{35}^{2},L_{45}^{2},L_{46}^{2}$
---
$L_{47}^{2},L_{48}^{2},L_{49}^{2},L_{66}^{3},L_{67}^{3},L_{68}^{3}$
$L_{69}^{3},L_{105}^{4},L_{106}^{4},L_{107}^{4},L_{108}^{4}$
$L_{109}^{4},L_{110}^{4},L_{111}^{4},L_{112}^{4},L_{113}^{4}$
$L_{114}^{4},L_{115}^{4},L_{116}^{4},L_{117}^{4},L_{118}^{4}$
$L_{119}^{4}$
3 | | $G_{52}^{2}$
---
| $L_{50}^{2},L_{51}^{2},L_{52}^{2},L_{53}^{2},L_{84}^{4},L_{85}^{4}$
---
$L_{86}^{4},L_{87}^{4}$
4 | | $G_{16}^{1},G_{59}^{3},G_{90}^{4}$
---
$G_{91}^{4}$
| $L_{15}^{1},L_{16}^{1},L_{17}^{1},L_{18}^{1},L_{54}^{3},L_{55}^{3}$
---
$L_{56}^{3},L_{57}^{3},L_{58}^{3},L_{59}^{3},L_{60}^{3},L_{60}^{3}$
$L_{61}^{3},L_{73}^{4},L_{74}^{4},L_{75}^{4},L_{76}^{4},L_{77}^{4}$
$L_{78}^{4},L_{79}^{4},L_{88}^{4},L_{89}^{4},L_{90}^{4},L_{91}^{4}$
$L_{92}^{4}$
5 | | $G_{81}^{4},G_{83}^{4}$
---
| $L_{80}^{4},L_{81}^{4},L_{82}^{4},L_{83}^{4}$
---
For each restoration stage, the restored loads and frequency nadir in MG1-MG4
are shown in Table III. In total, 1773 kW of load is restored by the end of the 5 stages. It can be observed that the service restoration actions happen in certain stages rather than in all stages. For example, MG1 restores 280.5 kW of load in Stage 1 but restores no more load until Stage 4, while MG4 takes restoration actions in every stage. This is because the sequential service restoration is limited by operational constraints, among which the maximum
load step in each stage is again limited by the proposed frequency-dynamics
constraints. Note that a larger amount of restored load in the optimization
level will typically cause a lower frequency nadir in the simulation level,
then a low frequency nadir will be considered in constraint (28) and help the
optimization level to restrict a larger amount of restored load in next
restoration stage. Because the first stage is the entry point of the
restoration process, there is no prior frequency nadir information to be used
in constraint (28), therefore, the restored load in the first stage is
typically the largest among all stages, which leads to a corresponding lowest
frequency nadir among all stages.
TABLE III: Restored Loads and Frequency Nadir for MG1-MG4 ($T=4$ and $\alpha=0.1$).

| Case | Stage | Restored load (kW) | Frequency nadir (Hz) |
|---|---|---|---|
| MG1 | 1 | 280.5 | 59.7044 |
| MG1 | 2 | 280.5 | 59.9992 |
| MG1 | 3 | 280.5 | 59.9992 |
| MG1 | 4 | 346.5 | 59.9200 |
| MG1 | 5 | 346.5 | 59.9989 |
| MG2 | 1 | 230.0 | 59.7079 |
| MG2 | 2 | 360.0 | 59.8201 |
| MG2 | 3 | 420.0 | 59.9146 |
| MG2 | 4 | 420.0 | 59.9984 |
| MG2 | 5 | 420.0 | 59.9984 |
| MG3 | 1 | 212.5 | 59.7116 |
| MG3 | 2 | 212.5 | 59.9990 |
| MG3 | 3 | 212.5 | 59.9990 |
| MG3 | 4 | 382.5 | 59.7656 |
| MG3 | 5 | 382.5 | 59.9985 |
| MG4 | 1 | 192.0 | 59.7910 |
| MG4 | 2 | 324.0 | 59.8541 |
| MG4 | 3 | 414.0 | 59.9003 |
| MG4 | 4 | 570.0 | 59.8230 |
| MG4 | 5 | 624.0 | 59.9364 |
The comparison of total restored loads with and without considering the
proposed frequency dynamics constraints is shown in Fig. 7. Note that the
total amount of restorable load of the base case model (i.e., without the
frequency dynamics constraints) is the same as that of the proposed model with
the frequency dynamics constraints. That is because the total load of the test
system is fixed and less than the total DG generation capacity in both models.
However, the base case needs 6 stages to fully restore all the loads, while the proposed model achieves that goal in the first 5 stages (as observed, no more load is restored between Stage 5 and Stage 6). In the early stages (Stage 1 to Stage 3), the restored load of the proposed model is slightly less than that of the base case. A closer analysis shows that, during the early restoration stages, the proposed model generates a restoration solution that prevents the frequency nadir from dropping too low during transients, whereas the base case restores more loads in Stage 1 to Stage 3 without considering such a limitation on the frequency nadir. Stage 4 is a turning point, after which the proposed model restores more loads than the base case. In short, the proposed model restores less load than the base case during the early stages (here, Stage 1 to Stage 3) and more load during the later stages (from Stage 4). Such a restoration pattern (restored load at each stage) of the base
case model and the proposed model may vary case by case if the system topology
or other operational constraints are changed. Therefore, if we implement the
base case model and the proposed model in another test system with different
topology or constraint settings, the base case model may restore fewer loads
than the proposed model in the early stages and the turning point stage may
change as well.
Figure 7: Total restored load with and without considering frequency dynamics
constraints.
In Fig. 8a and Fig. 8b, a zoomed-in view of the frequency response of MG4 and
the frequency response of MG4 in Stage 1 are shown for better observation of
the frequency dynamic performance. The frequency responses with and without
the frequency dynamics constraints are represented by blue and red lines,
respectively. By this comparison, it can be observed that both the rate of
change of frequency and frequency nadir are significantly improved by
considering frequency dynamics constraints in the proposed restoration model.
However, if the frequency dynamics constraints are not considered to prevent a
large frequency drop, unstable frequency oscillations may occur. The reason for the oscillation phenomenon in Fig. 8b is an overly large $P^{\rm L}$, which pushes the initial state of the MG in the current stage out of the region of attraction
of the original stable equilibrium. This in turn demonstrates the necessity to
incorporate that frequency dynamics constraint in the optimization level. Note
that $\omega_{\rm set}$ is set to 60 Hz in the droop equation (34); the
equilibrium can be achieved when $\omega=\omega_{\rm set}$ and $P=P^{L}$,
which means that the output frequency tracks the frequency reference when the
output power of the simulation level tracks the target restored load
calculated from the optimization level.
Figure 8: Frequency responses of MG4 with and without frequency dynamics
constraints: (a) Subplot of frequency response of MG4 during 5.0 s to 5.8 s;
(b) Frequency responses of MG4 in Stage 1.
Fig. 9 shows the frequency responses of each inverter-dominated MG based on
the proposed restoration model. The results show that the MG frequency drops
when the load is restored. Because the maximum load step is constrained in the
proposed MILP-based sequential service restoration model, the frequency nadir
is also constrained. When load is restored as the frequency drops, the
frequency nadir can be effectively maintained above the $f^{\rm min}$
threshold.
Figure 9: Frequency responses of inverter-dominated MGs: (a) MG1; (b) MG2; (c)
MG3; (d) MG4.
### V-C Impact of Hyper-parameters in Frequency Dynamics Constraints
Compared to other MGs, MG4 is heavily loaded with the largest number of nodes.
Based on the results of Fig. 6, MG4 needs more stages to be fully restored
compared to other MGs. Therefore, MG4 is chosen to test the effect of
different $\alpha$ values. In Fig. 10, the frequency responses of MG4 during the period of 3.1 s to 5.1 s, the period of 9.3 s to 11.3 s, and the whole restoration process are shown, where the frequency responses with $\alpha=0.1$, $\alpha=0.2$ and $\alpha=1.0$ are represented by the blue solid line, red dashed line and yellow dotted line, respectively. It can be observed
that 5 stages are required to fully restore all the loads when $\alpha=0.1$;
while only 4 restoration stages are needed when $\alpha=0.2$ or $\alpha=1.0$.
During the period of 3.1 s to 5.1 s in left of Fig. 10a, the frequency nadirs
with $\alpha=0.2$ or $\alpha=1.0$ are lower than the frequency nadir with
$\alpha=0.1$, which means more loads can be restored with larger value of
$\alpha$. During the period of 9.3 s to 11.3 s in right of Fig. 10b, the
frequency nadir with $\alpha=0.1$ is lower than the frequency nadirs with
$\alpha=0.2$ and $\alpha=1.0$, it is because the total restored loads for
different $\alpha$ values are same, with $\alpha=0.2$ or $\alpha=1.0$, it can
restore more loads in the early restoration stage, therefore they just need
less loads to be restored in the late restoration stage. However, $\alpha=0.1$
restores less loads in the early restoration stage, it has to restore more
loads in the late restoration stage. As shown in Fig. 10c, the overall dynamic
frequency performance with $\alpha=0.1$ is still better than the cases with
$\alpha=0.2$ and $\alpha=1.0$. Hence, there is a trade-off between dynamic
frequency performance and restoration performance regarding the choice of
$\alpha$: too small $\alpha$ may lead to too slow restoration and the
frequency nadir may be high in the early restoration stage and the frequency
nadir may be low in the late restoration stage; in turn, a large $\alpha$ may
lead to less number of restoration stages, too large $\alpha$ may cause too
low frequency in early stages and deteriorate the dynamic performance of the
system frequency in a practical restoration process.
Figure 10: Frequency responses of MG4 with different $\alpha$: (a) Frequency
responses during 3.1 s to 5.1 s; (b) Frequency responses during 9.3 s to 11.3
s; (c) Frequency responses during the whole restoration process.
We also show that different values of the horizon length $T$ may produce
different service restoration results. Table IV summarizes the total restored
loads and computation times using different horizon lengths in the proposed
service restoration model. On the one hand, the restored load in the cases with
$T=2$ and $T=3$ is less than that in the cases with $T\geq 4$, where the total
restored load reaches the maximum level. Therefore, the results with short
horizon lengths $T=2$ and $T=3$ are sub-optimal restoration solutions. On the
other hand, a longer horizon length also leads to a heavier computational
burden and increases the computation time. Similar to the impact of $\alpha$,
there is a trade-off between the computation time and the quality of the
solution when determining the value of $T$.
TABLE IV: Restored Loads and Computation Time with Different Horizon Lengths. | Total restored load (kW) | Computation time (s)
---|---|---
$T=2$ | 1362.5 | 26.8870
$T=3$ | 1410.5 | 32.6725
$T=4$ | 1773.0 | 48.5629
$T=5$ | 1773.0 | 61.9968
$T=6$ | 1773.0 | 88.0216
In Fig. 11, the frequency responses of MG1 to MG4 are depicted during the
restoration process with different values of the droop gain $D_{p}$. In the
test case, the original setting of $D_{p}$ is $1\times 10^{-5}$. It can be
observed that different values of $D_{p}$ cause different restoration solutions
and frequency responses. As indicated by the arrow in Fig. 11a, MG1 can be
fully restored in four stages when $D_{p}=1\times 10^{-5}$ or $2\times
10^{-5}$; however, if $D_{p}=3\times 10^{-5}$, MG1 needs five stages to be
fully restored. A similar observation can be made for the restoration stages of
MG3 in Fig. 11c: it needs five stages to be fully restored when $D_{p}$ takes
larger values (such as $2\times 10^{-5}$ or $3\times 10^{-5}$), while it only
needs four stages when $D_{p}$ takes smaller values (such as $1\times
10^{-5}$). As shown in Fig. 11b and Fig. 11d, a larger value of $D_{p}$ also
leads to a larger frequency drop during the restoration process.
Figure 11: Frequency responses of inverter-dominated MGs with different values
of $D_{p}$ during restoration process: (a) MG1; (b) MG2; (c) MG3; (d) MG4.
## VI Conclusion
To improve the dynamic performance of the system frequency during service
restoration of unbalanced distribution systems in an inverter-dominated
environment, we propose a simulation-assisted optimization model considering
frequency dynamics constraints with clear physical meanings. Results
demonstrate that: (i) the proposed frequency dynamics constrained service
restoration model can significantly reduce the transient frequency drop during
MG formation and service restoration; (ii) the steady-state performance of the
proposed method rivals that of conventional methods in terms of the final
restored total load and the required number of restoration stages.
Investigating how to choose the best hyper-parameters, such as $\alpha$, the
horizon length $T$, and the droop gain $D_{p}$, will be the next research
direction.
## References
* [1] Executive Office of the President, “Economic benefits of increasing electric grid resilience to weather outages,” White House, Tech. Rep., 2020.
* [2] A. M. Salman, Y. Li, and M. G. Stewart, “Evaluating system reliability and targeted hardening strategies of power distribution systems subjected to hurricanes,” _Reliab. Eng. Syst. Saf._ , vol. 144, pp. 319–333, Dec. 2015.
* [3] H. Haggi, R. R. Nejad, M. Song, and W. Sun, “A review of smart grid restoration to enhance cyber-physical system resilience,” in _2019 IEEE Innovative Smart Grid Technologies - Asia (ISGT Asia)_ , 2019, pp. 4008–4013.
* [4] Z. Wang and J. Wang, “Self-healing resilient distribution systems based on sectionalization into microgrids,” _IEEE Trans. Power Syst._ , vol. 30, pp. 3139–3149, Nov. 2015.
* [5] A. Arif and Z. Wang, “Networked microgrids for service restoration in resilient distribution systems,” _IET Gener. Transm. Distrib._ , vol. 11, no. 14, pp. 3612–3619, Aug. 2017.
* [6] C. Chen, J. Wang, F. Qiu, and D. Zhao, “Resilient distribution system by microgrids formation after natural disasters,” _IEEE Trans. Smart Grid_ , vol. 7, no. 2, pp. 958–966, Mar. 2016.
* [7] S. Yao, P. Wang, and T. Zhao, “Transportable energy storage for more resilient distribution systems with multiple microgrids,” _IEEE Trans. Smart Grid_ , vol. 10, pp. 3331–3341, May 2019.
* [8] L. Che and M. Shahidehpour, “Adaptive formation of microgrids with mobile emergency resources for critical service restoration in extreme conditions,” _IEEE Trans. Power Syst._ , vol. 34, no. 1, pp. 742–753, Jan. 2019.
* [9] B. Chen, C. Chen, J. Wang, and K. L. Butler-Purry, “Sequential service restoration for unbalanced distribution systems and microgrids,” _IEEE Trans. Power Syst._ , vol. 33, pp. 1507–1520, Mar. 2018.
* [10] Y. Wen, W. Li, G. Huang, and X. Liu, “Frequency dynamics constrained unit commitment with battery energy storage,” _IEEE Trans. Power Syst._ , vol. 31, no. 6, pp. 5115–5125, Nov. 2016.
* [11] H. Gu, R. Yan, T. K. Saha, E. Muljadi, J. Tan, and Y. Zhang, “Zonal inertia constrained generator dispatch considering load frequency relief,” _IEEE Trans. Power Syst._ , vol. 35, no. 4, pp. 3065–3077, Jul. 2020.
* [12] Y. Wen, C. Y. Chung, X. Liu, and L. Che, “Microgrid dispatch with frequency-aware islanding constraints,” _IEEE Trans. Power Syst._ , vol. 34, no. 3, pp. 2465–2468, May 2019.
* [13] O. Bassey, K. L. Butler-Purry, and B. Chen, “Dynamic modeling of sequential service restoration in islanded single master microgrids,” _IEEE Trans. Power Syst._ , vol. 35, no. 1, pp. 202–214, Jan. 2020.
* [14] B. Qin, H. Gao, J. Ma, W. Li, and A. Y. Zomaya, “An input-to-state stability-based load restoration approach for isolated power systems,” _Energies_ , vol. 11, pp. 597–614, Mar. 2018.
* [15] Y. Xu, C. Liu, K. P. Schneider, F. K. Tuffner, and D. T. Ton, “Microgrids for service restoration to critical load in a resilient distribution system,” _IEEE Trans. Smart Grid_ , vol. 9, pp. 426–437, Jan. 2018.
* [16] Y. Du, X. Lu, J. Wang, and S. Lukic, “Distributed secondary control strategy for microgrid operation with dynamic boundaries,” _IEEE Trans. Smart Grid_ , vol. 10, no. 5, pp. 5269–5285, Sept. 2019.
* [17] B. K. Poolla, D. Groß, and F. Dörfler, “Placement and implementation of grid-forming and grid-following virtual inertia and fast frequency response,” _IEEE Trans. Power Syst._ , vol. 34, pp. 3035–3046, Jul. 2019.
* [18] P. Vorobev, P. Huang, M. A. Hosani, J. L. Kirtley, and K. Turitsyn, “High-fidelity model order reduction for microgrids stability assessment,” _IEEE Trans. Power Syst._ , vol. 33, pp. 874–887, Jan. 2018.
* [19] J. M. Guerrero, L. Hang, and J. Uceda, “Control of distributed uninterruptible power supply systems,” _IEEE Trans. Ind. Electron._ , vol. 55, no. 8, pp. 2845–2859, 2008.
* [20] K. Y. Yap, C. R. Sarimuthu, and J. M.-Y. Lim, “Virtual inertia-based inverters for mitigating frequency instability in grid-connected renewable energy system: A review,” _Appl. Sci._ , vol. 9, no. 24, p. 5300, Dec. 2019.
* [21] H. Bevrani, T. Ise, and Y. Miura, “Virtual synchronous generators: A survey and new perspectives,” _Int. J. Electr. Power Energy Syst._ , vol. 54, pp. 244–254, Jan. 2014.
* [22] Y. Zhu, C. Liu, K. Sun, D. Shi, and Z. Wang, “Optimization of battery energy storage to improve power system oscillation damping,” _IEEE Trans. Sustain. Energy_ , vol. 10, no. 3, pp. 1015–1024, 2019.
* [23] Y. Kim, J. Wang, and X. Lu, “A framework for load service restoration using dynamic change in boundaries of advanced microgrids with synchronous-machine dgs,” _IEEE Trans. Smart Grid_ , vol. 9, no. 4, pp. 3676–3690, Jul. 2018.
* [24] Z. Wang, J. Wang, B. Chen, M. M. Begovic, and Y. He, “Mpc-based voltage/var optimization for distribution circuits with distributed generators and exponential load models,” _IEEE Trans. Smart Grid_ , vol. 5, no. 5, pp. 2412–2420, 2014.
* [25] B. A. Robbins and A. D. Domínguez-García, “Optimal reactive power dispatch for voltage regulation in unbalanced distribution systems,” _IEEE Trans. Power Syst._ , vol. 31, no. 4, pp. 2903–2913, 2016.
* [26] Q. Zhang, K. Dehghanpour, and Z. Wang, “Distributed CVR in unbalanced distribution systems with PV penetration,” _IEEE Trans. Smart Grid_ , vol. 10, no. 5, pp. 5308–5319, Sept. 2019.
* [27] Y. Yuan, K. Dehghanpour, F. Bu, and Z. Wang, “Outage detection in partially observable distribution systems using smart meters and generative adversarial networks,” _IEEE Trans. Smart Grid_ , vol. 11, no. 6, pp. 5418–5430, Nov. 2020.
* [28] 123-bus feeder. [Online]. Available: https://site.ieee.org/pes-testfeeders/resources/
Qianzhi Zhang (S’17) is currently pursuing his Ph.D. in the Department of
Electrical and Computer Engineering, Iowa State University, Ames, IA. He
received his M.S. in electrical and computer engineering from Arizona State
University in 2015. He has worked with Huadian Electric Power Research
Institute from 2015 to 2016 as a research engineer. His research interests
include the applications of machine learning and advanced optimization
techniques in power system operation and control.
Zixiao Ma (S’18) is currently a Ph.D. student in the Department of Electrical
and Computer Engineering at Iowa State University, Ames, IA, USA. He received
his B.S. degree in Automation and M.S. degree in Control Theory and Control
Engineering from Northeastern University in 2014 and 2017, respectively. His
research interests are focused on power system load modeling, microgrids,
nonlinear control, and model reduction.
Yongli Zhu (S’12) received his B.S. degree from Huazhong University of Science
and Technology in 2009, M.S. degree from State Grid Electric Power Research
Institute in 2012, and Ph.D. degree from the University of Tennessee,
Knoxville, in 2018. He joined Iowa State University as a postdoctoral
researcher in 2020. His research interests include power system stability,
microgrids, and machine learning applications in power systems.
Zhaoyu Wang (S’13–M’15–SM’20) is the Harpole-Pentair Assistant Professor
with Iowa State University. He received the B.S. and M.S. degrees in
electrical engineering from Shanghai Jiaotong University, and the M.S. and
Ph.D. degrees in electrical and computer engineering from Georgia Institute of
Technology. His research interests include optimization and data analytics in
power distribution systems and microgrids. He is the Principal Investigator
for a multitude of projects focused on these topics and funded by the National
Science Foundation, the Department of Energy, National Laboratories, PSERC,
and Iowa Economic Development Authority. Dr. Wang is the Chair of IEEE Power
and Energy Society (PES) PSOPE Award Subcommittee, Co-Vice Chair of PES
Distribution System Operation and Planning Subcommittee, and Vice Chair of PES
Task Force on Advances in Natural Disaster Mitigation Methods. He is an editor
of IEEE Transactions on Power Systems, IEEE Transactions on Smart Grid, IEEE
Open Access Journal of Power and Energy, IEEE Power Engineering Letters, and
IET Smart Grid. Dr. Wang was the recipient of the National Science Foundation
(NSF) CAREER Award, the IEEE PES Outstanding Young Engineer Award, and the
Harpole-Pentair Young Faculty Award Endowment.
# Linguistically-Enriched and Context-Aware Zero-shot Slot Filling

A. B. Siddique, University of California, Riverside, <EMAIL_ADDRESS>; Fuad
Jamour, University of California, Riverside, <EMAIL_ADDRESS>; and Vagelis
Hristidis, University of California, Riverside, <EMAIL_ADDRESS>
###### Abstract.
Slot filling is identifying contiguous spans of words in an utterance that
correspond to certain parameters (i.e., slots) of a user request/query. Slot
filling is one of the most important challenges in modern task-oriented dialog
systems. Supervised learning approaches have proven effective at tackling this
challenge, but they need a significant amount of labeled training data in a
given domain. However, new domains (i.e., unseen in training) may emerge after
deployment. Thus, it is imperative that these models seamlessly adapt and fill
slots from both seen and unseen domains – unseen domains contain unseen slot
types with no training data, and even seen slots in unseen domains are
typically presented in different contexts. This setting is commonly referred
to as zero-shot slot filling. Little work has focused on this setting, with
limited experimental evaluation. Existing models that mainly rely on context-
independent embedding-based similarity measures fail to detect slot values in
unseen domains or do so only partially. We propose a new zero-shot slot
filling neural model, $\mathsf{LEONA}$, which works in three steps. Step one
acquires domain-oblivious, context-aware representations of the utterance words
by exploiting (a) linguistic features such as part-of-speech; (b) named entity
recognition cues; and (c) contextual embeddings from pre-trained language
models. Step two fine-tunes these rich representations and produces slot-
independent tags for each word. Step three exploits generalizable context-
aware utterance-slot similarity features at the word level, uses slot-
independent tags, and contextualizes them to produce slot-specific predictions
for each word. Our thorough evaluation on four diverse public datasets
demonstrates that our approach consistently outperforms the state-of-the-art
models by $17.52\%$, $22.15\%$, $17.42\%$, and $17.95\%$ on average for unseen
domains on SNIPS, ATIS, MultiWOZ, and SGD datasets, respectively.
## 1\. Introduction
Goal-oriented dialog systems allow users to accomplish tasks, such as
reserving a table at a restaurant, through an intuitive natural language
interface (e.g., Amazon Alexa). For instance, a user may issue the following
utterance: _“I would like to book a table at 8 Immortals Restaurant in San
Francisco for 5:30 pm today for 6 people”_. For dialog systems to fulfill such
a request, they first need to extract the parameter (a.k.a. slot) values of
the request. Slots in the restaurant booking domain include
$\mathtt{restaurant\\_name}$ and $\mathtt{city}$, whose values in our example
utterance are “$\mathtt{8}$ $\mathtt{Immortals}$ $\mathtt{Restaurant}$” and
“$\mathtt{San}$ $\mathtt{Francisco}$”, respectively. Only after all slot
values are filled, the system can call the appropriate API to actually perform
the intended action (e.g., reserving a table at a restaurant). Thus, the
extraction of slot values from natural language utterances (i.e., slot
filling) is a critical step to the success of a dialog system.
Slot filling is an important and challenging task that tags each word
subsequence in an input utterance with a slot label (see Figure 1 for an
example). Despite the challenges, supervised approaches have shown promising
results for the slot filling task (Goo et al., 2018; Zhang et al., 2018;
Young, 2002; Bellegarda, 2014; Mesnil et al., 2014; Kurata et al., 2016;
Hakkani-Tür et al., 2016; Xu and Sarikaya, 2013). The disadvantage of
supervised methods is the unsustainable requirement of having massive labeled
training data for each domain; the acquisition of such data is laborious and
expensive. Moreover, in practical settings, new unseen domains (with unseen
slot types) emerge only after the deployment of the dialog system, rendering
supervised models ineffective. Consequently, models with capabilities to
seamlessly adapt to new unseen domains are indispensable to the success of
dialog systems. Note that unseen slot types do not have any training data, and
the values of seen slots may be present in different contexts in new domains
(rendering their training data from other seen domains irrelevant). Filling
slots in settings where new domains emerge after deployment is referred to as
zero-shot slot filling (Bapna et al., 2017). Alexa Skills and Google Actions,
where developers can integrate their novel content and services into a virtual
assistant, are prominent examples of scenarios where zero-shot slot filling
is crucial.
Figure 1. Overview of LEONA with an example utterance and its words’ label
sequence (following the IOB scheme).
There has been little research on zero-shot slot filling, and existing works
presented limited experimental evaluation results. To the best of our
knowledge, existing models were evaluated using a single public dataset.
Recently, the authors in (Shah et al., 2019) proposed a cross-domain zero-shot
adaptation for slot filling by utilizing example slot values. Due to the
inherent variance of slot values, this framework faces difficulties in
capturing the full slot value (e.g., “$\mathtt{8}$ $\mathtt{Immortals}$
$\mathtt{Restaurant}$” for slot type “$\mathtt{restaurant\\_name}$” in Figure
1) in unseen domains. Coach (Liu et al., 2020) proposed to address the issues
in (Shah et al., 2019; Bapna et al., 2017) with a coarse-to-fine approach.
Coach (Liu et al., 2020) uses the seen domain data to learn templates for the
slots based on whether the words are slot values or not. Then, it determines a
slot type for each identified slot value by matching it with the
representation of each slot type description. The diversity of slot types
across different domains makes it practically impossible for Coach to learn
general templates that are applicable to all new unseen domains; for example,
“$\mathtt{book}$” and “$\mathtt{table}$” can be slot values in an e-commerce
domain, but not in the restaurant booking domain.
We propose an end-to-end model $\mathsf{LEONA}$ (Linguistically-Enriched and
cONtext-Aware; source code coming soon) that relies on the power of
domain-independent linguistic features, contextual representations from
pre-trained language models (LMs), and context-aware utterance-slot similarity
features. $\mathsf{LEONA}$ works in three steps as illustrated in Figure 1.
Step one leverages pre-trained Natural Language Processing (NLP) models that
provide additional domain-oblivious and context-aware information to
initialize our embedding layer. Specifically, Step one uses (_i_) syntactic
cues through part of speech (POS) tags that provide information on the
possibility of a word subsequence being a slot value (e.g., proper nouns are
usually slot values); (_ii_) off-the-shelf Named Entity Recognition (NER)
models that provide complementary and more informative tags (e.g., geo-
political entity tag for “$\mathtt{San}$ $\mathtt{Francisco}$”); and (_iii_) a
deep bidirectional pre-trained LM (ELMo) (Peters et al., 2018) to generate
contextual character-based word representations that can handle unknown words
that were never seen during training. Moreover, the pre-trained ELMo (Peters
et al., 2018) with appropriate fine-tuning has provided state-of-the-art
(SOTA) results on many NLP benchmarks (Rajpurkar et al., 2016; Bowman et al.,
2015; He et al., 2017; Pradhan et al., 2012; Sang and De Meulder, 2003; Socher
et al., 2013). Combined, these domain-independent sources of rich semantic
information provide a robust initialization for the embedding layer to better
accommodate unseen words (i.e., never seen during training), which greatly
facilitates zero-shot slot filling.
Step two fine-tunes the semantically rich information from Step one by
accounting for the temporal interactions among the utterance words using bi-
directional Long Short Term Memory network (Hochreiter and Schmidhuber, 1997)
that effectively transfers rich semantic information from NLP models. This
step produces slot-independent tags (i.e., Inside Outside Beginning IOB),
which provide complementary cues at the word subsequence level (i.e., hints on
which word subsequences constitute slot values) using a Conditional Random
Field (CRF) (Lafferty et al., 2001). Step three, which is the most critical
step, learns a generalizable context-aware similarity function between the
utterance words and those of slot descriptions from seen domains, and exploits
the learned function in new unseen domains to highlight the features of the
utterance words that are contextually relevant to a given slot. This step also
jointly contextualizes the multi-granular information produced at all steps.
Finally, CRF is employed to produce slot-specific predictions for the given
utterance words and slot type. This step is repeated for every relevant slot
type, and the predictions are combined to get the final sequence labels. In
our example in Figure 1, the predictions for “$\mathtt{restaurant\\_name}$”
and “$\mathtt{city}$” are combined to produce the final sequence labels shown
in the figure.
In summary, this work makes the following contributions:
* •
We propose an end-to-end model for zero-shot slot filling that effectively
captures context-aware similarity between utterance words and slot types, and
integrates contextual information across different levels of granularity,
leading to outstanding zero-shot capabilities.
* •
We demonstrate that pre-trained NLP models can provide additional domain-
oblivious semantic information, especially for unseen concepts. To the best of
our knowledge, this is the first work that leverages the power of pre-trained
NLP models for zero-shot slot filling. This finding might have positive
implications for other zero-shot NLP tasks.
* •
We conduct extensive experimental analysis using four public datasets: SNIPS
(Coucke et al., 2018), ATIS (Liu et al., 2019), MultiWOZ (Zang et al., 2020)
and SGD (Rastogi et al., 2019), and show that our proposed model consistently
outperforms SOTA models in a wide range of experimental evaluations on unseen
domains. To the best of our knowledge, this is the first work that comprehensively
evaluates zero-shot slot filling models on many datasets with diverse domains
and characteristics.
## 2\. Preliminaries
### 2.1. Problem Formulation
Given an utterance with $J$ words $\mathcal{X}_{i}=(x_{1},x_{2},\cdots,x_{J})$,
a slot value is a span of words $(x_{e},\cdots,x_{f})$ such that $0\leq e\leq
f\leq J$, that is associated with a slot type. Slot filling is a sequence
labeling task that assigns the labels
$\mathcal{Y}_{i}=(y_{1},y_{2},\cdots,y_{J})$ to the input $\mathcal{X}_{i}$,
following the IOB labeling scheme (Ramshaw and Marcus, 1995). Specifically,
the first word of a slot value associated with slot type $\mathcal{S}_{r}$ is
labeled as $\mathtt{B}$-$\mathcal{S}_{r}$, the other words inside the slot
value are labeled as $\mathtt{I}$-$\mathcal{S}_{r}$, and non-slot words are
labeled as $\mathtt{O}$. Let
$\mathcal{D}_{c}=\\{\mathcal{S}_{1},\mathcal{S}_{2},\dots\\}$ be the set of
slot types in domain $c$. Let
$\mathcal{D}_{\mathtt{SEEN}}=\\{\mathcal{D}_{1},\cdots,\mathcal{D}_{l}\\}$ be a
set of seen domains and
$\mathcal{D}_{\mathtt{UNSEEN}}=\\{\mathcal{D}_{l+1},\cdots,\mathcal{D}_{z}\\}$
be a set of unseen domains, where
$\mathcal{D}_{\mathtt{SEEN}}\cap\mathcal{D}_{\mathtt{UNSEEN}}=\varnothing$.
Let $\\{(\mathcal{X}_{i},\mathcal{Y}_{i})\\}_{i=1}^{n}$ be a set of training
utterances labeled at the word level such that the slot types in
$\mathcal{Y}_{i}$ are in $\mathcal{D}_{p}\in\mathcal{D}_{\mathtt{SEEN}}$. In
traditional (i.e., supervised) slot filling, the domains of test utterances
belong to $\mathcal{D}_{\mathtt{SEEN}}$, whereas in zero-shot slot filling,
the domains of test utterances belong to $\mathcal{D}_{\mathtt{UNSEEN}}$; an
utterance belongs to a domain if it contains slot values that correspond to
slot types from this domain. Note that in zero-shot slot filling, the output
slot types may belong to either seen or unseen domains (i.e.,
$\mathcal{D}_{p}\in\mathcal{D}_{\mathtt{SEEN}}\cup\mathcal{D}_{\mathtt{UNSEEN}}$).
We focus on zero-shot slot filling in this work.
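For concreteness, here is a minimal sketch of this labeling convention on the Figure 1 utterance; the span-recovery helper at the end is ours, for illustration only.

```python
# IOB-labeled example following Figure 1: "B-" opens a slot value,
# "I-" continues it, and "O" marks non-slot words.
utterance = ["I", "would", "like", "to", "book", "a", "table", "at",
             "8", "Immortals", "Restaurant", "in", "San", "Francisco"]
labels = ["O", "O", "O", "O", "O", "O", "O", "O",
          "B-restaurant_name", "I-restaurant_name", "I-restaurant_name",
          "O", "B-city", "I-city"]

# Recover (slot_type, value) pairs from the IOB sequence.
slots, span = [], None
for word, tag in zip(utterance, labels):
    if tag.startswith("B-"):
        span = [tag[2:], [word]]
        slots.append(span)
    elif tag.startswith("I-") and span is not None:
        span[1].append(word)
    else:
        span = None
print([(slot, " ".join(words)) for slot, words in slots])
# [('restaurant_name', '8 Immortals Restaurant'), ('city', 'San Francisco')]
```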
### 2.2. Pre-trained NLP Models
In this work, we utilize several pre-trained NLP models that are readily
available. Specifically, we use: Pre-trained POS tagger, Pre-trained NER
model, and Pre-trained ELMo. The cues provided by POS/NER tags and ELMo
embeddings are supplementary in our model, and they are further fine-tuned and
contextualized using the available training data from seen domains. Next, we
provide a brief overview of these models.
Pre-trained POS tagger. This model labels an utterance with part of speech
tags, such as PROPN, VERB, and ADJ. POS tags provide useful syntactic cues for
the task of zero-shot slot filling, especially for unseen domains.
$\mathsf{LEONA}$ learns general cues from the language syntax about how slot
values are defined in one domain, and transfers this knowledge to new unseen
domains because POS tags are domain and slot type independent. For example,
proper nouns are usually values for some slots. In this work, we employ
SpaCy’s pre-trained POS tagger (https://spacy.io/api/annotation#pos-tagging),
which has shown production-level accuracy.
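A minimal usage sketch with spaCy; the `en_core_web_sm` model name is an assumption (one common choice), not necessarily the exact model used in the paper.

```python
import spacy

# Assumes the small English model is installed:
#   python -m spacy download en_core_web_sm
nlp = spacy.load("en_core_web_sm")
doc = nlp("Book a table at 8 Immortals Restaurant in San Francisco")

# Proper nouns (PROPN) are frequent slot-value candidates.
print([(token.text, token.pos_) for token in doc])
```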
Pre-trained NER model. This model labels an utterance with IOB tags for four
entity types: PER, GPE, ORG, and MISC. The NER model provides information at a
different granularity, which is generic and domain-independent. Although the
NER model provides tags for a limited set of entities and the task of slot
filling encounters many more entity types, we observe that many, but not all,
slots can be mapped to basic entities supported by the NER model. For
instance, names of places or locations are referred to as “GPE” (i.e., geo-
political entity or location) by the NER model, whereas in the slot filling
task, there may be a location of a hotel, restaurant, salon, or some place the
user is planning to visit. It remains challenging to assign the name of the
location to the correct corresponding entity/slot in the zero-shot fashion.
Moreover, NER models cannot identify all slots/entities that slot filling
intends to extract, resulting in a low recall. Yet, cues from the NER model are
informative and helpful in reducing the complexity of the task. In this work,
we employ SpaCy’s pre-trained NER model
(https://spacy.io/api/annotation#named-entities).
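A corresponding sketch for the NER cues; note that spaCy's shipped models may emit a richer entity inventory than the four types listed above, so PER/GPE/ORG/MISC should be read as the subset the paper relies on.

```python
import spacy

nlp = spacy.load("en_core_web_sm")  # assumed model choice, as above
doc = nlp("Book a table at 8 Immortals Restaurant in San Francisco")

# Entity-level view: spans with their predicted entity types.
print([(ent.text, ent.label_) for ent in doc.ents])
# Token-level IOB view, the granularity consumed by the embedding layer.
print([(t.text, t.ent_iob_, t.ent_type_) for t in doc])
```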
Pre-trained ELMo. Pre-trained language models such as ELMo are trained on huge
amounts of text data in an unsupervised fashion. These models have a very large
number of parameters and thereby capture general semantic and syntactic
information effectively. In this work, we employ the deep bidirectional language
model ELMo to provide contextualized word representations that capture complex
syntactic and semantic features of words based on the context of their usage,
unlike fixed word embeddings (i.e., GloVe (Pennington et al., 2014) or
Word2vec (Mikolov et al., 2013)) which do not consider context. Furthermore,
these representations are purely character based and are robust for words
unseen during training, which makes them suitable for the task of zero-shot
slot filling.
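A usage sketch of the ELMo interface from the AllenNLP 0.x releases; the import path and the default pre-trained weights are assumptions tied to that older API.

```python
from allennlp.commands.elmo import ElmoEmbedder  # allennlp 0.x API (assumed)

elmo = ElmoEmbedder()                  # downloads the default pre-trained weights
tokens = ["Book", "a", "table", "at", "8", "Immortals", "Restaurant"]
vectors = elmo.embed_sentence(tokens)  # numpy array: (3 layers, len(tokens), 1024)

# A common choice is the top layer (or a learned mix of all three layers).
top_layer = vectors[-1]                # (len(tokens), 1024), context-dependent
print(top_layer.shape)
```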
### 2.3. Conditional Random Fields
Conditional Random Fields (CRFs) (Sutton and McCallum, 2006) have been
successfully applied to various sequence labeling problems in natural language
processing such as POS tagging (Cutting et al., 1992), shallow parsing (Sha
and Pereira, 2003), and named entity recognition (Settles, 2004). To produce
the best possible label sequence for a given input, CRFs incorporate the
context and dependencies among predictions. In this work, we employ linear
chain CRFs that are trained by estimating maximum conditional log-likelihood.
In its simplest form, it estimates a transition cost matrix of size,
$\mathtt{num\\_tags}$ $\times$ $\mathtt{num\\_tags}$, where the value at the
indices [$\mathtt{i}$, $\mathtt{j}$] represents the likelihood of
transitioning from the $\mathtt{j}$-th tag to the $\mathtt{i}$-th tag.
Moreover, it allows enforcing constraints in a flexible way (e.g., tag
“$\mathtt{I}$” cannot be preceded by tag “$\mathtt{O}$”).
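As an illustrative sketch of a linear-chain CRF with exactly this transition-matrix view, using the third-party `pytorch-crf` package rather than the paper's own implementation:

```python
import torch
from torchcrf import CRF  # pip install pytorch-crf

num_tags = 3                           # e.g., I, O, B
crf = CRF(num_tags, batch_first=True)  # learns a num_tags x num_tags transition matrix

emissions = torch.randn(2, 7, num_tags)    # per-word tag scores (batch, seq_len, num_tags)
tags = torch.randint(0, num_tags, (2, 7))  # gold tag indices

loss = -crf(emissions, tags)        # negative conditional log-likelihood
best_paths = crf.decode(emissions)  # Viterbi-decoded tag sequences
print(loss.item(), best_paths)
```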
## 3\. Approach
Figure 2. Illustration of the layers in our model LEONA.
Our model $\mathsf{LEONA}$ is an end-to-end neural network with six layers
that collectively realize the conceptual three steps in Figure 1.
Specifically, the Embedding layer realizes Step one and it also jointly
realizes Step two together with the Encoding and the CRF layers. The
Similarity, Contextualization, and Prediction layers realize Step three. We
briefly summarize each layer below, and we describe each layer in detail in
the subsequent subsections. The Embedding layer maps each word to a vector
space; this layer is responsible for embedding the words from both the
utterance and the slot description. The Encoding layer uses bi-directional
LSTM networks to refine the embeddings from the previous layer by considering
information from neighboring words. This layer encodes utterances as well as
slot descriptions. The CRF layer uses utterance encodings and makes slot-
independent predictions (i.e., IOB tags) for each word in the utterance by
considering dependencies between the predictions and taking context into
account. The Similarity layer uses utterance and slot description encodings to
compute an attention matrix that captures the similarities between utterance
words and a slot type, and signifies feature vectors of the utterance words
relevant to the slot type. The Contextualization layer uses representations
from different granularities and contextualizes them for slot-specific
predictions by employing bi-directional LSTM networks; specifically, it uses
representations from the Similarity layer, the Encoding layer, and the IOB
predictions produced by the CRF layer. The Prediction layer employs another
CRF to make slot-specific predictions (i.e., IOB tags for a given slot type)
based on the input from the contextualization layer. Note that the prediction
process is repeated for all the relevant slot types and its outputs are
combined to produce the final label for each word.
### 3.1. Embedding Layer
This layer maps each word in the input utterance to a high-dimensional vector
space. Three complementary embeddings are utilized: (_i_) word embedding of
the POS tags for the input words, (_ii_) word embedding of the NER tags for
the input word, and (_iii_) contextual word embedding from the pre-trained
ELMo model. Then, we employ a two-layer Highway Network (Srivastava et al.,
2015) to combine the three embeddings for each word in an effective way.
Highway Networks have been shown to perform better than simple concatenation.
The network produces a $\mathpzc{dim}$-dimensional vector for each word.
Specifically, the embedding layer produces
$\mathcal{X}\in\mathbb{R}^{\mathpzc{dim}\times J}$ for the given utterance
$\\{x_{1},x_{2},\cdots,x_{J}\\}$ with $J$ words, and
$\mathcal{S}\in\mathbb{R}^{\mathpzc{dim}\times K}$ for the given slot
description $\\{s_{1},s_{2},\cdots,s_{K}\\}$ with $K$ words. This representation
gets fine-tuned and contextualized in the next layers.
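A minimal PyTorch sketch of how the three embeddings might be fused; the input projection ahead of the highway layers and the specific dimensions are illustrative assumptions.

```python
import torch
import torch.nn as nn

class Highway(nn.Module):
    """One highway layer: y = g * relu(W_h x) + (1 - g) * x, g = sigmoid(W_t x)."""
    def __init__(self, dim):
        super().__init__()
        self.transform = nn.Linear(dim, dim)
        self.gate = nn.Linear(dim, dim)

    def forward(self, x):
        g = torch.sigmoid(self.gate(x))
        return g * torch.relu(self.transform(x)) + (1 - g) * x

in_dim = 300 + 300 + 1024    # POS + NER + ELMo embeddings, concatenated (assumed sizes)
fuse = nn.Sequential(nn.Linear(in_dim, 300), Highway(300), Highway(300))

J = 14                        # utterance length
pos_e, ner_e = torch.randn(J, 300), torch.randn(J, 300)
elmo_e = torch.randn(J, 1024)
X = fuse(torch.cat([pos_e, ner_e, elmo_e], dim=-1))  # (J, dim) word representations
print(X.shape)
```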
### 3.2. Encoding Layer
We use a bi-directional LSTM network to capture the temporal interactions
between input words. At time-step $i$, we compute hidden states for the input
utterance as follows:

$\overrightarrow{h}_{i}=\text{LSTM}(\overrightarrow{h}_{i-1},\mathcal{X}_{:i})$

$\overleftarrow{h}_{i}=\text{LSTM}(\overleftarrow{h}_{i-1},\mathcal{X}_{:i})$

Then, we concatenate the hidden states $\overrightarrow{h}_{i}$ and
$\overleftarrow{h}_{i}$ to get the bi-directional hidden state representation
$h_{i}=[\overrightarrow{h}_{i};\overleftarrow{h}_{i}]\in\mathbb{R}^{2d}$.
This layer produces $\mathcal{H}\in\mathbb{R}^{2d\times J}$ from the context
word vectors $\mathcal{X}$ (i.e., for the utterance). Essentially, every column
of the matrix represents the fine-tuned context-aware representation of the
corresponding word. A similar mechanism is employed to produce
$\mathcal{U}\in\mathbb{R}^{2d\times K}$ from the word vectors $\mathcal{S}$
(i.e., for the slot description).
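A sketch of the encoding step in PyTorch; the hidden size and the two stacked layers follow the implementation details in Section 4.4, everything else is illustrative.

```python
import torch
import torch.nn as nn

d = 300                                    # hidden state size per direction (Sec. 4.4)
encoder = nn.LSTM(input_size=300, hidden_size=d, num_layers=2,
                  bidirectional=True, batch_first=True)

X = torch.randn(1, 14, 300)                # embedded utterance (batch, J, dim)
H, _ = encoder(X)                          # (batch, J, 2d): [h_fwd ; h_bwd] per word
print(H.shape)                             # torch.Size([1, 14, 600])
```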
### 3.3. CRF Layer
The task of the CRF layer is to predict one of three slot-independent tags
(i.e., I, O, or B) for each word based on the utterance contextual
representation $\mathcal{H}=\\{h_{1},h_{2},\cdots,h_{J}\\}$ produced by the
encoding layer. Let $\mathcal{Y}$ refer to a label sequence, and let
$\mathcal{C}$ be the set of all possible state sequences. For the given input
sequence $\mathcal{H}$, the conditional probability function of the CRF,
$P(\mathcal{Y}|\mathcal{H};W,b)$, over all possible label sequences
$\mathcal{Y}$ is computed as follows:

$P(\mathcal{Y}|\mathcal{H};W,b)=\frac{\prod\limits_{i=1}^{J}\theta_{i}(y_{i-1},y_{i},\mathcal{H})}{\sum\limits_{y^{\prime}\in\mathcal{C}}\prod\limits_{i=1}^{J}\theta_{i}(y^{\prime}_{i-1},y^{\prime}_{i},\mathcal{H})}$

where
$\theta_{i}(y^{\prime},y,\mathcal{H})=\exp(W_{y^{\prime},y}^{T}h_{i}+b_{y^{\prime},y})$
is a trainable function, with weight $W_{y^{\prime},y}^{T}$ and bias
$b_{y^{\prime},y}$ matrices for the label pair $(y^{\prime},y)$.
Note that the slot-independent predictions also represent the output of Step
two; i.e., information about utterance words at a different granularity than
the initial cues from NLP models. Essentially, Step two learns general
patterns of slot values from seen domains irrespective of slot types, and
transfers this knowledge to new unseen domains and their slot types. Since it
is hard to learn general templates of slot values that are applicable to all
unseen domains, we do not use these slot-independent predictions to predict
slot-specific tags. Instead, we pass this information to the contextualization
layer for further fine-tuning.
### 3.4. Similarity Layer
The similarity layer highlights the features of each utterance word that are
important for a given slot type by employing attention mechanisms. The popular
attention methods (Weston et al., 2014; Bahdanau et al., 2014; Liu and Lane,
2016) that summarize the whole sequence into a fixed length feature vector are
not suitable for the task at hand, i.e., per word labeling. Alternatively, we
compute the attention vector at each time step, i.e., an attention vector for
each word in the utterance. The utterance encoding
$\mathcal{H}\in\mathbb{R}^{2d\times J}$ and slot description encoding
$\mathcal{U}\in\mathbb{R}^{2d\times K}$ matrices are input to this layer; they
are used to compute a similarity matrix $\mathcal{A}\in\mathbb{R}^{J\times K}$
between the utterance and slot description encodings. $\mathcal{A}_{jk}$
represents the similarity between the $j$-th utterance word and the $k$-th slot
description word. We compute the similarity matrix as follows:

$\mathcal{A}_{jk}=\alpha(\mathcal{H}_{:j},\mathcal{U}_{:k})\in\mathbb{R}$

where $\alpha$ is a trainable function that captures the similarity between the
input vectors $\mathcal{H}_{:j}$ and $\mathcal{U}_{:k}$, with $\mathcal{H}_{:j}$
and $\mathcal{U}_{:k}$ being the $j$-th and $k$-th column-vectors of
$\mathcal{H}$ and $\mathcal{U}$, respectively;
$\alpha(h,u)=w^{\top}_{(a)}[h\oplus u\oplus h\otimes u]$, where $\oplus$ is
vector concatenation, $\otimes$ is element-wise multiplication, and $w_{(a)}$
is a trainable weight vector.
The similarity matrix $\mathcal{A}$ is used to capture bi-directional
interactions between the utterance words and the slot type. First, we compute
attention that highlights the words in the slot description that are closely
related to the utterance. At time-step $t$, we compute it as follows:
$\mathcal{U}^{\prime}_{:t}=\sum_{k}v_{tk}\mathcal{U}_{:k}$, where
$v_{t}=\text{softmax}(\mathcal{A}_{t:})\in\mathbb{R}^{K}$ is the attention
weight for the slot description computed at time-step $t$ and
$\sum_{k}v_{tk}=1$ for all $t$. $\mathcal{U}^{\prime}\in\mathbb{R}^{2d\times J}$
represents the attention weights for the slot description with respect to all
the words in the utterance; every column of the matrix represents the closeness
of the slot description to the corresponding utterance word. Then, attention
weights that signify the words in the utterance with the highest similarity to
the slot description are computed as follows:
$h^{\prime}=\sum_{j}b_{j}\mathcal{H}_{:j}$, where
$b=\text{softmax}(\text{max}_{\text{col}}(\mathcal{A}))\in\mathbb{R}^{J}$, the
max is taken across columns, and $\mathcal{H}^{\prime}\in\mathbb{R}^{2d\times
J}$ is obtained by tiling $h^{\prime}$ across columns.

We highlight that $\mathcal{U}^{\prime}$ represents features that align
important slot description words with closely similar utterance words, and
$\mathcal{H}^{\prime}$ highlights the features of the utterance with high
similarity to the slot description. Both are computed from the similarity
matrix $\mathcal{A}$, which itself is computed from the contextual
representations of the utterance ($\mathcal{H}$) and slot description
($\mathcal{U}$) generated by the encoding layer, which considers surrounding
words (i.e., employing bi-LSTMs) to generate the representations. Finally,
$\mathcal{U}^{\prime}$ and $\mathcal{H}^{\prime}$ are concatenated to produce
$\mathcal{G}\in\mathbb{R}^{4d\times J}$, where every column of the matrix
represents rich bi-directional similarity features of the corresponding
utterance word with the slot description.
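A condensed sketch of the computations above ($\mathcal{A}$, $\mathcal{U}^{\prime}$, $\mathcal{H}^{\prime}$, and $\mathcal{G}$) with illustrative shapes; the loop over word pairs is written for clarity rather than speed.

```python
import torch

J, K, d2 = 14, 4, 600                 # utterance length, slot-description length, 2d
H = torch.randn(d2, J)                # utterance encodings
U = torch.randn(d2, K)                # slot description encodings
w_a = torch.randn(3 * d2)             # trainable weight vector w_(a)

# A[j, k] = w_a . [h ; u ; h*u] for each utterance/description word pair.
A = torch.empty(J, K)
for j in range(J):
    for k in range(K):
        h, u = H[:, j], U[:, k]
        A[j, k] = w_a @ torch.cat([h, u, h * u])

# Slot-description attention per utterance word: U'[:, t] = sum_k v_tk U[:, k].
V = torch.softmax(A, dim=1)           # (J, K), each row v_t sums to 1
U_att = U @ V.T                       # (2d, J)

# Utterance words most similar to the slot description, tiled across columns.
b = torch.softmax(A.max(dim=1).values, dim=0)   # (J,)
h_att = H @ b                                   # (2d,)
H_att = h_att.unsqueeze(1).expand(d2, J)        # (2d, J)

G = torch.cat([U_att, H_att], dim=0)  # (4d, J) bi-directional similarity features
print(G.shape)
```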
Essentially, this layer learns a general context-aware similarity function
between utterance words and a slot description from seen domains, and it
exploits the learned function for unseen domains. Due to the general nature of
the similarity function, this layer also facilitates the identification of
slot values in cases when Step two fails to correctly identify domain-
independent slot values.
### 3.5. Contextualization Layer
This layer is responsible for contextualizing information from different
granularities. Specifically, the utterance encodings from the Encoding layer,
the bi-directional similarity between the utterance and the slot description
from the Similarity layer, and the slot-independent IOB predictions from the
CRF layer are passed as input. This layer employs two stacked layers of
bi-directional LSTM networks to contextualize all the information by
considering the relationships among neighboring words' representations. It
generates high-quality features for the prediction layer; specifically, the
features are in $\mathbb{R}^{2d\times J}$, where each column represents the
$2d$-dimensional features for the given word in the utterance.
### 3.6. Prediction Layer
The contextualized features are passed as input to this layer, and it is
responsible for generating slot-specific predictions for the given utterance
and slot type. First, it passes these features through two linear layers with
ReLU activation. Then a CRF is employed to make structured predictions, as
explained in the CRF layer. The prediction process is repeated for each of the
relevant slot types (i.e., slot types in the respective domain) and the
resulting label sequences are combined to produce the final label for each
word. Note that if the model makes two or more conflicting slot predictions for
a given sequence of words, we pick the slot type with the highest prediction
probability.
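A sketch of the per-slot-type prediction loop and conflict resolution; `predict` is a hypothetical interface returning an IOB tag and its probability for each word.

```python
def combine(utterance, slot_types, predict):
    """Merge per-slot-type IOB predictions into one final label sequence."""
    final = [("O", 0.0)] * len(utterance)
    for slot in slot_types:
        for i, (tag, prob) in enumerate(predict(utterance, slot)):
            if tag == "O":
                continue
            # On conflicts, keep the slot type with the higher probability.
            if final[i][0] == "O" or prob > final[i][1]:
                final[i] = (f"{tag}-{slot}", prob)
    return [tag for tag, _ in final]
```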
### 3.7. Training the Model
The model has two trainable components: the slot-independent IOB predictor and
the slot-specific IOB predictor. We jointly train both components by
minimizing the negative log likelihood loss of both components over our
training examples. The training data is prepared as follows. The training
examples are of the form
$(\mathcal{X}_{i},\mathcal{S}_{r},\mathcal{Y}^{\prime}_{i},\mathcal{Y}^{\prime\prime}_{ir})$,
where $\mathcal{X}_{i}$ represents an utterance, $\mathcal{S}_{r}$ represents a
slot type, $\mathcal{Y}^{\prime}_{i}$ represents slot-independent IOB tags for
the given utterance $\mathcal{X}_{i}$, and $\mathcal{Y}^{\prime\prime}_{ir}$
represents slot-specific IOB tags for the given utterance $\mathcal{X}_{i}$ and
slot type $\mathcal{S}_{r}$. For a sample from the given dataset of the form
$(\mathcal{X}_{i},\mathcal{Y}_{i})$ that has values for $m$ slot types,
slot-independent IOB tags $\mathcal{Y}^{\prime}_{i}$ are first generated by
removing the slot type information. Then, we generate $m$ positive training
examples by setting each of the $m$ slot types as $\mathcal{S}_{r}$ and
generating the corresponding label $\mathcal{Y}^{\prime\prime}_{ir}$ (i.e.,
slot-specific tags for slot type $\mathcal{S}_{r}$). Finally, $q$ negative
examples are generated by choosing slot types that are not present in the
utterance. For example, the utterance in Figure 1 “I would like to book a table
at 8 Immortals Restaurant in San Francisco” has the true labels “O O O O O O O O
B-restaurant_name I-restaurant_name I-restaurant_name O B-city I-city”. The
positive training examples would be: ($\cdots$,
“$\mathtt{restaurant\\_name}$”, “O O O O O O O O B I I O B I”, “O O O O O O O
O $\mathtt{B}$ $\mathtt{I}$ $\mathtt{I}$ O O O”) and ($\cdots$,
“$\mathtt{city}$”, $\cdots$, “O O O O O O O O O O O O $\mathtt{B}$
$\mathtt{I}$”). The negative examples can be as follows: ($\cdots$,
“$\mathtt{salon\\_name}$”, $\cdots$, “O O O O O O O O O O O O O O”),
($\cdots$, “$\mathtt{cuisine}$”, $\cdots$, $\cdots$), ($\cdots$,
“$\mathtt{phone\\_number}$”, $\cdots$, $\cdots$), and so on. Note that slot
types are shown in the above example for brevity; slot descriptions are
used in practice.
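A minimal sketch of this example-generation procedure; the helper name is ours, and the default $q=3$ follows Section 4.4.

```python
import random

def make_examples(utterance, labels, all_slot_types, q=3):
    """Build (utterance, slot type, slot-independent tags, slot-specific tags)
    tuples from one IOB-labeled utterance."""
    iob = [t.split("-")[0] for t in labels]                  # Y'_i: drop slot types
    present = {t.split("-", 1)[1] for t in labels if "-" in t}

    examples = []
    for slot in present:                                     # m positive examples
        y = [t.split("-", 1)[0] if "-" in t and t.split("-", 1)[1] == slot
             else "O" for t in labels]
        examples.append((utterance, slot, iob, y))

    absent = [s for s in all_slot_types if s not in present]
    for slot in random.sample(absent, min(q, len(absent))):  # q negative examples
        examples.append((utterance, slot, iob, ["O"] * len(labels)))
    return examples
```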
## 4\. Experimental Setup
Table 1. Dataset statistics. Dataset | SNIPS | ATIS | MultiWOZ | SGD
---|---|---|---|---
Dataset Size | $14.5$K | $5.9$K | $67.4$K | $188$K
Vocab. Size | $12.1$K | $1$K | $10.5$K | $33.6$K
Avg. Length | $9.0$ | $11.1$ | $13.3$ | $13.8$
# of Domains | $6$ | $1$ | $8$ | $20$
# of Intents | $7$ | $18$ | $11$ | $46$
# of Slots | $39$ | $83$ | $61$ | $240$
In this section, we describe the datasets, evaluation methodology, competing
methods, and the implementation details of our model $\mathsf{LEONA}$.
### 4.1. Datasets
We used four public datasets to evaluate the performance of our model
$\mathsf{LEONA}$: SNIPS Natural Language Understanding benchmark (SNIPS)
(Coucke et al., 2018), Airline Travel Information System (ATIS) (Liu et al.,
2019), Multi-Domain Wizard-of-Oz (MultiWOZ) (Zang et al., 2020), and Dialog
System Technology Challenge 8, Schema Guided Dialogue (SGD) (Rastogi et al.,
2019). To the best of our knowledge, this is the first work to comprehensively
evaluate zero-shot slot filling models on a wide range of public datasets.
Table 1 presents important statistics about the datasets.
SNIPS. A crowd-sourced single-turn Natural Language Understanding (NLU)
benchmark widely used for slot filling. It has $39$ slot types across $7$
intents from different domains. Since this dataset does not have slot
descriptions, we used tokenized slot names as the descriptions (e.g., for slot
type “playlist_owner”, we used “playlist owner” as its description).
ATIS. A single-turn dataset that has been widely used in slot filling
evaluations. It covers $83$ slot types across $18$ intents from a single
domain. Many of the intents have few utterances, so all the intents with fewer
than $100$ utterances are combined into a single intent “Others” in our
experiments. Moreover, similarly to the SNIPS dataset, we used the
tokenized versions of the slot names as slot descriptions.
MultiWOZ. A well-known dataset that has been widely used for the task of
dialogue state tracking. In this work, we used the most recent version of the
dataset (i.e., MultiWOZ$2.2$). In its original form, it contains dialogues
between users and system. For the task of slot filling, we take all the user
utterances and system messages that mention any slot(s), and shuffle the order
to make it as if it were a single-turn dataset to maintain consistency with
the previous works. For experiments in this work, utterances with intents that
have less than $650$ ($<1\%$ of the dataset) utterances are grouped into the
intent “Others”.
SGD. A recently published comprehensive dataset for the eighth Dialog System
Technology Challenge; it contains dialogues from $20$ domains with a total of
$46$ intents and $240$ slots. SGD was originally proposed for dialogue state
tracking. This dataset is also pre-processed to have single-turn utterances
labeled for slot filling. Moreover, we merge utterances from domains that have
no more than $1850$ ($<1\%$ of the dataset) utterances, and we name the
resulting domain “Others”.
Since not all datasets provide a large enough number of domains, we do the
splits in our experiments based on intents instead of domains for datasets
that have more intents than domains. That is, we consider intents as domains
for SNIPS, ATIS, and MultiWOZ.
### 4.2. Evaluation Methodology
We compute the slot F1 scores (the standard CoNLL evaluation script is used to
compute the slot F1 score) and present evaluation results for the following
settings:
Train on all except target intent/domain. This is the most common setting that
previous works (Liu et al., 2020; Shah et al., 2019; Bapna et al., 2017) have
used for evaluation. A model is trained on all intents/domains except a single
target intent/domain. For example, for the SNIPS dataset, the model is trained
on all intents except a target intent “AddToPlaylist” that is used for testing
the model’s capabilities in the zero-shot fashion. This setup is repeated for
every single intent in the dataset. The utterances at test time only come from
a single intent/domain (or “Others”), which makes this setting less
challenging.
Train on a certain percentage of intents/domains and test on the rest. This is
a slightly more challenging setting where the test utterances usually come from
multiple unseen intents/domains (i.e., unseen in training). We vary
the number of training (i.e., seen) and testing (i.e., unseen) intents/domains
to comprehensively evaluate all competing models. In this setting, we randomly
select $\approx 25\%$, $\approx 50\%$, and $\approx 75\%$ of the
intents/domains for training and the rest for testing, and report average
results over five runs.
Train on one dataset and test on the rest of the datasets. This is the most
challenging setting, where models are trained on one dataset and tested on the
remaining datasets. For example, we train on the SGD dataset and test on
SNIPS, ATIS, and MultiWOZ datasets. Similarly, we repeat the process for every
dataset. Since datasets are very diverse (i.e., in terms of domains, slot
types and user’s expressions), this setting can be thought of as a “in the
wild” (Dhall et al., 2017) setting, which resembles real-world zero-shot slot
filling scenarios to a large degree.
### 4.3. Competing Methods
We compare against the following state-of-the-art (SOTA) models:
Coach (Liu et al., 2020). This model proposes to handle the zero-shot slot
filling task with a coarse-to-fine procedure. It first identifies the words
that constitute slot values. Then, based on the identified slot values, it
tries to assign these values to slot types by matching the identified slot
values with the representation of each slot description. We use their best
model, i.e., Coach+TR, that employs template regularization but we call it
Coach for simplicity.
RZS (Shah et al., 2019). This work proposes a zero-shot adaption for slot
filling by utilizing example values of each slot type. It employs character
and word embedding of the utterance and slot descriptions, which are then
concatenated with the averaged slot example embeddings, and passed through a
bidirectional LSTM network to get the final prediction for each word in the
utterance.
CT (Bapna et al., 2017). This model fills slots for each slot type
individually. Character and word-level representations are concatenated with
the slot type representation (i.e., embeddings) and an LSTM network is used to
make the predictions for each word in the utterance for the given slot type.
Note that we do not compare against simple baselines such as BiLSTM-CRF
(Lample et al., 2016), LSTM-BoE, and CRF-BoE (Jha et al., 2018) because they
have been outperformed by the previous works we compare against.
### 4.4. Implementation Details
Our model uses $300$-dimensional embeddings for POS and NER tags, and
pre-trained ELMo embeddings with $1024$ dimensions. The encoding and
contextualization layers have two stacked layers of bi-directional LSTMs with
hidden states of size $300$. The prediction layer has two linear layers with
ReLU activation, and the CRF uses the “IOB” labeling scheme. The model is
trained with a batch size of $32$ for up to $200$ epochs with early stopping,
using the Adam optimizer and a negative log likelihood loss with a scheduled
learning rate starting at $0.001$; the model uses a dropout rate of $0.3$ at
every layer to avoid over-fitting. The negative sampling parameter $q$ is set
to three.
## 5\. Results
Table 2. SNIPS dataset: Slot F1 scores for all competing models for target intents that are unseen in training. Target Intent $\downarrow$ | CT | RZS | Coach | LEONA w/o IOB | LEONA
---|---|---|---|---|---
AddToPlaylist | 0.3882 | 0.4277 | 0.5090 | 0.5104 | 0.5115
BookRestaurant | 0.2754 | 0.3068 | 0.3401 | 0.3405 | 0.4781
GetWeather | 0.4645 | 0.5028 | 0.5047 | 0.5531 | 0.6677
PlayMusic | 0.3286 | 0.3312 | 0.3201 | 0.3435 | 0.4323
RateBook | 0.1454 | 0.1643 | 0.2206 | 0.2224 | 0.2318
SearchCreativeWork | 0.3979 | 0.4445 | 0.4665 | 0.4671 | 0.4673
SearchScreeningEvent | 0.1383 | 0.1225 | 0.2563 | 0.2690 | 0.2872
Average | 0.3055 | 0.3285 | 0.3739 | 0.3866 | 0.4394
Table 3. ATIS dataset: Slot F1 scores for all competing models for target intents that are unseen in training. Target Intent $\downarrow$ | CT | RZS | Coach | LEONA w/o IOB | LEONA
---|---|---|---|---|---
Abbreviation | 0.4163 | 0.5252 | 0.4804 | 0.4965 | 0.6405
Airfare | 0.6549 | 0.5410 | 0.6929 | 0.7490 | 0.9492
Airline | 0.7126 | 0.6354 | 0.7212 | 0.7762 | 0.8586
Flight | 0.6530 | 0.7165 | 0.8072 | 0.8521 | 0.9070
Ground Service | 0.4924 | 0.6452 | 0.7641 | 0.8463 | 0.8490
Others | 0.4835 | 0.5169 | 0.6586 | 0.7749 | 0.8337
Average | 0.5688 | 0.5967 | 0.6874 | 0.7492 | 0.8397
We present in the next subsections quantitative and qualitative analysis of
all competing models. We first present the quantitative analysis in Subsection
5.1 and show that our model consistently outperforms the competing models in
all settings. Furthermore, this subsection also has an ablation study that
quantifies the role of each conceptual step in our model. We dig deeper into
limitations of each competing model in our qualitative analysis in Subsection
5.2.
### 5.1. Quantitative Analysis
Train on all except target intent/domain. Tables 2, 3, 4, and 5 present F1
scores for SNIPS, ATIS, MultiWOZ, and SGD datasets, respectively. All models
are trained on all the intents/domains except the target one that is used for
zero-shot testing. Our proposed approach is consistently better than SOTA
methods. Specifically, it outperforms SOTA models by $17.52\%$, $22.15\%$,
$17.42\%$, and $17.95\%$ on average for unseen intents/domains on SNIPS, ATIS,
MultiWOZ, and SGD datasets, respectively. We also present a variant of our
model that does not employ “IOB” tags from Step two; we call it
$\mathsf{LEONA}$ $\mathpzc{w/o}~{}\mathtt{IOB}$. Even this variant of our
model outperforms all other SOTA models. This performance gain over SOTA
methods can be attributed to the pre-trained NLP models that provide
meaningful cues for the unseen domains, the similarity layer that can capture
the closeness of the utterance words with the given slot irrespective of
whether it is seen or unseen, and the contextualization layer that uses all
the available information to generate a rich context-aware representation for
each word in the utterance.
Table 4. MultiWOZ dataset: Slot F1 scores for all competing models for target intents that are unseen in training. Target Intent $\downarrow$ | CT | RZS | Coach | LEONA w/o IOB | LEONA
---|---|---|---|---|---
Book Hotel | 0.4577 | 0.3739 | 0.5866 | 0.6181 | 0.6446
Book Restaurant | 0.3260 | 0.4200 | 0.4576 | 0.6268 | 0.6269
Book Train | 0.4777 | 0.5269 | 0.6112 | 0.6317 | 0.7025
Find Attraction | 0.2914 | 0.3489 | 0.3029 | 0.3787 | 0.3834
Find Hotel | 0.4933 | 0.5920 | 0.7235 | 0.7673 | 0.8222
Find Restaurant | 0.6420 | 0.6921 | 0.7671 | 0.7969 | 0.8338
Find Taxi | 0.1459 | 0.1587 | 0.1260 | 0.1682 | 0.1824
Find Train | 0.6344 | 0.4406 | 0.7754 | 0.8779 | 0.8811
Others | 0.1205 | 0.0878 | 0.1201 | 0.1687 | 0.1721
Average | 0.3988 | 0.4045 | 0.4967 | 0.5594 | 0.5832
Table 5. SGD dataset: Slot F1 scores for all competing models for target domains that are unseen in training. Target Domain $\downarrow$ | CT | RZS | Coach | LEONA w/o IOB | LEONA
---|---|---|---|---|---
Buses | 0.4954 | 0.5443 | 0.6280 | 0.6364 | 0.6978
Calendar | 0.5056 | 0.4908 | 0.6023 | 0.6216 | 0.7436
Events | 0.5181 | 0.6324 | 0.5486 | 0.7405 | 0.7619
Flights | 0.4898 | 0.4662 | 0.4898 | 0.4907 | 0.5901
Homes | 0.4542 | 0.7159 | 0.6235 | 0.6927 | 0.7698
Hotels | 0.4069 | 0.5681 | 0.7216 | 0.7266 | 0.7677
Movies | 0.5100 | 0.3424 | 0.5537 | 0.5687 | 0.7285
Music | 0.4111 | 0.6090 | 0.5786 | 0.7466 | 0.7613
RentalCars | 0.4138 | 0.3399 | 0.6576 | 0.7344 | 0.7389
Restaurants | 0.4620 | 0.3787 | 0.7195 | 0.7451 | 0.7574
RideSharing | 0.6619 | 0.5312 | 0.7273 | 0.7656 | 0.8172
Services | 0.6380 | 0.6381 | 0.7607 | 0.7628 | 0.8180
Travel | 0.6556 | 0.6464 | 0.8403 | 0.9013 | 0.9234
Weather | 0.4605 | 0.5180 | 0.6003 | 0.6178 | 0.8223
Others | 0.4362 | 0.5312 | 0.4921 | 0.5129 | 0.5592
Average | 0.5013 | 0.5302 | 0.6363 | 0.6842 | 0.7505
Table 6. Averaged F1 scores for all competing models for seen and unseen slot types in the target unseen domains for SNIPS, ATIS, MultiWOZ, and SGD datasets. Method $\downarrow$ | SNIPS | ATIS | MultiWOZ | SGD
---|---|---|---|---
Slot Type $\rightarrow$ | Seen | Unseen | Seen | Unseen | Seen | Unseen | Seen | Unseen
CT | 0.4407 | 0.2725 | 0.7552 | 0.4851 | 0.6062 | 0.3040 | 0.7362 | 0.3940
RZS | 0.4786 | 0.2801 | 0.8132 | 0.5143 | 0.6604 | 0.3301 | 0.7565 | 0.4478
Coach | 0.5173 | 0.3423 | 0.7742 | 0.7166 | 0.7034 | 0.4895 | 0.7996 | 0.6614
$\mathsf{LEONA}$ $\mathpzc{w/o}~{}\mathtt{IOB}$ | 0.5292 | 0.3578 | 0.8155 | 0.7130 | 0.6651 | 0.5638 | 0.7986 | 0.7424
$\mathsf{LEONA}$ | 0.6354 | 0.4006 | 0.9588 | 0.7524 | 0.7765 | 0.5962 | 0.9192 | 0.8167
Table 7. Averaged F1 scores for all competing models in the target unseen domains of all datasets. The train/test sets have variable number of intents/domains, which makes this setting more challenging. Method $\downarrow$ | SNIPS | ATIS | MultiWOZ | SGD
---|---|---|---|---
% Seen Intents $\rightarrow$ | 25% | 50% | 75% | 25% | 50% | 75% | 25% | 50% | 75% | 25% | 50% | 75%
CT | 0.1043 | 0.2055 | 0.2574 | 0.5018 | 0.7341 | 0.6542 | 0.2991 | 0.4371 | 0.6607 | 0.4523 | 0.5389 | 0.6160
RZS | 0.1214 | 0.1940 | 0.3207 | 0.6393 | 0.7727 | 0.7811 | 0.4566 | 0.4703 | 0.6951 | 0.6677 | 0.6578 | 0.6741
Coach | 0.1248 | 0.2258 | 0.3081 | 0.6070 | 0.7341 | 0.8104 | 0.4408 | 0.4505 | 0.6522 | 0.5888 | 0.6419 | 0.6725
$\mathsf{LEONA}$ $\mathpzc{w/o}~{}\mathtt{IOB}$ | 0.1550 | 0.2631 | 0.4108 | 0.6495 | 0.9437 | 0.9378 | 0.5137 | 0.5529 | 0.7843 | 0.6861 | 0.7315 | 0.7704
$\mathsf{LEONA}$ | 0.1710 | 0.2895 | 0.4220 | 0.8093 | 0.9659 | 0.9764 | 0.5248 | 0.5533 | 0.8581 | 0.7180 | 0.7925 | 0.8324
$\mathsf{LEONA}$ achieves its best performance on ATIS dataset (see Table 3)
as compared to other datasets. It highlights that zero-shot slot filling
across different intents within a single domain is relatively easier than
across domains, since ATIS dataset consists of a single domain, i.e., airline
travel. On the contrary, SGD dataset is the most comprehensive public dataset
(i.e., it has 46 intents across 20 domains), yet our proposed method
$\mathsf{LEONA}$ has better performance on it (see Table 5) than on SNIPS and
MultiWOZ datasets. This calls attention to another critical point: dataset
quality. We observe that SGD dataset is not only comprehensive but also has
high quality semantic description for slot types and all the domains have
enough training examples with minimal annotation errors (based on a manual
study of a small stratified sample from the dataset). For example, the slot
types “$\mathtt{restaurant\\_name}$”, “$\mathtt{hotel\\_name}$”, and
“$\mathtt{attraction\\_name}$” belong to different domains, but are very
similar to one another. The rich semantic description of each slot type makes
it easy for the model to transfer knowledge from one domain to new unseen
domains with high F1 scores. $\mathsf{LEONA}$ shows poor performance on SNIPS
dataset (see Table 2) as compared to other datasets, especially for intents:
“$\mathtt{RateBook}$” and “$\mathtt{SearchScreeningEvent}$”. This poor
performance further highlights our previous point (i.e., quality of the
dataset) since SNIPS dataset does not provide any textual descriptions for
slot types. Moreover, slot names (e.g., “$\mathtt{object\\_name}$” and
“$\mathtt{object\\_type}$”) convey very little semantic information, which
exacerbates the challenge for the model to perform well for unseen domains.
Finally, the results on MultiWOZ dataset (see Table 4) highlight that
transferring knowledge to new unseen intents/domains is easier when some
similar intent/domain is present in the training set. For example, the model is
able to transfer knowledge for new unseen target intent “$\mathtt{Find}$
$\mathtt{Hotel}$” (i.e., not in the training) from other similar intents such
as, “$\mathtt{Find}$ $\mathtt{Restaurant}$” and “$\mathtt{Book}$
$\mathtt{Hotel}$” effectively. However, for the target intent “$\mathtt{Find}$
$\mathtt{Attraction}$” that does not have any similar domain in the training
set, the model shows relatively poor performance. Similar observations can
also be made for the other competing models.
Table 8. F1 scores for all competing models where the model is trained on one dataset and tested on the rest. This setting resembles real-life scenarios.
Method $\downarrow$ Train Dataset $\rightarrow$ | SNIPS | ATIS | MultiWOZ | SGD
---|---|---|---|---
Test Dataset $\rightarrow$ | ATIS | MultiWOZ | SGD | SNIPS | MultiWOZ | SGD | SNIPS | ATIS | SGD | SNIPS | ATIS | MultiWOZ
CT | 0.0874 | 0.1099 | 0.0845 | 0.0589 | 0.0725 | 0.0531 | 0.0646 | 0.0878 | 0.0616 | 0.1463 | 0.2290 | 0.1529
RZS | 0.0915 | 0.1209 | 0.1048 | 0.0819 | 0.0809 | 0.0912 | 0.1496 | 0.2103 | 0.0875 | 0.1905 | 0.3435 | 0.2134
Coach | 0.1435 | 0.1191 | 0.1301 | 0.0976 | 0.0962 | 0.0871 | 0.1201 | 0.1730 | 0.1102 | 0.1795 | 0.3383 | 0.1903
$\mathsf{LEONA}$ $\mathpzc{w/o}~{}\mathtt{IOB}$ | 0.1544 | 0.1433 | 0.1504 | 0.1156 | 0.1124 | 0.1359 | 0.1242 | 0.1885 | 0.1258 | 0.2544 | 0.4714 | 0.2743
$\mathsf{LEONA}$ | 0.2080 | 0.1832 | 0.1690 | 0.1436 | 0.1394 | 0.1361 | 0.1847 | 0.2662 | 0.1620 | 0.2761 | 0.5205 | 0.2884
Comparison for seen and unseen slot types. An unseen target domain may have
both unseen and seen slot types. The unseen ones have never been seen during
training, and seen ones might have different contexts. For example,
“$\mathtt{date}$” is a common slot type that may correspond to many different
contexts in different domains such as date of a salon appointment, date of a
restaurant booking, return date of a round-trip flight, and so on. We evaluate
the performance of the competing models on seen and unseen slot types
individually to test each model’s ability in handling completely unseen slot
types. Table 6 presents results in further detail where results for both seen
and unseen slot types are reported separately. $\mathsf{LEONA}$ is
consistently better than other models on seen as well as unseen slot types. On
average, our proposed model $\mathsf{LEONA}$ shows 18% and 17% gains in F1
scores over the SOTA model for seen and unseen slots, respectively. These
gains are due to our slot-independent IOB predictions (which provide effective
templates for seen slot types) and our context-aware similarity function
(which works well regardless of whether slot types are seen or unseen). Moreover,
all the models have better performance on seen slots than on unseen ones, as it
is relatively easier to adapt to a new context (i.e., in a new domain) for seen
slots than to new unseen slots in an unseen context. We also note that
$\mathsf{LEONA}$ achieves a similar performance on ATIS dataset for seen slots
in the unseen target domain, when compared with the results reported by SOTA
_supervised slot filling_ methods in (Zhang et al., 2018), i.e., F1 score of
$0.952$ vs $0.959$ by our method.
Train on a certain percentage of intents/domains and test on the rest. Large
labeled training datasets are an important factor in accelerating the progress
of supervised models. To investigate whether zero-shot models are affected by
the size of training data from different domains, we vary the size of the
training data and report results to quantify the effect. Table 7 presents
results on all datasets when the training set has data from $\approx 25\%$,
$\approx 50\%$, and $\approx 75\%$ of the intents/domains and the rest are
used for testing. The choice of intents/domains to be in the training or
testing sets is done randomly, and average results are reported over five
runs. This setting is more challenging in two ways: models have access to less
training data and the test utterances come from multiple domains.
$\mathsf{LEONA}$ is at least $19.06\%$ better (in F1 score) than other
models for any percentage of unseen intents on any dataset. Overall, the
performance of $\mathsf{LEONA}$ improves as it gets access to training data
from more intents/domains, which is a desirable behavior. Moreover, we also
observe that our model achieves $0.72$ F1 score on SGD with only 25% of
domains in the training data, which once again validates the intuition that
having better quality data is very critical to adapt models to new unseen
domains. Similar results are observed on ATIS dataset (i.e., a single-domain
dataset), which highlights that knowledge transfer within a single domain is
easier, and models can do a very good job on unseen intents even with a small
amount of training data (e.g., 25% intents in the training set). Similar
conclusions hold true for other methods.
Figure 3. t-SNE visualization of word representations from selected
utterances for the three best performing models: (a) RZS, (b) Coach, (c) LEONA
(this work). The selected utterances belong to the unseen domain “Restaurant”
in SGD dataset and contain the slot type “restaurant_name”.
Train on one dataset and test on the rest of the datasets. This setting
closely resembles the real-world zero-shot setting, where a model is trained
on one dataset and tested on the rest. This is the most challenging setting,
since the test datasets come from purely different distributions than those
seen during training. Although each domain within a dataset can be thought of
as a different distribution, every dataset shows some similarity of expression
across different domains. Table 8 presents results of all competing models for
this setting. All models show relatively poor performance for this challenging
setting. However, $\mathsf{LEONA}$ is consistently better than others;
specifically, it is up to $56.26\%$ better on F1 score than the SOTA model.
Our model achieves the best performance when it is trained on the SGD dataset
(relatively better quality dataset) and tested on the rest. On the contrary,
it shows the worst performance when trained on ATIS (i.e., single-domain) and
tested on the rest. Similar observations can be made for the other models.
These results once again highlight the importance of the quality and
comprehensiveness of the training dataset(s). Finally, this setting also
indicates that the current SOTA models are not yet ready to be deployed in
real-world scenarios and calls for more explorations and research in this
important yet challenging and less-explored task of zero-shot slot filling.
Table 9. Ablation study of our model LEONA in the zero-shot setting: averaged F1 scores for unseen target domains.
Configuration | SNIPS | ATIS | MultiWOZ | SGD
---|---|---|---|---
Step 2 | 0.3689 | 0.6719 | 0.4792 | 0.6375
Step 3 | 0.3812 | 0.6915 | 0.4999 | 0.6407
Step 2 + 3 | 0.4013 | 0.7605 | 0.5412 | 0.6684
Step 1 + 2 | 0.3820 | 0.6895 | 0.4936 | 0.6471
Step 1 + 3 | 0.3866 | 0.7492 | 0.5594 | 0.6842
Step 1 + 2 +3 | 0.4394 | 0.8397 | 0.5832 | 0.7505
Ablation study. To quantify the role of each component in our model, we
present ablation study results in Table 9 over all datasets. First, we
study the significance of the pre-trained NLP models in the first three rows
in Table 9. To produce the results in these rows, we used traditional word
(Bojanowski et al., 2017) and character (Hashimoto et al., 2016) embeddings
instead of employing powerful pre-trained NLP models. We observe that Step
three, i.e., the variant of the model that does not use pre-trained NLP models and
does not consider “$\mathtt{IOB}$” tags from Step two, is the most influential
component in the model, as it alone can outperform the best performing SOTA
model Coach (Liu et al., 2020), but the margin is not significant (i.e.,
0.3812 vs. 0.3739 on SNIPS, 0.6915 vs. 0.6874 on ATIS, 0.4999 vs. 0.4967 on
MultiWOZ, and 0.6407 vs. 0.6363 on SGD). If “$\mathtt{IOB}$” predictions from
Step two are incorporated into it (i.e., row Step 2 + 3) or pre-trained NLP
models are employed with it (i.e., row Step 1 + 3), its performance is further
improved. Moreover, if we just use Step two by predicting “$\mathtt{IOB}$”
tags and assigning these “$\mathtt{IOB}$” tags to the slot type with the
highest similarity (i.e., row Step 2), or combine Step one with Step two
(i.e., row Step 1 + 2), we note that we do not achieve the best results.
### 5.2. Qualitative Analysis
In this experiment, we randomly selected $100$ utterances in the unseen target
domain “Restaurant” from the SGD dataset and visually analyzed the performance
of the competing models in extracting the values of the slot type
“$\mathtt{restaurant\\_name}$” from the selected utterances. The goal of this
experiment is to visually highlight the strengths/weaknesses of the competing
models. We retrieved the multi-dimensional numerical representations of the
words in the selected utterances from the final layers of each model and
reduced the number of dimensions of each representation to two using t-SNE
(Maaten and Hinton, 2008). Figure 3 shows scatter plots for the resulting
2-dimensional representations for each model. We observe that all models
produce clear-cut clusters for each class: $\mathtt{B}$, $\mathtt{I}$, or
$\mathtt{O}$, which indicates that all models are able to produce
distinguishing representations. However, $\mathsf{LEONA}$ produces better
representations in the sense that fewer words are misclassified. That is, there
are fewer violating data points in the clusters of $\mathsf{LEONA}$ in Figure 3
(c).
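This visualization step can be reproduced with a few lines of scikit-learn; the sketch below is illustrative, and the names `word_vecs` (final-layer word representations) and `iob_labels` (gold IOB tags) are hypothetical placeholders rather than names from our implementation.

```python
# Illustrative t-SNE visualization of per-word representations.
import numpy as np
import matplotlib.pyplot as plt
from sklearn.manifold import TSNE

def plot_tsne(word_vecs: np.ndarray, iob_labels: list) -> None:
    # Reduce each multi-dimensional representation to two dimensions.
    coords = TSNE(n_components=2, random_state=0).fit_transform(word_vecs)
    for tag, color in [("B", "tab:blue"), ("I", "tab:orange"), ("O", "tab:green")]:
        mask = np.array([label == tag for label in iob_labels])
        plt.scatter(coords[mask, 0], coords[mask, 1], c=color, label=tag, s=10)
    plt.legend()
    plt.show()
```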
We further analyze the results for two utterances: “Golden Wok would be a
great choice in …” and “I would like to book a table at 8 Immortals Restaurant
in …”. RZS (Shah et al., 2019) is able to predict full slot value (i.e.,
$\mathtt{Golden}$ $\mathtt{Wok}$) of the slot “$\mathtt{restaurant\\_name}$”
in the first utterance. However, we notice that RZS fails to capture the full
value (i.e., “$\mathtt{8}$ $\mathtt{Immortals}$ $\mathtt{Restaurant}$”) for
the slot “$\mathtt{restaurant\\_name}$” in the other utterance, where it could
partially extract “$\mathtt{Immortals}$ $\mathtt{Restaurant}$” and mistakenly
assigns label $\mathtt{O}$ to the word “$\mathtt{8}$”, which led to a subsequent
wrong prediction for the word “$\mathtt{Immortals}$” (i.e., predicted label
$\mathtt{B}$, whereas the true label is $\mathtt{I}$). This misclassification
is also highlighted in Figure 3 (a) by coloring the wrongly predicted words
with red. Since RZS relies on the example value(s) and there is a high
variability across the lengths of slot values, along with the diversity of
expression, this model faces problems in detecting the _full slot values_.
We notice that Coach (Liu et al., 2020) fails to detect the value (i.e.,
$\mathtt{Golden}$ $\mathtt{Wok}$) for the slot “$\mathtt{restaurant\\_name}$”
in the first utterance. However, it successfully captures the slot value in
the other utterance. Since Coach relies on learning templates from seen
domains and exploits those for unseen domains, it fails to handle the
deviation of the unseen domains from the learned templates. $\mathsf{LEONA}$
is able to detect full slot values for both utterances successfully, thanks
to: the slot-independent IOB predictions from Step two; the similarity
function in Step three which is robust to errors from the previous steps; and
the contextualization layers of the model. Finally, we observed that our model
also fails to fully detect very long slot values. For example, slot values
“$\mathtt{Rustic}$ $\mathtt{House}$ $\mathtt{Oyster}$ $\mathtt{Bar}$
$\mathtt{And}$ $\mathtt{Grill}$”, “$\mathtt{Tarla}$ $\mathtt{Mediterranean}$
$\mathtt{Bar}$ $\mathtt{+}$ $\mathtt{Grill}$”, and “$\mathtt{Pura}$
$\mathtt{Vida}$ $\mathtt{-}$ $\mathtt{Cocina}$ $\mathtt{Latina}$ $\mathtt{\&}$
$\mathtt{Sangria}$ $\mathtt{Bar}$” for the slot type
“$\mathtt{restaurant\\_name}$” are challenging to detect in unseen domains not
only because of their long length, but also because of the presence of tokens
like $\mathtt{\&}$, $\mathtt{+}$, and $\mathtt{-}$, that further exacerbate
the challenge. Note that other SOTA models also fail to detect the above
example slot values. We plan to overcome this challenge in our future work by
learning n-gram phrase-level representations to detect such slot values in
their entirety.
## 6\. Related Work
We organize the related work into three categories: (_i_) supervised slot
filling, (_ii_) few-shot slot filling, and (_iii_) zero-shot slot filling.
Supervised Slot Filling. Slot filling is an extensively studied research
problem in the supervised setting. Recurrent neural networks such as Long
Short-Term Memory (LSTM) or Gated Recurrent Unit (GRU) networks that learn how
words within a sentence are related temporally (Mesnil et al., 2014; Kurata et
al., 2016) have been employed to tag the input for slots. Similarly,
Conditional Random Fields (CRFs) have been integrated with LSTMs/GRUs (Huang
et al., 2015; Reimers and Gurevych, 2017). The authors in (Shen et al., 2017;
Tan et al., 2017) proposed a self-attention mechanism for sequential labeling.
More recently, researchers have proposed jointly addressing the related tasks
of intent detection and slot filling (Goo et al., 2018; Hakkani-Tür et al.,
2016; Liu and Lane, 2016; Zhang et al., 2018; Xu and Sarikaya, 2013). The
authors in (Zhang et al., 2018) suggested using a capsule neural network by
dynamically routing and rerouting information from wordCaps to slotCaps and
then to intentCaps to jointly model the tasks. Supervised slot filling methods
rely on the availability of large amounts of labeled training data from all
domains to learn patterns of slot usage. In contrast, we focus on the more
challenging as well as more practically relevant setting where new unseen
domains are evolving and training data is not available for all domains.
Few-shot Slot Filling. Few-shot learning requires a small amount of training
data in the target domain. Meta-learning based methods (Finn et al., 2017;
Nichol and Schulman, 2018; Nichol et al., 2018) have shown tremendous success
for few-shot learning in many tasks such as few-shot image generation (Reed et
al., 2017), image classification (Snell et al., 2017), and domain adaptation
(Vinyals et al., 2016). Following the success of such approaches, few-shot
learning in NLP has been investigated for tasks such as text classification
(Sun et al., 2019; Geng et al., 2019; Yan et al., 2018), entity-relation
extraction (Lv et al., 2019; Gao et al., 2020), and few-shot slot filling (Luo
et al., 2018; Fritzler et al., 2019; Hou et al., 2020). The authors in (Luo et
al., 2018) exploited regular expressions for few-shot slot filling,
Prototypical Network was employed in (Fritzler et al., 2019), and the authors
in (Hou et al., 2020) extended the CRF model by introducing collapsed
dependency transition to transfer label dependency patterns. Moreover, few-
shot slot filling and intent detection have been modeled jointly (Krone et
al., 2020; Bhathiya and Thayasivam, 2020), where model agnostic meta learning
(MAML) was leveraged. Few-shot slot filling not only requires a small amount
of training data in the target domain, but also requires re-training/fine-
tuning. Our model addresses the task of zero-shot slot filling where no
training example for the new unseen target domain is available and it can
seamlessly adapt to new unseen domains – a more challenging and realistic
setting.
Zero-shot Slot Filling. Zero-shot learning for slot filling is less explored,
and only a handful of research has addressed this challenging problem, albeit,
with very limited experimental evaluation. Coach (Liu et al., 2020) addressed
the zero-shot slot filling task with a coarse-to-fine approach. It first
predicts words that are slot values. Then, it assigns the predicted slot value
to the appropriate slot type by matching the value with the representation of
description of each slot type. RZS (Shah et al., 2019) utilizes example values
of each slot type. It uses character and word embeddings of the utterance and
slot types along with the slot examples’ embeddings, and passes the
concatenated information through a bidirectional LSTM network to get the
prediction for each word in the utterance. CT (Bapna et al., 2017) proposed an
LSTM network and employed slot descriptions to fill the slots for each slot
type individually. The authors in (Lee and Jha, 2019) also employed LSTM, slot
descriptions, and attention mechanisms for individual slot predictions. To
tackle the challenge of the zero-shot slot filling, we leverage the power of
the pre-trained NLP models, compute complex bi-directional relationships of
utterance and slot types, and contextualize the multi-granular information to
better accommodate unseen concepts. In a related, but orthogonal line of
research, the authors in (Ma et al., 2019; Li et al., 2020; Gulyaev et al.,
2020) tackled the problem of slot filling in the context of dialog state
tracking where dialog state and history are available in addition to an input
utterance. In contrast, this work and the SOTA models we compare against in
our experiments only consider an utterance without having access to any dialog
state elements.
## 7\. Conclusion
We have presented a zero-shot slot filling model, $\mathsf{LEONA}$, that can
adapt to new unseen domains seamlessly. $\mathsf{LEONA}$ stands out as the
first zero-shot slot filling model that effectively captures rich and context-
aware linguistic features at different granularities. Our experimental
evaluation uses a comprehensive set of datasets and covers many challenging
settings that stress models and expose their weaknesses (especially in more
realistic settings). Interestingly, our model outperforms all state-of-the-art
models in all settings, over all datasets. The superior performance of our
model is mainly attributed to: its effective use of pre-trained NLP models
that provide domain-oblivious word representations, its multi-step approach
where extra insight is propagated from one step to the next, its generalizable
similarity function, and its contextualization of the words’ representations.
In the most challenging evaluation setting where models are tested on a
variety of datasets after being trained on data from one dataset only, our
model is up to 56.26% more accurate (in F1 score) than the best performing
state-of-the-art model. It remains challenging for all models, including ours,
to identify slot values that are very long or that contain certain tokens. We
plan to further improve our model by incorporating n-gram phrase-level
representations to overcome this challenge and allow our model to accurately
extract slot values regardless of their length or diversity.
## References
* Bahdanau et al. (2014) Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2014\. Neural machine translation by jointly learning to align and translate. _arXiv preprint arXiv:1409.0473_ (2014).
* Bapna et al. (2017) Ankur Bapna, Gokhan Tur, Dilek Hakkani-Tur, and Larry Heck. 2017\. Towards zero-shot frame semantic parsing for domain scaling. _arXiv preprint arXiv:1707.02363_ (2017).
* Bellegarda (2014) Jerome R Bellegarda. 2014\. Spoken language understanding for natural interaction: The siri experience. In _Natural interaction with robots, knowbots and smartphones_. Springer, 3–14.
* Bhathiya and Thayasivam (2020) Hemanthage S Bhathiya and Uthayasanker Thayasivam. 2020. Meta Learning for Few-Shot Joint Intent Detection and Slot-Filling. In _Proceedings of the 2020 5th International Conference on Machine Learning Technologies_. 86–92.
* Bojanowski et al. (2017) Piotr Bojanowski, Edouard Grave, Armand Joulin, and Tomas Mikolov. 2017. Enriching word vectors with subword information. _Transactions of the Association for Computational Linguistics_ 5 (2017), 135–146.
* Bowman et al. (2015) Samuel R Bowman, Gabor Angeli, Christopher Potts, and Christopher D Manning. 2015. A large annotated corpus for learning natural language inference. _arXiv preprint arXiv:1508.05326_ (2015).
* Coucke et al. (2018) Alice Coucke, Alaa Saade, Adrien Ball, Théodore Bluche, Alexandre Caulier, David Leroy, Clément Doumouro, Thibault Gisselbrecht, Francesco Caltagirone, Thibaut Lavril, et al. 2018\. Snips voice platform: an embedded spoken language understanding system for private-by-design voice interfaces. _arXiv preprint arXiv:1805.10190_ (2018).
* Cutting et al. (1992) Douglass Cutting, Julian Kupiec, Jan Pedersen, and Penelope Sibun. 1992. A practical part-of-speech tagger. In _Third Conference on Applied Natural Language Processing_. 133–140.
* Dhall et al. (2017) Abhinav Dhall, Roland Goecke, Shreya Ghosh, Jyoti Joshi, Jesse Hoey, and Tom Gedeon. 2017\. From individual to group-level emotion recognition: Emotiw 5.0. In _Proceedings of the 19th ACM international conference on multimodal interaction_. 524–528.
* Finn et al. (2017) Chelsea Finn, Pieter Abbeel, and Sergey Levine. 2017\. Model-agnostic meta-learning for fast adaptation of deep networks. _arXiv preprint arXiv:1703.03400_ (2017).
* Fritzler et al. (2019) Alexander Fritzler, Varvara Logacheva, and Maksim Kretov. 2019\. Few-shot classification in named entity recognition task. In _Proceedings of the 34th ACM/SIGAPP Symposium on Applied Computing_. 993–1000.
* Gao et al. (2020) Tianyu Gao, Xu Han, Ruobing Xie, Zhiyuan Liu, Fen Lin, Leyu Lin, and Maosong Sun. 2020. Neural Snowball for Few-Shot Relation Learning.. In _AAAI_. 7772–7779.
* Geng et al. (2019) Ruiying Geng, Binhua Li, Yongbin Li, Xiaodan Zhu, Ping Jian, and Jian Sun. 2019\. Induction networks for few-shot text classification. _arXiv preprint arXiv:1902.10482_ (2019).
* Goo et al. (2018) Chih-Wen Goo, Guang Gao, Yun-Kai Hsu, Chih-Li Huo, Tsung-Chieh Chen, Keng-Wei Hsu, and Yun-Nung Chen. 2018. Slot-gated modeling for joint slot filling and intent prediction. In _Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers)_. 753–757.
* Gulyaev et al. (2020) Pavel Gulyaev, Eugenia Elistratova, Vasily Konovalov, Yuri Kuratov, Leonid Pugachev, and Mikhail Burtsev. 2020. Goal-oriented multi-task bert-based dialogue state tracker. _arXiv preprint arXiv:2002.02450_ (2020).
* Hakkani-Tür et al. (2016) Dilek Hakkani-Tür, Gökhan Tür, Asli Celikyilmaz, Yun-Nung Chen, Jianfeng Gao, Li Deng, and Ye-Yi Wang. 2016\. Multi-domain joint semantic frame parsing using bi-directional rnn-lstm.. In _Interspeech_. 715–719.
* Hashimoto et al. (2016) Kazuma Hashimoto, Caiming Xiong, Yoshimasa Tsuruoka, and Richard Socher. 2016. A joint many-task model: Growing a neural network for multiple nlp tasks. _arXiv preprint arXiv:1611.01587_ (2016).
* He et al. (2017) Luheng He, Kenton Lee, Mike Lewis, and Luke Zettlemoyer. 2017\. Deep semantic role labeling: What works and what’s next. In _Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)_. 473–483.
* Hochreiter and Schmidhuber (1997) Sepp Hochreiter and Jürgen Schmidhuber. 1997. Long short-term memory. _Neural computation_ 9, 8 (1997), 1735–1780.
* Hou et al. (2020) Yutai Hou, Wanxiang Che, Yongkui Lai, Zhihan Zhou, Yijia Liu, Han Liu, and Ting Liu. 2020. Few-shot Slot Tagging with Collapsed Dependency Transfer and Label-enhanced Task-adaptive Projection Network. _arXiv preprint arXiv:2006.05702_ (2020).
* Huang et al. (2015) Zhiheng Huang, Wei Xu, and Kai Yu. 2015. Bidirectional LSTM-CRF models for sequence tagging. _arXiv preprint arXiv:1508.01991_ (2015).
* Jha et al. (2018) Rahul Jha, Alex Marin, Suvamsh Shivaprasad, and Imed Zitouni. 2018\. Bag of experts architectures for model reuse in conversational language understanding. In _Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 3 (Industry Papers)_. 153–161.
* Krone et al. (2020) Jason Krone, Yi Zhang, and Mona Diab. 2020. Learning to Classify Intents and Slot Labels Given a Handful of Examples. _arXiv preprint arXiv:2004.10793_ (2020).
* Kurata et al. (2016) Gakuto Kurata, Bing Xiang, Bowen Zhou, and Mo Yu. 2016\. Leveraging sentence-level information with encoder lstm for semantic slot filling. _arXiv preprint arXiv:1601.01530_ (2016).
* Lafferty et al. (2001) John Lafferty, Andrew McCallum, and Fernando CN Pereira. 2001\. Conditional random fields: Probabilistic models for segmenting and labeling sequence data. (2001).
* Lample et al. (2016) Guillaume Lample, Miguel Ballesteros, Sandeep Subramanian, Kazuya Kawakami, and Chris Dyer. 2016\. Neural architectures for named entity recognition. _arXiv preprint arXiv:1603.01360_ (2016).
* Lee and Jha (2019) Sungjin Lee and Rahul Jha. 2019. Zero-shot adaptive transfer for conversational language understanding. In _Proceedings of the AAAI Conference on Artificial Intelligence_ , Vol. 33. 6642–6649.
* Li et al. (2020) Miao Li, Haoqi Xiong, and Yunbo Cao. 2020. The sppd system for schema guided dialogue state tracking challenge. _arXiv preprint arXiv:2006.09035_ (2020).
* Liu and Lane (2016) Bing Liu and Ian Lane. 2016\. Attention-based recurrent neural network models for joint intent detection and slot filling. _arXiv preprint arXiv:1609.01454_ (2016).
* Liu et al. (2019) Xingkun Liu, Arash Eshghi, Pawel Swietojanski, and Verena Rieser. 2019. Benchmarking natural language understanding services for building conversational agents. _arXiv preprint arXiv:1903.05566_ (2019).
* Liu et al. (2020) Zihan Liu, Genta Indra Winata, Peng Xu, and Pascale Fung. 2020\. Coach: A Coarse-to-Fine Approach for Cross-domain Slot Filling. _arXiv preprint arXiv:2004.11727_ (2020).
* Luo et al. (2018) Bingfeng Luo, Yansong Feng, Zheng Wang, Songfang Huang, Rui Yan, and Dongyan Zhao. 2018\. Marrying up regular expressions with neural networks: A case study for spoken language understanding. _arXiv preprint arXiv:1805.05588_ (2018).
* Lv et al. (2019) Xin Lv, Yuxian Gu, Xu Han, Lei Hou, Juanzi Li, and Zhiyuan Liu. 2019. Adapting meta knowledge graph information for multi-hop reasoning over few-shot relations. _arXiv preprint arXiv:1908.11513_ (2019).
* Ma et al. (2019) Yue Ma, Zengfeng Zeng, Dawei Zhu, Xuan Li, Yiying Yang, Xiaoyuan Yao, Kaijie Zhou, and Jianping Shen. 2019\. An end-to-end dialogue state tracking system with machine reading comprehension and wide & deep classification. _arXiv preprint arXiv:1912.09297_ (2019).
* Maaten and Hinton (2008) Laurens van der Maaten and Geoffrey Hinton. 2008. Visualizing data using t-SNE. _Journal of machine learning research_ 9, Nov (2008), 2579–2605.
* Mesnil et al. (2014) Grégoire Mesnil, Yann Dauphin, Kaisheng Yao, Yoshua Bengio, Li Deng, Dilek Hakkani-Tur, Xiaodong He, Larry Heck, Gokhan Tur, Dong Yu, et al. 2014\. Using recurrent neural networks for slot filling in spoken language understanding. _IEEE/ACM Transactions on Audio, Speech, and Language Processing_ 23, 3 (2014), 530–539.
* Mikolov et al. (2013) Tomas Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. 2013\. Efficient estimation of word representations in vector space. _arXiv preprint arXiv:1301.3781_ (2013).
* Nichol et al. (2018) Alex Nichol, Joshua Achiam, and John Schulman. 2018\. On first-order meta-learning algorithms. _arXiv preprint arXiv:1803.02999_ (2018).
* Nichol and Schulman (2018) Alex Nichol and John Schulman. 2018. Reptile: a scalable metalearning algorithm. _arXiv preprint arXiv:1803.02999_ 2, 3 (2018), 4\.
* Pennington et al. (2014) Jeffrey Pennington, Richard Socher, and Christopher D Manning. 2014. Glove: Global vectors for word representation. In _Proceedings of the 2014 conference on empirical methods in natural language processing (EMNLP)_. 1532–1543.
* Peters et al. (2018) Matthew E. Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018. Deep contextualized word representations. In _Proc. of NAACL_.
* Pradhan et al. (2012) Sameer Pradhan, Alessandro Moschitti, Nianwen Xue, Olga Uryupina, and Yuchen Zhang. 2012. CoNLL-2012 shared task: Modeling multilingual unrestricted coreference in OntoNotes. In _Joint Conference on EMNLP and CoNLL-Shared Task_. 1–40.
* Rajpurkar et al. (2016) Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, and Percy Liang. 2016. Squad: 100,000+ questions for machine comprehension of text. _arXiv preprint arXiv:1606.05250_ (2016).
* Ramshaw and Marcus (1995) Lance Ramshaw and Mitch Marcus. 1995. Text Chunking using Transformation-Based Learning. In _Third Workshop on Very Large Corpora_. https://www.aclweb.org/anthology/W95-0107
* Rastogi et al. (2019) Abhinav Rastogi, Xiaoxue Zang, Srinivas Sunkara, Raghav Gupta, and Pranav Khaitan. 2019. Towards scalable multi-domain conversational agents: The schema-guided dialogue dataset. _arXiv preprint arXiv:1909.05855_ (2019).
* Reed et al. (2017) Scott Reed, Yutian Chen, Thomas Paine, Aäron van den Oord, SM Eslami, Danilo Rezende, Oriol Vinyals, and Nando de Freitas. 2017\. Few-shot autoregressive density estimation: Towards learning to learn distributions. _arXiv preprint arXiv:1710.10304_ (2017).
* Reimers and Gurevych (2017) Nils Reimers and Iryna Gurevych. 2017. Optimal hyperparameters for deep lstm-networks for sequence labeling tasks. _arXiv preprint arXiv:1707.06799_ (2017).
* Sang and De Meulder (2003) Erik F Sang and Fien De Meulder. 2003. Introduction to the CoNLL-2003 shared task: Language-independent named entity recognition. _arXiv preprint cs/0306050_ (2003).
* Settles (2004) Burr Settles. 2004\. Biomedical named entity recognition using conditional random fields and rich feature sets. In _Proceedings of the International Joint Workshop on Natural Language Processing in Biomedicine and its Applications (NLPBA/BioNLP)_. 107–110.
* Sha and Pereira (2003) Fei Sha and Fernando Pereira. 2003. Shallow parsing with conditional random fields. In _Proceedings of the 2003 Human Language Technology Conference of the North American Chapter of the Association for Computational Linguistics_. 213–220.
* Shah et al. (2019) Darsh J Shah, Raghav Gupta, Amir A Fayazi, and Dilek Hakkani-Tur. 2019. Robust zero-shot cross-domain slot filling with example values. _arXiv preprint arXiv:1906.06870_ (2019).
* Shen et al. (2017) Tao Shen, Tianyi Zhou, Guodong Long, Jing Jiang, Shirui Pan, and Chengqi Zhang. 2017\. Disan: Directional self-attention network for rnn/cnn-free language understanding. _arXiv preprint arXiv:1709.04696_ (2017).
* Snell et al. (2017) Jake Snell, Kevin Swersky, and Richard Zemel. 2017\. Prototypical networks for few-shot learning. In _Advances in neural information processing systems_. 4077–4087.
* Socher et al. (2013) Richard Socher, Alex Perelygin, Jean Wu, Jason Chuang, Christopher D Manning, Andrew Y Ng, and Christopher Potts. 2013. Recursive deep models for semantic compositionality over a sentiment treebank. In _Proceedings of the 2013 conference on empirical methods in natural language processing_. 1631–1642.
* Srivastava et al. (2015) Rupesh Kumar Srivastava, Klaus Greff, and Jürgen Schmidhuber. 2015. Highway networks. _arXiv preprint arXiv:1505.00387_ (2015).
* Sun et al. (2019) Shengli Sun, Qingfeng Sun, Kevin Zhou, and Tengchao Lv. 2019\. Hierarchical attention prototypical networks for few-shot text classification. In _Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)_. 476–485.
* Sutton and McCallum (2006) Charles Sutton and Andrew McCallum. 2006. An introduction to conditional random fields for relational learning. _Introduction to statistical relational learning_ 2 (2006), 93–128.
* Tan et al. (2017) Zhixing Tan, Mingxuan Wang, Jun Xie, Yidong Chen, and Xiaodong Shi. 2017. Deep semantic role labeling with self-attention. _arXiv preprint arXiv:1712.01586_ (2017).
* Vinyals et al. (2016) Oriol Vinyals, Charles Blundell, Timothy Lillicrap, Daan Wierstra, et al. 2016\. Matching networks for one shot learning. In _Advances in neural information processing systems_. 3630–3638.
* Weston et al. (2014) Jason Weston, Sumit Chopra, and Antoine Bordes. 2014\. Memory networks. _arXiv preprint arXiv:1410.3916_ (2014).
* Xu and Sarikaya (2013) Puyang Xu and Ruhi Sarikaya. 2013. Convolutional neural network based triangular crf for joint intent detection and slot filling. In _2013 ieee workshop on automatic speech recognition and understanding_. IEEE, 78–83.
* Yan et al. (2018) Leiming Yan, Yuhui Zheng, and Jie Cao. 2018. Few-shot learning for short text classification. _Multimedia Tools and Applications_ 77, 22 (2018), 29799–29810.
* Young (2002) Steve Young. 2002\. Talking to machines (statistically speaking). In _Seventh International Conference on Spoken Language Processing_.
* Zang et al. (2020) Xiaoxue Zang, Abhinav Rastogi, Srinivas Sunkara, Raghav Gupta, Jianguo Zhang, and Jindong Chen. 2020\. MultiWOZ 2.2 : A Dialogue Dataset with Additional Annotation Corrections and State Tracking Baselines. In _Proceedings of the 2nd Workshop on Natural Language Processing for Conversational AI_. Association for Computational Linguistics, Online, 109–117. https://doi.org/10.18653/v1/2020.nlp4convai-1.13
* Zhang et al. (2018) Chenwei Zhang, Yaliang Li, Nan Du, Wei Fan, and Philip S Yu. 2018. Joint slot filling and intent detection via capsule neural networks. _arXiv preprint arXiv:1812.09471_ (2018).
# A Novel Approach for Earthquake Early Warning System Design using Deep
Learning Techniques
Tonumoy Mukherjee
Advanced Technology Development Center
Indian Institute of Technology Kharagpur
Kharagpur 721302, India
<EMAIL_ADDRESS>
Chandrani Singh
Geology & Geophysics Department
Indian Institute of Technology Kharagpur
Kharagpur 721302, India
<EMAIL_ADDRESS>
Prabir Kumar Biswas
Electronics & Electrical Communication Department
Indian Institute of Technology Kharagpur
Kharagpur 721302, India
<EMAIL_ADDRESS>
###### Abstract
Earthquake signals are non-stationary in nature, and thus, in real time, it is
difficult to identify and classify events based on classical approaches such as
peak ground displacement and peak ground velocity. Even the popular algorithm of
STA/LTA requires extensive research to determine basic thresholding parameters
so as to trigger an alarm. Also, many times due to human error or other
unavoidable natural factors such as thunder strikes or landslides, the
algorithm may end up raising a false alarm. This work focuses on detecting
earthquakes by converting seismograph-recorded data into corresponding audio
signals for better perception, and then uses popular speech recognition
techniques of Filter bank coefficients and Mel Frequency Cepstral Coefficients
(MFCC) to extract the features. These features were then used to train a
Convolutional Neural Network(CNN) and a Long Short Term Memory (LSTM) network.
The proposed method can overcome the above-mentioned problems and help in
detecting earthquakes automatically from the waveforms without much human
intervention. For the 1000Hz audio data set, the CNN model showed a testing
accuracy of 91.1% for a 0.2-second sample window length, while the LSTM model
showed 93.99% for the same. A total of 610 sounds consisting of 310 earthquake
sounds and 300 non-earthquake sounds were used to train the models. While
testing, the total time required for generating the alarm was 1.68 seconds
which included individual times for data collection, processing, and
prediction. Taking into consideration the processing and prediction delays,
the total time is thus considered to be approximately 2 seconds. This shows
the effectiveness of the proposed method for EEW applications. Since the input
of the method is only the waveform, it is suitable for real-time processing,
thus, the models can very well be used also as an onsite earthquake early
warning system requiring a minimum amount of preparation time and workload.
## 1 Introduction
Earthquakes have been an integral part of the planet earth since time
immemorial. Any movement in the tectonic plates of the earth’s crust releases
massive amounts of energy which passes through the earth’s surface as seismic
waves and results in mild to tremendous shaking. Although this whole process
seems very long, it happens within a few minutes. As a result, the challenge of
responding to these occurrences in time has increased manyfold. Various scales of
1-10 have been adopted as the magnitude indicator of earthquakes. For
addressing a wider range of earthquake sizes, the moment magnitude scale,
abbreviated $M_{W}$, is preferred and is applicable globally [1]. Magnitude
scales are logarithmic: for each whole number that we go up on a magnitude
scale, the amplitude of the ground motion goes up by a
factor of 10 when recorded by a seismograph [2]. Thus, by using this scale as
a reference, it is realized that the level of ground shaking caused by a
magnitude 2 earthquake would be ten times more than a magnitude 1 earthquake
(and 32 times as much energy would be released). To put that into context, if
a magnitude 1 earthquake releases as much energy as blowing up 6 ounces of
TNT, a magnitude 8 earthquake would release as much energy as detonating 6
million tons of TNT. Major earthquakes can cause significant damage to life as
well as property. Since preventing an earthquake from occurring is impossible,
we need to focus on how to mitigate the devastation caused by
these events. For that, we need some robust and reliable prediction methods.
There have been a few successful predictions; the Haicheng earthquake (1975) is a
perfect example. Precursors for this event included
a foreshock sequence, peculiarities in animal behavior, and anomalies such as
geodetic deformation and groundwater level differences [3]. But these
abnormalities were not present in other major earthquakes. Hence there is no
generic precursor for earthquakes. This is where modern approaches of machine
learning and deep learning come into play. Although research in seismology
using machine learning and deep learning is quite limited, with the huge
amount of data that is accessible to researchers, notable studies such as
foreshock identification in real time using deep learning [4], earthquake
detection and location using convolutional neural networks [5], and machine
learning seismic wave discrimination [6] have been carried out. Thus, having an idea
about the occurrence of an earthquake as early as possible will result in
developing a certain amount of alertness to respond to such events with ease.
Unlike various methods of detecting an earthquake, such as the STA/LTA (Short-Term
Average/Long-Term Average) algorithm shown in Figure: 1 [7], another
possible way of detecting and classifying earthquakes, using their sounds, has
been explored in this research. Sound is an effect produced by anything
physical, including earthquakes [8], and is one of the most common effects
reported during or immediately after the felt tremors caused by them [9]. An
earthquake of magnitude 5 will have a sound or vibration different from
another earthquake whose magnitude is 4 [10]. Also, similar magnitude
earthquakes should have similar vibrations or sounds that are
indistinguishable by us but can be picked up very efficiently by neural
networks.
In this paper, two types of machine learning models were used to train a
system with data consisting of earthquake and non-earthquake sounds.
Convolutional Neural Network (CNN) and Long Short Term Memory (LSTM) models
were used for the purpose. The former model showed a testing accuracy of
91.102% for a 0.2-second audio data sample and the latter showed a testing
accuracy of 93.999% for the same.
Figure 1: Figure showing a 6.8 magnitude earthquake acceleration signal. The
red line represents the first arrival of P-wave detected by STA/LTA algorithm
marked by the letter ’P’ and the S-wave arrival marked by the letter ’S’.
## 2 Deep Learning and Seismology
With the rapid increase in seismic data quantity, modern seismology faces
major challenges in data analysis and processing.
Many of the popular techniques used in major data
centers date back to a time when the amount of seismic data was small and
computational power was limited.
Today with the advancements in the fields of machine learning and deep
learning, scientists and researchers can very easily extract useful
information from voluminous data as they provide a large collection of tools
to work with. Once trained with sufficient data, deep learning models, just
like humans, can acquire knowledge by extracting features from raw data
[11] to recognize natural objects and make expert-level decisions in various
disciplines. Besides, the high computational costs for training such networks
are balanced by their low-cost online operation [5]. Advantages like these
make deep learning suitable for applications in real-time seismology and
earthquake early warning (EEW).
This paper looks into two commonly used and widely known deep learning
models, CNN and LSTM, and the accuracy achieved by them in
detecting and classifying earthquakes from their sounds. The architecture of
both networks is examined in detail in the following sections.
### 2.1 Convolutional Neural Network (CNN) Architecture
The proposed architecture for the CNN model has 4 convolutional layers, 1
maxpool layer, and 3 dense layers. Figure 2(b) shows the first of the 4
convolutional layers, consisting of 16 filters built with 3x3 convolution,
’Relu’ as the activation function, and 1x1 stride. All the parameters for the
second, third, and fourth layers remain the same, except that the number of
filters in each layer is double the number of filters in the
previous layer: 16 filters in the first layer, 32 filters
in the second layer, 64 in the third, and 128 in the final layer (shown in
Figure 2(a)).
(a) Block Diagram of the CNN Model used
(b) 16 Filters of the First Convolutional Layer arranged in a 4X4 Matrix
Figure 2: CNN Block Diagram with First Convolutional Layer Filters
The idea behind increasing the number of filters in each layer is to be more
specific about the features as the data starts to convolve through each layer.
A kernel of 2x2 has been used for the maxpool layer. The three dense layers
after the maxpool layer consist of 128, 64, and 2 neurons so as to pool down
the features for the final 2-class classification. The first two dense layers
use ’Relu’ as their activation function, whereas the last dense layer uses
’Softmax’ as its activation function, since we use categorical cross-entropy for
multi-class classification, with ’adam’ as the optimizer.
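A minimal Keras sketch of this architecture is given below; the 9x13x1 input shape follows the MFCC feature dimensions reported in Tables 1 and 2, and the 'same' padding is an assumption, since the padding of the 3x3 convolutions is not stated.

```python
# Illustrative Keras sketch of the described CNN (padding='same' is assumed).
from tensorflow.keras import layers, models

def build_cnn(input_shape=(9, 13, 1), n_classes=2):
    model = models.Sequential([
        # Four 3x3 convolutional layers; the filter count doubles each layer.
        layers.Conv2D(16, (3, 3), strides=(1, 1), padding="same",
                      activation="relu", input_shape=input_shape),
        layers.Conv2D(32, (3, 3), padding="same", activation="relu"),
        layers.Conv2D(64, (3, 3), padding="same", activation="relu"),
        layers.Conv2D(128, (3, 3), padding="same", activation="relu"),
        layers.MaxPooling2D(pool_size=(2, 2)),  # single 2x2 maxpool layer
        layers.Flatten(),
        layers.Dense(128, activation="relu"),
        layers.Dense(64, activation="relu"),
        layers.Dense(n_classes, activation="softmax"),
    ])
    model.compile(optimizer="adam", loss="categorical_crossentropy",
                  metrics=["accuracy"])
    return model
```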
### 2.2 Long Short Term Memory (LSTM) Architecture
The proposed architecture for the LSTM model (shown in Figure: 3) has 2 LSTM
layers consisting of 128 neurons each. Four time-distributed fully connected
layers of 64, 32, 16, and 8 neurons, respectively, are added after the 2 LSTM
layers, with ’relu’ as their activation function. Lastly, a dense layer
consisting of 2 neurons is added for the final 2-class classification with
’softmax’ as its activation function and ’adam’ as the optimizer.
Figure 3: Block Diagram of the LSTM Model used
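A corresponding Keras sketch is shown below; the (9, 13) input shape again follows the MFCC feature dimensions, and the Flatten layer before the final dense layer is an assumption, since the reduction from the time-distributed output to the 2-class prediction is left implicit.

```python
# Illustrative Keras sketch of the described LSTM network.
from tensorflow.keras import layers, models

def build_lstm(input_shape=(9, 13), n_classes=2):
    model = models.Sequential([
        # Two stacked LSTM layers of 128 units each.
        layers.LSTM(128, return_sequences=True, input_shape=input_shape),
        layers.LSTM(128, return_sequences=True),
        # Four time-distributed fully connected layers.
        layers.TimeDistributed(layers.Dense(64, activation="relu")),
        layers.TimeDistributed(layers.Dense(32, activation="relu")),
        layers.TimeDistributed(layers.Dense(16, activation="relu")),
        layers.TimeDistributed(layers.Dense(8, activation="relu")),
        layers.Flatten(),  # assumed; the paper leaves this step implicit
        layers.Dense(n_classes, activation="softmax"),
    ])
    model.compile(optimizer="adam", loss="categorical_crossentropy",
                  metrics=["accuracy"])
    return model
```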
## 3 Data
Indian earthquake acceleration data of the past 10 years, with magnitudes
approximately between 2-8$M_{W}$, was collected from PESMOS (Program for
Excellence in Strong Motion Studies, IIT Roorkee) [12]. EGMB (Eastern Ghats
Mobile Belt) data of earthquakes with magnitudes ranging between 2-5$M_{W}$,
collected at the Geology and Geophysics Lab, IIT Kharagpur, was also used.
Table 1: Training Data for Deep Learning Models (CNN & LSTM)
Data Source | Data Type & Format | Feature Vector Dimensions | No. of Earthquake Data | No. of Non-Earthquake Data
---|---|---|---|---
PESMOS (IIT Roorkee) | Audio, .wav | 9 x 13 | 212 | 0
EGMB (IIT Kharagpur) | Audio, .wav | 9 x 13 | 68 | 280
Table 2: Testing Data for Deep Learning Models (CNN & LSTM)
Data Source | Data Type & Format | Feature Vector Dimensions | No. of Earthquake Data | No. of Non-Earthquake Data
---|---|---|---|---
PESMOS (IIT Roorkee) | Audio, .wav | 9 x 13 | 11 | 0
EGMB (IIT Kharagpur) | Audio, .wav | 9 x 13 | 21 | 18
Table 1 and Table 2 show the train-test data split for both the deep learning
models. The collected dataset was replicated twice. The first copy of the
dataset was converted into corresponding audio signals of .wav format by
keeping the original sensor sampling rate of 200Hz, and the second copy of the
same dataset was converted into corresponding audio signals of the same format
whose sampling rate was increased to 1000Hz using upsampling techniques. The
sampling rate of the upsampled audio signals was chosen using a trial-and-
error method so as to hear and clearly notice the change of the signal with
respect to time. A total of 610 sounds consisting of 310 earthquake sounds and
300 non-earthquake sounds were used to train the models. Both the audio signal
datasets of 200Hz and 1000Hz were fed as inputs to both the above models.
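The conversion step can be sketched with standard scientific Python tools as follows; the helper is illustrative (the array name `accel` and the 16-bit normalization are assumptions, not the authors' code).

```python
# Illustrative conversion of a 200Hz acceleration trace to a 1000Hz .wav file.
import numpy as np
from scipy.signal import resample
from scipy.io import wavfile

def to_wav(accel: np.ndarray, out_path: str,
           fs_in: int = 200, fs_out: int = 1000) -> None:
    # Upsample from the native 200Hz sensor rate to 1000Hz.
    upsampled = resample(accel, int(len(accel) * fs_out / fs_in))
    # Normalize to the 16-bit PCM range before writing the .wav file.
    pcm = np.int16(upsampled / np.max(np.abs(upsampled)) * 32767)
    wavfile.write(out_path, fs_out, pcm)
```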
## 4 Methodology
### 4.1 Data Preparation
Before training the models, proper cleaning of the audio data was done and
essential features from the data were then extracted using popular speech
processing methodologies of Filter Banks and Mel-Frequency Cepstral
Coefficients (MFCCs).
### 4.2 Pre-Emphasis
The very first step of data processing included filtering the data using a
Pre-Emphasis filter. This was mainly done to amplify the high frequencies.
Apart from amplification, the filter helped in balancing the frequency
spectrum since higher frequencies usually have smaller magnitudes compared to
lower frequencies. The filter also was able to improve the Signal-to-Noise
Ratio (SNR). The first-order filter represented by the equation
$y(t)=x(t)-\alpha x(t-1)$ (1)
was used to apply the pre-emphasis filter over the audio signal data.
From the typical values of 0.95 and 0.97, the former was used for the filter
coefficient $\alpha$.
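Eq. (1) translates directly into a one-line numpy implementation, sketched below with the filter coefficient $\alpha=0.95$ used here.

```python
# Pre-emphasis filter of Eq. (1): y(t) = x(t) - alpha * x(t - 1).
import numpy as np

def pre_emphasis(signal: np.ndarray, alpha: float = 0.95) -> np.ndarray:
    # The first sample has no predecessor and is passed through unchanged.
    return np.append(signal[0], signal[1:] - alpha * signal[:-1])
```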
### 4.3 Framing and FFT
After pre-emphasis filtering, the data was divided into short-time frames to
avoid the loss of frequency contours of the signal over time for performing
Fourier transform. A good approximation of the frequency contours of the
signal was achieved by concatenating adjacent frames and applying the Fourier
transform over those short-time frames. Popular settings of 25ms for the
frame-size and 10ms stride were used for framing the data. A Hamming window
function (shown in Eq.2) was applied after the signal was sliced into frames.
$w[n]=0.54-0.46\cos\frac{2\pi n}{N-1},\text{for}\ 0\leq n\leq N-1$ (2)
where N represents the window length, i.e., the number of samples in each frame.
After dividing the signal into frames, an N-point FFT was performed on each
frame to calculate the frequency spectrum, which also happens to be the Short-
Time Fourier Transform (STFT), where N is typically 256 or 512 (256 in this
case).
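These steps can be sketched as follows with the settings above (25ms frames, 10ms stride, Hamming window, 256-point FFT); the implementation is illustrative and assumes the signal contains at least one full frame.

```python
# Illustrative framing, windowing, and power spectrum computation.
import numpy as np

def frame_power_spectrum(signal, fs=1000, frame_ms=25, stride_ms=10, nfft=256):
    frame_len = int(fs * frame_ms / 1000)     # samples per frame
    stride = int(fs * stride_ms / 1000)       # samples per stride
    n_frames = 1 + (len(signal) - frame_len) // stride
    frames = np.stack([signal[i * stride:i * stride + frame_len]
                       for i in range(n_frames)])
    frames = frames * np.hamming(frame_len)   # apply the window of Eq. (2)
    mag = np.abs(np.fft.rfft(frames, nfft))   # N-point FFT per frame (STFT)
    return (mag ** 2) / nfft                  # power spectrum per frame
```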
### 4.4 Filter Banks and Mel-Frequency Cepstral Coefficients (MFCCs)
The rationale behind using filter banks was to separate the input signal into
its multiple components such that each of those components carries a single
frequency sub-band of the original signal. Triangular filters, typically 26,
were applied on a Mel-scale to the power spectrum of the short-time frames to
extract the frequency bands as shown in Figure: 4.
The formula for converting from frequency to Mel scale is given by:
$M(f)=1125\times\ln(1+\frac{f}{700})$ (3)
To go back from Mels to frequency, the formula used is given by:
$M^{-1}(m)=700\times(e^{m/1125}-1)$ (4)
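Eqs. (3) and (4) translate directly into code; in the sketch below, the 500Hz upper edge is an assumption corresponding to the Nyquist frequency of the 1000Hz audio.

```python
# Mel-scale conversions of Eqs. (3) and (4).
import numpy as np

def hz_to_mel(f):
    return 1125.0 * np.log(1.0 + f / 700.0)    # Eq. (3)

def mel_to_hz(m):
    return 700.0 * (np.exp(m / 1125.0) - 1.0)  # Eq. (4)

# Example: edge points of 26 triangular filters spaced evenly on the Mel scale.
edges_hz = mel_to_hz(np.linspace(hz_to_mel(0.0), hz_to_mel(500.0), 26 + 2))
```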
Figure 4: Features extracted from the signal for training the models after
applying the filter bank to the power spectra of the short-time frames of
the signal.
Figure 5: Mel-Frequency Cepstral Coefficients (MFCCs) as extracted
from the short frames into which the signal was divided.
MFCC is a biologically inspired and by far the most successful and most used
feature in the area of speech processing [13]. The algorithm was used in
volcano classification also [14]. For speech signals, the mean and the
variance changes continuously with time, and thus it makes the signal non-
stationary [15]. Similarly like speech, earthquake signals are also non-
stationary [16] as each of them has different arrival of P, S, and surface
waves. Therefore, normal signal processing techniques like Fourier transform,
cannot be directly applied to it. But, if the signal is observed in a very
small duration window (say 25ms ), the frequency content in that small
duration appears to be more or less stationary. This opened up the possibility
of short-time processing of the earthquake sound signals. The small duration
window is called a frame, discussed in section 4.3. For processing the whole
sound segment, the window was moved from the beginning to the end of the
segment consistently with equal steps, called shift or stride. Based on the
frame-size and frame-stride, it gave us M frames. Now, for each of the frames
as depicted in Figure: 5, MFCC coefficients were computed.
Figure 6: Block diagram of the entire system
Moreover, the filter bank energies computed were highly correlated since all
the filterbanks were overlapping. This becomes a problem for most of the
machine learning algorithms. To reduce autocorrelation between the filterbank
coefficients and get a compressed representation of the filter banks, a
Discrete Cosine Transform (DCT) was applied to the filterbank energies. This
also allowed the use of diagonal covariance matrices to model the features for
training. Also, 13 of the 26 DCT coefficients were kept and the rest were
discarded, because fast changes in the filterbank energies are
represented by the higher DCT coefficients and were found to
degrade the model performances. Thus, a small improvement was observed by
dropping them.
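A minimal sketch of this decorrelation step is given below, assuming `fbank_energies` holds the 26 filter bank energies of each short-time frame (the name is illustrative); keeping 13 coefficients per frame matches the 13 in the 9 x 13 feature dimensions of Tables 1 and 2.

```python
# Illustrative DCT decorrelation: log filter bank energies -> 13 MFCCs per frame.
import numpy as np
from scipy.fftpack import dct

def mfcc_from_fbanks(fbank_energies: np.ndarray) -> np.ndarray:
    # fbank_energies: (n_frames, 26) energies from the 26 triangular filters.
    log_energies = np.log(fbank_energies + 1e-10)  # guard against log(0)
    # Type-II DCT along the filter axis; keep the first 13 coefficients.
    return dct(log_energies, type=2, axis=1, norm="ortho")[:, :13]
```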
The overall system representation can be better understood from Figure: 6.
## 5 Results and Discussion
The CNN and the LSTM models performed almost similarly for the 200Hz audio
data set, but significant improvements in the train-test accuracy percentages
are observed for the 1000Hz data set. For the 1000Hz audio data set, the CNN model
showed a testing accuracy of 91.102% for a 0.2-second sample window length, while
the LSTM model showed 93.999% for the same (shown in Figure: 7). This
observation can be backed by the fact that LSTMs perform better for
sequential or time-series data classification [17].
Figure 7: Testing and Training Accuracy comparisons of CNN and LSTM Models
over several sampling window time frames for the audio signal data sets of
1000Hz & 200Hz sampling frequencies
The Kappa statistic (value), generally used for comparing an observed
accuracy with an expected accuracy (random chance), was used for validating
the model accuracies for both data sets (shown in Figure: 8).
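For reference, the metric compares observed agreement against chance agreement and is available off the shelf; the sketch below uses scikit-learn with purely illustrative label arrays.

```python
# Illustrative Cohen's kappa computation for a two-class problem.
from sklearn.metrics import cohen_kappa_score

y_true = [1, 0, 1, 1, 0, 1]  # hypothetical ground-truth labels
y_pred = [1, 0, 1, 0, 0, 1]  # hypothetical model predictions
print(cohen_kappa_score(y_true, y_pred))  # 1.0 = perfect, 0.0 = chance level
```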
For both data classes, the activations of 5 randomly selected filters (out of
16) in the first layer of the CNN model, along with their inputs, are shown in
Figure: 9 and Figure: 10, respectively.
Figure 8: Testing Accuracy and Cohen Kappa Score comparisons of the CNN and
the LSTM Models over several sampling window time frames for the audio signal
data sets of 1000Hz and 200Hz sampling frequencies
Figure 9: 1st Convolutional layer activations for Earthquake Data with
corresponding Filters
Figure 10: 1st Convolutional layer activations for Non- Earthquake Data with
the same Filters as used for Earthquake data
The time required for generating the alarm by the CNN model includes 1.28
seconds to gather enough data to start the MFCC computations, 8ms for
processing and a prediction time of 0.2 seconds. The summation gives a result
of 1.68 seconds. Taking into consideration the processing and prediction
delays, the total time for the CNN model is thus considered to be
$\simeq{2}$seconds.
LSTMs being computationally a bit more expensive, the processing and the
prediction times were 10ms and 0.5 seconds respectively, giving a total of
2.19 seconds. Taking into consideration the processing and prediction delays,
the total time for the LSTM model is thus considered to be
$\simeq{2.5}$seconds.
Table 3: Overall comparison between the standard STA/LTA algorithm and the
proposed algorithms
Algorithm | Data Source | Data Sampling Frequency | Time to Alarm | Accuracy | Prerequisites
---|---|---|---|---|---
STA/LTA | PESMOS (IIT Roorkee) & EGMB (IIT Kharagpur) | 200 Hz | 3 seconds after P-wave arrival | 95.43% (heavily dependent on prerequisites) | 1. Proper user-defined thresholds; 2. Different thresholds for different regions; 3. Threshold kept high for strong motion events (more earthquakes missed, fewer false alarms); 4. Threshold kept low for weak motion events (fewer earthquakes missed, more false alarms)
MFCC with CNN Model (proposed method) | PESMOS (IIT Roorkee) & EGMB (IIT Kharagpur) | 1000 Hz | 2 seconds after P-wave arrival | 91.1% | None
MFCC with LSTM Model (proposed method) | PESMOS (IIT Roorkee) & EGMB (IIT Kharagpur) | 1000 Hz | 2.5 seconds after P-wave arrival | 93.99% | None
It was also observed that by increasing the sampling rate of the audio
signals, the training and the testing accuracies of both the CNN and LSTM models
increased. This is one of the very first applications of machine learning in
the field of earthquake detection using MFCCs and filter bank coefficients,
which are generally used in the field of speech recognition. An earthquake is
basically understood through three types of waves, namely the P-wave, S-wave, and
surface wave. Interaction of these waves with the surrounding medium gives an
earthquake its intensity. Any wave is a vibration. Any vibration has some
sound associated with it. It might be inaudible to the human ears, but the
sound remains. In real-time, it is difficult to identify and classify events
based on classical approaches like peak ground displacement and peak ground
velocity, or even the widely recognized STA/LTA algorithm, as they require
extensive research to determine basic thresholding parameters so as to trigger
an alarm. Many times due to human error or other unavoidable natural factors
such as thunder strikes or landslides, the conventional algorithms may end up
raising a false alarm (shown in Figure: 11).
Figure 11: Three subplots: the first shows incorrect P-wave arrival detection by STA/LTA (marked by the red line) due to poor threshold parameters, leading to late detection of the P-wave and in turn to late alarm generation; the second shows the spectrogram with an almost correct detection of the first P-wave arrival (marked by the green line); the third shows the same spectrogram with a -50 dB threshold applied, giving a clear view of the first P-wave arrival.
Table 3 shows a detailed comparison between the standard STA/LTA algorithm and the proposed deep learning models. The main disadvantage of the STA/LTA algorithm is that it has to be tuned differently for different types of event detection. The user-defined threshold, which acts as the trigger, varies from region to region because of its direct dependence on the geographical and topological features of a particular region. Moreover, the threshold is kept high for strong motion events and low for weak motion events, which creates a serious ambiguity, as the two settings cannot be used together: if the threshold is high, more earthquakes are missed but fewer false alarms are generated; if the threshold is low, fewer earthquakes are missed but more false alarms are generated. The proposed method overcomes these problems, as it can extract the essential features of raw sound or vibration data without manual feature engineering and does not require any knowledge or expertise in the relevant field. It is also invariant to small variations in occurrence in time or position and can learn representations of the data at multiple levels of abstraction. Since the input to the method is only the waveform, it is suitable for real-time processing; the models can therefore also be used as an onsite earthquake early warning system requiring a minimal amount of preparation time and workload. Until now, the use of earthquake early warning systems for earthquake disasters has mainly been limited by false alarm generation and delays in detection. The suggested approach can overcome these problems, leading to automatic, fast, and accurate detection of earthquake seismic signals.
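To make the thresholding issue concrete, the following is a minimal sketch of a classic STA/LTA trigger (not the station software used in this comparison); the window lengths and the threshold value are illustrative assumptions, and they are exactly the region-dependent parameters discussed above.

```python
# Minimal sketch of a classic STA/LTA trigger. Window lengths and the
# threshold are illustrative assumptions, not values used in this study.
import numpy as np

def sta_lta_trigger(signal: np.ndarray, fs: float,
                    sta_win: float = 0.5, lta_win: float = 10.0,
                    threshold: float = 3.5) -> int:
    """Return the first sample index where STA/LTA exceeds the threshold,
    or -1 if no trigger occurs."""
    n_sta, n_lta = int(sta_win * fs), int(lta_win * fs)
    energy = np.cumsum(signal.astype(float) ** 2)  # running signal energy
    for i in range(n_lta, len(signal) - n_sta):
        sta = (energy[i + n_sta] - energy[i]) / n_sta  # short-term average
        lta = (energy[i] - energy[i - n_lta]) / n_lta  # long-term average
        if lta > 0 and sta / lta > threshold:
            return i  # candidate P-wave arrival
    return -1
```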
### 5.1 Seismic Sensor Prototype Hardware Design and Experimental Setup for
Model Testing
The main reason for developing the sensor hardware prototype was to test the validity and robustness of the proposed models. It was also crucial to understand and evaluate the complexities, and to solve the challenges, at the system level, since the system is targeted for deployment in earthquake-prone areas. The prototype was also essential for comparing the system results with existing commercial solutions.
The hardware was developed with the help of commercial equipment to mimic the vibrations of an actual earthquake. Figure: 12 depicts a schematic of the prototype sensor hardware, and Figure: 13(a) shows the actual PCB with boxes marking the function of each section. On the far left is the power management block, supplying a steady source of power to the circuit (3.3 V, 80 mA). On the top right is the accelerometer sensor, serving as the principal detection element (seismometer), with an output dynamic range of 0.1 V to 2.5 V and a sensitivity of 300 mV/g. Between the power management and accelerometer blocks are the microcontroller and Wi-Fi communication blocks, which receive and transmit the sensor data wirelessly. The Wi-Fi link handles a signal bandwidth of 100 mHz to 10 Hz at an RF carrier frequency of 2.4 GHz, with a data transmission rate of up to 256 kbps. The processor has a 16-bit RISC architecture with an operating frequency of 16 MHz.
Figure 12: Schematic of an accelerometer with front-end amplifiers (box in
black), and a microcontroller with wireless transceiver (box in red).
(a) Photo of the prototype circuit board for implementing a wireless seismic
sensor node.
(b) A 3D printed packaging scheme for the wireless seismic sensor prototype.
(c) Specifications of the Seismic sensor Prototype.
Figure 13: Prototype Sensor PCB with its 3D printed packaging and
Specifications
Figure: 13(b) and Figure: 13(c) show the 3D printed packaging solution used to house the sensor board and the exact specifications of the components used to design the sensor PCB, respectively. After the deep learning models were validated and finalized, an experimental setup was built to test the effectiveness and robustness of the models. The idea was to mimic the vibrations of an actual earthquake signal, sense them with the custom-designed sensor PCB, and transmit the data wirelessly to a remote computer, where the data are plotted, processed, and stored in real time. As soon as the deep learning model detects the required data, it activates and predicts the label, classifying whether the recorded data correspond to an earthquake or a non-earthquake sound. The experimental setup is shown in Figure: 14.
Figure 14: Test Setup for the entire System.
The test setup and the overall system included the following components:
* •
An Arbitrary Waveform Generator (AWG), in which real earthquake data was stored so that it could generate an output waveform from that data.
* •
A 4 ohm, 40 Watt speaker. The output of the AWG was connected to the speaker so as to make the speaker vibrate according to the output waveform and thus mimic an earthquake.
* •
The prototype circuit board, consisting of the accelerometer sensor to sense the speaker vibrations, the microcontroller, and an ESP8266 Wi-Fi communication block (the transmitter is built into the PCB and the receiver is connected to the remote PC via a USB-to-UART cable).
* •
MATLAB code running on the remote PC to receive, process, plot, and store the data in real time.
* •
Python code running simultaneously with the MATLAB code on the remote PC, monitoring the stored-data directory and activating the deep learning model for prediction as soon as it detects any data in that directory (a minimal sketch of this watcher follows the list).
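The watcher can be sketched as follows; this is a minimal illustration rather than the exact script used in the experiments, and the directory name, model file, sampling rate, and the use of librosa for MFCC extraction are all assumptions.

```python
# Minimal sketch of the directory-watching predictor. Illustrative only:
# paths, model file, sampling rate, MFCC backend, and the assumption of a
# single sigmoid output unit are not taken from the paper.
import os
import time
import numpy as np
import librosa  # assumed MFCC backend
from tensorflow.keras.models import load_model

WATCH_DIR = "incoming_data"          # directory the MATLAB code writes to
MODEL = load_model("cnn_model.h5")   # trained CNN (or LSTM) classifier
SR = 1000                            # assumed sensor sampling rate (Hz)

def classify(path: str) -> str:
    """Extract MFCC features from one recorded file and predict its class."""
    signal, _ = librosa.load(path, sr=SR)
    mfcc = librosa.feature.mfcc(y=signal, sr=SR, n_mfcc=13)
    x = mfcc[np.newaxis, :, :, np.newaxis]  # shape: (batch, coeffs, frames, 1)
    prob = float(MODEL.predict(x, verbose=0)[0][0])
    return "earthquake" if prob > 0.5 else "non-earthquake"

seen = set(os.listdir(WATCH_DIR))
while True:                                 # simple polling loop
    for name in sorted(set(os.listdir(WATCH_DIR)) - seen):
        seen.add(name)
        print(name, "->", classify(os.path.join(WATCH_DIR, name)))
    time.sleep(0.1)
```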
The test run of the system was a success: it produced the desired results by accurately predicting the class of the vibration signals from the speaker received through the wireless ESP8266 Wi-Fi module.
## 6 Conclusion
In this paper, a new way of automatic classification of earthquake signals is
presented based on CNN and LSTM by using only MFCC features extracted from the
waveform. The performance of this algorithm has been tested by its application
to regional and local earthquake events selected from PESMOS (Program for
Excellence in Strong Motion Studies, IIT Roorkee) and EGMB (Eastern Ghats
Mobile Belt, Geology and Geophysics Lab IIT Kharagpur) data sets. Using
optimal parameters, for 1000 Hz audio data set the CNN model showed a testing
accuracy of 91.102% for a 0.2-second sample window while the LSTM model
obtained an accuracy of 93.999% for the same window. This brings the alarm generation time down to approximately 2 seconds after P-wave arrival. The results outperform the conventional STA/LTA algorithm in terms of classification speed and prerequisite requirements while achieving comparable classification accuracy. While the models were aimed primarily at classifying earthquake events as quickly as possible, they can also be easily adapted to give early estimates of magnitudes, ground velocities, shaking intensity, and many other useful parameters.
The innovative experimental setup also helped in creating real-time earthquake simulations in a cheap, precise, and effective way.
Using MFCCs for earthquake detection is like putting our ears to the ground to hear sounds inside the earth that we otherwise could not, thus treating the earth's sounds as its speech signals. The most interesting and effective aspect of this model is that it can be trained on classes of sounds other than earthquake sounds alone, so as to classify every signal the sensor detects by its sound signature. This can also be of enormous help in military applications: if trained on the sounds of human movement, the model could be deployed at borders and other high-security areas to provide instant information about trespassing and other unlawful activities, easing the burden on security officials and soldiers to some extent. This could be a solution not just for earthquake detection but for many other applications as well.
## Acknowledgments
This work is funded by the Ministry of Human Resource Development (MHRD), Government of India. The authors are thankful to the Department of Earthquake Engineering, IIT Roorkee, and the Department of Geology and Geophysics, IIT Kharagpur, for providing the Program for Excellence in Strong Motion Studies (PESMOS) and Eastern Ghats Mobile Belt (EGMB) earthquake datasets, respectively, for this research. This work would not have been possible without the research facilities provided by IIT Kharagpur. Finally, the authors are very thankful to all the members of the Image Processing and Computer Vision Laboratory, Department of Electronics and Electrical Communication Engineering, and the Computational Laboratory, Department of Geology and Geophysics, Indian Institute of Technology Kharagpur, for their kind help and support.
# Convergence of non-autonomous attractors for subquintic weakly damped wave
equation
Jakub Banaśkiewicz, Piotr Kalita Faculty of Mathematics and Computer Science,
Jagiellonian University, ul. Łojasiewicza 6, 30-348, Kraków
<EMAIL_ADDRESS><EMAIL_ADDRESS>
###### Abstract.
We study the non-autonomous weakly damped wave equation with a subquintic growth condition on the nonlinearity. Our main focus is the class of Shatah–Struwe solutions, which satisfy the Strichartz estimates and coincide with the class of solutions obtained by the Galerkin method. For this class we show the existence and smoothness of pullback, uniform, and cocycle attractors and the relations between them. We also prove that these non-autonomous attractors converge upper-semicontinuously to the global attractor of the limit autonomous problem if the time-dependent nonlinearity tends to a time-independent function in an appropriate way.
This work was supported by National Science Center (NCN) of Poland under
projects No. DEC-2017/25/B/ST1/00302 and UMO-2016/22/A/ST1/00077.
## 1\. Introduction.
In this paper we are interested in the existence, regularity and upper-
semicontinuous convergence of pullback, uniform and cocycle attractors of the
problems governed by the following weakly damped wave equations
$u_{tt}+u_{t}-\Delta u=f_{\varepsilon}(t,u).$ (1.1)
We prove that these attractors converge as $\varepsilon\to 0$ to the global
attractor of the problem governed by the limit autonomous equation
$u_{tt}+u_{t}-\Delta u=f_{0}(u),$ (1.2)
where $f_{\varepsilon}\to f_{0}$ in an appropriate sense. The unknowns are the functions $u:[t_{0},\infty)\times\Omega\to\mathbb{R}$, where $\Omega\subset\mathbb{R}^{3}$ is an open and bounded domain with smooth boundary.
The theory of global attractors for the wave equation with damping term
$u_{t}$ has been developed by Babin and Vishik [Babin-Vishik-1983], Ghidaglia
and Temam [Ghidaglia_Temam], Hale [Hale_1985], Haraux [Haraux1, Haraux2], Pata
and Zelik [PataZelik]. Overview of the theory can be found, among others, in
the monographs of Babin and Vishik [Babin_Vishik], Haraux [Har_book], Chueshov
and Lasiecka [Chueshov_Lasiecka]. We also mention the classical monographs of
Henry [Henry-1981], Hale [Hale-1988], Robinson [Robinson], Temam [Temam], and
Dłotko and Cholewa [Dlotko_Cholewa] on infinite dimensional autonomous
dynamical systems. Various types of non-autonomous attractors and their
properties has been studied, among others, by Chepyzhov and Vishik
[Chepyzhov_Vishik], Cheban [Cheban], Kloeden and Rasmussen [Kloeden],
Carvalho, Langa, and Robinson [Carvalho-Langa-Robinson-2012], Chepyzhov
[Chepyzhov-2013], and Bortolan, Carvalho and Langa [BoCaL].
The existence of the global attractor for (1.2) with the cubic growth
condition
$|f_{0}(s)|\leq C(1+|s|^{3}),$ (1.3)
has been obtained by Arrieta, Carvalho, and Hale in [Arrieta-Carvalho-
Hale-1992]. This growth exponent had long been considered as critical. In 2016
Kalantarov, Savostianov and Zelik [Savostianov] used the findings on the
Strichartz estimates for the wave equation on bounded domains [Burq, Blair] to
obtain the global attractor existence for the so called Shatah–Struwe
solutions of quintic weakly damped wave equation, i.e. where the exponent $3$
in (1.3) is replaced by $5$. These findings led to the rapid development of
the theory for weakly damped wave equation with supercubic growth. In
particular, global attractors for Shatah–Struwe solutions for supercubic case
with forcing in $H^{-1}$ have been studied by Liu, Meng, and Sun [LMS], and
the exponential attractors were investigated by Meng and Liu in [Meng_Liu]. We
also mention the work [Carvalho-Chol-Dlot-2009] of Carvalho, Cholewa, and
Dłotko who obtained an existence of the weak global attractor for a concept of
solutions for supercubic but subquintic case. Finally, the results on
attractors for autonomous problems with supercubic nonlinearities have been
generalized to the case of damping given by the fractional Laplacian in the
subquintic case in [Savo2] and in the quintic case in [Savo1].
For a non-autonomous dynamical system there exist several important concepts
of attractors: the pullback attractor, a time-dependent family of compact sets
attracting “from the past” [Carvalho-Langa-Robinson-2012, Kloeden], the
uniform attractor, the minimal compact set attracting forwards in time
uniformly with respect to the driven system of non-autonomous terms
[Chepyzhov_Vishik], and the cocycle attractor, which, in a sense, unifies and extends the two above concepts [Kloeden, Langa]. An overview of these notions
can be found in the review article [balibrea]. Recent intensive research on
the characterization of pullback attractors and continuity properties for PDEs
[Carvalho-Langa-Robinson-2012, Kloeden, Langa] has led to the results on the
link between the notions of uniform, pullback, and cocycle attractors, namely
an internal characterization of a uniform attractor as the union of the
pullback attractors related to all their associated symbols (see [Langa], and
Theorem 8.1 below), thus allowing one to define the notion of lifted
invariance (see [Langa], and Definition 8.7 and Theorem 8.2 below) on uniform
attractors.
There are several recent results on the non-autonomous version of the weakly damped wave equation with quintic, or at least supercubic, growth condition which use
the concept of Shatah–Struwe solutions. Savostianov and Zelik in the article
[Savostianov_measure] obtain the existence of the uniform attractor for the
problem governed by
$u_{tt}+u_{t}+(1-\Delta)u+f(u)=\mu(t),$
on the three dimensional torus, where $\mu(t)$ can be a measure. Mei, Xiong,
and Sun [mcs] obtain the existence of the pullback attractor for the problem
governed by the equation
$u_{tt}+u_{t}-\Delta u+f(u)=g(t),$ (1.4)
for the subquintic case on the space domain given by whole $\mathbb{R}^{3}$ in
the so-called locally uniform spaces. Mei and Sun [Mei_Sun] obtain the existence of the uniform attractor for non-translation-compact forcing for the
problem governed by (1.4) with subquintic $f$. Finally, Chang, Li, Sun, and
Zelik [clsz] consider the problem of the form
$u_{tt}+\gamma(t)u_{t}-\Delta u+f(u)=g,$
and show the existence of several types of nonautonomous attractors with
quintic nonlinearity for the case where the damping may change sign. None of
these results consider the nonlinearity of the form $f(t,u)$ and none of these
results fully explore the structure of non-autonomous attractors and relation
between pullback, uniform, and cocycle attractors characterized in [Langa].
The present paper aims to fill this gap.
In this article we generalize the results of [Savostianov] to the problem
governed by the weakly damped nonautonomous wave equation (1.1) with the
semilinear term $f_{\varepsilon}(t,u)$ which is a perturbation of the
autonomous nonlinearity $f_{0}(u)$, cf. assumptions (H2) and (H3) in Section
3. We stress that we deal only with the case of the subquintic growth
$\left|\frac{\partial f_{\varepsilon}}{\partial u}(t,u)\right|\leq
C(1+|u|^{4-\kappa}),\ $
for which we prove the results on the existence and asymptotic smoothness of
Shatah–Struwe solutions, derive the asymptotic smoothing estimates and obtain
the result on the upper-semicontinuous convergence of attractors. Thus we
extend and complete the previous results in [Savostianov] where only the
autonomous case was considered and in [Mei_Sun, mcs] where the nonlinearity
was only in the autonomous term. We stress some key difficulties and
achievements of our work. We follow the methodology of [Savostianov,
Proposition 3.1 and Proposition 4.1] to derive the Strichartz estimate for the
nonlinear problem from the one for the linear problem (where we use the
continuation principle that can be found for example in [Tao, Proposition
1.21]) but in the proof we need the extra property that the constant $C_{h}$
in the linear Strichartz estimate
$\|u\|_{L^{4}(0,h;L^{12})}\leq
C_{h}(\|(u_{0},u_{1})\|_{\mathcal{E}_{0}}+\|G\|_{L^{1}(0,T;L^{2})}),$
is a nondecreasing function of $h$. We establish this fact with the use of the
Christ–Kiselev lemma [Sogge, Lemma 3.1]. As part of the definition of the weak solution that we work with, we require that it be a limit of the Galerkin approximations. In [Savostianov, Section 3] the authors decide to
work with Shatah–Struwe solutions (i.e. the solutions possessing the extra
$L^{4}(0,T;L^{12}(\Omega))$ regularity), and they prove that each such
solution must be the limit of the Galerkin approximations, cf. [Savostianov,
Corollary 3.6]. We establish that in the subcritical case the two notions are
in fact equivalent, cf. our Lemma 5.4. In [Savostianov, Corollary 4.3] the
authors derive only $\mathcal{E}_{\delta}$ estimates, noting in Remark 4.6 the possibility of further bootstrap arguments. We derive in Section 6
the relevant asymptotic smoothing estimates and thus provide the result on the
attractor smoothness in $\mathcal{E}_{1}$. The main result of the paper about
non-autonomous attractors, cf. Theorem 6.2, uses the findings of [Kloeden,
Langa] and establishes the existence, smoothness, and relation between
uniform, cocycle (and thus also pullback) attractors. Finally, another novelty
of the present paper is the upper-semicontinuity result of Section 7 which
also concerns these three classes of non-autonomous attractors.
The possible extension of our results can involve dealing with a non-
autonomous nonlinearity with critical quintic growth condition. This case is
more delicate because control of the energy norm of the initial data does not give control of the $L^{4}(0,T;L^{12}(\Omega))$ norm. To overcome this
problem Kalantarov, Savostianov, and Zelik in [Savostianov] used the technique
of trajectory attractors. Another interesting question is the possibility of
extending the results of [Mirelson-Kalita-Langa] about the convergence of non-
autonomous attractors for equations
$\varepsilon u_{tt}+u_{t}-\Delta u=f_{\varepsilon}(t,u)$
to the attractor for the semilinear heat equation as $\varepsilon\to 0$, in
the case of subquintic or quintic growth condition on $f$. The main difficulty
is to obtain the Strichartz estimates which are uniform with respect to
$\varepsilon$. Finally we mention the possible further line of research
involving the lower semicontinuous convergence of attractor and the stability
of the attractor structure under perturbation.
The structure of the article is as follows. After some preliminary facts recalled in Section 2, the formulation of the problem, the assumptions on its data, and some auxiliary results regarding the translation compactness of the non-autonomous term are presented in Section 3. Section 4 is devoted to the Galerkin solutions and their dissipativity, and Section 5 contains the results on the Strichartz estimates, Shatah–Struwe solutions, and their equivalence with the Galerkin solutions. The result on the existence and asymptotic smoothness of non-autonomous attractors, Theorem 6.2, is contained in Section 6, while in Section 7 we prove their upper-semicontinuous convergence to the global attractor of the autonomous problem. Some auxiliary results needed in the paper are included in the final Section 8.
## 2\. Preliminaries.
Let $\Omega\subset\mathbb{R}^{3}$ be a bounded and open set with sufficiently
smooth boundary. We will use the notation $L^{2}$ for $L^{2}(\Omega)$ and in
general for notation brevity we will skip writing dependence on $\Omega$ in
spaces of functions defined on this set. By $(\cdot,\cdot),\|{.}\|$ we will
denote respectively the scalar product and the norm in $L^{2}$. We will also
use the notation $\mathcal{E}_{0}=H^{1}_{0}\times L^{2}$ for the energy space.
Its norm is defined by $\|(u,v)\|^{2}_{\mathcal{E}_{0}}=\|\nabla
u\|^{2}+\|v\|^{2}$. In the article by $C$ we denote a generic positive
constant which can vary from line to line. We recall some useful information
concerning the spectral fractional Laplacian [Antil]. Denote by
$\\{e_{i}\\}_{i=1}^{\infty}$ the eigenfunctions (normed to $1$ in
$L^{2}(\Omega)$) of the operator $-\Delta$ with the Dirichlet boundary
conditions, such that the corresponding eigenvalues are given by
$0<\lambda_{1}\leq\lambda_{2}\leq\ldots\leq\lambda_{n}\leq\ldots.$
For $u\in L^{2}$ its $k$-th Fourier coefficient is defined as
$\widehat{u}_{k}=(u,e_{k})$. Let $s\geq 0$. The spectral fractional Laplacian is defined by the formula
$(-\Delta)^{\frac{s}{2}}u=\sum_{k=1}^{\infty}\lambda_{k}^{\frac{s}{2}}\widehat{u}_{k}e_{k}.$
The space $\mathbb{H}^{s}$ is defined as
$\mathbb{H}^{s}=\left\\{u\in L^{2}\,:\
\sum_{k=1}^{\infty}\lambda_{k}^{s}\widehat{u}_{k}^{2}<\infty\right\\}.$
The corresponding norm is given by
$\|u\|_{\mathbb{H}^{s}}=\|(-\Delta)^{s/2}u\|=\sqrt{\sum_{k=1}^{\infty}\lambda_{k}^{s}\widehat{u}_{k}^{2}}.$
The space $\mathbb{H}^{s}$ is a subspace of the fractional Sobolev space
$H^{s}$. In particular
$\mathbb{H}^{s}=\begin{cases}H^{s}=H^{s}_{0}\ \ \textrm{for}\ \
s\in(0,1/2),\\\ H^{s}_{0}\ \ \textrm{for}\ \ s\in(1/2,1].\end{cases}$
We also recall that the standard fractional Sobolev norm satisfies
$\|u\|_{H^{s}}\leq C\|u\|_{\mathbb{H}^{s}}$ for $u\in\mathbb{H}^{s}$, cf.
[Antil, Proposition 2.1]. For $s\in[0,1]$ we will use the notation
$\mathcal{E}_{s}=\mathbb{H}^{s+1}\times\mathbb{H}^{s}$. This space is equipped
with the norm
$\|(u,v)\|^{2}_{\mathcal{E}_{s}}=\|u\|^{2}_{\mathbb{H}^{s+1}}+\|v\|^{2}_{\mathbb{H}^{s}}$.
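As a simple illustration of these definitions (a direct consequence of the spectral formulas above), for a single eigenfunction $e_{k}$ one has
$(-\Delta)^{\frac{s}{2}}e_{k}=\lambda_{k}^{\frac{s}{2}}e_{k},\qquad\|e_{k}\|_{\mathbb{H}^{s}}=\lambda_{k}^{\frac{s}{2}},\qquad\|(e_{k},0)\|_{\mathcal{E}_{s}}=\lambda_{k}^{\frac{s+1}{2}},$
so the scale $\mathbb{H}^{s}$ penalizes high frequencies with the weight $\lambda_{k}^{\frac{s}{2}}$.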
## 3\. Problem definition and assumptions.
We consider the following family of problems parameterized by $\varepsilon>0$
$\begin{cases}u_{tt}+u_{t}-\Delta
u=f_{\varepsilon}(t,u)\;\text{for}\;(x,t)\in\Omega\times(0,\infty),\\\
u(t,x)=0\;\text{for}\;x\in\partial\Omega,\\\ u(0,x)=u_{0}(x),\\\
u_{t}(0,x)=u_{1}(x).\end{cases}$ (3.1)
The initial data has the regularity $(u_{0},u_{1})\in\mathcal{E}_{0}$.
Throughout the article we always assume that the nonautonomous and nonlinear
term $f_{\varepsilon}(t,u)$, treated as the mapping which assigns to the time
$t\in\mathbb{R}$ the function of the variable $u$, belongs to the space
$C(\mathbb{R};C^{1}(\mathbb{R}))$. This space is equipped with the metric
$d_{C(\mathbb{R};C^{1}(\mathbb{R}))}(g_{1},g_{2})=\sum_{i=1}^{\infty}\frac{1}{2^{i}}\frac{\sup_{t\in[-i,i]}d_{C^{1}(\mathbb{R})}(g_{1}(t,.),g_{2}(t,.))}{1+\sup_{t\in[-i,i]}d_{C^{1}(\mathbb{R})}(g_{1}(t,.),g_{2}(t,.))}\;\text{for
$g_{1},g_{2}\in C(\mathbb{R};C^{1}(\mathbb{R}))$},$
where the metric in $C^{1}(\mathbb{R})$ is defined as follows
$d_{C^{1}(\mathbb{R})}(g_{1},g_{2})=\sum_{i=1}^{\infty}\frac{1}{2^{i}}\frac{\|{g_{1}(u)-g_{2}(u)}\|_{C^{1}([-i,i])}}{1+\|{g_{1}(u)-g_{2}(u)}\|_{C^{1}([-i,i])}}\;\text{for
$g_{1},g_{2}\in C^{1}(\mathbb{R})$},$
and $\|g\|_{C^{1}(A)}=\max_{r\in A}|g(r)|+\max_{r\in A}|g^{\prime}(r)|$ for a
compact set $A\subset\mathbb{R}$.
###### Remark 3.1.
If $g_{n}\to g$ in the sense of $C(\mathbb{R};C^{1}(\mathbb{R}))$ then $g_{n}\to g$ and $\frac{\partial{g_{n}}}{\partial u}\to\frac{\partial{g}}{\partial u}$ uniformly on every bounded subset of $\mathbb{R}$.
We make the following assumptions on the functions $f_{\varepsilon}:\mathbb{R}\times\mathbb{R}\to\mathbb{R}$ and $f_{0}:\mathbb{R}\to\mathbb{R}$:
* (H1)
For every $\varepsilon\in(0,1]$ the function $f_{\varepsilon}\in
C(\mathbb{R};C^{1}(\mathbb{R}))$, and $f_{0}\in C^{1}(\mathbb{R})$.
* (H2)
For every $u\in\mathbb{R}$
$\lim_{\varepsilon\to
0}\sup_{t\in\mathbb{R}}|f_{\varepsilon}(t,u)-f_{0}(u)|=0.$
* (H3)
There holds
$\sup_{\varepsilon\in[0,1]}\sup_{t\in\mathbb{R}}\sup_{u\in\mathbb{R}}|f_{\varepsilon}(t,u)-f_{0}(u)|<\infty.$
* (H4)
There holds
$\limsup_{|u|\to\infty}\frac{f_{0}(u)}{u}<\lambda_{1},$
where $\lambda_{1}$ is the first eigenvalue of $-\Delta$ operator with the
Dirichlet boundary conditions.
* (H5)
There exists $\kappa>0$ and $C>0$ such that
$\sup_{\varepsilon\in[0,1]}\sup_{t\in\mathbb{R}}\left|\frac{\partial
f_{\varepsilon}}{\partial u}(t,u)\right|\leq C(1+|u|^{4-\kappa})\ \
\textrm{for every}\ \ u\in\mathbb{R}.$
* (H6)
For any fixed $u\in\mathbb{R}$ the map $f_{\varepsilon}(t,u)$ is uniformly
continuous with respect to $t$. Moreover for every $R>0$ the map
$\mathbb{R}\times[-R,R]\ni(t,u)\mapsto\frac{\partial f_{\varepsilon}}{\partial
u}(t,u)$ is uniformly continuous.
###### Remark 3.2.
An example of a family of functions satisfying conditions (H1)–(H6) is
$f_{\varepsilon}(t,u)=-u|u|^{4-\kappa}+g(u)+\varepsilon\sin(t)\sin(u^{3}),$
where the growth of $g(u)$ is of order strictly lower than $5-\kappa$.
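Indeed, for this family the distance to the autonomous limit $f_{0}(u)=-u|u|^{4-\kappa}+g(u)$ can be estimated directly:
$\sup_{t\in\mathbb{R}}|f_{\varepsilon}(t,u)-f_{0}(u)|=\varepsilon\sup_{t\in\mathbb{R}}|\sin(t)\sin(u^{3})|\leq\varepsilon,$
which gives both (H2) and (H3); conditions (H4) and (H5) can be checked from the leading term $-u|u|^{4-\kappa}$ and the growth assumption on $g$ (for the perturbation one uses $|\frac{\partial}{\partial u}\varepsilon\sin(t)\sin(u^{3})|\leq 3\varepsilon u^{2}$, which is admissible for $\kappa\leq 2$).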
###### Proposition 3.3.
Assuming (H1), (H3), (H5), and (H6), for every $R>0$ the mapping
$\mathbb{R}\times[-R,R]\ni(t,u)\mapsto f_{\varepsilon}(t,u)$
is uniformly continuous.
###### Proof.
Let $u_{1},u_{2}\in[-R,R]$ and $t_{1},t_{2}\in\mathbb{R}$. There holds
$\displaystyle|f_{\varepsilon}(t_{1},u_{1})-f_{\varepsilon}(t_{2},u_{2})|\leq|f_{\varepsilon}(t_{1},u_{1})-f_{\varepsilon}(t_{1},u_{2})|+|f_{\varepsilon}(t_{1},u_{2})-f_{\varepsilon}(t_{2},u_{2})|$
$\displaystyle\qquad\leq C(R)|u_{1}-u_{2}|+\sup_{|u|\leq
R}|f_{\varepsilon}(t_{1},u)-f_{\varepsilon}(t_{2},u)|.$
It suffices to prove that for every $\eta>0$ we can find $\delta>0$ such that
if only $|t_{1}-t_{2}|\leq\delta$ then $\sup_{|u|\leq
R}|f_{\varepsilon}(t_{1},u)-f_{\varepsilon}(t_{2},u)|\leq\eta$. Assume for
contradiction that there exists $\eta_{0}>0$ such that for every
$n\in\mathbb{N}$ we can find $t^{n}_{1},t^{n}_{2}\in\mathbb{R}$ with $|t^{n}_{1}-t^{n}_{2}|\leq\frac{1}{n}$ and
$\sup_{|u|\leq
R}|f_{\varepsilon}(t^{n}_{1},u)-f_{\varepsilon}(t^{n}_{2},u)|>\eta_{0}.$
For every $n$ there exists $u^{n}$ with $|u^{n}|\leq R$ such that
$|f_{\varepsilon}(t^{n}_{1},u^{n})-f_{\varepsilon}(t^{n}_{2},u^{n})|>\eta_{0}.$
For a subsequence $u^{n}\to u^{0}$ with $|u^{0}|\leq R$, hence
$\displaystyle\eta_{0}<|f_{\varepsilon}(t^{n}_{1},u^{n})-f_{\varepsilon}(t^{n}_{1},u^{0})|+|f_{\varepsilon}(t^{n}_{1},u^{0})-f_{\varepsilon}(t^{n}_{2},u^{0})|+|f_{\varepsilon}(t^{n}_{2},u^{0})-f_{\varepsilon}(t^{n}_{2},u^{n})|$
$\displaystyle\ \ \ \leq
2C(R)|u^{n}-u^{0}|+|f_{\varepsilon}(t^{n}_{1},u^{0})-f_{\varepsilon}(t^{n}_{2},u^{0})|.$
By taking $n$ large enough we deduce that
$\frac{\eta_{0}}{2}<|f_{\varepsilon}(t^{n}_{1},u^{0})-f_{\varepsilon}(t^{n}_{2},u^{0})|,$
a contradiction with the uniform continuity of $f_{\varepsilon}(\cdot,u^{0})$ assumed in (H6). ∎
We define the hull of $f$ as the set $\mathcal{H}(f):=\overline{\\{f(t+\cdot,\cdot)\in C(\mathbb{R};C^{1}(\mathbb{R}))\\}_{t\in\mathbb{R}}}$, where the closure is understood in the metric $d_{C(\mathbb{R};C^{1}(\mathbb{R}))}$. We also define the set
$\mathcal{H}_{[0,1]}:=\bigcup_{\varepsilon\in(0,1]}\mathcal{H}(f_{\varepsilon})\cup\\{f_{0}\\}.$
We say that a function $f$ is translation compact if its hull $\mathcal{H}(f)$
is a compact set. The following characterization of translation compactness
can be found in [Chepyzhov_Vishik, Proposition 2.5 and Remark 2.2].
###### Proposition 3.4.
Let $f\in C(\mathbb{R};C^{1}(\mathbb{R}))$. Then $f$ is translation compact if
and only if for every $R>0$
* (i)
$|f(t,u)|+|\frac{\partial f}{\partial u}(t,u)|\leq C_{R}$ for
$(t,u)\in\mathbb{R}\times[-R,R],$
* (ii)
Functions $f(t,u)$ and $\frac{\partial f}{\partial u}(t,u)$ are uniformly
continuous on $\mathbb{R}\times[-R,R].$
We prove two simple results concerning the translation compactness of
$f_{\varepsilon}$ and its hull.
###### Corollary 3.0.1.
For each $\varepsilon\in(0,1]$ function $f_{\varepsilon}$ is translation
compact.
###### Proof.
From assumption (H3) and the fact that $f_{0}\in C^{1}(\mathbb{R})$ one can
deduce that (i) from Proposition 3.4 holds. Moreover, (H6) and Proposition 3.3
imply that (ii) holds, and the proof is complete. ∎
###### Proposition 3.5.
If $f_{\varepsilon}$ satisfies conditions (H1), (H2), (H3), and (H5) then
these conditions are satisfied by all elements from $\mathcal{H}_{[0,1]}$.
Moreover there exist constants $C,K>0$ independent of $\varepsilon$ such that
for every $p_{\varepsilon}\in\mathcal{H}(f_{\varepsilon})$ there hold
$\sup_{\varepsilon\in[0,1]}\sup_{t\in\mathbb{R}}\sup_{u\in\mathbb{R}}\left|p_{\varepsilon}(t,u)-f_{0}(u)\right|\leq K,\qquad\sup_{\varepsilon\in[0,1]}\sup_{t\in\mathbb{R}}\left|\frac{\partial p_{\varepsilon}}{\partial u}(t,u)\right|\leq C(1+|u|^{4-\kappa})\ \ \textrm{for every}\ \ u\in\mathbb{R}.$ (3.2)
###### Proof.
Property (H1) is clear. Suppose that (H2) does not hold. Then there exists a
number $\delta>0$, sequences $\varepsilon_{n}\to 0$, $p_{\varepsilon_{n}}\in\mathcal{H}(f_{\varepsilon_{n}})$, $t_{n}\in\mathbb{R}$, and a number $u\in\mathbb{R}$ such that
$|p_{\varepsilon_{n}}(t_{n},u)-f_{0}(u)|>2\delta.$
Because $p_{\varepsilon_{n}}\in\mathcal{H}(f_{\varepsilon_{n}})$ we can pick a
sequence $s_{n}$ such that
$|f_{\varepsilon_{n}}(s_{n}+t_{n},u)-p_{\varepsilon_{n}}(t_{n},u)|\leq\delta$.
Then
$|f_{\varepsilon_{n}}(t_{n}+s_{n},u)-f_{0}(u)|\geq-|f_{\varepsilon_{n}}(t_{n}+s_{n},u)-p_{\varepsilon_{n}}(t_{n},u)|+|p_{\varepsilon_{n}}(t_{n},u)-f_{0}(u)|\geq\delta.$
Now (H2) follows by contradiction. We denote
$K_{1}:=\sup_{\varepsilon\in[0,1]}\sup_{t\in\mathbb{R}}\sup_{u\in\mathbb{R}}|f_{\varepsilon}(t,u)-f_{0}(u)|,$
which from assumption (H3) is a finite number. Taking $p_{\varepsilon}\in\mathcal{H}_{[0,1]}$, for every $(t,u)\in\mathbb{R}^{2}$ we obtain
$|p_{\varepsilon}(t,u)-f_{0}(u)|\leq|f_{\varepsilon}(t+s_{n},u)-p_{\varepsilon}(t,u)|+|f_{\varepsilon}(t+s_{n},u)-f_{0}(u)|\leq K_{1}+|f_{\varepsilon}(t+s_{n},u)-p_{\varepsilon}(t,u)|.$
We can pick a sequence $s_{n}$ such that $|f_{\varepsilon}(s_{n}+t,u)-p_{\varepsilon}(t,u)|\to 0$, so passing to the limit we get
$|p_{\varepsilon}(t,u)-f_{0}(u)|\leq K_{1}.$
We have proved that for every $p_{\varepsilon}\in\mathcal{H}_{[0,1]}$ we have
$\sup_{t\in\mathbb{R}}\sup_{u\in\mathbb{R}}|p_{\varepsilon}(t,u)-f_{0}(u)|\leq
K_{1}.$
From (H5) we obtain
$\left|\frac{\partial{f_{\varepsilon}}}{\partial u}(t+s_{n},u)\right|\leq
C(1+|u|^{4-\kappa}),$
for every $u,t,s_{n}\in\mathbb{R},\varepsilon\in[0,1]$. Again by choosing
$s_{n}$ such that $|f_{\varepsilon}(s_{n}+t,u)-p_{\varepsilon}(t,u)|\to 0$ and
passing to the limit we observe that for every
$p_{\varepsilon}\in\mathcal{H}_{[0,1]}$ there holds
$\sup_{t\in\mathbb{R}}\left|\frac{\partial p_{\varepsilon}}{\partial
u}(t,u)\right|\leq C(1+|u|^{4-\kappa})\ \ \textrm{for every}\ \
u\in\mathbb{R},$ (3.3)
which ends the proof. ∎
###### Proposition 3.6.
If (H1), (H2), (H3), and (H5) hold, then for every $R>0$ and every
$p_{\varepsilon}\in\mathcal{H}(f_{\varepsilon})$
$\lim_{\varepsilon\to 0}\sup_{|s|\leq
R}\sup_{t\in\mathbb{R}}|p_{\varepsilon}(t,s)-f_{0}(s)|=0.$
###### Proof.
For contradiction assume that there exists $\delta>0$ and sequences
$|s_{n}|\leq R$, $t_{n}\in\mathbb{R}$, $\varepsilon_{n}\to 0$ such that
$\delta\leq|p_{\varepsilon_{n}}(t_{n},s_{n})-f_{0}(s_{n})|.$
For a subsequence there holds $s_{n}\to s_{0}$, where $|s_{0}|\leq R$. Hence,
$\displaystyle\delta\leq|p_{\varepsilon_{n}}(t_{n},s_{n})-p_{\varepsilon_{n}}(t_{n},s_{0})|+|p_{\varepsilon_{n}}(t_{n},s_{0})-f_{0}(s_{0})|+|f_{0}(s_{0})-f_{0}(s_{n})|$
$\displaystyle\leq
C(1+|s_{n}|^{4-\kappa}+|s_{0}|^{4-\kappa})|s_{n}-s_{0}|+\sup_{t\in\mathbb{R}}|p_{\varepsilon_{n}}(t,s_{0})-f_{0}(s_{0})|+|f_{0}(s_{0})-f_{0}(s_{n})|$
$\displaystyle\leq
C(R)|s_{n}-s_{0}|+\sup_{t\in\mathbb{R}}|p_{\varepsilon_{n}}(t,s_{0})-f_{0}(s_{0})|+|f_{0}(s_{0})-f_{0}(s_{n})|.$
All the terms on the right-hand side tend to zero as $n\to\infty$, and we arrive at a contradiction. ∎
## 4\. Galerkin solutions.
###### Definition 4.1.
Let $(u_{0},u_{1})\in\mathcal{E}_{0}$. The function $u\in
L^{\infty}_{loc}([0,\infty);H^{1}_{0})$ with $u_{t}\in
L^{\infty}_{loc}([0,\infty);L^{2})$ and $u_{tt}\in
L^{\infty}_{loc}([0,\infty);H^{-1})$ is a weak solution of problem (3.1) if for every $t_{1}>0$ and every $v\in L^{2}(0,t_{1};H^{1}_{0})$ there holds
$\int_{0}^{t_{1}}\langle u_{tt}(t),v(t)\rangle_{H^{-1}\times H^{1}_{0}}+(u_{t}(t)-f_{\varepsilon}(t,u(t)),v(t))+(\nabla u(t),\nabla v(t))\,dt=0,$
and $u(0)=u_{0}$, $u_{t}(0)=u_{1}$.
Note that as $u\in C([0,\infty);L^{2})$ and $u_{t}\in C([0,\infty);H^{-1})$,
pointwise values of $u$ and $u_{t}$, and thus the initial data, make sense.
However, due to the lack of regularity of the nonlinear term
$f_{\varepsilon}(\cdot,u(\cdot))$, we cannot test the equation with $u_{t}$.
Thus, although it is straightforward to prove (using the Galerkin method) the
existence of the weak solution given by the above definition, we cannot
establish the energy estimates required to work with this solution.
Let $\\{e_{i}\\}_{i=1}^{\infty}$ be the eigenfunctions of the $-\Delta$
operator with the Dirichlet boundary conditions on $\partial\Omega$ sorted by
the nondecreasing eigenvalues. They constitute the orthonormal basis of
$L^{2}$ and they are orthogonal in $H^{1}_{0}$. Denote
$V_{N}=\textrm{span}\,\\{e_{1},\ldots,e_{N}\\}$. The family of finite
dimensional spaces $\\{V_{N}\\}_{N=1}^{\infty}$ approximates $H^{1}_{0}$ from
the inside, that is
$\overline{\bigcup_{N=1}^{\infty}V_{N}}^{H^{1}_{0}}=H^{1}_{0}\qquad\textrm{and}\qquad
V_{N}\subset V_{N+1}\ \ \textrm{for every}\ \ N\geq 1.$
Let $u^{N}_{0}\in V_{N}$ and $u^{N}_{1}\in V_{N}$ be such that
$\displaystyle u^{N}_{0}\to u_{0}\ \ \textrm{in}\ \ H^{1}_{0}\ \ \textrm{as}\
\ N\to\infty,$ $\displaystyle u^{N}_{1}\to u_{1}\ \ \textrm{in}\ \ L^{2}\ \
\textrm{as}\ \ N\to\infty.$
Now the $N$-th Galerkin approximate solution for (3.1) is defined as follows.
###### Definition 4.2.
The function $u^{N}\in C^{1}([0,\infty);V_{N})$ with $u^{N}_{t}\in
AC([0,\infty);V_{N})$ is the $N$-th Galerkin approximate solution of problem
(3.1) if $u^{N}(0)=u^{N}_{0}$, $u^{N}_{t}(0)=u^{N}_{1}$, and for every $v\in
V_{N}$ and a.e. $t>0$ there holds
$(u^{N}_{tt}(t)+u^{N}_{t}(t)-f_{\varepsilon}(t,u^{N}(t)),v)+(\nabla
u^{N}(t),\nabla v)=0.$
We continue by defining the weak solution of the Galerkin type.
###### Definition 4.3.
The weak solution given by Definition 4.1 is said to be of the Galerkin type
if it can be approximated by the solutions of the Galerkin problems, i.e., for a subsequence of $N$ (not relabeled) there holds
$\displaystyle u^{N}\to u\ \ \textrm{weakly-* in}\ \
L^{\infty}_{loc}([0,\infty);H^{1}_{0}),$ (4.1) $\displaystyle u^{N}_{t}\to
u_{t}\ \ \textrm{weakly-* in}\ \ L^{\infty}_{loc}([0,\infty);L^{2}),$ (4.2)
$\displaystyle u^{N}_{tt}\to u_{tt}\ \ \textrm{weakly-* in}\ \
L^{\infty}_{loc}([0,\infty);H^{-1}).$ (4.3)
We skip the proof of the following result which is standard in the framework
of the Galerkin method.
###### Theorem 4.1.
Assume (H1), (H3)–(H5). If $(u_{0},u_{1})\in\mathcal{E}_{0}$ then the problem
given in Definition 4.1 has at least one weak solution of Galerkin type.
###### Proposition 4.4.
The Galerkin solutions of problem (3.1) are bounded in $\mathcal{E}_{0}$ and
there exists a bounded set $B_{0}\subset\mathcal{E}_{0}$ which is absorbing,
i.e. for every bounded set $B\subset\mathcal{E}_{0}$ there exists $t_{0}\geq
0$ such that for every Galerkin solution $(u(t),u_{t}(t))$ with the initial
conditions in $B$ there holds $(u(t),u_{t}(t))\in B_{0}$ for every $t\geq
t_{0}$. Moreover, $B_{0}$ and $t_{0}$ do not depend on the choice of
$p(t,u)\in\mathcal{H}_{[0,1]}$ in place of $f_{\varepsilon}$ in (3.1).
Note that in the above result the function $f_{\varepsilon}$ in (3.1) is
replaced by $p\in\mathcal{H}_{[0,1]}$. In the sequel we will consider (3.1)
with such $p\in\mathcal{H}_{[0,1]}$ replacing $f_{\varepsilon}$.
To prove the above proposition we will need the following Grönwall-type lemma.
###### Lemma 4.2.
Let $I(t)=I_{1}(t)+\ldots+I_{n}(t)$ be an absolutely continuous function,
$I(0)\in\mathbb{R}$. Suppose that
$\frac{d}{dt}I(t)\leq-A_{i}I_{i}(t)^{\alpha_{i}}+B_{i},$
for every $i\in\\{{1,\ldots,n}\\}$ and for almost every $t$ such that
$I_{i}(t)\geq 0$, where $\alpha_{i},A_{i},B_{i}>0$ are constants. Then for
every $\eta>0$ there exists $t_{0}>0$ such that
$I(t)\leq\sum_{i=1}^{n}\left(\frac{B_{i}}{A_{i}}\right)^{\frac{1}{\alpha_{i}}}+\eta,\;\text{for
every $t\geq t_{0}$}.$
If, in addition, $\\{I^{l}(t)\\}_{l\in\mathcal{L}}$ is a family of functions
satisfying the above conditions and such that $I^{l}(0)\leq Q$ for each
$l\in\mathcal{L}$, then the time $t_{0}$ is independent of $l$ and there
exists a constant $C$ depending on $Q,A_{i},B_{i},\alpha_{i}$ such that
$I^{l}(t)\leq C$ for every $t\geq 0$ and every $l\in\mathcal{L}$.
###### Proof.
We denote
$B=\sum_{i=1}^{n}\left(\frac{B_{i}}{A_{i}}\right)^{\frac{1}{\alpha_{i}}}$
and let $A=\min_{i\in\\{1,\ldots,n\\}}\\{A_{i}\\}$. First we will show that
for every $\eta>0$ if $I(t_{0})\leq B+\eta$, then $I(t)\leq B+\eta$ for every
$t\geq t_{0}$. For the sake of contradiction let us suppose that there exists
some $t_{1}>t_{0}$ such that $I(t_{1})>B+\eta$. Let
$t_{2}=\sup\\{{s\in[t_{0},t_{1}]\,:\ I(s)\leq B+\eta}\\}$, so there exists
$\delta>0$ such that for every $s\in(t_{2},t_{1}]$ we can find an index $i$
for which there holds
$I_{i}(s)>\left(\frac{B_{i}}{A_{i}}+\delta\right)^{\frac{1}{\alpha_{i}}}.$
Then for a.e. $s\in(t_{2},t_{1}]$ we have
$\frac{d}{dt}I(s)\leq-A_{i}I_{i}(s)^{\alpha_{i}}+B_{i}\leq-A\delta,$
and after integration we get that $I(t_{1})\leq I(t_{2})-(t_{1}-t_{2})A\delta\leq B+\eta$, which contradicts $I(t_{1})>B+\eta$. We observe that all functions from the family
$\\{I^{l}(t)\\}_{l\in\mathcal{L}}$ are bounded by $\max\\{Q,1\\}+B$. Now we
will prove existence of $t_{0}$. For the sake of contradiction suppose that
there exists $\eta>0$ and the sequence of times $t_{n}\to\infty$ such that
$I^{l_{n}}(t_{n})>B+\eta$ for some $l_{n}\in\mathcal{L}$. Then for every
$s\in[0,t_{n}]$ we must have $I^{l_{n}}(s)>B+\eta$, since otherwise the first part of the proof would give $I^{l_{n}}(t_{n})\leq B+\eta$. Then there exists $\delta>0$ such that for all $s\in[0,t_{n}]$ and all $l_{n}$ there is some index $i$ with $I_{i}^{l_{n}}(s)>(\frac{B_{i}}{A_{i}}+\delta)^{\frac{1}{\alpha_{i}}}$. Again, for a.e. $s\in(0,t_{n})$,
$\frac{d}{dt}I^{l_{n}}(s)\leq-A\delta,$
and after integrating we get that $I^{l_{n}}(t_{n})\leq Q-t_{n}A\delta$, which is a contradiction for $n$ large enough. ∎
###### Proof of Proposition 4.4.
Let $u$ be the Galerkin solution to (3.1) with any function
$p\in\mathcal{H}_{[0,1]}$ in place of $f_{\varepsilon}$ at the right-hand side
of (3.1). By testing this equation with $u+2u_{t}$ we obtain
$\displaystyle\frac{d}{dt}\left[(u_{t},u)+\frac{1}{2}\|{u}\|^{2}+\|{u_{t}}\|^{2}+\|{\nabla
u}\|^{2}-2\int_{\Omega}F_{0}(u)dx\right]$
$\displaystyle\qquad=-\|{u_{t}}\|^{2}-\|{\nabla
u}\|^{2}+(f_{0}(u),u)+(p(t,u)-f_{0}(u),2u_{t}+u),$
where $F_{0}(u)=\int_{0}^{u}f_{0}(v)dv$. Assumption (H4) implies the
inequality
$(f_{0}(u),u)\leq C+K\|{u}\|^{2},\text{ where }0\leq K<\lambda_{1}.$
We define
$I(t)=(u_{t},u)+\frac{1}{2}\|{u}\|^{2}+\|{u_{t}}\|^{2}+\|{\nabla u}\|^{2}-2\int_{\Omega}F_{0}(u)dx.$
Using the Poincaré and Cauchy–Schwarz inequalities we obtain
$\frac{d}{dt}I(t)\leq-\|{u_{t}}\|^{2}-C\|{\nabla u}\|^{2}+\|{p(t,u)-f_{0}(u)}\|(2\|{u_{t}}\|+\|{u}\|)+C.$ (4.4)
Using the Poincaré inequality again it follows by Proposition 3.5 that
$\frac{d}{dt}I(t)\leq-C\left(\|{u_{t}}\|^{2}+\|{\nabla u}\|^{2}\right)+C.$
(4.5)
We represent the function $I(t)$ as the sum of the following terms
$I_{1}=\|{u_{t}}\|^{2},\;I_{2}=\frac{1}{2}\|{u}\|^{2},\;I_{3}=\|{\nabla
u}\|^{2},\;I_{4}(t)=(u_{t},u),\;I_{5}=-2\int_{\Omega}F_{0}(u)dx.$
From the estimate (4.5) and the Poincaré inequality we can easily see that
$\frac{d}{dt}I\leq-A_{i}I_{i}+B_{i}$ for $i\in\\{1,2,3,4\\}$, (4.6)
where $A_{i},B_{i}$ are positive constants. To deal with the term $I_{5}$ we
observe that by the growth condition (H5) using the Hölder inequality we
obtain
$\displaystyle I_{5}$ $\displaystyle\leq
C\int_{\Omega}\left|\int_{0}^{u}1+|v|^{5}dv\right|dx\leq
C\int_{\Omega}\left(|u|+|u|^{6}\right)dx=C\left(\|{u}\|_{L_{1}}+\|{u}\|_{L_{6}}^{6}\right)\leq
C\left(\|{u}\|_{L_{6}}+\|{u}\|_{L_{6}}^{6}\right)$ $\displaystyle\leq
C\left(\|{u}\|_{L_{6}}^{6}+1\right).$
From the Sobolev embedding $H_{0}^{1}\hookrightarrow L_{6}$ it follows that
$I_{5}^{\frac{1}{3}}\leq\left(C\|{\nabla u}\|^{6}+1\right)^{\frac{1}{3}}\leq
C\left(\|{\nabla u}\|^{2}+1\right).$ (4.7)
From the estimate (4.5) we observe that
$\frac{d}{dt}I\leq-A_{5}I_{5}^{\frac{1}{3}}+B_{5},\ \ \textrm{with}\ \
A_{5},B_{5}>0.$ (4.8)
By Lemma 4.2 we deduce that there exists a constant $D>0$ such that for every bounded set of initial data $B\subset\mathcal{E}_{0}$ there exists a time $t_{0}=t_{0}(B)$ such that for every $p\in\mathcal{H}_{[0,1]}$ there holds
$I(t)\leq D$ for $t\geq t_{0}$ and $(u_{0},u_{1})\in B$. (4.9)
We observe that from (H4) it follows that
$F_{0}(u)\leq C+\frac{K}{2}u^{2}$ where $0\leq K<\lambda_{1}$ (4.10)
We deduce
$I(t)\geq\frac{1}{2}\|{u_{t}}\|^{2}+\|{\nabla u}\|^{2}-K\|{u}\|^{2}-C\geq C\|{(u,u_{t})}\|_{\mathcal{E}_{0}}^{2}-C.$ (4.11)
We have shown the existence of the absorbing set $B_{0}\subset\mathcal{E}_{0}$
which is independent of the choice of $p\in\mathcal{H}_{[0,1]}$. By Lemma 4.2
it follows that for every initial condition $(u_{0},u_{1})\in\mathcal{E}_{0}$
there exists a constant $D=D(u_{0},u_{1})>0$ such that for every
$p\in\mathcal{H}_{[0,1]}$ and $t\in\mathbb{R}$ there holds
$I(t)\leq D$ for $t\in[0,\infty)$. (4.12)
The proof is complete. ∎
## 5\. Shatah–Struwe solutions, their regularity and a priori estimates.
### 5.1. Auxiliary linear problem.
Similarly to [Savostianov], we define an auxiliary problem for which we derive a priori estimates both in the energy and in the Strichartz norms.
$\displaystyle\begin{cases}&u_{tt}+u_{t}-\Delta
u=G(x,t)\;\text{for}\;(x,t)\in\Omega\times(t_{0},\infty),\\\
&u(t,x)=0\;\text{for}\;x\in\partial\Omega\\\ &u(t_{0},x)=u_{0}(x)\\\
&u_{t}(t_{0},x)=u_{1}(x)\end{cases}$ (5.1)
It is well known that if $G\in L^{1}_{loc}([t_{0},\infty);L^{2})$ and
$(u_{0},u_{1})\in\mathcal{E}_{0}$ then the above problem has the unique weak
solution $u$ belonging to $C_{loc}([t_{0},\infty);H^{1}_{0})$ with $u_{t}\in
C_{loc}([t_{0},\infty);L^{2})$ and $u_{tt}\in
L^{\infty}_{loc}([t_{0},\infty);H^{-1})$. For details see, e.g., [Temam, Babin_Vishik, Robinson, Chepyzhov_Vishik]. The next result appears in
[Savostianov, Proposition 2.1]. For completeness of our argument we provide
the outline of the proof.
###### Proposition 5.1.
Let $u$ be the weak solution to problem (5.1) on interval $[t_{0},\infty)$
with $G\in L^{1}_{loc}([t_{0},\infty);L^{2})$ and initial data
$u(t_{0})=u_{0}$, $u_{t}(t_{0})=u_{1}$ with $(u_{0},u_{1})\in\mathcal{E}_{0}$.
Then the following estimate holds
$\|{(u(t),u_{t}(t))}\|_{\mathcal{E}_{0}}\leq
C\left(\|{(u_{0},u_{1})}\|_{\mathcal{E}_{0}}e^{-\alpha(t-t_{0})}+\int_{t_{0}}^{t}e^{-\alpha(t-s)}\|{G(s)}\|ds\right)$
for every $t\geq t_{0}$, where $C,\alpha$ are positive constants independent
of $t,t_{0},G$ and initial conditions of (5.1).
###### Proof.
Testing (5.1) by $u+2u_{t}$ we obtain
$\displaystyle\frac{d}{dt}\left((u_{t},u)+\frac{1}{2}\|{u}\|^{2}+\|{u_{t}}\|^{2}+\|{\nabla
u}\|^{2}\right)=-\|{u_{t}}\|^{2}-\|{\nabla u}\|^{2}+(G(t),u+2u_{t})$
We define $I(t)=(u_{t},u)+\frac{1}{2}\|{u}\|^{2}+\|{u_{t}}\|^{2}+\|{\nabla
u}\|^{2}$. We easily deduce
$\frac{d}{dt}I(t)\leq C\left(-I(t)+\sqrt{I(t)}\|{G(t)}\|\right).$
Multiplying the above inequality by $e^{Ct}$ we obtain
$\frac{d}{dt}\left(I(t)e^{Ct}\right)\leq Ce^{Ct}\|G(t)\|\sqrt{I(t)}.$
After integration it follows that
$I(t)e^{Ct}-I(t_{0})e^{Ct_{0}}\leq
C\int_{t_{0}}^{t}e^{Cs}\|G(s)\|\sqrt{I(s)}\,ds.$
Hence, for every $\varepsilon>0$
$I(t)\leq(I(t_{0})+\varepsilon)e^{C(t_{0}-t)}+e^{-Ct}C\int_{t_{0}}^{t}e^{Cs}\|G(s)\|\sqrt{I(s)}\,ds.$
(5.2)
Now let
$J(t)=C\int_{t_{0}}^{t}e^{Cs}\|G(s)\|\sqrt{I(s)}\,ds.$
Then $J$ is absolutely continuous, $J(t_{0})=0$, and for almost every
$t>t_{0}$ we obtain
$J^{\prime}(t)=Ce^{Ct}\|G(t)\|\sqrt{I(t)}.$
From (5.2) it follows that
$J^{\prime}(t)\leq
Ce^{Ct}\|G(t)\|\sqrt{(I(t_{0})+\varepsilon)e^{C(t_{0}-t)}+e^{-Ct}J(t)}=Ce^{\frac{Ct}{2}}\|G(t)\|\sqrt{(I(t_{0})+\varepsilon)e^{Ct_{0}}+J(t)}.$
Hence
$\frac{J^{\prime}(t)}{\sqrt{(I(t_{0})+\varepsilon)e^{Ct_{0}}+J(t)}}\leq
Ce^{\frac{Ct}{2}}\|G(t)\|.$
After integrating over interval $[t_{0},t]$ we obtain the following inequality
valid for every $t\geq t_{0}$
$\sqrt{(I(t_{0})+\varepsilon)e^{Ct_{0}}+J(t)}\leq{\sqrt{(I(t_{0})+\varepsilon)e^{Ct_{0}}}}+\frac{C}{2}\int_{t_{0}}^{t}e^{\frac{Cs}{2}}\|G(s)\|\,ds.$
It follows that
$J(t)\leq
C\left[\left(\int_{t_{0}}^{t}e^{\frac{Cs}{2}}\|G(s)\|\,ds\right)^{2}+(I(t_{0})+\varepsilon)e^{Ct_{0}}\right].$
From definition of $J(t)$ using the inequality (5.2) we notice that
$I(t)\leq
C\left((I(t_{0})+\varepsilon)e^{\alpha(t_{0}-t)}+\left(\int_{t_{0}}^{t}e^{-\alpha(t-s)}\|G(s)\|\,ds\right)^{2}\right),$
for a constant $\alpha>0$. As
$c_{1}\|{(u(t),u_{t}(t))}\|_{\mathcal{E}_{0}}\leq\sqrt{I(t)}\leq
c_{2}\|{(u(t),u_{t}(t))}\|_{\mathcal{E}_{0}}$ for some $c_{1},c_{2}>0$,
passing with $\varepsilon$ to zero we obtain the required assertion. ∎
The following lemma provides extra control on the $L^{4}(L^{12})$ norm of the solution to the linear problem (5.1). The result is given in
[Savostianov, Proposition 2.2 and Remark 2.3].
###### Lemma 5.1.
Let $h>0$ and let $u$ be a weak solution to problem (5.1) on time interval
$(t_{0},t_{0}+h)$ with $G\in L^{1}(t_{0},t_{0}+h;L^{2})$ and
$(u(t_{0}),u_{t}(t_{0}))=(u_{0},u_{1})\in\mathcal{E}_{0}$. Then $u\in
L^{4}(t_{0},t_{0}+h;L^{12})$ and the following estimate holds
$\|{u}\|_{L^{4}(t_{0},t_{0}+h;L^{12})}\leq
C_{h}\left(\|{(u_{0},u_{1})}\|_{\mathcal{E}_{0}}+\|{G}\|_{L^{1}(t_{0},t_{0}+h;L^{2})}\right),$
(5.3)
where the constant $C_{h}>0$ depends only on $h$ but is independent of
$t_{0},(u_{0},u_{1}),G$.
We will need the following result.
###### Proposition 5.2.
It is possible to choose the constants $C_{h}$ in the previous lemma in such a way that the function $[0,\infty)\ni h\mapsto C_{h}$ is nondecreasing.
The above proposition will be proved with the use of the following theorem
known as the Christ–Kiselev lemma, see e.g. [Sogge, Lemma 3.1].
###### Theorem 5.2.
Let $X,Y$ be Banach spaces and assume that $K(t,s)$ is a continuous function
taking values in $B(X,Y)$, the space of linear bounded mappings from $X$ to
$Y$. Suppose that $-\infty\leq a<b\leq\infty$ and set
$Tf(t)=\int_{a}^{b}K(t,s)f(s)\,ds,$ $Wf(t)=\int_{a}^{t}K(t,s)f(s)\,ds.$
Then if for $1\leq p<q\leq\infty$ there holds
$\|{Tf}\|_{L^{q}(a,b;Y)}\leq C\|{f}\|_{L^{p}(a,b;X)},$
then
$\|{Wf}\|_{L^{q}(a,b;Y)}\leq\overline{C}\|{f}\|_{L^{p}(a,b;X)},\;\text{with
$\overline{C}=2C\frac{2^{2\left(\frac{1}{q}-\frac{1}{p}\right)}}{1-2^{\frac{1}{q}-\frac{1}{p}}}$}.$
###### Proof.
If $G\equiv 0$ then we denote the corresponding constant by $D_{h}$, i.e.
$\|{u}\|_{L^{4}(t_{0},t_{0}+h;L^{12})}\leq
D_{h}\|{(u_{0},u_{1})}\|_{\mathcal{E}_{0}}.$
Clearly, the function $[0,\infty)\ni h\to D_{h}\in[0,\infty)$ can be made
nondecreasing. We will prove that (5.3) holds with $C_{h}$, a monotone
function of $D_{h}$. If the family $\\{S(t)\\}_{t\in\mathbb{R}}$ of mappings
$S(t):\mathcal{E}_{0}\to\mathcal{E}_{0}$ is the solution group for the linear
homogeneous problem (i.e. if $G\equiv 0$) then we denote
$S(t)(u_{0},u_{1})=(S_{u}(t)(u_{0},u_{1}),S_{u_{t}}(t)(u_{0},u_{1}))$. Let
$t_{0}\in\mathbb{R}$ and $\delta>0$. Using the Duhamel formula for equation
(5.1) we obtain
$u(t_{0}+\delta)=S_{u}(\delta)(u_{0},u_{1})+\int_{0}^{\delta}S_{u}(\delta-s)(0,G(t_{0}+s))\,ds.$
Applying the $L^{4}(0,h;L^{12})$ norm with respect to $\delta$ to both sides
we obtain
$\|{u}\|_{L^{4}(t_{0},t_{0}+h;L^{12})}\leq D_{h}\|{(u_{0},u_{1})}\|_{\mathcal{E}_{0}}+\|{P_{1}}\|_{L^{4}(0,h;L^{12})},$
for every $h>0$, where
$P_{1}(\delta)=\int_{0}^{\delta}S_{u}(\delta-s)(0,G(t_{0}+s))ds$. We will
estimate the Strichartz norm of $P_{1}$ using Theorem 5.2 with
$X=L^{2},Y=L^{12},q=4,p=1,a=0,b=h$. If $\Pi_{N}:L^{2}\to V_{N}$ is the $L^{2}$-orthogonal projection, then $S_{u}(h-s)(0,\Pi_{N}(\cdot))$ is a continuous function of $(h,s)$ taking its values in $B(L^{2},L^{12})$. Hence the estimate should be derived separately for every $N$, and, since it is uniform with respect to $N$, it holds also in the limit. We skip this technicality and proceed with the formal estimates only. We set
$P_{2}(\delta)=\int_{0}^{h}S_{u}(\delta-s)(0,G(t_{0}+s))ds$, and we estimate
$\displaystyle\|{P_{2}}\|_{L^{4}(0,h;L^{12})}\leq\int_{0}^{h}\|{S_{u}(\delta-s)(0,G(t_{0}+s))}\|_{L^{4}(0,h;L^{12})}\,ds$
$\displaystyle\ \ \
=\int_{0}^{h}\|{S_{u}(\delta)S(-s)(0,G(t_{0}+s))}\|_{L^{4}(0,h;L^{12})}\,ds\leq\int_{0}^{h}D_{h}\|{S(-s)(0,G(t_{0}+s))}\|_{\mathcal{E}_{0}}\,ds,$
where in the last inequality we used the homogeneous Strichartz estimate.
Observe that there exists $\beta>0$ such that there holds
$\|{S(-s)(u_{0},u_{1})}\|_{\mathcal{E}_{0}}\leq
e^{s\beta}\|{(u_{0},u_{1})}\|_{\mathcal{E}_{0}}.$
We deduce
$\|{P_{2}}\|_{L^{4}(0,h;L^{12})}\leq D_{h}e^{\beta
h}\|{G}\|_{L^{1}(t_{0},t_{0}+h,L^{2})}.$
Hence, by Theorem 5.2 we obtain $\|{P_{1}}\|_{L^{4}(0,h;L^{12})}\leq
CD_{h}e^{\beta h}\|{G}\|_{L^{1}(t_{0},t_{0}+h,L^{2})}$ for every $h>0$, and
the proof is complete.
∎
The following result will be useful in the bootstrap argument on the attractor
regularity.
###### Lemma 5.3.
Let $(u_{0},u_{1})\in\mathcal{E}_{s}$ and $G\in
L^{1}_{loc}([t_{0},\infty);\mathbb{H}^{s})$ for $s\in(0,1]$. Then the weak
solution of (5.1) has regularity $u\in C_{loc}([t_{0},\infty);\mathbb{H}^{s+1})$ and $u_{t}\in C_{loc}([t_{0},\infty);\mathbb{H}^{s})$. Moreover, the following estimates hold
$\displaystyle\|(u(t),u_{t}(t))\|_{\mathcal{E}_{s}}\leq C\left(\|{(u_{0},u_{1})}\|_{\mathcal{E}_{s}}e^{-\alpha(t-t_{0})}+\int_{t_{0}}^{t}e^{-\alpha(t-s)}\|{G(s)}\|_{\mathbb{H}^{s}}ds\right),$
$\displaystyle\|u\|_{L^{4}(t_{0},t_{0}+h;W^{s,12})}\leq C_{h}\left(\|{(u_{0},u_{1})}\|_{\mathcal{E}_{s}}+\|{G}\|_{L^{1}(t_{0},t_{0}+h;\mathbb{H}^{s})}\right).$
###### Proof.
The problem
$\displaystyle\begin{cases}&w_{tt}(t)+w_{t}(t)-\Delta
w(t)=(-\Delta)^{s/2}G(t)\;\text{for}\;(x,t)\in\Omega\times(t_{0},\infty),\\\
&w(t,x)=0\;\text{for}\;x\in\partial\Omega,\\\
&w(t_{0})=(-\Delta)^{s/2}u_{0},\\\
&w_{t}(t_{0})=(-\Delta)^{s/2}u_{1},\end{cases}$ (5.4)
has the unique weak solution $w\in C_{loc}([t_{0},\infty);H^{1}_{0})$ with the
derivative $w_{t}\in C_{loc}([t_{0},\infty);L^{2})$. It is enough to observe
that
$\widehat{w}_{k}(t)=\lambda_{k}^{\frac{s}{2}}\widehat{u}_{k}(t)\ \ \textrm{for
every}\ \ k\in\mathbb{N}.$
Testing weak solutions $w,u$ with $e_{k}v(t)$, where $v(t)\in
C_{0}^{\infty}([t_{0},t_{1}))$ and using the du Bois-Reymond lemma we get
systems
$\begin{cases}\widehat{u}_{k}^{\prime\prime}+\widehat{u}_{k}^{\prime}+\lambda_{k}\widehat{u}_{k}=(G(t),e_{k}),\\\
\widehat{u}_{k}(t_{0})=\widehat{u_{0}}_{k},\\\
\widehat{u}_{k}^{\prime}(t_{0})=\widehat{u_{1}}_{k},\end{cases}\ \ \
\begin{cases}\widehat{w}_{k}^{\prime\prime}+\widehat{w}_{k}^{\prime}+\lambda_{k}\widehat{w}_{k}=((-\Delta)^{\frac{{s}}{{2}}}G(t),e_{k})=\lambda_{k}^{\frac{s}{2}}(G(t),e_{k}),\\\
\widehat{w}_{k}(t_{0})=((-\Delta)^{\frac{{s}}{{2}}}u_{0},e_{k})=\lambda_{k}^{\frac{s}{2}}\widehat{u_{0}}_{k},\\\
\widehat{w}_{k}^{\prime}(t_{0})=((-\Delta)^{\frac{{s}}{{2}}}u_{1},e_{k})=\lambda_{k}^{\frac{s}{2}}\widehat{u_{1}}_{k}.\end{cases}$
The difference
$\overline{w}_{k}(t)=\widehat{w}_{k}(t)-\lambda_{k}^{\frac{s}{2}}\widehat{u}_{k}(t)$
solves the problem
$\begin{cases}\overline{w}_{k}^{\prime\prime}+\overline{w}_{k}^{\prime}+\lambda_{k}\overline{w}_{k}=0,\\\
\overline{w}_{k}(t_{0})=0,\\\ \overline{w}_{k}^{\prime}(t_{0})=0.\end{cases}$
So $\overline{w}_{k}(t)=0$ for every $t\in[t_{0},\infty)$. The assertion
follows from Proposition 5.1 and Lemma 5.1. ∎
### 5.2. Shatah–Struwe solutions and their properties
This section recollects the results from [Savostianov]. The non-autonomous
generalizations of these results are straightforward so we skip some of the
proofs which follow the lines of the corresponding results from [Savostianov].
The following remark follows from the Gagliardo–Nirenberg interpolation
inequality and the Sobolev embedding $H^{1}_{0}\hookrightarrow L^{6}$.
###### Remark 5.3.
If $u\in L^{4}(0,t;L^{12})$ and $u\in L^{\infty}(0,t;H^{1}_{0})$ then
$\|{u}\|_{L^{5}(0,t;L^{10})}\leq\|{u}\|_{L^{4}(0,t;L^{12})}^{\frac{4}{5}}\|{u}\|_{L^{\infty}(0,t;H^{1}_{0})}^{\frac{1}{5}}.$
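The exponents can be verified directly: interpolation of $L^{10}$ between $L^{12}$ and $L^{6}$ gives, for a.e. $t$,
$\|{u(t)}\|_{L^{10}}\leq\|{u(t)}\|_{L^{12}}^{\frac{4}{5}}\|{u(t)}\|_{L^{6}}^{\frac{1}{5}},\qquad\text{since}\qquad\frac{1}{10}=\frac{4}{5}\cdot\frac{1}{12}+\frac{1}{5}\cdot\frac{1}{6},$
and raising to the fifth power, integrating in time, and using $H^{1}_{0}\hookrightarrow L^{6}$ yields the stated bound (up to the embedding constant).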
We define the Shatah–Struwe solution of problem (3.1).
###### Definition 5.4.
Let $(u_{0},u_{1})\in\mathcal{E}_{0}$. A weak solution of problem (3.1), given
by Definition 4.1 is called a Shatah–Struwe solution if $u\in
L^{4}_{loc}([0,\infty);L^{12})$.
###### Proposition 5.5.
Shatah–Struwe solutions to problem (3.1) given by Definition 5.4 are unique
and the mapping
$\mathcal{E}_{0}\ni(u_{0},u_{1})\mapsto(u(t),u_{t}(t))\in\mathcal{E}_{0}$ is
continuous for every $t>0$.
###### Proof.
Let $u,v$ be Shatah–Struwe solutions to Problem (3.1) with the initial data
$(u_{0},u_{1})$ and $(v_{0},v_{1})$, respectively. Their difference $w:=u-v$
satisfies the following equation
$w_{tt}(t)+w_{t}(t)-\Delta
w(t)=f_{\varepsilon}(t,u(t))-f_{\varepsilon}(t,v(t))=w\frac{\partial
p_{\varepsilon}(t,\theta u+(1-\theta)v)}{\partial u}.$
Testing this equation with $w_{t}$ yields
$\frac{1}{2}\frac{d}{dt}\left(\|{w_{t}}\|^{2}+\|{\nabla
w}\|^{2}\right)+\|{w_{t}}\|^{2}=\left(w\frac{\partial p_{\varepsilon}(t,\theta
u+(1-\theta)v)}{\partial u},w_{t}\right).$
Assumption (H5) gives the inequality
$\frac{1}{2}\frac{d}{dt}\left(\|{w_{t}}\|^{2}+\|{\nabla w}\|^{2}\right)\leq
C\int_{\Omega}w(1+|u|^{4}+|v|^{4})w_{t}dx.$
Then, using the Hölder inequality with exponents
$6,3,2$ and the Sobolev embedding
$H_{0}^{1}\hookrightarrow L^{6}$, we obtain
$\frac{d}{dt}\left(\|{w_{t}}\|^{2}+\|{\nabla w}\|^{2}\right)\leq
C\left(\|{\nabla
w}\|^{2}+\|{w_{t}}\|^{2}\right)\left(1+\|{u}\|_{L^{12}}^{4}+\|{v}\|_{L^{12}}^{4}\right).$
Because $u,v$ are Shatah–Struwe solutions, i.e. $u,v\in
L^{4}_{loc}([0,\infty);L^{12})$, we may use the integral form of the
Grönwall inequality (if $\phi(t)\leq\phi(0)+\int_{0}^{t}a(s)\phi(s)ds$ with
$a\in L^{1}_{loc}$, then $\phi(t)\leq\phi(0)\exp(\int_{0}^{t}a(s)ds)$), which
gives us
$\|{\nabla w}\|^{2}+\|{w_{t}}\|^{2}\leq(\|{\nabla
w_{0}}\|^{2}+\|{w_{1}}\|^{2})\exp\left(C\left(t+\int_{0}^{t}\|{u}\|_{L^{12}}^{4}+\|{v}\|_{L^{12}}^{4}\,ds\right)\right),$
for $t\in[0,\infty)$, hence the assertion follows. ∎
###### Lemma 5.4.
Every weak solution of problem (3.1) is of Galerkin type if and only if it is
a Shatah–Struwe solution. Moreover, for every $t>0$ there exists a constant
$C_{t}>0$ such that every solution $u$ contained in the absorbing set $B_{0}$,
with an arbitrary $p_{\varepsilon}\in\mathcal{H}_{[0,1]}$ treated as the
right-hand side in (3.1), satisfies
$\|{u}\|_{L^{4}(0,t;L^{12})}\leq C_{t}.$
###### Proof.
Let $u$ be the solution of the Galerkin type with the initial data
$(u_{0},u_{1})\in\mathcal{E}_{0}$. From assumption (H5) we see that
$\|{p_{\varepsilon}(t,u)}\|\leq
C(1+\|{|u|^{5-\kappa}}\|)=C(1+\|{u}\|^{5-\kappa}_{L^{2(5-\kappa)}})\leq
C(1+\|{u}\|^{5-\kappa}_{L^{10}}).$
We assume that $t\in[0,1]$. From the Hölder inequality we obtain
$\displaystyle\int_{0}^{t}\|{p_{\varepsilon}(s,u)}\|ds\leq
Ct+C\int_{0}^{t}\|{u}\|^{5-\kappa}_{L^{10}}ds\leq
C\left(\left(\int_{0}^{t}\|{u}\|^{5}_{L^{10}}ds\right)^{\frac{5-\kappa}{5}}\left(\int_{0}^{t}1dt\right)^{\frac{\kappa}{5}}+t\right)$
$\displaystyle\ \
=C\left(\|{u}\|_{L^{5}(0,t;L^{10})}^{5-\kappa}t^{\frac{\kappa}{5}}+t\right)\leq
C\left(\|{u}\|_{L^{5}(0,t;L^{10})}^{5-\kappa}+1\right)t^{\frac{\kappa}{5}}$
$\displaystyle\ \ \leq
CR^{\frac{1}{5}}\left(\|{u}\|_{L^{4}(0,t;L^{12})}^{4-\frac{4\kappa}{5}}+1\right)t^{\frac{\kappa}{5}}$
where $R$ is the bound of the $L^{\infty}(0,t;H^{1}_{0})$ norm of $u$. We
split $u$ as the sum $u=v+w$ where $v,w$ solve the following problems
$\begin{cases}v_{tt}+v_{t}-\Delta v=0,\\\
v(t,x)=0\;\text{for}\;x\in\partial\Omega,\\\ v(0,x)=u_{0}(x),\\\
v_{t}(0,x)=u_{1}(x),\end{cases}\qquad\qquad\begin{cases}w_{tt}+w_{t}-\Delta
w=p_{\epsilon}(t,u),\\\ w(t,x)=0\;\text{for}\;x\in\partial\Omega,\\\
w(0,x)=0,\\\ w_{t}(0,x)=0.\end{cases}$
From the Strichartz estimate in Lemma 5.1 we deduce
$\|{v}\|_{L^{4}(0,t;L^{12})}\leq C_{1}\|{(u_{0},u_{1})}\|_{\mathcal{E}_{0}},$
and
$\|{w}\|_{L^{4}(0,t;L^{12})}\leq
CR^{\frac{1}{5}}\left(\|{w}\|_{L^{4}(0,t;L^{12})}^{4-\frac{4\kappa}{5}}+\left(C_{1}\|{(u_{0},u_{1})}\|\right)^{4-\frac{4\kappa}{5}}+1\right)t^{\frac{\kappa}{5}}.$
We define the function $Y(t)=\|{w}\|_{L^{4}(0,t;L^{12})}$ for $t\in[0,1]$. A
priori we do not know that this function is finite, so to make the proof
rigorous one should argue on the level of Galerkin approximations, cf.
[Savostianov]; we continue the proof formally. The function $Y$ is continuous
with $Y(0)=0$ and there holds
$Y(t)\leq
CR^{\frac{1}{5}}(Y(t)^{4-\frac{4\kappa}{5}}+(C_{1}\|{(u_{0},u_{1})}\|)^{4-\frac{4\kappa}{5}}+1)t^{\frac{\kappa}{5}}.$
We define
$t^{\frac{\kappa}{5}}_{\max}=\min\left\\{\frac{1}{2CR^{\frac{1}{5}}((C_{1}R)^{4-\frac{4\kappa}{5}}+2)},1\right\\},\text{
where }R\geq\|{(u_{0},u_{1})}\|_{\mathcal{E}_{0}}.$
Now we will use the continuation method to prove that the estimate $Y(t)\leq
1$ holds on the interval $[0,t_{\max}]$. The argument follows the scheme of
the proof of [Tao, Proposition 1.21]. Defining the logical predicates
$H(t)=(Y(t)\leq 1)$ and $C(t)=(Y(t)\leq\frac{1}{2})$, we observe that the
following facts hold:
* •
$C(0)$ is true.
* •
If $C(s_{0})$ for some $s_{0}$ is true then $H(s)$ is true in some
neighbourhood of $s_{0}$.
* •
If $s_{n}\to s_{0}$ and $C(s_{n})$ holds for every $n$ then $C(s_{0})$ is
true.
* •
$H(t)$ implies $C(t)$ for $t\in[0,t_{\textrm{max}}]$; indeed, using $Y(t)\leq
1$ and $\|{(u_{0},u_{1})}\|_{\mathcal{E}_{0}}\leq R$,
$Y(t)\leq
CR^{\frac{1}{5}}((C_{1}\|{(u_{0},u_{1})}\|)^{4-\frac{4\kappa}{5}}+1+Y(t)^{4-\frac{4\kappa}{5}})t^{\frac{\kappa}{5}}\leq
CR^{\frac{1}{5}}((C_{1}R)^{4-\frac{4\kappa}{5}}+2)t^{\frac{\kappa}{5}}_{\max}\leq\frac{1}{2}.$
The continuation argument implies that $C(t)$ holds for $t\in[0,t_{\max}]$.
From the triangle inequality we conclude that
$\|{u}\|_{L^{4}(0,t_{\max};L^{12})}\leq C_{1}\|{(u_{0},u_{1})}\|+1.$
Observe that $t_{\max}$ and $C_{1}$ are independent of the choice of
$p_{\varepsilon}\in\mathcal{H}_{[0,1]}$. Because all trajectories are bounded,
cf. Proposition 4.4, by picking
$R:=\max_{t\in[0,\infty)}\|{(u(t),u_{t}(t))}\|_{\mathcal{E}_{0}}$ we deduce
that $\|{u}\|_{L^{4}(0,t;L^{12})}$ is bounded for every $t>0$. Moreover, if
$(u(t),u_{t}(t))\in B_{0}$ for every $t\geq 0$, then with $R:=\sup_{(u,v)\in
B_{0}}\|{(u,v)}\|_{\mathcal{E}_{0}}$ we get the bound
$\|{u}\|_{L^{4}(0,t;L^{12})}\leq C_{t}$ with $C_{t}$ independent of
$p_{\varepsilon}$. ∎
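The quantitative core of the continuation argument can be illustrated numerically. In the sketch below (our illustration; the constants $a,b,p,q$ are placeholders standing in for $CR^{1/5}$, the data term, $4-\frac{4\kappa}{5}$, and $\frac{\kappa}{5}$), the smallest fixed point of $Y=a(Y^{p}+b)t^{q}$ indeed stays below $\frac{1}{2}$ up to the analogue of $t_{\max}$.

```python
# Illustrative sketch of the continuation bound with placeholder constants:
# the a priori inequality Y <= a*(Y^p + b)*t^q with Y(0) = 0 keeps the
# continuous branch of solutions below 1/2 whenever a*(1 + b)*t^q <= 1/2.
import numpy as np
from scipy.optimize import brentq

a, b, p, q = 1.0, 1.0, 3.2, 0.2                 # placeholder constants
t_max = (0.5 / (a * (1.0 + b))) ** (1.0 / q)    # analogue of t_max

for t in np.linspace(1e-6, t_max, 5):
    # smallest fixed point of Y = a*(Y^p + b)*t^q on [0, 1/2]
    root = brentq(lambda Y: Y - a * (Y**p + b) * t**q, 0.0, 0.5)
    print(f"t = {t:.6f}, smallest fixed point = {root:.6f}")  # always <= 0.5
```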
###### Remark 5.6.
As a consequence of Proposition 5.5 and Lemma 5.4, for every
$(u_{0},u_{1})\in\mathcal{E}_{0}$ the weak solution of Galerkin type of
problem (3.1) is unique.
###### Lemma 5.5.
If the weak solution $(u,u_{t})$ of problem (3.1) is of Galerkin type then for
every $T>0$ it belongs to the space $C([0,T];\mathcal{E}_{0})$.
###### Proof.
The proof follows an argument of Proposition 3.3 from [Savostianov]. The key
fact is that Galerkin (or equivalently, Shatah–Struwe) solutions satisfy the
energy equation. Let $t_{n}\to t$ and let $T>\sup_{n\in\mathbb{N}}\\{t_{n}\\}$.
Clearly, $(u,u_{t})\in C_{w}([0,T];\mathcal{E}_{0})$ and hence
$(u(t_{n}),u_{t}(t_{n}))\to(u(t),u_{t}(t))$ weakly in $\mathcal{E}_{0}$. To
deduce that this convergence is strong we need to show that
$\|(u(t_{n}),u_{t}(t_{n}))\|_{\mathcal{E}_{0}}\to\|(u(t),u_{t}(t))\|_{\mathcal{E}_{0}}$.
To this end we will use the energy equation
$\|{(u(t),u_{t}(t))}\|_{\mathcal{E}_{0}}^{2}-\|{(u(t_{n}),u_{t}(t_{n}))}\|_{\mathcal{E}_{0}}^{2}=2\int_{t_{n}}^{t}(p(s,u(s)),u_{t})-\|{u_{t}(s)}\|^{2}\,ds.$
Then
$\left|\|{(u(t),u_{t}(t))}\|_{\mathcal{E}_{0}}^{2}-\|{(u(t_{n}),u_{t}(t_{n}))}\|_{\mathcal{E}_{0}}^{2}\right|\leq
CR\left((R+1)|t-t_{n}|+\|{u}\|_{L^{5}{(t_{n},t;L^{10})}}\right)$
where $R$ is a bound on $\|u_{t}\|$. The right-hand side tends to zero as
$t_{n}\to t$, which proves the assertion. ∎
### 5.3. Non-autonomous dynamical system.
We will denote by $(u(t),u_{t}(t))=\varphi_{\varepsilon}(t,p)(u_{0},u_{1})$
the map which gives the solution of (3.1) with
$p\in\mathcal{H}(f_{\varepsilon})$ as the right-hand side and the initial
conditions $u(0)=u_{0}$, $u_{t}(0)=u_{1}$.
###### Proposition 5.7.
The mapping
$\varphi_{\varepsilon}:\mathbb{R}^{+}\times\mathcal{H}(f_{\varepsilon})\to
C(\mathcal{E}_{0})$ together with the time translation
$\theta_{t}p_{\varepsilon}=p_{\varepsilon}(\cdot+t)$ forms an NDS.
###### Proof.
The property $\varphi(0,p)=\textrm{Id}_{\mathcal{E}_{0}}$ and the cocycle
property are obvious from the definitions of $\varphi_{\varepsilon}$ and
$\theta_{t}$. Let
$(u^{n}_{0},u^{n}_{1})\to(u_{0},u_{1})$ in $\mathcal{E}_{0}$,
$p_{\varepsilon}^{n}\to p_{\varepsilon}$ in the metric of $\Sigma$, $t_{n}\to
t$ and let $\\{u^{n}\\}_{n=1}^{\infty}$ and $u$ be the Galerkin type weak
solutions of the problems governed by the equations
$\displaystyle u^{n}_{tt}+u^{n}_{t}-\Delta
u^{n}=p^{n}_{\varepsilon}(t,u^{n}),$ (5.5) $\displaystyle u_{tt}+u_{t}-\Delta
u=p_{\varepsilon}(t,u),$ (5.6)
with the boundary data $u^{n}=u=0$ on $\partial\Omega$ and initial data
$(u^{n}(0),u^{n}_{t}(0))=(u^{n}_{0},u^{n}_{1})\in\mathcal{E}_{0}$ and
$(u(0),u_{t}(0))=(u_{0},u_{1})\in\mathcal{E}_{0}$. Choose $T>0$ such that
$T>\sup_{n\in\mathbb{N}}\\{t_{n}\\}$. There hold the bounds
$\displaystyle\|\nabla u^{n}(t)\|_{L^{2}}\leq C,\ \ \|\nabla
u(t)\|_{L^{2}}\leq C,$ $\displaystyle\|u^{n}_{t}(t)\|_{L^{2}}\leq C,\ \
\|u_{t}(t)\|_{L^{2}}\leq C,$ $\displaystyle\|u^{n}_{tt}(t)\|_{H^{-1}}\leq C,\
\ \|u_{tt}(t)\|_{H^{-1}}\leq C.$
for $t\in[0,T]$ with a constant $C>0$. Moreover there hold the bounds
$\|u^{n}\|_{L^{4}(0,T;L^{12})}\leq C,\|u\|_{L^{4}(0,T;L^{12})}\leq C.$
This means that, for a subsequence
$\displaystyle u^{n}\to v\ \ \textrm{weakly-*}\ \ \textrm{in}\ \
L^{\infty}(0,T;H^{1}_{0}),$ $\displaystyle u^{n}_{t}\to v_{t}\ \
\textrm{weakly-*}\ \ \textrm{in}\ \ L^{\infty}(0,T;L^{2}),$ $\displaystyle
u^{n}_{tt}\to v_{tt}\ \ \textrm{weakly-*}\ \ \textrm{in}\ \
L^{\infty}(0,T;H^{-1}),$
for a certain function $v\in L^{\infty}(0,T;H^{1}_{0})$ with $v_{t}\in
L^{\infty}(0,T;L^{2})$ and $v_{tt}\in L^{\infty}(0,T;H^{-1})$. By Lemma 5.5
$u^{n},u\in C([0,T];\mathcal{E}_{0})$. Moreover $v\in C([0,T];L^{2})\cap
C_{w}([0,T];H^{1}_{0})$ and $v_{t}\in C([0,T];H^{-1})\cap
C_{w}([0,T];L^{2})$, cf. [Temam, Lemma 1.4, page 263]. We will show that $v=u$
for $t\in[0,T]$. Note that for every $w\in L^{2}$
$(u^{n}(0),w)=(u^{n}(t),w)-\int_{0}^{t}(u^{n}_{t}(s),w)\,ds.$
Integrating with respect to $t$ between $0$ and $T$ and exchanging the order
of integration we obtain
$T(u^{n}_{0},w)=\int_{0}^{T}(u^{n}(t),w)\,dt-\int_{0}^{T}(u^{n}_{t}(s),(T-s)w)\,ds.$
Passing to the limit we obtain
$T(u_{0},w)=\int_{0}^{T}(v(t),w)\,dt-\int_{0}^{T}(v_{t}(s),(T-s)w)\,ds=T(v(0),w),$
whence $v(0)=u_{0}$. It is straightforward to see that $u^{n}(t)\to v(t)$
weakly in $H^{1}_{0}$ for every $t\in[0,T]$. Similar reasoning for $u^{n}_{t}$
allows us to deduce that $v_{t}(0)=u_{1}$ and $u^{n}_{t}(t)\to v_{t}(t)$
weakly in $L^{2}$ for every $t\in[0,T]$. Now we have to show that $v$
satisfies (5.6). Indeed, the weak form of (5.5) reads
$\displaystyle\int_{0}^{T}\langle u^{n}_{tt}(t),w(t)\rangle_{H^{-1}\times
H^{1}_{0}{}}\,dt+\int_{0}^{T}(u^{n}_{t}(t),w(t))\,dt+\int_{0}^{T}(\nabla
u^{n}(t),\nabla w(t))\,dt$ $\displaystyle\ \
=\int_{0}^{T}\int_{\Omega}f^{n}_{\epsilon}(u^{n}(x,t),t)w(t)\,dx\,dt,$
for every $w\in L^{2}(0,T;H^{1}_{0})$. It suffices only to pass to the limit
on the right-hand side. Fix $t\in[0,T]$ and $w\in H^{1}_{0}$. There holds
$u^{n}(\cdot,t)\to v(\cdot,t)$ strongly in $L^{6-\frac{6}{5}\kappa}$ and, for
a subsequence, $u^{n}(x,t)\to v(x,t)$ for a.e. $x\in\Omega$ and
$|u^{n}(x,t)|\leq g(x)$ with $g\in L^{6-\frac{6}{5}\kappa}$, where $g$ can
also depend on $t$. Hence
$f^{n}_{\epsilon}(u^{n}(x,t),t)w(x)\to f_{\epsilon}(v(x,t),t)w(x)\ \
\textrm{a.e.}\ \ x\in\Omega,$
moreover
$|f^{n}_{\epsilon}(u^{n}(x,t),t)w(x)|\leq
C(1+|u^{n}(x,t)|^{5-\kappa})|w(x)|\leq|w(x)|^{6}+C(1+g(x)^{6-\frac{6}{5}\kappa})\in
L^{1}.$
This means that
$\lim_{n\to\infty}\int_{\Omega}f^{n}_{\epsilon}(u^{n}(x,t),t)w(x)\,dx=\int_{\Omega}f_{\epsilon}(v(x,t),t)w(x)\,dx.$
Now let $w\in L^{2}(0,T;H^{1}_{0})$. There holds
$\displaystyle\left|\int_{\Omega}f^{n}_{\epsilon}(u^{n}(x,t),t)w(x,t)\,dx\right|\leq
C\int_{\Omega}(1+|u^{n}(x,t)|^{5})|w(x,t)|\,dx\leq
C\|w(t)\|_{L^{6}}(1+\|u^{n}(t)\|_{L^{6}}^{5})$ $\displaystyle\ \ \leq
C\|w(t)\|_{H^{1}_{0}}(1+\|u^{n}(t)\|_{H^{1}_{0}}^{5})\leq
C\|w(t)\|_{H^{1}_{0}}\in L^{1}(0,T+1),$
whence we can pass to the limit in the nonlinear term. The fact that the
$L^{4}(0,T;L^{12})$ estimate on $u^{n}$ is independent of $n$ implies that $v$
satisfies the same estimate which ends the proof that $u=v$.
We must show that
$\|{(u^{n}(t_{n}),u^{n}_{t}(t_{n}))-(u(t),u_{t}(t))}\|_{\mathcal{E}_{0}}\to 0.$
We already know that $u^{n}(t)\to u(t)$ weakly in $H^{1}_{0}$ and
$u^{n}_{t}(t)\to u_{t}(t)$ weakly in $L^{2}$ for every $t\in[0,T]$. We will
first prove that these convergences are strong. To this end let
$w^{n}=u^{n}-u$. There holds
$w^{n}_{tt}+w^{n}_{t}-\Delta
w^{n}=f^{n}_{\epsilon}(u^{n},t)-f_{\epsilon}(u,t).$
Testing this equation with $w^{n}_{t}$ we obtain
$\frac{1}{2}\frac{d}{dt}\|{(w^{n}(t),w^{n}_{t}(t))}\|_{\mathcal{E}_{0}}^{2}+\|{w^{n}_{t}(t)}\|^{2}=\int_{\Omega}(f^{n}_{\epsilon}(t,u^{n})-f_{\epsilon}(t,u))w^{n}_{t}(t)\,dx.$
Simple computations lead us to
$\frac{d}{dt}\|{(w^{n}(t),w^{n}_{t}(t))}\|_{\mathcal{E}_{0}}^{2}\leq\int_{\Omega}(f^{n}_{\epsilon}(t,u^{n})-f^{n}_{\epsilon}(t,u))^{2}\,dx+\int_{\Omega}(f^{n}_{\epsilon}(t,u)-f_{\epsilon}(t,u))^{2}\,dx.$
After integration from $0$ to $t$ we obtain
$\displaystyle\|{(w^{n}(t),w^{n}_{t}(t))}\|_{\mathcal{E}_{0}}^{2}\leq\|{(u^{n}_{0}-u_{0},u^{n}_{1}-u_{1})}\|_{\mathcal{E}_{0}}^{2}$
$\displaystyle\qquad+\int_{0}^{T}\int_{\Omega}(f^{n}_{\epsilon}(u^{n},s)-f^{n}_{\epsilon}(u,s))^{2}\,dx\,ds+\int_{0}^{T}\int_{\Omega}(f^{n}_{\epsilon}(u,s)-f_{\epsilon}(u,s))^{2}\,dx\,ds.$
We must pass to the limit in two terms. To deal with the first term observe
that,
$\displaystyle\int_{0}^{T}\int_{\Omega}(f^{n}_{\epsilon}(u^{n},s)-f^{n}_{\epsilon}(u,s))^{2}\,dx\,ds\leq\int_{0}^{T}\int_{\Omega}(C|u^{n}(s)-u(s)|(1+|u^{n}(s)|^{4-\kappa}+|u(s)|^{4-\kappa}))^{2}\,dx\,ds$
$\displaystyle\ \ \leq
C\int_{0}^{T}\int_{\Omega}|u^{n}(s)-u(s)|^{2}(1+|u^{n}(s)|^{8-2\kappa}+|u(s)|^{8-2\kappa})\,dx\,ds$
$\displaystyle\ \ \leq
C\left(\|u^{n}-u\|^{2}_{L^{2}(0,T;L^{2})}+\int_{0}^{T}\,\|u_{n}(s)-u(s)\|_{L^{\frac{12}{2+\kappa}}}^{2}\left(\|u^{n}(s)\|_{L^{12}}^{\frac{12}{8-2\kappa}}+\|u(s)\|_{L^{12}}^{\frac{12}{8-2\kappa}}\right)ds\right)$
$\displaystyle\ \ \leq
C\left(\|u^{n}-u\|^{2}_{L^{2}(0,T;L^{2})}+\left(\int_{0}^{T}\,\|u_{n}(s)-u(s)\|_{L^{\frac{12}{2+\kappa}}}^{\frac{8-2\kappa}{1-\kappa}}\,dt\right)^{\frac{1-\kappa}{4-\kappa}}\left(\|u^{n}\|_{L^{4}(0,T;L^{12})}^{\frac{12}{4-\kappa}}+\|u\|_{L^{4}(0,T;L^{12})}^{\frac{12}{4-\kappa}}\right)\right)$
$\displaystyle\ \ \leq
C\left(\|u^{n}-u\|^{2}_{L^{2}(0,T;L^{2})}+\|u^{n}-u\|_{L^{\frac{8-2\kappa}{1-\kappa}}(0,T;L^{\frac{12}{2+\kappa}})}^{\frac{8-2\kappa}{4-\kappa}}\right),$
(5.7)
and the assertion follows from the compact embedding $H^{1}_{0}\subset
L^{\frac{12}{2+\kappa}}$ by the Aubin–Lions lemma. To deal with the second
term note that $f^{n}_{\epsilon}(u,t)\to f_{\epsilon}(u,t)$ for almost every
$(x,t)\in\Omega\times(0,T)$. Moreover
$(f^{n}_{\epsilon}(u,t)-f_{\epsilon}(u,t))^{2}\leq C,$
and the Lebesgue dominated convergence theorem implies the assertion.
Now, the triangle inequality implies
$\displaystyle\|\nabla u^{n}(t_{n})-\nabla
u(t)\|_{L^{2}}^{2}+\|u^{n}_{t}(t_{n})-u_{t}(t)\|_{L^{2}}^{2}$ $\displaystyle\
\ \leq 2\left(\|\nabla u^{n}(t_{n})-\nabla
u(t_{n})\|_{L^{2}}^{2}+\|u^{n}_{t}(t_{n})-u_{t}(t_{n})\|_{L^{2}}^{2}\right)$
$\displaystyle\qquad+2\left(\|\nabla u(t_{n})-\nabla
u(t)\|_{L^{2}}^{2}+\|u_{t}(t_{n})-u_{t}(t)\|_{L^{2}}^{2}\right),$
where both terms tend to zero: the first one by (5.7) and the second one by
Lemma 5.5. The proof is complete. ∎
## 6\. Existence and regularity of non-autonomous attractors.
We start from a result which states that the solution can be split into the
sum of two functions: one that decays to zero, and another one which is
smoother than the initial data.
###### Lemma 6.1.
Let $u$ be the Shatah–Struwe solution of (3.1) such that $u(t)\in B_{0}$ for
every $t\geq 0$, where $B_{0}$ is the absorbing set from Proposition 4.4.
There exists an increasing sequence $\alpha_{0},\ldots,\alpha_{k}$ with
$\alpha_{0}=0$, $\alpha_{k}=1$ such that if
$\|{(u(t),u_{t}(t))}\|_{\mathcal{E}_{\alpha_{i}}}\leq R$ for every
$t\in[0,\infty)$, then $u$ can be represented as the sum of two functions
$v,w$ satisfying
$\displaystyle u(t)=v(t)+w(t),\ \
\|{(v(t),v_{t}(t))}\|_{\mathcal{E}_{\alpha_{i}}}\leq\|{(u_{0},u_{1})}\|_{\mathcal{E}_{\alpha_{i}}}Ce^{-t\alpha}$
$\displaystyle\qquad\textrm{and}\ \
\|{(w(t),w_{t}(t))}\|_{\mathcal{E}_{\alpha_{i+1}}}\leq C_{R}\ \ \textrm{for}\
\ i\in\\{0,\ldots,k-1\\}.$
Moreover, the constants $C,C_{R},\alpha$ are the same for every
$p(t,u)\in\mathcal{H}_{[0,1]}$ treated as the right-hand side in equation
(3.1).
###### Proof.
From the Gagliardo–Nirenberg interpolation inequality we see that
$\|{p_{\varepsilon}(t,u)}\|_{H^{\alpha}}\leq C\|{\nabla
p_{\varepsilon}(t,u)}\|_{L^{s}}^{\theta}\|{p_{\varepsilon}(t,u)}\|_{L^{q}}^{1-\theta}+C\|{p_{\varepsilon}(t,u)}\|_{L^{q}}$
with $\alpha\leq\theta\leq 1$,
$\frac{1}{2}=\frac{\alpha}{3}+\left(\frac{1}{s}-\frac{1}{3}\right)\theta+\frac{1-\theta}{q}$
and $s<2$. From the Hölder inequality we obtain
$\|{p_{\varepsilon}(t,u)}\|_{H^{\alpha}}\leq
C\left(\int_{\Omega}\left|\frac{\partial{p_{\varepsilon}}}{\partial
u}(t,u)\right|^{sp_{*}}dx\right)^{\frac{\theta}{sp_{*}}}\left(\int_{\Omega}|\nabla
u|^{sp}dx\right)^{\frac{\theta}{sp}}\|{p_{\varepsilon}(t,u)}\|_{L^{q}}^{1-\theta}+C\|{p_{\varepsilon}(t,u)}\|_{L^{q}}.$
From assumption (H5), the Cauchy inequality, and the fact that the solution
$u$ is included in the absorbing set, taking
$sp=2,\;sp_{*}=3,\;\theta=\frac{1}{2}$ we get the inequality
$\displaystyle\|{p_{\varepsilon}(t,u)}\|_{H^{\alpha}}\leq
C(R)\left(\left(\int_{\Omega}|u|^{12}dx\right)^{\frac{1}{3}}+\left(\int_{\Omega}|u|^{(5-\kappa)q}dx\right)^{\frac{1}{q}}+1\right)\;$
$\displaystyle\qquad\qquad\qquad\qquad\qquad\text{with}\;\alpha=\frac{3}{2}\left(\frac{1}{2}-\frac{1}{q}\right),\;\alpha<\frac{1}{2}$
(6.1)
Now we will inductively describe the sequence
$\alpha_{1},\ldots,\alpha_{k-1}$, starting with $\alpha_{1}$. If we set
$\frac{5-\kappa}{10}\leq\frac{1}{q}<\frac{1}{2}$ in inequality (6.1), we
obtain
$\int_{t_{0}}^{t_{0}+h}\|{p_{\varepsilon}(t,u)}\|_{H^{\alpha_{1}}}dt\leq
C(R)\left(\|{u}\|_{L^{4}(t_{0},t_{0}+h;L^{12})}^{4}+\|{u}\|_{L^{5}(t_{0},t_{0}+h;L^{10})}^{5}+h\right)\leq
C(h,R).$
We observe that $\alpha_{1}\in(0,\delta)$ for some $\delta>0$. Assume that for
$i\in\\{1,\dots,k-1\\}$
$\|{(u(t),u_{t}(t))}\|_{\mathcal{E}_{\alpha_{i}}}\leq
R\quad\text{and}\quad\int_{t_{0}}^{t_{0}+h}\|{p_{\varepsilon}(t,u)}\|_{H^{\alpha_{i}}}dt\leq
C(h,R)\ \quad\text{for $t,t_{0}\in[0,\infty)$}.$
From Lemma 5.3 we see that
$u\in
L^{4}(t_{0},t_{0}+h;W^{\alpha_{i},12}),\qquad\|{u}\|_{L^{4}(t_{0},t_{0}+h;W^{\alpha_{i},12})}\leq
C(h,R).$
By the Sobolev embedding $W^{\alpha_{i},10}\hookrightarrow
L^{\frac{30}{3-10\alpha_{i}}}$ and by interpolation we see that
$\|{u}\|_{L^{5}\left(t_{0},t_{0}+h;L^{\frac{30}{3-10\alpha_{i}}}\right)}\leq\|{u}\|_{L^{5}(t_{0},t_{0}+h;W^{\alpha_{i},10})}\leq\|{u}\|_{L^{4}(t_{0},t_{0}+h;W^{\alpha_{i},12})}^{\frac{4}{5}}\|{u}\|_{L^{\infty}(t_{0},t_{0}+h;H^{\alpha_{i}+1})}^{\frac{1}{5}}\leq
C(h,R).$
Using (6.1) with $q=\frac{6}{3-10\alpha_{i}}$ we obtain
$\int_{t_{0}}^{t_{0}+h}\|{p_{\varepsilon}(t,u)}\|_{H^{\alpha_{i+1}}}dt\leq
C(R)\left(\|{u}\|_{L^{4}(t_{0},t_{0}+h;L^{12})}^{4}+\|{u}\|_{L^{5}\left(t_{0},t_{0}+h;L^{\frac{30}{3-10\alpha_{i}}}\right)}^{5}+h\right)\leq
C(h,R),$
with $\alpha_{i+1}=\frac{5}{2}\alpha_{i}$. From this recurrence and the fact
that $\alpha_{1}\in(0,\delta)$ we can find a sequence
$\alpha_{1},\ldots,\alpha_{k-1}$ such that $\alpha_{k-1}=\frac{9}{20}$; it
suffices to take $\alpha_{1}=\frac{9}{20}\left(\frac{2}{5}\right)^{k-2}$ with
$k$ large enough that $\alpha_{1}<\delta$. Let us observe that in the case
$\alpha_{k-1}=\frac{9}{20}$ from the Sobolev embedding we get the bounds
$\|{u}\|_{L^{60}}\leq C(R)\quad\text{and}\quad\|{\nabla
u}\|_{L^{\frac{60}{21}}}\leq C(R).$
Hence,
$\|{\nabla p_{\varepsilon}(t,u)}\|_{L^{2}}\leq
C\left(1+\int_{\Omega}|u|^{8}|\nabla u|^{2}\,dx\right)\leq
C\left(1+\|{u}\|_{L^{28}}^{8}\|{\nabla u}\|_{L^{\frac{60}{21}}}^{2}\right)\leq
C(R),$
and consequently there holds
$\int_{t_{0}}^{t_{0}+h}\|{\nabla p_{\varepsilon}(t,u)}\|_{L^{2}}\,dt\leq
C(h,R).$
Let us decompose $u(t)=w(t)+v(t)$ where $w,v$ satisfy the problems
$\begin{cases}v_{tt}+v_{t}-\Delta v=0,\\\
v(t,x)=0\;\text{for}\;x\in\partial\Omega,\\\ v(0,x)=u_{0}(x),\\\
v_{t}(0,x)=u_{1}(x),\end{cases}\qquad\qquad\begin{cases}w_{tt}+w_{t}-\Delta
w=p_{\varepsilon}(t,v+w),\\\ w(t,x)=0\;\text{for}\;x\in\partial\Omega,\\\
w(0,x)=0,\\\ w_{t}(0,x)=0.\end{cases}$
From Lemma 5.3 we get that
$\|{(v(t),v_{t}(t))}\|_{\mathcal{E}_{\alpha_{i}}}\leq
C\|{(u_{0},u_{1})}\|_{\mathcal{E}_{\alpha_{i}}}e^{-\alpha t}$ and
$\|{(w(t+h),w_{t}(t+h))}\|_{\mathcal{E}_{\alpha_{i+1}}}\leq Ce^{-\beta
h}\|{(w(t),w_{t}(t))}\|_{\mathcal{E}_{\alpha_{i+1}}}+C(h,R),$
for every $t\geq 0$ and $h>0$. We set $h$ such that $Ce^{-\beta
h}\leq\frac{1}{2}$. Then we obtain that
$\|{(w(t),w_{t}(t))}\|_{\mathcal{E}_{\alpha_{i+1}}}\leq 2C(h,R)=C_{R}$ for
$i\in\\{0,\ldots,k-1\\}$. We stress that all constants are independent of
$p_{\varepsilon}(t,u)\in\mathcal{H}_{[0,1]}$. ∎
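For concreteness, the bootstrap sequence can be written down explicitly; the following sketch (our illustration, with a placeholder $k$) lists one admissible geometric choice.

```python
# One admissible exponent sequence for the bootstrap in Lemma 6.1 (our
# illustration; the lemma only needs alpha_1 < delta): ratio 5/2, ending at
# alpha_{k-1} = 9/20, followed by the final jump alpha_k = 1.
k = 6
alphas = [0.0, 0.45 * (2.0 / 5.0) ** (k - 2)]   # alpha_0 = 0, alpha_1
for _ in range(k - 2):
    alphas.append(2.5 * alphas[-1])             # alpha_{i+1} = (5/2) alpha_i
alphas.append(1.0)                              # alpha_k = 1
print(alphas)  # [0.0, 0.01152, 0.0288, 0.072, 0.18, 0.45, 1.0] up to rounding
```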
The bounds obtained in the previous lemma allow us to deduce the asymptotic
compactness of the considered non-autonomous dynamical system.
###### Proposition 6.1.
For every $\varepsilon\in[0,1]$, the non-autonomous dynamical system
$(\varphi_{\varepsilon},\theta)$ is uniformly asymptotically compact.
###### Proof.
Let $B_{0}$ be an absorbing set from Proposition 4.4. Then for every bounded
set $B\subset\mathcal{E}_{0}$ there exists $t_{0}$ such that for every $t\geq
t_{0}$ and every $p_{\varepsilon}\in\mathcal{H}(f_{\varepsilon})$ there holds
$\varphi_{\varepsilon}(t,p_{\varepsilon})B\subset B_{0}$. From the previous
lemma there exists a set $B_{\alpha_{1}}\subset\mathcal{E}_{\alpha_{1}}$ which
is compact in $\mathcal{E}_{0}$ such that
$\lim_{t\to\infty}\sup_{p_{\varepsilon}\in\mathcal{H}(f_{\varepsilon})}\text{dist}(\varphi_{\varepsilon}(t,p_{\varepsilon})B,B_{\alpha_{1}})=0,$
which shows that the non-autonomous dynamical system
$(\varphi_{\epsilon},\theta)$ is uniformly asymptotically compact. ∎
We are now in a position to formulate the main result of this section, the
theorem on non-autonomous attractors.
###### Theorem 6.2.
For every $\varepsilon\in[0,1]$ problem (3.1) has a uniform attractor
$\mathcal{A}_{\varepsilon}$, a cocycle attractor
$\\{\mathcal{A}_{\varepsilon}(p)\\}_{p\in\mathcal{H}(f_{\epsilon})}$, and
pullback attractors, all bounded in $\mathcal{E}_{1}$ uniformly with respect
to $\varepsilon$. Moreover there holds
$\mathcal{A}_{\varepsilon}=\bigcup_{p\in\mathcal{H}(f_{\epsilon})}\mathcal{A}_{\varepsilon}(p).$
###### Proof.
Because $(\varphi_{\epsilon},\theta)$ is asymptotically compact, from Theorem
8.1 we get the existence of the uniform and cocycle attractors and the
relation between them. For $(u_{0},u_{1})\in\mathcal{A}_{\varepsilon}$, by
Theorem 8.2 there exists a global solution $u(t)$ with
$(u(0),u_{t}(0))=(u_{0},u_{1})$.
If $\mathcal{A}_{\varepsilon}$ is bounded in $\mathcal{E}_{\alpha_{i}}$ then
from Lemma 6.1 we can split this solution into the sum
$u(t)=v^{n}(t)+w^{n}(t)$ for $t\in[-n,\infty)$ such that
$\|{(v^{n}(t),v^{n}_{t}(t))}\|_{\mathcal{E}_{\alpha_{i}}}\leq
Ce^{-(t+n)\alpha}$ and
$\|{(w^{n}(t),w^{n}_{t}(t))}\|_{\mathcal{E}_{\alpha_{i+1}}}\leq C$.
Then, for a subsequence, there holds $w^{n}(0)\to w$ and $v^{n}(0)\to 0$ as
$n\to\infty$ for some $w\in\mathcal{E}_{\alpha_{i+1}}$, so $w=(u_{0},u_{1})$.
Because $\mathcal{A}_{\varepsilon}$ is bounded in $\mathcal{E}_{0}$, in a
finite number of steps we obtain the boundedness of the uniform attractors in
$\mathcal{E}_{1}$. Moreover, due to Proposition 4.4 and Lemma 6.1, the
$\mathcal{E}_{1}$ bound of these attractors does not depend on $\varepsilon$.
∎
## 7\. Upper semicontinuous convergence of attractors.
The paper is concluded with the result on upper-semicontinuous convergence of
attractors.
###### Theorem 7.1.
The family of uniform attractors
$\\{\mathcal{A}_{\varepsilon}\\}_{\varepsilon\in[0,1]}$ for the considered
non-autonomous dynamical system $(\varphi_{\varepsilon},\theta_{t})$ is upper
semi-continuous in the Kuratowski and Hausdorff senses in $\mathcal{E}_{0}$ as
$\varepsilon\to 0$.
###### Proof.
Let $(u_{0}^{n},u_{1}^{n})\in\mathcal{A}_{\varepsilon_{n}}$ be such that
$(u_{0}^{n},u_{1}^{n})\to(u_{0},u_{1})$ in $\mathcal{E}_{0}$. There exist
$p_{\varepsilon_{n}}\in\mathcal{H}_{[0,1]}$ such that there exists a global
solution $u^{n}(t,x)$ to the problem
$\begin{cases}u^{n}_{tt}+u^{n}_{t}-\Delta u^{n}=p_{\varepsilon_{n}}(t,u^{n}),\\\
u^{n}(t,x)=0\;\text{for}\;x\in\partial\Omega,\\\ u^{n}(0,x)=u^{n}_{0}(x),\\\
u^{n}_{t}(0,x)=u^{n}_{1}(x).\end{cases}$
As in the proof of Proposition 5.7 it follows that for every $T$ there exists
$v\in L^{\infty}(-T,T;H^{1}_{0})$ with $v_{t}\in
L^{\infty}(-T,T;L^{2})$, $v_{tt}\in L^{\infty}(-T,T;H^{-1})$ and $v\in
L^{4}(-T,T;L^{12})$ such that for a subsequence of $u^{n}$ there hold the
convergences
$\displaystyle u^{n}\to v\ \ \textrm{weakly-*}\ \ \textrm{in}\ \
L^{\infty}(-T,T;H^{1}_{0}),$ $\displaystyle u^{n}_{t}\to v_{t}\ \
\textrm{weakly-*}\ \ \textrm{in}\ \ L^{\infty}(-T,T;L^{2}),$ $\displaystyle
u^{n}_{tt}\to v_{tt}\ \ \textrm{weakly-*}\ \ \textrm{in}\ \
L^{\infty}(-T,T;H^{-1}).$
Moreover $(u^{n}(t),u^{n}_{t}(t))\to(v(t),v_{t}(t))$ weakly in
$\mathcal{E}_{0}$ for every $t\in[-T,T]$ which implies that
$(v(0),v_{t}(0))=(u_{0},u_{1})$ and $u^{n}(t)\to v(t)$ strongly in $L^{2}$. We
will show that $v$ is a weak solution for the autonomous problem, i.e., the
problem with $\varepsilon=0$. It is enough to show that for every $w\in
L^{2}(-T,T;H^{1}_{0})$ there holds
$\lim_{n\to\infty}\int_{-T}^{T}(p_{\varepsilon_{n}}(u^{n}(t),t)-f_{0}(v(t)),w(t))\,dt=0.$
Let us observe that $\|{u^{n}(t)}\|_{C^{0}}\leq R$ and $\|{v(t)}\|_{C^{0}}\leq R$
due to the fact that all attractors are bounded uniformly in $\mathcal{E}_{1}$
and the Sobolev embedding $H^{2}\hookrightarrow C^{0}$. Hence
$\displaystyle\left|\int_{-T}^{T}(p_{\varepsilon_{n}}(u^{n}(t),t)-f_{0}(v(t)),w(t))dt\right|$
$\displaystyle\qquad\leq\int_{-T}^{T}|(p_{\varepsilon_{n}}(u^{n}(t),t)-f_{0}(u^{n}(t)),w(t))|dt+\int_{-T}^{T}|(f_{0}(u^{n}(t))-f_{0}(v(t)),w(t))|dt$
$\displaystyle\qquad\leq\sup_{t\in\mathbb{R}}\sup_{|s|\leq
R}|(p_{\varepsilon_{n}}(s,t)-f_{0}(s))|\|{w}\|_{L^{1}(-T,T;L^{2})}$
$\displaystyle\qquad\qquad+\sup_{|s|\leq
R}|f^{\prime}_{0}(s)|\left(\int_{-T}^{T}\|{u^{n}(t)-v(t)}\|^{2}dt\right)^{\frac{1}{2}}\|{w}\|_{L^{2}(-T,T;L^{2})}.$
Due to (H2) the first term tends to zero. The second term also tends to zero
by the Aubin–Lions lemma. Hence, $v(t)$ is a weak solution on the interval
$[-T,T]$ with $(v(0),v_{t}(0))=(u_{0},u_{1})$. By the diagonal argument we can
extend $v$ to a global weak solution. Moreover, as $v$ is also a
Shatah–Struwe solution, it is unique. Moreover
$\|{v(t)}\|_{\mathcal{E}_{1}}\leq C$ due to the uniform boundedness of the
attractors $\mathcal{A}_{\varepsilon}$ in $\mathcal{E}_{1}$. Hence
$\\{v(t)\\}_{t\in\mathbb{R}}$ is a global bounded orbit for the autonomous
dynamical system $\varphi_{0}$, which implies that
$(u_{0},u_{1})\in\mathcal{A}_{0}$ and shows the upper semi-continuity in the
Kuratowski sense. Because all uniform attractors $\mathcal{A}_{\varepsilon}$
are uniformly bounded in $\mathcal{E}_{1}$, their union
$\cup_{\varepsilon\in[0,1]}\mathcal{A}_{\varepsilon}$ is relatively compact in
$\mathcal{E}_{0}$. So, by Lemma 8.3, we also have upper semi-continuity in the
Hausdorff sense.
∎
## 8\. Appendix.
### 8.1. Non-autonomous attractors
The results of this section can be found in [Langa, Kloeden].
###### Definition 8.1.
Let $X,\Sigma$ be metric spaces, $\\{\theta_{t}\\}_{t\geq 0}$ be a semigroup
and $\varphi:\mathbb{R}^{+}\times\Sigma\to C(X)$ be a family of continuous
maps on $X$. Let the following conditions hold:
* •
$\varphi(0,\sigma)=\textrm{Id}_{X}$ for every $\sigma\in\Sigma$.
* •
Map $\mathbb{R}^{+}\times\Sigma\ni(t,\sigma)\to\varphi(t,\sigma)x\in X$ is
continuous for every $x$.
* •
For every $t,s\geq 0$ and $\sigma\in\Sigma$ there holds the cocycle property
$\varphi(t+s,\sigma)=\varphi(t,\theta_{s}\sigma)\varphi(s,\sigma).$
Then the pair $(\theta,\varphi)_{(X,\Sigma)}$ is called a non-autonomous
dynamical system (NDS) and the map $\varphi$ a cocycle semiflow.
###### Definition 8.2.
The set $\mathcal{A}\subset X$ is called the uniform attractor for the cocycle
$\varphi$ on $\Sigma,X$ if $\mathcal{A}$ is the smallest compact set such that
for all bounded sets $B\subset X$ and $\Upsilon\subset\Sigma$ there holds
$\lim_{t\to\infty}\sup_{\sigma\in\Upsilon}\text{dist}(\varphi(t,\sigma)B,\mathcal{A})=0.$
###### Definition 8.3.
Let $(\theta,\varphi)_{(X,\Sigma)}$ be an NDS such that $\theta$ is a group,
i.e., $\Sigma$ is invariant under every $\theta_{t}$. Then we call the family
of compact sets
$\\{\mathcal{A}(\sigma)\\}_{\sigma\in\Sigma}\subset\mathcal{P}(X)$ a cocycle
attractor if the following hold:
* •
$\\{\mathcal{A}(\sigma)\\}_{\sigma\in\Sigma}$ is invariant under the NDS
$(\theta,\varphi)_{(X,\Sigma)}$, i.e.,
$\varphi(t,\sigma)\mathcal{A}(\sigma)=\mathcal{A}(\theta_{t}\sigma)\;\text{
for every $t\geq 0$.}$
* •
$\\{\mathcal{A}(\sigma)\\}_{\sigma\in\Sigma}$ pullback attracts all bounded
subsets $B\subset X$, i.e.,
$\lim_{t\to\infty}\text{dist}(\varphi(t,\theta_{-t}\sigma)B,\mathcal{A}(\sigma))=0.$
###### Remark 8.4.
If for some $\sigma\in\Sigma$ we consider the mapping
$S(t,\tau)=\varphi(t-\tau,\theta_{\tau}\sigma)$ for an NDS
$(\theta,\varphi)_{(X,\Sigma)}$, then the family of mappings
$\\{S(t,\tau):t\geq\tau\\}$ forms an evolution process. Let
$\\{\mathcal{A}(\sigma)\\}_{\sigma\in\Sigma}$ be a cocycle attractor for the
NDS. Then $\mathcal{A}(t)=\mathcal{A}(\theta_{t}\sigma)$ is called a pullback
attractor for $S(t,\tau)$.
###### Definition 8.5.
We say that the NDS $(\theta,\varphi)_{(X,\Sigma)}$ is uniformly
asymptotically compact if there exists a compact set $K\subset X$ such that
for all bounded sets $B\subset X$ and $\Upsilon\subset\Sigma$
$\lim_{t\to\infty}\sup_{\sigma\in\Upsilon}\text{dist}(\varphi(t,\sigma)B,K)=0.$
###### Theorem 8.1.
[Langa, Theorem 3.12.] Suppose that the NDS $(\theta,\varphi)_{(X,\Sigma)}$ is
uniformly asymptotically compact with $\Sigma$ which is compact and invariant
under the flow $\theta$. Then the uniform and cocycle attractors exist and
there holds
$\bigcup_{\sigma\in\Sigma}\mathcal{A}(\sigma)=\mathcal{A},$
where $\\{\mathcal{A}(\sigma)\\}_{\sigma\in\Sigma}$ is the cocycle attractor
and $\mathcal{A}$ is the uniform attractor.
###### Definition 8.6.
Let $(\theta,\varphi)_{(X,\Sigma)}$ be an NDS such that $\theta$ is a group.
We call $\xi:\mathbb{R}\to X$ a global solution through $x$ and $\sigma$ if,
for all $t\geq s$ it satisfies
$\varphi(t-s,\theta_{s}\sigma)\xi(s)=\xi(t)\text{ and }\xi(0)=x.$
###### Definition 8.7.
Let $(\theta,\varphi)_{(X,\Sigma)}$ be an NDS. We say that a subset
$\mathcal{M}\subset X$ is lifted-invariant if for each $x\in\mathcal{M}$ there
exist $\sigma\in\Sigma$ and a bounded global solution $\xi:\mathbb{R}\to X$
through $x$ and $\sigma$.
###### Theorem 8.2.
[Langa, Proposition 3.21] Suppose that the NDS $(\theta,\varphi)_{(X,\Sigma)}$
is uniformly asymptotically compact with $\Sigma$ which is compact and
invariant under the flow $\theta$. Then the uniform attractor $\mathcal{A}$ is the
maximal bounded lifted-invariant set of the NDS
$(\theta,\varphi)_{(X,\Sigma)}$.
### 8.2. Upper semicontinuity
We recall the definitions of Hausdorff and Kuratowski upper-semicontinuous
convergence of sets, and the relation between these conditions.
###### Definition 8.8.
Let $(X,d)$ be a metric space and let
$\\{A_{\varepsilon}\\}_{\varepsilon\in[0,1]}$ be a family of sets in $X$. We
say that this family converges to $A_{0}$ upper-semicontinuously in the
Hausdorff sense if
$\lim_{\varepsilon\to 0^{+}}\text{dist}_{X}(A_{\varepsilon},A_{0})=0.$
###### Definition 8.9.
Let $(X,d)$ be a metric space and let
$\\{A_{\varepsilon}\\}_{\varepsilon\in[0,1]}$ be a family of sets in $X$. We
say that this family converges to $A_{0}$ upper-semicontinuously in the
Kuratowski sense if
$X-\limsup_{\varepsilon\to 0^{+}}A_{\varepsilon}\subset A_{0},$
where $X-\limsup_{\varepsilon\to 0^{+}}A_{\varepsilon}$ is the Kuratowski
upper limit defined by
$X-\limsup_{\varepsilon\to 0^{+}}A_{\varepsilon}=\\{x\in
X:\ \text{there exist}\ \varepsilon_{n}\to 0^{+}\ \text{and}\
x_{\varepsilon_{n}}\in A_{\varepsilon_{n}}\ \text{such that}\
\lim_{n\to\infty}d(x_{\varepsilon_{n}},x)=0\\}.$
The proof of the next result can be found for example in [DMP, Proposition
4.7.16].
###### Lemma 8.3.
Assume that the sets $\\{A_{\varepsilon}\\}_{\varepsilon\in[0,1]}$ are
nonempty and closed and that the set
$\cup_{\varepsilon\in[0,1]}A_{\varepsilon}$ is relatively compact. If the
family $\\{A_{\varepsilon}\\}_{\varepsilon\in[0,1]}$ converges to $A_{0}$
upper-semicontinuously in the Kuratowski sense, then it converges to $A_{0}$
upper-semicontinuously in the Hausdorff sense.
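For instance (our illustration), in $X=\mathbb{R}$ the family
$A_{\varepsilon}=\\{0,1/\varepsilon\\}$ for $\varepsilon\in(0,1]$ with
$A_{0}=\\{0\\}$ converges upper-semicontinuously in the Kuratowski sense but
not in the Hausdorff sense, since
$\text{dist}_{X}(A_{\varepsilon},A_{0})=1/\varepsilon\to\infty$; the relative
compactness assumption of Lemma 8.3 rules out such families.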
## 9\. References.
[Antil] Antil, H., Pfefferer, J., Rogovs, S., Fractional operators with inhomogeneous boundary conditions: analysis, control, and discretization, Communications in Mathematical Sciences 16 (2018), 1395–1426.
[Arrieta-Carvalho-Hale-1992] Arrieta, J.M., Carvalho, A.N., Hale, J.K., A damped hyperbolic equation with critical exponent, Comm. Partial Differential Equations 17 (1992), 841–866.
[Babin-Vishik-1983] Babin, A.V., Vishik, M.I., Regular attractors of semi-groups and evolution equations, J. Math. Pures et Appl. 62 (1983), 441–491.
[Babin_Vishik] Babin, A.V., Vishik, M.I., Attractors of Evolution Equations, Studies in Mathematics and its Applications 25, North-Holland Publishing Co., Amsterdam, 1992.
[balibrea] Balibrea, F., Caraballo, T., Kloeden, P.E., Valero, J., Recent developments in dynamical systems: three perspectives, Int. J. Bifurcat. Chaos 20 (2010), 2591–2636.
[Blair] Blair, M., Smith, H., Sogge, C., Strichartz estimates for the wave equation on manifolds with boundary, Ann. I. H. Poincaré-AN 26 (2009), 1817–1829.
[Langa] Bortolan, M.C., Carvalho, A.N., Langa, J.A., Structure of attractors for skew product semiflows, Journal of Differential Equations 257(2) (2014), 490–522.
[BoCaL] Bortolan, M.C., Carvalho, A.N., Langa, J.A., Attractors Under Autonomous and Non-autonomous Perturbations, Mathematical Surveys and Monographs 246, American Mathematical Society, Providence, Rhode Island, 2020.
[Burq] Burq, N., Lebeau, G., Planchon, F., Global existence for energy critical waves in 3D domains, J. AMS 21 (2008), 831–845.
[Carvalho-Chol-Dlot-2009] Carvalho, A.N., Cholewa, J.W., Dłotko, T., Damped wave equations with fast growing dissipative nonlinearities, Discrete and Cont. Dyn. Systems-A 24 (2009), 1147–1165.
[Carvalho-Langa-Robinson-2012] Carvalho, A.N., Langa, J.A., Robinson, J.C., Attractors for Infinite-Dimensional Non-Autonomous Dynamical Systems, Applied Mathematical Series 182, Springer, New York, 2013.
[clsz] Chang, Q., Li, D., Sun, C., Zelik, S., Deterministic and random attractors for a wave equation with sign changing damping, arXiv:1910.02430.
[Cheban] Cheban, D.N., Global Attractors of Non-Autonomous Dissipative Dynamical Systems, World Scientific, 2004.
[Chepyzhov-2013] Chepyzhov, V.V., Uniform attractors of dynamical processes and non-autonomous equations of mathematical physics, Russian Mathematical Surveys 68 (2013), 159–196.
[Chepyzhov_Vishik] Chepyzhov, V.V., Vishik, M.I., Attractors for Equations of Mathematical Physics, American Mathematical Society, Providence, Rhode Island, 2002.
[Chueshov_Lasiecka] Chueshov, I., Lasiecka, I., Long-Time Behavior of Second Order Evolution Equations with Nonlinear Damping, Memoirs of the American Mathematical Society 912, American Mathematical Society, 2008.
[DMP] Denkowski, Z., Migórski, S., Papageorgiou, N.S., An Introduction to Nonlinear Analysis: Theory, Kluwer Academic Publishers, 2003.
[Dlotko_Cholewa] Dłotko, T., Cholewa, J.W., Global Attractors in Abstract Parabolic Problems, Cambridge University Press, Cambridge, 2000.
[Mirelson-Kalita-Langa] Freitas, M., Kalita, P., Langa, J., Continuity of non-autonomous attractors for hyperbolic perturbation of parabolic equations, Journal of Differential Equations 264 (2017).
[Ghidaglia_Temam] Ghidaglia, J.M., Temam, R., Attractors for damped nonlinear hyperbolic equations, J. Math. Pures et Appl. 66 (1987), 273–319.
[Hale_1985] Hale, J.K., Asymptotic behavior and dynamics in infinite dimensions, in: Nonlinear Differential Equations, Research Notes in Math. 132, Pitman, 1985, 1–42.
[Hale-1988] Hale, J.K., Asymptotic Behavior of Dissipative Systems, Mathematical Surveys and Monographs 25, American Mathematical Society, Providence, 1988.
[Haraux1] Haraux, A., Two remarks on dissipative hyperbolic problems, in: Nonlinear Partial Differential Equations and Their Applications, Collège de France Seminar, Research Notes in Math. 122, Pitman, 1984, 161–179.
[Haraux2] Haraux, A., Recent results on semi-linear hyperbolic problems in bounded domains, in: Partial Differential Equations, Lecture Notes in Math. 1324, Springer, Berlin, Heidelberg, 1988, 118–126.
[Har_book] Haraux, A., Semi-Linear Hyperbolic Problems in Bounded Domains, CRC Press, 1987.
[Henry-1981] Henry, D., Geometric Theory of Semilinear Parabolic Equations, Lecture Notes in Math. 840, Springer, Berlin, 1981.
[Savostianov] Kalantarov, V., Savostianov, A., Zelik, S., Attractors for damped quintic wave equations in bounded domains, Ann. Henri Poincaré 17 (2016), 2555–2584.
[Kloeden] Kloeden, P.E., Rasmussen, M., Nonautonomous Dynamical Systems, American Mathematical Society, Providence, Rhode Island, 2010.
[LMS] Liu, C., Meng, F., Sun, C., Well-posedness and attractors for a super-cubic weakly damped wave equation with $H^{-1}$ source term, Journal of Differential Equations 263 (2017), 8718–8748.
[Mei_Sun] Mei, X., Sun, C., Uniform attractors for a weakly damped wave equation with sup-cubic nonlinearity, Applied Mathematics Letters 95 (2019), 179–185.
[mcs] Mei, X., Xiong, Y., Sun, C., Pullback attractor for a weakly damped wave equation with sup-cubic nonlinearity, Discrete and Continuous Dynamical Systems 41 (2021), 569–600.
[Meng_Liu] Meng, F., Liu, C., Exponential attractors for weakly damped wave equation with sub-quintic nonlinearity, Computers & Mathematics with Applications 78 (2019), 1026–1036.
[PataZelik] Pata, V., Zelik, S., A remark on the damped wave equation, Communications on Pure & Applied Analysis 5 (2006), 611–616.
[Robinson] Robinson, J.C., Infinite-Dimensional Dynamical Systems, Cambridge University Press, Cambridge, 2001.
[Savo2] Savostianov, A., Strichartz estimates and smooth attractors for a sub-quintic wave equation with fractional damping in bounded domains, Advances in Differential Equations 20 (2015), 495–530.
[Savo1] Savostianov, A., Zelik, S., Smooth attractors for the quintic wave equations with fractional damping, Asymptotic Analysis 87 (2014), 191–221.
[Savostianov_measure] Savostianov, A., Zelik, S., Uniform attractors for measure-driven quintic wave equations, Russian Mathematical Surveys 75 (2020), 253–320.
[Sogge] Smith, H.F., Sogge, C.D., Global Strichartz estimates for nontrapping perturbations of the Laplacian, Communications in Partial Differential Equations 25 (2000), 2171–2183.
[Tao] Tao, T., Nonlinear Dispersive Equations: Local and Global Analysis, CBMS Regional Conference Series in Mathematics, 2006.
[Temam] Temam, R., Infinite-Dimensional Dynamical Systems in Mechanics and Physics, Springer, New York, 1988.
# Consistent Bayesian Community Detection
Sheng <EMAIL_ADDRESS> and Surya T. <EMAIL_ADDRESS>
Department of Statistical Science, Duke University
###### Abstract
Stochastic Block Models (SBMs) are a fundamental tool for community detection
in network analysis. But little theoretical work exists on the statistical
performance of Bayesian SBMs, especially when the community count is unknown.
This paper studies a special class of SBMs whose community-wise connectivity
probability matrix is diagonally dominant, i.e., members of the same community
are more likely to connect with one another than with members from other
communities. The diagonal dominance constraint is embedded within an otherwise
weak prior, and, under mild regularity conditions, the resulting posterior
distribution is shown to concentrate on the true community count and
membership allocation as the network size grows to infinity. A reversible-jump
Markov Chain Monte Carlo posterior computation strategy is developed by
adapting the allocation sampler of [19]. Finite sample properties are examined
via simulation studies in which the proposed method offers competitive
estimation accuracy relative to existing methods under a variety of
challenging scenarios.
###### keywords:
[class=MSC] 62F12, 62F15
###### keywords:
Bayesian inference, stochastic block model, diagonal dominance, network
analysis, community detection
## 1 Introduction
Community detection is the most basic yet central statistical problem in
network analysis. To determine the number of communities, various tests have
been constructed based on modularity [29], random matrix theory [4, 17], and
likelihood ratio [28]. Methods based on information criteria [25] and network
cross-validation [5, 18] have also been designed. In the Bayesian realm, a
stochastic block model (SBM) is often employed to jointly infer the number of
communities, the connectivity probability matrix, and the membership
assignment [22, 19, 8].
Despite clear empirical evidence of good statistical performance [19, 8],
theoretical guarantees on Bayesian SBMs are a rarity when the number of
communities is unknown. As the only exception, [8] show that the community
count may be consistently estimated under the restrictive assumptions of a
homogeneous SBM with at most three communities. It is unclear if their
calculations generalize to more realistic scenarios. It is also not clear if
Bayesian SBMs can consistently recover the true membership allocation.
We study a special class of SBMs whose community-wise connectivity probability
matrix is diagonally dominant. This special class offers a stronger encoding
of the notion of communities in networks in the sense that nodes within the
same community are strictly more likely to connect with each other than with
nodes from other communities. Crucially, the diagonal dominance condition
enables membership allocations to be fully retrieved from the node-wise
connectivity probabilities, as long as each community contains at least two
nodes. Of course, the node-wise connectivity probability matrix is estimated
from data with statistical error. But as long as it is sufficiently “close” to
the truth, it is still possible to precisely recover the membership allocation
and the community count.
For a Bayesian estimation of the diagonally-dominant SBM under a modified
Nowicki-Snijders prior [22], we show the posterior on the node-wise
connectivity matrix contracts to the truth in the sup-norm topology. Posterior
contraction under the sup-norm is necessary for the identification strategy
detailed above. [13] establish near minimax optimal posterior contraction
rates in the $L_{2}$ norm for dense networks with the true number of
communities assumed known. However, posterior contraction in $L_{2}$ or other
norms that are weaker than the sup-norm do not grant the identification of the
number of communities or the membership assignment from node-wise connectivity
probabilities. Our sup-norm posterior contraction calculation applies the
Schwartz method [10, 11, 12]. The key observation is that the sup-norm is
dominated by the Hellinger distance in the special context of SBMs, so the
tests required by the Schwartz method exist.
The theoretical gains of the diagonally dominant SBMs come at the price of
losing conjugacy with respect to the original Nowicki-Snijders prior. But
posterior computation may be carried out with a reasonably efficient
reversible-jump Markov chain Monte Carlo (MCMC) algorithm based on the
allocation sampler in [19]. Results from extensive numerical studies show that
our Bayesian diagonally-dominant SBM offers comparable and competitive
statistical performance against various alternatives in estimating the
community count and membership assignment.
## 2 The Diagonally Dominant Stochastic Block Model
Suppose an $n\times n$ binary adjacency matrix $A$ is observed, with entry
$A_{ij}=1$ if node $i$ and node $j$ are connected and $A_{ij}=0$ otherwise.
The stochastic block model (SBM) assumes there are $K\in{\mathbb{Z}}_{+}$
communities among the $n$ nodes and the connection between nodes exclusively
depends on their community membership. The community assignment $Z$ partitions
nodes $\\{1,...,n\\}$ into $K$ non-empty groups and assigns each node a
community label. Let the community-wise connectivity probability matrix be
$P\in[0,1]^{K\times K}$. Then,
$A_{ij}|Z\overset{ind}{\sim}Ber({P_{{Z(i)}{Z(j)}}})\text{
for }1\leq i<j\leq n,$ (1)
and $P\left({{A_{ii}}=0|Z}\right)=1$ for $i\in\\{1,...,n\\}$, assuming no
self-loops. We denote the above SBM model as $SBM(Z,P,n,K)$. Due to its
simplicity and expressiveness, SBM and its variants are fundamental tools for
community detection [e.g., 15, 2, 23].
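As a concrete illustration of model (1), the sketch below simulates an adjacency matrix; the network size, community count, and matrix $P$ are our placeholder choices, not quantities from the paper.

```python
# Simulate A ~ SBM(Z, P, n, K) as in (1); parameters are illustrative only.
import numpy as np

rng = np.random.default_rng(0)
n, K = 60, 3
Z = rng.integers(0, K, size=n)              # community labels Z(1),...,Z(n)
P = np.array([[0.6, 0.1, 0.1],
              [0.1, 0.5, 0.1],
              [0.1, 0.1, 0.7]])             # community-wise connectivity matrix

A = np.zeros((n, n), dtype=int)
for i in range(n):
    for j in range(i + 1, n):               # upper triangle, 1 <= i < j <= n
        A[i, j] = A[j, i] = rng.binomial(1, P[Z[i], Z[j]])
# A[i, i] = 0 for all i: no self-loops, matching P(A_ii = 0 | Z) = 1
```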
### 2.1 Bayesian SBM with conjugate priors
For Bayesian estimation of the SBM, [22] propose the following conjugate
prior: given $K$,
$\begin{split}{P_{ab}}&\overset{iid}{\sim}U\left({0,1}\right),\ a,b=1,...,K\\\
Z_{i}&\overset{iid}{\sim}MN(\pi),\ i=1,...,n\\\
\pi&\sim Dir\left(\alpha\right).\end{split}$ (2)
This prior is widely used and adapted to more complicated cases in the
Bayesian SBM literature [13, 27, 8, 19].
For the unknown $K$ case, to maintain conjugacy, it is natural to place a
Poisson prior on $K$ [19, 8]. With conjugacy, [19] marginalize out $P$ from
the posterior $\Pi_{n}(Z,K,P|A)$ and develop an efficient “allocation sampler”
to directly sample from $\Pi_{n}(Z,K|A)$; [8] adapt the idea of MFM sampler of
[20] to the SBM case: marginalize out $K$ from the posterior
$\Pi_{n}(Z,K,P|A)$, and develop a Gibbs sampler sampling from
$\Pi_{n}(Z,P|A)$.
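For intuition on why marginalizing $P$ is possible, note that under the $U(0,1)$ priors in (2) each block contributes a Beta integral: $\int_{0}^{1}p^{e}(1-p)^{m-e}dp=B(e+1,m-e+1)$, where $m$ counts the node pairs in the block and $e$ the observed edges among them. The following sketch (our own illustration; the helper name is hypothetical) computes the resulting collapsed log marginal likelihood of $A$ given $(Z,K)$.

```python
# Collapsed log marginal likelihood log Pi(A | Z, K) with P integrated out
# under the U(0,1) priors of (2). Our illustration; not code from the paper.
import numpy as np
from scipy.special import betaln

def log_marginal(A, Z, K):
    logp = 0.0
    for a in range(K):
        for b in range(a, K):
            ia, ib = np.where(Z == a)[0], np.where(Z == b)[0]
            if a == b:
                m = len(ia) * (len(ia) - 1) // 2       # pairs within block a
                e = A[np.ix_(ia, ia)].sum() // 2       # edges within block a
            else:
                m = len(ia) * len(ib)                  # pairs across (a, b)
                e = A[np.ix_(ia, ib)].sum()            # edges across (a, b)
            logp += betaln(e + 1, m - e + 1)           # log B(e+1, m-e+1)
    return logp
```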
### 2.2 Our proposal: diagonally dominant SBM
In this paper, we propose to modify the conjugate specification of Nowicki and
Snijders’ prior on the connectivity matrix $P$ by imposing a diagonal
dominance constraint. The constraint is imposed in two steps: first specify a
prior distribution for the diagonal entries of $P$, then conditional on the
diagonal entries, specify a prior distribution on the off-diagonal entries
such that the off-diagonal entries are strictly less than their corresponding
diagonal entries.
For instance, we specify the following prior:
$\begin{split}&{P_{aa}}|K,\delta\overset{iid}{\sim}U(\delta,1],\ a\in\\{1,...,K\\},\\\
&{P_{ab}}|K,\delta,\\{P_{aa}\\}_{a\in\\{1,...,K\\}}\overset{ind}{\sim}U(0,P_{aa}\wedge
P_{bb}-\delta),\ a<b\in\\{1,...,K\\},\\\ &\delta\propto{\rm log}(n)/n,\\\ &K\sim
Pois(1),\end{split}$ (3)
where the hyperparameter $\delta$ is chosen to be a deterministic sequence
that goes to 0 as the network size grows to infinity. Uniform distributions in
(3) are used for simplicity and can be replaced with other distributions.
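A draw from prior (3) can be sketched as follows; conditioning the Poisson draw on $K\geq 1$ and the proportionality constant in $\delta$ are our placeholder choices.

```python
# One draw of (K, P) from prior (3); illustrative placeholder choices only.
import numpy as np

rng = np.random.default_rng(1)
n = 200
delta = np.log(n) / n                       # delta proportional to log(n)/n

K = 0
while K == 0:                               # K ~ Pois(1), conditioned on K >= 1
    K = rng.poisson(1.0)

P = np.zeros((K, K))
for a in range(K):
    P[a, a] = rng.uniform(delta, 1.0)       # P_aa ~ U(delta, 1]
for a in range(K):
    for b in range(a + 1, K):
        upper = min(P[a, a], P[b, b]) - delta   # enforce the dominance gap
        P[a, b] = P[b, a] = rng.uniform(0.0, upper)
```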
In contrast to the Nowicki and Snijders’ priors, our prior specification
directly imposes conditional dependence between diagonal entries and off-
diagonal entries. The dependence matches the idea of “community” at the price
of losing conjugacy.
The modification is mainly for two reasons. Firstly, the prior constraint of
diagonal dominance offers a neat identification of the number of communities,
and allows us to consistently estimate the number of communities and
membership (see Section 3.1 for more details).
Secondly, the resulting posterior under the modified prior is more
interpretable. Though the prior specification following [22] is conjugate,
off-diagonal entries can be greater than diagonal entries under the prior,
that is, nodes can be more likely to be connected to nodes from other
communities than nodes from their own community. Such configurations violate
the idea of “community”. Consequently, posterior samples of connectivity
matrices can violate diagonal dominance and are hard to interpret within the
framework of SBM.
### 2.3 $L_{2}$ minimax rate
This paper studies a special sub-class of SBM. One may wonder if the
diagonally dominant (DD) SBM actually solves a simpler community detection
problem. To answer this question, we calculate the $L_{2}$ minimax rate of
estimation for DD-SBM and compare it with the minimax rates derived in [6].
Now, we define the parameter space of DD-SBM. DD-SBM has the following space
of connectivity matrix
${S_{k,\delta}}=\left\\{{P\in{{\left[{0,1}\right]}^{k\times
k}}:{P^{T}}=P,\ {P_{ii}}>\delta+\max\limits_{j\neq
i}{P_{ij}},\ i\in\\{1,...,k\\}}\right\\},$ (4)
where $\delta\in[0,1)$ is a constant. The key departure from the literature is
the diagonal dominance constraint: $P_{ii}>\delta+\max_{j\neq i}P_{ij}$ for all
$i\in\{1,\dots,k\}$. Under this constraint, between-community connection
probabilities are smaller than within-community connection probabilities by at
least $\delta$. The gap is inherited by the node-wise connectivity probability
matrix.
Further with the membership assignment $Z$, we can define the space for node-
wise connectivity probability matrix:
${\Theta_{k,\delta}}=\left\\{{T({ZPZ^{T}})\in[0,1]^{n\times n}:P\in
S_{k,\delta},Z\in{{\mathcal{Z}}_{n,k}}}\right\\},$ (5)
where ${\mathcal{Z}}_{n,k}$ denotes the collection of all possible assignments
of $n$ nodes into $k$ communities, each with at least two elements, and
$T(M):=M-\operatorname{diag}(M)$ for any square matrix $M$. The node-
wise connectivity probability matrix inherits the structural assumption of
diagonal dominance. The minimum community size assumption allows recovering
community membership from the node-wise connectivity probability matrix. It is
worth emphasizing that singleton communities are ruled out.
The following $L_{2}$ minimax result implies that DD-SBM estimation is as
difficult as the original SBM estimation problem, as long as the dominance gap
shrinks at a suitable rate. In our calculation, the squared gap $\delta^{2}$
is dominated by the “clustering rate” ${\rm log}(k)/n$ [6, 7, 16].
###### Proposition 1.
For any $k\in\\{1,..,n\\}$ and $\delta\precsim\sqrt{{\rm log}(k)/n}$,
$\inf\limits_{\hat{\theta}}\sup\limits_{\theta\in\Theta_{k,\delta}}{\mathbb{E}}\left[||\hat{\theta}-\theta||_{2}^{2}\right]\asymp\frac{k^{2}}{n^{2}}+\frac{{\rm
log}(k)}{n}.$ (6)
###### Proof.
The upper bound follows from Theorem 2.1 of [6], as the diagonally dominant
connectivity matrix space is a subset of the unconstrained connectivity matrix
space.
The lower bound follows the proof of Theorem 2.2 of [6], but their construction
violates the diagonal dominance constraint. It turns out that a diagonally
dominant version of their construction is available. For brevity, we only
highlight the differences from the proof in [6].
For the nonparametric rate, we construct the $Q^{\omega}$ matrix by
$Q^{\omega}_{ab}=Q^{\omega}_{ba}=\frac{1}{2}-\delta-\frac{c_{1}k}{n}\omega_{ab}$,
for $a>b\in\\{1,...,k\\}$ and $Q^{\omega}_{aa}=\frac{1}{2}$, for
$a\in\\{1,...,k\\}$. The rest of the proof for the nonparametric rate remains
the same.
For the clustering rate, we construct the $Q$ matrix with the following form
$\begin{bmatrix}D_{1}&B\\\ B^{T}&D_{2}\end{bmatrix}$, where
$D_{1}=\frac{1}{2}I_{k/2}$, $B$ follows the same construction of [6] except
that $B_{a}=\frac{1}{2}-\delta-\sqrt{\frac{c_{2}{\rm log}k}{n}}\omega_{a}$ for
$a\in\\{1,...,k/2\\}$, $D_{2}=(\frac{1}{2}-\delta-\sqrt{\frac{{\rm
log}k}{n}})1_{k/2}1^{T}_{k/2}+(\delta+\sqrt{\frac{{\rm log}k}{n}})I_{k/2}$. As
$\delta\precsim\sqrt{{\rm log}(k)/n}$, the KL divergence upper bound remains
the same. The rest of the proof for the clustering rate remains the same as
the entropy calculation and the volume argument are unaffected.
∎
## 3 Consistent Bayesian Community Detection
### 3.1 Identification Strategy
The first consequence of diagonal dominance is that node-wise connectivity
probability matrix spaces associated with different numbers of communities are
disjoint. This observation offers a neat partition of the parameter space by
the number of communities.
###### Lemma 3.1.
Suppose $k\neq k^{\prime}\in{\mathbb{N}}$, then
$\Theta_{k,\delta}\cap\Theta_{k^{\prime},\delta^{\prime}}=\emptyset$ for any
$\delta,\delta^{\prime}\geq 0$.
Secondly, with diagonal dominance, it is possible to exactly identify the
number of communities, the membership of every node, and the community-wise
connectivity probability matrix from the node-wise connectivity probability
matrix under mild conditions. A more rigorous statement is presented in Lemma 3.2.
The recovery is based on checking each node’s connectivity probabilities with
other nodes, as each node is connected with nodes from its own community with
the highest probability.
###### Lemma 3.2.
Suppose $P\in S_{k,\delta}$ for some constant $\delta>0$ and $\theta=T(ZPZ^{T})$
for some $Z\in{\mathcal{Z}}_{n,k}$. Then $T^{-1}$ recovers both the community
assignment $Z$ and the connectivity matrix $P$ from $\theta$.
###### Proof.
Without loss of generality, assume the nodes are ordered by community, so we
can write $Z=[{\bf{1}}_{n_{1}},...,{\bf{1}}_{n_{k}}]$, where $n_{j}$ denotes
the number of nodes in community $j$ and ${\bf{1}}_{n_{j}}$ is an $n\times 1$
vector whose entries in the $j^{th}$ block are 1. Therefore, the off-diagonal
terms of $\theta$ are the off-diagonal terms of $ZP{Z^{T}}$.
Suppose we wish to pin down the $i^{th}$ node’s community membership. We take
the $i^{th}$ row of $\theta$; it contains the connectivity probabilities of
node $i$ with all other nodes. As $Z\in{\mathcal{Z}}_{n,k}$ whose minimum
community size is two,
${\mathcal{C}}_{i}\equiv\{j:\theta_{ij}=\max_{\ell}\theta_{i\ell}\}$ is exactly
the set of node(s) from the community of node $i$. If ${\mathcal{C}}_{i}$
contained node(s) from other communities, then the connectivity probabilities
of node $i$ with those node(s) would be cross-community, hence strictly less
than the within-community connectivity probability of node $i$, contradicting
the construction of ${\mathcal{C}}_{i}$. If ${\mathcal{C}}_{i}$ missed node(s)
from the community of node $i$, then the connectivity probabilities of node $i$
with those node(s) would be within-community, matching the maximal connectivity
probabilities attained on ${\mathcal{C}}_{i}$, so those node(s) would belong to
${\mathcal{C}}_{i}$ after all. Therefore, by applying the above procedure to
every row of $\theta$, $Z$ is identified up to a permutation of columns.
To recover $P$ from $\theta$, it suffices to use $Z$ and plug in corresponding
values from $\theta$.
∎
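The proof is constructive, so the $T^{-1}$ map can be written down directly. Below is a minimal Python sketch of the recovery (the function name is ours); the `tol` argument anticipates Lemma 3.3, where only a sup-norm approximation of $\theta$ is available:

```python
import numpy as np

def recover_Z_P(theta, tol=0.0):
    """Recover (labels, P) from theta = T(Z P Z^T), following Lemma 3.2.

    Nodes are grouped by the set of nodes with (near-)maximal connectivity;
    use tol = 0 for exact theta, and a small positive tol to absorb a
    sup-norm perturbation.
    """
    n = theta.shape[0]
    labels = -np.ones(n, dtype=int)
    k = 0
    for i in range(n):
        if labels[i] >= 0:
            continue
        row = theta[i].astype(float).copy()
        row[i] = -np.inf                              # theta_ii = 0 under T
        C_i = np.where(row >= row.max() - tol)[0]     # community mates of i
        labels[C_i] = k
        labels[i] = k
        k += 1
    P = np.zeros((k, k))
    members = [np.where(labels == a)[0] for a in range(k)]
    for a in range(k):
        for b in range(k):
            i = members[a][0]                         # community sizes are >= 2,
            j = members[b][1] if a == b else members[b][0]   # so index 1 exists
            P[a, b] = theta[i, j]
    return labels, P
```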
In practice, exact knowledge of the node-wise connectivity probability matrix
is not available. However, the precise recovery in Lemma 3.2 remains possible
with an estimated node-wise connectivity probability matrix. This is formalized
in Lemma 3.3. We use the sup-norm to characterize the accuracy of our knowledge
of the node-wise connectivity probability matrix. For any node-wise connectivity
matrix $\theta^{0}$, there exists $Z_{0}$ and $P^{0}$ such that
$\theta^{0}=T(Z_{0}P^{0}Z_{0}^{T})$. Without loss of generality, we can fix
the column ordering of $Z_{0}$ so that $P^{0}$ is consequently defined.
###### Lemma 3.3.
Suppose ${\theta^{0}}=T(Z_{0}P^{0}Z_{0}^{T})$ for some
$Z_{0}\in{\mathcal{Z}}_{n,{k_{0}}}$, ${P^{0}}\in{S_{{k_{0}},\delta}}$ and
$\delta>0$. Then, $\\{\theta=T(ZPZ^{T}):||\theta-\theta^{0}||_{\infty}\leq
r,Z\in{\mathcal{Z}}_{n,k},P\in
S_{k,\delta}\\}=\\{T(Z_{0}PZ_{0}^{T}):||P-P^{0}||_{\infty}\leq
r,P\in{S_{k_{0},\delta}}\\}$ holds for all $r<\delta/2$.
###### Proof.
Pick any $\theta\in\{\theta=T(ZPZ^{T}):||\theta-\theta^{0}||_{\infty}\leq
r,Z\in{\mathcal{Z}}_{n,k},P\in S_{k,\delta}\}$ and define
${\mathcal{C}}_{i}=\{j:\theta_{ij}=\max_{\ell}\theta_{i\ell}\}$; similarly, for
$\theta^{0}$, define
${\mathcal{C}}_{i}^{0}=\{j:\theta^{0}_{ij}=\max_{\ell}\theta^{0}_{i\ell}\}$.
The statement is equivalent to ${\mathcal{C}}_{i}={\mathcal{C}}_{i}^{0}$ for all
$i\in\{1,...,n\}$ and all $\theta$.
First, note for any $j\in{\mathcal{C}}^{0}_{i}$ and
$\ell\in\\{1,...,n\\}\backslash{\mathcal{C}}^{0}_{i}$,
${\theta_{ij}}-{\theta_{i\ell}}={\theta_{ij}}-\theta_{ij}^{0}+\theta_{ij}^{0}-\theta_{i\ell}^{0}+\theta_{i\ell}^{0}-{\theta_{i\ell}}>\delta-2r>0$.
That is, ${\mathcal{C}}^{0}_{i}$ identifies a set of nodes with higher
connectivity probabilities with node $i$ relative to nodes from
$\\{1,...,n\\}\backslash{\mathcal{C}}^{0}_{i}$. Recall ${\mathcal{C}}_{i}$ is
the collection of nodes with the highest connectivity probability. Then,
${\mathcal{C}}_{i}\subseteq{\mathcal{C}}_{i}^{0}$ for all $i\in\\{1,...,n\\}$.
If ${\mathcal{C}}_{i}^{0}$ contains nodes from at least two communities of
$\theta$, then there exist $j_{1},j_{2}\in{\mathcal{C}}^{0}_{i}$, such that
$|\theta_{ij_{1}}-\theta_{ij_{2}}|>\delta$ as $P\in S_{k,\delta}$. Note for
all $j_{1},j_{2}\in{\mathcal{C}}^{0}_{i}$,
$\theta_{ij_{1}}^{0}=\theta_{ij_{2}}^{0}$, then it follows
$|{\theta_{ij_{1}}}-{\theta_{ij_{2}}}|=|{\theta_{ij_{1}}}-\theta_{ij_{1}}^{0}+\theta_{ij_{1}}^{0}-\theta_{ij_{2}}^{0}+\theta_{ij_{2}}^{0}-{\theta_{ij_{2}}}|\leq|{\theta_{ij_{1}}}-\theta_{ij_{1}}^{0}|+|\theta_{ij_{2}}^{0}-{\theta_{ij_{2}}}|\leq
2r<\delta$. This contradiction implies that ${\mathcal{C}}_{i}^{0}$ contains
nodes from a single community of $\theta$, and hence
${\mathcal{C}}_{i}={\mathcal{C}}_{i}^{0}$. As $\theta$ is arbitrary,
${\mathcal{C}}_{i}={\mathcal{C}}_{i}^{0}$ for all $i\in\{1,...,n\}$ and for all
$\theta$. ∎
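As a quick numerical illustration of Lemma 3.3 (under assumed toy values, reusing the `recover_Z_P` sketch from Section 3.1), a sup-norm perturbation of size $r<\delta/2$ leaves the recovered assignment unchanged:

```python
import numpy as np

rng = np.random.default_rng(1)
k0, delta = 3, 0.3
P0 = 0.2 * np.ones((k0, k0)) + 0.7 * np.eye(k0)     # P0 in S_{k0, delta}
z0 = np.repeat(np.arange(k0), 5)                    # 5 nodes per community
theta0 = P0[np.ix_(z0, z0)]                         # node-wise Z0 P0 Z0^T
np.fill_diagonal(theta0, 0.0)                       # apply the map T
r = delta / 5                                       # r < delta / 2
E = rng.uniform(-r, r, size=theta0.shape)
theta = theta0 + np.triu(E, 1) + np.triu(E, 1).T    # symmetric sup-norm noise
labels, _ = recover_Z_P(theta, tol=2 * r)
assert np.all(labels == z0)                         # Z0 is recovered exactly
```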
### 3.2 Posterior Concentration
To study the asymptotic behavior of the diagonally dominant SBM, we make the
following assumptions on the prior specification. The prior specification in
Assumptions 1 and 2 is indexed by $n$, the number of nodes in the network, and
can be interpreted as a sequence of prior distributions.
###### Assumption 1.
(Prior mass on the parameter space) There exists $\bar{\delta}\in(0,1)$ such
that for all $0<\delta<\bar{\delta}$ and $k>1$,
${\Pi_{n}}\left({{S_{k,\delta}}}|K=k\right)\geq 1-{e^{-{n^{2}}{\delta}}}$.
Assumption 1 requires that the prior specification is essentially diagonally
dominant. Under Nowicki and Snijders’ prior, conditional on $k$ communities,
the prior probability of diagonal dominance is $1/k^{k}$. Therefore, Nowicki
and Snijders’ prior does not satisfy Assumption 1.
###### Assumption 2.
(Prior decay rates)
1.
(Prior on $P$ conditional on $K$ and $\delta$)
For $a\in\\{1,...,k\\}$, diagonal entries $\\{P_{aa}\\}$ are independent with
prior density $\pi_{n}(P_{aa}|K,\delta)\geq e^{-C{\rm
log}(n)P_{aa}}1_{\\{P_{aa}\in(\delta,1)\\}}$ for some positive constant $C$
independent of $a\in\\{1,...,k\\}$.
For $a<b\in\{1,...,k\}$, off-diagonal entries
$\{P_{ab}\}_{a<b\in\{1,...,k\}}$ are conditionally independent given the
diagonal entries, with conditional prior density
${\pi_{n}}\left({P_{ab}}|\\{P_{aa}\\}_{a\in\\{1,...,k\\}},\delta,K\right)\geq
e^{-C{\rm
log}(n)({P_{aa}}\wedge{P_{bb}})}1_{\\{P_{ab}\in[0,{P_{aa}}\wedge{P_{bb}}-\delta]\\}}$
(7)
for some positive constant $C$ independent of $a,b\in\\{1,...,k\\}$.
2.
(Prior on $Z$ conditional on $K$)
The prior on the membership assignment $Z$ satisfies
${\Pi_{n}}\left({Z=z|K=k}\right)\geq{e^{-Cn{\rm log}(k)}}$ for all
$z\in{\mathcal{Z}}_{n,k}$ and for some universal positive constant $C$.
3.
(Prior on $K$)
The support of $K$ is $[K_{n}]$ with $K_{n}\precsim\sqrt{n}$. For
$k\in[K_{n}]$, the prior on $K$ satisfies ${\Pi_{n}}\left({K=k}\right)\geq
e^{-Ck{\rm log}(k)}$ for some universal positive constant $C$.
Assumption 2 makes more specific decay rate assumptions on the prior mass of
the connectivity matrix $P$, the assignment $Z$, and the number of communities
$K$. The rate assumption on the prior of $P$ given $K$ and $\delta$
essentially requires that the prior density on $P$ does not vanish too quickly
as $n$ grows. For instance, the uniform prior on $P$ and the Poisson prior on
$K$ in (3) satisfy Assumption 2.
###### Theorem 3.4.
Suppose the adjacency matrix $A\sim SBM(Z_{0},P^{0},n,k_{0})$, let
$\theta^{0}=T(Z_{0}P^{0}Z_{0}^{T})$, $P^{0}\in S_{k_{0},\delta_{0}}$ for
some $k_{0}\precsim\sqrt{n}$ and $\delta_{0}>0$, and suppose the number of zero
and one entries of $\theta^{0}$ is at most $O(n^{2}\varepsilon_{n})$, where
$\varepsilon_{n}^{2}\asymp\frac{{\rm log}(k_{0})}{n}$. The prior $\Pi_{n}$
satisfies Assumptions 1 and 2. Then, for all sufficiently large $M$,
${{\mathbb{P}}_{0,n}}{\Pi_{n}}\left({\theta:||\theta-\theta^{0}||_{\infty}\geq
M\varepsilon_{n}}|A\right)\to 0.$
The proof of Theorem 3.4 follows the Schwartz method [26, 3, 9, 11]. Details of
the proof are deferred to Section 3.3.
Though exact $L_{\infty}$ minimax rates for SBM or DD-SBM are unknown,
$L_{\infty}$ minimax rates are lower bounded by $L_{2}$ minimax rates. The
$L_{2}$ minimax rate calculation for DD-SBM in Proposition 1 is therefore
useful for judging the sharpness of the posterior contraction rate in Theorem
3.4. As we assume $k_{0}\precsim\sqrt{n}$, the posterior contraction rate in
$||\cdot||_{\infty}$ matches the $L_{2}$ minimax rate in Proposition 1, so the
posterior contraction rate is minimax-optimal.
With Theorem 3.4 and Lemma 3.3, we can establish consistent estimation of
the true number of communities and the true membership assignment. The main
result is summarized as follows.
###### Theorem 3.5.
Under the same assumptions of Theorem 3.4,
${{\mathbb{P}}_{0,n}}\left[{{\Pi_{n}}\left(\\{K=k_{0}\\}\cap\\{Z={Z_{0}}\\}|A\right)}\right]\to
1.$
###### Proof.
In light of Theorem 3.4, the posterior mass essentially concentrates on
$\{\theta:||\theta-\theta^{0}||_{\infty}\leq\varepsilon_{n}\}$.
Therefore, we leverage Lemma 3.3 to identify $k_{0}$ and $Z_{0}$ on this set.
Define $E_{0}=\{K=k_{0}\}\cap\{Z=Z_{0}\}$. Note the decomposition
$E^{c}_{0}=\left(E^{c}_{0}\cap\{||\theta-\theta^{0}||_{\infty}\leq\varepsilon_{n}\}\right)\cup\left(E^{c}_{0}\cap\{||\theta-\theta^{0}||_{\infty}>\varepsilon_{n}\}\right)$
for some $\varepsilon_{n}$; then
${\Pi_{n}}\left(E^{c}_{0}|A\right)\leq{\Pi_{n}}\left(E^{c}_{0},||\theta-\theta^{0}||_{\infty}\leq\varepsilon_{n}|A\right)+{\Pi_{n}}\left(||\theta-\theta^{0}||_{\infty}>\varepsilon_{n}|A\right)$
(8)
where $\varepsilon_{n}\to 0$ is chosen to match the posterior contraction rate
in sup-norm.
Then, the posterior probability of choosing a wrong number of communities or a
wrong membership assignment can be upper bounded via the identification result
and the convergence of the posterior distribution of $\theta$. For the first
part of Equation (8), the gap assumption on $\theta^{0}$ gives
$\delta_{0}\succsim\varepsilon_{n}$. Then, by Lemma 3.3, for all sufficiently
small $\varepsilon_{n}$,
$\{||\theta-\theta^{0}||_{\infty}\leq\varepsilon_{n}\}$ coincides with its
$Z_{0}$ slice, on which the implied number of communities is $k_{0}$.
For the second part, Theorem 3.4 implies
${\mathbb{P}}_{0,n}[{\Pi_{n}}\left(||\theta-\theta^{0}||_{\infty}>\varepsilon_{n}|A\right)]\to
0$.
∎
### 3.3 Proof of Theorem 3.4
Pioneered by [26] and further developed by [3, 9, 11], the Schwartz method is
the major tool for studying posterior concentration properties of Bayesian
procedures as the sample size grows to infinity [12]. The Schwartz method seeks
two sufficient conditions to guarantee posterior concentration: the existence
of certain tests and a prior mass condition. The existence of certain tests
often reduces to the construction of a suitable sieve and an entropy condition
associated with the sieve, provided the metric under which we wish to obtain
posterior contraction is dominated by the Hellinger distance. The prior mass
condition requires a sufficient amount of prior mass on a KL neighborhood of
the truth.
Establishing convergence in $||\cdot||_{\infty}$ via the general framework of
the Schwartz method requires $||\cdot||_{\infty}$ to be dominated by the
Hellinger distance. In general, $||\cdot||_{\infty}$ is (weakly) stronger than
the Hellinger distance and not dominated by it. However, in the special case
of SBM, the parameter space is constrained and the desired dominance holds.
This observation is shown in Lemma 3.6.
###### Lemma 3.6.
Suppose $A_{ij}|\theta\overset{ind}{\sim}Ber(\theta_{ij})$
for $i<j$ and $i,j\in\{1,...,n\}$; then $||\cdot||_{\infty}$ is dominated by
the Hellinger distance: $||\theta^{0}-\theta^{1}||_{\infty}\leq
2H\left({\mathbb{P}}_{\theta^{0}},{\mathbb{P}}_{\theta^{1}}\right)$.
With the norm dominance, the existence of certain tests reduces to constructing
a suitable sieve that receives sufficient prior mass and whose metric entropy
is under control. In our proof, the sieve is constructed as the set of all
well-separated node-wise connectivity probability matrices,
$\bigcup\nolimits_{k=1}^{K_{n}}\Theta_{k,\delta_{n}}$, for some carefully
chosen $\delta_{n}$ and $K_{n}$.
In light of Lemma 3.1, the metric entropy of the sieve can be neatly bounded.
The entropy calculation is summarized in Lemma 3.7.
###### Lemma 3.7.
Suppose $\varepsilon_{n}\to 0$ as $n\to\infty$ and
$\varepsilon_{n}\precsim\delta_{n}$; then the metric entropy satisfies
${\rm log}N\left(\varepsilon_{n},\bigcup\nolimits_{k=1}^{K_{n}}\Theta_{k,\delta_{n}},||\cdot||_{\infty}\right)\precsim\left(n+1\right){\rm log}K_{n}+\frac{1}{2}K_{n}\left(K_{n}+1\right){\rm log}\left(1/\varepsilon_{n}\right).$ (9)
The prior mass condition in terms of KL divergence can be reduced to a prior
mass condition in terms of $||\cdot||_{\infty}$ norm. This observation is
summarized in Lemma 3.8.
###### Lemma 3.8.
The observation model is $A_{ij}|\theta^{0}\overset{ind}{\sim}Ber(\theta^{0}_{ij})$
for $i<j$ and $i,j\in\{1,...,n\}$. Suppose
$C_{0}=\min_{i<j:0<\theta_{ij}^{0}<1}\theta_{ij}^{0}\left(1-\theta_{ij}^{0}\right)>0$,
and the number of zero and one entries of $\theta^{0}$ is less than
$O(n^{2}\varepsilon_{n})$ for some $\varepsilon_{n}\to 0$ such that
$n^{2}\varepsilon_{n}\to\infty$. If
$||\theta-\theta^{0}||_{\infty}\leq\varepsilon_{n}$, then
$KL\left({\mathbb{P}}_{\theta^{0}},{\mathbb{P}}_{\theta}\right)\precsim
C_{0}^{-1}n^{2}\varepsilon_{n}^{2}$ and
${V_{2,0}}\left({\mathbb{P}}_{\theta^{0}},{\mathbb{P}}_{\theta}\right)\precsim
C_{0}^{-1}n^{2}\varepsilon_{n}^{2}$.
Lemma 3.8 reduces the prior mass condition to an element-wise probability
calculation. Combined with Assumption 2, we obtain the following prior mass
bound.
###### Lemma 3.9 (prior mass condition).
Suppose $P^{0}\in S_{k_{0},\delta_{0}}$ for some $k_{0}\precsim\sqrt{n}$ and
constant $\delta_{0}\in(0,1)$, and $\varepsilon_{n}^{2}\asymp{\rm
log}(k_{0})/n$, then under Assumption 2, there exists a constant $C$ only
dependent on $P^{0}$ and $C_{0}$ such that
${\Pi_{n}}\left(P:||P-P^{0}||_{\infty}<C_{0}\varepsilon_{n};Z=Z_{0};K=k_{0}|\delta\right)\geq
e^{-Cn^{2}\varepsilon_{n}^{2}}$ (10)
holds for all sufficiently large $n$.
With the above preparation, the proof of Theorem 3.4 is as follows. The
structure of the proof follows [11].
###### Proof.
We first verify the prior mass condition. By Lemma 3.8, the set
$\left\{\theta\in\bigcup\nolimits_{k=1}^{K_{n}}\Theta_{k,0}:KL\left({\mathbb{P}}_{\theta^{0}},{\mathbb{P}}_{\theta}\right)<n^{2}\varepsilon_{n}^{2},\ {V_{2,0}}\left({\mathbb{P}}_{\theta^{0}},{\mathbb{P}}_{\theta}\right)<n^{2}\varepsilon_{n}^{2}\right\}$
contains a sup-norm ball
$\left\{\theta\in\bigcup\nolimits_{k=1}^{K_{n}}\Theta_{k,0}:||\theta-\theta^{0}||_{\infty}<C_{0}\varepsilon_{n}\right\}$
for some constant $C_{0}$ only dependent on $\theta^{0}$. Choosing
$1\succ\tau_{n}\succ\varepsilon_{n}$, this sup-norm ball further contains
the sup-norm ball
$\left\{\theta\in\Theta_{k_{0},\tau_{n}}:||\theta-\theta^{0}||_{\infty}<C_{0}\varepsilon_{n}\right\}$.
By Lemma 3.3, this sup-norm ball is essentially its $Z_{0}$ slice, so its prior
mass reduces to
${\Pi_{n}}\left(P\in
S_{k_{0},\tau_{n}}:||P-P^{0}||_{\infty}<C\varepsilon_{n};Z=Z_{0};K=k_{0}\right).$
By Lemma 3.9, the prior mass is further lower bounded by
$e^{-Cn^{2}\varepsilon_{n}^{2}}$ for some constant $C$ only dependent on
$P^{0}$ and $C_{0}$.
Next, we check the existence of tests, which boils down to a metric entropy
condition and a prior mass condition on the sieve. The sieve is constructed as
$\bigcup\nolimits_{k=1}^{K_{n}}\Theta_{k,\delta_{n}}$ with
$1\succ\delta_{n}\succsim\varepsilon_{n}^{2}$.
The metric entropy condition requires that the metric entropy of the sieve be
upper bounded by $Cn^{2}\varepsilon_{n}^{2}$; this is satisfied by Lemma 3.7.
It remains to bound the prior mass outside the sieve. Note
${\Pi_{n}}\left(\left(\bigcup\nolimits_{k=1}^{K_{n}}\Theta_{k,\delta_{n}}\right)^{c}\right)\leq{\Pi_{n}}\left(\Theta_{K_{n},\delta_{n}}^{c}\right)={\Pi_{n}}\left(\Theta_{K_{n},\delta_{n}}^{c}|K=K_{n}\right)\Pi_{n}(K=K_{n})$,
so the prior mass condition on the sieve also follows from a union bound:
$\begin{array}[]{lll}{\Pi_{n}}\left({\Theta_{k,{\delta_{n}}}^{c}|K=k}\right)&\leq&\sum\nolimits_{z\in{{\mathcal{Z}}_{n,k}}}{{\Pi_{n}}\left({\Theta_{k,{\delta_{n}}}^{c}|Z=z,K=k}\right){\Pi_{n}}\left({Z=z|K=k}\right)}\\\
&\leq&\mathop{\mathop{\rm
max}\nolimits}\nolimits_{z\in{{\mathcal{Z}}_{n,k}}}{\Pi_{n}}\left({\Theta_{k,{\delta_{n}}}^{c}|Z=z,K=k}\right)\\\
&=&\mathop{\mathop{\rm
max}\nolimits}\nolimits_{z\in{{\mathcal{Z}}_{n,k}}}{\Pi_{n}}\left({T(zPz^{T}):P\in
S_{k,{\delta_{n}}}^{c}|Z=z,K=k}\right)\\\
&\leq&{\Pi_{n}}\left({S_{k,{\delta_{n}}}^{c}|K=k}\right)\\\
&\leq&e^{-n^{2}\delta_{n}}\\\
&\precsim&{e^{-C{n^{2}}\varepsilon_{n}^{2}}}\end{array}$
for some constant $C$.
∎
## 4 Posterior Sampler and Inference
### 4.1 Reversible-jump MCMC algorithm
Under the diagonally dominant prior (3), the posterior distribution is as
follows,
${\Pi_{n}}\left({Z,K,P|A}\right)\propto\Pi\left({A|Z,P}\right){\Pi_{n}}\left({P|Z,K}\right){\Pi_{n}}\left({Z|K}\right){\Pi_{n}}\left(K\right)$
(11)
with
$\begin{array}[]{lll}\Pi\left({A|Z,P}\right)&=&\prod\nolimits_{1\leq a\leq
b\leq
K}{P_{ab}^{{O_{ab}}\left(Z\right)}{{\left({1-{P_{ab}}}\right)}^{{n_{ab}}\left(Z\right)-{O_{ab}}\left(Z\right)}}}\\\
{\Pi_{n}}\left({P|Z,K,\delta_{n}}\right)&=&\prod\nolimits_{1\leq a<b\leq
K}{\frac{{{1_{\left({0\leq{P_{ab}}\leq\left({{P_{aa}}\wedge{P_{bb}}}\right)-{\delta_{n}}}\right)}}}}{{\left({{P_{aa}}\wedge{P_{bb}}}\right)-{\delta_{n}}}}}\\\
{\Pi_{n}}\left({Z|K}\right)&=&\frac{{\Gamma\left(K\right)}}{{\Gamma\left(n+K\right)}}\prod\nolimits_{1\leq
c\leq K}{\Gamma\left({{n_{c}}\left(Z\right)}+1\right)}\\\
{\Pi_{n}}\left(K\right)&\propto&\frac{1}{{K!}}{1_{1\leq
K\leq{K_{n}}}}.\end{array}$
For comparison, the Nowicki and Snijders’ prior is conjugate and the
community-wise connectivity probability matrix $P$ can be marginalized out in
the posterior distribution. Therefore, posterior inference on $K$ is directly
based on posterior draws from $\Pi_{n}(Z,K|A)$. However, the truncated Nowicki
and Snijders’ prior loses conjugacy. Our posterior inference needs to sample
from $\Pi_{n}(P,Z,K|A)$.
We propose a Metropolis-Hastings algorithm to sample from (11). The proposal
$(Z^{*},K^{*},P^{*})$ is accepted with probability
$\min\left(1,\frac{{\Pi_{n}}\left(Z^{*},K^{*},P^{*}|A\right)}{{\Pi_{n}}\left(Z,K,P|A\right)}\frac{\Pi_{prop}(Z,K,P|Z^{*},K^{*},P^{*})}{\Pi_{prop}(Z^{*},K^{*},P^{*}|Z,K,P)}\right)$
(12)
where $\Pi_{prop}$ denotes the density function of the proposal distribution
and $(Z,K,P)$ denotes the current iteration.
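In log space, the accept step for (12) takes the standard Metropolis-Hastings form; the sketch below is generic (names are ours) and assumes the unnormalized log posterior (11) and the log proposal densities are available:

```python
import numpy as np

def mh_accept(log_post_star, log_post_cur, log_q_back, log_q_fwd, rng):
    """Accept/reject for ratio (12), computed in log space for stability.

    log_post_*: unnormalized log posterior (11) at the proposal / current state;
    log_q_back: log proposal density of the reverse move (current | proposal);
    log_q_fwd:  log proposal density of the forward move (proposal | current).
    """
    log_alpha = min(0.0, log_post_star - log_post_cur + log_q_back - log_q_fwd)
    return np.log(rng.uniform()) < log_alpha
```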
To be specific, the proposal distribution is adapted from the allocation
sampler developed in [19]. In each iteration of the sampler, the proposal
distribution first samples $(Z,K)$ in the spirit of the allocation sampler,
then samples $P$ given $(Z,K)$. The proposal distribution is decomposed into
two parts: conditional on the previous draw $(P,Z,K)$ and the data matrix $A$,
${\Pi_{prop}}\left({{Z^{*}},{K^{*}},{P^{*}}|Z,K,P,A}\right)\propto{\Pi_{prop}}\left({{P^{*}}|{Z^{*}},A}\right){\Pi_{prop}}\left({{Z^{*}},{K^{*}}|Z,K,P,A}\right)$
where $P_{ab}^{*}|Z^{*},A\overset{ind}{\sim}Beta\left(O_{ab}^{*}+1,\ n_{ab}^{*}-O_{ab}^{*}+1\right)$
with $O_{ab}^{*}\equiv O_{ab}(Z^{*})$ and $n_{ab}^{*}\equiv n_{ab}(Z^{*})$,
and $(Z^{*},K^{*})|(Z,K,P,A)$ are simulated in the spirit of the allocation
sampler developed in [19, 21].
The proposal distribution of $(Z^{*},K^{*})|(Z,K,P,A)$ follows the allocation
sampler of [19], but differs in that the connectivity probability matrix $P$
is retained and used for likelihood evaluation. In contrast, the allocation
sampler of [19] explores the $(Z,K)$ space with $P$ marginalized out. Details
of the posterior sampler are in the Supplement.
The expectation of the proposal distribution $\Pi_{prop}(P^{*}|Z^{*},A)$ is
essentially the ordinary block-constant least squares estimator that is widely
used to estimate the connectivity probability matrix in the literature [see 6,
16, 27, for instance]. As the proposal density matches the likelihood component
$\Pi(A|P^{*},Z^{*})$, the acceptance rate is a product of prior density ratios
and proposal density ratios.
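A sketch of the second-stage draw (reusing the hypothetical `block_counts` helper from Section 2) follows. Note that the proposal ignores the diagonal dominance constraint: a $P^{*}$ violating it has zero prior density in (11), so the move is simply rejected in (12).

```python
import numpy as np

def propose_P(A, z_star, K_star, rng):
    """Draw P* | Z*, A with P*_ab ~ Beta(O*_ab + 1, n*_ab - O*_ab + 1).

    The mean (O*_ab + 1) / (n*_ab + 2) is a smoothed version of the
    block-constant least squares estimator O*_ab / n*_ab.
    """
    O, n_pairs = block_counts(A, z_star, K_star)
    draw = rng.beta(O + 1.0, n_pairs - O + 1.0)
    return np.triu(draw) + np.triu(draw, 1).T    # symmetrize the upper triangle
```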
### 4.2 Posterior Inference
Under the 0-1 loss function $\ell(k,k_{0})=1_{\{k\neq k_{0}\}}$, the Bayes
estimate of $K$ is its posterior mode. Since the $K$ communities in the
Metropolis-Hastings sampler may include empty communities, we compute the
effective number of communities based on samples of $Z$.
The community assignment is identified up to label switching. In our matrix
formulation, the assignment $Z$ is identified up to a column permutation; that
is, $ZZ^{T}$ is invariant to column permutations. If the $(i,j)^{th}$ entry of
$ZZ^{T}$ is 1, node $i$ and node $j$ are classified into the same community by
$Z$. In addition, the node-wise connectivity $\theta$ is identified without
relabelling concerns. With the 0-1 loss function
$\ell(Z,Z_{0})=1_{\{ZZ^{T}\neq Z_{0}Z_{0}^{T}\}}$, the Bayes estimate of $Z$ is
its posterior mode. To pin down the posterior mode of $Z$, we can find the
posterior mode of $ZZ^{T}$ and take any $Z$ attaining it.
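A minimal sketch of this computation from MCMC output (the function name is ours):

```python
import numpy as np

def posterior_mode_Z(z_draws):
    """Posterior mode of Z up to label switching, via co-clustering matrices.

    Each draw is reduced to the boolean matrix ZZ^T, which is invariant to
    column permutations; the most frequent pattern is the mode, and any draw
    attaining it is returned as the Bayes estimate of Z.
    """
    counts, reps = {}, {}
    for z in z_draws:
        z = np.asarray(z)
        key = (z[:, None] == z[None, :]).tobytes()   # encode ZZ^T
        counts[key] = counts.get(key, 0) + 1
        reps.setdefault(key, z)
    best = max(counts, key=counts.get)
    return reps[best]
```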
## 5 Numerical Experiments
Section 3 presents asymptotic properties of the Bayesian SBM with diagonally
dominant priors, henceforth abbreviated as “DD-SBM”. This section assesses
finite sample properties of DD-SBM under various settings.
### 5.1 Simulation design
We perform simulation studies for different configurations of the number of
communities, network size, and overall sparsity of connectivity. In
particular, we choose
$(k_{0},n,\rho)\in\\{3,5,7\\}\times\\{50,75\\}\times\\{\frac{1}{2},1\\}$, and
for each $(k_{0},n,\rho)$ configuration, 100 networks are generated from
$SBM(Z_{0},\rho P^{0},n,k_{0})$.
To control the source of variation in the synthetic networks, the 100 networks
share the same community structure $Z_{0}$ where nodes are deterministically
and uniformly assigned to $k_{0}$ communities; the 100 networks also share the
same connectivity matrix $\rho P^{0}$. The randomness in the 100 synthetic
networks comes only from the Bernoulli draws of $SBM(Z_{0},\rho P^{0},n,k_{0})$.
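A minimal sketch of this data-generating step (helper name ours; $P^{0}$ here is the Case 1 design defined below):

```python
import numpy as np

def simulate_sbm(z, P, rng):
    """One symmetric adjacency matrix from SBM(Z, P, n, k), zero diagonal."""
    theta = P[np.ix_(z, z)]                     # node-wise probabilities
    U = rng.uniform(size=theta.shape)
    A = np.triu((U < theta).astype(int), 1)     # Bernoulli upper triangle
    return A + A.T

rng = np.random.default_rng(2024)
k0, n, rho = 3, 50, 0.5
z0 = np.sort(np.arange(n) % k0)                 # fixed, near-uniform Z0
P0 = 0.6 * np.eye(k0) + 0.2 * np.ones((k0, k0)) # Case 1 (defined below)
networks = [simulate_sbm(z0, rho * P0, rng) for _ in range(100)]
```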
We choose the following cases for $P^{0}$.
* •
Case 1: $P^{0}=0.6\times I_{k_{0}}+0.2\times 1_{k_{0}}1_{k_{0}}^{T}$,
* •
Case 2: $P^{0}=0.2\times I_{k_{0}}+0.6\times 1_{k_{0}}1_{k_{0}}^{T}$,
* •
Case 3: $P^{0}=0.4\times I_{k_{0}}+0.4\times 1_{k_{0}}1_{k_{0}}^{T}$,
* •
Case 4: $P^{0}=0.2\times I_{k_{0}}+0.2\times 1_{k_{0}}1_{k_{0}}^{T}+0.4\times
1_{k_{0},\lceil k_{0}/2\rceil}1_{k_{0},\lceil k_{0}/2\rceil}^{T}$,
where $I_{k}$ denotes the identity matrix of rank $k$, $1_{k}$ denotes the
$k$-dimensional vector of ones, and $1_{n,k}$ denotes the $n$-dimensional
vector whose first $k$ elements are 1 and whose remaining $(n-k)$ elements are
0.
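For reference, a sketch constructing the four designs (the function name is ours):

```python
import numpy as np

def make_P0(case, k0):
    """Connectivity matrices P0 for Cases 1-4 of Section 5.1."""
    I, J = np.eye(k0), np.ones((k0, k0))
    if case == 1:
        return 0.6 * I + 0.2 * J
    if case == 2:
        return 0.2 * I + 0.6 * J
    if case == 3:
        return 0.4 * I + 0.4 * J
    if case == 4:
        v = (np.arange(k0) < np.ceil(k0 / 2)).astype(float)  # 1_{k0, ceil(k0/2)}
        return 0.2 * I + 0.2 * J + 0.4 * np.outer(v, v)
    raise ValueError("case must be 1, 2, 3 or 4")
```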
In Cases 1-3, the within-community connectivity probabilities are all 0.8 and,
for simplicity, the between-community connectivity probabilities are constant:
in Case 1, cross-community connectivity is weak; in Case 2, it is strong; and
in Case 3, it is medium. Case 4 combines the structure of Case 1 and Case 3,
and half of the cross-community connectivity is strong.
The reasons for choosing $n\in\{50,75\}$ are as follows. Firstly, many
networks in the natural and social sciences are of moderate size. Secondly,
asymptotically consistent estimators can perform poorly when the sample size is
moderate, so it is more informative to compare methods on networks of moderate
size than on networks with thousands of nodes. Thirdly, MCMC algorithms are
computationally expensive, and this computational bottleneck prevents us from
handling networks with more than thousands of nodes.
As the number of parameters in the SBM grows at the order of $O(k^{2}_{0})$,
the difficulty of community detection increases with $k_{0}$. The case
$k_{0}=7$ imitates the situation of many communities, while the cases
$k_{0}\in\{3,5\}$ imitate networks with moderately many communities.
### 5.2 Simulation results
For comparison, we also implement Bayesian SBM with the Nowicki and Snijders’
prior [21, 8], composite likelihood BIC method [25], and network cross-
validation [5]. Two posterior samplers for the Nowicki and Snijders’ prior are
available in the literature: the allocation sampler of [19], and the MFM
adapted MCMC algorithm of [8]. We use the code provided in the supplementary
materials of [8] and choose default values for the hyperparameters in their
algorithm. The Bayesian SBM of [8, 19] is henceforth denoted as “c-SBM”
(Bayesian SBM with conjugate priors). [25] propose composite likelihood BIC to
choose the number of communities, and this method is henceforth denoted as
“CLBIC”. [5] design a cross-validation strategy to choose the number of
communities for SBM, and it is henceforth denoted as “NCV”.
| | | Case 1 | Case 2 | Case 3 | Case 4
---|---|---|---|---|---|---
$k_{0}$ | $n$ | Method | $\rho=\mbox{\tiny$\frac{1}{2}$}$ | $\rho=1$ | $\rho=\mbox{\tiny$\frac{1}{2}$}$ | $\rho=1$ | $\rho=\mbox{\tiny$\frac{1}{2}$}$ | $\rho=1$ | $\rho=\mbox{\tiny$\frac{1}{2}$}$ | $\rho=1$
$3$ | $50$ | DD-SBM | $1.8_{\mbox{\tiny(1.3)}}$ | $1.9_{\mbox{\tiny(1.9)}}$ | $1.8_{\mbox{\tiny(-1.6)}}$ | $1.3_{\mbox{\tiny(0.0)}}$ | $0.3_{\mbox{\tiny(0.1)}}$ | $2.0_{\mbox{\tiny(-1.9)}}$ | $0.3_{\mbox{\tiny(0.1)}}$ | $1.0_{\mbox{\tiny(-0.6)}}$
c-SBM | $0.8_{\mbox{\tiny(-0.5)}}$ | $1.9_{\mbox{\tiny(-1.9)}}$ | $1.9_{\mbox{\tiny(-1.8)}}$ | $1.0_{\mbox{\tiny(-1.0)}}$ | $0.2_{\mbox{\tiny(-0.0)}}$ | $1.9_{\mbox{\tiny(-1.9)}}$ | $0.6_{\mbox{\tiny(-0.1)}}$ | $0.9_{\mbox{\tiny(-0.8)}}$
CLBIC | $0.5_{\mbox{\tiny(-0.2)}}$ | $1.3_{\mbox{\tiny(-1.2)}}$ | $1.3_{\mbox{\tiny(-1.2)}}$ | $1.3_{\mbox{\tiny(-1.1)}}$ | $0.0_{\mbox{\tiny(0.0)}}$ | $1.4_{\mbox{\tiny(-1.3)}}$ | $0.6_{\mbox{\tiny(-0.3)}}$ | $1.0_{\mbox{\tiny(-0.9)}}$
NCV | $0.9_{\mbox{\tiny(-0.6)}}$ | $2.0_{\mbox{\tiny(-2.0)}}$ | $2.0_{\mbox{\tiny(-2.0)}}$ | $2.0_{\mbox{\tiny(-2.0)}}$ | $0.0_{\mbox{\tiny(0.0)}}$ | $2.0_{\mbox{\tiny(-2.0)}}$ | $0.9_{\mbox{\tiny(-0.3)}}$ | $0.9_{\mbox{\tiny(-0.8)}}$
$75$ | DD-SBM | $1.0_{\mbox{\tiny(0.5)}}$ | $2.0_{\mbox{\tiny(-1.9)}}$ | $1.6_{\mbox{\tiny(-1.1)}}$ | $1.1_{\mbox{\tiny(-0.6)}}$ | $0.1_{\mbox{\tiny(0.0)}}$ | $1.9_{\mbox{\tiny(-1.9)}}$ | $0.2_{\mbox{\tiny(0.0)}}$ | $0.9_{\mbox{\tiny(-0.7)}}$
c-SBM | $0.5_{\mbox{\tiny(-0.1)}}$ | $2.0_{\mbox{\tiny(-1.9)}}$ | $1.6_{\mbox{\tiny(-1.3)}}$ | $1.0_{\mbox{\tiny(-1.0)}}$ | $0.3_{\mbox{\tiny(0.0)}}$ | $1.8_{\mbox{\tiny(-1.6)}}$ | $0.4_{\mbox{\tiny(0.0)}}$ | $0.9_{\mbox{\tiny(-0.8)}}$
CLBIC | $0.0_{\mbox{\tiny(0.0)}}$ | $1.0_{\mbox{\tiny(-1.0)}}$ | $0.9_{\mbox{\tiny(-0.8)}}$ | $1.0_{\mbox{\tiny(-0.9)}}$ | $0.0_{\mbox{\tiny(0.0)}}$ | $1.0_{\mbox{\tiny(-1.0)}}$ | $0.0_{\mbox{\tiny(0.0)}}$ | $1.0_{\mbox{\tiny(-0.9)}}$
NCV | $0.1_{\mbox{\tiny(0.0)}}$ | $2.0_{\mbox{\tiny(-2.0)}}$ | $1.9_{\mbox{\tiny(-1.9)}}$ | $2.0_{\mbox{\tiny(-1.9)}}$ | $0.0_{\mbox{\tiny(0.0)}}$ | $2.0_{\mbox{\tiny(-2.0)}}$ | $0.0_{\mbox{\tiny(0.0)}}$ | $1.0_{\mbox{\tiny(-0.9)}}$
$5$ | $50$ | DD-SBM | $3.0_{\mbox{\tiny(-2.5)}}$ | $3.9_{\mbox{\tiny(-3.9)}}$ | $3.9_{\mbox{\tiny(-3.8)}}$ | $2.3_{\mbox{\tiny(-2.0)}}$ | $1.2_{\mbox{\tiny(0.7)}}$ | $4.0_{\mbox{\tiny(-4.0)}}$ | $3.6_{\mbox{\tiny(-3.6)}}$ | $2.8_{\mbox{\tiny(-2.7)}}$
c-SBM | $3.7_{\mbox{\tiny(-3.7)}}$ | $3.9_{\mbox{\tiny(-3.9)}}$ | $4.0_{\mbox{\tiny(-4.0)}}$ | $3.0_{\mbox{\tiny(-3.0)}}$ | $1.4_{\mbox{\tiny(-1.0)}}$ | $3.9_{\mbox{\tiny(-3.9)}}$ | $3.8_{\mbox{\tiny(-3.7)}}$ | $2.9_{\mbox{\tiny(-2.9)}}$
CLBIC | $3.1_{\mbox{\tiny(-3.1)}}$ | $3.4_{\mbox{\tiny(-3.4)}}$ | $3.3_{\mbox{\tiny(-3.3)}}$ | $3.5_{\mbox{\tiny(-3.4)}}$ | $1.9_{\mbox{\tiny(-1.6)}}$ | $3.4_{\mbox{\tiny(-3.3)}}$ | $3.2_{\mbox{\tiny(-3.2)}}$ | $2.9_{\mbox{\tiny(-2.8)}}$
NCV | $4.0_{\mbox{\tiny(-4.0)}}$ | $4.0_{\mbox{\tiny(-4.0)}}$ | $4.0_{\mbox{\tiny(-4.0)}}$ | $4.0_{\mbox{\tiny(-4.0)}}$ | $2.0_{\mbox{\tiny(-1.5)}}$ | $4.0_{\mbox{\tiny(-4.0)}}$ | $4.0_{\mbox{\tiny(-4.0)}}$ | $3.2_{\mbox{\tiny(-3.0)}}$
$75$ | DD-SBM | $2.0_{\mbox{\tiny(-1.1)}}$ | $3.9_{\mbox{\tiny(-3.9)}}$ | $3.9_{\mbox{\tiny(-3.9)}}$ | $2.6_{\mbox{\tiny(-2.4)}}$ | $0.5_{\mbox{\tiny(0.0)}}$ | $4.0_{\mbox{\tiny(-4.0)}}$ | $2.3_{\mbox{\tiny(-2.0)}}$ | $2.9_{\mbox{\tiny(-2.8)}}$
c-SBM | $2.7_{\mbox{\tiny(-2.5)}}$ | $4.0_{\mbox{\tiny(-4.0)}}$ | $4.0_{\mbox{\tiny(-4.0)}}$ | $3.0_{\mbox{\tiny(-3.0)}}$ | $0.8_{\mbox{\tiny(-0.3)}}$ | $4.0_{\mbox{\tiny(-3.9)}}$ | $2.3_{\mbox{\tiny(-2.0)}}$ | $2.9_{\mbox{\tiny(-2.9)}}$
CLBIC | $2.6_{\mbox{\tiny(-2.5)}}$ | $3.0_{\mbox{\tiny(-3.0)}}$ | $3.0_{\mbox{\tiny(-3.0)}}$ | $2.9_{\mbox{\tiny(-2.9)}}$ | $0.0_{\mbox{\tiny(0.0)}}$ | $3.0_{\mbox{\tiny(-3.0)}}$ | $2.8_{\mbox{\tiny(-2.8)}}$ | $2.7_{\mbox{\tiny(-2.7)}}$
NCV | $3.9_{\mbox{\tiny(-3.8)}}$ | $4.0_{\mbox{\tiny(-4.0)}}$ | $4.0_{\mbox{\tiny(-4.0)}}$ | $3.9_{\mbox{\tiny(-3.9)}}$ | $0.0_{\mbox{\tiny(0.0)}}$ | $4.0_{\mbox{\tiny(-4.0)}}$ | $3.9_{\mbox{\tiny(-3.8)}}$ | $2.7_{\mbox{\tiny(-2.6)}}$
$7$ | $50$ | DD-SBM | $5.6_{\mbox{\tiny(-5.5)}}$ | $5.9_{\mbox{\tiny(-5.9)}}$ | $5.9_{\mbox{\tiny(-5.9)}}$ | $4.1_{\mbox{\tiny(-3.9)}}$ | $3.5_{\mbox{\tiny(-3.1)}}$ | $6.0_{\mbox{\tiny(-6.0)}}$ | $6.0_{\mbox{\tiny(-6.0)}}$ | $4.6_{\mbox{\tiny(-4.5)}}$
c-SBM | $5.9_{\mbox{\tiny(-5.9)}}$ | $5.9_{\mbox{\tiny(-5.9)}}$ | $6.0_{\mbox{\tiny(-6.0)}}$ | $5.1_{\mbox{\tiny(-5.0)}}$ | $4.8_{\mbox{\tiny(-4.7)}}$ | $6.0_{\mbox{\tiny(-5.9)}}$ | $5.9_{\mbox{\tiny(-5.9)}}$ | $5.0_{\mbox{\tiny(-4.9)}}$
CLBIC | $5.2_{\mbox{\tiny(-5.2)}}$ | $5.3_{\mbox{\tiny(-5.3)}}$ | $5.3_{\mbox{\tiny(-5.3)}}$ | $5.5_{\mbox{\tiny(-5.4)}}$ | $4.9_{\mbox{\tiny(-4.8)}}$ | $5.3_{\mbox{\tiny(-5.3)}}$ | $5.3_{\mbox{\tiny(-5.3)}}$ | $4.9_{\mbox{\tiny(-4.8)}}$
NCV | $6.0_{\mbox{\tiny(-6.0)}}$ | $6.0_{\mbox{\tiny(-6.0)}}$ | $6.0_{\mbox{\tiny(-6.0)}}$ | $6.0_{\mbox{\tiny(-6.0)}}$ | $6.0_{\mbox{\tiny(-6.0)}}$ | $6.0_{\mbox{\tiny(-6.0)}}$ | $6.0_{\mbox{\tiny(-6.0)}}$ | $5.6_{\mbox{\tiny(-5.5)}}$
$75$ | DD-SBM | $4.7_{\mbox{\tiny(-4.6)}}$ | $6.0_{\mbox{\tiny(-6.0)}}$ | $6.0_{\mbox{\tiny(-5.9)}}$ | $4.4_{\mbox{\tiny({-4.3})}}$ | $2.0_{\mbox{\tiny(-1.4)}}$ | $6.0_{\mbox{\tiny(-5.9)}}$ | $5.5_{\mbox{\tiny(-5.5)}}$ | $4.8_{\mbox{\tiny(-4.8)}}$
c-SBM | $5.4_{\mbox{\tiny(-5.4)}}$ | $5.9_{\mbox{\tiny(-5.9)}}$ | $5.9_{\mbox{\tiny(-5.9)}}$ | $5.0_{\mbox{\tiny(-5.0)}}$ | $2.6_{\mbox{\tiny(-2.3)}}$ | $5.9_{\mbox{\tiny(-5.9)}}$ | $5.4_{\mbox{\tiny(-5.3)}}$ | $5.0_{\mbox{\tiny(-5.0)}}$
CLBIC | $4.9_{\mbox{\tiny(-4.8)}}$ | $5.0_{\mbox{\tiny(-5.0)}}$ | $5.0_{\mbox{\tiny(-5.0)}}$ | $4.9_{\mbox{\tiny(-4.8)}}$ | $3.5_{\mbox{\tiny(-3.4)}}$ | $5.0_{\mbox{\tiny(-5.0)}}$ | $5.0_{\mbox{\tiny(-5.0)}}$ | $4.7_{\mbox{\tiny(-4.7)}}$
NCV | $6.0_{\mbox{\tiny(-6.0)}}$ | $6.0_{\mbox{\tiny(-6.0)}}$ | $6.0_{\mbox{\tiny(-6.0)}}$ | $6.0_{\mbox{\tiny(-6.0)}}$ | $3.5_{\mbox{\tiny(-3.2)}}$ | $6.0_{\mbox{\tiny(-6.0)}}$ | $6.0_{\mbox{\tiny(-6.0)}}$ | $4.8_{\mbox{\tiny(-4.8)}}$
Table 1: RMSE of $\hat{K}$, with bias in parentheses.
Compared with c-SBM, DD-SBM achieves similar accuracy across different
configurations. To be specific, when $k_{0}=3$, DD-SBM tends to over-estimate
the number of communities; when $\rho=\frac{1}{2}$ and $k_{0}\in\{5,7\}$,
DD-SBM is slightly more accurate than c-SBM in Cases 1 and 3 and similarly
accurate in Cases 2 and 4. When the posterior samples of the connectivity
Therefore, it is reasonable to expect DD-SBM and c-SBM have similar accuracy
in networks generated from diagonally dominant SBM.
Compared with CLBIC, DD-SBM is less accurate in most cases. This is due to the
design of $P^{0}$ in Cases 1-3, under which the working likelihood of CLBIC is
close to the true likelihood. In Case 4, the true likelihood is more
complicated than the working likelihood of CLBIC, and the advantage of CLBIC
over DD-SBM is less obvious.
Compared with NCV, DD-SBM is more accurate in most cases. To be specific, when
$k_{0}=3$ and $\rho=\frac{1}{2}$, DD-SBM tends to over-estimate the number of
communities; in other configurations, DD-SBM is more accurate than NCV.
Case 2 is the most difficult, as the between-community connectivity probability
is very close to the within-community connectivity probability. Indeed, all
methods nearly uniformly choose one big community, except that CLBIC sometimes
chooses two communities.
To assess the membership assignment accuracy, we use the Hubert-Arabie
adjusted Rand index [14, 24] to measure the agreement between two clustering
assignments. The index is expected to be 0 if two independent assignments are
compared, and is 1 if two equivalent assignments are compared. Though the
adjusted Rand index tends to capture the disagreement among large clusters,
community sizes in our simulation study are about the same and the adjusted
Rand index is still a meaningful metric.
| | | Case 1 | Case 2 | Case 3 | Case 4
---|---|---|---|---|---|---
$k_{0}$ | $\rho$ | $n$ | DD-SBM | c-SBM | DD-SBM | c-SBM | DD-SBM | c-SBM | DD-SBM | c-SBM
$3$ | $\frac{1}{2}$ | $50$ | 0.62 | 0.63 | 0.00 | 0.00 | 0.05 | 0.01 | 0.37 | 0.42
$75$ | 0.87 | 0.91 | 0.00 | 0.00 | 0.12 | 0.05 | 0.47 | 0.51
$1$ | $50$ | 0.97 | 0.98 | 0.01 | 0.01 | 0.86 | 0.86 | 0.56 | 0.58
$75$ | 0.99 | 0.99 | 0.03 | 0.03 | 0.96 | 0.97 | 0.58 | 0.57
$5$ | $\frac{1}{2}$ | $50$ | 0.10 | 0.03 | 0.00 | 0.00 | 0.01 | 0.00 | 0.20 | 0.22
$75$ | 0.25 | 0.11 | 0.00 | 0.00 | 0.01 | 0.00 | 0.28 | 0.30
$1$ | $50$ | 0.83 | 0.86 | 0.00 | 0.00 | 0.06 | 0.03 | 0.33 | 0.34
$75$ | 0.94 | 0.99 | 0.00 | 0.00 | 0.32 | 0.17 | 0.35 | 0.36
$7$ | $\frac{1}{2}$ | $50$ | 0.03 | 0.00 | 0.00 | 0.00 | 0.01 | 0.00 | 0.12 | 0.12
$75$ | 0.07 | 0.01 | 0.00 | 0.00 | 0.00 | 0.00 | 0.19 | 0.20
$1$ | $50$ | 0.24 | 0.12 | 0.00 | 0.00 | 0.01 | 0.00 | 0.24 | 0.24
$75$ | 0.57 | 0.43 | 0.00 | 0.00 | 0.04 | 0.02 | 0.27 | 0.27
Table 2: Average adjusted Rand index
Given a synthetic network $A$ and draws from the posterior distribution
$\Pi(\cdot|A)$, we can compute the adjusted Rand index of posterior draws of
$Z$ against $Z_{0}$ and use their mean as the accuracy metric for
$\Pi(\cdot|A)$. Like the adjusted Rand index for two clustering assignments,
the averaged index assesses the agreement of the posterior distribution of $Z$
against the truth $Z_{0}$.
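A minimal sketch of this metric, assuming scikit-learn's `adjusted_rand_score` (an implementation of the Hubert-Arabie index):

```python
import numpy as np
from sklearn.metrics import adjusted_rand_score

def posterior_ari(z_draws, z0):
    """Mean adjusted Rand index of posterior draws of Z against the truth z0."""
    return float(np.mean([adjusted_rand_score(z0, z) for z in z_draws]))
```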
Table 2 presents the average of adjusted Rand indices of the 100 synthetic
networks under different $(k_{0},\rho,n)$ configurations in the four cases.
Overall, the average adjusted Rand index of DD-SBM is similar to that of
c-SBM. This echoes the similar estimation accuracy of $k$ of DD-SBM and c-SBM,
as community detection is highly sensitive to the number of communities. When
$\rho=1/2$ and $k_{0}\in\{5,7\}$, DD-SBM is slightly better than c-SBM in
Cases 1 and 3: when the data are less informative, the regularity in the prior
of DD-SBM improves estimation accuracy over c-SBM. The advantage disappears in
Cases 2 and 4, where cross-community connectivity is close to within-community
connectivity.
## 6 Sparse Networks
The framework in Section 3 can be extended to sparse networks whose overall
connectivity probability shrinks to 0 as network size increases [e.g. 16, 7].
We state the posterior contraction rate and the posterior consistency results
for such sparse networks as follows. Their proofs follow exactly the same
arguments, except that the derivations involve the sparsity factor $\rho_{n}$.
###### Theorem 6.1.
Suppose adjacency matrix $A\in\\{0,1\\}^{n\times n}$ is generated from the SBM
with $\theta_{n}^{0}={\rho_{n}}T(Z_{0}P^{0}Z_{0}^{T})$, ${\rm
log}(k_{0})/n\precsim\rho_{n}\precsim 1$, $P^{0}\in S_{k_{0},\delta_{0}}$
for some $k_{0}\precsim\sqrt{n}$ and $\delta_{0}>0$, and the number of zero
and one entries of $T(Z_{0}P^{0}Z_{0}^{T})$ is at most
$O(n^{2}\varepsilon_{n})$ where $\varepsilon_{n}^{2}\asymp\frac{{\rm
log}(k_{0})}{n}$. The prior $\Pi_{n}$ satisfies Assumption 2. Then, for all
sufficiently large $M$,
${{\mathbb{P}}_{0,n}}{\Pi_{n}}\left({\theta:||\theta-\theta_{n}^{0}||_{\infty}\geq
M\varepsilon_{n}}|A\right)\to 0.$
The posterior contraction rate in Theorem 6.1 is independent of the sparsity
level. In contrast, $L_{2}$ minimax rates of error derived in [16, 7] are
proportional to the sparsity level. We conjecture that $L_{\infty}$ minimax
rates of error are also proportional to the sparsity level. It is likely that
the posterior contraction rate in Theorem 6.1 is sub-optimal.
###### Theorem 6.2.
Under the same assumptions of Theorem 6.1 except that the sparsity level
satisfies ${\rm log}(k_{0})/n\precsim\rho^{2}_{n}\precsim 1$, then
${{\mathbb{P}}_{0,n}}\left[{{\Pi_{n}}\left(\\{K=k_{0}\\}\cap\\{Z={Z_{0}}\\}|A\right)}\right]\to
1.$
In the sparse network setting, the diagonal dominance gap also vanishes at the
rate of $\rho_{n}$. Our identification strategy for the number of communities
requires $\rho_{n}\delta_{0}\succsim\varepsilon_{n}\asymp\sqrt{{\rm
log}(k_{0})/n}$ to guarantee consistent community detection. In contrast, some
work in the sparse network literature handles considerably sparser regimes
[see 1 for a recent survey]. The Bayesian model outlined in (3) may need
additional modifications to adapt to networks at various sparsity levels.
## 7 Concluding Remarks
In this paper, we have shown that Bayesian SBM can consistently estimate the
number of communities and the membership assignment. Towards this end, we
propose the diagonally dominant Nowicki-Snijders prior, trading the conjugacy
of the Nowicki-Snijders prior for a simpler and clearer asymptotic analysis.
In the simulation studies, c-SBM has similar finite sample estimation accuracy
to DD-SBM. We conjecture that c-SBM can also consistently estimate the number
of communities and the membership assignment for networks generated from
diagonally dominant SBM. However, the proof technique adopted in this paper
cannot be applied to c-SBM.
The price of losing conjugacy is on the computation side. The posterior
sampler in [8] is much faster than our allocation sampler, as they successfully
adapt the idea of the MFM sampler of [20] to the SBM case. It remains unclear if
the MFM idea can be applied to the non-conjugate case.
## References
* Abbe [2017] Abbe, E. (2017). Community detection and stochastic block models: recent developments. The Journal of Machine Learning Research 18 6446–6531.
* Airoldi et al. [2008] Airoldi, E. M., Blei, D. M., Fienberg, S. E. and Xing, E. P. (2008). Mixed membership stochastic blockmodels. Journal of Machine Learning Research 9 1981–2014.
* Barron, Schervish and Wasserman [1999] Barron, A., Schervish, M. J. and Wasserman, L. (1999). The consistency of posterior distributions in nonparametric problems. The Annals of Statistics 27 536–561.
* Bickel and Sarkar [2016] Bickel, P. J. and Sarkar, P. (2016). Hypothesis testing for automated community detection in networks. Journal of the Royal Statistical Society: Series B (Statistical Methodology) 78 253–273.
* Chen and Lei [2018] Chen, K. and Lei, J. (2018). Network cross-validation for determining the number of communities in network data. Journal of the American Statistical Association 113 241–251.
* Gao, Lu and Zhou [2015] Gao, C., Lu, Y. and Zhou, H. H. (2015). Rate-optimal graphon estimation. The Annals of Statistics 43 2624–2652.
* Gao and Ma [2018] Gao, C. and Ma, Z. (2018). Minimax rates in network analysis: Graphon estimation, community detection and hypothesis testing. arXiv preprint arXiv:1811.06055.
* Geng, Bhattacharya and Pati [2019] Geng, J., Bhattacharya, A. and Pati, D. (2019). Probabilistic community detection with unknown number of communities. Journal of the American Statistical Association 114 893–905.
* Ghosal, Ghosh and Ramamoorthi [1999] Ghosal, S., Ghosh, J. K. and Ramamoorthi, R. V. (1999). Posterior consistency of Dirichlet mixtures in density estimation. The Annals of Statistics 27 143–158.
* Ghosal, Ghosh and van der Vaart [2000] Ghosal, S., Ghosh, J. K. and van der Vaart, A. W. (2000). Convergence rates of posterior distributions. The Annals of Statistics 28 500–531.
* Ghosal and van der Vaart [2007] Ghosal, S. and van der Vaart, A. (2007). Convergence rates of posterior distributions for noniid observations. The Annals of Statistics 35 192–223.
* Ghosal and van der Vaart [2017] Ghosal, S. and van der Vaart, A. (2017). Fundamentals of Nonparametric Bayesian Inference 44. Cambridge University Press.
* Ghosh, Pati and Bhattacharya [2019] Ghosh, P., Pati, D. and Bhattacharya, A. (2019). Posterior contraction rates for stochastic block models. Sankhya A 1–29.
* Hubert and Arabie [1985] Hubert, L. and Arabie, P. (1985). Comparing partitions. Journal of Classification 2 193–218.
* Karrer and Newman [2011] Karrer, B. and Newman, M. E. J. (2011). Stochastic blockmodels and community structure in networks. Physical Review E 83 016107.
* Klopp, Tsybakov and Verzelen [2017] Klopp, O., Tsybakov, A. B. and Verzelen, N. (2017). Oracle inequalities for network models and sparse graphon estimation. The Annals of Statistics 45 316–354.
* Lei [2016] Lei, J. (2016). A goodness-of-fit test for stochastic block models. The Annals of Statistics 44 401–424.
* Li, Levina and Zhu [2020] Li, T., Levina, E. and Zhu, J. (2020). Network cross-validation by edge sampling. Biometrika 107 257–276.
* McDaid et al. [2013] McDaid, A. F., Murphy, T. B., Friel, N. and Hurley, N. J. (2013). Improved Bayesian inference for the stochastic block model with application to large networks. Computational Statistics & Data Analysis 60 12–31.
* Miller and Harrison [2018] Miller, J. W. and Harrison, M. T. (2018). Mixture models with a prior on the number of components. Journal of the American Statistical Association 113 340–356.
* Nobile and Fearnside [2007] Nobile, A. and Fearnside, A. T. (2007). Bayesian finite mixtures with an unknown number of components: The allocation sampler. Statistics and Computing 17 147–162.
* Nowicki and Snijders [2001] Nowicki, K. and Snijders, T. A. B. (2001). Estimation and prediction for stochastic blockstructures. Journal of the American Statistical Association 96 1077–1087.
* Peixoto [2014] Peixoto, T. P. (2014). Hierarchical block structures and high-resolution model selection in large networks. Physical Review X 4 011047.
* Rand [1971] Rand, W. M. (1971). Objective criteria for the evaluation of clustering methods. Journal of the American Statistical Association 66 846–850.
* Saldana, Yu and Feng [2017] Saldana, D. F., Yu, Y. and Feng, Y. (2017). How many communities are there? Journal of Computational and Graphical Statistics 26 171–181.
* Schwartz [1965] Schwartz, L. (1965). On Bayes procedures. Probability Theory and Related Fields 4 10–26.
* van der Pas and van der Vaart [2018] van der Pas, S. L. and van der Vaart, A. W. (2018). Bayesian community detection. Bayesian Analysis 13 767–796.
* Wang and Bickel [2017] Wang, Y. X. R. and Bickel, P. J. (2017). Likelihood-based model selection for stochastic block models. The Annals of Statistics 45 500–528.
* Zhao, Levina and Zhu [2011] Zhao, Y., Levina, E. and Zhu, J. (2011). Community extraction for social networks. Proceedings of the National Academy of Sciences 108 7321–7326.
Supplement to “Consistent Bayesian Community Detection”
The supplement contains complete proofs of Lemmas 3.1, 3.6, 3.7, 3.8 and 3.9,
details of the posterior sampler, and complete simulation results for all
configurations.
## 8 Proofs
#### Proof of Lemma 3.1
###### Proof.
Suppose $\theta\in\Theta_{k,\delta}$; it suffices to show
$\theta\notin\Theta_{k^{\prime},\delta^{\prime}}$ for all $k^{\prime}<k$ and
$\delta^{\prime}\geq 0$. We prove the statement by contradiction.
If $\theta\in\Theta_{k^{\prime},\delta^{\prime}}$ for some $k^{\prime}<k$ and
$\delta^{\prime}\geq 0$, then nodes from distinct communities implied by
$\theta$ must be merged. But by the construction of $\Theta_{k,\delta}$,
between-community connectivity probabilities of $\theta$ are strictly less than
the corresponding within-community connectivity probabilities. Therefore, once
merged, the connectivity probabilities within the merged block are not
identical, which contradicts the block structure.
∎
#### Proof of Lemma 3.6
###### Proof.
The Hellinger distance between two Bernoulli random variables satisfies
$\begin{array}[]{lll}{H^{2}}\left({\mathbb{P}}_{\theta_{ij}^{0}},{\mathbb{P}}_{\theta_{ij}^{1}}\right)&=&\frac{1}{2}\left[\left(\sqrt{\theta_{ij}^{0}}-\sqrt{\theta_{ij}^{1}}\right)^{2}+\left(\sqrt{1-\theta_{ij}^{0}}-\sqrt{1-\theta_{ij}^{1}}\right)^{2}\right]\\
&=&\frac{1}{2}\left[\left(\frac{|\theta_{ij}^{0}-\theta_{ij}^{1}|}{\sqrt{\theta_{ij}^{0}}+\sqrt{\theta_{ij}^{1}}}\right)^{2}+\left(\frac{|\theta_{ij}^{0}-\theta_{ij}^{1}|}{\sqrt{1-\theta_{ij}^{0}}+\sqrt{1-\theta_{ij}^{1}}}\right)^{2}\right]\\
&\geq&\frac{1}{4}\left(\theta_{ij}^{0}-\theta_{ij}^{1}\right)^{2},\end{array}$
since $\theta_{ij}^{0}$ and $\theta_{ij}^{1}$ are in $[0,1]$, so
$\sqrt{\theta_{ij}^{0}}+\sqrt{\theta_{ij}^{1}}\leq 2$ and
$\sqrt{1-\theta_{ij}^{0}}+\sqrt{1-\theta_{ij}^{1}}\leq 2$.
By independence, ${P_{\theta}}={\otimes_{i<j}}{P_{{\theta_{ij}}}}$. Then, the
Hellinger distance between ${\mathbb{P}}_{\theta^{0}}$ and
${\mathbb{P}}_{\theta^{1}}$ satisfies
$\begin{array}[]{lll}{H^{2}}\left({{P_{{\theta^{0}}}},{P_{{\theta^{1}}}}}\right)&=&2-2\prod\nolimits_{i<j}{\left({1-\frac{1}{2}{H^{2}}\left({{P_{\theta_{ij}^{0}}},{P_{\theta_{ij}^{1}}}}\right)}\right)}\\\
&\geq&2-2\prod\nolimits_{i<j}{\left({1-\frac{1}{8}{\left({\theta_{ij}^{0}-\theta_{ij}^{1}}\right)^{2}}}\right)}\\\
&\geq&2-2\mathop{\mathop{\rm
min}\nolimits}\nolimits_{i<j}\left({1-\frac{1}{8}{\left({\theta_{ij}^{0}-\theta_{ij}^{1}}\right)^{2}}}\right)\\\
&=&\frac{1}{4}\mathop{\mathop{\rm
max}\nolimits}\nolimits_{i<j}\left({\theta_{ij}^{0}-\theta_{ij}^{1}}\right)^{2}\\\
&=&\frac{1}{4}||{\theta^{0}}-{\theta^{1}}||_{\infty}^{2}.\end{array}$
∎
#### Proof of Lemma 3.7
###### Proof.
Note
${\Theta_{k,\delta_{n}}}={\cup_{Z\in{{\mathcal{Z}}_{n,k}}}}\Theta_{k,\delta_{n}}^{Z}$,
where
$\Theta_{k,\delta_{n}}^{Z}=\left\\{T(ZPZ^{T}):P\in{S_{k,\delta_{n}}}\right\\}$
denotes the $Z$ slice of the parameter space.
By Lemma 3.3 and the assumption on $\delta_{n}$ and $\varepsilon_{n}$, sup-norm
balls in the node-wise connectivity probability matrix space simplify via
$\left\{\theta:||\theta-\theta^{0}||_{\infty}<\varepsilon_{n}\right\}=\left\{T(Z_{0}PZ_{0}^{T}):||P-P^{0}||_{\infty}<\varepsilon_{n}\right\}$.
This relation implies the covering number
$N\left(\varepsilon_{n},\Theta_{k,\delta_{n}}^{Z},||\cdot||_{\infty}\right)\leq\left(1/\varepsilon_{n}\right)^{k\left(k+1\right)/2}$,
and a union bound over $Z$ then gives
$N\left(\varepsilon_{n},\Theta_{k,\delta_{n}},||\cdot||_{\infty}\right)\leq k^{n}\left(1/\varepsilon_{n}\right)^{k\left(k+1\right)/2}$.
By Lemma 3.1, the $\Theta_{k,\delta}$ are non-overlapping for different $k$;
another union bound over $k$ then implies the statement (9).
∎
#### Proof of Lemma 3.8
###### Proof.
First recall some basic expansions from calculus. For $x_{0}\in(0,1)$, define
$f\left(x\right)=-x_{0}{\rm log}\frac{x}{x_{0}}-\left(1-x_{0}\right){\rm
log}\frac{1-x}{1-x_{0}}$ for $x\in[0,1]$. Taylor expanding $f(x)$ around
$x_{0}$:
$\begin{array}[]{lll}f\left(x\right)&=&f\left(x_{0}\right)+f^{\prime}\left(x_{0}\right)\left(x-x_{0}\right)+\frac{1}{2}f^{\prime\prime}\left(x_{0}\right)\left(x-x_{0}\right)^{2}+O\left(|x-x_{0}|^{3}\right)\\
&=&\frac{1}{2x_{0}\left(1-x_{0}\right)}\left(x-x_{0}\right)^{2}+O\left(|x-x_{0}|^{3}\right).\end{array}$
For $x_{0}=0$, the above $f(x)=-{\rm log}(1-x)$ with the convention $0{\rm
log}0=0$. Its Taylor expansion around $0$ is $f\left(x\right)=-{\rm
log}(1-x)=x+O\left({{x^{2}}}\right)$. For $x_{0}=1$, the above $f(x)=-{\rm
log}(x)$ also with the convention $0{\rm log}0=0$. Its Taylor expansion around
$1$ is $f\left(x\right)=-{\rm
log}(x)=1-x+O\left({{{\left({1-x}\right)}^{2}}}\right)$.
With $||\theta-\theta^{0}||_{\infty}\leq\varepsilon_{n}$ and the assumption on
$\theta^{0}$, expand KL divergence at $\theta^{0}$,
$\begin{array}[]{lll}KL\left({{{\mathbb{P}}_{{\theta^{0}}}},{{\mathbb{P}}_{\theta}}}\right)&=&-\sum\nolimits_{i<j:\theta_{ij}^{0}>0}{\theta_{ij}^{0}{\rm
log}\frac{{{\theta_{ij}}}}{{\theta_{ij}^{0}}}}-\sum\nolimits_{i<j:\theta_{ij}^{0}<1}{\left({1-\theta_{ij}^{0}}\right){\rm
log}\frac{{1-{\theta_{ij}}}}{{1-\theta_{ij}^{0}}}}\\\
&\leq&\left({{N_{0}}+{N_{1}}}\right)\left({{\varepsilon_{n}}+O\left({\varepsilon_{n}^{2}}\right)}\right)+\frac{{n\left({n-1}\right)}}{2}C_{0}^{-1}\left({\varepsilon_{n}^{2}+O\left({|{\varepsilon_{n}}{|^{3}}}\right)}\right)\\\
&\precsim&{n^{2}}\varepsilon_{n}^{2}/{C_{0}}\end{array}$
where ${N_{0}}=\\#\left\\{{\left({i,j}\right):\theta_{ij}^{0}=0,i<j}\right\\}$
denotes the number of zero entries in $\theta^{0}$, and
${N_{1}}=\\#\left\\{{\left({i,j}\right):\theta_{ij}^{0}=1,i<j}\right\\}$
denotes the number of one entries in $\theta^{0}$.
To bound $V_{2,0}$, note the Taylor expansion of $f\left(x\right)={\rm
log}\frac{x}{{1-x}}$ around $x_{0}\in(0,1)$ satisfies $f\left(x\right)={\rm
log}\frac{x}{{1-x}}={\rm
log}\frac{{{x_{0}}}}{{1-{x_{0}}}}+\frac{1}{{{x_{0}}\left({1-{x_{0}}}\right)}}\left({x-{x_{0}}}\right)+O\left({{{\left({x-{x_{0}}}\right)}^{2}}}\right)$.
By independence of different entries and with
$||\theta-\theta^{0}||_{\infty}\leq\varepsilon_{n}$, KL variation can be
bounded similarly by an expansion of $f(x)={\rm log}(x/(1-x))$:
$\begin{array}[]{lll}{V_{2,0}}\left({{{\mathbb{P}}_{{\theta^{0}}}},{{\mathbb{P}}_{\theta}}}\right)&=&{{\mathbb{P}}_{0}}\left\\{{{{\left[{\sum\limits_{i<j}{\left({{A_{ij}}{\rm
log}\frac{{{\theta_{ij}}}}{{\theta_{ij}^{0}}}+\left({1-{A_{ij}}}\right){\rm
log}\frac{{1-{\theta_{ij}}}}{{1-\theta_{ij}^{0}}}}\right)}+KL\left({{{\mathbb{P}}_{{\theta^{0}}}},{{\mathbb{P}}_{\theta}}}\right)}\right]}^{2}}}\right\\}\\\
&=&\sum\limits_{i<j}{{{\mathbb{P}}_{0}}\left\\{{{{\left[{\left({{A_{ij}}{\rm
log}\frac{{{\theta_{ij}}}}{{\theta_{ij}^{0}}}+\left({1-{A_{ij}}}\right){\rm
log}\frac{{1-{\theta_{ij}}}}{{1-\theta_{ij}^{0}}}}\right)+KL\left({{{\mathbb{P}}_{\theta_{ij}^{0}}},{{\mathbb{P}}_{{\theta_{ij}}}}}\right)}\right]}^{2}}}\right\\}}\\\
&=&\sum\nolimits_{i<j}{\theta_{ij}^{0}\left({1-\theta_{ij}^{0}}\right){{\left({{\rm
log}\frac{{{\theta_{ij}}}}{{1-{\theta_{ij}}}}-{\rm
log}\frac{{\theta_{ij}^{0}}}{{1-\theta_{ij}^{0}}}}\right)}^{2}}}\\\
&\precsim&\sum\nolimits_{i<j}\frac{1}{\theta_{ij}^{0}\left({1-\theta_{ij}^{0}}\right)}\varepsilon_{n}^{2}\\\
&\precsim&{n^{2}}\varepsilon_{n}^{2}/{C_{0}}\end{array}$
∎
#### Proof of Lemma 3.9
###### Proof.
By the dependence assumptions made in Assumption 2, the prior mass has the
factorization
${\Pi_{n}}\left(P:||P-P^{0}||_{\infty}<C_{0}\varepsilon_{n}|K=k_{0}\right){\Pi_{n}}\left(Z=Z_{0}|K=k_{0}\right){\Pi_{n}}\left(K=k_{0}\right).$
(13)
Next, we bound the components of (13) in turn.
To bound the first component of (13), the conditional independence of the
off-diagonal entries of $P$ given the diagonal entries suggests the
factorization
$\begin{array}[]{ll}&{\Pi_{n}}\left(P:||P-P^{0}||_{\infty}<C_{0}\varepsilon_{n}|K=k_{0}\right)\\
=&{\Pi_{n}}\left(\bigcap_{1\leq a\leq b\leq k_{0}}E_{n,ab}|K=k_{0}\right)\\
=&\int_{\prod_{1\leq a\leq k_{0}}E_{n,aa}}\left[\prod\nolimits_{1\leq a<b\leq k_{0}}{\Pi_{n}}\left(E_{n,ab}|\{P_{aa}\},K=k_{0}\right)\right]d\Pi_{n}\left(\{P_{aa}\}|K=k_{0}\right),\end{array}$
where $E_{n,ab}=\{P_{ab}:|P_{ab}-P_{ab}^{0}|<C_{0}\varepsilon_{n}\}$. As
$\varepsilon_{n}=o(1)$ and $P^{0}\in S_{k_{0},\delta_{0}}$, the (conditional)
prior density of $P_{ab}$ is positive on $E_{n,ab}$ for all $a,b\in[k_{0}]$.
By Assumption 2 (1), for $a<b\in[k_{0}]$, the prior probability
$\Pi_{n}(E_{n,ab}|\{P_{aa}\},K=k_{0})\geq|E_{n,ab}|\min\nolimits_{P_{ab}\in
E_{n,ab}}\pi_{n}(P_{ab}|\{P_{aa}\},K=k_{0},\delta)\succsim\varepsilon_{n}e^{-C{\rm
log}(n)(P_{aa}\wedge P_{bb})}$ for some universal constant $C$. As $P_{aa}\in
E_{n,aa}$ for $a\in[k_{0}]$, $P_{aa}\wedge P_{bb}\leq(P^{0}_{aa}\wedge
P^{0}_{bb})+C_{0}\varepsilon_{n}\leq||P^{0}||_{\infty}+C_{0}\varepsilon_{n}$,
which gives a bound independent of $\{P_{aa}\}$.
Similarly, Assumption 2 (2) implies
$\Pi_{n}(E_{n,aa}|K=k_{0})\succsim\varepsilon_{n}e^{-C{\rm
log}(n)(P^{0}_{aa}+C_{0}\varepsilon_{n})}$.
Therefore, combining the bounds for $P_{ab}$’s gives
${\Pi_{n}}\left({P:||P-{P^{0}}|{|_{\infty}}<C_{0}{\varepsilon_{n}}}|K=k_{0}\right)\succsim
e^{Ck_{0}^{2}{\rm log}(\varepsilon_{n})-Ck_{0}^{2}{\rm
log}(n)(||P^{0}||_{\infty}+C_{0}\varepsilon_{n})}$
where $k_{0}^{2}$ has the same order as $\frac{1}{2}k_{0}(k_{0}+1)$ and is
used for simpler notation, and the constant $C$ is universal.
As $\varepsilon_{n}^{2}\asymp{\rm log}(k_{0})/n$ and $1\succsim{\rm
log}(k_{0})/n$, ${\rm log}(n)\succsim-{\rm log}(\varepsilon_{n})$. As
$k_{0}\precsim\sqrt{n}$, $k_{0}^{2}{\rm log}(n)\precsim n{\rm log}(k_{0})$.
Then,
${\Pi_{n}}\left({P:||P-{P^{0}}|{|_{\infty}}<C_{0}{\varepsilon_{n}}}|K=k_{0}\right)\succsim
e^{-Cn{\rm log}(k_{0})}$ for some constant $C$ dependent on $P^{0}$.
To bound the second and third components of (13), by Assumption 2 (3) and
(4), there exists a universal constant $C$ such that
${\Pi_{n}}\left({Z={Z_{0}}|K={k_{0}}}\right)\geq e^{-Cn{\rm log}(k_{0})}$ and
${\Pi_{n}}\left({K={k_{0}}}\right)\geq e^{-Cn{\rm log}(k_{0})}$.
Noting that $n^{2}\varepsilon_{n}^{2}\asymp n{\rm log}(k_{0})$, the right-hand side of the inequality (10) can be replaced with $e^{-Cn{\rm log}(k_{0})}$, and (10) holds for some constant $C$ depending on $P^{0}$.
∎
## 9 Posterior Sampler
This section presents details of the Metropolis-Hastings algorithm used to
draw posterior samples from (11). The proposal has two stages: in the first
stage, sample $(Z,K)$; in the second stage, sample $P$ given $(Z,K)$. The
first stage is adapted from the allocation sampler [19].
At the $t$-th iteration, the proposal
$\Pi_{prop}\left(Z^{*},K^{*}|A,Z^{(t)},K^{(t)},P^{(t)}\right)$ applies one of the
four moves MK, GS, M3 and AE, each chosen with probability $\frac{1}{4}$. Given the
proposal $(Z^{*},K^{*})$, sample $P^{*}|(Z^{*},A)$ by independently drawing
each entry of $P^{*}$ from the Beta distribution
$Beta\left({{O_{ab}^{*}}+1,{n_{ab}^{*}}-{O_{ab}^{*}}+1}\right)$. Given the full
proposal $(P^{*},Z^{*},K^{*})$, the acceptance probability of the corresponding
allocation-sampler move is computed as detailed below.
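As an illustration of the second stage (a minimal sketch, not a reference implementation; the function and variable names are ours), the conjugate draw of $P^{*}$ given $(Z^{*},A)$ can be implemented as follows for an undirected network without self-loops:

```python
import numpy as np

def sample_P(A, Z, K, rng):
    # Draw each P*[a, b] from Beta(O_ab + 1, n_ab - O_ab + 1), where
    # O_ab counts edges and n_ab counts node pairs between communities
    # a and b (a <= b) under the membership vector Z.
    P = np.zeros((K, K))
    for a in range(K):
        for b in range(a, K):
            ia, ib = np.where(Z == a)[0], np.where(Z == b)[0]
            if a == b:
                pairs = [(i, j) for i in ia for j in ia if i < j]
            else:
                pairs = [(i, j) for i in ia for j in ib]
            n_ab = len(pairs)
            O_ab = sum(A[i, j] for i, j in pairs)
            P[a, b] = P[b, a] = rng.beta(O_ab + 1, n_ab - O_ab + 1)
    return P

rng = np.random.default_rng(0)
A = (rng.random((20, 20)) < 0.2).astype(int)
A = np.triu(A, 1); A = A + A.T          # symmetric adjacency, no self-loops
Z = rng.integers(0, 2, size=20)         # memberships among K = 2 communities
P_star = sample_P(A, Z, K=2, rng=rng)
```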
### 9.1 MK
MK: choose “add” or “delete” one empty cluster with probability $1/2$. If
“add” move is chosen, randomly pick one community identifier from $[K+1]$ for
the new empty community and rename the others as necessary; if “delete” move
is chosen, randomly pick one community from $[K]$, delete the community if it
is empty and abandon the MK move if it is not empty.
In the step MK, if “add” one empty community is chosen, accept the proposal
with probability $\mathop{\rm
min}\nolimits\left({1,\frac{{{\Pi_{n}}\left({{P^{*}}|{Z^{*}}}\right)}}{{{\Pi_{n}}\left({{P^{\left(t\right)}}|{Z^{\left(t\right)}}}\right)}}\frac{K}{{{K^{*}}}}}\frac{1}{n+K}\right)$;
if “delete” one empty community is chosen, accept the proposal with
probability
$\mathop{\rm
min}\nolimits\left({1,\frac{{{\Pi_{n}}\left({{P^{*}}|{Z^{*}}}\right)}}{{{\Pi_{n}}\left({{P^{\left(t\right)}}|{Z^{\left(t\right)}}}\right)}}\frac{K}{{{K^{*}}}}}{(n+K-1)}\right).$
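The MK acceptance probabilities are straightforward to translate into code. The sketch below is our illustration, with the prior ratio $\Pi_{n}(P^{*}|Z^{*})/\Pi_{n}(P^{(t)}|Z^{(t)})$ passed in as the scalar `P_ratio` (it depends on the prior placed on $P$, so we leave it to the caller):

```python
import numpy as np

def mk_move(Z, K, P_ratio, rng):
    # Add or delete one empty community with probability 1/2 each; the
    # factors K/K* and 1/(n+K) or (n+K-1) are the proposal terms in the
    # acceptance probabilities above.
    n = len(Z)
    if rng.random() < 0.5:                 # "add" an empty community
        K_star = K + 1
        accept = min(1.0, P_ratio * (K / K_star) / (n + K))
    else:                                  # "delete" a random community
        c = rng.integers(0, K)
        if np.any(Z == c):                 # not empty: abandon the move
            return K, 0.0
        K_star = K - 1
        accept = min(1.0, P_ratio * (K / K_star) * (n + K - 1))
    if rng.random() < accept:              # identifier renaming omitted
        K = K_star
    return K, accept
```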
### 9.2 GS
GS: relabel a random node. First randomly pick $i$, then generate $Z^{*}(i)$
according to
$\Pi_{prop}(Z^{*}(i)=k)\propto\beta(Z^{*},A)^{-1}\Pi(Z^{*}|K^{*})$ where
$K^{*}=K^{(t)}$, the prior probability
$\Pi(Z^{*}|K^{*})=\int\Pi(Z^{*}|\alpha,K^{*})\Pi(\alpha|K^{*})d\alpha=\frac{\Gamma(K^{*})}{\Gamma(n+K^{*})}\prod_{1\leq
c\leq K}\Gamma(n_{c}^{*}+1)$ due to multinomial-Dirichlet conjugacy, and
$\beta(Z^{*},A)=\prod_{1\leq a\leq b\leq
K}\frac{\Gamma(n_{ab}^{*}+2)}{\Gamma(O_{ab}^{*}+1)\Gamma(n_{ab}^{*}-O_{ab}^{*}+1)}$
is the coefficient corresponding to the proposal distribution of $P$. Clearly,
$Z^{*}(j)=Z(j)$ for all $j\neq i\in[n]$.
In the step GS, suppose node $i$ is chosen and its original label $c_{1}$ is
relabeled with $c_{2}$, then accept the proposal with probability $\mathop{\rm
min}\nolimits\left(1,\frac{{{\Pi_{n}}\left({{P^{*}}|{Z^{*}}}\right)}}{{{\Pi_{n}}\left({{P^{\left(t\right)}}|{Z^{\left(t\right)}}}\right)}}\right)$.
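Both ingredients of the GS proposal, $\Pi(Z^{*}|K^{*})$ and $\beta(Z^{*},A)$, are products of Gamma functions and are best evaluated on the log scale. A sketch in Python under our naming conventions (using `gammaln` from SciPy):

```python
import numpy as np
from scipy.special import gammaln

def log_prior_Z(Z, K):
    # log Pi(Z|K) = log G(K) - log G(n+K) + sum_c log G(n_c + 1)
    n = len(Z)
    counts = np.bincount(Z, minlength=K)
    return gammaln(K) - gammaln(n + K) + gammaln(counts + 1).sum()

def log_beta_coef(A, Z, K):
    # log beta(Z, A) = sum_{a<=b} [log G(n_ab + 2) - log G(O_ab + 1)
    #                              - log G(n_ab - O_ab + 1)]
    total = 0.0
    for a in range(K):
        for b in range(a, K):
            ia, ib = np.where(Z == a)[0], np.where(Z == b)[0]
            if a == b:
                n_ab = len(ia) * (len(ia) - 1) // 2
                O_ab = A[np.ix_(ia, ia)].sum() // 2
            else:
                n_ab = len(ia) * len(ib)
                O_ab = A[np.ix_(ia, ib)].sum()
            total += (gammaln(n_ab + 2) - gammaln(O_ab + 1)
                      - gammaln(n_ab - O_ab + 1))
    return total

def gs_weights(A, Z, i, K):
    # Pi_prop(Z*(i) = k) proportional to beta(Z*, A)^(-1) * Pi(Z*|K)
    logw = np.empty(K)
    for k in range(K):
        Zk = Z.copy(); Zk[i] = k
        logw[k] = log_prior_Z(Zk, K) - log_beta_coef(A, Zk, K)
    w = np.exp(logw - logw.max())
    return w / w.sum()
```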
### 9.3 M3
M3: randomly pick two communities $c_{1},c_{2}\in[K]$, reassign nodes
$\\{i:Z(i)\in\\{c_{1},c_{2}\\}\\}$ to $\\{c_{1},c_{2}\\}$ sequentially
according to the following scheme. Start with $B_{0}=B_{1}=\emptyset$ and
$A_{0}$ being the sub-network without nodes from community $c_{1}$ and
$c_{2}$, define the assignment $B_{h}=\\{Z^{*}(x_{i})\\}_{i=1}^{h-1}$ with
$x_{i}$ being the node index of the $i^{th}$ element in
$\\{i:Z(i)\in\\{c_{1},c_{2}\\}\\}$, define the sub-network
$A_{h}=A_{h-1}\cup\\{x_{h}\\}$ by appending one more node, define the
assignment $Z_{B_{h}}^{c_{j}}$ for the sub-network $A_{h}$ as the assignment
with the node $x_{h}$ assigned to $c_{j}$, and define the size of communities
in the sub-network $A_{h-1}$ as $\\{n_{h,c}\\}_{c\in[K]}$. For $i\in[n_{c}]$,
assign the $i^{th}$ node of $\\{i:Z(i)\in\\{c_{1},c_{2}\\}\\}$ to $c_{1}$ with
probability $p_{B_{i}}^{c_{1}}$ and to $c_{2}$ with probability
$p_{B_{i}}^{c_{2}}\equiv 1-p_{B_{i}}^{c_{1}}$, where
$\frac{p_{B_{i}}^{c_{1}}}{p_{B_{i}}^{c_{2}}}=\frac{\Pi(A_{i},Z_{B_{i}}^{c_{1}},K,P)}{\Pi(A_{i},Z_{B_{i}}^{c_{2}},K,P)}=\frac{\Pi(A_{i}|P,Z_{B_{i}}^{c_{1}})\Pi(P|Z_{B_{i}}^{c_{1}},K)\Pi(Z_{B_{i}}^{c_{1}}|K)\Pi(K)}{\Pi(A_{i}|P,Z_{B_{i}}^{c_{2}})\Pi(P|Z_{B_{i}}^{c_{2}},K)\Pi(Z_{B_{i}}^{c_{2}}|K)\Pi(K)}=\frac{\Pi(A_{i}|P,Z_{B_{i}}^{c_{1}})(n_{i,c_{1}}+1)}{\Pi(A_{i}|P,Z_{B_{i}}^{c_{2}})(n_{i,c_{2}}+1)}$.
To improve mixing, once $c_{1}$ and $c_{2}$ are drawn, shuffle
$\\{i:Z(i)\in\\{c_{1},c_{2}\\}\\}$ before the sequential reassignment.
Therefore, the ordering of node indices in the sequential reassignment is
random.
In the step M3, suppose community $c_{1}$ and $c_{2}$ are chosen, then accept
the proposal with probability $\mathop{\rm
min}\nolimits\left({1,\frac{{{\Pi_{n}}\left({{P^{*}}|{Z^{*}}}\right)}}{{{\Pi_{n}}\left({{P^{\left(t\right)}}|{Z^{\left(t\right)}}}\right)}}\frac{\prod_{i=1}^{n_{c}}p_{B_{i}}^{Z(i)}}{\prod_{i=1}^{n_{c}}p_{B_{i}}^{Z^{*}(i)}}\frac{{\Gamma\left({n_{{c_{1}}}^{*}+1}\right)\Gamma\left({n_{{c_{2}}}^{*}+1}\right)}}{{\Gamma\left({n_{{c_{1}}}^{(t)}+1}\right)\Gamma\left({n_{{c_{2}}}^{(t)}+1}\right)}}\frac{{\beta\left({{Z^{\left(t\right)}},A}\right)}}{{\beta\left({{Z^{*}},A}\right)}}}\right)$,
where $n_{c}=n_{c_{1}}+n_{c_{2}}$.
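Since the two candidate assignments for the node $x_{h}$ differ only in the label given to $x_{h}$, every likelihood factor not involving $x_{h}$ cancels in the ratio above, so only edges between $x_{h}$ and the already-placed nodes enter. A sketch of one step of the scan (our simplification and naming; `placed` maps the nodes already in the sub-network to their labels and `counts[c]` is the current community size $n_{h,c}$):

```python
import numpy as np

def prob_assign_c1(A, placed, x, c1, c2, P, counts):
    # p^{c1}/p^{c2} = [L(c1) (n_{c1}+1)] / [L(c2) (n_{c2}+1)], where
    # L(c) is the Bernoulli likelihood of the edges from x when x
    # carries label c; computed on the log scale for stability.
    def log_lik(c):
        lp = 0.0
        for j, zj in placed.items():
            p = P[c, zj]
            lp += np.log(p) if A[x, j] else np.log1p(-p)
        return lp
    log_r = (log_lik(c1) + np.log(counts[c1] + 1)
             - log_lik(c2) - np.log(counts[c2] + 1))
    return 1.0 / (1.0 + np.exp(-log_r))    # p^{c1} = r / (1 + r)
```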
### 9.4 AE
AE: merge two random clusters or split one cluster into two clusters with
probability $1/2$. If “merge” is chosen, randomly merge two clusters $c_{1}$
and $c_{2}$ with $Z^{*}(i)=c_{1}$ for all
$i\in\\{j:Z(j)\in\\{c_{1},c_{2}\\}\\}$ and $Z^{*}(i)=Z(i)$ for all
$i\notin\\{j:Z(j)\in\\{c_{1},c_{2}\\}\\}$. The proposal probability is
$\binom{K}{2}^{-1}$. If “split” is chosen, randomly pick two cluster
identifiers $\\{c_{1},c_{2}\\}$ from $[K+1]$, renaming others’ identifiers as
necessary, and assign the nodes in cluster $c_{1}$ to the cluster $c_{2}$ with
the random probability $p_{c}\sim U(0,1)$. By integrating out $p_{c}$, the
proposal probability is
$\frac{{\Gamma\left({{n_{{c_{1}}}}+1}\right)\Gamma\left({{n_{{c_{2}}}}+1}\right)}}{K(K+1){\Gamma\left({{n_{c}}+2}\right)}}$.
In the step AE, if “merge” two communities is chosen, accept the proposal with
probability $\mathop{\rm
min}\nolimits\left({1,\frac{{{\Pi_{n}}\left({{P^{*}}|{Z^{*}}}\right)}}{{{\Pi_{n}}\left({{P^{\left(t\right)}}|{Z^{\left(t\right)}}}\right)}}\frac{{{K^{\left(t\right)}}}}{{{K^{*}}}}\frac{{\beta\left({{Z^{\left(t\right)}},A}\right)}}{{\beta\left({{Z^{*}},A}\right)}}\frac{K^{*}+n}{{{n_{c_{1}}^{\left(t\right)}+1}}}}\right)$;
if “split” is chosen, accept the proposal with probability $\mathop{\rm
min}\nolimits\left({1,\frac{{{\Pi_{n}}\left({{P^{*}}|{Z^{*}}}\right)}}{{{\Pi_{n}}\left({{P^{\left(t\right)}}|{Z^{\left(t\right)}}}\right)}}\frac{{{K^{\left(t\right)}}}}{{{K^{*}}}}\frac{{\beta\left({{Z^{\left(t\right)}},A}\right)}}{{\beta\left({{Z^{*}},A}\right)}}\frac{n_{c_{1}}^{(t)}+1}{K+n}}\right)$.
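The split proposal probability above, obtained by integrating the $U(0,1)$ split probability $p_{c}$ against the binomial allocation of nodes, is again best computed on the log scale; a small sketch under the same conventions as before:

```python
import numpy as np
from scipy.special import gammaln

def log_split_proposal(n_c1, n_c2, K):
    # log of Gamma(n_c1+1) Gamma(n_c2+1) / (K (K+1) Gamma(n_c1+n_c2+2)),
    # the probability of one particular split outcome in the AE move.
    return (gammaln(n_c1 + 1) + gammaln(n_c2 + 1)
            - gammaln(n_c1 + n_c2 + 2) - np.log(K * (K + 1)))
```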
## 10 Complete simulation results
This section provides complete simulation results. We choose
$(k_{0},n,\rho)\in\\{3,5,7\\}\times\\{50,75\\}\times\\{\frac{1}{2},1\\}$, and
for each $(k_{0},n,\rho)$ configuration, 100 networks are generated from
$SBM(Z_{0},\rho P^{0},n,k_{0})$.
To reduce Monte Carlo error and reach reasonable mixing, the Metropolis-
Hastings algorithm and the allocation sampler each collect $2\times 10^{4}$
posterior draws for each synthetic dataset after discarding the first $10^{4}$
draws as burn-in. Both algorithms are initialized at $K=2$ with a random
membership assignment.
| $k_{0}$ | $n$ | $\rho$ | Method | Case 1 Bias | Case 1 RMSE | Case 2 Bias | Case 2 RMSE | Case 3 Bias | Case 3 RMSE | Case 4 Bias | Case 4 RMSE |
|---|---|---|---|---|---|---|---|---|---|---|---|
| $3$ | $50$ | $\frac{1}{2}$ | DD-SBM | 1.3 | 1.8 | -1.9 | 1.9 | -1.6 | 1.8 | 0.0 | 1.3 |
| | | | c-SBM | -0.5 | 0.8 | -1.9 | 1.9 | -1.8 | 1.9 | -1.0 | 1.0 |
| | | | CLBIC | -0.2 | 0.5 | -1.2 | 1.3 | -1.2 | 1.3 | -1.1 | 1.3 |
| | | | NCV | -0.6 | 0.9 | -2.0 | 2.0 | -2.0 | 2.0 | -2.0 | 2.0 |
| | | $1$ | DD-SBM | 0.1 | 0.3 | -1.9 | 2.0 | 0.1 | 0.3 | -0.6 | 1.0 |
| | | | c-SBM | -0.0 | 0.2 | -1.9 | 1.9 | -0.1 | 0.6 | -0.8 | 0.9 |
| | | | CLBIC | 0.0 | 0.0 | -1.3 | 1.4 | -0.3 | 0.6 | -0.9 | 1.0 |
| | | | NCV | 0.0 | 0.0 | -2.0 | 2.0 | -0.3 | 0.9 | -0.8 | 0.9 |
| $3$ | $75$ | $\frac{1}{2}$ | DD-SBM | 0.5 | 1.0 | -1.9 | 2.0 | -1.1 | 1.6 | -0.6 | 1.1 |
| | | | c-SBM | -0.1 | 0.5 | -1.9 | 2.0 | -1.3 | 1.6 | -1.0 | 1.0 |
| | | | CLBIC | 0.0 | 0.0 | -1.0 | 1.0 | -0.8 | 0.9 | -0.9 | 1.0 |
| | | | NCV | 0.0 | 0.1 | -2.0 | 2.0 | -1.9 | 1.9 | -1.9 | 2.0 |
| | | $1$ | DD-SBM | 0.0 | 0.1 | -1.9 | 1.9 | 0.0 | 0.2 | -0.7 | 0.9 |
| | | | c-SBM | 0.0 | 0.3 | -1.6 | 1.8 | 0.0 | 0.4 | -0.8 | 0.9 |
| | | | CLBIC | 0.0 | 0.0 | -1.0 | 1.0 | 0.0 | 0.0 | -0.9 | 1.0 |
| | | | NCV | 0.0 | 0.0 | -2.0 | 2.0 | 0.0 | 0.0 | -0.9 | 1.0 |
| $5$ | $50$ | $\frac{1}{2}$ | DD-SBM | -2.5 | 3.0 | -3.9 | 3.9 | -3.8 | 3.9 | -2.0 | 2.3 |
| | | | c-SBM | -3.7 | 3.7 | -3.9 | 3.9 | -4.0 | 4.0 | -3.0 | 3.0 |
| | | | CLBIC | -3.1 | 3.1 | -3.4 | 3.4 | -3.3 | 3.3 | -3.4 | 3.5 |
| | | | NCV | -4.0 | 4.0 | -4.0 | 4.0 | -4.0 | 4.0 | -4.0 | 4.0 |
| | | $1$ | DD-SBM | 0.7 | 1.2 | -4.0 | 4.0 | -3.6 | 3.6 | -2.7 | 2.8 |
| | | | c-SBM | -1.0 | 1.4 | -3.9 | 3.9 | -3.7 | 3.8 | -2.9 | 2.9 |
| | | | CLBIC | -1.6 | 1.9 | -3.3 | 3.4 | -3.2 | 3.2 | -2.8 | 2.9 |
| | | | NCV | -1.5 | 2.0 | -4.0 | 4.0 | -4.0 | 4.0 | -3.0 | 3.2 |
| $5$ | $75$ | $\frac{1}{2}$ | DD-SBM | -1.1 | 2.0 | -3.9 | 3.9 | -3.9 | 3.9 | -2.4 | 2.6 |
| | | | c-SBM | -2.5 | 2.7 | -4.0 | 4.0 | -4.0 | 4.0 | -3.0 | 3.0 |
| | | | CLBIC | -2.5 | 2.6 | -3.0 | 3.0 | -3.0 | 3.0 | -2.9 | 2.9 |
| | | | NCV | -3.8 | 3.9 | -4.0 | 4.0 | -4.0 | 4.0 | -3.9 | 3.9 |
| | | $1$ | DD-SBM | 0.0 | 0.5 | -4.0 | 4.0 | -2.0 | 2.3 | -2.8 | 2.9 |
| | | | c-SBM | -0.3 | 0.8 | -3.9 | 4.0 | -2.0 | 2.3 | -2.9 | 2.9 |
| | | | CLBIC | 0.0 | 0.0 | -3.0 | 3.0 | -2.8 | 2.8 | -2.7 | 2.7 |
| | | | NCV | 0.0 | 0.0 | -4.0 | 4.0 | -3.8 | 3.9 | -2.6 | 2.7 |
| $7$ | $50$ | $\frac{1}{2}$ | DD-SBM | -5.5 | 5.6 | -5.9 | 5.9 | -5.9 | 5.9 | -3.9 | 4.1 |
| | | | c-SBM | -5.9 | 5.9 | -5.9 | 5.9 | -6.0 | 6.0 | -5.0 | 5.1 |
| | | | CLBIC | -5.2 | 5.2 | -5.3 | 5.3 | -5.3 | 5.3 | -5.4 | 5.5 |
| | | | NCV | -6.0 | 6.0 | -6.0 | 6.0 | -6.0 | 6.0 | -6.0 | 6.0 |
| | | $1$ | DD-SBM | -3.1 | 3.5 | -6.0 | 6.0 | -6.0 | 6.0 | -4.5 | 4.6 |
| | | | c-SBM | -4.7 | 4.8 | -5.9 | 6.0 | -5.9 | 5.9 | -4.9 | 5.0 |
| | | | CLBIC | -4.8 | 4.9 | -5.3 | 5.3 | -5.3 | 5.3 | -4.8 | 4.9 |
| | | | NCV | -6.0 | 6.0 | -6.0 | 6.0 | -6.0 | 6.0 | -5.5 | 5.6 |
| $7$ | $75$ | $\frac{1}{2}$ | DD-SBM | -4.6 | 4.7 | -6.0 | 6.0 | -5.9 | 6.0 | -4.3 | 4.4 |
| | | | c-SBM | -5.4 | 5.4 | -5.9 | 5.9 | -5.9 | 5.9 | -5.0 | 5.0 |
| | | | CLBIC | -4.8 | 4.9 | -5.0 | 5.0 | -5.0 | 5.0 | -4.8 | 4.9 |
| | | | NCV | -6.0 | 6.0 | -6.0 | 6.0 | -6.0 | 6.0 | -6.0 | 6.0 |
| | | $1$ | DD-SBM | -1.4 | 2.0 | -5.9 | 6.0 | -5.5 | 5.5 | -4.8 | 4.8 |
| | | | c-SBM | -2.3 | 2.6 | -5.9 | 5.9 | -5.3 | 5.4 | -5.0 | 5.0 |
| | | | CLBIC | -3.4 | 3.5 | -5.0 | 5.0 | -5.0 | 5.0 | -4.7 | 4.7 |
| | | | NCV | -3.2 | 3.5 | -6.0 | 6.0 | -6.0 | 6.0 | -4.8 | 4.8 |

Table 3: Bias and RMSE of $\hat{K}$.
# Modeling of fluid flow in a flexible vessel with elastic walls
Vladimir Kozlova, Sergei Nazarovb and German Zavorokhinc
aDepartment of Mathematics, Linköping University,
S–581 83 Linköping, Sweden E-mail<EMAIL_ADDRESS>
b St.Petersburg State University, Universitetsky pr., 28, Peterhof, St.
Petersburg, 198504, Russia, and Department of Mathematics, Linköping
University
c St.Petersburg Department of the Steklov Mathematical Institute, Fontanka,
27, 191023, St.Petersburg, Russia E-mail<EMAIL_ADDRESS>
Abstract. We exploit a two-dimensional model ([8], [7] and [2]) describing the
elastic behavior of the wall of a flexible blood vessel which takes the
interaction with the surrounding muscle tissue and with the 3D fluid flow into
account. We study time-periodic flows in a cylinder with such compound boundary
conditions. The main result is that solutions of this problem do not depend on
the period and are nothing else but the time-independent Poiseuille flow.
Similar solutions of the Stokes equations for a rigid wall (the no-slip
boundary condition) depend on the period, and their profile depends on time.
Keywords and phrases: Blood vessel with elastic walls, dimension reduction
procedure, time-periodic flows, Poiseuille flow.
## 1 Introduction
In any book about the human circulatory system, one can read that the
elasticity of the composite walls of the arteries, together with the muscle
material surrounding the arterial bed, contributes significantly to the
transfer of blood pushed by the heart along the arterial tree. In addition,
hardening of the walls of blood vessels caused by calcification or other
diseases makes it much more difficult to supply blood to peripheral parts of
the system. At the same time, the authors could not find in the medical or
applied mathematical literature an answer to a natural question: how does wall
elasticity support the blood supply mechanism? In numerous publications
modeling the circulatory system, computational or purely theoretical, no
fundamental distinction is made between steady flows of viscous fluid in pipes
with rigid walls and in vessels with elastic walls. Moreover, quite often much
attention is paid, without justification, to the nonlinear convective terms in
the Navier-Stokes equations, although blood, as a multi-component viscoelastic
fluid, should be described by much more involved integro-differential
equations. We also note that, for technical reasons, none of the primary
one-dimensional models of a single artery or of the entire arterial tree
obtained by dimension reduction procedures includes the terms generated by
these nonlinear terms.
In connection with the foregoing, in this paper we consider the linear
non-stationary Stokes equations, which suffice to simulate a heartbeat, and we
study time-periodic solutions in a straight infinite cylinder with an arbitrary
cross-section.
In the case of the Dirichlet (no-slip) condition, according to well-known
results, there are many periodic solutions $({\bf v},p)$, where the velocity
${\bf v}$ has only the component directed along the cylinder (the $z$-axis) and
the pressure $p$ depends linearly on $z$ with coefficients depending only on the
time variable $t$. For elastic walls, there is only one such solution up to a
constant factor, proportional to the steady Poiseuille flow, which does not
depend on the time variable but can be regarded as periodic with any period.
This is precisely the difference in the behaviour of blood flow between elastic
and rigid walls: the former smooth out (with distance from the heart) the blood
flow, which enters the aorta and then the arteries in sharp, clearly defined
jerks (this is how the heart with its valve works), whereas the latter
reproduce the pulsation frequency of the flow along the whole length of the pipe.
Due to the elastic walls of arteries, an increased heart rate only leads to an
increase in the speed of the blood flow (the flux grows) without changing the
structure of the flow as a whole; only at an ultra-high beat rate, when the
wall elasticity is no longer sufficient to compensate for the flow pulsation,
does the body begin to feel the heart beating. The human arterial system is
geometrically very complex: the blood vessels have a gently conical shape, they
are curved, and there is a considerable number of bifurcation nodes. Therefore,
the model problem of an infinite straight cylinder considered here gives only a
basic approximation to the real circulatory system, valid on some of its
elongated fragments, where the periodic disturbances felt at the wrists,
temples, neck and other peripheral points of the circulatory system arise. The
correctness of this view is also confirmed by an everyday experiment with
watering a garden: a piston pump delivers water in shocks, yet with a long soft
hose the water jet at the outlet is steady, while with a short hard hose it
pulsates.
A general two-dimensional model describing the elastic behaviour of the wall
of a flexible vessel has been presented in the case of a straight vessel in
[7], [9], and in the case of a curved vessel in [2]; for numerical results see
[4], [5]. The wall has a laminate structure consisting of several anisotropic
elastic layers of varying thickness and is assumed to be much thinner than the
radius of the channel, which itself is allowed to vary. This two-dimensional
model takes into account the interaction of the wall with the surrounding or
supporting elastic muscle material and with the fluid flow, and is obtained via
a dimension reduction procedure. We study the time-periodic flow in a straight
flexible vessel with elastic walls. In comparison with the Stokes problem for a
vessel with rigid walls, we prove that all time-periodic solutions are
independent of time and reduce to the steady Poiseuille flow (see Theorem 2.2
below).
Compared with the classical works [6] and [17] of J.R. Womersley (for an
alternative account of these works see [16]), our formulation of the problem
has much in common with theirs. In Womersley's works, the axisymmetric
pulsating blood flow in a vessel with a circular isotropic elastic wall is
found as a perturbation of the steady Poiseuille flow. Apart from inessential
generalizations, such as an arbitrary shape of the vessel's cross-section and
an orthotropic wall, the main difference of our paper lies in the coefficient
$K(s)$, which describes the reaction of the surrounding cell material to
deformation of the wall. In other words, the vessel in [6], [17] is assumed to
"hang in air", while in our paper it is placed inside the muscular arterial
bed, as in human and animal bodies, intended to compensate for external and
internal influences. A simple experiment shows that a rubber or plastic hose
tends to wriggle under a pulsating water supply.
## 2 Problem statement
### 2.1 Preliminaries
Let $\Omega$ be a bounded, simple connected domain in the plane
$\mathbb{R}^{2}$ with $C^{1,1}$ boundary $\gamma=\partial\Omega$ and let us
introduce the spatial cylinder
${\mathcal{C}}=\\{x=(y,z)\in\mathbb{R}^{2}\times\mathbb{R}:\
y=(y_{1},y_{2})\in\Omega,\ z\in\mathbb{R}\\}.$ (1)
We assume that the curve $\gamma$ is parameterised as
$(y_{1},y_{2})=(\zeta_{1}(s),\zeta_{2}(s))$, where $s$ is the arc length along
$\gamma$ measured counterclockwise from a certain point and
$\zeta=(\zeta_{1},\zeta_{2})$ is a vector $C^{1,1}$ function. The length of
the contour $\gamma$ is denoted by $|\gamma|$ and its curvature by
$\kappa=\kappa(s)=\zeta_{1}^{\prime\prime}(s)\zeta_{2}^{\prime}(s)-\zeta_{2}^{\prime\prime}(s)\zeta_{1}^{\prime}(s).$
In a neighborhood of $\gamma$, we introduce the natural curvilinear orthogonal
coordinates system $(n,s)$, where $n$ is the oriented distance to $\gamma$
($n>0$ outside $\Omega$).
The boundary of the cylinder ${\mathcal{C}}$ is denoted by $\Gamma$, i.e.
$\Gamma=\\{x=(y,z):\,y\in\gamma,z\in\mathbb{R}\\}.$ (2)
The flow in the vessel is described by the velocity vector ${\bf v}=({\bf
v}_{1},{\bf v}_{2},{\bf v}_{3})$ and by the pressure $p$ which are subject to
the non-stationary Stokes equations:
$\partial_{t}{\bf v}-\nu\Delta{\bf v}+\nabla
p=0\;\;\mbox{and}\;\;\nabla\cdot{\bf v}=0\;\;\mbox{in
${\mathcal{C}}\times\mathbb{R}\ni(x,t)$}.$ (3)
Here $\nu>0$ is the kinematic viscosity related to the dynamic viscosity $\mu$
by $\nu=\mu/\rho_{b}$, where $\rho_{b}>0$ is the density of the fluid.
The elastic properties of the 2D boundary are described by the displacement
vector ${\bf u}$ defined on $\Gamma$ and they are presented in [8], [6] for a
straight cylinder and in [2] for a curvilinear cylinder. If we use the
curvilinear coordinates $(s,z)$ on $\Gamma$ and write the vector ${\bf u}$ in the
basis ${\bf n}$, ${\bm{\tau}}$ and ${\bf z}$, where ${\bf n}$ is the outward
unit normal vector, ${\bm{\tau}}$ is the tangent vector to the curve $\gamma$
and ${\bf z}$ is the direction of $z$ axis, then the balance equation has the
following form:
$\displaystyle
D(\kappa,-\partial_{s},-\partial_{z})^{T}\,Q(s)D(\kappa,\partial_{s},\partial_{z}){\bf
u}$ $\displaystyle+\rho(s)\partial_{t}^{2}{\bf u}+K(s){\bf
u}+\sigma(s){\mathcal{F}}=0\;\;\mbox{in $\Gamma\times\mathbb{R}$},$ (4)
where $\rho(s)$ is the average density of the vessel wall,
$\sigma=\rho_{b}/h$, $h=h(s)>0$ is the thickness of the wall, $A^{T}$ stands
for the transpose of a matrix $A$,
$D(\kappa,\partial_{s},\partial_{z})=D_{0}\partial_{z}+D_{1}(\partial_{s})$,
where
$D_{0}=\left(\begin{array}[]{lll}0&0&0\\\ 0&0&1\\\
0&\frac{1}{\sqrt{2}}&0\end{array}\right),\;D_{1}(\partial_{s})=\left(\begin{array}[]{lll}\kappa&\partial_{s}&0\\\
0&0&0\\\ 0&0&\frac{1}{\sqrt{2}}\partial_{s}\end{array}\right)$ (5)
and $K(s){\bf u}=(k(s){\bf u}_{1},0,0)$. Here $k(s)$ is a scalar function, $Q$
is a $3\times 3$ symmetric positive definite matrix of homogenized elastic
moduli (see [2]) and the displacement vector ${\bf u}$ is written in the
curvilinear coordinates $(s,z)$ in the basis ${\bf n}$, ${\bm{\tau}}$ and
${\bf z}$. Furthermore
${\mathcal{F}}=({\mathcal{F}}_{n},{\mathcal{F}}_{z},{\mathcal{F}}_{s})$ is the
hydrodynamical force given by
${\mathcal{F}}_{n}=-p+2\nu\frac{\partial v_{n}}{\partial
n},\;\;{\mathcal{F}}_{s}=\nu\Big{(}\frac{\partial v_{n}}{\partial
s}+\frac{\partial v_{s}}{\partial n}-\kappa
v_{s}\Big{)},\;\;{\mathcal{F}}_{z}=\nu\Big{(}\frac{\partial v_{n}}{\partial
z}+\frac{\partial v_{z}}{\partial n}\Big{)},$ (6)
where $v_{n}$ and $v_{s}$ are the velocity components in the direction of the
normal ${\bf n}$ and the tangent ${\bm{\tau}}$, respectively, whereas $v_{z}$
is the longitudinal velocity component. The functions $\rho$, $k$ and the
elements of the matrix $Q$ are bounded measurable functions satisfying
$\rho(s)\geq\rho_{0}>0\;\;\mbox{ and}\;\;k(s)\geq k_{0}>0.$ (7)
The elements of the matrix $Q$ are assumed to be Lipschitz continuous and
$\langle Q\xi,\xi\rangle\geq q_{0}|\xi|^{2}$ for all $\xi\in\mathbb{R}^{3}$
with $q_{0}>0$, where $\langle\cdot,\cdot\rangle$ is the cartesian inner
product in $\mathbb{R}^{3}$.
We note that
$D(\kappa,\partial_{s},\partial_{z}){\bf u}^{T}=(\kappa{\bf
u}_{1}+\partial_{s}{\bf u}_{2},\partial_{z}{\bf
u}_{3},\frac{1}{\sqrt{2}}(\partial_{z}{\bf u}_{2}+\partial_{s}{\bf
u}_{3}))^{T}$
and one can easily see that
$\kappa{\bf u}_{1}+\partial_{s}{\bf u}_{2}={\bm{\varepsilon}}_{ss}({\bf u}),\;\;\partial_{z}{\bf u}_{3}={\bm{\varepsilon}}_{zz}({\bf u})\;\;\mbox{and}\;\;\partial_{z}{\bf u}_{2}+\partial_{s}{\bf u}_{3}=2{\bm{\varepsilon}}_{sz}({\bf u})$ (8)
on $\Gamma$. Here ${\bm{\varepsilon}}_{ss}({\bf u})$,
${\bm{\varepsilon}}_{zz}({\bf u})$ and ${\bm{\varepsilon}}_{sz}({\bf u})$ are
components of the deformation tensor in the basis $\\{{\bf n},{\bm{\tau}},{\bf
z}\\}$. In what follows we will write the displacement vector as ${\bf
u}=({\bf u}_{1},{\bf u}_{2},{\bf u}_{3})$, where ${\bf u}_{1}={\bf u}_{n}$,
${\bf u}_{2}={\bf u}_{s}$ and ${\bf u}_{3}={\bf u}_{z}$. For the velocity
${\bf v}$ we will use indexes $1,2$ and $3$ for the components of ${\bf v}$ in
$y_{1}$, $y_{2}$ and $z$ directions respectively.
Furthermore the vector functions ${\bf v}$ and ${\bf u}$ are connected on the
boundary by the relation
${\bf v}=\partial_{t}{\bf u}\;\;\mbox{on $\Gamma\times\mathbb{R}$.}$ (9)
The problem (3)–(9) appears when we deal with a flow in a pipe surrounded by a
thin layered elastic wall which separates the flow from the muscle tissue.
Since we have in mind an application to the blood flow in the circulatory
system, we are interested in time-periodic solutions. One of the goals of this
paper is to describe all periodic solutions to the problem (3), (2.1), (9)
which are bounded in $\mathbb{R}\times{\mathcal{C}}\ni(t,x)$.
It is reasonable to compare properties of solutions to this problem with similar
properties of solutions to the Stokes system (3) supplied with the no-slip
boundary condition
${\bf v}=0\;\;\mbox{on $\Gamma$.}$ (10)
Considering the problem (3), (10) we assume that the boundary $\gamma$ is
Lipschitz only.
The following result about the problem (3), (10) is possibly known, but we
present a concise proof for the reader's convenience.
###### Theorem 2.1.
Let the boundary $\gamma$ be Lipschitz and $\Lambda>0$. There exists
$\delta>0$ such that if $({\bf v},p)$ are $\Lambda$-periodic in time functions
satisfying (3), (10) and admitting a certain exponential growth at infinity,
$\max_{0\leq t\leq\Lambda}\int_{\mathcal{C}}e^{-2\delta|z|}(|\nabla{\bf v}(x,t)|^{2}+|\nabla\partial_{z}{\bf v}|^{2}+|p(x,t)|^{2})dx<\infty,$ (11)
then
$p=zp_{*}(t)+p_{0}(t),\;\;{\bf v}(x,t)=(0,0,{\bf v}_{3}(y,t)),$ (12)
where $p$ and ${\bf v}_{3}$ are $\Lambda$-periodic functions in $t$ which
satisfy the problem
$\displaystyle\partial_{t}{\bf v}_{3}-\nu\Delta_{y}{\bf
v}_{3}+p_{*}(t)=0\;\;\mbox{in $\Omega\times\mathbb{R}$}$ $\displaystyle{\bf
v}=0\;\;\mbox{on $\gamma\times\mathbb{R}$}.$ (13)
Thus the dimension of the space of periodic solutions to the problem (3), (10)
is infinite, and the solutions can be parameterised by a periodic function
$p_{*}$. In the case of an elastic wall the situation is quite different.
###### Theorem 2.2.
Let the boundary $\gamma$ be $C^{1,1}$ and $\Lambda>0$. Let also $({\bf
v},p,{\bf u})$ be a $\Lambda$-periodic with respect to $t$ solution to the
problem (3)–(9) admitting an arbitrary power growth at infinity:
$\displaystyle\max_{0\leq
t\leq\Lambda}\Big{(}\int_{\mathcal{C}}(1+|z|)^{-N}(|{\bf
v}|^{2}+|\nabla_{x}{\bf v}|^{2}+|\nabla_{x}\partial_{z}{\bf
v}|^{2}+|p|^{2})dx$ (14)
$\displaystyle+\int_{\Gamma}(1+|z|)^{-N}(|u|^{2}+\sum_{k=2}^{3}(|\nabla_{sz}u_{k}|^{2}+|\nabla_{sz}\partial_{z}u_{k}|^{2}))dsdz\Big{)}<\infty$
for a certain $N>0$. Then
$p=zp_{0}+p_{1},\;\;{\bf v}_{1}={\bf v}_{2}=0,\;\;{\bf v}_{3}=p_{0}{\bf
v}_{*}(y),$ (15)
where $p_{0}$ and $p_{1}$ are constants and ${\bf v}_{*}$ is the Poiseuille
profile, i.e.
$\nu\Delta_{y}{\bf v}_{*}=1\;\;\;\mbox{in $\Omega$\;\; and}\;\;{\bf
v}_{*}=0\;\;\mbox{on $\gamma$}.$ (16)
The boundary displacement vector ${\bf u}={\bf u}(s,z)$ satisfies the equation
$D(\kappa(s),-\partial_{s},-\partial_{z})^{T}\,Q(s)D(\kappa(s),\partial_{s},\partial_{z}){\bf
u}+K{\bf u}=\sigma(p,0,p_{0}\nu\partial_{n}{\bf v}_{3}|\gamma)^{T}.$ (17)
If the elements $Q_{21}$ and $Q_{31}$ vanish then the function ${\bf u}$ is a
polynomial of second degree in $z$: ${\bf
u}(s,z)=(0,\alpha,\beta)^{T}z^{2}+{\bf u}^{(1)}(s)z+{\bf u}^{(2)}(s)$, where
$\alpha$ and $\beta$ are constants.
Thus, in the case of an elastic wall all periodic solutions are independent of
$t$ and hence are the same for any period. Moreover, inside the cylinder the
flow takes the Poiseuille form. The above theorems place different requirements
on the behavior of solutions with respect to $z$; compare (11) and (14). The
reason is the following. In the case of the Dirichlet boundary condition we can
prove a resolvent estimate on the imaginary axis ($\lambda=i\omega$, $\omega$
real) with exponential weights independent of $\omega$. In the case of the
elastic boundary condition the exponential weights depend on $\omega$. Because
of this we cannot use in (14) the same exponential weight as in (11).
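For orientation (our own illustration, not a statement from the references): when $\Omega$ is the disk of radius $R$ centered at the origin, the profile (16) can be written explicitly,
$v_{*}(y)=\frac{|y|^{2}-R^{2}}{4\nu},\qquad\nu\Delta_{y}v_{*}=1\;\;\mbox{in $\Omega$},\;\;v_{*}=0\;\;\mbox{on $\gamma$},$
so that, by (15), the axial velocity ${\bf v}_{3}=p_{0}v_{*}$ is the classical parabolic Poiseuille profile.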
The structure of our paper is the following. In Sect.3 we treat the Stokes
system with the no-slip condition on the boundary of the cylinder. Since we are
dealing with time-periodic solutions, the problem can be reduced to a series of
time-independent problems with a parameter (the frequency). The main result
there is Theorem 3.1. Using this assertion, it is quite straightforward to
prove Theorem 2.1 for the Dirichlet problem. The parameter-dependent problems
are studied in Sect.3.2-3.4, and Theorem 3.1 is proved in Sect.3.5.
The Stokes problem in a vessel with elastic walls is considered in Sect.4. We
again reduce the time-periodic problem to a series of time-independent problems
depending on a parameter. The main result there is Theorem 4.1; using it, we
prove our main Theorem 2.2 for the case of an elastic wall in Sect.4.1. The
parameter-dependent problem is studied in Sect.4.2-4.6, and the proof of
Theorem 4.1 is given in Sect.4.7. In Sect.4.8 we consider the case when the
parameter in the elastic wall problem vanishes; this consideration completes
the proof of Theorem 2.2.
## 3 Dirichlet problem for the Stokes system
The first step in the proof of Theorem 2.1 is the following reduction of the
time-dependent problem to a time-independent one. Due to the
$\Lambda$-periodicity of our solution we can represent it in the form
${\bf v}(x,t)=\sum_{k=-\infty}^{\infty}{\bf V}_{k}(x)e^{2\pi
kit/\Lambda},\;\;p(x,t)=\sum_{k=-\infty}^{\infty}{\bf P}_{k}(x)e^{2\pi
kit/\Lambda},$ (18)
where
${\bf V}_{k}(x)=\frac{1}{\Lambda}\int_{0}^{\Lambda}{\bf v}(x,t)e^{-2\pi
kit/\Lambda}dt,\;\;P_{k}(x)=\frac{1}{\Lambda}\int_{0}^{\Lambda}p(x,t)e^{-2\pi
kit/\Lambda}dt.$ (19)
These coefficients satisfy the following time independent problem
$i\omega{\bf V}-\nu\Delta{\bf V}+\nabla P={\bf
F}\;\;\mbox{and}\;\;\nabla\cdot{\bf V}=0\;\;\mbox{in ${\mathcal{C}}$},$ (20)
with the Dirichlet boundary condition
${\bf V}=0\;\;\mbox{on $\Gamma$}$ (21)
and with $\omega=2\pi k/\Lambda$ and ${\bf F}=({\bf F}_{1},{\bf F}_{2},{\bf
F}_{3})=0$ (for further analysis it is convenient to have an arbitrary ${\bf
F}$).
###### Theorem 3.1.
There exists a positive number $\beta^{*}$ depending only on $\Omega$ and $\nu$
such that for $\beta\in(0,\beta^{*})$ the only solution to problem (20), (21)
with ${\bf F}=0$ admitting a certain exponential growth,
$\int_{\mathcal{C}}e^{-2\beta|z|}\big{(}|\nabla{\bf
V}|^{2}+|\nabla\partial_{z}{\bf V}|^{2}+|P|^{2}\big{)}dydz<\infty$ (22)
is
${\bf V}(x)=p_{0}(0,0,{\hat{v}}(y))\;\;\mbox{and}\;\;P(x)=p_{0}z+p_{1},$ (23)
where $p_{0}$ and $p_{1}$ are constants and ${\hat{v}}$ satisfies
$i\omega\hat{v}-\nu\Delta\hat{v}+1=0\;\;\mbox{in
$\Omega$},\;\;\hat{v}=0\;\;\mbox{on $\gamma$.}$ (24)
###### Remark 3.1.
From (24) it follows (multiply (24) by $\overline{\hat{v}}$, integrate by parts over $\Omega$ and take the complex conjugate) that
$\int_{\Omega}\hat{v}(y)dy=i\omega\int_{\Omega}|\hat{v}|^{2}dy-\nu\int_{\Omega}|\nabla\hat{v}|^{2}dy,$
i.e. the flux does not vanish for this solution.
We postpone the proof of the above theorem to Sect.3.5; in the next section we
present the proof of Theorem 2.1.
### 3.1 Proof of Theorem 2.1
By (11),
$\int_{\mathcal{C}}e^{-2\delta|z|}(|\nabla{\bf V}_{k}(x)|^{2}+|{\bf P}_{k}(x)|^{2})dx<\infty.$ (25)
Applying Theorem 3.1 and assuming $\delta<\beta^{*}$, we get that
$V_{k}=(0,0,\hat{v}_{k}(y)),\;\;P_{k}=zp_{0k}+p_{1k},$
where $p_{0k}$ and $p_{1k}$ are constants. This implies that ${\bf v}_{1}={\bf
v}_{2}=0$, ${\bf v}_{3}$ depends only on $y,t$ and $p=p_{0}(t)z+p_{1}(t)$,
which proves the required assertion.
### 3.2 System for coefficients (20), (21)
To describe the main solvability result for the problem (20), (21), let us
introduce some function spaces. For $\beta\in\mathbb{R}$ we denote by
$L^{2}_{\beta}({\mathcal{C}})$ the space of functions on ${\mathcal{C}}$ with
the finite norm
$||u;L^{2}_{\beta}({\mathcal{C}})||=\Big{(}\int_{{\mathcal{C}}}e^{2\beta
z}|u(x)|^{2}dx\Big{)}^{1/2}.$
By $W^{1,2}_{\beta}({\mathcal{C}})$ we denote the space of functions in
${\mathcal{C}}$ with the finite norm
$||v;W^{1,2}_{\beta}({\mathcal{C}})||=\Big{(}\int_{\mathcal{C}}e^{2\beta z}(|\nabla_{x}v|^{2}+|v|^{2})dx\Big{)}^{1/2}.$
We will use the same notation for spaces of vector functions.
###### Proposition 3.1.
Let the boundary $\gamma$ be Lipschitz and $\omega\in\mathbb{R}$. There exists
$\beta^{*}>0$ independent of $\omega$ such that the following assertions are
valid:
(i) for any $\beta\in(-\beta^{*},\beta^{*})$, $\beta\neq 0$ and ${\bf F}\in
L^{2}_{\beta}({\mathcal{C}})$, the problem (20), (21) has a unique solution
$({\bf V},{\bf P})$ in $W_{\beta}^{1,2}({\mathcal{C}})^{3}\times
L^{2}_{\beta}({\mathcal{C}})$ satisfying the estimate
$||{\bf V};W^{1,2}_{\beta}({\mathcal{C}})||+||{\bf P};L^{2}_{\beta}({\mathcal{C}})||\leq C||{\bf F};L^{2}_{\beta}({\mathcal{C}})||.$ (26)
where $C$ may depend on $\beta$, $\nu$ and $\Omega$. Moreover,
$||\partial_{z}{\bf V};W^{1,2}_{\beta}({\mathcal{C}})||\leq C||{\bf
F};L^{2}_{\beta}({\mathcal{C}})||.$ (27)
(ii) Let $\beta\in(0,\beta^{*})$ and ${\bf F}\in
L^{2}_{\beta}({\mathcal{C}})\cap L^{2}_{-\beta}({\mathcal{C}})$. Then
solutions $({\bf V}_{\pm},{\bf P}_{\pm})\in
W_{\pm\beta}^{1,2}({\mathcal{C}})^{3}\times L^{2}_{\pm\beta}({\mathcal{C}})$
to (20), (21) are connected by
${\bf V}_{-}(x)={\bf V}_{+}(x)+p_{0}(0,0,\hat{v}(y)),\;\;{\bf P}_{-}(x)={\bf
P}_{+}(x)+p_{0}z+p_{1},$ (28)
with certain constants $p_{0}$ and $p_{1}$. Here $\hat{v}$ is solution to
(24).
###### Remark 3.2.
If $\gamma$ is $C^{1,1}$ then it follows from [15] that the left-hand side in
(26) can be replaced by
$||{\bf V};W^{2,2}_{\beta}({\mathcal{C}})||+||\nabla{\bf P};L^{2}_{\beta}({\mathcal{C}})||.$
### 3.3 Operator pencil, weak formulation
We will use the spaces of complex valued functions $H^{1}_{0}(\Omega)$,
$L^{2}(\Omega)$ and $H^{-1}(\Omega)$ and the corresponding norms are denoted
by $||\cdot||_{1}$, $||\cdot||_{0}$ and $||\cdot||_{-1}$ respectively.
Let us introduce an operator pencil by
${\mathcal{S}}(\lambda)\begin{pmatrix}v\\\ p\end{pmatrix}=\begin{pmatrix}\mu
v_{1}-\nu\Delta_{y}v_{1}+\partial_{y_{1}}p\\\ \mu
v_{2}-\nu\Delta_{y}v_{2}+\partial_{y_{2}}p\\\ \mu
v_{3}-\nu\Delta_{y}v_{3}+\lambda p\\\
\partial_{y_{1}}v_{1}+\partial_{y_{2}}v_{2}+\lambda
v_{3}\end{pmatrix},\;\;\mu=i\omega-\nu\lambda^{2},$ (29)
where $v=(v_{1},v_{2},v_{3})$ is a vector function and $p$ is a scalar
function in $\Omega$. This pencil is defined for vectors $(v,p)$ such that
$v=0$ on $\gamma$.
Clearly
${\mathcal{S}}(\lambda)\;:\;H^{1}_{0}(\Omega)^{3}\times
L^{2}(\Omega)\rightarrow(H^{-1}(\Omega))^{3}\times L^{2}(\Omega)$ (30)
is a bounded operator for all $\lambda\in\mathbb{C}$. The following problem is
associated with this operator
$\displaystyle\mu v_{1}-\nu\Delta_{y}v_{1}+\partial_{y_{1}}p=f_{1},$
$\displaystyle\mu v_{2}-\nu\Delta_{y}v_{2}+\partial_{y_{2}}p=f_{2},$
$\displaystyle\mu v_{3}-\nu\Delta_{y}v_{3}+\lambda p=f_{3}\;\;\mbox{in
$\Omega$}$ (31)
and
$\partial_{y_{1}}v_{1}+\partial_{y_{2}}v_{2}+\lambda v_{3}=h\;\;\mbox{in
$\Omega$}$ (32)
supplied with the Dirichlet condition
$v=0\;\;\mbox{on $\partial\Omega$.}$ (33)
The corresponding sesquilinear form is given by
$\displaystyle\mathbb{A}(v,p;\hat{v},\hat{p};\lambda)=\int_{\Omega}\sum_{j=1}^{3}(\mu
v_{j}\overline{\hat{v}_{j}}+\nu\nabla_{y}v_{j}\cdot\nabla_{y}\overline{\hat{v}_{j}})dy$
$\displaystyle-\int_{\Omega}p\overline{(\nabla_{y}\cdot\hat{v^{\prime}}-\bar{\lambda}\hat{v}_{3})}dy+\int_{\Omega}(\nabla_{y}\cdot
v^{\prime}+\lambda v_{3})\overline{\hat{p}}dy$
where $v^{\prime}=(v_{1},v_{2})$. This form is well-defined on
$H_{0}^{1}(\Omega)^{3}\times L^{2}(\Omega)$.
The weak formulation of (3.3)–(33) can be written as
$\mathbb{A}(v,p;\hat{v},\hat{p};\lambda)=\int_{\Omega}(f\cdot\overline{\hat{v}}+h\overline{\hat{p}})dy$
(34)
for all $(\hat{v},\hat{p})\in H_{0}^{1}(\Omega)^{3}\times L^{2}(\Omega)$. As
was shown in the proof of Lemma 3.2(ii) in [14], the operator
${\mathcal{S}}(\lambda)$ is an isomorphism for $\lambda=i\xi$,
$\xi\in\mathbb{R}\setminus\\{0\\}$. Since the operator corresponding to the
difference of the forms $\mathbb{A}$ for different $\lambda$ is compact, the
operator pencil ${\mathcal{S}}(\lambda)$ is Fredholm for all
$\lambda\in\mathbb{C}$ and its spectrum consists of isolated eigenvalues of
finite algebraic multiplicity, see [3].
### 3.4 Operator pencil near the imaginary axis $\Re\lambda=0$
Here we assume that the right-hand sides in (3.3)–(33) satisfy $f\in
L^{2}(\Omega)$ and $h\in L^{2}(\Omega)$.
The next assertion is proved in Lemma 3.2(i) [14], after a straightforward
modification.
###### Lemma 3.1.
Let $h\in L^{2}(\Omega)$ and $\lambda\in\mathbb{C}$, $\lambda\neq 0$. Then the
equation
$\partial_{y_{1}}w_{1}+\partial_{y_{2}}w_{2}+\lambda w_{3}=h\;\;\mbox{in
$\Omega$}$ (35)
has a solution $w\in H_{0}^{1}(\Omega)$ satisfying the estimate
$\sum_{j=1}^{2}||w_{j}||_{1}\leq C(||h||_{0}+|\alpha|),\;\;||w_{3}||_{1}\leq
C\frac{|\alpha|}{|\lambda|}$
where
$\alpha=\int_{\Omega}hdy$
and $C$ depends only on $\Omega$. The mapping $h\mapsto w$ can be chosen
linear.
The proof of the next lemma can be extracted from the proof of Lemma 3.2(ii)
[14].
###### Lemma 3.2.
Let $f\in L_{2}(\Omega)$, $h=0$ in (32) and let $\lambda=i\xi$,
$0\neq\xi\in\mathbb{R}$. Then the solution to (3.3)–(33) admits the estimate
$(1+|\omega|+|\xi|^{2})||v||_{0}+(1+|\omega|+|\xi|^{2})^{1/2}||\nabla
v||_{0}\leq C||f||_{0},$ (36)
and
$||p||_{0}\leq C\frac{1+|\lambda|}{|\lambda|}||f||_{0},$ (37)
where the constant $C$ depends only on $\nu$ and $\Omega$.
###### Proof.
To estimate the norm of $v$ by the right-hand side in (36), we take
$\hat{v}=v$ in (34) and obtain
$\int_{\Omega}(\mu|v|^{2}+\nu|\nabla_{y}v|^{2})dy=\int_{\Omega}f\cdot\bar{v}dy.$
This implies the inequality (36). To estimate the norm of $p$, we take
$\hat{v}=w$, where $w$ solves (35) with $h=p$ and $\lambda=i\xi$, as in Lemma
3.1. Then the relation (34) becomes
$\int_{\Omega}|p|^{2}dy=\int_{\Omega}(\mu v\cdot\overline{w}+\nu\nabla
v\cdot\nabla\overline{w}-f\overline{w})dy.$
Since by Lemma 3.1
$||w||_{1}\leq C\frac{1+|\lambda|}{|\lambda|}||p||_{0},$
we arrive at (37). ∎
###### Lemma 3.3.
There exists a positive number $\delta$ depending on $\nu$ and $\Omega$ such
that if $f\in L_{2}(\Omega)$, $h=0$ and $0<|\lambda|<\delta$, then the problem
(3.3)–(33) has a unique solution satisfying
$(|\omega|+1)||v||_{0}+(|\omega|+1)^{1/2}||v||_{1}+||p-p_{m}||_{0}+|\lambda|\,||p||_{0}\leq
C||f||_{0},$ (38)
where the constant $C$ depends only on $\nu$ and $\Omega$ and
$p_{m}=\frac{1}{|\Omega|}\int_{\Omega}pdy.$
###### Proof.
From (32) it follows that
$\int_{\Omega}v_{3}dy=0\;\;\mbox{ when}\;\;\lambda\neq 0.$ (39)
Taking $\hat{v}=v$ in (34) yields
$\mu\int_{\Omega}|v|^{2}dy+\nu\int_{\Omega}|\nabla_{y}v|^{2}dy=-2\Re\Big{(}\lambda\int_{\Omega}p\overline{v_{3}}dy\Big{)}+\Re\Big{(}\int_{\Omega}f\cdot\overline{v}dy\Big{)}.$
For a small $\lambda\neq 0$, this together with (39) implies
$(1+|\omega|)||v||_{0}\leq C(||f||_{0}+|\lambda|\,||p-p_{m}||_{0})$
and
$(1+|\omega|)^{1/2}||\nabla v||_{0}\leq C(||f||_{0}+|\lambda|\,||p-p_{m}||_{0})$
with $C$ independent of $\omega$ and $\lambda$. Taking
$\hat{v}=w=(w_{1},w_{2},0)$ in (34) where $w_{k}\in H^{1}_{0}(\Omega)$
satisfies
$\partial_{y_{1}}w_{1}+\partial_{y_{2}}w_{2}=p-p_{m},\;\;||w||_{1}\leq
C||p-p_{m}||_{0},$
we get
$||p-p_{m}||_{0}^{2}=\mu\int_{\Omega}v\cdot\overline{w}dy+\nu\int_{\Omega}\nabla_{y}v\cdot\nabla_{y}\bar{w}dy.$
Therefore,
$||p-p_{m}||_{0}^{2}\leq C\big{(}(1+|\omega|)||v||_{0}+||\nabla
v||_{0})||p-p_{m}||_{0}$
and, hence,
$||p-p_{m}||_{0}\leq C(||f||_{0}+|\lambda|\,||p-p_{m}||_{0})$
which implies
$||p-p_{m}||_{0}\leq
C||f||_{0}\;\;\;\mbox{and}\;\;(1+|\omega|)||v||_{0}+(1+|\omega|)^{1/2}||\nabla
v||_{0}\leq C||f||_{0}.$ (40)
Now taking $\hat{v}=w=(w_{1},w_{2},w_{3})$ in (34), where $w_{k}\in
H^{1}_{0}(\Omega)$ is subject to
$\partial_{y_{1}}w_{1}+\partial_{y_{2}}w_{2}-\bar{\lambda}w_{3}=p-p_{m},\;\;||w||_{1}\leq
C\frac{1+|\lambda|}{|\lambda|}||p-p_{m}||_{0},$
we obtain
$||p||_{0}\leq C\frac{1+|\lambda|}{|\lambda|}||f||_{0}.$
The last inequality together with (40) gives (38). ∎
Now we can describe properties of the pencil ${\mathcal{S}}$ in a neighborhood
of the imaginary axis $\Re\lambda=0$.
###### Lemma 3.4.
There exists $\beta^{*}>0$ such that the following assertions are valid:
(i) the only eigenvalue of ${\mathcal{S}}$ in the strip
$|\Re\lambda|<\beta^{*}$ is zero;
(ii) if $\beta\in(-\beta^{*},0)\cup(0,\beta^{*})$ then for $f\in
L^{2}(\Omega)$, $h=0$ and $\Re\lambda=\beta$ the problem (3.3)–(33) has a
unique solution in $H^{1}_{0}(\Omega)^{3}\times L^{2}(\Omega)$ and its norm is
estimated as follows:
$(1+|\lambda|^{2}+|\omega|)\,||v||_{0}+(1+|\lambda|^{2}+|\omega|)^{1/2}||\nabla
v||_{0}+||p||_{0}\leq C||f||_{0},$ (41)
where the constant $C$ may depend on $\beta$, $\nu$ and $\Omega$.
###### Proof.
First, we observe that
$\mathbb{A}(v,p;\hat{v},\hat{p};\beta+i\xi)-\mathbb{A}(v,p;\hat{v},\hat{p};\xi)=\nu\beta(\beta+2\xi
i)\int_{\Omega}v\cdot\overline{\hat{v}}dy+\beta\int_{\Omega}(v_{3}\overline{\hat{p}}-p\overline{\hat{v_{3}}})dy.$
Thus the first form is a small perturbation of the second one. Now, using
Lemmas 3.2 and 3.3 for small $\beta$, we arrive at the existence of a
$\beta^{*}$ satisfying (i). Moreover, the estimates (36) and (37) remain true
for $\lambda=\beta+i\xi$ with a fixed $\beta\in(-\beta^{*},0)\cup(0,\beta^{*})$
and arbitrary real $\xi$; the constants in (36) and (37) may now depend on
$\beta$, $\nu$ and $\Omega$.
∎
### 3.5 Proof of Proposition 3.1 and Theorem 3.1
###### Proof.
The assertion (i) in Proposition 3.1 is obtained from Lemma 3.4(ii) by using
the inverse Fourier transform together with Parseval’s identity.
To prove (ii) in Proposition 3.1, we observe that
$({\bf V}_{\pm},{\bf P}_{\pm})=\frac{1}{2\pi
i}\int_{\Re\lambda=\pm\beta}e^{-\lambda
z}{\mathcal{S}}^{-1}(f(y,\lambda),0)d\lambda$
and the relation (28) is obtained by applying the residue theorem. ∎
Now we turn to the proof of Theorem 3.1. Let $({\bf V},{\bf P})$ be a solution to
(20), (21) satisfying (22). Our first step is to construct a representation of
the solution in the form
$({\bf V},{\bf P})=({\bf V}^{(+)},{\bf P}^{(+)})+({\bf V}^{(-)},{\bf
P}^{(-)}),$ (42)
where ${\bf V}^{(\pm)}\in W^{1,2}_{\pm\beta}({\mathcal{C}})$, ${\bf
P}^{(\pm)}\in L^{2}_{\pm\beta}({\mathcal{C}})$ and they solve the problem
(20), (21) with certain ${\bf F}={\bf F}^{(\pm)}\in
L^{2}_{\pm\beta}({\mathcal{C}})$.
By the second equation in (20) and by (21) the flux
$\Psi=\int_{\Omega}{\bf V}_{3}(y,z)dy\;\;\mbox{is constant}.$ (43)
The vector function $({\bf V},{\bf P})=(0,0,p_{0}\hat{v}(y),\,p_{0}z)$ with a
constant $p_{0}$ verifies the homogeneous problem (20), (21), and by Remark 3.1
its flux does not vanish when $p_{0}\neq 0$. So, subtracting it with an
appropriate constant $p_{0}$ from $({\bf V},{\bf P})$, we can reduce the proof
of the theorem to the case $\Psi=0$; in what follows we assume that this is the
case.
Let $\zeta(z)$ be a smooth cut-off function equal $1$ for large positive $z$
and $0$ for large negative $z$ and let $\zeta^{\prime}$ be its derivative. We
choose in (42)
$({\bf V}^{(+)},{\bf P}^{(+)})=\zeta({\bf V},{\bf P})+\zeta^{\prime}({\bf
W},0),\;\;({\bf V}^{(-)},{\bf P}^{(-)})=(1-\zeta)({\bf V},{\bf
P})-\zeta^{\prime}({\bf W},0)$
where the vector function ${\bf W}=({\bf W}_{1},{\bf W}_{2},0)$ is such that
$\nabla_{y}\cdot({\bf W}_{1},{\bf W}_{2})(y,z)={\bf V}_{3}(y,z).$ (44)
We construct solution ${\bf W}$ by solving two-dimensional Stokes problem in
$\Omega$ depending on the parameter $z$:
$-\nu\Delta{\bf W}_{k}+\partial_{y_{k}}{\bf
Q}=0,\;\;k=1,2,\;\;\partial_{y_{1}}{\bf W}_{1}(y)+\partial_{y_{2}}{\bf
W}_{2}(y)=V_{3}(y,z)\;\;\mbox{in $\Omega$}$
and ${\bf W}_{k}=0$ on $\gamma$, $k=1,2$. This problem has a solution in
$H^{1}_{0}(\Omega)^{2}\times L^{2}(\Omega)$, which is unique if we require
$\int_{\Omega}{\bf Q}dy=0$. If we look on the dependence on the parameter $z$
it is the same as in the right-hand side. So
${\bf W}_{k},\partial_{z}{\bf W}_{k},\partial^{2}_{z}{\bf W}_{k}\in L^{2}_{\rm
loc}(\mathbb{R};H^{1}_{0}(\Omega))\;\;\mbox{and}\;\;{\bf Q},\partial_{z}{\bf
Q}\in L^{2}(\Omega).$
Therefore,
$i\omega{\bf V}_{k}^{(+)}-\nu\Delta{\bf V}_{k}^{(+)}+\partial_{y_{k}}{\bf
P}^{(+)}={\bf F}_{k}^{(+)}$
where
${\bf F}_{k}^{(+)}=-\nu\zeta^{\prime\prime}{\bf V}_{k}-2\nu\zeta^{\prime}\partial_{z}{\bf V}_{k}+i\omega\zeta^{\prime}{\bf W}_{k}-\nu\partial_{z}^{2}(\zeta^{\prime}{\bf W}_{k})\in(L^{2}_{\beta}\cap L^{2}_{-\beta})({\mathcal{C}})$
for $k=1,2$ and
${\bf F}_{3}^{(+)}=-\nu\zeta^{\prime\prime}{\bf V}_{3}-2\nu\zeta^{\prime}\partial_{z}{\bf V}_{3}+\zeta^{\prime}{\bf P}\in(L^{2}_{\beta}\cap L^{2}_{-\beta})({\mathcal{C}}).$
Similar formulas are valid for $({\bf V}^{(-)},{\bf P}^{(-)})$ with
${\bf F}_{k}^{(-)}=-{\bf F}_{k}^{(+)}.$
By Proposition 3.1(ii) this implies
$({\bf V}^{(+)},{\bf P}^{(+)})+({\bf V}^{(-)},{\bf
P}^{(-)})=(0,0,p_{0}\hat{v}(y),p_{0}z+p_{1})$
for certain constants $p_{0}$ and $p_{1}$, which furnishes the proof of the
assertion.
## 4 Stokes flow in a vessel with elastic walls
This section is devoted to the proof of Theorem 2.2. As in the case of the
Dirichlet problem considered in Sect.3 we represent solutions to the problem
(3)–(9) in the form (18) (for the velocity ${\bf v}$ and the pressure $p$) and
${\bf u}(s,z,t)=\sum_{k=-\infty}^{\infty}e^{2\pi kit/\Lambda}{\bf U}_{k}(s,z)$
(45)
for the displacements ${\bf u}$. The coefficients in (18) are given by (19)
and in (45) by
${\bf U}_{k}(s,z)=\frac{1}{\Lambda}\int_{0}^{\Lambda}e^{-2\pi kit/\Lambda}{\bf
u}(s,z,t)dt.$
The above introduced coefficients satisfy the time independent problem
$i\omega{\bf V}-\nu\Delta{\bf V}+\nabla P={\bf
F}\;\;\mbox{and}\;\;\nabla\cdot{\bf V}=0\;\;\mbox{in ${\mathcal{C}}$},$ (46)
$\displaystyle
D(\kappa(s),-\partial_{s},-\partial_{z})^{T}\,\overline{Q}(s)D(\kappa(s),\partial_{s},\partial_{z}){\bf
U}(s,z)-\overline{\rho}(s)\omega^{2}{\bf U}(s,z)$ $\displaystyle+K{\bf
U}+\sigma{\widehat{\mathcal{F}}}(s,z)={\bf G},$ (47) ${\bf V}=i\omega{\bf
U}\;\;\mbox{on $\Gamma\times\mathbb{R}$}$ (48)
where ${\bf F}=0$, ${\bf G}=0$ and $\omega=2\pi k/\Lambda$ (for forthcoming
analysis it is convenient to have arbitrary right-hand sides in this problem).
###### Theorem 4.1.
Let $\omega\in\mathbb{R}$ and $\omega\neq 0$. Then there exists $\beta>0$
depending on $\omega$ such that the only solution to the homogeneous (${\bf
F}=0$ and ${\bf G}=0$) problem (46)–(48) subject to
$\displaystyle\int_{\mathcal{C}}e^{-\beta|z|}(|{\bf V}|^{2}+|\nabla_{x}{\bf V}|^{2}+|\nabla_{x}\partial_{z}{\bf V}|^{2}+|P|^{2})dx$ (49)
$\displaystyle+\int_{\Gamma}e^{-\beta|z|}(|{\bf U}|^{2}+\sum_{k=2}^{3}(|\nabla_{sz}{\bf U}_{k}|^{2}+|\nabla_{sz}\partial_{z}{\bf U}_{k}|^{2}))dsdz<\infty$
is ${\bf V}=0$, ${\bf U}=0$ and $P=0$.
We postpone the proof of the formulated theorem to Sect.4.7; in the next
section we give the proof of Theorem 2.2.
### 4.1 Proof of Theorem 2.2
By (49),
$\int_{\mathcal{C}}(1+|x|^{2})^{-N}\big{(}|\nabla{\bf
V}_{k}|^{2}+|P_{k}|^{2}\big{)}dydz+\int_{\Gamma}(1+|x|^{2})^{-N}|{\bf
U}_{k}|^{2}dsdz<\infty.$
Applying Theorem 4.1 we get ${\bf V}_{k}=0$, ${\bf P}_{k}=0$ and ${\bf
U}_{k}=0$ for $k\neq 0$. Now, using Theorem 3.1 and the considerations for
$\omega=0$ in the forthcoming Sect.4.8, we arrive at the required assertion.
### 4.2 System for coefficients (46)–(48)
To formulate the main solvability result for the system (46)–(48), we need the
following function spaces
${\mathcal{Y}}_{\beta}=\\{{\bf U}=({\bf U}_{1},{\bf U}_{2},{\bf U}_{3}):{\bf
U}_{1}\in L_{2}^{\beta}(\Gamma),\,{\bf U}_{2},{\bf U}_{3}\in
W^{1,2}_{\beta}(\Gamma)\\}$
and
${\mathcal{Z}}_{\beta}=\\{({\bf V},{\bf U})\,:\,{\bf V}\in
W^{1,2}_{\beta}({\mathcal{C}})^{3},\,{\bf
U}\in{\mathcal{Y}}_{\beta},\,i\omega{\bf U}={\bf V}\;\mbox{on}\;\Gamma\\}.$
###### Proposition 4.1.
Let $\omega\in\mathbb{R}$ and $\omega\neq 0$. There exists a positive number
$\beta^{*}$ depending on $\omega$, $\Omega$ and $\nu$ such that for any
$\beta\in(-\beta^{*},\beta^{*})$ the following assertions hold:
(i) If ${\bf F}\in L^{2}_{\beta}({\mathcal{C}})$, ${\bf G},\partial_{z}{\bf
G}\in L^{2}_{\beta}(\Gamma)$ then the problem (46)–(48) has a unique solution
$({\bf V},{\bf U})\in{\mathcal{Z}}_{\beta}$, $P\in
L^{2}_{\beta}({\mathcal{C}})$ and this solution satisfies the estimate
$\displaystyle||{\bf
V};W^{1,2}_{\beta}({\mathcal{C}})||+||P;L^{2}_{\beta}({\mathcal{C}})||+||{\bf
U};{\mathcal{Y}}_{\beta}||$ $\displaystyle\leq C\Big{(}||{\bf
F};L^{2}_{\beta}({\mathcal{C}})||+||{\bf
G};L^{2}_{\beta}(\Gamma)||+||\partial_{z}{\bf
G};L^{2}_{\beta}(\Gamma)||\Big{)},$
where $C$ may depend on $\omega$, $\beta$, $\nu$ and $\Omega$. Moreover,
$||\partial_{z}{\bf V};W^{1,2}_{\beta}({\mathcal{C}})||+||\partial_{z}{\bf
U};{\mathcal{Y}}_{\beta}||\leq C\Big{(}||{\bf
F};L^{2}_{\beta}({\mathcal{C}})||+||{\bf
G};L^{2}_{\beta}(\Gamma)||+||\partial_{z}{\bf
G};L^{2}_{\beta}(\Gamma)||\Big{)}.$
(ii) If ${\bf F}\in L^{2}_{\beta_{1}}({\mathcal{C}})\cap
L^{2}_{\beta_{2}}({\mathcal{C}})$, ${\bf G},\partial_{z}{\bf G}\in
L^{2}_{\beta_{1}}(\Gamma)\cap L^{2}_{\beta_{2}}(\Gamma)$ with
$\beta_{1},\beta_{2}\in(-\beta^{*},\beta^{*})$ and $({\bf V}^{(k)},{\bf
U}^{(k)},{\bf P}^{(k)})\in{\mathcal{Z}}_{\beta_{k}}\times
L^{2}_{\beta_{k}}({\mathcal{C}})$ is the solution from (i) for $k=1,2$
respectively, then they coincide.
### 4.3 Transformations of the problem (3), (2.1), (9)
It is convenient to rewrite the Stokes system (20) in the form
$i\omega{\bf V}_{j}-\sum_{i=1}^{3}\partial_{x_{i}}T_{ji}({\bf V})={\bf
F}_{j},\;\;j=1,2,3,\;\nabla\cdot{\bf V}=0\;\;\mbox{in ${\mathcal{C}}$}.$ (50)
where
$T_{ji}({\bf V})=-p\delta_{i,j}+\nu\big{(}\partial_{x_{i}}{\bf
V}_{j}+\partial_{x_{j}}{\bf V}_{i}\big{)}$ (51)
and $\delta_{i,j}$ is the Kronecker delta. Moreover, relations (2.1) and (9)
become
$\displaystyle
D(\kappa(s),-\partial_{s},-\partial_{z})^{T}\,Q(s)D(\kappa(s),\partial_{s},\partial_{z}){\bf
U}(s,z)$ $\displaystyle-\rho(s)\omega^{2}{\bf U}(s,z)+K{\bf
U}+\sigma{\widehat{\mathcal{F}}}(s,z)={\bf G}(n,s,z)$ (52)
and
${\bf V}=i\omega{\bf U}\;\;\mbox{on $\Gamma$.}$ (53)
Here ${\widehat{\mathcal{F}}}=e^{-i\omega t}{\mathcal{F}}$.
The next step is the application of the Fourier transform. We set
${\bf V}(x)=e^{\lambda z}v(y),P(x)=e^{\lambda z}p(y)\;\;\mbox{and}\;\;{\bf
U}(x)=e^{\lambda z}u(y).$
As the result we obtain the system
$i\omega v_{j}-\sum_{i=1}^{2}\partial_{x_{i}}t_{ji}(v;\lambda)+\lambda
t_{j3}(v;\lambda)=f_{j}\;\;j=1,2,3,$ (54) $\nabla_{y}\cdot v^{\prime}+\lambda
v_{3}=h\;\;\mbox{in $\Omega$,}$ (55)
where
$t_{ij}(v,p;\lambda)=-p\delta_{j}^{i}+2\nu\varepsilon_{ij}(v;\lambda),$ (56)
$\displaystyle\varepsilon_{ij}(v)=\frac{1}{2}\Big{(}\partial_{x_{i}}v_{j}+\partial_{x_{j}}v_{i}\Big{)},\;i,j\leq
2,$
$\displaystyle\varepsilon_{i3}(v;\lambda)=\hat{\varepsilon}_{3i}(v;\lambda)=\frac{1}{2}\Big{(}\lambda
v_{i}+\partial_{x_{i}}v_{3}\Big{)},\;i=1,2,$
$\displaystyle\varepsilon_{33}(v;\lambda)=\lambda v_{3}.$
The equations (4.3) and (53) take the form
$\displaystyle
D(\kappa(s),-\partial_{s},-\lambda)^{T}\,Q(s)D(\kappa(s),\partial_{s},\lambda)u$
(57) $\displaystyle-\overline{\rho}(s)\omega^{2}u+K(s)u+\sigma(s)\Phi(s)=g(s)$
and
$v=i\omega u\;\;\mbox{on $\partial\Omega$.}$ (58)
Here $\Phi(s)=(\Phi_{n},\Phi_{z},\Phi_{s})$ and
$\Phi_{n}=-p+2\nu\frac{\partial v_{n}}{\partial
n},\;\;\Phi_{s}=\nu\Big{(}\frac{\partial v_{n}}{\partial s}+\frac{\partial
v_{s}}{\partial n}-\kappa v_{s}\Big{)},\;\;\Phi_{z}=\nu\Big{(}\lambda
v_{n}+\frac{\partial v_{z}}{\partial n}\Big{)}.$ (59)
### 4.4 Weak formulation and function spaces
Let us introduce an energy integral
$E({\bf v},\hat{\bf v})=\int_{\Omega}\sum_{i,j=1}^{2}\varepsilon_{ij}({\bf
v})\overline{\varepsilon_{ij}(\hat{\bf v})}dy$
and put
$a(u,\hat{u};\lambda)=\int_{\partial\Omega}\langle
Q(s)D(\kappa(s),\partial_{s},\lambda)u(s),D(\kappa(s),\partial_{s},-\bar{\lambda})\hat{u}(s)\rangle
ds,$
where $\langle\cdot,\cdot\rangle$ is the Euclidean inner product in
$\mathbb{C}^{3}$. Since the matrix $Q$ is positive definite,
$a(u,u;i\xi)\geq c_{1}(|\xi|^{2}|u_{3}|^{2}+|\kappa
u_{1}+\partial_{s}u_{2}|^{2}+|i\xi u_{2}+\partial_{s}u_{3}|^{2}),$ (60)
where $\xi\in\mathbb{R}$ and $c_{1}$ is a positive constant independent of
$\xi$. Another useful inequality is the following
$\int_{\partial\Omega}|v|^{2}ds\leq c_{2}||v||_{0}\,||v||_{1},$ (61)
or by using Korn’s inequality
$q\int_{\gamma}|v|^{2}ds\leq c_{3}\Big{(}q^{2}||v||_{0}^{2}+E(v,v)\Big{)}\;\;\mbox{for $q\geq 1$},$ (62)
where $c_{3}$ does not depend on $q$.
To define a weak solution, we introduce the vector function spaces:
$X=\\{v=(v_{1},v_{2},v_{3})\,:\,v\in H^{1}(\Omega)^{3}\\},\;\;Y=\\{u=(u_{1},u_{2},u_{3})\,:\,u_{1}\in H^{1/2}(\gamma),\,u_{2},\,u_{3}\in H^{1}(\gamma)\\}$
and
$Z=Z_{\omega}=\\{(v,u)\,:\,v\in X,\;u\in Y,\;\;v=i\omega
u\;\,\mbox{on}\;\gamma\\}.$
We supply the space $Z$ with the inner product
$\displaystyle\langle
v,u;\hat{v},\hat{u}\rangle_{0}=\int_{\Omega}(v\cdot\overline{\hat{v}}+\sum_{j=1}^{3}\nabla_{y}v_{j}\cdot\nabla_{y}\overline{\hat{v}_{j}})dy$
$\displaystyle+\int_{\gamma}(\partial_{s}u_{2}\partial_{s}\overline{\hat{u}_{2}}+\partial_{s}u_{3}\partial_{s}\overline{\hat{u}_{3}})ds.$
(63)
Since $\omega\neq 0$, the norm $||u_{1};H^{1/2}(\gamma)||$ is estimated by the
norm of $v$ in the space $H^{1}(\Omega)$ via the trace theorem; therefore no
term with $u_{1}$ and $\hat{u}_{1}$ is needed in (63). Let also
$\langle v,p,u;\hat{v},\hat{p},\hat{u}\rangle_{1}=\langle
v,u;\hat{v},\hat{u}\rangle_{0}+\int_{\Omega}p\overline{\hat{p}}dx$
be the inner product in $Z\times L^{2}(\Omega)$.
We also introduce a sesquilinear form corresponding to the formulation (54),
(55), (57), (58):
$\displaystyle\widehat{\mathcal{A}}(v,p,u;\hat{v},\hat{p},\hat{u};\lambda)=\int_{\Omega}\big{(}i\omega
v\overline{\hat{v}}+2\nu\sum\hat{\varepsilon}_{ij}(v;\lambda)\overline{\hat{\varepsilon}_{ij}(\hat{v};-\bar{\lambda})}\big{)}dx+\int_{\Omega}p\overline{(\nabla_{y}\cdot\hat{v}^{\prime}-\bar{\lambda}\hat{v}_{3})}dx$
$\displaystyle+\int_{\Omega}(\nabla_{y}\cdot v^{\prime}+\lambda
v_{3})\,\overline{\hat{p}}dx+i\omega(-\omega^{2}\int_{\partial\Omega}u\overline{\hat{u}}+a(u,\hat{u};\lambda)+k\int_{\gamma}u_{1}\overline{\hat{u}_{1}}ds).$
Clearly, this form is bounded in $Z\times L^{2}(\Omega)$. For
${\mathcal{F}}\in Z^{*}$, ${\mathcal{H}}\in L^{2}(\Omega)$ and $h\in
L^{2}(\Omega)$ the weak formulation reads as the integral identity
${\mathcal{A}}(v,p,u;\hat{v},\hat{p},\hat{u};\lambda)={\mathcal{F}}(\hat{v},\hat{u})+\int_{\Omega}{\mathcal{H}}\overline{\hat{p}}dy$
(64)
which has to be valid for all $(\hat{v},\hat{p},\hat{u})\in Z\times
L^{2}(\Omega)$ and $\nabla_{y}\cdot v^{\prime}+\lambda v_{3}=h$ in $\Omega$,
where $v^{\prime}=(v_{1},v_{2})$.
If $\nabla_{y}\cdot v^{\prime}+\lambda v_{3}=0$, it is enough to require that
$\widehat{\mathcal{A}}(v,p,u;\hat{v},0,\hat{u};\lambda)={\mathcal{F}}(\hat{v},\hat{u})$
(65)
for all $(\hat{v},\hat{u})\in Z$.
It will be useful to introduce the operator pencil in the space
${\mathcal{Z}}\times L^{2}(\Omega)$ depending on the parameter
$\lambda\in\mathbb{C}$ by
$\widehat{\mathcal{A}}(v,p,u;\hat{v},\hat{p},\hat{u};\lambda)=\langle\Theta(\lambda)(v,u,p);\hat{v},\hat{p},\hat{u}\rangle_{1}.$
(66)
Clearly
$\Theta(\lambda)\,:\,Z\times L^{2}(\Omega)\mapsto Z\times L^{2}(\Omega)$ (67)
is a bounded operator pencil, quadratic with respect to $\lambda$.
### 4.5 Properties of the operator pencil $\Theta$
We will need the following known lemma on the divergence equation
$\partial_{y_{1}}v_{1}+\partial_{y_{2}}v_{2}=h\;\;\mbox{in $\Omega$}.$ (68)
###### Lemma 4.1.
There exists a linear operator
$L^{2}(\Omega)\ni h\rightarrow v^{\prime}=(v_{1},v_{2})\in(H^{1}(\Omega))^{2}$
such that the equation (68) is satisfied, $v_{s}=0$ on $\gamma$,
$v^{\prime}|_{\gamma}\in H^{1}(\gamma)$ and
$||v^{\prime};H^{1}(\Omega)||+||v_{n}|_{\gamma};H^{1}(\gamma)||\leq
C||h||_{0}.$ (69)
Clearly the vector function $(v,u)$ belongs to $Z$, where $v=(v_{1},v_{2},0)$
and $i\omega u=v|_{\gamma}$. Estimate (69) implies
$||(v,u);Z||\leq C||h||_{0}.$ (70)
###### Proof.
We represent $h$ as
$h(y)=\check{h}(y)+\tilde{h},\;\;\tilde{h}=\frac{1}{|\Omega|}\int_{\Omega}h(x)dx.$
Then $v=\check{v}+\tilde{v}$, where $\check{v}\in H^{1}_{0}(\Omega)$ solves
the problem $\nabla_{y}\cdot\check{v}=\check{h}$ in $\Omega$ and $\tilde{v}$
is a solution to
$\nabla_{y}\cdot\tilde{v}=\tilde{h}\;\;\mbox{in $\Omega$}\;\;\mbox{and}\;\;\tilde{v}_{n}=\tilde{c}=\frac{|\Omega|}{|\partial\Omega|}\tilde{h}\;\;\mbox{on $\gamma$}.$
Both mappings $\check{h}\mapsto\check{v}$ and $\tilde{h}\mapsto\tilde{v}$ can
be chosen linear and satisfying
$||\check{v};H^{1}(\Omega)||\leq
C||\check{h};L^{2}(\Omega)||\;\;\mbox{and}\;||\tilde{v};H^{2}(\Omega)||\leq
C||\tilde{h};H^{1}(\Omega)||$
respectively. This implies the required assertion.
∎
###### Lemma 4.2.
Let $\omega\in\mathbb{R}$ and $\omega\neq 0$. Then the operator pencil
$\mathbb{C}\ni\lambda\mapsto\Theta(\lambda)$ possesses the following
properties:
(i) $\Theta(\lambda)$ is a Fredholm operator for all $\lambda\in\mathbb{C}$
and its spectrum consists of isolated eigenvalues of finite algebraic
multiplicity. The line $\Re\lambda=0$ is free of the eigenvalues of $\Theta$.
(ii) Let $\lambda=i\xi$, $\xi\in\mathbb{R}$. Then there exists a positive
constant $\rho(|\omega|)$ which may depend on $|\omega|$ such that the
solution of problem (65) with ${\mathcal{F}}=(f,g)\in L^{2}(\Omega)\times
L^{2}(\gamma)$ and $|\xi|\geq\rho(|\omega|)$ satisfies the estimate
$\displaystyle|\xi|^{2}\int_{\gamma}(|\xi|^{2}(|u_{2}|^{2}+|u_{3}|^{2})+|\partial_{s}u_{2}|^{2}+|\partial_{s}u_{3}|^{2})ds+\int_{\Omega}|p|^{2}dy$
(71)
$\displaystyle+|\xi|^{2}\int_{\Omega}(|\xi|^{2}|v|^{2}+|\nabla_{y}v|^{2})dy\leq
CN(f,g;\xi)^{2},$
where
$N(f,g;\xi)=\Big{(}||f||_{0}+||g_{2}||_{0}+||g_{3}||_{0}+|\xi|^{1/2}\,||g_{1}||_{0}\Big{)}.$
(72)
The constant $C$ here may depend on $\omega$ but it is independent of $\xi$.
###### Proof.
Let $\lambda=i\xi$ and
${\mathcal{X}}={\mathcal{X}}(\lambda)=\\{(v,u)\in{\mathcal{Z}}_{\omega}\,:\;\nabla\cdot
v^{\prime}+i\xi v_{3}=0\\}.$
(i) Consider the integral identity
${\mathcal{A}}(v,0,u;\hat{v},\hat{u};i\xi)={\mathcal{F}}(\hat{v},\hat{u})\;\;\mbox{
$\forall(\hat{v},\hat{u})\in{\mathcal{X}}$}.$ (73)
We want to apply the Lax-Milgram lemma to find solution
$(v,u)\in{\mathcal{X}}$. First, we note that
$\Re{\mathcal{A}}(v,0,u;v,u;i\xi)\geq
c\big{(}I(v,v)+|\lambda|^{2}\int_{\Omega}|v_{3}|^{2}dy+\int_{\Omega}|\lambda
v^{\prime}+\nabla_{y}v_{3}|^{2}dy\big{)}$ (74)
and
$\displaystyle|\Im{\mathcal{A}}(v,0,u;v,u;i\xi)|\geq
c|\omega|\Big{(}\int_{\Omega}|v|^{2}dy+\int_{\gamma}(k|u_{1}|^{2}-\omega^{2}|u|^{2})ds$
(75) $\displaystyle+\int_{\gamma}(|\xi|^{2}|u_{3}|^{2}+|\kappa
u_{1}+\partial_{s}u_{2}|^{2}+|i\xi u_{2}+\partial_{s}u_{3}|^{2})ds\Big{)},$
where the constant $c$ does not depend on $\omega$ and $\xi$.
We use the representation
$|i\xi v+\nabla_{y}v_{3}|^{2}=(1-\tau)|i\xi
v+\nabla_{y}v_{3}|^{2}+\tau|\xi|^{2}|v|^{2}+\tau|\nabla_{y}v_{3}|^{2}+2\tau\Re(i\xi
v\cdot\nabla_{y}\overline{v_{3}}),$ (76)
where $\tau\in(0,1]$. Let us estimate the last term in (76). We have
$\displaystyle\int_{\Omega}v\cdot\nabla_{y}\overline{v_{3}}dx=-\int_{\Omega}\nabla_{y}\cdot
v\,\overline{v_{3}}dx+\int_{\partial\Omega}v_{n}\overline{v_{3}}ds$
$\displaystyle=-\int_{\Omega}(\hat{\varepsilon}_{11}(v)+\hat{\varepsilon}_{22}(v))\overline{v_{3}}dx+|\omega|^{2}\int_{\partial\Omega}u_{n}\overline{u_{3}}ds.$
Since
$\Big{|}\xi\int_{\Omega}(\hat{\varepsilon}_{11}(v)+\hat{\varepsilon}_{22}(v))\overline{v_{3}}dx\Big{|}\leq\frac{1}{2}|\xi|^{2}\int_{\Omega}|v_{3}|^{2}dx+\int_{\Omega}(|\hat{\varepsilon}_{11}(v)|^{2}+|\hat{\varepsilon}_{22}(v)|^{2})dy,$
(77)
we derive that
$2\tau|\Re(i\xi v\cdot\nabla_{y}\overline{v_{3}})|\leq
C_{\omega}\tau\big{(}\int_{\gamma}(|\xi|^{2}|u_{3}|^{2}+|u|^{2})ds+|\xi|^{2}\int_{\Omega}|v_{3}|^{2}dx+I(v,v)\big{)}.$
Using the above inequalities together with (62) for a small $\epsilon$, we arrive
at the estimate
$\displaystyle|{\mathcal{A}}(v,0,u;v,u;i\xi)|\geq
C_{\omega}\Big{(}\int_{\Omega}(|\xi|^{2}|v|^{2}+|\nabla_{y}v|^{2})dy$
$\displaystyle+\int_{\gamma}(|\xi|^{2}|u_{3}|^{2}+|\partial_{s}u_{2}|^{2}+|i\xi
u_{2}+\partial_{s}u_{3}|^{2})ds\Big{)},$ (78)
where $C_{\omega}$ is a positive constant which may depend on $\omega$ and
$|\xi|$ is chosen to be sufficiently large with respect to $|\omega|+1$. On
the basis of
$\int_{\gamma}|i\xi u_{2}+\partial_{s}u_{3}|^{2}ds=\int_{\gamma}\big{(}|\xi
u_{2}|^{2}+|\partial_{s}u_{3}|^{2}-2\Re(i\xi\partial_{s}u_{2}\bar{u}_{3})\big{)}ds$
one can continue the estimate (78) as follows:
$\displaystyle|{\mathcal{A}}(v,0,u;v,u;i\xi)|\geq
C_{\omega}\Big{(}\int_{\Omega}(|\xi|^{2}|v|^{2}+|\nabla_{y}v|^{2})dy$
$\displaystyle+\int_{\gamma}(|\xi|^{2}|u_{3}|^{2}+|\partial_{s}u_{2}|^{2}+\xi^{2}|u_{2}|^{2}+|\partial_{s}u_{3}|^{2})ds\Big{)},$
(79)
with possibly another constant $C_{\omega}$. Application of the Lax-Milgram
lemma gives existence of a unique solution in ${\mathcal{X}}$ and the
following estimate for this solution
$\displaystyle\int_{\Omega}(|\xi|^{2}|v|^{2}+|\nabla_{y}v|^{2})dy$ (80)
$\displaystyle+\int_{\partial\Omega}(|\xi|^{2}|u_{3}|^{2}+|\partial_{s}u_{2}|^{2}+\xi^{2}|u_{2}|^{2}+|\partial_{s}u_{3}|^{2})ds\leq
C||{\mathcal{F}};{\mathcal{Z}}^{*}||^{2}$
with a constant $C$ which may depend on $\omega$ and $\xi$. It remains to
estimate the function $p$. We choose the test function $(\hat{v},\hat{u})$ in
the following way: $i\omega\hat{u}=v$ on $\partial\Omega$, $\hat{v}_{3}=0$ and
$\hat{v}^{\prime}\in H^{1}(\Omega)$ solves the problem
$\nabla_{y}\cdot\hat{v}^{\prime}=h\;\;\mbox{in $\Omega$}$
where $h\in L^{2}(\Omega)$. According to Lemma 4.1 the mapping
$h\mapsto\hat{v}^{\prime}$ can be chosen linear and satisfying the estimate
(69). The pressure $p$ must satisfy the relation
$\displaystyle\int_{\Omega}p\overline{h}dy={\mathcal{F}}(\hat{v},\hat{u})-\int_{\Omega}\big{(}i\omega
v\overline{\hat{v}}+2\nu\sum\varepsilon_{ij}(v)\overline{\varepsilon_{ij}(\hat{v})}\big{)}dx$
$\displaystyle-i\omega(\omega^{2}\int_{\partial\Omega}u\overline{\hat{u}}+a(u,\hat{u};\lambda)).$
(81)
One can verify using (80) that the right-hand side of (4.5) is a linear
bounded functional with respect to $h\in L^{2}(\Omega)$ and therefore there
exists $p\in L^{2}(\Omega)$ solving (4.5) and estimated by the corresponding
norm of ${\mathcal{F}}$. Thus the operator pencil (67) is an isomorphism for
large $|\xi|$.
Since the operator $\Theta(\lambda_{1})-\Theta(\lambda_{2})$ is compact we
obtain that the spectrum of the operator pencil $\Theta$ consists of isolated
eigenvalues of finite algebraic multiplicities, see [3].
Let us show that the kernel of $\Theta(i\xi)$ is trivial for all
$\xi\in\mathbb{R}$. Indeed, if $(v,u,p)\in{\mathcal{Z}}_{\omega}\times
L_{2}(\Omega)$ is a weak solution with ${\mathcal{F}}=0$ then in the case
$\xi\neq 0$ inequality (74) implies $v=0$ and hence $u=0$ because $i\omega
u=v$ on $\gamma$. From (66) it then follows that
$\int_{\Omega}p\overline{(\nabla_{y}\cdot\hat{v}^{\prime}+i\xi\hat{v}_{3})}dy=0.$
By Lemma 4.1 there exists an element $(v_{1},v_{2},0)\in{\mathcal{X}}$
solving $\nabla_{y}\cdot v^{\prime}=p$, which gives $p=0$. In the case
$\lambda=0$ we derive from (74) $v_{3}=c_{3}$ and
$(v_{1},v_{2})=(c_{1},c_{2})+c_{0}(y_{2},-y_{1})$, where $c_{0},\ldots,c_{3}$ are
constant. From (3.3) it follows that
$v_{3}=0\;\,\mbox{and}\;\;i\omega v_{j}+\partial_{y_{j}}p=0,\;j=1,2.$
Since the vector $v$ is a rigid displacement, we have $Du=0$ due to (8) and
(58). Hence relation (57) implies
$-\rho(s)\omega^{2}u(s)+Ku(s)=\sigma(p,0,0)^{T}.$
Therefore, $u_{2}=0$ and $-\rho(s)\omega^{2}u_{1}+ku_{1}=p$. By (58), we have
$c_{1}\zeta_{1}^{\prime}+c_{2}\zeta_{2}^{\prime}+c_{0}(\zeta_{2}\zeta_{1}^{\prime}-\zeta_{1}\zeta_{2}^{\prime})=0.$
In view of Lemma 5.1 this yields $c_{0}=c_{1}=c_{2}=0$. Thus, the assertion
(i) is proved.
(ii) Let
${\mathcal{F}}(\hat{v},\hat{u})=\int_{\Omega}f\cdot\overline{\hat{v}}dy+\int_{\gamma}g\cdot\overline{\hat{u}}ds,\;\;f\in
L_{2}(\Omega),\;\;g\in L_{2}(\gamma).$
Then
$\displaystyle|{\mathcal{F}}(v,u)|\leq\Big{(}||f||_{0}+||g_{2}||_{0}+||g_{3}||_{0}+|\xi|^{1/2}\,||g_{1}||_{0}\Big{)}$
$\displaystyle\times\Big{(}||v||_{0}+||u_{2}||_{0}+||u_{3}||_{0}+|\xi|^{-1/2}||u_{1}||_{0}\Big{)}.$
Using (62) and (80), we get
$|\xi|^{2}\big{(}\int_{\Omega}|v|^{2}dy+\int_{\gamma}(|u_{2}|^{2}+|u_{3}|^{2}+|\xi|^{-1}|u_{1}|^{2})ds\big{)}\leq
C|{\mathcal{F}}(v,u)|.$
Therefore,
$|\xi|^{4}\big{(}\int_{\Omega}|v|^{2}dy+\int_{\gamma}(|u_{2}|^{2}+|u_{3}|^{2}+|\xi|^{-1}|u_{1}|^{2})ds\big{)}\leq
C\Big{(}||f||_{0}^{2}+||g_{2}||_{0}^{2}+||g_{3}||_{0}^{2}+|\xi|\,||g_{1}||_{0}^{2}\Big{)}.$
(82)
Furthermore,
$\displaystyle\int_{\Omega}|\nabla_{y}v|^{2}dy+\int_{\gamma}(|\partial_{s}u_{2}|^{2}+|\partial_{s}u_{3}|^{2})ds\leq
C|{\mathcal{F}}(v,u)|$ $\displaystyle\leq
C\Big{(}||f||_{0}+||g_{2}||_{0}+||g_{3}||_{0}+|\xi|^{1/2}\,||g_{1}||_{0}\Big{)}$
$\displaystyle\times\Big{(}||v||_{0}+||u_{2}||_{0}+||u_{3}||_{0}+|\xi|^{-1/2}||u_{1}||_{0}\Big{)}$
$\displaystyle\leq
C|\xi|^{-2}\Big{(}||f||_{0}+||g_{2}||_{0}+||g_{3}||_{0}+|\xi|^{1/2}\,||g_{1}||_{0}\Big{)}^{2}$
where we have used (82). The last inequality together with (82) delivers (71)
for the vector functions $v$ and $u$.
To obtain the estimate for $p$, we proceed as in (i).
∎
### 4.6 Solvability of the problem (50)–(53)
###### Proposition 4.2.
Let $\omega\in\mathbb{R}$ and $\omega\neq 0$. There exists a positive number
$\beta^{*}$ depending on $\omega$, $\Omega$ and $\nu$ such that the following
assertions hold:
(i) The strip $|\Re\lambda|<\beta^{*}$ is free of eigenvalues of the operator
pencil $\Theta$.
(ii) For $\Re\lambda=\beta\in(-\beta^{*},\beta^{*})$ the estimate
$\displaystyle(|\lambda|^{2}+1)\int_{\gamma}((|\lambda|^{2}+1)(|u_{2}|^{2}+|u_{3}|^{2})+|\partial_{s}u_{2}|^{2}+|\partial_{s}u_{3}|^{2})ds$
(83)
$\displaystyle+\int_{\Omega}|p|^{2}dy+(|\lambda|^{2}+1)\int_{\Omega}((|\lambda|^{2}+1)|v|^{2}+|\nabla_{y}v|^{2})dy\leq
CN(f,g;\xi)^{2}$
is valid, where $N$ is given by (72). The positive constant $C$ here may
depend on $\beta$, $\omega$, $\nu$ and $\Omega$.
###### Proof.
Let $\lambda=\beta+i\xi$. It is straightforward to verify that
$\Theta(\lambda)(v,u,p)-\Theta(i\xi)(v,u,p)=\beta(A,B,C)^{T}$
where
$A=-\nu(\beta+2i\xi)v+(0,0,p);\;\;B=v_{3};$
$C=\frac{\nu\sigma}{i\omega}(u_{1},0,0)-(\beta+2i\xi)D_{0}^{T}QD_{0}u+(D_{1}(-\partial_{s})^{T}QD_{0}+D_{0}^{T}QD_{1}(\partial_{s}))u.$
Therefore,
$\Theta(\lambda)-\Theta(i\xi):Z\times L_{2}(\Omega)\to L_{2}(\Omega)^{3}\times
L_{2}(\gamma)^{3}\times L_{2}(\Omega)$
has small norm for small $\beta$. Hence the estimate (83) for large
$|\xi|$ follows from (71). From Lemma 4.2(i) it follows that this can be
extended to all $\xi\in{\mathbb{R}}$ if $\beta$ is chosen sufficiently small. Thus
we arrive at both assertions of the proposition.
∎
### 4.7 Proof of Proposition 4.1 and Theorem 4.1
###### Proof.
The assertion (i) in Proposition 4.1 is obtained from Proposition 4.2(ii) by
using the inverse Fourier transform together with Parseval’s identity.
To conclude with (ii) we observe that the same proposition provides
$({\bf V}_{\pm},{\bf P}_{\pm})=\frac{1}{2\pi
i}\int_{\Re\lambda=\pm\beta}e^{-\lambda
z}{\Theta}^{-1}(\lambda)(f(y,\lambda),g(s,\lambda),0)d\lambda$
and the assertion (ii) in Proposition 4.1 is obtained by applying the residue
theorem. ∎
Proof of Theorem 4.1. Let $({\bf V},{\bf U},{\bf P})$ be a solution to
(46)–(48) satisfying (49). Our first step is to construct a representation of
the solution in the form
$({\bf V},{\bf U},{\bf P})=({\bf V}^{(+)},{\bf U}^{(+)},{\bf P}^{(+)})+({\bf
V}^{(-)},{\bf U}^{(-)},{\bf P}^{(-)}),$ (84)
where $({\bf V}^{(\pm)},{\bf
U}^{(\pm)})\in{\mathcal{Z}}_{\pm\beta}({\mathcal{C}})$, ${\bf P}^{(\pm)}\in
L^{2}_{\pm\beta}({\mathcal{C}})$ and they solve the problem (46)–(48) with
certain $({\bf F},{\bf G})=({\bf F}^{(\pm)},{\bf G}^{(\pm)})$ such that ${\bf
F}^{(\pm)}\in L^{2}_{\pm\beta}({\mathcal{C}})$ and ${\bf
G}^{(\pm)},\partial_{z}{\bf G}^{(\pm)}\in L^{2}_{\pm\beta}(\Gamma)$.
Let $\zeta$ be the same cut-off function as in the proof of Theorem 3.1. We
choose in (84)
$\displaystyle({\bf V}^{(+)},{\bf U}^{(+)},{\bf P}^{(+)})=\zeta({\bf V},{\bf
U},{\bf P})+\zeta^{\prime}(\tilde{\bf V},\tilde{\bf U},\tilde{\bf P}),$
$\displaystyle({\bf V}^{(-)},{\bf U}^{(-)},{\bf P}^{(-)})=(1-\zeta)({\bf
V},{\bf U},{\bf P})-\zeta^{\prime}(\tilde{\bf V},\tilde{\bf U},\tilde{\bf
P}),$ (85)
where the vector function $(\tilde{\bf V},\tilde{\bf U},\tilde{\bf P})$ solves
the problem (64) for $\lambda=0$ and with ${\mathcal{F}}=0$, ${\mathcal{H}}=0$
and $h={\bf V}_{3}(y,z)$, i.e.,
$\partial_{y_{1}}\tilde{\bf V}_{1}+\partial_{y_{2}}\tilde{\bf V}_{2}={\bf
V}_{3}(y,z)\;\;\mbox{for $y\in\Omega$}.$ (86)
In this problem the variable $z$ is considered as a parameter. In order to
apply Lemma 4.2(i) we reduce the above formulation to the case $h=0$. Applying
for this purpose Lemma 4.1 we find a function
${\mathcal{V}}(y)=({\mathcal{V}}_{1},{\mathcal{V}}_{2},{\mathcal{V}}_{3})(y)$
solving (86) and satisfying (69) or (70) where
$i\omega{\mathcal{U}}={\mathcal{V}}$. The function
$({\mathcal{V}},{\mathcal{U}})\in Z$ and the above formulation is reduced to
the case $h=0$ but with some nonzero ${\mathcal{F}}$. Applying to the new
problem Lemma 4.2(i) we find solution satisfying
$||(\tilde{\bf V},\tilde{\bf U},\tilde{\bf P});Z\times L^{2}(\Omega)||\leq
C||{\bf V}_{3}||_{0},$
which depends on the parameter $z$. Since
${\bf V}_{3},\;\partial_{z}{\bf V}_{3},\;\partial_{z}^{2}{\bf V}_{3}\in
L^{2}_{\rm loc}(\mathbb{R};L_{2}(\Omega)),$
we get that
$(\tilde{\bf V},\tilde{\bf U},\tilde{\bf P}),\partial_{z}(\tilde{\bf
V},\tilde{\bf U},\tilde{\bf P})\in L^{2}_{\rm loc}(\mathbb{R};Z\times
L_{2}(\Omega))\;\;\mbox{and}\;\partial_{z}^{2}(\tilde{\bf V},\tilde{\bf U})\in
L^{2}_{\rm loc}(\mathbb{R};L_{2}(\Omega)\times L_{2}(\gamma)).$
Now one can verify that the vector functions (85) satisfy (46)–(48) with
certain right-hand sides $({\bf F}^{(\pm)},{\bf G}^{(\pm)})$ having compact
supports. Moreover ${\bf F}^{(+)}=-{\bf F}^{(-)}$ and ${\bf G}^{(+)}=-{\bf
G}^{(-)}$ and
${\bf F}^{(\pm)}\in L^{2}_{\pm\beta}({\mathcal{C}}),\;\;{\bf
G}^{(\pm)},\partial_{z}{\bf G}^{(\pm)}\in L^{2}_{\pm\beta}(\Gamma).$
Applying Proposition 4.1(ii) we get
$({\bf V}^{(+)},{\bf U}^{(+)},{\bf P}^{(+)})=-({\bf V}^{(-)},{\bf
U}^{(-)},{\bf P}^{(-)}),$
which means that $({\bf V},{\bf U},{\bf P})=0$. Theorem 4.1 is proved.
### 4.8 The case $\omega=0$, homogeneous system
If $\omega=0$ the system (3)–(9) becomes
$-\nu\Delta{\bf v}+\nabla p=0\;\;\mbox{and}\;\;\nabla\cdot{\bf
v}=0\;\;\mbox{in ${\mathcal{C}}\times\mathbb{R}$}.$ (87) ${\bf v}=0\;\;\mbox{on
$\Gamma$}$ (88)
and
$\displaystyle
D(\kappa(s),-\partial_{s},-\partial_{z})^{T}\,\overline{Q}(s)D(\kappa(s),\partial_{s},\partial_{z}){\bf
u}(s,z)$ $\displaystyle+k(s){\bf u}_{1}(s,z)+\sigma(s){\mathcal{F}}(s,z)=0,$
(89)
where ${\mathcal{F}}$ is given by (6). So we see that the system decouples
with respect to $({\bf v},p)$ and ${\bf u}$. Solutions to (87)–(88) are
given by
${\bf v}_{1}={\bf v}_{2}=0,\;\;p=p_{0}z+p_{1}\;\;\mbox{and}\;\;{\bf
v}_{3}=p_{0}{\bf v}_{*}(y),$
where $p_{0}$, $p_{1}$ are constants and ${\bf v}_{*}$ solves the problem
(16). In this case the vector function ${\mathcal{F}}$ is evaluated as
${\mathcal{F}}_{1}={\mathcal{F}}_{n}=-(p_{0}z+p_{1}),\;\;{\mathcal{F}}_{2}={\mathcal{F}}_{s}=0\;\;\mbox{and}\;\;{\mathcal{F}}_{3}={\mathcal{F}}_{z}=\nu\partial_{n}{\bf
v}_{3}.$
Let
$Q=\left(\begin{array}[]{lll}Q_{11}&Q_{12}&Q_{13}\\\ Q_{21}&Q_{22}&Q_{23}\\\
Q_{31}&Q_{32}&Q_{33}\end{array}\right)$ (90)
First we consider the case $p_{0}=p_{1}=0$. Namely, we want to solve the
homogeneous equation
$(-D_{0}\partial_{z}+D_{1}(-\partial_{s}))^{T}Q(s)(D_{0}\partial_{z}+D_{1}(\partial_{s})){\bf
U}+k{\bf U}_{1}(s)=0,$ (91)
where $D_{0}$ and $D_{1}$ are defined by (5). First, we are looking for
solution independent of $z$. Then it must satisfy
$\kappa{\bf U}_{1}+\partial_{s}{\bf U}_{2}=0,\;\;\partial_{s}{\bf
U}_{3}=0\;\;\mbox{and}\;\;{\bf U}_{1}=0.$
Thus
${\bf U}_{1}=0,\;\;{\bf U}_{2}=c_{2},\;\;{\bf U}_{3}=c_{3},$
where $c_{2}$ and $c_{3}$ are constants.
Next let ${\bf U}$ be a linear function in $z$, i.e. ${\bf U}(s,z)=z{\bf
u}^{0}(s)+{\bf u}^{1}(s)$. Then
$D_{1}(-\partial_{s})^{T}Q(s)D_{1}(\partial_{s}){\bf u}^{0}+K{\bf
u}^{0}=0,\;\;$
and
$D_{1}(-\partial_{s})^{T}Q(s)D_{1}(\partial_{s}){\bf u}^{1}+K{\bf
u}^{1}+\Big{(}D_{1}(-\partial_{s})^{T}Q(s)D_{0}-D_{0}^{T}Q(s)D_{1}(\partial_{s})\Big{)}{\bf
u}^{0}=0.$ (92)
Since ${\bf u}^{0}=(0,\alpha,\beta)^{T}$, where $\alpha$ and $\beta$ are constants,
equation (92) takes the form
$D_{1}(-\partial_{s})^{T}Q(s)(D_{1}(\partial_{s}){\bf u}^{1}+D_{0}{\bf
u}^{0})+K{\bf u}^{1}=0$ (93)
and it is solvable since the term containing ${\bf u}^{0}$ is orthogonal to
constant vectors $(0,a_{1},a_{2})$. Thus there exist solutions linear in $z$.
Let us find these solutions. We have
$Q(D_{1}(\partial_{s}){\bf u}^{1}+D_{0}{\bf u}^{0})=(A,B,C)^{T},$ (94)
where
$\displaystyle A=Q_{11}(\kappa{\bf u}^{1}_{1}+\partial_{s}{\bf
u}^{1}_{2})+Q_{13}\frac{1}{\sqrt{2}}\partial_{s}{\bf
u}^{1}_{3}+Q_{12}\beta+\frac{1}{\sqrt{2}}Q_{13}\alpha,$ $\displaystyle
B=Q_{21}(\kappa{\bf u}^{1}_{1}+\partial_{s}{\bf
u}^{1}_{2})+Q_{23}\frac{1}{\sqrt{2}}\partial_{s}{\bf
u}^{1}_{3}+Q_{22}\beta+\frac{1}{\sqrt{2}}Q_{23}\alpha,$ $\displaystyle
C=Q_{31}(\kappa{\bf u}^{1}_{1}+\partial_{s}{\bf
u}^{1}_{2})+Q_{33}\frac{1}{\sqrt{2}}\partial_{s}{\bf
u}^{1}_{3}+Q_{32}\beta+\frac{1}{\sqrt{2}}Q_{33}\alpha.$
Now system (93) takes the form
$\kappa A+k{\bf u}^{1}_{1}=0,\;\;\partial_{s}A=0,\;\;\partial_{s}C=0.$
This implies
$A=b_{1},\;\;C=b_{2},\;\;\kappa b_{1}+k{\bf u}^{1}_{1}=0,$
where $b_{1}$ and $b_{2}$ are constants. Therefore
${\bf u}^{1}_{1}=-\frac{\kappa b_{1}}{k}$ (95)
and
$(\kappa{\bf u}^{1}_{1}+\partial_{s}{\bf
u}^{1}_{2},\frac{1}{\sqrt{2}}\partial_{s}{\bf
u}^{1}_{3})^{T}={\mathcal{R}}(s)(b_{1},b_{2})^{T}-{\mathcal{R}}(s){\mathcal{S}}(\beta,\frac{1}{\sqrt{2}}\alpha)^{T},$
(96)
where
${\mathcal{R}}(s)=\left(\begin{matrix}Q_{11}&Q_{13}\\\ Q_{31}&Q_{33}\\\
\end{matrix}\right)^{-1},\;\;\;{\mathcal{S}}=\left(\begin{matrix}Q_{12}&Q_{13}\\\
Q_{32}&Q_{33}\\\ \end{matrix}\right).$
Using (95) we can write the compatibility condition for (96) as
$\int_{0}^{|\gamma|}{\mathcal{R}}(s)ds(b_{1},b_{2})^{T}+\int_{0}^{|\gamma|}\frac{\kappa^{2}}{k}ds(b_{1},0)^{T}=\int_{0}^{|\gamma|}{\mathcal{R}}(s){\mathcal{S}}ds(\beta,\frac{1}{\sqrt{2}}\alpha)^{T}.$
(97)
Since ${\mathcal{R}}$ is a positive definite matrix, this system is uniquely
solvable with respect to $(b_{1},b_{2})$.
Next let us look for solution to (91) in the form
${\bf U}=\frac{1}{2}z^{2}{\bf u}^{0}+z{\bf u}^{1}+{\bf u}^{2},$
where ${\bf u}^{0}$ and ${\bf u}^{1}$ are the vector functions just constructed above.
Then the equation for ${\bf u}^{2}$ takes the form
$D_{1}(-\partial_{s})^{T}Q(s)(D_{1}(\partial_{s}){\bf u}^{2}+D_{0}{\bf
u}^{1})+K{\bf u}^{2}-D_{0}^{T}Q(s)(D_{1}(\partial_{s}){\bf u}^{1}+D_{0}{\bf
u}^{0})=0.$ (98)
According to (97) solvability of this system is equivalent to
$\int_{0}^{|\gamma|}Bds=0\;\;\mbox{and}\;\;\int_{0}^{|\gamma|}Cds=0.$
This means that $b_{2}=0$ and
$\displaystyle\int_{0}^{|\gamma|}\Big{(}(Q_{22},Q_{23})-(Q_{21},Q_{23}){\mathcal{R}}{\mathcal{S}}\Big{)}ds(\beta,\frac{1}{\sqrt{2}}\alpha)^{T}$
$\displaystyle+\int_{0}^{|\gamma|}(Q_{21},Q_{23}){\mathcal{R}}ds(b_{1},b_{2})^{T}=0.$
(99)
Furthermore, $(b_{1},b_{2})$ and $(\alpha,\beta)$ are connected by (97). To
simplify the calculation we assume from now on that
$Q_{13}=Q_{23}=0.$
Then the matrices ${\mathcal{R}}$ and ${\mathcal{S}}$ are diagonal and from
(97) it follows that $b_{2}=0$ implies $\alpha=0$ and
$\int_{0}^{|\gamma|}Q_{11}^{-1}dsb_{1}+\int_{0}^{|\gamma|}\frac{\kappa^{2}}{k}dsb_{1}=\int_{0}^{|\gamma|}Q_{11}^{-1}Q_{12}ds\beta.$
(100)
The relation (99) implies
$\int_{0}^{|\gamma|}\Big{(}Q_{22}-Q_{21}Q_{11}^{-1}Q_{12}\Big{)}ds\,\beta+\int_{0}^{|\gamma|}Q_{21}Q_{11}^{-1}ds\,b_{1}=0.$
This relation together with (100) requires that $\beta=0$ and $b_{1}=0$.
If $\xi\neq 0$ then ${\bf U}=0$. Consider the case $\xi=0$ and let $p_{0}=0$.
Then (89) takes the form
$\left(\begin{array}[]{lll}\kappa&0&0\\\ -\partial_{s}&0&0\\\
0&0&-\frac{1}{\sqrt{2}}\partial_{s}\end{array}\right)Q\left(\begin{array}[]{lll}\kappa{\bf
U}_{1}+\partial_{s}{\bf U}_{2}\\\ 0\\\ \frac{1}{\sqrt{2}}\partial_{s}{\bf
U}_{3}\end{array}\right)+K{\bf U}=-p_{1}\sigma\left(\begin{array}[]{lll}1\\\
0\\\ 0\end{array}\right)$ (101)
This is equivalent to the following three equations
$\kappa\Big{(}Q_{11}(\kappa{\bf U}_{1}+\partial_{s}{\bf
U}_{2})+Q_{13}\frac{1}{\sqrt{2}}\partial_{s}{\bf U}_{3}\Big{)}+k{\bf
U}_{1}=-p_{1}\sigma,$ $-\partial_{s}\Big{(}Q_{11}(\kappa{\bf
U}_{1}+\partial_{s}{\bf U}_{2})+Q_{13}\frac{1}{\sqrt{2}}\partial_{s}{\bf
U}_{3}\Big{)}=0,$ $\partial_{s}\Big{(}Q_{31}(\kappa{\bf
U}_{1}+\partial_{s}{\bf U}_{2})+Q_{33}\frac{1}{\sqrt{2}}\partial_{s}{\bf
U}_{3}\Big{)}=0$
This implies
$Q_{11}(\kappa{\bf U}_{1}+\partial_{s}{\bf
U}_{2})+Q_{13}\frac{1}{\sqrt{2}}\partial_{s}{\bf
U}_{3}=b_{1},\;\;Q_{31}(\kappa{\bf U}_{1}+\partial_{s}{\bf
U}_{2})+Q_{33}\frac{1}{\sqrt{2}}\partial_{s}{\bf U}_{3}=b_{2},$ (102)
where $b_{1}$ and $b_{2}$ are constants, and
$\kappa b_{1}+k{\bf U}_{1}=-p_{1}\sigma.$
Hence
${\bf U}_{1}=-\frac{p_{1}\sigma+\kappa b_{1}}{k}.$
Solving the system
$Q_{11}B_{1}+Q_{13}B_{2}=b_{1},\;\;Q_{31}B_{1}+Q_{33}B_{2}=b_{2},$
we get
$(B_{1},B_{2})^{T}={\mathcal{R}}(b_{1},b_{2})^{T},\;\;{\mathcal{R}}=\left(\begin{matrix}Q_{11}&Q_{13}\\\
Q_{31}&Q_{33}\\\ \end{matrix}\right)^{-1}.$
We write the equations (102) as
$\partial_{s}{\bf U}_{2}=B_{1}-\kappa{\bf U}_{1},\;\;\partial_{s}{\bf
U}_{3}=\sqrt{2}B_{2}.$ (103)
These equations have periodic solutions if
$\int_{0}^{|\gamma|}(B_{1}-\kappa{\bf
U}_{1})ds=0,\int_{0}^{|\gamma|}B_{2}ds=0.$
We write these equations as a system with respect to $b_{1}$ and $b_{2}$:
$|\gamma|{\mathcal{R}}(b_{1},b_{2})^{T}+\int_{0}^{|\gamma|}\frac{\kappa^{2}}{k}ds(b_{1},0)^{T}=-\Big{(}\int_{0}^{|\gamma|}\kappa\frac{p_{1}\sigma}{k}ds,0\Big{)}^{T}.$
From these relations we can find $b_{1}$ and $b_{2}$ and then, solving (103), we
can find ${\bf U}_{2}$ and ${\bf U}_{3}$.
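The last step can be illustrated numerically. The following sketch (our own; all data are sample values, not taken from the text) assembles the $2\times 2$ system for $(b_{1},b_{2})$ for a closed curve with prescribed $2\pi$-periodic curvature $\kappa(s)$, constant diagonal $Q$ and constant $k$, $\sigma$, $p_{1}$, and then integrates the first equation of (103); the periodicity defect of ${\bf U}_{2}$ vanishes up to quadrature error, as guaranteed by the compatibility condition.
```python
import numpy as np

def itrapz(f, s):  # trapezoidal rule along the whole curve
    return np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(s))

Lg = 2 * np.pi                          # perimeter |gamma| (sample value)
s = np.linspace(0.0, Lg, 2001)
kappa = 1.0 + 0.3 * np.cos(s)           # sample periodic curvature profile
k, sigma, p1 = 2.0, 1.0, 1.0            # sample k, sigma, p_1
Q11, Q33 = 3.0, 2.0                     # sample diagonal entries of Q

# compatibility system: |gamma| R (b1,b2)^T + (int kappa^2/k ds)(b1,0)^T
#                       = -( int kappa p1 sigma / k ds, 0 )^T,  R = diag(1/Q11, 1/Q33)
M = np.diag([Lg / Q11, Lg / Q33])
M[0, 0] += itrapz(kappa**2 / k, s)
b1, b2 = np.linalg.solve(M, [-itrapz(kappa * p1 * sigma / k, s), 0.0])

U1 = -(p1 * sigma + kappa * b1) / k     # from kappa*b1 + k*U1 = -p1*sigma
dU2 = b1 / Q11 - kappa * U1             # right-hand side of (103) with B1 = b1/Q11
U2 = np.concatenate([[0.0], np.cumsum(0.5 * (dU2[1:] + dU2[:-1]) * np.diff(s))])
print(f"b1 = {b1:.4f}, b2 = {b2:.4f}, periodicity defect of U2: {U2[-1] - U2[0]:.2e}")
```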
## 5 Appendix
###### Lemma 5.1.
Let
$c_{1}\zeta_{1}^{\prime}(s)+c_{2}\zeta_{2}^{\prime}(s)+c_{0}(\zeta_{2}(s)\zeta_{1}^{\prime}(s)-\zeta_{1}(s)\zeta_{2}^{\prime}(s))=0\;\;\mbox{for
$s\in(0,|\gamma|]$},$
where $c_{0}$, $c_{1}$ and $c_{2}$ are complex constants. Then
$c_{0}=c_{1}=c_{2}=0$.
###### Proof.
It is sufficient to prove the assertion for real $c_{0}$, $c_{1}$ and $c_{2}$.
Assume first that $c_{0}=0$. Then $c_{1}\zeta_{1}(s)+c_{2}\zeta_{2}(s)=c$ for some
constant $c$, and hence either the boundary $\gamma$ lies on a line or
$c_{1}=c_{2}=0$. Since the first option is impossible, both constants are zero.
So it is sufficient to prove that $c_{0}=0$. Assume that it is not. Moving the
origin in the $(y_{1},y_{2})$ plane (replacing $\zeta_{k}$ by
$\zeta_{k}+\alpha_{k}$, $k=1,2$), we arrive at the equation
$(\zeta_{2}(s)\zeta_{1}^{\prime}(s)-\zeta_{1}(s)\zeta_{2}^{\prime}(s))=0\;\;\mbox{for
$s\in(0,|\gamma|]$.}$
The last relation means that at each point $(y_{1},y_{2})$ of $\gamma$ the
position vector is orthogonal to the normal of the curve at that point, i.e.,
tangent to $\gamma$, which is impossible for a closed curve. ∎
Acknowledgements. V. Kozlov was supported by the Swedish Research Council (VR),
2017-03837. S. Nazarov was supported by RFBR grant 18-01-00325. This study was
supported by Linköping University, and by RFBR grant 16-31-60112.
## References
* [2] Ghosh A., Kozlov V.A., Nazarov S.A., and Rule D., A Two-Dimensional Model of the Thin Laminar Wall of a Curvilinear Flexible Pipe. _The Quarterly Journal of Mechanics and Applied Mathematics_ , Vol. 71, Issue 3, 349-367, 2018.
* [3] I. Gohberg, M. G. Krein, Introduction to the theory of linear nonselfadjoint operators, AMS, 1978.
* [4] F. Berntsson, M. Karlsson, V. Kozlov, SA. Nazarov, A one-dimensional model of viscous blood flow in an elastic vessel, Applied Mathematics and Computation 274, 125-132, 2016.
* [5] F. Berntsson, A. Ghosh, VA. Kozlov, SA. Nazarov, A one dimensional model of blood flow through a curvilinear artery, Applied Mathematical Modelling 63, 633-643, 2018.
* [6] Womersley J. R., Method for the calculation of velocity, rate of flow and viscous drag in arteries when the pressure gradient is known, _J. Physiol._ , 127(3):553–563, 1955.
* [7] Kozlov V. A., Nazarov S. A., One-dimensional model of viscoelastic blood flow through a thin elastic vessel, _J. Math. Sci._ , 207(2):249–269, 2015.
* [8] Kozlov V. A., Nazarov S. A., Surface enthalpy and elastic properties of blood vessels, _Dokl. Phys._ , 56(11):560–566, 2011.
* [9] Kozlov V. A., Nazarov S. A., Asymptotic models of anisotropic heterogeneous elastic walls of blood vessels, _J. Math. Sci._ , 213(4):561–581, 2016.
* [10] Kozlov V. A., Nazarov S. A., Zavorokhin G. L., Pressure drop matrix for a bifurcation with defects, _EJMCA_ , 7:3, 33–55, 2019.
* [11] Kozlov, V. A., Maz’ya, V. G., Rossmann J., Spectral problems associated with corner singularities of solutions to elliptic equations. Vol. 85, AMS, 2001.
* [12] Beirão da Veiga, H. Time periodic solutions of the Navier-Stokes equations in unbounded cylindrical domains-Leray’s problem for periodic flows. Arch. Ration. Mech. Anal. 178 (2005), no. 3, 301-325.
* [13] Galdi, G. P.; Pileckas, K.; Silvestre, A. L. On the unsteady Poiseuille flow in a pipe. Z. Angew. Math. Phys. 58 (2007), no. 6, 994-1007.
* [14] Farwig, Reinhard; Ri, Myong-Hwan Stokes resolvent systems in an infinite cylinder. Math. Nachr. 280 (2007), no. 9-10, 1061-1082.
* [15] Ri, Myong-Hwan; Farwig, Reinhard Maximal regularity in exponentially weighted Lebesgue spaces of the Stokes operator in unbounded cylinders. Analysis (Berlin) 35
* [16] V. Filonova, CJ Arthurs, IE Vignon-Clementel, CA Figueroa, Verification of the coupled-momentum method with Womersley’s Deformable Wall analytical solution, International Journal for Numerical Methods in Biomedical Engineering 36, 2020/2.
* [17] J. R. Womersley, Oscillatory motion of a viscous liquid in a thin-walled elastic tube I: The linear approximation for long waves. The London, Edinburgh, and Dublin Philosophical Magazine and Journal of Science, Series 7, Volume 46, Issue 373, 1955.
# A Statistical Theory of Heavy Atoms:
Asymptotic Behavior of the Energy and Stability of Matter
Heinz Siedentop Mathematisches Institut
Ludwig-Maximilians-Universität München, Theresienstraße 39
80333 München
Germany
and Munich Center for Quantum Science and Technology (MCQST)
Schellingstr. 4
80799 München, Germany<EMAIL_ADDRESS>Dedicated to Ari Laptev on the occasion of
his septuagesimal birthday.
His ideas in analysis have inspired many.
(Date: January 15, 2021)
###### Abstract.
We give the asymptotic behavior of the ground state energy of Engel’s and
Dreizler’s relativistic Thomas-Fermi-Weizsäcker-Dirac functional for heavy
atoms for fixed ratio of the atomic number and the velocity of light. Using a
variation of the lower bound, we show stability of matter.
## 1\. Introduction
Heavy atoms require a relativistic description because of the extremely fast
moving inner electrons. However, a statistical theory of the atom in the
spirit of Thomas [29] and Fermi [12, 13] yields a functional which is
unbounded from below because the semi-classical relativistic Fermi energy is
too weak to prevent mass from collapsing into the nucleus. (See Gombas [17,
§14] and [18, Chapter III, Section 16]. Gombas also suggested that
Weizsäcker’s (non-relativistic) inhomogeneity correction would solve this
problem. Tomishima [30] carried this suggestion through.) For the same
reason the relativistic generalization of the Lieb-Thirring inequality by
Daubechies is not directly applicable to Chandrasekhar operators with Coulomb
potentials but requires a separate treatment of the singularity. Ekholm and
Frank [10] found a way of circumventing this problem for critical
potentials of Schrödinger operators; later Frank et al. [14] accomplished the
same for Chandrasekhar operators with Coulomb potentials. It amounts to a
Thomas-Fermi functional with a potential whose critical singularity has been
extracted. However, there is a drawback, namely the Thomas-Fermi constant of
this functional is smaller than the classical one, i.e., we cannot expect
asymptotically correct results.
Here we discuss an alternative relativistic density functional which can
handle Coulomb potentials of arbitrary strength: Engel and Dreizler [11]
derived a functional $\mathcal{E}_{c,Z}^{\mathrm{TFWD}}$ of the electron
density $\rho$ from quantum electrodynamics (in contrast to Gombas’ ad hoc
procedure of adding the non-relativistic Weizsäcker correction). It is – in a
certain sense – a generalization of the non-relativistic Thomas-Fermi-
Weizsäcker-Dirac functional to the relativistic setting, a feature that it
shares with the functional investigated by Lieb et al. [22]. However, it does not
suffer from the problem that it becomes unbounded from below for heavy atoms.
We will show here, that it has – unlike the functional which can be obtained
from [14] – the same asymptotic behavior as the atomic quantum energy. The
price to pay is the absence of a known inequality relating it to the full
quantum problem. One could speculate that it might be an upper bound on the
ground state energy. The way we prove the upper bound of the asymptotics might
nourish such thoughts. However, even in the non-relativistic context this is
open despite numerical evidence and claims to the contrary, e.g., by March and
Young [25]: the arguments given contain a gap. In other words, such a claim
would have – even in the non-relativistic context – at best the status of a
conjecture.
Engel’s and Dreizler’s functional is the relativistic TF functional (see
Chandrasekhar [7] [in the ultrarelativistic limit] and Gombas [17, §14] for
the general case) with an inhomogeneity and exchange correction different from
the non-relativistic terms but with an integrand tending pointwise to their
non-relativistic analogue for large velocity of light $c$. In Hartree units it
reads for atoms of atomic number $Z$ and electron density $\rho$
(1)
$\mathcal{E}_{c,Z}^{\mathrm{TFWD}}(\rho):=\mathcal{T}^{\mathrm{W}}(\rho)+\mathcal{T}^{\mathrm{TF}}(\rho)-\mathcal{X}(\rho)+\mathcal{V}(\rho).$
The first summand on the right is an inhomogeneity correction of the kinetic
energy generalizing the Weizsäcker correction. Using the Fermi momentum
$p(x):=(3\pi^{2}\rho(x))^{1/3}$ it is
(2)
$\mathcal{T}^{\mathrm{W}}(\rho):=\int_{\mathbb{R}^{3}}\mathrm{d}x{3\lambda\over
8\pi^{2}}(\nabla p(x))^{2}c\,f(p(x)/c)^{2}$
with
$f(t)^{2}:=t(t^{2}+1)^{-\frac{1}{2}}+2t^{2}(t^{2}+1)^{-1}\mathfrak{Arsin}(t)$
where $\mathfrak{Arsin}$ is the inverse function of the hyperbolic sine. The
parameter $\lambda\in\mathbb{R}_{+}$ is given by the gradient expansion as
$1/9$ but is in the non-relativistic analogue sometimes taken as an adjustable
parameter (Weizsäcker [31], Yonei and Tomishima [32], Lieb [21, 20]).
The second summand is the relativistic generalization of the Thomas-Fermi
kinetic energy, namely
(3)
$\mathcal{T}^{\mathrm{TF}}(\rho):=\int_{\mathbb{R}^{3}}\mathrm{d}x{c^{5}\over
8\pi^{2}}T^{\mathrm{TF}}(\tfrac{p(x)}{c})$
with
$T^{\mathrm{TF}}(t):=t(t^{2}+1)^{3/2}+t^{3}(t^{2}+1)^{1/2}-\mathfrak{Arsin}(t)-{8\over
3}t^{3}$.
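As a quick consistency check (our own computation, not part of the text) one may expand $T^{\mathrm{TF}}$ at $t=0$ with a computer algebra system: $T^{\mathrm{TF}}(t)=\tfrac{4}{5}t^{5}+O(t^{7})$, so that ${c^{5}\over 8\pi^{2}}T^{\mathrm{TF}}(\tfrac{p}{c})\to{3\over 10}(3\pi^{2})^{\frac{2}{3}}\rho^{\frac{5}{3}}$ pointwise as $c\to\infty$, recovering the non-relativistic Thomas-Fermi integrand; the pointwise inequality $\sqrt{c^{2}\xi^{2}+c^{4}}-c^{2}\leq\xi^{2}/2$ moreover gives the global bound $T^{\mathrm{TF}}(t)\leq\tfrac{4}{5}t^{5}$ used in Subsection 2.1.
```python
import sympy as sp

t = sp.symbols('t', positive=True)
Ttf = (t*(t**2 + 1)**sp.Rational(3, 2) + t**3*sp.sqrt(t**2 + 1)
       - sp.asinh(t) - sp.Rational(8, 3)*t**3)
print(sp.series(Ttf, t, 0, 8))  # 4*t**5/5 - t**7/7 + O(t**8)
```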
The third summand is a relativistic generalization of the exchange energy. It
is
(4) $\mathcal{X}(\rho):=\int_{\mathbb{R}^{3}}\mathrm{d}x{c^{4}\over
8\pi^{3}}X(\tfrac{p(x)}{c})$
with $X(t):=2t^{4}-3[t(t^{2}+1)^{\frac{1}{2}}-\mathfrak{Arsin}(t)]^{2}$.
Finally, the last summand is the potential energy, namely the sum of the
electron-nucleus and the electron-electron interaction. It is
(5)
$\mathcal{V}(\rho):=-Z\int_{\mathbb{R}^{3}}\mathrm{d}x\rho(x)|x|^{-1}+\underbrace{\tfrac{1}{2}\int_{\mathbb{R}^{3}}\mathrm{d}x\int_{\mathbb{R}^{3}}\mathrm{d}y\rho(x)\rho(y)|x-y|^{-1}}_{=:\mathcal{D}[\rho]}.$
Using $F(t):=\int_{0}^{t}\mathrm{d}sf(s)$, the functional
$\mathcal{E}_{c,Z}^{\mathrm{TFWD}}$ is naturally defined on
(6) $P:=\\{\rho\in L^{\frac{4}{3}}(\mathbb{R}^{3})|\rho\geq 0,\
\mathcal{D}[\rho]<\infty,F\circ p\in D^{1}(\mathbb{R}^{3})\\}$
and bounded from below [9] for all $c$ and $Z$. In fact Chen et al. [8]
obtained a Thomas-Fermi type lower bound for fixed ratio $\kappa:=Z/c$.
Unfortunately its Thomas-Fermi constant $\gamma_{e}$ is less than
${\gamma_{\mathrm{TF}}}:=(3\pi^{2})^{\frac{2}{3}}$, the correct physical
value.
For comparison we need the non-relativistic Thomas-Fermi functional
(7)
${\mathcal{E}_{Z}^{\mathrm{TF}}}(\rho):=\frac{3}{10}{\gamma_{\mathrm{TF}}}\int_{\mathbb{R}^{3}}\mathrm{d}x\rho(x)^{\frac{5}{3}}+\mathcal{V}(\rho)$
defined on $I:=\\{\rho\in L^{\frac{5}{3}}(\mathbb{R}^{3})|\rho\geq 0,\
\mathcal{D}[\rho]<\infty\\}$. The functional is bounded from below (Simon
[27]) and its infimum fulfills the scaling relation
(8)
$E^{\mathrm{TF}}(Z):=\inf{\mathcal{E}_{Z}^{\mathrm{TF}}}(I)=-{e^{\mathrm{TF}}}Z^{\frac{7}{3}}$
where ${e^{\mathrm{TF}}}=-E^{\mathrm{TF}}(1)$ (Gombas [17], Lieb and Simon
[23]). There exists a unique minimizer of ${\mathcal{E}_{Z}^{\mathrm{TF}}}$
which we denote by $\sigma$.
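For later use we recall the standard one-line proof of the scaling relation (8): inserting $\rho(x)=Z^{2}\tilde{\rho}(Z^{\frac{1}{3}}x)$ with $\tilde{\rho}\in I$ into (7) gives
$\int_{\mathbb{R}^{3}}\mathrm{d}x\rho^{\frac{5}{3}}=Z^{\frac{7}{3}}\int_{\mathbb{R}^{3}}\mathrm{d}y\tilde{\rho}^{\frac{5}{3}},\quad Z\int_{\mathbb{R}^{3}}\mathrm{d}x{\rho(x)\over|x|}=Z^{\frac{7}{3}}\int_{\mathbb{R}^{3}}\mathrm{d}y{\tilde{\rho}(y)\over|y|},\quad\mathcal{D}[\rho]=Z^{\frac{7}{3}}\mathcal{D}[\tilde{\rho}],$
so that ${\mathcal{E}_{Z}^{\mathrm{TF}}}(\rho)=Z^{\frac{7}{3}}{\mathcal{E}_{1}^{\mathrm{TF}}}(\tilde{\rho})$ and hence $E^{\mathrm{TF}}(Z)=Z^{\frac{7}{3}}E^{\mathrm{TF}}(1)$.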
Our first result is
###### Theorem 1.
Assume $\kappa:=Z/c\in\mathbb{R}_{+}$ fixed. Then, as $Z\to\infty$,
(9)
$\inf\mathcal{E}_{c,Z}^{\mathrm{TFWD}}(P)=-e^{\mathrm{TF}}Z^{\frac{7}{3}}+O(Z^{2}).$
Our second result is stability of the second kind for the functional, which
we address in Section 3.
From a mathematical perspective it might come as a surprise that Engel’s and
Dreizler’s density functional – derived by purely formal methods from a
quantum theory which is still lacking a full mathematical understanding –
yields a fundamental feature like the ground state energy of heavy atoms to
leading order quantitatively correct, in full agreement with the $N$-particle
descriptions of heavy atoms like the Chandrasekhar Hamiltonian and no-pair
Hamiltonians (Sørensen [26], [6], Solovej [28], Frank et al. [15, 16], [19]).
It remains to be seen whether this is also true for other quantities like the
density or whether the functional can be used as a tool to investigate
relativistic many particle systems – like Thomas-Fermi theory in non-
relativistic many body quantum mechanics – or whether it can even shed light
on a deeper understanding of quantum electrodynamics.
## 2\. Bounds on the Energy
### 2.1. Upper Bound on the Energy
We begin with an innocent lemma.
###### Lemma 1.
Assume $\rho:\mathbb{R}^{3}\to\mathbb{R}_{+}$ such that
$p:=(3\pi^{2}\rho)^{\frac{1}{3}}$ has partial derivatives with respect to all
variables at $x\in\mathbb{R}^{3}$. Then
(10) ${3\over 8\pi^{2}}|\nabla
p(x)|^{2}c\,f(p(x)/c)^{2}\leq|(\nabla\sqrt{\rho})(x)|^{2}.$
Thus every nonnegative $\rho$ with $\nabla\sqrt{\rho}\in
L^{2}(\mathbb{R}^{3})$ fulfills
(11)
$\mathcal{T}^{\mathrm{W}}(\rho)\leq\lambda\int_{\mathbb{R}^{3}}|\nabla\sqrt{\rho}|^{2}.$
###### Proof.
We set $\psi:=\sqrt{\rho}$ and compute
(12) $\begin{split}&{3\over 8\pi^{2}}|\nabla p(x)|^{2}c\,f(p(x)/c)^{2}\\\
=&{3^{2}\over
8}|\nabla\sqrt[3]{\rho}(x)|^{2}\left({\psi^{\frac{2}{3}}(x)\over\sqrt{1+(p(x)/c)^{2}}}+2{\psi^{\frac{2}{3}}(x)(p(x)/c)\mathfrak{Arsin}(p(x)/c)\over
1+(p(x)/c)^{2}}\right)\\\
\leq&\tfrac{1}{2}|\nabla\psi(x)|^{2}\max\left\\{{\sqrt{1+t^{2}}+2t\mathfrak{Arsin}(t)\over
1+t^{2}}|t\in\mathbb{R}_{+}\right\\}\leq|\nabla\psi(x)|^{2}.\end{split}$
∎
Of course the illuminati are hardly impressed by (11), since dominating
relativistic energies by non-relativistic ones is commonplace for them.
Presumably not even use of the numerically correct value 1.658290113 of the
maximum in the proof instead of the estimate 2 would change that.
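The quoted numerical value of the maximum can be reproduced by a short computation (ours, for illustration only):
```python
import numpy as np
from scipy.optimize import minimize_scalar

def g(t):  # the function maximized in the proof of Lemma 1
    return (np.sqrt(1 + t**2) + 2*t*np.arcsinh(t)) / (1 + t**2)

res = minimize_scalar(lambda t: -g(t), bounds=(0.0, 10.0), method='bounded')
print(f"max g = {g(res.x):.9f} at t = {res.x:.4f}")  # about 1.658290113 near t = 1.45
```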
Now we turn to the upper bound on the left side of (9). It will be practical
to use the non-relativistic Thomas-Fermi-Weizsäcker functional
(13)
$\mathcal{E}^{\mathrm{nrTFW}}_{Z}(\rho):=\frac{\beta}{2}\int_{\mathbb{R}^{3}}|\nabla\sqrt{\rho}|^{2}+\mathcal{E}^{\mathrm{TF}}_{Z}(\rho)$
where $\beta\in\mathbb{R}_{+}$. It is defined on $J:=\\{\rho\in
L^{\frac{5}{3}}(\mathbb{R}^{3})|\rho\geq 0,\ \sqrt{\rho}\in
D^{1}(\mathbb{R}^{3}),\ \mathcal{D}[\rho]<\infty\\}$, has a unique minimizer
$\rho_{W}$ with $\int_{\mathbb{R}^{3}}\rho_{W}\leq Z+C$, and
$\int_{\mathbb{R}^{3}}\rho_{W}^{\frac{5}{3}}=O(Z^{\frac{7}{3}})$ (Benguria
[1], Benguria et al. [2], Benguria and Lieb [3]). Moreover,
(14)
$E^{\mathrm{nrTFW}}(Z)=\mathcal{E}^{\mathrm{nrTFW}}_{Z}(\rho_{W})=E^{\mathrm{TF}}(Z)+D_{\beta}Z^{2}+o(Z^{2})$
for some $\beta$-dependent constant $D_{\beta}\in\mathbb{R}_{+}$ (Lieb and
Liberman [21] and Lieb [20, Formula (1.6)]).
In the following we pick $\beta=2$ and use the minimizer $\rho_{W}$ of the
non-relativistic Thomas-Fermi-Weizsäcker functional as a test function.
We estimate the exchange term first. Since $-X(t)\leq t^{4}$, we get
(15) $-\mathcal{X}(\rho_{W})\leq{(3\pi^{2})^{\frac{4}{3}}\over
8\pi^{3}}\int_{\mathbb{R}^{3}}\rho_{W}^{\frac{4}{3}}\leq
C\sqrt{\int_{\mathbb{R}^{3}}\rho_{W}^{\frac{5}{3}}\int_{\mathbb{R}^{3}}\rho_{W}}=O(Z^{\frac{5}{3}}).$
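(The middle estimate in (15) is the Schwarz inequality applied to $\rho_{W}^{\frac{4}{3}}=\rho_{W}^{\frac{5}{6}}\cdot\rho_{W}^{\frac{1}{2}}$, combined with $\int_{\mathbb{R}^{3}}\rho_{W}=O(Z)$ and $\int_{\mathbb{R}^{3}}\rho_{W}^{\frac{5}{3}}=O(Z^{\frac{7}{3}})$.)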
Thus, since $T^{\mathrm{TF}}(t)\leq\frac{4}{5}t^{5}$,
(16)
$\inf\mathcal{E}_{c,Z}^{\mathrm{TFWD}}(P)\leq\mathcal{E}_{c,Z}^{\mathrm{TFWD}}(\rho_{W})\leq\mathcal{E}^{\mathrm{nrTFW}}_{Z}(\rho_{W})+O(Z^{\frac{5}{3}})=E^{\mathrm{TF}}(Z)+O(Z^{2})$
which concludes the proof of the upper bound.
### 2.2. Lower Bound on the Energy
We set ${\mathchar
22\relax\mkern-12.0mu\mathrm{d}}\xi:=\mathrm{d}\xi/h^{3}=\mathrm{d}\xi/(2\pi)^{3}$.
(Note that the rationalized Planck constant $\hbar$ equals one in Hartree
units and, therefore, $h=2\pi$.)
We introduce the notation $(a)_{-}:=\min\\{0,a\\}$ and write
$\varphi_{\sigma}:=Z/|\cdot|-\sigma*|\cdot|^{-1}$ for the Thomas-Fermi
potential of the minimizer $\sigma$. We start again with a little Lemma.
###### Lemma 2.
Assume $\kappa=Z/c$ fixed. Then, as $Z\to\infty$,
$\begin{split}\int\limits_{|x|>\frac{1}{Z}}\mathrm{d}x\int\limits_{\mathbb{R}^{3}}{\mathchar
22\relax\mkern-12.0mu\mathrm{d}}\xi(\frac{\xi^{2}}{2}-\varphi_{\sigma}(x))_{-}-\int\limits_{|x|>\frac{1}{Z}}\mathrm{d}x\int\limits_{\mathbb{R}^{3}}{\mathchar
22\relax\mkern-12.0mu\mathrm{d}}\xi(\sqrt{c^{2}\xi^{2}+c^{4}}-c^{2}-\varphi_{\sigma}(x))_{-}=O(Z^{2}).\end{split}$
Again, it does not come as a surprise to the physicist that relativistic and
non-relativistic theory give the same result up to errors, if the innermost
electrons, i.e., the particularly fast-moving ones, are disregarded.
###### Proof.
Since $\xi^{2}/2\geq\sqrt{c^{2}\xi^{2}+c^{4}}-c^{2}$, the left side of the
claimed inequality cannot be negative. Thus, we merely need an upper bound:
(17)
$\begin{split}&\int\limits_{|x|>\frac{1}{Z}}\mathrm{d}x\int\limits_{\mathbb{R}^{3}}{\mathchar
22\relax\mkern-12.0mu\mathrm{d}}\xi(\frac{\xi^{2}}{2}-\varphi_{\sigma}(x))_{-}-\int\limits_{|x|>\frac{1}{Z}}\mathrm{d}x\int\limits_{\mathbb{R}^{3}}{\mathchar
22\relax\mkern-12.0mu\mathrm{d}}\xi\left(\sqrt{c^{2}\xi^{2}+c^{4}}-c^{2}-\varphi_{\sigma}(x)\right)_{-}\\\
\leq&\int_{|x|>\frac{1}{Z}}\mathrm{d}x\left\\{c^{5}\int_{{\xi^{2}\over
2}<{\varphi_{\sigma}(x)\over c^{2}}}{\mathchar
22\relax\mkern-12.0mu\mathrm{d}}\xi[\tfrac{1}{2}{\xi^{2}}-(\sqrt{\xi^{2}+1}-1)]\right.\\\
&\left.-\int_{\sqrt{c^{2}\xi^{2}+c^{4}}-c^{2}<\varphi_{\sigma}(x)\leq\xi^{2}/2}{\mathchar
22\relax\mkern-12.0mu\mathrm{d}}\xi\left(\sqrt{c^{2}\xi^{2}+c^{4}}-c^{2}-\varphi_{\sigma}(x)\right)\right\\}\\\
\leq&c^{5}\int_{|x|>\frac{1}{Z}}\mathrm{d}x\left(\int\limits_{{\xi^{2}\over
2}<{\varphi_{\sigma}(x)\over c^{2}}}{\mathchar
22\relax\mkern-12.0mu\mathrm{d}}\xi+\int\limits_{\sqrt{\xi^{2}+1}-1<{\varphi_{\sigma}(x)\over
c^{2}}\leq{\xi^{2}\over 2}}{\mathchar
22\relax\mkern-12.0mu\mathrm{d}}\xi\right)[\tfrac{1}{2}{\xi^{2}}-(\sqrt{\xi^{2}+1}-1)]\\\
\leq&{c^{5}\over 8}\int_{|x|>\frac{1}{Z}}\mathrm{d}x\left(\int_{{\xi^{2}\over
2}<{\varphi_{\sigma}(x)\over c^{2}}}{\mathchar
22\relax\mkern-12.0mu\mathrm{d}}\xi+\int_{\sqrt{\xi^{2}+1}-1<{\varphi_{\sigma}(x)\over
c^{2}}\leq{\xi^{2}\over 2}}{\mathchar
22\relax\mkern-12.0mu\mathrm{d}}\xi\right)|\xi|^{4}\\\ \leq&{c^{5}\over
8}\int_{|x|>{1\over
Z}}\mathrm{d}x\int_{\sqrt{\xi^{2}+1}-1<{\varphi_{\sigma}(x)\over
c^{2}}}{\mathchar 22\relax\mkern-12.0mu\mathrm{d}}\xi|\xi|^{4}\leq{c^{2}\over
8\kappa^{3}}\int_{|x|>1}\mathrm{d}x\int_{\sqrt{\xi^{2}+1}-1<{\kappa^{2}\over|x|}}{\mathchar
22\relax\mkern-12.0mu\mathrm{d}}\xi|\xi|^{4}\end{split}$
where we used $\varphi_{\sigma}(x)\leq Z/|x|$ in the last inequality.
Moreover, the resulting last integral obviously exists and is independent of
$Z$. Thus the left side of the claimed inequality is bounded from above by a
constant depending only on $\kappa$ times $Z^{2}$ quod erat demonstrandum. ∎
We turn to the lower bound on the left side of (9) and follow initially [8].
In fact, apart from minor modifications, we copy the high density part and
focus on the low density part. We pick any $\rho\in P$ and address the parts
of the energy separately.
#### 2.2.1. The Weizsäcker Energy
Since $F(t)\geq t\sqrt{\mathfrak{Arsin}(t)}/2$ (see [9, Formula (90)]),
Hardy’s inequality gives the lower bound
(18) $\mathcal{T}^{\mathrm{W}}(\rho)\geq{3\lambda c\over
2^{7}\pi^{2}}\int_{\mathbb{R}^{3}}\mathrm{d}x{p(x)^{2}\mathfrak{Arsin}(\tfrac{p(x)}{c})\over|x|^{2}}={3^{\frac{5}{3}}\lambda
c\over
2^{7}\pi^{\frac{2}{3}}}\underbrace{\int_{\mathbb{R}^{3}}\mathrm{d}x{\rho(x)^{\frac{2}{3}}\mathfrak{Arsin}(\tfrac{p(x)}{c})\over|x|^{2}}}_{=:\mathcal{H}(\rho)}.$
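In detail (a routine chain, spelled out here for the reader's convenience): writing $\mathcal{T}^{\mathrm{W}}(\rho)={3\lambda c^{3}\over 8\pi^{2}}\int_{\mathbb{R}^{3}}\mathrm{d}x|\nabla F(\tfrac{p(x)}{c})|^{2}$, which follows from $\nabla F(p/c)=f(p/c)\nabla p/c$, Hardy's inequality $\int|\nabla u|^{2}\geq\tfrac{1}{4}\int u^{2}|x|^{-2}$ applied to $u=F(p/c)$ together with $F(t)^{2}\geq t^{2}\mathfrak{Arsin}(t)/4$ yields the factor $\tfrac{1}{4}\cdot\tfrac{1}{4}=2^{-4}$, i.e., the constant ${3\lambda c\over 2^{7}\pi^{2}}$ in (18); the second equality there is just $p^{2}=(3\pi^{2})^{\frac{2}{3}}\rho^{\frac{2}{3}}$.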
#### 2.2.2. The Potential Energy
Since $\sigma$ is positive, we have $\varphi_{\sigma}(x)\leq Z/|x|$. Then
(19)
$\mathcal{V}(\rho)=-\int_{\mathbb{R}^{3}}\mathrm{d}x\varphi_{\sigma}(x)\rho(x)-2\mathcal{D}(\sigma,\rho)+\mathcal{D}[\rho]\geq-\int_{\mathbb{R}^{3}}\mathrm{d}x\varphi_{\sigma}(x)\rho(x)-\mathcal{D}[\sigma].$
Splitting the integrals at $s$, using (19), and Schwarz’s inequality yields
(20)
$\begin{split}\mathcal{V}(\rho)\geq&-\int_{p(x)/c<s}\mathrm{d}x\varphi_{\sigma}(x)\rho(x)\\\
&-Z\int_{p(x)/c\geq
s}\mathrm{d}x{\rho(x)^{\frac{1}{3}}\over|x|}\mathfrak{Arsin}(\tfrac{p(x)}{c})^{\frac{1}{2}}{\rho(x)^{\frac{2}{3}}\over\mathfrak{Arsin}(\tfrac{p(x)}{c})^{\frac{1}{2}}}-\mathcal{D}[\sigma]\\\
\geq&-{Z\over\mathfrak{Arsin}(s)^{\frac{1}{2}}}\mathcal{H}(\rho)^{\frac{1}{2}}{{\mathcal{T}}_{>}}(\rho)^{\frac{1}{2}}-\int_{p(x)/c<s}\mathrm{d}x\varphi_{\sigma}(x)\rho(x)-\mathcal{D}[\sigma]\end{split}$
with
${{\mathcal{T}}_{>}}(\rho):=\int_{p(x)/c>s}\mathrm{d}x\rho(x)^{\frac{4}{3}}$.
#### 2.2.3. The Thomas-Fermi Term
First, we note that
(21) $\mathbb{R}_{+}\to\mathbb{R}_{+},\ t\mapsto T^{\mathrm{TF}}(t)/t^{4}$
is monotone increasing from $0$ to $2$. Thus
(22)
$\begin{split}&\mathcal{T}^{\mathrm{TF}}(\rho)=\int_{p(x)/c<s}\mathrm{d}x\frac{c^{5}}{8\pi^{2}}T^{\mathrm{TF}}(\tfrac{p(x)}{c})+\int_{p(x)/c\geq
s}\mathrm{d}x\frac{c^{5}}{8\pi^{2}}T^{\mathrm{TF}}(\tfrac{p(x)}{c})\\\
\geq&\int_{p(x)/c<s}\mathrm{d}x\frac{c^{5}}{8\pi^{2}}T^{\mathrm{TF}}(\tfrac{p(x)}{c})+\int_{p(x)/c\geq
s}\mathrm{d}x\frac{T^{\mathrm{TF}}(s)}{s^{4}}\tfrac{3}{8}(3\pi^{2})^{\frac{1}{3}}c\rho(x)^{\frac{4}{3}}\\\
=&\int_{p(x)/c<s}\mathrm{d}x\frac{c^{5}}{8\pi^{2}}T^{\mathrm{TF}}(\tfrac{p(x)}{c})+\frac{3}{8}\frac{T^{\mathrm{TF}}(s)}{s^{4}}{\gamma_{\mathrm{TF}}}^{\frac{1}{2}}c{{\mathcal{T}}_{>}}(\rho).\end{split}$
#### 2.2.4. Exchange Energy
Since $X$ is bounded from above and $X(t)=O(t^{4})$ at $t=0$, we have that for
every $\alpha\in[0,4]$ there is an $\eta_{0}$ such that
$X(t)\leq\eta_{0}t^{\alpha}$. We pick $\alpha=3$ in which case $\eta_{0}\approx
1.15$. Thus, with $\eta:=3\eta_{0}/(8\pi)\approx 0.137$, we have
(23) $\mathcal{X}(\rho)\leq{3c\eta_{0}\over 8\pi}N=\eta cN.$ Here $N:=\int_{\mathbb{R}^{3}}\rho$ denotes the particle number.
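The value of $\eta_{0}$ for $\alpha=3$ can be reproduced numerically (our own sketch, for illustration):
```python
import numpy as np
from scipy.optimize import minimize_scalar

def X_over_t3(t):  # X(t)/t^3 with X as in (4)
    X = 2*t**4 - 3*(t*np.sqrt(t**2 + 1) - np.arcsinh(t))**2
    return X / t**3

res = minimize_scalar(lambda t: -X_over_t3(t), bounds=(1e-3, 20.0), method='bounded')
print(f"eta_0 = {X_over_t3(res.x):.4f} at t = {res.x:.3f}")  # about 1.15 near t = 1
```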
#### 2.2.5. The Total Energy
Adding everything up yields
(24)
$\begin{split}\mathcal{E}_{c,Z}^{\mathrm{TFWD}}(\rho)\geq&{3^{\frac{5}{3}}\lambda
c\over
2^{7}\pi^{\frac{2}{3}}}\mathcal{H}(\rho)+\frac{3}{8}\frac{T^{\mathrm{TF}}(s)}{s^{4}}{\gamma_{\mathrm{TF}}}^{\frac{1}{2}}c{{\mathcal{T}}_{>}}(\rho)-{Z\over\mathfrak{Arsin}(s)^{\frac{1}{2}}}\mathcal{H}(\rho)^{\frac{1}{2}}{{\mathcal{T}}_{>}}(\rho)^{\frac{1}{2}}\\\
&+\int_{\frac{p(x)}{c}<s}\mathrm{d}x\big{(}\frac{c^{5}}{8\pi^{2}}T^{\mathrm{TF}}(\tfrac{p(x)}{c})-\varphi_{\sigma}(x)\rho(x)\big{)}-\mathcal{D}[\sigma]-\eta
cN.\end{split}$
We pick $s\in\mathbb{R}_{+}$ such that the sum of the first three summands on
the right of (24) is a complete square, i.e., fulfilling
(25) $\sqrt{{3^{\frac{5}{3}}\over
2^{7}\pi^{\frac{2}{3}}}{3T^{\mathrm{TF}}(s)(3\pi^{2})^{\frac{1}{3}}\over
8s^{4}}}={Z\over c\sqrt{\lambda}}{1\over 2\mathfrak{Arsin}(s)^{\frac{1}{2}}}.$
The solution is uniquely determined: by (21) (and the line below it) the left
side of (25) is monotone increasing in $s$, while $\mathfrak{Arsin}(s)$
increases from $0$ to $\infty$, so the right side is monotone decreasing. Call
the corresponding value $s_{0}$. Obviously, $s_{0}$ does not depend on $c$ and
$Z$ separately but only on the ratio $\kappa:=Z/c$, and it is strictly monotone
increasing in $\kappa$ from $0$ to $\infty$.
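For concreteness, the unique solution $s_{0}$ of (25) can be computed numerically; the following sketch (our own; the values $\lambda=1/9$ and $\kappa=0.1$ are sample choices) uses that the left side of (25) increases and the right side decreases in $s$:
```python
import numpy as np
from scipy.optimize import brentq

lam, kappa = 1.0/9.0, 0.1   # gradient-expansion lambda; sample ratio Z/c

def Ttf(s):
    return (s*(s**2 + 1)**1.5 + s**3*np.sqrt(s**2 + 1)
            - np.arcsinh(s) - (8.0/3.0)*s**3)

def lhs(s):   # left side of (25)
    return np.sqrt(3**(5/3)/(2**7*np.pi**(2/3))
                   * 3*Ttf(s)*(3*np.pi**2)**(1/3)/(8*s**4))

def rhs(s):   # right side of (25), with Z/c = kappa
    return kappa/(np.sqrt(lam)*2*np.sqrt(np.arcsinh(s)))

s0 = brentq(lambda s: lhs(s) - rhs(s), 1e-2, 1e2)  # bracketing root finder
print(f"s_0(kappa={kappa}) = {s0:.6f}")
```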
We set
(26) $\begin{split}{I_{s,Z}}:=&\\{x\in\mathbb{R}^{3}|p(x)/c<s,\ |x|<1/Z\\},\\\
{A_{s,Z}}:=&\\{x\in\mathbb{R}^{3}|p(x)/c<s,\ |x|\geq 1/Z\\}.\end{split}$
Then
(27) $\mathcal{E}_{c,Z}^{\mathrm{TFWD}}(\rho)\geq I+A-\mathcal{D}[\sigma]-\eta
cN$
with
(28) $\displaystyle I:=$
$\displaystyle\int_{I_{s,Z}}\mathrm{d}x\left(\frac{c^{5}}{8\pi^{2}}T^{\mathrm{TF}}(\tfrac{p(x)}{c})-\varphi_{\sigma}(x)\rho(x)\right)$
(29) $\displaystyle A:=$
$\displaystyle\int_{A_{s,Z}}\mathrm{d}x\left(\frac{c^{5}}{8\pi^{2}}T^{\mathrm{TF}}(\tfrac{p(x)}{c})-\varphi_{\sigma}(x)\rho(x)\right)$
We estimate $I$ from below by dropping the TF-term, using
$\varphi_{\sigma}(x)\leq Z/|x|$, and observing that $\rho(x)\leq(sc)^{3}/(3\pi^{2})$
for $x\in{I_{s,Z}}$. We get
(30) $I\geq-C_{\kappa}Zc^{3}\int_{0}^{Z^{-1}}\mathrm{d}r\,r=-C_{\kappa}Z^{2}$
where $C_{\kappa}$ is a generic constant depending on $\kappa$ only. In other
words, we can pull the Coulomb tooth paying a negligible price.
Next we estimate $A$ from below by keeping ${A_{s,Z}}$ fixed and minimizing
the integrand at each point $x\in{A_{s,Z}}$ by varying the values
$p(x)\in\mathbb{R}_{+}$. We get
(31)
$\begin{split}A\geq&2\int_{A_{s,Z}}\mathrm{d}x\int_{\mathbb{R}^{3}}{\mathchar
22\relax\mkern-12.0mu\mathrm{d}}\xi\left(\sqrt{c^{2}\xi^{2}+c^{4}}-c^{2}-\varphi_{\sigma}(x)\right)_{-}\\\
\geq&2\int_{|x|\geq 1/Z}\mathrm{d}x\int_{\mathbb{R}^{3}}{\mathchar
22\relax\mkern-12.0mu\mathrm{d}}\xi\left(\sqrt{c^{2}\xi^{2}+c^{4}}-c^{2}-\varphi_{\sigma}(x)\right)_{-}.\end{split}$
Although at first glance the first inequality might seem abrupt, it is easily
checked that the Thomas-Fermi functional (relativistic or non-relativistic,
restricted to some region in space $M$) with kinetic energy $T(\xi)$ and
external potential $\varphi$ is merely the marginal functional (integrating
out the momentum variable $\xi$) of the phase space variational principle
(32)
$\mathcal{E}_{\Gamma}(\gamma):=\int_{M}\mathrm{d}x\int_{\mathbb{R}^{3}}{\mathchar
22\relax\mkern-12.0mu\mathrm{d}}\xi(T(\xi)-\varphi(x))\gamma(x,\xi)$
with $\gamma(M\times\mathbb{R}^{3})\subset[0,2]$ and the choice
$\gamma(x,\xi):=\chi_{\\{(x,\xi)\in M\times\mathbb{R}^{3}||\xi|<p(x)\\}}$ for
given Fermi momentum $p$. Finally, since $\gamma$ takes only values between
$0$ and $2$, (32) is obviously minimized by twice the characteristic function
of the set where $T(\xi)-\varphi(x)$ is negative.
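In formulas (added for convenience): for fixed $x\in M$,
$\inf\big\\{\int_{\mathbb{R}^{3}}{\mathchar 22\relax\mkern-12.0mu\mathrm{d}}\xi(T(\xi)-\varphi(x))\gamma(x,\xi)\,\big{|}\,0\leq\gamma(x,\cdot)\leq 2\big\\}=2\int_{\mathbb{R}^{3}}{\mathchar 22\relax\mkern-12.0mu\mathrm{d}}\xi(T(\xi)-\varphi(x))_{-},$
the infimum being attained at $\gamma(x,\xi)=2\chi_{\\{T(\xi)<\varphi(x)\\}}$ (a bathtub-type argument).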
Metaphorically speaking, the second inequality of (31) states that we can
bound the energy from below by a toothless relativistic Thomas-Fermi energy.
Now, by Lemma 2 this equals the corresponding non-relativistic expression up
to errors of order $O(Z^{2})$, i.e., we have
(33) $\begin{split}\mathcal{E}_{c,Z}^{\mathrm{TFWD}}(\rho)\geq&2\int_{|x|\geq
1/Z}\mathrm{d}x\int_{\mathbb{R}^{3}}{\mathchar
22\relax\mkern-12.0mu\mathrm{d}}\xi\left(\tfrac{1}{2}\xi^{2}-\varphi_{\sigma}(x)\right)_{-}-\mathcal{D}[\sigma]-C_{\kappa}Z^{2}\\\
\geq&2\int_{\mathbb{R}^{3}}\mathrm{d}x\int_{\mathbb{R}^{3}}{\mathchar
22\relax\mkern-12.0mu\mathrm{d}}\xi\left(\tfrac{1}{2}\xi^{2}-\varphi_{\sigma}(x)\right)_{-}-\mathcal{D}[\sigma]-C_{\kappa}Z^{2}\\\
=&\mathcal{E}_{Z}^{\mathrm{TF}}(\sigma)-C_{\kappa}Z^{2}=-{e^{\mathrm{TF}}}Z^{7/3}-C_{\kappa}Z^{2}\end{split}$
which concludes the proof of the desired lower bound and therefore also the
proof of (9).
## 3\. Stability of Matter
For several atoms the potential term $\mathcal{V}$ in (5) is replaced by
(34)
$\mathcal{V}(\rho):=-\sum_{k=1}^{K}\int_{\mathbb{R}^{3}}\mathrm{d}x{Z_{k}\rho(x)\over|x-R_{k}|}+\mathcal{D}[\rho]+\sum_{1\leq
k<l\leq K}{Z_{k}Z_{l}\over|R_{k}-R_{l}|}$
with pairwise different positions of the nuclei
$\mathfrak{R}:=(R_{1},...,R_{K})\in\mathbb{R}^{3K}$, and atomic numbers
$\mathfrak{Z}:=(Z_{1},...,Z_{K})\in\mathbb{R}^{K}_{+}$. The first term is the
attraction potential between the electrons and the nuclei, the second term,
the electron-electron interaction, is unchanged, and the third is the
repulsion between the nuclei. We write
$\mathcal{E}_{\mathfrak{R},\mathfrak{Z}}^{\mathrm{TFWD}}$ for the otherwise
unchanged Engel-Dreizler functional. We are interested in finding a lower
bound to $\mathcal{E}_{\mathfrak{R},\mathfrak{Z}}^{\mathrm{TFWD}}$ which is
uniform in the density $\rho$ as long as the constraint $\int\rho=N$ is
respected, is uniform in $R_{1},...,R_{K}$ and linear in $N$ and $K$, i.e., we
wish to show that the energy per particle is bounded from below by the same
constant irrespective of the electron density and the positions of the nuclei.
This property is also known as stability of the second kind, thermodynamic
stability, and stability of matter.
Benguria et al. [4] showed stability of matter for the massless version of
this functional without the regularizing $\mathfrak{Arsin}$-term in
$\mathcal{T}^{\mathrm{W}}$ and without the exchange correction.
Initially we restrict to the case of equal atomic numbers $Z$. We begin by
extracting all Coulomb teeth which is conveniently done using [24, Formula
(5.2)]: given disjoint balls $B_{1},...,B_{K}$ with centers at
$R_{1},...,R_{K}$ and radii $D_{1},...,D_{K}$ we have
(35) $-\Delta\geq\sum_{k=1}^{K}\left({1\over 4|\cdot-
R_{k}|^{2}}-D_{k}^{-2}Y(\tfrac{|\cdot-
R_{k}|}{D_{k}})\right)_{+}\chi_{B_{k}}$
with $Y(r)=1+r^{2}/4$. We also set
$H_{k}(x):=2\sqrt{Y(|x-R_{k}|/D_{k})}/D_{k}$. We pick the radii maximal,
namely $D_{k}$ as half the distance of the $k$-th nucleus to its nearest
neighbor.
We also use Lieb and Yau’s electrostatic inequality [24, Formula (4.5)]: we
write $\Gamma_{1},...,\Gamma_{K}$ for the Voronoi cells of $R_{1},...,R_{K}$
and set
(36) $\Phi(x):=\sum_{l=1}^{K}\chi_{\Gamma_{l}}(x)\sum_{k=1,k\neq
l}^{K}{Z\over|x-R_{k}|},$
i.e., the nuclear potential at point $x$ of all nuclei except the one from the
cell in which $x$ lies. With this notation the electrostatic inequality reads
(37)
$\tfrac{1}{2}D[\nu]-Z\int_{\mathbb{R}^{3}}\Phi(x)\mathrm{d}\nu(x)+Z^{2}\sum_{1\leq
k<l\leq K}{1\over|R_{k}-R_{l}|}\geq{Z^{2}\over 8}\sum_{k=1}^{K}{1\over D_{k}}$
for any bounded measure $\nu$ and any $Z\in\mathbb{R}_{+}$.
With these tools we modify the atomic lower bounds term by term.
### 3.1. The Weizsäcker Energy
Instead of (18) we get
(38) $\mathcal{T}^{\mathrm{W}}(\rho)\geq{3^{\frac{5}{3}}\lambda c\over
2^{7}\pi^{\frac{2}{3}}}\sum_{k=1}^{K}\underbrace{\int_{B_{k}}\mathrm{d}x\rho(x)^{\frac{2}{3}}\mathfrak{Arsin}(\tfrac{p(x)}{c})\left(|x-R_{k}|^{-2}-H_{k}(x)^{2}\right)_{+}}_{=:\mathcal{H}_{R_{k}}(\rho)}$
where we use (35) and $F(t)\geq t\sqrt{\mathfrak{Arsin}(t)}/2$ (see [9,
Formula (90)]).
### 3.2. The Potential Energy
Using (37) and $\sqrt{(a^{2}-b^{2})_{+}}\geq a-b$ for $a,b\in\mathbb{R}_{+}$
we get
(39)
$\begin{split}&\mathcal{V}(\rho)\geq-\sum_{k=1}^{K}\left(\int_{B_{k}}\mathrm{d}x+\int_{\Gamma_{k}\setminus
B_{k}}\mathrm{d}x\right){Z\rho(x)\over|x-R_{k}|}+{Z^{2}\over
8}\sum_{k=1}^{K}{1\over D_{k}}\\\ \geq&-\sum_{k=1}^{K}Z\left\\{\int_{B_{k},\
p(x)/c\geq
s}\mathrm{d}x\rho(x)\left[\sqrt{\left({1\over|x-R_{k}|^{2}}-H_{k}(x)^{2}\right)_{+}}+H_{k}(x)\right]\right.\\\
&+\left.\int_{B_{k},\ p(x)/c\leq
s}\mathrm{d}x{\rho(x)\over|x-R_{k}|}+\int_{\Gamma_{k}\setminus
B_{k}}{\rho(x)\over|x-R_{k}|}-\frac{Z}{8}{1\over D_{k}}\right\\}\\\
\geq&-\sum_{k=1}^{K}\left({Z\over\sqrt{\mathfrak{Arsin}(s)}}\sqrt{\mathcal{H}_{R_{k}}(\rho)\mathcal{T}_{R_{k}}(\rho)}+Z\int_{B_{k},\
p(x)/c\geq s}\mathrm{d}xH_{k}(x)\rho(x)\right.\\\ &+\left.\int_{B_{k},\
p(x)/c\leq s}\mathrm{d}x{Z\rho(x)\over|x-R_{k}|}+\int_{\Gamma_{k}\setminus
B_{k}}\mathrm{d}x{Z\rho(x)\over|x-R_{k}|}\right)+{Z^{2}\over
8}\sum_{k=1}^{K}{1\over D_{k}}\end{split}$
with $\mathcal{T}_{R_{k}}(\rho):=\int_{B_{k},\ p(x)/c\geq
s}\mathrm{d}x\rho(x)^{\frac{4}{3}}$.
### 3.3. The Combined Thomas-Fermi and Exchange Terms
We use $T^{\mathrm{TF}}(t)\geq 2t^{4}-8t^{3}/3$, (23), and set
$\delta:=(3\pi^{2})^{\frac{1}{3}}$. This yields
(40)
$\mathcal{T}^{\mathrm{TF}}(\rho)-\mathcal{X}(\rho)\geq\int\limits_{\mathbb{R}^{3}}\mathrm{d}x\frac{c^{5}}{8\pi^{2}}T^{\mathrm{TF}}(\tfrac{p(x)}{c})-\eta
cN\geq\tfrac{3}{4}\delta
c\int\limits_{\mathbb{R}^{3}}\mathrm{d}x\rho(x)^{\frac{4}{3}}-C_{c}N.$
### 3.4. The Total Energy
Adding again all up yields
(41) $\displaystyle\mathcal{E}_{\mathfrak{R},\mathfrak{Z}}^{\mathrm{TFWD}}(\rho)$ (42)
$\displaystyle\geq$
$\displaystyle\sum_{k=1}^{K}c\left[{3^{\frac{5}{3}}\lambda\over
2^{7}\pi^{\frac{2}{3}}}\mathcal{H}_{R_{k}}(\rho)+\tfrac{3}{8}\delta\mathcal{T}_{R_{k}}(\rho)-{\kappa\over\sqrt{\mathfrak{Arsin}(s)}}\sqrt{\mathcal{H}_{R_{k}}(\rho)\mathcal{T}_{R_{k}}(\rho)}\right.$
(43) $\displaystyle+\int_{B_{k},\
\frac{p(x)}{c}<s}\mathrm{d}x\left(\tfrac{3}{4}\delta\rho(x)^{\frac{4}{3}}-{\kappa\rho(x)\over|x-R_{k}|}\right)$
(44) $\displaystyle+\int_{B_{k},\
p(x)/c>s}\mathrm{d}x\left(\tfrac{3}{8}\delta\rho(x)^{\frac{4}{3}}-\kappa
H_{k}(x)\rho(x)\right)$ (45) $\displaystyle+\left.\int_{\Gamma_{k}\setminus
B_{k}}\mathrm{d}x\left(\tfrac{3}{4}\delta\rho(x)^{\frac{4}{3}}-{\kappa\rho(x)\over|x-R_{k}|}\right)\right]$
(46) $\displaystyle+\sum_{k=1}^{K}{Z^{2}\over 8D_{k}}-C_{c}N.$
We pick $s$ such that
(47) $2\sqrt{{3^{\frac{5}{3}}\lambda\over
2^{7}\pi^{\frac{2}{3}}}\frac{3}{8}\delta}={\kappa\over\sqrt{\mathfrak{Arsin}(s)}}$
which makes (42) a sum of complete squares. Next
(48)
$\begin{split}&(43)\geq\delta(cs)^{4}\inf\left\\{\int_{p(x)<1}\mathrm{d}x\left(\tfrac{3}{4}\rho(x)^{\frac{4}{3}}-{\kappa\rho(x)\over
cs\delta|x|}\right)\Big{|}\rho\in P\right\\}\\\
\geq&{cs\kappa^{3}\over\delta^{2}}\inf\left\\{\int_{p(x)<1}\mathrm{d}x\left(\tfrac{3}{4}\rho(x)^{\frac{4}{3}}-{\rho(x)\over|x|}\right)\Big{|}\rho\in
P\right\\}\geq-C{cs\kappa^{3}\over\delta^{2}}\end{split}$
where we replaced $p$ by $csp$ in the first step and $x$ by
$\kappa/(cs\delta)x$ in the second step. Thus (48) yields, after summation,
$K$ times a constant which is irrelevant for stability. Furthermore,
(49) $(44)\geq-{2^{4}\kappa^{4}\cdot 4\pi\over 2\cdot
4\delta^{3}D_{k}}\int_{0}^{1}\mathrm{d}rr^{2}2^{4}(1+r^{2}/4)^{2}=-{5944\pi\kappa^{4}\over
105\delta^{3}D_{k}}$
and using $\int_{\Gamma_{k}\setminus B_{k}}\mathrm{d}x|x-R_{k}|^{-4}\leq
3\pi/D_{k}$ (Lieb et al. [22, Formula (4.6)]) we get
(50) $(45)\geq-{\kappa^{4}\over 4\delta^{3}}\int_{\Gamma_{k}\setminus
B_{k}}{1\over|x-R_{k}|^{4}}\geq-{3\pi\kappa^{4}\over 4\delta^{3}D_{k}}.$
Thus, the energy per particle is bounded from below uniformly in $\rho$, $K$,
and $N$, if
(51) $c\left({5944\pi\kappa^{4}\over 105\delta^{3}}+{3\pi\kappa^{4}\over
4\delta^{3}}\right)\leq{Z^{2}\over 8},$
i.e.,
(52) $Z\leq Z_{\mathrm{max}}:=3{\sqrt{1686370\pi}\over 48182}c^{\frac{3}{2}}.$
Numerically, using the physical value of the velocity of light $c=137.037$
(Bethe and Salpeter [5, p. 84]), we get
(53) $Z_{\mathrm{max}}\approx 229.9029615$
covering liberally all known elements. The result can be condensed into
###### Theorem 2.
There exists a constant $C$ such that for all $\rho\in P$ and all pairwise
different $R_{1},...,R_{K}\in\mathbb{R}^{3}$ and
$Z_{1}=...=Z_{K}\in[0,Z_{\mathrm{max}}]$
$\mathcal{E}^{\mathrm{TFWD}}_{\mathfrak{R},\mathfrak{Z}}(\rho)\geq
-C\cdot(K+N).$
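As a quick arithmetic check (ours) of (52) and (53):
```python
import math

c = 137.037  # velocity of light in Hartree units (Bethe and Salpeter)
Z_max = 3 * math.sqrt(1686370 * math.pi) / 48182 * c**1.5
print(f"Z_max = {Z_max:.7f}")  # approximately 229.9029615
```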
We conclude with two remarks:
1\. Theorem 2 holds actually for all
$\mathfrak{Z}\in[0,Z_{\mathrm{max}}]^{K}$. Our proof obviously generalizes to
that case, since the potential estimate (37) also generalizes in the obvious
way.
2\. It might be surprising that there is no requirement on a minimal velocity
of light which is independent of the value of $Z$. This is different from
other unrenormalized relativistic models like the Thomas-Fermi functional with
inhomogeneity correction $(\sqrt{\rho},|\nabla|\sqrt{\rho})$ investigated by
Lieb et al. [22, Formula (2.7)]. There we were forced to control the exchange
energy by the Thomas-Fermi term. Here, this is no longer necessary: due to the
renormalization in Engel’s and Dreizler’s derivation the exchange energy is
bounded from below by a multiple of the particle number.
## Acknowledgments
Special thanks go to Rupert Frank for many inspiring discussions, in
particular for pointing out that in addition to separation in high and low
density regimes, a localization in space near the nucleus could be useful, and
for critical reading of a substantial part of the manuscript.
Thanks go also to Hongshuo Chen for critical reading of the manuscript.
Partial support by the Deutsche Forschungsgemeinschaft (DFG, German Research
Foundation) through Germany’s Excellence Strategy EXC - 2111 \- 390814868 is
gratefully acknowledged.
## References
* [1] Rafael Benguria. The von Weizsäcker and Exchange Corrections in the Thomas-Fermi Theory. PhD thesis, Princeton, Department of Physics, June 1979.
* [2] Rafael Benguria, Haim Brezis, and Elliott H. Lieb. The Thomas-Fermi-von Weizsäcker theory of atoms and molecules. Comm. Math. Phys., 79(2):167–180, 1981.
* [3] Rafael Benguria and Elliott H. Lieb. The most negative ion in the Thomas-Fermi-von Weizsäcker theory of atoms and molecules. J. Phys. B., 18:1045–1059, 1985.
* [4] Rafael D Benguria, Michael Loss, and Heinz Siedentop. Stability of atoms and molecules in an ultrarelativistic Thomas-Fermi-Weizsäcker model. Journal of Mathematical Physics, 49(1):012302, 2008.
* [5] Hans A. Bethe and Edwin E. Salpeter. Quantum mechanics of one- and two-electron atoms. In S. Flügge, editor, Handbuch der Physik, XXXV, pages 88–436. Springer, Berlin, 1 edition, 1957.
* [6] Roch Cassanas and Heinz Siedentop. The ground-state energy of heavy atoms according to Brown and Ravenhall: Absence of relativistic effects in leading order. J. Phys. A, 39(33):10405–10414, 2006.
* [7] Subramanyan Chandrasekhar. The maximum mass of ideal white dwarfs. Astrophys. J., 74:81–82, 1931.
* [8] Hongshuo Chen, Rupert L. Frank, and Heinz Siedentop. A statistical theory of heavy atoms: Energy and excess charge. arxiv:2010.12074, October 2020.
* [9] Hongshuo Chen and Heinz Siedentop. On the excess charge of a relativistic statistical model of molecules with an inhomogeneity correction. Journal of Physics A: Mathematical and Theoretical, 53(39):395201, September 2020.
* [10] T. Ekholm and R. L. Frank. On Lieb-Thirring inequalities for Schrödinger operators with virtual level. Comm. Math. Phys., 264(3):725–740, 2006.
* [11] E. Engel and R. M. Dreizler. Field-theoretical approach to a relativistic Thomas-Fermi-Dirac-Weizsäcker model. Phys. Rev. A, 35:3607–3618, May 1987.
* [12] E. Fermi. Un metodo statistico per la determinazione di alcune proprietà dell’atomo. Atti della Reale Accademia Nazionale dei Lincei, Rendiconti, Classe di Scienze Fisiche, Matematiche e Naturali, 6(12):602–607, 1927.
* [13] E. Fermi. Eine statistische Methode zur Bestimmung einiger Eigenschaften des Atoms und ihre Anwendung auf die Theorie des periodischen Systems der Elemente. Z. Phys., 48:73–79, 1928.
* [14] Rupert L. Frank, Elliott H. Lieb, and Robert Seiringer. Stability of relativistic matter with magnetic fields for nuclear charges up to the critical value. Comm. Math. Phys., 275(2):479–489, 2007.
* [15] Rupert L. Frank, Heinz Siedentop, and Simone Warzel. The ground state energy of heavy atoms: Relativistic lowering of the leading energy correction. Comm. Math. Phys., 278(2):549–566, 2008.
* [16] Rupert L. Frank, Heinz Siedentop, and Simone Warzel. The energy of heavy atoms according to Brown and Ravenhall: the Scott correction. Doc. Math., 14:463–516, 2009.
* [17] P. Gombás. Die statistische Theorie des Atoms und ihre Anwendungen. Springer-Verlag, Wien, 1 edition, 1949.
* [18] P. Gombás. Statistische Behandlung des Atoms. In S. Flügge, editor, Handbuch der Pysik. Atome II, volume 36, pages 109–231. Springer-Verlag, Berlin, 1956.
* [19] Michael Handrek and Heinz Siedentop. On the maximal excess charge of the Chandrasekhar-Coulomb Hamiltonian in two dimension. Lett. Math. Phys., 103(8):843–849, 2013.
* [20] Elliott H. Lieb. Analysis of the Thomas-Fermi-von Weizsäcker equation for an infinite atom without electron repulsion. Comm. Math. Phys., 85(1):15–25, 1982.
* [21] Elliott H. Lieb and David A. Liberman. Numerical calculation of the Thomas-Fermi-von Weizsäcker function for an infinite atom without electron repulsion. Technical Report LA-9186-MS, Los Alamos National Laboratory, Los Alamos, New Mexico, April 1982.
* [22] Elliott H. Lieb, Michael Loss, and Heinz Siedentop. Stability of relativistic matter via Thomas-Fermi theory. Helv. Phys. Acta, 69(5/6):974–984, December 1996.
* [23] Elliott H. Lieb and Barry Simon. The Thomas-Fermi theory of atoms, molecules and solids. Advances in Math., 23(1):22–116, 1977.
* [24] Elliott H. Lieb and Horng-Tzer Yau. The stability and instability of relativistic matter. Comm. Math. Phys., 118:177–213, 1988.
* [25] N. H. March and W. H. Young. Variational methods based on the density matrix. Proc. Phys. Soc., 72:182–192, 1958.
* [26] Thomas Østergaard Sørensen. The large-$Z$ behavior of pseudorelativistic atoms. J. Math. Phys., 46(5):052307, 24, 2005.
* [27] Barry Simon. Functional Integration and Quantum Physics. Academic Press Inc. [Harcourt Brace Jovanovich Publishers], New York, 1979.
* [28] Jan Philip Solovej, Thomas Østergaard Sørensen, and Wolfgang L. Spitzer. The relativistic Scott correction for atoms and molecules. Commun. Pure Appl. Math., 63:39–118, January 2010.
* [29] L. H. Thomas. The calculation of atomic fields. Proc. Camb. Phil. Soc., 23:542–548, 1927.
* [30] Yasuo Tomishima. A Relativistic Thomas-Fermi Theory. Progress of Theoretical Physics, 42(3):437–447, September 1969.
* [31] C. F. v. Weizsäcker. Zur Theorie der Kernmassen. Z. Phys., 96:431–458, 1935.
* [32] Katsumi Yonei and Yasuo Tomishima. On the Weizsäcker correction to the Thomas-Fermi theory of the atom. Journal of the Physical Society of Japan, 20(6):1051–1057, 1965.
# Development of a physically-informed neural network interatomic potential
for tantalum
Yi-Shen Lin, Ganga P. Purja Pun and Yuri Mishin Department of Physics and
Astronomy, MSN 3F3, George Mason University, Fairfax, Virginia 22030, USA
###### Abstract
Large-scale atomistic simulations of materials heavily rely on interatomic
potentials, which predict the system energy and atomic forces. One of the
recent developments in the field is constructing interatomic potentials by
machine-learning (ML) methods. ML potentials predict the energy and forces by
numerical interpolation using a large reference database generated by quantum-
mechanical calculations. While high accuracy of interpolation can be achieved,
extrapolation to unknown atomic environments is unpredictable. The recently
proposed physically-informed neural network (PINN) model significantly
improves the transferability by combining a neural network regression with a
physics-based bond-order interatomic potential. Here, we demonstrate that
general-purpose PINN potentials can be developed for body-centered cubic (BCC)
metals. The proposed PINN potential for tantalum reproduces the reference
energies within 2.8 meV/atom. It accurately predicts a broad spectrum of
physical properties of Ta, including (but not limited to) lattice dynamics,
thermal expansion, energies of point and extended defects, the dislocation
core structure and the Peierls barrier, the melting temperature, the structure
of liquid Ta, and the liquid surface tension. The potential enables large-
scale simulations of physical and mechanical behavior of Ta with nearly first-
principles accuracy while being orders of magnitude faster. This approach can
be readily extended to other BCC metals.
Computer modeling of materials; machine learning; artificial neural network;
transition metals.
## Introduction
The critical ingredient of all large-scale molecular dynamics (MD) and Monte
Carlo (MC) simulations of materials is the classical interatomic potential,
which predicts the system energy and atomic forces as a function of atomic
positions and, for multicomponent systems, their occupation by chemical
species. Computations with interatomic potentials are much faster than
quantum-mechanical calculations explicitly treating the electrons. The
computational efficiency of interatomic potentials enables simulations on
length scales up to $\sim 10^{2}$ nm ($\sim 10^{7}$ atoms) and time scales up
to $\sim 10^{2}$ ns.
Interatomic potentials partition the total potential energy $E$ into a sum of
energies $E_{i}$ assigned to individual atoms $i$: $E=\sum_{i}E_{i}$. Each
atomic energy $E_{i}$ is expressed as a function of the local atomic positions
$\mathbf{R}_{i}\equiv(\mathbf{r}_{i1},\mathbf{r}_{i2},...,\mathbf{r}_{in_{i}})$
in the vicinity of the atom. The form of the potential function
$E_{i}=\Phi(\mathbf{R}_{i},\mathbf{p})$ (1)
must ensure the invariance of energy under rotations and translations of the
coordinate axes, and permutations of the atoms. The partitioning into atomic
energies makes the total energy computation a linear-$N$ procedure ($N$ being
the number of atoms), enabling effective parallelization by domain
decomposition. Physically, this partitioning is only justified for systems
with short-range interactions. The potential function $\Phi$ in Eq.(1)
additionally depends on a set of adjustable parameters
$\mathbf{p}=(p_{1},...,p_{m})$, which are optimized by training on a reference
database. Once the optimization is complete, the parameters are fixed once and
for all and used in all subsequent simulations. The atomic forces required for
MD simulations are obtained by differentiation of the total energy with
respect to atomic coordinates.
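To make this interface concrete, the following is a minimal sketch of Eq. (1) and of the force evaluation, using a hypothetical Lennard-Jones-type pair interaction as a stand-in for $\Phi$ (real potentials are far more elaborate, and production codes differentiate analytically rather than numerically):

```python
import numpy as np

def atomic_energy(i, pos, p):
    # Toy stand-in for Phi(R_i, p) in Eq. (1): half of a hypothetical
    # Lennard-Jones pair sum is assigned to atom i.
    eps, sigma = p
    E = 0.0
    for j in range(len(pos)):
        if j != i:
            r = np.linalg.norm(pos[j] - pos[i])
            E += 0.5 * 4.0 * eps * ((sigma / r)**12 - (sigma / r)**6)
    return E

def total_energy(pos, p):
    # E = sum_i E_i, a linear-N procedure for short-range interactions.
    return sum(atomic_energy(i, pos, p) for i in range(len(pos)))

def forces(pos, p, h=1e-5):
    # Forces as minus the (here numerical) gradient of the total energy.
    F = np.zeros_like(pos)
    for i in range(len(pos)):
        for a in range(3):
            pos[i, a] += h; Ep = total_energy(pos, p)
            pos[i, a] -= 2 * h; Em = total_energy(pos, p)
            pos[i, a] += h
            F[i, a] = -(Ep - Em) / (2 * h)
    return F

pos = np.array([[0.0, 0.0, 0.0], [0.0, 0.0, 1.1], [0.0, 1.0, 0.0]])
print(total_energy(pos, (1.0, 1.0)))
print(forces(pos, (1.0, 1.0)))
```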
Potentials can be divided into two categories according to their intended
usage. General-purpose potentials are trained to reproduce a broad spectrum of
properties that are most essential for atomistic simulations. The reference
structures must be diverse enough to represent the most typical atomic
environments occurring in simulations. Once published, the potential is used
for almost any type of simulation of the material. Special-purpose potentials
are designed for one particular type of simulation and are not expected to be
transferable to other applications.
Two major classes of potentials are the traditional potentials and the
emerging class of machine-learning (ML) potentials (Behler:2016aa, ;
Botu:2017aa, ; Deringer:2019aa, ; Zuo:2020aa, ). The traditional potentials
use a functional form of $\Phi(\mathbf{R}_{i},\mathbf{p})$ in Eq.(1) that
reflects the basic physics of interatomic bonding in the given material.
Accordingly, such functional forms are specific to particular classes of
materials. For example, the embedded atom method (EAM) (Daw83, ; Daw84, ;
Finnis84, ), the modified EAM (MEAM) (Baskes87, ), and the angular-dependent
potential (ADP) (Mishin05a, ) are designed for metallic systems. The Tersoff
(Tersoff88, ; Tersoff:1988dn, ; Tersoff:1989wj, ) and Stillinger-Weber
(Stillinger85, ) potentials were specifically developed for strongly covalent
materials such as silicon and carbon. Traditional potentials depend on a small
($\sim 10$) number of parameters, which are trained on a small database of
experimental properties and first-principles energies or forces. The accuracy
of traditional potentials is limited compared to both the first-principles
calculations and the ML potentials. However, due to the physical
underpinnings, traditional potentials often demonstrate reasonable
transferability to atomic configurations lying well outside the training
dataset. As long as the nature of chemical bonding remains the same as assumed
during the potential construction, the predicted energies and forces may not
be very accurate but at least remain physically meaningful. Most of the
traditional potentials are of the general-purpose type.
The ML potentials are based on a different philosophy. The physics of
interatomic bonding is not considered beyond the principle of locality of
atomic interactions and the invariance of energy. The potential function (1)
is a high-dimensional nonlinear regression implemented numerically. This
function depends on a large ($\sim 10^{3}$) number of parameters, which are
trained on a database containing $10^{3}$ to $10^{4}$ supercells whose
energies or forces (or both) are obtained by high-throughput density
functional theory (DFT) (hohenberg64:dft, ; kohn65:inhom_elec, ) calculations.
An ML potential computes the energy in two steps. First, the local position
vector $\mathbf{R}_{i}$ is mapped onto a set of local structural parameters
$\mathbf{G}_{i}=(G_{i1},G_{i2},...,G_{iK})$, which encode the local
environment and are invariant under rotations, translations, and relabeling of
atoms. Behler and Parrinello (Behler:2007aa, ) proposed that the size $K$ of
the descriptors $\mathbf{G}_{i}$ be the same for all atoms, even though the
number of neighbors $n_{i}$ can vary from one atom to another. At the second
step, the $K$-dimensional descriptor space is mapped onto the 1D space of
atomic energies. This mapping is implemented by a pre-trained nonlinear
regression $\mathcal{R}$. Thus, the atomic energy calculation can be
represented by the formula
$\mathbf{R}_{i}\rightarrow\mathbf{G}_{i}\overset{\mathcal{R}}{\rightarrow}E_{i}$.
Several regression methods have been employed, such as the Gaussian process
regression (Payne.HMM, ; Bartok:2010aa, ; Bartok:2013aa, ; Li:2015aa, ;
Glielmo:2017aa, ; Bartok_2018, ; Deringer:2018aa, ), the kernel ridge
regression (Botu:2015bb, ; Botu:2015aa, ; Mueller:2016aa, ), artificial neural
networks (NN) (Behler07, ; Bholoa:2007aa, ; Behler:2008aa, ; Sanville08, ;
Eshet2010, ; Handley:2010aa, ; Behler:2011aa, ; Behler:2011ab, ; Sosso2012, ;
Behler:2015aa, ; Behler:2016aa, ; Schutt:148aa, ; Imbalzano:2018aa, ), the
spectral neighbor analysis (Thompson:2015aa, ; Chen:2017ab, ; Li:2018aa, ),
and the moment tensor potentials (Shapeev:2016aa, ).
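For concreteness, a minimal sketch of the first step, $\mathbf{R}_{i}\rightarrow\mathbf{G}_{i}$, using Behler-Parrinello-type radial symmetry functions as one common (here assumed) choice of descriptor; the $\eta$ values and cutoff are illustrative only:

```python
import numpy as np

def radial_descriptors(pos, i, etas, r_c=4.8):
    # Fixed-size descriptor G_i: invariant under rotations, translations,
    # and relabeling of atoms; len(etas) is independent of the number of
    # neighbors n_i, as required by the Behler-Parrinello construction.
    G = np.zeros(len(etas))
    for j in range(len(pos)):
        if j == i:
            continue
        r = np.linalg.norm(pos[j] - pos[i])
        if r < r_c:
            fc = 0.5 * (np.cos(np.pi * r / r_c) + 1.0)  # smooth cutoff
            G += np.exp(-np.asarray(etas) * r**2) * fc
    return G

pos = np.random.default_rng(1).normal(scale=2.0, size=(20, 3))
print(radial_descriptors(pos, 0, etas=[0.1, 0.5, 1.0, 2.0]))
```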
The development of an ML potential is a complex high-dimensional problem,
which is solved by applying ML methods for the reference database generation,
training, and error quantification. With the large number of parameters
available, an ML potential can be trained to reproduce the reference database
within a few meV/atom, which is the intrinsic uncertainty of DFT calculations.
Since the potential format is independent of the nature of chemical bonding,
ML potentials are not specific to any particular class of materials. The same
procedure is applied to develop a potential for a metal, a nonmetal, or a mixed-
bonding system. However, the high accuracy and flexibility come at a price: ML
potentials are effective numerical interpolators but poor extrapolators. Since
the energy and force predictions are not guided by any physics, extrapolation
outside the domain of known environments is unpredictable and often
unphysical. Nearly all ML potentials are of the special-purpose type; development of
general-purpose ML potentials is an extremely challenging task (Bartok_2018, ;
Pun:2020aa, ).
The recently proposed physically-informed neural network (PINN) method
(PINN-1, ) takes the best from both worlds by integrating an ML regression
with a physics-based interatomic potential. Instead of directly predicting the
atomic energy, the regression predicts the best set of potential parameters
$\mathbf{p}_{i}$ appropriate to the given environment. The potential
$\Phi(\mathbf{R}_{i},\mathbf{p}_{i})$ then computes the atomic energy using
the predicted parameter set $\mathbf{p}_{i}$. Thus, the formula of the method
is (Fig. 1a)
$\mathbf{R}_{i}\rightarrow\mathbf{G}_{i}\overset{\mathcal{R}}{\rightarrow}\mathbf{p}_{i}\overset{\Phi}{\rightarrow}E_{i}.$
(2)
The method drastically improves transferability to new environments because
the extrapolation is now guided by the physics embedded in the interatomic
potential rather than a purely mathematical algorithm. This general idea can
be realized with any regression method and any meaningful interatomic
potential. PINN is a particular realization of this approach using a NN
regression and an analytical bond-order potential (BOP) (PINN-1, ). The latter
has a functional form general enough to work for both metals and nonmetals.
Specifically, the potential represents pairwise repulsions and attractions of
the atoms, the bond-order effect (bond weakening with the number of bonds),
the angular dependence of the bond energies, the screening of bonds by
surrounding atoms, and the promotion energy. The interactions extend to
neighbors within a 0.5 to 0.6 nm distance with a smooth cutoff. More details
about the BOP potential can be found in the Methods section.
The original PINN formulation (PINN-1, ) has been recently improved by
introducing a global BOP potential trained on the entire reference database.
Since the optimized parameter set ($\mathbf{p}^{0}$) is small, the error of
fitting is relatively large ($\sim 10^{2}$ meV/atom). A pre-trained NN then
adds to $\mathbf{p}^{0}$ a set of local “perturbations”
$\delta\mathbf{p}_{i}=(\delta p_{i1},...,\delta p_{im})$ to obtain the final
parameter set $\mathbf{p}_{i}=\mathbf{p}^{0}+\delta\mathbf{p}_{i}$. The latter
is used to predict the atomic energy
$E_{i}=\Phi(\mathbf{R}_{i},\mathbf{p}^{0}+\delta\mathbf{p}_{i})$. In this
scheme, the energy predictions are largely guided by the global BOP potential,
which provides a smooth and physically meaningful extrapolation outside the
training domain. The magnitudes of the perturbations are kept as small as
possible. The same DFT level of accuracy is achieved during the training as in
the original PINN formulation (PINN-1, ), no computational overhead is
incurred, but the transferability is improved significantly.
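Schematically, the modified evaluation of Eq. (2) can be sketched as follows; `descriptor`, `network`, and `bop_energy` are placeholders for the components described above, not actual implementations:

```python
import numpy as np

def pinn_energy(pos, p0, descriptor, network, bop_energy):
    # E = sum_i Phi(R_i, p0 + delta_p_i): the pre-trained NN supplies only
    # small local corrections delta_p_i to the globally fitted BOP set p0,
    # so extrapolation stays close to the physics-based global potential.
    E = 0.0
    for i in range(len(pos)):
        G_i = descriptor(pos, i)      # R_i -> G_i
        delta_p_i = network(G_i)      # G_i -> delta_p_i
        E += bop_energy(pos, i, p0 + delta_p_i)
    return E

# Trivial demo with stand-in components (all hypothetical):
pos = np.zeros((4, 3))
print(pinn_energy(pos, np.zeros(8),
                  descriptor=lambda pos, i: np.zeros(40),
                  network=lambda G: 0.01 * np.ones(8),
                  bop_energy=lambda pos, i, p: -1.0))
```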
The modified PINN method has been recently applied to develop a general-
purpose ML potential for the face-centered cubic (FCC) Al (Pun:2020aa, ).
Here, we demonstrate that the method can also generate highly accurate and
transferable general-purpose potentials for body-centered cubic (BCC) metals.
We chose tantalum as a representative transition BCC metal, in which the
interatomic bonding has a significant directional component due to the
$d$-electrons. In addition to many structural applications of Ta and Ta-based
alloys (Buckman:2000aa, ; Sungail:2020aa, ), porous Ta is a promising
biomaterial for orthopedic applications due to its excellent biocompatibility
and a favorable combination of physical and mechanical properties
(Balla:2010aa, ). This work paves the way for the development of PINN
potentials for other BCC metals in the future.
## Results
The potential development. The reference database has been generated by DFT
calculations employing the Vienna Ab Initio Simulation Package (VASP)
(Kresse1996, ; Kresse1999, ) (see the Methods section for details). The
database consists of the energies of 3,552 supercells containing from 1 to 250
atoms. The supercells represent energy-volume relations for 8 crystal
structures of Ta, 5 uniform deformation paths between pairs of structures,
vacancies, interstitials, surfaces with low-index orientations, 4 symmetrical
tilt grain boundaries, $\gamma$-surfaces on the (110) and (211) fault planes,
a $\tfrac{1}{2}$[111] screw dislocation, liquid Ta, and several isolated
clusters containing from 2 to 51 atoms. Some of the supercells contain static
atomic configurations. However, most are snapshots of _ab initio_ MD
simulations at different densities and at temperatures ranging from 293 K to
3300 K. The BCC structure was sampled in the greatest detail, including a wide
range of isotropic and uniaxial deformations. The database represents a total
of 161,737 highly diverse atomic environments occurring in typical atomistic
simulations. More detailed information about the database can be found in
Supplementary Tables 1 and 2. About 90% of the supercells, representing
136,177 environments, were randomly selected for training. The remaining 10%
were set aside for cross-validation.
The potential training process involves several hyper-parameters describing
the NN architecture, the number and types of the local structural parameters,
and the regularization coefficients in the loss function. We tested many
combinations of these parameters before choosing the final version. We
emphasize that, for almost any choice of the hyper-parameters, the potential
could be trained to an error of about 3 meV/atom. However, the predicted
properties of Ta were different. Even different random initializations of the
NN’s weights and biases resulted in slightly different combinations of Ta
properties. Hundreds of initialization-training cycles with automated property
testing had to be performed before a potential that we deemed the best was
selected. The hyper-parameters and the properties reported below refer to the
final version of the potential.
As the local structural descriptors, we chose products of a radial function
and an angular function. The radial function is a Gaussian peak of width
$\sigma=1$ Å centered at radius $r_{0}$ and smoothly truncated at a cutoff
$r_{c}$ = 4.8 Å within the range $d$ = 1.5 Å (see the Methods section). The
angular part is a Legendre polynomial $P_{l}(\cos\theta_{ijk})$, where
$\theta_{ijk}$ is the angle between the bonds $ij$ and $ik$. The size of the
descriptors $\mathbf{G}_{i}$ is $K=40$, corresponding to the set of Gaussian
positions $r_{0}=\\{2.4,2.8,3.0,3.2,3.4,3.6,4.0,4.4\\}$ Å and the Legendre
polynomials of orders $l=0,1,2,4$ and $6$.
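A sketch of these descriptors is given below. The reduction over neighbor pairs $(j,k)$, with the radial factor applied to both bonds, is our assumption of a plausible convention; the text above fixes only the radial and angular ingredients:

```python
import numpy as np
from numpy.polynomial.legendre import legval

def ta_descriptors(pos, i, r0_list=(2.4, 2.8, 3.0, 3.2, 3.4, 3.6, 4.0, 4.4),
                   l_list=(0, 1, 2, 4, 6), sigma=1.0, r_c=4.8, d=1.5):
    # K = 8 x 5 = 40 descriptors: Gaussian radial peak at r0 (width sigma),
    # truncated by the cutoff function of Eq. (4), times the Legendre
    # polynomial P_l(cos theta_ijk) of the bond angle.
    def fc(r):
        return (r - r_c)**4 / (d**4 + (r - r_c)**4) if r < r_c else 0.0
    def g(r, r0):
        return np.exp(-(r - r0)**2 / (2.0 * sigma**2)) * fc(r)

    vecs = [pos[j] - pos[i] for j in range(len(pos))
            if j != i and np.linalg.norm(pos[j] - pos[i]) < r_c]
    G = np.zeros((len(r0_list), len(l_list)))
    for rj in vecs:
        for rk in vecs:
            cos_t = rj @ rk / (np.linalg.norm(rj) * np.linalg.norm(rk))
            for a, r0 in enumerate(r0_list):
                w = g(np.linalg.norm(rj), r0) * g(np.linalg.norm(rk), r0)
                for b, l in enumerate(l_list):
                    c = np.zeros(l + 1); c[l] = 1.0   # coefficients of P_l
                    G[a, b] += w * legval(cos_t, c)
    return G.ravel()  # the K = 40 components of G_i

pos = np.random.default_rng(2).uniform(0.0, 6.0, size=(16, 3))
print(ta_descriptors(pos, 0).shape)  # (40,)
```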
The NN has a feed-forward $40\times 32\times 32\times 8$ architecture with 32
nodes in each of the hidden layers and 8 nodes in the output layer. The latter
corresponds to the $m=8$ perturbations to the BOP parameters delivered by the
NN. The total number of weights and biases in the NN is 2,632. The loss
function represents the mean-square deviation of the predicted supercell
energies from the DFT energies, augmented by two regularization terms as
detailed in the Methods section. The NN’s weights and biases were optimized by
the L-BFGS unconstrained minimization algorithm (Fletcher:1987aa, ) to reach
the root-mean-square error (RMSE) of 2.80 meV/atom. Fig. 1b demonstrates the
accurate and unbiased agreement between the PINN and DFT energies over a 13
eV/atom wide energy interval. Although atomic forces were not used during the
training, the forces predicted by the potential were compared with DFT forces
after the training. A strong correlation is observed (Supplementary Fig. 1)
with the RMSE of about 0.3 eV/Å. Note that the comparison includes forces as
large as 20 eV/Å.
A 10-fold cross-validation was performed to verify that the potential does not
overfit the database. At each rotation, the set-aside dataset mentioned above replaced a
similar number of supercells in the training set. The training was repeated,
and the RMSE of the potential was computed on the validation dataset unseen
during the training. The RMSE of validation averaged over the 10 rotations is
2.89 meV/atom.
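The rotation scheme can be sketched as follows; `train_and_rmse` stands in for the full training pipeline and is purely illustrative:

```python
import numpy as np

def cross_validate(supercells, train_and_rmse, n_folds=10, seed=0):
    # 10-fold cross-validation at the supercell level: each fold is held
    # out once, the potential is retrained on the rest, and the RMSE on
    # the unseen fold is averaged over the rotations.
    rng = np.random.default_rng(seed)
    folds = np.array_split(rng.permutation(len(supercells)), n_folds)
    rmses = []
    for f in folds:
        held_out = set(f.tolist())
        train = [s for k, s in enumerate(supercells) if k not in held_out]
        valid = [supercells[k] for k in f]
        rmses.append(train_and_rmse(train, valid))
    return np.mean(rmses)

# Dummy pipeline returning a constant error, just to exercise the loop:
print(cross_validate(list(range(3552)), lambda tr, va: 2.89))
```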
Properties of BCC Ta. The Ta properties predicted by the PINN potential were
computed mainly with the ParaGrandMC (PGMC) code (ParaGrandMC, ; Purja-
Pun:2015aa, ; Yamakov:2015aa, ) and compared with DFT calculations performed
in this work. For consistency, the property calculations used the same density
functional and other DFT parameters as for the reference database generation.
Table 1 summarizes the basic properties of BCC Ta. The potential predicts the
equilibrium lattice constant and elastic moduli in good agreement with DFT
calculations. The phonon dispersion relations also compare well with DFT
calculations and the experimental data (Figure 2). Fig. 3 demonstrates the
performance of the potential under strong lattice deformations. The PINN
predictions closely match the DFT stress-strain relations up to 40% of the
uniaxial compression in the [110] direction (Fig. 3a). Equally good agreement
is found for compression along the [100] and [111] axes (Supplementary Fig.
2). This agreement makes the potential suitable for atomistic simulations of
shock deformation of Ta. As another test, the BCC lattice was sheared in the
$[11\overline{1}]$ direction parallel to the $(112)$ plane, transforming the
BCC lattice to itself along the twinning or anti-twinning deformation paths.
The energy variation along both paths accurately follows the DFT calculations
(Fig. 3b), which demonstrates the ability of the potential to model
deformation twinning in Ta.
The potential accurately reproduces the DFT vacancy formation and migration
energies (Table 1). We tested five different configurations of self-
interstitial defects and found excellent agreement between the PINN and DFT
energies in all cases. Both PINN and DFT predict the [111] dumbbell to be the
lowest energy state. Due to its ability to describe the point-defect
properties on the DFT level of accuracy, the potential can be safely used to
model diffusion-controlled processes and radiation damage in Ta. The surface
energies predicted by the potential are close to the DFT values (Table 1).
Furthermore, Fig. 4 and Supplementary Fig. 3 show that the potential
faithfully reproduces the surface relaxations for all four crystallographic
orientations tested here. Surface properties of Ta are important for catalytic
applications and the biocompatibility of porous structures (Balla:2010aa, ).
Mechanical behavior of Ta is controlled by the $\frac{1}{2}[111]$ screw
dislocations, which have low mobility caused by the non-planar core structure
(Vitek04a, ; CaiBCLY04, ; Lin2014, ). In turn, the core structure depends on
the shape of the $\gamma$-surface, representing the energy of a generalized
stacking fault parallel to a given crystal plane as a function of the
displacement vector. For the screw dislocations in BCC metals, the relevant
fault planes are (110) and (211). DFT $\gamma$-surfaces for these two planes
were included in the reference database during the potential development (Fig.
5a,b). The potential accurately reproduces both $\gamma$-surfaces, as
illustrated by select cross-sections shown in Fig. 5c,d. The cross-sections
along the [111] direction are especially important as they determine the
lowest-energy core structure. The computed cross-sections predict a non-degenerate
core. To test this prediction, we directly computed the relaxed
dislocation core structure using both the potential and the DFT (see the
Methods section). The Nye tensor plot (Hartley05, ) in Fig. 6a confirms that
the core computed with the PINN potential indeed has a non-degenerate structure.
The Nye tensor plot obtained by DFT calculations is not shown as it looks
virtually identical to Fig. 6a. Non-degenerate core structures were also found
in previous DFT calculations for BCC transition metals (Weinberger:2013aa, ).
By contrast, most of the traditional potentials incorrectly predict a
degenerate core with a different symmetry. Many of them also predict a
spurious local minimum on the $\gamma$-surface (Moller:2018aa, ).
The dislocation glide mobility depends on the Peierls barrier, defined as the
energy barrier that a straight dislocation must overcome to move between two
adjacent equilibrium positions. We computed the Peierls barrier of the
dislocation using the same supercell setup as in the dislocation core
calculations. The minimum energy path between the fully-relaxed initial and
final dislocation positions was obtained by the nudged elastic band (NEB)
method (Jonsson98, ; HenkelmanJ00, ). The PINN and DFT calculations give
nearly identical results for the energy along the transition path, showing a
single maximum in the middle (Fig. 6b). The height of the barrier, normalized
by the Burgers vector’s magnitude, is in close agreement with previous DFT
calculations (Weinberger:2013aa, ).
Alternate crystal structures of Ta. In addition to the ground-state BCC
structure, the reference database contained the equations of state (energy
versus atomic volume) for seven more crystal structures. The PINN potential
accurately reproduces the DFT equations of state for all seven structures,
including the low-energy A15 and $\beta$-Ta structures that compete with the
BCC structure for the ground state (Fig. 7). Furthermore, the potential
continues to predict physically reasonable behavior well outside the volume
range covered by the DFT points. The energy continues to rise monotonically
under strong compression and smoothly approaches zero at the cutoff distance
between atoms (Supplementary Fig. 4). This behavior illustrates the physics-
based transferability of the PINN model guided by the BOP potential.
Table 2 shows the results of additional tests. The potential predicts the
ground state energies of a dimer, a trimer, and a tetrahedron in close
agreement with DFT calculations, even though these clusters were represented
in the reference database by a small number of supercells. As another
transferability test, we performed DFT and PINN calculations of equilibrium
energies for four relatively open crystal structures, including the A7
structure typical of nonmetals. Although the DFT energies of these structures
were not included in the reference database, they are predicted by the
potential reasonably well (Table 2).
To sample atomic configurations away from the stable and metastable crystal
structures, we included four volume-conserving deformation paths connecting
such structures. The tetragonal, trigonal, orthorhombic, and hexagonal paths
continuously transform the BCC structure to FCC, HCP, SC, or body-centered
tetragonal (BCT) structures (Fig. 8). Each path is characterized by a single
deformation parameter $p$ defined in Ref. (Lin2014, ). The hexagonal path
combines a homogeneous deformation with a simultaneous shuffling of alternate
close-packed atomic planes, while the remaining paths are fully homogeneous.
Fig. 8 shows that the PINN potential accurately reproduces the energy along
all four paths.
The liquid phase and solid-liquid coexistence. We computed the liquid Ta
structure by NVT MD simulations using the PINN potential and DFT at three
temperatures: below, above, and near the melting point (see the Methods
section for computational details). The potential perfectly reproduces the
radial distribution function and the bond-angle distribution, as demonstrated
in Fig. 9 for the temperature of 3500 K and in Supplementary Figs. 5 and 6 for
two other temperatures.
The melting point calculation used the solid-liquid phase coexistence method
(Morris02, ; Pun09b, ; Purja-Pun:2015aa, ; Howells:2018aa, ). A simulation
block containing the solid and liquid phases separated by a planar interface
was coupled to a thermostat. In the canonical ensemble, one of the phases
always grows at the expense of the other, depending on the temperature. MD
simulations were performed at several temperatures around the expected melting
point, and the energy increase or decrease rate was measured at each
temperature (see the Methods section). Interpolation to the zero rate gave us
the melting temperature $T_{m}$ at which the phases coexist in equilibrium
(Supplementary Fig. 7). The PINN potential predicts the melting temperature of
3000$\pm$6 K, which is reasonable but below the experimental value (3293 K).
Perfect agreement with experiment was not expected since the potential was
trained on DFT data without any experimental input. The only DFT calculation
that we could find in the literature gave $T_{m}=3085\pm 130$ K
(Taioli:2007aa, ), which matches our result within the statistical
uncertainty.
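The interpolation step can be sketched as follows; the rates below are invented numbers for illustration, not the measured data:

```python
import numpy as np

# Hypothetical steady-state energy rates (eV/ps) at temperatures around
# the expected melting point: negative = solid grows, positive = melts.
T = np.array([2950.0, 2975.0, 3000.0, 3025.0, 3050.0])
rate = np.array([-0.8, -0.4, 0.0, 0.5, 0.9])

slope, intercept = np.polyfit(T, rate, 1)  # linear fit of rate vs. T
T_m = -intercept / slope                    # temperature of zero rate
print(f"T_m = {T_m:.0f} K")
```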
The liquid surface tension of Ta was computed by the capillary fluctuation
method (see Methods). The number obtained, $\gamma=1.69$ J m$^{-2}$, is comparable
to the experimental values of 2.07 J m$^{-2}$ (from equilibrium shape of molten
samples in microgravity) (Miiller:1993aa, ) and 2.15 J m$^{-2}$ from electrostatic
levitation (Paradis:2005aa, ). The experimental measurements were performed on
relatively large (a few mm) droplets and their accuracy was limited due to
many factors, such as temperature control, surface contamination, and
evaporation. The accurate value of $\gamma$ reported here is one of the
missing material parameters in the models of 3D printing (additive
manufacturing) of Ta and Ta-based alloys (Sungail:2020aa, ).
## Discussion
After the initial publication (PINN-1, ), the PINN model has been modified by
reducing the role of the NN regression to only predicting local corrections to
a global BOP potential. This step has helped further improve the
transferability of PINN potentials to atomic configurations lying outside the
reference database. As with any model, a PINN potential eventually fails when
taken too far away from the familiar territory. However, the physics-guided
extrapolation significantly expands the potential’s applicability domain
compared with purely mathematical ML models. The proposed integration of ML
with a physics-based interatomic potential preserves the DFT level of the
training accuracy without increasing the number of fitting parameters. As
recently demonstrated (Pun:2020aa, ), the computational overhead due to
incorporating the BOP is close to 25%, which is marginal given that the
potential is orders of magnitude faster than straight DFT calculations.
The improved transferability of the PINN model enables the development of
general-purpose ML potentials intended for almost any type of MD or MC
simulations, similar to the off-the-shelf traditional potentials. This
capability has already been demonstrated by developing a general-purpose PINN
potential for FCC Al (Pun:2020aa, ). The present work has made the next step
by extending the PINN model to BCC metals. While FCC metals can be described
by EAM, MEAM, or ADP potentials reasonably well, BCC transition metals have
been notoriously challenging. For example, most traditional potentials fail to
predict the correct dislocation core structure and the Peierls barrier in BCC
metals even qualitatively. The PINN Ta potential developed here reproduces
both in excellent agreement with DFT calculations. The potential has been
trained on a highly diverse DFT database and reproduces a broad spectrum of
physical properties of Ta with DFT accuracy. Extrapolation of the potential to
unknown atomic configurations has also been demonstrated. We foresee that this
potential will be widely used in atomistic simulations of Ta, especially
simulations of mechanical behavior under normal conditions and shock
deformation. Using the same approach, PINN potentials for other BCC metals can
be developed in the future.
The PINN method is undergoing rapid development. PINN potentials for other
metallic and nonmetallic elements are now being constructed. A multi-component
version of PINN has been developed and will be reported in forthcoming
publications, together with several PINN potentials for binary systems. The
PGMC MD/MC code (ParaGrandMC, ; Purja-Pun:2015aa, ; Yamakov:2015aa, ) now
works with multi-component PINN potentials. PINN potentials will soon be
incorporated in the Large-scale Atomic/Molecular Massively Parallel Simulator
(LAMMPS) (Plimpton95, ).
## Methods
DFT calculations. We used the generalized gradient approximation with the
Perdew-Burke-Ernzerhof (PBE) density functional (PerdewCVJPSF92, ; PerdewBE96,
) implemented in VASP (Kresse1996, ; Kresse1999, ). The calculations were
performed with the kinetic energy cutoff of 410 eV and a Methfessel-Paxton
smearing of order 1 with the smearing width of 0.1 eV. Monkhorst-Pack
$k$-point meshes were used to sample the Brillouin zone, with the number of
grid points chosen to ensure the energy convergence to around 1 meV/atom.
Before the training, all supercell energies (per atom) were shifted by a
constant such that the DFT energy of the equilibrium BCC structure is equal to
the negative experimental cohesive energy of BCC Ta.
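For reference, a Monkhorst-Pack grid in fractional reciprocal coordinates follows from the standard formula $u_{r}=(2r-q-1)/(2q)$, $r=1,\dots,q$, along each axis (a sketch; VASP generates and symmetry-reduces these meshes internally):

```python
import numpy as np
from itertools import product

def monkhorst_pack(q1, q2, q3):
    # Fractional k-points u_r = (2r - q - 1) / (2q), r = 1..q, per axis.
    axes = [np.array([(2 * r - q - 1) / (2 * q) for r in range(1, q + 1)])
            for q in (q1, q2, q3)]
    return np.array(list(product(*axes)))

kpts = monkhorst_pack(6, 6, 6)  # e.g., the 6x6x6 grid used for phonons
print(kpts.shape)               # (216, 3), before symmetry reduction
```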
The phonon calculations utilized the phonopy package (phonopy, ) with a
$6\times 6\times 6$ primitive BCC supercell (216 atoms) and a $6\times 6\times
6$ Monkhorst-Pack $k$-point grid. It was verified that, with this grid, the
supercell energy was converged to 1 meV/atom, and the phonon dispersion curves
did not show noticeable changes with the number of $k$-points. For point
defects, we used a cubic supercell of 128$\pm$1 atoms with full relaxation of
both atomic positions and the volume. The vacancy migration energy was
calculated by the NEB method (Jonsson98, ; HenkelmanJ00, ) with fully relaxed
supercells as the initial and final vacancy configurations. For computing the
surface energies and surface relaxations, we used stacks of 24 atomic layers
with a chosen crystallographic orientation. The $\gamma$-surface calculations
employed similar 24-layer stacks. A generalized stacking fault was formed in
the middle of the supercell by displacing the upper half of the layers
relative to the lower half by small increments. After each increment, the
energy was minimized by only allowing relaxations perpendicular to the fault
plane.
To find the dislocation core structure, we created a dipole configuration
consisting of 231 atoms with periodic boundary conditions in all three
dimensions as described in Ref. (Li:2004aa, ). By contrast to a single
dislocation, this dipole configuration avoids the possible incompatibility
with periodic boundary conditions. The two dislocations were first introduced
into the supercell by displacing the atoms according to the anisotropic
elasticity solution for the strain field. The atomic positions were then
relaxed before examining the core structure.
The liquid structure calculations utilized a 250-atom cubic supercell with the
experimental liquid density (Vinet:1993aa, ) near the melting temperature. The
initial BCC structure was melted and equilibrated at the temperature of 5,000
K before quenching it to the target temperature with the supercell box fixed.
After equilibration at the target temperature, an NVT MD run was performed in
which 100 snapshots were saved at 60 fs intervals. The saved configurations
were used to compute the average structural properties of the liquid following
the procedure outlined in Ref. (Vinet:1993aa, ). For the bond-angle
distribution, we only considered atoms within a sphere corresponding to the
first minimum of the radial distribution function.
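The radial distribution function itself can be accumulated from the saved snapshots as in the following sketch (cubic box, minimum-image convention; single-snapshot version for brevity):

```python
import numpy as np

def rdf(pos, L, n_bins=200, r_max=None):
    # g(r) for one snapshot of N atoms in a cubic box of side L with
    # periodic boundaries; in practice this is averaged over snapshots.
    N = len(pos)
    r_max = r_max or L / 2.0
    edges = np.linspace(0.0, r_max, n_bins + 1)
    hist = np.zeros(n_bins)
    for i in range(N - 1):
        d = pos[i + 1:] - pos[i]
        d -= L * np.round(d / L)                  # minimum image
        r = np.linalg.norm(d, axis=1)
        hist += np.histogram(r[r < r_max], bins=edges)[0]
    shell = 4.0 * np.pi / 3.0 * (edges[1:]**3 - edges[:-1]**3)
    g = 2.0 * hist / (N * (N / L**3) * shell)     # normalize to ideal gas
    return 0.5 * (edges[1:] + edges[:-1]), g

# Sanity check: uniformly random positions give g(r) close to 1.
rng = np.random.default_rng(0)
r_mid, g = rdf(rng.uniform(0.0, 18.0, size=(250, 3)), L=18.0)
print(g[50:55])
```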
The BOP potential. The current version of the BOP potential underlying the
PINN model was described elsewhere (Pun:2020aa, ) and is only briefly reviewed
here for completeness. The energy assigned to atom $i$ is given by
$E_{i}=\dfrac{1}{2}\sum_{j\neq
i}\left[e^{A_{i}-\alpha_{i}r_{ij}}-S_{ij}b_{ij}e^{B_{i}-\beta_{i}r_{ij}}\right]f_{c}(r_{ij},d,r_{c})+E_{i}^{(p)},$
(3)
where the summation is over neighbors $j$ separated from the atom $i$ by a
distance $r_{ij}$. The interactions are truncated at the cutoff distance
$r_{c}$ using the cutoff function
$f_{c}(r,d,r_{c})=\begin{cases}\dfrac{(r-r_{c})^{4}}{d^{4}+(r-r_{c})^{4}},\enskip&r\leq
r_{c}\\\ 0,\enskip&r\geq r_{c},\end{cases}$ (4)
where the parameter $d$ controls the truncation smoothness. The first term in
the square brackets in Eq.(3) describes the repulsion between atoms at short
separations. The second term describes chemical bonding and captures the bond-
order effect through the coefficient
$b_{ij}=(1+z_{ij})^{-1/2},$ (5)
where $z_{ij}$ represents the number of bonds $ik$ formed by the atom $i$. The
bonds are counted with weights depending on the bond angle $\theta_{ijk}$:
$z_{ij}=\sum_{k\neq
i,j}a_{i}S_{ik}\left(\cos\theta_{ijk}-h_{i}\right)^{2}f_{c}(r_{ik},d,r_{c}).$
(6)
The angular dependence accounts for the directional character of the bonds. All
chemical bonds are screened by the screening factor $S_{ij}$ defined by
$S_{ij}=\prod_{k\neq i,j}S_{ijk},$ (7)
where the partial screening factors $S_{ijk}$ represent the contributions of
individual atoms $k$ to the screening of the bond $i$-$j$:
$S_{ijk}=1-f_{c}(r_{ik}+r_{jk}-r_{ij},d,r_{c})e^{-\lambda_{i}(r_{ik}+r_{jk}-r_{ij})},$
(8)
where $\lambda_{i}$ is the inverse of the screening length. The closer the
atom $k$ is to the bond $i$-$j$, the smaller $S_{ijk}$ is, and thus the larger
its contribution to the screening. Finally, the on-site energy
$E_{i}^{(p)}=-\sigma_{i}\left({\displaystyle\sum_{j\neq
i}S_{ij}b_{ij}}f_{c}(r_{ij})\right)^{1/2}$ (9)
represents the embedding energy in metals and the promotion energy for
covalent bonding.
The potential functions depend on 10 parameters, 8 of which ($A$, $B$,
$\alpha$, $\beta$, $a$, $h$, $\lambda$ and $\sigma$) are locally adjusted by
the NN while $d$ and $r_{c}$ are treated as global.
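A direct transcription of Eqs. (3)-(9) for a small cluster reads as follows; the parameter values are placeholders, not the trained ones, and no attempt at efficiency is made:

```python
import numpy as np

# Hypothetical BOP parameters (A, B, alpha, beta, a, h, lambda, sigma);
# the trained values are produced by the NN, not by this sketch.
P = dict(A=1.5, B=2.0, alpha=2.0, beta=1.5, a=1.0, h=-0.3, lam=1.0, sig=1.0)
r_c, d = 4.8, 1.5  # global cutoff parameters

def fc(r):
    # Cutoff function of Eq. (4).
    return (r - r_c)**4 / (d**4 + (r - r_c)**4) if r < r_c else 0.0

def bop_atom_energy(pos, i, p=P):
    # Energy of atom i according to Eqs. (3)-(9).
    N = len(pos)
    r = lambda a, b: np.linalg.norm(pos[b] - pos[a])

    def S(a, b):  # screening factor, Eqs. (7)-(8)
        s = 1.0
        for k in range(N):
            if k not in (a, b):
                u = r(a, k) + r(b, k) - r(a, b)
                s *= 1.0 - fc(u) * np.exp(-p['lam'] * u)
        return s

    def z(a, b):  # weighted bond count, Eq. (6)
        zz = 0.0
        for k in range(N):
            if k not in (a, b):
                cos_t = (pos[b] - pos[a]) @ (pos[k] - pos[a]) / (r(a, b) * r(a, k))
                zz += p['a'] * S(a, k) * (cos_t - p['h'])**2 * fc(r(a, k))
        return zz

    E, bond_sum = 0.0, 0.0
    for j in range(N):
        if j == i:
            continue
        rij = r(i, j)
        b_ij = (1.0 + z(i, j))**-0.5       # bond order, Eq. (5)
        S_ij = S(i, j)
        E += 0.5 * (np.exp(p['A'] - p['alpha'] * rij)
                    - S_ij * b_ij * np.exp(p['B'] - p['beta'] * rij)) * fc(rij)
        bond_sum += S_ij * b_ij * fc(rij)
    return E - p['sig'] * np.sqrt(bond_sum)  # on-site term, Eq. (9)

pos = np.array([[0.0, 0.0, 0.0], [2.8, 0.0, 0.0],
                [1.4, 2.4, 0.0], [1.4, 0.8, 2.2]])
print(sum(bop_atom_energy(pos, i) for i in range(len(pos))))
```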
The training procedure. The NN weights and biases were initialized by random
numbers in the interval [-0.3,0.3]. The goal of the training was to minimize
the loss function
$\displaystyle\mathcal{E}$ $\displaystyle=$
$\displaystyle\dfrac{1}{N}\sum_{s}\left(\dfrac{E^{s}-E_{\textrm{DFT}}^{s}}{N_{s}}\right)^{2}+\tau_{1}\dfrac{1}{N_{p}}\left(\sum_{\epsilon\kappa}\left|w_{\epsilon\kappa}\right|^{2}+\sum_{\nu}\left|b_{\nu}\right|^{2}\right)$
(10) $\displaystyle+$
$\displaystyle\tau_{2}\dfrac{1}{N_{a}m}\sum_{s}\sum_{i_{s}=1}^{N_{s}}\sum_{n=1}^{m}\left|p_{i_{s}n}-\overline{p}_{i_{s}n}\right|^{2}$
with respect to the weights $w_{\epsilon\kappa}$ and biases $b_{\nu}$. In
Eq.(10), $E^{s}$ is the total energy of supercell $s$ predicted by the
potential, $E_{\textrm{DFT}}^{s}$ is the respective DFT energy, $N_{s}$ is the
number of atoms in the supercell $s$, $N$ is the number of supercells in the
reference database, $N_{a}$ is the total number of atoms in all supercells,
$N_{p}$ is the total number of NN parameters, and $\tau_{1}$ and $\tau_{2}$
are adjustable coefficients. The second and third terms are added for
regularization purposes. The second term ensures that the network parameters
remain reasonably small for smooth interpolation. The third term controls the
variations of the BOP parameters relative to their values
$\overline{p}_{i_{s}n}$ averaged over the training database. The values
$\tau_{1}$ = $10^{-10}$ and $\tau_{2}$ = $10^{-10}$ were chosen from previous
experience (Pun:2020aa, ). The L-BFGS algorithm implementing the minimization
requires the knowledge of partial derivatives of $\mathcal{E}$ with respect to
the NN parameters, which were derived analytically and implemented in the
training code. Since the loss function has many local minima, the training had
to be repeated multiple times, starting from different initial conditions. Due
to the large size of the optimization problem, the training process heavily
relies on massive parallel computations.
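In code, the loss of Eq. (10) amounts to the following sketch (the array shapes are our assumptions; the actual training code also supplies the analytic gradients needed by L-BFGS):

```python
import numpy as np

def loss(E_pred, E_dft, n_atoms, weights, biases, p_local,
         tau1=1e-10, tau2=1e-10):
    # Eq. (10): E_pred, E_dft, n_atoms are arrays over the N supercells;
    # p_local holds the m BOP parameters of every atom in the database.
    err = np.mean(((E_pred - E_dft) / n_atoms)**2)
    # Regularizer 1: keep NN weights/biases small for smooth interpolation.
    n_p = weights.size + biases.size
    reg1 = tau1 / n_p * (np.sum(weights**2) + np.sum(biases**2))
    # Regularizer 2: keep local BOP parameters near their database average.
    reg2 = tau2 / p_local.size * np.sum((p_local - p_local.mean(axis=0))**2)
    return err + reg1 + reg2

rng = np.random.default_rng(0)
print(loss(rng.normal(size=3552), rng.normal(size=3552),
           np.full(3552, 45.5), rng.normal(size=(40, 32)),
           rng.normal(size=72), rng.normal(size=(161737, 8))))
```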
PINN calculations of Ta properties. The PINN calculations utilized the same
atomic configurations as in the DFT calculations, except for larger system
sizes in some cases. The phonon calculations employed the same phonopy package
(phonopy, ). The thermal expansion was computed by NPT MD simulations on a
4,394-atom periodic block. This simulation block was also used for point-
defect and liquid-structure calculations. A stack of 48 atomic layers was
created for the surface and $\gamma$-surface calculations. The dislocation
core analysis and the Peierls barriers calculation were based on the same
supercells as in the DFT calculations.
The melting point calculation followed the methodology of (Pun:2020aa, ). An
orthorhombic periodic simulation block had the dimensions of $39\times
42\times 202$ Å (16,320 atoms) and contained approximately equal amounts of
the solid and liquid phases. The solid-liquid interface had the (111)
crystallographic orientation and was normal to the long direction of the
block. The solid phase was scaled according to the thermal expansion factor at
the chosen temperature to eliminate internal stresses. The MD simulations were
performed in the canonical ensemble, in which the interface cross-section
remained fixed while the long dimension of the simulation block was allowed to
vary to ensure zero normal stress. The solid crystallized if the temperature
was below the melting temperature $T_{m}$ and melted if it was above $T_{m}$.
The phase change was accompanied by a decrease in the energy in the first case
and an increase in the second. The steady-state energy rate was recorded as a
function of the simulation temperature and interpolated to zero to obtain
$T_{m}$. The liquid surface tension of Ta was computed by the capillary
fluctuation method closely following the methodology of the previous
calculations for Al (Pun:2020aa, ). The liquid simulation block had a ribbon-
type geometry with the dimensions of $613\times 34\times 212$ Å (216,000
atoms) as shown in Supplementary Fig. 8. After equilibration at the melting
temperature, 241 MD snapshots were saved at 1 ps time intervals. The capillary
wave amplitudes $A(k)$ were obtained by a discrete Fourier transformation of
the liquid surface shape. The surface tension $\gamma$ was obtained from the
linear fit to the plot of the inverse of the ensemble-averaged spectral power
$\left\langle\left|A(k)\right|^{2}\right\rangle$ versus the wave-number
squared $k^{2}$ in the long wavelength limit (Supplementary Fig. 9).
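The fit itself can be sketched as follows. The prefactor $k_{B}T/(LW)$ assumes one particular Fourier normalization of $A(k)$ and must be matched to the convention actually used; the white-noise profiles in the demo are stand-ins, not simulation data:

```python
import numpy as np

kB = 8.617333e-5  # Boltzmann constant, eV/K

def surface_tension(heights, dx, T, width):
    # heights: (n_snapshots, n) sampled surface profiles h(x); dx: grid
    # spacing; width: ribbon depth. Returns gamma in eV per (length)^2.
    n = heights.shape[1]
    L = n * dx
    A = np.fft.rfft(heights - heights.mean(axis=1, keepdims=True),
                    axis=1) / n                    # capillary amplitudes A(k)
    k = 2.0 * np.pi * np.fft.rfftfreq(n, d=dx)
    power = np.mean(np.abs(A)**2, axis=0)          # ensemble average
    sel = slice(1, 6)                              # long-wavelength modes only
    slope = np.polyfit(k[sel]**2, 1.0 / power[sel], 1)[0]
    return slope * kB * T / (L * width)

rng = np.random.default_rng(0)
h = rng.normal(size=(241, 256))  # white-noise stand-in for the 241 snapshots
print(surface_tension(h, dx=2.4, T=3000.0, width=34.0))
```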
## Data availability
All data that support the findings of this study are available in the
Supplementary Information file and from the corresponding author upon
reasonable request.
## References
* (1) J. Behler, Perspective: Machine learning potentials for atomistic simulations, Phys. Chem. Chem. Phys. 145, 170901 (2016).
* (2) V. Botu, R. Batra, J. Chapman and R. Ramprasad, Machine learning force fields: Construction, validation, and outlook, The Journal of Physical Chemistry C 121, 511–522 (2017).
* (3) V. L. Deringer, M. A. Caro and G. Csányi, Machine learning interatomic potentials as emerging tools for materials science, Advanced Materials 31, 1902765 (2019).
* (4) Y. Zuo, C. Chen, X. Li, Z. Deng, Y. Chen, J. Behler, G. Csányi, A. V. Shapeev, A. P. Thompson, M. A. Wood and S. P. Ong, Performance and cost assessment of machine learning interatomic potentials, The Journal of Physical Chemistry A 124, 731–745 (2020).
* (5) M. S. Daw and M. I. Baskes, Semiempirical, quantum mechanical calculation of hydrogen embrittlement in metals, Phys. Rev. Lett. 50, 1285–1288 (1983).
* (6) M. S. Daw and M. I. Baskes, Embedded-atom method: Derivation and application to impurities, surfaces, and other defects in metals, Phys. Rev. B 29, 6443–6453 (1984).
* (7) M. W. Finnis and J. E. Sinclair, A simple empirical N-body potential for transition metals, Philos. Mag. A 50, 45–55 (1984).
* (8) M. I. Baskes, Application of the embedded-atom method to covalent materials: A semi-empirical potential for silicon, Phys. Rev. Lett. 59, 2666–2669 (1987).
* (9) Y. Mishin, M. J. Mehl and D. A. Papaconstantopoulos, Phase stability in the Fe-Ni system: Investigation by first-principles calculations and atomistic simulations, Acta Mater. 53, 4029–4041 (2005).
* (10) J. Tersoff, New empirical approach for the structure and energy of covalent systems, Phys. Rev. B 37, 6991–7000 (1988).
* (11) J. Tersoff, Empirical interatomic potential for silicon with improved elastic properties, Phys. Rev. B 38, 9902–9905 (1988).
* (12) J. Tersoff, Modeling solid-state chemistry: Interatomic potentials for multicomponent systems, Phys. Rev. B 39, 5566–5568 (1989).
* (13) F. H. Stillinger and T. A. Weber, Computer simulation of local order in condensed phases of silicon, Phys. Rev. B 31, 5262–5271 (1985).
* (14) P. Hohenberg and W. Kohn, Inhomogeneous electron gas, Phys. Rev. 136, B864–B871 (1964).
* (15) W. Kohn and L. J. Sham, Self-consistent equations including exchange and correlation effects, Phys. Rev. 140, A1133–A1138 (1965).
* (16) J. Behler and M. Parrinello, Generalized neural-network representation of high-dimensional potential-energy surfaces, Phys. Rev. Lett. 98, 146401 (2007).
* (17) M. Payne, G. Csanyi and A. de Vita, Hybrid atomistic modelling of materials processes, in _Handbook of Materials Modeling_ , edited by S. Yip, pages 2763–2770 (Springer, Dordrecht, The Netherlands, 2005).
* (18) A. Bartok, M. C. Payne, R. Kondor and G. Csanyi, Gaussian approximation potentials: The accuracy of quantum mechanics, without the electrons, Phys. Rev. Lett. 104, 136403 (2010).
* (19) A. P. Bartok, R. Kondor and G. Csanyi, On representing chemical environments, Phys. Rev. B 87, 219902 (2013).
* (20) Z. Li, J. R. Kermode and A. De Vita, Molecular dynamics with on-the-fly machine learning of quantum-mechanical forces, Phys. Rev. Lett. 114, 096405 (2015).
* (21) A. Glielmo, P. Sollich and A. De Vita, Accurate interatomic force fields via machine learning with covariant kernels, Phys. Rev. B 95, 214302 (2017).
* (22) A. P. Bartok, J. Kermode, N. Bernstein and G. Csanyi, Machine learning a general purpose interatomic potential for silicon, Phys. Rev. X 8, 041048 (2018).
* (23) V. L. Deringer, C. J. Pickard and G. Csanyi, Data-driven learning of total and local energies in elemental boron, Phys. Rev. Lett. 120, 156001 (2018).
* (24) V. Botu and R. Ramprasad, Adaptive machine learning framework to accelerate ab initio molecular dynamics, Int. J. Quant. Chem. 115, 1074–1083 (2015).
* (25) V. Botu and R. Ramprasad, Learning scheme to predict atomic forces and accelerate materials simulations, Phys. Rev. B 92, 094306 (2015).
* (26) T. Mueller, A. G. Kusne and R. Ramprasad, Machine learning in materials science: Recent progress and emerging applications, in _Reviews in Computational Chemistry_ , edited by A. L. Parrill and K. B. Lipkowitz, volume 29, chapter 4, pages 186–273 (Wiley, 2016).
* (27) J. Behler and M. Parrinello, Generalized neural-network representation of high-dimensional potential-energy surfaces, Phys. Rev. Lett. 98, 146401 (2007).
* (28) A. Bholoa, S. D. Kenny and R. Smith, A new approach to potential fitting using neural networks, Nucl. Instrum. Methods Phys. Res. 255, 1–7 (2007).
* (29) J. Behler, R. Martonak, D. Donadio and M. Parrinello, Metadynamics simulations of the high-pressure phases of silicon employing a high-dimensional neural network potential, Phys. Rev. Lett. 100, 185501 (2008).
* (30) E. Sanville, A. Bholoa, R. Smith and S. D. Kenny, Silicon potentials investigated using density functional theory fitted neural networks, J. Phys.: Condens. Matter 20, 285219 (2008).
* (31) H. Eshet, R. Z. Khaliullin, T. D. Kuhle, J. Behler and M. Parrinello, Ab initio quality neural-network potential for sodium, Phys. Rev. B 81, 184107 (2010).
* (32) C. M. Handley and P. L. A. Popelier, Potential energy surfaces fitted by artificial neural networks, J. Phys. Chem. A 114, 3371–3383 (2010).
* (33) J. Behler, Neural network potential-energy surfaces in chemistry: a tool for large-scale simulations, Phys. Chem. Chem. Phys. 13, 17930–17955 (2011).
* (34) J. Behler, Atom-centered symmetry functions for constructing high-dimensional neural network potentials, J. Chem. Phys. 134, 074106 (2011).
* (35) G. C. Sosso, G. Miceli, S. Caravati, J. Behler and M. Bernasconi, Neural network interatomic potential for the phase change material GeTe, Phys. Rev. B 85, 174103 (2012).
* (36) J. Behler, Constructing high-dimensional neural network potentials: A tutorial review, Int. J. Quant. Chem. 115, 1032–1050 (2015).
* (37) K. T. Schutt, H. E. Sauceda, P. J. Kindermans, A. Tkatchenko and K. R. Muller, Schnet - a deep learning architecture for molecules and materials, J. Chem. Phys. 148, 241722 (2018).
* (38) G. Imbalzano, A. Anelli, D. Giofre, S. Klees, J. Behler and M. Ceriotti, Automatic selection of atomic fingerprints and reference configurations for machine-learning potentials, J. Chem. Phys. 148, 241730 (2018).
* (39) A. Thompson, L. Swiler, C. Trott, S. Foiles and G. Tucker, Spectral neighbor analysis method for automated generation of quantum-accurate interatomic potentials, Journal of Computational Physics 285, 316 – 330 (2015).
* (40) C. Chen, Z. Deng, R. Tran, H. Tang, I.-H. Chu and S. P. Ong, Accurate force field for molybdenum by machine learning large materials data, Phys. Rev. Materials 1, 043603 (2017).
* (41) X.-G. Li, C. Hu, C. Chen, Z. Deng, J. Luo and S. P. Ong, Quantum-accurate spectral neighbor analysis potential models for Ni-Mo binary alloys and fcc metals, Phys. Rev. B 98, 094104 (2018).
* (42) A. V. Shapeev, Moment tensor potentials: A class of systematically improvable interatomic potentials, Multiscale Modeling & Simulation 14, 1153–1173 (2016).
* (43) G. P. P. Pun, V. Yamakov, J. Hickman, E. H. Glaessgen and Y. Mishin, Development of a general-purpose machine-learning interatomic potential for aluminum by the physically informed neural network method, Physical Review Materials 4, 113807 (2020).
* (44) G. P. Purja Pun, R. Batra, R. Ramprasad and Y. Mishin, Physically informed artificial neural networks for atomistic modeling of materials, Nature Communications 10, 2339 (2019).
* (45) R. W. Buckman, New applications for tantalum and tantalum alloys, JOM 52, 40–41 (2000).
* (46) C. Sungail and A. D. Abid, Additive manufacturing of tantalum –a study of chemical and physical properties of printed tantalum, Metal Powder Report 75, 28–33 (2020).
* (47) V. K. Balla, S. Bodhak, S. Bose and A. Bandyopadhyay, Porous tantalum structures for bone implants: Fabrication, mechanical and in vitro biological properties, Acta Biomaterialia 6, 3349–3359 (2010).
* (48) G. Kresse and J. Furthmüller, Efficiency of ab-initio total energy calculations for metals and semiconductors using a plane-wave basis set, Comput. Mat. Sci. 6, 15 (1996).
* (49) G. Kresse and D. Joubert, From ultrasoft pseudopotentials to the projector augmented-wave method, Phys. Rev. B 59, 1758 (1999).
* (50) R. Fletcher, _Methods of practical optimization_ (John Wiley & Sons, 1987), 2nd edition.
* (51) V. Yamakov (The ParaGrandMC code can be obtained from the NASA Software Catalog: https://software.nasa.gov/software/LAR-18773-1, NASA/CR–2016-219202 (2016)).
* (52) G. P. Purja Pun, V. Yamakov and Y. Mishin, Interatomic potential for the ternary Ni–Al–Co system and application to atomistic modeling of the B2–L10 martensitic transformation, Model. Simul. Mater. Sci. Eng. 23, 065006 (2015).
* (53) V. Yamakov, J. D. Hochhalter, W. P. Leser, J. E. Warner, J. A. Newman, G. P. Purja Pun and Y. Mishin, Multiscale modeling of sensory properties of Co–Ni–Al shape memory particles embedded in an Al metal matrix, J. Mater. Sci. 51, 1204–1216 (2016).
* (54) V. Vitek, Core structure of screw dislocations in body-centred cubic metals: Relation to symmetry and interatomic bonding, Philos. Mag. 84, 415–428 (2004).
* (55) W. Cai, V. V. Bulatov, J.-P. Chang, J. Li and S. Yip, Dislocation core effects on mobility, in _Dislocations in Solids_ , edited by F. R. N. Nabarro and J. P. Hirth, volume 12, pages 1–80 (Elsevier, Amsterdam, 2004).
* (56) Y. S. Lin, M. Mrovec and V. Vitek, A new method for development of bond-order potentials for transition bcc metals, Model. Simul. Mater. Sci. Eng 22, 034002 (2014).
* (57) C. S. Hartley and Y. Mishin, Characterization and vizualization of the lattice misfit associated with dislocation cores, Acta Mater. 53, 1313–1351 (2005).
* (58) C. R. Weinberger, G. J. Tucker and S. M. Foiles, Peierls potential of screw dislocations in bcc transition metals: Predictions from density functional theory, Physical Review B 87 (2013).
* (59) J. J. Möller, M. Mrovec, I. Bleskov, J. Neugebauer, T. Hammerschmidt, R. Drautz, C. Elsässer, T. Hickel and E. Bitzek, (110) planar faults in strained bcc metals: Origins and implications of a commonly observed artifact of classical potentials, Physical Review Materials 2, 093606– (2018).
* (60) H. Jónsson, G. Mills and K. W. Jacobsen, Nudged elastic band method for finding minimum energy paths of transitions, in _Classical and Quantum Dynamics in Condensed Phase Simulations_ , edited by B. J. Berne, G. Ciccotti and D. F. Coker, page 1 (World Scientific, Singapore, 1998).
* (61) G. Henkelman and H. Jonsson, Improved tangent estimate in the nudged elastic band method for finding minimum energy paths and saddle points, J. Chem. Phys. 113, 9978–9985 (2000).
* (62) J. Morris and X. Song, The melting lines of model systems calculated from coexistence simulations, J. Chem. Phys. 116, 9352–9358 (2002).
* (63) G. P. Purja Pun and Y. Mishin, Development of an interatomic potential for the Ni-Al system, Philos. Mag. 89, 3245–3267 (2009).
* (64) C. A. Howells and Y. Mishin, Angular-dependent interatomic potential for the binary Ni-Cr system, Model. Simul. Mater. Sci. Eng. 26, 085008 (2018).
* (65) S. Taioli, C. Cazorla, M. J. Gillan and D. Alfè, Melting curve of tantalum from first principles, Phys. Rev. B 75, 214103 (2007).
* (66) A. P. Miiller and A. Cezairliyan, Measurement of surface tension of tantalum by a dynamic technique in a microgravity environment, International Journal of Thermophysics 14, 1063–1075 (1993).
* (67) P.-F. Paradis, T. Ishikawa and S. Yoda, Surface tension and viscosity of liquid and undercooled tantalum measured by a containerless method, Journal of Applied Physics 97, 053506 (2005).
* (68) S. Plimpton, Fast parallel algorithms for short-range molecular-dynamics, J. Comput. Phys. 117, 1–19 (1995).
* (69) J. P. Perdew, J. A. Chevary, S. H. Vosko, K. A. Jackson, M. R. Pederson, D. J. Singh and C. Fiolhais, Atoms, molecules, solids, and surfaces - applications of the generalized gradient approximation for exchange and correlation, Phys. Rev. B 46, 6671–6687 (1992).
* (70) J. P. Perdew, K. Burke and M. Ernzerhof, Generalized gradient approximation made simple, Phys. Rev. Lett. 77, 3865–3868 (1996).
* (71) A. Togo and I. Tanaka, First principles phonon calculations in materials science, Scripta Mater. 108, 1–5 (2015).
* (72) J. Li, C.-Z. Wang, J.-P. Chang, W. Cai, V. V. Bulatov, K.-M. Ho and S. Yip, Core energy and Peierls stress of a screw dislocation in bcc molybdenum: A periodic-cell tight-binding study, Physica B 70 (2004).
* (73) B. Vinet, J. P. Garandet and L. Cortella, Surface tension measurements of refractory liquid metals by the pendant drop method under ultrahigh vacuum conditions: Extension and comments on Tate’s law, Journal of Applied Physics 73, 3830–3834 (1993).
* (74) R. Ravelo, T. C. Germann, O. Guerrero, Q. An and B. L. Holian, Shock-induced plasticity in tantalum single crystals: Interatomic potentials and large-scale molecular-dynamics simulations, Phys. Rev. B 88, 134101 (2013).
* (75) A. Dewaele, P. Loubeyre and M. Mezouar, Refinement of the equation of state of tantalum, Phys. Rev. B 69, 092106 (2004).
* (76) C. Kittel, _Introduction to Solid State Physics_ (Wiley-Interscience, New York, 1986).
* (77) F. H. Featherston and J. R. Neighbours, Elastic constants of tantalum, tungsten, and molybdenum, Phys. Rev. 130, 1324–1333 (1963).
* (78) A. Satta, F. Willaime and S. de Gironcoli, First-principles study of vacancy formation and migration energies in tantalum, Phys. Rev. B 60, 7001–7005 (1999).
* (79) S. Mukherjee, R. E. Cohen and O. Gülseren, Vacancy formation enthalpy at high pressures in tantalum, J. Phys.: Condens. Matter 15, 855–861 (2003).
* (80) Y. Mishin and A. Y. Lozovoi, Angular-dependent interatomic potential for tantalum, Acta Mater. 54, 5013–5026 (2006).
* (81) P. Ehrhart, P. Jung, H. Schultz and H. Ullmaier, _Atomic Defects in Metals_ , volume 25 of _Landolt Börnstein New Series Group III: Crystal and Solid State Physics_ (Springer-Verlag, Berlin, 1991).
* (82) S. Feng and X. Cheng, First-principles investigation on metal tantalum under conditions of electronic excitation, Comp. Mater. Sci. 50, 3110–3113 (2011).
* (83) D. Nguyen-Manh, A. P. Horsfield and S. L. Dudarev, Self-interstitial atom defects in bcc transition metals: Group-specific trends, Phys. Rev. B 73, 020101(R) (2006).
* (84) A. Kiejna, Surface atomic structure and energetics of tantalum, Surf. Sci. 598, 276–284 (2005).
* (85) C. J. Wu, L. H. Yang, J. E. Klepeis and C. Mailhiot, Ab initio pseudopotential calculations of the atomic and electronic structure of the Ta (100) and (110) surfaces, Phys. Rev. B 52, 11784–11792 (1995).
* (86) W. R. Tyson and W. A. Miller, Surface free energies of solid metals: Estimation from liquid surface tension measurements, Surf. Sci. 62, 267–276 (1977).
* (87) A. D. B. Woods, Lattice dynamics of tantalum, Phys. Rev. 136, A781–A783 (1964).
* (88) Y. S. Touloukian, R. K. Kirby, R. E. Taylor and P. D. Desai (Editors), _Thermal Expansion: Metallic Elements and Alloys_ , volume 12 (Plenum, New York, 1975).
Acknowledgements
We are grateful to Dr. Vesselin Yamakov for developing the PGMC software used
for most of the PINN-based simulations reported in this work. This research
was supported by the Office of Naval Research under Award No.
N00014-18-1-2612. The computations were supported in part by a grant of
computer time from the DoD High Performance Computing Modernization Program at
ARL DSRC, ERDC DSRC and Navy DSRC. Additional computational resources were
provided by the NASA High-End Computing (HEC) Program through the NASA
Advanced Supercomputing (NAS) Division at Ames Research Center.
Author contributions
Y.-S. L. performed all DFT calculations for the Ta database and Ta properties,
developed the computer software for the PINN potential training, created the
PINN Ta potential, and performed all PINN-based simulations (except for the
liquid surface tension) under Y. M.’s direction and supervision. G. P. P. P.
shared his computer scripts and experience with PINN training and simulations,
and computed the liquid surface tension reported in this work. Y. M. wrote the
initial draft of the manuscript, which was then edited and approved in its
final form by all authors.
Competing interests
The authors declare no competing interests.
Table 1: Properties of Ta predicted by the PINN potential and DFT calculations, and measured by experiment.
Property | DFT (this work) | PINN | DFT (others’ work) | Experiment
---|---|---|---|---
$a_{0}$ (Å) | 3.3202 | 3.3203 | 3.321a | 3.3039b
$E_{\mathrm{coh}}$ (eV/atom) | | 8.1 | | 8.1c
$B$ (GPa) | 196 | 198 | | 194.2d; 198e
$C_{11}$ (GPa) | 268 | 269 | 247a | 266.3d
$C_{12}$ (GPa) | 159 | 163 | 170a | 158.2d
$C_{44}$ (GPa) | 73 | 72 | 67a | 87.4d
$E_{v}^{f}$ (eV) | 2.866 | 2.772 | 2.99f; 2.95g; 2.91i | 2.2–3.1h
$E_{v}^{m}$ (eV) | 0.77 | 0.76 | 0.83f; 0.76j | 0.7h
$E_{i}^{f}$ $\left\langle 111\right\rangle$ dumbbell (eV) | 4.730 | 4.687 | 5.832k |
$E_{i}^{f}$ $\left\langle 110\right\rangle$ dumbbell (eV) | 5.456 | 5.414 | 5.836k |
$E_{i}^{f}$ tetrahedral (eV) | 5.799 | 5.663 | 6.771k |
$E_{i}^{f}$ $\left\langle 100\right\rangle$ dumbbell (eV) | 5.892 | 5.820 | 7.157k |
$E_{i}^{f}$ octahedral (eV) | 5.940 | 5.986 | 7.095k |
$\gamma_{s}\left(110\right)$ $(\mathrm{J}/\mathrm{m}^{2})$ | 2.369 | 2.367 | 2.31l; 2.881j; 2.51m | 2.49n∗
$\gamma_{s}\left(100\right)$ $(\mathrm{J}/\mathrm{m}^{2})$ | 2.483 | 2.482 | 2.27l; 3.142j; 2.82m |
$\gamma_{s}\left(112\right)$ $(\mathrm{J}/\mathrm{m}^{2})$ | 2.675 | 2.671 | 2.71l; 3.270j |
$\gamma_{s}\left(111\right)$ $(\mathrm{J}/\mathrm{m}^{2})$ | 2.718 | 2.718 | 2.74l; 3.312j |
$\gamma_{\mathrm{us}}\left\{110\right\}\left\langle 001\right\rangle$ $(\mathrm{J}/\mathrm{m}^{2})$ | 1.628 | 1.618 | 1.951a |
$\gamma_{\mathrm{us}}\left\{110\right\}\left\langle 110\right\rangle$ $(\mathrm{J}/\mathrm{m}^{2})$ | 1.628 | 1.618 | |
$\gamma_{\mathrm{us}}\left\{110\right\}\left\langle 111\right\rangle$ $(\mathrm{J}/\mathrm{m}^{2})$ | 0.704 | 0.722 | 0.840a |
$\gamma_{\mathrm{us}}\left\{112\right\}\left\langle 110\right\rangle$ $(\mathrm{J}/\mathrm{m}^{2})$ | 3.330 | 3.315 | |
$\gamma_{\mathrm{us}}\left\{112\right\}\left\langle 111\right\rangle$ $(\mathrm{J}/\mathrm{m}^{2})$ | 0.844 | 0.825 | 1.000a |
$T_{m}$ (K) | | 3000$\pm$6 | 3085$\pm$130o | 3293c
Notation: $a_{0}$ equilibrium lattice constant; $E_{\mathrm{coh}}$ equilibrium cohesive energy; $B$ bulk modulus; $C_{ij}$ elastic constants; $E_{v}^{f}$ vacancy formation energy; $E_{v}^{m}$ vacancy migration energy; $E_{i}^{f}$ self-interstitial formation energy; $\gamma_{s}$ surface energy; $\gamma_{\mathrm{us}}$ unstable stacking fault energy; $T_{m}$ melting temperature.
References: aRef. (Ravelo2013), bRef. (Dewaele2004), cRef. (Kittel), dRef. (Featherston:1963aa), eRef. (Dewaele2004), fRef. (Satta1999), gRef. (Mukherjee2003), hRef. (LandoltIII.25), iRef. (Feng2011), jRef. (Mishin.Ta), kRef. (Nguyen-Manh2006), lRef. (Kiejna2005), mRef. (Wu1995), nRef. (Tyson:1977aa), oRef. (Taioli:2007aa). ∗Average orientation.
Table 2: Equilibrium energy (in eV/atom) of small clusters and open crystal structures unknown to the PINN potential. The values are reported relative to the equilibrium BCC structure.
Structure | DFT | PINN
---|---|---
Dimer | 6.044 | 6.048
Trimer (linear) | 5.944 | 5.947
Trimer (triangle) | 4.754 | 4.756
Tetrahedron | 3.667 | 3.668
$\alpha$-U (A20) | 0.375 | 0.381
$\beta$-Sn (A5) | 0.768 | 0.808
$\alpha$-Ga (A11) | 1.300 | 1.283
$\alpha$-As (A7) | 1.782 | 1.580
Figure 1: How the PINN potential works. a Flow diagram of energy calculation in the PINN model. b Energies of atomic configurations in the Ta training set computed with the PINN potential versus the DFT energies (red dots). The blue line represents the perfect fit.
Figure 2: Lattice properties of BCC Ta. a Phonon dispersion relations computed with the PINN potential (red lines) in comparison with DFT calculations (green lines) and experimental measurements at 296 K (Woods1964) (blue points). b Linear thermal expansion relative to room temperature (293 K) predicted by the PINN potential in comparison with experimental data (Expansion). The experimental melting temperature $T_{\textnormal{m}}$ (3293 K) is indicated. The discrepancy below room temperature arises from quantum effects that cannot be captured by a classical potential.
Figure 3: BCC Ta under strong deformations. The lines represent PINN calculations, the points DFT calculations. a Stress-strain relation under strong uniaxial compression in the [110] direction. The magnitudes of the stress components $\sigma_{xx}$, $\sigma_{yy}$ and $\sigma_{zz}$ are plotted as a function of the strain $\varepsilon_{xx}$. Axes orientations: $x$: $[110]$, $y$: $[\overline{1}10]$, $z$: $[001]$. b Strong shear deformation. The energy $\Delta E$ (relative to the perfect BCC structure) is plotted as a function of the strain parameter $s$ transforming the BCC structure back to itself along the twinning and anti-twinning paths. The shear is parallel to the $(112)$ plane in the $[11\overline{1}]$ direction.
Figure 4: Ta surface relaxations. Atomic plane displacements near low-index surfaces in BCC Ta predicted by the PINN potential (red lines) in comparison with DFT calculations (blue points connected by dashed lines). $\Delta_{n}$ is the percentage change of the $n$-th interplanar spacing relative to the ideal spacing. a $\left(100\right)$, b $\left(110\right)$, c $\left(111\right)$, and d $\left(112\right)$ surface planes.
Figure 5: $\gamma$-surfaces in BCC Ta. a,b DFT $\gamma$-surfaces on the a (110) and b (112) planes. c,d Cross-sections of the $\gamma$-surfaces on the c (110) and d (112) planes predicted by the PINN potential (lines) in comparison with DFT calculations (points). The displacements are normalized by the energy period, and their directions are indicated next to the curves.
Figure 6: $\frac{1}{2}\left\langle 111\right\rangle$ screw dislocation in Ta. a Nye tensor (Hartley05) plot (the screw component $\alpha_{zz}$) of the dislocation core structure predicted by the PINN potential. The coordinate system is $x$: $\left[\bar{1}2\bar{1}\right]$, $y$: $\left[\bar{1}01\right]$, $z$: $[111]$ (normal to the page). The circles represent relaxed atomic positions in three consecutive $\left(111\right)$ planes. b Peierls barrier of the dislocation predicted by the PINN potential (lines) in comparison with DFT calculations (points).
Figure 7: Equations of state of several crystal structures of Ta predicted by the PINN potential (lines) in comparison with DFT calculations (points). The insets are zooms into competing structures near the equilibrium volume. a Simple cubic (SC), hexagonal close-packed (HCP), face-centered cubic (FCC), A15 (Cr3Si prototype), and body-centered cubic (BCC) structures. b Diamond cubic (DC), simple hexagonal (SH), $\beta$-Ta, and BCC structures.
Figure 8: Deformation paths between different crystal structures of Ta. The energy $\Delta E$ (relative to the perfect BCC structure) is plotted as a function of the deformation parameter $p$ defined in Ref. (Lin2014). The lines represent PINN calculations, the points DFT calculations. a Tetragonal (Bain) path BCC $\rightarrow$ FCC $\rightarrow$ BCT. b Trigonal path BCC $\rightarrow$ SC $\rightarrow$ FCC. c Orthorhombic path BCC $\rightarrow$ BCT. d Hexagonal path BCC $\rightarrow$ HCP. The structures encountered along the paths are indicated.
Figure 9: Ta liquid properties. Predictions of the PINN potential (red dashed lines) are compared with DFT calculations (blue lines) at a temperature of 3500 K. a Radial distribution function. b Bond-angle distribution function.
Supplementary Information
Development of a physically-informed neural network interatomic potential for
tantalum
Yi-Shen Lin, Ganga P. Purja Pun and Yuri Mishin
Table S1: Ta DFT database used for the development of the PINN potential.
Subset | Structure | Simulation type | $N_{A}$ | $N_{tv}$
---|---|---|---|---
Crystals | BCC | Small homogeneous strains | 2 | 63
| BCC | Isotropic strain at 0 K | 2 | 38
| A15 | Isotropic strain at 0 K | 8 | 39
| $\beta$-Ta | Isotropic strain at 0 K | 30 | 39
| Diamond cubic | Isotropic strain at 0 K | 8 | 68
| FCC | Isotropic strain at 0 K | 4 | 38
| HCP | Isotropic strain at 0 K | 4 | 37
| SH | Isotropic strain at 0 K | 2 | 38
| SC | Isotropic strain at 0 K | 1 | 38
Small clusters | dimer | Isotropic strain at 0 K | 2 | 12
| trimer-linear | Isotropic strain at 0 K | 3 | 12
| trimer-triangle | Isotropic strain at 0 K | 3 | 12
| tetrahedron | Isotropic strain at 0 K | 4 | 12
Deformation paths | — | twinning-antitwinning | 1 | 37
| — | hexagonal | 4 | 121
| — | orthorhombic | 4 | 101
| — | tetragonal | 2 | 121
| — | trigonal | 1 | 471
$\gamma$-surface | — | (110) plane | 24 | 54
| — | (112) plane | 24 | 81
Liquid | — | NVT-MD (2600 K) | 250 | 20
| — | NVT-MD (2900 K) | 250 | 20
| — | NVT-MD (3500 K) | 250 | 20
| — | NVT-MD (5000 K) | 250 | 20
BCC-1 | BCC ($a$ = 3.3202) | NVT-MD (2500 K) | 54 | 20
| BCC ($a$ = 3.3202) | NVT-MD (5000 K) | 54 | 40
| BCC ($a$ = 3.3202) | NVT-MD (10,000 K) | 54 | 40
BCC-2 | BCC ($e$ = $-$2.5% ) | NVT-MD (2500 K) | 54 | 40
| BCC ($e$ = 2.5% ) | NVT-MD (2500 K) | 54 | 40
| BCC ($e$ = $-$5% ) | NVT-MD (2500 K) | 54 | 40
| BCC ($e$ = 5% ) | NVT-MD (2500 K) | 54 | 40
Continued in Supplementary Table S2.
Table S2: Ta DFT database (continued from Supplementary Table S1).
Subset | Structure | Simulation type | $N_{A}$ | $N_{tv}$
---|---|---|---|---
BCC-3 | BCC | Large uniaxial strain along $\left[100\right]$ | 2 | 80
| BCC | Large uniaxial strain along $\left[110\right]$ | 4 | 80
| BCC | Large uniaxial strain along $\left[111\right]$ | 6 | 79
BCC-4 | BCC ($a$ = 3.3247 ) | NVT-MD (293 K) | 128 | 180
| BCC ($a$ = 3.3292) | NVT-MD (500 K) | 128 | 210
| BCC ($a$ = 3.3247) | NVT-MD (293 K) | 54 | 140
| BCC ($a$ = 3.3292) | NVT-MD (500 K) | 54 | 140
| BCC ($a$ = 3.3408) | NVT-MD (1000 K) | 54 | 60
| BCC ($a$ = 3.3534) | NVT-MD (1500 K) | 54 | 60
| BCC ($a$ = 3.3675) | NVT-MD (2000 K) | 54 | 40
| BCC ($a$ = 3.3863) | NVT-MD (2500 K) | 54 | 40
| BCC ($a$ = 3.4137) | NVT-MD (3000 K) | 54 | 40
| BCC ($a$ = 3.4287) | NVT-MD (3200 K) | 54 | 40
| BCC ($a$ = 3.4287) | NVT-MD (3300 K) | 54 | 40
Dislocation | Dislocation | Relaxed structure and NEB calculations | 231 | 46
Spherical clusters | radius = $2^{\mathrm{nd}}$ NND | NVT-MD (2600 K) | 15 | 40
| radius = $3^{\mathrm{rd}}$ NND | NVT-MD (2500 K) | 27 | 40
| radius = $4^{\mathrm{th}}$ NND | NVT-MD (2500 K) | 51 | 40
Interfaces | GB $\Sigma 3\left(111\right)$ | NVT-MD (2500 K) | 48 | 40
| GB $\Sigma 3\left(112\right)$ | NVT-MD (2500 K) | 72 | 40
| GB $\Sigma 5\left(210\right)$ | NVT-MD (2500 K) | 36 | 40
| GB $\Sigma 5\left(310\right)$ | NVT-MD (2500 K) | 60 | 40
Point defects | Vacancy | Relaxed structure and NEB calculations | 127 | 16
| Vacancy | NVT-MD (2500 K) | 53 | 40
| Self-interstitials | Relaxed structures | 129 | 5
| Self-interstitials | NVT-MD (2500 K) | 55 | 40
Surfaces | Surfaces | Relaxed structures | 24 | 4
| Surface $\left(100\right)$ | NVT-MD (2500 K) | 63 | 40
| Surface $\left(110\right)$ | NVT-MD (2500 K) | 60 | 40
| Surface $\left(111\right)$ | NVT-MD (2500 K) | 44 | 40
$N_{A}$: number of atoms per supercell. $N_{tv}$: number of configurations for training and cross-validation. $a$: cubic lattice parameter for BCC in Å. NND: nearest-neighbor distance in the equilibrium BCC structure. $e$: isotropic strain from equilibrium BCC. GB: [001] symmetric tilt grain boundary.
Figure S1: Testing of atomic force predictions. The components of the atomic forces predicted by the PINN potential are compared with DFT calculations. The straight lines represent the perfect fit. a $x$-components, b $y$-components, and c $z$-components. The RMSE in eV/Å is indicated next to the points.
Figure S2: BCC Ta under strong uniaxial compression. The lines represent PINN calculations, the points DFT calculations. The magnitudes of the stress components $\sigma_{xx}$, $\sigma_{yy}$ and $\sigma_{zz}$ are plotted as a function of the strain $\varepsilon_{xx}$. Axes orientations: a $x$: $[110]$, $y$: $[\overline{1}10]$, $z$: $[001]$. b $x$: $[100]$, $y$: $[010]$, $z$: $[001]$. c $x$: $[111]$, $y$: $[\overline{1}10]$, $z$: $[\overline{1}\overline{1}2]$.
Figure S3: Ta surface relaxations. Atomic plane displacements near the (112) surface in BCC Ta predicted by the PINN potential (red lines) in comparison with DFT calculations (blue points connected by dashed lines). $\Delta_{y,n}$ is the displacement of the $n$-th atomic layer in the $\left[\bar{1}\bar{1}1\right]$ direction due to relaxation from its ideal position.
Figure S4: Equations of state of crystal structures predicted by the PINN potential (lines) in comparison with DFT calculations (points), with the atomic volume ranging from a state of large compression to the PINN potential cutoff. a Simple cubic (SC), hexagonal close-packed (HCP), face-centered cubic (FCC), A15 (Cr3Si prototype), and body-centered cubic (BCC) structures. b Diamond cubic (DC), simple hexagonal (SH), $\beta$-Ta, and BCC structures.
Figure S5: a–c Radial distribution function (RDF) predicted by the PINN potential (red dashed lines) in comparison with DFT calculations (blue lines) at three temperatures. d Variation of the RDF with temperature predicted by the PINN potential.
Figure S6: a–c Bond-angle distribution (BAD) predicted by the PINN potential (red dashed lines) in comparison with DFT calculations (blue lines) at three different temperatures. d BAD variation with temperature predicted by the PINN potential.
Figure S7: Energy rate in MD simulations of the solid-liquid interface computed using the PINN potential at different temperatures (red dots). The linear fit of the data (blue line) was used to obtain the melting temperature of Ta predicted by the potential. The inset shows the simulation block containing the solid (blue) and liquid (red) phases.
Figure S8: Simulation block used for computing the liquid surface tension of Ta. The liquid layers are shown in purple and the surfaces are outlined in blue. Periodic boundary conditions are applied in the $x$ and $y$ directions.
Figure S9: Inverse power of capillary waves versus the wave number squared for the liquid Ta surface computed with the PINN potential. The line represents a linear fit in the long-wave limit.
# SceneGen: Learning to Generate Realistic Traffic Scenes
Shuhan Tan1,2 Kelvin Wong1,3∗ Shenlong Wang1,3
Sivabalan Manivasagam1,3 Mengye Ren1,3 Raquel Urtasun1,3
1Uber Advanced Technologies Group 2Sun Yat-Sen University 3University of
Toronto
∗Indicates equal contribution. Work done at Uber ATG.
###### Abstract
We consider the problem of generating realistic traffic scenes automatically.
Existing methods typically insert actors into the scene according to a set of
hand-crafted heuristics and are limited in their ability to model the true
complexity and diversity of real traffic scenes, thus inducing a content gap
between synthesized traffic scenes versus real ones. As a result, existing
simulators lack the fidelity necessary to train and test self-driving
vehicles. To address this limitation, we present SceneGen—a neural
autoregressive model of traffic scenes that eschews the need for rules and
heuristics. In particular, given the ego-vehicle state and a high definition
map of surrounding area, SceneGen inserts actors of various classes into the
scene and synthesizes their sizes, orientations, and velocities. We
demonstrate on two large-scale datasets SceneGen’s ability to faithfully model
distributions of real traffic scenes. Moreover, we show that SceneGen coupled
with sensor simulation can be used to train perception models that generalize
to the real world.
## 1 Introduction
The ability to simulate realistic traffic scenarios is an important milestone
on the path towards safe and scalable self-driving. It enables us to build
rich virtual environments in which we can improve our self-driving vehicles
(SDVs) and verify their safety and performance [9, 31, 32, 53]. This goal,
however, is challenging to achieve. As a first step, most large-scale self-
driving programs simulate pre-recorded scenarios captured in the real world
[32] or employ teams of test engineers to design new scenarios [9, 31].
Although this approach can yield realistic simulations, it is ultimately not
scalable. This motivates the search for a way to generate realistic traffic
scenarios _automatically_.
More concretely, we are interested in generating the layout of actors in a
traffic scene given the SDV’s current state and a high definition map (HD map)
of the surrounding area. We call this task _traffic scene generation_ (see
Fig. 1). Here, each actor is parameterized by a class label, a bird’s eye view
bounding box, and a velocity vector. Our lightweight scene parameterization is
popular among existing self-driving simulation stacks and can be readily used
in downstream modules; , to simulate LiDAR [9, 10, 32].
A popular approach to traffic scene generation is to use procedural models to
insert actors into the scene according to a set of rules [55, 31, 9, 37].
These rules encode reasonable heuristics such as “pedestrians should stay on
the sidewalk” or “vehicles should drive along lane centerlines”, and their
parameters can be manually tuned to give reasonable results. Still, these
simplistic heuristics cannot fully capture the complexity and diversity of
real world traffic scenes, thus inducing a content gap between synthesized
traffic scenes and real ones [26]. Moreover, this approach requires
significant time and expertise to design good heuristics and tune their
parameters.
Figure 1: Given the SDV’s state and an HD map, SceneGen autoregressively
inserts actors onto the map to compose a realistic traffic scene. The ego SDV
is shown in red; vehicles in blue; pedestrians in orange; and bicyclists in
green.
To address these issues, recent methods use machine learning techniques to
automatically tune model parameters [52, 51, 24, 26, 8]. These methods improve
the realism and scalability of traffic scene generation. However, they remain
limited by their underlying hand-crafted heuristics and priors; e.g., pre-defined
scene grammars or assumptions about road topologies. As a result, they lack
the capacity to model the true complexity and diversity of real traffic scenes
and, by extension, the fidelity necessary to train and test SDVs in
simulation. Alternatively, we can use a simple data-driven approach by
sampling from map-specific empirical distributions [10]. But this cannot
generalize to new maps and may yield scene-inconsistent samples.
In this paper, we propose SceneGen—a traffic scene generation model that
eschews the need for hand-crafted rules and heuristics. Our approach is
inspired by recent successes in deep generative modeling that have shown
remarkable results in estimating distributions of a variety of data, without
requiring complex rules and heuristics; , handwriting [18], images [49], text
[39], . Specifically, SceneGen is a neural autoregressive model that, given
the SDV’s current state and an HD map of the surrounding area, sequentially
inserts actors into the scene—mimicking the process by which humans do this as
well. As a result, we can sample realistic traffic scenes from SceneGen and
compute the likelihood of existing ones as well.
We evaluate SceneGen on two large-scale self-driving datasets. The results
show that SceneGen can better estimate the distribution over real traffic
scenes than competing baselines and generate more realistic samples as well.
Furthermore, we show that SceneGen coupled with sensor simulation can generate
realistic labeled data to train perception models that generalize to the real
world. With SceneGen, we take an important step towards developing SDVs safely
and scalably through large-scale simulation. We hope our work here inspires
more research along this direction so that one day this goal will become a
reality.
## 2 Related Work
#### Traffic simulation:
The study of traffic simulation can be traced back to at least the 1950s with
Gerlough’s dissertation on simulating freeway traffic flow [16]. Since then,
various traffic models have been used for simulation. Macroscopic models
simulate entire populations of vehicles in the aggregate [30, 40] to study
“macroscopic” properties of traffic flow, such as traffic density and average
velocity. In contrast, microscopic models simulate the behavior of each
individual vehicle over time by assuming a car-following model [36, 6, 34, 13,
17, 1, 44]. These models improve simulation fidelity considerably but at the
cost of computational efficiency. Microscopic traffic models have been
included in popular software packages such as SUMO [31], CORSIM [35], VISSIM
[11], and MITSIM [55].
Recently, traffic simulation has found new applications in testing and
training the autonomy stack of SDVs. However, existing simulators do not
satisfy the level of realism necessary to properly test SDVs [52]. For
example, the CARLA simulator [9] spawns actors at pre-determined locations and
uses a lane-following controller to simulate the vehicle behaviors over time.
This approach is too simplistic and so it induces a sim2real content gap [26].
Therefore, in this paper, we study how to generate snapshots of traffic scenes
that mimic the realism and diversity of real ones.
Figure 2: Overview of our approach. Given the ego SDV’s state and an HD map
of the surrounding area, SceneGen generates a traffic scene by inserting
actors one at a time (Sec. 3.1). We model each actor
$\bm{a}_{i}\in\mathcal{A}$ probabilistically, as a product over distributions
of its class $c_{i}\in\mathbb{C}$, position $\bm{p}_{i}\in\mathbb{R}^{2}$,
bounding box $\bm{b}_{i}\in\mathbb{B}$, and velocity
$\bm{v}_{i}\in\mathbb{R}^{2}$ (Sec. 3.2).
#### Traffic scene generation:
While much of the research into microscopic traffic simulation have focused on
modeling actors’ behaviors, an equally important yet underexplored problem is
how to generate realistic snapshots of traffic scenes. These snapshots have
many applications; e.g., to initialize traffic simulations [52] or to generate
labeled data for training perception models [26]. A popular approach is to
procedurally insert actors into the scene according to a set of rules [55, 31,
9, 37]. These rules encode reasonable heuristics such as “pedestrians should
stay on the sidewalk” and “vehicles should drive along lane centerlines”, and
their parameters can be manually tuned to give reasonable results. For
example, SUMO [31] inserts vehicles into lanes based on minimum headway
requirements and initializes their speeds according to a Gaussian distribution
[52]. Unfortunately, it is difficult to scale this approach to new
environments since tuning these heuristics require significant time and
expertise.
An alternative approach is to learn a probabilistic distribution over traffic
scenes from which we can sample new scenes [52, 51, 24, 10, 14, 15, 57]. For
example, Wheeler et al. [52] propose a Bayesian network to model a joint
distribution over traffic scenes in straight multi-lane highways. This
approach was extended to model inter-lane dependencies [51] and generalized to
handle a four-way intersection [24]. These models are trained to mimic a real
distribution over traffic scenes. However, they consider a limited set of road
topologies only and assume that actors follow reference paths in the map. As a
result, they are difficult to generalize to real urban scenes, where road
topologies and actor behaviors are considerably more complex; e.g., pedestrians do
not follow reference paths in general.
Recent advances in deep learning have enabled a more flexible approach to
learn a distribution over traffic scenes. In particular, MetaSim [26] augments
the probabilistic scene graph of Prakash et al. [37] with a graph neural
network. By modifying the scene graph’s node attributes, MetaSim reduces the
content gap between synthesized images versus real ones, without manual
tuning. MetaSim2 [8] extends this idea by learning to sample the scene graph
as well. Unfortunately, these approaches are still limited by their hand-
crafted scene grammar which, for example, constrains vehicles to lane
centerlines. We aim to develop a more general method that avoids requiring
these heuristics.
#### Autoregressive models:
Autoregressive models factorize a joint distribution over $n$-dimensions into
a product of conditional distributions $p(x)=\prod_{i=1}^{n}p(x_{i}|x_{<i})$.
Each conditional distribution is then approximated with a parameterized
function [12, 2, 45, 46, 47]. Recently, neural autoregressive models have
found tremendous success in modeling a variety of data, including handwriting
[18], images [49], audio [48], text [39], sketches [20], graphs [29], 3D
meshes [33], indoor scenes [50] and image scene layouts [25]. These models are
particularly popular since they can factorize a complex joint distribution
into a product of much simpler conditional distributions. Moreover, they
generally admit a tractable likelihood, which can be used for likelihood-based
training, uncovering interesting or outlier examples, etc. Inspired by these
advances, we exploit autoregressive models for traffic scene generation as
well.
## 3 Traffic Scene Generation
Figure 3: Traffic scenes generated by SceneGen conditioned on HD maps from
ATG4D (top) and Argoverse (bottom).
Our goal is to learn a distribution over traffic scenes from which we can
sample new examples and evaluate the likelihood of existing ones. In
particular, given the SDV $\bm{a}_{0}\in\mathcal{A}$ and an HD map
$\bm{m}\in\mathcal{M}$, we aim to estimate the joint distribution over other
actors in the scene $\\{\bm{a}_{1},\ldots,\bm{a}_{n}\\}\subset\mathcal{A}$,
$\displaystyle p(\bm{a}_{1},\ldots,\bm{a}_{n}|\bm{m},\bm{a}_{0})$ (1)
The HD map $\bm{m}\in\mathcal{M}$ is a collection of polygons and polylines
that provide semantic priors for a region of interest around the SDV; e.g., lane boundaries, drivable areas, and traffic light states. These priors provide
important contextual information about the scene and allow us to generate
actors that are consistent with the underlying road topology.
We parameterize each actor $\bm{a}_{i}\in\mathcal{A}$ with an eight-
dimensional random variable containing its class label $c_{i}\in\mathbb{C}$,
its bird’s eye view location $(x_{i},y_{i})\in\mathbb{R}^{2}$, its bounding
box $\bm{b}_{i}\in\mathbb{B}$ (pedestrians are not represented by bounding boxes but by a single point indicating their center of gravity), and its velocity $\bm{v}_{i}\in\mathbb{R}^{2}$. Each bounding box
$\bm{b}_{i}\in\mathbb{B}$ is a 3-tuple consisting of the bounding box’s size
$(w_{i},l_{i})\in\mathbb{R}^{2}_{>0}$ and heading angle
$\theta_{i}\in[0,2\pi)$. In our experiments, $\mathbb{C}$ consists of three
classes: vehicles, pedestrians, and bicyclists. See Fig. 1 for an example.
Modeling Eq. 1 is a challenging task since the actors in a given scene are
highly correlated among themselves and with the map, and the number of actors
in the scene is random as well. We aim to model Eq. 1 such that our model is
easy to sample from and the resulting samples reflect the complexity and
diversity of real traffic scenes. Our approach is to autoregressively
factorize Eq. 1 into a product of conditional distributions. This yields a
natural generation process that sequentially inserts actors into the scene one
at a time. See Fig. 2 for an overview of our approach.
In the following, we first describe our autoregressive factorization of Eq. 1
and how we model this with a recurrent neural network (Sec. 3.1). Then, in
Sec. 3.2, we describe how SceneGen generates a new actor at each step of the
generation process. Finally, in Sec. 3.3, we discuss how we train and sample
from SceneGen.
### 3.1 The Autoregressive Generation Process
Given the SDV $\bm{a}_{0}\in\mathcal{A}$ and an HD map $\bm{m}\in\mathcal{M}$,
our goal is to estimate a conditional distribution over the actors in the
scene $\\{\bm{a}_{1},\ldots,\bm{a}_{n}\\}\subset\mathcal{A}$. As we alluded to
earlier, modeling this conditional distribution is challenging since the
actors in a given scene are highly correlated among themselves and with the
map, and the number of actors in the scene is random. Inspired by the recent
successes of neural autoregressive models [18, 49, 39], we propose to
autoregressively factorize $p(\bm{a}_{1},\ldots,\bm{a}_{n}|\bm{m},\bm{a}_{0})$
into a product of simpler conditional distributions. This factorization
simplifies the task of modeling the complex joint distribution
$p(\bm{a}_{1},\ldots,\bm{a}_{n}|\bm{m},\bm{a}_{0})$ and results in a model
with a tractable likelihood. Moreover, it yields a natural generation process
that mimics how a human might perform this task as well.
In order to perform this factorization, we assume a fixed canonical ordering
over the sequence of actors $\bm{a}_{1},\ldots,\bm{a}_{n}$,
$\displaystyle p(\bm{a}_{1},\ldots,\bm{a}_{n}|\bm{m},\bm{a}_{0})=p(\bm{a}_{1}|\bm{m},\bm{a}_{0})\prod_{i=2}^{n}p(\bm{a}_{i}|\bm{a}_{<i},\bm{m},\bm{a}_{0})$ (2)
where $\bm{a}_{<i}=\\{\bm{a}_{1},\ldots,\bm{a}_{i-1}\\}$ is the set of actors
up to and including the $(i-1)$-th actor in canonical order. In our experiments,
we choose a left-to-right, top-to-bottom order based on each actor’s position
in bird’s eye view coordinates. We found that this intuitive ordering works
well in practice.
Since the number of actors per scene is random, we introduce a stopping token
$\bot$ to indicate the end of our sequential generation process. In practice,
we treat $\bot$ as an auxiliary actor that, when generated, ends the
generation process. Therefore, for simplicity of notation, we assume that the
last actor $\bm{a}_{n}$ is always the stopping token $\bot$.
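To make the generation process concrete, the following minimal Python sketch spells out the loop implied by Eq. 2. The `init_state`/`sample_actor` interface and the `is_stop` flag are illustrative names, not an actual implementation.

```python
def generate_scene(model, hd_map, sdv, max_actors=100):
    """Sketch of the autoregressive generation process (Eq. 2)."""
    actors = []
    state = model.init_state(hd_map, sdv)  # encodes the map m and the SDV a_0
    for _ in range(max_actors):
        # sample a_i ~ p(a_i | a_<i, m, a_0); see Sec. 3.2 for the actor model
        actor, state = model.sample_actor(state, actors)
        if actor.is_stop:  # the auxiliary stopping token ends the scene
            break
        actors.append(actor)  # actors are kept in canonical order
    return actors
```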
#### Model architecture:
Our model uses a recurrent neural network to capture the long-range
dependencies across our autoregressive generation process. The basis of our
model is the ConvLSTM architecture [42]—an extension of the classic LSTM
architecture [22] to spatial data—and the input to our model at the $i$-th
generation step is a bird’s eye view multi-channel image encoding the SDV
$\bm{a}_{0}$, the HD map $\bm{m}$, and the actors generated so far
$\\{\bm{a}_{1},\ldots,\bm{a}_{i-1}\\}$.
At the $i$-th step of the generation process, let
$\bm{x}^{(i)}\in\mathbb{R}^{C\times H\times W}$ denote the multi-channel
image, where $C$ is the number of feature channels and $H\times W$ is the size
of the image grid. Given the previous hidden and cell states $\bm{h}^{(i-1)}$
and $\bm{c}^{(i-1)}$, the new hidden and cell states are given by:
$\displaystyle\bm{h}^{(i)},\bm{c}^{(i)}$
$\displaystyle=\mathrm{ConvLSTM}(\bm{x}^{(i)},\bm{h}^{(i-1)},\bm{c}^{(i-1)};\bm{w})$
(3) $\displaystyle\bm{f}^{(i)}$
$\displaystyle=\mathrm{CNN}_{\mathrm{b}}(\bm{h}^{(i)};\bm{w})$ (4)
where $\mathrm{ConvLSTM}$ is a two-layer ConvLSTM, $\mathrm{CNN}_{\mathrm{b}}$
is a five-layer convolutional neural network (CNN) that extracts backbone
features, and $\bm{w}$ are the neural network parameters. The features
$\bm{f}^{(i)}$ summarize the generated scene so far $\bm{a}_{<i}$,
$\bm{a}_{0}$, and $\bm{m}$, and we use $\bm{f}^{(i)}$ to predict the
conditional distribution $p(\bm{a}_{i}|\bm{a}_{<i},\bm{m},\bm{a}_{0})$, which
we describe next. See our appendix for details.
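For readers unfamiliar with the ConvLSTM, a minimal cell is sketched below in PyTorch; the kernel size and channel widths are illustrative choices rather than the configuration used here.

```python
import torch
import torch.nn as nn

class ConvLSTMCell(nn.Module):
    """Minimal ConvLSTM cell: an LSTM whose gates are computed with
    convolutions, so hidden and cell states keep a spatial layout."""
    def __init__(self, in_ch: int, hid_ch: int, kernel: int = 3):
        super().__init__()
        # one convolution produces all four gates at once
        self.gates = nn.Conv2d(in_ch + hid_ch, 4 * hid_ch, kernel,
                               padding=kernel // 2)

    def forward(self, x, h, c):
        # x: (B, in_ch, H, W); h, c: (B, hid_ch, H, W)
        i, f, o, g = self.gates(torch.cat([x, h], dim=1)).chunk(4, dim=1)
        c = torch.sigmoid(f) * c + torch.sigmoid(i) * torch.tanh(g)
        h = torch.sigmoid(o) * torch.tanh(c)
        return h, c
```

At the $i$-th step, $\bm{x}^{(i)}$ is passed through two such cells and a small CNN to produce the backbone features $\bm{f}^{(i)}$.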
### 3.2 A Probabilistic Model of Actors
Having specified the generation process, we now turn our attention to modeling
each actor probabilistically. As discussed earlier, each actor
$\bm{a}_{i}\in\mathcal{A}$ is parameterized by its class label
$c_{i}\in\mathbb{C}$, location $(x_{i},y_{i})\in\mathbb{R}^{2}$, oriented
bounding box $\bm{b}_{i}\in\mathbb{B}$ and velocity
$\bm{v}_{i}\in\mathbb{R}^{2}$. To capture the dependencies between these
attributes, we factorize $p(\bm{a}_{i}|\bm{a}_{<i},\bm{m},\bm{a}_{0})$ as
follows:
$\displaystyle
p(\bm{a}_{i})=p(c_{i})p(x_{i},y_{i}|c_{i})p(\bm{b}_{i}|c_{i},x_{i},y_{i})p(\bm{v}_{i}|c_{i},x_{i},y_{i},\bm{b}_{i})$
(5)
where we dropped the condition on $\bm{a}_{<i}$, $\bm{m}$, and $\bm{a}_{0}$ to
simplify notation. Thus, the distribution over an actor’s location is
conditional on its class; its bounding box is conditional on its class and
location; and its velocity is conditional on its class, location, and bounding
box. Note that if $\bm{a}_{i}$ is the stopping token $\bot$, we do not model
its location, bounding box, and velocity. Instead, we have
$p(\bm{a}_{i})=p(c_{i})$, where $c_{i}$ is the auxiliary class $c_{\bot}$.
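In code, Eq. 5 amounts to summing four conditional log-probabilities; the sketch below uses a hypothetical `heads` bundle for the four output heads described next, and an illustrative `STOP` class index.

```python
STOP = 3  # illustrative index of the auxiliary stopping class c_bot

def actor_log_prob(heads, f_i, actor):
    """Sketch of Eq. 5: the per-actor log-likelihood is a sum of four
    conditional log-probabilities. The `heads` interface and the `actor`
    field names are hypothetical."""
    lp = heads.cls(f_i).log_prob(actor.cls)
    if actor.cls == STOP:  # stopping token: class term only
        return lp
    lp = lp + heads.loc(f_i, actor.cls).log_prob(actor.xy)
    lp = lp + heads.box(f_i, actor.cls, actor.xy).log_prob(actor.box)
    lp = lp + heads.vel(f_i, actor.cls, actor.xy, actor.box).log_prob(actor.vel)
    return lp
```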
Figure 4: Qualitative comparison of traffic scenes generated by SceneGen and
various baselines.
#### Class:
To model a distribution over an actor’s class, we use a categorical
distribution whose support is the set of classes
$\mathbb{C}\cup\\{c_{\bot}\\}$ and whose parameters $\bm{\pi}_{\mathrm{c}}$
are predicted by a neural network:
$\displaystyle\bm{\pi}_{\mathrm{c}}$
$\displaystyle=\mathrm{MLP}_{\mathrm{c}}(\text{avg-
pool}(\bm{f}^{(i)});\bm{w})$ (6) $\displaystyle c_{i}$
$\displaystyle\sim\mathrm{Categorical}(\bm{\pi}_{\mathrm{c}})$ (7)
where $\text{avg-pool}\colon\mathbb{R}^{C\times H\times
W}\rightarrow\mathbb{R}^{C}$ is average pooling over the spatial dimensions
and $\mathrm{MLP}_{\mathrm{c}}$ is a three-layer multi-layer perceptron (MLP)
with softmax activations.
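Concretely, Eqs. 6–7 can be realized as average pooling followed by a small MLP over class logits; the sketch below is a minimal PyTorch rendering, with illustrative hidden widths rather than the actual architecture.

```python
import torch
import torch.nn as nn

class ClassHead(nn.Module):
    """Sketch of Eqs. 6-7: pool backbone features spatially, then map to a
    categorical distribution over classes plus the stopping token."""
    def __init__(self, feat_ch: int, n_classes: int):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(feat_ch, 128), nn.ReLU(),
            nn.Linear(128, 128), nn.ReLU(),
            nn.Linear(128, n_classes + 1))  # +1 for the stop token

    def forward(self, f):  # f: (B, C, H, W)
        logits = self.mlp(f.mean(dim=(2, 3)))  # spatial average pooling
        return torch.distributions.Categorical(logits=logits)
```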
#### Location:
We apply uniform quantization to the actor’s position and model the quantized
values using a categorical distribution. The support of this distribution is
the set of $H\times W$ quantized bins within our region of interest and its
parameters $\bm{\pi}_{\mathrm{loc}}$ are predicted by a class-specific CNN.
This approach allows the model to express highly multi-modal distributions
without making assumptions about the distribution’s shape [49]. To recover
continuous values, we assume a uniform distribution within each quantization
bin.
Let $k$ denote an index into one of the $H\times W$ quantized bins, and
suppose $\lfloor\bm{p}_{k}\rfloor\in\mathbb{R}^{2}$ (_resp._, $\lceil\bm{p}_{k}\rceil\in\mathbb{R}^{2}$) are the minimum (_resp._, maximum) continuous coordinates in the $k$-th bin. We model $p(x_{i},y_{i}|c_{i})$ as
follows:
$\displaystyle\bm{\pi}_{\mathrm{loc}}$
$\displaystyle=\mathrm{CNN}_{\mathrm{loc}}(\bm{f}^{(i)};c_{i},\bm{w})$ (8)
$\displaystyle k$
$\displaystyle\sim\mathrm{Categorical}(\bm{\pi}_{\mathrm{loc}})$ (9)
$\displaystyle(x_{i},y_{i})$
$\displaystyle\sim\mathrm{Uniform}(\lfloor\bm{p}_{k}\rfloor,\lceil\bm{p}_{k}\rceil)$
(10)
where $\mathrm{CNN}_{\mathrm{loc}}(\cdot;c_{i},\bm{w})$ is a CNN with softmax
activations for the class $c_{i}$. During inference, we mask and re-normalize
$\bm{\pi}_{\mathrm{loc}}$ such that quantized bins with invalid positions
according to our canonical ordering have zero probability mass. Note that we
do not mask during training since this resulted in worse performance.
After sampling the actor’s location $(x_{i},y_{i})\in\mathbb{R}^{2}$, we
extract a feature vector $\bm{f}^{(i)}_{x_{i},y_{i}}\in\mathbb{R}^{C}$ by
spatially indexing into the $k$-th bin of $\bm{f}^{(i)}$. This feature vector
captures local information at $(x_{i},y_{i})$ and is used to subsequently
predict the actor’s bounding box and velocity.
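The quantize-mask-sample-dequantize procedure of Eqs. 8–10 can be summarized as follows; tensor shapes and names are illustrative.

```python
import torch

def sample_location(loc_logits, valid_mask, bin_min, bin_max):
    """Sketch of Eqs. 8-10 with the inference-time masking described above.
    loc_logits: (H*W,) scores from the class-specific CNN
    valid_mask: (H*W,) bool, False where the canonical ordering is violated
    bin_min, bin_max: (H*W, 2) continuous extents of each quantized bin."""
    loc_logits = loc_logits.masked_fill(~valid_mask, float("-inf"))
    k = torch.distributions.Categorical(logits=loc_logits).sample()
    # a uniform draw within the chosen bin recovers a continuous position
    u = torch.rand(2)
    return bin_min[k] + u * (bin_max[k] - bin_min[k])
```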
 | ATG4D | | | | | | Argoverse | | | | |
Method | NLL | Features | Class | Size | Speed | Heading | NLL | Features | Class | Size | Speed | Heading
---|---|---|---|---|---|---|---|---|---|---|---|---
Prob. Grammar | - | 0.20 | 0.24 | 0.46 | 0.34 | 0.31 | - | 0.38 | 0.26 | 0.41 | 0.57 | 0.38
MetaSim | - | 0.12 | 0.24 | 0.45 | 0.35 | 0.15 | - | 0.18 | 0.26 | 0.50 | 0.52 | 0.18
Procedural | - | 0.38 | 0.24 | 0.17 | 0.34 | 0.07 | - | 0.58 | 0.26 | 0.23 | 0.59 | 0.17
Lane Graph | - | 0.17 | 0.24 | 0.30 | 0.32 | 0.16 | - | 0.11 | 0.26 | 0.31 | 0.32 | 0.28
LayoutVAE | 210.80 | 0.15 | 0.12 | 0.18 | 0.33 | 0.29 | 200.78 | 0.25 | 0.11 | 0.21 | 0.41 | 0.29
SceneGen | 59.86 | 0.11 | 0.20 | 0.06 | 0.27 | 0.08 | 67.11 | 0.14 | 0.21 | 0.17 | 0.17 | 0.21
Table 1: Negative log-likelihood (NLL) and maximum mean discrepancy (MMD) results on ATG4D and Argoverse. NLL is reported in _nats_, averaged across all scenes in the test set. MMD is computed between distributions of features extracted by a motion forecasting model and various scene statistics (see main text for description). For all metrics, lower is better.
#### Bounding box:
An actor’s bounding box $\bm{b}_{i}\in\mathbb{B}$ consists of its width and
length $(w_{i},l_{i})\in\mathbb{R}^{2}_{>0}$ and its heading
$\theta_{i}\in[0,2\pi)$. We model the distributions over each of these
independently. For an actor’s bounding box size, we use a mixture of $K$
bivariate log-normal distributions:
$\displaystyle[\bm{\pi}_{\mathrm{box}},\bm{\mu}_{\mathrm{box}},\bm{\Sigma}_{\mathrm{box}}]$
$\displaystyle=\mathrm{MLP}_{\mathrm{box}}(\bm{f}^{(i)}_{x_{i},y_{i}};c_{i},\bm{w})$
(11) $\displaystyle k$
$\displaystyle\sim\mathrm{Categorical}(\bm{\pi}_{\mathrm{box}})$ (12)
$\displaystyle(w_{i},l_{i})$
$\displaystyle\sim\mathrm{LogNormal}(\bm{\mu}_{\mathrm{box},k},\bm{\Sigma}_{\mathrm{box},k})$
(13)
where $\bm{\pi}_{\mathrm{box}}$ are mixture weights, each
$\bm{\mu}_{\mathrm{box},k}\in\mathbb{R}^{2}$ and
$\bm{\Sigma}_{\mathrm{box},k}\in\mathbb{S}_{+}^{2}$ parameterize a component
log-normal distribution, and $\mathrm{MLP}_{\mathrm{box}}(\cdot;c_{i},\bm{w})$
is a three-layer MLP for the class $c_{i}$. This parameterization allows our
model to naturally capture the multi-modality of actor sizes in real world
data; e.g., the size of sedans versus trucks.
Similarly, we model an actor’s heading angle with a mixture of $K$ Von-Mises
distributions:
$\displaystyle[\bm{\pi}_{\theta},\mu_{\theta},\kappa_{\theta}]$
$\displaystyle=\mathrm{MLP}_{\theta}(\bm{f}^{(i)}_{x_{i},y_{i}};c_{i},\bm{w})$
(14) $\displaystyle k$
$\displaystyle\sim\mathrm{Categorical}(\bm{\pi}_{\theta})$ (15)
$\displaystyle\theta_{i}$
$\displaystyle\sim\mathrm{VonMises}(\mu_{\theta,k},\kappa_{\theta,k})$ (16)
where $\bm{\pi}_{\theta}$ are mixture weights, each
$\mu_{\theta,k}\in[0,2\pi)$ and $\kappa_{\theta,k}>0$ parameterize a component
Von-Mises distribution, and $\mathrm{MLP}_{\theta}(\cdot;c_{i},\bm{w})$ is a
three-layer MLP for the class $c_{i}$. The Von-Mises distribution is a close
approximation of a normal distribution wrapped around the unit circle [38] and
has the probability density function
$\displaystyle p(\theta|\mu,\kappa)=\frac{e^{\kappa\cos(\theta-\mu)}}{2\pi
I_{0}(\kappa)}$ (17)
where $I_{0}$ is the modified Bessel function of order 0. We use a mixture of
Von-Mises distributions to capture the multi-modality of headings in real
world data; e.g., a vehicle can go straight or turn at an intersection. To sample
from a mixture of Von-Mises distributions, we sample a component $k$ from a
categorical distribution and then sample $\theta$ from the Von-Mises
distribution of the $k$-th component [3].
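Both mixtures are straightforward to sample with standard distribution libraries. The sketch below uses `torch.distributions` under two stated conventions: a bivariate log-normal is sampled as the exponential of a Gaussian, and the von Mises support $[-\pi,\pi)$ is folded into $[0,2\pi)$. All shapes and names are illustrative.

```python
import math
import torch
from torch import distributions as D

def sample_box(pi_sz, mu_sz, tril_sz, pi_th, mu_th, kappa_th):
    """Sketch of Eqs. 11-16 with K mixture components. pi_* are (K,)
    logits; mu_sz is (K, 2); tril_sz is (K, 2, 2) Cholesky factors of the
    covariances; mu_th and kappa_th are (K,)."""
    # size: a mixture of bivariate log-normals = exp of a Gaussian mixture
    size = D.MixtureSameFamily(
        D.Categorical(logits=pi_sz),
        D.MultivariateNormal(mu_sz, scale_tril=tril_sz)).sample().exp()
    # heading: mixture of von Mises, folded into [0, 2*pi)
    theta = D.MixtureSameFamily(
        D.Categorical(logits=pi_th),
        D.VonMises(mu_th, kappa_th)).sample() % (2 * math.pi)
    w, l = size
    return w, l, theta
```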
#### Velocity:
We parameterize the actor’s velocity $\bm{v}_{i}\in\mathbb{R}^{2}$ as
$\bm{v}_{i}=(s_{i}\cos\omega_{i},s_{i}\sin\omega_{i})$, where
$s_{i}\in\mathbb{R}_{\geq 0}$ is its speed and $\omega_{i}\in[0,2\pi)$ is its
direction. Note that this parameterization is not unique since $\omega_{i}$
can take any value in $[0,2\pi)$ when $\bm{v}_{i}=0$. Therefore, we model the
actor’s velocity as a mixture model where one of the $K\geq 2$ components
corresponds to $\bm{v}_{i}=0$. More concretely, we have
$\displaystyle\bm{\pi}_{\mathrm{v}}$
$\displaystyle=\mathrm{MLP}_{\mathrm{v}}(\bm{f}^{(i)}_{x_{i},y_{i}};c_{i},\bm{w})$
(18) $\displaystyle k$
$\displaystyle\sim\mathrm{Categorical}(\bm{\pi}_{\mathrm{v}})$ (19)
where for $k>0$, we have
$\bm{v}_{i}=(s_{i}\cos\omega_{i},s_{i}\sin\omega_{i})$, with
$\displaystyle[\mu_{\mathrm{s}},\sigma_{\mathrm{s}}]$
$\displaystyle=\mathrm{MLP}_{\mathrm{s}}(\bm{f}^{(i)}_{x_{i},y_{i}};c_{i},\bm{w})$
(20) $\displaystyle[\mu_{\omega},\kappa_{\omega}]$
$\displaystyle=\mathrm{MLP}_{\omega}(\bm{f}^{(i)}_{x_{i},y_{i}};c_{i},\bm{w})$
(21) $\displaystyle s_{i}$
$\displaystyle\sim\mathrm{LogNormal}(\mu_{\mathrm{s},k},\sigma_{\mathrm{s},k})$
(22) $\displaystyle\omega_{i}$
$\displaystyle\sim\mathrm{VonMises}(\mu_{\mathrm{\omega},k},\kappa_{\mathrm{\omega},k})$
(23)
and for $k=0$, we have $\bm{v}_{i}=0$. As before, we use three-layer MLPs to
predict the parameters of each distribution.
For vehicles and bicyclists, we parameterize $\omega_{i}\in[0,2\pi)$ as an
offset relative to the actor’s heading $\theta_{i}\in[0,2\pi)$. This is
equivalent to parameterizing their velocities with a bicycle model [43], which
we found improves sample quality.
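Putting Eqs. 18–23 together, velocity sampling might look as follows; the component indexing and class labels are illustrative.

```python
import torch
from torch import distributions as D

def sample_velocity(pi_v, mu_s, sigma_s, mu_w, kappa_w, theta, cls):
    """Sketch of Eqs. 18-23. Component k = 0 pins the velocity to exactly
    zero; otherwise speed is log-normal and direction is von Mises, taken
    as an offset from the heading theta for vehicles and bicyclists (the
    bicycle-model parameterization). Shapes/indexing are illustrative."""
    k = D.Categorical(logits=pi_v).sample()
    if k == 0:
        return torch.zeros(2)            # stationary actor
    s = D.LogNormal(mu_s[k], sigma_s[k]).sample()
    w = D.VonMises(mu_w[k], kappa_w[k]).sample()
    if cls in ("vehicle", "bicyclist"):  # direction as a heading offset
        w = w + theta
    return torch.stack([s * torch.cos(w), s * torch.sin(w)])
```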
### 3.3 Learning and Inference
#### Sampling:
Pure sampling from deep autoregressive models can lead to degenerate examples
due to their “unreliable long tails” [23]. Therefore, we adopt a sampling
strategy inspired by _nucleus sampling_ [23]. Specifically, at each generation
step, we sample from each of SceneGen’s output distributions $M$ times and
keep the most likely sample. We found this to help avoid degenerate traffic
scenes while maintaining sample diversity. Furthermore, we reject vehicles and
bicyclists whose bounding boxes collide with those of the actors sampled so
far.
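The sample-$M$-keep-most-likely rule is a one-liner given a distribution object exposing `sample` and `log_prob`; a sketch:

```python
import torch

def most_likely_of_m(dist: torch.distributions.Distribution, m: int = 10):
    """Draw m candidates from one of the model's output distributions and
    keep the one the model itself scores highest. (m = 10 mirrors the
    setting reported in Sec. 4.2; the helper itself is illustrative.)"""
    candidates = dist.sample((m,))
    return candidates[dist.log_prob(candidates).argmax()]
```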
#### Training:
We train our model to maximize the log-likelihood of real traffic scenes in
our training dataset:
$\displaystyle\bm{w}^{\star}$
$\displaystyle=\mathrm{arg}\max_{\bm{w}}\sum_{i=1}^{N}\log
p(\bm{a}_{i,1},\ldots,\bm{a}_{i,n}|\bm{m}_{i},\bm{a}_{i,0};\bm{w})$ (24)
where $\bm{w}$ are the neural network parameters and $N$ is the number of
samples in our training set. In practice, we use the Adam optimizer [27] to
minimize the average negative log-likelihood over mini-batches. We use teacher
forcing and backpropagation-through-time to train through the generation
process, up to a fixed window as memory allows.
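A minimal sketch of this training loop, assuming a hypothetical `model.step` that consumes the ground-truth scene prefix (teacher forcing) and returns the log-probability of the next ground-truth actor:

```python
import torch

def train_step(model, optimizer, scene, window: int = 16):
    """Sketch of likelihood training (Eq. 24) with teacher forcing and
    truncated backpropagation-through-time. The interface and the window
    size are illustrative."""
    optimizer.zero_grad()
    h = c = None
    nll = torch.zeros(())
    for i, actor in enumerate(scene.actors):   # canonical order, incl. stop
        if h is not None and i % window == 0:  # truncate the BPTT window
            h, c = h.detach(), c.detach()
        log_p, h, c = model.step(scene.raster(i), actor, h, c)
        nll = nll - log_p
    nll.backward()
    optimizer.step()
    return float(nll)
```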
## 4 Experiments
We evaluate SceneGen on two self-driving datasets: Argoverse [7] and ATG4D
[54]. Our results show that SceneGen can generate more realistic traffic
scenes than the competing methods (Sec. 4.3). We also demonstrate how SceneGen
with sensor simulation can be used to train perception models that generalize
to the real world (Sec. 4.4).
### 4.1 Datasets
#### ATG4D:
ATG4D [54] is a large-scale dataset collected by a fleet of SDVs in cities
across North America. It consists of 5500 25-second logs, which we split into
a training set of 5000 and an evaluation set of 500. Each log is subsampled at
10Hz to yield 250 traffic scenes, and each scene is annotated with bounding
boxes for vehicles, pedestrians, and bicyclists. Each log also provides HD
maps that encode lane boundaries, drivable areas, and crosswalks as polygons,
and lane centerlines as polylines. Each lane segment is annotated with
attributes such as its type (car vs. bike), turn direction, boundary colors, and
traffic light state.
In our experiments, we subdivide the training set into two splits of 4000 and
1000 logs respectively. We use the first split to train the traffic scene
generation models and the second split to train the perception models in Sec.
4.4.
#### Argoverse:
Argoverse [7] consists of two datasets collected by a fleet of SDVs in
Pittsburgh and Miami. We use the Argoverse 3D Tracking dataset which contains
track annotations for 65 training logs and 24 validation logs. Each log is
subsampled at 10Hz to yield 13,122 training scenes and 5015 validation scenes.
As in ATG4D, Argoverse provides HD maps annotated with drivable areas and lane
segment centerlines and their attributes; e.g., turn direction. However, Argoverse
does not provide crosswalk polygons, lane types, lane boundary colors, and
traffic lights.
# Mixtures | Scene | Vehicle | Pedestrian | Bicyclist
---|---|---|---|---
1 | 125.97 | 7.26 | 10.36 | 9.16
3 | 68.22 | 2.64 | 8.52 | 7.34
5 | 64.05 | 2.35 | 8.27 | 7.22
10 | 59.86 | 1.94 | 8.32 | 6.90
Table 2: Ablation of the number of mixture components in ATG4D. Scene NLL is averaged across scenes; NLL per class is the average NLL per actor of that class.
L | DA | C | TL | Scene | Veh. | Ped. | Bic.
---|---|---|---|---|---|---|---
| | | | 93.73 | 4.90 | 8.85 | 7.17
✓ | | | | 63.33 | 2.12 | 8.69 | 7.10
✓ | ✓ | | | 57.66 | 1.73 | 8.40 | 6.84
✓ | ✓ | ✓ | | 57.96 | 1.77 | 8.32 | 6.61
✓ | ✓ | ✓ | ✓ | 59.86 | 1.94 | 8.32 | 6.90
Table 3: Ablation of map elements in ATG4D (in NLL). L: lanes; DA: drivable areas; C: crosswalks; TL: traffic lights.
### 4.2 Experiment Setup
#### Baselines:
Our first set of baselines is inspired by recent work on probabilistic scene
grammars and graphs [37, 26, 8]. In particular, we design a probabilistic
grammar of traffic scenes (Prob. Grammar) such that actors are randomly placed
onto lane segments using a hand-crafted prior [37]. Sampling from this grammar
yields a _scene graph_ , and our next baseline (MetaSim) uses a graph neural
network to transform the attributes of each actor in the scene graph. Our
implementation follows Kar et al. [26], except we use a training algorithm
that is supervised with heuristically generated ground truth scene
graphs. (We were unable to train MetaSim using their unsupervised losses.)
Our next set of baselines is inspired by methods that reason directly about
the road topology of the scene [52, 51, 24, 32]. Given a _lane graph_ of the
scene, Procedural uses a set of rules to place actors such that they follow
lane centerlines, maintain a minimum clearance to leading actors, etc. Each
actor’s bounding box is sampled from a Gaussian KDE fitted to the training
dataset [5] and velocities are set to satisfy speed limits and a time gap
between successive actors on the lane graph. Similar to MetaSim, we also
consider a learning-based version of Procedural that uses a lane graph neural
network [28] to transform each actor’s position, bounding box, and velocity
(Lane Graph).
Since the HD maps in ATG4D and Argoverse do not provide reference paths for
pedestrians, the aforementioned baselines cannot generate pedestrians. (In Argoverse, these baselines generate vehicles only, since bike lanes are not given.) This highlights the challenge of designing good heuristics. Therefore,
we also compare against LayoutVAE [25]—a variational autoencoder for image
layouts that we adapt for traffic scene generation. We modify LayoutVAE to
condition on HD maps and output oriented bounding boxes and velocities for
actors of every class. Please see our appendix for details.
#### Metrics:
Our first metric measures the negative log-likelihood (NLL) of real traffic
scenes from the evaluation distribution, measured in _nats_. NLL is a standard
metric to compare generative models with tractable likelihoods. However, as
many of our baselines do not have likelihoods, we compute a sample-based
metric as well: maximum mean discrepancy (MMD) [19]. For two distributions $p$
and $q$, MMD measures a distance between $p$ and $q$ as
$\mathrm{MMD}^{2}(p,q)=\mathbb{E}_{x,x^{\prime}\sim p}[k(x,x^{\prime})]+\mathbb{E}_{y,y^{\prime}\sim q}[k(y,y^{\prime})]-2\,\mathbb{E}_{x\sim p,\,y\sim q}[k(x,y)]$ (25)
for some kernel $k$. Following [56, 29], we compute MMD using Gaussian kernels
with the total variation distance to compare distributions of scene statistics
between generated and real traffic scenes. Our scene statistics measure the
distribution of classes, bounding box sizes, speeds, and heading angles
(relative to the SDV) in each scene. To peer into the global properties of the
traffic scene, we also compute MMD in the feature space of a pre-trained
motion forecasting model that takes a rasterized image of the scene as input
[53]. This is akin to the popular IS [41], FID [21], and KID [4] metrics for
evaluating generative models, except we use a feature extractor trained on
traffic scenes. Please see our appendix for details.
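For concreteness, a small NumPy sketch of the MMD estimate with a Gaussian kernel over total variation distances between scene-statistic histograms; the bandwidth is an illustrative choice rather than a value from this work.

```python
import numpy as np

def gaussian_tv_kernel(p, q, sigma=1.0):
    """Gaussian kernel on the total variation distance between two
    normalized histograms of a scene statistic."""
    tv = 0.5 * np.abs(np.asarray(p) - np.asarray(q)).sum()
    return np.exp(-(tv ** 2) / (2.0 * sigma ** 2))

def mmd_squared(xs, ys, kernel=gaussian_tv_kernel):
    """Biased sample estimate of Eq. 25 over two collections of histograms
    (generated scenes xs, real scenes ys)."""
    exx = np.mean([kernel(a, b) for a in xs for b in xs])
    eyy = np.mean([kernel(a, b) for a in ys for b in ys])
    exy = np.mean([kernel(a, b) for a in xs for b in ys])
    return exx + eyy - 2.0 * exy
```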
#### Additional details:
Each traffic scene is a $80m\times 80m$ region of interest centered on the ego
SDV. By default, SceneGen uses $K=10$ mixture components and conditions on all
available map elements for each dataset. We train SceneGen using the Adam
optimizer [27] with a learning rate of $10^{-4}$ and a batch size of 16,
until convergence. When sampling each actor’s position, heading, and velocity,
we sample $M=10$ times and keep the most likely sample.
### 4.3 Results
#### Quantitative results:
Tab. 1 summarizes the NLL and MMD results for ATG4D and Argoverse. Overall,
SceneGen achieves the best results across both datasets, demonstrating that it
can better model real traffic scenes and synthesize realistic examples as
well. Interestingly, all learning-based methods outperform the hand-tuned
baselines with respect to MMD on deep features—a testament to the difficulty
of designing good heuristics.
#### Qualitative results:
Fig. 3 visualizes samples generated by SceneGen on ATG4D and Argoverse. Fig. 4
compares traffic scenes generated by SceneGen and various baselines. Although
MetaSim and Lane Graph generate reasonable scenes, they are limited by their
underlying heuristics; e.g., actors follow lane centerlines. LayoutVAE generates a greater variety of actors; however, the model does not position actors on the map accurately, rendering the overall scene unrealistic. In contrast, SceneGen’s samples reflect the complexity of real traffic scenes much better. That said, SceneGen occasionally generates near-collision scenes that are plausible but unlikely; e.g., Fig. 3, top-right.
Figure 5: ATG4D scene with a traffic violation.
#### Ablation studies:
In Tab. 2, we sweep over the number of mixture components used to parameterize
distributions of bounding boxes and velocities. We see that increasing the
number of components consistently lowers NLL, reflecting the need to model the
multi-modality of real traffic scenes. We also ablate the input map to
SceneGen: starting from an unconditional model, we progressively add lanes,
drivable areas, crosswalks, and traffic light states. From Tab. 3, we see that
using more map elements generally improves NLL. Surprisingly, incorporating
traffic lights slightly degrades performance, which we conjecture is due to
infrequent traffic light observations in ATG4D.
#### Discovering interesting scenes:
We use SceneGen to find unlikely scenes in ATG4D by searching for scenes with
the highest NLL, normalized by the number of actors. Fig. 5 shows an example
of a traffic violation found via this procedure; the violating actor has an
NLL of 21.28.
 | Vehicle | | Pedestrian | | Bicyclist |
Method | IoU 0.5 | IoU 0.7 | IoU 0.3 | IoU 0.5 | IoU 0.3 | IoU 0.5
---|---|---|---|---|---|---
Prob. Gram. | 81.1 | 66.6 | - | - | 11.2 | 10.6
MetaSim | 76.3 | 63.3 | - | - | 8.2 | 7.5
Procedural | 80.2 | 63.0 | - | - | 6.5 | 3.8
Lane Graph | 82.9 | 71.7 | - | - | 7.6 | 6.9
LayoutVAE | 85.9 | 76.3 | 49.3 | 41.8 | 18.4 | 16.4
SceneGen | 90.4 | 82.4 | 58.1 | 48.7 | 19.6 | 17.9
Real Scenes | 93.7 | 86.7 | 69.3 | 61.6 | 29.2 | 25.9
Table 4: Detection AP on real ATG4D scenes at the listed IoU thresholds.
Figure 6: Outputs of a detector trained with SceneGen scenes.
### 4.4 Sim2Real Evaluation
Our next experiment demonstrates that SceneGen coupled with sensor simulation
can generate realistic labeled data for training perception models. For each
method under evaluation, we generate 250,000 traffic scenes conditioned on the
SDV and HD map in each frame of the 1000 held-out logs in ATG4D. Next, we use
LiDARsim [32] to simulate the LiDAR point cloud corresponding to each scene.
Finally, we train a 3D object detector [54] using the simulated LiDAR and
evaluate its performance on real scenes and LiDAR in ATG4D.
From Tab. 4, we see that SceneGen’s traffic scenes exhibit the lowest sim2real
gap. Here, Real Scenes is simulated LiDAR from ground truth placements. This
reaffirms our claim that the underlying rules and priors used in MetaSim and
Lane Graph induce a content gap. By eschewing these heuristics altogether,
SceneGen learns to generate significantly more realistic traffic scenes.
Intriguingly, LayoutVAE performs competitively despite struggling to position
actors on the map. We conjecture that this is because LayoutVAE captures the
diversity of actor classes, sizes, headings, etc. well. However, by also modeling actor positions accurately, SceneGen further narrows the sim2real gap toward that of ground truth traffic scenes.
## 5 Conclusion
We have presented SceneGen—a neural autoregressive model of traffic scenes
from which we can sample new examples as well as evaluate the likelihood of
existing ones. Unlike prior methods, SceneGen eschews the need for rules or
heuristics, making it a more flexible and scalable approach for modeling the
complexity and diversity of real world traffic scenes. As a result, SceneGen
is able to generate realistic traffic scenes, thus taking an important step
towards safe and scalable self-driving.
## References
* [1] M. Bando, K. Hasebe, A. Nakayama, A. Shibata, and Y. Sugiyama. Dynamical model of traffic congestion and numerical simulation. Physical Review E, 1995.
* [2] Yoshua Bengio and Samy Bengio. Modeling high-dimensional discrete data with multi-layer neural networks. In NeurIPS, 1999.
* (3) Donald Best and Nicholas Fisher. Efficient simulation of the von Mises distribution. Journal of the Royal Statistical Society. Series C. Applied Statistics, 1979.
* [4] Mikolaj Binkowski, Dougal J. Sutherland, Michael Arbel, and Arthur Gretton. Demystifying MMD gans. In ICLR, 2018.
* (5) Christopher M. Bishop. Pattern Recognition and Machine Learning. 2006.
* [6] Robert E. Chandler, Robert Herman, and Elliott W. Montroll. Traffic dynamics: Studies in car following. In Operations Research, 1958.
* [7] Ming-Fang Chang, John Lambert, Patsorn Sangkloy, Jagjeet Singh, Slawomir Bak, Andrew Hartnett, De Wang, Peter Carr, Simon Lucey, Deva Ramanan, and James Hays. Argoverse: 3d tracking and forecasting with rich maps. In CVPR, 2019.
* (8) Jeevan Devaranjan, Amlan Kar, and Sanja Fidler. Meta-sim2: Unsupervised learning of scene structure for synthetic data generation. 2020.
* [9] Alexey Dosovitskiy, Germán Ros, Felipe Codevilla, Antonio M. López, and Vladlen Koltun. CARLA: an open urban driving simulator. In CoRL, 2017.
* [10] Jin Fang, Dingfu Zhou, Feilong Yan, Tongtong Zhao, Feihu Zhang, Yu Ma, Liang Wang, and Ruigang Yang. Augmented lidar simulator for autonomous driving, 2019.
* [11] Martin Fellendorf. Vissim: A microscopic simulation tool to evaluate actuated signal control including bus priority. 1994.
* [12] Brendan J. Frey, Geoffrey E. Hinton, and Peter Dayan. Does the wake-sleep algorithm produce good density estimators? In NeurIPS, 1995.
* [13] Denos C. Gazis, Robert Herman, and Richard W. Rothery. Nonlinear follow-the-leader models of traffic flow. In Operations Research, 1961.
* [14] Andreas Geiger, Martin Lauer, and Raquel Urtasun. A generative model for 3d urban scene understanding from movable platforms. In CVPR, 2011.
* [15] Andreas Geiger, Christian Wojek, and Raquel Urtasun. Joint 3d estimation of objects and scene layout. In NeurIPS, 2011.
* [16] Daniel Gerlough. Simulation of Freeway Traffic on a General-purpose Discrete Variable Computer. 1955.
* [17] Peter Gipps. Computer program multsim for simulating output from vehicle detectors on a multi-lane signal-controlled road. 1976.
* [18] Alex Graves. Generating sequences with recurrent neural networks. CoRR, 2013.
* [19] Arthur Gretton, Karsten M. Borgwardt, Malte J. Rasch, Bernhard Schölkopf, and Alexander J. Smola. A kernel two-sample test. JMLR, 2012.
* [20] David Ha and Douglas Eck. A neural representation of sketch drawings. In ICLR, 2018.
* [21] Martin Heusel, Hubert Ramsauer, Thomas Unterthiner, Bernhard Nessler, and Sepp Hochreiter. Gans trained by a two time-scale update rule converge to a local nash equilibrium. In NeurIPS, 2017.
* [22] Sepp Hochreiter and Jürgen Schmidhuber. Long short-term memory. Neural Computation, 1997.
* [23] Ari Holtzman, Jan Buys, Li Du, Maxwell Forbes, and Yejin Choi. The curious case of neural text degeneration. In ICLR, 2020.
* [24] Stefan Jesenski, Jan Erik Stellet, Florian A. Schiegg, and J. Marius Zöllner. Generation of scenes in intersections for the validation of highly automated driving functions. In IV, 2019.
* [25] Akash Abdu Jyothi, Thibaut Durand, Jiawei He, Leonid Sigal, and Greg Mori. Layoutvae: Stochastic scene layout generation from a label set. In ICCV, 2019.
* [26] Amlan Kar, Aayush Prakash, Ming-Yu Liu, Eric Cameracci, Justin Yuan, Matt Rusiniak, David Acuna, Antonio Torralba, and Sanja Fidler. Meta-sim: Learning to generate synthetic datasets. In ICCV, 2019.
* [27] Diederik P. Kingma and Jimmy Ba. Adam: A method for stochastic optimization. In ICLR, 2015.
* [28] Ming Liang, Bin Yang, Rui Hu, Yun Chen, Renjie Liao, Song Feng, and Raquel Urtasun. Learning lane graph representations for motion forecasting. In ECCV, 2020.
* [29] Renjie Liao, Yujia Li, Yang Song, Shenlong Wang, William L. Hamilton, David Duvenaud, Raquel Urtasun, and Richard S. Zemel. Efficient graph generation with graph recurrent attention networks. In NeurIPS, 2019.
* [30] Michael James Lighthill and Gerald Beresford Whitham. On kinematic waves. II. A theory of traffic flow on long crowded roads. In Royal Society of London. Series A, Mathematical and Physical Sciences, 1955.
* [31] Pablo Álvarez López, Michael Behrisch, Laura Bieker-Walz, Jakob Erdmann, Yun-Pang Flötteröd, Robert Hilbrich, Leonhard Lücken, Johannes Rummel, Peter Wagner, and Evamarie Wießner. Microscopic traffic simulation using SUMO. In ITSC, 2018.
* [32] Sivabalan Manivasagam, Shenlong Wang, Kelvin Wong, Wenyuan Zeng, Mikita Sazanovich, Shuhan Tan, Bin Yang, Wei-Chiu Ma, and Raquel Urtasun. Lidarsim: Realistic lidar simulation by leveraging the real world. In CVPR, 2020.
* [33] Charlie Nash, Yaroslav Ganin, S. M. Ali Eslami, and Peter W. Battaglia. Polygen: An autoregressive generative model of 3d meshes. ICML, 2020.
* [34] G. F. Newell. Nonlinear effects in the dynamics of car following. Operations Research, 1961.
* [35] Larry Owen, Yunlong Zhang, Lei Rao, and Gene Mchale. Traffic flow simulation using corsim. Winter Simulation Conference, 2001.
* [36] Louis A. Pipes. An operational analysis of traffic dynamics. In Journal of Applied Physics, 1953.
* [37] Aayush Prakash, Shaad Boochoon, Mark Brophy, David Acuna, Eric Cameracci, Gavriel State, Omer Shapira, and Stan Birchfield. Structured domain randomization: Bridging the reality gap by context-aware synthetic data. In ICRA, 2019.
* [38] Sergey Prokudin, Peter V. Gehler, and Sebastian Nowozin. Deep directional statistics: Pose estimation with uncertainty quantification. In ECCV, 2018.
* [39] Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. Language models are unsupervised multitask learners. 2018.
* [40] Paul Richards. Shock waves on the highway. In Operations Research, 1956.
* [41] Tim Salimans, Ian J. Goodfellow, Wojciech Zaremba, Vicki Cheung, Alec Radford, and Xi Chen. Improved techniques for training gans. In NeurIPS, 2016.
* [42] Xingjian Shi, Zhourong Chen, Hao Wang, Dit-Yan Yeung, Wai-Kin Wong, and Wang-chun Woo. Convolutional LSTM network: A machine learning approach for precipitation nowcasting. In NeurIPS, 2015.
* [43] Saied Taheri. An investigation and design of slip control braking systems integrated with four wheel steering. 1990.
* [44] Martin Treiber, Ansgar Hennecke, and Dirk Helbing. Congested traffic states in empirical observations and microscopic simulations. In Automatisierungstechnik, 2000.
* [45] Benigno Uria, Iain Murray, and Hugo Larochelle. NADE: the real-valued neural autoregressive density-estimator. CoRR, 2013.
* [46] Benigno Uria, Iain Murray, and Hugo Larochelle. RNADE: the real-valued neural autoregressive density-estimator. In NeurIPS, 2013.
* [47] Benigno Uria, Iain Murray, and Hugo Larochelle. A deep and tractable density estimator. In ICML, 2014.
* [48] Aäron van den Oord, Sander Dieleman, Heiga Zen, Karen Simonyan, Oriol Vinyals, Alex Graves, Nal Kalchbrenner, Andrew W. Senior, and Koray Kavukcuoglu. Wavenet: A generative model for raw audio. In ISCA, 2016.
* [49] Aäron van den Oord, Nal Kalchbrenner, and Koray Kavukcuoglu. Pixel recurrent neural networks. In ICML, 2016.
* [50] Kai Wang, Manolis Savva, Angel X. Chang, and Daniel Ritchie. Deep convolutional priors for indoor scene synthesis. TOG, 2018.
* [51] Tim Allan Wheeler and Mykel J. Kochenderfer. Factor graph scene distributions for automotive safety analysis. In ITSC, 2016.
* [52] Tim Allan Wheeler, Mykel J. Kochenderfer, and Philipp Robbel. Initial scene configurations for highway traffic propagation. In ITSC, 2015.
* [53] Kelvin Wong, Qiang Zhang, Ming Liang, Bin Yang, Renjie Liao, Abbas Sadat, and Raquel Urtasun. Testing the safety of self-driving vehicles by simulating perception and prediction. ECCV, 2020.
* [54] Bin Yang, Ming Liang, and Raquel Urtasun. HDNET: exploiting HD maps for 3d object detection. In CoRL, 2018.
* [55] Qi Yang and Haris N. Koutsopoulos. A microscopic traffic simulator for evaluation of dynamic traffic management systems. Transportation Research Part C: Emerging Technologies, 1996.
* [56] Jiaxuan You, Rex Ying, Xiang Ren, William L. Hamilton, and Jure Leskovec. Graphrnn: Generating realistic graphs with deep auto-regressive models. In ICML, 2018.
* [57] Hongyi Zhang, Andreas Geiger, and Raquel Urtasun. Understanding high-level semantics by modeling traffic patterns. In ICCV, 2013.
# ConE: A Concurrent Edit Detection Tool for Large Scale Software Development
Chandra Maddila, Microsoft Research, Redmond, WA, USA<EMAIL_ADDRESS>;
Nachiappan Nagappan, Microsoft Research, Redmond, WA, USA<EMAIL_ADDRESS>;
Christian Bird, Microsoft Research, Redmond, WA, USA<EMAIL_ADDRESS>;
Georgios Gousios, Delft University of Technology, Delft, The Netherlands and
Facebook, Menlo Park, CA, USA<EMAIL_ADDRESS>; and Arie van Deursen
(0000-0003-4850-3312), Delft University of Technology, Delft, The
Netherlands<EMAIL_ADDRESS>
###### Abstract.
Modern, complex software systems are being continuously extended and adjusted.
The developers responsible for this may come from different teams or
organizations, and may be distributed across the world. This can make it
difficult to keep track of what other developers are doing, which may result
in multiple developers concurrently editing the same code areas. This, in
turn, may lead to hard-to-merge changes or even merge conflicts, logical bugs
that are difficult to detect, duplication of work, and wasted developer
productivity. To address this, we explore the extent of this problem in the
pull request based software development model. We study half a year of changes
made to six large repositories in Microsoft in which at least 1,000 pull
requests are created each month. We find that files concurrently edited in
different pull requests are more likely to introduce bugs. Motivated by these
findings, we design, implement, and deploy a service named ConE (Concurrent
Edit Detector) that proactively detects pull requests containing concurrent
edits, to help mitigate the problems caused by them. ConE has been designed to
scale, and to minimize false alarms while still flagging relevant concurrently
edited files. Key concepts of ConE include the detection of the _Extent of
Overlap_ between pull requests, and the identification of _Rarely Concurrently
Edited Files_. To evaluate ConE, we report on its operational deployment on
234 repositories inside Microsoft. ConE assessed 26,000 pull requests and made
775 recommendations about conflicting changes, which were rated as useful in
over 70% (554) of the cases. From interviews with 48 users we learned that
they believed ConE would save time in conflict resolution and avoiding
duplicate work, and that over 90% intend to keep using the service on a daily
basis.
Pull-based software development, pull request, merge conflict, distributed
software development
CCS Concepts: Software and its engineering → Integrated and visual development
environments; Software maintenance tools; Software configuration management
and version control systems. Journal: TOSEM.
## 1\. Introduction
In a collaborative software development environment, developers, commonly,
work on their individual work items independently by forking a copy of the
code base from the latest main branch and editing the source code files
locally. They then create pull requests to merge their local changes into the
main branch. With the rise of globally distributed and large software
development teams, this adds a layer of complexity due to the fact that
developers working on overlapping parts of the same codebase might be in
different teams or geographies or both. While such collaborative software
development is essential for building complex software systems that meet the
expected quality thresholds and delivery deadlines, it may have unintended
consequences or ‘side effects’ (Kalliamvakou et al., 2014; Bird et al., 2009;
McKee et al., 2017; Accioly et al., 2018). The side effects can be as simple
as syntactic merge conflicts, which can be handled by version control systems
(Rochkind, 1975) and various techniques/tools (de Souza et al., 2003b; Horwitz
et al., 1989; Grinter, 1995), to semantic conflicts (Fowler, 2020). Such bugs
can be very hard to detect and may cause substantial disruptions (Horwitz et
al., 1989). Primarily, all of this happens due to a lack of awareness and
early communication among developers who edit the same source code files or
areas, at the same time, through active pull requests.
There is no substitute for resolving merge or semantic conflicts (or fixing
logical bugs or refactoring duplicate code) once the issue has manifested.
Studies show that pull requests getting into merge conflicts is a prevalent
problem (Kasi and Sarma, 2013a; Brun et al., 2011a; Brindescu et al., 2019).
Merge conflicts have a significant impact on code quality and can disrupt the
developer workflow (Nieminen, 2012; de Souza et al., 2003a; Ahmed et al.,
2017). Sometimes, the conflict becomes so convoluted that one of the
developers involved in the conflict has to abandon their change and start
afresh. Because of that, developers often defer resolving their conflicts
(Nelson et al., 2019) which makes the conflict resolution even harder at a
later point in time (Berczuk and Appleton, 2002; Nelson et al., 2019). Time
spent on conflict resolution or refactoring prevents developers from
fulfilling their primary responsibility, which is to deliver value to the
organization in the form of new functionality, bug fixes, and service
maintenance. In addition to loss
of time and money, this causes frustration (Sarma et al., 2003; Bird and
Zimmermann, 2012). Studies have shown that these problems can be avoided by
following strategies such as effective communication within the team (Guzzi et
al., 2015), and developing awareness about others’ changes that have a
potential to incur conflicts (Estler et al., 2014).
Our goal is to design a method to help developers discover changes made on
other branches that might conflict with their own changes. This goal is
particularly challenging for modern, large scale software development,
involving thousands of developers working on a shared code base. One of the
design choices we had to make was to minimize false alarms by making the
method more conservative. Studies have shown that, in large organizations, tools
that generate many false alarms are not used and eventually deprecated
(Winters et al., 2020).
The direct source of inspiration for our research is complex, large scale
software development as taking place at Microsoft. Microsoft employs ~166K
employees worldwide and 58.6% of Microsoft’s employees are in engineering
organizations. Microsoft employs ~69K employees outside of the United States
making it truly multinational (microsoft, 2020). Because of the scale and
breadth of the organization, tools and technologies used across the company,
it is very common for Microsoft’s developers to constantly work on overlapping
parts of the source code, at the same time, and encounter some of the problems
explained above.
Over a period of twelve months, we studied pull requests, source control
systems, code review tools, conflict detection processes, and team and
organizational structures, across Microsoft and across different geographies.
This greatly helped us assess the extent of the problem and practices followed
to mitigate the issues induced by the collaborative software development
process. We make three key observations:
1. (1)
Discovering others’ changes is not trivial. There are several solutions
offered by source control systems like GitHub or Azure DevOps (GitHub, 2020;
Azure Devops, 2020) that enable developers to subscribe to email notifications
when new pull requests are created or existing ones are updated. In addition,
products like Microsoft Teams or Slack can show a feed of changes that are
happening in a repository a user is interested in. The notification feed
becomes noisy over time and it becomes very hard for developers to digest all
of this information and locate pull requests that might cause conflicts. This
problem is aggravated when a developer works on multiple repositories.
2. (2)
Tools have to fit into developers’ workflows. Making developers install
several client tools and making them switch their focus between different
tools and windows is a big obstacle for adoption of any solution. There exists
a plethora of tools (Sarma et al., 2003; Biehl et al., 2007; Brun et al.,
2011b) that aim to solve this problem in bits and pieces. Despite this,
usability is still a challenge because none of them fit naturally into
developers’ workflows. Therefore, they cause more inconvenience than the
potential benefits they might yield.
3. (3)
Suggestions about conflicting changes must be accurate and scalable. There
exist solutions which attempt to merge changes proactively between a
developer’s local branch and the latest version of main branch or two
developer branches. These tools notify the developers when they detect a merge
conflict situation (Sarma et al., 2003; Brun et al., 2013, 2011b). Such
solutions are impractical to implement in large development environments as
the huge infrastructure costs incurred by them may outweigh the gains realized
in terms of saved developer productivity.
Keeping these observations in mind, we propose ConE, a novel technique to i)
calculate the Extent of Overlap (EOO) between two pull requests that are
active in the same time frame, and ii) determine the existence of Rarely
Concurrently Edited files (RCEs). We also derived thresholds to filter out
noise and implemented ranking techniques to prioritize conflicting changes.
We have implemented and deployed ConE on 234 repositories across different
product lines and large scale cloud development environments within Microsoft.
Since deployed, in March 2020, ConE evaluated 26,000 pull requests and made
775 recommendations about conflicting changes.
This paper describes ConE and makes the following contributions:
* •
We characterize empirically how the probability of source code files
introducing bugs varies with the fashion in which edits to them are made,
i.e., concurrent vs non-concurrent edits (Section 3).
* •
We introduce the ConE algorithm that leverages light-weight heuristics such as
the extent of overlap and the existence of rarely concurrently edited files,
and ConE’s thresholding and ranking algorithm that filters and prioritizes
conflicting changes for notification (Section 4).
* •
We provide implementation and design details on how we built ConE as a
scalable cloud service that can process tens of thousands of pull requests
across different product lines every week (Section 5).
* •
We present results from our quantitative and qualitative evaluation of the
ConE system (Section 6).
To the best of our knowledge, this is the first study of an early conflict
detection system that is also deployed, in a large scale, cloud based,
enterprise setting comprised of a diverse set of developers who work with
multiple frameworks and programming languages, on multiple disparate product
lines and who are from multiple geographies and cultural contexts. We have
observed an overwhelmingly positive response to the system: 71.48% positive
feedback from end users, a strong user interaction rate (2.5 clicks per
surfaced recommendation to learn more about conflicting changes), and 93.75%
of users indicating their intent to use or keep using the tool on a daily
basis.
Our interactions and interviews with developers across the company made us
realize that developers find it valuable to have a service that can facilitate
better communication among them about edits that are happening elsewhere (to
the same files or functions that are being edited by them) through simple and
non-obtrusive notifications. This is reflected strongly in the qualitative
feedback that we have received (explained in detail in section 6).
## 2\. Related Work
The software engineering community has extensively studied the impact of merge
conflicts on software quality (Brindescu et al., 2019; Ahmed et al., 2017),
investigated various methodologies and tools that can help developers discover
conflicting changes through interactive visualizations, and developed
speculative analysis tools (de Souza et al., 2003b; Horwitz et al., 1989;
Grinter, 1995). While ConE draws inspiration from some of this prior work, it
is more ambitious, targeting a method that is effective while not resource
intensive, can be easily scaled to work on tens of thousands of repositories
of all sizes, is easy to integrate and fits naturally into existing software
development workflows and tools with very little to no disruption.
A conflict detection system for large organizations, with disparate
programming languages, tools, and product portfolios, and with thousands of
geographically distributed developers, has to satisfy the requirements listed
below:
* •
language-independent: the techniques and tooling built should be
language-independent and support repositories that host code written in any
programming language, and should support new languages with no or minimal
customization.
* •
non-intrusive: the recommendations passed by the tool should naturally fit
into developer workflows and environment.
* •
scalable: finally, the techniques proposed and the system should be performant
and responsive without consuming a lot of computing resources or demanding a
lot of infrastructure to scale them up.
We now explain some of the prior work that is relevant and explain why they do
not satisfy some or all of the requirements.
Tools based on edit activity. Manhattan (Lanza et al., 2013) is a tool that
generates visualizations about team activity whenever a developer edits a
class and notifies developers through a client program, in real time. While
this shows useful 3D visualizations about merge conflicts in the IDE itself
(thus being non-intrusive and natural to use), it is not adaptive (it does not
automatically reflect any changes to the code in the visualization, unless the
user decides to re-import the code base), not generic (it works only for Java
and Eclipse) and not scalable as it operates on the client side and has to go
through the cycle of import-analyze-present again and again for every change
that is made, inside the IDE environment. Similarly, FASTDash (Biehl et al.,
2007) is a tool that scans every single file that is edited/opened in every
developer local workspace and communicates about their changes back and forth
through a central server. This is impractical to implement across large
development teams. It requires tracking changes on the client side with the
help of an agent program that runs on each client. Furthermore, it keeps
listening to every file edit in the workspace and communicates that
information to a central server, which mediates communication between
different workspaces. This is prone to failures and runs into scale issues
even with a linear increase in developers and pull requests in the system.
Tools based on early merging. Some tools were built upon the idea of
attempting actual merging and notifying the developers through a separate
program that runs on the client (Sarma et al., 2003; Brun et al., 2013,
2011b). These solutions are very resource intensive because the system needs
to perform the actual source code merge for every pull request or developer
branch with the latest version of the main branch (despite implementing
optimization techniques like caching and tweaking the algorithm to compute
relationships between changes when there is a change to the history of the
repository). It is not possible to implement and scale this at a company like
Microsoft where tens of thousands of pull requests are created every week.
Additionally, these solutions do not attempt to merge between two different
user branches or two different active pull requests but attempt to merge a
developer branch with the latest version of the main branch. This will not
find conflicting changes that exist in independent developer branches and thus
cannot trigger early intervention. Palantir (Sarma et al., 2003) is a tool
that addresses some of the performance issues by leveraging a cache for doing
dependency analysis. It is, however, still hard to scale due to the fact that
there is client-server communication involved between IDEs and centralized
version control servers to scan, detect, merge and update every workspace with
information about remote conflicting changes. Some solutions explore
speculative merging (Brun et al., 2011c; Kasi and Sarma, 2013b; Guimarães and
Silva, 2012) but the concerns with scalability, non-obtrusiveness remain valid
with all of them.
Predictive tools. Owhadi-Kareshk _et al._ explored the idea of building binary
classifiers to predict conflicting changes (Owhadi-Kareshk et al., 2019).
Their model consists of nine features, of which the number of jointly edited
files is the dominant one. The model has been evaluated on a dataset of
syntactic merge conflicts reverse engineered from git histories. The model’s
reported performance in terms of precision ranges from 0.48 to 0.63 (depending
on the programming languages).
While one of our proposed metrics, our Extent of Overlap, is akin to the
dominant feature in Owhadi-Kareshk’s model, unfortunately their proposed
approach cannot be applied in our context. In particular the reported
precision is too low and would generate too many false alarms which would
render our tool unused (Winters et al., 2020). Furthermore, the reported
precision and recall are measured based on a gold standard of syntactic
changes. Instead, we target an evaluation with actual developers, based on a
service deployed on repositories they are working with on a daily basis. As we
will see in our evaluation, these developers not only value warnings about
syntactic changes, but also semantic conflicts (Fowler, 2020), or even cases
of code/effort duplication (as explained in Section 6.3).
Empirical studies of merge conflicts and collaboration. There exists many
studies that do not propose tools, but study merge conflicts or present
methods to predict conflicts or recommend coordination. Zhang _et al._ (Zhang
et al., 2012) conducted an empirical study of the effect of file editing
patterns on software quality. They conducted their study on three open source
software systems to investigate the individual and the combined impact of the
four patterns on software quality. To the best of our knowledge ours is the
first empirical study that is conducted at scale, on industry data. We perform
analysis on 67K bug reports, from 83K files (in comparison to the studies
conducted by Zhang _et al._, which looked at 98 bugs from 2,140 files).
Ashraf _et al._ presented reports from mining cross-task artifact dependencies
from developer interactions (Ashraf et al., 2019). Dias _et al._ proposed
methods to understanding predictive factors for merge conflicts (Dias et al.,
2020), i.e., how conflict occurrence is affected by technical and
organizational factors. Studies conducted by Blincoe _et al._ and Cataldo _et
al._ (Ashraf et al., 2019; Cataldo et al., 2006) show the importance of timely
and efficient recommendations and the implications for the design of
collaboration awareness tools. Studies like this form a basis for building
solutions that are scalable and responsive (the large-scale ConE service that
we deployed at Microsoft) and their importance in creating awareness of the
potential conflicts.
Costa _et al._ proposed methods to recommend experts for integrating changes
across branches (Costa et al., 2016) and characterized the problem of
developers’ assignment for merging branches (Costa et al., 2014). They
analyzed merge profiles of eight software projects and checked if the
development history is an appropriate source of information for identifying
the key participants for collaborative merge. They also presented a survey on
developers about what actions they take when they need to merge branches, and
especially when a conflict arises during the merge. Their studies report that
the majority of the developers (75%) prefer collaborative merging (as opposed
to merging and taking decisions alone). This reiterates the fact that tools
that facilitate collaboration, by providing early warnings, are important in
handling merge conflict situations.
## 3\. Concurrent versus non-concurrent edits in practice
The differences in the fashion in which edits are made to source code files
(concurrent vs non-concurrent) can cause various unintended consequences (as
explained in section 1). We performed large scale empirical analysis of source
code edits to understand the ability of concurrent edits to cause bugs. We
picked bugs as a candidate for our case study because it is relatively easy to
mine and generate massive amounts of ground truth data about bugs and map them
back to the changes that induced the bugs, by leveraging some of the
techniques proposed by Wang _et al._ (Wang et al., 2019), at Microsoft’s
scale. Understanding the extent of the problem, i.e., the side effects caused
by concurrent source code edits in a systematic way, is an essential first
step towards making a case for building an early intervention service like
ConE. This allows us to quickly sign up customers inside the company and
deploy the ConE system on thousands of repositories, for tens of thousands of
developers, across Microsoft. To that end, we formulate two research
questions.
* •
RQ1 How do concurrent and non-concurrent edits to files compare in the number
of bugs introduced in these files?
* •
RQ2 To what extent are concurrent, non-concurrent, and all edits, correlated
with subsequent bug fixes to these files?
Answering the questions above allows us to assess the urgency of the problem.
The methods, techniques and outcomes used can also be employed to inform
decision makers, when investments in the adoption of techniques like ConE need
to be made.
We performed an empirical study on data that is collected from multiple,
differently sized repositories. For our study, we focused on one of the
important side effects that is induced by collaborative software development,
i.e., the “number of bugs introduced by concurrent edits”. We chose this
scenario as we have an option to generate an extensive set of ground truth
data, by leveraging techniques proposed by Wang _et al._ (Wang et al., 2019),
to tag pull requests as bug fixes. They employ two simple heuristics to tag
bug fixes: the commit message should contain the words “bug” or “fix”, but not
“test case” or “unit test”. Tagging changes that introduce bugs is not a
practice that is followed very well in organizations. Studies have shown that
files changed in bug fixes can be considered as a good proxy to files that
introduced the bugs in the first place (Kim et al., 2006; Williams and Spacco,
2008). Combining both ideas we created a ground truth data set which we used
in our empirical analysis. We broadly classify our empirical study into three
main steps.
1. (1)
Data collection: Collect data using the data ingestion framework that we have
built which ingests meta data about pull requests (author, created/closed
dates, commits, reviewers etc), iterations/updates of pull requests, file
changes in pull requests, and intent of the pull request (feature work, bug
fix, refactoring etc).
2. (2)
Use the data collected in Step 1 to analyze the impact of concurrent edits on
bugs or bug fixes in comparison to non-concurrent edits.
3. (3)
Explain the differences in correlations between concurrently versus non-
concurrently edited files to the number of bugs that they introduce.
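To make the tagging step concrete, the heuristic of Wang et al. reduces to a few string checks; a minimal sketch in Python (illustrative only, not the exact implementation used in our ingestion framework):

```python
def is_bug_fix(message: str) -> bool:
    """Wang et al.'s heuristic: the message mentions "bug" or "fix" but
    is not test-related ("test case" / "unit test")."""
    msg = message.lower()
    return (("bug" in msg or "fix" in msg)
            and "test case" not in msg
            and "unit test" not in msg)
```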
For the purpose of the empirical analysis we define concurrently and non-
concurrently edited files as follows:
* •
Concurrently edited files: Files which have been edited in two or more pull
requests, at the same time, while the pull requests are active. A pull request
is in an ‘active’ state when it is being reviewed but not completed or merged.
* •
Non-concurrently edited files: Files which have never been edited in two pull
requests while both are in an active state. So, we are sure that changes made
to these files are always made on the latest version and are merged before
they are edited through another active pull request (a sketch of this check
follows these definitions).
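Operationally, these definitions reduce to an interval-overlap check on pull request lifetimes. A minimal sketch (illustrative Python; 'created', 'closed', and 'files' are hypothetical field names for the pull request metadata we ingest):

```python
def intervals_overlap(created_a, closed_a, created_b, closed_b):
    # Two pull requests are simultaneously active if their
    # [created, closed] intervals intersect.
    return created_a <= closed_b and created_b <= closed_a

def concurrently_edited_files(pull_requests):
    """Return the set of files edited in two or more pull requests whose
    active periods overlap (the definition of 'concurrently edited'
    above); all remaining edited files are non-concurrently edited."""
    concurrent = set()
    for i, a in enumerate(pull_requests):
        for b in pull_requests[i + 1:]:
            if intervals_overlap(a["created"], a["closed"],
                                 b["created"], b["closed"]):
                concurrent |= set(a["files"]) & set(b["files"])
    return concurrent
```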
### 3.1. Data Collection
We collected data about file edits (concurrent and non-concurrent) from the
pull request data, for six months, from six repositories. We picked
repositories in which at least 1,000 pull requests are created every month.
After reducing the repositories to a subset, we randomly selected six
repositories for the purpose of the analysis. We made sure our data set is
representative in various dimensions like size (small (1), medium (2), large
(3)), the nature of the product (on-prem product (2) vs cloud service (4)),
geographical distribution of the teams (US only (2) versus split between
different countries and time zones (4)), and programming languages (as listed
in Table 3). We performed data cleansing by applying the filters listed below:
* •
Exclude PRs that are open for more than 30 days: the majority of these pull
requests are ‘Stale PRs’ which will be left open forever or abandoned at a
later point in time. Studies show that 70% of pull requests get completed
within a week after creation (Gousios et al., 2014).
* •
Exclude PRs with more than 50 files (the 90th percentile for file counts in
our pull request data set). This is one of the proxies we use to exclude PRs
created by non-human developers that do mass refactoring or styling changes.
* •
Exclude edits made to certain file types. We are primarily interested in
understanding the effects of concurrent edits on source code changes, as
opposed to files like configuration or initialization files, which are edited
by a lot of developers through a lot of concurrent pull requests, all the
time. For the purpose of this study, we consider only the following file
types: .cs, .c, .cpp, .ts, .py, .java, .js, .sql.
* •
Exclude files that are edited a lot: For example, files that contain global
constants, key value pairs, configuration values, or enums are usually seen in
a lot of active pull requests at the same time. We studied 200 pull requests
to understand the concurrent edits to these files. They typically are a few
thousand lines in size, well above the median file size. In all cases the
edits are localized to different areas of the files and surgical in nature.
Sometimes, the line numbers of the edits are at least a few thousand lines
apart. Therefore, we impose a filter on the edit count of fewer than twenty
times in a month (the 90th percentile of edit counts for all source code
files) and exclude any files that are edited more than this. Without this
filter, these frequently edited files would dominate the ConE recommendations,
yielding too many warnings for harmless concurrent edits. (These filters are
summarized in the sketch below.)
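Putting these filters together, the cleansing step can be sketched as follows (illustrative Python; field names such as 'open_days' are hypothetical, and the thresholds are the defaults stated above):

```python
ALLOWED_EXTENSIONS = {".cs", ".c", ".cpp", ".ts", ".py", ".java", ".js", ".sql"}
MAX_OPEN_DAYS = 30        # stale-PR cutoff
MAX_FILES = 50            # 90th percentile of per-PR file counts
MAX_MONTHLY_EDITS = 20    # 90th percentile of per-file edit counts

def clean(pr, monthly_edit_counts):
    """Apply the Section 3.1 filters to one pull request. Returns the
    surviving file list, or None if the PR is excluded outright."""
    if pr["open_days"] > MAX_OPEN_DAYS or len(pr["files"]) > MAX_FILES:
        return None
    kept = [f for f in pr["files"]
            if any(f.endswith(ext) for ext in ALLOWED_EXTENSIONS)
            and monthly_edit_counts.get(f, 0) < MAX_MONTHLY_EDITS]
    return kept or None
```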
Table 1. Distribution of concurrently and non-concurrently edited files per repository

Repo | Distinct number of concurrently edited files | Distinct number of non-concurrently edited files | Number of bug fix pull requests | Percentage of concurrently edited files | Percentage of non-concurrently edited files
---|---|---|---|---|---
Repo-1 | 3500 | 4875 | 4781 | 41.7 | 58.2
Repo-2 | 10470 | 16879 | 15678 | 38.2 | 61.8
Repo-3 | 2907 | 4119 | 5467 | 41.3 | 58.7
Repo-4 | 5560 | 7550 | 8972 | 42.4 | 57.6
Repo-5 | 4110 | 7569 | 9786 | 35.2 | 64.8
Repo-6 | 5987 | 9541 | 9443 | 38.5 | 61.5
Total | 32534 | 50533 | 54127 | 39.1 | 60.9
We started with a data set of 208,556 pull requests. As bug fixes are the main
focus of the empirical analysis, we removed all pull requests that are not bug
fixes, reducing the data set to 67,155 pull requests (32.2% of the pull
requests are bug fixes). We then applied the other filters mentioned above,
which further reduced the data set to 54,127 pull requests (25.95%). Table 1
shows the distribution of concurrently and non-concurrently edited files per
repository.
### 3.2. RQ1: Concurrent versus non-concurrent bug inducing edits
We take every (concurrently or non-concurrently) edited file, and check
whether the nature of the edit has any effect on the likelihood of that file
appearing in bug fixes after the edit has been merged. We compare how the
percentage of edited files that are seen in bug fixes (within a day, a week,
two weeks and a month), varies with the nature of the edit (concurrent vs non-
concurrent).
Figure 1. Percentage of files seen in bug fixes (within a day, a week, two
weeks, and a month) for concurrent versus non-concurrent bug-inducing edits
across six repositories. Blue and orange bars represent concurrently and
non-concurrently edited files, respectively.
Figure 1 shows the impact of concurrent versus non-concurrent edits on the
number of bugs being introduced. Across all six repositories, the percentage
of bug inducing edits is consistently higher for concurrently edited files
(blue bars) than for non-concurrently edited ones (orange bars).
### 3.3. RQ2: Edits in files versus bug fixes in files
We use Spearman’s rank correlation to analyze how the total number of edits,
concurrent edits, and non-concurrent edits to files each correlate with the
number of bug fixes seen in those files.
While Figure 1 shows that more concurrently edited files are seen in bug fix
pull requests (compared to non-concurrently edited ones), this might also be
because these files are frequently edited and therefore naturally seen in bug
fix pull requests. To rule this out, we performed Spearman rank correlation
analysis for each file that is ever edited with respect to how many times it
is seen in bug fixes (the numbers of data points from the six repositories are
listed in Table 1); a sketch of this computation follows the list below:
* •
The total number of times a file is seen in _all_ completed pull requests vs
the number of bug fixes in which it is seen
* •
The total number of times a file is seen in _concurrent_ pull requests vs the
number of bug fixes in which it is seen
* •
The total number of times a file is seen in _non-concurrent_ pull requests vs
the number of bug fixes in which it is seen
The results are in Table 2. We observe that concurrent edits (third column)
consistently are correlated with bug fixes, more so than non-concurrent edits
(column 4) and all edits (column 2). For all repositories except Repo-4, there
exists almost no correlation between non-concurrent edits (column 4) and bug
fixes.
For Repo-4, frequently edited files are not necessarily the ones seen in more
bug fixes: there exists a _negative_ correlation between total edits (column
2) and the number of bug fixes. However, files that are _concurrently_ edited
(column 3) do have a positive correlation with the number of bug fixes.
The variety in the correlations can be explained by the fact that concurrent
editing is just one of many factors related to the need for bug fixing. Other
factors might include the level of modularization, developer skills, the test
adequacy, engineering system efficiency, and so on.
Table 2. Spearman rank correlation analysis for total edits, concurrent edits, and non-concurrent edits vs bug fixes.

Repo | Total Edits to Bug Fixes | Concurrent Edits to Bug Fixes | Non-Concurrent Edits to Bug Fixes
---|---|---|---
Repo-1 | $0.145$∗∗∗ | $0.298$∗∗∗ | $0.034$∗∗
Repo-2 | $0.072$∗∗∗ | $0.140$∗∗∗ | $0.057$∗∗
Repo-3 | $0.140$∗ | $0.330$∗ | $0.120$∗
Repo-4 | $-0.077$∗∗∗ | $0.451$∗∗∗ | $-0.461$∗∗∗
Repo-5 | $0.164$∗∗∗ | $0.472$∗∗∗ | $0.091$∗∗∗
Repo-6 | $0.084$∗∗ | $0.196$∗∗∗ | $0.005$∗
***$p<0.001$, **$p<0.01$, *$p<0.05$
## 4\. System Design
Backed by the correlation analysis suggesting that concurrent edits may be
prone to causing issues, and by the strong demand from engineering
organizations inside Microsoft for a better tool that can detect conflicting
changes early on and facilitate communication among developers, we moved
forward to turn the idea of ConE into reality. We then performed large scale
testing and validation by deploying ConE on 234 repositories. Details about
the implementation, deployment and scale-out are provided in Section 5.
In this section we describe ConE’s conflict change detection methodology,
algorithm and system design in detail. We will use the following terminology:
* •
Reference pull request is a pull request in which a new commit/update is
pushed thus triggering the ConE algorithm to be run on that pull request.
* •
Active pull request is a pull request whose state is ‘active’ when the ConE
algorithm is triggered to be run on a reference pull request.
A key design consideration is that we want to avoid false alarms. In the
current state of the practice developers never receive warnings about
potentially harmful concurrent edits. Based on this we believe it is
acceptable to miss a few warnings. On the other hand, giving false warnings
will likely lead to rejection of a tool like ConE. For that reason, ConE has
several built-in heuristics that are aimed at reducing such false alarms.
Due to the nature of the problem, the domain we are operating in, and the
algorithm we have in place, it is possible to see notifications that are false
alarms. One of the design choices we had to make was to minimize false alarms
by making the algorithm more conservative. A side effect of this is that our
coverage (the number of pull requests for which we send a notification) will
be lower.
many false alarms are not used and eventually deprecated (Winters et al.,
2020). However, recent techniques proposed by Brindescu _et al._ (Brindescu et
al., 2020), can potentially aid in facilitating a decision by determining the
merge conflict situations to flag, based on the complexity of the merge
conflict.
### 4.1. Core Concepts
ConE constantly listens to events that happen in an Azure DevOps environment
(Azure Devops, 2020). When any new activity is recorded (e.g., pushing a new
update or commit) in a pull request, the ConE algorithm is run against that
pull request. Based on the outcome, ConE notifies the author of the pull
request about conflicting changes. We describe two novel constructs that we
came up with for detecting conflicting changes and determining candidates for
notifications: Extent of overlap (EOO) and the existence of ‘Rarely
Concurrently Edited’ files (RCEs). Next, we provide a detailed description of
ConE’s conflict change detection algorithm and the parameters we have in place
to tune ConE’s algorithm.
#### 4.1.1. Extent of Overlap (EOO)
ConE scans all the active pull requests which meet our filtering criteria
(explained in section 3.1) and for each such pull request (reference pull
request) calculates the percentage of files edited in the reference pull
request that overlap with each of the active pull requests.
$\textit{Extent of Overlap}=\frac{\mid(F_{r}\cap F_{a})\setminus F_{e}\mid}{\mid F_{r}\mid}\times 100$

where $F_{r}$ = files edited in the reference pull request, $F_{a}$ = files
edited in a given active pull request, and $F_{e}$ = excluded files, i.e.,
files whose types are not on the allow list described below. The idea is to
find the percentage of
items that are commonly edited in multiple active pull requests and create a
pairwise overlap score for each of the active and reference pull request
pairs. Intuitively, if the overlap between two active pull requests is high,
the probability of them doing duplicate work or causing merge conflicts when
they are merged is also going to be high. We use this technique to calculate
the overlap in terms of number of overlapping files for now. This can be
easily extended to calculate the overlap between two active pull requests in
terms of number of classes or methods or stubs if that data is available.
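As a minimal illustration (not the production implementation, which runs inside the ConE service), the pairwise score reduces to a few set operations:

```python
def extent_of_overlap(ref_files, active_files, excluded_files):
    """EOO between a reference and an active pull request: the share of
    files edited in the reference PR that are also edited in the active
    PR, ignoring excluded file types (Table 3)."""
    ref = set(ref_files)
    overlap = (ref & set(active_files)) - set(excluded_files)
    return 100.0 * len(overlap) / len(ref) if ref else 0.0

# e.g., a reference PR editing 4 files, 2 of which are also edited in
# an active PR (and none excluded), has an EOO of 50.0.
```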
A milder version of EOO is used by the model proposed by Owhadi-Kareshk et al
(Owhadi-Kareshk et al., 2019), which looks at the number of files that are
commonly edited in two pull requests when determining conflicting changes.
While calculating extent of overlap it is important to exclude edits to
certain file types whose probability of inducing conflicts is minimal. This
helps in reducing false alarms in our notifications significantly. Based on a
manual inspection of 500 randomly selected bug fix pull requests, by the first
three authors, we concluded that concurrent edits to initialization or
configuration files are relatively safe, but that concurrent edits to source
code files are more likely to lead to problems. Therefore, we created an
_allow list_ based on file types as shown in Table 3. As can be seen, this
eliminates around 6.4% of the files. Note that such an _allow list_ is
programming language-specific. When ConE is to be applied in different
contexts, different allow lists are likely needed.
Table 3. Distribution of file types seen in bug fixes

File type | Percentage | On ConE _allow list_?
---|---|---
.cs | 44.32 | yes
.cpp | 18.55 | yes
.c | 11.27 | yes
.sql | 6.20 | yes
.java | 5.36 | yes
.js | 3.98 | yes
.ts | 3.79 | yes
.ini | 0.20 | no
.csproj | 0.04 | no
others | 6.29 | no
Table 4. Number of bug fixes with and without RCEs

Edit type | Count | Percentage
---|---|---
Bug fix PRs with no RCEs | 1617 | 78.3
Bug fix PRs with at least one RCE | 446 | 21.7
#### 4.1.2. Rarely Concurrently Edited files (RCEs)
RCEs are files that have not been edited concurrently in the recent past.
Usually all updates or edits to them are performed, in a controlled fashion,
by a single person or a small set of people. Seeing RCEs in multiple active
pull requests is an anomalous phenomenon. For example, a file foo.cs may
always be edited by a given developer, through one active pull request at any
point. The ConE system keeps track of such files and tags them as RCEs. If
multiple active pull requests are later seen editing this file simultaneously,
ConE flags them. Our intuition is that, if a lot of RCEs are
seen in multiple active pull requests, which is unusual, changes to these
files should be reviewed carefully and everyone involved in editing them
should be aware of others’ changes.
We performed an empirical analysis, from our shadow mode deployment data (as
explained in Section 5.2), to understand how pervasive RCEs really are. As
shown in Table 4, 21.7% of bug fixes contain at least one RCE, while RCEs
account for just 2% of files in these repositories. Based on this data and
anecdotal feedback from developers, we realized that concurrent editing of
RCEs is an unusual activity that should rarely be seen; when it is observed,
all the developers involved should be notified.
For building the ConE system, we ran the RCE detection algorithm that looks at
the pull requests that are created in a repository within the last three
months from when the algorithm runs. The duration can be increased or
decreased based on how big or how active the system is. This process, after
each run, creates a list of RCEs. Once the initial bootstrapping is done and a
list of RCEs is prepared, that list is used by the ConE algorithm when
checking for the existence of the RCEs in a pair of pull requests. The RCE
list is updated and refreshed once every week, through a separate process. The
process of detecting and updating RCEs is resource intensive, so we need to
strike a balance between how quickly we update the RCE list and how many
resources we devote to the system, without compromising the quality of the
suggestions. We picked one week as the refresh interval through multiple
iterations of experiments. This process guarantees that the
ConE system reacts to the changes in the rarity of concurrent edits,
especially the cases where an RCE becomes a non-RCE due to the concurrent
edits it experiences. The steps involved in creating and updating RCEs are
listed below.
Creating the RCE list:
1. (1)
Get all the pull requests created in the last three months from when the
algorithm is run. Create a list of all the files that are edited in these pull
requests by applying the filters explained in the paragraph above on file
types.
2. (2)
Prepare sets of pull requests that overlap with others. Prepare a list of
files edited in the overlapping pull requests by applying the filters
explained in the paragraph above on file types.
3. (3)
The list of files created in step-1 minus the list of files created in step-2
constitutes the list of rarely concurrently edited files (RCEs).
Updating the RCE list:
1. (4)
Remove files from the RCE list if they are seen in overlapping pull requests
when the algorithm is run the next time. Because, if they are seen in
overlapping pull requests, they will not be qualified to be RCEs anymore.
2. (5)
Refresh the list by adding the new RCEs discovered in the latest edits, when
the algorithm is run again.
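A condensed sketch of the bootstrap computation (steps 1-3; illustrative Python with hypothetical field names, whereas the production job runs over Azure DevOps pull request metadata):

```python
def compute_rce_list(pull_requests):
    """Steps 1-3: files edited in the lookback window, minus files ever
    seen in two pull requests whose active periods overlap."""
    all_files, overlapping = set(), set()
    for i, a in enumerate(pull_requests):
        all_files |= set(a["files"])
        for b in pull_requests[i + 1:]:
            if a["created"] <= b["closed"] and b["created"] <= a["closed"]:
                overlapping |= set(a["files"]) & set(b["files"])
    return all_files - overlapping

# The weekly refresh (steps 4-5) simply re-runs compute_rce_list over
# the sliding three-month window: files seen in overlapping pull
# requests drop out, and newly qualifying files are added.
```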
### 4.2. The ConE Algorithm
ConE’s algorithm to select candidate pull requests that developers need to be
notified about primarily leverages the techniques explained above: Extent of
Overlap (EOO) and existence of Rarely Concurrently Edited files (RCEs).
Together these serve to reduce the total number of active pull requests under
consideration, in order to pick the pull requests that need to be notified
about. The ConE algorithm consists of seven steps listed below:
Step 1: Check if the reference pull request’s age is more than 30 days.
Studies have shown that pull requests which are active for so long may not
even be completed (Gousios et al., 2014). Exclude all such pull requests.
Step 2: Construct a list of files that are being edited in the reference pull
request. While constructing this set, we exclude any files of types that are
not in the allow list from Table 3.
Step 3: Construct a set of files that are being edited in each of the active
pull requests, using the methodology mentioned in Steps 1 and 2. One extra
filter that we apply here is to exclude PRs that the author of the reference
pull request has already interacted with. If the author of the reference pull
request is already aware of an active pull request, there is no need to notify
them.
Step 4: Calculate the extent of overlap using the formula described in Section
4.1. For every pair of reference pull request $PR_{r}$ and active pull request
$PR_{a1}$, calculate the tuple $T_{ea1}=\langle PR_{r},PR_{a1},E_{1}\rangle$,
where $E_{1}$ is the extent of overlap between the two pull requests. Do this
for all the active pull requests with respect to a reference pull request. At
the end of this step we have a list of tuples, $T_{ea}=[\langle
PR_{1},PR_{7},55\rangle,\langle PR_{1},PR_{12},95\rangle,\langle
PR_{1},PR_{34},35\rangle,\ldots]$.
Step 5: Check for the existence of rarely concurrently edited files (RCEs) and
the number of RCEs between each pair of reference and active pull request.
Create a tuple $T_{r}=\langle PR_{r},PR_{a1},R_{1}\rangle$ where $PR_{r}$ is
the reference pull request, $PR_{a1}$ is the active pull request and $R_{1}$
is the number of RCEs in the overlap of the reference and active pull
requests. Do this for all reference and active pull request combinations. At
the end of this step we have a list of tuples, $T_{ra}=[\langle
PR_{1},PR_{7},2\rangle,\langle PR_{1},PR_{12},2\rangle,\langle
PR_{1},PR_{34},9\rangle,\ldots]$.
Step 6: Apply thresholds on the values for extent of overlap and the number of
RCEs, as explained in Section 4.3. For example, we can select the pull
requests whose extent of overlap is greater than 50% or which share at least
two RCEs. We go through the list of tuples generated in Steps 4 and 5 above
and apply the thresholding criteria.
Step 7: Apply a ranking algorithm to prioritize the pull requests that need to
be looked at first if multiple pull requests are selected by the algorithm. We
rank candidate pull requests based on the number of RCEs present and then by
the extent of overlap. This is because RCEs being edited through multiple
active pull requests is an anomalous phenomenon which needs to be prioritized.
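Taken together, Steps 1-7 amount to a filter, score, threshold, and rank pipeline. The following Python sketch condenses them under the default thresholds of Section 4.3 (field and helper names are hypothetical; the production implementation is the C#/.NET service described in Section 5):

```python
def cone_candidates(ref_pr, active_prs, rce_list,
                    eoo_threshold=50.0, rce_threshold=2, min_overlap=2):
    """Steps 1-7: select and rank active PRs to notify about."""
    if ref_pr["open_days"] > 30:                     # Step 1: drop stale PRs
        return []
    ref_files = set(ref_pr["files"])                 # Step 2 (allow list applied upstream)
    if not ref_files:
        return []
    scored = []
    for pr in active_prs:                            # Step 3
        if ref_pr["author"] in pr["participants"]:
            continue                                 # author is already aware of this PR
        overlap = ref_files & set(pr["files"])
        eoo = 100.0 * len(overlap) / len(ref_files)  # Step 4
        rces = len(overlap & rce_list)               # Step 5
        if (eoo > eoo_threshold and len(overlap) >= min_overlap) \
                or rces >= rce_threshold:            # Step 6: thresholds of Section 4.3
            scored.append((rces, eoo, pr))
    scored.sort(key=lambda t: (t[0], t[1]), reverse=True)  # Step 7: RCEs first, then EOO
    return [pr for _, _, pr in scored]
```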
### 4.3. Default Thresholds and Parameter Tuning
In this section we describe the thresholding criteria, and the rationale that
needs to be applied while choosing parameter values for large scale
deployment. The parameters that we have in place are: the extent of overlap
(EOO), the number of rarely concurrently edited files (RCEs), the window of
time period (i.e., the number of months to consider for determining RCEs), and
the total number of file edits in the reference PR.
In line with our objectives, we are searching for parameter settings that find
actual conflicts, yet minimize false alarms. Furthermore, we target settings
that are easy to explain (e.g., “this PR was flagged because half of the files
changed in it are also touched in another PR”).
Table 5. Distribution of extent of overlap (EOO)

Percentage of overlap (range) | Number of PRs
---|---
0-10 | 309
11-20 | 223
21-30 | 137
31-40 | 87
41-50 | 25
51-60 | 359
61-70 | 92
71-80 | 21
81-90 | 23
91-100 | 378
##### Threshold for EOO:
For Extent of Overlap, we explored what would happen if we put the threshold
at 50%: if at least half of the files edited in the reference pull request are
also edited in another pull request, consider it for notification. To assess
the consequences of this, we randomly selected 1654 pull requests which have
at least one file overlapping with another pull request. This data set is a
subset of the data collected to perform the empirical analysis on concurrent
edits (see Section 3). We manually inspected each of these 1654 pull requests
to make sure the overlap we observe is indeed correct. Our empirical analysis
(see Table 5) shows that 50% of the pull
requests have an overlap of 50% or less. Thus, this simple heuristic
eliminates half of the candidate pull requests for notification, substantially
reducing potential false alarms, and keeping the candidates that are more
likely to be in conflict.
Figure 2. Distribution of the number of PRs with a given number of RCEs:
roughly 400 pull requests have no RCEs, another 400 have just one, about 200
have two, and very few PRs have more than three RCEs.
Figure 3. Distribution of the number of PRs with a given number of overlapping
files: about 450 pull requests have files that occur in just one other PR, 300
contain files that occur in two PRs, and 150 in three or more, dropping
sharply thereafter.
##### Threshold for RCEs:
For RCEs we again followed a simple rule: if the reference-active pull request
pair contains at least two commonly modified files that are otherwise always
edited in isolation, select the active pull request as a candidate. As shown
in Figure 2, the majority of the pull requests contain fewer than two RCEs. To
be conservative, we imposed a threshold of RCE $\geq$ 2, i.e., to select a PR
as a candidate, that pull request needs to have at least two RCEs that are
commonly edited between the reference and active pull requests.
##### Number of overlapping files:
Assume a developer creates a pull request by editing two files and one of them
is also edited in another active pull request. Here EOO is 50%. This means
this pull request qualifies to be picked as a candidate for notification.
Editing just one file in two active pull requests might not be enough to
reasonably make an assumption about the potential of conflicts arising.
Therefore, we impose a threshold on the number of files that need to be
edited, simultaneously, in both pull requests. As a starting point, we imposed
a threshold of two, i.e., every candidate pull request should have at least
two overlapping files (in addition to satisfying the EOO condition of $\geq$
50%). We plotted the distribution of the number of overlapping files in Figure
3. As shown in Figure 3, the number of PRs (on the Y-axis) drops sharply after
the number of overlapping file edits is two. Therefore, we picked two as the
default threshold.
##### Threshold Customization:
In addition to the empirical analysis, we collected initial feedback from
developers working with the production systems through our shadow mode
deployment (Section 5.2). One of the prominent requests from the developers
was to enable the repository administrators to change the values of the
parameters explained above based on the developer feedback. Therefore, we
provided customization provisions to make the ConE system suit each
repository’s needs. Based on the pull request patterns and needs of the
repository, system administrators can tune the thresholds to optimize the
efficacy of the ConE system for particular repositories.
## 5\. Implementation and deployment
### 5.1. Core Components and Implementation
The core ConE components are displayed in Figure 4. ConE is implemented on
Azure DevOps (ADO), the DevOps platform provided by Microsoft. We chose to
develop ConE on ADO due to its extensibility that allows third party services
to interact with pull requests through various collaboration points such as
adding comments in pull requests, a rich set of APIs provided by ADO to read
meta data about pull requests, and service hooks which allow a third party
application to listen to events such as updates that happen inside the pull
request environment.
Figure 4. ConE System design: Pull requests from Azure DevOps are listened to
by the ConE change scanner, suggestion generator, notification engine, and
decorator
Within Azure DevOps, as shown in the left box of Figure 4, ConE listens to
events triggered by pull requests, and has the ability to decorate pull
requests with notifications about potentially conflicting other pull requests.
The ConE service itself, shown at the right in Figure 4, runs within the Azure
Cloud. The ConE change scanner listens to pull request events, and dispatches
them to workers in the ConE suggestion generator. Furthermore, the scanner
monitors telemetry data from interactions with ConE notifications. The core
ConE algorithm is offered as a scalable service in the Suggestion Generator,
with parameters tunable as explained in Section 4.3.
The ConE Service is implemented using C# and .NET 4.7. It has been built on
top of Microsoft Azure cloud services: Azure Batch (Azure Batch, 2020) for
compute, Azure DevOps service hooks for event notification, Azure worker roles
and its service bus for processing events, Azure SQL for data storage, Azure
Active Directory for authentication and Application Insights for telemetry and
alerting.
### 5.2. ConE Deployment
We selected 234 repositories to pilot ConE in the first phase. The key
attributes that guided the repository selection process are listed below:
* •
Prioritize repositories where we have developers and managers who volunteered
to try ConE, since we expect them to be willing to provide meaningful
feedback.
* •
Include repositories that are of different sizes (based on the number of files
present in them): very large, large, medium, and small.
* •
Include repositories that host source code for a diverse set of products and
services, including client-side products, mobile apps, enterprise services,
cloud services, and gaming services.
* •
Consider repositories which have cross-geography and cross-timezone
collaborators, as well as repositories that have most of the collaborators
from a single country.
* •
Consider repositories that host source code written in multiple programming
languages including combinations of Java, C#, C++, Objective C, Swift,
Javascript, React, SQL etc.
* •
Include repositories which contain a mix of developers with different levels
of experience (based on their job titles): Senior, mid-level and junior.
We enabled ConE in _shadow mode_ on 60 repositories for two months (with a
more liberal set of parameters to maximize the number of suggestions we
generate). In this mode we actively listen to pull requests, run the ConE
algorithm, generate suggestions, and save all the suggestions in our SQL data
store for further analysis, without sending the notifications to the
developers. Over these two months we generated and saved 1,200 suggestions. We
then went through the suggestions and the collected telemetry to optimize the
system before a large scale roll out.
The primary purpose of the shadow mode deployment was to validate whether
operationalizing a service like ConE is even possible at the scale of
Microsoft. Furthermore, it allowed us to check whether we indeed flag
meaningful conflicting pull requests, and what developers think of the
corresponding notifications. The telemetry we collected includes the time it
takes to run the ConE algorithm, resource utilization, the number of
suggestions the ConE system would have made, etc. We experimented with tuning
our parameters (explained in subsection 4.3) and measured their impact on
processing time and system utilization. This helped us understand the scale
and infrastructure requirements and the overall feasibility.
We collected feedback from the developers by reaching out to them directly. We
showed them the suggestions we would have made had ConE been enabled on their
pull requests, the format of the suggestions, and the mode of notification. We
iterated over the design of the notification based on this user feedback
before settling on the version shown in Figure 5.
After multiple iterations of user studies and feedback collection on the
design, frequency, and quality of the ConE suggestions, as validated by the
developers who participated in our shadow mode deployment program, we turned
on the notifications on 234 repositories.
Figure 5. ConE notification that a pull request has significant overlap with
another pull request. The comment lists the overlapping pull request, its
author, and the files edited in both pull requests.
### 5.3. Notification Mechanism
We leveraged Azure DevOps’s collaboration points to send notifications to
developers. A notification is a comment placed by our system in Azure DevOps
pull requests. Figure 5 shows a screenshot of a comment placed by ConE on an
actual pull request. It displays the key elements of a ConE notification: a
comment text which provides a brief description of the notification, the id of
the conflicting pull request, the name(s) of the author(s) of the conflicting
pull request, files that are commonly edited in the pull requests, a provision
to provide feedback by resolving or not fixing a comment (marked as “Resolved”
in the example), and the option to reply to the comment inline to provide
explicit written feedback.
While ConE actively monitors every commit that is pushed to a pull request, it
will only add a second comment on the same pull request if the state of the
active or the reference pull request changes significantly in subsequent
updates and ConE finds a different set of pull requests as candidates for
notification.
In a ConE comment, elements like the pull request id, file names, and author
name are hyperlinks. The pull request id hyperlink points to the respective
pull request's page in Azure DevOps. The file name hyperlink points to a page
that shows the diff between the versions of the file in the current and
conflicting pull requests. The author name element, upon clicking, instantly
spins up a chat window with the author of the conflicting pull request. When
people interact with these elements by clicking them, we track those telemetry
events (with the consent of the Azure DevOps users at Microsoft) to better
understand the level of interaction developers have with the ConE system.
### 5.4. Scale
The ConE system has been deployed on 234 repositories in Microsoft. The
repositories were picked to maximize the diversity and variety along various
dimensions, as explained in Section 5.2. From its enablement in March 2020
until September 2020 (when we pulled the telemetry data), ConE evaluated
26,000 pull requests created in the repositories on which it was enabled.
Within these 26,000 pull requests, an additional 156,000 update events
(commits on the same branch, possibly affecting new files) occurred. Thus,
ConE had to react to and process a total of 182,000 events that were
generated, within Azure DevOps, in those six months. For every update, ConE
compares the reference pull request with all active pull requests that match
ConE's filtering criteria. In total, ConE made approximately two million
comparisons, i.e., on the order of eleven comparisons per event on average.
The scale of operations and processing is expected to grow as we onboard new
and large repositories. The simple and lightweight nature of the ConE
algorithm combined with the scalable architecture and efficient design, and
its engineering on Azure cloud has given us the ability to process events at
this scale with a response rate of less than four seconds per event. The time
it takes to process an event end to end, i.e., receiving the pull request
creation or update event, running the ConE algorithm and passing the
recommendations back (if any) has never taken more than four seconds. ConE
employed a single service bus queue and four worker roles in Azure to handle
the current scale. As per our monitoring and telemetry (resource utilization
on Azure infrastructure, processing latency, etc.) ConE still had bandwidth
left to serve the next hundred repositories of similar scale with the current
infrastructure setup.
## 6\. Evaluation: Developers perceptions about ConE’s usefulness
Out of the 26,000 pull requests under analysis during ConE’s six month
deployment (Section 5.4), ConE’s filtering algorithm (Section 4.2) excluded
2,735 pull requests. In the remaining 23,265 pull requests, ConE identified
775 pull requests to send notifications to (3.33%). In this section, we
evaluate the usefulness of these 775 notifications.
All repositories were analyzed with the standard configuration; no adjustments
were made to the parameters. Although the service was enabled to send
notifications on 234 repositories, during the six-month observation period
ConE raised alerts on just 44 distinct repositories. As shown in Figure 6, the
notification volume varies between repositories.
Figure 6. Distribution of notifications per repository. Two repositories have
over 50 notifications, five have between 40 and 50, another 20 have between 20
and 40, and most repositories have 20 or fewer.
### 6.1. Comment resolution percentage
ConE offers an option for users to provide explicit feedback on every comment
it placed, within their pull requests. Users can select the “Resolved” option
if they like or agree with the notification, and the “Won’t fix” option if
they think it is not useful. A subset of users were given instructions and
training on how to use these options. The notification itself also contains
instructions, as shown in Figure 5. A user can choose not to provide any
feedback by just leaving the comment as is, in the “Active” state. Through
this we collect direct feedback from the users of the ConE system.
Figure 7. Number of positive (Resolved), negative (Won't Fix), and neutral
(Active) responses to ConE notifications: 554 Resolved, 74 Won't Fix, and 147
Active.
Figure 7 shows the distribution of the feedback received. The vast majority
(554 out of 775, or 71.48%) of the notifications were flagged as “Resolved”.
For 147 (18.96%) of the notifications, no feedback was provided. Various
studies have shown that users tend to provide explicit negative feedback when
they do not like or agree with a recommendation, while tending not to be as
explicit about positive feedback (Steck, 2011; Liu et al., 2019). Therefore,
we cautiously interpret the absence of feedback as neutral to positive.
We manually analyzed all 74 (9.5%) cases where the developers provided
negative feedback. For the majority of them, the developer was already aware
of the other conflicting pull request. In some cases the developers thought
that ConE was raising a false alarm, as they expected no one else to be making
changes to the same files as the ones they were editing. When we showed them
the other overlapping pull requests that were active while they were working
on their pull request, to their surprise, the notifications were not false
alarms. We list some of the anecdotes in subsection 6.4.
Figure 8. Number of notifications (orange) and number of interactions (blue)
with those notifications, per month. Notifications are fairly stable at about
150 per month, while interactions grow from 50 to 650 per month: as developers
become more familiar with ConE, they increasingly interact with its
notifications.
### 6.2. Extent of interaction
As discussed in Section 5.3, a typical ConE notification/comment has multiple
elements that a developer can interact with: for each conflicting pull
request, the pull request id, the files with conflicting changes, and the
author name are shown. These are deep links. Developers can simply glance at
the comment and ignore it, or interact with it by clicking one of the
“clickable elements” in the ConE notification. If a user decides to pursue
further by clicking one of these elements, that action is logged as telemetry
(in Azure AppInsights).
From March 2020 to September 2020, we logged 2170 interactions on 775 comments
that ConE has placed, which amounts to 2.8 clicks per notification on average.
Measured over time, as shown in Figure 8, the number of interactions and the
“clicks per notification” are clearly increasing as more and more people are
getting used to ConE comments, and are using it to learn more about
conflicting pull requests recommended by ConE.
Note that the extent of interaction does not include additional actions
developers can take to contact authors of conflicting pull requests once ConE
has made them aware of the conflicts, such as reaching out by phone, walking
into each other’s office, or a simple talk at the water cooler.
### 6.3. User Interviews
The quantitative feedback discussed so far captures both direct (comment
resolution percentage) and indirect (extent of interaction) signals. To better
understand ConE's usefulness, we directly reached out (via Microsoft Teams,
asynchronously) to the authors of 100 randomly selected pull requests for
which ConE placed comments. The user feedback for these 100 pull requests is
45% positively resolved, 35% won't fix, and 20% no response. The interviewers
did not know these authors and had not worked with them before; the teams
working on the systems under study are organizationally far away from the
interviewers.
The interview format is semi-structured where users are free to bring up their
own ideas and free to express their opinions about the ConE system. We posed
the following questions:
1. (1)
Is it useful to know about these other PRs that change the same file as yours?
2. (2)
If yes, roughly how much effort do you estimate was saved as a result of
finding out about the overlapping PRs? If not, is there other information
about overlapping PRs that could be useful to you?
3. (3)
Does knowing about the overlapping PRs help you to avoid or mitigate a future
merge conflict?
4. (4)
What action (if any) will you likely take now that you know about the
overlapping PRs?
5. (5)
Would you be interested in keeping using ConE which notifies you about
overlapping PRs in the future? (Note that we aim to avoid being too noisy by
not alerting if the overlapping files are frequently edited by many people, if
they are not source code files, etc.)
Table 6. Distribution of qualitative feedback responses

| Category | Response | # of responses |
|---|---|---|
| Favorable | I'd love to use ConE | 25 (52.08%) |
| Favorable | I will use ConE | 20 (41.67%) |
| Unfavorable | I don't want to use ConE | 3 (6.25%) |
We did not receive responses in a uniform format that followed the structure
of the questions. We used Microsoft Teams to reach out to the developers and
the questions were open ended; therefore, we could not enforce a strict policy
on the number of questions the respondents should answer or on the length of
the answers. Some of the participants answered all questions, while some
answered only one or two. Some respondents were detailed in their responses,
while others were succinct, with ‘yes’ or ‘no’ answers. The respondents
provided free-form responses, with an average word count of just 47 words per
response. Consequently, we could not compute the distribution of responses for
every question. However, question 5 received enough responses to analyze; we
coded and categorized the responses to it as explained below.
The first three authors, together, grouped the responses we received (48 out
of 100) into two categories, until consensus was reached: Favorable (the user
would like to continue using ConE, i.e., the answer to question 5 is along the
lines of ‘I will use ConE’ or ‘I’d love to use/keep using ConE’) and
Unfavorable (the user does not find the ConE system useful and does not want
to continue using it). Table 6 shows the distribution of the feedback: 93.75%
of the respondents indicated their interest in and willingness to use ConE.
### 6.4. Representative Quotes
To offer an impression, we list some typical quotes (positive and negative)
that we received from the developers. In one of the pull requests where we
sent a ConE notification notifying about potential conflicting changes, a
developer said:
> ”I wasn’t aware about other 2 conflicting PRs that are notified by ConE. I
> believe that would be very helpful to have a tool that could provide
> information about existence of other PRs and let you know if they perform
> duplicate work or conflicting change!!”
It turned out that the other two developers (the authors of the conflicting
pull requests flagged by ConE) are from entirely different organizations and
geographies. Their common denominator is the CEO of the company. It would be
very difficult for the author of the reference pull request to know about the
existence of the other two pull requests without ConE bringing it to their
notice.
Several remarks are clear indicators of the usefulness of the ConE system:
> ”Yes, I would be really interested in a tool that would notify overlapping
> PRs.”
> ”Looking forward to use it! Very promising!”
> ”ConE is such a neat tool! Very simple but super effective!”
> ”ConE is a great tool, looking forward to seeing more recommendations from
> ConE”
> ”This is an awesome tool, Thank you so much for working to improve our
> engineering!”
> ”It is a nice feature and when altering files that are critical or very
> complex, it is great to know.”
Some developers mentioned that ConE helped them save significant time and/or
effort by providing early intervention:
> ”ConE is very useful. It saved at least two hours to resolve the conflicts
> and smoke again”
> ”This would save a couple of hours of dev investigation time a month”
> ”ConE would have saved probably an hour or so for PR <XYZ>”
We also received feedback from some developers who expressed a feeling that a
tool like ConE may not necessarily be useful for their scenarios:
> ”For me no, I generally have context on all other ongoing PRs and work that
> might cause merge issues. No, thank you!”
> ”For my team and the repositories that I work in, I don’t think the benefit
> would be that great. I can see where it could be useful in some cases
> though”
> ”It’s not helpful for my specific change, but don’t let that discourage you.
> I can see how something like ConE be definitely useful for repositories like
> <XYZ> which has a lot of common code”
Another interesting case we noticed is ConE's ability to help detect
duplicated work. ConE notified a developer (D1) about an active pull request
authored by another developer (D2). After the ConE notification was sent to
D1, they realized that D2's pull request was already solving the same problem
and that D2 had made more progress. D1 ended up abandoning their pull request
and pushed several code changes to D2's pull request, which was eventually
completed and merged. When we reached out to D1, they said:
> ”Due to poor communication / project planning D2 and I ended up working on
> the same work item. Even if I was not notified about this situation, I would
> have eventually learned about it, but that would have costed me so much
> time. This is great!”
Though we do not observe scenarios like this frequently, this case
demonstrates an example of the kind of potential conflicts ConE can surface,
in addition to flagging syntactic conflicts.
Table 7. Distribution of quantitative feedback based on size of the repository

| Feedback | Large repositories | Small repositories | Total |
|---|---|---|---|
| Positively resolved | 404 (77.69%) | 150 (58.82%) | 554 (71.48%) |
| Won't fix | 33 (6.34%) | 41 (16.08%) | 74 (9.54%) |
| No response | 83 (15.96%) | 64 (25.10%) | 147 (18.96%) |
| Total | 520 (67.09%) | 255 (32.90%) | 775 (100.0%) |
### 6.5. Factors Affecting ConE Appreciation
After analyzing all the responses from our interviews, analyzing the pull
requests on which we received ‘Won't Fix’, and interviewing the respective
pull request authors, we identified the following main factors in what makes a
developer inclined to use a system like ConE.
##### Developers who found the ConE notifications useful:
These are the developers who typically work on large services with distributed
development teams across multiple organizations, geographies and time zones.
They also tend to work on core platforms or common infrastructure (as opposed
to the ones who make changes to the specific components of the product or
service). To corroborate this, the first author classified the repositories
into large and small manually, based on their size and activity volume. We
then programmatically categorized the 628 responses by repository size. The
results, shown in Table 7, indicate that for large repositories developers are
positive in 77.69% (404/520) of the cases, whereas for small repositories this
is 58.82% (150/255).
##### Developers who found ConE not so useful:
These developers are the ones who work on small micro services or small scale
products and typically work in smaller teams. These developers, and their
teams, tend to have delineated responsibilities. They usually have more
control over who makes changes to their code base. Interestingly, there were
cases where some of these developers were surprised to see another active pull
request, created by a different developer (sometimes from a different team),
editing the same area of the source code as their own. This could be a result
of underestimating the pace at which service dependencies are introduced,
product road maps change, and codebases are re-purposed in large scale
organizations.
## 7\. Discussion
In this section we describe the outlook and future work. We also explain some
of the limitations of the ConE system and how we plan to address them.
### 7.1. Outlook
One of the immediate goals of the ConE system is to expand its reach beyond
the initial 234 repositories on which it is enabled, and eventually to every
source code repository in Microsoft.
offering ConE as part of its Azure DevOps pipeline, making it available to its
customers across the world. Likewise, GitHub may consider to develop a free
version of ConE as an extension on the GitHub marketplace for the broader
developer community to benefit from this work.
As explained, ConE is expected to generate some false alarms because it is a
heuristics-based system. To keep the number of false alarms low, ConE checks
very simple but effective heuristics (see Section 4.2) and conditions to flag
conflicting changes that cause unintended consequences. We offer three
configuration parameters (see Section 4.3) that help us make the solution
effective by striking a suitable balance between the rate of false alarms and
coverage, and customize the solution based on individual repository needs.
To further improve precision, we would like to investigate the options that
let us go one level deeper from file level to, e.g., analyze actual code
diffs. Understanding code diffs and performing semantic analysis on them is a
natural next step for a system like ConE. Providing diff information across
every developer branch is fundamentally expensive, so it is not offered by
Azure DevOps, the source control system on which ConE is operationalized, nor
by other commercial or free source control systems like GitLab or GitHub. A
possible remedy is to bring the diff information into the ConE system. This
involves checking out two versions of the same file, within ConE, and finding
differences. This has to happen in real-time, in a scalable and language
agnostic fashion.
Once we have the diff information, another idea is to apply deep learning and
code embeddings to develop better contextual understanding of code changes. We
can use the semantic understanding in combination with the historical data
about concurrent and non-concurrent edits to develop better prediction models
and raise alarms when concurrent edits are problematic.
ConE was found to be useful by facilitating early intervention on potential
conflicting changes. However, this does not fully solve the problem, i.e.,
fixing the merge conflicts or merging the duplicated code. Exploring
auto-fixing of conflicts or code duplication as a natural extension of ConE's
conflict detection algorithm would help alleviate the problems caused by
conflicts and fix them in an automated fashion.
### 7.2. Threats to validity
Concerning internal validity, our qualitative analysis was conducted by
reaching out to the developers via Microsoft Teams, asynchronously. None of
the interviewers knew the people they reached out to, nor had they worked with
them before. We purposefully avoided deploying ConE on repositories that are
under the same organization as any of the researchers involved in this work.
As Microsoft is a huge company and most users of the ConE service are
organizationally distant from the interviewers, the risk of response bias is
minimal. However, there is a small chance that respondents may be positive
about the system because they want to please the interviewers, who are from
the same company.
Concerning external validity, the empirical analysis, design and deployment,
evaluation and feedback collection were done specifically in the context of
Microsoft. The correlations we reported in Table 2 can vary with the setting
of the organization in which the analysis is performed. As Microsoft hosts one
of the world's largest concentrations of developers, who use a very diverse
set of tools, frameworks, and programming languages, we expect our research
and the ConE system to have broader applicability. However, at this point the
results have not been verified in the context of other organizations.
## 8\. Conclusion and future work
In this paper, we seek to address problems originating from concurrent edits
to overlapping files in different pull requests. We start out by exploring the
extent of the problem, establishing a statistical relationship between
concurrent edits and the need for bug fixes in six active industrial
repositories from Microsoft.
Inspired by these findings we set out to design ConE, an approach to detect
concurrently edited files in pull requests at scale. It is based on heuristics
like the extent of overlap and the presence of rarely concurrently edited
files between pairs of pull requests. To make sure the precision of the system
is sufficiently high, we deploy various filters and parameters that help in
controlling the behavior of the ConE system.
ConE has been deployed on 234 repositories inside Microsoft. During a period
of six months, ConE generated 775 notifications, of which 71.48% received
positive feedback. Interviews with 48 developers showed 93% favorable
feedback, and applicability in avoiding merge conflicts as well as duplicate
work.
In the future, we anticipate ConE will be employed on substantially more
systems within Microsoft. As ConE has been deployed and found to be useful by
developers in an organization as large and diverse (in terms of programming
languages, tools, engineering systems, geographical presence, etc.) as
Microsoft, we believe the techniques and the system have applicability beyond
Microsoft. Furthermore, we see opportunities for implementing a ConE service
for systems like GitHub or GitLab. Future research surrounding ConE might
entail improving its precision by learning from past user feedback or by
leveraging diffs without sacrificing scalability. Beyond warnings, future
research could also target automating actions to address the pull request
conflicts detected by ConE.
## References
* Accioly et al. (2018) Paola Accioly, Paulo Borba, and Guilherme Cavalcanti. 2018\. Understanding Semi-Structured Merge Conflict Characteristics in Open-Source Java Projects. _Empirical Softw. Engg._ 23, 4 (Aug. 2018), 2051–2085. https://doi.org/10.1007/s10664-017-9586-1
* Ahmed et al. (2017) Iftekhar Ahmed, Caius Brindescu, Umme Ayda Mannan, Carlos Jensen, and Anita Sarma. 2017\. An Empirical Examination of the Relationship between Code Smells and Merge Conflicts. In _2017 ACM/IEEE International Symposium on Empirical Software Engineering and Measurement (ESEM)_. 58–67. https://doi.org/10.1109/ESEM.2017.12
* Ashraf et al. (2019) Usman Ashraf, Christoph Mayr-Dorn, and Alexander Egyed. 2019\. Mining Cross-Task Artifact Dependencies from Developer Interactions. In _2019 IEEE 26th International Conference on Software Analysis, Evolution and Reengineering (SANER)_. 186–196. https://doi.org/10.1109/SANER.2019.8667990
* Azure Batch (2020) Azure Batch Accessed 2020. Azure Batch. https://azure.microsoft.com/en-us/services/batch/
* Azure Devops (2020) Azure Devops Accessed 2020. Azure DevOps. https://azure.microsoft.com/en-us/services/devops/?nav=min.
* Berczuk and Appleton (2002) Steve Berczuk and Brad Appleton. 2002. _Software Configuration Management Patterns: Effective Teamwork, Practical Integration_.
* Biehl et al. (2007) Jacob T. Biehl, Mary Czerwinski, Greg Smith, and George G. Robertson. 2007. FASTDash: A Visual Dashboard for Fostering Awareness in Software Teams. In _Proceedings of the SIGCHI Conference on Human Factors in Computing Systems_ (San Jose, California, USA) _(CHI ’07)_. Association for Computing Machinery, New York, NY, USA, 1313–1322. https://doi.org/10.1145/1240624.1240823
* Bird et al. (2009) Christian Bird, Peter C. Rigby, Earl T. Barr, David J. Hamilton, Daniel M. German, and Prem Devanbu. 2009\. The promises and perils of mining git. In _2009 6th IEEE International Working Conference on Mining Software Repositories_. 1–10. https://doi.org/10.1109/MSR.2009.5069475
* Bird and Zimmermann (2012) Christian Bird and Thomas Zimmermann. 2012. Assessing the Value of Branches with What-If Analysis. In _Proceedings of the ACM SIGSOFT 20th International Symposium on the Foundations of Software Engineering_ (Cary, North Carolina) _(FSE ’12)_. Association for Computing Machinery, New York, NY, USA, Article 45, 11 pages. https://doi.org/10.1145/2393596.2393648
* Brindescu et al. (2019) Caius Brindescu, Iftekhar Ahmed, Carlos Jensen, and Anita Sarma. 2019. An empirical investigation into merge conflicts and their effect on software quality. _Empirical Software Engineering_ 25 (09 2019). https://doi.org/10.1007/s10664-019-09735-4
* Brindescu et al. (2020) Caius Brindescu, Iftekhar Ahmed, Rafael Leano, and Anita Sarma. 2020\. Planning for untangling: predicting the difficulty of merge conflicts. 801–811. https://doi.org/10.1145/3377811.3380344
* Brun et al. (2011a) Yuriy Brun, Reid Holmes, Michael Ernst, and David Notkin. 2011a. Proactive detection of collaboration conflicts. _SIGSOFT/FSE 2011 - Proceedings of the 19th ACM SIGSOFT Symposium on Foundations of Software Engineering_ , 168–178. https://doi.org/10.1145/2025113.2025139
* Brun et al. (2011b) Yuriy Brun, Reid Holmes, Michael D. Ernst, and David Notkin. 2011b. Proactive Detection of Collaboration Conflicts. In _Proceedings of the 19th ACM SIGSOFT Symposium and the 13th European Conference on Foundations of Software Engineering_ (Szeged, Hungary) _(ESEC/FSE ’11)_. Association for Computing Machinery, New York, NY, USA, 168–178. https://doi.org/10.1145/2025113.2025139
* Brun et al. (2013) Yuriy Brun, Reid Holmes, Michael D. Ernst, and David Notkin. 2013\. Early Detection of Collaboration Conflicts and Risks. _IEEE Trans. Softw. Eng._ 39, 10 (Oct. 2013), 1358–1375. https://doi.org/10.1109/TSE.2013.28
* Cataldo et al. (2006) Marcelo Cataldo, Patrick A. Wagstrom, James D. Herbsleb, and Kathleen M. Carley. 2006. Identification of Coordination Requirements: Implications for the Design of Collaboration and Awareness Tools. In _Proceedings of the 2006 20th Anniversary Conference on Computer Supported Cooperative Work_ (Banff, Alberta, Canada) _(CSCW ’06)_. Association for Computing Machinery, New York, NY, USA, 353–362. https://doi.org/10.1145/1180875.1180929
* Costa et al. (2014) Catarina Costa, Jose Figueiredo, Gleiph Ghiotto lima de Menezes, and Leonardo Murta. 2014. Characterizing the Problem of Developers’ Assignment for Merging Branches. _International Journal of Software Engineering and Knowledge Engineering_ 24 (12 2014), 1489–1508. https://doi.org/10.1142/S0218194014400166
* Costa et al. (2016) Catarina Costa, Jair Figueiredo, Leonardo Murta, and Anita Sarma. 2016. TIPMerge: Recommending Experts for Integrating Changes across Branches. In _Proceedings of the 2016 24th ACM SIGSOFT International Symposium on Foundations of Software Engineering_ (Seattle, WA, USA) _(FSE 2016)_. Association for Computing Machinery, New York, NY, USA, 523–534. https://doi.org/10.1145/2950290.2950339
* de Souza et al. (2003a) Cleidson R. B. de Souza, David Redmiles, and Paul Dourish. 2003a. ”Breaking the Code”, Moving between Private and Public Work in Collaborative Software Development. In _Proceedings of the 2003 International ACM SIGGROUP Conference on Supporting Group Work_ (Sanibel Island, Florida, USA) _(GROUP ’03)_. Association for Computing Machinery, New York, NY, USA, 105–114. https://doi.org/10.1145/958160.958177
* de Souza et al. (2003b) Cleidson R. B. de Souza, David Redmiles, and Paul Dourish. 2003b. “Breaking the Code”, Moving between Private and Public Work in Collaborative Software Development. In _Proceedings of the 2003 International ACM SIGGROUP Conference on Supporting Group Work_ (Sanibel Island, Florida, USA) _(GROUP ’03)_. Association for Computing Machinery, New York, NY, USA, 105–114. https://doi.org/10.1145/958160.958177
* Dias et al. (2020) Klissiomara Dias, Paulo Borba, and Marcos Barreto. 2020\. Understanding Predictive Factors for Merge Conflicts. _Information and Software Technology_ 121 (05 2020), 106256\. https://doi.org/10.1016/j.infsof.2020.106256
* Estler et al. (2014) H. Christian Estler, Martin Nordio, Carlo A. Furia, and Bertrand Meyer. 2014. Awareness and Merge Conflicts in Distributed Software Development. In _2014 IEEE 9th International Conference on Global Software Engineering_. 26–35. https://doi.org/10.1109/ICGSE.2014.17
* Fowler (2020) Martin Fowler. Accessed 2020\. Semantic Conflict. https://martinfowler.com/bliki/SemanticConflict.html.
* GitHub (2020) GitHub Accessed 2020. GitHub. https://github.com/about
* Gousios et al. (2014) Georgios Gousios, Martin Pinzger, and Arie van Deursen. 2014\. An exploratory study of the pull-based software development model.. In _ICSE_ , Pankaj Jalote, Lionel C. Briand, and André van der Hoek (Eds.). ACM, 345–355. http://dblp.uni-trier.de/db/conf/icse/icse2014.html#GousiosPD14
* Grinter (1995) Rebecca E. Grinter. 1995\. Using a Configuration Management Tool to Coordinate Software Development. In _Proceedings of Conference on Organizational Computing Systems_ (Milpitas, California, USA) _(COCS ’95)_. Association for Computing Machinery, New York, NY, USA, 168–177. https://doi.org/10.1145/224019.224036
* Guimarães and Silva (2012) Mário Luís Guimarães and António Rito Silva. 2012. Improving early detection of software merge conflicts. In _2012 34th International Conference on Software Engineering (ICSE)_. 342–352. https://doi.org/10.1109/ICSE.2012.6227180
* Guzzi et al. (2015) Anja Guzzi, Alberto Bacchelli, Yann Riche, and Arie van Deursen. 2015. Supporting Developers’ Coordination in the IDE. In _Proceedings of the 18th ACM Conference on Computer Supported Cooperative Work and Social Computing_ (Vancouver, BC, Canada) _(CSCW ’15)_. Association for Computing Machinery, New York, NY, USA, 518–532. https://doi.org/10.1145/2675133.2675177
* Horwitz et al. (1989) Susan Horwitz, Jan Prins, and Thomas Reps. 1989. Integrating Noninterfering Versions of Programs. _ACM Trans. Program. Lang. Syst._ 11, 3 (July 1989), 345–387. https://doi.org/10.1145/65979.65980
* Kalliamvakou et al. (2014) Eirini Kalliamvakou, Georgios Gousios, Kelly Blincoe, Leif Singer, Daniel M. German, and Daniela Damian. 2014. The Promises and Perils of Mining GitHub. In _Proceedings of the 11th Working Conference on Mining Software Repositories_ (Hyderabad, India) _(MSR 2014)_. Association for Computing Machinery, New York, NY, USA, 92–101. https://doi.org/10.1145/2597073.2597074
* Kasi and Sarma (2013a) Bakhtiar Khan Kasi and Anita Sarma. 2013a. Cassandra: Proactive Conflict Minimization through Optimized Task Scheduling. In _Proceedings of the 2013 International Conference on Software Engineering_ (San Francisco, CA, USA) _(ICSE ’13)_. IEEE Press, 732–741.
* Kasi and Sarma (2013b) Bakhtiar Khan Kasi and Anita Sarma. 2013b. Cassandra: Proactive conflict minimization through optimized task scheduling. In _2013 35th International Conference on Software Engineering (ICSE)_. 732–741. https://doi.org/10.1109/ICSE.2013.6606619
* Kim et al. (2006) Sunghun Kim, Thomas Zimmermann, Kai Pan, and E. James Jr. Whitehead. 2006. Automatic Identification of Bug-Introducing Changes. In _21st IEEE/ACM International Conference on Automated Software Engineering (ASE’06)_. 81–90. https://doi.org/10.1109/ASE.2006.23
* Lanza et al. (2013) Michele Lanza, Marco D’Ambros, Alberto Bacchelli, Lile Hattori, and Francesco Rigotti. 2013\. Manhattan: Supporting real-time visual team activity awareness. In _2013 21st International Conference on Program Comprehension (ICPC)_. 207–210. https://doi.org/10.1109/ICPC.2013.6613849
* Liu et al. (2019) Dugang Liu, Chen Lin, Zhilin Zhang, Yanghua Xiao, and Hanghang Tong. 2019. Spiral of Silence in Recommender Systems. In _Proceedings of the Twelfth ACM International Conference on Web Search and Data Mining_ (Melbourne VIC, Australia) _(WSDM ’19)_. Association for Computing Machinery, New York, NY, USA, 222–230. https://doi.org/10.1145/3289600.3291003
* McKee et al. (2017) Shane McKee, Nicholas Nelson, Anita Sarma, and Danny Dig. 2017\. Software Practitioner Perspectives on Merge Conflicts and Resolutions. In _2017 IEEE International Conference on Software Maintenance and Evolution (ICSME)_. 467–478. https://doi.org/10.1109/ICSME.2017.53
* microsoft (2020) microsoft Accessed 2020. Microsoft Facts. https://news.microsoft.com/facts-about-microsoft/#EmploymentInfo.
* Nelson et al. (2019) Nicholas Nelson, Caius Brindescu, Shane McKee, Anita Sarma, and Danny Dig. 2019. The life-cycle of merge conflicts: processes, barriers, and strategies. _Empirical Software Engineering_ 24 (02 2019). https://doi.org/10.1007/s10664-018-9674-x
* Nieminen (2012) Antti Nieminen. 2012\. Real-time collaborative resolving of merge conflicts. In _8th International Conference on Collaborative Computing: Networking, Applications and Worksharing (CollaborateCom)_. 540–543. https://doi.org/10.4108/icst.collaboratecom.2012.250435
* Owhadi-Kareshk et al. (2019) Moein Owhadi-Kareshk, Sarah Nadi, and Julia Rubin. 2019\. Predicting Merge Conflicts in Collaborative Software Development. In _2019 ACM/IEEE International Symposium on Empirical Software Engineering and Measurement (ESEM)_. 1–11. https://doi.org/10.1109/ESEM.2019.8870173
* Rochkind (1975) Marc J. Rochkind. 1975\. The Source Code Control System. _IEEE Transactions on Software Engineering_ 1, 4 (March 1975), 364–370. https://doi.org/10.1109/TSE.1975.6312866
* Sarma et al. (2003) Anita Sarma, Zahra Noroozi, and Andre van der Hoek. 2003\. Palantír: Raising Awareness among Configuration Management Workspaces. In _Proceedings of the 25th ACM/IEEE International Conference on Software Engineering (ICSE)_. IEEE, USA, 444–454. https://doi.org/10.1109/ICSE.2003.1201222
* Steck (2011) Harald Steck. 2011\. Item Popularity and Recommendation Accuracy. In _Proceedings of the Fifth ACM Conference on Recommender Systems_ (Chicago, Illinois, USA) _(RecSys ’11)_. Association for Computing Machinery, New York, NY, USA, 125–132. https://doi.org/10.1145/2043932.2043957
* Wang et al. (2019) Song Wang, Chetan Bansal, Nachiappan Nagappan, and Adithya Abraham Philip. 2019. Leveraging Change Intents for Characterizing and Identifying Large-Review-Effort Changes. In _Proceedings of the Fifteenth International Conference on Predictive Models and Data Analytics in Software Engineering_ (Recife, Brazil) _(PROMISE’19)_. Association for Computing Machinery, New York, NY, USA, 46–55. https://doi.org/10.1145/3345629.3345635
* Williams and Spacco (2008) Chadd Williams and Jaime Spacco. 2008. SZZ Revisited: Verifying When Changes Induce Fixes. In _Proceedings of the 2008 Workshop on Defects in Large Software Systems_ (Seattle, Washington) _(DEFECTS ’08)_. Association for Computing Machinery, New York, NY, USA, 32–36. https://doi.org/10.1145/1390817.1390826
* Winters et al. (2020) Titus Winters, Tom Manshreck, and Hyrum Wright. 2020\. _Software Engineering at Google: Lessons Learned from Programming Over Time_. O’Reilly Media, USA.
* Zhang et al. (2012) Feng Zhang, Foutse Khomh, Ying Zou, and Ahmed E. Hassan. 2012\. An Empirical Study of the Effect of File Editing Patterns on Software Quality. In _2012 19th Working Conference on Reverse Engineering_. 456–465. https://doi.org/10.1109/WCRE.2012.55
Geometry of Vaidya spacetimes
Armand COUDRAY & Jean-Philippe NICOLAS
LMBA, UMR CNRS 6205, Department of Mathematics, University of Brest, 6 avenue
Victor Le Gorgeu, 29200 Brest, France. Emails: <EMAIL_ADDRESS>,
Jean-Philippe.Nicolas@univ-brest.fr.
Abstract. We investigate the geometrical structure of Vaidya’s spacetime in
the case of a white hole with decreasing mass, stabilising to a black hole in
finite or infinite time or evaporating completely. Our approach relies on a
detailed analysis of the ordinary differential equation describing the
incoming principal null geodesics, among which are the generators of the past
horizon. We devote special attention to the case of a complete evaporation in
infinite time and establish the existence of an asymptotic light-like
singularity of the conformal curvature, touching both the past space-like
singularity and future time-like infinity. This singularity is present
independently of the decay rate of the mass. We derive an explicit formula
that relates directly the strength of this null singularity to the asymptotic
behaviour of the mass function.
Keywords. Black hole, white hole, evaporation, Vaidya metric, Einstein
equations with matter, null singularity.
Mathematics subject classification. 83C57, 83C20
###### Contents
1. 1 Introduction
2. 2 Vaidya’s spacetime, connection, curvature
3. 3 The incoming principal null congruence
1. 3.1 General properties
2. 3.2 The second optical function
4. 4 Case of a complete evaporation in infinite time
1. 4.1 The asymptotic null singularity
2. 4.2 A family of uniformly timelike congruences
3. 4.3 The global structure of the spacetime
## 1 Introduction
In 1959, P.C. Vaidya published a paper [13] in which he was solving a long
standing open problem in general relativity: finding a modification of the
Schwarzschild metric in order to allow for a radiating mass. His original
derivation of the metric was based on Schwarzschild coordinates. Ten years
later, in [14], he observed that using instead what is now known as Eddington-
Finkelstein coordinates would simplify the construction a great deal. Vaidya’s
metric is a solution to the Einstein equations with matter in the form of null
dust and it has since been the object of numerous studies; see the book by
J.B. Griffiths and J. Podolsky [7], section 9.5, for a very clear presentation
of the metric and an excellent account of the history of these investigations.
Many of these works aimed at gluing some part of Vaidya’s spacetime with other
exact spacetimes like Schwarzschild and Minkowski, in order to construct
models for evaporating or collapsing black holes, see for instance W.A.
Hiscock [9]. A different approach consists in studying the maximal extension
(not necessarily analytical) and the matching of the Vaidya exterior with some
interior metric. This has been done under explicit assumptions on the manner
in which the mass varies with time, for instance in [1]. The general problem
was first tackled by W. Israel [10] and more recently by F. Fayos, M.M.
Martin-Prats and M.M. Senovilla [4] and F. Fayos and R. Torres [5], this last
work studying the possibility of a singularity-free gravitational collapse.
In this paper, we study the geometry of Vaidya’s spacetime itself, without
extension, in as much generality as possible. We treat the case of a radiating
white hole that emits null dust and as a result sees its mass decrease. The
case of a black hole whose mass increases due to incoming null dust is
obtained by merely reversing the arrow of time. We make only minimal natural
assumptions on the behaviour of the mass function. In the existing literature,
many of the precise behaviours of null geodesics are studied numerically. In
contrast, the goal of the present paper is to give a mathematical derivation
of the geometrical features of Vaidya’s spacetime, by analysing precisely the
ordinary differential equation describing the incoming principal null
geodesics, among which are the generators of the past horizon. This equation
is well-known and appears explicitly in I. Booth and J. Martin [3], in which a
notion of distance between the event horizon and an apparent horizon is
introduced. This is done for an evaporating white hole that, asymptotically or
in finite retarded time, stabilises to a black hole or evaporates completely.
We devote special attention to the case of a complete evaporation in infinite
time (the case where the mass function vanishes in finite time has been
studied in detail in [5], with a mass function that is possibly
non-differentiable at its vanishing point), for which the spacetime is already
maximally extended, and we prove the existence of a light-like singularity of
the Weyl tensor, touching both the past spacelike singularity and future
timelike infinity. (Note that F. Fayos and R. Torres [5] also exhibit a null
singularity, but under slightly different assumptions: there, the evaporation
ends in finite time, so the singularity is not asymptotic but actually present
in the spacetime; moreover, the evaporation is allowed to end brutally, the
mass vanishing in finite retarded time with a possibly non-zero slope, in
which case it is non-differentiable at its vanishing point.) This classical
evaporation in infinite time may perhaps be seen as a classical analogue of
the final pop in black hole quantum evaporation, with the emission of a
gravitational wave propagating along a light-cone.
The article is structured as follows. In Section 2, we recall the construction
of Vaidya’s spacetime as a simple modification of the Schwarzschild metric
expressed in retarded Eddington-Finkelstein coordinates
($u=t-r_{*},r,\theta,\varphi$) and we recall the essential features of the
geometry: Petrov-type D, principal null directions, expression of the
Christoffel symbols, Weyl and Ricci tensors in the Newman-Penrose formalism,
scalar curvature and also the curvature scalar (also referred to as the
Kretschmann scalar). We also present our natural assumptions on the mass
function, which are essentially that it is smooth, decreases and admits finite
limits as the retarded time tends to $\pm\infty$. Finally, we derive the
ordinary differential equation that allows us to locate the past event horizon
and gives the complete congruence of incoming principal null geodesics.
Section 3 presents the analysis of the properties of the solutions to this ODE
and the construction of an optical function such that its level hypersurfaces
are spanned by the incoming principal null geodesics. This is the
generalisation of the function $v=t+r_{*}$ in the Schwarzschild case. In
Section 4, we give further properties of the principal null geodesics in the
case of a complete evaporation in infinite time and we prove that they all end
up in the future at a light-like conformal curvature singularity. This
singularity is present independently of the speed at which the mass function
approaches zero in the future, however its strength seems to be directly
related to the decay rate of the mass. We also construct families of timelike
curves that end up either at the null singularity or at timelike infinity
depending on their “mass”, i.e. the rate of their proper time as measured with
respect to the retarded time. The section ends with Penrose diagrams of
Vaidya’s spacetime in the case of a complete evaporation in infinite time,
showing the various congruences.
All formal calculations of connection coefficients and curvature tensors have
been done using Sage Manifolds [6].
Notations. Throughout the paper, we use the formalisms of abstract indices,
Newman-Penrose and $2$-component spinors.
## 2 Vaidya’s spacetime, connection, curvature
The Vaidya metric can be constructed as follows. We start with the
Schwarzschild metric
$g=\left(1-\frac{2M}{r}\right)\mathrm{d}t^{2}-\left(1-\frac{2M}{r}\right)^{-1}\mathrm{d}r^{2}-r^{2}\mathrm{d}\omega^{2}\,,~{}\mathrm{d}\omega^{2}=\mathrm{d}\theta^{2}+\sin^{2}\theta\,\mathrm{d}\varphi^{2}\,.$
We express it in outgoing Eddington-Finkelstein coordinates
$(u,r,\theta,\varphi)$, where $u=t-r_{*}$ and $r_{*}=r+2M\log(r-2M)$:
$g=\left(1-\frac{2M}{r}\right)\mathrm{d}u^{2}+2\mathrm{d}u\mathrm{d}r-r^{2}\mathrm{d}\omega^{2}\,.$
Vaidya’s metric is then obtained simply by allowing the mass $M$ to depend on
$u$:
$g=\left(1-\frac{2M(u)}{r}\right)\mathrm{d}u^{2}+2\mathrm{d}u\mathrm{d}r-r^{2}\mathrm{d}\omega^{2}\,.$
(1)
Throughout the paper, we assume that
$M\mbox{ is a smooth function of the retarded time }u$ (2)
and we denote
$\dot{M}(u):=\frac{\mathrm{d}M}{\mathrm{d}u}\,.$
The non-zero Christoffel symbols for (1) are
$\displaystyle\Gamma_{0\,0}^{0}=-\frac{M(u)}{r^{2}}\,,~{}\Gamma_{2\,{2}}^{\,0}=r\,,~{}\Gamma_{3\,{3}}^{\,0}=r\sin^{2}\theta\,,$
$\displaystyle\Gamma_{0\,0}^{1}=-\frac{r^{2}\dot{M}(u)-rM(u)+2\,M(u)^{2}}{r^{3}}\,,~{}\Gamma_{0\,1}^{1}=\frac{M(u)}{r^{2}}\,,$
$\displaystyle\Gamma_{2\,2}^{1}=-r+2\,M(u)\,,~{}\Gamma_{3\,3}^{1}=-r\sin^{2}\theta+2\,M(u)\sin^{2}\theta\,,$
$\displaystyle\Gamma_{1\,2}^{2}=\frac{1}{r}\,,~{}\Gamma_{3\,3}^{2}=-\sin\theta\,\cos\theta\,,~{}\Gamma_{1\,3}^{3}=\frac{1}{r}\,,~{}\Gamma_{2\,3}^{3}=\frac{\cos\theta}{\sin\theta}\,.$
The Weyl tensor has Petrov type D (see [12] for the Petrov classification of
the Weyl tensor in terms of the multiplicities of its principal null
directions), i.e. it has two double principal null directions that are given
by
$V=\frac{\partial}{\partial r}\,,~{}W=\frac{\partial}{\partial
u}-\frac{1}{2}F\frac{\partial}{\partial r}\,,\quad\mbox{where
}F:=1-\frac{2M(u)}{r}\,.$ (3)
This is well known (see [7]) and can be checked easily by observing that $V$
and $W$ both satisfy the condition ensuring that they are at least double
roots of the Weyl tensor (see R. Penrose, W. Rindler [11] Vol. 2, p. 224)
$C_{abc[d}V_{e]}V^{b}V^{c}=C_{abc[d}W_{e]}W^{b}W^{c}=0\,.$
We consider a null tetrad built using the principal null vectors above
$\displaystyle l$ $\displaystyle=$ $\displaystyle V\,,$ $\displaystyle n$
$\displaystyle=$ $\displaystyle W\,,$ $\displaystyle m$ $\displaystyle=$
$\displaystyle\frac{1}{r\sqrt{2}}\left(\frac{\partial}{\partial\theta}+\frac{i}{\sin\theta}\frac{\partial}{\partial\varphi}\right)\,,$
$\displaystyle\bar{m}$ $\displaystyle=$
$\displaystyle\frac{1}{r\sqrt{2}}\left(\frac{\partial}{\partial\theta}-\frac{i}{\sin\theta}\frac{\partial}{\partial\varphi}\right)\,.$
It is a normalised Newman-Penrose tetrad, i.e.
$l_{a}l^{a}=n_{a}n^{a}=m_{a}m^{a}=\bar{m}_{a}\bar{m}^{a}=l_{a}m^{a}=n_{a}m^{a}=0\,,~{}l_{a}n^{a}=-m_{a}\bar{m}^{a}=1\,.$
Let $\\{o^{A},\iota^{A}\\}$ be the spin-frame (a local basis of the spin-
bundle $\mathbb{S}^{A}$ that is normalised, i.e. $o_{A}\iota^{A}=1$) defined
uniquely up to an overall sign by
$l^{a}=o^{A}\bar{o}^{A^{\prime}}\,,~{}n^{a}=\iota^{A}\bar{\iota}^{A^{\prime}}\,,~{}m^{a}=o^{A}\bar{\iota}^{A^{\prime}}\,,~{}\bar{m}^{a}=\iota^{A}\bar{o}^{A^{\prime}}\,.$
Since the spacetime has Petrov type D, the Weyl spinor $\Psi_{ABCD}$ has only
one non-zero component which is
$\Psi_{2}=\Psi_{ABCD}\,o^{A}o^{B}\iota^{C}\iota^{D}=-\frac{M(u)}{r^{3}}\,.$
The Ricci tensor is non-zero
$\mathrm{Ric}\left(g\right)=-\frac{2\,\dot{M}(u)}{r^{2}}\mathrm{d}u\otimes\mathrm{d}u$
but trace-free, i.e.
$\mathrm{Scal}_{g}=0\,,$
and the only non-zero Newman-Penrose scalar for the Ricci tensor is
$\Phi_{22}=\frac{1}{2}R_{ab}n^{a}n^{b}=-\frac{\dot{M}(u)}{r^{2}}\,.$
The curvature scalar, or Kretschmann scalar, is the total contraction of the
Riemann tensor with itself. It is related to the analogous invariant for the
Weyl tensor by the following formula (see C. Cherubini, D. Bini, S.
Capozziello, R. Ruffini [2])
$k:=R_{abcd}R^{abcd}=C_{abcd}C^{abcd}+2R_{ab}R^{ab}-\frac{1}{3}\mathrm{Scal}_{g}^{2}\,.$
For Vaidya’s spacetime, the Ricci contribution drops out: since
$\mathrm{Ric}(g)$ is proportional to $\mathrm{d}u\otimes\mathrm{d}u$ and
$g^{uu}=0$, we have $R_{ab}R^{ab}=0$, hence
$k=R_{abcd}R^{abcd}=C_{abcd}C^{abcd}=\frac{48M(u)^{2}}{r^{6}}\,.$ (4)
The metric (1) is defined on $\mathbb{R}_{u}\times]0,+\infty[_{r}\times
S^{2}_{\omega}$ and describes a radiative white hole whose mass varies with
time as a result of outgoing radiation carried by null dust. It is therefore
natural to assume that $M(u)$ is a non-increasing function of $u$; this
amounts to assuming that the null dust has positive energy. Another natural
assumption is that the mass has finite limits as $u$ tends to $\pm\infty$:
$\lim_{u\rightarrow\pm\infty}M(u)\rightarrow M_{\pm}~{}\mbox{with }0\leq
M_{+}<M_{-}<+\infty\,.$ (5)
In the case where $M$ is constant on $]-\infty,u_{-}]$ and on
$[u_{+},+\infty[$ with $-\infty<u_{-}<u_{+}<+\infty$, we have a Schwarzschild
white hole of mass $M_{-}$ which emits a burst of null radiation between the
retarded times $u_{-}$ and $u_{+}$ and eventually stabilises to a
Schwarzschild black hole of mass $M_{+}<M_{-}$ (unless $M_{+}=0$, in which
case the white hole evaporates completely in finite time). If $M_{+}>0$, the
future event horizon is at $r=2M_{+}$ but the location of the past horizon is
not so clear. For $u<u_{-}$, it is located at $r=2M_{-}$ and it is a null
hypersurface with spherical symmetry. Therefore, it is the hypersurface
generated by the family of curves indexed by $\omega\in\mathrm{S}^{2}$:
$\gamma(u)=(u,r=r(u),\omega)\,,~{}u\in\mathbb{R}\,,$ (6)
that are such that
$r(u)=2M_{-}\mbox{ for }u\leq u_{-}$
and have the property of being null, i.e.
$g(\dot{\gamma}(u),\dot{\gamma}(u))=1-\frac{2M(u)}{r(u)}+2\dot{r}(u)=0\,.$ (7)
Hence, the function $r(u)$ satisfies the following ordinary differential
equation
$\dot{r}(u)=-\frac{1}{2}\left(1-\frac{2M(u)}{r(u)}\right)\,,$ (8)
with $r>0$ and $r(u)=2M_{-}$ for $u\leq u_{-}$.
###### Remark 2.1.
We shall often (starting immediately below) identify the solutions to (8) with
the curves (6) satisfying (7) and simply refer to the integral lines of (8).
If we no longer assume that $M(u)=M_{-}$ in a neighbourhood of $-\infty$, the
past horizon will be spanned by solutions to (8) such that $r>0$ and
$\lim_{u\rightarrow-\infty}r(u)=2M_{-}$.
The ODE (8) is in fact the general equation for a null curve that is
transverse to the level hypersurfaces of $u$ (i.e. to $\nabla u$, that is a
normal and tangent vector field to these hypersurfaces) and orthogonal to the
orbits of the rotation Killing vectors. Vaidya’s spacetime comes equipped with
a null congruence, given by the lines of constant $u$ and $\omega$, which are
the integral lines of
$\nabla u=g^{-1}(\mathrm{d}u)=\frac{\partial}{\partial r}=V\,,$ (9)
where $g^{-1}$ is the inverse Vaidya metric given by
$g^{ab}\partial_{a}\partial_{b}=2\partial_{u}\partial_{r}-\left(1-\frac{2M(u)}{r}\right)\partial_{r}^{2}-r^{-2}\partial_{\omega}^{2}\,,$
(10)
$\partial_{\omega}^{2}$ denoting the euclidean inverse metric on
$\mathrm{S}^{2}$. The integral lines of (8) provide us with a second null
congruence that is transverse to the first one, corresponding to the lines of
constant $v$ and $\omega$ in the case of the Schwarzschild metric.
###### Remark 2.2.
Note that the tangent vector to the integral curves of (8) is exactly
$\frac{\partial}{\partial u}-\frac{1}{2}F\frac{\partial}{\partial r}=W\,.$
The integral curves of (8) are therefore the integral lines of the principal
null vector field $W$. Since the spacetime is not vacuum, we do not have the
Goldberg-Sachs Theorem that would ensure that these are geodesics. However we
shall see in Subsection 3.2 Proposition 3.3 that these curves are indeed
geodesics; they are the family of incoming principal null geodesics and form
the second natural null congruence of Vaidya’s spacetime. Similarly the
integral lines of $V$ are also geodesics (see Proposition 3.2), they are the
outgoing principal null geodesics of Vaidya’s spacetime.
## 3 The incoming principal null congruence
In this section, we analyse the qualitative behaviour of solutions to Equation
(8), with special emphasis on the solutions generating the past horizon. Our
main results are proved under the assumption that
$\dot{M}(u)<0\mbox{ on }]u_{-},u_{+}[\,,~{}-\infty\leq
u_{-}<u_{+}\leq+\infty\,,~{}\dot{M}\equiv 0\mbox{ elsewhere.}$ (11)
This covers the cases where the mass decreases strictly for all retarded times
and where it decreases only on a finite retarded time interval. We dismiss as
physically irrelevant the cases where intervals in $u$ with constant mass
alternate with intervals on which the mass decreases. We also ignore, for
similar reasons, cases where $\dot{M}$ vanishes at isolated points.
### 3.1 General properties
We start with an obvious observation.
###### Lemma 3.1.
On an interval $]u_{0},u_{1}[$ on which $\dot{M}(u)$ does not vanish
everywhere, $r(u)$ cannot be identically equal to $2M(u)$.
Proof. If $r(u)=2M(u)$ satisfies (8) on $]u_{0},u_{1}[$, then
$2\dot{M}(u)=-\frac{1}{2}\left(1-\frac{2M(u)}{2M(u)}\right)=0\mbox{ on
}]u_{0},u_{1}[\,,$
which contradicts the assumption. ∎
Then, we give an important estimate that is a consequence of the local
uniqueness of solutions to the Cauchy problem for (8).
###### Lemma 3.2.
Let $(]u_{1},u_{2}[,r)$ be a solution to (8) such that, for a given
$u_{0}\in]u_{1},u_{2}[$, we have $r(u_{0})\geq 2M(u_{0})$. Let us assume that
$\dot{M}(u)<0$ for all $u\in]u_{1},u_{2}[$, then $r(u)>2M(u)$ on
$]u_{0},u_{2}[$.
Proof. First, note that if $r(u_{0})=2M(u_{0})$, then $\dot{r}(u_{0})=0$,
while $\dot{M}(u_{0})<0$, hence there exists $\varepsilon>0$ such that in
$]u_{0},u_{0}+\varepsilon[$ we have $r(u)>2M(u)$. If $r(u_{0})>2M(u_{0})$,
then we have the same conclusion by continuity.
Now, suppose toward a contradiction that $r(u)=2M(u)$ for some
$u\in]u_{0},u_{2}[$, and let $u_{3}$ be the lowest such value. Then (8)
implies that $\dot{r}(u_{3})=0>2\dot{M}(u_{3})$ and therefore, there exists
$\delta>0$ such that $r(u)<2M(u)$ in $]u_{3}-\delta,u_{3}[$. By continuity of
$r$ and $M$, there exists $u_{4}\in]u_{0},u_{3}[$ such that
$r(u_{4})=2M(u_{4})$, contradicting the minimality of $u_{3}$. It follows
that $r(u)>2M(u)$ on $]u_{0},u_{2}[$. ∎
The asymptotic behaviour of maximal solutions to (8) in the past is unstable.
One solution has a finite limit $2M_{-}$; it corresponds to the past event
horizon. All other solutions either end at past null infinity or reach the
past singularity in finite retarded time. The following theorem gives a
complete classification of the solutions to (8) in terms of their behaviour in
the past and also describes precisely their behaviour in the future.
###### Theorem 3.1.
Under Assumptions (2), (5) and (11), there exists a unique maximal solution
$r_{h}$ to (8) such that
$\lim_{u\rightarrow-\infty}r_{h}(u)=2M_{-}\,.$
* •
If either $M_{+}>0$ or $u_{+}=+\infty$, $r_{h}$ exists on the whole real line,
$r_{h}(u)\rightarrow 2M_{+}$ as $u\rightarrow+\infty$ and any other maximal
solution $r$ to (8) belongs to either of the following two categories:
1. 1.
$r$ exists on the whole real line, $r(u)>r_{h}(u)$ for all $u\in\mathbb{R}$,
$\lim_{u\rightarrow-\infty}r(u)=+\infty$ and
$\lim_{u\rightarrow+\infty}r(u)=2M_{+}$;
2. 2.
$r$ exists on $]u_{0},+\infty[$ with $u_{0}\in\mathbb{R}$ and satisfies:
$r(u)<r_{h}(u)$ for all $u\in]u_{0},+\infty[$, $\lim_{u\rightarrow
u_{0}}r(u)=0$ and $\lim_{u\rightarrow+\infty}r(u)=2M_{+}$.
* •
If $M_{+}=0$ and $u_{+}<+\infty$, $r_{h}$ exists on an interval
$]-\infty,u_{0}[$ with $u_{+}\leq u_{0}<+\infty$ and $\lim_{u\rightarrow
u_{0}}r_{h}(u)=0$. The other maximal solutions are of two types:
1. 1.
$r$ exists on $]-\infty,u_{1}[$ with $u_{0}\leq u_{1}<+\infty$,
$r(u)>r_{h}(u)$ on $]-\infty,u_{0}[$, $\lim_{u\rightarrow u_{1}}r(u)=0$ and
$\lim_{u\rightarrow-\infty}r(u)=+\infty$;
2. 2.
$r$ exists on $]u_{1},u_{2}[$ with $-\infty<u_{1}<u_{2}\leq u_{0}$,
$r(u)\rightarrow 0$ as $u$ tends to either $u_{1}$ or $u_{2}$ and
$r(u)<r_{h}(u)$ on $]u_{1},u_{2}[$.
Proof.
Step 1: uniqueness of a maximal solution with finite limit as
$u\rightarrow-\infty$.
First, if a solution $r$ exists on an interval of the form $]-\infty,u_{0}[$
and has a finite limit at $-\infty$, then this limit must be $2M_{-}$. Indeed
let us denote this limit by $l$, using (8),
$\lim_{u\rightarrow-\infty}\dot{r}(u)=-\frac{1}{2}\left(1-\frac{2M_{-}}{l}\right)\,.$
So $\dot{r}$ also has a finite limit at $-\infty$ and this limit must be zero
in order not to contradict the finite limit of $r(u)$, i.e. $l=2M_{-}$.
Then let us show that there is at most one solution to (8) defined on an
interval of the form $]-\infty,u_{0}[$ such that
$\lim_{u\rightarrow-\infty}r(u)=2M_{-}\,.$
Let us assume that there are two such solutions $r_{1}$ and $r_{2}$. Then
$\psi=r_{2}-r_{1}$ satisfies
$\dot{\psi}(u)=\frac{M}{r_{2}}-\frac{M}{r_{1}}=\frac{-M}{r_{1}r_{2}}\psi$ (12)
and
$\lim_{u\rightarrow-\infty}\psi(u)=0\,.$
However, since
$\lim_{u\rightarrow-\infty}\frac{-M}{r_{1}r_{2}}=\frac{-1}{4M_{-}}<0\,,$
it follows that unless $\psi$ is identically zero,
$\left(\log(|\psi|)\right)^{\prime}\longrightarrow\frac{-1}{4M_{-}}\mbox{ as
}u\rightarrow-\infty$
and $\psi$ blows up exponentially fast at $-\infty$. Since we know that $\psi$
tends to zero at $-\infty$, we conclude that $\psi$ is identically zero, i.e.
$r_{1}=r_{2}$.
Step 2: construction of the past horizon.
Now we construct a solution to (8) that tends to $2M_{-}$ at $-\infty$. Let us
first consider the case where $u_{-}=-\infty$. For each $n\in\mathbb{N}$, we
define $r_{n}$ to be the maximal solution to (8) such that $r_{n}(-n)=2M(-n)$.
It exists on an interval of the form $]u^{1}_{n},u^{2}_{n}[$,
$u^{1}_{n}<-n<u^{2}_{n}$. Let $u^{3}_{n}=\min\\{u^{2}_{n},u_{+}\\}$. By Lemma
3.2, $r_{n}(u)>2M(u)$ on $]-n,u^{3}_{n}[$, hence $\dot{r}_{n}<0$ there and it
follows that $r_{n}(u)<2M(-n)$. These a priori bounds imply that
$u_{n}^{2}\geq u_{+}$. Therefore, $u^{2}_{n}=+\infty$ in the case where
$u_{+}=+\infty$. If $u_{+}<+\infty$ and $M_{+}>0$, then we could have
$r_{n}(u_{+})=2M_{+}$, in which case $u_{n}^{2}=+\infty$ and $r_{n}(u)=2M_{+}$
for $u\geq u_{+}$, or $r_{n}(u_{+})>2M_{+}$ and then $r_{n}(u)>2M_{+}$ on
$[u_{+},u_{n}^{2}[$ since two solutions cannot cross, whence $\dot{r}_{n}$ is
negative there and we infer $u_{n}^{2}=+\infty$. If $u_{+}<+\infty$ and
$M_{+}=0$ then on $]u_{+},u_{n}^{2}[$, $r_{n}$ satisfies the simple ODE
$\dot{r}_{n}=-\frac{1}{2}\,,$
and $r_{n}$ reaches $0$ in finite retarded time. Hence in this case
$u_{n}^{2}<+\infty$.
Using again the fact that, by uniqueness of solutions to the Cauchy problem
for (8), two solutions cannot cross, we infer that the sequence $u_{n}^{2}$ is
increasing. Let
$u^{2}=\lim_{n\rightarrow+\infty}u^{2}_{n}\,.$
For any compact interval $I$ of $]-\infty,u^{2}[$, there exists
$n_{0}\in\mathbb{N}$ such that the sequence $(r_{n})_{n\geq n_{0}}$ is well-
defined, increasing and bounded on $I$. Hence, Lebesgue’s dominated
convergence theorem implies that the sequence $(r_{n})$ converges in
$L^{1}_{\mathrm{loc}}(]-\infty,u^{2}[)$ towards a positive function $r_{h}$
such that
$2M(u)\leq r_{h}(u)\leq 2M_{-}~{}\forall u\in]-\infty,u^{2}[\,.$ (13)
Moreover, $1/r_{n}$ also converges towards $1/r_{h}$ in
$L^{1}_{\mathrm{loc}}(]-\infty,u^{2}[)$, because, for any given compact
interval $I$, it is a well-defined, decreasing and bounded sequence on $I$ for
$n$ large enough. This implies, by equation (8) for $r_{n}$, that
$\dot{r}_{n}$ converges in $L^{1}_{\mathrm{loc}}(]-\infty,u^{2}[)$ and by
uniqueness of the limit in the sense of distributions, the limit must be
$\dot{r}_{h}$. Consequently $r_{h}$ is a solution to (8) in the sense of
distributions. An easy bootstrap argument then shows that $r_{h}$ is a strong
solution to (8) and is in fact smooth on $]-\infty,u^{2}[$.
Besides, by (13) and the fact that $M(u)\rightarrow M_{-}$ as
$u\rightarrow-\infty$, it follows that
$\lim_{u\rightarrow-\infty}r_{h}(u)=2M_{-}\,.$
In the case where $u_{-}>-\infty$, we simply need to consider the maximal
solution to (8) such that $r(u_{-}-1)=2M_{-}$. This solution exists on an
interval of the form $]-\infty,u^{2}[$ and satisfies (13).
Let us now turn to the value of $u^{2}$ and the behaviour of $r_{h}$ in the
future. If either $M_{+}>0$ or $u_{+}=+\infty$, then by (13), we have
$u^{2}=+\infty$. By (13) again, $\dot{r}_{h}(u)<0$ on $\mathbb{R}$ and it
follows that $r_{h}(u)$ has a finite limit $l$ as $u\rightarrow+\infty$. If
$M_{+}>0$, we have
$\dot{r}_{h}(u)\rightarrow-\frac{1}{2}\left(1-\frac{2M_{+}}{l}\right)\mbox{ as
}u\rightarrow+\infty$
and we must have $l=2M_{+}$ or contradict the finite limit of $r_{h}$. If
$M_{+}=0$ and $u_{+}=+\infty$, then if $l\neq 0$,
$\dot{r}_{h}(u)\rightarrow-\frac{1}{2}\mbox{ as }u\rightarrow+\infty$
which is incompatible with $u^{2}=+\infty$. Finally, if $M_{+}=0$ and
u_{+}<+\infty$, then (13) implies that $u^{2}\geq u_{+}$. If $r_{h}(u_{+})=0$
then the solution terminates at $u=u_{+}$, $u^{2}=u_{+}$. Otherwise,
$u^{2}>u_{+}$ and on $[u_{+},u^{2}[$ we have
$\dot{r}_{h}(u)=-\frac{1}{2}\,,$
as long as $r_{h}(u)$ remains positive. Therefore, we have
$r_{h}(u)=r_{h}(u_{+})-(u-u_{+})\mbox{ for }u_{+}\leq u\leq
u_{+}+r_{h}(u_{+})$
and the integral curve ends at $u=u_{+}+r_{h}(u_{+})$, i.e. $u^{2}$ is finite
and is equal to $u_{+}+r_{h}(u_{+})$. Hence for $M_{+}=0$ and $u_{+}<+\infty$,
the past event horizon vanishes in finite retarded time $u_{+}+r_{h}(u_{+})$
and there is no future event horizon.
Step 3: classification of the other maximal solutions.
* •
We begin with the case where either $u_{+}=+\infty$ or $M_{+}>0$. Let
$(]u_{1},u_{2}[,r)$ be a maximal solution to (8). Let $u_{0}\in]u_{1},u_{2}[$
and assume that $0<r(u_{0})<r_{h}(u_{0})$ (resp. $r(u_{0})>r_{h}(u_{0})$). By
uniqueness of solutions to the Cauchy problem for (8), solutions cannot cross,
so for all $u\in]u_{1},u_{2}[$ we have $0<r(u)<r_{h}(u)$ (resp.
$r(u)>r_{h}(u)$).
1. 1.
Case where $0<r(u_{0})<r_{h}(u_{0})$. Let us first assume that
$r(u_{0})>2M(u_{0})$. If $r(u)>2M(u)$ on its interval of existence, then
$r(u)$ is bounded between $2M(u)$ and $r_{h}(u)$ and we must have
$]u_{1},u_{2}[\,=\mathbb{R}$. However, we then have $r(u)\rightarrow 2M_{-}$
as $u\rightarrow-\infty$ and this contradicts the uniqueness of $r_{h}$. It
follows that there exists $u_{3}\in]u_{1},u_{2}[$ such that
$r(u_{3})=2M(u_{3})$. Therefore $r(u)<2M(u)$ on $]u_{1},u_{3}[$ (the proof is
similar to that of Lemma 3.2), $r$ is an increasing function on this interval
and $\dot{r}$ is decreasing. This implies that $r(u)$ must reach $0$ in
finite retarded time in the past, the solution existing towards the past as
long as $r$ has not reached $0$. Hence $u_{1}>-\infty$ and $r(u)\rightarrow 0$
as $u\rightarrow u_{1}$. Since $r(u_{3})=2M(u_{3})$, we have by Lemma 3.2 that
$r(u)>2M(u)$ on $]u_{3},u_{+}[$ and since solutions do not cross, $r(u)\geq
2M(u)$ on $[u_{+},u_{2}[$. Hence
$2M(u)\leq r(u)<r_{h}(u)\mbox{ on }]u_{0},u_{2}[\,.$
This implies that $u_{2}=+\infty$ and $\lim_{u\rightarrow+\infty}r(u)=2M_{+}$.
If $r(u_{0})=2M(u_{0})$, then we can repeat the arguments above, replacing
$u_{3}$ by $u_{0}$; we infer: $u_{1}>-\infty$ and $\lim_{u\rightarrow
u_{1}}r(u)=0$, $u_{2}=+\infty$ and $\lim_{u\rightarrow+\infty}r(u)=2M_{+}$.
If $r(u_{0})<2M(u_{0})$ then $r(u)$ increases as long as $r(u)<2M(u)$. Either
$r(u)<2M(u)$ on its whole interval of existence (note that this requires
$M_{+}>0$), in which case $u_{2}=+\infty$, or there exists
$u_{4}\in]u_{0},u_{2}[$ such that $r(u_{4})=2M(u_{4})$ and we can then use the
same reasoning as before on $]u_{4},u_{2}[$ and infer that $u_{2}=+\infty$. In
the latter case, we have as before $\lim_{u\rightarrow+\infty}r(u)=2M_{+}$. In
the former, $r(u)$ has a finite positive limit $l$ as $u\rightarrow+\infty$
and (recall that we must have $M_{+}>0$)
$\dot{r}(u)\rightarrow-\frac{1}{2}\left(1-\frac{2M_{+}}{l}\right)\mbox{ as
}u\rightarrow+\infty$
and we must have $l=2M_{+}$ in order not to contradict $u_{2}=+\infty$. In
both cases, we have $u_{1}>-\infty$ and $r(u)\rightarrow 0$ as $u\rightarrow
u_{1}$.
2. 2.
If $r(u_{0})>r_{h}(u_{0})$ then $r$ is a decreasing function on its interval
of existence and is bounded below by $r_{h}$. This implies that
$u_{2}=+\infty$. Moreover, on its whole interval of existence, $r$ satisfies
$-\frac{1}{2}<\dot{r}(u)<0$
and it follows that $u_{1}=-\infty$. Since $r$ is a decreasing function on
$\mathbb{R}$, it has a limit as $u\rightarrow-\infty$ and we have seen above
that this limit cannot be finite, hence
$\lim_{u\rightarrow-\infty}r(u)=+\infty\,.$
The solution $r$ also has a limit $l$ as $u\rightarrow+\infty$ and since $r$
is a decreasing function and $r(u)>r_{h}(u)>2M(u)$ on $\mathbb{R}$, it follows
that $2M_{+}\leq l<+\infty$. If $M_{+}>0$, then $l>0$ and $\dot{r}$ also has a
limit as $u\rightarrow+\infty$ given by
$\lim_{u\rightarrow+\infty}\dot{r}(u)=-\frac{1}{2}\left(1-\frac{2M_{+}}{l}\right)\,.$
This must be zero in order not to contradict the finite limit of $r$. Hence
$l=2M_{+}$. If $M_{+}=0$, then unless $l=0$ we have that
$\lim_{u\rightarrow+\infty}\dot{r}(u)=-\frac{1}{2}$
which implies that $r(u)$ must reach $0$ in finite retarded time and
contradicts $u_{2}=+\infty$. Hence in this case we have $l=0=2M_{+}$.
* •
In the case where $M_{+}=0$ and $u_{+}<+\infty$, the proof uses exactly the
same arguments as in the first case of step 3 and at the end of step 2. ∎
If the mass decreases only for a finite range of $u$, we have not been able to
rule out cases for which we have $r_{h}(u_{+})=2M_{+}$, nor have we managed to
find explicit examples of this situation. It is however easy to see that there
are cases for which $r_{h}(u)>2M_{+}$ for all $u\in\mathbb{R}$.
###### Proposition 3.1.
In the case where $u_{\pm}$ are both finite, assume that
$u_{+}-u_{-}<4(M_{-}-M_{+})$, then $r_{h}(u_{+})>2M_{+}$ and therefore
$r_{h}(u)>2M_{+}$ for all $u\in\mathbb{R}$.
Proof. It is a simple observation. We have
$\begin{aligned}r_{h}(u_{+})-r_{h}(u_{-})&=\int_{u_{-}}^{u_{+}}\dot{r}_{h}(u)\,\mathrm{d}u\\ &=-\frac{1}{2}\int_{u_{-}}^{u_{+}}\left(1-\frac{2M(u)}{r_{h}(u)}\right)\mathrm{d}u\\ &=-\frac{1}{2}(u_{+}-u_{-})+\int_{u_{-}}^{u_{+}}\frac{M(u)}{r_{h}(u)}\,\mathrm{d}u>-\frac{1}{2}(u_{+}-u_{-})\,.\end{aligned}$
Since $r_{h}(u_{-})=2M_{-}$,
$r_{h}(u_{+})>2M_{-}-\frac{1}{2}(u_{+}-u_{-})>2M_{+}\,.$
Since on $]u_{+},+\infty[$ the constant $2M_{+}$ is a solution to (8), by
uniqueness of solutions we must have $r_{h}(u)>2M_{+}$ for all $u>u_{+}$. This
proves the
proposition. ∎
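The trichotomy of Theorem 3.1 can also be observed numerically. Here is a minimal sketch under the same hypothetical smoothstep mass model as above: integrating (8) backwards in retarded time, data slightly above the horizon radius escape towards past null infinity, while data slightly below reach $r=0$ in finite retarded time.
```python
# Minimal sketch of the trichotomy of Theorem 3.1 (assumed mass model).
import numpy as np
from scipy.integrate import solve_ivp

M_minus, M_plus, u_minus, u_plus = 1.0, 0.4, 0.0, 10.0

def M(u):
    s = np.clip((u - u_minus) / (u_plus - u_minus), 0.0, 1.0)
    return M_minus + (M_plus - M_minus) * (3 * s**2 - 2 * s**3)

def rhs(u, r):
    return [-0.5 * (1.0 - 2.0 * M(u) / r[0])]

def hit_zero(u, r):          # stop shortly before the singularity r = 0
    return r[0] - 0.01
hit_zero.terminal = True

u0 = 20.0                    # late time: r_h(u0) is close to 2 M_+ = 0.8
for r0 in (0.9, 0.7):        # just above / just below the past horizon
    sol = solve_ivp(rhs, (u0, -40.0), [r0], events=hit_zero, max_step=0.05)
    if sol.y[0][-1] > 10.0:
        print(r0, "-> r grows without bound towards the past (category 1)")
    else:
        print(r0, "-> r = 0 reached near u =", round(sol.t[-1], 2), "(category 2)")
```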
### 3.2 The second optical function
The function $u$ is an optical function, which means that its gradient is a
null vector field, or equivalently that $u$ satisfies the eikonal equation
$g(\nabla u,\nabla u)=0\,.$ (14)
An important property of optical functions is that the integral lines of their
gradient are null geodesics with affine parametrisation. This is established
in [8]. The more complete Propositions (7.1.60) and (7.1.61) in Penrose and
Rindler Vol 2 [11] state that for a null congruence, the following three
properties are equivalent:
1. 1.
it is hypersurface-orthogonal;
2. 2.
it is hypersurface-forming;
3. 3.
it is geodetic and twist-free.
We recall the proof of the fact that the integral curves of an optical
function are null geodesics, as it is a straightforward calculation.
###### Lemma 3.3.
Let $\xi$ be an optical function and denote $\cal L=\nabla\xi$. The integral
curves of $\cal L$ are geodesics and $\cal L$ corresponds to a choice of
affine parameter, i.e.
$\nabla_{\cal L}{\cal L}=0\,.$
Proof. The proof is direct:
$\begin{aligned}\nabla_{\cal L}{\cal L}^{b}&=\nabla_{\nabla\xi}{\nabla^{b}\xi}\\ &=\nabla_{a}\xi\nabla^{a}\nabla^{b}\xi\\ &=\nabla_{a}\xi\nabla^{b}\nabla^{a}\xi\quad\mbox{since the connection is torsion-free,}\\ &=\nabla^{b}\left(\nabla_{a}\xi\nabla^{a}\xi\right)-\left(\nabla^{b}\nabla_{a}\xi\right)\nabla^{a}\xi\\ &=0-\nabla_{a}\xi\nabla^{a}\nabla^{b}\xi\quad\mbox{since }\nabla\xi\mbox{ is null and the connection torsion-free,}\\ &=-\nabla_{\nabla\xi}\nabla^{b}\xi\,.\end{aligned}$
Hence $\nabla_{\cal L}{\cal L}^{b}=-\nabla_{\cal L}{\cal L}^{b}$, so that $\nabla_{\cal L}{\cal L}=0$. ∎
Since (see (9))
$\nabla u=\frac{\partial}{\partial r}=V$
and $V$ is a principal null direction of the Weyl tensor, a consequence of
Lemma 3.3 and of (14) is the following.
###### Proposition 3.2.
The integral lines of $V$ are affinely parametrised null geodesics; they are
the outgoing principal null geodesics of Vaidya’s spacetime.
We now establish the existence of a second optical function.
###### Proposition 3.3.
There exists a function $v$ defined on
$\mathbb{R}_{u}\times]0,+\infty[_{r}\times S^{2}_{\omega}$, depending solely
on $u$ and $r$, such that $\nabla v$ is everywhere tangent to the integral
lines of (8). This means that $g(\nabla v,\nabla v)=0$, i.e. $v$ is an optical
function. The integral lines of (8), which are also the integral lines of
$\nabla v$, are therefore null geodesics and their congruence generates the
level hypersurfaces of $v$. Since the integral lines of (8) are also tangent
to $W$ (defined in (3)) it follows that they are the incoming principal null
geodesics of Vaidya’s spacetime.
Proof. The metric $g$ can be written as
$g=F\mathrm{d}u\left(\mathrm{d}u+2F^{-1}\mathrm{d}r\right)-r^{2}\mathrm{d}\omega^{2}\,.$
Following the construction of $v$ for the Schwarzschild metric, it is tempting
to put
$\mathrm{d}v=\mathrm{d}u+2F^{-1}\mathrm{d}r=2F^{-1}g(W)\,,$
however this $1$-form is not closed since
$\mathrm{d}\left(\mathrm{d}u+2F^{-1}\mathrm{d}r\right)=-2F^{-2}\frac{\partial
F}{\partial
u}\mathrm{d}u\wedge\mathrm{d}r=\frac{4\dot{M}(u)}{rF^{2}}\mathrm{d}u\wedge\mathrm{d}r$
which vanishes identically only if the mass $M$ is constant, i.e. in the
Schwarzschild case. We introduce an auxiliary function $\psi>0$ and we write
$g=\frac{F}{\psi}\mathrm{d}u\left(\psi\mathrm{d}u+2\psi
F^{-1}\mathrm{d}r\right)-r^{2}\mathrm{d}\omega^{2}\,.$
Our purpose is to find conditions on $\psi$ that ensure that the $1$-form
$\alpha:=\psi\mathrm{d}u+2\psi F^{-1}\mathrm{d}r$ is exact. Since we work in
the variables $(u,r)$ on the simply connected domain
$\mathbb{R}_{u}\times]0,+\infty[_{r}$, all that is required is that $\alpha$
be closed, i.e. that
$\mathrm{d}\alpha=\left[2\frac{\partial}{\partial u}\left(\frac{\psi}{F}\right)-\frac{\partial\psi}{\partial r}\right]\mathrm{d}u\wedge\mathrm{d}r=0\,.$
This equation has the more explicit form
$\frac{\partial\psi}{\partial u}-\frac{F}{2}\frac{\partial\psi}{\partial
r}+\frac{2}{F}\frac{\dot{M}}{r}\psi=0\,.$ (15)
This is an ordinary differential equation along the integral lines of the
second principal null direction (defined in (3))
$W=\frac{\partial}{\partial u}-\frac{F}{2}\frac{\partial}{\partial r}$
parametrised by $u$. Let $\gamma(u)=(u,r(u),\omega)$ be an integral line of
$W$ (which is equivalent to $r(u)$ being a solution to (8)), Equation (15)
along $\gamma$ reads
$\frac{\mathrm{d}}{\mathrm{d}u}(\psi\circ\gamma)=\left(-\frac{2\dot{M}}{rF}\psi\right)\circ\gamma\,,$
or equivalently
$\frac{\mathrm{d}}{\mathrm{d}u}\left(\log\left|\psi\circ\gamma\right|\right)=\left(-\frac{2\dot{M}}{rF}\right)\circ\gamma\,.$
(16)
Equation (15) can therefore be integrated as follows. First, we take a
hypersurface transverse to all the integral lines of (8), for instance
$\mathcal{S}=\\{u=0\\}$, and we fix the value of $\psi$ on $\mathcal{S}$, say
$\psi=1$ on $\mathcal{S}$. Then we evaluate $\psi$ on each integral line of
(8) by solving the ODE (16). Since the integral lines of (8) form a congruence
of $\mathbb{R}_{u}\times]0,+\infty[_{r}\times S^{2}_{\omega}$, this allows us
to define $\psi$ on this whole domain as a smooth (by smooth dependence on
initial data) and nowhere vanishing function. The $1$-form $\alpha$ is then
closed on $\mathbb{R}_{u}\times]0,+\infty[_{r}\times S^{2}_{\omega}$. Since
$\alpha$ depends only on $u$ and $r$, we may see it as a closed $1$-form on
$\mathbb{R}_{u}\times]0,+\infty[_{r}$ which is simply connected. It follows
that $\alpha$ is exact on $\mathbb{R}_{u}\times]0,+\infty[_{r}\times
S^{2}_{\omega}$ and modulo a choice of hypersurface $\Sigma$ generated by the
integral lines of (8), we can define a function $v$ on
$\mathbb{R}_{u}\times]0,+\infty[_{r}\times S^{2}_{\omega}$ such that $v=0$ on
$\Sigma$ and $\alpha=\mathrm{d}v$. In particular,
$\mathrm{d}v=2\psi F^{-1}g(W)\mbox{ and }\nabla v=2\psi F^{-1}W\,.\qed$
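To make the construction concrete, one can integrate (8) and the transport equation (16) together along a single integral curve. The sketch below uses the same hypothetical smoothstep mass model as before, starts above the horizon so that $F>0$ along the whole curve, and normalises $\psi=1$ on the hypersurface $\mathcal{S}=\\{u=0\\}$.
```python
# Minimal sketch of the integrating factor psi of Proposition 3.3
# (assumed smoothstep mass model; illustration only).
import numpy as np
from scipy.integrate import solve_ivp

M_minus, M_plus, u_minus, u_plus = 1.0, 0.4, 0.0, 10.0

def M(u):
    s = np.clip((u - u_minus) / (u_plus - u_minus), 0.0, 1.0)
    return M_minus + (M_plus - M_minus) * (3 * s**2 - 2 * s**3)

def Mdot(u):
    if u <= u_minus or u >= u_plus:
        return 0.0
    s = (u - u_minus) / (u_plus - u_minus)
    return 6.0 * (M_plus - M_minus) * s * (1.0 - s) / (u_plus - u_minus)

def rhs(u, y):
    r, psi = y
    F = 1.0 - 2.0 * M(u) / r
    return [-0.5 * F,                          # equation (8)
            -2.0 * Mdot(u) / (r * F) * psi]    # equation (16)

# start above the horizon, so F > 0 along the curve; psi = 1 on {u = 0}
sol = solve_ivp(rhs, (0.0, 30.0), [3.0, 1.0], max_step=0.05)
print(sol.y[1].min() > 0.0)   # psi remains positive, as the proof requires
```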
## 4 Case of a complete evaporation in infinite time
We now devote particular attention to the case where $M_{+}=0$ and
$u_{+}=+\infty$. As before, we assume that $\dot{M}<0$ on $]u_{-},+\infty[$.
### 4.1 The asymptotic null singularity
As we have established in Theorem 3.1, the past event horizon ends up at $r=0$
as $u\rightarrow+\infty$ and so do all the integral curves of (8), i.e. all
the incoming principal null geodesics. From this, we infer the following
theorem.
###### Theorem 4.1.
Whatever the speed at which $M(u)\rightarrow 0$ as $u\rightarrow+\infty$, we
have a null singularity of the conformal structure in the future of our
spacetime. More precisely, the Kretschmann scalar $k$ does not remain bounded
as $u\rightarrow+\infty$ along any integral line of (8).
Proof. Consider $(]u_{0},+\infty[,r)$ a maximal solution to (8), with
$u_{0}\in\mathbb{R}\cup\\{-\infty\\}$. Assume that $k$ remains bounded along
the integral line as $u\rightarrow+\infty$. Then, using (4), so does $M/r^{3}$
and it follows that $M/r$ tends to $0$ as $u\rightarrow+\infty$ along the
integral line. This implies in turn that $\dot{r}(u)\rightarrow-1/2$ as
$u\rightarrow+\infty$, which contradicts the fact that $r(u)\rightarrow 0$ as
$u\rightarrow+\infty$. ∎
###### Remark 4.1.
If we assume that along the integral lines of (8), $\dot{r}(u)$ has a limit as
$u\rightarrow+\infty$, this limit is necessarily zero in order not to
contradict the fact that $r(u)\rightarrow 0$ as $u\rightarrow+\infty$. This
implies in turn that along the integral line,
$\frac{M(u)}{r(u)}\rightarrow\frac{1}{2}\mbox{ as }u\rightarrow+\infty\,,$
i.e.
$r(u)\simeq 2M(u)\mbox{ as }u\rightarrow+\infty$ (17)
and
$k\simeq\frac{3}{4M(u)^{4}}\mbox{ as }u\rightarrow+\infty\,.$ (18)
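These asymptotics can be checked numerically. The following minimal sketch assumes a hypothetical sigmoid mass $M(u)=1/(1+e^{u})$ (so $M_{-}=1$, $M_{+}=0$ and $u_{+}=+\infty$) and integrates (8) with a stiff solver; the ratio $r(u)/2M(u)$ approaches $1$, in agreement with (17).
```python
# Minimal numerical check of the asymptotics (17) (assumed sigmoid mass model).
import numpy as np
from scipy.integrate import solve_ivp

def M(u):
    return 1.0 / (1.0 + np.exp(u))   # assumption: M_- = 1, M_+ = 0

def rhs(u, r):
    return [-0.5 * (1.0 - 2.0 * M(u) / r[0])]

# equation (8) becomes very stiff as r approaches 2M, hence the LSODA solver
sol = solve_ivp(rhs, (-20.0, 12.0), [2.0 * M(-20.0)],
                method="LSODA", rtol=1e-10, atol=1e-14)
print(sol.y[0][-1] / (2.0 * M(sol.t[-1])))   # the ratio r / 2M tends to 1
```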
### 4.2 A family of uniformly timelike congruences
Some uniformly timelike curves also end up at $r=0$ as $u\rightarrow+\infty$.
Let us consider a curve
$\gamma(u)=(u,r(u),\theta,\varphi)\,,$
such that
$g(\dot{\gamma}(u),\dot{\gamma}(u))=\varepsilon^{2}\,,~{}\varepsilon>0\,,$
(19)
then $r$ satisfies the differential equation
$\dot{r}(u)=\frac{\varepsilon^{2}}{2}-\frac{1}{2}\left(1-\frac{2M(u)}{r(u)}\right)\,.$
(20)
The tangent vector is
$\tau=\partial_{u}+\left(\frac{\varepsilon^{2}}{2}-\frac{1}{2}\left(1-\frac{2M(u)}{r}\right)\right)\partial_{r}$
and
$\nabla_{\tau}\tau=-\frac{M(u)}{r^{2}}\left(\tau-\varepsilon^{2}\partial_{r}\right)\,,$
so the integral curves of (20) are not geodesics, except for $\varepsilon=0$.
The behaviour of the integral curves of (20) changes radically according to
the value of $\varepsilon$. This is detailed in the next two propositions. The
first one deals with the case where $0<\varepsilon<1$.
###### Proposition 4.1.
Let $0<\varepsilon<1$ be given. There exists a unique maximal solution
$r_{\varepsilon}$ to (20) such that
$\lim_{u\rightarrow-\infty}r_{\varepsilon}(u)=\frac{2M_{-}}{1-\varepsilon^{2}}\,.$
This solution exists on the whole real line and $r_{\varepsilon}(u)\rightarrow
0$ as $u\rightarrow+\infty$. Any other maximal solution $r$ to (20) belongs to
either of the following two categories:
1. 1.
$r$ exists on the whole real line, $r(u)>r_{\varepsilon}(u)$ for all
$u\in\mathbb{R}$, $\lim_{u\rightarrow-\infty}r(u)=+\infty$ and
$\lim_{u\rightarrow+\infty}r(u)=0$ ;
2. 2.
$r$ exists on $]u_{0},+\infty[$ with $u_{0}\in\mathbb{R}$ and satisfies:
$r(u)<r_{\varepsilon}(u)$ for all $u\in]u_{0},+\infty[$ and $r(u)$ tends to
$0$ as $u\rightarrow u_{0}$ and as $u\rightarrow+\infty$.
Moreover, the Kretschmann scalar $k$ fails to be bounded on the integral lines
of (20) as $u\rightarrow+\infty$. If we assume moreover that $\dot{r}$ has a
limit as $u\rightarrow+\infty$ along the integral lines of (20), then we have
a similar behaviour for $k$ to that described in Remark 4.1 for the integral
lines of (8), namely
$k\simeq\frac{3(1-\varepsilon^{2})^{6}}{4M(u)^{4}}\mbox{ as
}u\rightarrow+\infty\,.$ (21)
The second proposition treats the cases where $\varepsilon\geq 1$.
###### Proposition 4.2.
If $\varepsilon\geq 1$, then all maximal solutions to (20) exist on an interval
$]u_{0},+\infty[$ with $u_{0}>-\infty$, $r$ is strictly increasing on
$]u_{0},+\infty[$ and $r(u)\rightarrow 0$ as $u\rightarrow u_{0}$. Moreover:
* •
if $\varepsilon=1$, then the limit of $r(u)$ as $u\rightarrow+\infty$ is
finite if and only if $M(u)$ is integrable in the neighbourhood of $+\infty$;
* •
if $\varepsilon>1$, then $r(u)\rightarrow+\infty$ as $u\rightarrow+\infty$.
###### Remark 4.2.
In view of (19), the proper time along an integral curve of (20) is exactly
given (after an adequate choice of origin) by $\tau=\varepsilon u$. Therefore,
the behaviour of the integral lines of (20) as $u\rightarrow+\infty$ described
in Propositions 4.1 and 4.2 corresponds to their behaviour as proper time
tends to $+\infty$.
Proof of Proposition 4.1. We observe that Equation (20) can be transformed to
(8). We put
$\tilde{r}(u)=\frac{r(u)}{1-\varepsilon^{2}}\,,~{}\tilde{M}(u)=\frac{M(u)}{(1-\varepsilon^{2})^{2}}\,,$
then Equation (20) becomes
$\dot{\tilde{r}}(u)=-\frac{1}{2}\left(1-\frac{2\tilde{M}(u)}{\tilde{r}(u)}\right)\,.$
The classification of maximal solutions therefore follows directly from
Theorem 3.1. The derivation of the behaviour of the Kretschmann scalar along
an integral line is also similar to the null case. The proof of the lack of
boundedness is the same as that of Theorem 4.1 and assuming that $\dot{r}$ has
a limit as $u\rightarrow+\infty$ along an integral line of (20), this limit
must be zero and we infer
$r(u)\simeq\frac{2M(u)}{1-\varepsilon^{2}}\,.$
Then (21) follows from (4). ∎
Proof of Proposition 4.2. Let $(]a,b[,r)$ be a maximal solution to (20).
* •
Case where $\varepsilon=1$. The differential equation (20) becomes
$\dot{r}(u)=\frac{M(u)}{r(u)}\,,$
or equivalently
$\frac{\mathrm{d}}{\mathrm{d}u}((r(u))^{2})=2M(u)\,.$
The function $r$ is strictly increasing and given $u_{1}\in]a,b[$, we have for
all $u\in]a,b[$
$r(u)^{2}=r(u_{1})^{2}+2\int_{u_{1}}^{u}M(s)\mathrm{d}s\,.$
Then $a$ is finite, strictly less than $u_{1}$, and is precisely such that
$\int^{u_{1}}_{a}M(s)\mathrm{d}s=\frac{r(u_{1})^{2}}{2}\,.$
Also $b=+\infty$ and
$\lim_{u\rightarrow+\infty}r(u)^{2}=r(u_{1})^{2}+2\int_{u_{1}}^{+\infty}M(s)\mathrm{d}s\,.$
* •
Case where $\varepsilon>1$. Now for all $u\in]a,b[$,
$\dot{r}(u)>\frac{\varepsilon^{2}-1}{2}>0\,.$ (22)
It follows that $a$ is finite and
$\lim_{u\rightarrow a}r(u)=0\,.$
Moreover, let $u_{1}\in]a,b[$, then using the fact that $r$ is strictly
increasing on $]a,b[$, we have for all $u\in]u_{1},b[$,
$\dot{r}(u)<\frac{\varepsilon^{2}-1}{2}+\frac{M_{+}}{r(u_{1})}\,,$
whence $b=+\infty$ and (22) implies
$\lim_{u\rightarrow+\infty}r(u)=+\infty\,.\qed$
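The $\varepsilon=1$ case admits the closed form $r(u)^{2}=r(u_{1})^{2}+2\int_{u_{1}}^{u}M(s)\,\mathrm{d}s$, which can be compared with a direct integration of (20). Here is a minimal sketch, assuming a hypothetical sigmoid mass for which $M$ is integrable near $+\infty$, so that the limit of $r$ is finite, as the proposition asserts.
```python
# Minimal check of the epsilon = 1 case of Proposition 4.2
# (assumed sigmoid mass model with integrable M near +infinity).
import numpy as np
from scipy.integrate import solve_ivp, quad

def M(u):
    return 1.0 / (1.0 + np.exp(u))   # assumption: M_- = 1, M_+ = 0

u1, r1 = 0.0, 2.0
# equation (20) with epsilon = 1 reduces to dr/du = M(u) / r(u)
sol = solve_ivp(lambda u, r: [M(u) / r[0]], (u1, 40.0), [r1], max_step=0.05)
closed_form = np.sqrt(r1**2 + 2.0 * quad(M, u1, sol.t[-1])[0])
print(sol.y[0][-1], closed_form)     # the two values agree closely
```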
### 4.3 The global structure of the spacetime
The two congruences of null geodesics that we have considered (the curves of
constant $(u,\omega)$ and of constant $(v,\omega)$) are inextendible. The
spacetime is therefore maximally extended and we have two global charts
$\mathbb{R}_{u}\times]0,+\infty[_{r}\times S^{2}_{\omega}$ and
$\mathbb{R}_{v}\times]0,+\infty[_{r}\times S^{2}_{\omega}$. Figures 1 to 4
display the Penrose diagram of Vaidya’s spacetime for a mass function that
decreases strictly on the whole real line, tends to $0$ as
$u\rightarrow+\infty$ and to a finite positive limit $M_{-}$ as
$u\rightarrow-\infty$, with the general forms of various congruences: the
incoming principal null geodesics (lines of constant $(v,\omega)$) in Figure
1, the outgoing principal null geodesics (lines of constant $(u,\omega)$) in
Figure 2, the timelike curves given by the integral lines of (20) for
$0<\varepsilon<1$ in Figure 3 and for $\varepsilon\geq 1$ in Figure 4. The
dashed lines are curvature singularities. The null singularity in the future
is an asymptotic singularity.
Figure 1: Incoming principal null congruence: lines of constant $(v,\omega)$.
Figure 2: Outgoing principal null congruence: lines of constant $(u,\omega)$.
Figure 3: The timelike congruence for $\varepsilon<1$.
Figure 4: The timelike congruence for $\varepsilon\geq 1$.
Note that for the fourth figure, the general form of the congruence is the
same for $\varepsilon>1$ and for $\varepsilon=1$, independently of the
integrability of $M(u)$ near $u=+\infty$, because all curves end up at future
timelike infinity, whether the limit of $r$ along the curve is positive and
finite or infinite.
## References
* [1] V.A. Berezin, V.I. Dokuchaev and Y.N. Eroshenko, On maximal analytical extension of the Vaidya metric with linear mass function, Classical and Quantum Gravity, 33 (2016), 14, 145003.
* [2] C. Cherubini, D. Bini, S. Capozziello, R. Ruffini, Second order scalar invariants of the Riemann tensor: application to black hole spacetimes, Int. J. Mod. Phys. D, 11 (2002), 6, 827–841.
* [3] I. Booth, J. Martin, On the proximity of black hole horizons: lessons from Vaidya, Phys. Rev. D 82 (2010), 124046.
* [4] F. Fayos, M.M. Martin-Prats and J.M.M. Senovilla, On the extension of Vaidya and Vaidya-Reissner-Nordström spacetimes, Class. Quantum Grav. 12 (1995), 2565–2576.
* [5] F. Fayos and R. Torres, A class of interiors for Vaidya’s radiating metric: singularity-free gravitational collapse, Class. Quantum Grav., 25 (2008), 175009.
* [6] E. Gourgoulhon, M. Bejger, M. Mancini, Tensor calculus with open-source software: the SageManifolds project, Journal of Physics: Conference Series 600 (2015), 012002, doi:10.1088/1742-6596/600/1/012002.
* [7] J.B. Griffiths, J. Podolský, Exact spacetimes in Einstein’s general relativity, Cambridge Monographs on Mathematical Physics, Cambridge University Press, 2009.
* [8] D. Häfner, J.-P. Nicolas, The characteristic Cauchy problem for Dirac fields on curved backgrounds J. Hyperbolic Differ. Equ. 8 (2011), 3, 437–483.
* [9] W.A. Hiscock, Models of evaporating black holes: II. Effects of the outgoing created radiation, Phys. Rev. D 23 (1981), 12, 2823–2827.
* [10] W. Israel, Gravitational collapse of a radiating star, Physics Letters 24A (1967), 3, 184–186.
* [11] R. Penrose, W. Rindler, Spinors and spacetime, Vol. I (1984) and Vol. 2 (1986), Cambridge University Press.
* [12] A.Z. Petrov, The classification of spaces defining gravitational fields, Scientific Proceedings of Kazan State University (named after V.I. Ulyanov-Lenin), Jubilee (1804-1954) Collection 114 (1954), 8, 55–69, translation by J. Jezierski and M.A.H. MacCallum, with introduction by M.A.H. MacCallum, Gen. Rel. Grav. 32 (2000), 1661–1685.
* [13] P.C. Vaidya, The external field of a radiating star in general relativity, Current Sci. (India), 12 (1943), 183.
* [14] P.C. Vaidya, “Newtonian” time in general relativity, Nature 171 (1953), 260–261.
# Restrained Italian domination in trees
Kijung Kim Department of Mathematics, Pusan National University, Busan 46241,
Republic of Korea<EMAIL_ADDRESS>
###### Abstract.
Let $G=(V,E)$ be a graph. A subset $D$ of $V$ is a restrained dominating set
if every vertex in $V\setminus D$ is adjacent to a vertex in $D$ and to a
vertex in $V\setminus D$. The restrained domination number, denoted by
$\gamma_{r}(G)$, is the smallest cardinality of a restrained dominating set of
$G$. A function $f:V\rightarrow\\{0,1,2\\}$ is a restrained Italian dominating
function on $G$ if (i) for each vertex $v\in V$ for which $f(v)=0$, it holds
that $\sum_{u\in N_{G}(v)}f(u)\geq 2$, (ii) the subgraph induced by $\\{v\in
V\mid f(v)=0\\}$ has no isolated vertices. The restrained Italian domination
number, denoted by $\gamma_{rI}(G)$, is the minimum weight taken over all
restrained Italian dominating functions of $G$. It is known that
$\gamma_{r}(G)\leq\gamma_{rI}(G)\leq 2\gamma_{r}(G)$ for any graph $G$. In
this paper, we characterize the trees $T$ for which
$\gamma_{r}(T)=\gamma_{rI}(T)$, and we also characterize the trees $T$ for
which $\gamma_{rI}(T)=2\gamma_{r}(T)$.
Key words: restrained domination, restrained Italian domination, tree
###### 2010 Mathematics Subject Classification:
05C69
This research was supported by Basic Science Research Program through the
National Research Foundation of Korea funded by the Ministry of Education
(2020R1I1A1A01055403).
## 1\. Introduction and Terminology
Let $G=(V,E)$ be a finite simple graph with vertex set $V=V(G)$ and edge set
$E=E(G)$. The open neighborhood of $v\in V(G)$ is the set $N_{G}(v)=\\{u\in
V(G)\mid uv\in E(G)\\}$ and the closed neighborhood of $v\in V(G)$ is the set
$N_{G}[v]:=N_{G}(v)\cup\\{v\\}$. A subset $D$ of $V(G)$ is a dominating set if
every vertex in $V(G)\setminus D$ is adjacent to a vertex in $D$. The
domination number of $G$, denoted by $\gamma(G)$, is the minimum cardinality
of a dominating set in $G$. A dominating set with the cardinality $\gamma(G)$
is called a $\gamma(G)$-set.
In [3], Domke et al. gave the formal definition of restrained domination. A
subset $S$ of $V(G)$ is a restrained dominating set (RDS) if every vertex in
$V(G)\setminus S$ is adjacent to a vertex in $S$ and another vertex in
$V(G)\setminus S$. The restrained domination number of $G$, denoted by
$\gamma_{r}(G)$, is the minimum cardinality of a restrained dominating set in
$G$. A restrained dominating set with the cardinality $\gamma_{r}(G)$ is
called a $\gamma_{r}(G)$-set. As explained in [3], there is one possible
application of the concept of restrained domination. Each vertex in a RDS $S$
represents a guard and each vertex in $V(G)\setminus S$ represents a prisoner.
Each prisoner must be observed by at least one guard and every prisoner must
be seen by at least one other prisoner to protect the rights of prisoners. To
be cost effective, it is desirable to place as few guards as possible.
A function $f:V(G)\rightarrow\\{0,1,2\\}$ is an Italian dominating function on
$G$ if for each vertex $v\in V(G)$ for which $f(v)=0$, it holds that
$\sum_{u\in N_{G}(v)}f(u)\geq 2$. In [6], Samadi et al. introduced the concept
of restrained Italian domination as a variant of Italian dominating function.
An Italian dominating function $f:V\rightarrow\\{0,1,2\\}$ is a restrained
Italian dominating function (RIDF) on $G$ if the subgraph induced by $\\{v\in
V\mid f(v)=0\\}$ has no isolated vertices. A RIDF $f$ gives an ordered
partition $(V_{0},V_{1},V_{2})$ (or $(V_{0}^{f},V_{1}^{f},V_{2}^{f})$ to refer
to $f$) of $V(G)$, where $V_{i}:=\\{v\in V(G)\mid f(v)=i\\}$. The weight of a
RIDF $f$ is $\omega(f):=\sum_{v\in V}f(v)$. The restrained Italian domination
number, denoted by $\gamma_{rI}(G)$, is the minimum weight taken over all
restrained Italian dominating functions of $G$. A $\gamma_{rI}(G)$-function is
a RIDF on $G$ with weight $\gamma_{rI}(G)$.
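Since the trees considered below are small, both invariants can be computed by exhaustive search directly from the definitions. The following is a minimal brute-force sketch (exponential in the order, so for illustration only); the graph, encoded as an adjacency dictionary, is the path $P_{4}$.
```python
# Brute-force computation of gamma_r and gamma_rI from the definitions
# (illustration only; exponential in the number of vertices).
from itertools import combinations, product

def gamma_r(V, adj):
    # restrained domination number: smallest RDS
    for k in range(1, len(V) + 1):
        for D in map(set, combinations(V, k)):
            out = [v for v in V if v not in D]
            if all(adj[v] & D for v in out) and \
               all(any(u not in D for u in adj[v]) for v in out):
                return k
    return len(V)

def gamma_rI(V, adj):
    # restrained Italian domination number: minimum weight of a RIDF
    best = 2 * len(V)
    for vals in product((0, 1, 2), repeat=len(V)):
        f = dict(zip(V, vals))
        zeros = {v for v in V if f[v] == 0}
        if all(sum(f[u] for u in adj[v]) >= 2 for v in zeros) and \
           all(adj[v] & zeros for v in zeros):
            best = min(best, sum(vals))
    return best

# the path P4: gamma_rI = 2 * gamma_r, so the upper bound is attained
V = [0, 1, 2, 3]
adj = {0: {1}, 1: {0, 2}, 2: {1, 3}, 3: {2}}
print(gamma_r(V, adj), gamma_rI(V, adj))   # expected: 2 4
```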
As noted in [6, Proposition 3.3], it holds that
$\gamma_{r}(G)\leq\gamma_{rI}(G)\leq 2\gamma_{r}(G)$ for any graph $G$. We
define a tree $T$ to be a $(\gamma_{r},\gamma_{rI})$-tree if
$\gamma_{r}(T)=\gamma_{rI}(T)$. We define a tree $T$ to be a restrained
Italian tree if $\gamma_{rI}(T)=2\gamma_{r}(T)$. In this paper, we
characterize $(\gamma_{r},\gamma_{rI})$-trees and restrained Italian trees.
In the rest of this section, we present some necessary terminology and notation.
For terminology and notation on graph theory not given here, the reader is
referred to [1, 7]. The degree of $v\in V(G)$ is defined as the cardinality of
$N_{G}(v)$, denoted by $deg_{G}(v)$. A diametral path of $G$ is a path whose
length equals the diameter of $G$. A subset $S$ of $V(G)$ is a
packing in $G$ if the vertices of $S$ are pairwise at distance at least three
apart in $G$. The packing number of $G$, denoted by $\rho(G)$, is the maximum
cardinality of a packing in $G$. A packing with the cardinality $\rho(G)$ is
called a $\rho(G)$-set.
Let $T$ be a (rooted) tree. A leaf of $T$ is a vertex of degree one. A stem
(or support vertex) is a vertex adjacent to a leaf. A weak stem is a stem that
is adjacent to exactly one leaf. For a vertex $v$ in a rooted tree, we let
$C(v)$ and $D(v)$ denote the set of children and descendants, respectively, of
$v$. The subtree induced by $D(v)\cup\\{v\\}$ is denoted by $T_{v}$. We write
$K_{1,n-1}$ for the star of order $n\geq 3$. The double star $DS_{p,q}$, where
$p,q\geq 1$, is the graph obtained by joining the centers of two stars
$K_{1,p}$ and $K_{1,q}$. A healthy spider $S_{t,t}$ is the graph from a star
$K_{1,t}$ by subdividing each edges of $K_{1,t}$. For two graph $G$ and $H$,
if $G$ is isomorphic to $H$, we denote it by $G\cong H$. For a graph $G$ and
its subgraph $S$, $G-S$ denotes the subgraph of $G$ induced by $V(G)\setminus
V(S)$.
## 2\. $(\gamma_{r},\gamma_{rI})$-trees
In this section, we characterize the trees for which
$\gamma_{rI}(T)=\gamma_{r}(T)$. First, we introduce a family $\mathcal{H}$ of
trees that can be obtained from a sequence $T_{1},T_{2},\dotsc,T_{m}$ $(m\geq
1)$ of trees such that $T_{1}$ is a double star $DS_{l,n}$ $(l,n\geq 2)$, and
if $m\geq 2$, $T_{i+1}$ can be obtained recursively from $T_{i}$ by one of the
following operations for $1\leq i\leq m-1$.
Define
$LV(T_{i})=\\{v\in V(T_{i})\mid v~{}\text{is a leaf of}~{}T_{j}~{}\text{for
some}~{}j\leq i\\}$
and
$SV(T_{i})=\\{v\in V(T_{i})\mid v~{}\text{is a stem of}~{}T_{j}~{}\text{for
some}~{}j\leq i\\}.$
Note that $V(T_{i})=LV(T_{i})\cup SV(T_{i})$.
Operation $\mathcal{O}_{1}$. If $x\in LV(T_{i})$, then $\mathcal{O}_{1}$ adds
a double star $DS_{r,s}$ with a center $u$ and joins $u$ to $x$ to produce
$T_{i+1}$, where $s\geq 2$ and $u$ has $r$ leaves.
Operation $\mathcal{O}_{2}$. If $x\in SV(T_{i})$, then $\mathcal{O}_{2}$ adds
a star $K_{1,t}$ with the center $u$ and joins $u$ to $x$ to produce
$T_{i+1}$.
The following is obtained by induction.
###### Observation 2.1.
With the previous notation, the following holds.
1. (i)
$LV(T_{i})$ is a unique minimum RDS of $T_{i}$.
2. (ii)
The subgraph induced by $SV(T_{i})$ is a forest and each component has at
least two vertices.
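As a concrete illustration of the construction and of Observation 2.1(i), the following sketch builds $T_{2}$ from $T_{1}=DS_{2,2}$ by one application of operation $\mathcal{O}_{2}$ (the vertex labels are arbitrary) and verifies by exhaustive search that the accumulated leaf set $LV(T_{2})$ is the minimum RDS.
```python
# Build T2 from T1 = DS_{2,2} by operation O2 and check Observation 2.1(i)
# by exhaustive search (vertex labels are arbitrary; illustration only).
from itertools import combinations

# T1 = DS_{2,2}: centers 0 and 1; leaves 2, 3 of 0 and 4, 5 of 1
adj = {0: {1, 2, 3}, 1: {0, 4, 5}, 2: {0}, 3: {0}, 4: {1}, 5: {1}}
LV = {2, 3, 4, 5}

# O2: add a star K_{1,2} with center 6 and leaves 7, 8; join 6 to the stem 0
for v, nbrs in {6: {0, 7, 8}, 7: {6}, 8: {6}}.items():
    adj[v] = nbrs
adj[0].add(6)
LV |= {7, 8}               # the new leaves enter LV(T2)

def is_rds(D, V):
    out = [v for v in V if v not in D]
    return all(adj[v] & D for v in out) and \
           all(any(u not in D for u in adj[v]) for v in out)

V = list(adj)
smallest = min((set(D) for k in range(1, len(V) + 1)
                for D in combinations(V, k) if is_rds(set(D), V)), key=len)
print(smallest == LV)      # expected: True
```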
###### Lemma 2.2.
If $\gamma_{r}(T_{i})=\gamma_{rI}(T_{i})$ and $T_{i+1}$ is obtained from
$T_{i}$ by operation $\mathcal{O}_{1}$, then
$\gamma_{r}(T_{i+1})=\gamma_{rI}(T_{i+1})$.
###### Proof.
It follows from Observation 2.1 that
$\gamma_{r}(T_{i+1})=\gamma_{r}(T_{i})+r+s$. Since every
$\gamma_{rI}(T_{i})$-function can be extended to a RIDF of $T_{i+1}$, we have
$\gamma_{rI}(T_{i+1})\leq\gamma_{rI}(T_{i})+r+s$.
We verify $\gamma_{rI}(T_{i+1})=\gamma_{rI}(T_{i})+r+s$. Let $g$ be a
$\gamma_{rI}(T_{i+1})$-function. If $g(x)=0$, then $g(y)=1$ for each $y\in
N_{T_{i+1}}(x)$. This implies that
$\gamma_{rI}(T_{i+1})\geq\gamma_{rI}(T_{i})+r+s+2$, a contradiction. Thus, we
have $g(x)=1$. It is easy to see that $g|_{V(T_{i})}$ is a RIDF. So, we have
$\gamma_{rI}(T_{i})\leq\gamma_{rI}(T_{i+1})-r-s$.
Thus, it follows from $\gamma_{r}(T_{i})=\gamma_{rI}(T_{i})$ that
$\gamma_{r}(T_{i+1})=\gamma_{rI}(T_{i+1})$. ∎
###### Lemma 2.3.
If $\gamma_{r}(T_{i})=\gamma_{rI}(T_{i})$ and $T_{i+1}$ is obtained from
$T_{i}$ by operation $\mathcal{O}_{2}$, then
$\gamma_{r}(T_{i+1})=\gamma_{rI}(T_{i+1})$.
###### Proof.
It follows from Observation 2.1 that
$\gamma_{r}(T_{i+1})=\gamma_{r}(T_{i})+t$. Since every
$\gamma_{rI}(T_{i})$-function can be extended to a RIDF of $T_{i+1}$, we have
$\gamma_{rI}(T_{i+1})\leq\gamma_{rI}(T_{i})+t$.
We verify $\gamma_{rI}(T_{i+1})=\gamma_{rI}(T_{i})+t$. Let $g$ be a
$\gamma_{rI}(T_{i+1})$-function. If $g(x)=1$, then $g(u)=1$. This implies that
$\gamma_{rI}(T_{i+1})\geq\gamma_{rI}(T_{i})+t+1$, a contradiction. Thus, we
have $g(x)=0$. It is easy to see that $g|_{V(T_{i})}$ is a RIDF. So, we have
$\gamma_{rI}(T_{i})\leq\gamma_{rI}(T_{i+1})-t$.
Thus, it follows from $\gamma_{r}(T_{i})=\gamma_{rI}(T_{i})$ that
$\gamma_{r}(T_{i+1})=\gamma_{rI}(T_{i+1})$. ∎
Now we are ready to prove our main theorem.
###### Theorem 2.4.
A tree $T$ of order $n\geq 3$ is a $(\gamma_{r},\gamma_{rI})$-tree if and only
if $T\in\mathcal{H}\cup\\{K_{1,t}\mid t\geq 2\\}$.
###### Proof.
First, we prove that if $T\in\mathcal{H}\cup\\{K_{1,t}\mid t\geq 2\\}$, then
$\gamma_{rI}(T)=\gamma_{r}(T)$. Clearly,
$\gamma_{rI}(K_{1,t})=\gamma_{r}(K_{1,t})$. Assume that $T\in\mathcal{H}$.
Then there exist a sequence $T_{1},T_{2},\dotsc,T_{m}=T$ $(m\geq 1)$ such that
$T_{1}$ is a double star $DS_{r,s}$, and if $m\geq 2$, $T_{i+1}$ can be
obtained recursively from $T_{i}$ by an operation $\mathcal{O}_{1}$ or
$\mathcal{O}_{2}$ for $1\leq i\leq m-1$. We use induction on $m$. Clearly,
$\gamma_{rI}(T_{1})=\gamma_{r}(T_{1})$. Suppose that the statement is true for
any tree constructed by $m-1$ operations. Let $T^{\prime}=T_{m-1}$. By the
induction hypothesis, $\gamma_{rI}(T^{\prime})=\gamma_{r}(T^{\prime})$. It
follows from Lemma 2.2 or 2.3 that $\gamma_{rI}(T)=\gamma_{r}(T)$.
Next, we prove that if $\gamma_{rI}(T)=\gamma_{r}(T)$, then
$T\in\mathcal{H}\cup\\{K_{1,t}\mid t\geq 2\\}$. We proceed by induction on the
order $n$ of $T$ satisfying $\gamma_{rI}(T)=\gamma_{r}(T)$. Suppose that
$diam(T)=2$. Then $T$ is a star and clearly $\gamma_{rI}(T)=\gamma_{r}(T)$.
Thus, $T\in\\{K_{1,t}\mid t\geq 2\\}$. Suppose that $diam(T)=3$. Then $T\cong
DS_{r,s}$ for $r,s\geq 2$, which is itself the base double star $T_{1}$ of the
construction, so $T\in\mathcal{H}$. Hence, we may assume that $diam(T)\geq 4$.
Among all diametral paths in $T$, we choose $x_{0}x_{1}\dotsc x_{d}$ so
that it maximizes the degree of $x_{d-1}$. Root $T$ at $x_{0}$. Let
$g=(V_{0}^{g},V_{1}^{g},V_{2}^{g})$ be a $\gamma_{rI}(T)$-function.
Claim 1. $V_{2}^{g}=\emptyset$ and $V_{1}^{g}$ is a RDS of $T$.
Since $V_{1}^{g}\cup V_{2}^{g}$ is a RDS of $T$, we have
$\gamma_{r}(T)\leq|V_{1}^{g}\cup
V_{2}^{g}|=|V_{1}^{g}|+|V_{2}^{g}|\leq|V_{1}^{g}|+2|V_{2}^{g}|=\gamma_{rI}(T).$
Since $\gamma_{rI}(T)=\gamma_{r}(T)$, we must have the equality throughout the
above inequality chain. Thus, $V_{2}^{g}=\emptyset$ and $V_{1}^{g}$ is a RDS
of $T$.
Claim 2. $deg_{T}(x_{d-1})\geq 3$.
Suppose to the contrary that $deg_{T}(x_{d-1})=2$. Suppose that
$deg_{T}(x_{d-2})=2$. In this case, $g(x_{d-1})=1$ for otherwise $g(x_{d-2})$
must be assigned the weight $1$ but this contradicts the fact that $V_{1}^{g}$
is a RDS of $T$. By the same argument, we have $g(x_{d-2})=1$. If
$g(x_{d-3})=1$, then $V_{1}^{g}\setminus\\{x_{d-2},x_{d-1}\\}$ is a RDS with
the cardinality less than $\gamma_{r}(T)$, a contradiction. Thus,
$g(x_{d-3})=0$ and $\sum_{x\in N_{T}(x_{d-3})}g(x)\geq 2$. This implies that
$V_{1}^{g}\setminus\\{x_{d-2}\\}$ is a RDS of $T$. This is a contradiction.
Suppose that $deg_{T}(x_{d-2})\geq 3$. Then each $x\in
N_{T}(x_{d-2})\setminus\\{x_{d-3}\\}$ is either a leaf or a weak stem, since
$deg_{T}(x_{d-1})=2$ and the diametral path was chosen to maximize $deg_{T}(x_{d-1})$. If
$g(x_{d-2})=1$, then every vertex of $T_{x_{d-2}}$ has the weight $1$. Let $M$
be the subset of $V_{1}^{g}$ obtained by removing $x_{d-2}$ and weak stems in
$T_{x_{d-2}}$. Now we have $g(x_{d-3})=0$ for otherwise $M$ is a RDS of $T$, a
contradiction. Since $\sum_{x\in N_{T}(x_{d-3})}g(x)\geq 2$, $M$ is a RDS of
$T$, a contradiction. This completes the proof of claim.
Claim 3. $deg_{T}(x_{d-2})\geq 3$.
Suppose to the contrary that $deg_{T}(x_{d-2})=2$. Suppose that
$g(x_{d-2})=0$. Then $g(x_{d-3})=g(x_{d-1})=1$, since $\sum_{x\in
N_{T}(x_{d-2})}g(x)\geq 2$. This is a contradiction by Claim 1.
Suppose that $g(x_{d-2})=1$. Then $g(x_{d-1})=1$. Let $N$ be the subset of
$V_{1}^{g}$ obtained by removing $x_{d-2}$ and $x_{d-1}$. If $g(x_{d-3})=1$,
then $N$ is a RDS of $T$, a contradiction. Thus, we have $g(x_{d-3})=0$. But,
since $\sum_{x\in N_{T}(x_{d-3})}g(x)\geq 2$, $N\cup\\{x_{d-2}\\}$ is a RDS of
$T$, a contradiction. This completes the proof of claim.
We divide our consideration into two cases.
Case 1. $x_{d-3}\in V_{1}^{g}$. Then $x_{d-2},x_{d-1}\in V_{0}^{g}$ for
otherwise the subset of $V_{1}^{g}$ obtained by removing $x_{d-2}$ and stems
in $T_{x_{d-2}}$ is a RDS of $T$, a contradiction.
Subcase 1.1. $x_{d-2}$ has stems except for $x_{d-1}$. Consider
$T_{x_{d-1}}\cong K_{1,t}$ and let $T^{\prime}=T-T_{x_{d-1}}$. Since every
$\gamma_{r}(T^{\prime})$-set (resp., $\gamma_{rI}(T^{\prime})$-function) can
be extended to a RDS (resp., RIDF) of $T$,
$\gamma_{r}(T)\leq\gamma_{r}(T^{\prime})+t$ and
$\gamma_{rI}(T)\leq\gamma_{rI}(T^{\prime})+t$. Since $V_{1}^{g}\setminus
C(x_{d-1})$ (resp., $g|_{V(T^{\prime})}$) is a RDS (resp., RIDF) of
$T^{\prime}$, $\gamma_{r}(T^{\prime})\leq\gamma_{r}(T)-t$ and
$\gamma_{rI}(T^{\prime})\leq\gamma_{rI}(T)-t$. Thus, it follows from
$\gamma_{rI}(T)=\gamma_{r}(T)$ that
$\gamma_{r}(T^{\prime})=\gamma_{rI}(T^{\prime})$. Applying the inductive
hypothesis to $T^{\prime}$, $T^{\prime}\in\mathcal{H}$. By operation
$\mathcal{O}_{2}$, we have $T\in\mathcal{H}$.
Subcase 1.2. $x_{d-2}$ has no stem except for $x_{d-1}$. Consider
$T_{x_{d-2}}\cong DS_{r,s}$ and let $T^{\prime}=T-T_{x_{d-2}}$. Since every
$\gamma_{r}(T^{\prime})$-set (resp., $\gamma_{rI}(T^{\prime})$-function) can
be extended to a RDS (resp., RIDF) of $T$, we have
$\gamma_{r}(T)\leq\gamma_{r}(T^{\prime})+r+s$ and
$\gamma_{rI}(T)\leq\gamma_{rI}(T^{\prime})+r+s$. Since $V_{1}^{g}\setminus
D(x_{d-2})$ (resp., $g|_{V(T^{\prime})}$) is a RDS (resp., RIDF) of
$T^{\prime}$, $\gamma_{r}(T^{\prime})\leq\gamma_{r}(T)-r-s$ and
$\gamma_{rI}(T^{\prime})\leq\gamma_{rI}(T)-r-s$. Thus, it follows from
$\gamma_{rI}(T)=\gamma_{r}(T)$ that
$\gamma_{r}(T^{\prime})=\gamma_{rI}(T^{\prime})$. Applying the inductive
hypothesis to $T^{\prime}$, $T^{\prime}\in\mathcal{H}$. By operation
$\mathcal{O}_{1}$, we have $T\in\mathcal{H}$.
Case 2. $x_{d-3}\in V_{0}^{g}$. Then $x_{d-2},x_{d-1}\in V_{0}^{g}$ for
otherwise every vertex in $T_{x_{d-2}}$ belongs to $V_{1}^{g}$. Since
$x_{d-3}$ is adjacent to at least one vertex not in $T_{x_{d-2}}$, the subset
of $V_{1}^{g}$ obtained by removing $x_{d-2}$ and stems in $T_{x_{d-2}}$ is a
RDS of $T$, a contradiction.
Consider $T_{x_{d-1}}\cong K_{1,t}$ and let $T^{\prime}=T-T_{x_{d-1}}$. Since
every $\gamma_{r}(T^{\prime})$-set (resp., $\gamma_{rI}(T^{\prime})$-function)
can be extended to a RDS (resp., RIDF) of $T$,
$\gamma_{r}(T)\leq\gamma_{r}(T^{\prime})+t$ and
$\gamma_{rI}(T)\leq\gamma_{rI}(T^{\prime})+t$. Since $V_{1}^{g}\setminus
C(x_{d-1})$ (resp., $g|_{V(T^{\prime})}$) is a RDS (resp., RIDF) of
$T^{\prime}$, $\gamma_{r}(T^{\prime})\leq\gamma_{r}(T)-t$ and
$\gamma_{rI}(T^{\prime})\leq\gamma_{rI}(T)-t$. Thus, it follows from
$\gamma_{rI}(T)=\gamma_{r}(T)$ that
$\gamma_{r}(T^{\prime})=\gamma_{rI}(T^{\prime})$. Applying the inductive
hypothesis to $T^{\prime}$, $T^{\prime}\in\mathcal{H}$. By operation
$\mathcal{O}_{2}$, we have $T\in\mathcal{H}$. ∎
## 3\. Restrained Italian trees
In this section, we characterize the trees for which
$\gamma_{rI}(T)=2\gamma_{r}(T)$. First, we introduce a family $\mathcal{F}$ of
trees that can be obtained from a sequence $T_{1},T_{2},\dotsc,T_{m}$ $(m\geq
1)$ of trees such that $T_{1}$ is a path $P_{4}$, and if $m\geq 2$, $T_{i+1}$
can be obtained recursively from $T_{i}$ by one of the following operations
for $1\leq i\leq m-1$.
Define
$LV(T_{i})=\\{v\in V(T_{i})\mid v~{}\text{is a leaf of}~{}T_{j}~{}\text{for
some}~{}j\leq i\\}.$
Operation $\mathcal{O}_{1}$. If $x\in LV(T_{i})$, then $\mathcal{O}_{1}$ adds
a path $P_{3}$ with a leaf $u$ and joins $u$ to $x$ to produce $T_{i+1}$.
Operation $\mathcal{O}_{2}$. If $x\in LV(T_{i})$, then $\mathcal{O}_{2}$ adds
a healthy spider $S_{t,t}$ with the center $u$ and joins $u$ to $x$ to produce
$T_{i+1}$.
Since the family $\mathcal{F}$ is a subclass of the family $\mathcal{T}$ given
in [2, Lemma 3], we obtain the following result.
###### Lemma 3.1.
With the previous notation, the following properties hold.
1. (i)
$LV(T_{i})$ is a packing.
2. (ii)
Every $v\in V(T_{i})\setminus LV(T_{i})$ is adjacent to at least one vertex in
$V(T_{i})\setminus LV(T_{i})$ and to exactly one vertex in $LV(T_{i})$.
3. (iii)
$LV(T_{i})$ is a $\gamma(T_{i})$-set.
4. (iv)
$LV(T_{i})$ is the unique $\gamma_{r}(T_{i})$-set.
5. (v)
$LV(T_{i})$ is the unique $\rho(T_{i})$-set.
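To illustrate the construction of $\mathcal{F}$ and property (i) of Lemma 3.1, the sketch below builds $T_{2}$ from $T_{1}=P_{4}$ by one application of operation $\mathcal{O}_{1}$ (vertex labels are arbitrary) and checks that the vertices of $LV(T_{2})$ are pairwise at distance at least three.
```python
# Build T2 from T1 = P4 by operation O1 and check that LV(T2) is a packing
# (vertex labels are arbitrary; illustration only).

adj = {0: {1}, 1: {0, 2}, 2: {1, 3}, 3: {2}}   # T1 = P4, leaves 0 and 3
LV = {0, 3}

# O1 at the leaf x = 3: add a path P3 = 4-5-6 and join its leaf u = 4 to x
for v, nbrs in {4: {3, 5}, 5: {4, 6}, 6: {5}}.items():
    adj[v] = nbrs
adj[3].add(4)
LV |= {6}                  # the new leaf of T2

def dist(s, t):
    # breadth-first search distance in the tree
    seen, frontier, d = {s}, {s}, 0
    while t not in frontier:
        frontier = {w for v in frontier for w in adj[v]} - seen
        seen |= frontier
        d += 1
    return d

print(all(dist(a, b) >= 3 for a in LV for b in LV if a < b))  # expected: True
```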
###### Lemma 3.2.
With the previous notation, $(V(T_{i})\setminus
LV(T_{i}),\emptyset,LV(T_{i}))$ is a $\gamma_{rI}(T_{i})$-function.
###### Proof.
We show that every RIDF has weight at least $2|LV(T_{i})|$. Let $f$ be a RIDF
of $T_{i}$ and $P(T_{i}):=\\{N_{T_{i}}[v]\mid v\in LV(T_{i})\\}$. It follows
from Lemma 3.1 that $P(T_{i})$ is a partition of $V(T_{i})$.
We claim that $f(U)\geq 2$ for each $U\in P(T_{i})$. First, let $v\in
LV(T_{i})$ be a leaf of $T_{i}$ with stem $w$. If $f(v)=0$, then $f(w)\geq 2$;
if $f(v)=2$, the claim is immediate; and if $f(v)=1$, then $f(w)\geq 1$, for
otherwise $w\in V_{0}$ would need a neighbor in $V_{0}$, which is impossible
since $w$ has degree two by the construction and $\sum_{x\in
N_{T_{i}}(w)}f(x)\geq 2$ forces its other neighbor to have positive weight. In
all cases $f(N_{T_{i}}[v])\geq 2$. Now suppose to the contrary that
$f(N_{T_{i}}[u])\leq 1$ for some $u\in LV(T_{i})$ that is not a leaf of
$T_{i}$. If $f(u)=0$, then $\sum_{x\in N_{T_{i}}(u)}f(x)\leq 1<2$, a
contradiction. So assume that $f(u)=1$ and $f(w)=0$ for every $w\in
N_{T_{i}}(u)$. From the construction of $T_{i}$, there exists at least one
vertex $z\in N_{T_{i}}(u)$ such that $deg_{T_{i}}(z)=2$; let $y$ be its
neighbor other than $u$. Since $f(z)=0$, we need $f(u)+f(y)\geq 2$, hence
$f(y)\geq 1$; but then $z$ has no neighbor in $V_{0}$, contradicting the
restrained condition.
Thus, $\gamma_{rI}(T_{i})\geq 2|LV(T_{i})|=2\gamma_{r}(T_{i})$ and clearly
$(V(T_{i})\setminus LV(T_{i}),\emptyset,LV(T_{i}))$ is a
$\gamma_{rI}(T_{i})$-function. ∎
Now we are ready to prove our main theorem.
###### Theorem 3.3.
A tree $T$ of order $n\geq 4$ is a restrained Italian tree if and only if
$T\in\mathcal{F}$.
###### Proof.
The sufficiency follows from Lemmas 3.1 and 3.2. To prove the necessity, we
proceed by induction on the order $n$ of $T$ satisfying
$\gamma_{rI}(T)=2\gamma_{r}(T)$. It suffices to consider trees with diameter
at least three. Suppose that $diam(T)=3$. Then $T\cong DS_{r,s}$. If $r,s\geq
2$, then $\gamma_{r}(T)=\gamma_{rI}(T)=r+s<2\gamma_{r}(T)$; and if $T\cong
DS_{1,s}$ with $s\geq 2$, then assigning the weight $1$ to every vertex gives
$\gamma_{rI}(T)\leq s+3<2(s+1)=2\gamma_{r}(T)$. Hence $T\cong DS_{1,1}\cong
P_{4}\in\mathcal{F}$. We may thus assume that $diam(T)\geq 4$ and $n\geq 5$.
Among all diametral paths in $T$, we choose $x_{0}x_{1}\dotsc x_{d}$ so
that it maximizes the degree of $x_{d-1}$. Root $T$ at $x_{0}$. Let $D$ be a
$\gamma_{r}(T)$-set and $g$ be a $\gamma_{rI}(T)$-function defined by $g(v)=2$
for $v\in D$ and $g(u)=0$ for $u\in V(T)\setminus D$.
Claim 1. $x_{d-1}\not\in D$.
Suppose to the contrary that $x_{d-1}\in D$. Since $x_{d}\in D$ and
$\gamma_{rI}(T)=2\gamma_{r}(T)$, we can define a function
$f:V(T)\rightarrow\\{0,1,2\\}$ by $f(x_{d})=1$ and $f(x)=g(x)$ otherwise. Then
$f$ is a RIDF of $T$ with weight less than $\omega(g)$, a contradiction.
Claim 2. $deg_{T}(x_{d-1})=2$.
Suppose to the contrary that $deg_{T}(x_{d-1})\geq 3$. Then there exists at
least one leaf $u\in N_{T}(x_{d-1})\setminus\\{x_{d}\\}$. Since $u,x_{d}\in D$
and $\gamma_{rI}(T)=2\gamma_{r}(T)$, we can define
$f:V(T)\rightarrow\\{0,1,2\\}$ by $f(u)=f(x_{d})=1$ and $f(x)=g(x)$ otherwise.
Then $f$ is a RIDF of $T$ with weight less than $\omega(g)$, a contradiction.
We divide our consideration into two cases.
Case 1. $deg_{T}(x_{d-2})\geq 3$. By Claim 2, each $x\in
N_{T}(x_{d-2})\setminus\\{x_{d-3}\\}$ is either a leaf or a weak stem. Now we
show that $x_{d-2}$ has no leaf. Suppose to the contrary that there exists a
leaf $u\in N_{T}(x_{d-2})$. Then $u\in D$. Note that $x_{d-1},x_{d-2}\not\in
D$. If $x_{d-3}\in D$, then define $f:V(T)\rightarrow\\{0,1,2\\}$ by $f(u)=1$
and $f(x)=g(x)$ otherwise. Clearly $f$ is a RIDF of $T$ with weight less than
$\omega(g)$, a contradiction. Suppose that $x_{d-3}\not\in D$. Define
$h:V(T)\rightarrow\\{0,1,2\\}$ by $h(u)=h(x_{d-1})=h(x_{d})=1$ and $h(x)=g(x)$
otherwise. Clearly $h$ is a RIDF of $T$ with weight less than $\omega(g)$, a
contradiction. Thus, $x_{d-2}$ has no leaf and so $T_{x_{d-2}}$ is a healthy
spider.
Since $x_{d-2}$ and its children do not belong to the $\gamma_{r}(T)$-set $D$,
$x_{d-3}$ belongs to $D$. Consider the tree $T^{\prime}:=T-T_{x_{d-2}}$. It is
easy to see that $V(T^{\prime})\cap D$ is a $\gamma_{r}(T^{\prime})$-set and
$\gamma_{rI}(T^{\prime})=2\gamma_{r}(T^{\prime})$. Applying the inductive
hypothesis to $T^{\prime}$, we have $T^{\prime}\in\mathcal{F}$. Since
$x_{d-3}$ is a leaf in $T^{\prime}$, the tree $T$ can be obtained from the
tree $T^{\prime}$ by applying operation $\mathcal{O}_{2}$. Thus,
$T\in\mathcal{F}$.
Case 2. $deg_{T}(x_{d-2})=2$. Then $x_{d-3}\in D$. Consider the tree
$T^{\prime}:=T-T_{x_{d-2}}$. It is easy to see that $V(T^{\prime})\cap D$ is a
$\gamma_{r}(T^{\prime})$-set and
$\gamma_{rI}(T^{\prime})=2\gamma_{r}(T^{\prime})$. Applying the inductive
hypothesis to $T^{\prime}$, we have $T^{\prime}\in\mathcal{F}$. Since
$x_{d-3}$ is a leaf in $T^{\prime}$, the tree $T$ can be obtained from the
tree $T^{\prime}$ by applying operation $\mathcal{O}_{1}$. Thus,
$T\in\mathcal{F}$. ∎
## 4\. Open problems
In this section, we discuss a few open problems related to our results. For any
graph theoretical parameters $\sigma$ and $\delta$, we define a tree $T$ to be
a $(\sigma,\delta)$-tree if $\sigma(T)=\delta(T)$. In general, it holds that
$\gamma_{I}(G)\leq\gamma_{rI}(G)$ for any graph $G$. We suggest the following
problem.
###### Problem 4.1.
Characterize $(\gamma_{I},\gamma_{rI})$-trees.
In [5], D. Ma et al. introduced the concept of total restrained domination.
Combining the properties of an Italian dominating function and a total
restrained dominating set, we introduce the concept of a total restrained
Italian dominating function: a RIDF $f$ is a total restrained Italian
dominating function on $G$ if the subgraph induced by $\\{v\in V\mid f(v)\geq
1\\}$ has no isolated vertices. We denote the total restrained Italian
domination number by
$\gamma_{rI}^{t}(G)$. The total Italian domination number and total restrained
domination number are denoted by $\gamma_{I}^{t}(G)$ and $\gamma_{r}^{t}(G)$,
respectively (see [4, 5] for definitions). We suggest the following problems.
###### Problem 4.2.
Characterize $(\gamma_{I}^{t},\gamma_{rI}^{t})$-trees.
###### Problem 4.3.
Characterize $(\gamma_{rI},\gamma_{rI}^{t})$-trees.
###### Problem 4.4.
Characterize $(\gamma_{r}^{t},\gamma_{rI}^{t})$-trees.
## References
* [1] J.A. Bondy, U.S.R. Murty, Graph theory, Graduate Texts in Mathematics 244, Springer, 2007.
* [2] P. Dankelmann, J.H. Hattingh, M.A. Henning, H.C. Swart, Trees with equal domination and restrained domination numbers, J. Global Optim. 34 (2006) 597–607.
* [3] G.S. Domke, J.H. Hattingh, S.T. Hedetniemi, R.C. Laskar, L.R. Markus, Restrained domination in graphs, Discrete Math. 203 (1999) 61–69.
* [4] S.C. García, A.C. Martínez, F.A. Mira, I.G. Yero, Total Roman $\\{2\\}$-domination in graphs, Quaest. Math., to appear.
* [5] D. Ma, X. Chen, L. Sun, On the total restrained domination in graphs, Czechoslovak Math. J. 55 (2005) 165–173.
* [6] B. Samadi, M. Alishahi, I. Masoumi, D.A. Mojdeh, Restrained Italian domination in graphs, arXiv:2009.12209.
* [7] D.B. West, Introduction to graph theory, Prentice Hall, Inc., Upper Saddle River, NJ, 2001.
# A multilevel clustering technique for community detection
Isa Inuwa-Dutse (School of Computer Science, University of St Andrews), Mark Liptrott (Department of Computer Science, Edge Hill University, UK), Yannis Korkontzelos (Department of Computer Science, Edge Hill University, UK)
A network is a composition of many communities, i.e., sets of nodes and edges with stronger relationships, with distinct and overlapping properties.
Community detection is crucial for various reasons, such as serving as a functional unit of a network that captures local interactions among nodes.
Communities come in various forms and types, ranging from biologically to technology-induced ones.
As technology-induced communities, social media networks such as Twitter and Facebook connect a myriad of diverse users, leading to a highly connected and dynamic ecosystem.
Although many algorithms have been proposed for detecting socially cohesive communities on Twitter, mining and related tasks remain challenging.
This study presents a novel detection method based on a scalable framework to identify related communities in a network.
We propose a multilevel clustering technique (MCT) that leverages structural and textual information to identify local communities termed microcosms.
Experimental evaluation on benchmark models and datasets demonstrates the efficacy of the approach.
This study contributes a new dimension for the detection of cohesive communities in social networks.
The approach offers a better understanding of how low-level communities evolve and behave on Twitter.
From an application point of view, identifying such communities can better inform recommendation, among other benefits.
Keywords: Clustering, Multilevel clustering, Community detection, Twitter, Social networks
## Acknowledgements
The third author has participated in this research work as part of the [project name concealed for blind review]
Project, which has received funding from the European Union’s Horizon 2020 Research and Innovation Programme under grant agreement [agreement number concealed for blind review].
authorD. Prensky,
titleThe boundary specification problem in network
journalResearch methods in social network analysis
volume61 (year1989) pages87.
[Borgatti and Halgin, 2011]
authorS. P. Borgatti, authorD. S. Halgin,
titleOn network theory,
journalOrganization science volume22
(year2011) pages1168–1181.
[Kwak et al., 2010]
authorH. Kwak, authorC. Lee, authorH. Park,
authorS. Moon,
titleWhat is twitter, a social network or a news media?,
in: booktitleProceedings of the 19th international
conference on World wide web, organizationAcM,
year2010, pp. pages591–600.
[Wilson et al., 2009]
authorC. Wilson, authorB. Boe,
authorA. Sala, authorK. P. Puttaswamy,
authorB. Y. Zhao,
titleUser interactions in social networks and their
in: booktitleProceedings of the 4th ACM European
conference on Computer systems, organizationAcm,
year2009, pp. pages205–218.
[Chen et al., 2009]
authorJ. Chen, authorO. R. Zaiane,
authorR. Goebel,
titleDetecting communities in large networks by iterative
local expansion,
in: booktitle2009 International Conference on
Computational Aspects of Social Networks, organizationIEEE,
year2009, pp. pages105–112.
[Doreian et al., 2005]
authorP. Doreian, authorV. Batagelj,
authorA. Ferligoj,
titlePositional analyses of sociometric data,
journalModels and methods in social network analysis
volume77 (year2005) pages77–96.
[Shi and Malik, 2000]
authorJ. Shi, authorJ. Malik,
titleNormalized cuts and image segmentation,
journalDepartmental Papers (CIS) (year2000)
[Newman, 2006]
authorM. E. Newman,
titleModularity and community structure in networks,
journalProceedings of the national academy of sciences
volume103 (year2006) pages8577–8582.
[Balasubramanyan and Cohen, 2011]
authorR. Balasubramanyan, authorW. W. Cohen,
titleBlock-lda: Jointly modeling entity-annotated text and
entity-entity links,
in: booktitleProceedings of the 2011 SIAM International
Conference on Data Mining, organizationSIAM,
year2011, pp. pages450–461.
[Lin et al., 2012]
authorW. Lin, authorX. Kong, authorP. S.
Yu, authorQ. Wu, authorY. Jia,
authorC. Li,
titleCommunity detection in incomplete information
in: booktitleProceedings of the 21st international
conference on World Wide Web, organizationACM,
year2012, pp. pages341–350.
[Leskovec and Mcauley, 2012]
authorJ. Leskovec, authorJ. J. Mcauley,
titleLearning to discover social circles in ego networks,
in: booktitleProceedings of NIPS, year2012,
pp. pages539–547.
[Allan et al., 1998]
authorJ. Allan, authorR. Papka,
authorV. Lavrenko,
titleOn-line new event detection and tracking.,
in: booktitleSigir, volume volume98,
organizationCiteseer, year1998, pp.
[Yang, 2001]
authorY. Yang,
titleA study of thresholding strategies for text
in: booktitleProceedings of the 24th annual international
ACM SIGIR conference on Research and development in information retrieval,
organizationACM, year2001, pp.
[Brants et al., 2003]
authorT. Brants, authorF. Chen,
authorA. Farahat,
titleA system for new event detection,
in: booktitleProceedings of the 26th annual international
ACM SIGIR conference on Research and development in informaion retrieval,
organizationACM, year2003, pp.
[Fung et al., 2005]
authorG. P. C. Fung, authorJ. X. Yu,
authorP. S. Yu, authorH. Lu,
titleParameter free bursty events detection in text
in: booktitleProceedings of the 31st international
conference on Very large data bases, organizationVLDB Endowment,
year2005, pp. pages181–192.
[Bishop, 2006]
authorC. M. Bishop, titlePattern recognition and machine
learning, publisherspringer, year2006.
[Erdős and Rényi, 1960]
authorP. Erdős, authorA. Rényi,
titleOn the evolution of random graphs,
journalPubl. Math. Inst. Hung. Acad. Sci
volume5 (year1960) pages17–60.
[Watts and Dodds, 2007]
authorD. J. Watts, authorP. S. Dodds,
titleInfluentials, networks, and public opinion
journalJournal of consumer research volume34
(year2007) pages441–458.
[Katz et al., 2017]
authorE. Katz, authorP. F. Lazarsfeld,
authorE. Roper, titlePersonal influence: The part
played by people in the flow of mass communications,
publisherRoutledge, year2017.
[Sundaram et al., 2012]
authorH. Sundaram, authorY.-R. Lin,
authorM. De Choudhury, authorA. Kelliher,
titleUnderstanding community dynamics in online social
networks: a multidisciplinary review,
journalIEEE Signal Processing Magazine
volume29 (year2012) pages33–40.
[Newman, 2004]
authorM. E. Newman,
titleDetecting community structure in networks,
journalThe European Physical Journal B
volume38 (year2004a)
[Newman, 2004]
authorM. E. Newman,
titleFast algorithm for detecting community structure in
journalPhysical review E volume69
(year2004b) pages066133.
[Lawson and Falush, 2012]
authorD. J. Lawson, authorD. Falush,
titlePopulation identification using genetic data,
journalAnnual review of genomics and human genetics
volume13 (year2012) pages337–361.
[Manning et al., 1999]
authorC. D. Manning, authorC. D. Manning,
authorH. Schütze, titleFoundations of statistical
natural language processing, publisherMIT press,
[Aggarwal and Subbian, 2014]
authorC. Aggarwal, authorK. Subbian,
titleEvolutionary network analysis: A survey,
journalACM Computing Surveys (CSUR) volume47
(year2014) pages10.
[Pons and Latapy, 2006]
authorP. Pons, authorM. Latapy,
titleComputing communities in large networks using random
journalJ. Graph Algorithms Appl. volume10
(year2006) pages191–218.
[Blondel et al., 2008]
authorV. D. Blondel, authorJ.-L. Guillaume,
authorR. Lambiotte, authorE. Lefebvre,
titleFast unfolding of communities in large networks,
journalJournal of statistical mechanics: theory and
experiment volume2008 (year2008)
[Pothen et al., 1990]
authorA. Pothen, authorH. D. Simon,
authorK.-P. Liou,
titlePartitioning sparse matrices with eigenvectors of
journalSIAM journal on matrix analysis and applications
volume11 (year1990) pages430–452.
[Blei et al., 2003]
authorD. M. Blei, authorA. Y. Ng,
authorM. I. Jordan,
titleLatent dirichlet allocation,
journalJournal of machine Learning research
volume3 (year2003) pages993–1022.
[Yan et al., 2013]
authorX. Yan, authorJ. Guo, authorY. Lan,
authorX. Cheng,
titleA biterm topic model for short texts,
in: booktitleProceedings of the 22nd international
conference on World Wide Web, organizationACM,
year2013, pp. pages1445–1456.
[Berkhin, 2006]
authorP. Berkhin,
titleA survey of clustering data mining techniques,
in: booktitleGrouping multidimensional data,
publisherSpringer, year2006, pp.
[Chaudhuri et al., 2009]
authorK. Chaudhuri, authorS. M. Kakade,
authorK. Livescu, authorK. Sridharan,
titleMulti-view clustering via canonical correlation
in: booktitleProceedings of the 26th annual international
conference on machine learning, organizationACM,
year2009, pp. pages129–136.
[Liu et al., 2013]
authorJ. Liu, authorC. Wang, authorJ. Gao,
authorJ. Han,
titleMulti-view clustering via joint nonnegative matrix
in: booktitleProceedings of the 2013 SIAM International
Conference on Data Mining, organizationSIAM,
year2013, pp. pages252–260.
[Bickel and Scheffer, 2004]
authorS. Bickel, authorT. Scheffer,
titleMulti-view clustering.,
in: booktitleICDM, volume volume4,
year2004, pp. pages19–26.
[Chao et al., 2017]
authorG. Chao, authorS. Sun, authorJ. Bi,
titleA survey on multi-view clustering,
journalarXiv preprint arXiv:1712.06246
[Ester et al., 2006]
authorM. Ester, authorR. Ge, authorB. J.
Gao, authorZ. Hu, authorB. Ben-Moshe,
titleJoint cluster analysis of attribute data and
relationship data: the connected k-center problem,
in: booktitleProceedings of the 2006 SIAM International
Conference on Data Mining, organizationSIAM,
year2006, pp. pages246–257.
[Zhou et al., 2009]
authorY. Zhou, authorH. Cheng, authorJ. X.
titleGraph clustering based on structural/attribute
journalProceedings of the VLDB Endowment
volume2 (year2009) pages718–729.
[Granovetter, 1977]
authorM. S. Granovetter,
titleThe strength of weak ties,
in: booktitleSocial networks,
publisherElsevier, year1977, pp.
[Inuwa-Dutse et al., 2019]
authorI. Inuwa-Dutse, authorM. Liptrott,
authorY. Korkontzelos,
titleAnalysis and prediction of dyads in twitter,
in: booktitleInternational Conference on Applications of
Natural Language to Information Systems, organizationSpringer,
year2019, pp. pages303–311.
[Marley and Regenwetter, 2016]
authorA. Marley, authorM. Regenwetter,
titleChoice, preference, and utility: Probabilistic and
deterministic representations,
journalNew handbook of mathematical psychology
volume1 (year2016) pages374–453.
[Inuwa-Dutse et al., 2019]
authorI. Inuwa-Dutse, authorM. Liptrott,
authorI. Korkontzelos,
titleSimmelian ties on twitter: empirical analysis and
in: booktitle2019 Sixth International Conference on Social
Networks Analysis, Management and Security (SNAMS),
organizationIEEE, year2019, pp.
[Wasserman and Faust, 1994]
authorS. Wasserman, authorK. Faust,
titleSocial network analysis: Methods and applications,
volume volume8, publisherCambridge university press,
[Han et al., 2011]
authorJ. Han, authorJ. Pei,
authorM. Kamber, titleData mining: concepts and
techniques, publisherElsevier, year2011.
[Lee and Seung, 1999]
authorD. D. Lee, authorH. S. Seung,
titleLearning the parts of objects by non-negative matrix
journalNature volume401
(year1999) pages788.
[Aggarwal, 2018]
authorC. C. Aggarwal, titleMachine learning for text,
publisherSpringer, year2018.
[Bertsekas, 1997]
authorD. P. Bertsekas,
titleNonlinear programming,
journalJournal of the Operational Research Society
volume48 (year1997) pages334–334.
[Airoldi et al., 2008]
authorE. M. Airoldi, authorD. M. Blei,
authorS. E. Fienberg, authorE. P. Xing,
titleMixed membership stochastic blockmodels,
journalJournal of machine learning research
volume9 (year2008) pages1981–2014.
[Yali et al., 2014]
authorP. Yali, authorY. Jian,
authorL. Shaopeng, authorL. Jing,
titleA biterm-based dirichlet process topic model for
short texts,
in: booktitle3rd International Conference on Computer
Science and Service System, publisherAtlantis Press,
[Salton and Buckley, 1988]
authorG. Salton, authorC. Buckley,
titleTerm-weighting approaches in automatic text
journalInformation processing & management
volume24 (year1988) pages513–523.
[Aggarwal and Subbian, 2012]
authorC. C. Aggarwal, authorK. Subbian,
titleEvent detection in social streams,
in: booktitleProceedings of the 2012 SIAM international
conference on data mining, organizationSIAM,
year2012, pp. pages624–635.
[Prokhorenkova, 2019]
authorL. Prokhorenkova,
titleUsing synthetic networks for parameter tuning in
community detection,
in: booktitleInternational Workshop on Algorithms and
Models for the Web-Graph, organizationSpringer,
year2019, pp. pages1–15.
[Yoshida, 2013]
authorT. Yoshida,
titleToward finding hidden communities based on user
journalJournal of Intelligent Information Systems
volume40 (year2013) pages189–209.
[Yang and Leskovec, 2015]
authorJ. Yang, authorJ. Leskovec,
titleDefining and evaluating network communities based on
journalKnowledge and Information Systems
volume42 (year2015) pages181–213.
[Inuwa-Dutse et al., 2018]
authorI. Inuwa-Dutse, authorM. Liptrott,
authorI. Korkontzelos,
titleDetection of spam-posting accounts on twitter,
journalNeurocomputing volume315
(year2018) pages496–511.
[Inuwa-Dutse and Korkontzelos, 2020]
authorI. Inuwa-Dutse, authorI. Korkontzelos,
titleA curated collection of covid-19 online datasets,
journalarXiv preprint arXiv:2007.09703
[Zachary, 1977]
authorW. W. Zachary,
titleAn information flow model for conflict and fission in
small groups,
journalJournal of anthropological research
volume33 (year1977) pages452–473.
[Lusseau et al., 2003]
authorD. Lusseau, authorK. Schneider,
authorO. J. Boisseau, authorP. Haase,
authorE. Slooten, authorS. M. Dawson,
titleThe bottlenose dolphin community of doubtful sound
features a large proportion of long-lasting associations,
journalBehavioral Ecology and Sociobiology
volume54 (year2003) pages396–405.
[Adamic and Glance, 2005]
authorL. A. Adamic, authorN. Glance,
titleThe political blogosphere and the 2004 us election:
divided they blog,
in: booktitleProceedings of the 3rd international workshop
on Link discovery, year2005, pp. pages36–43.
[Leskovec and Krevl, 2014]
authorJ. Leskovec, authorA. Krevl,
titleSNAP Datasets: Stanford large network dataset
collection, howpublished<http://snap.stanford.edu/data>,
[Leskovec et al., 2010]
authorJ. Leskovec, authorK. J. Lang,
authorM. Mahoney,
titleEmpirical comparison of algorithms for network
community detection,
in: booktitleProceedings of the 19th international
conference on World wide web, organizationACM,
year2010, pp. pages631–640.
[Radicchi et al., 2004]
authorF. Radicchi, authorC. Castellano,
authorF. Cecconi, authorV. Loreto,
authorD. Parisi,
titleDefining and identifying communities in networks,
journalProceedings of the national academy of sciences
volume101 (year2004) pages2658–2663.
[Danon et al., 2005]
authorL. Danon, authorA. Diaz-Guilera,
authorJ. Duch, authorA. Arenas,
titleComparing community structure identification,
journalJournal of Statistical Mechanics: Theory and
Experiment volume2005 (year2005)
[Fred and Jain, 2002]
authorA. L. Fred, authorA. K. Jain,
titleData clustering using evidence accumulation,
in: booktitleObject recognition supported by user
interaction for service robots, volume volume4,
organizationIEEE, year2002, pp.
[Girvan and Newman, 2002]
authorM. Girvan, authorM. E. Newman,
titleCommunity structure in social and biological
journalProceedings of the national academy of sciences
volume99 (year2002) pages7821–7826.
[Zhu and Ghahramani, 2002]
authorX. Zhu, authorZ. Ghahramani,
titleLearning from labeled and unlabeled data with label
propagation, typeTechnical Report, Citeseer,
[Lancichinetti et al., 2008]
authorA. Lancichinetti, authorS. Fortunato,
authorF. Radicchi,
titleBenchmark graphs for testing community detection
journalPhysical review E volume78
(year2008) pages046110.
[Bickel and Chen, 2009]
authorP. J. Bickel, authorA. Chen,
titleA nonparametric view of network models and
newman–girvan and other modularities,
journalProceedings of the National Academy of Sciences
volume106 (year2009) pages21068–21073.
[Karrer and Newman, 2011]
authorB. Karrer, authorM. E. Newman,
titleStochastic blockmodels and community structure in
journalPhysical review E volume83
(year2011) pages016107.
[Freeman, 1996]
authorL. C. Freeman,
titleSome antecedents of social network analysis,
journalConnections volume19
(year1996) pages39–42.
[Dunbar, 1998]
authorR. I. Dunbar,
titleThe social brain hypothesis,
journalEvolutionary Anthropology: Issues, News, and
Reviews: Issues, News, and Reviews volume6
(year1998) pages178–190.
|
# TrafficSim: Learning to Simulate Realistic Multi-Agent Behaviors
Simon Suo1,2 Sebastian Regalado3 Sergio Casas1,2 Raquel Urtasun1,2
1Uber ATG 2University of Toronto 3University of Waterloo
{suo, sergio<EMAIL_ADDRESS><EMAIL_ADDRESS>
###### Abstract
Simulation has the potential to massively scale evaluation of self-driving
systems, enabling rapid development as well as safe deployment. To close the
gap between simulation and the real world, we need to simulate realistic
multi-agent behaviors. Existing simulation environments rely on heuristic-
based models that directly encode traffic rules, which cannot capture
irregular maneuvers (e.g., nudging, U-turns) and complex interactions (e.g.,
yielding, merging). In contrast, we leverage real-world data to learn directly
from human demonstration and thus capture a more diverse set of actor
behaviors. To this end, we propose TrafficSim, a multi-agent behavior model
for realistic traffic simulation. In particular, we leverage an implicit
latent variable model to parameterize a joint actor policy that generates
socially-consistent plans for all actors in the scene jointly. To learn a
robust policy amenable for long horizon simulation, we unroll the policy in
training and optimize through the fully differentiable simulation across time.
Our learning objective incorporates both human demonstrations as well as
common sense. We show TrafficSim generates significantly more realistic and
diverse traffic scenarios as compared to a diverse set of baselines. Notably,
we can exploit trajectories generated by TrafficSim as effective data
augmentation for training better motion planners.
## 1 Introduction
Self-driving has the potential to make drastic impact on our society. One of
the key remaining challenges is how to measure progress. There are three main
approaches for measuring the performance of a self-driving vehicle (SDV): 1)
structured testing in the real world, 2) virtual replay of pre-recorded
scenarios, and 3) simulation. These approaches are complementary, and each has
its key advantages and shortcomings. The use of a test track enables
structured and repeatable evaluation in the physical world. While this
approach is perceptually realistic, testing is often limited to a few
scenarios due to the long setup time and high cost for each test. Moreover, it
is hard and often impossible to test safety-critical situations, such as
unavoidable accidents. Virtual replay allows us to leverage diverse scenarios
collected from the real world, but it is still limited to what we observe.
Furthermore, since the replay is immutable, actors in the environment do not
react when the SDV plan diverges from what happened and the sensor data does
not reflect the new viewpoint. These challenges make simulation a particularly
attractive alternative as in a virtual environment we can evaluate against a
large number of diverse and dynamic scenarios in a safe, controllable, and
cost-efficient manner.
Figure 1: Generating realistic multi-agent behaviors is a key component for
simulation
Simulation systems typically consist of three steps: 1) specifying the scene
layout which includes the road topology and actor placement, 2) simulating the
motion of dynamic agents forward, and 3) rendering the generated scenario with
realistic geometry and appearance, as shown in Figure 1. In this paper, we
focus on the second step: generating realistic multi-agent behaviors
automatically. This can aid simulation design in several important ways: it
can expedite scenario creation by automating background actors, increase
scenario coverage by generating variants with emergent behaviors, and
facilitate interactive scenario design by generating preview of potential
interactions.
Figure 2: Complex human driving behavior observed in the real world: red is
actor of interest, green are interacting actors
However, bridging the behavior gap between the simulated world and the real
world remains an open challenge. Manually specifying each actor’s trajectory
is not scalable and results in unrealistic simulations since the actors will
not react to the SDV actions. Heuristic-based models [39, 20, 25] capture
basic reactive behavior, but rely on directly encoding traffic rules such as
actors follow the road and do not collide. While this approach generates
plausible traffic flow, the generated behaviors lack the diversity and nuance
of human behaviors and interactions present in real-world urban traffic
scenes. For instance, they cannot capture irregular maneuvers that do not
follow the lane graph such as U-turns, or complex multi-agent interplays such
as nudging past a vehicle stopped in a driving lane, or negotiations at an
unprotected left turn. In contrast, learning-based approaches [11, 38, 35] are
flexible and can capture a diverse set of behaviors. However, they often lack
common sense and are generally brittle to distributional shift. Furthermore,
they can also be computationally expensive if not optimized for simulating
large numbers of actors over long horizon.
To tackle these challenges, we present TrafficSim, a multi-agent behavior
model for traffic simulation. We leverage recent advances in motion
forecasting, and formulate the joint actor policy with an implicit latent
variable model [11], which can generate multiple scene-consistent samples of
actor trajectories in parallel. Importantly, to learn a robust policy amenable
for traffic simulation over long time horizon, we unroll the policy in
training and optimize through our fully differentiable simulation.
Furthermore, we propose a time-adaptive multi-task loss that balances between
learning from demonstration and common sense at each timestep of the
simulation. Our experiments show that TrafficSim is able to simulate traffic
scenarios that remain realistic over long time horizon, with minimal
collisions and traffic rule violations. In particular, it achieves the lowest
scenario reconstruction error in comparison to a diverse set of baselines
including heuristic, motion forecasting, and imitation learning models. We
also show that we can train better motion planners by exploiting trajectories
generated by TrafficSim. Lastly, we show experiments in trading off simulation
quality and computation. In particular, we can achieve up to 4x speedup with
multi-step updates, or further reduce collisions with additional optimization
at simulation-time.
## 2 Related Work
#### Simulation Environments:
Simulating traffic actors is a ubiquitous task with wide ranging applications
in transportation research, video games, and now training and evaluating self-
driving vehicles [15]. Microscopic traffic simulators [30] employ heuristic-
based models [39, 20, 25] to simulate traffic flow. These models capture
accurate high-level traffic characteristics by directly encoding traffic rules
(e.g., staying in lane, avoiding collision). However, due to rigid assumptions,
they are not realistic at the street level even after calibration with real-
world data [24, 26]. In particular, they cannot capture irregular maneuvers
(e.g., nudging, U-turns) and complex multi-agent interaction (e.g., yielding,
merging) that occur in the real world, shown in Figure 2. Progress in game
engines greatly advanced the realism of physical simulations. Researchers have
leveraged racing games [41, 33] and developed higher fidelity simulators [19,
4] to train and evaluate self driving systems. Real world data is leveraged by
[32] for realistic sensor simulation. However, actor behaviors are still very
simplistic: simulated actors in [19] are governed by a basic heuristic-based
controller that can only follow the lane while respecting traffic rules and
avoiding head-on collisions. This is insufficient to evaluate SDVs, since one
of the main challenges in self-driving is accurately anticipating and safely
planning around diverse and often irregular human maneuvers. This motivates us
to learn from real-world data to bridge this gap.
Figure 3: TrafficSim architecture: global map module (a) is run once per map
for repeated simulation runs. At each timestep, local observation module (b)
extracts motion and map features, then joint behavior module (c) produces a
multi-agent plan.
#### Motion Forecasting:
Motion forecasting is the task of predicting an actor’s future motion based on
past context, which also requires accurate actor behavior modelling.
Traditional approaches track an object and propagate its state to predict its
future motion (e.g., Unscented Kalman filter [40] with kinematic bicycle model
[28]). More recently, deep-learning based models have been developed to
capture increasingly more complex behaviors. [18] rasterizes an HD map and a
history of actor bounding boxes to leverage a CNN to forecast actor behavior.
Since the future is inherently uncertain, [17, 14] output multiple
trajectories per actor. [12] shows that explicitly incorporating prior
knowledge help learn better predictive distributions. Several works [1, 35,
38, 11, 29] go beyond actor-independent modeling and explicitly reason about
interaction among actors as the future unfolds. To characterize the behavior
of multiple actors jointly, [1, 35, 38] leverage auto-regressive generation
with social mechanisms. In contrast, [11] employs spatially-aware graph neural
networks to model interaction in the latent space, thereby capturing longer
range interactions and avoiding slow sequence sampling. Importantly, these
models can generate multiple socially-consistent samples, where each sample
constitutes a realistic traffic scenario, thus modeling complex multi-agent
dynamics beyond simple pairwise interaction. Thus, they are particularly
amenable for simulating actor behaviors in virtual traffic. However, they
cannot be directly used for simulation over long time horizon, since they are
brittle to distributional shift and cannot recover from compounding error.
#### Imitation Learning:
Imitation learning (IL) aims to learn control policies from demonstration.
Behavior cloning [34] treats state-action pairs as i.i.d examples to leverage
supervised learning, but suffers from distributional shift due to compounding
error [36]. Intuitively, during offline open-loop training, the policy only
observes ground truth past states, but when unrolled in closed-loop at test
time, it encounters novel states induced by its sequence of suboptimal past
decisions and fails to recover. Many approaches have been proposed to mitigate
the inevitable deviation from the observed distribution, but each has its
drawbacks. Online supervision [36] requires access to an interactive expert
that captures the full distribution over human driving behaviors. Data
augmentation [2, 8] depends on manually designed out-of-distribution states
and corresponding desired actions, which is often brittle and adds bias.
Uncertainty-based regularization [9, 21] leverages predictive uncertainty to
avoid deviating from the observed distribution, but can be challenging and
computationally expensive to estimate accurately. Adversarial IL approaches
[23, 5] jointly learn the policy with a discriminator. However, they are
empirically difficult to train (requiring careful reward augmentation [6] and
curriculum design [3]), and are generally limited to simulating a small number
of actors [7] in a specific map topology (e.g., NGSIM). In contrast, we aim to
learn a joint actor policy that generalizes to a diverse set of urban streets
and simulates the behavior of a large number of actors in parallel.
Furthermore, while IL methods typically assume a non-differentiable environment,
we directly model differentiable state transitions instead. This allows us to
directly optimize with back-propagation through the simulation.
## 3 Learning Multi-Agent Traffic Behaviors
In this section, we describe our approach for learning realistic multi-agent
behaviors for traffic simulation. Given a high-definition map $\mathcal{M}$,
traffic control $\mathcal{C}$, and initial dynamic states of $N$ traffic
actors, our goal is to simulate their motion forward. We use
$Y^{t}=\\{y_{1}^{t},y_{2}^{t},...,y_{N}^{t}\\}$ to denote a collection of $N$
actor states at time $t$. More precisely, each actor state is parameterized as
a bounding box $y_{i}^{t}=(b_{x},b_{y},b_{w},b_{h},b_{\theta})$ with 2D
position, width, height and heading. In the following, we first describe how
to extract rich context from the simulation environment. Then, we explain our
joint actor policy that explicitly reasons about interaction and generates
socially consistent plans. Lastly, we present a learning framework that
leverages back-propagation through the differentiable simulation, and balances
imitation and common sense. We illustrate the full architecture in Figure 3.
### 3.1 Extracting Rich Context from the Environment
Accurately modelling actor behaviors requires rich scene context from past
motion and map topology. Towards this goal, we propose a differentiable
observation module that takes as input the past actor states $Y^{:t}$, traffic
control $\mathcal{C}$ and HD map $\mathcal{M}$, and processes them in two
stages. First, we use a CNN-based perception backbone network inspired by [42,
13] to extract rich geometrical features $\tilde{\mathcal{M}}$ from the raster
map $\mathcal{M}$, shown in Figure 3 (a). Since we are only interested in the
region of interest defined by $\mathcal{M}$ and these spatial features are
static across time, we can process each map once and cache the result for repeated
simulation runs.
Then we leverage a local observation module with two components: a _map
feature extractor_ and a _past trajectory encoder_ shown in Figure 3 (b).
Unlike the global map module, these feature extractors are run once per
simulation step, and are thus designed to be lightweight. To extract local
context $X_{m}^{t}$ around each actor, we apply Rotated Region of Interest
Align [31] to the map features $\tilde{\mathcal{M}}$ pre-processed by the map
backbone. To encode the past trajectories of each actor in the scene, we
employ a 4-layer GRU with 128 hidden states, yielding $X_{\tau}^{t}$. Finally,
we concatenate the map and past trajectory features to form the scene context
$X^{t}=[X_{m}^{t},X_{\tau}^{t}]$, which we use as input to the joint actor
policy.
### 3.2 Implicit Latent Variable Model for Multi-Agent Reasoning
We use a joint actor policy to explicitly reason about multi-agent
interactions, shown in Figure 3 (c). This allows us to sample multiple
socially consistent plans for all actors in the scene in parallel. Concretely,
we aim to characterize the joint distribution over actors’ future states
$\mathcal{Y}^{t}=\\{Y^{t+1},Y^{t+2},...,Y^{t+T_{\text{plan}}}\\}$. This
formulation allows us to leverage supervision over the full planning horizon
$T_{\text{plan}}$ to learn better long-term interaction. To simplify notation,
we use $\mathcal{Y}^{t}$ in subsequent discussions.
It is difficult to represent this joint distribution over actors in an
explicit form as there is uncertainty over each actor’s goal and complex
interactions between actors as the future unfolds. A natural solution is to
implicitly characterize this distribution via a latent variable model [37,
11]:
$\displaystyle P(\mathcal{Y}^{t}|X^{t})=\int_{Z^{t}}P(\mathcal{Y}^{t}|X^{t},Z^{t})\,P(Z^{t}|X^{t})\,dZ^{t}$ (1)
Following [11], we use a deterministic decoder
$\mathcal{Y}^{t}=f(X^{t},Z^{t})$ to encourage the scene latent $Z$ to capture
all stochasticity and avoid factorizing $P(\mathcal{Y}^{t}|X^{t},Z^{t})$
across time. This allows us to generate $K$ scene-consistent samples of actor
plans efficiently in one stage of parallel sampling, by first drawing latent
samples $Z^{t}_{(k)}\sim P(Z^{t}|X^{t})$, and then decoding actor plans
$\mathcal{Y}^{t}_{(k)}=f(Z^{t}_{(k)},X^{t})$. Furthermore, we approximate the
posterior latent distribution $q(Z^{t}|X^{t},\mathcal{Y}^{t})$ to leverage
variational inference [27, 37] for learning. Intuitively, it learns to map
ground truth future $\mathcal{Y}^{t}_{GT}$ to the scene latent space for best
reconstruction.
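A minimal sketch of this one-stage sampling follows, assuming Gaussian prior heads; prior_net and decoder are placeholder names for the modules below, not the paper's code.
```python
# Sketch of one-stage parallel sampling from the latent variable model
# (Eq. 1). The Gaussian parameterization of the prior and the module names
# are assumptions; samples can be batched instead of looped.
import torch

def sample_scene_plans(prior_net, decoder, x_t, num_samples):
    mu, log_sigma = prior_net(x_t)                 # per-actor latent params
    eps = torch.randn(num_samples, *mu.shape, device=mu.device)
    z = mu + log_sigma.exp() * eps                 # reparameterized draws of Z^t
    # The decoder is deterministic: all stochasticity lives in Z^t, so each
    # latent sample decodes to one socially consistent plan for all actors.
    return [decoder(x_t, z_k) for z_k in z]        # K plans, (N, T_plan, 5) each
```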
We leverage the graph neural network (GNN) based scene interaction module
introduced by [10] to parameterize the prior network
$p_{\gamma}(Z^{t}|X^{t})$, posterior network
$q_{\phi}(Z^{t}|X^{t},\mathcal{Y}^{t})$, and the deterministic decoder
$\mathcal{Y}^{t}=f_{\theta}(X^{t},Z^{t})$, for encoding to and decoding from
the scene-level latent variable $Z^{t}$. By propagating messages across a
fully connected interaction graph with actors as nodes, the latent space
learns to capture not only individual actor goals and style, but also multi-
agent interactions. More concretely, we partition the latent space to learn a
distributed representation $Z^{t}=\\{z_{1},z_{2},...,z_{N}\\}$ of the scene,
where $z_{n}$ is spatially anchored to actor $n$ and captures unobserved
dynamics most relevant to that actor. This choice enables effective relational
reasoning across a large and variable number of actors and diverse map
topologies (i.e., to deal with the complexity of urban traffic). Additional
implementation details can be found in the supplementary.
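As a rough illustration of the scene interaction module, the sketch below runs one round of message passing over a fully connected actor graph; the layer sizes, mean aggregation, and GRU-cell node update are our assumptions rather than the exact design of [10].
```python
# One round of message passing over the fully connected actor graph;
# details are illustrative assumptions, not the exact design of [10].
import torch
import torch.nn as nn

class SceneInteraction(nn.Module):
    def __init__(self, dim=128):
        super().__init__()
        self.msg = nn.Sequential(nn.Linear(2 * dim, dim), nn.ReLU())
        self.upd = nn.GRUCell(dim, dim)

    def forward(self, h):                       # h: (N, dim) per-actor states
        n = h.size(0)
        recv = h.unsqueeze(1).expand(n, n, -1)  # receiver state, index i
        send = h.unsqueeze(0).expand(n, n, -1)  # sender state, index j
        m = self.msg(torch.cat([recv, send], dim=-1))             # edge messages
        mask = 1.0 - torch.eye(n, device=h.device).unsqueeze(-1)  # no self-edges
        agg = (m * mask).sum(1) / max(n - 1, 1)  # mean over incoming messages
        return self.upd(agg, h)                  # updated per-actor node states
```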
Figure 4: TrafficSim models all actors jointly to simulate realistic traffic scenarios through time. We can sample at each timestep to obtain parallel simulations.
Figure 5: We optimize our policy with back-propagation through the differentiable simulation (left), and apply imitation and common sense losses at each simulated state (right).
### 3.3 Simulating Traffic Scenarios
We model each traffic scenario as a sequential process where traffic actors
interact and plan their behaviors at each timestep. Leveraging the
differentiable observation module and joint actor policy, we can generate
traffic scenarios by starting with an initial history of the actors $Y^{-H:0}$
and simulating their motion forward for T steps. Concretely, at each timestep
t, we first extract scene context $X^{t}$, then sample actor plans
$\mathcal{Y}^{t}\sim P_{\theta,\gamma}(\mathcal{Y}^{t}|X^{t})$ from our joint
actor policy, shown in Figure 4. Since our policy produces a $T_{\text{plan}}$
step plan of the future
$\mathcal{Y}^{t}=\\{Y^{t+1},...,Y^{t+T_{\text{plan}}}\\}$, we can either use
the first timestep of the joint plan $Y^{t+1}$ to update the simulation
environment at the highest frequency, or take multiple steps
$\\{Y^{t+1},...,Y^{t+\kappa}\\}$ for faster simulation with minimal loss in
simulation quality:
$\displaystyle
P(Y^{1:T}|Y^{-H:0},\mathcal{M},\mathcal{C})=\prod_{t\in\mathcal{T}}P(Y^{t+1:t+\kappa}|X^{t})$
(2)
We provide further discussion on trading off simulation quality and
computation in Section 4.
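The simulation loop of Eq. (2) can be sketched as follows; observe and policy.sample are placeholders for the modules above, and the loop structure is our paraphrase of the text.
```python
# Sketch of the simulation loop of Eq. (2): extract context, sample a joint
# plan, then commit the first kappa steps before re-planning.
def simulate(policy, observe, y_history, horizon_steps, kappa=1):
    states = list(y_history)                  # Y^{-H:0}, one entry per tick
    t = 0
    while t < horizon_steps:
        x_t = observe(states)                 # scene context X^t
        plan = policy.sample(x_t)             # (N_actors, T_plan, 5) joint plan
        # commit kappa <= T_plan steps, then re-observe; kappa trades
        # simulation fidelity for speed (Sec. 4)
        for step in range(min(kappa, horizon_steps - t)):
            states.append(plan[:, step])
        t += kappa
    return states
```
With the 2Hz simulation tick used in the experiments, the 1Hz and 0.5Hz rows of Table 4 correspond to kappa = 2 and kappa = 4.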
### 3.4 Learning from Examples and Common Sense
In this section, we describe our approach for learning multi-agent behaviors
by leveraging large-scale datasets of human driving behaviors. We train by
unrolling our policy (i.e., in closed-loop) and exploiting our fully
differentiable formulation to directly optimize with back-propagation through
the simulation over time. Furthermore, we propose a multi-task loss that
balances between learning from demonstration and injecting common sense.
#### Backpropagation through Differentiable Simulation:
Learning from demonstration via behavior cloning yields good open-loop
behaviors (i.e., accurate $P(\mathcal{Y}^{t}|X^{t})$ when $X^{t}$ comes from
the observation distribution), but can suffer from compounding error in
closed-loop execution [36] (i.e., when $X^{t}$ is induced by the policy). To
bridge this gap, we propose to unroll the policy for closed-loop training and
compute the loss $\mathcal{L}^{t}$ at each simulation step $t$, as shown in
Figure 5 (left). Since we model state transitions in a fully differentiable
manner, we can directly optimize the total loss with back-propagation through
the simulation across time. In particular, the gradient is back-propagated
through action sampled from the policy at each timestep via
reparameterization. This gives a direct signal for how current decision
influences future states.
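A minimal sketch of this closed-loop training scheme follows, assuming a policy with a reparameterized rsample method and a generic per-step loss; both names are placeholders, not the paper's API.
```python
# Sketch of closed-loop training: unroll the policy on its own simulated
# states so gradients flow through the rollout (Fig. 5, left).
import torch

def train_step(policy, observe, loss_fn, y_init, horizon_steps, optimizer):
    states = list(y_init)
    total_loss = 0.0
    for t in range(horizon_steps):
        x_t = observe(states)            # context built from *simulated* states
        y_next = policy.rsample(x_t)     # reparameterized, hence differentiable
        states.append(y_next)
        total_loss = total_loss + loss_fn(y_next, t)
    optimizer.zero_grad()
    total_loss.backward()                # back-propagation through simulation
    optimizer.step()
    return float(total_loss)
```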
#### Augmenting Imitation with Common Sense:
Pure imitation suffers from poor supervision when a stochastic policy
inevitably deviates from the observed realization of the scenario.
Furthermore, inherent bias in the collected data (e.g., lack of safety
critical scenarios) means pure imitation cannot reason about the danger of
collision. Thus, we augment imitation with an auxiliary common sense
objective, and use a time-adaptive multi-task loss to balance the
supervision. Through the simulation horizon, we anneal $\lambda(t)$ to favour
supervision from common sense over imitation.
$\displaystyle\mathcal{L}=\sum_{t}\lambda(t)\mathcal{L}^{t}_{\text{imitation}}+(1-\lambda(t))\mathcal{L}^{t}_{\text{collision}}$
(3)
Furthermore, we unroll the model in two distinct segments during training.
First for $t\leq T_{\text{label}}$, we unroll with posterior samples from the
model $\mathcal{Y}_{\text{post}}^{t}=f(X^{t},Z^{t}_{\text{post}})$ where
$Z^{t}_{post}$ is conditioned on ground truth future
$\mathcal{Y}^{t}_{\text{GT}}$. Subsequently for $T_{\text{label}}<t\leq T$, we
use $\mathcal{Y}_{\text{prior}}^{t}=f(X^{t},Z^{t}_{\text{prior}})$ instead.
Intuitively, posterior samples reconstruct the ground truth future, whereas
prior samples cover diverse possible futures. We now describe both objectives
in detail, as shown in Figure 5 (right).
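The two ingredients just described can be sketched as follows; the linear annealing schedule for $\lambda(t)$ is our assumption, as the paper only states that $\lambda$ is annealed over the horizon.
```python
# Sketch of the time-adaptive multi-task loss (Eq. 3) and the two-segment
# unroll. The linear annealing schedule is an assumption.
def scheduled_loss(imitation, collision, t, total_steps):
    lam = max(0.0, 1.0 - t / total_steps)   # decays over the horizon
    return lam * imitation + (1.0 - lam) * collision

def pick_latent(t, t_label, z_posterior, z_prior):
    # posterior samples while labels exist (t <= T_label), prior afterwards
    return z_posterior if t <= t_label else z_prior
```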
#### Imitation Objective:
To learn from demonstrations, we adapt the variational learning objective of
the CVAE framework [37] and optimize the evidence lower bound (ELBO) of
the log likelihood $\log P(\mathcal{Y}^{t}|X^{t})$ at each timestep $t\leq
T_{\text{label}}$. Concretely, the imitation loss consists of a reconstruction
component and a KL divergence component:
$\displaystyle\mathcal{L}^{t}_{\text{imitation}}(\mathcal{Y}^{t}_{\text{posterior}},\mathcal{Y}^{t}_{\text{GT}})=\mathcal{L}^{t}_{\text{recon}}+\beta\cdot\mathcal{L}^{t}_{\text{KL}}$ (4)
$\displaystyle\mathcal{L}^{t}_{\text{recon}}=\sum_{a=1}^{N}\sum_{\tau=t+1}^{t+T_{\text{plan}}}L_{\delta}(y_{a}^{\tau}-y_{a,\text{GT}}^{\tau})$ (5)
$\displaystyle\mathcal{L}^{t}_{\text{KL}}=\text{KL}\left(q_{\phi}\left(Z^{t}|X^{t},\mathcal{Y}_{\text{GT}}^{t}\right)||p_{\gamma}\left(Z^{t}|X^{t}\right)\right)$ (6)
We use Huber loss $L_{\delta}$ for reconstruction and reweight the KL term
with $\beta$ as proposed by [22].
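A sketch of this per-timestep imitation loss follows, assuming diagonal-Gaussian prior and posterior heads so the KL term has a closed form; the default beta is a placeholder value.
```python
# Sketch of the imitation loss of Eqs. (4)-(6) under a diagonal-Gaussian
# assumption for the prior and posterior latent heads.
import torch
import torch.nn.functional as F

def imitation_loss(plan_post, plan_gt, mu_q, logvar_q, mu_p, logvar_p, beta=0.5):
    # Huber reconstruction on posterior-decoded plans (Eq. 5)
    recon = F.huber_loss(plan_post, plan_gt, reduction="sum")
    # KL(q || p) between diagonal Gaussians, summed over latent dims (Eq. 6)
    kl = 0.5 * torch.sum(
        logvar_p - logvar_q
        + (logvar_q.exp() + (mu_q - mu_p) ** 2) / logvar_p.exp()
        - 1.0
    )
    return recon + beta * kl
```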
Model | | SCR12s (%) | TRV12s (%) | minSFDE (m) | minSADE (m) | meanSFDE (m) | meanSADE (m) | MASD12s (m)
---|---|---|---|---|---|---|---|---
Heuristic | IDM [39] | 1.19 | 0.25 | 4.97 | 3.03 | 5.39 | 3.48 | 4.01
Motion Forecasting | MTP [17] | 11.00 | 9.67 | 2.19 | 1.47 | 2.77 | 2.19 | 7.15
 | ESP [35] | 4.08 | 4.79 | 3.42 | 1.56 | 3.52 | 1.60 | 0.29
 | ILVM [11] | 2.90 | 4.37 | 2.56 | 1.33 | 2.92 | 1.50 | 1.17
Imitation Learning | AdversarialIL | 10.05 | 8.34 | 2.89 | 1.19 | 3.87 | 1.51 | 4.89
 | DataAug | 3.78 | 8.23 | 2.04 | 1.22 | 2.62 | 1.56 | 2.29
Ours | TrafficSim | 0.50 | 2.77 | 1.13 | 0.57 | 1.75 | 0.85 | 2.50

Table 1: [ATG4D] Comparison against existing approaches ($S=15$ samples, $T=12$ seconds, $T_{\text{label}}=8$ seconds)
#### Common Sense Objective:
We use a pairwise collision loss and design an efficient differentiable
relaxation to ease optimization. In particular, we approximate each vehicle
with 5 circles, and compute the L2 distance between centroids of the closest
circles of each pair of actors. We apply this loss on prior samples from the
model $\mathcal{Y}^{t}_{prior}$ to directly regularize
$P(\mathcal{Y}^{t}|X^{t})$. More concretely, we define the loss as follows:
$\displaystyle\mathcal{L}^{t}_{\text{collision}}(\mathcal{Y}^{t}_{\text{prior}})=\frac{1}{N^{2}}\sum_{i\neq j}\max\Big(1,\sum_{\tau=t+1}^{t+T_{\text{plan}}}\mathcal{L}_{\text{pair}}(y_{i}^{\tau},y_{j}^{\tau})\Big)$ (7)
$\displaystyle\mathcal{L}_{\text{pair}}(y_{i}^{\tau},y_{j}^{\tau})=\begin{cases}1-\frac{d}{r_{i}+r_{j}},&\text{if }d\leq r_{i}+r_{j}\\\ 0,&\text{otherwise}\end{cases}$ (8)
where $d$ is the distance between the closest circles of actors $i$ and $j$, and $r_{i}$, $r_{j}$ are the corresponding circle radii.
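A sketch of this relaxation is given below; the placement of the five circles along the box and the choice of radius are our assumptions, and we mask self-pairs rather than literally summing over $i\neq j$.
```python
# Sketch of the differentiable collision relaxation of Eqs. (7)-(8): each
# vehicle is approximated by 5 circles and pairwise losses ramp up linearly
# as the closest circles overlap. Circle placement and radius are assumed.
import torch

def circle_centers(states, num_circles=5):
    # states: (N, 5) boxes (x, y, w, h, theta); spread circles along heading
    x, y, w, h, th = states.unbind(-1)
    offsets = torch.linspace(-0.5, 0.5, num_circles, device=states.device)
    cx = x[:, None] + offsets * w[:, None] * torch.cos(th)[:, None]
    cy = y[:, None] + offsets * w[:, None] * torch.sin(th)[:, None]
    radius = h / 2.0                                # one radius per vehicle
    return torch.stack([cx, cy], dim=-1), radius    # (N, 5, 2), (N,)

def collision_loss(states):
    centers, r = circle_centers(states)
    n, c = centers.shape[:2]
    flat = centers.flatten(0, 1)                    # (N*5, 2) all circles
    d = torch.cdist(flat, flat).view(n, c, n, c).amin(dim=(1, 3))  # (N, N)
    rr = r[:, None] + r[None, :]
    loss = torch.clamp(1.0 - d / rr, min=0.0)        # Eq. (8) relaxation
    mask = 1.0 - torch.eye(n, device=states.device)  # drop i == j terms
    return (loss * mask).sum() / (n * n)
```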
## 4 Experimental Evaluation
In this section, we first describe the simulation setup and propose a suite of
metrics for measuring simulation quality. We show our approach generates more
realistic and diverse traffic scenarios as compared to a diverse set of
baselines. Notably, an imitation-based motion planner trained on synthetic
data generated by TrafficSim achieves lower planning L2 error than one trained
on the same amount of real data, indicating a minimal behavior gap between
TrafficSim and the real world. Lastly, we study how to trade off between
simulation quality and computation.
#### Dataset:
We benchmark our approach on a large-scale self driving dataset ATG4D, which
contains more than one million frames collected over several cities in North
America with a 64-beam, roof-mounted LiDAR. Our labels are very precise 3D
bounding box tracks. There are 6500 snippets in total, each 25 seconds long.
In each city, we have access to high definition maps capturing the geometry
and the topology of each road network. We consider a rectangular region of
interest centered around the self-driving vehicle that spans 140 meters along
the direction of its heading and 80 meters across. The region is fixed across
time for each simulation.
#### Simulation Setup:
In this work, we use real traffic states from ATG4D as initialization for the
simulations. This gives us realistic actor placement and dynamic state, thus
controlling for domain gap that might arise from initialization. We subdivide
full snippets into 11s chunks, using the first 3s as the initial states
$Y^{-H:0}$, and the subsequent $T_{\text{label}}=8s$ as expert demonstration
for training. We run the simulation forward for $T=12$ seconds for both
training and evaluation. We use $\delta_{t}=0.5s$ as the duration for a
simulation tick (i.e., simulation frequency of $2Hz$). We use observed traffic
light states from the log snippets for simulation.
#### Baselines:
We use a wide variety of baselines. The Intelligent Driver Model (IDM) [39] is
a heuristic car-following model that explicitly encodes traffic rules. We adapt
three state-of-the-art motion forecasting models for traffic simulation. MTP
[17] models multi-modal futures, but assumes independence across actors. ESP
[35] models interaction at the output level, via social auto-regressive
formulation. ILVM [11] models interaction using a scene-level latent variable
model. Finally, we consider imitation learning techniques that have been
applied to learning driving behaviors. Following [8, 16, 2], DataAug adds
perturbed trajectories to help the policy learn to recover from mistakes.
Inspired by [23, 7, 3], AdversarialIL learns a discriminator as supervision
for the policy. We defer implementation details to the supplementary.
Figure 6: Simulated traffic scenarios sampled from TrafficSim (two scenarios shown at T=0s, 4s, 8s, and 12s): colored triangles show heading and track instances across time

Model | $T_{\text{plan}}$ (timesteps) | Unroll in Training | Common Sense | SCR12s (%) | TRV12s (%) | minSFDE (m) | minSADE (m) | meanSFDE (m) | meanSADE (m) | MASD12s (m)
---|---|---|---|---|---|---|---|---|---|---
$\mathcal{M}_{0}$ | 1 | | | 5.92 | 10.19 | 2.04 | 0.88 | 2.50 | 1.04 | 0.80
$\mathcal{M}_{1}$ | 10 | | | 2.32 | 3.43 | 1.72 | 0.99 | 2.09 | 1.29 | 2.40
$\mathcal{M}_{2}$ | 1 | ✓ | | 1.28 | 3.30 | 1.02 | 0.54 | 1.70 | 0.88 | 3.57
$\mathcal{M}_{3}$ | 10 | ✓ | | 0.60 | 3.02 | 1.21 | 0.58 | 1.70 | 0.84 | 2.16
$\mathcal{M}^{*}$ | 10 | ✓ | ✓ | 0.50 | 2.77 | 1.13 | 0.57 | 1.75 | 0.85 | 2.50

Table 2: [ATG4D] Ablation study ($S=15$ samples, $T=12$ seconds, $T_{\text{label}}=8$ seconds)
### 4.1 Metrics
Evaluating traffic simulation is challenging since there is no singular metric
that can fully capture the quality of the generated traffic scenarios. Thus we
propose a suite of metrics for measuring the diversity and realism, with a
particular focus on _coverage_ of real world scenarios. We provide
implementation details in the supplementary. For all evaluations, we sample
$K=15$ scenarios from the model given each initial condition. More concretely,
we create batches of $K$ scenarios with the same initialization. Then at each
timestep, we sample a single $\mathcal{Y}^{t}_{(k)}$ from
$P(\mathcal{Y}^{t}_{(k)}|X^{t}_{(k)})$ for each scenario $(k)$, all in
parallel. After unrolling for $\frac{T}{\delta_{t}}$ steps, we obtain the full
scenarios.
Interaction Reasoning: To evaluate the consistency of the actors’ behaviors,
we propose to measure the scenario collision rate ($\mathrm{SCR}$): the
average percentage of actors in collision in each sampled scenario (thus lower
being better). Two actors are considered in collision if the overlap between
their bounding boxes at any time step is higher than a small IOU threshold.
Traffic Rule Compliance: Traffic actors should comply with traffic rules.
Thus, we propose to measure traffic rule violation (TRV) rate, and focus on
two specific traffic rules: 1) staying within drivable areas, and 2) obeying
traffic light signals.
Scenario Reconstruction: We use a distance-based scenario reconstruction metric
to evaluate the model’s ability to sample a scenario close to the ground
truth (i.e., recovering irregular maneuvers and complex interactions
collected from the real world). For each scenario sample, we calculate average
distance error (ADE) across time, and final distance error (FDE) at the last
labeled timestep. We calculate minSADE/minSFDE by selecting the best matching
scenario sample, and meanSADE/meanSFDE by averaging over all scenario samples.
Diversity: Following [43], we use a map-aware average self distance (MASD)
metric to measure the diversity of the sampled scenarios. In particular, we
measure the average distance between the two most distinct sampled scenarios
that do not violate traffic rules. We note that this metric can be exploited by
models that generate diverse but unrealistic traffic scenarios.
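For concreteness, a sketch of two of these metrics follows; we substitute a simple center-distance test for the IOU-based collision check, so the radius threshold and tensor shapes are illustrative assumptions.
```python
# Sketch of two of the proposed metrics under simplifying assumptions
# (center-distance collision test instead of bounding-box IOU).
import torch

def scenario_collision_rate(scenarios, radius=2.0):
    # scenarios: (K, T, N, 2) sampled actor centroids per scenario
    k, t, n, _ = scenarios.shape
    flat = scenarios.reshape(k * t, n, 2)
    d = torch.cdist(flat, flat)
    d = d + torch.eye(n, device=scenarios.device) * 1e9   # mask self-distance
    hit = d.min(dim=-1).values < 2 * radius               # (K*T, N)
    in_collision = hit.view(k, t, n).any(dim=1)           # collided at any t
    return in_collision.float().mean(dim=1).mean()        # avg fraction / scenario

def min_sade(scenarios, gt):
    # minSADE: average displacement of the best-matching scenario sample
    ade = (scenarios - gt.unsqueeze(0)).norm(dim=-1).mean(dim=(1, 2))
    return ade.min()
```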
### 4.2 Experimental Results
Figure 7: Irregular actor maneuvers and complex interactions sampled from TrafficSim (panels: Nudge, U-Turn, Yield, Swerve)
#### Comparison Against Existing Approaches:
Table 1 shows quantitative results. Car-following models generate
collision-free behavior that strictly follows traffic rules, but do not recover
naturalistic driving and thus score poorly on scenario reconstruction
metrics. Motion forecasting models recover accurate traffic behavior, but
exhibit unrealistic interactions and traffic rule violations when unrolled for
a long simulation horizon. Imitation learning techniques attempt to bridge the
gap between train and test, and thus result in marginally better scenario
reconstruction as compared to motion forecasting baselines. However, they
inject additional bias that results in worse collision rate and traffic rule
violation. Our TrafficSim achieves the best of both worlds: best results on
scenario reconstruction and interaction, and similar to IDM in traffic rule
violation, without directly encoding the rules. We note that the ground truth
TRV rate is 1.26%, since humans exhibit non-compliant behaviors. Figure 6 shows
qualitative visualization of traffic scenarios generated from TrafficSim.
Figure 7 shows that TrafficSim can generate samples with irregular maneuvers
and complex interactions, which cannot be captured by heuristic models like
IDM.
#### TrafficSim for Data Augmentation:
TrafficSim can be used to generate synthetic training data for learning better
motion planners. More concretely, we generate $5s$ scenario snippets and train
a planner to imitate behaviors of all actors in the scenario. As shown in
Table 3, the planner trained with synthetic data generated from TrafficSim
significantly outperforms baselines in open-loop planning metrics when
evaluated against real scenarios. Most notably, we achieve _lower_ planning L2
error, while matching collision rate and progress of planner trained with the
same amount of real data. This shows that the scenarios generated from
TrafficSim are realistic and have minimal gap from behaviors observed in the
real world, and can be used as effective data augmentation. We show more
details on this experiment in the supplementary.
Training Data | Collision Rate (%) | Planning L2 (m) | Progress (m)
---|---|---|---
Real | 10.56 | 4.85 | 31.05
IDM [39] | 22.05 | 10.49 | 29.17
MTP [17] | 19.54 | 9.89 | 26.46
ESP [35] | 19.38 | 8.76 | 24.73
ILVM [11] | 14.82 | 7.16 | 26.39
AdversarialIL | 13.83 | 5.19 | 29.02
DataAug | 15.75 | 6.88 | 26.88
TrafficSim | 10.73 | 4.52 | 29.44
Table 3: Imitation planner trained with synthetic data from TrafficSim
outperforms real data in planning L2 error.
#### Ablation Study:
We show the importance of each component of our model and training methodology
in Table 2. Open-loop training ($\mathcal{M}_{0}$ & $\mathcal{M}_{1}$)
performs poorly due to compounding error at test-time. Closed-loop training
with back-propagation through simulation ($\mathcal{M}_{2}$) is the most
important component in learning a robust policy. Explicitly modelling longer
horizon plan (i.e., $\mathcal{Y}^{t}=\\{Y^{t+1},...,Y^{t+T_{\text{plan}}}\\}$
instead of $Y^{t+1}$) ($\mathcal{M}_{3}$) improves interaction reasoning.
Augmenting imitation with common sense ($\mathcal{M}^{*}$) further reduces
collision and traffic rule violation rates.
#### Multi-Step Update for Fast Simulation:
We can achieve faster simulation by running model inference once per $\kappa$
ticks of the simulation. This is possible since TrafficSim explicitly models
the actor plans
$\mathcal{Y}^{t}=\\{Y^{t+1},Y^{t+2},...,Y^{t+T_{\text{plan}}}\\}$ and
accurately captures future interactions in the planning horizon
$T_{\text{plan}}$ even without extracting scene context at the highest
simulation frequency. In particular, we can choose the desired tradeoff
between simulation quality and speed by modulating $\kappa$ at simulation-time
without retraining, as long as $\kappa\leq T_{\text{plan}}$. Table 4 shows we
can effectively achieve 4x speedup with minimal degradation in simulation
quality. Runtime is profiled on a single Nvidia GTX 1080 Ti.
#### Incorporating Constraints at Simulation-Time:
Explicitly modelling actor plans $\mathcal{Y}^{t}$ at each timestep also makes
it easy to incorporate additional constraints at simulation time. In
particular, we can define constraints such as avoiding collision and obeying
traffic rules over the actor plans $\mathcal{Y}^{t}$, to anticipate and
prevent undesired behaviors in the future. Concretely, we evaluate two
optimization methods for avoiding collision: 1) rejection sampling, which
discards actor plans that collide and re-samples, and 2) gradient-based
optimization of the scene latent $Z^{t}$ to minimize the differentiable
relaxation of collision. Table 5 shows that both methods are effective in
reducing collision while keeping the simulation realistic. More details in
supplementary.
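Both strategies can be sketched as follows; the retry budget, step size, and the collides/collision_loss callables are illustrative assumptions, not the paper's exact procedure.
```python
# Sketch of the two simulation-time strategies for reducing collisions:
# rejection sampling over plans, and gradient descent on the scene latent
# against the differentiable collision relaxation.
import torch

def rejection_sample(policy, x_t, collides, max_tries=10):
    plan = policy.sample(x_t)
    for _ in range(max_tries):
        if not collides(plan):
            break
        plan = policy.sample(x_t)        # re-draw plans that collide
    return plan

def optimize_latent(decoder, x_t, z, collision_loss, steps=5, lr=0.1):
    z = z.detach().requires_grad_(True)
    for _ in range(steps):
        loss = collision_loss(decoder(x_t, z))
        (grad,) = torch.autograd.grad(loss, z)
        z = (z - lr * grad).detach().requires_grad_(True)  # descend on Z^t
    return decoder(x_t, z)
```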
#### TrafficSim for Interactive Simulation:
We create an interactive simulation tool to showcase how simulation designers
can leverage TrafficSim to construct and preview interesting traffic
scenarios. In particular, they can alter traffic light states and add, modify,
or remove actors during simulation. In response, TrafficSim generates
realistic variants of the traffic scenario. We show visual demonstrations in
the supplementary.
Inference Frequency | Runtime (s) | SCR12s (%) | TRV12s (%) | min SFDE (m) | mean SFDE (m)
---|---|---|---|---|---
2Hz | 0.83 | 0.50 | 2.77 | 1.13 | 1.75
1Hz | 0.45 | 0.85 | 3.17 | 1.12 | 1.73
0.5Hz | 0.24 | 0.96 | 3.64 | 1.16 | 1.73

Table 4: Multi-step update achieves up to 4x speedup with minimal degradation in simulation quality

Post-Processing | SCR12s (%) | TRV12s (%) | min SFDE (m) | mean SFDE (m)
---|---|---|---|---
None | 0.50 | 2.77 | 1.13 | 1.75
Rejection Sampling | 0.33 | 3.01 | 1.13 | 1.75
Gradient Optimization | 0.12 | 3.00 | 1.13 | 1.75

Table 5: Additional optimization at simulation-time further reduces collision rate
## 5 Conclusion
In this work, we have proposed a novel method for generating diverse and
realistic traffic simulation. TrafficSim is a multi-agent behavior model that
generates socially-consistent plans for all actors in the scene jointly. It is
learned using back-propagation through the fully differentiable simulation, by
imitating trajectory observations from a real-world self driving dataset and
incorporating common sense. TrafficSim enables exciting new possibilities in
data augmentation, interactive scenario design, and safety evaluation. For
future work, we aim to extend this work to learn controllable actors where we
can specify attributes such as goal, route, and style.
## References
* [1] Alexandre Alahi, Kratarth Goel, Vignesh Ramanathan, Alexandre Robicquet, Li Fei-Fei, and Silvio Savarese. Social lstm: Human trajectory prediction in crowded spaces. In Proceedings of the IEEE CVPR, 2016.
* [2] Mayank Bansal, Alex Krizhevsky, and Abhijit S. Ogale. Chauffeurnet: Learning to drive by imitating the best and synthesizing the worst. CoRR, abs/1812.03079, 2018.
* [3] Feryal Behbahani, Kyriacos Shiarlis, Xi Chen, Vitaly Kurin, Sudhanshu Kasewa, Ciprian Stirbu, João Gomes, Supratik Paul, Frans A. Oliehoek, João V. Messias, and Shimon Whiteson. Learning from demonstration in the wild. CoRR, abs/1811.03516, 2018.
* [4] A. Best, S. Narang, Lucas Pasqualin, D. Barber, and D. Manocha. Autonovi-sim: Autonomous vehicle simulation platform with weather, sensing, and traffic control. 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), pages 1161–11618, 2018.
* [5] Raunak Bhattacharyya, Blake Wulfe, Derek Phillips, Alex Kuefler, Jeremy Morton, Ransalu Senanayake, and Mykel Kochenderfer. Modeling human driving behavior through generative adversarial imitation learning, 2020.
* [6] Raunak P. Bhattacharyya, Derek J. Phillips, Changliu Liu, Jayesh K. Gupta, Katherine Driggs-Campbell, and Mykel J. Kochenderfer. Simulating emergent properties of human driving behavior using multi-agent reward augmented imitation learning, 2019.
* [7] Raunak P. Bhattacharyya, Derek J. Phillips, Blake Wulfe, Jeremy Morton, Alex Kuefler, and Mykel J. Kochenderfer. Multi-agent imitation learning for driving simulation. CoRR, abs/1803.01044, 2018.
* [8] Mariusz Bojarski, Davide Del Testa, Daniel Dworakowski, Bernhard Firner, Beat Flepp, Prasoon Goyal, Lawrence D. Jackel, Mathew Monfort, Urs Muller, Jiakai Zhang, Xin Zhang, Jake Zhao, and Karol Zieba. End to end learning for self-driving cars. CoRR, abs/1604.07316, 2016.
* [9] Kiante Brantley, Wen Sun, and Mikael Henaff. Disagreement-regularized imitation learning. In International Conference on Learning Representations, 2020.
* [10] Sergio Casas, Cole Gulino, Renjie Liao, and Raquel Urtasun. Spatially-aware graph neural networks for relational behavior forecasting from sensor data, 2019.
* [11] Sergio Casas, Cole Gulino, Simon Suo, Katie Luo, Renjie Liao, and Raquel Urtasun. Implicit latent variable model for scene-consistent motion forecasting, 2020.
* [12] Sergio Casas, Cole Gulino, Simon Suo, and Raquel Urtasun. The importance of prior knowledge in precise multimodal prediction, 2020.
* [13] Sergio Casas, Wenjie Luo, and Raquel Urtasun. Intentnet: Learning to predict intention from raw sensor data. In Conference on Robot Learning, 2018.
* [14] Yuning Chai, Benjamin Sapp, Mayank Bansal, and Dragomir Anguelov. Multipath: Multiple probabilistic anchor trajectory hypotheses for behavior prediction. arXiv preprint arXiv:1910.05449, 2019.
* [15] Qianwen Chao, Huikun Bi, Weizi Li, Tianlu Mao, Zhaoqi Wang, and Ming Lin. A survey on visual traffic simulation: Models, evaluations, and applications in autonomous driving. Computer Graphics Forum, 07 2019.
* [16] Felipe Codevilla, Matthias Müller, Antonio López, Vladlen Koltun, and Alexey Dosovitskiy. End-to-end driving via conditional imitation learning, 2018.
* [17] Henggang Cui, Vladan Radosavljevic, Fang-Chieh Chou, Tsung-Han Lin, Thi Nguyen, Tzu-Kuo Huang, Jeff Schneider, and Nemanja Djuric. Multimodal trajectory predictions for autonomous driving using deep convolutional networks. arXiv preprint arXiv:1809.10732, 2018.
* [18] Nemanja Djuric, Vladan Radosavljevic, Henggang Cui, Thi Nguyen, Fang-Chieh Chou, Tsung-Han Lin, and Jeff Schneider. Motion prediction of traffic actors for autonomous driving using deep convolutional networks. arXiv preprint arXiv:1808.05819, 2018.
* [19] Alexey Dosovitskiy, German Ros, Felipe Codevilla, Antonio Lopez, and Vladlen Koltun. Carla: An open urban driving simulator, 2017.
* [20] P. Gipps. A behavioural car-following model for computer simulation. Transportation Research Part B: Methodological, 15:105–111, 1981.
* [21] Mikael Henaff, Alfredo Canziani, and Yann LeCun. Model-predictive policy learning with uncertainty regularization for driving in dense traffic, 2019.
* [22] Irina Higgins, Loic Matthey, Arka Pal, Christopher Burgess, Xavier Glorot, Matthew Botvinick, Shakir Mohamed, and Alexander Lerchner. beta-vae: Learning basic visual concepts with a constrained variational framework. In International Conference on Learning Representations, 2017.
* [23] Jonathan Ho and Stefano Ermon. Generative adversarial imitation learning. CoRR, abs/1606.03476, 2016.
* [24] Arne Kesting and Martin Treiber. Calibrating car-following models by using trajectory data: Methodological study. Transportation Research Record, 2088(1):148–156, 2008.
* [25] Arne Kesting, Martin Treiber, and Dirk Helbing. General lane-changing model mobil for car-following models. Transportation Research Record, 1999(1):86–94, 2007.
* [26] Arne Kesting, Martin Treiber, and Dirk Helbing. Agents for traffic simulation. Multi-agent systems: Simulation and applications, pages 325–356, 2009.
* [27] Diederik P Kingma and Max Welling. Auto-encoding variational bayes. arXiv preprint arXiv:1312.6114, 2013.
* [28] Jason Kong, Mark Pfeiffer, Georg Schildbach, and Francesco Borrelli. Kinematic and dynamic vehicle models for autonomous driving control design. In 2015 IEEE Intelligent Vehicles Symposium (IV), pages 1094–1099. IEEE, 2015.
* [29] Lingyun Luke Li, Bin Yang, Ming Liang, Wenyuan Zeng, Mengye Ren, Sean Segal, and Raquel Urtasun. End-to-end contextual perception and prediction with interaction transformer, 2020.
* [30] Pablo Alvarez Lopez, Michael Behrisch, Laura Bieker-Walz, Jakob Erdmann, Yun-Pang Flötteröd, Robert Hilbrich, Leonhard Lücken, Johannes Rummel, Peter Wagner, and Evamarie Wießner. Microscopic traffic simulation using sumo. In The 21st IEEE International Conference on Intelligent Transportation Systems. IEEE, 2018.
* [31] Jianqi Ma, Weiyuan Shao, Hao Ye, Li Wang, Hong Wang, Yingbin Zheng, and Xiangyang Xue. Arbitrary-oriented scene text detection via rotation proposals. IEEE Transactions on Multimedia, 2018.
* [32] Sivabalan Manivasagam, Shenlong Wang, Kelvin Wong, Wenyuan Zeng, Mikita Sazanovich, Shuhan Tan, Bin Yang, Wei-Chiu Ma, and Raquel Urtasun. Lidarsim: Realistic lidar simulation by leveraging the real world, 2020.
* [33] Mark Martinez, Chawin Sitawarin, Kevin Finch, Lennart Meincke, Alex Yablonski, and Alain L. Kornhauser. Beyond grand theft auto V for training, testing and enhancing deep learning in self driving cars. CoRR, abs/1712.01397, 2017.
* [34] Dean A. Pomerleau. Alvinn: An autonomous land vehicle in a neural network. In D. S. Touretzky, editor, Advances in Neural Information Processing Systems 1, pages 305–313. Morgan-Kaufmann, 1989.
* [35] Nicholas Rhinehart, Rowan McAllister, Kris Kitani, and Sergey Levine. PRECOG: PREdiction Conditioned On Goals in Visual Multi-Agent Settings. arXiv e-prints, page arXiv:1905.01296, May 2019.
* [36] Stéphane Ross, Geoffrey Gordon, and Drew Bagnell. A reduction of imitation learning and structured prediction to no-regret online learning. In Proceedings of the fourteenth international conference on artificial intelligence and statistics, pages 627–635, 2011.
* [37] Kihyuk Sohn, Honglak Lee, and Xinchen Yan. Learning structured output representation using deep conditional generative models. In Advances in neural information processing systems, pages 3483–3491, 2015.
* [38] Yichuan Charlie Tang and Ruslan Salakhutdinov. Multiple futures prediction, 2019.
* [39] Martin Treiber, Ansgar Hennecke, and Dirk Helbing. Congested traffic states in empirical observations and microscopic simulations. Physical Review E, 62(2):1805–1824, Aug 2000.
* [40] E. A. Wan and R. Van Der Merwe. The unscented kalman filter for nonlinear estimation. In Proceedings of the IEEE 2000 Adaptive Systems for Signal Processing, Communications, and Control Symposium (Cat. No.00EX373), pages 153–158, 2000.
* [41] Bernhard Wymann, Christos Dimitrakakisy, Andrew Sumnery, and Christophe Guionneauz. Torcs: The open racing car simulator, 2015.
* [42] Bin Yang, Wenjie Luo, and Raquel Urtasun. Pixor: Real-time 3d object detection from point clouds. In Proceedings of the IEEE CVPR, 2018.
* [43] Ye Yuan and Kris Kitani. Diverse trajectory forecasting with determinantal point processes, 2019.
Appendix
In this supplementary material, we provide the following: additional details
of our method in Section A, implementation details of baselines and metrics in
Section B, and lastly, additional qualitative results in Section C.
## Appendix A Additional TrafficSim Details
In this section, we describe additional details of input parameterization,
model architecture, and learning methodology for TrafficSim.
#### Input Parameterization:
We use a rasterized map representation that encodes traffic elements into
different channels of a raster. There are a total of 13 map channels
consisting of intersections, lanes, roads, etc. We encode traffic control as
additional channels, by rasterizing the lane segments controlled by the
traffic light. We initialize each scenario with 3s of past actor states, with
each actor history represented by 7 bounding boxes across time, each 0.5s
apart. When an actor does not have the full history, we fill the missing
states with NaNs.
#### Global Map Module:
We use a multi-scale backbone to extract map features at different resolution
levels to encode both near and long-range map topology. The architecture is
adapted from [42]: it consists of a sequence of 4 blocks, each with a single
convolutional layer of kernel size 3 and [8, 16, 32, 64] channels. After each
block, the feature maps are down-sampled using max pooling with stride 2.
Finally, feature maps from each block are resized (via average-pooling or
bilinear sampling) to a common resolution of 0.8m, concatenated, and processed
by a header block with 2 additional convolutional layers with 64 channels.
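The backbone described above can be sketched as follows. This is a minimal PyTorch rendering of the text; the module name, the input spatial resolution, and the choice to resize all scales to the first block's resolution are illustrative assumptions.

```python
# A minimal sketch of the multi-scale map backbone described above.
# Channel counts, kernel size, pooling, and the 2-layer header follow the
# text; the resize target resolution is an assumption.
import torch
import torch.nn as nn
import torch.nn.functional as F

class MapBackbone(nn.Module):
    def __init__(self, in_channels=13):  # 13 map channels, per the text
        super().__init__()
        channels = [8, 16, 32, 64]
        blocks, prev = [], in_channels
        for c in channels:
            # each block: a single 3x3 convolutional layer, per the text
            blocks.append(nn.Sequential(nn.Conv2d(prev, c, 3, padding=1), nn.ReLU()))
            prev = c
        self.blocks = nn.ModuleList(blocks)
        # header block: 2 additional convolutional layers with 64 channels
        self.header = nn.Sequential(
            nn.Conv2d(sum(channels), 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),
        )

    def forward(self, raster):
        feats, x = [], raster
        for block in self.blocks:
            x = block(x)
            feats.append(x)
            x = F.max_pool2d(x, 2)  # down-sample with stride-2 max pooling
        # resize every scale to a common resolution (here the first block's),
        # then concatenate and process with the header
        size = feats[0].shape[-2:]
        feats = [F.interpolate(f, size=size, mode="bilinear", align_corners=False)
                 for f in feats]
        return self.header(torch.cat(feats, dim=1))
```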
#### Local Observation Module:
We design the local observation modules to be lightweight and differentiable.
This enables the simulation to be fast and allows us to backpropagate gradient
through the simulation. Works in motion forecasting (e.g., [17]) typically
rasterize the bounding boxes of each actor, use a convolutional network to
extract motion features, and rely on a limited receptive field to incorporate
influences from its neighbors. In contrast, we directly encode the numerical
values parameterizing the bounding boxes using a 4-layer GRU with 128 hidden
states, and rely on the graph neural network based module for interaction
reasoning. More concretely, we fill NaNs with zeros, and also pass in a binary
mask indicating missing values to the GRU. For extracting local map features
from the pre-processed map features, we use Rotated Region of Interest Align
with 70m in front, 10m behind, and 20m on each side. The extracted local
features are further processed by a 3-layer CNN, and then max-pooled across
the spatial dimensions. The final local context $x_{i}$ for each actor $i$ is
a 192 dimensional vector formed by concatenating the map and motion features.
#### Scene Interaction Module:
We leverage a graph neural network based scene interaction module to
parameterize our joint actor policy. In particular, our scene interaction
module is inspired by [10, 11], and is used in our Prior, Posterior, and
Decoder networks. We provide an algorithmic description in Algorithm 1. The
algorithm is written with for loops for clarity, but our implementation is
fully vectorized; the only loop needed is over the $K$ rounds of message
passing, and in practice we observe that $K=1$ is sufficient (a runnable
sketch follows the algorithm). Our edge function $\mathcal{E}^{(k)}$ consists of a 3-layer MLP
that takes as input the hidden states of the 2 terminal nodes at each edge in
the graph at the previous propagation step as well as the projected
coordinates of their corresponding bounding boxes. We use feature-wise max-
pooling as our aggregate function $\mathcal{A}^{(k)}$. To update the hidden
states we use a GRU cell as $\mathcal{U}^{(k)}$. Finally, to output the
results from the graph propagations, we use another MLP as readout function
$\mathcal{O}$.
Algorithm 1 SIM: Scene Interaction Module
Input: Initial hidden state for all of the actors in the scene
$H^{0}=\begin{Bmatrix}h_{0}^{0},h_{1}^{0},\cdots,h_{N}^{0}\end{Bmatrix}$. BEV
coordinates of the actor bounding boxes
$\begin{Bmatrix}c_{0},c_{1},...,c_{N}\end{Bmatrix}$. Number of message
propagations $K$ (defaults to $K=1$).
Output: Output vector per node
$\begin{Bmatrix}o_{0},o_{1},\cdots,o_{N}\end{Bmatrix}$.
1:Construct actor interaction graph $G=(V,E)$
2:Compute pairwise coordinate transformations $\mathcal{T}(c_{u},c_{v})$,
$\forall(u,v)\in E$
3:for $k=1,...,K$ do $\triangleright$ Loop over graph propagations
4: for $(u,v)\in E$ do $\triangleright$ Compute message for every edge in the
graph
5: $m_{u\rightarrow
v}^{(k)}=\mathcal{E}^{(k)}\left(h_{u}^{k-1},h_{v}^{k-1},\mathcal{T}(c_{u},c_{v})\right)$
6: for $v\in V$ do $\triangleright$ Update node states
7: $a_{v}^{(k)}=\mathcal{A}^{(k)}\left(\left\\{m_{u\rightarrow
v}^{(k)}:u\in\mathbf{N}(v)\right\\}\right)$ $\triangleright$ Aggregate
messages from neighbors
8: $h_{v}^{(k)}=\mathcal{U}^{(k)}\left(h_{v}^{(k-1)},a_{v}^{(k)}\right)$
$\triangleright$ Update the hidden state
9:for $v\in V$ do
10: $o_{v}=\mathcal{O}\left(h_{v}^{(K)}\right)$ $\triangleright$ Compute
outputs
11:return $\begin{Bmatrix}o_{0},o_{1},\cdots,o_{N}\end{Bmatrix}$
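Below is a minimal, non-vectorized PyTorch sketch of Algorithm 1, assuming a 128-dimensional hidden state and a small relative-pose encoding per edge; the MLP widths and output size are illustrative, not values from the paper.

```python
# A compact sketch of Algorithm 1: a 3-layer MLP edge function E^(k),
# feature-wise max-pooling aggregation A^(k), a GRU cell update U^(k), and an
# MLP readout O, following the text. Dimensions are assumptions.
import torch
import torch.nn as nn

class SIM(nn.Module):
    def __init__(self, hidden=128, rel=4, out=64):
        super().__init__()
        self.edge = nn.Sequential(  # E^(k): message from node u to node v
            nn.Linear(2 * hidden + rel, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden))
        self.update = nn.GRUCell(hidden, hidden)  # U^(k): hidden state update
        self.readout = nn.Sequential(nn.Linear(hidden, hidden), nn.ReLU(),
                                     nn.Linear(hidden, out))  # O: per-node output

    def forward(self, h, rel_pose, K=1):
        # h: (N, hidden) node states; rel_pose: (N, N, rel) pairwise transforms
        N = h.size(0)  # assumes N >= 2 (a fully connected interaction graph)
        for _ in range(K):  # K rounds of message passing (K=1 suffices)
            msgs = []
            for v in range(N):
                m = [self.edge(torch.cat([h[u], h[v], rel_pose[u, v]]))
                     for u in range(N) if u != v]
                msgs.append(torch.stack(m).max(dim=0).values)  # A^(k): max-pool
            h = self.update(torch.stack(msgs), h)  # update all node states
        return self.readout(h)

# usage sketch: out = SIM()(torch.randn(5, 128), torch.randn(5, 5, 4))
```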
#### Simulating Traffic Scenarios:
We provide an algorithmic description of the overall inference process of
simulating traffic scenarios in Algorithm 2. While the algorithm describes
sampling a single scenario and loops over the actors in the scene, we can
sample multiple scenarios with an arbitrary number of actors in parallel by
batching over samples and actors. Note that we do not directly regress the
headings of the actors. Instead, we approximate headings in a post-processing
step by taking the tangent of segments between predicted waypoints. This
ensures the headings are consistent with the predicted motion.
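This heading approximation can be sketched in a few lines of numpy, assuming a (T, 2) array of predicted waypoints per actor:

```python
# A sketch of the heading post-processing described above: headings are not
# regressed but approximated as the tangent of segments between consecutive
# predicted waypoints.
import numpy as np

def approximate_headings(waypoints):
    """waypoints: (T, 2) array of predicted (x, y) positions for one actor."""
    deltas = np.diff(waypoints, axis=0)                # segment vectors
    headings = np.arctan2(deltas[:, 1], deltas[:, 0])  # tangent direction
    return np.append(headings, headings[-1])           # repeat last for final step
```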
#### Time-Adaptive Multi-Task Loss:
We use a multi-task loss to balance supervision from imitation and common
sense:
$\displaystyle\mathcal{L}=\sum_{t}\left[\lambda(t)\mathcal{L}^{t}_{\text{imitation}}+(1-\lambda(t))\mathcal{L}^{t}_{\text{collision}}\right]$
(9)
Concretely, we define the time-adaptive weight as:
$\displaystyle\lambda(t)=\max\left(\frac{T_{\text{label}}-t}{T_{\text{label}}},0\right)$
(10)
where $T_{\text{label}}$ is the label horizon (i.e., the latest timestep for
which we have labels). Figure 8 illustrates this function. Intuitively, the
weight on imitation must drop to zero at $T_{\text{label}}$ as we no longer
have access to labels. We note that our method is not sensitive to the choice
of $\lambda(t)$: experiments with other decreasing functions of the simulation
timestep yield similar results.
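A minimal sketch of this schedule and the combined loss of Eq. (9), where the per-timestep loss values are placeholders:

```python
# A sketch of the time-adaptive weight of Eq. (10) and the multi-task loss of
# Eq. (9). Only the weighting schedule is from the text; loss terms are inputs.
def weight(t, t_label):
    # decays linearly from 1 at t=0 to 0 at t=t_label, then stays at 0
    return max((t_label - t) / t_label, 0.0)

def total_loss(imitation_losses, collision_losses, t_label):
    return sum(weight(t, t_label) * li + (1 - weight(t, t_label)) * lc
               for t, (li, lc) in enumerate(zip(imitation_losses, collision_losses)))
```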
#### Differentiable Relaxation of Collision:
Figure 9 illustrates our proposed differentiable relaxation of collision. More
concretely, the loss is defined as follows:
$\displaystyle\mathcal{L}^{t}_{\text{collision}}(\mathcal{Y}^{t}_{\text{prior}})=\frac{1}{N^{2}}\sum_{i\neq
j}\min(1,\sum_{\tau=t+1}^{t+P}\mathcal{L}_{\text{pair}}(y_{i}^{\tau},y_{j}^{\tau}))$
(11)
$\displaystyle\mathcal{L}_{\text{pair}}(y_{i}^{\tau},y_{j}^{\tau})=\begin{cases}1-\frac{d}{r_{i}+r_{j}},&\text{if
}d\leq r_{i}+r_{j}\\ 0,&\text{otherwise}\end{cases}$ (12)
where $d$ is the distance between the closest circle centroids of the two
actors and $r_{i}$, $r_{j}$ are the corresponding circle radii (see Figure 9).
Intuitively, if there is no overlap between any circles, the collision loss is
0; if two circles completely overlap, the collision loss is 1. We further
reweight the collision loss by a factor of $0.01$ so that it is on a similar
scale to the imitation loss.
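A numpy sketch of the pairwise term of Eq. (12) for one actor pair at a single timestep, assuming the 5 circle centroids per vehicle are given:

```python
# A sketch of the differentiable collision relaxation for a single actor pair,
# each vehicle approximated by 5 circles (Figure 9). Circle placement along
# the vehicle axis is assumed to be precomputed.
import numpy as np

def pair_collision(centers_i, centers_j, r_i, r_j):
    """centers_*: (5, 2) circle centroids; r_*: scalar circle radii."""
    # distance between the closest pair of centroids across the two actors
    d = np.linalg.norm(centers_i[:, None, :] - centers_j[None, :, :], axis=-1).min()
    if d <= r_i + r_j:
        return 1.0 - d / (r_i + r_j)  # 1 at full overlap, 0 at tangency
    return 0.0
```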
Figure 8: Our adaptive weight is a decreasing function of simulation timestep.
Figure 9: Differentiable relaxation of collision loss approximates each
vehicle as 5 circles and considers distance between closest centroids.
Algorithm 2 TrafficSim: Simulating Traffic Scenarios
Input: Rasterized high definition map $M$. Initial actor states
$Y^{-H:0}=\begin{Bmatrix}Y^{-H},\cdots,Y^{0}\end{Bmatrix}$ where each
$Y^{t}=\begin{Bmatrix}y_{1}^{t},y_{2}^{t},...,y_{N}^{t}\end{Bmatrix}$ for the
$N$ actors in the scene.
Output: Simulated actor states
$Y^{1:T}=\begin{Bmatrix}Y^{1},Y^{2},\cdots,Y^{T}\end{Bmatrix}$ for $T$
simulation timesteps.
1:$\tilde{M}\leftarrow\operatorname{MapBackbone}(M)$ $\triangleright$ Extract
global map feature once per environment
2:for $t=1,...,T$ do $\triangleright$ Simulate for requested number of
timesteps
3: for $i=1,...,N$ do $\triangleright$ Extract local context for each actor at
each timestep
4: $r_{i}\leftarrow\operatorname{RRoiAlign}(y_{i}^{t},\tilde{M})$
5:
$x_{i}^{\text{map}}\leftarrow\operatorname{MaxPooling}(\operatorname{CNN}(r_{i}))$
6: $x_{i}^{\text{motion}}\leftarrow\operatorname{GRU}(y^{-H:t}_{i})$
7: $x_{i}\leftarrow x_{i}^{\text{map}}\oplus x_{i}^{\text{motion}}$
8: $X=\\{x_{i}:\forall i\in 1\dots N\\}$
9:
$\begin{Bmatrix}Z_{\mu},Z_{\sigma}\end{Bmatrix}\leftarrow\text{Prior}_{\gamma}(X)$
$\triangleright$ Use SIM modules to output latent prior distribution
10: $Z\sim\mathcal{N}\left(Z_{\mu},Z_{\sigma}\cdot I\right)$ $\triangleright$
Sample a scene latent from a diagonal Gaussian
11: $H=\begin{Bmatrix}\text{MLP}(x_{i}\oplus z_{i}):\forall i\in 1\dots
N\end{Bmatrix}$
12: $\mathcal{Y}\leftarrow\operatorname{Decoder}_{\theta}(H)$ $\triangleright$
Use SIM module to decode actor plans
13: $Y^{t+1:t+\kappa}\leftarrow\mathcal{Y}$ $\triangleright$ Update
environment by taking the first $\kappa$ steps of the actor plans
14:return $Y^{1:T}=\begin{Bmatrix}Y^{1},Y^{2},\cdots,Y^{T}\end{Bmatrix}$
## Appendix B Additional Experiment Details
In this section, we provide additional details on baselines, metrics, and
experimental setup.
### B.1 Baselines
#### IDM [39]:
The Intelligent Driver Model (IDM) is a heuristic car-following model that
implements reactive lane-keeping behavior by following a specific headway
vehicle. We resolve the headway vehicle based on lane association and a narrow
field of view: a 30∘ sector in front of each actor, with visibility up to 10
meters. We implement traffic control as a phantom actor that has zero size. We
use a simulation frequency of 2.5Hz (0.4s per timestep), a max deceleration of
$3$ m/s2, a reaction time of $0.1$s, and a time headway of $1.5$s. To generate
diverse simulations, we sample the max acceleration in the range of
$[0.6,2.5]$ m/s2, and the desired speed in the range of $[10,20]$ m/s. Since
the motion of IDM actors is parameterized against lane centerlines, they
trivially avoid traffic rule violations. However, IDM suffers from an inherent
limitation: unlike learned models, it cannot infer traffic flow from the
initial actor states when given partially observed traffic light states, and
thus results in occasional collisions at intersections.
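For reference, a minimal sketch of the standard IDM acceleration rule [39] using the parameters above; the minimum gap s0 and the acceleration exponent are textbook defaults rather than values from this paper, the defaults for a_max and v0 merely illustrate the sampled ranges, and the reaction-time delay is omitted.

```python
# A sketch of the standard IDM acceleration rule [39]: max deceleration
# b = 3 m/s^2 and time headway T = 1.5 s follow the text; s0 = 2 m and
# delta = 4 are textbook defaults (assumptions).
import math

def idm_accel(v, v_lead, gap, a_max=2.5, b=3.0, v0=15.0, T=1.5, s0=2.0, delta=4):
    """v: own speed, v_lead: headway vehicle speed, gap: bumper-to-bumper gap (m)."""
    dv = v - v_lead                                              # closing speed
    s_star = s0 + v * T + v * dv / (2 * math.sqrt(a_max * b))    # desired gap
    return a_max * (1 - (v / v0) ** delta - (s_star / max(gap, 1e-3)) ** 2)
```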
#### MTP [17]:
Multiple Trajectory Prediction (MTP) models uncertainty over actors’ future
trajectories with a mixture of Gaussians at each prediction timestep. It does
not explicitly reason about interaction between actors as the future unrolls,
and makes a conditional independence assumption across actors. We use a
mixture of Gaussians with 16 modes. Following the training methodology
described in the original paper, we select the closest matching mode to
compute the loss, instead
of directly optimizing the mixture density.
#### ESP [35]:
ESP models multi-agent interaction by leveraging an autoregressive
formulation, where actors influence each other as the future unrolls. Due to
memory constraints, we limit the radii of the whiskers to [1, 2, 4] m while
keeping the seven angle bins. We implement the social context condition with a
minor modification: the original paper specifies a fixed number of actors
(since CARLA has a small number of actors), but this is not possible in
ATG4D since traffic scenes contain many more actors. Thus, we use k-nearest
neighbors to select $M=4$ neighbors to gather social features.
#### ILVM [11]:
We adapt ILVM from the joint perception and prediction setting by replacing
voxelized LiDAR input with rasterized actor bounding boxes. Since processing
noise-free actor bounding boxes requires less model capacity than performing
LiDAR perception, we reduce the number of convolutional layers in the backbone
to improve inference speed. We noticed no degradation in performance from
reducing the model capacity.
#### DataAug:
We follow the data augmentation technique described in [2], since it also
leverages large-scale self-driving datasets and is closest to our setting. To
factor out effects of model architecture, we use the best motion forecasting
model, ILVM, as the base architecture. In particular, for each eligible
trajectory snippet, we perturb the waypoint at the current timestep with a
probability of 50%. We consider a trajectory to be eligible if it has moved
more than 16m in the 2s window around the perturbation (i.e., speed higher
than 8m/s). We uniformly sample the perturbation distance in the range of [0,
0.5] m, and sample a random direction to perturb the waypoint. Finally, we fit
quadratic curves for both x and y as functions of time to smooth out the past
and future trajectory. We only alter waypoints up to 2 seconds before and
after the perturbation point.
#### AdversarialIL:
Inspired by [23, 5, 7, 3, 6], we implement an adversarial imitation learning
baseline by jointly learning a discriminator as the supervision for the
policy. Similarly, to factor out the effects of model architecture, we
leverage our differentiable observation modules and scene interaction module
to parameterize the policy. In particular, the extracted scene context is
directly fed to the scene interaction module to output a bivariate Gaussian of
the next waypoint for each actor in the scene. The discriminator has a similar
architecture with spectral normalization. By leveraging our differentiable
components, we enable our adversarial IL baseline to also leverage
backpropagation through simulation. This allows it to sidestep the challenge
of policy optimization, and enables a more direct comparison with our method.
For optimization, we use a separate Adam optimizer for the policy, and RMSProp
optimizer for the discriminator. Furthermore, we found it necessary to
periodically use a behavior cloning loss to stabilize training. We use a
replay buffer size of 200 and a batch size of 8 for optimizing the policy and the
discriminator. Furthermore, following [3], we use a curriculum of increasing
simulation horizon to ease optimization. More concretely, we first pre-train
with behavior cloning for 25k steps, then follow a schedule of [2,4,6,8]
simulation timesteps, increasing every 25k steps. We found that increasing the
simulation horizon further does not improve the model.
### B.2 Metrics
#### Scenario Reconstruction:
For each simulation environment (i.e., with a given map, traffic control, and
initial actor states), we define the following scenario reconstruction metrics
over the $K$ traffic scenarios sampled from the model:
$\displaystyle\mathrm{minSADE}$ $\displaystyle=\min_{k\in 1\dots
K}\frac{1}{NT_{\text{label}}}\sum_{n=1}^{N}\sum_{t=1}^{T_{\text{label}}}||y^{t}_{n,GT}-y^{t}_{n,(k)}||^{2}$
$\displaystyle\mathrm{minSFDE}$ $\displaystyle=\min_{k\in 1\dots
K}\frac{1}{N}\sum_{n=1}^{N}||y^{T_{\text{label}}}_{n,GT}-y^{T_{\text{label}}}_{n,(k)}||^{2}$
$\displaystyle\mathrm{meanSADE}$
$\displaystyle=\frac{1}{KNT_{\text{label}}}\sum_{k=1}^{K}\sum_{n=1}^{N}\sum_{t=1}^{T_{\text{label}}}||y^{t}_{n,GT}-y^{t}_{n,(k)}||^{2}$
$\displaystyle\mathrm{meanSFDE}$
$\displaystyle=\frac{1}{KN}\sum_{k=1}^{K}\sum_{n=1}^{N}||y^{T_{\text{label}}}_{n,GT}-y^{T_{\text{label}}}_{n,(k)}||^{2}$
In particular, we only evaluate up to $T_{\text{label}}=8s$ due to the
availability of ground truth actor trajectories.
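A numpy sketch of these metrics, reading the squared-norm notation above as Euclidean distance and assuming samples and ground truth are stacked into arrays:

```python
# A sketch of the scenario reconstruction metrics, given K sampled scenarios
# and ground truth trajectories for N actors over T_label timesteps.
import numpy as np

def reconstruction_metrics(samples, gt):
    """samples: (K, N, T, 2) sampled scenarios; gt: (N, T, 2) ground truth."""
    # final-timestep displacement per sample, averaged over actors (SFDE)
    fde = np.linalg.norm(samples[:, :, -1] - gt[None, :, -1], axis=-1).mean(axis=1)
    # displacement over all timesteps, averaged over actors and time (SADE)
    ade = np.linalg.norm(samples - gt[None], axis=-1).mean(axis=(1, 2))
    return {"minSFDE": fde.min(), "meanSFDE": fde.mean(),
            "minSADE": ade.min(), "meanSADE": ade.mean()}
```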
#### Interaction Reasoning:
Our scenario level collision rate metric is implemented as follows:
$\displaystyle\mathrm{SCR}=\frac{1}{NS}\sum_{s=1}^{S}\sum_{i=1}^{N}\min\left(1,\sum_{j>i}^{N}\sum_{t=1}^{T}\mathbbm{1}\left[IoU(b^{t}_{i,s},b^{t}_{j,s})>\varepsilon\right]\right)$
In particular, we consider two actors to be in collision if their bounding
boxes overlap each other with an IoU greater than a small $\varepsilon$ of
0.1. This threshold is necessary since the labeled bounding boxes are slightly
larger than the true vehicle shape, thus sometimes resulting in collisions
even in ground truth scenarios. Furthermore, we count a maximum of 1 collision
per actor; in other words, we count the number of actors in collision, rather
than the total number of collisions between pairs of actors.
#### Traffic Rule Compliance:
We leverage our high definition map with precise lane graph annotations to
evaluate traffic rule compliance. More concretely, we first obtain the
drivable areas of each actor in the scenario by performing lane association.
Then, we traverse the lane graph to derive a set of reachable lane segments
from the initial location, including neighbours and successors. Furthermore,
we cut off any connection influenced by traffic control (i.e., a red traffic
light). We rasterize the drivable area with binary values: 1 for drivable and
0 for violation. This allows us to efficiently index and calculate traffic
rule violations. To handle actors that begin outside of the mapped region
(i.e., parked vehicles on the side of the road or in a parking lot), we ignore
actors that do not have initial lane associations.
#### Diversity:
To calculate our map-aware diversity metric, we leverage the same drivable
area raster employed by the traffic rule compliance metric. More concretely,
we first filter out actor trajectory samples that violate traffic rules, then
we measure the average distance (across time) between the two most distinct
trajectory samples for each actor.
$\displaystyle\mathrm{MASD}=\max_{k,k^{\prime}\in
1...K}\frac{1}{NT}\sum_{n=1}^{N}\sum_{t=1}^{T}||y_{n,(k)}^{t}-y_{n,(k^{\prime})}^{t}||^{2}$
### B.3 Experimental Setup
#### TrafficSim for Data Augmentation:
For synthetic data generation, we generate approximately 15k examples by
initializing from a subset of training scenarios used to train the traffic
simulation models. We use the same amount of real data for fair comparison. We
use a simple imitation planner, which takes the rasterized map and actor bounding
box history as input, and directly regresses the future plan. The planning
horizon is 5s, with 0.5s per waypoint, for a total of 10 waypoints. The
imitation planner is learned with supervision from all actor trajectories in
the synthetic scenarios. For evaluation, we test on the same set of ground
truth scenarios that are used for evaluating traffic simulation metrics.
#### Incorporating Constraints at Simulation-Time:
For rejection sampling at simulation-time, we resample at most 10 times to
keep the runtime bounded. If we cannot generate enough collision-free plans
after 10 re-sampling steps, we sort all the generated samples by collision
loss, then return the minimum cost plans. To reason about potential collision
in the future, we evaluate our collision loss on the first 5 timesteps of the
sampled actor plans (i.e. 2.5s into the future). For gradient-based
optimization, we leverage our differentiable relaxation of collision loss.
Similarly, we evaluate the collision loss on the first 5 timesteps of the
actor plans (i.e. 2.5s into the future). While keeping our model frozen, we
backpropagate the gradient to optimize the latent samples $Z^{t}$. Performing
the optimization in the latent space allows us to influence the actor plans
while remaining in the model distribution. More concretely, we take 5 gradient
steps with a learning rate of 1e-2.
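A PyTorch sketch of this procedure, assuming an Adam update (the optimizer is not specified in the text) and placeholder `decoder` and `collision_loss` callables standing in for the frozen model components:

```python
# A sketch of simulation-time gradient optimization: with the model frozen,
# the scene latent is refined for 5 steps (lr 1e-2, per the text) to reduce
# the differentiable collision loss on the first 5 timesteps (2.5s) of the
# decoded actor plans. `decoder` and `collision_loss` are placeholders.
import torch

def optimize_latent(z, decoder, collision_loss, steps=5, lr=1e-2, horizon=5):
    z = z.detach().clone().requires_grad_(True)
    opt = torch.optim.Adam([z], lr=lr)  # optimizer choice is an assumption
    for _ in range(steps):
        plans = decoder(z)                              # decode plans from latent
        loss = collision_loss(plans[..., :horizon, :])  # first 5 timesteps only
        opt.zero_grad()
        loss.backward()
        opt.step()  # moving z (not the plans) keeps samples in-distribution
    return z.detach()
```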
## Appendix C Additional Qualitative Results
In this section, we showcase additional qualitative results. Please refer to
the supplementary video for animated sequences. Figure 10 and 11 show
additional traffic scenarios sampled from our model. Figure 12 shows
comparison between baselines and our model.
Figure 10: Simulated traffic scenarios (Scenarios 1–9) sampled from
TrafficSim at T = 0s, 4s, 8s, and 12s; colored triangles show headings and
track instances across time.
Figure 11: Simulated traffic scenarios (Scenarios 10–18) sampled from
TrafficSim at T = 0s, 4s, 8s, and 12s; colored triangles show headings and
track instances across time.
Figure 12: Comparison between traffic scenarios simulated by the baselines
(IDM, MTP, ESP, ILVM, DataAug, AdversarialIL) and our model on three
scenarios. We show a snapshot at 6s after the start of the simulation. Red
circles highlight collisions and traffic rule violations.
# Adversarial Attacks On Multi-Agent Communication
James Tu1,2 Tsunhsuan Wang3∗ Jingkang Wang1,2 Sivabalan Manivasagam1,2
Mengye Ren1,2 Raquel Urtasun1,2
1Waabi 2University of Toronto 3MIT
{jtu, wangjk, manivasagam, mren}<EMAIL_ADDRESS>
<EMAIL_ADDRESS>
∗Equal contribution.
###### Abstract
Growing at a fast pace, modern autonomous systems will soon be deployed at
scale, opening up the possibility for cooperative multi-agent systems. Sharing
information and distributing workloads allow autonomous agents to better
perform tasks and increase computation efficiency. However, shared information
can be modified to execute adversarial attacks on deep learning models that
are widely employed in modern systems. Thus, we aim to study the robustness of
such systems and focus on exploring adversarial attacks in a novel multi-agent
setting where communication is done through sharing learned intermediate
representations of neural networks. We observe that an indistinguishable
adversarial message can severely degrade performance, but becomes weaker as
the number of benign agents increases. Furthermore, we show that black-box
transfer attacks are more difficult in this setting when compared to directly
perturbing the inputs, as it is necessary to align the distribution of learned
representations with domain adaptation. Our work studies robustness at the
neural network level to contribute an additional layer of fault tolerance to
modern security protocols for more secure multi-agent systems.
Footnote: Work done while all authors were at Uber ATG.
## 1 Introduction
With rapid improvements of modern autonomous systems, it is only a matter of
time until they are deployed at scale, opening up the possibility of
cooperative multi-agent systems. Individual agents can benefit greatly from
shared information to better perform their tasks [26, 59]. For example, by
aggregating sensory information from multiple viewpoints, a fleet of vehicles
can perceive the world more clearly, providing significant safety benefits
[52]. Moreover, in a network of connected devices, distributed processing
across multiple agents can improve computation efficiency [18]. While
cooperative multi-agent systems are promising, relying on communication
between agents can pose security threats as shared information can be
malicious or unreliable [54, 3, 37].
Meanwhile, modern autonomous systems typically rely on deep neural networks
known to be vulnerable to adversarial attacks. Such attacks craft small and
imperceptible perturbations to drastically change a neural network’s behavior
and induce false outputs [48, 21, 8, 30]. Even if an attacker has the freedom
to send any message, such small perturbations may be the most dangerous as
they are indistinguishable from their benign counterparts, making corrupted
messages difficult to detect while still highly malicious.
While modern cyber security algorithms provide adequate protection against
communication breaches, adversarial robustness of multi-agent deep learning
models has yet to be studied. Meanwhile, when it comes to safety-critical
applications like self-driving, additional layers of redundancy and improved
security are always welcome. Thus, by studying adversarial robustness, we can
enhance modern security protocols by introducing an additional layer of fault
tolerance at the neural network level.
Figure 1: Overview of a multi-agent setting with one malicious agent (red).
Here the malicious agent attempts to sabotage a victim agent by sending an
adversarial message. The adversarial message is indistinguishable from the
original, making the attack difficult to detect.
Adversarial attacks have been studied extensively but existing approaches
mostly consider attacks on input domains like images [48, 21], point clouds
[7, 50], and text [44, 14]. On the other hand, multi-agent systems often
distribute computation across different devices and transmit intermediate
representations instead of input sensory information [52, 18]. Specifically,
when deep learning inference is distributed across different devices, agents
will communicate by transmitting feature maps, which are activations of
intermediate neural network layers. Such learned communication has been shown
to be superior due to transmitting compact but expressive messages [52] as
well as efficiently distributing computation [18].
In this paper, we investigate adversarial attacks in this novel multi-agent
setting where perturbations are applied to learned intermediate
representations. An illustration is shown in Figure 1. We conduct experiments
and showcase vulnerabilities in two highly practical settings: multi-view
perception from images in a fleet of drones and multi-view perception from
LiDAR in a fleet of self-driving vehicles (SDVs). By leveraging information
from multiple viewpoints, these multi-agent systems are able to significantly
outperform those that do not exploit communication.
We show, however, that perturbed transmissions which are indistinguishable
from the original can severely degrade the performance of receivers
particularly as the ratio of malicious to benign agents increases. With only a
single attacker, as the number of benign agents increase, attacks become
significantly weaker as aggregating more messages decreases the influence of
malicious messages. When multiple attackers are present, they can coordinate
and jointly optimize their perturbations to strengthen the attack. In terms of
defense, when the threat model is known, adversarial training is highly
effective, and adversarially trained models can defend against perturbations
almost perfectly and even slightly enhance performance on natural examples.
Without knowledge of the threat model, we can still achieve reasonable
adversarial robustness by designing more robust message aggregation modules.
We then move on to more practical attacks in a black box setting where the
model is unknown to the adversary. Since query-based black box attacks need to
excessively query a target model that is often inaccessible, we focus on
query-free transfer attacks that are more feasible in practice. However,
transfer attacks are much more difficult to execute at the feature-level than
on input domains. In particular, since perturbation domains are model
dependent, vanilla transfer attacks are ineffective because two neural
networks with the same functionality can have very different intermediate
representations. Here, we find that training the surrogate model with domain
adaptation is key to aligning the distribution of intermediate features and
achieving much better transferability. To further enhance the practicality of
attacks, we propose to exploit the temporal consistency of sensory information
processed by modern autonomous systems. When frames of sensory information are
collected milliseconds apart, we can exploit the redundancy in adjacent frames
to create efficient, low-budget attacks in an online manner.
## 2 Related Work
Figure 2: Attacking object detection proposals: False positives are created by
changing the class of background proposals and false negatives are created by
changing the class of the original proposals.
#### Multi-Agent Deep Learning Systems:
Multi-agent and distributed systems are widely employed in real-world
applications to improve computation efficiency [27, 17, 2], collaboration [52,
59, 18, 41, 42], and safety [38, 35]. Recently, autonomous systems have
improved greatly with the help of neural networks. New directions have opened
up in cooperative multi-agent deep learning systems e.g., federated learning
[27, 2]. Although multi-agent communication introduces a multitude of
benefits, communication channels are vulnerable to security breaches, as
communication channels can be attacked [34, 45], encryption algorithms can be
broken [46], and agents can be compromised [5, 61]. Thus, imperfect
communication channels may be used to execute adversarial attacks which are
especially effective against deep learning systems. While robustness has been
studied in the context of federated learning [20, 1, 56, 19], the threat
models are different as dataset poisoning and model poisoning are typically
used. To the best of our knowledge, few works study adversarial robustness on
multi-agent deep learning systems during inference.
#### Adversarial Attacks:
Adversarial attacks were first discovered in the context of image
classification [48], where a small imperceptible perturbation can drastically
change a neural network’s behaviour and induce false outputs. Such attacks
were then extended to various applications such as semantic segmentation [57]
and reinforcement learning [24]. There are two main settings for adversarial
attacks - white box and black box. In a white box setting [48, 21, 30], the
attacker has full access to the target neural network weights and adversarial
examples can be generated using gradient-based optimization to maximize the
network’s error. In contrast, black box attacks are conducted without
knowledge of the target neural network weights and therefore without any
gradient computation. In this case, attackers can leverage real world
knowledge to inject adversaries that resemble common real world objects [47,
36]. However, if the attacker is able to query the target model, the
literature proposes several different strategies to perform query-based
attacks [4, 12, 6, 10]. However, query-based attacks are infeasible for some
applications as they typically require prohibitively large amounts of queries
and computation. Apart from query-based attacks, a more practical but more
challenging alternative is to conduct transfer attacks [39, 58, 16] which do
not require querying the target model. In this setting, the attacker trains a
surrogate model that imitates the target model. By doing so, the hope is that
perturbations generated for the surrogate model will transfer to the target
model.
#### Perturbations In Feature Space:
While most works in the literature focus on input domains like images, some
prior works have considered perturbations on intermediate representations
within neural networks. Specifically, [25] estimated the projection of
adversarial gradients on a selected subspace to reduce the queries to a target
model. [40, 44, 14] proposed to generate adversarial perturbation in word
embeddings for finding adversarial but semantically-close substitution words.
[55, 60] showed that training on adversarial embeddings could improve the
robustness of Transformer-based models for NLP tasks.
## 3 Attacks On Multi-Agent Communication
This section first introduces the multi-agent framework in which agents
leverage information from multiple viewpoints by transmitting intermediate
feature maps. We then present our method for generating adversarial
perturbations in this setting. Moving on to more practical settings, we
consider black box transfer attacks and find that it is necessary to align the
distribution of intermediate representations. Here, training a surrogate model
with domain adaptation can create transferable perturbations. Finally, we show
efficient online attacks by exploiting the temporal consistency of sensory
inputs collected at high frequency.
### 3.1 Multi-Agent Communication
We consider a setting where multiple agents cooperate to better perform their
tasks by sharing observations from different viewpoints encoded via a learned
intermediate representation. Adopting prior work [52], we assume a homogeneous
set of agents using the same neural network. Then, each agent $i$ processes
sensor input $x_{i}$ to obtain an intermediate representation
$m_{i}=F(x_{i})$. The intermediate feature map is then broadcasted to other
agents in the scene. Upon receiving messages, agent $j$ will aggregate and
process all incoming messages to generate output $Z_{j}=G(m_{1},\dots,m_{N})$,
where $N$ is the number of agents. Suppose that an attacker agent $i$ targets
a victim agent $j$. Here, the attacker attempts to send an indistinguishable
adversarial message $m_{i}^{\prime}=m_{i}+\delta$ to maximize the error
in $Z_{j}^{\prime}=G(m_{1},\dots,m_{i}+\delta,\dots,m_{N})$. The perturbation
$\delta$ is constrained by $\left\lVert\delta\right\rVert_{p}\leq\epsilon$ to
ensure that the malicious message is subtle and difficult to detect. An
overview of the multi-agent setting is shown in Figure 1.
Figure 3: Our proposed transfer attack which incorporates domain adaptation
when training the surrogate model. During training, the discriminator forces
$F^{\prime}$ to produce intermediate representations similar to $F$. As a
result, $G^{\prime}$ can generate perturbations that transfer to $G$.
In this paper, we specifically focus on object detection as it is a
challenging task where aggregating information from multiple viewpoints is
particularly helpful. In addition, many downstream robotics tasks depend on
detection and thus a strong attack can jeopardize the performance of the full
system. In this case, output $Z$ is a set of $M$ bounding box proposals
$\\{z^{(1)},\dots,z^{(M)}\\}$ at different spatial locations. Each proposal
consists of class scores $z_{\sigma_{0}},\dots,z_{\sigma_{k}}$ and bounding
box parameters describing the spatial location and dimensions of the bounding
box. Here classes $0,\dots,k-1$ are the object classes and $k$ denotes the
background class where no objects are detected.
When performing detection, models try to output the correct object class
and maximize the intersection-over-union (IoU) of the proposed and
ground truth bounding boxes. In a post-processing step, proposals with high
confidence are selected and overlapping bounding boxes are filtered with non-
maximum suppression (NMS) to ideally produce a single estimate per ground
truth object.
### 3.2 Adversarial Perturbation Generation
We first introduce our loss objective for generating adversarial perturbations
against object detection. To generate false outputs, we aim to confuse the
proposal class. For detected objects, we suppress the score of the correct
class to generate false negatives. For background classes, false positives are
created by pushing up the score of an object class. In addition, we also aim
to minimize the intersection-over-union (IoU) of the bounding box proposals to
further degrade performance by producing poorly localized objects. We define
the adversarial loss of the perturbed output $z^{\prime}$ with respect to an
unperturbed output $z$ instead of the ground truth, as it may not always be
available to the attacker. For each proposal $z$, let
$u=\operatorname*{argmax}_{i}\\{z_{\sigma_{i}}|i=0\dots k\\}$ be the highest
confidence class. Given the original object proposal $z$ and the proposal
after perturbation $z^{\prime}$, our loss function tries to push $z^{\prime}$
away from $z$:
$\ell_{adv}(z^{\prime},z)=\begin{cases}-\log(1-z^{\prime}_{\sigma_{u}})\cdot\mathrm{IoU}(z^{\prime},z)&\text{if
}u\neq k\text{ and}\ z_{\sigma_{u}}>\tau^{+},\\\
-\lambda\cdot{z^{\prime}}_{\sigma_{v}}^{\gamma}\log(1-z^{\prime}_{\sigma_{v}})&\text{if
}u=k\text{ and}\ \ z_{\sigma_{u}}>\tau^{-},\\\ 0&\text{otherwise}\end{cases}$
(1)
An illustration of the attack objective is shown in Figure 2. When $u\neq k$
and the original prediction is not a background class, we apply an untargetted
loss to reduce the likelihood of the intended class. When the intended
prediction is the background class $k$, we specifically target a non-
background class $v$ to generate a false positive. We simply choose $v$ to be
the class with the highest confidence that is not the background class. The
$\mathrm{IoU}$ operator denotes the intersection over union of two proposals,
$\lambda$ is a weighting coefficient, and $\tau^{-},\tau^{+}$ filter out
proposals that are not confident enough. We provide more analysis and
ablations to justify our loss function design in our experiments.
Following prior work [50], it is necessary to minimize the adversarial loss
over all proposals. Thus, the optimal perturbation under an
$\epsilon$-$\ell_{p}$ bound is
$\delta^{\star}=\operatorname*{argmin}_{\left\lVert\delta\right\rVert_{p}\leq\epsilon}\sum_{m=1}^{M}\ell_{adv}(z^{\prime(m)},z^{(m)}).$
(2)
Our work considers an infinity norm $p=\infty$ and we minimize this loss
across all proposals using projected gradient descent (PGD) [31], clipping
$\delta$ to be within $[-\epsilon,\epsilon]$.
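A PyTorch sketch of this optimization, matching the Adam-based PGD with learning rate 0.1 and clipping described in Section 4.1; `model` and `adv_loss` are placeholders for the victim's aggregation network and the summed loss of Eq. (1):

```python
# A sketch of the white-box attack of Eq. (2): PGD with an Adam update on a
# feature-map perturbation, projected onto the l_inf ball after every step.
# `model(m_i, other_msgs)` and `adv_loss` are placeholder interfaces.
import torch

def attack_message(m_i, other_msgs, model, adv_loss, eps=0.1, steps=15, lr=0.1):
    delta = torch.zeros_like(m_i, requires_grad=True)
    opt = torch.optim.Adam([delta], lr=lr)
    with torch.no_grad():
        z_clean = model(m_i, other_msgs)      # unperturbed proposals z
    for _ in range(steps):
        z_pert = model(m_i + delta, other_msgs)
        loss = adv_loss(z_pert, z_clean)      # push z' away from z, Eq. (1)
        opt.zero_grad()
        loss.backward()
        opt.step()
        with torch.no_grad():
            delta.clamp_(-eps, eps)           # project onto [-eps, eps]
    return (m_i + delta).detach()             # indistinguishable message m'
```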
### 3.3 Transfer Attack
We also consider transfer attacks as they are the most practical. White box
attacks assume access to the victim model’s weights which is difficult to
obtain in practice. On the other hand, query-based optimization is too
expensive to execute in real time as state-of-the-art methods still require
thousands of queries [13, 11] on CIFAR-10. Instead, when we do not have access
to the weights of the victim model $G$, we can imitate it with a surrogate
model $G^{\prime}$ such that perturbations generated by the surrogate model
can transfer to the target model.
Figure 4: Two multi-agent datasets we use. On the left are images of ShapeNet
objects taken from different view points. On the right are LiDAR sweeps by
different vehicles in the same scene.
One major challenge for transfer attacks in our setting is that perturbations
are generated on intermediate feature maps. Our experiments show that vanilla
transfer attacks are almost completely ineffective as two networks with the
same functionality do not necessarily have the same intermediate
representations. When training $F$ and $G$, there is no direct supervision on
the intermediate features $m=F(x)$. Therefore, even with the same
architecture, dataset, and training schedule, a surrogate $F^{\prime}$ may
produce messages $m^{\prime}$ with very different distribution from $m$. As an
example, a permutation of feature channels carries the same information but
results in a different distribution. In general, different random seeds,
network initializations or non-deterministic GPU operations can result in
different intermediate representations. It follows that if $m^{\prime}$ does
not faithfully replicate $m$, we cannot expect $G^{\prime}$ to imitate $G$.
Thus, to execute transfer attacks, we must have access to samples of the
intermediate feature maps. Specifically, we consider a scenario where the
attacker can spy on the victim’s communication channel to obtain transmitted
messages. However, since sensory information is not transmitted, the attacker
does not have access to pairs of input $x$ and intermediate representation $m$
to directly supervise the surrogate $F^{\prime}$ via distillation. Thus, we
propose to use Adversarial Discriminative Domain Adaptation (ADDA) [51] to
align the distribution of $m$ and $m^{\prime}$ without explicit input-feature
pairs. An overview is shown in Figure 3.
In the original training pipeline, $F^{\prime}$ and $G^{\prime}$ would be
trained to minimize the task loss
$\mathcal{L}_{task}(z,y,b)=\begin{cases}-\log(z_{\sigma_{y}})-\mathrm{IoU}(z,b)&\text{if
}y\neq k,\\ -\log(z_{\sigma_{y}})&\text{if }y=k\end{cases}$ (3)
where $b$ is a ground truth bounding box and $y$ is its class. The task loss
maximizes the log likelihood of the correct class and the IoU between the
proposal box and the ground truth box. In addition, we encourage domain
adaptation by introducing a discriminator $D$ to distinguish between real
messages $m$ and surrogate messages $m^{\prime}$. The three modules
$F^{\prime}$, $G^{\prime}$, and $D$ can be optimized using the following min-
max criterion:
$\min\limits_{F^{\prime},G^{\prime}}\max\limits_{D}\;\mathcal{L}_{task}(x)+\beta\big[\log
D(F(x))+\log(1-D(F^{\prime}(x)))\big]$ (4)
where $\beta$ is a weighting coefficient and we use binary cross entropy loss
to supervise the discriminator. During training, we adopt spectral
normalization [33] in the discriminator and the two-time update rule [22] for
stability.
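One training step of this criterion can be sketched as follows, assuming the discriminator ends in a sigmoid and that the spied victim messages need not be paired with the surrogate's inputs:

```python
# A sketch of one surrogate training step with domain adaptation (Eq. 4): D
# learns to separate victim messages m = F(x) from surrogate messages
# m' = F'(x), while F' and G' minimize the task loss plus a fooling term.
# beta = 0.01 follows Section 4.1; all module interfaces are placeholders.
import torch
import torch.nn.functional as F

def surrogate_step(x, targets, m_victim, Fs, Gs, D, opt_model, opt_disc,
                   task_loss, beta=0.01):
    m_surr = Fs(x)
    # discriminator update: real = spied victim messages, fake = surrogate
    d_real = D(m_victim.detach())   # D is assumed to output a probability
    d_fake = D(m_surr.detach())
    d_loss = F.binary_cross_entropy(d_real, torch.ones_like(d_real)) + \
             F.binary_cross_entropy(d_fake, torch.zeros_like(d_fake))
    opt_disc.zero_grad(); d_loss.backward(); opt_disc.step()
    # model update: task loss plus beta-weighted term for fooling D
    d_fake = D(m_surr)
    g_loss = task_loss(Gs(m_surr), targets) + \
             beta * F.binary_cross_entropy(d_fake, torch.ones_like(d_fake))
    opt_model.zero_grad(); g_loss.backward(); opt_model.step()
```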
### 3.4 Online Attack
In modern applications of autonomous systems, consecutive frames of sensory
information are typically collected only milliseconds apart. Thus, there is a
large amount of redundancy between consecutive frames which can be exploited
to achieve more efficient adversarial attacks. Following previous work [53] in
images, we propose to exploit this redundancy by using the perturbation from
the previous time step as initialization for the current time step.
Furthermore, we note that intermediate feature maps capture the spatial
context of sensory observations, which change due to the agent’s egomotion.
Therefore, by applying a rigid transformation on the perturbation at every
time step to account for egomotion, we can generate stronger perturbations
that are synchronized with the movement of sensory observations relative to
the agent. In this case, the perturbations are updated as follows:
$\delta^{(t+1)}\leftarrow H_{t\rightarrow t+1}(\delta^{(t)})\
-\alpha\nabla_{H_{t\rightarrow
t+1}(\delta)}\mathcal{L}_{adv}(Z^{\prime(t+1)},Z^{(t+1)}).$ (5)
Here $H_{t\rightarrow t+1}$ is a rigid transformation mapping the attacker’s
pose at time $t$ to $t+1$ and $\alpha$ is the step size. By leveraging
temporal consistency we can generate strong perturbations with only one
gradient update per time step, making online attacks more feasible.
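A sketch of one online update, using a signed-gradient (FGSM-style) step as in the single-update evaluation of Section 4; `warp_t_to_t1` is a placeholder for the rigid transform $H_{t\rightarrow t+1}$ applied to the BEV perturbation grid:

```python
# A sketch of the online update of Eq. (5): warp the previous perturbation by
# the attacker's egomotion, then take a single gradient step. The sign() step
# is an FGSM-style choice; `warp_t_to_t1`, `model`, `adv_loss` are placeholders.
import torch

def online_step(delta_prev, warp_t_to_t1, m_i, other_msgs, model, adv_loss,
                eps=0.1, alpha=0.1):
    delta = warp_t_to_t1(delta_prev).detach().requires_grad_(True)
    with torch.no_grad():
        z_clean = model(m_i, other_msgs)
    loss = adv_loss(model(m_i + delta, other_msgs), z_clean)
    loss.backward()
    # one cheap update per frame, reusing the warped previous perturbation
    delta = (delta - alpha * delta.grad.sign()).clamp(-eps, eps)
    return delta.detach()
```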
## 4 Experiments
### 4.1 Multi-Agent Settings
Figure 5: Qualitative attack examples. Top: Messages sent by another agent
visualized in bird’s eye view. Bottom: outputs. Perturbations are very subtle
but severely degrade performance.
#### Multi-View ShapeNet:
We conduct our attacks on multi-view detection from images, which is a common
task for a fleet of drones. Following prior work [15], we generate a
synthetic dataset by placing 10 classes of ShapeNet [9] objects on a table
(see Figure 4). From each class, we subsample 50 meshes and use a 40/10 split
for training and validation. In every scene, we place 4 to 8 objects and
perform collision checking to ensure objects do not overlap. Then, we capture
128$\times$128 RGB-D images from 2 to 7 viewpoints sampled from the upper half
of a sphere centered at the table center with a radius of 2.0 units. This
dataset consists of 50,000 training scenes and 10,000 validation scenes. When
conducting attacks, we randomly sample one of the agents to be the adversary.
Our detection model uses an architecture similar to the one introduced in
[15]. Specifically, we process input RGB-D images using a U-Net [43] and then
unproject the features into 3D using the depth measures. Features from all
agents are then warped into the same coordinate frame and aggregated with mean
pooling. Finally, aggregated features are processed by a 3D U-Net and a
detection header to generate 3D bounding box proposals.
#### Vehicle To Vehicle Communication:
We also consider a self-driving setting with vehicle-to-vehicle (V2V)
communication. Here, we adopt the dataset used in [52], where 3D
reconstructions of logs of real world LiDAR scans are simulated from the
perspectives of other vehicles in the scene using a high-fidelity LiDAR
simulator [32]. These logs are collected by self-driving vehicles equipped
with LiDAR sensors capturing 10 frames per second (see Figure 4). The training
set consists of 46,796 subsampled frames from the logs and we do not subsample
the validation set, resulting in 96,862 frames. In every log we select one
attacker vehicle and sample others to be cooperative agents with up to 7
agents in each frame unless otherwise specified. This results in a consistent
assignment of attackers and V2V agents throughout the frames. In this setting,
we use the state-of-the-art perception and motion forecasting model V2VNet
[52]. Here, LiDAR inputs are first encoded into bird’s eye view (BEV) feature
maps. Feature maps from all agents are then warped into the ego coordinate
frame and aggregated with a GNN to produce BEV bounding box proposals. More
details of the ShapeNet model and V2VNet are provided in the supplementary
material.
Figure 6: Evaluation under no perturbation, uniform noise, transfer attack,
and white box attack. Results are grouped by the number of agents in the scene
where one agent is the attacker.
#### Implementation Details:
When conducting attacks, we set $\epsilon=0.1$. For the proposed loss
function, we set $\lambda=0.2,\tau^{-}=0.7,\tau^{+}=0.3$, and $\gamma=1$.
Projected gradient descent is done using Adam with learning rate $0.1$ and we
apply $15$ PGD steps for ShapeNet and only $1$ PGD step for low budget online
attacks in the V2V setting. The surrogate models use the same architecture and
dataset as the victim models. When training the surrogate model, we set
$\beta=0.01$, model learning rate $0.001$, and discriminator learning rate
$0.0005$. For evaluation, we compute area under the precision-recall curve of
bounding boxes, where bounding boxes are correct if they have an IoU greater
than 0.7 with a ground truth box of the same class. We refer to this metric as
AP at 0.7 in the following.
### 4.2 Results
#### Attack Results:
Visualizations of our attack are shown in Figure 5 and we present quantitative
results of our attack and baselines in Figure 6. We split up the evaluation by
the number of agents in the scene and one of the agents is always an attacker.
As a baseline, we sample the perturbation from
$\mathcal{U}{(-\epsilon,\epsilon)}$ to demonstrate that the same $\epsilon$
bounded uniform noise does not have any impact on detection performance. The
white box attack is especially strong when few agents are in the scene, but
becomes weaker as the number of benign agents increase, causing the relative
weight of the adversarial features in mean pooling layers to decrease.
Finally, our transfer attack with domain adaptation achieves moderate success
with few agents in the scene, but is significantly weaker than the white box
attack.
| ShapeNet | V2V
---|---|---
Clean | Perturbed | Clean | Perturbed
Original | 66.33 | 0.62 | 82.19 | 7.55
Adv Trained | 67.29 | 66.00 | 82.60 | 83.44
Table 1: Results of adversarial training. Robustness increases significantly,
matching clean inference. Furthermore performance on clean data also improves
slightly.
#### Robustifying Models:
To defend against our proposed attack, we conduct adversarial training against
the white box adversary and show the results in Table 1. Here, we follow the
standard adversarial training set up, except perturbations are applied to
intermediate features instead of inputs. This objective can be formulated as
$\min_{\theta}\mathbb{E}_{(x,y)\sim
D}\max_{\left\lVert\delta\right\rVert_{\infty}<\epsilon}\phi((x,y,\delta);\theta):=\mathcal{L}_{task}\left(G(F(x_{0}),\dots,F(x_{i})+\delta,\dots,F(x_{N});\theta)\right)$
(6)
where $D$ is the natural training distribution and $\theta$ denotes model
parameters. During training, we generate a new perturbation $\delta$ for each
training sample. In the multi-agent setting, we find it easier to recover from
adversarial perturbations when compared to traditional single-agent attacks.
Moreover, adversarial training is able to slightly improve performance on
clean data as well, while adversarial training has been known to hurt natural
performance in previous settings [28, 49].
| Clean | Perturbed
---|---|---
Agents | 2 | 4 | 6 | 2 | 4 | 6
Mean Pool | 82.09 | 89.25 | 92.43 | 0.90 | 12.93 | 41.77
GNN(Mean) | 82.19 | 89.93 | 92.94 | 7.55 | 52.31 | 76.18
GNN(Median) | 82.11 | 87.12 | 90.75 | 12.8 | 67.70 | 86.30
GNN(Soft Med) | 82.19 | 89.67 | 92.49 | 21.53 | 61.37 | 84.99
Table 2: Choice of fusion in V2VNet affects performance and robustness. We
investigate using mean pooling and using a GNN with various aggregation
methods.
While adversarial training is effective in this setting, it requires knowledge
of the threat model. When the threat model is unknown, we can still naturally
boost the robustness of multi-agent models with the design of the aggregation
module. Specifically, we consider several alternatives to V2VNet’s GNN fusion
and present the performance under attacked and clean data in Table 2. First,
replacing the entire GNN with an adaptive mean pooling layer significantly
decreases robustness. On the other hand, we swap out the mean pooling in GNN
nodes with median pooling and find that it increases robustness at the cost of
performance on clean data with more agents, since more information is
discarded. We refer readers to the supplementary materials for more details on
implementation of the soft median pooling.
#### Multiple Attackers:
We previously focused on settings with one attacker, and now conduct
experiments with multiple attackers in the V2V setting. In each case, we also
consider if attackers are able to cooperate. In cooperation, attackers jointly
optimize their perturbations. Without cooperation, attackers are blind to each
other and optimize their perturbations assuming other messages have not been
perturbed. Results with up to 3 attackers are shown in Table 3. As expected,
more attackers can increase the strength of the attack significantly;
furthermore, if multiple attackers can coordinate, a stronger attack can be
generated.
| Cooperative | Non-Cooperative
---|---|---
Agents | 4 | 5 | 6 | 4 | 5 | 6
1 Attacker | 52.31 | 65.00 | 76.18 | 52.31 | 65.00 | 76.18
2 Attacker | 28.31 | 41.34 | 54.50 | 39.02 | 51.96 | 64.02
3 Attacker | 12.07 | 22.84 | 35.13 | 24.27 | 38.17 | 51.58
Table 3: Multiple white box attackers in the V2V setting. Cooperative
attackers jointly optimize their perturbations and non-cooperative attackers
optimize without knowledge of each other.
Next, we apply adversarial training to the multi-attacker setting and present
results in Table 4. Here, all attacks are done in the cooperative setting and
we show results with 4 total agents. Similar to the single attacker setting,
adversarial training is highly effective. However, while adversarial training
against one attacker improves performance in natural examples, being robust to
stronger attacks sacrifices performance on natural examples. This suggests
that adversarial training has the potential to improve general performance
when an appropriate threat model is selected. Furthermore, we can see that
training on fewer attacks does not generalize perfectly to more attackers but
the opposite is true. Thus, it is necessary to train against an equal or
greater threat model to fully defend against such attacks.
Attackers | 0 | 1 | 2 | 3
---|---|---|---|---
Train On 0 | 89.93 | 52.31 | 28.31 | 12.07
Train On 1 | 90.09 | 90.00 | 81.95 | 75.28
Train On 2 | 89.71 | 89.68 | 88.91 | 88.33
Train On 3 | 89.55 | 89.51 | 88.94 | 88.51
Table 4: Adversarial training with multiple attackers in the V2V setting. We
train on settings with various number of attackers and evaluate the models
across the settings.
#### Domain Adaptation:
More results of the transfer attack are included in Table 5. First, we conduct
an ablation and show that a transfer attack without domain adaptation (DA) is
almost completely ineffective. On the contrary, surrogate models trained with
DA achieve significant improvements. A visual demonstration of feature map
alignment with DA is shown in Figure 7, which visualizes 4 channels of the
intermediate feature maps. Features from a surrogate trained with DA are
visually very similar to those of the victim, while a surrogate trained
without DA produces features with no resemblance.
Since our proposed DA improves the transferability of the surrogate model, we
can further improve our transfer attack by also adopting methods from the
literature which enhance the transferability of a given perturbation. We find
that generating perturbations from diversified inputs (DI) [58] is
ineffective, as resizing input feature maps distorts spatial information that
is important for localizing objects. On the other hand, using an intermediate
level attack projection (ILAP) [23] yields a small improvement. Overall, we
find transfer attacks more challenging at the feature level. In standard
attacks on sensory inputs, perturbations are transferred within the same input
domain; at the feature level, however, input domains are model-dependent,
making transfer attacks between different models more difficult.
Setting | ShapeNet | V2V
---|---|---
Clean | 66.28 | 82.19
Transfer | 66.21 | 81.31
Transfer + DA | 42.59 | 72.45
Transfer + DA + ILAP | 35.69 | 71.76
Transfer + DA + DI | 49.38 | 75.18
Table 5: Transfer attacks evaluated with 2 agents. Training the surrogate with
domain adaptation (DA) significantly improves transferability. In addition, we
attempt to enhance transferability with ILAP [23] and DI [58].
Figure 7: Visualization of how domain adaptation (DA) affects 4 channels of the
intermediate feature map. Observe that the surrogate trained with DA closely
imitates the victim model, while the surrogate trained without DA produces
different features.
#### Online Attacks:
We conduct an ablation on the proposed methods for exploiting temporal
redundancy in an online V2V setting, shown in Table 6. First, if we ignore
temporal redundancy and do not reuse the previous perturbation, attacks are
much weaker. In this evaluation, we switch from PGD [31] to FGSM [21], which
takes a single larger step, so that one update still yields a strong
perturbation and the comparison is fair. We also show that applying a rigid
transformation to the perturbation at every frame to compensate for egomotion
provides a modest improvement over the No Warp ablation.
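The following sketch illustrates the warm-start idea under the assumption that egomotion is available as a 2x3 affine matrix in normalized bird's-eye-view coordinates; the function names, the `adv_loss_fn` interface, and the step sizes are illustrative, not our exact implementation.

```python
import torch
import torch.nn.functional as F

def warp_perturbation(delta_prev, ego_transform):
    """Rigidly transform the previous frame's perturbation to compensate
    for egomotion. delta_prev: (C, H, W); ego_transform: (2, 3) affine."""
    grid = F.affine_grid(ego_transform.unsqueeze(0),
                         list(delta_prev.unsqueeze(0).shape),
                         align_corners=False)
    return F.grid_sample(delta_prev.unsqueeze(0), grid,
                         align_corners=False).squeeze(0)

def online_attack_step(delta_prev, ego_transform, feature, model_G,
                       adv_loss_fn, eps=0.1, alpha=0.05):
    # Warm-start from the warped previous perturbation, then one update.
    delta = warp_perturbation(delta_prev, ego_transform).clamp(-eps, eps)
    delta.requires_grad_(True)
    loss = adv_loss_fn(model_G(feature + delta))
    (grad,) = torch.autograd.grad(loss, delta)
    return (delta + alpha * grad.sign()).clamp(-eps, eps).detach()
```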
#### Loss Function Design:
We conduct an ablation study on using our adversarial loss $\mathcal{L}_{adv}$
instead of the negative task loss $-\mathcal{L}_{task}$ in Table 7. This
ablation validates our loss function and showcases that, for structured
outputs, a properly designed adversarial loss is more effective than the naive
negative task loss widely used in image classification tasks. Our loss
function design is motivated by our knowledge of the post-processing
non-maximum suppression (NMS). Since NMS selects bounding boxes with the
highest confidence in a local region, proposals with higher scores should
receive stronger gradients. More specifically, an appropriate loss function
$f$ of the proposal score $\sigma$ should satisfy
$\left(|\nabla_{\sigma_{2}}f(\sigma_{2})|-|\nabla_{\sigma_{1}}f(\sigma_{1})|\right)/{(\sigma_{2}-\sigma_{1})}>0$
so that $|\nabla_{\sigma}f(\sigma)|$ is monotonically increasing in $\sigma$.
The standard log likelihood does not satisfy this criterion, which explains
why our loss formulation is more effective. In addition, we add the focal loss
term [29] to generate more false positives, as aggressively focusing on one
proposal in a local region is more effective due to NMS.
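A quick numerical check of this criterion is sketched below; the squared-score loss is only an illustrative function that satisfies the monotonicity condition, not our actual $\mathcal{L}_{adv}$.

```python
import torch

def grad_mag(f, sigma):
    """Magnitude of d f(sigma) / d sigma at a given score."""
    s = torch.tensor(sigma, requires_grad=True)
    f(s).backward()
    return s.grad.abs().item()

# Log likelihood: |d/ds (-log s)| = 1/s is *decreasing* in s, so
# high-confidence proposals get the weakest gradients (criterion violated).
nll = lambda s: -torch.log(s)
print(grad_mag(nll, 0.9), grad_mag(nll, 0.5))   # ~1.11 < 2.0

# A squared-score loss has |d/ds (s^2)| = 2s, increasing in s as required.
sq = lambda s: s ** 2
print(grad_mag(sq, 0.9), grad_mag(sq, 0.5))     # 1.8 > 1.0
```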
AP @ 0.7 | 2 Agents | 4 Agents | 6 Agents
---|---|---|---
Our Attack | 7.55 | 52.31 | 76.18
No Warping | 7.17 | 52.35 | 77.37
Independent | 56.98 | 80.21 | 87.05
Table 6: Ablation on online attacks in the V2V setting. Independent refers to
treating each frame independently and not reusing previous perturbations. No
warp refers to omitting the rigid transformation to account for egomotion.
Dataset | Loss | 2 Agents | 4 Agents | 6 Agents
---|---|---|---|---
ShapeNet | $-\mathcal{L}_{task}$ | 6.10 | 20.07 | 29.00
ShapeNet | $\mathcal{L}_{adv}$ | 0.37 | 4.45 | 13.77
V2V | $-\mathcal{L}_{task}$ | 20.8 | 63.82 | 79.11
V2V | $\mathcal{L}_{adv}$ | 7.55 | 52.31 | 76.18
Table 7: Ablation on the loss function; our adversarial loss produces stronger attacks than simply using the negative of the training task loss.
## 5 Conclusion
In this paper, we investigate adversarial attacks on communication in multi-
agent deep learning systems. Our experiments in two practical settings
demonstrate that compromised communication channels can be used to execute
adversarial attacks. However, robustness increases as the ratio of benign to
malicious actors increases. Furthermore, we found that the more practical
transfer attacks are challenging in this setting and require aligning the
distributions of intermediate representations.
achieve efficient and practical online attacks by exploiting temporal
consistency of sensory inputs. We believe studying adversarial robustness on
multi-agent deep learning models in real-world applications is an important
step towards more secure multi-agent systems.
## References
* [1] Arjun Nitin Bhagoji, Supriyo Chakraborty, Prateek Mittal, and Seraphin B. Calo. Analyzing federated learning through an adversarial lens. In ICML, volume 97 of Proceedings of Machine Learning Research, pages 634–643. PMLR, 2019.
* [2] Keith Bonawitz, Hubert Eichner, Wolfgang Grieskamp, Dzmitry Huba, Alex Ingerman, Vladimir Ivanov, Chloé Kiddon, Jakub Konecný, Stefano Mazzocchi, H. Brendan McMahan, Timon Van Overveldt, David Petrou, Daniel Ramage, and Jason Roselander. Towards federated learning at scale: System design. CoRR, 2019.
* [3] Niklas Borselius. Mobile agent security. Electronics & Communication Engineering Journal, 14(5), 2002.
* [4] Wieland Brendel, Jonas Rauber, and Matthias Bethge. Decision-based adversarial attacks: Reliable attacks against black-box machine learning models. In ICLR, 2018.
* [5] Thomas Brewster. Watch Chinese hackers control Tesla’s brakes from 12 miles away, 2016.
* [6] Thomas Brunner, Frederik Diehl, Michael Truong-Le, and Alois Knoll. Guessing smart: Biased sampling for efficient black-box adversarial attacks. CoRR, abs/1812.09803, 2018.
* [7] Yulong Cao, Chaowei Xiao, Benjamin Cyr, Yimeng Zhou, Won Park, Sara Rampazzi, Qi Alfred Chen, Kevin Fu, and Z. Morley Mao. Adversarial sensor attack on lidar-based perception in autonomous driving. In CCS, 2019.
* [8] Nicholas Carlini and David Wagner. Towards evaluating the robustness of neural networks. In SP, 2017.
* [9] Angel X. Chang, Thomas Funkhouser, Leonidas Guibas, Pat Hanrahan, Qixing Huang, Zimo Li, Silvio Savarese, Manolis Savva, Shuran Song, Hao Su, Jianxiong Xiao, Li Yi, and Fisher Yu. ShapeNet: An Information-Rich 3D Model Repository. arXiv preprint arXiv:1512.03012, 2015.
* [10] Jianbo Chen, Michael I Jordan, and Martin J Wainwright. Hopskipjumpattack: A query-efficient decision-based attack. arXiv preprint arXiv:1904.02144, 3, 2019.
* [11] Jianbo Chen, Michael I. Jordan, and Martin J. Wainwright. Hopskipjumpattack: A query-efficient decision-based attack. In IEEE Symposium on Security and Privacy, pages 1277–1294. IEEE, 2020.
* [12] Pin-Yu Chen, Huan Zhang, Yash Sharma, Jinfeng Yi, and Cho-Jui Hsieh. ZOO: zeroth order optimization based black-box attacks to deep neural networks without training substitute models. In AISec, 2017.
* [13] Minhao Cheng, Thong Le, Pin-Yu Chen, Huan Zhang, JinFeng Yi, and Cho-Jui Hsieh. Query-efficient hard-label black-box attack: An optimization-based approach. In ICLR, 2019.
* [14] Minhao Cheng, Jinfeng Yi, Huan Zhang, Pin-Yu Chen, and Cho-Jui Hsieh. Seq2sick: Evaluating the robustness of sequence-to-sequence models with adversarial examples. CoRR, abs/1803.01128, 2018.
* [15] Ricson Cheng, Ziyan Wang, and Katerina Fragkiadaki. Geometry-aware recurrent neural networks for active visual recognition. In NeurIPS. 2018.
* [16] Shuyu Cheng, Yinpeng Dong, Tianyu Pang, Hang Su, and Jun Zhu. Improving black-box adversarial attacks with a transfer-based prior. In NeurIPS, 2019.
* [17] Tharam S. Dillon, Chen Wu, and Elizabeth Chang. Cloud computing: Issues and challenges. In AINA, 2010.
* [18] Amir Erfan Eshratifar and Massoud Pedram. Energy and performance efficient computation offloading for deep neural networks in a mobile cloud computing environment. In GLSVLSI, 2018.
* [19] Minghong Fang, Xiaoyu Cao, Jinyuan Jia, and Neil Zhenqiang Gong. Local model poisoning attacks to byzantine-robust federated learning. In USENIX Security Symposium, pages 1605–1622. USENIX Association, 2020.
* [20] Avishek Ghosh, Justin Hong, Dong Yin, and Kannan Ramchandran. Robust federated learning in a heterogeneous environment. arXiv preprint arXiv:1906.06629, 2019.
* [21] Ian J Goodfellow, Jonathon Shlens, and Christian Szegedy. Explaining and harnessing adversarial examples. ICLR, 2015.
* [22] Martin Heusel, Hubert Ramsauer, Thomas Unterthiner, Bernhard Nessler, and Sepp Hochreiter. Gans trained by a two time-scale update rule converge to a local nash equilibrium. In NIPS, 2017.
* [23] Qian Huang, Isay Katsman, Horace He, Zeqi Gu, Serge J. Belongie, and Ser-Nam Lim. Enhancing adversarial example transferability with an intermediate level attack. CoRR, abs/1907.10823, 2019.
* [24] Sandy Huang, Nicolas Papernot, Ian Goodfellow, Yan Duan, and Pieter Abbeel. Adversarial attacks on neural network policies. 2017.
* [25] Linxi Jiang, Xingjun Ma, Shaoxiang Chen, James Bailey, and Yu-Gang Jiang. Black-box adversarial attacks on video recognition models. In ACM MM, 2019.
* [26] Jakub Konecný, H. Brendan McMahan, Felix X. Yu, Peter Richtárik, Ananda Theertha Suresh, and Dave Bacon. Federated learning: Strategies for improving communication efficiency. CoRR, abs/1610.05492, 2016.
* [27] Jakub Konecný, H. Brendan McMahan, Felix X. Yu, Peter Richtárik, Ananda Theertha Suresh, and Dave Bacon. Federated learning: Strategies for improving communication efficiency. CoRR, 2016.
* [28] Saehyung Lee, Hyungyu Lee, and Sungroh Yoon. Adversarial vertex mixup: Toward better adversarially robust generalization. In CVPR, pages 269–278. IEEE, 2020.
* [29] Tsung-Yi Lin, Priya Goyal, Ross Girshick, Kaiming He, and Piotr Dollár. Focal loss for dense object detection. In ICCV, 2017.
* [30] Aleksander Madry, Aleksandar Makelov, Ludwig Schmidt, Dimitris Tsipras, and Adrian Vladu. Towards deep learning models resistant to adversarial attacks. arXiv preprint arXiv:1706.06083, 2017.
* [31] Aleksander Madry, Aleksandar Makelov, Ludwig Schmidt, Dimitris Tsipras, and Adrian Vladu. Towards deep learning models resistant to adversarial attacks. arXiv preprint arXiv:1706.06083, 2017.
* [32] Sivabalan Manivasagam, Shenlong Wang, Kelvin Wong, Wenyuan Zeng, Wei-Chiu Ma, Mikita Sazanovich, Bin Yang, and Raquel Urtasun. Lidarsim: Realistic lidar simulation by leveraging the real world. In CVPR, 2020.
* [33] Takeru Miyato, Toshiki Kataoka, Masanori Koyama, and Yuichi Yoshida. Spectral normalization for generative adversarial networks. arXiv preprint arXiv:1802.05957, 2018.
* [34] Bassem Mokhtar and Mohamed Azab. Survey on security issues in vehicular ad hoc networks. Alexandria engineering journal, 54(4):1115–1126, 2015.
* [35] Satoshi Nakamoto. Bitcoin: A peer-to-peer electronic cash system. Technical report, 2019.
* [36] Ben Nassi, Dudi Nassi, Raz Ben-Netanel, Yisroel Mirsky, Oleg Drokin, and Yuval Elovici. Phantom of the adas: Phantom attacks on driver-assistance systems. IACR, 2020:85, 2020.
* [37] Petr Novák, Milan Rollo, Jirí Hodík, and Tomás Vlcek. Communication security in multi-agent systems. In CEEMAS, volume 2691 of Lecture Notes in Computer Science. Springer, 2003.
* [38] Marcus Obst, Laurens Hobert, and Pierre Reisdorf. Multi-sensor data fusion for checking plausibility of v2v communications by vision-based multiple-object tracking. In VNC, 2014.
* [39] Nicolas Papernot, Patrick D. McDaniel, Ian J. Goodfellow, Somesh Jha, Z. Berkay Celik, and Ananthram Swami. Practical black-box attacks against machine learning. In AsiaCCS, 2017.
* [40] Nicolas Papernot, Patrick D. McDaniel, Ananthram Swami, and Richard E. Harang. Crafting adversarial input sequences for recurrent neural networks. In MILCOM, 2016.
* [41] Andreas Rauch, Felix Klanner, Ralph Rasshofer, and Klaus Dietmayer. Car2x-based perception in a high-level fusion architecture for cooperative perception systems. In IV, 2012.
* [42] Matthias Rockl, Thomas Strang, and Matthias Kranz. V2v communications in automotive multi-sensor multi-target tracking. In VTC, 2008.
* [43] Olaf Ronneberger, Philipp Fischer, and Thomas Brox. U-net: Convolutional networks for biomedical image segmentation. In MICCAI, 2015.
* [44] Motoki Sato, Jun Suzuki, Hiroyuki Shindo, and Yuji Matsumoto. Interpretable adversarial perturbation in input embedding space for text. In IJCAI, 2018.
* [45] Hichem Sedjelmaci and Sidi Mohammed Senouci. An accurate and efficient collaborative intrusion detection framework to secure vehicular networks. Computers & Electrical Engineering, 43, 2015.
* [46] Catherine Stupp and James Rundle. Capital one breach highlights shortfalls of encryption, 2019.
* [47] Jiachen Sun, Yulong Cao, Qi Alfred Chen, and Z Morley Mao. Towards robust lidar-based perception in autonomous driving: General black-box adversarial sensor attack and countermeasures. In USENIX Security, pages 877–894, 2020.
* [48] Christian Szegedy, Wojciech Zaremba, Ilya Sutskever, Joan Bruna, Dumitru Erhan, Ian Goodfellow, and Rob Fergus. Intriguing properties of neural networks. ICLR, 2014.
* [49] Dimitris Tsipras, Shibani Santurkar, Logan Engstrom, Alexander Turner, and Aleksander Madry. Robustness may be at odds with accuracy. In ICLR (Poster). OpenReview.net, 2019.
* [50] James Tu, Mengye Ren, Sivabalan Manivasagam, Ming Liang, Bin Yang, Richard Du, Frank Cheng, and Raquel Urtasun. Towards physically realistic adversarial examples for lidar object detection. arXiv, 2020.
* [51] Eric Tzeng, Judy Hoffman, Kate Saenko, and Trevor Darrell. Adversarial discriminative domain adaptation. In CVPR, 2017.
* [52] Tsun-Hsuan Wang, Sivabalan Manivasagam, Ming Liang, Bin Yang, Wenyuan Zeng, and Raquel Urtasun. V2vnet: Vehicle-to-vehicle communication for joint perception and prediction. arXiv, 2020.
* [53] Xingxing Wei, Jun Zhu, Sha Yuan, and Hang Su. Sparse adversarial perturbations for videos. In AAAI, 2019.
* [54] H. Chi Wong and Katia P. Sycara. Adding security and trust to multiagent systems. Applied Artificial Intelligence, 14(9):927–941, 2000.
* [55] Yi Wu, David Bamman, and Stuart J. Russell. Adversarial training for relation extraction. In EMNLP, 2017.
* [56] Chulin Xie, Keli Huang, Pin-Yu Chen, and Bo Li. DBA: distributed backdoor attacks against federated learning. In ICLR. OpenReview.net, 2020.
* [57] Cihang Xie, Jianyu Wang, Zhishuai Zhang, Yuyin Zhou, Lingxi Xie, and Alan Yuille. Adversarial examples for semantic segmentation and object detection. In ICCV, 2017.
* [58] Cihang Xie, Zhishuai Zhang, Yuyin Zhou, Song Bai, Jianyu Wang, Zhou Ren, and Alan L. Yuille. Improving transferability of adversarial examples with input diversity. In CVPR, 2019.
* [59] Tengchan Zeng, Mohammad Mozaffari, Omid Semiari, Walid Saad, Mehdi Bennis, and Merouane Debbah. Wireless communications and control for swarms of cellular-connected uavs. In ACSSC, 2018.
* [60] Chen Zhu, Yu Cheng, Zhe Gan, Siqi Sun, Tom Goldstein, and Jingjing Liu. Freelb: Enhanced adversarial training for natural language understanding. In ICLR, 2020.
* [61] Zeljka Zorz. Researchers hack bmw cars, discover 14 vulnerabilities, 2018.
# Membership Inference Attack on Graph Neural Networks
Iyiola E. Olatunji
L3S Research Center, Hannover, Germany.
<EMAIL_ADDRESS>
Wolfgang Nejdl
L3S Research Center, Hannover, Germany.
<EMAIL_ADDRESS>
Megha Khosla
L3S Research Center, Hannover, Germany.
<EMAIL_ADDRESS>
###### Abstract
Graph Neural Networks (GNNs), which generalize traditional deep neural
networks on graph data, have achieved state-of-the-art performance on several
graph analytical tasks. We focus on how trained GNN models could leak
information about the _member_ nodes that they were trained on. We introduce
two realistic settings for performing a membership inference (MI) attack on
GNNs. While choosing the simplest possible attack model that utilizes the
posteriors of the trained model (black-box access), we thoroughly analyze the
properties of GNNs and the datasets which dictate the differences in their
robustness towards MI attack. While in traditional machine learning models,
overfitting is considered the main cause of such leakage, we show that in GNNs
the additional structural information is the major contributing factor. We
support our findings by extensive experiments on four representative GNN
models. To prevent MI attacks on GNNs, we propose two effective defenses that
significantly decrease the attacker’s inference accuracy, by up to 60%,
without degrading the target model’s performance. Our code is available at
https://github.com/iyempissy/rebMIGraph.
## I Introduction
Graph neural networks (GNNs) have gained substantial attention from academia
and industry in the past few years, with high-impact applications ranging from
the analysis of social networks and recommender systems to biological
networks. One of the most popular tasks is that of _node classification_, in
which the
goal is to predict the unknown node labels. These models differ from the
traditional machine learning (ML) models, in that they use additional
relational information among the node instances to make predictions. In fact,
the graph convolution-based model [7], which is the most popular class of
GNNs, embeds the graph structure into the model itself by computing the
representation of a node via recursive aggregation and transformation of the
feature representations of its neighbors. We take the first step in exposing
the vulnerability of such
models to _membership inference_ (MI) attacks. In particular, we ask _whether
the trained GNN model can be used to identify the input instances (nodes) that
it was trained on._
To motivate the importance of the problem for graphs, suppose a researcher has
a list of patients infected with COVID-19. The researcher is interested in
understanding the various factors contributing to the infection. To account
for factors such as social activity, she might want to utilize knowledge of
the friendships/social connections known among the patients. She then
trains a GNN model on the graph induced on the nodes of interest and uses the
trained node representations as additional input for her disease analysis
models. A successful MI attack on the trained model would reveal the list of
infected persons even though the model might have not used any disease-related
sensitive information.
The goal of an MI attack [14, 12] is to distinguish the _target_ model’s
behavior on the inputs it encountered during training from its behavior on
those it did not. The inputs to the attack model are the class probabilities
(posteriors) or the confidence values output by the target model for the
corresponding data point. Thus, the attacker or adversary only requires
_black-box_ access to the model where she can query the model on her desired
data record and obtain the model predictions (output class probabilities).
While membership inference has been well studied in the context of traditional
ML models [14, 13] such as convolutional neural networks (CNNs) and multilayer
perceptrons (MLPs), GNNs have so far escaped attention. Much of the success of MI
attacks in traditional ML models has been attributed to the model’s tendency
to overfit or memorize the dataset [20]. Overfitting leads to the assignment
of high confidence scores to data records seen during training as compared to
new data it encountered during testing, making it possible to distinguish
between the instance types from the prediction scores. We ask _if overfitting
in GNNs is also the main contributing factor for successful membership
inference._ We discover that even if a GNN model generalizes well to unseen
data, it can still be highly prone to MI risks. The _encoding of the graph
structure into the model_ is what makes a GNN powerful but it is exactly this
property that makes it much more vulnerable to privacy attacks. Therefore,
unlike other models, reducing overfitting might not alone lead to higher
robustness against privacy risks.
While we show that all GNN models are vulnerable to the MI attack, we observe
differences in attack success rate. We explain these differences in terms of
differing dataset and model properties, using insights from our large-scale
experimental analysis. We further develop defense mechanisms based on _output
perturbation_ and _query neighborhood perturbation_ strategies. Our empirical
results show that these defenses effectively counter MI attacks on GNNs,
reducing the attacker’s inference accuracy by over 60% with negligible loss in
the target model’s performance.
To summarize, our key contributions are as follows.
1. We introduce two realistic settings for carrying out an MI attack on GNNs.
2. We perform an extensive experimental study to expose the risks of privacy leakage in GNN models. We further attribute the differences in the models’ robustness towards the MI attack to the dataset properties and the model architecture.
3. Contrary to popular belief, we show that for GNNs, lack of overfitting does not guarantee robustness towards privacy attacks.
4. We propose two defense mechanisms (based on output and query neighborhood perturbation) against MI attacks on GNNs that significantly degrade attack performance without compromising the target model’s utility.
## II Background and Related Works
### II-A Graph Neural Networks
Graph Neural Networks, popularized by graph convolutional networks (GCNs) and
their variants, generalize the convolution operation to irregular graph data.
These methods encode graph structure directly into the model. In particular,
the node representation is computed by recursive aggregation and
transformation of feature representations of its neighbors.
Let $\boldsymbol{x}_{i}^{(\ell)}$ denote the feature representation of node
$i$ at layer $\ell$ and ${\mathcal{N}}_{i}$ denotes the set of its 1-hop
neighbors. Formally, the $\ell$-th layer of a general graph convolutional
operation can then be described as
$\boldsymbol{z}_{i}^{(\ell)}=\operatorname{AGGREGATION}^{(\ell)}\left(\left\\{\boldsymbol{x}_{i}^{(\ell-1)},\left\\{\boldsymbol{x}_{j}^{(\ell-1)}\mid j\in{\mathcal{N}}_{i}\right\\}\right\\}\right)$ (1)
$\boldsymbol{x}_{i}^{(\ell)}=\operatorname{TRANSFORMATION}^{(\ell)}\left(\boldsymbol{z}_{i}^{(\ell)}\right)$ (2)
Finally, a softmax layer is applied to the node representations at the last
layer (say $L$) for the final prediction of node classes,
$\boldsymbol{y}_{i}\leftarrow\operatorname{softmax}(\boldsymbol{z}_{i}^{(L)}\mathbf{W}),$ (3)
where $\boldsymbol{y}_{i}\in\mathbb{R}^{c}$, $c$ is the number of classes and
$\mathbf{W}$ is a learnable weight matrix. Each element
$\boldsymbol{y}_{i}(j)$ corresponds to the (predicted) probability (or
posterior) that node $i$ is assigned to class $j$.
We focus on four representative models of this family which differ either on
one of the above two steps of aggregation and transformation. In the
following, we briefly describe these models and their differences.
Graph Convolutional Network (Gcn) [7]. Let $d_{i}$ denote the degree of node
$i$. The aggregation operation in Gcn is then given as
$\boldsymbol{z}_{i}^{(\ell)}\leftarrow\sum_{j\in\mathcal{N}(i)\cup i}\frac{1}{\sqrt{d_{i}d_{j}}}\boldsymbol{x}_{j}^{(\ell-1)}.$ (4)
Gcn performs a non-linear transformation over the aggregated features to
compute the representation at layer $\ell$.
$\boldsymbol{x}_{i}^{(\ell)}\leftarrow\operatorname{ReLU}\left(\boldsymbol{z}_{i}^{(\ell)}\mathbf{W}^{(\ell)}\right).$ (5)
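To make Eqs. (4)-(5) concrete, here is a minimal dense sketch of one Gcn layer; a real implementation would use sparse operations, and the tensor shapes are our own illustrative assumptions.

```python
import torch

def gcn_layer(X, A, W):
    """One GCN layer: symmetric-normalized aggregation over neighbors
    (plus self-loops), then a ReLU transformation.
    X: (n, d) node features, A: (n, n) adjacency, W: (d, h) weights."""
    A_hat = A + torch.eye(A.size(0))          # add self-loops
    d = A_hat.sum(dim=1)
    D_inv_sqrt = torch.diag(d.pow(-0.5))
    Z = D_inv_sqrt @ A_hat @ D_inv_sqrt @ X   # aggregation, Eq. (4)
    return torch.relu(Z @ W)                  # transformation, Eq. (5)
```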
Simplifying Graph Convolutional Networks (Sgc) [18]. The authors in [18] argue
that the non-linear activation function in Gcn is not critical for the node
classification task and completely skip the non-linear transformation step.
In particular, in an $L$ layer Sgc model, $L$ aggregation steps are applied as
given by (4) followed by final prediction (as in (3)).
Graph Attention Networks (Gat) [16]. Gat modifies the aggregation operation in
(4) by introducing attention weights over the edges. In particular, the $p$-th
attention operation results in the following aggregation:
$\boldsymbol{z}_{i}^{(\ell,p)}\leftarrow\sum_{j\in\mathcal{N}_{i}\cup i}\alpha_{ij}^{p}\boldsymbol{x}_{j}^{(\ell-1)},$ (6)
where $\alpha_{ij}^{p}$ are the normalized attention coefficients computed by
the $p$-th attention mechanism. In the transformation step, the $P$ intermediate
representations corresponding to $P$ attention mechanisms are concatenated
after a non-linear transformation as in (5) to obtain the representation at
layer $\ell$.
$\boldsymbol{x}_{i}^{(\ell)}=||_{p=1}^{P}\operatorname{ReLU}\left(\boldsymbol{z}_{i}^{(\ell,p)}\mathbf{W}^{(p\ell)}\right),$ (7)
where $||$ denotes the concatenation operator and $\mathbf{W}^{(p\ell)}$ is the
corresponding weight matrix at layer $\ell$. For more details of the attention
computations, we refer the reader to [16].
GraphSage (Sage) [3]. GraphSage generalizes the graph convolutional framework
by proposing several aggregation operations. To achieve scalability, rather
than using the complete 1-hop neighborhood of the node, it samples a fixed
number of neighbors randomly at each layer for each node. Let
$\mathcal{\tilde{N}}(i)$ denote the set of sampled neighbors for node $i$. The
aggregation operation (we use mean aggregation in this work) as applied in
Sage is given as follows.
$\boldsymbol{z}_{i}^{(\ell)}\leftarrow\operatorname{CONCAT}\left(\boldsymbol{x}_{i}^{(\ell-1)},\frac{1}{|\mathcal{\tilde{N}}(i)|}\sum_{j\in\mathcal{\tilde{N}}(i)}\boldsymbol{x}_{j}^{(\ell-1)}\right).$ (8)
The transformation operation stays the same as in (5).
Our approach is the first work to compare different graph convolution-based
models with respect to their vulnerability to MI attack. More precisely, we
ask _if the differences in the aggregation and transformation operations of
the graph convolution-based models lead to differences in privacy risks._
### II-B Privacy attacks on Machine Learning
Several attacks on machine learning models have been proposed, including the
membership inference attack [14, 1, 19, 9, 13], where the adversary aims to
infer whether a data sample was part of the data used to train a model. In
the attribute inference attack, the attacker’s goal is to reconstruct
the missing attributes given partial information about the data record and
access to the machine learning model [21, 19, 5]. In model inversion attack
[2, 8, 10], the model’s output is used to extract features that characterize
one of the model’s classes. The goal of model extraction and stealing attack
is to steal model parameters and hyperparameters to duplicate or mimic the
functionality of the target model [15, 17]. However, little attention has been
paid to the privacy risks of GNNs. Recently, privacy preserving learning
algorithms for GNN models have been proposed [11, 22]. However, their proposed
solutions are not directly applicable to overcoming the risk incurred by MI
attacks. After our work, a recent paper on node-level membership inference
attacks on GNNs appeared [4]. Their work differs from ours in that we focus on
analyzing the GNN and dataset properties that determine the differences in
robustness. Moreover, as indicated by the authors, their proposed defenses
limit the target model’s utility, whereas ours do not.
## III Our Approach
### III-A Problem Description
#### III-A1 Notations
Let $G=(V,E)$ represent the graph dataset with $|V|$ nodes and $|E|$ edges,
and let the nodes be labeled. We denote by _target graph_,
$\mathcal{G}_{t}=(V_{t},E_{t})$, the induced graph on the set of sensitive or
the member nodes $V_{t}$ which is used to train the _target model_ ,
$\mathcal{M}$.
###### Problem Statement.
Let a GNN model $\mathcal{M}$ be trained using the graph
$\mathcal{G}_{t}=(V_{t},E_{t})$. Given a node $v$ and its $L$-hop neighborhood,
determine if $v\in V_{t}$. Note that even if $v$ was in the training set, the
$L$-hop neighborhood known to the adversary might not be the same as the one
used to train the model $\mathcal{M}$.
#### III-A2 Our Proposed Settings
We propose two realistic settings for carrying out an MI attack on GNNs: (i)
in the first setting, which we call TSTF (train on subgraph, test on full),
the whole graph $G$ is available to the adversary but she is not aware of the
subgraph $\mathcal{G}_{t}$ used for training the target model; this implies
that the attacker has access to the links (if any) between member and
non-member nodes. (ii) In the second setting, TSTS (train on subgraph, test on
subgraph), the target graph $\mathcal{G}_{t}$ is an isolated component of $G$,
i.e., the member and non-member nodes are not connected. The adversary has
access to $G$ but does not know which of its components was used for training
the target model.
Figure 1: Attack methodology for membership inference on GNN models. The
training nodes and neighbor information used for training the shadow GNN model
are labeled as Member. We also query the shadow model with nodes from a test
graph and label the predictions as Non-Member. These predictions are used to
train the attack model. The attacker then queries her trained attack model
with posteriors obtained from the target model (target predictions) to infer
membership.
### III-B Attack Methodology
We model the problem of membership inference as a binary classification task
where the goal is to determine if a given node $v\in V_{t}$. We denote our
attack model by $\mathcal{A}$.
We organize the adversary’s methodology (also shown in Figure 1) into three
phases: shadow model training, attack model training, and membership
inference.
#### III-B1 Shadow model training
To train the shadow model, we assume that the adversary has access to a graph
with vertex set $V_{s}$ that comes from the same underlying distribution as
$V_{t}$ (an assumption which we relax in Section V-D4). Then the
adversary trains the shadow model using the shadow model’s training split,
$V^{Train}_{s}\subset V_{s}$. To replicate the behavior of the target model,
we use the output class probabilities of the target model (when
$V^{Train}_{s}$ is used as input) as the ground truth for training the shadow
model. This would result in querying the target model for each vertex in
$V^{Train}_{s}$. We later relax the number of queries required to $0$ by
directly training the shadow model on the original ground truth labels of
$V^{Train}_{s}$. We observe there is no significant change in attack success
rate (c.f. Section V-D1). We also find that we do not need to know the exact
target model. In fact, we show that using Gcn as the shadow model,
irrespective of the actual target model already results in good attack
performance (c.f. Section V-D3).
#### III-B2 Attack model training
To construct the attack model, we use the trained shadow model to perform
predictions over all nodes in $V^{Train}_{s}$ and $V^{Out}_{s}=V_{s}\setminus
V^{Train}_{s}$ and obtain the corresponding output class posterior
probabilities. For each node, we take the posteriors as input feature vectors
for the attack model and assign label 1 if the node is in $V^{Train}_{s}$
and 0 if it is from $V^{Out}_{s}$. These assigned labels serve as ground
truth data for the attack model. All the generated feature vectors and labels
are used in training the attack model.
#### III-B3 Membership inference
To perform the inference attack on whether a given node $v\in V_{t}$, the
adversary queries the target model with $v$ and its known neighborhood to
obtain the posteriors. Note that even if $v$ was part of training data, the
adversary would not always have access to the exact neighborhood structure
that was used for training. Then she inputs the posteriors into the attack
model $\mathcal{A}$ to obtain the membership prediction.
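The three phases can be summarized in a short sketch; the model interfaces (a `shadow_model(g)` call returning per-node logits, a single-output MLP attack model) are assumptions for illustration.

```python
import torch
import torch.nn.functional as F

def build_attack_dataset(shadow_model, g_shadow, train_idx, out_idx):
    """Posteriors of the trained shadow model become attack features;
    membership (1 = used in shadow training) becomes the label."""
    with torch.no_grad():
        post = F.softmax(shadow_model(g_shadow), dim=1)
    feats = torch.cat([post[train_idx], post[out_idx]])
    labels = torch.cat([torch.ones(len(train_idx)), torch.zeros(len(out_idx))])
    return feats, labels

def infer_membership(attack_mlp, target_model, g_query, node_idx):
    """Query the target model with the node and its known neighborhood,
    feed the posterior to the attack model, and threshold at 0.5."""
    with torch.no_grad():
        post = F.softmax(target_model(g_query), dim=1)[node_idx]
        return attack_mlp(post).sigmoid() > 0.5
```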
## IV Experiments
We compare four popular GNN models: (i) graph convolution network (Gcn)[7],
(ii) graph attention network (Gat) [16] (iii) simplified graph convolution
(Sgc) [18] and (iv) GraphSage (Sage) [3], as explained in Section II-A. We ran
all experiments for 10 random data splits (i.e., the target graphs, shadow
graphs as well as test graphs were generated 10 times) and report the average
performance along with the standard deviation.
### IV-A Dataset and Settings
To conduct our experiments, we used 5 different datasets commonly used as
benchmarks for evaluating GNN performance. The properties of the
datasets are shown in Table I.
TABLE I: Dataset statistics. $|V|$ and $|E|$ denote the number of vertices and edges in the corresponding graph dataset. $|V_{t}|$ denotes the number of vertices in the target/shadow (train) graph. $deg$ is the average degree of $|V_{t}|$, calculated on the training graph of the target model. The average degree of the trainset for the shadow model $|V_{s}|$ is approximately the same as for the target model.
 | Cora | CiteSeer | PubMed | Flickr | Reddit
---|---|---|---|---|---
$|V|$ | 2708 | 3312 | 19,717 | 89,250 | 232,965
$|E|$ | 5429 | 4715 | 44,338 | 449,878 | 57,307,946
# features | 1433 | 3703 | 500 | 500 | 602
# classes | 7 | 6 | 3 | 7 | 41
$|V_{t}|$ | 630 | 600 | 4500 | 10,500 | 20,500
$deg_{TSTS}$ | 1.111 | 0.41 | 1.102 | 1.638 | 70.383
$deg_{TSTF}$ | 3.898 | 2.736 | 4.496 | 10.081 | 491.987
### IV-B Model Architecture and Training
We used a 2-layer Gcn, Gat, Sgc, and Sage architecture for our target models
and shadow models. The attack model is a 3-layer MLP model. All target and
shadow models are trained such that they achieve comparable performance as
reported by the authors in the literature. We vary the learning rates between
0.001 and 0.0001 depending on the model and dataset.
Evaluation Metrics. We report AUROC scores, Precision, and Recall for the
attack model as done in [14, 13]. For the target GNN models, we report train
and test accuracy. Due to space constraints, we mainly report summarized
results using the AUROC metric in the main paper. The detailed results are
shown on our GitHub page (1): https://github.com/iyempissy/rebMIGraph.
### IV-C Research Questions
Here, we summarize the main research questions that we investigate in this
work.
###### RQ 1.
How do different GNN models compare with respect to privacy leakage of
training data? What factors lead to differences in vulnerability of GNN models
towards MI attack? (c.f. Sections V-A and V-B)
###### RQ 2.
How does overfitting influence the performance of MI attacks in GNNs? (c.f.
Section V-C)
###### RQ 3.
How does the number of queries for shadow model training, absence of knowledge
of similar data distribution, target model architecture and hyperparameter
settings affect the attack performance? (c.f. Section V-D)
###### RQ 4.
How could we defend against the blackbox MI attack without compromising the
model performance? (c.f. Section VI)
## V Analysing the MI Attack on GNNs
### V-A Overall Attack Performance
In this section, we answer the first part of RQ 1.
Figure 2: Mean AUROC scores of the attack model against different GNN models
in (a) the TSTF setting and (b) the TSTS setting. All target models except
Sage encountered memory issues for Reddit in the TSTF setting; therefore, we
only provide results for Reddit in the TSTS setting.
##### TSTF Setting
The AUROC scores for the attack model on all datasets except Reddit are shown
in Figure 2(a). For the models Gcn and Sgc, the attack model obtains similar
scores. Note that the difference between Sgc and Gcn is that Sgc does not use
a non-linear transformation after the feature aggregation step. The feature
aggregation scheme employed in both models is exactly the same.
Gat is the most robust towards the attack. Gat also differs from the other
models in that it uses a weighted aggregation mechanism. Sage employs mean
aggregation over the neighborhood’s features; unlike the other models, it
samples a fixed number of neighbors for the aggregation step rather than using
the complete neighborhood. Though Sage shows similar results on the 3 citation
networks, the attack is less successful on the larger graph Flickr (when
compared to Gcn and Gat). We attribute this observation to the noise induced
in the neighborhood structures by the random sampling of neighbors. The effect
is naturally more prominent in denser graphs like Flickr than in Cora, where
the average degree is less than 2.
##### TSTS Setting
Unlike in the TSTF setting, the train and test sets in this setting are
disconnected. This implies that any node $v\in V_{t}$ and the exact
neighborhood used during training are known to the adversary. We also see a
large reduction in test set performance, implying that the model does not
generalize well to the test set. Intuitively, it should be much easier to
attack in this setting.
The AUROC scores for the corresponding attack are shown in Figure 2(b).
Precision and recall of the attack model along with the train-test set
performance of the target model are provided in Table VI on Github (1). We
observe that for Cora and CiteSeer the attack has a similar success rate as in
TSTF setting, for Flickr on the other hand, the attack performance degrades.
For the larger dataset Reddit, the attack is successful for the Gcn and Sgc
models, with mean precisions of 0.81 and 0.74 respectively. Gat and Sage show
more robustness, with AUROC scores close to 0.5 (implying that the attack
model cannot distinguish between member and non-member nodes better than a
random guess) on PubMed, Flickr, and Reddit.
### V-B Effect of Model and Dataset Properties
To answer the second part of RQ 1, we analyse the differences in the
aggregation operation of models and three dataset properties to explain the
differences in attack performance.
#### V-B1 Which model is more robust towards MI attack and Why?
To summarize the above results, we found Gat to be the most robust towards
membership inference attacks. The reason can be attributed to the learnable
attention weights on the edges: instead of the original graph, a distorted one
dictated by the supervised signal of class labels is embedded in the model.
This is in contrast with Sgc and Gcn, where the actual graph is embedded with
equal edge weights. Sage, which uses neighborhood sampling before the
aggregation operation, also does not use the complete graph information during
training. The effect is more prominent in denser graphs, in which only a small
fraction of the neighborhood is used during a training epoch.
Another interesting observation is that the attack behavior changes across
datasets. While Gat is overall less vulnerable than the other models, the
percentage drop in attack performance (as compared to, for example, Gcn) for
Flickr (32%) is much larger than for Cora (9%).
#### V-B2 How do dataset properties affect attack performance?
To investigate the differences in the behavior of the attack model on
different datasets, we consider three properties of the datasets: (i) the
_average degree_, which influences the graph structure and the effect of the
aggregation function of the GNN; (ii) the _number of input features_, which
influences the number of model parameters; and (iii) the _number of classes_,
which determines the input dimension for the attack model.
First, note that for very low average degree graphs the effect of aggregation
operation is highly decreased as there would be very few or no neighbors to
aggregate over. From Table I, we observe that CiteSeer has the lowest average
degree (both in the TSTF and TSTS settings) leading to similar attack
vulnerability of all GNN models. Reddit, on the other hand, with the highest
average degree exhibits a high vulnerability to the attack when Gcn and Sgc
are the target models whereas fpr Gat and Sage attack performance drastically
reduces owing to reasons discussed in the last section. Similar observations
can be made for Flickr which has the second-highest average degree.
Differences in attack performance for Flickr are smaller as compared to
Reddit. This is expected as Reddit has an average degree which is around 50
times the average degree of Flickr for TSTF setting and around 70 times for
TSTS setting.
Second, for the three datasets Cora, CiteSeer, and PubMed, which exhibit
similar average degrees, attack performance is highest for CiteSeer followed
by Cora. The trend stays the same for different target models. The same
pattern is also observed in the number of input features. While CiteSeer has
the highest number of features, PubMed has the least. Note that the number of
input features leads to an increase in the number of parameters (the number of
parameters corresponding to the first hidden layer will be $p\times h$ where
$p$ is the number of input features and $h$ the hidden layer dimension). The
higher number of parameters, in turn, leads to better memorization by models,
which explains the above-observed trend in low average degree datasets.
Third, we recall that the output posterior vector is the input feature vector
for the attack model. As the dimension of the posterior vector is equal to the
number of classes, more information is revealed for datasets with a larger
number of classes. The low attack performance on PubMed can therefore be
additionally attributed to it having the fewest classes.
#### V-B3 Effect of Neighborhood Sampling in Sage
We attribute the differences in Sage’s robustness towards attacks on different
datasets to its neighborhood sampling strategy. Recall that rather than using
the complete neighborhood in the aggregation step, Sage samples a fixed number
of neighbors at each layer. Sage also uses a mini-batching technique, where
each batch contains the nodes whose representations need to be generated
together with their sampled neighbors. To showcase the effect of neighborhood
sampling, we varied the number of neighbors sampled at different layers of the
network as well as the batch size.
We used [25,10] and [5,5] as sampled neighborhood sizes in layers 1 and 2. As
shown in Figure 3(a), the attack AUROC decreases as the number of sampled
nodes decreases. This is because the model uses noisy neighborhood information
and cannot fully encode the graph structure; this, in turn, makes the
posteriors of neighboring nodes less correlated. Similar results are obtained
for a larger dataset, Flickr (shown in Figure 3(b)).
Figure 3: Effect of training batch size and number of sampled neighbors on
attack performance for the Sage model on (a) the Cora and (b) the Flickr
dataset.
#### V-B4 Effect of Instance Connectivity
Here, we present a qualitative analysis of the differences in the robustness
of different models to MI attack using Flickr as an example dataset in the
TSTF setting. Recall that given the predicted posteriors as input, the attack
model labels the node instance as a member (label 1) or non-member (label 0)
node. To understand the pattern of label assignments by the attack model we
need the following definition.
###### Definition 1 (Homophily).
For any node $u$ which is either a member or non-member, we define its
homophily as the fraction of its one-hop neighbors which has the same
membership label as $u$. The neighborhood of any node is computed using the
graph available to the adversary. We call homophily with respect to ground
truth as the true homophily and with respect to the attack model predictions
as the predicted homophily.
Therefore, a true homophily of $1$ means that $u$ and all its neighbors in the
graph used by the adversary have the same membership label. Similarly, a
predicted homophily of $1$ implies that $u$ and its neighbors were assigned
the same membership label by the attack model. In Figure 4, we visualize the
differences in attack behavior for different models on the Flickr dataset by
plotting the joint distribution of true and predicted homophily of the
correctly (orange contour lines) and incorrectly (blue contour lines)
predicted nodes. We chose Flickr here because the attack performance varies
the most across target models, as discussed in the previous sections.
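A minimal sketch of computing homophily per Definition 1 is shown below; the adjacency representation and the convention for isolated nodes are our own assumptions.

```python
def homophily(node, neighbors, labels):
    """Fraction of a node's 1-hop neighbors sharing its membership label.
    `labels` maps node -> 0/1 (ground truth for true homophily, attack
    predictions for predicted homophily)."""
    if not neighbors:
        return 1.0  # convention for isolated nodes (an assumption)
    same = sum(labels[v] == labels[node] for v in neighbors)
    return same / len(neighbors)
```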
Figure 4: Joint density plot of true and predicted homophily (refer to
Definition 1) on Flickr dataset.
We observe more dense regions in the upper half of the plots for all the
models. Noting the fact that these highly concentrated regions correspond to
high predicted homophily, we conclude that the attack model’s predictions on a
node are highly correlated with its predictions on its neighbors. As the
attack model is agnostic to the graph structure, this further implies that the
posterior of neighboring nodes are also correlated, which the attack model can
exploit.
The differences in the behavior of the different models are also well
illustrated. Note that the higher the density of orange regions on the
diagonal (see Gcn and Sgc), the more accurate the attack model is. In contrast
to Gcn, the attack model is confused for Gat and assigns the wrong label to
the corresponding nodes and their neighbors (see blue regions corresponding to
high predicted homophily). For Sage, even though there are more orange
regions, these do not lie on the diagonal. This means that even if the attack
model predicts the right membership label for a member node, it also predicts
the same label for the node’s non-member neighbors, hence incorrectly
including them in the member set. To summarize, for Gat the attack results in
more false negatives, whereas for Sage there are more false positives. Both
scenarios render the attack less useful to the adversary.
### V-C Effect of Model Overfitting
To investigate the effect of overfitting (RQ 2), we train the models such that
they achieve zero training loss or high generalization error. The train and
test accuracy as well as the attack precision and recall are shown in Table
II.
TABLE II: Performance of GNN models and the attack model on the Flickr dataset in case of overfitting.
Model | Target Train | Target Test | Attack Precision | Attack Recall
---|---|---|---|---
Gcn | $0.70\pm 0.01$ | $0.13\pm 0.01$ | $0.63\pm 0.01$ | $0.70\pm 0.04$
Gat | $0.61\pm 0.07$ | $0.50\pm 0.21$ | $0.75\pm 0.05$ | $0.69\pm 0.03$
Sgc | $0.75\pm 0.02$ | $0.20\pm 0.03$ | $0.62\pm 0.04$ | $0.59\pm 0.09$
Sage | $0.90\pm 0.03$ | $0.24\pm 0.03$ | $0.57\pm 0.08$ | $0.50\pm 0.01$
Figure 5: Effect of overfitting. (a) Influence of overfitting on the attack on
Flickr (N = Normal, O = Overfit; "Normal" refers to the case when all models
were trained for a fixed number of epochs). We observe the surprising effect
that the attack is less successful on overfitted models. (b) Percentage of
nodes for which the maximum posterior is greater than 0.8 for overfitted GNN
models, shown for one random data split; this explains the effect in (a), as
the overfitted model also makes extremely confident, albeit incorrect,
predictions on the unseen test set.
Figure 5(a) shows the comparison between a "normal" model and the overfitted
model. The attack precision and recall on the overfitted model consistently
decrease across all models except Gat. This implies that overfitting alone is
not always a contributing factor to the membership inference attack, and that
an overfitted model may not always encode the information needed to launch an
MI attack.
To understand the reasons behind the above observations, we investigate the
posterior distribution of member and non-member nodes. In Figure 5(b), we show
the distribution of the maximum posterior (i.e., the posterior probability
corresponding to the predicted class) of overfitted models on the members and
non-members. We observe that in the case of overfitting, the GNN model not
only makes highly confident predictions for the member nodes but also for the
non-members. Most of the nodes, whether member or non-member, obtain a maximum
class probability (or posterior) greater than $0.8$ for the Gcn and Sgc models.
For Gat and Sage, the attack model obtains higher precision, given that
relatively fewer non-member nodes obtain a high maximum posterior. Moreover,
from the test set performance in Table II, we observe that Sage generalizes
better than Gat, which is also reflected in the lower attack precision for
Sage.
### V-D Sensitivity Analysis of Attack
We answer RQ3 by performing the sensitivity analysis of the attack with
respect to the number of queries (Section V-D1), different sizes of hidden
layers (Section V-D2) and relaxation of model architecture and data
distribution assumptions (Section V-D3 and V-D4).
#### V-D1 Attack with reduced number of queries
We relax the number of queries required to imitate a target model to $0$ by
assuming that the adversary has access to the dataset from a similar
distribution as the dataset used for the target model. To construct such
datasets we randomly sampled disjoint sets of nodes from the full graph for
the target as well as the shadow model. We then construct the corresponding
induced graphs on the node sets to train the shadow and target models. Note
that the shadow model data, in this case, will not be exactly from the same
distribution as the target graph, since our construction does not exactly
preserve the structural characteristics of these graphs, e.g., the degree
distribution. The data used to train the shadow model is, in fact, similar to
but not from the same distribution as that of the target model. We found that
training the shadow model using ground truth labels performs similarly to
querying the target model, within $\pm 0.02$ standard deviation.
#### V-D2 Attack performance without knowledge of exact hyperparameters
In this section, we relax the assumption that the attacker knows the exact
hyperparameters used in the target model by varying the number of hidden
neurons of the shadow model. We experiment with three values
$\\{256,128,64\\}$.
The corresponding mean AUROC scores are plotted in Figure 7 on our GitHub page
(1) due to space constraints. A general trend is that the larger the hidden
layer size,
the better the attack performance. This is expected as an increase in the size
of the hidden layer increases the model parameters/capacity to store more
specific details about the training set. Therefore, though we observe some
reduction in attack performance for PubMed when using 128 or 64 as hidden
layer size, an attacker can just choose the hyperparameter which gives the
best train set performance on its shadow dataset.
#### V-D3 Attack without the knowledge of target model’s architecture
We further relax the assumption that the attacker knows the architecture of
the target model. Specifically, we used Sgc as the shadow model and other GNN
models as the target model. As Sgc is obtained from Gcn by removing the
non-linear activation function, we aim to quantify how this difference affects
the attack performance. Therefore, we also used Gcn as
a shadow model. The mean AUROC scores corresponding to attacks on different
datasets are presented in Figure 8 on our GitHub page (1).
In both TSTF and TSTS, on the CiteSeer and Cora dataset, the performance of
using different shadow models is equivalent to using the same model as the
target model except for Sage where a significant drop in performance is
observed. However, Gcn performs significantly better than Sgc when used as the
shadow model by the attacker. On the PubMed dataset, an interesting
observation, particularly for Gat, is that when Sgc is used as the shadow
model, the attack precision and recall increase more than when Gat (the target
model itself) is used as the shadow model. On the Flickr and Reddit datasets, using
Gcn as the shadow model performs comparably to an adversary knowing the
architecture of the target model in both TSTS and TSTF settings. However,
using Sgc as a shadow model led to significantly reduced attack precision in
the TSTF setting. Better attack precision is achieved when Gcn is used as the
shadow model and Sage as the target model on large networks like Reddit.
Therefore, we conclude that using Gcn as the shadow model is sufficient to
launch a successful attack, and that the missing non-linear activation
function makes Sgc a less attractive option as a "universal" shadow model.
#### V-D4 Attack using different data distribution (Data transferring attack)
We relax the assumption that the attacker trains her shadow model based on
data coming from similar distribution as that used by the target model.
Specifically, we used Cora as the data for training the target model and
CiteSeer as the data used by the attacker for training her shadow model. Here,
the goal of the attack model is to infer membership status based on the
posterior distribution. To handle the discrepancy in the length of
the posterior vectors of these two datasets, we select the top $n$ coordinates
of the posterior vector and arrange them in ascending order.
As shown in Table III, we observe that relaxing the knowledge of the dataset
distribution does not affect the attack precision. Surprisingly, some gains
are observed on Gcn and Gat. However, the recall drops by $13\%$ on Gcn and by
$16\%$ on Gat and Sgc, while the recall on Sage remains the same. This implies
that the assumption of the attacker drawing the shadow dataset from the same
distribution as the target model can be relaxed with minimal loss in attack
performance.
TABLE III: Attack precision and recall when Cora is used as the target and CiteSeer as the shadow dataset. The % change with respect to the original performance is shown in brackets.
 | Gcn | Gat | Sgc | Sage
---|---|---|---|---
Precision | 0.766 (+1%) | 0.723 (+8%) | 0.780 (-2%) | 0.798 (-2%)
Recall | 0.653 (-13%) | 0.552 (-16%) | 0.663 (-16%) | 0.792 (0%)
## VI Defense mechanisms
To defend against the current black-box attack based on posteriors (RQ 4), we
note that the defense mechanism should possess the following properties.
_First,_ given access only to posteriors, the defense should make member and
non-member nodes indistinguishable without compromising task performance or
the target model’s utility. _Second,_ the defense should be oblivious to the
attacker. The second property is important for output perturbation-based
defenses: the added noise should not be inferable from the released
information.
Based on the insights gained from our experimental analysis of attack
performance for different GNN models and datasets, we propose two defense
mechanisms: (i) the _query neighborhood sampling defense_ (NsD) and (ii) the
_Laplacian binned posterior perturbation_ (LbP), which we describe in the
following sections.
##### Laplacian binned posterior perturbation (LbP) defense
Here, we propose an output perturbation method by adding noise to the
posterior before it is released to the user. A simple strategy would be to add
Laplacian noise of an appropriate scale directly to each element of the
posterior. We refer to this strategy as VanPd. Note that the noise level
increases with the number of classes which can have an adverse effect on model
performance.
To reduce the amount of noise needed to distort the posteriors, we propose a
binned posterior perturbation defense. We first randomly shuffle the
posteriors and then assign each posterior to a partition/bin. The total number
of bins, $\psi$, is predefined and depends on the number of classes. For each
bin, we sample noise at scale $\beta$ from the Laplace distribution (LbP). The
sampled noise is added to each element of the bin. After the completion of the
noise addition operation to each bin, we restore the coordinates of the noisy
posterior $\textbf{y*}$ to their initial positions before binning. Then we
release $\textbf{y*}$.
We observe in our experiments that this leads to a drop in attack performance
without substantially compromising model performance on the node
classification task. We set the reference values for $\beta$ to {5, 2, 0.8,
0.5, 0.3, 0.1}; the higher the value of $\beta$, the larger the added noise.
We set $\psi$ to {2, 3, 4}, where, for example, $\psi=2$ implies that the
posterior vector is divided into $2$ groups and the same noise is added to all
members of the same group.
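A minimal sketch of LbP as described above (Python with numpy; whether the released vector is re-normalized into a proper distribution afterwards is not specified in the text and is left out here):

```python
import numpy as np

rng = np.random.default_rng(0)

def lbp_defense(posterior: np.ndarray, psi: int, beta: float) -> np.ndarray:
    """Laplacian binned posterior perturbation (LbP).

    1. Randomly shuffle the posterior coordinates.
    2. Partition them into psi bins.
    3. Sample one Laplace(scale=beta) noise value per bin and add it to
       every element of that bin.
    4. Undo the shuffle and release the noisy posterior y*.
    """
    k = len(posterior)
    perm = rng.permutation(k)                     # random shuffle
    shuffled = posterior[perm].astype(float)
    for idx in np.array_split(np.arange(k), psi):
        shuffled[idx] += rng.laplace(scale=beta)  # same noise within a bin
    y_star = np.empty(k)
    y_star[perm] = shuffled                       # restore original positions
    return y_star

y = np.array([0.70, 0.10, 0.08, 0.05, 0.04, 0.02, 0.01])
print(lbp_defense(y, psi=2, beta=0.5))
```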
##### Query neighborhood sampling defense (NsD)
Exploiting the observation that a node and its neighbors are classified alike
by the attack model (homophily property), we propose a query neighborhood
sampling (NsD) defense mechanism to distort the similarity pattern between the
posterior of the node and that of its neighbors. Specifically, when a target
model is queried with the node and its $L$-hop neighborhood, the defender
removes all its first-hop neighbors except $k$ randomly chosen neighbors (Note
that no change is made to the trained model). The neighborhood of the $k$
sampled neighbors remains intact. By doing so, NsD limits the amount of
information used to query the target model. We set the reference values for
$k$ to {0, 1, 2, 3}, which correspond to sampling no neighbors and
$1, 2,\text{ or }3$ neighbors, respectively.
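A minimal sketch of the query-time sampling (Python; the adjacency-list representation and names are illustrative, and the trained model itself is untouched):

```python
import random

def nsd_query_graph(adj, target, k, seed=0):
    """Query neighborhood sampling defense (NsD).

    Before the L-hop subgraph of `target` is passed to the trained target
    model, keep only k randomly chosen first-hop neighbors of `target` and
    drop the rest. The neighborhoods of the kept neighbors are untouched.
    """
    rng = random.Random(seed)
    neighbors = adj[target]
    kept = set(rng.sample(neighbors, min(k, len(neighbors))))
    new_adj = {v: list(nbrs) for v, nbrs in adj.items()}
    new_adj[target] = sorted(kept)
    for v in neighbors:
        if v not in kept:  # remove the reverse edge as well
            new_adj[v] = [u for u in new_adj[v] if u != target]
    return new_adj

adj = {0: [1, 2, 3], 1: [0, 4], 2: [0], 3: [0], 4: [1]}
print(nsd_query_graph(adj, target=0, k=1))
```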
### VI-A Evaluating Defenses
We measure the effectiveness of a defense by the drop in attack performance
after the defense mechanism is applied. To further incorporate the two desired
properties of the defense into our evaluation we employ the following utility
measures [6].
##### Label loss ($\mathcal{L}$)
The label loss measures the fraction of nodes in the evaluation dataset whose
label predictions are altered by the defense. For a given query $i$, if the
highest coordinate of the true posterior and that of the perturbed or
distorted posterior coincide, then $\mathcal{L}_{i}$ is $0$; otherwise, it is
$1$. The total label loss is quantified as:
$\mathcal{L}=\nicefrac{{\sum\limits_{i=1}^{|Q|}\mathcal{L}_{i}}}{{|Q|}},$
where $|Q|$ is the number of user queries. A $\mathcal{L}$ close to $0$ is
desirable whereas $\mathcal{L}$ close to $1$ indicates that the defense
mechanism is relatively bad since it alters the label prediction which
directly affects the test accuracy of the target model.
##### Confidence score distortion ($\mathcal{C}$)
For a given query $i$, we measure the confidence score distortion,
$\mathcal{C}_{i}$ by the distance between the true posterior and the distorted
posterior due to the defense mechanism. We use the Jensen-Shannon Distance
(JSD) as the distance metric. JSD extends the Kullback–Leibler divergence
(relative entropy) to a symmetric similarity score between two probability
distributions. The total confidence score distortion is given as:
$\mathcal{C}=\nicefrac{{\sum\limits_{i=1}^{|Q|}\mathcal{C}_{i}}}{{|Q|}}$ where
$\mathcal{C}_{i}=JSD(\textbf{y},\textbf{y*})$ for a given query $i\in Q$.
A value of $0$ indicates that the perturbed and true posteriors are identical,
and $1$ indicates that they are highly dissimilar.
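Both utility measures are straightforward to compute; a sketch using scipy’s Jensen-Shannon distance (base 2 keeps each $\mathcal{C}_{i}$ in $[0,1]$, matching the range above; defended posteriors are assumed to remain valid distributions):

```python
import numpy as np
from scipy.spatial.distance import jensenshannon

def label_loss(true_posteriors, defended_posteriors):
    """Fraction of queries whose predicted label (argmax) is flipped."""
    flips = [int(np.argmax(y) != np.argmax(y_star))
             for y, y_star in zip(true_posteriors, defended_posteriors)]
    return sum(flips) / len(flips)

def confidence_distortion(true_posteriors, defended_posteriors):
    """Mean Jensen-Shannon distance between true and defended posteriors."""
    return float(np.mean([jensenshannon(y, y_star, base=2)
                          for y, y_star in zip(true_posteriors,
                                               defended_posteriors)]))
```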
Setup. We compare the drop in attack precision with respect to label loss
($\mathcal{L}$) and confidence distortion ($\mathcal{C}$). For each defense
and a given value of each parameter ($\beta,\psi\text{ or }k$), we plot the
pairs $\\{\mathbb{P},\mathcal{L}\\}\text{ and }\\{\mathbb{P},\mathcal{C}\\}$,
where $\mathbb{P}$ is the attack precision. We use the attacker’s AUROC as a
representative metric but observe a similar trend on the other attack
inference performance metrics (precision and recall). We performed our
experiments on Gcn since it is the most vulnerable GNN model across all
datasets, as observed in Sections V-A and V-D3.
### VI-B Results
TABLE IV: Attack precision, recall and AUROC (lower is better) corresponding to different label loss $\mathcal{L}$. – indicates that the defense mechanism does not incur the corresponding label loss.
| | VanPd | | | LbP | | | NsD | |
Dataset | $\mathcal{L}$ | Prec | Rec | AUC | Prec | Rec | AUC | Prec | Rec | AUC
---|---|---|---|---|---|---|---|---|---|---
Cora | 0 | 0.814 | 0.811 | 0.813 | 0.777 | 0.772 | 0.750 | 0.800 | 0.699 | 0.700
 | 0.1 | 0.782 | 0.778 | 0.800 | 0.721 | 0.717 | 0.705 | 0.774 | 0.686 | 0.691
 | 0.3 | 0.678 | 0.673 | 0.658 | 0.616 | 0.608 | 0.631 | – | – | –
CiteSeer | 0 | 0.880 | 0.887 | 0.880 | 0.870 | 0.859 | 0.846 | 0.810 | 0.791 | 0.795
 | 0.1 | 0.846 | 0.830 | 0.852 | 0.691 | 0.683 | 0.700 | 0.778 | 0.766 | 0.758
 | 0.3 | 0.747 | 0.731 | 0.735 | 0.620 | 0.606 | 0.617 | – | – | –
PubMed | 0 | 0.642 | 0.633 | 0.681 | 0.592 | 0.580 | 0.660 | 0.533 | 0.521 | 0.568
 | 0.1 | 0.564 | 0.548 | 0.653 | 0.570 | 0.554 | 0.556 | 0.519 | 0.509 | 0.537
 | 0.3 | 0.523 | 0.508 | 0.600 | 0.347 | 0.503 | 0.542 | – | – | –
Flickr | 0 | 0.810 | 0.775 | 0.820 | 0.568 | 0.665 | 0.745 | 0.302 | 0.500 | 0.500
 | 0.1 | 0.685 | 0.541 | 0.785 | 0.541 | 0.514 | 0.680 | – | – | –
 | 0.3 | 0.571 | 0.505 | 0.655 | 0.484 | 0.645 | 0.649 | – | – | –
Reddit | 0 | 0.747 | 0.825 | 0.830 | 0.730 | 0.801 | 0.814 | 0.176 | 0.500 | 0.500
 | 0.1 | 0.574 | 0.533 | 0.655 | 0.521 | 0.524 | 0.620 | – | – | –
 | 0.3 | 0.527 | 0.518 | 0.632 | 0.227 | 0.384 | 0.500 | – | – | –
In Figure 6, we plot the attack AUROC (after the defense is applied) together
with the label loss and confidence distortion. In the following, we analyze
the results for the different datasets when the three defense mechanisms are
applied. Table IV further provides the attack precision, recall and AUROC
scores (after the defense mechanism has been applied) and the corresponding
label loss. All results correspond to attacks in the TSTF setting, except for
Reddit, which is in the TSTS setting.
##### Cora
Recall that Cora is a sparse graph (with average degree 3.89 in the TSTF
setting) with $7$ classes. Because of this high sparsity, we do not benefit
much from the NsD defense, which perturbs the input neighborhood of the query
node. Nevertheless, it achieves a drop of $15\%$ in the attack’s performance
with negligible label loss and confidence distortion (see Figure 6(a)). LbP
and VanPd, which directly perturb the posteriors, achieve larger drops in
attack performance, though at the expense of higher label loss and confidence
distortion. Nevertheless, for the same label loss, LbP achieves a better drop
in attack performance. For instance, LbP achieves a maximum drop of $24\%$ at
a label loss of $0.3$, whereas for VanPd, the drop in attack precision is only
$17\%$ at the same label loss. At $0.1$ label loss, LbP achieves a $12\%$
drop, thrice the percentage drop in attacker’s inference achieved by VanPd and
NsD at the same label loss.
##### CiteSeer
For the CiteSeer dataset (Figure 6(b)), the drop in attacker’s inference for
LbP at label losses of $0.3$ and $0.1$ is 30% and 22%, respectively, which is
two times (16%) and four times (5%) better than VanPd at the same label
losses. At $0.1$ label loss, NsD achieves a $12\%$ drop in attacker’s
performance. However, even at a $0$ label loss, NsD still achieves a $9\%$
drop in attacker’s performance, while VanPd and LbP only achieve $1\%$ and
$2\%$ drops, respectively.
##### PubMed
On the PubMed dataset, we observe a further reduction in the attacker’s
inference, with VanPd achieving $25\%$ and LbP achieving $50\%$ at a label
loss of $0.3$. NsD does not incur any label loss above $0.1$. Hence, at the
lower label loss of $0.1$, VanPd and LbP perform similarly. One possible
explanation is that PubMed has only three classes; therefore, the maximum
number of bins is restricted to $2$, which gives LbP no specific advantage
over VanPd. On the contrary, NsD performs well, with a $26\%$ and $24\%$ drop
in attacker’s inference at a label loss of $0.1$ and $0$, respectively,
causing the attack model to largely misclassify member nodes as non-members
(Figure 6(c)).
##### Flickr
As in the previous analyses on the other datasets, LbP outperforms VanPd by
over $15\%$ at a label loss of $0.3$. Notably, NsD, which samples the
neighborhood of the query, performs significantly better than LbP, with a drop
in inference performance of $65\%$ at a perfect label loss of $0$. This
significant drop reflects the intriguing observation that the predictions of a
node follow those of its neighbors (Section V-B4). Therefore, when the
neighbors of a node are distorted by the query sampling mechanism, the
posterior is equally affected, causing the attack model to misclassify member
and non-member nodes.
##### Reddit
Similar to Flickr, at label loss $0$, we observe an $80\%$ drop for NsD, a
$17\%$ drop for LbP, and a $15\%$ drop for VanPd. We note that the drop in
attacker’s inference that LbP achieves at a label loss of $0.3$ is matched by
NsD at a perfect label loss of $0$ (observed at $k=2$). These observations
mirror those on Flickr because of the high node degree of the Reddit dataset.
Summary. We observe that VanPd leads to a degradation in the test performance
of the target model as well as in the attack performance. Although this
defends significantly against the MI attack, it comes at the expense of the
test accuracy of the target model. Binning as in LbP provides a viable
strategy to reduce the amount of added noise without compromising the defense.
We observe that our LbP defense is well suited for graphs with a low degree.
For LbP, setting $\psi=2$ led to a good balance between privacy and limiting
label loss. We remark that both LbP and VanPd are evaluated on the same noise
scale.
On all datasets, NsD achieves the lowest label loss. We attribute the
differing effectiveness of the defenses to the degree of each graph.
Specifically, Cora, CiteSeer, and PubMed have low degrees; therefore, NsD does
not significantly reduce the attacker’s inference. However, on large datasets
such as Flickr and Reddit, which have higher degrees, the attacker’s inference
reduces to a random guess (AUC score of 0.5) at a perfect $0$ label loss. With
respect to the choice of $k$, we observed that the smaller the value of $k$,
the better the defense. For the current datasets, we observed that for $k>3$,
there was not much degradation in attack performance.
Comparison based on confidence score distortion.
The lower $\mathcal{C}$ is, the more difficult it is for an attacker to detect
whether the model has undergone any defense. Moreover, low confidence
distortion is required for applications where the target model’s output
posterior is used rather than just the predicted class. As shown in Figure 6,
VanPd leads to very high confidence distortion compared to the other defenses.
For instance, VanPd, LbP, and NsD achieve $0.70$, $0.40$ and $0.10$ confidence
score distortion, respectively, on the Cora dataset, corresponding to
reductions in attack precision of $15\%$, $15\%$ and $12\%$. On the larger
dataset Reddit, VanPd, LbP, and NsD achieve $0.82$, $0.60$ and $0.05$
confidence score distortion, corresponding to $30\%$, $30\%$ and $63\%$
reductions in attack precision. Our results show that NsD achieves the lowest
confidence score distortion, leading to an oblivious defense and the
preservation of the target model’s utility.
Figure 6: Comparison of defense mechanisms using the label loss
($\mathcal{L}$) and confidence score distortion ($\mathcal{C}$) metrics;
panels: (a) Cora, (b) CiteSeer, (c) PubMed, (d) Flickr, (e) Reddit. The red
dashed line corresponds to the attack AUROC when no defense mechanism was
applied. We observe a similar trend for Precision and Recall. Table IV shows
the performance on all metrics at specific values of $\mathcal{L}$.
## VII Conclusion
We compare the vulnerability of GNN models to membership inference attacks and
show that the observed differences in vulnerability are caused by differences
in various model and dataset properties. We show that the simplest binary
classifier-based attack model already suffices to launch an attack on GNN
models, even when they generalize well. We carried out experiments on five
popular datasets in two realistic settings. To prevent MI attacks on GNNs, we
propose two effective defenses based on output perturbation and query
neighborhood sampling that significantly decrease the attacker’s inference
without substantially compromising the target model’s performance.
Acknowledgements. This work is in part funded by the Lower Saxony Ministry of
Science and Culture under grant number ZN3491 within the Lower Saxony "Vorab"
of the Volkswagen Foundation and supported by the Center for Digital
Innovations (ZDIN), and the Federal Ministry of Education and Research (BMBF)
under LeibnizKILabor (grant number 01DD20003).
## References
* [1] N. Carlini, C. Liu, Ú. Erlingsson, J. Kos, and D. Song. The secret sharer: Evaluating and testing unintended memorization in neural networks. In 28th USENIX Security Symposium, 2019.
* [2] M. Fredrikson, S. Jha, and T. Ristenpart. Model inversion attacks that exploit confidence information and basic countermeasures. In Proceedings of the 22nd ACM SIGSAC Conference on Computer and Communications Security, pages 1322–1333, 2015.
* [3] W. L. Hamilton, R. Ying, and J. Leskovec. Inductive representation learning on large graphs. In NIPS, 2017.
* [4] X. He, R. Wen, Y. Wu, M. Backes, Y. Shen, and Y. Zhang. Node-level membership inference attacks against graph neural networks. arXiv preprint arXiv:2102.05429, 2021.
* [5] B. Jayaraman and D. Evans. Evaluating differentially private machine learning in practice. In 28th USENIX Security Symposium, 2019.
* [6] J. Jia, A. Salem, M. Backes, Y. Zhang, and N. Z. Gong. Memguard: Defending against black-box membership inference attacks via adversarial examples. In Proceedings of the 2019 ACM SIGSAC Conference on Computer and Communications Security, 2019.
* [7] T. N. Kipf and M. Welling. Semi-supervised classification with graph convolutional networks. In ICLR, 2017.
* [8] L. Melis, C. Song, E. De Cristofaro, and V. Shmatikov. Exploiting unintended feature leakage in collaborative learning. In 2019 IEEE Symposium on Security and Privacy (SP), 2019.
* [9] M. Nasr, R. Shokri, and A. Houmansadr. Machine learning with membership privacy using adversarial regularization. In Proceedings of the 2018 ACM SIGSAC Conference on Computer and Communications Security, 2018.
* [10] N. Papernot, P. McDaniel, A. Sinha, and M. Wellman. Towards the science of security and privacy in machine learning. arXiv preprint arXiv:1611.03814, 2016.
* [11] S. Sajadmanesh and D. Gatica-Perez. When differential privacy meets graph neural networks. arXiv preprint arXiv:2006.05535, 2020.
* [12] A. Salem, A. Bhattacharya, M. Backes, M. Fritz, and Y. Zhang. Updates-leak: Data set inference and reconstruction attacks in online learning. In USENIX, pages 1291–1308, 2020.
* [13] A. Salem, Y. Zhang, M. Humbert, P. Berrang, M. Fritz, and M. Backes. Ml-leaks: Model and data independent membership inference attacks and defenses on machine learning models. arXiv preprint arXiv:1806.01246, 2018.
* [14] R. Shokri, S. Marco, S. Congzheng, and S. Vitaly. Membership inference attacks against machine learning models. In IEEE Symposium on Security and Privacy (SP), 2017.
* [15] F. Tramèr, F. Zhang, A. Juels, M. K. Reiter, and T. Ristenpart. Stealing machine learning models via prediction apis. USENIX Association, 2016.
* [16] P. Veličković, G. Cucurull, A. Casanova, A. Romero, P. Liò, and Y. Bengio. Graph Attention Networks. ICLR, 2018.
* [17] B. Wang and N. Z. Gong. Stealing hyperparameters in machine learning. In 2018 IEEE Symposium on Security and Privacy (SP), 2018.
* [18] F. Wu, A. Souza, T. Zhang, C. Fifty, T. Yu, and K. Weinberger. Simplifying graph convolutional networks. In Proceedings of the 36th International Conference on Machine Learning, pages 6861–6871. PMLR, 2019.
* [19] S. Yeom, I. Giacomelli, M. Fredrikson, and S. Jha. Privacy risk in machine learning: Analyzing the connection to overfitting. In 2018 IEEE 31st Computer Security Foundations Symposium (CSF), 2018.
* [20] C. Zhang, S. Bengio, M. Hardt, B. Recht, and O. Vinyals. Understanding deep learning requires rethinking generalization. arXiv preprint arXiv:1611.03530, 2016.
* [21] B. Z. H. Zhao, H. J. Asghar, R. Bhaskar, and M. A. Kaafar. On inferring training data attributes in machine learning models. arXiv preprint arXiv:1908.10558, 2019.
* [22] J. Zhou, C. Chen, L. Zheng, X. Zheng, B. Wu, Z. Liu, and L. Wang. Privacy-preserving graph neural network for node classification. arXiv:2005.11903, 2020.
# Dislocation content of grain boundary phase junctions and its relation to
grain boundary excess properties
T. Frolov Lawrence Livermore National Laboratory, Livermore, California
94550, USA D. L. Medlin Sandia National Laboratories, Livermore, California
94551, USA M. Asta Department of Materials Science and Engineering,
University of California Berkeley, Berkeley, California 94720, USA
###### Abstract
We analyze the dislocation content of grain boundary (GB) phase junctions,
i.e., line defects separating two different GB phases coexisting on the same
GB plane. While regular GB disconnections have been characterized for a
variety of interfaces, GB phase junctions formed by GBs with different
structures and different numbers of excess atoms have not been previously
studied. We apply a general Burgers circuit analysis to calculate the Burgers
vectors $\mathbf{b}$ of junctions in two $\Sigma 5$ Cu boundaries previously
simulated with molecular dynamics. The Burgers vectors of these junctions
cannot be described by the displacement shift complete (DSC) lattice alone. We
show that, in general, the normal component of $\mathbf{b}$ is not equal to
the difference in the GB excess volumes, but contains another contribution
from the numbers of GB atoms per unit area $\Delta N^{*}$ required to
transform one GB phase into another. In the boundaries studied, the latter
component dominates and even changes the sign of $\mathbf{b}$. We derive
expressions for the normal and tangential components of b in terms of the DSC
lattice vectors and the non-DSC part due to $\Delta N^{*}$ and additional GB
excess properties, including excess volume and shears. These expressions
provide a connection between GB phase transformations driven by the GB free
energy difference and the motion of GB junctions under applied normal and
shear stresses. The proposed analysis quantifies b and therefore makes it
possible to calculate the elastic part of the energy of these defects,
evaluate their contribution to the nucleation barrier during GB phase
transformations, and treat elastic interactions with other defects.
###### pacs:
64.10.+h, 64.70.K−, 68.35.Md
## I Introduction
Similarly to bulk materials, interfaces can exist in multiple states, or
phases, and exhibit first-order phase transitions.(Cantwell20141, ) Such
transitions proceed by nucleation and growth of a new interfacial structure
resulting in a discontinuous change in the excess properties of the
interface.(Frolov2013, ) For fluid interfacial phases, the conditions of
equilibrium and stability were first derived by Gibbs, who called them
interfacial states.(Gibbs, ) Gibbs showed that an interface should transform to
a state with the lowest interfacial free energy. In order to describe such
transitions within the existing theories of nucleation and phase
transformation, it is necessary to quantify the driving force as well as the
nucleation barrier. The interfacial free energy difference provides the
driving force according to Gibbs, while the excess free energy associated with
the line defect separating the two phases is the penalty for the
transformation. A thermodynamic framework quantifying the excess properties of
such line defects and their free energy has been recently
proposed.(Frolov:2015ab, ) The developed framework is Gibbsian in spirit and
makes no assumptions about the atomic details of the defect structure. It
assumes that the energy of this defect is finite, scales with its length, and
is independent of the system size. As a result, the treatment applies to
fluids and some solid systems,(PhysRevB.95.155444, ) but cannot be extended to
general solid-solid interfaces without significant approximations.
A grain boundary is a solid-solid interface formed by two misoriented crystals
of the same material. The phase behavior of GBs has recently become a topic of
increased interest due to the accumulating experimental and modeling evidence
of phase transformations at such interfaces.(Divinski2012, ;
PhysRevLett.59.2887, ; Cantwell20141, ; krause_review_2019, ; Rickman2016225,
; doi:10.1080/095008399177020, ; Rohrer2016231, ; rupert_role_2016, ;
Dillon20076208, ; OBrien2018, ; ABDELJAWAD2017528, ; rajeshwari_k_grain_2020,
; glienke_grain_2020, ) The line defects formed between two different GB
phases can have a long range elastic field associated with them. GB
disconnections are examples of such line defects and have been analyzed for
different types of interfaces.(HIRTH2013749, ; Hirth96, ) Disconnections on
GBs and phase boundaries have been investigated by modeling and
experiments.(Medlin2017383, ; medlin_accommodation_2003, ;
medlin_accommodation_2006, ; Pond03a, ; hirth_disconnections_2016, ;
rajabzadeh_role_2014, ; zhu_situ_2019, ) On the other hand, GB phase junctions
have received much less attention since there have only been a few studies of
GB structures with two different connected interfacial phases. At present, the
topological nature of these defects and the magnitude of their Burgers vectors
are not well understood. Recent modeling and experimental studies in elemental
Cu suggest that the dislocation character of these defects has a strong effect
on the kinetics of GB phase transformations and could explain the low
transformation rates at room temperature.(meiners_observations_2020, ) The
elastic field contribution to the energy could, in principle, dominate the GB
phase nucleation behavior, and play a role in the interaction of the junctions
with other GB disconnections, lattice dislocations and point defects.
Atomistic simulations have demonstrated first-order transitions in $\Sigma
5(210)[001]$ and $\Sigma 5(310)[001]$ symmetric tilt GBs in Cu.(Frolov2013, )
The heterogeneous structures of these two $\Sigma 5$ GBs, each containing two
different phases, are shown in Fig. 1. Both structures were obtained by
molecular dynamics simulations at T=800 K with the boundary connected to an open
surface. In these simulations the surface acts as a nucleation site and a
source of extra atoms or vacancies necessary to grow the new GB phase. Because
the cores of these GB phases are composed of different numbers of atoms, atoms
must be inserted or removed to transform from one phase to the
other.(doi:10.1080/01418618308243118, ; doi:10.1080/01418618608242811, ;
DUFFY84a, ; Phillpot1992, ; Phillpot1994, ; Alfthan06, ; Alfthan07, ;
Chua:2010uq, ; Frolov2013, ; Demkowicz2015, ) During the transformation these
GB phases are separated by a GB phase junction. The line direction of this
defect is normal to the plane of the image and it spans the periodic dimension
of the simulation block. The transformation was accompanied by the migration
of the GB phase junction, which required diffusion of extra atoms from the
surface. These two $\Sigma 5$ boundaries provide a convenient model system to
analyze the topological character of GB phase junctions and quantify their
Burgers vectors. In this work, we show that GB phase junctions have a
dislocation character, just like regular disconnections, but their Burgers
vectors differ from crystallographic DSC vectors since they include a
contribution resulting from the difference in the GB structures. We
demonstrate a general Burgers circuit analysis that quantifies the dislocation
content of these defects. We interpret it in terms of contributions from
different excess properties, including excess volume and shear, and the
difference in the number of atoms required to transform one GB phase into
another.
## II Methodology
### II.1 Closure failure and Burgers circuit construction
Fig. 2 shows why GB phase junctions are expected to have dislocation character
and generate long-range elastic fields. Consider two bicrystals with different
GB structures or phases, as shown schematically in Fig. 2a. Each GB structure
has its own excess properties such as excess volume(Gibbs, ; Cahn79, ) and
shear.(Frolov2012a, ; Frolov2012b, ) In addition, the GB cores are composed of
a different number of atoms. As a result, the two bicrystals are geometrically
incompatible. To form a GB phase junction we need to join the two bicrystals.
Fig. 2b shows that this procedure will result in a closure failure. In order
to form the junction using the Volterra operation,(Hirth, ) we elastically
strain both bicrystals by appropriate amounts, so that their bulk lattice
planes away from the boundary match. As a result of this procedure, we form a
line defect with a dislocation character, as shown in Fig. 2c.
Based on how the GB phase junction was created in this thought experiment, it
is straightforward to design a Burgers circuit that would quantify the closure
failure, i.e., the Burgers vector of the defect. We start with the deformed
bicrystal shown in Fig. 3 and construct a closed circuit ABCD around the GB
phase junction. The corner lattice points of the circuit coincide with the
lattice sites located inside the bulk crystal, far away from the GB junction
and GB phases. Two vectors $\mathbf{\mathbf{AB}}$ and $\mathbf{\mathbf{CD}}$
cross the two different GB phases, while the other two vectors $\mathbf{BC}$
and $\mathbf{DA}$ are located inside the bulk of the two grains and are
lattice vectors. To calculate the Burgers vector of the GB phase junction we
measure these vectors in the reference state before the junction was created.
The vectors $\mathbf{\mathbf{B^{\prime}C^{\prime}}}$ and
$\mathbf{D^{\prime}A^{\prime}}$ are lattice vectors and their components can
be expressed in terms of the reference bulk lattice parameter. The crossing
vectors should be mapped on the respective reference bicrystals as shown in
Fig. 3b. Here, the references for the crossing vectors are chosen as
bicrystals possessing only a single GB phase so that the crossing vectors will
be unaffected by the elastic field of the GB phase junction. With such a
reference, it is always possible to find a pair of of crystallographically
equivalent lattice sites, such as A’ and B’ and thereby measure the vector
$\mathbf{A^{\prime}B^{\prime}}$. Alternatively these equivalent lattice sites
can be selected in the same deformed bicrystal but infinitely far away from
the junction. The points A, B, C and D can be chosen arbitrary as long as they
enclose the phase junction and their positions in the reference state are not
affected by the GB. In other words, the difference between any two choices of
a given lattice point such as $A^{\prime}$ equals a perfect lattice vector,
which also means that the difference between any two choices of the crossing
vectors is a DSC vector. According to the circuit construction, the closure
failure, or the Burgers vector $\mathbf{b}$, equals the sum of the four
vectors measured in the undeformed state:
$\boldsymbol{\mathbf{b}}=\boldsymbol{A^{\prime}B^{\prime}}+\boldsymbol{B^{\prime}C^{\prime}}+\boldsymbol{C^{\prime}D^{\prime}}+\boldsymbol{D^{\prime}A^{\prime}}.$
(1)
Here, we follow the start-to-finish right-handed (SFRH) convention. In Section
III, we apply this approach to quantify the dislocation content of GB phase
junctions in the $\Sigma 5(210)[001]$ and $\Sigma 5(310)[001]$ Cu GBs. We
describe how to calculate the crossing vectors $\mathbf{A^{\prime}B^{\prime}}$
and $\mathbf{C^{\prime}D^{\prime}}$ in the reference bicrystals.
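Numerically, once the four reference-state vectors are measured, Eq. (1) is a plain vector sum. A minimal sketch (Python, with illustrative vectors):

```python
import numpy as np

def burgers_vector(AB, BC, CD, DA):
    """Closure failure of the circuit ABCD, Eq. (1), SFRH convention:
    b = A'B' + B'C' + C'D' + D'A', each vector measured in the
    undeformed reference state."""
    return AB + BC + CD + DA

# Illustrative reference-state vectors (arbitrary length units). The two
# lattice legs are chosen equal and opposite here, so they cancel and b
# reduces to the mismatch of the two crossing vectors.
AB = np.array([0.00, 0.00, 5.12])    # crosses GB phase alpha
BC = np.array([18.0, 0.00, 0.00])    # lattice vector of the upper grain
CD = np.array([0.00, 0.00, -5.03])   # crosses GB phase beta (traversed back)
DA = np.array([-18.0, 0.00, 0.00])   # lattice vector of the lower grain
print(burgers_vector(AB, BC, CD, DA))  # -> approx. [0. 0. 0.09]
```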
### II.2 Relating normal and tangential components of the Burgers vector to
excess properties of GBs and DSC lattice vectors.
Equation (1) and the circuit procedure described in Sec. (II.1) are sufficient
to calculate the Burgers vector of any given GB phase junction. (Footnote 1:
In atomistic simulations this is always possible, at least in principle. The
reference GB phases can be generated separately or carved out from the
deformed state and relaxed in a proper way so that the reference crossing
vectors can be calculated. In experiment, the analysis cannot be completed
based on a single image of the deformed state; however, GB structures
sufficiently far away from the junction can be approximated as reference
states.) In general, a full three-dimensional structure is necessary to
determine all three components of $\mathbf{b}$. The procedure, however, does
not quantify the specific
contributions to $\mathbf{b}$ from bicrystallography and from the difference
in the GB excess properties. In this section, we express the crossing vectors
in terms of crystallographic properties and GB excess properties and quantify
their contributions to the Burgers vector. Relative normal and tangential
displacements of the grains due to the GB can be described using the
deformation gradient formalism.(Frolov2012a, ; Frolov2012b, ) The proposed
thermodynamic framework enables quantification of the excess shear at GBs and
its relation to the relative translation of the grains. Both the normal and
tangential components of the crossing vectors can be derived within this
approach and related to the excess volumes and shears of different GB phases.
In the following two subsections we will first derive the normal Burgers
vector component $b_{3}$ using the well-known definition of the GB excess
volume. In the next section, we derive all three components of
$\mathbf{\mathbf{b}}$ using the deformation gradient approach proposed in
Refs.(Frolov2012a, ; Frolov2012b, ).
#### II.2.1 Normal component $b_{3}$
The normal components of the crossing vectors measured in the reference state
can be expressed as a sum of bulk and GB contributions using the familiar
definition of the GB excess volume. The GB excess volume $[V]_{N}$ is equal to
the difference between the volume of the bicrystal $V_{bicrystal}$ spanned by
the crossing vector and containing $N$ atoms, and the volume of the bulk
crystal containing the same number of atoms divided by the GB area. The
regions spanned by the crossing vectors are shown in Fig. 4a. For two GB
phases $\alpha$ and $\beta$ we have
$[V]_{N}^{\alpha}\cdot\mathcal{A}=V_{bicrystal}^{\alpha}-\Omega\cdot
N^{\alpha}=A^{\prime}B^{\prime}_{3}\cdot\mathcal{A}-\Omega\cdot N^{\alpha},$ (2)
$[V]_{N}^{\beta}\cdot\mathcal{A}=V_{bicrystal}^{\beta}-\Omega\cdot
N^{\beta}=D^{\prime}C^{\prime}_{3}\cdot\mathcal{A}-\Omega\cdot N^{\beta},$ (3)
where $\Omega$ is volume per atom in the perfect lattice, $\mathcal{A}$ is the
GB area and the superscripts refer to bicrystals with GB phases $\alpha$ and
$\beta$. Eqs. (2) and (3) express the normal components of the crossing
vectors in the reference state in terms of the excess volumes and numbers of
atoms per unit area:
$A^{\prime}B^{\prime}_{3}=[V]_{N}^{\alpha}+\Omega\cdot
N^{\alpha}/\mathcal{A},$ (4)
$D^{\prime}C^{\prime}_{3}=[V]_{N}^{\beta}+\Omega\cdot N^{\beta}/\mathcal{A}.$
(5)
Here, both the GB excess volume and the second terms on the right-hand side,
$\Omega\cdot$$N/\mathcal{A}$, have the units of length. To illustrate the
physical meaning of the second term, consider first a region inside a perfect
crystal. In this case, $\Omega\cdot$$N/\mathcal{A}$ equals the distance
between the corresponding atomic planes, which is a normal component of a
lattice vector. On a DSC lattice formed by two perfect crystals with different
orientations, this quantity is equal to the normal component of a DSC vector.
However, for a general bicrystal with a GB, $\Omega\cdot$$N/\mathcal{A}$ is
not necessarily a component of a DSC vector because the number of atoms at the
GB is not restricted to be the same as in the lattice. We can now combine Eqs
(4) and (5) with Eq. (1) to derive an expression for $b_{3}$:
$b_{3}=(A^{\prime}B^{\prime}_{3}-D^{\prime}C^{\prime}_{3})+(B^{\prime}C_{3}^{\prime}-A^{\prime}D_{3}^{\prime})=[V]_{N}^{\alpha}-[V]_{N}^{\beta}+\Omega(N^{\alpha}-N^{\beta})/\mathcal{A}+d_{3}^{sc}=\Delta[V]_{N}+\Omega\Delta
N/\mathcal{A}+d_{3}^{sc}.$ (6)
Equation (6) is an analytical expression for the normal component of the
Burgers vector of a GB phase junction. The equation reveals the different
factors that contribute to $b_{3}$. The first one is the difference between
the excess volumes of the two GB phases. The second term corresponds to the
difference in the number of atoms inside the two regions spanned by the
crossing vectors. Finally, the two lattice vectors of the upper and lower
grains contribute the DSC component
$d_{3}^{sc}=B^{\prime}C_{3}^{\prime}-A^{\prime}D_{3}^{\prime}$.
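As a sketch of this bookkeeping, the following illustrates Eqs. (2)-(6) with hypothetical numbers; $\Omega=a^{3}/4$ for an fcc lattice:

```python
def excess_volume(V_bicrystal, N, Omega, A):
    """GB excess volume per unit area, Eqs. (2)-(3):
    [V]_N = (V_bicrystal - Omega * N) / A."""
    return (V_bicrystal - Omega * N) / A

def b3_eq6(AB3, DC3, BC3, AD3):
    """Normal Burgers component, Eq. (6):
    b3 = (A'B'_3 - D'C'_3) + (B'C'_3 - A'D'_3)."""
    return (AB3 - DC3) + (BC3 - AD3)

# Illustrative numbers in Angstrom units; Omega = a^3/4 for fcc Cu.
Omega = 3.615**3 / 4.0
print(excess_volume(V_bicrystal=80250.0, N=6780, Omega=Omega, A=650.0))
```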
We derived Eq. (6) for one particular GB phase junction, but we can use this
analysis to demonstrate that all other possible Burgers vectors of such
junctions form a DSC lattice. In addition, according to Eq. (6) the origin of
this lattice is shifted away from zero in the direction normal to the GB plane
by $\Delta[V]_{N}+\Omega\Delta N/\mathcal{A}$. To do so, we consider all other
possible junctions between the same GB phases with different Burgers vectors
and construct the Burgers circuits for each junction such that the crossing
vectors $\mathbf{A^{\prime}B^{\prime}}$ and $\mathbf{C^{\prime}D^{\prime}}$
are the same for all junctions and $\mathbf{B^{\prime}C^{\prime}}$ and
$\mathbf{A^{\prime}D^{\prime}}$ may not be the same. Then the difference
between any two Burgers vectors will be equal to the difference in
$\mathbf{B^{\prime}C^{\prime}}-\mathbf{A^{\prime}D^{\prime}}$ of the two
circuits, which is necessarily a DSC vector since
$\mathbf{B^{\prime}C^{\prime}}$ and $\mathbf{A^{\prime}D^{\prime}}$ are
lattice vectors of different grains. As a result the Burgers vectors of any
two junctions differ exactly by some DSC vector and all admissible Burgers
vectors form a DSC lattice.
So far we assumed that the corners of the circuit were chosen in a general
way, as shown in Fig. 4a. We can now consider some particular choices of the
lattice sites A, B, C and D to relate the terms in Eq. (6) to some measurable
properties of GB junctions. Specifically, consider a circuit shown in Fig. 4b
when the lattice vectors $\mathbf{BC}$ and $\mathbf{AD}$ are located along
atomic planes parallel to the GB plane. These atomic planes are elastically
distorted due to the presence of the dislocation, but they are parallel to
each other and to the GB plane in the reference state. By this choice,
$B^{\prime}C_{3}^{\prime}\equiv A^{\prime}D_{3}^{\prime}\equiv 0$, setting the
DSC term in Eq. (6) to zero, and we obtain
$b_{3}=\Delta[V]_{N}+\Omega\Delta N^{\prime}/\mathcal{A}.$ (7)
Here, we define $\Delta N^{\prime}/\mathcal{A}$ as the defect absorption
capacity of a GB junction. It is equal to the difference in the number of
atoms inside the equivalent volumes located on the two sides of the junction
and bounded by the same atomic planes parallel to the GB plane, shown in Fig.
4b. The defect absorption capacity represents the number of atoms per unit of
GB area absorbed or ejected when the junction moves along the GB.
As a simple illustration, consider the climb of a regular disconnection inside
a single-phase GB. The motion of this disconnection requires a supply of atoms
or vacancies with the number of the point defects proportional to the normal
component of the Burgers vector, which in this case is a DSC lattice
vector.(HIRTH2013749, ; Hirth96, ) According to Eq. (7) the defect absorption
capacity of such a disconnection is given by $\Delta
N_{disc}^{\prime}/\mathcal{A}=b_{3}/\Omega=d_{3}^{sc}/\Omega$, since the GB
structure on the two sides of the disconnection is the same and
$\Delta[V]_{N}\equiv 0$. This example also illustrates that different
disconnections may have different defect absorption capacities and the
difference is given by a normal component of some DSC vector divided by the
volume per atom. For a general GB phase junction separating different GB
phases, however, the defect absorption capacity is not defined by the DSC
lattice alone, as will be discussed below.
For a given physical system, $b_{3}$ of a GB phase junction is a well defined,
single-valued quantity. However, any GB phase junction can in principle
increase or decrease its dislocation content and its defect absorption
capacity by absorbing or ejecting regular disconnections. This multiplicity of
possible $b_{3}$ is captured by the second term in Eq. (7). While the first
term, $\Delta[V]_{N}=[V]_{N}^{\alpha}-[V]_{N}^{\beta}$, is a constant set by
the values of the excess volumes of the two GB phases, the second term,
$\Omega\Delta
N^{\prime}/\mathcal{A}=\Omega(N^{\alpha}-N^{\beta})/\mathcal{A}$, represents a
set of possible values. When disconnections are ejected or absorbed by the
junction, $\Omega\Delta N^{\prime}/\mathcal{A}$ term describes the discrete
changes in the normal component of the Burgers vector which occurs in
increments dictated by the DSC lattice. For this reason, it makes sense to
further split $\Omega\Delta N^{\prime}/\mathcal{A}$ in Eq. (7) into
contributions described by the DSC lattice and the smallest in magnitude non-
DSC part $\Delta N^{*}/\mathcal{A}$ :
$b_{3}=\Delta[V]_{N}+\Omega\Delta N^{*}/\mathcal{A}+d_{3}^{sc}$ (8)
Subtracting Eq. (8) from Eq. (7), we also obtain
$\Delta N^{\prime}/\mathcal{A}=\Delta N^{*}/\mathcal{A}+d_{3}^{sc}/\Omega$ (9)
Equation (8) demonstrates that all admissible Burgers vectors of GB phase
junctions can be obtained by constructing a DSC lattice for a given bicrystal
and shifting the origin of this lattice by a non-DSC vector with the normal
component given by $\Delta[V]_{N}+\Omega\Delta N^{*}/\mathcal{A}$. The in-
plane components of this shift vector will be derived in the next section. Eq.
(9) shows that the defect absorption capacity of a junction can change in
increments dictated by the DSC lattice, but may not be reduced to zero in some
cases because of $\Delta N^{*}$. As introduced by Eq. (8), $\Omega\Delta
N^{*}/\mathcal{A}$ is smaller in magnitude than the smallest normal component
of a DSC vector, $min(d_{3}^{sc})$. It is also defined only up to
$min(d_{3}^{sc})$ and can be either positive or negative.
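A sketch of the reduction implied by Eqs. (8) and (9): given a measured defect absorption capacity, strip off whole DSC increments and keep the smallest-magnitude remainder $\Delta N^{*}/\mathcal{A}$ (Python; names and numbers are ours):

```python
def split_absorption_capacity(dN_prime_per_A, Omega, d3_min):
    """Split the defect absorption capacity per Eq. (9):
    Delta N'/A = Delta N*/A + d3sc/Omega,
    keeping the smallest-magnitude remainder as Delta N*/A."""
    n_planes = dN_prime_per_A * Omega / d3_min  # capacity in units of one plane
    m = round(n_planes)                         # whole DSC increments
    dN_star_per_A = (n_planes - m) * d3_min / Omega
    d3_sc = m * d3_min                          # DSC part of b3
    return dN_star_per_A, d3_sc

# Illustrative: a capacity of 1.35 planes splits into one DSC increment
# plus a 0.35-plane non-DSC remainder.
Omega, d3_min = 11.81, 0.572
print(split_absorption_capacity(1.35 * d3_min / Omega, Omega, d3_min))
```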
To illustrate the meaning of the different terms in Eqs. (8) and (9) we
consider several examples. We start again with a regular disconnection as the
simplest case, in which the GB structure is the same on both sides of the line
defect. The Burgers vector is exactly a DSC lattice vector; as a result,
$\Delta N^{*}/\mathcal{A}\equiv 0$. In other words, the defect absorption
capacity of a regular disconnection is described exactly by the normal
components of the DSC lattice vectors.
In another special case, consider a junction formed by two different GB phases
composed of the same number of atoms, meaning that given a bicrystal with one
GB phase, the same bicrystal with a different GB phase can be obtained by
rearranging the atoms in the GB region and changing the relative translations
of the grains if necessary without inserting or removing atoms from the GB
core. In this case $\Delta N^{*}=0$ again, but the excess GB volume difference
contributes to the non-DSC part of the Burgers vector normal component:
$b_{3}=[V]_{N}^{\alpha}-[V]_{N}^{\beta}+d_{3}^{sc}$. As a result, differently
from regular disconnections, the origin of the DSC lattice of all possible
Burgers vectors of such a junction is not located at zero. In the general
case, however, the $\Omega\Delta N^{\prime}/\mathcal{A}$ term is not equal to
a normal component of a DSC vector or to zero, and $\Delta N^{*}$ is not zero.
At this point $\Delta N^{*}$ was derived through its contribution to the
normal shift of the origin of the DSC lattice of all possible Burgers vectors
of a given GB phase junction. We now turn our discussion to the physical
meaning of this quantity. We show that $\Delta N^{*}/\mathcal{A}$ corresponds
to the smallest number of atoms or vacancies per unit area required to
transform one GB phase into another. Indeed, out of all possible choices, we
can always select a junction such that the difference $b_{3}-\Delta[V]_{N}$ is
smaller than the smallest normal component $min(d_{3}^{sc})$ of any DSC
vector, making $d_{3}^{sc}$ in Eq (8) zero and
$b_{3}-\Delta[V]_{N}=\Omega\Delta N^{*}/\mathcal{A}$. For this junction, by
definition, $\Delta N^{*}$ is the difference in the number of atoms inside two
regions containing two different GB phases and can be interpreted as the
number of atoms required to be inserted or removed to transform one GB phase
into another. This number is also the smallest, because any other changes in
the number of atoms that preserve the two given GB structures require
insertion or removal of atoms in the increments of $min(d_{3}^{sc})/\Omega$
atoms per unit area.
A growing number of modeling studies have demonstrated that for many GB
transitions $\Delta N^{*}$ is not zero: some number of atoms must be added or
removed to transform one GB phase to the other. The difference in the number of GB atoms
$\Delta N^{*}$ originates from the fact that some GB structures cannot be
obtained by simply joining two perfect half crystals: in addition, some
fraction of the GB atoms needs to be added or removed from the GB core and
this fraction is different for different GB phases. The importance of
optimizing the number of atoms at GBs has been demonstrated in different GB
types and several different materials systems.(doi:10.1080/01418618308243118,
; doi:10.1080/01418618608242811, ; DUFFY84a, ; Phillpot1992, ; Phillpot1994, ;
Alfthan06, ; Alfthan07, ; Chua:2010uq, ; Frolov2013, ; Demkowicz2015, ;
Han:2016aa, ) New computational methods designed to perform grand-canonical
optimization of GB structure have been proposed.(Zhu2018, ;
banadaki_efficient_2018, ; gao_interface_2019, ; yang_grain_2020, )
The optimization of the number of atoms in the GB core is related to the
atomic density at the boundary, but it is not uniquely determined by the
excess GB volume and represents an additional GB parameter. Previous studies
reported the actual number of removed or inserted atoms for a given GB cross-
section or calculated it per unit area relative to an idealized reference
bicrystal system which is arbitrary.(Alfthan06, ; Demkowicz2015, ; Han:2016aa,
) In our previous study, we reported a fraction of GB atoms $[n]$ calculated
relative to the numbers of atoms in a bulk plane parallel to the
boundary.(Frolov2013, ) To calculate this quantity for a given GB, we count
the total number of atoms inside a region containing a GB and the number of
atoms in one atomic plane parallel to the GB located inside the perfect
crystal. The fraction $[n]$ was then calculated as a modulo of this two
numbers and was divided by the number of atoms in one plane. The advantage of
the quantity introduced in Ref. (Frolov2013, ) is that it allows to calculate
a well defined property related to the numbers of atoms at GBs without keeping
track of the number of atoms inserted or removed during the process of GB
optimization. While this parameter can be readily calculated for twist and
symmetric tilt boundaries for some crystal lattices, it cannot be accepted as
a general descriptor. For example, this quantity cannot be calculated for
asymmetric boundaries with different areal number density of atoms per plane
in the different grains. Moreover, even for symmetric boundaries this
descriptor needs to be generalized to work for crystal lattices with more than
one atom per primitive unit cell, such as in diamond or hexagonal close packed
(hcp) lattices. Note that for symmetric tilt GBs the number of atoms per unit
area in one bulk plane is given by $min(d_{3}^{sc})/\Omega$. As a result, for
such boundaries the proposed fraction of atoms $[n]$ is exactly equivalent to
subtracting out the largest DSC component and $[V]_{N}$ from the normal
component of a crossing vector of a given boundary and dividing it by
$min(d_{3}^{sc})$, which is analogous to Eq. (9) derived for $\Delta N^{*}$.
As a modulo, $[n]$ is required to be positive. The advantage of calculating
the smallest non-DSC component of a crossing vector, derived in this study,
instead of $[n]$ is that it is also defined for asymmetric GBs. This non-DSC
component can be calculated for each individual boundary and is related to the
excess number of GB atoms per unit area which we denote as $N^{*}/\mathcal{A}$
relative to the bulk system defined by the subtracted DSC vector component. By
this definition, this number of GB atoms per unit area is defined up to
$min(d_{3}^{sc})/\Omega$ and can be positive or negative.
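A sketch of the counting behind $[n]$ for a symmetric tilt GB, where every bulk plane parallel to the boundary holds the same number of atoms (illustrative values):

```python
def gb_atom_fraction(N_region, N_plane):
    """Fraction of GB atoms [n] of Ref. (Frolov2013, ): the atom count of a
    region containing the GB, modulo the atom count of one bulk plane
    parallel to the GB, as a fraction of that plane (0 <= [n] < 1)."""
    return (N_region % N_plane) / N_plane

print(gb_atom_fraction(N_region=1234, N_plane=100))  # -> 0.34
```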
In the context of GB phase transformations analyzed in this work, $\Delta
N^{*}/\mathcal{A}$, representing the smallest number of atoms or vacancies per
unit of GB area required to transform one GB phase into another, is a well
defined quantity which can be measured for symmetric and asymmetric GBs. The
derived Eq. (8) relates $\Delta N^{*}/\mathcal{A}$ to the non-DSC part of the
normal Burgers vector component of the GB phase junction. In our derivation
leading to Eq. (8), we made no assumptions about the type of the CSL boundary
and it is valid for both symmetric and asymmetric boundaries. We analyze
specific examples of GB phase junctions and calculate $\Delta N^{*}$ in the
atomistic simulation section of this article.
#### II.2.2 Deformation gradient treatment of all three components of the
Burgers vector
In the previous section we showed that the normal components of all possible
Burgers vectors of a GB phase junction can be described by a DSC lattice with
the origin shifted normal to the GB plane by a vector related to the
difference in the excess volumes and the number of atoms $\Delta N^{*}$. Since
a GB phase junction is a dislocation, it will experience a Peach-Koehler (PK)
force when mechanical stresses are applied.(Hirth, ) This force produces a
driving force for the junction motion, i.e., the GB phase transformation, and will
also influence the equilibrium coexistence. When tension or compression is
applied normal to the GB plane, the driving force or the work of the PK force
per unit area is equal to the product of the normal component of stress and
the normal component of the Burgers vector.
Another way to describe the same effects is to consider the free energies of
the two phases. Consider a junction between two GB phases with $\Delta
N^{*}=0$. The mechanical stresses normal to the GB plane change the free
energies of both grain boundary phases, with the change proportional to the
excess volume of each boundary, as described by the absorption equation
(Gibbs, ). This part of the free energy difference due to the normal stress
contributes to the driving force for the GB phase transformation and is given
by the product of the normal component of stress and the difference in the
excess volumes. So far, we have demonstrated that the normal component of the
Burgers vector contains a contribution from the difference in the excess
volumes. Thus, the analysis presented here for the normal component of
$\mathbf{b}$ provides a connection between these two equivalent descriptions
of the driving force.
In addition to the normal stress, solid interfaces support shear stresses
parallel to the interface plane.(Robin74, ; dingreville_interfacial_2008, ;
Larche_Cahn_78, ; doi:10.1063/1.448644, ) These stresses also result in a PK
force on GB phase junction and change the free energies of the two phases.
Excess shear of an interface is an extensive property that describes how the
interface free energy changes with applied shear stress parallel to the
boundary.(Frolov2012a, ) Excess shears and GB free energy as a function of
shear stress have been calculated for different GBs using atomistic
simulations.(Frolov2012b, ; meiners_observations_2020, ) A recent study
demonstrated a shear stress induced GB transformations as well as equilibrium
coexistence under applied shear stress. Moreover, the coexistence stress was
accurately predicted from the values of the excess shears and the stress
dependence of the GB free energies.(meiners_observations_2020, )
In this section we derive an expression for all three components of
$\mathbf{b}$, including the tangential components. We show that the origin of
this DSC lattice of possible Burgers vectors is also shifted in the plane of
the boundary due to the difference in excess shears. To do so, we need to
express all three components of the crossing vectors in terms of contributions
from the bulk and GB properties, such as the GB excess volume, shear and the
number of atoms. Following Refs. (Frolov2012a, ; Frolov2012b, ), we assume
that there exists a mapping of one grain into the other, which establishes a
unique relation between the lattice sites of the two crystals. This
transformation is described by a deformation gradient $\mathbf{F}$. In this
work, we only consider mappings that exclude transformations resulting in GB
coupled motion, which means that the $\mathbf{F_{i3}^{b}}$ components of the
deformation gradient are the same for both grains, where the superscript b
indicates the bulk part of the crystals. Specific examples of excess shear
calculations for different GBs can be found in Refs. (Frolov2012b, ;
meiners_observations_2020, ).
The two-dimensional schematic in Fig. 5 shows how the deformation gradient
$\mathbf{F}$ can be used to describe the mapping between a single crystal and
a bicrystal with a GB phase. Here we present equations for one phase
($\alpha$) and subsequently distinguish GB-phase-specific properties by the
respective superscripts. We assume that there are lattice sites, or markers,
that can be selected and tracked during an imaginary transformation of a
single crystal in Fig. 5b into a bicrystal shown in Fig. 5c. These lattice
sites, labeled as a, b, c and d define a parallelogram (parallelepiped in
three dimensions) which has different shapes for the sites located in the
single crystal and the bicrystal. These shapes, shown in Figs 5b and e, can be
described using deformation gradients $\mathbf{F}^{b}$ and
$\bar{\mathbf{F}^{\alpha}}$ that map a common mathematical reference state
shown in Fig. 5a on both parallelograms. This mathematical reference state is
used to calculate the deformation gradients only and should not be confused
with the reference state for the Burgers circuit analysis discussed earlier.
Notice that, to describe the transformation from Fig. 5 b to c, we use an
effective deformation gradient $\bar{\mathbf{F}^{\alpha}}$ which is calculated
based on the positions of the lattice sites as shown in Fig. 5 e. For three-
dimensional systems, these deformation gradients are given by(Frolov2012a, ;
Frolov2012b, )
$F^{b}=\left(\begin{array}[]{ccc}F_{11}^{b}&F_{12}^{b}&F_{13}^{b}\\\
0&F_{22}^{b}&F_{23}^{b}\\\ 0&0&F_{33}^{b}\end{array}\right)$
$\bar{F}^{\alpha}=\left(\begin{array}[]{ccc}F_{11}^{\alpha}&F_{12}^{\alpha}&\bar{F}_{13}^{\alpha}\\\
0&F_{22}^{\alpha}&\bar{F}_{23}^{\alpha}\\\
0&0&\bar{F}_{33}^{\alpha}\end{array}\right)=\left(\begin{array}[]{ccc}F_{11}^{b}&F_{12}^{b}&\left(F_{13}^{b}+B_{1}^{\alpha}\mathcal{A}^{{}^{\prime}}/V^{{}^{\prime}}\right)\\\
0&F_{22}^{b}&\left(F_{23}^{b}+B_{2}^{\alpha}\mathcal{A}^{{}^{\prime}}/V^{{}^{\prime}}\right)\\\
0&0&\left(F_{33}^{b}+B_{3}^{\alpha}\mathcal{A}^{{}^{\prime}}/V^{{}^{\prime}}\right)\end{array}\right)$
(10)
where vector $\mathbf{B}^{\alpha}$ shown in Fig. 5 d describes the change in
the position of the site b relative to its position in the single crystal. The
coordinate axes are indicated in the figure. From the formal definition of
$\bar{\mathbf{F}}^{\alpha}$ by Eq. (10) and Figs 5c and e, it is clear that
its $\bar{F}_{i3}^{\alpha}$ components depend on the size of the selected GB
region and approach bulk values when the GB area to volume ratio
$\mathcal{A}^{\prime}/V^{\prime}$ decreases. We now recognize that $ab^{*}$ is
a crossing vector and its components can be expressed as
$ab_{i}^{*}=(V/\mathcal{A})\,\bar{F}_{i3}^{\alpha}/\bar{F}_{33}^{\alpha}$, where
$i=1,2,3$. There are three excess properties associated with vector
$\mathbf{B}^{\alpha}$: two GB excess shears $[VF_{13}/F_{33}]_{N}^{\alpha}$
and $[VF_{23}/F_{33}]_{N}^{\alpha}$ and one GB excess volume
$[VF_{33}/F_{33}]_{N}^{\alpha}=[V]_{N}^{\alpha}$, which can be found from the
relation(Frolov2012a, ; Frolov2012b, )
$[VF_{i3}/F_{33}]_{N}^{\alpha}=ab_{i}^{*}-N/N^{b}\left(F_{i3}^{b}/F_{33}^{b}V^{b}\right)/\mathcal{A},\quad
i=1,2,3,$ (11)
where, as before, $N$ refers to the total number of atoms inside the region
$ab^{*}c^{*}d$ spanned by the crossing vector $ab^{*}$ and $N^{b}$ is the
number of atoms in the volume of single crystal defined by lattice sites $a$,
$b$, $c$, and $d$. For $i=3$, Eq. (11) recovers the well-known expression for
the excess GB volume, $[V]_{N}=(V-N\Omega)/\mathcal{A}$. Eq. (11) relates the three
components of the crossing vector to the excess properties of the GB.
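A minimal sketch of Eq. (11), evaluated with an identity bulk deformation gradient and illustrative inputs (variable names are ours):

```python
import numpy as np

def gb_excesses(ab_star, N, N_b, F_b, V_b, A):
    """Excess shears (i = 1, 2) and excess volume (i = 3) of one GB phase,
    Eq. (11): [V F_i3/F_33]_N = ab*_i - (N/N_b) * (F^b_i3/F^b_33) * V_b / A.
    For i = 3 this reduces to [V]_N = (V - N*Omega)/A."""
    bulk_term = (N / N_b) * (F_b[:, 2] / F_b[2, 2]) * V_b / A
    return np.asarray(ab_star, dtype=float) - bulk_term

# Illustrative inputs: identity bulk deformation gradient.
F_b = np.eye(3)
print(gb_excesses(ab_star=[0.05, 0.00, 12.40],
                  N=680, N_b=672, F_b=F_b, V_b=8000.0, A=650.0))
```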
We can now apply this equation to GB phases $\alpha$ and $\beta$ separately
and evaluate the crossing vectors $\mathbf{A^{\prime}B^{\prime}}$ and
$\mathbf{C^{\prime}D^{\prime}}$ in the Burgers circuit analysis:
$A^{\prime}B_{i}^{\prime}=[VF_{i3}/F_{33}]_{N}^{\alpha}-\left(\Omega
F_{i3}^{b,\alpha}/F_{33}^{b,\alpha}N^{\alpha}\right)/\mathcal{A},\quad
i=1,2,3,$ (12)
$C^{\prime}D_{i}^{\prime}=[VF_{i3}/F_{33}]_{N}^{\beta}-\left(\Omega
F_{i3}^{b,\beta}/F_{33}^{b,\beta}N^{\beta}\right)/\mathcal{A},\quad i=1,2,3.$
(13)
Notice that the bulk deformation gradients $F^{b,\alpha}$ and $F^{b,\beta}$
are not identical, as they depend on how the lattice sites $AB$ and $CD$ were
chosen. It is possible to choose the same bulk reference state for both GB
phases and use the $F^{\alpha}$ and $F^{\beta}$ maps to predict the locations
of the sites $B^{\prime}$ and $C^{\prime}$. In this case, the A and D corners
of the Burgers circuit can still be selected arbitrarily, but their
counterparts in the upper grain are determined by the deformation gradient
$F^{b}$ and the number of atoms in the reference bulk crystal or
$V^{\prime}/\mathcal{A}^{\prime}$. Without loss of generality we assume that
the lattice sites in both bicrystals are chosen such that they are related by
the same $F^{b}$. Then, subtracting Eqs. (12) and (13) we obtain the following
expression for the Burgers vector
$b_{i}=\Delta[VF_{i3}/F_{33}]_{N}+\left(\Omega F_{i3}^{b}/F_{33}^{b}\Delta
N\right)/\mathcal{A}+d_{i}^{sc},\quad i=1,2,3,$ (14)
where, as before, $\Delta N$ corresponds to the difference in the number of
atoms in bicrystals spanned by the crossing vectors, and the DSC vector
appears as a result of adding the lattice vectors
$\mathbf{\mathbf{B^{\prime}C^{\prime}}}$ and
$\mathbf{\mathbf{D^{\prime}A^{\prime}}}$. Eq. (14) shows that all possible
Burgers vectors form a DSC lattice with the origin shifted by a vector
$\Delta[VF_{i3}/F_{33}]_{N}+\left(\Omega F_{i3}^{b}/F_{33}^{b}\Delta
N\right)/\mathcal{A}$, whose components contain the excess volume, excess shears
and a term related to the difference in the number of atoms. We can further
reduce the second term in Eq. (14) to $\left(\Omega
F_{i3}^{b}/F_{33}^{b}\Delta N^{*}\right)/\mathcal{A}$ by subtracting out all
DSC vectors, which is equivalent to selecting one of the Burgers vectors
closest to the origin of the shifted DSC lattice, and obtain:
$b_{i}=\Delta[VF_{i3}/F_{33}]_{N}+(\Omega F_{i3}^{b}/F_{33}^{b}\Delta
N^{*})/\mathit{\mathcal{A}}+d_{i}^{sc},\quad i=1,2,3.$ (15)
Eq. (15) is a vector form of Eq. (8) derived previously for only one component
$b_{3}$. The first term in Eq. (15) represents the contribution to the Burgers
vector from the difference in excess volumes and excess shears, while the
second is related to the number of atoms required to transform one GB
phase into another. These two terms containing properties specific to the two
GB phases represent the non-DSC vector by which the origin of the DSC lattice
of all possible Burgers vectors of the junction is shifted relative to zero.
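As a sketch, Eq. (15) can be assembled as follows, assuming the differences in excess properties and $\Delta N^{*}$ have already been measured (all inputs illustrative):

```python
import numpy as np

def burgers_from_excess(delta_excess, dN_star_per_A, Omega, F_b, d_sc=None):
    """All three Burgers vector components per Eq. (15):
    b_i = Delta[V F_i3/F_33]_N + Omega*(F^b_i3/F^b_33)*(DeltaN*/A) + d_i^sc.

    delta_excess: [Delta excess shear_1, Delta excess shear_2, Delta[V]_N].
    """
    d_sc = np.zeros(3) if d_sc is None else np.asarray(d_sc, dtype=float)
    non_dsc = Omega * (F_b[:, 2] / F_b[2, 2]) * dN_star_per_A
    return np.asarray(delta_excess, dtype=float) + non_dsc + d_sc

# Illustrative: identity bulk deformation gradient, zero DSC part.
print(burgers_from_excess([0.05, 0.00, -0.10], dN_star_per_A=0.017,
                          Omega=11.81, F_b=np.eye(3)))
```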
As discussed previously,(Frolov2012a, ; Frolov2012b, ) the excess shear as an
equilibrium property is not defined for GBs that move under applied shear
stress. As an example, consider symmetric tilt GBs. When such a GB moves, one
grain transforms into the other with a different shape. Thus the deformation
gradients in the two grains are not the same. When such a boundary does not
move, $\mathbf{F}^{b}$ can be assumed to be the same in both grains and a
relation between lattice sites across the GB has to be established to
calculate the formal $\bar{\mathbf{F}}^{\alpha}$. One way to establish this
relation is by following the bulk lattice sites during the GB creation
procedure such as the $\gamma$-surface approach. For example, Pond and Vitek
used this approach to track relative displacements of the grains and
calculated the Burgers vectors of partial GB dislocations formed by identical
GB structures corresponding to different grain translation
vectors.(pond_periodic_1977_1, ; pond_periodic_1977_2, ) While this procedure
is straightforward if the boundary structures can be obtained simply by
translating and relaxing the adjacent crystals, it cannot be applied if the
adjacent GB phases are composed of different numbers of atoms.
On the other hand, even for boundaries that move by coupled motion, such as
symmetric tilt boundaries, it is straightforward to calculate the excess shear
component parallel to the tilt axis and use Eq. (15) to predict $\mathbf{b}$
along that direction. A recent experimental and modeling study demonstrated
phases of [111] symmetric and asymmetric tilt GBs in Cu that had different
translations along the tilt axis. According to Eq. (15), GB phase junctions of
these boundaries have a screw component along the [111] tilt axis. GBs with
significant grain translations along the tilt axis have been previously
reported(doi:10.1080/01418610208240038, ; PhysRevLett.70.449, ) for other
boundaries. The translations are typically on the order of half the inter-planar distance. These translations result in large excess shears and produce junctions that have a large screw component parallel to the tilt axis, such as the one studied in Ref. (meiners_observations_2020, ).
Quantification of these Burgers vectors using the described Burgers circuit
analysis and Eq. (14) allows one to make predictions about the stability of
metastable states and explain the slow kinetics of GB phase transformations
observed in Ref. (meiners_observations_2020, ).
## III Burgers vectors of GB phase junctions in $\Sigma 5$ symmetric tilt Cu boundaries
We now apply the methodology described above to analyze two specific GB phase
junctions in the $\Sigma 5(310)[001]$ and $\Sigma 5(210)[001]$ Cu
boundaries(Frolov2013, ) shown in Fig. 1. First, we calculate the vectors
$\mathbf{b}$ using the Burgers circuit construction described in Sec. II.1.
Then we predict $\mathbf{b}$ using Eqs. (8) and (14) with the GB excess
properties summarized in Table 1, and compare the values of $\mathbf{b}$
obtained by the two methods.
### III.1 Analysis of the $\Sigma 5(310)[001]$ GB
Figure 6a shows a closed circuit ABCD around the GB phase junction in the
$\Sigma 5(310)[001]$ GB. For convenience, we consider a slice parallel to the
tilt axis containing only two atomic planes. The atoms with different
coordinates normal to the page are colored in red and black. Vectors
$\mathbf{AB}$ and $\mathbf{CD}$ cross the Kite and Split Kite GB structures,
respectively. To simplify the analysis, we have chosen the lattice sites A, B,
C and D such that the vectors $\mathbf{B^{\prime}C^{\prime}}$ and
$\mathbf{D^{\prime}A^{\prime}}$ have the same length equal to
$10d_{\\{310\\}}=10(a/2)\sqrt{10}$ in the reference state, where $a=3.615$ Å
is the fcc lattice constant.(Mishin01, ) Since the two vectors have the same
magnitude and opposite signs, they cancel each other in Eq. (1) and do not
contribute to $\mathbf{b}$. The reference bicrystals are shown in
Fig. 6b and c. For these two simulation blocks, the boundary conditions are
periodic in the boundary plane and the stresses inside the bulk crystals away
from the boundary are zero. The exact simulation procedure is not important as
long as the structures of the boundaries match the ones in the two GB phase states. In reality, the structures generated at 0 K and those extracted from the simulation at T = 800 K were not identical. The high-temperature structures
contain point defects and may have a somewhat different arrangement of the
atoms. Nevertheless, the 0 K and finite temperature structures are very close.
Table 1 summarizes the properties of the different GB phases calculated at 0
K.(Frolov2013, ) In this work, we used both the 0 K and the finite temperature
structures to generate the reference bicrystals and obtained the same result
within the expected error of the calculation method.
To map the crossing vectors $\mathbf{AB}$ and $\mathbf{CD}$ in the deformed
state on the vectors $\mathbf{A^{\prime}B^{\prime}}$ and
$\mathbf{C^{\prime}D^{\prime}}$ in the reference state, we follow the lattice
planes in both crystals as indicated by the green lines shown in Fig. 6. The
exact choice of the guiding lines is not important as long as they help to
establish the relation between the lattice points A and B on different sides
of the GB. Performing a direct calculation of the components of the crossing
vectors in the reference state, we obtained for the two bicrystals
$A^{\prime}B^{\prime}_{3}=22.61$ Å, $A^{\prime}B^{\prime}_{1}=0$ Å and
$C^{\prime}D^{\prime}_{3}=22.73$ Å,
$C^{\prime}D^{\prime}_{1}=-d_{\\{260\\}}=-0.572$ Å. The non-zero
$C^{\prime}D^{\prime}_{1}$ shown in Fig. 6c indicates that in the Split Kite
structure the upper crystal is translated to the right relative to the bottom
crystal by the amount of $d_{\\{260\\}}=a/(2\sqrt{10})$. At the same time, the
Kite structure is symmetric with $A^{\prime}B^{\prime}_{1}=0$ . We can express
the normal components of the crossing vectors in terms of the bicrystal and GB
contributions.
$A^{\prime}B^{\prime}_{3}=22.61$$\,\text{\AA}=39d_{\\{260\\}}+[V]_{N}^{K}$ and
$C^{\prime}D^{\prime}_{3}=22.73$
$\text{\AA}=39d_{\\{260\\}}+[V]_{N}^{SK}+0.38d_{\\{260\\}}$, where
$d_{\\{260\\}}=a/(2\sqrt{10})$ is the shortest distance between
two atomic planes in the crystal parallel to the GB plane. The smallest DSC
vector normal to the GB plane has the length $2d_{\\{260\\}}$, as shown in Fig. 7a; as a result, even for the simple Kite structure
$\mathbf{A^{\prime}B^{\prime}}-[V]_{N}^{K}\mathbf{n}_{GB}=(0,0,A^{\prime}B^{\prime}_{3}-[V]_{N}^{K})=(0,0,39d_{\\{260\\}})$
($\mathbf{n}_{GB}$ is the unit vector normal to the GB plane) is not a DSC
vector, which is not surprising because GBs allow for grain translations
parallel to the GB plane. The non-DSC part of the crossing vector equals to
$d_{260}$ reflects that translation and can be used as a useful GB descriptor.
While all crossing vectors form a DSC lattice, the origin of this lattice is
also shifted normal to the GB plane by $d_{260}$. Notice that for this
boundary $A^{\prime}B^{\prime}_{3}-[V]_{N}^{K}$ is equal to an integer number
times the smallest normal component of DSC equal to $d_{\\{260\\}}$. The Split
Kite structure cannot be obtained by joining two perfect half crystals and
requires an insertion or removal of a fraction of atoms less than one atomic
plane. This is reflected by the $0.38d_{\\{260\\}}$ terms in the expression
for $C^{\prime}D^{\prime}_{3}$.
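These decompositions are easy to check numerically; the following sketch (ours, not part of the original work) reproduces the quoted values from $a=3.615$ Å and the Table 1 excess volumes:

```python
# Numerical check of the crossing-vector decompositions (a sketch, not the
# authors' script; a and the excess volumes are the values quoted above).
import math

a = 3.615                        # fcc Cu lattice constant, Angstrom
d260 = a / (2 * math.sqrt(10))   # ~0.572 Angstrom

AB3 = 39 * d260 + 0.316                 # Kite: 39*d_{260} + [V]_N^K
CD3 = 39 * d260 + 0.245 + 0.38 * d260   # Split Kite decomposition

print(round(AB3, 2))   # 22.61, matching the measured A'B'_3
print(round(CD3, 2))   # 22.75, within ~0.02 A of the measured 22.73
```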
Summing up the measured vectors of the circuit using Eq. (1) we obtain the
components of the Burgers vector of the GB phase junction:
$b_{1}=-0.572\,\text{\AA}=-d_{\\{260\\}}$, $b_{3}=-0.121$ Å. Since the
entire circuit was located in one plane and there were no grain translations
parallel to the tilt axis, $b_{2}=0$ Å. The negative value of $b_{3}$
indicates that the bicrystal with the Split Kite structure is effectively
thicker than the bicrystal with the Kite structure. This result may seem
counterintuitive since the Split Kite has a smaller excess volume than the
Kite phase according to Table 1. To explain this result, we express the
calculated Burgers vector in terms of DSC vectors and the excess GB
properties.
We now apply Eq. (6) to predict the Burgers vector for the junction in the
$\Sigma 5(310)[001]$ GB. The excess volumes and the numbers of atoms in the
two GB phases, Kites and Split Kites, can be found in Table 1. The GB areal
number density is expressed as a fraction of the number of atoms in one
lattice plane parallel to the GB. The excess volume of the Split Kite phase is
smaller than that of the Kites, as a result
$\Delta[V]_{N}=[V]_{N}^{K}-[V]_{N}^{SK}=0.316\,\text{Å}-0.245\,\text{Å}=0.071\,\text{Å}$
is positive, while $\Omega\Delta
N^{*}/A=\Omega(N^{K}-N^{SK})/\mathcal{A}=-0.38d_{\\{260\\}}=-0.21$ Å is
negative. Summing up these two contributions we obtain a negative value
$b_{3}=-0.14\mathring{A}$, indicating that the bicrystal with Split Kite
structure is indeed effectively thicker than the bicrystal with the Kite
structure. The value of $b_{3}$ calculated using Eq. (6) also matches the
value obtained above using the Burgers circuit construction within the
numerical accuracy. This agreement suggests that the dislocation content of
this particular GB phase junction originates entirely from the difference in
excess properties of the two GB structures, i.e., from the difference in their
excess volumes and the numbers of atoms. Indeed, the number-of-atoms term
$\Omega\Delta N^{*}/A=\Omega(N^{K}-N^{SK})/\mathcal{A}=-0.21$ Å is well
defined: during the transformation this exact number of atoms per unit area
diffused from the open surface and transformed the initial Kite phase to the
Split Kite phase, as was confirmed in Ref. (Frolov2013, ). This change in the
number of atoms can be easily evaluated by counting the total number of atoms
inside two regions on the two sides of the GB phase junction, containing the
two different GB phases with the same area. Such a calculation was performed
in the original study.(Frolov2013, ) In other simulations similar junctions
were formed by inserting a controlled number of atoms into the preexisting
parent phase.
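The arithmetic of this prediction can be reproduced in a few lines (a sketch of ours, using the quoted numbers):

```python
# Reproducing the Eq. (6) prediction for the Sigma5(310) junction
# (a sketch with the quoted numbers, not the authors' script).
dV = 0.316 - 0.245        # Delta[V]_N = [V]_N^K - [V]_N^SK, Angstrom
dN_term = -0.21           # Omega*Delta N*/A = -0.38*d_{260}, Angstrom (quoted)

b3 = dV + dN_term
print(round(b3, 2))       # -0.14, vs -0.121 from the Burgers circuit
```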
The parallel component of the disconnection arises from the relative shift of
the grains parallel to the GB plane, which is different in the two structures.
For the given junction this difference is equal to a DSC vector
$b_{1}=-d_{\\{260\\}}=-0.57$ Å. Note that $b_{1}$ is smaller than the shortest
DSC vector with the same direction, which has the length $2d_{\\{260\\}}$. In
general, the parallel relative shift for a GB structure is not constrained to
be a DSC vector. The parallel components of the crossing vectors can be
expressed as a sum of DSC vector components and the excess shears at the
boundary as described by Eq. (14). As was discussed above, symmetric tilt
boundaries move under shear in that particular direction and excess shear
becomes ill-defined. For a stationary boundary, this component of the Burgers
vector can be formally interpreted to have a contribution from the differences
in the excess shears and the numbers of atoms of the two GB phases as
described by Eq. (15).
A GB phase junction can change its dislocation content by absorbing or
ejecting GB disconnections with Burgers vectors given by vectors of the DSC
lattice. We can consider such reactions and compare the Burgers vector of the
GB phase junction obtained in MD to other possible valid vectors. The current
Burgers vector in the coordinate frame of the interface simulation is given by
$\mathbf{b^{MD}}=(-d_{\\{260\\}},0,\Delta[V]_{N}-0.38d_{\\{260\\}})$, while
the primitive DSC lattice vectors are
$\mathbf{d_{1}^{SC}}=(2d_{\\{260\\}},0,0)$,
$\mathbf{d_{2}^{SC}}=(d_{\\{260\\}},-a/2,d_{\\{260\\}})$ and
$\mathbf{d_{3}^{SC}}=(0,0,2d_{\\{260\\}})$. Fig. 7a shows the dichromatic
pattern constructed for this bicrystal as well as the primitive DSC lattice
vectors.(doi:10.1098/rsta.1979.0069, ; pond_bicrystallography_1983, ) It is
clear that all possible disconnection reactions leave the magnitude of the
current Burgers vector at best unchanged. For example, consider an absorption
of $\mathbf{d_{1}^{SC}}$, which can glide in the boundary. This dislocation
reaction changes the direction of the GB phase junction Burgers vector to
$\mathbf{b^{MD}+d_{1}^{SC}}=(d_{\\{260\\}},0,\Delta[V]_{N}-0.38d_{\\{260\\}})$,
but not its magnitude. Other disconnection reactions increase the magnitude of
the Burgers vector, and we conclude that this junction, formed when the Split Kite structure absorbed extra atoms, has the smallest Burgers vector possible for this GB.
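This minimality claim can be verified by brute force; the sketch below (ours, not the authors' script) enumerates $\mathbf{b^{MD}}+\sum_{i}n_{i}\mathbf{d_{i}^{SC}}$ over small integer coefficients and confirms that no combination has a smaller magnitude:

```python
# Brute-force minimality check for the Sigma5(310) junction (a sketch, not
# the authors' script; numbers as quoted above and in Table 1).
import itertools
import math
import numpy as np

a = 3.615                                  # fcc Cu lattice constant, Angstrom
d = a / (2 * math.sqrt(10))                # d_{260}
b_md = np.array([-d, 0.0, 0.071 - 0.38 * d])

dsc = np.array([[2 * d, 0.0,    0.0],      # d1^SC
                [d,     -a / 2, d],        # d2^SC
                [0.0,   0.0,    2 * d]])   # d3^SC

norms = [np.linalg.norm(b_md + np.dot(n, dsc))
         for n in itertools.product(range(-2, 3), repeat=3)]
print(min(norms), np.linalg.norm(b_md))    # both ~0.59 A: b^MD is already minimal
```

(The mirrored vector $\mathbf{b^{MD}+d_{1}^{SC}}$ attains the same minimal magnitude, as noted in the text.)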
The analysis of other possible Burgers vectors can be used to explain why the
GB transformation in our simulation proceeded by absorption of extra atoms and
not vacancies. We can also predict possible Burgers vectors for vacancy
induced transformations. In the absence of mechanical stresses, the primary
driving force for the GB phase transformations is the free energy difference
between the Kite and Split Kite phases. The Split Kite phase can be obtained
from Kites by inserting a number of atoms equal to a 0.38 fraction of the atoms in a
bulk plane parallel to the boundary. An insertion or removal of a complete
atomic plane (1.0 fraction) accompanied by a required grain translation
restores the original GB structure. Because of this periodicity, the Split
Kite phase can also be obtained from Kites by removing a (1-0.38)=0.62 fraction
of atoms, i.e., by inserting this amount of vacancies. In both
transformations, we obtain a junction between the same phases: Kites and Split
Kites, but the Burgers vector of the junction is different. A schematic
illustration in Fig. 8 shows how two different junctions between the same GB
phases can be formed by inserting extra atoms or vacancies into the same
parent GB phase.
Using the available GB properties, we can predict the smallest normal
component of the Burgers vector for this hypothetical vacancy-induced transformation. Since the phases obtained are identical, the excess volume
contribution to $b_{3}$ is the same,
$\Delta[V]_{N}=[V]_{N}^{K}-[V]_{N}^{SK}=0.071\,\text{Å}$. The number of
atoms term, on the other hand, has a different magnitude and sign
$\left(\Omega\Delta
N^{*}/\mathcal{A}\right)^{vacancy}=\Omega(N^{K}-N^{SK})/\mathcal{A}=(1-0.38)d_{\\{260\\}}=0.35$
Å. Summing up the two contributions we obtain
$\mathbf{b}_{3}^{vacancy}=0.62d_{\\{260\\}}+\Delta[V]_{N}=0.35+0.071=0.42\>\mathring{A}$,
which is larger than the normal component obtained in the MD simulations. One
of the smallest possible Burgers vectors with this normal component, $\mathbf{b^{MD}+d_{2}^{SC}}=(0,-a/2,\Delta[V]_{N}+0.62d_{\\{260\\}})$, has a much larger magnitude due to the non-zero component along the tilt axis.
large energetic penalty due to nucleation of this dislocation makes the
transformation by absorption of $0.62$ fraction of a plane of vacancies less
likely.
Another valid Burgers vector consistent with the vacancy absorption mechanism
is
$\mathbf{b^{MD}}+\mathbf{d_{3}^{SC}}=(-d_{\\{260\\}},0,\Delta[V]_{N}+1.62d_{\\{260\\}})$.
It has a larger normal component compared to $0.62d_{\\{260\\}}+\Delta[V]_{N}$,
but a much smaller magnitude of the total Burgers vector. Instead of absorbing
0.62 fraction of vacancies to nucleate the Split Kite phase, this mechanism
requires absorbing that fraction of vacancies plus a complete lattice plane of
vacancies. The difference between the two Burgers vectors due to atoms and
vacancies absorption is $\mathbf{d_{3}^{SC}}=(0,0,2d_{\\{260\\}})$, with
$2d_{\\{260\\}}$ corresponding to two atomic planes, not one. Thus, the presented
analysis makes a prediction about the difference in the transformation of Kite
structure into Split Kite by atom and vacancy absorption mechanisms. The
vacancy-induced transformation requires roughly three times as many point defects to be absorbed per unit of the transformed GB area as the interstitial-induced transformation. Even this smallest Burgers vector
consistent with the vacancy induced transformation mechanism has a magnitude
larger than $\mathbf{b^{MD}}$, suggesting that in the absence of mechanical
stresses such a transformation by vacancy absorption is less energetically
favorable compared to the transformation by the absorption of atoms, which we
observed in our MD simulations. When mechanical stresses are applied
additional driving forces appear that can influence the transformation.
### III.2 Analysis of the $\Sigma 5(210)[001]$ GB
$\Sigma 5(210)[001]$ is another symmetric tilt GB studied in Ref. (Frolov2013,
). This boundary shows a first-order transition between Filled-Kite and Split-
Kite phases. Generation of both GB phases requires insertion or removal of
atoms. Unlike the Kite phase, they cannot be obtained using the
$\gamma$-surface approach, i.e., by simply translating the two grains
laterally with respect to each other, parallel to the GB plane. Figure 9a
shows the slice of the structure containing two atomic planes with atoms
colored in red and black according to their position along the tilt axis. As
before, we construct a closed circuit ABCD and identify the crossing vectors
$\mathbf{A^{\prime}B^{\prime}}$ and $\mathbf{C^{\prime}D^{\prime}}$ in
reference bicrystals as shown in Fig. 9. The calculated components of these
vectors in the reference state are $A^{\prime}B^{\prime}_{3}=22.0$ Å,
$A^{\prime}B^{\prime}_{1}=0$ Å and $C^{\prime}D^{\prime}_{3}=22.36$ Å,
$C^{\prime}D^{\prime}_{1}=0$ Å. Both Filled-Kite and Split-Kite phases
required insertion or removal of atoms and we can express the calculated
components as
$A^{\prime}B^{\prime}_{3}=27d_{\\{420\\}}+[V]_{N}^{FK}-1/7d_{\\{420\\}}$ and
$C^{\prime}D^{\prime}_{3}=27d_{\\{420\\}}+[V]_{N}^{SK}+7/15d_{\\{420\\}}$,
where $d_{\\{420\\}}=a/(2\sqrt{5})=0.81$ Å is the shortest distance between two atomic planes in the crystal parallel to the GB plane.
The smallest DSC vector strictly normal to the GB plane has the length
$2d_{\\{420\\}}$, while $d_{\\{420\\}}$ corresponds to the smallest normal component of a vector of the DSC lattice, as shown in Fig. 7b.
Using Eq. (1) and the measured crossing vectors, we obtain the following
components for the Burgers vector: $b_{3}=-0.36$ Å and $b_{1}=0$ Å. Similarly
to the $\Sigma 5(310)[001]$ boundary, the entire circuit is located in the
same plane in the reference state, so the component $b_{2}$ parallel to the
tilt axis is also zero. The Burgers vector components calculated directly
using the closed circuit approach can also be interpreted in terms of the
excess properties of the Filled Kite and Split Kite phases. According to Table
1, the difference in the excess volumes is
$\Delta[V]_{N}=[V]_{N}^{FK}-[V]_{N}^{SK}=0.301\,\text{Å}-0.172\,\text{Å}=0.129\,\text{Å}$, while the difference in the number of atoms gives $\Omega\Delta N^{*}/\mathcal{A}=\Omega(N^{FK}-N^{SK})/\mathcal{A}=-(7/15+1/7)d_{\\{420\\}}=-0.60\cdot 0.81\,\text{Å}=-0.49$ Å. Adding these two terms, we obtain the normal component of the Burgers vector predicted by Eq. (6) to be $b_{3}=\Delta[V]_{N}+\Delta N^{*}\cdot\Omega/\mathcal{A}=-0.36$ Å. Since the
relative tangential translation vectors are zero for both bicrystals, both
$b_{1}$ and $b_{2}$ are zero. These numbers again match well the components
calculated using the Burgers circuit analysis.
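As before, the prediction can be reproduced directly (a sketch of ours with the Table 1 values):

```python
# Eq. (6) prediction for the Sigma5(210) junction (a sketch with Table 1
# values, not the authors' script).
import math

a = 3.615
d420 = a / (2 * math.sqrt(5))       # ~0.81 Angstrom

dV = 0.301 - 0.172                  # [V]_N^FK - [V]_N^SK
dN_term = -(7/15 + 1/7) * d420      # Omega*Delta N*/A

print(round(dV + dN_term, 2))       # -0.36, matching the circuit value
```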
Similar to the first GB phase junction, here we can also conclude that the
obtained Burgers vector is the smallest possible. Indeed, the current Burgers
vector is given by $\mathbf{b^{MD}}=(0,0,\Delta[V]_{N}-0.6d_{\\{420\\}})$,
while the primitive DSC lattice vectors are
$\mathbf{d_{1}^{SC}}=(d_{\\{420\\}},0,d_{\\{420\\}})$,
$\mathbf{d_{2}^{SC}}=(-d_{\\{420\\}},a/2,0)$,
$\mathbf{d_{3}^{SC}}=(-d_{\\{420\\}},0,d_{\\{420\\}})$. Fig. 7b shows the
dichromatic pattern constructed for this bicrystal as well as the primitive
DSC lattice vectors. Adding any of these DSC vectors to $\mathbf{b^{MD}}$
will not decrease the magnitude of the resultant Burgers vector of the GB
phase junction.
Similarly to the analysis of the $\Sigma 5(310)[001]$ GB, here we can also
consider a hypothetical transformation in which the Split-Kite phase of this
boundary grows via absorption of vacancies instead of atoms. The excess volume
component of $b_{3}$ remains again the same
$\Delta[V]_{N}=[V]_{N}^{FK}-[V]_{N}^{SK}=0.129\,\text{Å}$, while the second
contribution from atoms becomes $\left(\Omega\Delta
N^{*}/\mathcal{A}\right)^{vacancy}=\Omega(N^{FK}-N^{SK})/\mathcal{A}=(1-7/15-1/7)d_{\\{420\\}}=0.4d_{\\{420\\}}=0.32$
Å. Summing up these two contributions we obtain the smallest normal component
$b_{3}^{vacancy}=0.129+0.32=0.45\>\mathring{A}$. A possible valid Burgers
vector for such a transformation could be for example
$\mathbf{b^{MD}}+\mathbf{d_{1}^{SC}}=(d_{\\{420\\}},0,\Delta[V]_{N}+0.4d_{\\{420\\}})$,
which is also the smallest Burgers vector consistent with the vacancy induced
transformation. Notice that for this boundary the difference in the normal
components of the Burgers vector is $d_{\\{420\\}}$, which corresponds to one
atomic plane. As a result, there is no significant difference in the amount of absorbed point defects between the two mechanisms: a 0.6 fraction of a plane of atoms is absorbed in one case, and a 0.4 fraction of a plane of vacancies in the other.
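The sketch below (ours; it assumes the Table 1 values and the sign conventions used above) collects the normal components for the atom- and vacancy-mediated routes of both boundaries side by side:

```python
# Side-by-side comparison of atom- vs vacancy-mediated transformations
# (a sketch using Table 1 values; not the authors' script).
import math

a = 3.615
d260 = a / (2 * math.sqrt(10))
d420 = a / (2 * math.sqrt(5))

# Sigma5(310), Kite -> Split Kite
print(round((0.316 - 0.245) - 0.38 * d260, 2))   # atoms:     ~ -0.15 A
print(round((0.316 - 0.245) + 0.62 * d260, 2))   # vacancies: ~ +0.43 A

# Sigma5(210), Filled Kite -> Split Kite
print(round((0.301 - 0.172) - (7/15 + 1/7) * d420, 2))  # atoms:     ~ -0.36 A
print(round((0.301 - 0.172) + 0.4 * d420, 2))           # vacancies: ~ +0.45 A
```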
In MD simulations, both $\Sigma 5$ boundaries transformed to the Split Kite
phase by absorption of extra atoms, not vacancies. Our analysis indicates
that, when extra atoms are absorbed, the two contributions to the Burgers
vector from the difference in the excess volumes and the numbers of atoms have
opposite signs resulting in a smaller Burgers vector of the GB phase junction,
making this transformation more energetically favorable when no external
mechanical stresses are applied.
## IV Discussion and Conclusions
In this work, we have analyzed the dislocation content of GB phase junctions.
Like dislocations, these line defects generate long-range elastic fields and
can interact with other defects such as regular GB disconnections,
dislocations, surfaces and precipitates. During GB phase nucleation, the
elastic interaction between GB phase junctions and their strain energy
contributes to the nucleation barrier. Understanding the Burgers vectors of
these defects is necessary to describe these interactions and to quantify the
nucleation barriers during GB phase transformations. In this study, we have
described a general Burgers circuit approach that allows one to calculate
Burgers vectors of junctions formed by different GB structures composed of
different numbers of atoms. We also derived expressions that relate the
components of the Burgers vector to the differences in the properties of GB
phases, including excess volume, excess shears and the numbers of atoms
$\Delta N^{*}$ required for the GB phase transformation. We showed that,
differently from regular GB disconnections, the Burgers vectors of GB phase
junctions are not DSC vectors. While all allowed Burgers vectors of a GB phase
junction form a DSC lattice, the origin of this lattice is shifted by a non-
DSC vector determined by the differences in the mentioned GB properties and
$\Delta N^{*}$. It has been recognized by prior studies(pond_periodic_1977_1,
; pond_periodic_1977_2, ) that the difference between the grain translation
vectors creates GB dislocations when structures with different translation
vectors coexist on the same plane. Pond and Vitek simulated partial GB
dislocations formed by identical GB structures with different relative grain
translations and defined the Burgers vector of these dislocations as the
difference between their translation vectors.(pond_periodic_1977_1, ;
pond_periodic_1977_2, ) GB dislocations formed by different GB structures with
different excess properties and numbers of atoms have not been analyzed. It
has also been suggested that the difference in excess volumes of different GB
structures coexisting on the same plane contributes to the normal component of
the Burgers vector.(pond_periodic_1977_1, ; pond_periodic_1977_2, ) In this
work we have shown that, when two different GB phases are composed of
different numbers of atoms, the normal component of the Burgers vector is not
equal to the difference in the excess volumes. The difference in the numbers
of atoms required for the GB phase transformation also contributes to
$\mathbf{b}$.
We have applied this analysis to GB phase junctions modeled in the $\Sigma
5(210)[001]$ and $\Sigma 5(310)[001]$ symmetric tilt GBs in Cu. In both
boundaries, these junctions are formed between two GB phases with different
structures and different numbers of atoms. The Burgers vectors were calculated
using two separate approaches. In the first one, we used a straightforward
Burgers circuit construction, which characterizes the $\mathbf{b}$ components.
In the second approach, we used known values of excess properties of the
studied GB phases to predict the smallest components of the Burgers vectors
normal to the GB plane. The difference in the numbers of atoms was calculated
after the transformation took place. For both GB phase junctions studied, the
magnitudes of the Burgers vectors were found to be the smallest possible and
their normal components matched the ones predicted from the known GB
properties. The obtained Burgers vectors had two non-zero components and one
zero component parallel to the tilt axis. The normal component of the Burgers
vector was not equal to the difference in the excess volumes and contained a
second contribution due to the difference in the numbers of atoms $\Delta
N^{*}$ required for the GB phase transformation. For the $\Sigma 5(310)[001]$
boundary, this latter contribution was larger than the difference in the excess
volumes and even had an opposite sign. For both junctions studied, the
contribution to $\mathbf{b}$ from the difference in the numbers of atoms
$\Delta N^{*}$ is significant and cannot be neglected. In our analysis we
considered absorption or ejection of additional disconnections with Burgers
vectors dictated by the DSC lattice and concluded that these reactions cannot
further reduce the calculated $\mathbf{b}$. We also showed that some larger
predicted Burgers vectors corresponded to GB phase transformations that
proceed by absorption of vacancies. This analysis could explain why both GBs
transformed by absorbing extra atoms and not vacancies.
The multiplicity of the possible Burgers vectors of GB phase junctions formed
between the same GB phases has important implications for GB phase equilibrium
and the kinetics of GB phase transformations. In elemental fluid systems,
interfacial phases in equilibrium have the same Gibbs free energy, which means
that their excess Helmholtz free energy difference is balanced by the
$-P\Delta[V]_{N}$ term.(Gibbs, ) The latter term represents the mechanical work
per unit area done by the pressure $P$ during the transformation. This
condition is analogous to the co-existence conditions of bulk phases under
pressure. Since the excess volume difference is the only interface property
that couples to external stress, this coexistence state is unique and is
defined by the excess properties of the interfacial phases. In solid systems,
the generalized analog of the $-P\Delta[V]_{N}$ term is the work per unit area
of the Peach-Koehler (PK) force that acts on the GB phase junction. When GB phases are in
contact with particle reservoirs such as open surfaces that enable the
potential change in the number of GB atoms (or the system is closed but
$\Delta N^{*}=0$), the equilibrium is established when the difference in the
excess GB Helmholtz free energies is balanced by the PK force on the GB phase
junction. Since the PK force depends on the Burgers vector of the phase
junction, the equilibrium coexistence between the same GB phases can be
established at different temperatures and stresses depending on the Burgers
vector of the junction. Similarly, the driving force for the GB phase
transformation for a given temperature and stress is not determined by the GB
phases alone and also depends on the Burgers vector of the junction. For
example, in this work, we considered different junctions between the same GB
phases formed by the insertion of vacancies and interstitials and showed that
the normal components of their Burgers vectors have opposite signs. When the
same stress normal to the GB plane is applied the PK force will drive these
two junctions in opposite directions. Moreover, the $\Delta N^{*}$
contribution to the Burgers vector may change the PK force in such a way that
normal compression no longer favors the GB phase with the smallest excess
volume, as usually expected. These considerations demonstrate that the
dislocation nature of GB phase junctions makes GB phase transformations richer
than similar transformations at interfaces in fluid systems.
The investigation of dislocation properties of GB phase junctions has
implications for our understanding of GB phase transitions. At present, the
role of elastic interactions in the kinetics of GB phase transformations is
not well-understood. At the same time, there is growing modeling evidence
suggesting that such interactions could be important.(Frolov2013, ;
meiners_observations_2020, ) A recent experimental and modeling study suggested
that barriers associated with the motion of the GB phase junction could be
responsible for the slow kinetics of such transformations and could stabilize
metastable GB states. Modeling studies also showed that nucleation at surface
triple junctions is much more effective than homogeneous nucleation even when
sources of atoms are not required.(meiners_observations_2020, ) Nucleation
models that incorporate elastic interactions have been recently developed for
regular GB disconnections to describe GB migration and interactions with
triple junctions.(han_grain-boundary_2018, ; thomas_reconciling_2017, ;
Thomas8756, ) The present analysis suggests that similar nucleation models
should be developed for GB phase transformations to gain further insight into
their energetics and kinetics.
## Acknowledgment
This work was performed under the auspices of the U.S. Department of Energy
(DOE) by Lawrence Livermore National Laboratory under contract DE-
AC52-07NA27344. T.F. was funded by the Laboratory Directed Research and
Development Program at Lawrence Livermore National Laboratory under Project
Tracking Code number 19-ERD-026. DLM was funded by the U.S. Department of
Energy (DOE), Office of Science, Basic Energy Sciences (BES), Materials
Science and Engineering Division (MSE). Sandia National Laboratories is a
multi-mission laboratory managed and operated by National Technology &
Engineering Solutions of Sandia, LLC, a wholly owned subsidiary of Honeywell
International Inc., for the U.S. Department of Energy’s National Nuclear
Security Administration under contract DE-NA0003525. This paper describes
objective technical results and analysis. Any subjective views or opinions
that might be expressed in the paper do not necessarily represent the views of
the U.S. Department of Energy or the United States Government. M.A.
acknowledges support from the Office of Naval Research under grant number
N0014-19-1-2376. The authors are grateful to Yuri Mishin and David Olmsted for
valuable discussions. T.F. is grateful to Ian Winter for stimulating
discussions.
## References
* (1) P. R. Cantwell, M. Tang, S. J. Dillon, J. Luo, G. S. Rohrer, and M. P. Harmer, Acta Materialia 62, 1 (2014).
* (2) T. Frolov, D. L. Olmsted, M. Asta, and Y. Mishin, Nat. Commun. 4, 1899 (2013).
* (3) J. W. Gibbs, The Scientific Papers of J. Willard Gibbs, volume 1, Longmans-Green, London, 1906.
* (4) T. Frolov and Y. Mishin, J. Chem. Phys. 143, 044706 (2015).
* (5) R. Freitas, T. Frolov, and M. Asta, Phys. Rev. B 95, 155444 (2017).
* (6) S. V. Divinski, H. Edelhoff, and S. Prokofjev, Phys. Rev. B 85, 144104 (2012).
* (7) K. L. Merkle and D. J. Smith, Phys. Rev. Lett. 59, 2887 (1987).
* (8) A. R. Krause, P. R. Cantwell, C. J. Marvel, C. Compson, J. M. Rickman, and M. P. Harmer, Journal of the American Ceramic Society 102, 778 (2019).
* (9) J. Rickman and J. Luo, Curr. Opin. Solid State Mater. Sci. 20, 225 (2016).
* (10) E. Rabkin, C. Minkwitz, C. Herzig, and L. Klinger, Philosophical Magazine Letters 79, 409 (1999).
* (11) G. S. Rohrer, Curr. Opin. Solid State Mater. Sci. 20, 231 (2016).
* (12) T. J. Rupert, Current Opinion in Solid State and Materials Science 20, 257 (2016).
* (13) S. J. Dillon, M. Tang, W. C. Carter, and M. P. Harmer, Acta Mater. 55, 6208 (2007).
* (14) C. J. O’Brien, C. M. Barr, P. M. Price, K. Hattar, and S. M. Foiles, Journal of Materials Science 53, 2911 (2018).
* (15) F. Abdeljawad, P. Lu, N. Argibay, B. G. Clark, B. L. Boyce, and S. M. Foiles, Acta Materialia 126, 528 (2017).
* (16) S. Rajeshwari K., S. Sankaran, K. C. Hari Kumar, H. Rosner, M. Peterlechner, V. A. Esin, S. Divinski, and G. Wilde, Acta Materialia 195, 501 (2020).
* (17) M. Glienke, M. Vaidya, K. Gururaj, L. Daum, B. Tas, L. Rogal, K. G. Pradeep, S. V. Divinski, and G. Wilde, Acta Materialia 195, 304 (2020).
* (18) J. Hirth, R. Pond, R. Hoagland, X.-Y. Liu, and J. Wang, Progress in Materials Science 58, 749 (2013).
* (19) J. P. Hirth and R. C. Pond, Acta Mater. 44, 4749 (1996).
* (20) D. Medlin, K. Hattar, J. Zimmerman, F. Abdeljawad, and S. Foiles, Acta Materialia 124, 383 (2017).
* (21) D. L. Medlin, D. Cohen, and R. C. Pond, Philosophical Magazine Letters 83, 223 (2003).
* (22) D. L. Medlin, D. Cohen, R. C. Pond, A. Serra, J. A. Brown, and Y. Mishin, Microscopy and Microanalysis 12, 888 (2006).
* (23) R. C. Pond and S. Celotto, Int. Mater. Rev. 48, 225 (2003).
* (24) J. P. Hirth, J. Wang, and C. N. Tome, Progress in Materials Science 83, 417 (2016).
* (25) A. Rajabzadeh, F. Mompiou, S. Lartigue-Korinek, N. Combe, M. Legros, and D. A. Molodov, Acta Materialia 77, 223 (2014).
* (26) Q. Zhu, G. Cao, J. Wang, C. Deng, J. Li, Z. Zhang, and S. X. Mao, Nature Communications 10, 156 (2019).
* (27) T. Meiners, T. Frolov, R. E. Rudd, G. Dehm, and C. H. Liebscher, Nature 579, 375 (2020).
* (28) P. W. Tasker and D. M. Duffy, Philos. Mag. A 47, L45 (1983).
* (29) D. M. Duffy and P. W. Tasker, Philos. Mag. A 53, 113 (1986).
* (30) D. M. Duffy and P. W. Tasker, J. Am. Ceram. Soc 67, 176 (1984).
* (31) S. R. Phillpot and J. M. Rickman, The Journal of Chemical Physics 97, 2651 (1992).
* (32) S. R. Phillpot, Phys. Rev. B 49, 7639 (1994).
* (33) S. von Alfthan, P. D. Haynes, K. Kashi, and A. P. Sutton, Phys. Rev. Lett. 96, 055505 (2006).
* (34) S. von Alfthan, K. Kaski, and A. P. Sutton, Phys. Rev. B 76, 245317 (2007).
* (35) A. L. S. Chua, N. A. Benedek, L. Chen, M. W. Finnis, and A. P. Sutton, Nat Mater 9, 418 (2010).
* (36) W. Yu and M. Demkowicz, Journal of Materials Science 50, 4047 (2015).
* (37) J. W. Cahn, Thermodynamics of solid and fluid surfaces, in Interface Segregation, edited by W. C. Johnson and J. M. Blackely, chapter 1, page 3, American Society of Metals, Metals Park, OH, 1979.
* (38) T. Frolov and Y. Mishin, Phys. Rev. B 85, 224106 (2012).
* (39) T. Frolov and Y. Mishin, Phys. Rev. B 85, 224107 (2012).
* (40) J. P. Hirth and J. Lothe, Theory of Dislocations, Wiley, New York, 2 edition, 1982.
* (41) In atomistic simulations this is always possible at least in principle. The reference GB phases can be generated separately or carved out from the deformed state and relaxed in a proper way so that the reference crossing vectors could be calculated. In experiment, the analysis cannot be completed based on a single image of the deformed state. However, GB structures sufficiently far away from the junction can be approximated as reference states. In general, a full three-dimensional structure is necessary to determine all three components of $\mathbf{b}$.
* (42) J. Han, V. Vitek, and D. J. Srolovitz, Acta Mater. 104, 259 (2016).
* (43) Q. Zhu, A. Samanta, B. Li, R. E. Rudd, and T. Frolov, Nat. Commun. 9, 467 (2018).
* (44) A. D. Banadaki, M. A. Tschopp, and S. Patala, Computational Materials Science 155, 466 (2018).
* (45) B. Gao, P. Gao, S. Lu, J. Lv, Y. Wang, and Y. Ma, Science Bulletin 64, 301 (2019).
* (46) C. Yang, M. Zhang, and L. Qi, Computational Materials Science 184, 109812 (2020).
* (47) P. Y. Robin, Am. Miner 59, 1286 (1974).
* (48) R. Dingreville and J. Qu, Journal of the Mechanics and Physics of Solids 56, 1944 (2008).
* (49) F. C. Larche and J. W. Cahn, Acta Metall. 26, 1579 (1978).
* (50) W. W. Mullins and R. F. Sekerka, The Journal of Chemical Physics 82, 5192 (1985).
* (51) R. C. Pond, V. Vitek, and P. B. Hirsch, Proceedings of the Royal Society of London. A. Mathematical and Physical Sciences 357, 453 (1977).
* (52) R. C. Pond, Proceedings of the Royal Society of London. Series A, Mathematical and Physical Sciences 357, 471 (1977).
* (53) G. H. Campbell, M. Kumar, W. E. King, J. Belak, J. A. Moriarty, and S. M. Foiles, Philosophical Magazine A 82, 1573 (2002).
* (54) G. H. Campbell, S. M. Foiles, P. Gumbsch, M. Rühle, and W. E. King, Phys. Rev. Lett. 70, 449 (1993).
* (55) Y. Mishin, M. J. Mehl, D. A. Papaconstantopoulos, A. F. Voter, and J. D. Kress, Phys. Rev. B 63, 224106 (2001).
* (56) R. C. Pond, W. Bollmann, and F. C. Frank, Philosophical Transactions of the Royal Society of London. Series A, Mathematical and Physical Sciences 292, 449 (1979).
* (57) R. C. Pond and D. S. Vlachavas, Proceedings of the Royal Society of London. Series A, Mathematical and Physical Sciences 386, 95 (1983).
* (58) J. Han, S. L. Thomas, and D. J. Srolovitz, Progress in Materials Science 98, 386 (2018).
* (59) S. L. Thomas, K. Chen, J. Han, P. K. Purohit, and D. J. Srolovitz, Nature Communications 8, 1 (2017).
* (60) S. L. Thomas, C. Wei, J. Han, Y. Xiang, and D. J. Srolovitz, Proceedings of the National Academy of Sciences 116, 8756 (2019).
Figure 1: Molecular dynamics simulations of structural transformations in a) $\Sigma 5(210)[001]$ and b) $\Sigma 5(310)[001]$ symmetric tilt GBs in Cu at T = 800 K from Ref. (Frolov2013, ). a) GB phase junction formed by Filled Kites and Split Kites GB phases. b) GB phase junction formed by Kites and Split Kites GB phases. The insets show zoomed-in views of the corresponding GB structures computed at 0 K.

Figure 2: Closure failure. a) Bicrystals with different GB structures, shown in blue and orange, have different dimensions in the reference state. b) An attempt to form a GB phase junction using the reference bicrystals results in a closure failure. The bulk lattice planes cannot be joined because they generally mismatch. c) GB phase junction is formed by elastically deforming both bicrystals so the lattice planes in the bulk can be connected. Both GB structures are elastically distorted compared to a) and b). Here we assume that the system has a finite size. In an infinitely large system, the blue and orange GB structures would converge to their undistorted dimensions, shown in a) and b), infinitely far away from the junction. In this construction, the magnitude of the Burgers vector of the junction depends on the sizes and shapes of the reference bicrystals shown in a). In general, different Burgers vectors can be obtained between the same GB phases by changing the bicrystals' original dimensions.

Figure 3: Burgers circuit construction to calculate the Burgers vector of a GB phase junction. a) A closed circuit is constructed around the GB phase junction. b) The vectors crossing the different GB structures are measured in the reference state. c) The closure failure in the reference state gives the Burgers vector of the GB phase junction. The unprimed and primed letters represent equivalent lattice sites in the deformed and reference states, respectively. The black circles represent lattice sites.

Figure 4: a) A general Burgers circuit ABCD around a GB phase junction in the deformed state. $\mathbf{AB}$ and $\mathbf{CD}$ are the crossing vectors of the GB phases $\alpha$ and $\beta$, respectively. $\mathbf{BC}$ and $\mathbf{DA}$ are the lattice vectors of the upper and lower grains, respectively. Their sum in the reference state is a DSC lattice vector. $N^{\alpha}$ and $N^{\beta}$ are the numbers of atoms inside the regions spanned by the crossing vectors. In this case the difference $N^{\beta}-N^{\alpha}$ depends on the choice of the corners of the circuit. b) A particular choice of the circuit around the same junction, with the lattice sites B, C and D, A located on the same lattice planes of the upper and lower crystals, respectively. These planes run parallel to the GB plane and are indicated by the dashed lines. They are elastically deformed due to the presence of the dislocation at the GB phase junction. The quantity $\Delta N^{\prime}/A=(N^{\prime\beta}-N^{\prime\alpha})/A$ is the defect absorption capacity of this junction: it represents the number of point defects per unit area which is absorbed or ejected when the junction moves along the GB.

Figure 5: Two-dimensional schematic of a mapping of a region of single-crystalline material to a bicrystal containing a GB (reproduced from Ref. (Frolov2012a, )). a) Reference state used to calculate the deformation gradient. b) Actual, deformed state of the single crystal. c) Region containing the GB obtained from the single-crystalline region. d) Superimposed single-crystal and bicrystal regions showing the displacement vector $\mathbf{B}$.
The open circles represent lattice sites labeled a through d, with the prime indicating the reference state and the asterisk indicating the bicrystal. The parallelogram defined by the vertices $a$, $b$, $c^{*}$ and $d^{*}$ is shown in e). Its mapping on the reference state in a) defines the deformation gradient $\bar{\mathbf{F}}^{\alpha}$ producing the bicrystal region with a given GB phase $\alpha$.

Figure 6: Calculation of the Burgers vector $\mathbf{b}$ of the phase junction in the $\Sigma 5(310)[001]$ GB using a closed circuit ABCD. a) Deformed state containing the GB phase junction. The Kite structure is on the left, while the Split Kite structure is on the right. For convenience, both the $\mathbf{BC}$ and $\mathbf{DA}$ vectors are chosen to lie in (310) planes and have the same length in the reference state. By this choice, their contribution to $\mathbf{b}$ is zero. b) and c) show bicrystals with Kite and Split Kite phases in the reference state. To map the lattice sites $A$, $B$, $C$ and $D$ from the deformed state onto their positions $A^{\prime}$, $B^{\prime}$, $C^{\prime}$ and $D^{\prime}$ in the reference state, we follow (100) planes marked by green lines. In the Split Kite structure, the lattice points $C^{\prime}$ and $D^{\prime}$ are offset by $d_{\\{260\\}}$ parallel to the interface, which is indicated by two vertical black lines. d) The Burgers vector $\mathbf{b}$ is equal to the sum $\mathbf{A^{\prime}B^{\prime}}+\mathbf{C^{\prime}D^{\prime}}$.

Figure 7: The dichromatic patterns for bicrystals with a) $\Sigma 5(310)[001]$ and b) $\Sigma 5(210)[001]$ GBs. The filled and open symbols distinguish the lattice sites belonging to the two different grains. The lattice sites represented by diamonds are shifted relative to the sites represented by circles by $a/2$ normal to the plane of the figure. CSL in-plane edges are a) $a/2<310>=a\sqrt{10}/2$ and b) $a<210>=a\sqrt{5}$, where $a$ is the lattice parameter of fcc Cu (Mishin01, ). $d_{\\{260\\}}=a/(2\sqrt{10})$ and $d_{\\{420\\}}=a/(2\sqrt{5})$ correspond to the distances between atomic planes inside the crystals along the directions normal to the planes of the boundaries. Black arrows indicate the vectors of the DSC lattice: a) $\mathbf{d_{1}^{SC}}=(a/\sqrt{10},0,0)$, $\mathbf{d_{2}^{SC}}=(a/(2\sqrt{10}),-a/2,a/(2\sqrt{10}))$, $\mathbf{d_{3}^{SC}}=(0,0,a/\sqrt{10})$ and b) $\mathbf{d_{1}^{SC}}=(a/(2\sqrt{5}),0,a/(2\sqrt{5}))$, $\mathbf{d_{2}^{SC}}=(-a/(2\sqrt{5}),a/2,0)$, $\mathbf{d_{3}^{SC}}=(-a/(2\sqrt{5}),0,a/(2\sqrt{5}))$.

Figure 8: Schematic illustration of two GB phase junctions with different Burgers vectors formed by the same GB phases. For clarity, the upper and lower grains appear to have the same orientation in this projection. The gray and orange lattice sites represent the different core structures. The purple lattice sites indicate the GB phase junctions. The black rectangles indicate the equivalent volumes on the two sides of the junctions; they are bound by the same lattice planes indicated by the red lines. a) GB phase 2, mimicking the Split Kite phase, is formed by adding extra atoms to the GB phase 1 (Kite phase). There are two additional atoms in the region on the right, which corresponds to a +0.5 fraction of a bulk plane. b) GB phase 2 is formed by adding the same fraction of vacancies to the GB phase 1. While the GB phases are the same in a) and b), the junctions are different.
The defect absorption capacities $\Delta N^{\prime}/A$, given by the difference in the numbers of atoms in the equivalent volumes per unit area, are also different for the two junctions.

Figure 9: Calculation of the Burgers vector $\mathbf{b}$ of the phase junction in the $\Sigma 5(210)[001]$ GB using a closed circuit ABCD. a) Deformed state containing the GB phase junction. The Filled Kite structure is on the left, while the Split Kite structure is on the right. For convenience, the $\mathbf{BC}$ and $\mathbf{DA}$ vectors are chosen to lie in (210) planes and have the same length in the reference state. By this choice, their contribution to $\mathbf{b}$ is zero. b) and c) show bicrystals with Filled Kite and Split Kite phases in the reference state. To map the lattice sites $A$, $B$, $C$ and $D$ from the deformed state onto their positions $A^{\prime}$, $B^{\prime}$, $C^{\prime}$ and $D^{\prime}$ in the reference state, we followed lattice planes indicated by the green lines. d) The Burgers vector $\mathbf{b}$ is equal to the sum $\mathbf{A^{\prime}B^{\prime}}+\mathbf{C^{\prime}D^{\prime}}$.

Structure | $\Delta N^{*}$ relative to Kite phase | $[U]_{N}$, J/m$^{2}$ | $[V]_{N}$, Å
---|---|---|---
$\Sigma 5(310)[001]$ | | |
Kites (0 K) | $0$ | 0.9047 | 0.316
Split kites (0 K) | $2/5$ | 0.911 | 0.233
Split kites (MD) | $0.37$ | 0.920 | 0.245
$\Sigma 5(210)[001]$ | | |
Kites (0 K) | $0$ | 0.951 | 0.322
Split kites (0 K) | $7/15$ | 0.936 | 0.172
Split kites (MD) | $0.46$ | 0.98 | 0.23
Filled kites (0 K) | $6/7$ | 0.953 | 0.301
Table 1: Excess properties of different GB phases calculated in Ref.
(Frolov2013, ) including numbers of atoms $\Delta N^{*}$ relative to Kite
phase expressed as a fraction of atoms in a bulk plane parallel to the GB,
excess energy and volume. The $\Sigma 5(310)$ and $\Sigma 5(210)$ GBs in Cu were modeled with the EAM potential (Mishin01, ). The energies indicate the ground
states at 0 K. The zero fractions of a plane for Kite structures indicate that
these GB structures can be created by joining two perfect half-crystals, while
the non-zero fractions indicate that extra atoms have to be inserted or
removed to generate the other GB structures. The fractions of the inserted
atoms are calculated relative to the number of atoms in one atomic plane
parallel to the GB.
# Telescopers for differential forms with one parameter†

† S. Chen was
partially supported by the NSFC grants 11871067, 11688101, the Fund of the
Youth Innovation Promotion Association, CAS, and the National Key Research and
Development Project 2020YFA0712300, R. Feng was partially supported by the
NSFC grants 11771433, 11688101, Beijing Natural Science Foundation under Grant
Z190004, and the National Key Research and Development Project 2020YFA0712300.
Shaoshi Chen1, Ruyong Feng1, Ziming Li1,
Michael F. Singer2, Stephen Watt3
1KLMM, AMSS, Chinese Academy of Sciences and
School of Mathematics, University of Chinese Academy of Sciences,
Beijing, 100190, China
2Department of Mathematics, North Carolina State University,
Raleigh, 27695-8205, USA
3SCG, Faculty of Mathematics, University of Waterloo,
Ontario, N2L3G1, Canada
{schen<EMAIL_ADDRESS><EMAIL_ADDRESS>
<EMAIL_ADDRESS><EMAIL_ADDRESS>
###### Abstract
Telescopers for a function are linear differential (resp. difference)
operators that annihilate the definite integral (resp. definite sum) of this
function. They play a key role in Wilf-Zeilberger theory and algorithms for
computing them have been extensively studied in the past thirty years. In this
paper, we introduce the notion of telescopers for differential forms with
$D$-finite function coefficients. These telescopers appear in several areas of
mathematics, for instance parametrized differential Galois theory and mirror
symmetry. We give a necessary and sufficient condition for the existence of
telescopers for a differential form and describe a method to compute them if
they exist. Algorithms for verifying this condition are also given.
## 1 Introduction
In the Wilf-Zeilberger theory, telescopers usually refer to the operators in
the output of the method of creative telescoping, which are linear
differential (resp. difference) operators that annihilate the definite integrals (resp. the definite sums) of the input functions. Telescopers date back at least to the work of Euler [17] and have found many applications in various areas of mathematics such as combinatorics, number theory, and knot theory (see Section 7 of [19] for details). In particular, telescopers for a function are often used to prove identities involving this function or even to obtain a simpler expression for the definite integral or sum of this function. As a clever and algorithmic process for constructing telescopers, creative telescoping first appeared as a term in the essay of van der Poorten on Apéry's proof of the irrationality of
$\zeta(3)$ [30]. However, it was Zeilberger and his collaborators [3, 28, 35,
36, 38] in the early 1990s who equipped creative telescoping with a concrete
meaning and formulated it as an algorithmic tool. Since then, algorithms for
creative telescoping have been extensively studied. Based on the techniques
used in the algorithms, the existing algorithms are divided into four
generations, see [13] for the details. Most recent algorithms are called
reduction-based algorithms which were first introduced by Bostan et al. in [6]
and further developed in [7, 14, 15, 8] etc. The termination of these
algorithms relies on the existence of telescopers. The question for which
input functions the algorithms will terminate has been answered in [37, 1, 2,
16, 10] etc. for several classes of functions such as rational functions and hypergeometric functions. The algorithmic framework for creative
telescoping is now called the Wilf-Zeilberger theory.
Most algorithms for creative telescoping focus on the case of one bivariate function as input. There are only a few algorithms which deal with the multivariate case (see [11, 9, 20, 12] etc.). It is still a challenge to
develop the multivariate analogue of the existing algorithms (see Section 5 of
[13]). In the language of differential forms (with $m$ variables and one
parameter), the results in [11] and [20] dealt with the cases of differential
1-forms and differential $m$-forms respectively. On the other hand, in the
applications to other domains such as mirror symmetry (see [22, 25, 26]), one
needs to deal with the case of differential $p$-forms with $1\leq p\leq m$.
Below is an example.
###### Example 1
Consider the following one-parameter family of quintic polynomials
$W(t)=\frac{1}{5}(x_{1}^{5}+x_{2}^{5}+x_{3}^{5}+x_{4}^{5}+x_{5}^{5})-tx_{1}x_{2}x_{3}x_{4}x_{5}$
where $t$ is a parameter. Set
$\omega=\sum_{i=1}^{5}\frac{(-1)^{i-1}x_{i}}{W(t)}{\rm
d}x_{1}\wedge\cdots\wedge\widehat{{\rm d}x_{i}}\wedge\cdots\wedge{\rm
d}x_{5}.$
To obtain the Picard-Fuchs equation for the mirror quintic, geometers want to compute a fourth-order linear differential operator $L$ in $t$ and
${\partial}_{t}$ such that $L(\omega)={\rm d}\eta$ for some differential
3-form $\eta$. Here one has that
$L=(1-t^{5})\frac{\partial^{4}}{\partial
t^{4}}-10t^{4}\frac{\partial^{3}}{\partial
t^{3}}-25t^{3}\frac{\partial^{2}}{\partial
t^{2}}-15t^{2}\frac{\partial}{\partial t}-1.$
Set $\theta_{t}=t\partial/\partial t$. Then
$\tilde{L}=-\frac{1}{5^{4}}L\frac{1}{t}=\theta_{t}^{4}-5t(5\theta_{t}+1)(5\theta_{t}+2)(5\theta_{t}+3)(5\theta_{t}+4)$
and the equation $\tilde{L}(y)=0$ is the required Picard-Fuchs equation.
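As a quick sanity check (ours, not part of the paper's derivation), one can verify that the well-known holomorphic period of the mirror quintic, $y_{0}(t)=\sum_{n\geq 0}a_{n}t^{n}$ with $a_{n}=(5n)!/(n!)^{5}$, satisfies $\tilde{L}(y_{0})=0$: applying $\theta_{t}^{4}$ and $5t(5\theta_{t}+1)\cdots(5\theta_{t}+4)$ to the series yields the coefficient recurrence $n^{4}a_{n}=5(5n-1)(5n-2)(5n-3)(5n-4)a_{n-1}$, which the following sketch checks for small $n$:

```python
# Term-by-term check (ours) that y0 = sum_n (5n)!/(n!)^5 t^n satisfies
# theta^4 y = 5t(5theta+1)(5theta+2)(5theta+3)(5theta+4) y via the
# coefficient recurrence n^4 a_n = 5(5n-1)(5n-2)(5n-3)(5n-4) a_{n-1}.
from math import factorial

def a(n):
    return factorial(5 * n) // factorial(n) ** 5   # exact integer

for n in range(1, 8):
    lhs = n ** 4 * a(n)
    rhs = 5 * (5 * n - 1) * (5 * n - 2) * (5 * n - 3) * (5 * n - 4) * a(n - 1)
    assert lhs == rhs
print("Picard-Fuchs recurrence verified for n = 1..7")
```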
We call the operator $L$ appearing in the above example a telescoper for the
differential form $\omega$ (see Definition 4). In this paper, we study the
telescopers for differential forms with $D$-finite function coefficients.
Instead of the geometric method used in [22, 25, 26], we provide an algebraic
treatment. We give a necessary and sufficient condition guaranteeing the
existence of telescopers and describe a method to compute them if they exist.
Meanwhile, we also present algorithms to verify this condition.
The rest of this paper is organized as follows. In Section 2, we recall
differential forms with $D$-finite function coefficients and introduce the
notion of telescopers for differential forms. In Section 3, we give a
necessary and sufficient condition for the existence of telescopers, which can be considered as a parametrized version of the Poincaré lemma on differential
manifolds. In Section 4, we give two algorithms for verifying the condition
presented in Section 3.
Notations: The following notations will be frequently used throughout this
paper.
${\partial}_{t}$: | the usual derivation ${\partial}/{\partial}t$ with respect to $t$
---|---
${\partial}_{x_{i}}$: | the usual derivation ${\partial}/{\partial}x_{i}$ with respect to $x_{i}$
${\bf x}$: | $\\{x_{1},\cdots,x_{n}\\}$
${\bm{\partial}}_{\bf x}$: | $\\{{\partial}_{x_{1}},\cdots,{\partial}_{x_{n}}\\}$
The following formulas will also be frequently used:

${\partial}_{x}^{\mu}x^{\nu}=\begin{cases}\nu(\nu-1)\cdots(\nu-\mu+1)x^{\nu-\mu}+*{\partial}_{x},&\nu\geq\mu\\\ *{\partial}_{x},&\nu<\mu\end{cases}$ (1)

$x^{\mu}{\partial}_{x}^{\nu}=\begin{cases}(-1)^{\nu}\mu(\mu-1)\cdots(\mu-\nu+1)x^{\mu-\nu}+{\partial}_{x}*,&\mu\geq\nu\\\ {\partial}_{x}*,&\mu<\nu\end{cases}$ (2)

where $*$ denotes some element of $k\langle x,{\partial}_{x}\rangle$ (possibly different at each occurrence).
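A small spot-check of formula (1) (ours, using sympy): since the remainder term $*{\partial}_{x}$ annihilates constants, applying both sides to the constant function $1$ must give the falling-factorial coefficient times $x^{\nu-\mu}$:

```python
# Spot-check (ours) of formula (1) applied to the constant function 1:
# the remainder *partial_x kills constants, so d^mu/dx^mu (x^nu) must equal
# nu(nu-1)...(nu-mu+1) x^(nu-mu) for nu >= mu, and 0 otherwise.
import math
import sympy as sp

x = sp.symbols('x')
for nu in range(6):
    for mu in range(6):
        lhs = sp.diff(x**nu, x, mu)
        falling = math.prod(nu - i for i in range(mu))   # empty product = 1
        rhs = falling * x**(nu - mu) if nu >= mu else 0
        assert sp.simplify(lhs - rhs) == 0
print("formula (1) checked for 0 <= mu, nu < 6")
```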
## 2 $D$-finite elements and differential forms
Throughout this paper, let $k$ be an algebraically closed field of
characteristic zero and let $K$ be the differential field
$k(t,x_{1},\cdots,x_{n})$ with the derivations
${\partial}_{t},{\partial}_{x_{1}},\cdots,{\partial}_{x_{n}}$. Let
${\mathfrak{D}}=K\langle{\partial}_{t},{\bm{\partial}}_{\bf x}\rangle$ be the
ring of linear differential operators with coefficients in $K$. For
$S\subset\\{t,{\bf x},{\partial}_{t},{\bm{\partial}}_{\bf x}\\}$, denote by
$k\langle S\rangle$ the subalgebra over $k$ of ${\mathfrak{D}}$ generated by
$S$. For brevity, we denote $k\langle t,{\bf x},{\partial}_{t},{\partial}_{\bf
x}\rangle$ by ${\mathfrak{W}}$. Let ${\cal U}$ be the universal differential
extension of $K$ in which every algebraic differential equation having a
solution in an extension of ${\cal U}$ has a solution (see page 133 of [18]
for a more precise description).
###### Definition 2
An element $f\in{\cal U}$ is said to be $D$-finite over $K$ if for every
$\delta\in\\{{\partial}_{t},{\partial}_{x_{1}},\cdots,{\partial}_{x_{n}}\\}$,
there is a nonzero operator $L_{\delta}\in K\langle\delta\rangle$ such that
$L_{\delta}(f)=0$.
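For a toy illustration of Definition 2 (ours, not from the paper), take $f=e^{x_{1}t}$; it is annihilated by the first-order operators ${\partial}_{x_{1}}-t$ and ${\partial}_{t}-x_{1}$, each lying in $K\langle\delta\rangle$ for the corresponding $\delta$:

```python
# Toy illustration (ours) of Definition 2: f = exp(x1*t) is D-finite, with
# annihilating operators partial_x1 - t and partial_t - x1.
import sympy as sp

t, x1 = sp.symbols('t x1')
f = sp.exp(x1 * t)

assert sp.simplify(sp.diff(f, x1) - t * f) == 0   # (partial_x1 - t)(f) = 0
assert sp.simplify(sp.diff(f, t) - x1 * f) == 0   # (partial_t - x1)(f) = 0
```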
Denote by $R$ the ring of $D$-finite elements over $K$, and by ${\cal M}$ a
free $R$-module of rank $m$ with base
$\\{{\mathfrak{a}}_{1},\cdots,{\mathfrak{a}}_{m}\\}$. Define a map
${\mathfrak{D}}\times{\cal M}\rightarrow{\cal M}$ given by
$\left(L,\sum_{i=1}^{m}f_{i}{\mathfrak{a}}_{i}\right)\rightarrow
L\left(\sum_{i=1}^{m}f_{i}{\mathfrak{a}}_{i}\right):=\sum_{i=1}^{m}L(f_{i}){\mathfrak{a}}_{i}.$
This map endows ${\cal M}$ with a left ${\mathfrak{D}}$-module structure. Let
$\bigwedge({\cal M})=\bigoplus_{i=0}^{m}\bigwedge\nolimits^{i}({\cal M})$
be the exterior algebra of ${\cal M}$, where $\bigwedge^{i}({\cal M})$ denotes
the $i$-th homogeneous part of $\bigwedge({\cal M})$ as a graded $R$-algebra.
We call an element in $\bigwedge^{i}({\cal M})$ an $i$-form. $\bigwedge({\cal
M})$ is also a left ${\mathfrak{D}}$-module. Let ${\rm d}:R\rightarrow{\cal
M}$ be a map defined as
${\rm
d}f={\partial}_{x_{1}}(f){\mathfrak{a}}_{1}+\cdots+{\partial}_{x_{m}}(f){\mathfrak{a}}_{m}$
for any $f\in R$. Then ${\rm d}$ is a derivation over $k$. Note that for each
$i=1,\cdots,m$, ${\rm d}x_{i}={\mathfrak{a}}_{i}$. Hence in the rest of this
paper we shall use $\\{{\rm d}x_{1},\cdots,{\rm d}x_{m}\\}$ instead of
$\\{{\mathfrak{a}}_{1},\cdots,{\mathfrak{a}}_{m}\\}$. The map ${\rm d}$ can be
extended to a derivation on $\bigwedge({\cal M})$ which is defined recursively
as
${\rm d}(\omega_{1}\wedge\omega_{2})={\rm d}\omega_{1}\wedge\omega_{2}+(-1)^{i}\omega_{1}\wedge{\rm d}\omega_{2}$
for any $\omega_{1}\in\bigwedge^{i}({\cal M})$ and
$\omega_{2}\in\bigwedge^{j}({\cal M})$. For detailed definitions on exterior
algebra and differential forms, we refer the readers to Chapter 19 of [21] and
Chapter 1 of [34] respectively. As the usual differential forms, we introduce
the following definition.
###### Definition 3
Let $\omega\in\bigwedge({\cal M})$ be a form.
* $(1)$
$\omega$ is said to be closed if ${\rm d}\omega=0$, and exact if there is
$\eta\in\bigwedge({\cal M})$ such that $\omega={\rm d}\eta$.
* $(2)$
$\omega$ is said to be ${\partial}_{t}$-closed (${\partial}_{t}$-exact) if
there is a nonzero $L\in k(t)\langle{\partial}_{t}\rangle$ such that
$L(\omega)$ is closed (exact).
###### Definition 4
Assume that $\omega\in\bigwedge({\cal M})$. A nonzero $L\in
k(t)\langle{\partial}_{t}\rangle$ is called a telescoper for $\omega$ if
$L(\omega)$ is exact.
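For instance (an illustrative example, not taken from the text), let $n=1$ and $\omega=\frac{1}{x_{1}^{2}+t}\,{\rm d}x_{1}$. Then $L={\partial}_{t}+\frac{1}{2t}$ is a telescoper for $\omega$: one checks that $L\big(\frac{1}{x_{1}^{2}+t}\big)={\partial}_{x_{1}}(g)$ with certificate $g=-\frac{x_{1}}{2t(x_{1}^{2}+t)}$, so $L(\omega)={\rm d}(g)$ is exact. A sympy verification:

```python
import sympy as sp

x1, t = sp.symbols('x1 t')
f = 1 / (x1**2 + t)

Lf = sp.diff(f, t) + f / (2 * t)  # (d_t + 1/(2t))(f)
g = -x1 / (2 * t * (x1**2 + t))   # the certificate
assert sp.simplify(Lf - sp.diff(g, x1)) == 0
```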
## 3 Parametrized Poincaré lemma
The famous Poincaré lemma states that if $B$ is an open ball in
$\mathbb{R}^{n}$, then any smooth closed $i$-form $\omega$ defined on $B$ is
exact, for any integer $i$ with $1\leq i\leq n$. In this section, we shall
prove the following lemma, which can be viewed as a parametrized analogue of
the Poincaré lemma for $\bigwedge({\cal M})$.
###### Lemma 5 (Parametrized Poincaré lemma)
Let $\omega\in\bigwedge^{p}({\cal M})$. If $\omega$ is ${\partial}_{t}$-closed,
then it is ${\partial}_{t}$-exact.
To prove the above lemma, we need some preparatory lemmas.
###### Lemma 6 (Lipshitz’s lemma (Lemma 3 of [24]))
Assume that $f$ is a $D$-finite element over $k({\bf x})$. For each pair
$1\leq i<j\leq n$, there is a nonzero operator $L\in
k(x_{1},\cdots,\widehat{x_{j}},\cdots,x_{n})\langle{\partial}_{x_{i}},{\partial}_{x_{j}}\rangle$
(so the coefficients of $L$ are free of $x_{j}$) such that $L(f)=0$.
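As a toy illustration of the lemma (ours, not from [24]): any function of the sum $x_{1}+\cdots+x_{n}$, e.g. $f=1/(x_{1}+x_{2}+x_{3})$, is annihilated by ${\partial}_{x_{1}}-{\partial}_{x_{2}}$, a nonzero operator involving only the pair $(1,2)$ with constant coefficients.

```python
import sympy as sp

x1, x2, x3 = sp.symbols('x1:4')
f = 1 / (x1 + x2 + x3)  # D-finite over k(x1, x2, x3)

# (d_{x1} - d_{x2})(f) = 0: an annihilating operator in d_{x1}, d_{x2} only.
assert sp.simplify(sp.diff(f, x1) - sp.diff(f, x2)) == 0
```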
The following lemma is a generalization of Lipshitz’s lemma.
###### Lemma 7
Assume that $f_{1},\cdots,f_{m}$ are $D$-finite elements over $k({\bf x},t)$
and
$S\subset\\{t,x_{1},\cdots,x_{n},{\partial}_{t},{\partial}_{x_{1}},\cdots,{\partial}_{x_{n}}\\}$
with $|S|>n+1$. Then one can compute a nonzero operator $T$ in $k\langle
S\rangle$ such that $T(f_{i})=0$ for all $i=1,\cdots,m$.
* Proof.
For each $\delta\in\\{{\partial}_{t},{\partial}_{x_{1}},\cdots,{\partial}_{x_{n}}\\}$ and
each $i=1,\cdots,m$, let $T_{\delta,i}$ be a nonzero operator in $K\langle\delta\rangle$
such that $T_{\delta,i}(f_{i})=0$, and set $T_{\delta}$ to be the least common left
multiple of $T_{\delta,1},\dots,T_{\delta,m}$. Then $T_{\delta}(f_{i})=0$ for all
$i=1,\cdots,m$. The lemma then follows from an argument similar to that in the
proof of Lipshitz’s lemma.
###### Lemma 8
Assume that $f_{1},\cdots,f_{m}$ are $D$-finite over $k({\bf x},t)$,
$I,J\subset\\{1,\cdots,n\\}$ and $I\cap J=\emptyset$. Assume further that
$V\subset\\{x_{i},{\partial}_{x_{i}}|i\in\\{1,\cdots,n\\}\setminus(I\cup
J)\\}$ with $|V|=n-|I|-|J|$. Then one can compute an operator $P$ of the form
$L+\sum_{i\in I}{\partial}_{x_{i}}M_{i}+\sum_{j\in J}N_{j}{\partial}_{x_{j}}$
such that $P(f_{l})=0$ for all $l=1,\cdots,m$, where $L$ is a nonzero operator
in $k\langle\\{t,{\partial}_{t}\\}\cup V\rangle$,
$M_{i},N_{j}\in{\mathfrak{W}}$ and $N_{j}$ is free of $x_{i}$ for all $i\in I$
and $j\in J$.
* Proof.
Without loss of generality, we assume that $I=\\{1,\cdots,r\\}$ and
$J=\\{r+1,\cdots,r+s\\}$ where $r=|I|$ and $s=|J|$. Let
$S=\\{t,{\partial}_{t}\\}\cup\\{{\partial}_{x_{i}}|i\in
I\\}\cup\\{x_{j}|j=r+1,\cdots,r+s\\}\cup V.$
Then $|S|=n+2>n+1$. By Lemma 7, one can compute a $T\in k\langle
S\rangle\setminus\\{0\\}$ such that $T(f_{l})=0$ for all $l=1,\cdots,m$. Write
$T=\sum_{{\mathbf{d}}=(d_{1},\cdots,d_{r})\in\Gamma_{1}}{\partial}_{x_{1}}^{d_{1}}\cdots{\partial}_{x_{r}}^{d_{r}}T_{{\mathbf{d}}}$
where $T_{\mathbf{d}}\in
k\langle\\{t,{\partial}_{t},x_{r+1},\cdots,x_{r+s}\\}\cup
V\rangle\setminus\\{0\\}$ and $\Gamma_{1}$ is a finite subset of
${\mathbb{Z}}^{r}$. Let $\bar{{\mathbf{d}}}=(\bar{d}_{1},\cdots,\bar{d}_{r})$
be the minimal element of $\Gamma_{1}$ with respect to the lex order on
${\mathbb{Z}}^{r}$. Multiplying $T$ by $\prod_{i=1}^{r}x_{i}^{\bar{d}_{i}}$ on
the left and using the formula (2) yield that
$\left(\prod_{i=1}^{r}x_{i}^{\bar{d}_{i}}\right)T=\alpha
T_{\bar{{\mathbf{d}}}}+\sum_{i=1}^{r}{\partial}_{x_{i}}\tilde{T}_{i}$ (3)
where $\alpha$ is a nonzero integer and $\tilde{T}_{i}\in k\langle
S\cup\\{x_{i}|i\in I\\}\rangle$. Write
$T_{\bar{{\mathbf{d}}}}=\sum_{{\mathbf{e}}=(e_{1},\cdots,e_{s})\in\Gamma_{2}}L_{{\mathbf{e}}}x_{r+1}^{e_{1}}\cdots
x_{r+s}^{e_{s}}$
where $L_{{\mathbf{e}}}\in k\langle\\{t,{\partial}_{t}\\}\cup
V\rangle\setminus\\{0\\}$ and $\Gamma_{2}$ is a finite subset of
${\mathbb{Z}}^{s}$. Let $\bar{{\mathbf{e}}}=(\bar{e}_{1},\cdots,\bar{e}_{s})$
be the maximal element of $\Gamma_{2}$ with respect to the lex order on
${\mathbb{Z}}^{s}$. Multiplying $T_{\bar{{\mathbf{d}}}}$ by
$\prod_{i=1}^{s}{\partial}_{x_{r+i}}^{\bar{e}_{i}}$ on the left and using the
formula (1) yield that
$\left(\prod_{i=1}^{s}{\partial}_{x_{r+i}}^{\bar{e}_{i}}\right)T_{\bar{{\mathbf{d}}}}=\beta
L_{\bar{{\mathbf{e}}}}+\sum_{j\in J}\tilde{L}_{j}{\partial}_{x_{j}}$ (4)
where $\tilde{L}_{j}\in
k\langle\\{t,{\partial}_{t},x_{r+1},\cdots,x_{r+s},{\partial}_{x_{r+1}},\cdots,{\partial}_{x_{r+s}}\\}\cup
V\rangle$ and $\beta$ is a nonzero integer. Combining (3) with (4) yields the
required operator $P$.
required operator $P$.
###### Corollary 9
Assume that $f_{1},\cdots,f_{m}$ are $D$-finite over $k({\bf x},t)$, $J$ is a
subset of $\\{1,\cdots,n\\}$ and
$V\subset\\{x_{i},{\partial}_{x_{i}}|i\in\\{1,\cdots,n\\}\setminus J\\}$ with
$|V|=n-|J|$. Assume further that ${\partial}_{x_{j}}(f_{l})=0$ for all $j\in
J$ and $l=1,\cdots,m$. Then one can compute a nonzero $L\in
k\langle\\{t,{\partial}_{t}\\}\cup V\rangle$ such that $L(f_{l})=0$ for all
$l=1,\cdots,m$.
* Proof.
In Lemma 8, set $I=\emptyset$. Then $P=L+\sum_{j\in J}N_{j}{\partial}_{x_{j}}$ with $P(f_{l})=0$; since ${\partial}_{x_{j}}(f_{l})=0$ for all $j\in J$, it follows that $L(f_{l})=P(f_{l})=0$ for all $l=1,\cdots,m$.
The main result of this section is the following theorem which can be viewed
as a generalization of Corollary 9 to differential forms. To describe and
prove this theorem, let us recall some notation from the first chapter of
[34]. For any $f\in R$, we define ${\rm d}_{0}(f)=0$ and
${\rm d}_{s}(f)=\partial_{x_{1}}(f){\rm d}x_{1}+\cdots+\partial_{x_{s}}(f){\rm
d}x_{s}$
for $s\in\\{1,2,\ldots,n\\}$. We can extend ${\rm d}_{s}$ to the module
$\bigwedge({\cal M})$ in a natural way. Precisely, let
$\omega=\sum_{i=1}^{m}f_{i}{\mathfrak{m}}_{i}$ where ${\mathfrak{m}}_{i}$ is a
monomial in ${\rm d}x_{1},\cdots,{\rm d}x_{n}$. Then ${\rm d}_{0}(\omega)=0$
and
${\rm d}_{s}(\omega)=\sum_{i=1}^{m}\sum_{j=1}^{s}{\partial}_{x_{j}}(f_{i}){\rm
d}x_{j}\wedge{\mathfrak{m}}_{i}=\sum_{j=1}^{s}{\rm
d}x_{j}\wedge{\partial}_{x_{j}}(\omega).$
By definition, one sees that
${\rm d}_{s}(u\wedge{\rm d}x_{s})={\rm d}_{s-1}(u)\wedge{\rm
d}x_{s}\,\,\mbox{and}\,\,{\rm d}_{s}(u)={\rm d}_{s-1}(u)+{\rm
d}x_{s}\wedge{\partial}_{x_{s}}(u).$
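The truncated derivations ${\rm d}_{s}$ are easy to experiment with. The following sketch (our own illustration; the dictionary-based encoding of forms is an implementation choice, not notation from the text) represents a form as a map from sorted index tuples to coefficients and checks that $\omega=x_{2}x_{3}\,{\rm d}x_{1}$ satisfies ${\rm d}_{1}\omega=0$ while ${\rm d}_{2}\omega=-x_{3}\,{\rm d}x_{1}\wedge{\rm d}x_{2}$.

```python
import sympy as sp

n = 3
X = sp.symbols('x1:%d' % (n + 1))  # (x1, x2, x3)

def wedge_dx(j, mono):
    """dx_j ^ mono for a sorted index tuple mono; returns (sign, tuple),
    or (0, None) when dx_j already occurs in mono."""
    if j in mono:
        return 0, None
    pos = sum(1 for k in mono if k < j)
    return (-1) ** pos, tuple(sorted(mono + (j,)))

def d_s(omega, s):
    """Truncated exterior derivative d_s of a form encoded as
    {sorted index tuple: coefficient}."""
    res = {}
    for mono, f in omega.items():
        for j in range(1, s + 1):
            sign, new = wedge_dx(j, mono)
            if sign:
                res[new] = res.get(new, 0) + sign * sp.diff(f, X[j - 1])
    return {m: sp.simplify(c) for m, c in res.items() if sp.simplify(c) != 0}

omega = {(1,): X[1] * X[2]}        # x2*x3 dx1
print(d_s(omega, 1))               # {}            -> d_1(omega) = 0
print(d_s(omega, 2))               # {(1, 2): -x3} -> -x3 dx1 ^ dx2
```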
###### Theorem 10
Assume that $0\leq s\leq
n,V\subset\\{x_{s+1},\cdots,x_{n},{\partial}_{x_{s+1}},\cdots,{\partial}_{x_{n}}\\}$
with $|V|=n-s$ and $\omega\in\bigwedge^{p}({\cal M})$. If ${\rm
d}_{s}\omega=0$, then one can compute a nonzero $L\in
k\langle\\{t,\partial_{t}\\}\cup V\rangle$ and $\mu\in\bigwedge^{p-1}({\cal
M})$ such that $L(\omega)={\rm d}_{s}\mu.$
###### Remark 11
1. 1.
If $p=0$, then $\omega=f\in R$; for $s=0$ the condition ${\rm d}_{s}f=0$ holds
trivially, while for $s>0$ it holds if and only if ${\partial}_{x_{i}}(f)=0$
for all $1\leq i\leq s$. Therefore Corollary 9 is a special case of Theorem 10.
2. 2.
Note that the parametrized Poincaré lemma is just the special case of Theorem
10 when $s=n$.
* Proof.
We proceed by induction on $s$. Assume that $s=0$ and write
$\omega=\sum_{i=1}^{m}f_{i}{\mathfrak{m}}_{i}$
where ${\mathfrak{m}}_{i}$ is a monomial in ${\rm d}x_{1},{\rm
d}x_{2},\cdots,{\rm d}x_{n}$ and $f_{i}\in R$. By Corollary 9 with
$J=\emptyset$, one can compute a nonzero $L\in
k\langle\\{t,{\partial}_{t}\\}\cup V\rangle$ such that $L(f_{i})=0$ for all
$i=1,\cdots,m$. Then one has that
$L(\omega)=\sum_{i=1}^{m}L(f_{i}){\mathfrak{m}}_{i}=0.$
This proves the base case. Now assume that the theorem holds for $s<\ell$ and
consider the case $s=\ell$. Write
$\omega=u\wedge{\rm d}x_{\ell}+v$
where neither $u$ nor $v$ involves ${\rm d}x_{\ell}$. Then the assumption
${\rm d}_{\ell}\omega=0$ implies that
${\rm d}_{\ell-1}u\wedge{\rm d}x_{\ell}+{\rm d}_{\ell}v={\rm
d}_{\ell-1}u\wedge{\rm d}x_{\ell}+{\rm d}_{\ell-1}v+{\rm
d}x_{\ell}\wedge{\partial}_{x_{\ell}}(v)=0.$
Since all of ${\rm d}_{\ell-1}u,{\rm d}_{\ell-1}v,{\partial}_{x_{\ell}}(v)$ do
not involve ${\rm d}x_{\ell}$, one has that ${\rm d}_{\ell-1}v=0$ and ${\rm
d}_{\ell-1}(u)-{\partial}_{x_{\ell}}(v)=0$. By the induction hypothesis, one
can compute a nonzero $\tilde{L}\in
k\langle\\{t,x_{\ell},{\partial}_{t}\\}\cup V\rangle$ and
$\tilde{\mu}\in\bigwedge^{p-1}({\cal M})$ such that
$\tilde{L}(v)={\rm d}_{\ell-1}(\tilde{\mu}).$ (5)
We claim that $\tilde{L}$ can be chosen to be free of $x_{\ell}$. Write
$\tilde{L}=\sum_{j=0}^{d}N_{j}x_{\ell}^{j}$
where $N_{j}\in k\langle\\{t,{\partial}_{t}\\}\cup V\rangle$ and $N_{d}\neq
0$. Multiplying $\tilde{L}$ by ${\partial}_{x_{\ell}}^{d}$ on the left and
using the formula (1) yield that
${\partial}_{x_{\ell}}^{d}\tilde{L}=\sum_{j=0}^{d}N_{j}{\partial}_{x_{\ell}}^{d}x_{\ell}^{j}=\alpha
N_{d}+\tilde{N}{\partial}_{x_{\ell}}$ (6)
where $\alpha$ is a nonzero integer and $\tilde{N}\in
k\langle\\{t,x_{\ell},{\partial}_{t},{\partial}_{x_{\ell}}\\}\cup V\rangle$.
The equalities (5) and (6) together with ${\partial}_{x_{\ell}}(v)={\rm
d}_{\ell-1}(u)$ yield that $N_{d}(v)={\rm d}_{\ell-1}(\pi)$ for some
$\pi\in\bigwedge^{p-1}({\cal M})$. This proves the claim. Now one has that
$\tilde{L}(\omega)=\tilde{L}(u)\wedge{\rm d}x_{\ell}+{\rm
d}_{\ell-1}(\tilde{\mu})=\tilde{L}(u)\wedge{\rm d}x_{\ell}+{\rm
d}x_{\ell}\wedge{\partial}_{x_{\ell}}(\tilde{\mu})+{\rm
d}_{\ell}(\tilde{\mu}).$
Since $\tilde{L}$ is free of $x_{1},\cdots,x_{\ell}$, $\tilde{L}{\rm
d}_{\ell}={\rm d}_{\ell}\tilde{L}$. This implies that
$\displaystyle 0=\tilde{L}({\rm d}_{\ell}(\omega))={\rm
d}_{\ell}(\tilde{L}(\omega))$ $\displaystyle={\rm
d}_{\ell-1}(\tilde{L}(u))\wedge{\rm d}x_{\ell}+{\rm d}x_{\ell}\wedge{\rm
d}_{\ell-1}({\partial}_{x_{\ell}}(\tilde{\mu}))$ $\displaystyle={\rm
d}_{\ell-1}\left(\tilde{L}(u)-{\partial}_{x_{\ell}}(\tilde{\mu})\right)\wedge{\rm
d}x_{\ell}.$
Note that $\tilde{\mu}$ can always be chosen to be free of ${\rm d}x_{\ell}$.
Hence one has that ${\rm
d}_{\ell-1}(\tilde{L}(u)-{\partial}_{x_{\ell}}(\tilde{\mu}))=0$. By the
induction hypothesis, one can compute a nonzero $\bar{L}\in
k\langle\\{t,{\partial}_{x_{\ell}},{\partial}_{t}\\}\cup V\rangle$ and
$\bar{\mu}\in\bigwedge^{p-1}({\cal M})$ such that
$\bar{L}\left(\tilde{L}(u)-{\partial}_{x_{\ell}}(\tilde{\mu})\right)={\rm
d}_{\ell-1}(\bar{\mu}).$ (7)
Write
$\bar{L}=\sum_{j=e_{1}}^{e_{2}}{\partial}_{x_{\ell}}^{j}M_{j}$
where $M_{j}\in k\langle\\{t,{\partial}_{t}\\}\cup V\rangle$ and
$M_{e_{1}}\neq 0$. Multiplying $\bar{L}$ by $x_{\ell}^{e_{1}}$ on the left and
using the formula (2) yield that
$x_{\ell}^{e_{1}}\bar{L}=\beta M_{e_{1}}+{\partial}_{x_{\ell}}\tilde{M}$
where $\beta$ is a nonzero integer and $\tilde{M}\in
k\langle\\{t,{\partial}_{t},{\partial}_{x_{\ell}},x_{\ell}\\}\cup V\rangle$.
Hence, multiplying the equality (7) by $x_{\ell}^{e_{1}}$ on the left, one gets that
$\beta
M_{e_{1}}\left(\tilde{L}(u)-{\partial}_{x_{\ell}}(\tilde{\mu})\right)={\rm
d}_{\ell-1}(x_{\ell}^{e_{1}}\bar{\mu})+{\partial}_{x_{\ell}}\left(\tilde{M}\left(\tilde{L}(u)-{\partial}_{x_{\ell}}(\tilde{\mu})\right)\right).$
Set $L=\beta M_{e_{1}}\tilde{L}$. Then one has that
$\displaystyle L(\omega)$ $\displaystyle=\beta
M_{e_{1}}\left((\tilde{L}(u)-{\partial}_{x_{\ell}}(\tilde{\mu}))\wedge{\rm
d}x_{\ell}+{\rm d}_{\ell}(\tilde{\mu})\right)$ $\displaystyle=\left(\beta
M_{e_{1}}\left(\tilde{L}(u)-{\partial}_{x_{\ell}}(\tilde{\mu}\right)\right)\wedge{\rm
d}x_{\ell}+{\rm d}_{\ell}(\beta M_{e_{1}}(\tilde{\mu}))$ $\displaystyle={\rm
d}_{\ell-1}(x_{\ell}^{e_{1}}\bar{\mu})\wedge{\rm
d}x_{\ell}+{\partial}_{x_{\ell}}\tilde{M}\left(\tilde{L}(u)-{\partial}_{x_{\ell}}(\tilde{\mu})\right)\wedge{\rm
d}x_{\ell}+{\rm d}_{\ell}(\beta M_{e_{1}}(\tilde{\mu}))$ $\displaystyle={\rm
d}_{\ell}\left(x_{\ell}^{e_{1}}\bar{\mu}+\tilde{M}\left(\tilde{L}(u)-{\partial}_{x_{\ell}}(\tilde{\mu})\right)+\beta
M_{e_{1}}(\tilde{\mu})\right).$
The last equality holds because
${\rm
d}_{\ell-1}\left(\tilde{M}\left(\tilde{L}(u)-{\partial}_{x_{\ell}}(\tilde{\mu})\right)\right)=\tilde{M}{\rm
d}_{\ell-1}\left(\tilde{L}(u)-{\partial}_{x_{\ell}}(\tilde{\mu})\right)=0.$
###### Remark 12
Lemma 5 can be derived from the finiteness of the de Rham cohomology groups of
$D$-modules in the Bernstein class. To see this, let $\omega$ be a
differential $s$-form with coefficients in $R$ and let $M$ be the $D$-module
generated by all coefficients of $\omega$ and all derivatives of these
coefficients with respect to ${\partial}_{t}$. By Proposition 5.2 on page 12
of [5], $M$ is a $D$-module in the Bernstein class. Assume that $\omega$ is
closed. Then ${\partial}_{t}^{j}(\omega)\in H_{DR}^{s}(M)$, the $s$-th de Rham
cohomology group of $M$, for all nonnegative integers $j$. By Theorem 6.1 on
page 16 of [5], $H_{DR}^{s}(M)$ is of finite dimension over $k(t)$. This
implies that there are $a_{0},\cdots,a_{m}\in k(t)$ such that
$\sum_{j=0}^{m}a_{j}{\partial}_{t}^{j}(\omega)=0$ in $H_{DR}^{s}(M)$, i.e.
$\sum_{j=0}^{m}a_{j}{\partial}_{t}^{j}(\omega)$ is exact. This proves the
existence of telescopers for the ${\partial}_{t}$-closed differential forms.
However, the proof of Theorem 10 is constructive and it provides a method to
compute a telescoper if it exists.
The proof of Theorem 10 can be summarized as the following algorithm.
###### Algorithm 13
Input: $\omega\in\bigwedge^{p}({\cal M})$ and
$V\subset\\{x_{i},{\partial}_{x_{i}}|i=s+1,\cdots,n\\}$ satisfying that ${\rm
d}_{s}(\omega)=0$ and $|V|=n-s$
Output: a nonzero $L\in k\langle\\{t,{\partial}_{t}\\}\cup V\rangle$ such that
$L(\omega)={\rm d}_{s}(\mu)$ for some $\mu\in\bigwedge({\cal M})$.
1. 1.
If $\omega\in R$, then by Corollary 9, compute a nonzero $L\in
k\langle\\{t,{\partial}_{t}\\}\cup V\rangle$ such that $L(\omega)=0$. Return
$L$.
2. 2.
Write $\omega=u\wedge{\rm d}x_{s}+v$ with $u,v$ not involving ${\rm d}x_{s}$.
3. 3.
Call Algorithm 13 with $v$ and $V\cup\\{x_{s}\\}$ as inputs and let
$\tilde{L}$ be the output.
1. (a)
Write $\tilde{L}=\sum_{j=0}^{d}N_{j}x_{s}^{j}$ with $N_{j}\in
k\langle\\{t,{\partial}_{t}\\}\cup V\rangle$ and $N_{d}\neq 0$.
2. (b)
Compute a $\tilde{\mu}\in\bigwedge^{p-1}({\cal M})$ such that $N_{d}(v)={\rm
d}_{s-1}(\tilde{\mu})$.
4. 4.
Write $N_{d}(\omega)=(N_{d}(u)-{\partial}_{x_{s}}(\tilde{\mu}))\wedge{\rm
d}x_{s}+{\rm d}_{s}(\tilde{\mu})$.
5. 5.
Call Algorithm 13 with $N_{d}(u)-{\partial}_{x_{s}}(\tilde{\mu})$ and
$V\cup\\{{\partial}_{x_{s}}\\}$ as inputs and let $\bar{L}$ be the output.
6. 6.
Write $\bar{L}=\sum_{j=e_{1}}^{e_{2}}{\partial}_{x_{s}}^{j}M_{j}$ with
$M_{j}\in k\langle\\{t,{\partial}_{t}\\}\cup V\rangle$ and $M_{e_{1}}\neq 0$.
7. 7.
Return $M_{e_{1}}N_{d}$.
## 4 The existence of telescopers
It is easy to see that if a differential form is ${\partial}_{t}$-exact then
it is ${\partial}_{t}$-closed. Therefore Lemma 5 implies that given a
$\omega\in\bigwedge^{p}({\cal M})$, to decide whether it has a telescoper, it
suffices to decide whether there is a nonzero $L\in k\langle
t,{\partial}_{t}\rangle$ such that $L({\rm d}\omega)=0$. Suppose that
${\rm d}\omega=\sum_{1\leq i_{1}<\dots<i_{p+1}\leq
n}a_{i_{1},\dots,i_{p+1}}\,{\rm d}x_{i_{1}}\wedge\cdots\wedge{\rm
d}x_{i_{p+1}},\,\,a_{i_{1},\dots,i_{p+1}}\in{\cal U}.$
Then $L({\rm d}\omega)=0$ if and only if $L(a_{i_{1},\dots,i_{p+1}})=0$ for
all $1\leq i_{1}<\cdots<i_{p+1}\leq n$. So the existence problem of
telescopers can be reduced to the following problem.
###### Problem 14
Given an element $f\in{\cal U}$, decide whether there exists a nonzero $L\in
k\langle t,{\partial}_{t}\rangle$ such that $L(f)=0$.
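For instance (our illustration), $f=e^{xt}$ admits no such $L$: since ${\partial}_{t}^{i}(f)=x^{i}f$, any $L=\sum_{i}a_{i}(t){\partial}_{t}^{i}$ gives $L(f)=\big(\sum_{i}a_{i}(t)x^{i}\big)f$, which vanishes identically only if all $a_{i}=0$. A sympy check of the derivative pattern:

```python
import sympy as sp

x, t = sp.symbols('x t')
f = sp.exp(x * t)

# d_t^i(exp(x*t)) = x^i * exp(x*t), so L(f) = (sum_i a_i(t)*x^i) * f,
# which is nonzero for every nonzero L in k<t, d_t>.
for i in range(4):
    assert sp.simplify(sp.diff(f, t, i) - x**i * f) == 0
```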
Let $P\in K\langle{\partial}_{t}\rangle\setminus\\{0\\}$ be the monic operator
of minimal order such that $P(f)=0$. Then $f$ is annihilated by a nonzero
$L\in k(t)\langle{\partial}_{t}\rangle$ if and only if $P$ is a right-hand
factor of $L$, i.e. $L=QP$ for some $Q\in K\langle{\partial}_{t}\rangle$. An
operator $P$ that is a right-hand factor of some nonzero $L\in
k(t)\langle{\partial}_{t}\rangle$ will be called an $(\mathbf{x},t)$-separable
operator. Problem 14 then is equivalent to the following one.
###### Problem 15
Given a $P\in K\langle{\partial}_{t}\rangle\setminus\\{0\\}$, decide whether
$P$ is $(\mathbf{x},t)$-separable.
The rest of this paper is aimed at developing an algorithm to solve the above
problem. Let us first investigate the solutions of $(\mathbf{x},t)$-separable
operators.
###### Notation 16
$C_{t}:=\left\\{c\in{\cal U}\mid{\partial}_{t}(c)=0\right\\},\,\,C_{\bf
x}:=\left\\{c\in{\cal U}\mid\forall\,x\in{\bf
x},{\partial}_{x}(c)=0\right\\}.$
Assume that $L\in k(t)\langle{\partial}_{t}\rangle\setminus\\{0\\}$. By
Corollary 1.2.12 of [29], the solution space of $L=0$ in ${\cal U}$ is a
$C_{t}$-vector space of dimension ${\rm ord}(L)$. Moreover we have the
following lemma.
###### Lemma 17
If $L\in k(t)\langle{\partial}_{t}\rangle\setminus\\{0\\}$, then the solution
space of $L=0$ in ${\cal U}$ has a basis in $C_{\bf x}$.
* Proof.
Let $d={\rm ord}(L)>0$ and $\\{v_{1},\cdots,v_{d}\\}$ be a basis of the
solution space of $L=0$ in ${\cal U}$. For all $1\leq i\leq d$ and all $1\leq
l\leq m$,
$L({\partial}_{x_{l}}(v_{i}))={\partial}_{x_{l}}(L(v_{i}))=0.$
Set ${\mathbf{v}}=(v_{1},\cdots,v_{d})^{t}$. Then for each $l=1,\cdots,m$,
${\partial}_{x_{l}}({\mathbf{v}})=A_{l}{\mathbf{v}},\,\,A_{l}\in{\rm
Mat}_{d}(C_{t}).$
Since
${\partial}_{x_{i}}{\partial}_{x_{j}}({\mathbf{v}})={\partial}_{x_{j}}{\partial}_{x_{i}}({\mathbf{v}})$
and $v_{1},\cdots,v_{d}$ are linearly independent over $C_{\bf x}$, for all
$1\leq i<j\leq m$,
${\partial}_{x_{i}}(A_{j})-{\partial}_{x_{j}}(A_{i})=A_{i}A_{j}-A_{j}A_{i}$
(8)
On the other hand, ${\partial}_{t}(A_{i})=0$ for all $1\leq i\leq m$. These
together with (8) imply that the system
${\partial}_{x_{1}}(Y)=A_{1}Y,\cdots,{\partial}_{x_{m}}(Y)=A_{m}Y,{\partial}_{t}(Y)=0$
is integrable. Then there is an invertible matrix $G$ with entries in ${\cal
U}$ satisfying this system. Let $\bar{{\mathbf{v}}}=G^{-1}{\mathbf{v}}$. As
${\partial}_{t}(G^{-1})=0$, $\bar{{\mathbf{v}}}$ is still a basis of the
solution space of $L=0$ in ${\cal U}$. Furthermore, for each $i=1,\cdots,m$,
we have
${\partial}_{x_{i}}(\bar{{\mathbf{v}}})={\partial}_{x_{i}}(G^{-1}{\mathbf{v}})={\partial}_{x_{i}}(G^{-1}){\mathbf{v}}+G^{-1}A_{i}{\mathbf{v}}=-G^{-1}A_{i}{\mathbf{v}}+G^{-1}A_{i}{\mathbf{v}}=0.$
Thus $\bar{{\mathbf{v}}}\in C_{\bf x}^{d}$.
As a consequence, we have the following corollary.
###### Corollary 18
Assume that $P\in K\langle{\partial}_{t}\rangle\setminus\\{0\\}$. Then $P$ is
$(\mathbf{x},t)$-separable if and only if the solutions of $P(y)=0$ in ${\cal
U}$ are of the form
$\sum_{i=1}^{s}g_{i}h_{i},\,\,g_{i}\in C_{t},h_{i}\in C_{\bf
x}\cap\\{f\in{\cal U}\mid P(f)=0\\}.$ (9)
* Proof.
The “only if” part is a direct consequence of Lemma 17. For the “if” part, one
only needs to prove that if $h\in C_{\bf x}\cap\\{f\in{\cal U}\mid P(f)=0\\}$
then $h$ is annihilated by a nonzero operator in
$k(t)\langle{\partial}_{t}\rangle$. Suppose that $h\in C_{\bf
x}\cap\\{f\in{\cal U}\mid P(f)=0\\}$. Let $L$ be the monic operator in
$K\langle{\partial}_{t}\rangle\setminus\\{0\\}$ which annihilates $h$ and is
of minimal order. Write
$L={\partial}_{t}^{\ell}+\sum_{i=0}^{\ell-1}a_{i}{\partial}_{t}^{i},a_{i}\in
K.$
Then for every $j\in\\{1,\dots,m\\}$
$0={\partial}_{x_{j}}(L(h))=\sum_{i=0}^{\ell-1}{\partial}_{x_{j}}(a_{i}){\partial}_{t}^{i}(h)+L({\partial}_{x_{j}}(h))=\sum_{i=0}^{\ell-1}{\partial}_{x_{j}}(a_{i}){\partial}_{t}^{i}(h).$
The last equality holds because $h\in C_{\mathbf{x}}$. By the minimality of
$L$, one sees that ${\partial}_{x_{j}}(a_{i})=0$ for all $i=0,\dots,\ell-1$
and all $j=1,\dots,m$. Hence $a_{i}\in k(t)$ for all $i$. In other words,
$L\in k(t)\langle{\partial}_{t}\rangle$.
For convenience, we introduce the following definition.
###### Definition 19
* $(1)$
We say $f\in{\cal U}$ is split if it can be written in the form $f=gh$ where
$g\in C_{t}$ and $h\in C_{\bf x}$, and say $f$ is semisplit if it is a sum
of finitely many split elements.
* $(2)$
We say a nonzero operator $P\in K\langle{\partial}_{t}\rangle$ is semisplit if
it is monic and all its coefficients are semisplit.
The semisplit operators have the following property.
###### Lemma 20
Assume that $P=Q_{1}Q_{2}$ where $P,Q_{1},Q_{2}$ are monic operators in
$K\langle{\partial}_{t}\rangle$. Assume further that $Q_{2}\in
k(t)[\mathbf{x},1/r]\langle{\partial}_{t}\rangle$ where $r\in
k[\mathbf{x},t]$. Then $P\in k(t)[\mathbf{x},1/r]\langle{\partial}_{t}\rangle$
if and only if so is $Q_{1}$.
* Proof.
Comparing the coefficients on both sides of $P=Q_{1}Q_{2}$ concludes the
lemma.
As a direct consequence, we have the following corollary.
###### Corollary 21
Assume that $P=Q_{1}Q_{2}$ where $P,Q_{1},Q_{2}$ are monic operators in
$K\langle{\partial}_{t}\rangle$. Assume further that $Q_{2}$ is semisplit.
Then $P$ is semisplit if and only if so is $Q_{1}$.
### 4.1 The completely reducible case
In Proposition 10 of [11], we show that given a hyperexponential function $h$
over $K$, ${\rm ann}(h)\cap k(t)\langle{\partial}_{t}\rangle\neq\\{0\\}$ if
and only if there is a nonzero $p\in k(\mathbf{x})[t]$ and $r\in k(t)$ such
that
$a=\frac{{\partial}_{t}(p)}{p}+r,$
where $a={\partial}_{t}(h)/h$. Remark that $a,p,r$ with $p\neq 0$ satisfy the
above equality if and only if
$\frac{1}{p}({\partial}_{t}-a)=({\partial}_{t}-r)\frac{1}{p}$. Under the
notion of $(\mathbf{x},t)$-separable and the language of differential
operators, Proposition 10 of [11] states that ${\partial}_{t}-a$ is
$(\mathbf{x},t)$-separable if and only if it is similar to a first order
operator in $k(t)\langle{\partial}_{t}\rangle$ by some $1/p$ with $p$ a
nonzero polynomial in $t$. In this section, we shall generalize Proposition 10
of [11] to the case of completely reducible operators. We shall use ${\rm
lclm}(Q_{1},Q_{2})$ to denote the monic operator of minimal order which is
divisible by both $Q_{1}$ and $Q_{2}$ on the right. We shall prove that if $P$
is $(\mathbf{x},t)$-separable and completely reducible then there is a nonzero
$L\in k(t)\langle{\partial}_{t}\rangle$ such that $P$ is the transformation of
$L$ by some $Q$ with semisplit coefficients. To this end, we need to introduce
some notations from [27].
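As a small illustration of the recalled criterion (our example, not from [11]): for the hyperexponential function $h=(x+t)e^{t^{2}}$ one has $a={\partial}_{t}(h)/h=\frac{1}{x+t}+2t=\frac{{\partial}_{t}(p)}{p}+r$ with $p=x+t\in k(\mathbf{x})[t]$ and $r=2t\in k(t)$, so ${\partial}_{t}-a$ is $(\mathbf{x},t)$-separable. The sympy sketch below verifies the corresponding operator identity $\frac{1}{p}({\partial}_{t}-a)=({\partial}_{t}-r)\frac{1}{p}$ by applying both sides to a generic function.

```python
import sympy as sp

x, t = sp.symbols('x t')
f = sp.Function('f')(t)

p = x + t                  # hypothetical choice coming from h = (x+t)*exp(t**2)
r = 2 * t
a = sp.diff(p, t) / p + r  # a = d_t(h)/h = 1/(x+t) + 2*t

lhs = (sp.diff(f, t) - a * f) / p  # (1/p)(d_t - a)(f)
g = f / p
rhs = sp.diff(g, t) - r * g        # (d_t - r)(f/p)
assert sp.simplify(lhs - rhs) == 0
```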
###### Definition 22
Assume that $P,Q\in K\langle{\partial}_{t}\rangle\setminus\\{0\\}$.
1. 1.
We say $\tilde{P}$ is the transformation of $P$ by $Q$ if $\tilde{P}$ is the
monic operator satisfying that $\tilde{P}Q=\lambda{\rm lclm}(P,Q)$ for some
$\lambda\in K$.
2. 2.
We say $\tilde{P}$ is similar to $P$ (by $Q$) if there is an operator $Q$ with
${\rm gcrd}(P,Q)=1$ such that $\tilde{P}$ is the transformation of $P$ by $Q$,
where ${\rm gcrd}(P,Q)$ denotes the greatest common right-hand factor of $P$
and $Q$.
###### Definition 23
1. 1.
We say $P\in K\langle{\partial}_{t}\rangle$ is completely reducible if it is
the lclm of a family of irreducible operators in
$K\langle{\partial}_{t}\rangle$.
2. 2.
We say $Q\in K\langle{\partial}_{t}\rangle$ is the maximal completely
reducible right-hand factor of $P\in K\langle{\partial}_{t}\rangle$ if $Q$ is
the lclm of all irreducible right-hand factors of $P$.
Given a $P\in K\langle{\partial}_{t}\rangle$, Theorem 7 of [27] implies that
$P$ has the following unique decomposition called the maximal completely
reducible decomposition or the m.c.r. decomposition for short,
$P=\lambda H_{r}H_{r-1}\dots H_{1}$
where $\lambda\in K$ and $H_{i}$ is the maximal completely reducible right-
hand factor of $H_{r}\dots H_{i}$. An $L\in
k(t)\langle{\partial}_{t}\rangle$ has two m.c.r. decompositions, obtained by
viewing it as an operator in $k(t)\langle{\partial}_{t}\rangle$ and as an
operator in $K\langle{\partial}_{t}\rangle$, respectively. In the following,
we shall prove that these two decompositions coincide. For convenience, we shall denote by
$P_{x_{i}=c_{i}}$ the operator obtained by replacing $x_{i}$ by $c_{i}\in k$
in $P$.
###### Lemma 24
Assume that $P,L$ are two monic operators in $K\langle{\partial}_{t}\rangle$.
Assume further that $P\in k(t)[\mathbf{x},1/r]\langle{\partial}_{t}\rangle$
with $r\in k[\mathbf{x},t]$, and $L\in k(t)\langle{\partial}_{t}\rangle$. Let
${\mathbf{c}}\in k^{m}$ be such that $r({\mathbf{c}})\neq 0$.
1. 1.
If ${\rm gcrd}(P_{\mathbf{x}=\mathbf{c}},L)=1$ then ${\rm gcrd}(P,L)=1$.
2. 2.
If ${\rm gcrd}(P,L)=1$ then there is ${\mathbf{a}}\in k^{m}$ such that
$r({\mathbf{a}})\neq 0$ and ${\rm gcrd}(P_{\mathbf{x}={\mathbf{a}}},L)=1$.
* Proof.
1\. We shall prove the lemma by induction on $m=|\mathbf{x}|$. Assume that
$m=1$, and ${\rm gcrd}(P,L)\neq 1$. Then there are $M,N\in
k(t)[x_{1}]\langle{\partial}_{t}\rangle$ with ${\rm ord}(M)<{\rm ord}(L)$ such
that $MP+NL=0.$ Write
$M=\sum_{i=0}^{n-1}a_{i}{\partial}_{t}^{i},\quad
N=\sum_{i=0}^{s}b_{i}{\partial}_{t}^{i}$
where $n={\rm ord}(L)$. If the $a_{i}$’s have a common factor $c$ in
$k(t)[x_{1}]$, then one sees that $c$ is a common factor of the $b_{i}$’s.
Thus we can cancel this factor $c$. So without loss of generality, we may
assume that the $a_{i}$’s have no common factor. This implies that
$M_{x_{1}=c_{1}}\neq 0$ and
$M_{x_{1}=c_{1}}P_{x_{1}=c_{1}}+N_{x_{1}=c_{1}}L=0$. Since ${\rm
ord}(M_{x_{1}=c_{1}})<{\rm ord}(L)$, ${\rm gcrd}(P_{x_{1}=c_{1}},L)\neq 1$, a
contradiction. For the general case, set $Q=P_{x_{1}=c_{1}}$. Then
$Q_{x_{2}=c_{2},\dots,x_{m}=c_{m}}=P_{\mathbf{x}={\mathbf{c}}}$. This implies
that ${\rm gcrd}(Q_{x_{2}=c_{2},\dots,x_{m}=c_{m}},L)=1$. By the induction
hypothesis, ${\rm gcrd}(Q,L)=1$. Finally, regarding $P$ and $L$ as operators
with coefficients in $k(t,x_{2},\dots,x_{m})[x_{1},1/r]$ and by the induction
hypothesis again, we get ${\rm gcrd}(P,L)=1$.
2\. Since ${\rm gcrd}(P,L)=1$, there are $M,N\in
K\langle{\partial}_{t}\rangle$ such that $MP+NL=1$. Let ${\mathbf{a}}\in
k^{m}$ be such that $r({\mathbf{a}})\neq 0$ and both
$M_{\mathbf{x}={\mathbf{a}}}$ and $N_{\mathbf{x}={\mathbf{a}}}$ are well-
defined. For such ${\mathbf{a}}$, one has that
$M_{\mathbf{x}={\mathbf{a}}}P_{\mathbf{x}={\mathbf{a}}}+N_{\mathbf{x}={\mathbf{a}}}L=1$
and then ${\rm gcrd}(P_{\mathbf{x}={\mathbf{a}}},L)=1$.
###### Lemma 25
Let $L\in k(t)\langle{\partial}_{t}\rangle$. The m.c.r. decompositions of $L$
viewed as an operator in $k(t)\langle{\partial}_{t}\rangle$ and an operator in
$K\langle{\partial}_{t}\rangle$ respectively coincide.
* Proof.
We first claim that an irreducible operator of
$k(t)\langle{\partial}_{t}\rangle$ is irreducible in
$K\langle{\partial}_{t}\rangle$. Let $P$ be a monic irreducible operator in
$k(t)\langle{\partial}_{t}\rangle$ and assume that $Q$ is a monic right-hand
factor of $P$ in $K\langle{\partial}_{t}\rangle$ with $1\leq{\rm ord}(Q)<{\rm
ord}(P)$. Then $P=\tilde{Q}Q$ for some $\tilde{Q}\in
K\langle{\partial}_{t}\rangle$. Suppose that $Q\in
k(t)[\mathbf{x},1/r]\langle{\partial}_{t}\rangle$. By Lemma 20, $\tilde{Q}$
belongs to $k(t)[\mathbf{x},1/r]\langle{\partial}_{t}\rangle$. Let
${\mathbf{c}}\in k^{m}$ be such that $r({\mathbf{c}})\neq 0$. Then
$P=\tilde{Q}_{\mathbf{x}={\mathbf{c}}}Q_{\mathbf{x}={\mathbf{c}}}$ and
$1\leq{\rm ord}(Q_{\mathbf{x}={\mathbf{c}}})\leq{\rm ord}(P)$. These imply
that $P$ is reducible, a contradiction. So $P$ is irreducible and thus the
claim holds. Let $L=\lambda H_{r}H_{r-1}\dots H_{1}$ be the m.c.r.
decomposition in $k(t)\langle{\partial}_{t}\rangle$. The above claim implies
that $H_{1}$ viewed as an operator in $K\langle{\partial}_{t}\rangle$ is
completely reducible. Assume that $H_{1}$ is not the maximal completely
reducible right-hand factor of $L$ in $K\langle{\partial}_{t}\rangle$. Let
$M\in K\langle{\partial}_{t}\rangle\setminus K$ be a monic irreducible right-
hand factor of $L$ satisfying that ${\rm gcrd}(M,H_{1})=1$. Due to Lemma 24,
there is ${\mathbf{a}}\in k^{m}$ satisfying that ${\rm
gcrd}(M_{\mathbf{x}={\mathbf{a}}},H_{1})=1$. Note that
$M_{\mathbf{x}={\mathbf{a}}}$ is a right-hand factor of $L$, so it admits an
irreducible right-hand factor, which is then also an irreducible right-hand
factor of $L$. Such an irreducible factor must be a right-hand factor of
$H_{1}$, and thus ${\rm gcrd}(M_{\mathbf{x}={\mathbf{a}}},H_{1})\neq 1$, a
contradiction. Therefore $H_{1}$ is the maximal completely reducible right-
hand factor of $L$ in $K\langle{\partial}_{t}\rangle$. Using the induction on
the order, one sees that $\lambda H_{r}H_{r-1}\dots H_{1}$ is the m.c.r.
decomposition of $L$ in $K\langle{\partial}_{t}\rangle$.
###### Lemma 26
Assume that $P$ is monic, $(\mathbf{x},t)$-separable and completely reducible.
Assume further that $P\in k(t)[\mathbf{x},1/r]\langle{\partial}_{t}\rangle$
with $r\in k[\mathbf{x},t]$. Let ${\mathbf{c}}\in k^{m}$ be such that
$r({\mathbf{c}})\neq 0$. Then $P_{\mathbf{x}=\mathbf{c}}$ is similar to $P$.
* Proof.
Let $\tilde{L}$ be a nonzero monic operator in
$k(t)\langle{\partial}_{t}\rangle$ with $P$ as a right-hand factor. Since $P$
is completely reducible, by Theorem 8 of [27], $P$ is a right-hand factor of
the maximal completely reducible right-hand factor of $\tilde{L}$. By Lemma
25, the maximal completely reducible right-hand factor of $\tilde{L}$ is in
$k(t)\langle{\partial}_{t}\rangle$. Hence we may assume that $\tilde{L}$ is
completely reducible after replacing $\tilde{L}$ by its maximal completely
reducible right-hand factor. Assume that $\tilde{L}=QP$ for some $Q\in
K\langle{\partial}_{t}\rangle$. By Lemma 20, $Q\in
k(t)[\mathbf{x},1/r]\langle{\partial}_{t}\rangle$. Then
$\tilde{L}=Q_{\mathbf{x}=\mathbf{c}}P_{\mathbf{x}=\mathbf{c}}$, i.e.
$P_{\mathbf{x}=\mathbf{c}}$ is a right-hand factor of $\tilde{L}$. We claim
that for a right-hand factor $T$ of $\tilde{L}$, there is a right-hand factor
$L$ of $\tilde{L}$ satisfying that ${\rm gcrd}(T,L)=1$ and ${\rm
lclm}(T,L)=\tilde{L}$. We prove this claim by induction on $s={\rm
ord}(\tilde{L})-{\rm ord}(T)$. When $s=0$, there is nothing to prove. Assume
that $s>0$. Then since $\tilde{L}$ is completely reducible, there is an
irreducible right-hand factor $L_{1}$ of $\tilde{L}$ such that ${\rm
gcrd}(T,L_{1})=1$. Let $N={\rm lclm}(T,L_{1})$. We have that ${\rm
ord}(N)={\rm ord}(T)+{\rm ord}(L_{1})$. Therefore ${\rm ord}(\tilde{L})-{\rm
ord}(N)<s$. By induction hypothesis, there is a right-hand factor $L_{2}$ of
$\tilde{L}$ such that ${\rm gcrd}(N,L_{2})=1$ and ${\rm
lclm}(N,L_{2})=\tilde{L}$. Let $L={\rm lclm}(L_{1},L_{2})$. Then
$\tilde{L}={\rm lclm}(N,L_{2})={\rm lclm}(T,L_{1},L_{2})={\rm lclm}(T,L).$
Taking the order of the operators in the above equality yields that
$\displaystyle{\rm ord}({\rm lclm}(T,L))$ $\displaystyle={\rm ord}({\rm
lclm}(N,L_{2}))={\rm ord}(N)+{\rm ord}(L_{2})$ $\displaystyle={\rm
ord}(T)+{\rm ord}(L_{1})+{\rm ord}(L_{2}).$
On the other hand, we have
${\rm ord}({\rm lclm}(T,L))\leq{\rm ord}(T)+{\rm ord}(L)\leq{\rm ord}(T)+{\rm
ord}(L_{1})+{\rm ord}(L_{2}).$
This implies that
${\rm ord}({\rm lclm}(T,L))={\rm ord}(T)+{\rm ord}(L).$
So ${\rm gcrd}(T,L)=1$ and then $L$ is a required operator. This proves the
claim. Now let $L_{{\mathbf{c}}}$ be a right-hand factor of $\tilde{L}$
satisfying that ${\rm gcrd}(P_{\mathbf{x}={\mathbf{c}}},L_{\mathbf{c}})=1$ and
${\rm lclm}(P_{\mathbf{x}={\mathbf{c}}},L_{\mathbf{c}})=\tilde{L}$. Let $M\in
k(t)\langle{\partial}_{t}\rangle$ be such that $\tilde{L}=ML_{\mathbf{c}}$.
Then $P_{\mathbf{x}=\mathbf{c}}$ is similar to $M$. It remains to show that
$P$ is also similar to $M$. Due to Lemma 24, ${\rm gcrd}(P,L_{\mathbf{c}})=1$.
Then
${\rm ord}({\rm lclm}(P,L_{\mathbf{c}}))={\rm ord}(P)+{\rm
ord}(L_{\mathbf{c}})={\rm ord}(P_{\mathbf{x}=\mathbf{c}})+{\rm
ord}(L_{\mathbf{c}})={\rm ord}(\tilde{L}).$
Note that ${\rm lclm}(P,L_{\mathbf{c}})$ is a right-hand factor of
$\tilde{L}$. Hence ${\rm lclm}(P,L_{\mathbf{c}})=\tilde{L}$ and thus $P$ is
similar to $M$.
For the general case, the above lemma is no longer true, as the following
example shows.
###### Example 27
Let $y=x_{1}\log(t+1)+x_{2}\log(t-1)$ and
$P={\partial}_{t}^{2}+\frac{(t-1)^{2}x_{1}+(t+1)^{2}x_{2}}{(t^{2}-1)((t-1)x_{1}+(t+1)x_{2})}{\partial}_{t}.$
Then $P$ is $(\mathbf{x},t)$-separable since $\\{1,y\\}$ is a basis of the solution
space of $P=0$ in ${\cal U}$. We claim that $P$ is not similar to
$P_{\mathbf{x}=\mathbf{c}}$ for any $\mathbf{c}\in k^{2}\setminus\\{(0,0)\\}$.
Suppose on the contrary that $P$ is similar to $P_{\mathbf{x}=\mathbf{c}}$ for
some $\mathbf{c}=(c_{1},c_{2})\in k^{2}\setminus\\{(0,0)\\}$, i.e. there are
$a,b\in k(\mathbf{x},t)$, not all zero, such that ${\rm
gcrd}(a{\partial}_{t}+b,P_{\mathbf{x}=\mathbf{c}})=1$ and $P$ is the
transformation of $P_{\mathbf{x}=\mathbf{c}}$ by $a{\partial}_{t}+b$. Denote
$Q=a{\partial}_{t}+b$. As $\\{1,y_{\mathbf{x}=\mathbf{c}}\\}$ is a basis of
the solution space of $P_{\mathbf{x}=\mathbf{c}}$,
$\\{Q(1),Q(y_{\mathbf{x}=\mathbf{c}})\\}$ is a basis of the solution space of
$P=0$. In other words, there is $C\in{\rm GL}_{2}(C_{t})$ such that
$\left(b,a\left(\frac{c_{1}}{t+1}+\frac{c_{2}}{t-1}\right)+by_{\mathbf{x}=\mathbf{c}}\right)=(1,y)C.$
Note that $\log(t+1),\log(t-1),1$ are linearly independent over
$k(x_{1},x_{2},t)$. We have that $b\in C_{t}\setminus\\{0\\}$ and
$bc_{1}=\tilde{c}x_{1},bc_{2}=\tilde{c}x_{2}$ for some $\tilde{c}\in C_{t}$.
This implies that $x_{1}/x_{2}=c_{1}/c_{2}\in k$, a contradiction.
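Both stated annihilations can be confirmed mechanically; the following sympy check (ours) verifies that $P(1)=P(y)=0$.

```python
import sympy as sp

t, x1, x2 = sp.symbols('t x1 x2')
y = x1 * sp.log(t + 1) + x2 * sp.log(t - 1)
a = ((t - 1)**2 * x1 + (t + 1)**2 * x2) / \
    ((t**2 - 1) * ((t - 1) * x1 + (t + 1) * x2))

def P(g):
    return sp.diff(g, t, 2) + a * sp.diff(g, t)

assert sp.simplify(P(sp.S.One)) == 0
assert sp.simplify(P(y)) == 0
```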
When the given two operators are of length two, i.e. they are the products of
two irreducible operators, a criterion for the similarity is presented in
[23]. For the general case, suppose that $P$ is similar to
$P_{\mathbf{x}={\mathbf{c}}}$ by $Q$. Then the operator $Q$ is a solution of
the following mixed differential equation
$Pz\equiv 0\mod P_{\mathbf{x}={\mathbf{c}}}.$ (10)
An algorithm for computing all solutions of the above mixed differential
equation is developed in [32]. In the following, we shall show that if $P$ is
$(\mathbf{x},t)$-separable then $Q$ is an operator with semisplit
coefficients. Note that $Q$ can be chosen to be of order less than ${\rm
ord}(P_{\mathbf{x}={\mathbf{c}}})$ and all solutions of the mixed differential
equation with order less than ${\rm ord}(P_{\mathbf{x}={\mathbf{c}}})$ form a
vector space over $k(\mathbf{x})$ of finite dimension. Furthermore $Q$ induces
an isomorphism from the solution space of $P_{\mathbf{x}={\mathbf{c}}}(y)=0$
to that of $P(y)=0$.
###### Proposition 28
Assume that $P$ is monic and completely reducible. Assume further that $P\in
k(t)[\mathbf{x},1/r]\langle{\partial}_{t}\rangle$ with $r\in k[\mathbf{x},t]$.
Let ${\mathbf{c}}\in k^{m}$ be such that $r({\mathbf{c}})\neq 0$. Then $P$ is
$(\mathbf{x},t)$-separable if and only if $P$ is similar to
$P_{\mathbf{x}={\mathbf{c}}}$ by an operator $Q$ with semisplit coefficients.
* Proof.
Denote $n={\rm ord}(P_{\mathbf{x}={\mathbf{c}}})={\rm ord}(P)$. Assume that
$\\{\alpha_{1},\cdots,\alpha_{n}\\}$ is a basis of the solution space of
$P_{\mathbf{x}=\mathbf{c}}(y)=0$ in $C_{\mathbf{x}}$ and $P$ is similar to
$P_{\mathbf{x}={\mathbf{c}}}$ by $Q$. Write
$Q=\sum_{i=0}^{n-1}a_{i}{\partial}_{t}^{i}$ where $a_{i}\in K$. Then
$\left(Q(\alpha_{1}),\dots,Q(\alpha_{n})\right)=(a_{0},\dots,a_{n-1})\begin{pmatrix}\alpha_{1}&\alpha_{2}&\dots&\alpha_{n}\\\
\alpha_{1}^{\prime}&\alpha_{2}^{\prime}&\dots&\alpha_{n}^{\prime}\\\
\vdots&\vdots&&\vdots\\\
\alpha_{1}^{(n-1)}&\alpha_{2}^{(n-1)}&\dots&\alpha_{n}^{(n-1)}\end{pmatrix}$
and $Q(\alpha_{1}),\dots,Q(\alpha_{n})$ form a basis of the solution space of
$P(y)=0$.
Now suppose that $P$ is $(\mathbf{x},t)$-separable. Due to Lemma 26, $P$ is
similar to $P_{\mathbf{x}={\mathbf{c}}}$ by $Q$. By Corollary 18, the
$Q(\alpha_{i})$ are semisplit. The above equalities then imply that the
$a_{i}$ are semisplit. Conversely, assume that $P$ is similar to
$P_{\mathbf{x}={\mathbf{c}}}$ by $Q$ and the $a_{i}$ are semisplit. It is easy
to see the $Q(\alpha_{i})$ are semisplit. By Corollary 18 again, $P$ is
$(\mathbf{x},t)$-separable.
Using the algorithm developed in [32], we can compute a basis of the solution
space over $k(\mathbf{x})$ of the equation (10). It is clear that the
solutions with semisplit entries form a subspace. We can compute a basis for
this subspace as follows. Suppose that $\\{Q_{1},\dots,Q_{\ell}\\}$ is a basis
of the solution space of the equation (10) consisting of solutions with order
less than ${\rm ord}(P_{\mathbf{x}={\mathbf{c}}})$. We may identify $Q_{i}$
with a vector ${\mathbf{g}}_{i}\in K^{n}$ under the basis
$1,{\partial}_{t},\dots,{\partial}_{t}^{n-1}$. Let $q\in k(\mathbf{x})[t]$ be
a common denominator of all entries of the ${\mathbf{g}}_{i}$. Write
${\mathbf{g}}_{i}={\mathbf{p}}_{i}/q$ for each $i=1,\dots,\ell$, where
${\mathbf{p}}_{i}\in k(\mathbf{x})[t]^{n}$. Write $q=q_{1}q_{2}$ where $q_{2}$
is split but $q_{1}$ is not. Note that a rational function in $t$ with
coefficients in $k(\mathbf{x})$ is semisplit if and only if its denominator is
split. For $c_{1},\dots,c_{\ell}\in k(\mathbf{x})$,
$\sum_{i=1}^{\ell}c_{i}{\mathbf{g}}_{i}$ is semisplit if and only if all
entries of $\sum_{i=1}^{\ell}c_{i}{\mathbf{p}}_{i}$ are divisible by $q_{1}$.
For $i=1,\dots,\ell$, let ${\mathbf{h}}_{i}$ be the vector whose entries are
the remainders of the corresponding entries of ${\mathbf{p}}_{i}$ modulo $q_{1}$.
Then all entries of $\sum_{i=1}^{\ell}c_{i}{\mathbf{p}}_{i}$ are divisible by
$q_{1}$ if and only if $\sum_{i=1}^{\ell}c_{i}{\mathbf{h}}_{i}=0$. Let
${\mathbf{c}}_{1},\dots,{\mathbf{c}}_{s}$ be a basis of the solution space of
$\sum_{i=1}^{\ell}z_{i}{\mathbf{h}}_{i}=0$. Then
$\\{(Q_{1},\dots,Q_{\ell}){\mathbf{c}}_{i}\mid i=1,\dots,s\\}$ is the required
basis. Consequently, the required basis can be computed by solving the system
of linear equations $\sum_{i=1}^{\ell}z_{i}{\mathbf{h}}_{i}=0$.
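The decomposition $q=q_{1}q_{2}$ used above can be read off from a factorization of $q$; the sketch below (our illustration of this step, in the bivariate case $\mathbf{x}=(x)$) collects the factors involving only $x$ or only $t$ into the split part $q_{2}$ and the remaining factors into $q_{1}$.

```python
import sympy as sp

x, t = sp.symbols('x t')

def split_decomposition(q):
    """Write q = q1*q2 where q2 is a product of factors free of x or
    free of t (hence split) and q1 collects the non-split factors."""
    coeff, factors = sp.factor_list(q, x, t)
    q1, q2 = sp.S.One, coeff
    for p, e in factors:
        if x not in p.free_symbols or t not in p.free_symbols:
            q2 *= p**e
        else:
            q1 *= p**e
    return q1, q2

q = (x * t + 1)**2 * (t - 1) * (x + 2)
print(split_decomposition(q))  # ((x*t + 1)**2, (t - 1)*(x + 2))
```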
In the following, to ease notation, we assume that
$\\{Q_{1},\dots,Q_{\ell}\\}$ is a basis of the solution space of the equation
(10) consisting of solutions with semisplit coefficients. By Proposition 28
and the definition of similarity, $P$ is $(\mathbf{x},t)$-separable if and
only if there is a nonzero $\tilde{Q}$ in the space spanned by
$Q_{1},\dots,Q_{\ell}$ such that ${\rm
gcrd}(P_{\mathbf{x}={\mathbf{c}}},\tilde{Q})=1$. Note that $\tilde{Q}$ induces
a homomorphism from the solution space of $P_{\mathbf{x}={\mathbf{c}}}(y)=0$
to that of $P(y)=0$. Moreover, one can easily see that ${\rm
gcrd}(P_{\mathbf{x}={\mathbf{c}}},\tilde{Q})=1$ if and only if $\tilde{Q}$ is
an isomorphism i.e. $\tilde{Q}(\alpha_{1}),\dots,\tilde{Q}(\alpha_{n})$ form a
basis of the solution space of $P(y)=0$ where
$\\{\alpha_{1},\dots,\alpha_{n}\\}$ is a basis of the solution space of
$P_{\mathbf{x}={\mathbf{c}}}(y)=0$. Assume that
$\tilde{Q}=\sum_{i=0}^{n-1}a_{0,i}{\partial}_{t}^{i}$ with $a_{0,i}\in K$.
Using the relation $P_{\mathbf{x}={\mathbf{c}}}(\alpha_{j})=0$ with
$j=1,\dots,n$, one has that for all $j=1,\dots,n$
$\tilde{Q}(\alpha_{j})^{\prime}=\left(\sum_{i=0}^{n-1}a_{0,i}\alpha_{j}^{(i)}\right)^{\prime}=\sum_{i=0}^{n-1}a_{1,i}\alpha_{j}^{(i)}$
for some $a_{1,i}\in K$. Repeating this process, we can compute $a_{l,i}\in K$
such that for all $j=1,\dots,n$ and $l=1,\dots,n-1$,
$\tilde{Q}(\alpha_{j})^{(l)}=\sum_{i=0}^{n-1}a_{l,i}\alpha_{j}^{(i)}.$
Now suppose that $\tilde{Q}=\sum_{i=1}^{\ell}z_{i}Q_{i}$ with $z_{i}\in
k(\mathbf{x})$. One sees that the $a_{l,i}$ are linear in
$z_{1},\dots,z_{\ell}$. Set $A({\mathbf{z}})=(a_{i,j})_{0\leq i,j\leq n-1}$
with ${\mathbf{z}}=(z_{1},\dots,z_{\ell})$. Then one has that
$A({\mathbf{z}})\begin{pmatrix}\alpha_{1}&\dots&\alpha_{n}\\\
\vdots&&\vdots\\\
\alpha_{1}^{(n-1)}&\dots&\alpha_{n}^{(n-1)}\end{pmatrix}=\begin{pmatrix}\tilde{Q}(\alpha_{1})&\dots&\tilde{Q}(\alpha_{n})\\\
\vdots&&\vdots\\\
\tilde{Q}(\alpha_{1})^{(n-1)}&\dots&\tilde{Q}(\alpha_{n})^{(n-1)}\end{pmatrix}.$
(11)
It is well-known that $\tilde{Q}(\alpha_{1}),\dots,\tilde{Q}(\alpha_{n})$ form
a basis if and only if the right-hand side of the above equality is a
nonsingular matrix and thus if and only if $A({\mathbf{z}})$ is nonsingular.
In the sequel, one can reduce the problem of the existence of $\tilde{Q}$
satisfying ${\rm gcrd}(\tilde{Q},P_{\mathbf{x}={\mathbf{c}}})=1$ to the
problem of the existence of ${\mathbf{a}}\in k(\mathbf{x})^{\ell}$ such that
$\det(A({\mathbf{a}}))\neq 0$.
Suppose now that we already have an operator $Q$ with semisplit coefficients
such that $P$ is similar to $P_{\mathbf{x}={\mathbf{c}}}$ by $Q$. Write
$Q=\sum_{i=0}^{n-1}b_{i}{\partial}_{t}^{i}$ where $b_{i}\in K$ is semisplit.
Write further $b_{i}=\sum_{j=1}^{s}h_{i,j}\beta_{j}$ where $h_{i,j}\in
k(\mathbf{x})$ and $\beta_{j}\in k(t)\setminus\\{0\\}$. Let
$L_{0}=P_{\mathbf{x}=\mathbf{c}}$ and let $L_{i}$ be the transformation of
$L_{i-1}$ by ${\partial}_{t}$ for $i=1,\cdots,n-1$. Then $L_{i}$ annihilates
$\alpha_{j}^{(i)}$ for all $j=1,\cdots,n$ and $L_{i}\frac{1}{\beta_{l}}$
annihilates $\beta_{l}\alpha_{j}^{(i)}$ for all $l=1,\dots,s$ and
$j=1,\dots,n$. Set
$L={\rm lclm}\left(\left\\{L_{i}\frac{1}{\beta_{l}}\mid
i=0,\dots,n-1,l=1,\dots,s\right\\}\right).$
Then $L$ annihilates all $Q(\alpha_{i})$ and thus has $P$ as a right-hand
factor. We summarize the previous discussion as the following algorithm.
###### Algorithm 29
Input: $P\in K\langle{\partial}_{t}\rangle$ that is monic and completely
reducible.
Output: a nonzero $L\in k(t)\langle{\partial}_{t}\rangle$ which is divisible by
$P$ on the right if it exists, otherwise 0.
1. 1.
Write
$P={\partial}_{t}^{n}+\sum_{i=0}^{n-1}\frac{a_{i}}{r}{\partial}_{t}^{i}$
where $a_{i}\in k(t)[\mathbf{x}],r\in k[\mathbf{x},t]$.
2. 2.
Pick $\mathbf{c}\in k^{m}$ such that $r(\mathbf{c})\neq 0$. By the algorithm
in [32], compute a basis of the solution space $V$ of the equation (10).
3. 3.
Compute a basis of the subspace of $V$ consisting of operators with semisplit
coefficients, say $Q_{1},\cdots,Q_{\ell}$.
4. 4.
Set $\tilde{Q}=\sum_{i=1}^{\ell}z_{i}Q_{i}$ and using $\tilde{Q}$, compute the
matrix $A({\mathbf{z}})$ as in (11).
5. 5.
If $\det(A({\mathbf{z}}))=0$ (identically as a polynomial in ${\mathbf{z}}$),
then return 0 and the algorithm terminates. Otherwise compute
${\mathbf{a}}=(a_{1},\dots,a_{\ell})\in k^{\ell}$ such that
$\det(A({\mathbf{a}}))\neq 0$.
6. 6.
Set $b_{i}$ to be the coefficient of ${\partial}_{t}^{i}$ in
$\sum_{j=1}^{\ell}a_{j}Q_{j}$ and write $b_{i}=\sum_{j=1}^{s}h_{i,j}\beta_{j}$
where $h_{i,j}\in k(\mathbf{x})$ and $\beta_{j}\in k(t)$. Let
$L_{0}=P_{\mathbf{x}=\mathbf{c}}$ and for each $i=1,\cdots,n-1$ compute
$L_{i}$, the transformation of $L_{i-1}$ by ${\partial}_{t}$.
7. 7.
Return ${\rm lclm}\left(\left\\{L_{i}\frac{1}{\beta_{j}}\mid
i=0,\dots,n-1,j=1,\dots,s\right\\}\right)$.
### 4.2 The general case
Assume that $P$ is $(\mathbf{x},t)$-separable and $P=Q_{1}Q_{2}$ where
$Q_{1},Q_{2}\in K\langle{\partial}_{t}\rangle$. It is clear that $Q_{2}$ is
also $(\mathbf{x},t)$-separable. One may wonder whether $Q_{1}$ is also
$(\mathbf{x},t)$-separable. The following example shows that $Q_{1}$ may not
be $(\mathbf{x},t)$-separable.
###### Example 30
Let $K=k(x,t)$ and let $P={\partial}_{t}^{2}$. Then $P$ is
$(\mathbf{x},t)$-separable and
${\partial}_{t}^{2}=\left({\partial}_{t}+\frac{x}{xt+1}\right)\left({\partial}_{t}-\frac{x}{xt+1}\right).$
The operator ${\partial}_{t}+x/(xt+1)$ is not $(\mathbf{x},t)$-separable,
because $1/(xt+1)$ is one of its solutions and it is not semisplit.
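Both claims in this example are easy to verify with sympy (our check): applying the right-hand side twice to a generic $f(t)$ recovers $f''$, and $1/(xt+1)$ is indeed annihilated by ${\partial}_{t}+x/(xt+1)$.

```python
import sympy as sp

t, x = sp.symbols('t x')
f = sp.Function('f')(t)
a = x / (x * t + 1)

inner = sp.diff(f, t) - a * f          # (d_t - a)(f)
outer = sp.diff(inner, t) + a * inner  # (d_t + a)(d_t - a)(f)
assert sp.simplify(outer - sp.diff(f, t, 2)) == 0

y = 1 / (x * t + 1)   # a non-semisplit solution of (d_t + a)(y) = 0
assert sp.simplify(sp.diff(y, t) + a * y) == 0
```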
However, the lemma below shows that if $Q_{2}$ is semisplit, then $Q_{1}$ is also
$(\mathbf{x},t)$-separable.
###### Lemma 31
* $(1)$
Assume that $Q_{1},Q_{2}\in K\langle{\partial}_{t}\rangle\setminus\\{0\\}$,
and $Q_{2}$ is semisplit. Then $Q_{1}Q_{2}$ is $(\mathbf{x},t)$-separable if
and only if both $Q_{1}$ and $Q_{2}$ are $(\mathbf{x},t)$-separable.
* $(2)$
Assume that $P\in K\langle{\partial}_{t}\rangle\setminus\\{0\\}$ and $L$ is a
nonzero monic operator in $k(t)\langle{\partial}_{t}\rangle$. Then $P$ is
$(\mathbf{x},t)$-separable if and only if so is the transformation of $P$ by
$L$.
* Proof.
Note that the solution space of ${\rm lclm}(P_{1},P_{2})=0$ is spanned by
those of $P_{1}=0$ and $P_{2}=0$. Hence ${\rm lclm}(P_{1},P_{2})$ is
$(\mathbf{x},t)$-separable if and only if so are both $P_{1}$ and $P_{2}$.
$(1)$ For the “only if” part, one only needs to prove that $Q_{1}$ is
$(\mathbf{x},t)$-separable. Assume that $g$ is a solution of $Q_{1}=0$ in
${\cal U}$. Let $f$ be a solution of $Q_{2}(y)=g$ in ${\cal U}$. Such $f$
exists because ${\cal U}$ is the universal differential extension of $K$. Then
$f$ is a solution of $Q_{1}Q_{2}=0$ in ${\cal U}$. By Corollary 18, $f$ is
semisplit. Since $Q_{2}$ is semisplit, one sees that $g=Q_{2}(f)$ is
semisplit. By Corollary 18 again, $Q_{1}$ is $(\mathbf{x},t)$-separable.
Now assume that both $Q_{1}$ and $Q_{2}$ are $(\mathbf{x},t)$-separable. Let
$\tilde{Q}\in K\langle{\partial}_{t}\rangle$ be such that $\tilde{Q}Q_{2}=L$
where $L\in k(t)\langle{\partial}_{t}\rangle$ is monic. By Corollary 21 and
the “only if” part, $\tilde{Q}$ is semisplit and $(\mathbf{x},t)$-separable.
Thus ${\rm lclm}(Q_{1},\tilde{Q})$ is $(\mathbf{x},t)$-separable. Assume that
${\rm lclm}(Q_{1},\tilde{Q})=N\tilde{Q}$ with $N\in
K\langle{\partial}_{t}\rangle$. Since $\tilde{Q}$ is semisplit, by the “only
if” part again, $N$ is $(\mathbf{x},t)$-separable. Let $M\in
K\langle{\partial}_{t}\rangle$ be such that $MN$ is a nonzero operator in
$k(t)\langle{\partial}_{t}\rangle$. We have that
$M{\rm lclm}(Q_{1},\tilde{Q})Q_{2}=MN\tilde{Q}Q_{2}=MNL\in
k(t)\langle{\partial}_{t}\rangle.$
On the other hand, $M{\rm lclm}(Q_{1},\tilde{Q})Q_{2}=M\tilde{M}Q_{1}Q_{2}$
for some $\tilde{M}\in K\langle{\partial}_{t}\rangle$. Hence $P=Q_{1}Q_{2}$ is
$(\mathbf{x},t)$-separable.
$(2)$ Since $L$ is $(\mathbf{x},t)$-separable, we have that $P$ is
$(\mathbf{x},t)$-separable if and only if so is ${\rm lclm}(P,L)$. Let
$\tilde{P}$ be the transformation of $P$ by $L$. Then $\tilde{P}L={\rm
lclm}(P,L)$. As $L$ is semisplit, the assertion then follows from $(1)$.
Assume that $P$ is a nonzero operator in $K\langle{\partial}_{t}\rangle$. Let
$P_{0}$ be an irreducible right-hand factor of $P$. By Algorithm 29, we can
decide whether $P_{0}$ is $(\mathbf{x},t)$-separable or not. Now assume that
$P_{0}$ is $(\mathbf{x},t)$-separable. Then we can compute a nonzero monic
operator $L_{0}\in k(t)\langle{\partial}_{t}\rangle$ having $P_{0}$ as a
right-hand factor. Let $P_{1}$ be the transformation of $P$ by $L_{0}$. Lemma
31 implies that $P$ is $(\mathbf{x},t)$-separable if and only if so is
$P_{1}$. Note that
$\displaystyle{\rm ord}(P_{1})$ $\displaystyle={\rm ord}({\rm
lclm}(P,L_{0}))-{\rm ord}(L_{0})$ $\displaystyle\leq{\rm ord}(P)+{\rm
ord}(L_{0})-{\rm ord}(P_{0})-{\rm ord}(L_{0})={\rm ord}(P)-{\rm ord}(P_{0}).$
In other words, ${\rm ord}(P_{1})<{\rm ord}(P)$. Replacing $P$ by $P_{1}$ and
repeating the above process yield an algorithm to decide whether $P$ is
$(\mathbf{x},t)$-separable.
###### Algorithm 32
Input: a nonzero monic $P\in K\langle{\partial}_{t}\rangle$.
Output: a nonzero $L\in k(t)\langle{\partial}_{t}\rangle$ which is divisible by
$P$ on the right if it exists, otherwise 0.
1. 1.
If $P=1$ then return 1 and the algorithm terminates.
2. 2.
Compute an irreducible right-hand factor $P_{0}$ of $P$ by algorithms
developed in [4, 31, 33].
3. 3.
Apply Algorithm 29 to $P_{0}$ and let $L_{0}$ be the output.
4. 4.
If $L_{0}=0$ then return 0 and the algorithm terminates. Otherwise compute the
transformation of $P$ by $L_{0}$, denoted by $P_{1}$.
5. 5.
Apply Algorithm 32 to $P_{1}$ and let $L_{1}$ be the output.
6. 6.
Return $L_{1}L_{0}$.
The termination of the algorithm is obvious. Assume that $L_{1}\neq 0$. Then
$L_{1}=Q_{1}P_{1}$ for some $Q_{1}\in K\langle{\partial}_{t}\rangle$. We have
that $P_{1}L_{0}={\rm lclm}(P,L_{0})$. Therefore
$L_{1}L_{0}=Q_{1}P_{1}L_{0}=Q_{1}{\rm lclm}(P,L_{0})=Q_{1}Q_{0}P$
for some $Q_{0}\in K\langle{\partial}_{t}\rangle$. This proves the correctness
of the algorithm.
## References
* [1] S. A. Abramov. When does Zeilberger’s algorithm succeed? Adv. in Appl. Math., 30(3):424–441, 2003.
* [2] S. A. Abramov and H. Q. Le. A criterion for the applicability of Zeilberger’s algorithm to rational functions. Discrete Math., 259(1-3):1–17, 2002.
* [3] G. Almkvist and D. Zeilberger. The method of differentiating under the integral sign. J. Symbolic Comput., 10(6):571–591, 1990.
* [4] E. Beke. Die Irreducibilität der homogenen linearen Differentialgleichungen. Math. Ann., 45(2):278–294, 1894.
* [5] J.-E. Björk. Rings of differential operators, volume 21 of North-Holland Mathematical Library. North-Holland Publishing Co., Amsterdam-New York, 1979.
* [6] A. Bostan, S. Chen, F. Chyzak, and Z. Li. Complexity of creative telescoping for bivariate rational functions. In ISSAC 2010—Proceedings of the 2010 International Symposium on Symbolic and Algebraic Computation, pages 203–210. ACM, New York, 2010.
* [7] A. Bostan, S. Chen, F. Chyzak, Z. Li, and G. Xin. Hermite reduction and creative telescoping for hyperexponential functions. In ISSAC 2013—Proceedings of the 38th International Symposium on Symbolic and Algebraic Computation, pages 77–84. ACM, New York, 2013.
* [8] A. Bostan, F. Chyzak, P. Lairez, and B. Salvy. Generalized Hermite reduction, creative telescoping and definite integration of D-finite functions. In ISSAC’18—Proceedings of the 2018 ACM International Symposium on Symbolic and Algebraic Computation, pages 95–102. ACM, New York, 2018.
* [9] A. Bostan, P. Lairez, and B. Salvy. Creative telescoping for rational functions using the Griffiths-Dwork method. In ISSAC 2013—Proceedings of the 38th International Symposium on Symbolic and Algebraic Computation, pages 93–100. ACM, New York, 2013.
* [10] S. Chen, F. Chyzak, R. Feng, G. Fu, and Z. Li. On the existence of telescopers for mixed hypergeometric terms. J. Symbolic Comput., 68(part 1):1–26, 2015.
* [11] S. Chen, R. Feng, Z. Li, and M. F. Singer. Parallel telescoping and parameterized Picard-Vessiot theory. In ISSAC 2014—Proceedings of the 39th International Symposium on Symbolic and Algebraic Computation, pages 99–106. ACM, New York, 2014.
* [12] S. Chen, Q.-H. Hou, G. Labahn, and R.-H. Wang. Existence problem of telescopers: beyond the bivariate case. In Proceedings of the 2016 ACM International Symposium on Symbolic and Algebraic Computation, pages 167–174. ACM, New York, 2016.
* [13] S. Chen and M. Kauers. Some open problems related to creative telescoping. J. Syst. Sci. Complex., 30(1):154–172, 2017.
* [14] S. Chen, M. Kauers, and C. Koutschan. Reduction-based creative telescoping for algebraic functions. In Proceedings of the 2016 ACM International Symposium on Symbolic and Algebraic Computation, pages 175–182. ACM, New York, 2016.
* [15] S. Chen, M. van Hoeij, M. Kauers, and C. Koutschan. Reduction-based creative telescoping for fuchsian D-finite functions. J. Symbolic Comput., 85:108–127, 2018.
* [16] W. Y. C. Chen, Q.-H. Hou, and Y.-P. Mu. Applicability of the $q$-analogue of Zeilberger’s algorithm. J. Symbolic Comput., 39(2):155–170, 2005.
* [17] L. Euler. Specimen de constructione aequationum differentialium sine indeterminatarum separatione. Commentarii academiae scientiarum Petropolitanae, 6:168–174, 1733.
* [18] E. R. Kolchin. Differential algebra and algebraic groups. Academic Press, New York-London, 1973. Pure and Applied Mathematics, Vol. 54.
* [19] C. Koutschan. Creative telescoping for holonomic functions. In Computer algebra in quantum field theory, Texts Monogr. Symbol. Comput., pages 171–194. Springer, Vienna, 2013.
* [20] P. Lairez. Computing periods of rational integrals. Math. Comp., 85(300):1719–1752, 2016.
* [21] S. Lang. Algebra, volume 211 of Graduate Texts in Mathematics. Springer-Verlag, New York, third edition, 2002.
* [22] S. Li, B. H. Lian, and S.-T. Yau. Picard-Fuchs equations for relative periods and Abel-Jacobi map for Calabi-Yau hypersurfaces. Amer. J. Math., 134(5):1345–1384, 2012.
* [23] Z. Li and H. Wang. A criterion for the similarity of length-two elements in a noncommutative PID. J. Syst. Sci. Complex., 24(3):580–592, 2011.
* [24] L. Lipshitz. The diagonal of a $D$-finite power series is $D$-finite. J. Algebra, 113(2):373–378, 1988.
* [25] D. R. Morrison and J. Walcher. D-branes and normal functions. Adv. Theor. Math. Phys., 13(2):553–598, 2009.
* [26] S. Müller-Stach, S. Weinzierl, and R. Zayadeh. Picard-Fuchs equations for Feynman integrals. Comm. Math. Phys., 326(1):237–249, 2014.
* [27] O. Ore. Theory of non-commutative polynomials. Ann. of Math. (2), 34(3):480–508, 1933.
* [28] M. Petkovšek, H. S. Wilf, and D. Zeilberger. $A=B$. A K Peters, Ltd., Wellesley, MA, 1996. With a foreword by Donald E. Knuth, With a separately available computer disk.
* [29] M. F. Singer. Introduction to the Galois theory of linear differential equations. In Algebraic theory of differential equations, volume 357 of London Math. Soc. Lecture Note Ser., pages 1–82. Cambridge Univ. Press, Cambridge, 2009.
* [30] A. van der Poorten. A proof that Euler missed$\ldots$Apéry’s proof of the irrationality of $\zeta(3)$. Math. Intelligencer, 1(4):195–203, 1978/79. An informal report.
* [31] M. van der Put and M. F. Singer. Galois theory of linear differential equations, volume 328 of Grundlehren der Mathematischen Wissenschaften [Fundamental Principles of Mathematical Sciences]. Springer-Verlag, Berlin, 2003.
* [32] M. van Hoeij. Rational solutions of the mixed differential equation and its application to factorization of differential operators. In E. Engeler, B. F. Caviness, and Y. N. Lakshman, editors, Proceedings of the 1996 International Symposium on Symbolic and Algebraic Computation, ISSAC ’96, Zurich, Switzerland, July 24-26, 1996, pages 219–225. ACM, 1996.
* [33] M. van Hoeij. Factorization of differential operators with rational functions coefficients. J. Symbolic Comput., 24(5):537–561, 1997.
* [34] S. H. Weintraub. Differential Forms: Theory and Practice. Elsevier/Academic Press, Amsterdam, second edition, 2014.
* [35] H. S. Wilf and D. Zeilberger. Rational functions certify combinatorial identities. J. Amer. Math. Soc., 3(1):147–158, 1990.
* [36] H. S. Wilf and D. Zeilberger. An algorithmic proof theory for hypergeometric (ordinary and “$q$”) multisum/integral identities. Invent. Math., 108(3):575–633, 1992.
* [37] H. S. Wilf and D. Zeilberger. Rational function certification of multisum/integral/“$q$” identities. Bull. Amer. Math. Soc. (N.S.), 27(1):148–153, 1992.
* [38] D. Zeilberger. The method of creative telescoping. J. Symbolic Comput., 11(3):195–204, 1991.
# On Chevalley restriction theorem for semi-reductive algebraic groups and its
applications
Ke Ou, Bin Shu and Yu-Feng Yao Department of Statistics and Mathematics,
Yunnan University of Finance and Economics, Kunming, 650221, China.
<EMAIL_ADDRESS>School of Mathematical Sciences, East China Normal
University, Shanghai, 200241, China<EMAIL_ADDRESS>Department of
Mathematics, Shanghai Maritime University, Shanghai, 201306, China.
<EMAIL_ADDRESS>
###### Abstract.
An algebraic group is called semi-reductive if it is a semi-direct product of
a reductive subgroup and the unipotent radical. Such a semi-reductive
algebraic group naturally arises and also plays a key role in the study of
modular representations of non-classical finite-dimensional simple Lie
algebras in positive characteristic, and some other cases. Let $G$ be a
connected semi-reductive algebraic group over an algebraically closed field
$\mathbb{F}$ and $\mathfrak{g}=\mathsf{Lie}(G)$. It turns out that $G$ shares
many properties with reductive groups, such as the Bruhat decomposition. In
this note, we obtain an analogue of classical Chevalley restriction theorem
for $\mathfrak{g}$, which says that the $G$-invariant ring
$\mathbb{F}[\mathfrak{g}]^{G}$ is a polynomial ring if $\mathfrak{g}$
satisfies a certain “positivity” condition, which covers many of the cases we
are interested in. As applications, we further investigate the nilpotent cones and
resolutions of singularities for semi-reductive Lie algebras.
###### Key words and phrases:
Semi-reductive algebraic groups, Semi-reductive Lie algebras, Chevalley
restriction theorem, Nilpotent cone, Steinberg map, Springer resolution
###### 2010 Mathematics Subject Classification:
20G05, 20G07, 20G15, 17B10, 17B45
This work is partially supported by the National Natural Science Foundation of
China (Grant Nos. 12071136, 11671138 and 11771279), Shanghai Key Laboratory of
PMMP (No. 13dz2260400), the Fundamental Research Funds of Yunnan Province
(Grant No. 2020J0375) and the Fundamental Research Funds of YNUFE (Grant No.
80059900196).
## 1\. Introduction
Let $\mathbb{F}$ be an algebraically closed field. An algebraic group $G$ over
$\mathbb{F}$ is called semi-reductive if $G=G_{0}\ltimes U$ with $G_{0}$ being
a reductive subgroup, and $U$ the unipotent radical. Let
$\mathfrak{g}=\mathsf{Lie}(G)$, $\mathfrak{g}_{0}=\mathsf{Lie}(G_{0})$ and
$\mathfrak{u}=\mathsf{Lie}(U)$, then
$\mathfrak{g}=\mathfrak{g}_{0}\oplus\mathfrak{u}$, which is called a semi-
reductive Lie algebra.
The original motivation for studying semi-reductive algebraic groups and Lie
algebras comes from the study of irreducible representations of
finite-dimensional non-classical simple Lie algebras in prime characteristic
(see Example 2.1). In this new direction of study, some interesting
mathematical phenomena have already been revealed (see [13], [14], [19], [22]).
In the present paper, we focus on establishing an analogue of the classical
Chevalley restriction theorem in the case of semi-reductive algebraic groups.
Recall that for a connected reductive algebraic group $L$ over $\mathbb{F}$
with $\mathfrak{l}=\mathsf{Lie}(L)$, and a maximal torus $T$ of $L$ with
$\mathfrak{h}=\mathsf{Lie}(T)$, one has the Weyl group
$\mathscr{W}=\operatorname{Nor}_{L}(T)/\operatorname{Cent}_{L}(T)$. The
classical Chevalley restriction theorem asserts that the restriction
homomorphism $\mathbb{F}[\mathfrak{l}]\rightarrow\mathbb{F}[\mathfrak{h}]$
induces an isomorphism of algebras of invariants
$\mathbb{F}[\mathfrak{l}]^{{L}}\cong\mathbb{F}[\mathfrak{h}]^{\mathscr{W}},$
where $\mathbb{F}[\mathfrak{l}]$ (resp. $\mathbb{F}[\mathfrak{h}]$) denotes
the algebra of polynomial functions on $\mathfrak{l}$ (resp. $\mathfrak{h}$)
(see [5, Proposition 7.12] if $\operatorname{ch}(\mathbb{F})>0$ and [4, §23.1
and Appendix 23] for semisimple $\mathfrak{l}$ if
$\operatorname{ch}(\mathbb{F})=0$). The Chevalley restriction theorem plays a
key role in the study of representations of reductive Lie algebras. It has
been generalized in various ways and cases (see [8], [11] etc.). However, it
may fail if $\mathfrak{l}$ is replaced with an arbitrary algebraic Lie algebra
$\mathfrak{q}.$ In particular, one has to distinguish the adjoint and
coadjoint representations of $\mathfrak{q}.$ See §4 for a concrete example.
In this note, we will generalize the Chevalley restriction theorem to the case
of a semi-reductive algebraic group $G$ satisfying a certain “positivity”
condition (see Convention 2.9 for the precise definition).
The generalization of the Chevalley restriction theorem seems to be quite
useful in understanding the structure of semi-reductive Lie algebras. As
applications, we investigate the nilpotent cones and resolutions of
singularities for semi-reductive Lie algebras. In particular, Propositions
5.2, 5.3 and 5.9 provide the structure of the nilpotent cone $\mathcal{N}$,
the Steinberg map and a resolution of singularities of $\mathcal{N}$, respectively.
Our paper is organized as follows. In §2, we recall some known results and
investigate the structure of semi-reductive algebraic groups as well as semi-
reductive Lie algebras. In §3, the Chevalley restriction theorem for a semi-
reductive Lie algebra $\mathfrak{g}$ is presented, provided $\mathfrak{g}$
satisfies a certain “positivity” condition together with some restrictions on the
characteristic of $\mathbb{F}.$ As a first application, we illustrate the
difference between the adjoint and coadjoint representations of $\mathfrak{g}$
in §4. As a second application, we study the nilpotent cone of
$\mathfrak{g}$ and its Springer resolution in the final section.
## 2\. Semi-reductive groups and semi-reductive Lie algebras
### 2.1. Notions and assumptions
Throughout this paper, all vector spaces and varieties are defined over an
algebraically closed field $\mathbb{F}$.
Let $G=G_{0}\ltimes U$ be a semi-reductive group with $G_{0}$ being a reductive
subgroup, and $U$ the unipotent radical. Let $\mathfrak{g}=\mathsf{Lie}(G)$,
$\mathfrak{g}_{0}=\mathsf{Lie}(G_{0})$ and $\mathfrak{u}=\mathsf{Lie}(U)$,
then $\mathfrak{g}=\mathfrak{g}_{0}\oplus\mathfrak{u}$. Let
$\operatorname{pr}:\mathfrak{g}\rightarrow\mathfrak{g}_{0},\
\pi:\mathfrak{g}\rightarrow\mathfrak{u}$ be the projections. In the following,
we give some examples of semi-reductive Lie algebras (resp. semi-reductive
algebraic groups).
###### Example 2.1.
(The non-negative graded part of a restricted Lie algebra of Cartan type over
$\mathbb{F}$ of prime characteristic $p>0$) Let
$\mathscr{A}(n)=\mathbb{F}[T_{1},\ldots,T_{n}]/(T_{1}^{p},\ldots,T_{n}^{p})$,
a quotient of the polynomial ring by the ideal generated by $T_{i}^{p}$,
$1\leq i\leq n$. Set $\mathscr{L}=\text{Der}(\mathscr{A}(n))$. Then
$\mathscr{L}$ is a simple Lie algebra unless $n=1$ and $p=2$ (this simple Lie
algebra is usually called a generalized Witt algebra, denoted by $W(n)$).
Associated with the degrees of the truncated polynomials, $\mathscr{L}$ becomes a
graded algebra $\mathscr{L}=\sum_{d\geq-1}\mathscr{L}_{[d]}$. Then
$\mathscr{L}$ has a filtration
$\mathscr{L}=\mathscr{L}_{-1}\supset\mathscr{L}_{0}\supset\cdots\supset\mathscr{L}_{n(p-1)-1}\supset
0,$
where
$\mathscr{L}_{i}=\bigoplus_{j\geq i}\mathscr{L}_{[j]},\,\,-1\leq i\leq
n(p-1)-1.$
Let $G=\operatorname{Aut}(\mathscr{L})$ be the automorphism group of
$\mathscr{L}$ and $\mathfrak{g}=\mathsf{Lie}(G)$. Then $G=\text{\rm
GL}(n,\mathbb{F})\ltimes U$ and $\mathfrak{g}=\mathscr{L}_{0}$ with
$\mathsf{Lie}(\text{\rm
GL}(n,\mathbb{F}))=\mathscr{L}_{[0]}(\cong\mathfrak{gl}(n,\mathbb{F}))$,
$\mathsf{Lie}(U)=\mathscr{L}_{1}=\sum_{d\geq 1}\mathscr{L}_{[d]}$ (see [9,
23]). So $G$ is a semi-reductive group and $\mathscr{L}_{0}$ is a semi-
reductive Lie algebra. According to the known results (cf. [10], [16], [17],
[20], [24] and [25], etc.), the representation theory of $W(n)$, along with
that of the other Cartan type Lie algebras, is to a large extent determined by
that of its distinguished maximal filtered subalgebra $\mathscr{L}_{0}$. So the
study of the semi-reductive Lie algebra $\mathscr{L}_{0}$ becomes a crucially
important topic.
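For instance, when $n=1$ and $p>2$ (writing $x=x_{1}$ and $\partial=\partial_{1}$), $W(1)$ has basis $\\{x^{k}\partial\mid 0\leq k\leq p-1\\}$ with $W(1)_{[d]}=\mathbb{F}x^{d+1}\partial$, so that $\mathscr{L}_{0}=\text{span}\\{x\partial,x^{2}\partial,\cdots,x^{p-1}\partial\\}$ has reductive part $\mathscr{L}_{[0]}=\mathbb{F}x\partial\cong\mathfrak{gl}(1)$ and unipotent part $\mathscr{L}_{1}=\text{span}\\{x^{2}\partial,\cdots,x^{p-1}\partial\\}$.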
More generally, apart from $W(n)$, there are three other series of Cartan
type restricted simple Lie algebras $S(n),H(n),K(n)$ (see [15], [21], or the
forthcoming Example 2.12). Each of them is endowed with the corresponding
graded structure. Similarly, one can consider $\mathfrak{g}=\sum_{d\geq
0}X(n)_{[d]}$ for $X\in\\{S,H,K\\}$ and $G=\text{Aut}(X(n))$. The
corresponding semi-reductive groups and Lie algebras also appear as above.
###### Example 2.2.
(Parabolic subalgebras of reductive Lie algebras in any characteristic) Let
$\tilde{G}$ be a reductive algebraic group over $\mathbb{F}$ and $G$ be a
parabolic subgroup of $\tilde{G}$. Then $G$ is a semi-reductive group.
###### Example 2.3.
Let $G$ be a connected reductive algebraic group over $\mathbb{F}$ satisfying
that the characteristic of $\mathbb{F}$ is good for $G$, in the sense of [4,
§3.9]. For any given nilpotent $X\in\mathfrak{g}$, let $G_{X}$ be the
centralizer of $X$ in $G$, and $\mathfrak{g}_{X}=\mathsf{Lie}(G_{X})$. By [5,
§5.10-§5.11], $G_{X}=C_{X}\ltimes R_{X}$ is semi-reductive.
###### Example 2.4.
(Enhanced reductive algebraic groups) Let $G_{0}$ be a connected reductive
algebraic group over $\mathbb{F}$, and $(M,\rho)$ a finite-dimensional
representation of $G_{0}$ with representation space $M$ over $\mathbb{F}$.
Here and further, representations of an algebraic group always mean its
rational representations. Consider the product variety $G_{0}\times M$. Regard
$M$ as an additive algebraic group. The variety $G_{0}\times M$ is endowed
with a cross product structure denoted by $G_{0}\times_{\rho}M$, by defining
for any $(g_{1},v_{1}),(g_{2},v_{2})\in G_{0}\times M$
$\displaystyle(g_{1},v_{1})\cdot(g_{2},v_{2}):=(g_{1}g_{2},\rho(g_{1})v_{2}+v_{1}).$
(2.1)
Then a straightforward computation shows that
$\underline{G_{0}}:=G_{0}\times_{\rho}M$ becomes a group with unity $(e,0)$
for the unity $e\in G_{0}$, and $(g,v)^{-1}=(g^{-1},-\rho(g)^{-1}v)$. Then
$G_{0}\times_{\rho}M$ has a subgroup $G_{0}$ identified with $(G_{0},0)$ and a
subgroup $M$ identified with $(e,M)$. Furthermore, $\underline{G_{0}}$ is
connected since $G_{0}$ and $M$ are irreducible varieties. We call
$\underline{G_{0}}$ an enhanced reductive algebraic group associated with the
representation space $M$. What is more, $G_{0}$ and $M$ are closed subgroups
of $\underline{G_{0}}$, and $M$ is a normal closed subgroup. Actually, we have
$\underline{g}\,\underline{v}\,\underline{g}^{-1}=\underline{\rho(g)v}$. Here and further
$\underline{g}$ stands for $(g,0)$ and $\underline{v}$ stands for $(e,v)$. Set
$\mathfrak{g}_{0}=\mathsf{Lie}(G_{0})$. Then $(\mathsf{d}(\rho),M)$ becomes a
representation of $\mathfrak{g}_{0}$. Naturally,
$\mathsf{Lie}(\underline{G_{0}})=\mathfrak{g}_{0}\oplus M$, with Lie bracket
$[(X_{1},v_{1}),(X_{2},v_{2})]:=([X_{1},X_{2}],\mathsf{d}(\rho)(X_{1})v_{2}-\mathsf{d}(\rho)(X_{2})v_{1}),$
which is called an enhanced reductive Lie algebra.
Clearly, $\underline{G_{0}}$ is a semi-reductive group with $M$ being the
unipotent radical.
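As a quick check with (2.1), the inverse formula holds because $(g,v)\cdot(g^{-1},-\rho(g)^{-1}v)=(gg^{-1},\rho(g)(-\rho(g)^{-1}v)+v)=(e,0).$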
### 2.2.
In the sequel, we always assume that $G=G_{0}\ltimes U$ is a connected semi-
reductive algebraic group over an algebraically closed field $\mathbb{F}$
where $G_{0}$ is a connected reductive subgroup and $U$ the unipotent radical.
Let $\mathfrak{g}=\mathsf{Lie}(G)$ be the Lie algebra of $G$. Recall that a
Borel subgroup (resp. Borel subalgebra) of $G$ (resp. $\mathfrak{g}$) is a
maximal solvable subgroup (resp. subalgebra) of $G$ (resp. $\mathfrak{g}$). In
the following we will illustrate the structure of Borel subgroups of $G$.
###### Lemma 2.5.
The following statements hold.
* (1)
Suppose $B$ is a Borel subgroup of $G$. Then $B\supset U$. Furthermore,
$B_{0}:=B\cap G_{0}$ is a Borel subgroup of $G_{0}$, and $B=B_{0}\ltimes U$.
* (2)
Any maximal torus $T$ of $G$ is conjugate to a maximal torus $T_{0}$ of
$G_{0}$.
###### Proof.
(1) As $U$ is the unipotent radical of $G$, $BU$ is still a closed subgroup
containing $B$. We further assert that $BU$ is solvable. Firstly, by a
straightforward computation we have that the $i$th derived subgroup
$\mathcal{D}^{i}(BU)$ is contained in $\mathcal{D}^{i}(B)U$. By the
solvableness of $B$, there exists some positive integer $t$ such that
$\mathcal{D}^{t}(B)$ is the identity group $\\{e\\}$. So
$\mathcal{D}^{t}(BU)\subset U$. Secondly, $U$ is unipotent, hence
solvable. So there exists some positive integer $r$ such that
$\mathcal{D}^{r}(U)=\\{e\\}$. Hence $\mathcal{D}^{t+r}(BU)=\\{e\\}$. The
assertion is proved.
The maximality of the solvable closed subgroup $B$ implies that $BU=B$. This
is to say, $U$ is contained in the unipotent radical $B_{u}$ of $B$. Set
$B_{0}=B\cap G_{0}$ which is clearly a closed solvable subgroup of $G_{0}$. By
the same token as above, $B_{0}U$ is a closed solvable subgroup of $B$. On the
other hand, for any $b\in B$, by the definition of semi-reductive groups we
have $b=b_{0}u$ for some $b_{0}\in G_{0}$ and $u\in U$. As $U\subset B$, we
have further that $b_{0}=bu^{-1}\in B_{0}$. It follows that $B$ is contained
in $B_{0}\ltimes U$. The remaining thing is to prove that $B_{0}$ is a Borel
subgroup of $G_{0}$. It is clear that $B_{0}$ is a solvable closed subgroup of
$G_{0}$. If $B_{0}$ were not a maximal solvable closed subgroup of $G_{0}$, we
could take a larger one $B_{0}^{\prime}$ properly containing $B_{0}$. Then
the solvable closed subgroup $B_{0}^{\prime}U$ of $G$ would contain $B$ properly,
contradicting the maximality of $B$. Hence $B_{0}$ is indeed maximal.
Summing up, the statement in (1) is proved.
(2) Note that the maximal tori in $G$ are all conjugate (see [1, 11.3]). This
statement follows from (1). ∎
As a corollary to Lemma 2.5, we have the following fact for semi-reductive Lie
algebras.
###### Lemma 2.6.
For a semi-reductive group $G$ and $\mathfrak{g}=\mathsf{Lie}(G)$, all maximal
tori of $\mathfrak{g}$ are conjugate under adjoint action of $G$. Moreover,
$\mathfrak{g}_{\text{ss}}=\bigcup_{T}\mathsf{Lie}(T)$ where
$\mathfrak{g}_{\text{ss}}$ is the set consisting of all semisimple elements
and $T$ runs over all maximal tori of $G$.
By Lemmas 2.5 and 2.6, we can choose a maximal torus $T$ of $G$, which lies in
$G_{0}$ without loss of generality. By [1, §8.17], we have the following
decomposition of root spaces associated with
$\mathfrak{t}:=\operatorname{Lie}(T)$
$\displaystyle\mathfrak{g}=\mathfrak{t}\oplus\sum_{\alpha\in\Phi(G_{0},T)}(\mathfrak{g}_{0})_{\alpha}\oplus\sum_{\alpha\in\Phi(U,T)}\mathfrak{u}_{\alpha}$
(2.2)
where $\Phi(U,T)$ is the subset of $X(G,T)$ satisfying that for
$\mathfrak{u}:=\mathsf{Lie}(U)$,
$\mathfrak{u}=\sum_{\alpha\in\Phi(U,T)}\mathfrak{u}_{\alpha}.$
In the consequent arguments, the root system $\Phi(G_{0},T)$ will be denoted
by $\Delta$ for simplicity, which is actually independent of the choice of
$T$. We fix a positive root system $\Delta^{+}:=\Phi(G_{0},T)^{+}$. The
corresponding Borel subgroup will be denoted by $B^{+}$ which contains $T$,
and the corresponding simple system is denoted by $\Pi$.
### 2.3.
The following facts are clear.
###### Lemma 2.7.
Let $G$ be a connected semi-reductive group, and $T$ a maximal torus of
$G_{0}$.
* (1)
Set $\mathscr{W}(G,T):=\text{\rm Nor}_{G}(T)/\text{\rm Cent}_{G}(T)$. Then
$\mathscr{W}(G,T)\cong\mathscr{W}$, where $\mathscr{W}$ is the Weyl group of
$G_{0}$. This is to say, $G$ admits the Weyl group $\mathscr{W}$ coinciding
with the one of $G_{0}$.
* (2)
Let $\\{\dot{w}\mid w\in\mathscr{W}\\}$ be a set of representatives in $N_{G}(T)$ of
the elements of $\mathscr{W}$. Denote by $C(w)$ the double coset
$B^{+}\dot{w}B^{+}$, and by $C_{0}(w)$ the corresponding component
$B^{+}_{0}\dot{w}B^{+}_{0}$ of the Bruhat decomposition of $G_{0}$. Then for
any $w\in\mathscr{W}$
$\displaystyle C(w)=C_{0}(w)\ltimes U.$ (2.3)
* (3)
Let $w=s_{1}\cdots s_{h}$ be a reduced expression of $w\in\mathscr{W}$, with
$s_{i}=s_{\gamma_{i}}$ for $\gamma_{i}\in\Pi$. Then
$C(w)\cong{\mathbb{A}}^{h}\times B^{+}$.
* (4)
For $w\in\mathscr{W}$, set $\Phi(w):=\\{\alpha\in\Delta^{+}\mid
w\cdot\alpha\in-\Delta^{+}\\}$, and $U_{w}:=\Pi_{\alpha\in\Phi(w)}U_{\alpha}$,
then $U_{w^{-1}}\times B^{+}\cong C(w)$ via sending $(u,b)$ onto $u\dot{w}b$.
* (5)
(Bruhat Decomposition) We have
$G={\dot{\bigcup}}_{w\in\mathscr{W}}B^{+}\dot{w}B^{+}$. Therefore, for any
$g\in G$, $g$ can be written uniquely in the form $u\dot{w}b$ with
$w\in\mathscr{W}$, $u\in U_{w^{-1}}$ and $b\in B^{+}$.
###### Proof.
(1) Note that $U$ is the unipotent radical of $G$, and $G=G_{0}\ltimes U$. We
have $\text{\rm Nor}_{G}(T)=\text{\rm Nor}_{G_{0}}(T)$ and $\text{\rm
Cent}_{G}(T)=\text{\rm Cent}_{G_{0}}(T)$. The statement is proved.
(2) Note that $U\dot{w}=\dot{w}U$ and $B^{+}=B^{+}_{0}U=UB^{+}_{0}$ for
$B^{+}_{0}=B^{+}\cap G_{0}$. We have
$\displaystyle C(w)$ $\displaystyle=B^{+}_{0}U\dot{w}B^{+}_{0}U$ (2.4)
$\displaystyle=B^{+}_{0}\dot{w}UB^{+}_{0}U$ (2.5)
$\displaystyle=B^{+}_{0}\dot{w}B^{+}_{0}U$ (2.6)
$\displaystyle=C_{0}(w)\ltimes U.$ (2.7)
(3) and (4) follow from (2) along with [18, Lemma 8.3.6].
(5) follows from (2) along with the Bruhat decomposition Theorem for $G_{0}$
([18, Theorem 8.3.8]). ∎
###### Proposition 2.8.
Let $G$ be a connected semi-reductive group. Keep the notations as in Lemma
2.7. The following statements hold.
* (1)
The group $G$ admits a unique open double coset $C(w_{0})$, where $w_{0}$ is
the longest element of $\mathscr{W}$.
* (2)
For any given Borel subgroup $B^{+}$, let $\mathcal{B}$ denote the homogeneous
space $G/B^{+}$, and $\pi:G\rightarrow\mathcal{B}$ be the canonical morphism.
Then $\pi$ has local sections.
* (3)
For a rational $B^{+}$-module $M$, there exists a fibre bundle over
$\mathcal{B}$ associated with $M$, denoted by $G\times^{B^{+}}M$.
###### Proof.
(1) Note that $C(w_{0})$ is a $B^{+}\times B^{+}$-orbit in $G$ under double
action, and $C(w_{0})$ is open in its closure. Thanks to Lemma 2.7(4), we have
$\dim C(w_{0})=\\#\\{\Delta^{+}\\}+\dim B^{+}=\dim G.$
On the other hand, $G$ is irreducible, so $G=\overline{C(w_{0})}$. It follows that
$C(w_{0})$ is an open subset of $G$. By the uniqueness of the longest element
in $\mathscr{W}$ and Lemma 2.7(3)(5), we have that $C(w_{0})$ is the unique open
double coset.
(2) According to [18, §5.5.7], we only need to verify that (a) $\mathcal{B}$
is covered by open sets $\\{U\\}$; (b) each such open set has a section,
i.e. a morphism $\sigma:U\rightarrow G$ satisfying
$\pi\circ\sigma=\mathsf{id}_{U}$. Thanks to [18, Theorem 5.5.5], $\pi$ is open
and separable. Let $X(w)=\pi(C(w))$. By (1), $X(w_{0})$ is an open set of
$\mathcal{B}$. By Lemma 2.7(4), $\pi$ has a section on $X(w_{0})$. Note that
$\\{gX(w_{0})\mid g\in G\\}$ constitutes an open covering of $\mathcal{B}$.
Using the translation, the statement follows from the argument on $X(w_{0})$.
(3) follows from (2) and [18, Lemma 5.5.8]. ∎
### 2.4. Regular cocharacters and the positivity condition
Keep the notations and assumptions as in the above subsections. Let $T$ be a given
maximal torus of $G_{0}$. By [1, §12.2], there is a regular semisimple element
$t\in T$ such that $\alpha(t)\neq 1$ for any $\alpha\in\Phi(G_{0},T)$.
Furthermore, by [18, Lemma 3.2.11],
$X_{*}(T)\otimes\mathbb{F}^{\times}\rightarrow T$ with $\tau\otimes a\in
X_{*}(T)\otimes\mathbb{F}^{\times}$ mapping to $\tau(a)\in T$ gives rise to an
isomorphism of abelian groups. So there is a regular cocharacter $\tau\in
X_{*}(T)$ such that $t=\tau(a)$ for some $a\in\mathbb{F}^{\times}$. Thus, we
have
$\displaystyle\langle\alpha,\tau\rangle\neq 0\mbox{ for all
}\alpha\in\Phi(G_{0},T).$ (2.8)
By [18, §7.4.5-§7.4.6], we can further choose a regular cocharacter $\tau\in
X_{*}(T)$ for the reductive group $G_{0}$ such that
$\displaystyle\langle\alpha,\tau\rangle>0\mbox{ for all
}\alpha\in\Phi(G_{0},T)^{+}.$ (2.9)
We turn to semi-reductive case. Let $G$ be a semi-reductive algebraic group
and $\mathfrak{g}=\mathsf{Lie}(G)$.
###### Convention 2.9.
The group $G$ (or the Lie algebra $\mathfrak{g}$) is said to satisfy the
positivity condition if there exists a cocharacter $\chi\in X_{*}(T)$ such
that $\langle\alpha,\chi\rangle>0\mbox{ for\,any
}\alpha\in\Phi(G_{0},T)^{+}\cup\Phi(U,T).$
The positivity condition can be seen to hold in the following examples we are
interested in.
###### Example 2.10.
Suppose $\mathfrak{g}$ is the Lie algebra of a semi-reductive algebraic group
$G=\text{\rm GL}(V)\times_{\nu}V$ over $\mathbb{F}$ where $\nu$ is the natural
representation of $\text{\rm GL}(V)$ (see Example 2.4). Then $\mathfrak{g}$
satisfies the positivity condition.
Actually, suppose $\operatorname{dim}V=n.$ Fix a basis $\\{v_{i}\mid 1\leq
i\leq n\\}$ of $V$ associated with the isomorphism $\text{\rm
GL}(V)\cong\text{\rm GL}(n,\mathbb{F}).$ We take the standard maximal torus
$T$ of $\text{\rm GL}(n,\mathbb{F})$ consisting of diagonal invertible
matrices. Set $B^{+}$ the Borel subgroup consisting of upper triangular
invertible matrices, which corresponds to the positive root system. And take
$\varepsilon_{i}\in X^{*}(T)$ with $\varepsilon_{i}(\text{\rm
Diag}(t_{1},\cdots,t_{n}))=t_{i}$, $i=1,\cdots,n$. Then the character group
$X^{*}(T)$ is just the free abelian group generated by $\varepsilon_{i}$,
$i=1,\cdots,n$; and the positive root system $\Phi(G_{0},T)^{+}$ consists of
all $\varepsilon_{i}-\varepsilon_{j}$ with $1\leq i<j\leq n$. Set
$\alpha_{i}=\varepsilon_{i}-\varepsilon_{i+1}$ for $i=1,\cdots,n-1$. Then
$\Pi=\\{\alpha_{1},\cdots,\alpha_{n-1}\\}$ becomes a fundamental root system
of $\text{\rm GL}(V)$, and any positive root $\varepsilon_{i}-\varepsilon_{j}$
is expressed as $\sum_{s=i}^{j-1}\alpha_{s}$ for $i<j$. Then by (2.1), we have
that $\mbox{Ad}(\textbf{t})v_{i}=t_{i}v_{i}=\varepsilon_{i}(\textbf{t})v_{i}$
for $\textbf{t}=\text{\rm Diag}(t_{1},\cdots,t_{n})\in T$. It follows that
$\Phi(V,T)=\\{\varepsilon_{1},\varepsilon_{2},\cdots,\varepsilon_{n}\\}$ is
the set of weights of $V$ with respect to $T$.
Set $\varpi:=\sum_{i=1}^{n}(n-i+1)\varepsilon_{i}$. The corresponding
cocharacter is denoted by $\chi$, which is a regular cocharacter of $\text{\rm
GL}(V)$. Then $\langle\varepsilon_{i},\chi\rangle=(n-i+1)$ and
$\langle\alpha_{j},\chi\rangle=1$ for $1\leq i\leq n$ and $1\leq j\leq n-1$.
Hence $\text{\rm GL}(V)\times_{\nu}V$ satisfies the positivity condition 2.9.
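Concretely, $\chi(a)=\text{\rm Diag}(a^{n},a^{n-1},\cdots,a)$ for $a\in\mathbb{F}^{\times}$, so that $\mbox{Ad}(\chi(a))$ scales each root vector $E_{ij}$ ($i<j$) by $a^{j-i}$ and scales $v_{i}\in V$ by $a^{n-i+1}$, all positive powers of $a$.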
###### Example 2.11.
Suppose $\mathfrak{g}$ is the Lie algebra of a parabolic subgroup of a
reductive algebraic group over $\mathbb{F}$ (see Example 2.2). Then the
positivity condition is assured for $\mathfrak{g}$.
We can show this as follows. Suppose $P$ is a parabolic subgroup of a connected
reductive algebraic group $\tilde{G}$. In this case, we can take a regular
cocharacter $\chi$ of $\tilde{G}$ such that $\langle\alpha,\chi\rangle>0$ for all
positive roots $\alpha$ of $\tilde{G}$; since the roots occurring in Convention 2.9,
i.e. those of the Levi subgroup and of the unipotent radical of $P$, are among the
positive roots of $\tilde{G}$, the positivity condition follows.
###### Example 2.12.
(Continuation of Example 2.1) In this example, we will give some further
preliminary introduction to Cartan type Lie algebras in the classification of
finite-dimensional simple Lie algebras over an algebraically closed field of
positive characteristic, which do not arise from simple algebraic groups (see
[7, 12, 21]). Suppose $\mathbb{F}$ is an algebraically closed field of
characteristic $p>2$ for the time being. There are four types of such Lie
algebras, and each type contains an infinite series. We will
introduce the first two types: Witt type $W(n)$ ($n\geq 1$) and Special type
$S(n)$ ($n\geq 2$) (the remaining two types are Hamiltonian type $H(n)$ for
even $n\geq 2$ and contact type $K(n)$ for odd $n\geq 3$, see [15, 21] for
details).
By definition, the restricted Lie algebras of Cartan type are $p$-subalgebras
of the algebra of derivations of the truncated polynomial rings
$\mathscr{A}(n)=\mathbb{F}[T_{1},\cdots,T_{n}]/(T_{1}^{p},\ldots,T_{n}^{p})$,
whose canonical generators are denoted by $x_{i}$, $i=1,\ldots,n$. Recall
$W(n)=\text{Der}(\mathscr{A}(n))$ (the $n$-th Jacobson-Witt algebra). Its
restricted mapping is the standard $p$-th power of linear operators. Denote by
$\partial_{i}$ the partial derivative with respect to the variable $x_{i}$.
Then $\partial_{i}$, $i=1,\ldots,n$, form a basis of the $\mathscr{A}(n)$-module
$W(n)$. The Lie algebra $W(n)$ has a restricted grading
$W(n)=\bigoplus_{i=-1}^{q}W(n)_{[i]}$ where
$W(n)_{[i]}=\sum_{k=1}^{n}\mathscr{A}(n)_{i+1}\partial_{k}$ associated with the
natural grading of $\mathscr{A}$ arising from the degree of truncated
polynomials, and $q=n(p-1)-1$. Let $\Omega$ denote the exterior differential
algebra over $\mathscr{A}(n)$ in the usual sense. Then
$\Omega(n)=\sum_{i=0}^{n}\Omega(n)^{i}$ with
$\Omega(n)^{i}=\bigwedge^{i}\Omega(n)^{1}$ for
$\Omega(n)^{1}:=\text{Hom}_{\mathscr{A}(n)}(W(n),\mathscr{A}(n))$ which can be
presented as $\Omega(n)^{1}=\sum_{i=1}^{n}\mathscr{A}(n)\textsf{d}x_{i}$, where
d is an $\mathbb{F}$-linear map from $\Omega(n)^{0}:=\mathscr{A}(n)$ to
$\Omega(n)^{1}$ via $\textsf{d}f:E\mapsto E(f)$ for any $f\in\mathscr{A}(n)$
and $E\in W(n)$. The exterior differential algebra $\Omega(n)$ is canonically
endowed with a $W(n)$-action (see [21, §4.2]).
Associated with the volume differential form
$\omega_{S}:=\textsf{d}x_{1}\wedge\cdots\wedge\textsf{d}x_{n}$, there is a
(Cartan-type) simple Lie algebra of Special type $S(n):=S^{\prime}(n)^{(1)}$
for $S^{\prime}(n)=\\{E\in W(n)\mid E\omega_{S}=0\\}$. Both $W(n)$ and $S(n)$
are simple Lie algebras (see [7, 12, 15, 21]), the latter of which is a graded
Lie subalgebra of the former. For $\mathfrak{g}=W(n)$ or $S(n)$, the graded
structure $\mathfrak{g}=\sum_{i\geq-1}\mathfrak{g}_{[i]}$ gives rise to the
filtration $\\{\mathfrak{g}_{i}\\}$ with $\mathfrak{g}_{i}=\sum_{j\geq
i}\mathfrak{g}_{[j]}$. Denote by $G=\text{Aut}(\mathfrak{g})$. Then $G$ is
compatible with the restricted mapping, and $G$ is connected, provided
additionally that $p\geq 5$ when $\mathfrak{g}=W(1)$. Furthermore, the
following properties are satisfied.
* (1)
$G=\text{\rm GL}(n)\ltimes U$ where $U$ is the unipotent radical of $G$,
consisting of those $\sigma\in G$ with
$(\sigma-\mathsf{id}_{\mathfrak{g}})(\mathfrak{g}_{i})\subset\mathfrak{g}_{i+1}$ for all $i$.
So $G$ is semi-reductive.
* (2)
$\mathsf{Lie}(G)=({C\hskip-2.0pt\mathfrak{g}})_{0}$ where
$\displaystyle{C\hskip-2.0pt\mathfrak{g}}:=\begin{cases}W(n)&\text{ if
}\mathfrak{g}=W(n);\cr\\{E\in W(n)\mid
E\omega_{S}\in\mathbb{F}\omega_{S}\\};&\text{ if
}\mathfrak{g}=S(n)\end{cases}$ (2.10)
which is a graded subalgebra of $W(n)$ (see [9, Proposition 3.2]). Actually,
${C\hskip-2.0ptS}(n)=S^{\prime}(n)+\mathbb{F}x_{1}\partial_{1}$ (see [21,
§4.2]). So
${C\hskip-2.0pt\mathfrak{g}}=({C\hskip-2.0pt\mathfrak{g}})_{[0]}\oplus({C\hskip-2.0pt\mathfrak{g}})_{1}$
with $({C\hskip-2.0pt\mathfrak{g}})_{[0]}=W_{[0]}\cong\mathfrak{gl}(n)$ and
$({C\hskip-2.0pt\mathfrak{g}})_{1}\subset W(n)_{1}$.
In particular, for $\mathfrak{g}=W(n)$ we have $G=\text{\rm GL}(n)\ltimes U$
with $\mathsf{Lie}(G)=W(n)_{0}$, and $\operatorname{Lie}(U)=W(n)_{1}$. Take a
maximal torus $T$ and a positive Borel subgroup $B^{+}$ as Example 2.10. Then
$\Phi(G_{0},T)^{+}=\\{\varepsilon_{i}-\varepsilon_{j}\mid 1\leq i<j\leq n\\}$.
For each $\textbf{t}=\text{\rm Diag}(t_{1},\cdots,t_{n})\in T,$
$\mbox{Ad}(\textbf{t})(x_{1}^{a_{1}}\cdots
x_{n}^{a_{n}}\partial_{j})=(\sum_{i=1}^{n}a_{i}\varepsilon_{i}-\varepsilon_{j})(\textbf{t})x_{1}^{a_{1}}\cdots
x_{n}^{a_{n}}\partial_{j}.$ Therefore,
$\Phi(U,T)=\left\\{\sum_{i=1}^{n}a_{i}\varepsilon_{i}-\varepsilon_{j}\mid 0\leq
a_{i}\leq p-1\text{ and }\sum_{i=1}^{n}a_{i}\geq 2\right\\}.$
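For instance (for $n\geq 2$), the element $x_{1}^{2}\partial_{2}\in W(n)_{[1]}$ has $T$-weight $2\varepsilon_{1}-\varepsilon_{2}\in\Phi(U,T)$.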
Set $\varpi:=\sum_{i=1}^{n}(2n-i)\varepsilon_{i}$. The corresponding
cocharacter is denoted by $\chi$, which is a regular cocharacter of $\text{\rm
GL}(n)$. Then $\langle\varepsilon_{i}-\varepsilon_{j},\chi\rangle=j-i$ for
$1\leq i<j\leq n$ and
$\langle\sum_{i=1}^{n}a_{i}\varepsilon_{i}-\varepsilon_{j},\chi\rangle=\sum_{i=1}^{n}a_{i}(2n-i)-(2n-j)\geq\sum_{i=1}^{n}a_{i}n-(2n-1)\geq
2n-(2n-1)>0$
for all $\sum_{i=1}^{n}a_{i}\varepsilon_{i}-\varepsilon_{j}\in\Phi(U,T)$.
Hence associated with $W(n)$, $\mathsf{Lie}(G)$ satisfies the positivity
condition 2.9.
Associated with $S(n)$, $\mathsf{Lie}(G)$ still contains the reductive Lie
subalgebra part isomorphic to $\mathfrak{gl}(n)$ along with the unipotent part
$\mathsf{Lie}(U)\subset W(n)_{1}$. So the same arguments yield the assertion
for $\mathsf{Lie}(G)$ associated with $S(n)$.
###### Remark 2.13.
Associated with $\mathfrak{g}=H(n)$ and $K(n)$, $G=\text{Aut}(\mathfrak{g})$
still turns out to be a connected semi-reductive group. However, the above
argument does not apply to the Hamiltonian algebra $H(n)$ and the contact
algebra $K(n)$. Indeed, for $H(n)$ with $n=2m,$ take a maximal torus
$\mathfrak{h}=\langle h_{i}:=x_{i}\partial_{i}-x_{i+m}\partial_{i+m}\mid 1\leq
i\leq m\rangle$, and $\alpha_{i}$ the fundamental weight with
$\alpha_{i}(h_{j})=\delta_{ij}$ for $1\leq i,j\leq m$. Note that both
$x_{1}^{2}\partial_{m+1}$ and $x_{m+1}^{2}\partial_{1}$ lie in $H(n)_{[1]},$
so that $\pm 3\alpha_{1}\in\Phi(U,T).$ Hence, the positivity condition fails
for $H(n).$ While for $K(n)$ with $n=2m+1,$ take a maximal torus
$\mathfrak{h}=\langle
h_{i}:=x_{i}\partial_{i}-x_{i+m}\partial_{i+m},h_{m+1}:=\sum_{j=1}^{2m}x_{j}\partial_{j}+2x_{2m+1}\partial_{2m+1}\mid
1\leq i\leq m\rangle$, and $\alpha_{i}$ the fundamental weight with
$\alpha_{i}(h_{j})=\delta_{ij}$ for $1\leq i,j\leq m+1$. Note that both
$3x_{1}^{2}x_{2m+1}^{\frac{p-1}{2}}\partial_{m+1}+x_{1}^{3}x_{2m+1}^{\frac{p-3}{2}}(\sum_{j=1}^{2m}x_{j}\partial_{j}-x_{2m+1}\partial_{2m+1})$
and
$3x_{m+1}^{2}x_{2m+1}^{\frac{p-1}{2}}\partial_{1}+x_{m+1}^{3}x_{2m+1}^{\frac{p-3}{2}}(\sum_{j=1}^{2m}x_{j}\partial_{j}-x_{2m+1}\partial_{2m+1})$
lie in $K(n)_{[p]},$ so that $\pm 3\alpha_{1}\in\Phi(U,T).$ Hence, the
positivity condition fails for $K(n).$
Analogous to [5, Proposition 2.11], we have the following parallel result
for a semi-reductive algebraic group.
###### Proposition 2.14.
Suppose that $G=G_{0}\ltimes U$ is a semi-reductive algebraic group over
$\mathbb{F}$ with a maximal torus $T$ and satisfies the positivity condition
2.9. Let $\mathfrak{g}=\mathsf{Lie}(G)$ and
$\mathfrak{g}_{0}=\mathsf{Lie}(G_{0})$. Then for any $X\in\mathfrak{g}$ with
the Jordan-Chevalley decomposition $X=X_{s}+X_{n}$, we have
$X_{s}\in\overline{\mathcal{O}_{X}}$. In particular, if $X$ is nilpotent, then
$0\in\overline{\mathcal{O}_{X}}.$
###### Proof.
By Lemmas 2.5 and 2.7, any $X\in\mathfrak{g}$ lies, up to Ad$(G)$-conjugation,
in a fixed Borel subalgebra. We first fix a Borel subgroup $B=B_{0}\ltimes U$
such that the maximal torus $T$ is contained in $B_{0}\subset G_{0}$, and
$\mathsf{Lie}(B)=\mathfrak{t}+\sum_{\alpha\in\Phi(G_{0},T)^{+}}(\mathfrak{g}_{0})_{\alpha}+\sum_{\alpha\in\Phi(U,T)}\mathfrak{u}_{\alpha}$
(keeping in mind (2.2)). Without loss of generality, we might as well suppose
$X\in\mathsf{Lie}(B)$. Consider its Jordan-Chevalley decomposition
$X=X_{s}+X_{n}$. Then $X_{s},X_{n}\in\mathsf{Lie}(B)$. By assumption, we can
write
$\displaystyle
X=X_{s}+\sum_{\alpha\in\Phi(G_{0},T)^{+}}X_{{\alpha}}+\sum_{\beta\in\Phi(U,T)}X_{\beta}$
(2.11)
where $X_{s}\in\mathfrak{t},$ $X_{{\alpha}}\in(\mathfrak{g}_{0})_{\alpha}$ for
$\alpha\in\Phi(G_{0},T)^{+}$ and $X_{\beta}\in\mathfrak{u}_{\beta}$ for
$\beta\in\Phi(U,T)$.
Then $\mbox{Ad}(t)(X)=X_{s}+\sum_{\gamma\in R^{+}}{\gamma}(t)X_{\gamma}$ for
all $t\in T$, with $R^{+}:=\Phi(G_{0},T)^{+}\cup\Phi(U,T)$. Suppose $\chi$ is
a cocharacter satisfying the positivity condition 2.9. Then for all
$a\in\mathbb{F}^{\times}$, up to conjugation under the $G$-action,
$\mbox{Ad}(\chi(a))(X)=X_{s}+\sum_{\gamma\in
R^{+}}a^{\langle\gamma,\chi\rangle}X_{\gamma}$
with $\langle\gamma,\chi\rangle>0$ for all $\gamma\in R^{+}$, which shows that
we can extend the map $a\mapsto\mbox{Ad}(\chi(a))X$ to a morphism
$\mathbb{F}\rightarrow{\mathfrak{g}}$ by $0\mapsto X_{s}$. It follows that
$X_{s}\in\overline{G\cdot X}$ as claimed. ∎
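As a simple illustration of the last assertion, take $G=\text{\rm GL}(1)\times_{\nu}\mathbb{F}$ as in Example 2.4, with $\mathfrak{g}=\mathfrak{gl}(1)\oplus\mathbb{F}$: a nilpotent element has the form $X=(0,v)$, and $\mbox{Ad}(\chi(a))X=(0,av)$ for the regular cocharacter $\chi(a)=a$, so letting $a\rightarrow 0$ yields $0\in\overline{\mathcal{O}_{X}}$.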
## 3\. Chevalley Restriction Theorem
From now on, we always assume that $G=G_{0}\ltimes U$ (resp.
$\mathfrak{g}=\mathfrak{g}_{0}\oplus\mathfrak{u}$) is a semi-reductive
algebraic group (resp. semi-reductive Lie algebra) over $\mathbb{F}$ of rank
$n$.
### 3.1.
Let $\mathscr{W}$ be the Weyl group of $G$ relative to $T$, i.e.,
$\mathscr{W}=N_{G}(T)/\text{C}_{G}(T)$, which can be identified with
$N_{G_{0}}(T)/\text{C}_{G_{0}}(T)$. For each $w\in\mathscr{W},$ denote by
$\dot{w}$ a representative of $w$ in $N_{G}(T).$ Thanks to [5, §7.12] and [1,
§11.12], the following lemma holds.
###### Lemma 3.1.
For a given maximal torus $T$, let $\mathfrak{h}:=\mathsf{Lie}(T)$. Take
$h\in\mathfrak{h}$. Then
$\text{Ad}(G)(h)\cap\mathfrak{h}=\\{\dot{w}(h)|w\in\mathscr{W}\\}.$
###### Lemma 3.2.
([5, Proposition 7.12]) The map $f\longmapsto f|_{\mathfrak{h}}$ induces an
injective homomorphism
$\Phi_{0}:\,\mathbb{F}[\mathfrak{g}_{0}]^{G_{0}}\longrightarrow\mathbb{F}[\mathfrak{h}]^{\mathscr{W}}$
of $\mathbb{F}$-algebras. This homomorphism is an isomorphism if
$\text{char}(\mathbb{F})\neq 2$ or if $\text{char}(\mathbb{F})=2$ and
$\alpha\notin 2X(T)$ for all roots $\alpha$ of $G_{0}$ relative to $T$.
Consider the canonical projection
$\operatorname{pr}:\mathfrak{g}\rightarrow\mathfrak{g}_{0}$ and the imbedding
$\operatorname{i}:\mathfrak{g}_{0}\rightarrow\mathfrak{g}$ for a given semi-
reductive Lie algebra $\mathfrak{g}=\mathfrak{g}_{0}\oplus\mathfrak{u}$. Then
$\operatorname{pr}$ and $\operatorname{i}$ are both morphisms of algebraic
varieties.
###### Lemma 3.3.
The comorphism
$\operatorname{pr}^{*}:\mathbb{F}[\mathfrak{g}_{0}]\rightarrow\mathbb{F}[\mathfrak{g}]$
induces an injective homomorphism
$\operatorname{pr}^{*}:\mathbb{F}[\mathfrak{g}_{0}]^{G_{0}}\rightarrow\mathbb{F}[\mathfrak{g}]^{G},$
while
$\operatorname{i}^{*}:\mathbb{F}[\mathfrak{g}]\rightarrow\mathbb{F}[\mathfrak{g}_{0}]$
induces a surjective homomorphism
$\operatorname{i}^{*}:\mathbb{F}[\mathfrak{g}]^{G}\rightarrow\mathbb{F}[\mathfrak{g}_{0}]^{G_{0}}.$
###### Proof.
For each $f\in\mathbb{F}[\mathfrak{g}_{0}]^{G_{0}},$ denote
$F=\operatorname{pr}^{*}(f)\in\mathbb{F}[\mathfrak{g}].$ Equivalently,
$F(D+u)=f(D)$, where $D\in{\mathfrak{g}}_{0},\ u\in\mathfrak{u}$. For any
$\sigma\in G$, we have $\sigma=\sigma_{1}\cdot\sigma_{2}$ where $\sigma_{1}\in
G_{0}$ and $\sigma_{2}\in U.$ Then $\sigma\cdot(D+u)=\sigma_{1}\cdot
D+u^{\prime}$, where $u^{\prime}\in\mathfrak{u}.$ Moreover,
$(\sigma^{-1}\cdot
F)(D+u)=F(\sigma(D+u))=F(\sigma_{1}D+u^{\prime})=f(\sigma_{1}D)=f(D)=F(D+u).$
Namely, $F$ is $G$-invariant and
$\operatorname{pr}^{*}:\mathbb{F}[\mathfrak{g}_{0}]^{G_{0}}\rightarrow\mathbb{F}[\mathfrak{g}]^{G}$
is well-defined.
Since $\operatorname{i}^{*}$ is the restriction map, it is easy to verify that
$\operatorname{i}^{*}:\mathbb{F}[\mathfrak{g}]^{G}\rightarrow\mathbb{F}[\mathfrak{g}_{0}]^{G_{0}}$
is well-defined.
Note that $\operatorname{pr}\circ\operatorname{i}$ is the identity map on
$\mathfrak{g}_{0}$. Therefore,
$\operatorname{i}^{*}\circ\operatorname{pr}^{*}$ is the identity map on
$\mathbb{F}[\mathfrak{g}_{0}]^{G_{0}}.$ As a corollary,
$\operatorname{pr}^{*}$ is injective and $\operatorname{i}^{*}$ is surjective.
∎
It follows from [2] that $\mathbb{F}[\mathfrak{h}]^{\mathscr{W}}$ is generated
by $n$ algebraically independent homogeneous elements (and the unit). Then
$\mathbb{F}[\mathfrak{g}_{0}]^{G_{0}}$ is a polynomial algebra generated by $n$
algebraically independent homogeneous polynomials $\\{f_{i}\mid 1\leq i\leq
n\\}\subset\mathbb{F}[\mathfrak{g}_{0}]$. Denote
$F_{i}:=\operatorname{pr}^{*}(f_{i})$ for all $i=1,\cdots,n.$ Then all $F_{i}$
are homogeneous and algebraically independent, and
$\operatorname{pr}^{*}(\mathbb{F}[\mathfrak{g}_{0}]^{G_{0}})=\mathbb{F}[F_{1},\cdots,F_{n}].$
###### Theorem 3.4.
Let $G$ be a semi-reductive algebraic group satisfying the positivity
condition 2.9. Then the map $f\longmapsto f|_{\mathfrak{h}}$ induces an
injective homomorphism
$\Phi:\,\mathbb{F}[\mathfrak{g}]^{G}\longrightarrow\mathbb{F}[\mathfrak{h}]^{\mathscr{W}}$
of $\mathbb{F}$-algebras. This homomorphism is an isomorphism if
$\text{char}(\mathbb{F})\neq 2$ or if $\text{char}(\mathbb{F})=2$ and
$\alpha\notin 2X(T)$ for all roots $\alpha$ of $G_{0}$ relative to $T$.
###### Proof.
Note that $\Phi=\Phi_{0}\circ\operatorname{i}^{*}.$ For each
$X\in\mathfrak{g},$ let $X=X_{s}+X_{n}$ be the Jordan decomposition. It
follows from Proposition 2.14 that $f(X)=f(X_{s})$ for all
$f\in\mathbb{F}[\mathfrak{g}]^{G}.$ By Lemma 2.6, there is $h\in\mathfrak{h}$
such that $X_{s}\in G\cdot h.$ By Lemma 3.1, $G\cdot
h\cap\mathfrak{h}=\\{\dot{w}(h)\mid w\in\mathscr{W}\\}$ for all
$h\in\mathfrak{h}$. It follows that each $f\in\mathbb{F}[\mathfrak{g}]^{G}$ is
determined by $f|_{\mathfrak{h}}$, which is invariant under the action of
$\mathscr{W}$. This implies that $\Phi$ is injective.
On the other hand, by Lemmas 3.2 and 3.3, $\Phi$ is surjective if
$\text{char}(\mathbb{F})\neq 2$ or if $\text{char}(\mathbb{F})=2$ and
$\alpha\notin 2X(T)$ for all roots $\alpha$ of $G_{0}$ relative to $T$. ∎
###### Corollary 3.5.
Suppose $G$ is a semi-reductive algebraic group of rank $n$ satisfying the
positivity condition 2.9, and that either $\text{char}(\mathbb{F})\neq 2$ or
$\text{char}(\mathbb{F})=2$ and $\alpha\notin 2X(T)$ for all roots $\alpha$ of
$G_{0}$ relative to $T$. Then $\mathbb{F}[\mathfrak{g}]^{G}$ is a polynomial
algebra generated by $n$ algebraically independent homogeneous polynomials.
### 3.2. Examples
Theorem 3.4 entails the following good properties for the important examples
we are interested in and have presented before.
###### Corollary 3.6.
Let $\mathfrak{g}$ be any one from the following list:
* (1)
the Lie algebra of a semi-reductive algebraic group $G=\text{\rm
GL}(V)\times_{\nu}V$ over $\mathbb{F}$ where $\nu$ is the natural
representation of $\text{\rm GL}(V)$ (see Example 2.10).
* (2)
the Lie algebra of a parabolic subgroup $P$ of a reductive algebraic group
over $\mathbb{F}$ with decomposition $P=L\ltimes U$ such that the unipotent
radical $U$ contains no trivial weight space as an $L$ module (see Example
2.11).
* (3)
$\mathsf{Lie}(G)$ associated with $W(n)$ and $S(n)$ over $\mathbb{F}$ (see
Example 2.12).
Then the invariants of $\mathbb{F}[\mathfrak{g}]$ under $G$-action must be a
polynomial ring. More precisely,
$\mathbb{F}[\mathfrak{g}]^{G}\cong\mathbb{F}[\mathfrak{h}]^{\mathscr{W}}$ as
$\mathbb{F}$-algebras, where $\mathscr{W}$ is the Weyl group of
$\mathfrak{g}$. In particular, $\mathscr{W}$ coincides with the symmetric
group $\mathfrak{S}_{n}$ in Case (1) and Case (3).
## 4\. Application I: Comparison with the center of enveloping algebra of the
enhanced reductive Lie algebra $\underline{\mathfrak{gl}(V)}$
Suppose $\operatorname{ch}(\mathbb{F})=0$ in this section. Let $G=\text{\rm
GL}(V)\times_{\nu}V$ be the enhanced general linear group as in Corollary
3.6(1). Then $\mathfrak{g}=\mathfrak{gl}(n)\oplus V$.
### 4.1.
For each $x\in\mathfrak{gl}(n)$, we can write
$\det(t\,\mathrm{I}_{n}+x)=t^{n}+\sum_{i=0}^{n-1}\psi_{i}(x)t^{i}$. Then by the classical
Chevalley restriction theorem (see [5, Proposition 7.9])
$\mathbb{F}[\mathfrak{gl}(n)]^{\text{\rm
GL}(n)}=\mathbb{F}[\psi_{0},\cdots,\psi_{n-1}].$
For each $0\leq i\leq n-1,$ let
$\varphi_{i}:=\operatorname{pr}^{*}(\psi_{i})$. Namely,
$\varphi_{i}(g):=\psi_{i}(x)$ for all $g\in\mathfrak{g}$ with $g=x+v$,
$x\in\mathfrak{gl}(n)$, $v\in V$. By Theorem 3.4,
$S(\mathfrak{g}^{*})^{\mathfrak{g}}=S(\mathfrak{g}^{*})^{G}=\mathbb{F}[\mathfrak{g}]^{G}=\mathbb{F}[\varphi_{0},\cdots,\varphi_{n-1}].$
### 4.2.
Let $\mathfrak{p}$ be a parabolic subalgebra of maximal dimension in
$\mathfrak{sl}(n+1)$, consisting of the $(n+1)\times(n+1)$ traceless matrices
whose $(n+1,k)$-entries are equal to zero for $k=1,\ldots,n$.
Consider two elements $I=(\sum_{i=1}^{n}E_{ii},0)\in\mathfrak{g}$ and
$H=\frac{1}{n+1}\sum_{i=1}^{n}(E_{ii}-E_{n+1,n+1})\in\mathfrak{p}$. Then
$\mathfrak{g}=\mathbb{F}I\oplus\mathfrak{g}^{\prime}$ and
$\mathfrak{p}=\mathbb{F}H\oplus\mathfrak{p}^{\prime}$, and
$\mathfrak{g}^{\prime}\simeq\mathfrak{p}^{\prime}\simeq\mathfrak{sl}(n)\oplus
V$. Moreover, one can check that there exists an isomorphism of Lie algebras:
$\mathfrak{g}\simeq\mathfrak{p}$, sending $I$ to $H$.
By [6, Section 7], $S(\mathfrak{p})^{\mathfrak{p}^{\prime}}=\mathbb{F}[z]$ is
a polynomial ring with one indeterminate where $z$ is a harmonic element in
$\mathfrak{sl}(n+1).$ As in the proof of [6, Lemma 7.4], $[H,z]=nz$.
Therefore, we have
$\displaystyle S(\mathfrak{g})^{{\mathfrak{g}}}\simeq
S(\mathfrak{p})^{{\mathfrak{p}}}=\mathbb{F}[z]^{H}=\mathbb{F}.$ (4.1)
We finally obtain the structure of the center of $U(\mathfrak{g})$ for
$\mathfrak{g}=\mathfrak{gl}(V)\ltimes V$.
###### Proposition 4.1.
Let $\mathfrak{g}=\mathfrak{gl}(n)\oplus V$. Then the center of
$U(\mathfrak{g})$ is one-dimensional.
###### Proof.
It follows from Duflo’s theorem [3] and (4.1) that
$Z(\mathfrak{g})=U(\mathfrak{g})^{\mathfrak{g}}\simeq
S(\mathfrak{g})^{\mathfrak{g}}\simeq\mathbb{F}$. ∎
## 5\. Application II: Nilpotent cone and Steinberg map for semi-reductive
Lie algebras
Denote by $\mathcal{N}$ (resp. $\mathcal{N}_{0}$) the variety of all nilpotent
elements of $\mathfrak{g}$ (resp. $\mathfrak{g}_{0}$). Let
$\mathbb{F}[\mathfrak{g}]_{+}^{G}$ (resp.
$\mathbb{F}[\mathfrak{g}_{0}]_{+}^{G_{0}}$) be the subalgebra in
$\mathbb{F}[\mathfrak{g}]^{G}$ (resp. $\mathbb{F}[\mathfrak{g}_{0}]^{G_{0}}$)
consisting of all polynomials in $\mathbb{F}[\mathfrak{g}]^{G}$ (resp.
$\mathbb{F}[\mathfrak{g}_{0}]^{G_{0}}$) with zero constant term. It is
well-known that $\mathcal{N}_{0}$ coincides with the set of common zeros of all
polynomials $f\in\mathbb{F}[\mathfrak{g}_{0}]_{+}^{G_{0}}$ (see [5, Lemma
6.1]).
### 5.1.
Suppose $G$ satisfies the positivity condition 2.9 and the assumption on
ch$\mathbb{F}$ in Corollary 3.5. By this corollary, there exist $n$
algebraically independent homogeneous polynomials
$F_{i}\in\mathbb{F}[\mathfrak{g}]$, $i=1,\ldots,n$ such that
$\mathbb{F}[\mathfrak{g}]^{G}=\mathbb{F}[F_{1},\cdots,F_{n}]$, of which the
precise meaning can be seen in Theorem 3.4. Furthermore, we have the following
lemma.
###### Lemma 5.1.
Let $G$ be a semi-reductive algebraic group over $\mathbb{F}$ satisfying the
positivity condition 2.9 and the assumption on ch$\mathbb{F}$ in Corollary
3.5. Then
$\mathcal{N}=\left\\{X\in\mathfrak{g}\mid f(X)=0\text{ for all
}f\in\mathbb{F}[\mathfrak{g}]_{+}^{G}\right\\}=V(F_{1},\ldots,F_{n}).$
Here and further, the notation $V(F_{1},\ldots,F_{n})$ stands for the
algebraic set of common zeros of $F_{1},\ldots,F_{n}$.
###### Proof.
Similar to [5, Section 6.1], $\mathcal{N}\supseteq\left\\{X\in\mathfrak{g}\mid
f(X)=0\text{ for all }f\in\mathbb{F}[\mathfrak{g}]_{+}^{G}\right\\}.$ Actually
Jantzen only deals with the nilpotent cone for reductive Lie algebras in [5],
but his arguments remain valid for a general algebraic Lie algebra.
If $X\in\mathfrak{g}$ is nilpotent, then $0\in\overline{\mathcal{O}_{X}}$ by
Proposition 2.14. For each $F\in\mathbb{F}[\mathfrak{g}]_{+}^{G},$ since $F$
is continuous and constant on $\mathcal{O}_{X},$ $F$ is constant on
$\overline{\mathcal{O}_{X}}.$ Since $F(0)=0,$ we get $F(X)=0.$
Thanks to [5, Section 6.1] and the definition of $F_{i},$
$\mathcal{N}=V(F_{1},\cdots,F_{n})=V(f_{1},\cdots,f_{n})\times\mathfrak{u}=\mathcal{N}_{0}\times\mathfrak{u}.\qed$
So we have
###### Proposition 5.2.
Keep the notations and assumption as in Lemma 5.1. In particular,
$G=G_{0}\ltimes U$ denotes a connected semi-reductive algebraic group over
$\mathbb{F}$ satisfying the positivity condition 2.9, and
$\mathfrak{g}=\operatorname{Lie}(G)$. The following statements hold.
1. (1)
$\mathcal{N}\simeq\mathcal{N}_{0}\times\mathfrak{u}\text{ as varieties.}$
2. (2)
For any $D\in\mathfrak{g}$ with $D=x+y$, $\ x\in\mathfrak{g}_{0},\
y\in\mathfrak{u}$, we have the following decomposition of tangent spaces
$T_{D}(\mathcal{N})=T_{x}(\mathcal{N}_{0})\oplus\mathfrak{u}$
where $\mathfrak{u}$ is regarded as its tangent space.
3. (3)
Let $\mathcal{N}_{\text{sm}}$ (resp. $\mathcal{N}_{0,\text{sm}}$) be the
smooth locus of $\mathcal{N}$ (resp. $\mathcal{N}_{0}$). We
have the following isomorphism of varieties:
$\displaystyle\mathcal{N}_{\text{sm}}\simeq\mathcal{N}_{0,\text{sm}}\times\mathfrak{u}.$
(5.1)
4. (4)
$\mathcal{N}$ is irreducible and normal.
###### Proof.
(1) follows from the proof of Lemma 5.1.
(2) It is a direct consequence of (1).
(3) Thanks to (1) and (2),
$\dim(\mathcal{N})=\text{dim}(\mathcal{N}_{0})+\text{dim}\,\mathfrak{u}$ and
$\dim T_{X}(\mathcal{N})=\dim
T_{\operatorname{pr}(X)}(\mathcal{N}_{0})+\text{dim}\,\mathfrak{u},\,\,\forall\,X\in\mathcal{N}.$
Hence, $X\in\mathcal{N}$ is smooth if and only if
$\operatorname{pr}(X)\in\mathcal{N}_{0}$ is smooth. As a result,
the restriction of the isomorphism in (1) to $\mathcal{N}_{\text{sm}}$ gives
rise to the isomorphism (5.1).
(4) Since $\mathcal{N}_{0}$ is irreducible and normal, so is $\mathcal{N}$ by
(1). ∎
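To illustrate Proposition 5.2, consider the enhanced Lie algebra $\mathfrak{g}=\mathfrak{gl}(n)\oplus V$ of Example 2.10 (under the assumptions of Lemma 5.1). Here $\mathcal{N}_{0}$ is the cone of nilpotent $n\times n$ matrices and $\mathfrak{u}=V$, so $\mathcal{N}=\\{x+v\mid x\in\mathfrak{gl}(n)\ \text{nilpotent},\ v\in V\\}\simeq\mathcal{N}_{0}\times V.$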
### 5.2. The Steinberg map
Define canonically an adjoint quotient map
$\displaystyle\chi:\mathfrak{g}$ $\displaystyle\rightarrow\mathbb{A}^{n},$
$\displaystyle D$
$\displaystyle\mapsto(F_{1}(D),\cdots,F_{n}(D)),\quad\forall\,D\in\mathfrak{g}$
which we call the Steinberg map for $\mathfrak{g}$. Let
$\displaystyle\eta:\mathfrak{g}_{0}$ $\displaystyle\rightarrow\mathbb{A}^{n},$
$\displaystyle x$
$\displaystyle\mapsto(f_{1}(x),\cdots,f_{n}(x)),\quad\forall\,x\in\mathfrak{g}_{0}$
be the Steinberg map for $\mathfrak{g}_{0}$. Since
$F_{i}=\operatorname{pr}^{*}(f_{i})$, we have
$\chi=\eta\circ\operatorname{pr}.$
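For instance, for $\mathfrak{g}=\mathfrak{gl}(n)\oplus V$ with the invariants $\varphi_{i}=\operatorname{pr}^{*}(\psi_{i})$ of §4.1, the Steinberg map is given explicitly by $\chi(x+v)=(\psi_{0}(x),\cdots,\psi_{n-1}(x))$, i.e., by the coefficients of the characteristic polynomial of the $\mathfrak{gl}(n)$-component, while the $V$-component is discarded.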
###### Proposition 5.3.
Keep the notations and assumptions as in Lemma 5.1. In particular,
$G=G_{0}\ltimes U$ is a connected semi-reductive algebraic group with
$\mathfrak{g}=\mathsf{Lie}(G)=\mathfrak{g}_{0}\oplus\mathfrak{u}$ satisfying
the positivity condition.
1. (1)
$\chi^{-1}(0)=\mathcal{N}_{0}+\mathfrak{u}.$ In particular,
$\chi^{-1}(0)=\mathcal{N}$.
2. (2)
$\chi^{-1}(a)=\eta^{-1}(a)\times\mathfrak{u}$ for all
$a\in\chi(\mathfrak{g}).$ In particular, $\chi^{-1}(a)$ is irreducible of
dimension $\operatorname{dim}(\mathfrak{g})-n.$
3. (3)
For every $a\in\chi(\mathfrak{g}),$ $\chi^{-1}(a)$ contains exactly one orbit
consisting of semisimple elements.
###### Proof.
(1) and (2) are obvious.
(3) Fix a maximal torus $\mathfrak{t}$ of $\mathfrak{g}_{0}$. Then
$\mathfrak{t}$ is a maximal torus of $\mathfrak{g}$. Thanks to Lemma 2.5, for
all semisimple elements $x_{1},x_{2}\in\chi^{-1}(a),$ there exist $g_{i}\in
G,\ i=1,2,$ such that $y_{i}:=g_{i}\cdot
x_{i}\in\mathfrak{t}\subseteq\mathfrak{g}_{0},\ i=1,2.$ Note that
$\eta(y_{i})=\chi(y_{i})=a,\ i=1,2,$ and $\mathfrak{g}_{0}$ is reductive. It
follows from [5, Proposition 7.13] that $y_{1}$ and $y_{2}$ lie in the same
$G_{0}$-orbit, namely, there is $g_{0}\in G_{0}$ such that $g_{0}\cdot
y_{1}=y_{2}.$ Moreover, $x_{2}=g_{2}^{-1}\cdot g_{0}\cdot g_{1}\cdot x_{1}.$
As a result, $x_{1}$ and $x_{2}$ are in the same $G$-orbit. Hence (3) holds. ∎
### 5.3. Springer resolution for the semi-reductive case
Let $\mathcal{B}$ (resp. $\mathcal{B}_{0}$) be the set of all Borel
subalgebras of $\mathfrak{g}$ (resp. $\mathfrak{g}_{0}$). Lemma 2.5 implies
that $\mathfrak{b}_{0}\mapsto\mathfrak{b}_{0}\oplus\mathfrak{u}$ defines an
isomorphism between $\mathcal{B}_{0}$ and $\mathcal{B}.$
Recall that for the fixed positive root system $\Delta^{+},$ we denote by
$B^{+}=B_{0}^{+}\ltimes U$ (resp.
$\mathfrak{b}^{+}=\mathfrak{b}_{0}^{+}\oplus\mathfrak{u}$) the corresponding
Borel subgroup of $G$ (resp. Borel subalgebra of $\mathfrak{g}$) where
$B_{0}^{+}$ (resp. $\mathfrak{b}_{0}^{+}$) is the corresponding Borel subgroup
of $G_{0}$ (resp. Borel subalgebra of $\mathfrak{g}_{0}$) (see §2.2). Set
$\tilde{\mathcal{N}}:=\\{(x,\mathfrak{b})\in\mathcal{N}\times\mathcal{B}\mid
x\in\mathfrak{b}\\}$. Similar to the case of reductive Lie algebras, we have
the following lemma.
###### Lemma 5.4.
Keeping notations as above, we have
1. (1)
$N_{G}(\mathfrak{b}^{+})=B^{+}.$
2. (2)
$\mathcal{B}\simeq G/B^{+},$ and $\mathcal{B}$ is a projective variety.
3. (3)
$\tilde{\mathcal{N}}\simeq\\{(x,gB^{+})\in\mathcal{N}\times G/B^{+}\mid
g^{-1}x\in\mathfrak{b}^{+}\\}.$
###### Proof.
(1) For each $x\in\mathfrak{b}_{0}^{+},\ y\in\mathfrak{u}$, suppose $g\in
G_{0},h\in U$ are such that $gh(x+y)\in\mathfrak{b}^{+}.$ Note that
$h(x)=x+x^{\prime}$, where $x^{\prime}\in\mathfrak{u}.$ Therefore,
$gh(x+y)=g(x)+g(x^{\prime})+gh(y)$. Hence, $g(x)\in\mathfrak{b}_{0}^{+}$ and
$g(x^{\prime})+gh(y)\in\mathfrak{u}.$ It follows that $g\in
N_{G_{0}}(\mathfrak{b}_{0}^{+})=B_{0}^{+}$ and $N_{G}(\mathfrak{b}^{+})\leq
B_{0}^{+}\ltimes U=B^{+}.$ Since $\operatorname{Lie}(B^{+})=\mathfrak{b}^{+},\
B^{+}\leq N_{G}(\mathfrak{b}^{+}).$ Consequently,
$N_{G}(\mathfrak{b}^{+})=B^{+}.$
(2) follows from (1) and Lemma 2.5.
(3) follows from (2). ∎
Denote by $\mathfrak{n}$ the nilpotent radical of $\mathfrak{b}^{+}$, then
$\mathfrak{n}=\mathfrak{n}_{0}\oplus\mathfrak{u}$ where $\mathfrak{n}_{0}$ is
the nilpotent radical of $\mathfrak{b}^{+}_{0}.$ One can check that $B^{+}$
acts on $G\times\mathfrak{n}$, with $x\cdot(g,n)=(gx^{-1},\operatorname{Ad}(x)n)$
for $x\in B^{+}$ and $g\in G$, $n\in\mathfrak{n}$. The arguments of [5, §6.5]
still work in our case. Therefore, the following proposition holds.
###### Proposition 5.5.
The projection $\pi:\tilde{\mathcal{N}}\rightarrow\mathcal{B}$ makes
$\tilde{\mathcal{N}}$ a $G$-equivariant vector bundle over $\mathcal{B}$ with
fiber $\mathfrak{n}.$ The assignment $(g,n)\mapsto(g\cdot n,gB^{+})$ gives a
$G$-equivariant isomorphism
$G\times^{B^{+}}\mathfrak{n}\simeq\tilde{\mathcal{N}}.$ In particular,
$\tilde{\mathcal{N}}$ is smooth.
###### Lemma 5.6.
The projection $\mu:\tilde{\mathcal{N}}\rightarrow\mathcal{N}$ is a proper
map.
###### Proof.
By definition, $\mu=\beta\circ\alpha,$ where
$\alpha:\tilde{\mathcal{N}}\hookrightarrow\mathcal{N}\times\mathcal{B}$ is a
closed immersion and
$\beta:\mathcal{N}\times\mathcal{B}\rightarrow\mathcal{N}$ is the projection.
Since $\mathcal{B}$ is a projective variety, $\mu$ is a projective morphism.
Therefore $\mu$ is proper. ∎
Let $\mathcal{B}_{X}:=\\{\mathfrak{b}\in\mathcal{B}\mid X\in\mathfrak{b}\\}$
for $X\in\mathcal{N}$ and
$\mathcal{B}_{0,x}:=\\{\mathfrak{b}_{0}\in\mathcal{B}_{0}\mid
x\in\mathfrak{b}_{0}\\}$ for $x\in\mathcal{N}_{0}.$
###### Lemma 5.7.
Suppose $X=X_{0}+X_{1}\in\mathcal{N}(\mathfrak{g})$ where
$X_{0}\in\mathfrak{g}_{0},X_{1}\in\mathfrak{u}.$ Then
$\psi:\mathfrak{b}_{0}\mapsto\mathfrak{b}_{0}\oplus\mathfrak{u}$ defines a
one-to-one correspondence between $\mathcal{B}_{0,X_{0}}$ and
$\mathcal{B}_{X}.$
###### Proof.
Note that $X\in\mathfrak{b}_{0}\oplus\mathfrak{u}$ if and only if
$X_{0}\in\mathfrak{b}_{0}.$ The assertion follows immediately. ∎
###### Lemma 5.8.
If $n\in\mathcal{N}_{\text{sm}},$ then $\mu^{-1}(n)$ contains exactly one
element. Namely, there is a unique Borel subalgebra of $\mathfrak{g}$
containing $n.$
###### Proof.
It follows from Proposition 5.2 that $n=n_{0}+n_{1}$ for
$n_{0}\in\mathcal{N}(\mathfrak{g}_{0})_{\text{sm}}$ and
$n_{1}\in\mathfrak{u}.$ Since $\mathfrak{g}_{0}$ is a reductive Lie algebra,
there is exactly one Borel subalgebra $\mathfrak{b}_{0}$ of $\mathfrak{g}_{0}$
such that $n_{0}\in\mathfrak{b}_{0}.$
Now suppose $n\in\mathfrak{b}$ for a Borel subalgebra $\mathfrak{b}$ of
$\mathfrak{g},$ and $\mathfrak{b}=\mathfrak{b}^{\prime}\oplus\mathfrak{u}$
where $\mathfrak{b}^{\prime}$ is a Borel subalgebra of $\mathfrak{g}_{0}.$
Since $n_{0}\in\mathfrak{b}^{\prime},$ we get
$\mathfrak{b}^{\prime}=\mathfrak{b}_{0}$ and
$\mathfrak{b}=\mathfrak{b}_{0}\oplus\mathfrak{u}.$ ∎
Thanks to Proposition 5.5, Lemmas 5.6 and 5.8, $\tilde{\mathcal{N}}$ is smooth
and $\mu$ is a proper birational morphism. As a consequence, the following
result holds.
###### Proposition 5.9.
Keeping notations as above, $\mu:\tilde{\mathcal{N}}\rightarrow\mathcal{N}$ is a
resolution of singularities.
###### Remark 5.10.
Let
$\tilde{\mathcal{N}_{0}}=\\{(x,\mathfrak{b})\in\mathcal{N}_{0}\times\mathcal{B}_{0}\mid
x\in\mathfrak{b}\\}$ be the Springer resolution of $\mathcal{N}_{0}.$ Assume
$G$ satisfies the positivity condition 2.9 and the assumption on
ch$\mathbb{F}$ as in Corollary 3.5, then
$\tilde{\mathcal{N}}\simeq\tilde{\mathcal{N}_{0}}\times\mathfrak{u}.$
Actually, by Proposition 5.2, $\mathcal{N}=\mathcal{N}_{0}\times\mathfrak{u}.$
Then $(x,\mathfrak{b}_{0},y)\mapsto(x+y,\mathfrak{b}_{0}+\mathfrak{u})$
defines an isomorphism between $\tilde{\mathcal{N}_{0}}\times\mathfrak{u}$ and
$\tilde{\mathcal{N}}$.
## References
* [1] A. Borel, Linear algebraic groups, Second Enlarged Edition, Graduate Texts in Mathematics 126, Springer Verlag, New York, 1991.
* [2] C. Chevalley, Invariants of finite groups generated by reflections, Amer. J. Math. 77 (1955), 778-782.
* [3] M. Duflo, Opérateurs différentiels bi-invariants sur un groupe de Lie, Ann. Sci. École Norm. Sup. (4) 10 (1977), 265-288.
* [4] J. Humphreys, Conjugacy classes in semisimple algebraic groups, American Mathematical Society, Providence, RI, 1995.
* [5] J. C. Jantzen, Nilpotent orbits in representation theory, Lie theory, 1-211, Progr. Math., 228, Birkhauser Boston, Boston, MA, 2004.
* [6] A. Joseph, Second commutant theorems in enveloping algebras, American Journal of Mathematics 99 (1977), no. 6, 1167-1192.
* [7] A. I. Kostrikin and I. R. Shafarevich, Graded Lie algebras of finite characteristic, Izv. Akad. Nauk SSSR Ser. Mat. 33 (1969), 251-322.
* [8] D. Luna and R. W. Richardson, A generalization of the Chevalley restriction theorem, Duke Math. J. 46 (1979), 487-496.
* [9] Z. Lin and D. Nakano, Algebraic group actions in the cohomology theory of Lie algebras of Cartan type, J. Algebra 179 (1996), 852-888.
* [10] D. K. Nakano, Projective modules over Lie algebras of Cartan type, Mem. Amer. Math. Soc. 98 (1992), no. 470.
* [11] A. Premet, The theorem on restriction of invariants, and nilpotent elements in $W_{n}$, Math. USSR Sbornik 73 (1992), no. 1, 135-159.
* [12] A. Premet and H. Strade, Classification of finite dimensional simple Lie algebras in prime characteristics, Representations of algebraic groups, quantum groups, and Lie algebras, 185-214, Contemp. Math., 413, Amer. Math. Soc., Providence, RI, 2006.
* [13] Y. Ren, The BGG Category for generalized reductive Lie algebras, arXiv:2010.11849[Math.RT].
* [14] Y. Ren and B. Shu, The center of enveloping algebras of generalized reductive Lie algebras, preprint 2021.
* [15] H. Strade and R. Farnsteiner, Modular Lie algebras and their representations, Marcel Dekker, New York, 1988.
* [16] G. Shen, Graded modules of graded Lie algebras of Cartan type. III. Irreducible modules, Chinese Ann. Math. Ser. B 9 (1988), no. 4, 404-417.
* [17] S. Skryabin, Independent systems of derivations and Lie algebra representations, in: Algebra and Analysis (Kazan, 1994), de Gruyter, Berlin, 1996, pp. 115-150.
* [18] T. Springer, Linear algebraic groups, Birkhauser Boston, 2nd edition, Boston, 1998.
* [19] B. Shu, Y. Xue and Y. Yao, On enhanced groups (I): Parabolic Schur algebra and the dualities related to degenerate double Hecke algebras, arXiv:2005.13152[Math.RT].
* [20] B. Shu and Y. Yao, Irreducible representations of the generalized Jacobson-Witt algebras, Algebra Colloq. 19 (2012), no. 1, 53-72.
* [21] H. Strade, Simple Lie algebras over fields of positive characteristic I. Structure theory, Walter de Gruyter, Berlin, 2004.
* [22] C. Xue, Double Hecke algebras and the related quantum Schur-Weyl duality, preprint 2020.
* [23] R. Wilson, Automorphisms of graded Lie algebras of Cartan type, Comm. Algebra 3(1975), no. 7, 591-613.
* [24] Y. Yao and B. Shu, Irreducible representations of the special algebras in prime characteristic, Representation theory, 273-295, Contemp. Math., 478, Amer. Math. Soc., Providence, RI, 2009.
* [25] C. Zhang, Representations of the restricted Lie algebras of Cartan type, J. Algebra 290 (2005), no. 2, 408-432.
# Physics-Informed Deep Learning for
Traffic State Estimation
Rongye Shi, Zhaobin Mo, Kuang Huang, Xuan Di, and Qiang Du
Manuscript received month day, year; revised month day, year. This work was supported by the Data Science Institute at Columbia University under the Seed Funds Program. (Corresponding authors: Rongye Shi and Xuan Di.)
Rongye Shi and Zhaobin Mo are with the Department of Civil Engineering and Engineering Mechanics, Columbia University, New York, NY, 10027 USA (e-mail: <EMAIL_ADDRESS>; [email protected]).
Kuang Huang is with the Department of Applied Physics and Applied Mathematics, Columbia University, New York, NY, 10027 USA (e-mail: [email protected]).
Xuan Di is with the Department of Civil Engineering and Engineering Mechanics, Columbia University, New York, NY, 10027 USA, and also with the Data Science Institute, Columbia University, New York, NY, 10027 USA (e-mail: [email protected]).
Qiang Du is with the Department of Applied Physics and Applied Mathematics, Columbia University, New York, NY, 10027 USA, and also with the Data Science Institute, Columbia University, New York, NY, 10027 USA (e-mail: [email protected]).
###### Abstract
Traffic state estimation (TSE), which reconstructs the traffic variables
(e.g., density) on road segments using partially observed data, plays an
important role in the efficient traffic control and operation that intelligent
transportation systems (ITS) need to provide to people. Over the decades, TSE
approaches have bifurcated into two main categories, model-driven approaches and
data-driven approaches. However, each of them has limitations: the former
highly relies on existing physical traffic flow models, such as Lighthill-
Whitham-Richards (LWR) models, which may only capture limited dynamics of
real-world traffic, resulting in low-quality estimation, while the latter
requires massive data in order to perform accurate and generalizable
estimation. To mitigate the limitations, this paper introduces a physics-
informed deep learning (PIDL) framework to efficiently conduct high-quality
TSE with small amounts of observed data. PIDL contains both model-driven and
data-driven components, making possible the integration of the strong points
of both approaches while overcoming the shortcomings of either. This paper
focuses on highway TSE with observed data from loop detectors, using traffic
density as the traffic variable. We demonstrate the use of PIDL to solve
(with data from loop detectors) two popular physical traffic flow models,
i.e., Greenshields-based LWR and three-parameter-based LWR, and discover the
model parameters. We then evaluate the PIDL-based highway TSE using the Next
Generation SIMulation (NGSIM) dataset. The experimental results show the
advantages of the PIDL-based approach in terms of estimation accuracy and data
efficiency over advanced baseline TSE methods.
###### Index Terms:
Traffic state estimation, traffic flow models, physics-informed deep learning.
## I Introduction
Traffic state estimation (TSE) is one of the central components that support
traffic control, operations and other transportation services that intelligent
transportation systems (ITS) need to provide to people with mobility needs.
For example, the operations and planning of intelligent ramp metering, and
traffic congestion management rely on overall understanding of road-network
states. However, only a subset of traffic states can be observed using traffic
sensing systems deployed on roads or probe vehicles moving along with the
traffic flow. To obtain the overall traffic state profile from limited
observed information defines the TSE problem. Formally, TSE refers to the data
mining problem of reconstructing traffic state variables, including but not
limited to flow (veh/h), density (veh/km), and speed (km/h), on road segments
using partially observed data from traffic sensors [1].
TSE research dates back to the late 1940s [2] in the existing literature and
has continuously gained great attention in recent decades. TSE approaches can
be briefly divided into two main categories: model-driven and data-driven. A
model-driven approach is based on a priori knowledge of traffic dynamics,
usually described by a physical model, e.g., the Lighthill-Whitham-Richards
(LWR) model [3, 4], to estimate the traffic state using partial observation.
It assumes the model to be representative of real-world traffic dynamics, such that the unobserved values can be properly inferred from the model with less input data. The disadvantage is that existing models, which are provided
by different modelers, may only capture limited dynamics of real-world
traffic, resulting in low-quality estimation in the case of inappropriately-
chosen models and poor model calibrations. Paradoxically, sometimes, verifying
or calibrating a model requires a large amount of observed data, undermining
the data efficiency of model-driven approaches.
A data-driven approach is to infer traffic states based on the dependence
learned from historical data using statistical or machine learning (ML)
methods. Approaches of this type do not use any explicit traffic models or
other theoretical assumptions, and can be treated as a “black box” with no
interpretable and deductive insights. The disadvantage is that in order to
maintain a good generalizable inference to long-term unobserved values,
massive and representative historical data are a prerequisite, leading to high
demands on data collection infrastructure and enormous installation-
maintenance costs.
To mitigate the limitations of the previous TSE approaches, this paper, for
the first time (to the best of our knowledge), introduces a framework,
physics-informed deep learning (PIDL), to the TSE problem for improved
estimation accuracy and data efficiency. The PIDL framework is originally
proposed for solving nonlinear partial differential equations (PDEs) in recent
years [5, 6]. PIDL contains both a model-driven component (a physics-informed
neural network for regularization) and a data-driven component (a physics-
uninformed neural network for estimation), making possible the integration of
the strong points of both model-driven and data-driven approaches while
overcoming the weaknesses of either. This paper focuses on demonstrating PIDL
for TSE on highways using observed data from inductive loop detectors, which
are the most widely used sensors. We will show the benefit of using the LWR-
based physics to inform deep learning. Grounded in the highway and loop detector setting, we make the following contributions:
* •
Establish a PIDL framework for TSE, containing a physics-uninformed neural
network for estimation and a physics-informed neural network for
regularization. Specifically, the latter can encode a traffic flow model,
which will have a regularization effect on the former, making its estimation
physically consistent;
* •
Design two PIDL architectures for TSE with traffic dynamics described by
Greenshields-based LWR and three-parameter-based LWR, respectively. We show the ability of the PIDL-based method to estimate the traffic density in the
spatio-temporal field of interest using initial data or limited observed data
from loop detectors, and in addition, to discover model parameters;
* •
Demonstrate the robustness of PIDL using real data, i.e., the data of a
highway scenario in the Next Generation SIMulation (NGSIM) dataset. The
experimental results show the advantage of PIDL in terms of estimation
accuracy and data efficiency over TSE baselines, i.e., an extended Kalman
filter (EKF), a neural network, and a long short-term memory (LSTM) based
method.
The rest of this paper is organized as follows. Section II briefs related work
on TSE and PIDL. Section III formalizes the PIDL framework for TSE. Sections
IV and V detail the designs and experiments of PIDL for Greenshields-based LWR
and three-parameter-based LWR, respectively. Section VI evaluates PIDL on NGSIM data against baselines. Section VII concludes our work.
## II Related Work
### II-A Traffic State Estimation Approaches
A number of studies tackling the problem of TSE have been published in recent
decades. As discussed in the previous section, this paper briefly divides TSE
approaches into two main categories: model-driven and data-driven. For more
comprehensive categorization, we refer readers to [1].
Model-driven approaches accomplish their estimation process relying on traffic
flow models, such as the LWR [3, 4], Payne-Whitham (PW) [7, 8], Aw-Rascle-
Zhang (ARZ) [9, 10] models for one-dimensional traffic flow (link models) and
junction models [11] for network traffic flow (node models). Most estimation
approaches in this category are data assimilation (DA) based, which attempt to
find “the most likely state”, allowing observation to correct the model’s
prediction. Popular examples include the Kalman filter (KF) and its variants
(e.g., extended KF [12, 13, 14], unscented KF [15], ensemble KF [16]), which
find the state that maximizes the conditional probability of the next state given the current estimate. Other than KF-like methods, the particle filter (PF) [17], with improved nonlinear representation, and the adaptive smoothing filter (ASF) [18], for combining multiple sensing data, were proposed to improve and extend different aspects of the TSE process. In addition to DA-based methods, there
have been many studies utilizing microscopic trajectory models to simulate the
state or vehicle trajectories given some boundary conditions from data [19,
20, 21, 22, 23, 24, 25]. Among model-driven approaches, the LWR model is one of the most popular deterministic traffic flow models due to its relative simplicity,
compactness, and success in capturing real-world traffic phenomena such as
shockwave [1]. To perform filtering based statistical methods on an LWR model,
we usually need to make a prior assumption about the distribution of traffic
states. The existing practice includes two ways: one is to add Brownian motion on top of deterministic traffic flow models, leading to Gaussian stochastic traffic flow models, and the other is to derive intrinsic stochastic
traffic flow models with more complex probabilistic distributions [26, 27, 28,
14, 29]. In this paper, we will use the stochastic LWR model with Gaussian
noise (solved by EKF) as one baseline to validate the superior performance of
our model.
Data-driven approaches estimate traffic states using historical data or
streaming data without explicit traffic flow models. Early studies considered
relatively simple statistical methods, such as heuristic imputation techniques
using empirical statistics of historical data [30], linear regression model
[31], and autoregressive moving average (ARMA) model for time series data
analysis [32]. To handle more complicated traffic data, ML approaches were
involved, including principal component analysis (PCA) and its variants [33,
34], k-nearest neighbors (kNN) [35], and probabilistic graphical models (i.e.,
Bayesian networks) [36]. Deep learning model [37], long short term memory
(LSTM) model [38], deep embedded model [39] and fuzzy neural network (FNN)
[40] have recently been applied for traffic flow prediction and imputation.
Each of these two approaches has disadvantages, and it is promising to develop
a framework that integrates physics into ML, such that we can combine the strong
points of both model-driven and data-driven approaches while overcoming the
weaknesses of either. This direction has gained increasing interest in recent
years. Hofleitner et al [41] and Wang et al [42] developed hybrid models for
estimating travel time on links for the signalized intersection scenarios,
combining Bayesian network and hydrodynamic theory. Jia et al [43] developed
physics guided recurrent neural networks that learn from calibrated model
predictions in addition to real data to model water temperature in lakes. Yuan
et al [44] recently proposed to leverage a hybrid framework, physics
regularized Gaussian process (PRGP) [45] for macroscopic traffic flow modeling
and TSE. Our paper contributes to the trend of developing hybrid methods for
transportation research, and explores the use of the PIDL framework to address the TSE problem.
### II-B Physics-Informed Deep Learning for Solving PDEs
The concept of PIDL was first proposed by Raissi in 2018 as an effective alternative to numerical PDE solvers [5, 6]. It approximates the
unknown nonlinear dynamics governed by a PDE with two deep neural networks:
one for predicting the unknown solution and the other, in which the PDE is
encoded, for evaluating whether the prediction is consistent with the given
physics. Increasing attention has been paid to the application of PIDL in
scientific and engineering areas, to name a few, the discovery of constitutive
laws for flow through porous media [46], the prediction of vortex-induced
vibrations [47], 3D surface reconstruction [48], and the inference of
hemodynamics in intracranial aneurysm [49]. In addition to PDEs, researchers
have extended PIDL to solving space fractional differential equations [50] and
space-time fractional advection-diffusion equations [51].
### II-C Motivations of PIDL for TSE
This paper introduces the framework of PIDL to TSE problems. To be specific,
we develop PIDL-based approaches to estimate the spatio-temporal traffic
density of a highway segment over a period of time using observation from a
limited number of loop detectors. As shown in Fig. 1, in contrast to model-
driven and data-driven approaches, the proposed PIDL-based TSE approaches are
able to combine the advantages of both approaches by making efficient use of
traffic data and existing knowledge in traffic flow models. As real-world
traffic is composed of both physical and random human behavioral components, a
combination of model-driven and data-driven approaches has a great potential
to handle such complex dynamics.
Figure 1: A presentation of data-driven, model-driven, and PIDL-based TSE
approaches in a 2D view, where $x$-axis is the use of traffic data and
$y$-axis is the use of existing traffic flow models. (adapted from [52])
A PIDL-based TSE approach consists of one deep neural network to estimate
traffic state (data-driven component), while encoding the traffic models into
another neural network to regularize the estimation for physical consistency
(model-driven component). A PIDL-based approach also has the ability to
discover unknown model parameters, which is an important problem in
transportation research. To the best of our knowledge, this is the first work
of combining traffic flow models and deep learning paradigms for TSE.
## III PIDL Framework Establishment for TSE
This section introduces the PIDL framework in the context of TSE at a high-
level, and Section IV, V, VI will flesh out the framework for specific traffic
flow models and their corresponding TSE problems.
The PIDL framework, which consists of a physics-uninformed neural network followed by a physics-informed neural network, can be used for (1)
approximating traffic states described by some traffic flow models in the form
of nonlinear differential equations and (2) discovering unknown model
parameters using observed traffic data.
### III-A PIDL for TSE
Let $\mathcal{N}[\cdot]$ be a general nonlinear differential operator and
$\Omega$ be a subset of $\mathbb{R}^{d}$. For one lane TSE, $d$ is one by
default in this paper, i.e., $\Omega=[0,L],L\in\mathbb{R}^{+}$. Then, the
problem is to find the traffic state $\rho(t,x)$ at each point $(t,x)$ in a
continuous domain, such that the following PDE of a traffic flow model can be
satisfied:
$\rho_{t}(t,x)+\mathcal{N}[\rho(t,x)]=0,x\in\Omega,t\in[0,T],$ (1)
where $T\in\mathbb{R}^{+}$. Accordingly, the continuous spatio-temporal domain
$D$ is a set of points: $D=\\{(t,x)\,|\,t\in[0,T],x\in[0,L]\\}$. We represent this continuous domain in a discrete manner using grid points $G\subset D$ that are evenly deployed throughout the domain. We define the set of grid points as $G=\\{(t^{(r)},x^{(r)})\,|\,r=1,\ldots,N_{g}\\}$. The total number of grid
points, $N_{g}$, controls the fine-grained level of $G$ as a representation of
the continuous domain.
PIDL approximates $\rho(t,x)$ using a neural network with time $t$ and
location $x$ as its inputs. This neural network is called physics-uninformed
neural network (PUNN) (or estimation network in our TSE study), which is
parameterized by $\theta$. We denote the approximation of $\rho(t,x)$ from
PUNN as $\hat{\rho}(t,x;\theta)$. During the learning phase of PUNN (i.e., to
find the optimal $\theta$ for PUNN), the following residual value of the
approximation $\hat{\rho}(t,x;\theta)$ is used:
$\hat{f}(t,x;\theta):=\hat{\rho}_{t}(t,x;\theta)+\mathcal{N}[\hat{\rho}(t,x;\theta)],$
(2)
which is designed according to the traffic flow model in Eq. (1). The
calculation of residual $\hat{f}(t,x;\theta)$ is done by a physics-informed
neural network (PINN). This network can compute $\hat{f}(t,x;\theta)$ directly
using $\hat{\rho}(t,x;\theta)$, the output of PUNN, as its input. When
$\hat{\rho}(t,x;\theta)$ is closer to $\rho(t,x)$, the residual will be closer
to zero. PINN introduces no new parameters, and thus, shares the same $\theta$
of PUNN.
In PINN, $\hat{f}(t,x;\theta)$ is calculated by the automatic differentiation technique [53], which can be done by, e.g., the tf.gradients function in TensorFlow (https://www.tensorflow.org). The activation functions and the
connecting structure of neurons in PINN are designed to conduct the
differential operation in Eq. (2). We would like to emphasize that, the
connecting weights in PINN have fixed values which are determined by the
traffic flow model, and thus, $\hat{f}$ from PINN is only parameterized by
$\theta$.
The training data for PIDL consist of (1) observation points
$O=\\{(t^{(i)}_{o},x^{(i)}_{o})|i=1,...,N_{o}\\}$, (2) target values
$P=\\{\rho^{(i)}|i=1,...,N_{o}\\}$ (i.e., the true traffic states at the
observation points), and (3) auxiliary points
$A=\\{(t^{(j)}_{a},x^{(j)}_{a})|j=1,...,N_{a}\\}$. $i$ and $j$ are the indexes
of observation points and auxiliary points, respectively. One target value is
associated with one observation point, and thus, $O$ and $P$ have the same
indexing system (indexed by $i$). This paper uses the term, observed data, to
denote $\\{O,P\\}$. Both $O$ and $A$ are subsets of the grid points $G$ (i.e., $O\subset G$ and $A\subset G$). As will be seen in later sections, if there are
more equations other than Eq. (1) (such as boundary conditions) that are
needed to fully define the traffic flow, then more types of auxiliary points,
in addition to $A$, will be created accordingly.
Observation points are usually limited to certain locations of $[0,L]$, and
other areas cannot be observed. In the real world, this could represent the
case in which traffic measurements over time can be made at the locations
where sensing hardware, e.g., sensors and probes, are equipped. Therefore, in
general, only limited areas can be observed and we need estimation approaches
to infer unobserved traffic values. In contrast to $O$, auxiliary points $A$
have neither measurement requirements nor location limitations, and the number of points in $A$, as well as their distribution, is controllable. Usually, $A$ is randomly distributed in the spatio-temporal domain (in implementation, $A$ is
selected from $G$). As will be discussed later, auxiliary points $A$ are used
for regularization purposes, making the PUNN’s approximation consistent with
the PDE of the traffic flow model.
To train a PUNN for TSE, the loss is defined as follows:
$\begin{split}\begin{gathered}Loss_{\theta}=\alpha\cdot MSE_{o}+\beta\cdot
MSE_{a}\\\
=\alpha\cdot\underbrace{\frac{1}{N_{o}}\sum\limits_{i=1}^{N_{o}}|\hat{\rho}(t^{(i)}_{o},x^{(i)}_{o};\theta)-\rho^{(i)}|^{2}}_{data\
discrepancy}\\\
+\beta\cdot\underbrace{\frac{1}{N_{a}}\sum\limits_{j=1}^{N_{a}}|\hat{f}(t^{(j)}_{a},x^{(j)}_{a};\theta)|^{2}}_{physical\
discrepancy},\end{gathered}\end{split}$ (3)
where $\alpha$ and $\beta$ are hyperparameters for balancing the contribution
to the loss made by data discrepancy and physical discrepancy, respectively.
The data discrepancy is defined as the mean square error (MSE) between
approximation $\hat{\rho}$ on $O$ and target values $P$. The physical
discrepancy is the MSE between residual values on $A$ and zero, quantifying
the extent to which the approximation deviates from the traffic model.
Given the training data, we apply neural network training algorithms, such as
backpropagation learning, to solve $\theta^{*}=\mathrm{argmin}_{\theta}\
Loss_{\theta}$, and this learning process is regularized by PINN via physical
discrepancy. The PUNN parameterized by $\theta^{*}$ can then be used to
approximate the traffic state at each point of $G$ (in fact, each point in the
continuous domain $D$ can also be approximated). The approximation
$\hat{\rho}$ is expected to be consistent with Eq. (1).
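To make the training procedure above concrete, the following is a minimal, illustrative sketch in TensorFlow 2. It is a sketch under our own assumptions, not the authors' implementation: the names `punn`, `residual`, and `train_step` are hypothetical, and the Greenshields flux is used only as a placeholder for a concrete $\mathcal{N}[\cdot]$.

```python
import tensorflow as tf

# Physics-uninformed network (PUNN): maps (t, x) -> rho_hat.
punn = tf.keras.Sequential(
    [tf.keras.layers.Dense(20, activation="tanh") for _ in range(8)]
    + [tf.keras.layers.Dense(1)]
)

U_MAX, RHO_MAX = 1.0, 1.0  # placeholder Greenshields parameters

def residual(t, x):
    """PINN residual f_hat = rho_t + (Q(rho))_x of Eq. (2) via autodiff."""
    with tf.GradientTape(persistent=True) as tape:
        tape.watch([t, x])
        rho = punn(tf.concat([t, x], axis=1))
        q = rho * U_MAX * (1.0 - rho / RHO_MAX)  # placeholder flux Q(rho)
    return tape.gradient(rho, t) + tape.gradient(q, x)

optimizer = tf.keras.optimizers.Adam(1e-3)

def train_step(t_o, x_o, rho_o, t_a, x_a, alpha=100.0, beta=100.0):
    """One gradient step on the loss of Eq. (3); inputs are (N, 1) tensors."""
    with tf.GradientTape() as tape:
        mse_o = tf.reduce_mean((punn(tf.concat([t_o, x_o], axis=1)) - rho_o) ** 2)
        mse_a = tf.reduce_mean(residual(t_a, x_a) ** 2)
        loss = alpha * mse_o + beta * mse_a
    grads = tape.gradient(loss, punn.trainable_variables)
    optimizer.apply_gradients(zip(grads, punn.trainable_variables))
    return loss
```

Here the observation tensors `(t_o, x_o, rho_o)` carry $\\{O,P\\}$ and `(t_a, x_a)` carries $A$; iterating `train_step` approximates $\theta^{*}=\mathrm{argmin}_{\theta}\ Loss_{\theta}$.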
### III-B PIDL for TSE and Model Parameter Discovery
In addition to TSE with a known PDE traffic flow model, PIDL can handle traffic flow models with unknown parameters, i.e., discover the unknown parameters $\lambda$ of the model that best describe the observed data. Let
$\mathcal{N}[\ \cdot\ ;\lambda]$ be a general nonlinear differential operator
parameterized by some unknown model parameters $\lambda$. Then, the problem is
to find the parameters $\lambda$ in the traffic flow model:
$\rho_{t}(t,x)+\mathcal{N}[\rho(t,x);\lambda]=0,x\in\Omega,t\in[0,T],$ (4)
that best describe the observed data, and at the same time, approximate the
traffic state $\rho(t,x)$ at each point in $G$ that satisfies this model.
For this problem, the residual value of traffic state approximation
$\hat{\rho}(t,x;\theta)$ from PUNN is redefined as
$\hat{f}(t,x;\theta,\boldsymbol{\lambda}):=\hat{\rho}_{t}(t,x;\theta)+\mathcal{N}[\hat{\rho}(t,x;\theta);\boldsymbol{\lambda}].$
(5)
The PINN, by which Eq. (5) is calculated, is related to both $\theta$ and
$\lambda$. The way in which training data are obtained and distributed remains the same as in Section III-A. The loss function for both discovering the unknown
parameters of traffic flow model and solving the TSE is defined as:
$\begin{split}\begin{gathered}Loss_{\theta,\boldsymbol{\lambda}}=\alpha\cdot
MSE_{o}+\beta\cdot MSE_{a}\\\
=\alpha\cdot\underbrace{\frac{1}{N_{o}}\sum\limits_{i=1}^{N_{o}}|\hat{\rho}(t^{(i)}_{o},x^{(i)}_{o};\theta)-\rho^{(i)}|^{2}}_{data\
discrepancy}\\\
+\beta\cdot\underbrace{\frac{1}{N_{a}}\sum\limits_{j=1}^{N_{a}}|\hat{f}(t^{(j)}_{a},x^{(j)}_{a};\theta,\boldsymbol{\lambda})|^{2}}_{physical\
discrepancy}.\end{gathered}\end{split}$ (6)
Given the training data, we apply neural network training algorithms to solve
$(\theta^{*},\lambda^{*})=\mathrm{argmin}_{\theta,\lambda}\
Loss_{\theta,\lambda}$. Then, the $\lambda^{*}$-parameterized traffic flow
model of Eq. (4) is the most likely physics that generates the observed data,
and the $\theta^{*}$-parameterized PUNN can then be used to approximate the
traffic states on $G$, which are consistent with the discovered traffic flow
model.
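Structurally, the only change from the TSE-only setting is that $\lambda$ becomes a set of trainable variables shared with the PINN. A minimal sketch under the same assumptions as the previous one (variable names are ours; a Greenshields-type flux stands in for $\mathcal{N}[\,\cdot\,;\lambda]$):

```python
import tensorflow as tf

punn = tf.keras.Sequential(
    [tf.keras.layers.Dense(20, activation="tanh") for _ in range(8)]
    + [tf.keras.layers.Dense(1)]
)
punn.build(input_shape=(None, 2))  # create weights before collecting them

# Unknown model parameters lambda, initialized with illustrative guesses.
rho_max = tf.Variable(2.0)
u_max = tf.Variable(0.5)

def residual(t, x):
    """Residual of Eq. (5): rho_t + (Q(rho; lambda))_x with learnable lambda."""
    with tf.GradientTape(persistent=True) as tape:
        tape.watch([t, x])
        rho = punn(tf.concat([t, x], axis=1))
        q = rho * u_max * (1.0 - rho / rho_max)
    return tape.gradient(rho, t) + tape.gradient(q, x)

# Training is as in the previous sketch, except the loss gradient is taken
# with respect to the extended list, realizing argmin over (theta, lambda):
trainables = punn.trainable_variables + [rho_max, u_max]
```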
### III-C Problem Statement Summary for PIDL-based TSE
This subsection briefly summarizes the problem statements for PIDL-based TSE.
For a spatio-temporal domain represented by grid points $G=\\{(t^{(r)},x^{(r)})\,|\,r=1,\ldots,N_{g}\\}$, given observation points $O$, target values $P$, and auxiliary points $A$ (note $\\{O,P\\}$ defines the observed data)
$\begin{split}\left\\{{\begin{array}[]{*{20}l}\ O=\\{(t^{(i)}_{o},x^{(i)}_{o})\,|\,i=1,\ldots,N_{o}\\}\subset G\\\ \ P=\\{\rho^{(i)}\,|\,i=1,\ldots,N_{o}\\}\\\ \ A=\\{(t^{(j)}_{a},x^{(j)}_{a})\,|\,j=1,\ldots,N_{a}\\}\subset G\end{array}}\right.\end{split},$ (7)
with the design of two neural networks: a PUNN for traffic state approximation
$\hat{\rho}(t,x;\theta)$ and a PINN for computing the residual
$\hat{f}(t,x;\theta)$ of $\hat{\rho}(t,x;\theta)$, then a general PIDL for TSE
is to solve the problem:
$\begin{split}\begin{gathered}\mathop{\min}\limits_{\theta}Loss_{\theta},\quad\text{where}\\\ Loss_{\theta}=\frac{\alpha}{N_{o}}\sum\limits_{i=1}^{N_{o}}|\hat{\rho}(t^{(i)}_{o},x^{(i)}_{o};\theta)-\rho^{(i)}|^{2}+\frac{\beta}{N_{a}}\sum\limits_{j=1}^{N_{a}}|\hat{f}(t^{(j)}_{a},x^{(j)}_{a};\theta)|^{2},\\\ \hat{f}(t,x;\theta):=\hat{\rho}_{t}(t,x;\theta)+\mathcal{N}[\hat{\rho}(t,x;\theta)].\end{gathered}\end{split}$ (8)
The PUNN parameterized by the solution $\theta^{*}$ can then be used to
approximate the traffic states on $G$.
In contrast, a general PIDL for both TSE and model parameter discovery is to
solve the problem:
$\begin{split}\begin{gathered}\mathop{\min}\limits_{\theta,\lambda}Loss_{\theta,\lambda},\quad\text{where}\\\ Loss_{\theta,\lambda}=\frac{\alpha}{N_{o}}\sum\limits_{i=1}^{N_{o}}|\hat{\rho}(t^{(i)}_{o},x^{(i)}_{o};\theta)-\rho^{(i)}|^{2}+\frac{\beta}{N_{a}}\sum\limits_{j=1}^{N_{a}}|\hat{f}(t^{(j)}_{a},x^{(j)}_{a};\theta,\lambda)|^{2},\\\ \hat{f}(t,x;\theta,\lambda):=\hat{\rho}_{t}(t,x;\theta)+\mathcal{N}[\hat{\rho}(t,x;\theta);\lambda],\end{gathered}\end{split}$ (9)
where residual $\hat{f}$ is related to both PUNN parameters $\theta$ and
traffic flow model parameters $\lambda$. The PUNN parameterized by the
solution $\theta^{*}$ can then be used to approximate the traffic states on
$G$, and solution $\lambda^{*}$ is the most likely model parameters that
describe the observed data.
In the next three sections, we will demonstrate PIDL’s ability to estimate
traffic density dynamics and discover model parameters using two popular
highway traffic flow models, Greenshields-based LWR and three-parameter-based
LWR. Then, we extend the PIDL-based approach to a real-world highway scenario
in the NGSIM data.
## IV PIDL for Greenshields-Based LWR
This example aims to show the ability of our method to estimate the traffic
dynamics governed by the LWR model based on Greenshields flux function.
Define flow rate $Q$ to be the number of vehicles passing a specific position
on the road per unit time, and traffic density $\rho$ to be the average number
of vehicles per unit length of the road. By defining $u$ as the average speed
of a specific position on the road, we can deduce $Q=\rho u$. The traffic flux
$Q(\rho)$ describes $Q$ as a function of $\rho$. We treat $\rho$ as the basic
traffic state variable to estimate. Greenshields flux [54] is a basic and
popular choice of flux function, which is defined as $Q(\rho)=\rho
u_{max}(1-\rho/\rho_{max})$, where $u_{max}$ and $\rho_{max}$ are maximum
velocity and maximum (jam) density, respectively. This flux function has a
quadratic form with two coefficients $u_{max}$ and $\rho_{max}$, which are
usually fitted with data.
The LWR model [3, 4] describes the macroscopic traffic flow dynamics as
$\rho_{t}+(Q(\rho))_{x}=0$, which is derived from a conservation law of
vehicles. In order to reproduce more phenomena in observed traffic data, such as delayed driving behaviors due to drivers’ reaction time, diffusively corrected LWR models were introduced by adding a diffusion term containing a second-order derivative $\rho_{xx}$ [55, 56, 57, 58, 59]. We focus on one
version of the diffusively corrected LWRs:
$\rho_{t}+(Q(\rho))_{x}=\epsilon\rho_{xx}$, where $\epsilon$ is the diffusion
coefficient.
In this section, we study the Greenshields-based LWR traffic flow model of a
“ring road”:
$\begin{split}\begin{gathered}\rho_{t}+(Q(\rho))_{x}=\epsilon\rho_{xx},\
t\in[0,3],\ x\in[0,1],\\\ Q(\rho)=\rho\cdot
u_{max}\Bigl{(}1-\frac{\rho}{\rho_{max}}\Bigl{)},\\\ \rho(t,0)=\rho(t,1)\ \ \
(boundary\ condition\ 1),\\\ \rho_{x}(t,0)=\rho_{x}(t,1)\ \ \ (boundary\
condition\ 2),\end{gathered}\end{split}$ (10)
where $\rho_{max}=1.0$, $u_{max}=1.0$, and $\epsilon=0.005$. $\rho_{max}$ and
$u_{max}$ are usually determined by physical restrictions of the road and
vehicles.
Given the bell-shaped initial density shown in Fig. 2, we apply the Godunov scheme [60] to solve Eqs. (10) on 960 (time) $\times$ 240 (space) grid points $G$ evenly deployed throughout the $[0,3]\times[0,1]$ domain. In this case, the total number of grid points is $N_{g}=960\times 240$. The numerical
solution is shown in Fig. 3. From the figure, we can visualize the dynamics as
follows: At $t=0$, there is a peak density at the center of the road segment,
and this peak evolves to propagate along the direction of $x$, which is known
as the phenomenon of traffic shockwave. Since this is a ring road, the
shockwave reaching $x=1$ continues at $x=0$. This numerical solution of the
Greenshields-based LWR model is treated as the ground-truth traffic density.
We will apply a PIDL-based approach to estimate the entire traffic density field using observed data.
Figure 2: Bell-shaped initial traffic density $\rho$ over $x\in[0,1]$ at $t=0$.
Figure 3: Numerical solution of Eqs. (10) using the Godunov scheme. We treat
the numerical solution as the ground truth in our TSE experiments.
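For readers who wish to reproduce a ground truth of this kind, the following is a minimal NumPy sketch of a Godunov solver for Eqs. (10). It is our own illustrative implementation, not the authors' code: it uses the demand–supply form of the Godunov flux (valid for the concave Greenshields flux), central differencing for the diffusion term, periodic (ring-road) boundaries, and a time step chosen to satisfy the explicit stability constraints; the bell-shaped initial profile is likewise only an approximation of Fig. 2.

```python
import numpy as np

U_MAX, RHO_MAX, EPS = 1.0, 1.0, 0.005
RHO_C = RHO_MAX / 2.0  # critical density of the Greenshields flux

def Q(rho):
    return rho * U_MAX * (1.0 - rho / RHO_MAX)  # Greenshields flux

def godunov_flux(rho_l, rho_r):
    """Godunov flux in demand-supply form for a concave flux."""
    demand = Q(np.minimum(rho_l, RHO_C))
    supply = Q(np.maximum(rho_r, RHO_C))
    return np.minimum(demand, supply)

def solve(rho0, T=3.0, nx=240):
    dx = 1.0 / nx
    # Explicit stability: advection CFL and the diffusion limit.
    dt = 0.4 * min(dx / U_MAX, dx**2 / (2.0 * EPS))
    rho = rho0.copy()
    for _ in range(int(np.ceil(T / dt))):
        left, right = np.roll(rho, 1), np.roll(rho, -1)  # ring road
        flux_in = godunov_flux(left, rho)    # fluxes at left interfaces
        flux_out = godunov_flux(rho, right)  # fluxes at right interfaces
        diffusion = EPS * (right - 2.0 * rho + left) / dx**2
        rho = rho + dt * (-(flux_out - flux_in) / dx + diffusion)
    return rho

# An illustrative bell-shaped initial condition on x in [0, 1].
x = (np.arange(240) + 0.5) / 240.0
rho0 = 0.1 + 0.8 * np.exp(-((x - 0.5) ** 2) / (2 * 0.05**2))
rho_final = solve(rho0)
```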
### IV-A PIDL Architecture Design
Based on Eqs. (10), we define the residual value of PUNN’s traffic density
estimation $\hat{\rho}(t,x;\theta)$ as
$\hat{f}(t,x;\theta):=\hat{\rho}_{t}(t,x;\theta)+(Q(\hat{\rho}(t,x;\theta)))_{x}-\epsilon\hat{\rho}_{xx}(t,x;\theta).$
(11)
The residual value is calculated by a PINN.
Given the definition of $\hat{f}$, the corresponding PIDL architecture that
encodes Greenshields-based LWR model is shown in Fig. 4. This architecture
consists of a PUNN for traffic density estimation, followed by a PINN for
calculating the residual Eq. (11). PUNN parameterized by $\theta$ is designed
as a fully-connected feedforward neural network with 8 hidden layers and 20
hidden nodes in each hidden layer. Hyperbolic tangent function (tanh) is used
as the activation function for each hidden neuron in PUNN. In contrast, in
PINN, connecting weights are fixed and the activation function of each node is designed to conduct a specific nonlinear operation for calculating an
intermediate value of $\hat{f}$.
To customize the training of PIDL to Eqs. (10), in addition to the training
data $O$, $P$ and $A$ defined in Section III-A, we need to introduce boundary
auxiliary points
$B=\\{(t^{(k)}_{b},0)|k=1,...,N_{b}\\}\cup\\{(t^{(k)}_{b},1)|k=1,...,N_{b}\\}$,
for learning the two boundary conditions in Eqs. (10).
For experiments of state estimation without model parameter discovery, where
the PDE parameters are known, we design the following loss:
$\begin{split}\begin{gathered}Loss_{\theta}=\alpha\cdot MSE_{o}+\beta\cdot
MSE_{a}+\gamma\cdot MSE_{b1}+\eta\cdot MSE_{b2}\\\
=\frac{\alpha}{N_{o}}\sum\limits_{i=1}^{N_{o}}|\hat{\rho}(t^{(i)}_{o},x^{(i)}_{o};\theta)-\rho^{(i)}|^{2}+\frac{\beta}{N_{a}}\sum\limits_{j=1}^{N_{a}}|\hat{f}(t^{(j)}_{a},x^{(j)}_{a};\theta)|^{2}\\\
+\frac{\gamma}{N_{b}}\sum\limits_{k=1}^{N_{b}}|\hat{\rho}(t^{(k)}_{b},0;\theta)-\hat{\rho}(t^{(k)}_{b},1;\theta)|^{2}\\\
+\frac{\eta}{N_{b}}\sum\limits_{k=1}^{N_{b}}|\hat{\rho}_{x}(t^{(k)}_{b},0;\theta)-\hat{\rho}_{x}(t^{(k)}_{b},1;\theta)|^{2},\end{gathered}\end{split}$
(12)
where each value used by the loss is the output of a certain node of the PINN (see Fig. 4). $MSE_{b1}$, scaled by $\gamma$, is the mean square error between
estimations at the two boundaries $x=0$ and $x=1$. $MSE_{b2}$, scaled by
$\eta$, quantifies the difference of first order derivatives at the two
boundaries.
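The two boundary terms can be computed with the same automatic differentiation machinery; a minimal sketch under the assumptions of Section III (the `punn` and `boundary_losses` names are ours):

```python
import tensorflow as tf

# A placeholder PUNN mapping (t, x) -> rho_hat, as in Fig. 4.
punn = tf.keras.Sequential(
    [tf.keras.layers.Dense(20, activation="tanh") for _ in range(8)]
    + [tf.keras.layers.Dense(1)]
)

def boundary_losses(t_b):
    """Periodic-boundary terms MSE_b1 and MSE_b2 of Eq. (12); t_b is (N_b, 1)."""
    x0, x1 = tf.zeros_like(t_b), tf.ones_like(t_b)
    with tf.GradientTape(persistent=True) as tape:
        tape.watch([x0, x1])
        rho0 = punn(tf.concat([t_b, x0], axis=1))  # rho_hat at x = 0
        rho1 = punn(tf.concat([t_b, x1], axis=1))  # rho_hat at x = 1
    rho0_x = tape.gradient(rho0, x0)  # rho_hat_x at x = 0
    rho1_x = tape.gradient(rho1, x1)  # rho_hat_x at x = 1
    mse_b1 = tf.reduce_mean((rho0 - rho1) ** 2)
    mse_b2 = tf.reduce_mean((rho0_x - rho1_x) ** 2)
    return mse_b1, mse_b2
```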
Figure 4: PIDL architecture for Greenshields-based LWR in Eqs. (10),
consisting of a PUNN for traffic density estimation and a PINN for calculating
the residual Eq. (11). For experiments of estimation without parameter
discovery, all connecting weights, including $\rho_{max}=1$, $u_{max}=1$, and
$\epsilon=0.005$, are known and fixed in PINN. For experiments of estimation
with parameter discovery, $\rho_{max}$, $u_{max}$, and $\epsilon$ are learning
variables in PINN.
### IV-B TSE using Initial Observation
We start with justifying the ability of PIDL in Fig. 4 for estimating the
traffic density field in Fig. 3 using the observation of the entire road at
$t=0$ (i.e., the initial traffic density condition). To be specific, 240 grid
points along the space dimension at $t=0$ are used as the observation points
$O$ with $N_{o}=240$, and their corresponding densities constitute the target
values $P$ for training. There are $N_{a}=100,000$ auxiliary points in $A$
randomly selected from grid points $G$. $N_{b}=650$ out of 960 grid time
points (i.e., the time points on the temporal dimension of $G$) are randomly
selected to create boundary auxiliary points $B$. A sparse version of the
deployment of $O$, $A$ and $B$ in the spatio-temporal domain is shown in Fig.
5. Each observation point is associated with a target value in $P$. Note $O$,
$A$ and $B$ are all subsets of $G$.
Figure 5: A sparse presentation of the deployment of observation points $O$ at
the initial $t=0$, auxiliary points $A$ randomly selected from $G$, and
boundary auxiliary points $B$ deployed at the boundaries $x=0$ and $x=1$ for
certain time points.
We train the proposed PIDL on an NVIDIA Titan RTX GPU with 24 GB memory. By
default, we use the $\mathbbm{L}^{2}$ relative error on $G$ to quantify the
estimation error of the entire domain:
$Err=\frac{\sqrt{\sum_{r=1}^{N_{g}}\bigl{|}\hat{\rho}(t^{(r)},x^{(r)};\theta)-\rho(t^{(r)},x^{(r)})\bigr{|}^{2}}}{\sqrt{\sum_{r=1}^{N_{g}}\bigl{|}\rho(t^{(r)},x^{(r)})\bigr{|}^{2}}}.$ (13)
The reason for choosing the $\mathbbm{L}^{2}$ relative error is to normalize
the estimation inaccuracy, mitigating the influence from the scale of true
density values.
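Eq. (13) translates directly into code; a one-function NumPy sketch (array names are ours):

```python
import numpy as np

def l2_relative_error(rho_hat, rho_true):
    """L2 relative error of Eq. (13) over all grid points."""
    return np.linalg.norm(rho_hat - rho_true) / np.linalg.norm(rho_true)
```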
Figure 6: Top: Estimation of the traffic density dynamics
$\hat{\rho}(t,x;\theta^{*})$ on grid points $G$ in the domain using the
trained PUNN. Bottom: Snapshots of estimated and true traffic density at
certain time points.
We use the Xavier uniform initializer [61] to initialize $\theta$ of PUNN.
This neural network initialization method takes the number of a layer’s
incoming and outgoing network connections into account when initializing the
weights of that layer, which may lead to good convergence. Then, we train
the PUNN through the PIDL architecture using a popular stochastic gradient
descent algorithm, the Adam optimizer [62], for $2,000$ steps for a rough
training. A follow-up fine-grained training is done by the L-BFGS optimizer [63] to stabilize the convergence, and the process terminates when the loss change between two consecutive steps is no larger than $10^{-16}$. This
training process converges to a local optimum $\theta^{*}$ that minimizes the
loss in Eq. (12).
The results of applying the PIDL to the Greenshields-based LWR are presented in
Fig. 6, where PUNN is parameterized by the optimal $\theta^{*}$. As shown in
Fig. 6, the estimation $\hat{\rho}(t,x;\theta^{*})$ is visually the same as
the true dynamics $\rho(t,x)$ in Fig. 3. By looking into the estimated and
true traffic density over $x$ at certain time points, there is good agreement between the two traffic density curves. The $\mathbbm{L}^{2}$ relative
error of the estimation to the true traffic density is $1.6472\times 10^{-2}$.
Empirically, the difference cannot be visually distinguished when the
estimation error is smaller than $6\times 10^{-2}$.
### IV-C TSE using Observation from Loop Detectors
We now apply PIDL to the same TSE problem, but using observation from loop
detectors, i.e., only the traffic density at certain locations where loop
detectors are installed can be observed. By default, loop detectors are evenly
located along the road. We would like to clarify that in this paper, the
training data are the observed data from detectors, i.e., the traffic states
on the points at certain locations where loops are equipped. As implied by Eq.
(13), the test data are the traffic states on the grid points $G$, which
represent the whole spatio-temporal domain of the selected highway segment
(i.e., a fixed region). The test data consist of the observed data (i.e., the
training data) and the data that are not observed (the test data is defined in
this way per the definition of TSE used in this paper).
Figure 7: A sparse presentation of the deployment of $N_{o}=3\times 960$
observation points $O$ for three loop detectors. Corresponding density target
values $P$ are collected by the loop detectors. The deployment of auxiliary
points $A$ and boundary auxiliary points $B$ remains the same as Fig. 5.
Fig. 7 shows the deployment of observation points $O$ for three loop detectors, where the detectors are located evenly on the road at $x=0$, $x=0.5$ and
$x=1$. The traffic density records of those locations over time constitute the
target values $P$.
Figure 8: Greenshields-based PIDL estimation error vs NN over different numbers of loop detectors.
Using the same training process in Section IV-B, we conduct PIDL-based TSE
experiments with different numbers of loop detectors. For each number of loop
detectors, we make 100 trials with hyperparameters randomly selected between $0$ and $200$. The minimal-achievable estimation errors of PIDL over the
numbers of loop detectors are presented in Fig. 8, which are compared with the
pure NN that is not regularized by the physics. The estimation errors of both
methods decrease as more loop detectors are installed to collect data, and the
PIDL outperforms the NN especially when the loop numbers are small. The
results demonstrate that the PIDL-based approach is data efficient as it can
handle the TSE task with fewer loop detectors for the traffic dynamics
governed by the Greenshields-based LWR.
### IV-D TSE and Parameter Discovery using Loop Detectors
This subsection justifies the ability of the PIDL architecture in Fig. 4 for
dealing with both TSE and model parameter discovery. In this experiment, three
model parameters $\rho_{max}$, $u_{max}$ and $\epsilon$ are encoded as
learning variables in PINN. We denote $\lambda=(\rho_{max},u_{max},\epsilon)$
in this experiment. The residual and loss become
$\begin{split}\hat{f}(t,x;\theta,\boldsymbol{\lambda}):=&\hat{\rho}_{t}(t,x;\theta)+(Q(\hat{\rho}(t,x;\theta);\boldsymbol{\rho_{max}},\boldsymbol{u_{max}}))_{x}\\\
&-\boldsymbol{\epsilon}\hat{\rho}_{xx}(t,x;\theta),\end{split}$ (14)
and
$\begin{split}Loss_{\theta,\boldsymbol{\lambda}}&=\frac{\alpha}{N_{o}}\sum\limits_{i=1}^{N_{o}}|\hat{\rho}(t^{(i)}_{o},x^{(i)}_{o};\theta)-\rho^{(i)}|^{2}\\\
&+\frac{\beta}{N_{a}}\sum\limits_{j=1}^{N_{a}}|\hat{f}(t^{(j)}_{a},x^{(j)}_{a};\theta,\boldsymbol{\lambda})|^{2}\\\
&+\frac{\gamma}{N_{b}}\sum\limits_{k=1}^{N_{b}}|\hat{\rho}(t^{(k)}_{b},0;\theta)-\hat{\rho}(t^{(k)}_{b},1;\theta)|^{2}\\\
&+\frac{\eta}{N_{b}}\sum\limits_{k=1}^{N_{b}}|\hat{\rho}_{x}(t^{(k)}_{b},0;\theta)-\hat{\rho}_{x}(t^{(k)}_{b},1;\theta)|^{2},\end{split}$
(15)
respectively. We use $N_{a}=150,000$ auxiliary points and other experimental
configurations remain unchanged. We conduct PIDL-based TSE experiments using
different numbers of loop detectors to solve
$(\theta^{*},\lambda^{*})=\mathrm{argmin}_{\theta,\lambda}\
Loss_{\theta,\lambda}$. In addition to the traffic density estimation errors
of $\hat{\rho}(t,x;\theta^{*})$, we evaluate the estimated model parameters
$\lambda^{*}$ using the $\mathbbm{L}^{2}$ relative error and present them in
percentage. The results are shown in Table I, where the errors below the
dashed line are acceptable.
TABLE I: Errors on Estimated Traffic Density and Model Parameters
$m$ | $\hat{\rho}(t,x;\theta^{*})$ | $\rho^{*}_{max}$(%) | $u^{*}_{max}$(%) | $\epsilon^{*}$(%)
---|---|---|---|---
2 | 6.007$\times 10^{-1}$ | 982.22 | 362.50 | $>$1000
3 | 4.878$\times 10^{-2}$ | 0.53 | 0.05 | 1.00
4 | 3.951$\times 10^{-2}$ | 0.95 | 0.54 | 4.37
5 | 3.881$\times 10^{-2}$ | 0.16 | 0.14 | 4.98
6 | 2.724$\times 10^{-2}$ | 0.07 | 0.18 | 1.38
8 | 3.441$\times 10^{-3}$ | 0.25 | 0.47 | 1.40
* •
$m$ stands for the number of loop detectors.
$\lambda^{*}=(\rho^{*}_{max},u^{*}_{max},\epsilon^{*})$ are estimated
parameters, compared to the true parameters
$\rho_{max}=1,u_{max}=1,\epsilon=0.005$.
From the table, we can see that the traffic density estimation errors improve
as the number of loop detectors increases. When more than two loop detectors
are used, the learned model parameters are able to converge to the true
parameters $\lambda$. Specifically, for three loop detectors, in addition to a
good traffic density estimation error of 4.878$\times 10^{-2}$, the model
parameters converge to $\rho^{*}_{max}=1.00532$, $u^{*}_{max}=0.99955$ and
$\epsilon^{*}=0.00495$, which are very close to the true ones. The results
demonstrate that the PIDL method can handle both TSE and model parameter discovery
with three loop detectors for the traffic dynamics from the Greenshields-based
LWR.
## V PIDL for Three-Parameter-Based LWR
This example aims to show the ability of our method to handle the traffic
dynamics governed by the three-parameter-based LWR.
The traffic flow model considered in this example is the same as Eqs. (10)
except for a different flux function $Q(\rho)$: three-parameter flux function
[23, 64]. This flux function is triangle-shaped, which is a differentiable
version of the piecewise linear Daganzo-Newell flux function [65, 66]. This
function is defined as:
$\begin{split}Q(\rho)&=\sigma\Bigl{(}a+(b-a)\frac{\rho}{\rho_{max}}-\sqrt{1+y^{2}}\Bigl{)},\\\
&a=\sqrt{1+\bigl{(}\delta p\bigl{)}^{2}},\\\
&b=\sqrt{1+\bigl{(}\delta(1-p)\bigl{)}^{2}},\\\
&y=\delta\Bigl{(}\frac{\rho}{\rho_{max}}-p\Bigl{)},\end{split}$ (16)
where $\delta$, $p$ and $\sigma$ are the three free parameters after which the function is named. The parameters $\sigma$ and $p$ control the maximum flow rate and
critical density (where the flow is maximized), respectively. $\delta$
controls the roundness level of $Q(\rho)$. In addition to the above-mentioned
three parameters, $\rho_{max}$ and diffusion coefficient $\epsilon$ are also
part of the model parameters.
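For reference, Eq. (16) maps directly to code; a NumPy sketch (the function name is ours):

```python
import numpy as np

def three_param_flux(rho, delta, p, sigma, rho_max):
    """Three-parameter flux Q(rho) of Eq. (16); vanishes at rho = 0 and rho_max."""
    a = np.sqrt(1.0 + (delta * p) ** 2)
    b = np.sqrt(1.0 + (delta * (1.0 - p)) ** 2)
    y = delta * (rho / rho_max - p)
    return sigma * (a + (b - a) * rho / rho_max - np.sqrt(1.0 + y**2))
```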
Using the same domain and grid points setting in Section IV and initializing
the dynamics with the bell-shaped density in Fig. 2, the numerical solution of
three-parameter-based LWR is presented in Fig. 9. The parameters are given as
$\delta=5$, $p=0.2$, $\sigma=0.1$, $\rho_{max}=1$ and $\epsilon=0.005$. We treat
the numerical solution as the ground truth and conduct our TSE experiments.
Figure 9: Numerical solution of three-parameter-based LWR on grid points $G$
using the Godunov scheme. The numerical solution serves as the ground truth.
### V-A PIDL Architecture Design
Using the definition of the residual value $\hat{f}$ in Eq. (11), the
corresponding PIDL architecture that encodes three-parameter-based LWR is
shown in Fig. 10. Different from Fig. 4 where model parameters can be easily
encoded as connecting weights, there are too many intermediate values
involving the same model parameter, and thus, we cannot simply encode the
model parameters as connecting weights. Instead, we use variable nodes (see
the blue rectangular nodes in the graph) for holding model parameters, such
that we can link the model parameter nodes to neurons that are directly
related.
Figure 10: PIDL architecture for three-parameter-based LWR using flux
$Q(\rho)$ defined in Eq. (16). Model parameters are held by variable nodes
(blue rectangular nodes). Connecting weights in PINN are one by default and
specific operations for intermediate values are calculated by PINN nodes via
their activation functions. For experiments of estimation without parameter
discovery, model parameters $\delta=5$, $p=0.2$, $\sigma=0.1$, $\rho_{max}=1$ and $\epsilon=0.005$ are known and fixed in PINN. For experiments of estimation
with parameter discovery, model parameters are learning variables in PINN.
For the training loss, Eq. (12) is used, which includes the regularization of
minimizing the residual and the difference between the density values at the
two boundaries.
### V-B TSE using Initial Observation
We justify the ability of the PIDL architecture in Fig. 10 for estimation of
the traffic density field in Fig. 9 using the observation of the initial
traffic density of the entire road at $t=0$. The training data selection is
the same as in Section IV-B, except that $N_{a}=150,000$ auxiliary points $A$
are used, because this is a relatively more complicated model.
Figure 11: Top: Estimation of the traffic density dynamics
$\hat{\rho}(t,x;\theta^{*})$ on $G$ in the domain using the trained PUNN for
three-parameter-based LWR. Bottom: Snapshots of estimated and true traffic
density at certain time points.
The training process finds the optimal $\theta^{*}$ for PUNN that minimizes
the loss in Eq. (12). The results of applying the $\theta^{*}$-parameterized
PUNN to the three-parameter-based LWR dynamics estimation are presented in
Fig. 11. The performance of the trained PUNN is satisfactory as the estimation
$\hat{\rho}(t,x;\theta^{*})$ achieves an $\mathbbm{L}^{2}$ relative error of
$1.0347\times 10^{-2}$, which is much smaller than the distinguishable error
$6\times 10^{-2}$.
### V-C TSE using Observation from Loop Detectors
We conduct PIDL-based TSE experiments with different numbers of loop detectors
and compare the results with the NN. Fig. 12 shows that the PIDL approach achieves lower errors than the pure NN, justifying the benefits of PIDL. A logarithmic error axis is used to better visualize the difference in estimation errors.
Figure 12: Three-parameter-based PIDL estimation error vs NN over different numbers of loop detectors.
From the figure, we observe that the estimation errors decrease as more loop
detectors are installed to collect data. A significant improvement in
estimation quality is achieved when three loop detectors are installed,
obtaining an error of $2.745\times 10^{-2}$ compared to $6.066\times 10^{-2}$
for two loop detectors. The NN needs more detectors to achieve these low
errors.
### V-D TSE and Parameter Discovery using Loop Detectors
This experiment focuses on using PIDL to address both TSE and model parameter
discovery. Five learning variables
$\lambda=(\delta,p,\sigma,\rho_{max},\epsilon)$ need to be determined. The
residual becomes:
$\begin{split}\hat{f}(t,x;\theta,\boldsymbol{\lambda})&:=\hat{\rho}_{t}(t,x;\theta)\\\
+&(Q(\hat{\rho}(t,x;\theta);\boldsymbol{\delta,p,\sigma,\rho_{max}}))_{x}-\boldsymbol{\epsilon}\hat{\rho}_{xx}(t,x;\theta),\end{split}$
(17)
and the $Loss_{\theta,\boldsymbol{\lambda}}$ in Eq. (15) is used in the
training process. All experimental configurations and training process remain
the same as Section V-C. We solve
$(\theta^{*},\lambda^{*})=\mathrm{argmin}_{\theta,\lambda}\
Loss_{\theta,\lambda}$, and the results of traffic density estimation and
model parameter discovery are presented in Table II. The errors below the
dashed line are acceptable.
TABLE II: Errors on Estimated Traffic Density and Model Parameters
$m$ | $\hat{\rho}(t,x;\theta^{*})$ | $\delta^{*}$(%) | $p^{*}$(%) | $\sigma^{*}$(%) | $\rho^{*}_{max}$(%) | $\epsilon^{*}$(%)
---|---|---|---|---|---|---
2 | 1.2040$\times 10^{0}$ | 58.19 | 153.72 | $>$1000 | $>$1000 | 99.98
3 | 7.550$\times 10^{-1}$ | 54.15 | 124.52 | $>$1000 | $>$1000 | 99.95
4 | 1.004$\times 10^{-1}$ | 59.07 | 72.63 | 381.31 | 14.60 | 6.72
5 | 3.186$\times 10^{-2}$ | 2.75 | 4.03 | 6.97 | 0.29 | 3.00
6 | 1.125$\times 10^{-2}$ | 0.69 | 2.49 | 2.26 | 0.49 | 7.56
8 | 7.619$\times 10^{-3}$ | 1.03 | 2.43 | 3.60 | 0.30 | 7.85
12 | 5.975$\times 10^{-3}$ | 1.83 | 1.65 | 1.24 | 0.82 | 5.70
* •
$m$ stands for the number of loop detectors.
$\lambda^{*}=(\delta^{*},p^{*},\sigma^{*},\rho^{*}_{max},\epsilon^{*})$ are
estimated parameters, compared to the true parameters
$\delta=5,p=0.2,\sigma=0.1,\rho_{max}=1,\epsilon=0.005$.
From the table, we observe that the PIDL architecture in Fig. 10 with five
loop detectors can achieve a satisfactory performance on both traffic density
estimation and model parameter discovery. In general, more loop detectors can
help our model improve the TSE accuracy, as well as the convergence to the
true model parameters. Specifically, for five loop detectors, an estimation
error of $3.186\times 10^{-2}$ is obtained, and the model parameters converge
to $\delta^{*}=4.86236$, $p^{*}=0.19193$, $\sigma^{*}=0.10697$,
$\rho^{*}_{max}=1.00295$ and $\epsilon^{*}=0.00515$, which are decently close
to the ground truth. The results demonstrate that PIDL can handle both TSE and
model parameter discovery with five loop detectors for the traffic dynamics
governed by the three-parameter-based LWR.
## VI PIDL-Based TSE on NGSIM Data
This section evaluates the PIDL-based TSE method using real traffic data, the
Next Generation SIMulation (NGSIM) dataset (www.fhwa.dot.gov/publications/research/operations/07030/index.cfm),
and compares its performance to baselines.
### VI-A NGSIM Dataset
The NGSIM dataset provides detailed information about vehicle trajectories on several road scenarios. We focus on a segment of US Highway 101 (US 101), monitored by a camera mounted on top of a high building on June 15, 2005. The locations and actions of each vehicle in the monitored region, covering around 680 meters over 2,770 seconds, were converted from the camera videos. This dataset has gained great attention in many traffic flow studies [67, 68, 69,
64].
We select the data from all the mainline lanes of the US 101 highway segment
to calculate the average traffic density for approximately every 30 meters over each 30-second period. After preprocessing to remove the periods when there
are non-monitored vehicles running on the road (at the beginning and end of
the video), there are 21 and 89 valid cells on the spatial and temporal
dimensions, respectively. We treat the center of each cell as a grid point.
Fig. 13 shows the spatio-temporal field of traffic density in the dataset.
From the figure, we can observe that shockwaves propagate backward along the highway.
Figure 13: Visualization of the average traffic density on US 101 highway.
For TSE experiments in this section, loop detectors are used to provide
observed data with a recording frequency of 30 seconds. By default, they are
evenly installed on the highway segment. Since no ground-truth model
parameters are available for NGSIM, we skip the parameter discovery
experiments.
### VI-B TSE Methods for Real Data
We first introduce the PIDL-based method for the real-world TSE problem, and
then, describe the baseline TSE methods.
#### VI-B1 PIDL-based Method
This method is based on a three-parameter-based LWR model introduced by [64],
which is shown to be more accurate in describing the NGSIM data than the
Greenshields-based LWR. To be specific, their model sets the diffusion
coefficient $\epsilon$ to $0$, and the traffic flow becomes
$\rho_{t}+(Q(\rho))_{x}=0$, with the three-parameter flux $Q(\rho)$ defined in
Eq. (16). The PIDL architecture is Fig. 10 is used. Because this is not a ring
road, no boundary conditions are involved.
The three-parameter flux $Q(\rho)$ is calibrated using the data from loop
detectors, which measure the traffic density $\rho$ (veh/km) and flow rate $Q$
(veh/h) over time. Specifically, as shown in Fig. 14, we use fundamental
diagram (FD) data (i.e., flow-density data points) to fit the three-parameter
flux function (an FD curve). Least-squares fitting [64] is used to determine
model parameters $\delta$, $p$, $\sigma$, and $\rho_{max}$. The calibrated
model is then encoded into PIDL.
Figure 14: Calibration of the three-parameter flux $Q(\rho)$ using data (blue
dots).
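A least-squares calibration of this kind can be sketched with scipy.optimize.curve_fit; this is an illustrative fit under our own assumptions (the paper follows the least-squares procedure of [64], whose exact implementation is not given here), and the FD data below are fabricated purely to make the sketch runnable:

```python
import numpy as np
from scipy.optimize import curve_fit

def three_param_flux(rho, delta, p, sigma, rho_max):
    """Three-parameter flux Q(rho) of Eq. (16)."""
    a = np.sqrt(1.0 + (delta * p) ** 2)
    b = np.sqrt(1.0 + (delta * (1.0 - p)) ** 2)
    y = delta * (rho / rho_max - p)
    return sigma * (a + (b - a) * rho / rho_max - np.sqrt(1.0 + y**2))

# Toy FD data: densities rho_obs (veh/km) and flow rates q_obs (veh/h).
rho_obs = np.linspace(5.0, 120.0, 50)
q_obs = three_param_flux(rho_obs, 5.0, 0.2, 250.0, 150.0)
q_obs += np.random.default_rng(0).normal(0.0, 20.0, rho_obs.shape)

# Least-squares fit of (delta, p, sigma, rho_max) to the FD data points.
p0 = (5.0, 0.3, 200.0, 140.0)  # initial guess
params, _ = curve_fit(three_param_flux, rho_obs, q_obs, p0=p0, maxfev=10000)
delta_fit, p_fit, sigma_fit, rho_max_fit = params
```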
We select $N_{a}=1,566$ auxiliary points $A$ from the grid $G$ (80%). The loss
in Eq. (3) (with boundary terms removed) is used for training the PUNN using
the observed data from loop detectors (i.e., both observation points $O$ and
corresponding target density values $P$). After hyperparameter tuning, we present the minimal-achievable estimation errors; the same applies to the other baselines by default.
#### VI-B2 Neural Network (NN) Method
This baseline method only contains the PUNN component in Fig. 10, and uses the
first term in Eq. (3) as the training loss. To be specific, we have
$Loss_{\theta}=\alpha\cdot MSE_{o}$ for this baseline model and optimize the
neural network using observed data for data consistency.
#### VI-B3 Long Short Term Memory (LSTM) based Method
This baseline method employs the LSTM architecture, which is customized from
the LSTM-based TSE proposed by [38]. This model can be applied to our problem
by leveraging the spatial dependency, i.e., to use the information of previous
cells to estimate the traffic density of the next cell along the spatial
dimension. We use the loss in Eq. (3) to train the LSTM-based model for TSE.
#### VI-B4 Extended Kalman Filter (EKF) Method
This method applies the extended Kalman filter (EKF), which is a nonlinear
version of the Kalman filter that linearizes the models about the current
estimation [70]. We use EKF to (1) estimate the whole spatio-temporal domain
based on calibrated three-parameter-based LWR and (2) update its estimation
based on the observed data from loop detectors.
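For illustration only, a minimal EKF skeleton for this setting might look as follows (our own sketch, not the authors' implementation): the state is the density vector over road cells, the prediction step is one Godunov update of the discretized LWR model, the Jacobian is approximated by finite differences, and the observation matrix $H$ simply selects the detector cells.

```python
import numpy as np

def lwr_step(rho, Q, rho_c, dt_over_dx):
    """One Godunov update of discretized LWR (zero-gradient boundaries)."""
    demand = Q(np.minimum(rho, rho_c))
    supply = Q(np.maximum(rho, rho_c))
    f = np.minimum(np.concatenate(([demand[0]], demand)),
                   np.concatenate((supply, [supply[-1]])))
    return rho - dt_over_dx * (f[1:] - f[:-1])

def jacobian_fd(fun, x, eps=1e-6):
    """Finite-difference Jacobian of the state-transition function."""
    J = np.zeros((x.size, x.size))
    fx = fun(x)
    for k in range(x.size):
        xp = x.copy()
        xp[k] += eps
        J[:, k] = (fun(xp) - fx) / eps
    return J

def ekf_step(x, P, z, H, fun, Qn, Rn):
    """One EKF predict-update cycle with process/measurement noise Qn, Rn."""
    A = jacobian_fd(fun, x)
    x_pred = fun(x)                        # predict with the LWR model
    P_pred = A @ P @ A.T + Qn              # propagate the covariance
    S = H @ P_pred @ H.T + Rn
    K = P_pred @ H.T @ np.linalg.inv(S)    # Kalman gain
    x_new = x_pred + K @ (z - H @ x_pred)  # correct with detector data z
    P_new = (np.eye(x.size) - K @ H) @ P_pred
    return x_new, P_new

# Usage sketch: fun = lambda r: lwr_step(r, Q, rho_c, dt_over_dx)
# then for each detector reading z: x, P = ekf_step(x, P, z, H, fun, Qn, Rn)
```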
To show the advantages of the proposed PIDL-based method, we limit our prior
knowledge about the traffic flow to the LWR model and investigate how well PIDL can take advantage of it. Under this setting, the TSE baselines we designed are either model-free (e.g., NN, LSTM) or LWR-based (e.g., EKF). TSE methods related to other traffic flow models, such as PW, ARZ and METANET, are left for future research.
### VI-C Results and Discussion
We apply PIDL-based and baseline methods to TSE on the NGSIM dataset with
different numbers of loop detectors. The results are presented in Fig. 15.
Figure 15: Comparison among PIDL-based methods and baseline methods.
From Fig. 15, we can observe that the PIDL-based method can achieve
satisfactory estimation (with an error below $0.06$) using eight loop
detectors. In contrast, other baselines need twelve or more to reach
acceptable errors. The EKF method performs better than the NN/LSTM-based
methods when the number of loop detectors is $\leq 6$, and NN/LSTM-based methods
outperform EKF when sufficient data are available from more loop detectors.
The results are reasonable as EKF is a model-driven approach, making
sufficient use of the traffic flow model to appropriately estimate unobserved
values when limited data are available. However, the model cannot fully
capture the complicated traffic dynamics in the real world, and as a result,
the EKF’s performance flattens out. NN/LSTM-based methods are data-driven
approaches which can make ample use of data to capture the dynamics. However,
their data efficiency is low and large amounts of data are needed for accurate
TSE. The PIDL-based method’s errors are generally below the baselines, because
it can make efficient use of both the traffic flow model and observed data.
Figure 16: Ratios of the contributions made by the model-driven and data-driven components to the training process.
In addition to data efficiency, PIDL has the advantage to flexibly adjust the
extent to which each of the data-driven and model-driven components will
affect the training process. This flexibility is made possible by the
hyperparameters $\alpha$ and $\beta$ in the loss Eq. (3). Fig. 16 shows the
$\beta/\alpha$ ratios corresponding to the minimal-achievable estimation
errors of the PIDL method presented in Fig. 15. When the number of loop
detectors is small, more priority should be given to the model-driven component (i.e., a larger ratio), as the model is the only knowledge that makes the estimation generalizable to the unobserved domain. When sufficient data are made available by a large number of loop detectors, more priority should be given to the data-driven component (i.e., a smaller ratio), as the traffic flow model could be imperfect and could adversely affect the estimation.
Figure 17: Computation time for training and estimation of PIDL and baselines.
Computation time for training and estimation is another topic of concern to TSE practitioners. The computation times corresponding to
Fig. 15 are presented in Fig. 17. Note that the training time and estimation
time of EKF are the same because it conducts training and estimation
simultaneously. For training, ML methods (NN and LSTM) consume more time than
EKF because it takes thousands of iterations for ML methods to learn and
converge, while EKF makes the estimation of the whole space per time step
forward, which involves no iterative learning. PIDL takes the most time for
training because it has to additionally backpropagate through the operations
in PINN for computing the residual loss. For estimation, both PIDL and NN map
the inputs directly to the traffic states and take the least computation time.
LSTM operates in a recurrent way which takes more time to estimate. EKF needs
to update the state transition and observation matrices for estimation at each
time step, and thus, consumes the most time to finish the estimation.
## VII Conclusion
In this paper, we introduced the PIDL framework to the TSE problem on highways
using loop detector data and demonstrated the significant benefit of using LWR-based physics to inform the deep learning. This framework can be used to
handle both traffic state estimation and model parameter discovery. The
experiments on real highway traffic data show that PIDL-based approaches can
outperform baseline methods in terms of estimation accuracy and data
efficiency. Our work may inspire extended research on using sophisticated
traffic models for more complicated traffic scenarios in the future.
Future work is threefold: First, we will consider integrating more sophisticated traffic models, such as the PW, METANET and ARZ models, into PIDL to further improve TSE and model parameter discovery. Second, we will
explore another type of observed data collected by probe vehicles, and study
whether PIDL can conduct a satisfactory TSE task using these mobile data.
Third, in practice, it is important to determine the optimal hyperparameters
of the loss given the observed data. We will explore hyperparameter search strategies, such as cross-validation, to mitigate this issue.
## Acknowledgment
The authors would like to thank Data Science Institute at Columbia University
for providing a seed grant for this research.
# Powers of sums and their associated primes
Hop D. Nguyen Institute of Mathematics, VAST, 18 Hoang Quoc Viet, Cau Giay,
10307 Hanoi, Vietnam<EMAIL_ADDRESS>and Quang Hoa Tran University of
Education, Hue University, 34 Le Loi St., Hue City, Viet Nam
<EMAIL_ADDRESS>
###### Abstract.
Let $A,B$ be polynomial rings over a field $k$, and $I\subseteq A,J\subseteq
B$ proper homogeneous ideals. We analyze the associated primes of powers of
$I+J\subseteq A\otimes_{k}B$ given the data on the summands. The associated
primes of large enough powers of $I+J$ are determined. We then answer
positively a question due to I. Swanson and R. Walker about the persistence
property of $I+J$ in many new cases.
###### Key words and phrases:
Associated prime; powers of ideals; persistence property
###### 2010 Mathematics Subject Classification:
13A15, 13F20
## 1\. Introduction
Inspired by work of Ratliff [19], Brodmann [2] proved that in any Noetherian
ring, the set of associated primes of powers of an ideal is eventually
constant for large enough powers. Subsequent work by many researchers has
shown that important invariants of powers of ideals, for example, the depth
and the Castelnuovo–Mumford regularity also eventually stabilize in the same
manner. For a recent survey on associated primes of powers and related
questions, we refer to Hoa’s paper [12].
Our work is inspired by the aforementioned result of Brodmann, and recent
studies about powers of sums of ideals [8, 9, 18]. Let $A,B$ be standard
graded polynomial rings over a field $k$, and $I\subseteq A,J\subseteq B$
proper homogeneous ideals. Denote by $R$ the ring $A\otimes_{k}B$ and by $I+J$ the ideal
$IR+JR$. Taking sums of ideals this way corresponds to the geometric operation
of taking fiber products of projective schemes over the field $k$. In [8, 9,
18], certain homological invariants of powers of $I+J$, notably the depth and
regularity, are computed in terms of the corresponding invariants of powers of
$I$ and $J$. In particular, we have exact formulas for
$\operatorname{depth}R/(I+J)^{n}$ and $\operatorname{reg}R/(I+J)^{n}$ if
either $\operatorname{char}k=0$, or $I$ and $J$ are both monomial ideals. It
is therefore natural to ask:
###### Question 1.1.
Is there an exact formula for $\operatorname{Ass}(R/(I+J)^{n})$ in terms of
the associated primes of powers of $I$ and $J$?
The case $n=1$ is simple and well-known: Using the fact that
$R/(I+J)\cong(A/I)\otimes_{k}(B/J)$, we deduce ([8, Theorem 2.5]):
$\operatorname{Ass}R/(I+J)=\mathop{\bigcup_{{\mathfrak{p}}\in\operatorname{Ass}_{A}(A/I)}}_{{\mathfrak{q}}\in\operatorname{Ass}_{B}(B/J)}\operatorname{Min}_{R}(R/{\mathfrak{p}}+{\mathfrak{q}}).$
Unexpectedly, in contrast to the case of homological invariants like depth or
regularity, we do not have a complete answer to Question 1.1 in characteristic
zero yet. One of our main results is the following partial answer to this
question.
###### Theorem 1.2 (Theorem 4.1).
Let $I$ be a proper homogeneous ideal of $A$ such that
$\operatorname{Ass}(A/I^{n})=\operatorname{Ass}(I^{n-1}/I^{n})$ for all $n\geq
1$. Let $J$ be any proper homogeneous ideal of $B$. Then for all $n\geq 1$,
there is an equality
$\operatorname{Ass}_{R}\frac{R}{(I+J)^{n}}=\bigcup_{i=1}^{n}\mathop{\bigcup_{{\mathfrak{p}}\in\operatorname{Ass}_{A}(A/I^{i})}}_{{\mathfrak{q}}\in\operatorname{Ass}_{B}(J^{n-i}/J^{n-i+1})}\operatorname{Min}_{R}(R/{\mathfrak{p}}+{\mathfrak{q}}).$
If furthermore $\operatorname{Ass}(B/J^{n})=\operatorname{Ass}(J^{n-1}/J^{n})$
for all $n\geq 1$, then for all such $n$, there is an equality
$\operatorname{Ass}_{R}\frac{R}{(I+J)^{n}}=\bigcup_{i=1}^{n}\mathop{\bigcup_{{\mathfrak{p}}\in\operatorname{Ass}_{A}(A/I^{i})}}_{{\mathfrak{q}}\in\operatorname{Ass}_{B}(B/J^{n-i+1})}\operatorname{Min}_{R}(R/{\mathfrak{p}}+{\mathfrak{q}}).$
The proof proceeds by filtering $R/(I+J)^{n}$ using exact sequences with terms
of the form $M\otimes_{k}N$, where $M,N$ are nonzero finitely generated
modules over $A,B$, respectively, and applying the formula for
$\operatorname{Ass}_{R}(M\otimes_{k}N)$.
Concerning Theorem 1.2, the hypothesis
$\operatorname{Ass}(A/I^{n})=\operatorname{Ass}(I^{n-1}/I^{n})$ for all $n\geq
1$ holds in many cases, for example, if $I$ is a monomial ideal of $A$, or if
$\operatorname{char}k=0$ and $\dim(A/I)\leq 1$ (see Theorem 3.2 for more
details). We are not aware of any ideal in a polynomial ring which does not
satisfy this condition (over non-regular rings, it is not hard to find such an
ideal). In characteristic zero, we show that the equality
$\operatorname{Ass}(A/I^{n})=\operatorname{Ass}(I^{n-1}/I^{n})$ holds for all
$I$ and all $n$ if $\dim A\leq 3$. If $\operatorname{char}k=0$ and $A$ has
Krull dimension four, using the Buchsbaum–Eisenbud structure theory of perfect
Gorenstein ideals of height three and work by Kustin and Ulrich [15], we
establish the equality
$\operatorname{Ass}(A/I^{2})=\operatorname{Ass}(I/I^{2})$ for all $I\subseteq
A$ (Theorem 3.5).
Another motivation for this work is the so-called persistence property for
associated primes. The ideal $I$ is said to have the _persistence property_ if
$\operatorname{Ass}(A/I^{n})\subseteq\operatorname{Ass}(A/I^{n+1})$ for all
$n\geq 1$. Ideals with this property abound, including for example complete
intersections. The persistence property has been considered by many people;
see, e.g., [5, 11, 14, 22]. As an application of Theorem 1.2, we prove that if
$I$ is a monomial ideal satisfying the persistence property, and $J$ is any
ideal, then $I+J$ also has the persistence property (Corollary 5.1). Moreover,
we generalize previous work due to I. Swanson and R. Walker [22] on this
question: If $I$ is an ideal such that $I^{n+1}:I=I^{n}$ for all $n\geq 1$,
then for any ideal $J$ of $B$, $I+J$ has the persistence property (see
Corollary 5.1(ii)). In [22, Corollary 1.7], Swanson and Walker prove the same
result under the stronger condition that $I$ is normal. It remains an open
question whether for any ideal $I$ of $A$ with the persistence property and
any ideal $J$ of $B$, the sum $I+J$ has the same property.
The paper is structured as follows. In Section 3, we provide large classes of
ideals $I$ such that the equality
$\operatorname{Ass}(A/I^{n})=\operatorname{Ass}(I^{n-1}/I^{n})$ holds true for
all $n\geq 1$. An unexpected outcome of this study is a counterexample to [1,
Question 3.6], on the vanishing of the map
$\operatorname{Tor}^{A}_{i}(k,I^{n})\to\operatorname{Tor}^{A}_{i}(k,I^{n-1})$.
Namely in characteristic 2, we construct a quadratic ideal $I$ in $A$ such
that the natural map
$\operatorname{Tor}^{A}_{*}(k,I^{2})\to\operatorname{Tor}^{A}_{*}(k,I)$ is not
zero (even though $A/I$ is a Gorenstein Artinian ring, see Example 3.9); this
example might be of independent interest. Using the results in Section 3, we give a set-
theoretic upper bound and a lower bound for $\operatorname{Ass}(R/(I+J)^{n})$
(Theorem 4.1). Theorem 4.1 also gives an exact formula for the asymptotic
primes of $I+J$ without any condition on $I$ and $J$. In the last section, we
apply our results to the question on the persistence property raised by
Swanson and Walker.
## 2\. Preliminaries
For standard notions and results in commutative algebra, we refer to the books
[3, 4].
Throughout the section, let $A$ and $B$ be two commutative Noetherian algebras
over a field $k$ such that $R=A\otimes_{k}B$ is also Noetherian. Let $M$ and
$N$ be two nonzero finitely generated modules over $A$ and $B$, respectively.
Denote by $\operatorname{Ass}_{A}M$ and $\operatorname{Min}_{A}M$ the set of
associated primes and minimal primes of $M$ as an $A$-module, respectively.
By a filtration of ideals $(I_{n})_{n\geq 0}$ in $A$, we mean a family of ideals
$I_{n}$, $n\geq 0$, satisfying the conditions $I_{0}=A$ and $I_{n+1}\subseteq
I_{n}$ for all $n\geq 0$. Let $(I_{n})_{n\geq 0}$ and $(J_{n})_{n\geq 0}$ be
filtrations of ideals in $A$ and $B$, respectively. Consider the filtration
$(W_{n})_{n\geq 0}$ of $A\otimes_{k}B$ given by
$W_{n}=\sum_{i=0}^{n}I_{i}J_{n-i}$. The following result is useful for the
discussion in Section 4.
###### Proposition 2.1 ([8, Lemma 3.1, Proposition 3.3]).
For arbitrary ideals $I\subseteq A$ and $J\subseteq B$, we have $I\cap J=IJ$.
Moreover with the above notation for filtrations, for any integer $n\geq 0$,
there is an isomorphism
$\displaystyle
W_{n}/W_{n+1}\cong\bigoplus_{i=0}^{n}\big{(}I_{i}/I_{i+1}\otimes_{k}J_{n-i}/J_{n-i+1}\big{)}.$
We recall the following description of the associated primes of tensor
products; see also [21, Corollary 3.7].
###### Theorem 2.2 ([8, Theorem 2.5]).
Let $M$ and $N$ be nonzero finitely generated modules over $A$ and $B$,
respectively. Then there is an equality
$\operatorname{Ass}_{R}(M\otimes_{k}N)=\displaystyle\bigcup_{\begin{subarray}{l}{\mathfrak{p}}\in\operatorname{Ass}_{A}(M)\\\
{\mathfrak{q}}\in\operatorname{Ass}_{B}(N)\end{subarray}}\operatorname{Min}_{R}(R/{\mathfrak{p}}+{\mathfrak{q}}).$
The following simple lemma turns out to be useful in the sequel.
###### Lemma 2.3.
Assume that $\operatorname{char}k=0$. Let $A=k[x_{1},\ldots,x_{r}]$ be a
standard graded polynomial ring over $k$, and ${\mathfrak{m}}$ its graded
maximal ideal. Let $I$ be a proper homogeneous ideal of $A$. Denote by
$\partial(I)$ the ideal generated by partial derivatives of elements in $I$.
Then there is a containment $I:{\mathfrak{m}}\subseteq\partial(I).$
In particular, $I^{n}:{\mathfrak{m}}\subseteq I^{n-1}$ for all $n\geq 1$. If
for some $n\geq 2$, ${\mathfrak{m}}\in\operatorname{Ass}(A/I^{n})$ then
${\mathfrak{m}}\in\operatorname{Ass}(I^{n-1}/I^{n})$.
###### Proof.
Since $I:{\mathfrak{m}}$ is a homogeneous ideal, we may take a homogeneous element $f\in I:{\mathfrak{m}}$. Then $x_{i}f\in I$ for every $i=1,\ldots,r$.
Taking partial derivatives, we get $f+x_{i}(\partial f/\partial
x_{i})\in\partial(I)$. Summing up and using Euler’s formula, $(r+\deg
f)f\in\partial(I)$. As $\operatorname{char}k=0$, this yields
$f\in\partial(I)$, as claimed.
The second assertion holds since by the product rule,
$\partial(I^{n})\subseteq I^{n-1}$.
If ${\mathfrak{m}}\in\operatorname{Ass}(A/I^{n})$ then there exists an element
$a\in(I^{n}:{\mathfrak{m}})\setminus I^{n}$. Thus $a\in I^{n-1}\setminus
I^{n}$, so ${\mathfrak{m}}\in\operatorname{Ass}(I^{n-1}/I^{n})$. ∎
The condition on the characteristic is indispensable: The inclusion
$I^{2}:{\mathfrak{m}}\subseteq I$ may fail in positive characteristic; see
Example 3.9.
The following lemma will be employed several times in the sequel. Denote by
$\operatorname{gr}_{I}(A)$ the associated graded ring of $A$ with respect to
the $I$-adic filtration.
###### Lemma 2.4.
Let $A$ be a Noetherian ring, and $I$ an ideal. Then the following are
equivalent:
1. (i)
$I^{n+1}:I=I^{n}$ for all $n\geq 1$,
2. (ii)
$(I^{n+1}:I)\cap I^{n-1}=I^{n}$ for all $n\geq 1$,
3. (iii)
$\operatorname{depth}\operatorname{gr}_{I}(A)>0$,
4. (iv)
$I^{n}=\widetilde{I^{n}}$ for all $n\geq 1$, where
$\widetilde{I}=\bigcup\limits_{i\geq 1}(I^{i+1}:I^{i})$ denotes the Ratliff-
Rush closure of $I$.
If one of these equivalent conditions holds, then
$\operatorname{Ass}(A/I^{n})\subseteq\operatorname{Ass}(A/I^{n+1})$ for all
$n\geq 1$, namely $I$ has the persistence property.
###### Proof.
Clearly (i) $\Longrightarrow$ (ii). We prove that (ii) $\Longrightarrow$ (i).
Assume that $(I^{n+1}:I)\cap I^{n-1}=I^{n}$ for all $n\geq 1$. We prove by
induction on $n\geq 1$ that $I^{n}:I=I^{n-1}$.
If $n=1$, there is nothing to do. Assume that $n\geq 2$. By the induction
hypothesis, $I^{n}:I\subseteq I^{n-1}:I=I^{n-2}$. Hence $I^{n}:I=(I^{n}:I)\cap
I^{n-2}=I^{n-1}$, as desired.
That (i) $\Longleftrightarrow$ (iii) $\Longleftrightarrow$ (iv) follows from
[10, (1.2)] and [20, Remark 1.6].
The last assertion follows from [11, Section 1], where the property
$I^{n+1}:I=I^{n}$ for all $n\geq 1$, called the _strong persistence property_
of $I$, was discussed. ∎
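The conditions of Lemma 2.4 are easy to probe experimentally. Below is a
minimal Macaulay2 sketch (the ring and ideal are our own sample choices, not
taken from the text) testing the strong persistence condition
$I^{n+1}:I=I^{n}$ for a few small exponents:
```
-- Check I^(n+1) : I == I^n (condition (i) of Lemma 2.4) for small n.
A = QQ[x,y,z];
I = ideal(x^2, x*y, y^3);   -- a sample ideal; replace with any ideal of interest
for n from 1 to 5 do print(n, quotient(I^(n+1), I) == I^n)
```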
## 3\. Associated primes of quotients of consecutive powers
The following question is quite relevant to the task of finding the associated
primes of powers of sums.
###### Question 3.1.
Let $A$ be a standard graded polynomial ring over a field $k$ (of
characteristic zero), and $I$ a proper homogeneous ideal. Is it true that
$\operatorname{Ass}(A/I^{n})=\operatorname{Ass}(I^{n-1}/I^{n})\quad\text{for
all $n\geq 1$?}$
We are not aware of any ideal not satisfying the equality in Question 3.1
(even in positive characteristic). In the first main result of this paper, we
provide some evidence for a positive answer to Question 3.1. Denote by
$\operatorname{Rees}(I)$ the Rees algebra of $I$. The ideal $I$ is said to be
_unmixed_ if it has no embedded primes. It is called _normal_ if all of its
powers are integrally closed ideals.
###### Theorem 3.2.
Question 3.1 has a positive answer if any of the following conditions holds:
1. (1)
$I$ is a monomial ideal.
2. (2)
$\operatorname{depth}\operatorname{gr}_{I}(A)\geq 1$.
3. (3)
$\operatorname{depth}\operatorname{Rees}(I)\geq 2$.
4. (4)
$I$ is normal.
5. (5)
$I^{n}$ is unmixed for all $n\geq 1$, e.g. $I$ is generated by a regular
sequence.
6. (6)
All the powers of $I$ are primary, e.g. $\dim(A/I)=0$.
7. (7)
$\operatorname{char}k=0$ and $\dim(A/I)\leq 1$.
8. (8)
$\operatorname{char}k=0$ and $\dim A\leq 3$.
###### Proof.
(1): See [17, Lemma 4.4].
(2): By Lemma 2.4, since $\operatorname{depth}\operatorname{gr}_{I}(A)\geq 1$,
$I^{n}:I=I^{n-1}$ for all $n\geq 1$. We induct on $n\geq 1$ to show that
$\operatorname{Ass}(A/I^{n})=\operatorname{Ass}(I^{n-1}/I^{n})$; the case $n=1$ is trivial.
Let $I=(f_{1},\ldots,f_{m})$. For $n\geq 2$, as $I^{n}:I=I^{n-1}$, the map
$I^{n-2}\to\underbrace{I^{n-1}\oplus\cdots\oplus
I^{n-1}}_{m\,\text{times}},a\mapsto(af_{1},\ldots,af_{m}),$
induces an injection
$\frac{I^{n-2}}{I^{n-1}}\hookrightarrow\left(\frac{I^{n-1}}{I^{n}}\right)^{\oplus
m}.$
Hence
$\operatorname{Ass}(A/I^{n-1})=\operatorname{Ass}(I^{n-2}/I^{n-1})\subseteq\operatorname{Ass}(I^{n-1}/I^{n})$.
The exact sequence
$0\to I^{n-1}/I^{n}\to A/I^{n}\to A/I^{n-1}\to 0$
then yields
$\operatorname{Ass}(A/I^{n})\subseteq\operatorname{Ass}(I^{n-1}/I^{n})$, which
in turn implies the desired equality.
Next we claim that (3) and (4) both imply (2).
(3) $\Longrightarrow$ (2): This follows from a result of Huckaba and Marley
[13, Corollary 3.12] which says that either $\operatorname{gr}_{I}(A)$ is
Cohen-Macaulay (and hence $\operatorname{depth}\operatorname{gr}_{I}(A)=\dim A\geq 1$), or
$\operatorname{depth}\operatorname{gr}_{I}(A)=\operatorname{depth}\operatorname{Rees}(I)-1$.
(4) $\Longrightarrow$ (2): If $I$ is normal, then each $I^{n}$ is integrally
closed; since the Ratliff-Rush closure $\widetilde{I^{n}}$ is contained in the
integral closure of $I^{n}$, condition (iv) of Lemma 2.4 holds. Hence we are
done by Lemma 2.4.
(5): Take $P\in\operatorname{Ass}(A/I^{n})$; we show that
$P\in\operatorname{Ass}(I^{n-1}/I^{n})$. Since $A/I^{n}$ is unmixed,
$P\in\operatorname{Min}(A/I^{n})=\operatorname{Min}(I^{n-1}/I^{n})\subseteq\operatorname{Ass}(I^{n-1}/I^{n})$.
Observe that (6) $\Longrightarrow$ (5).
(7): Because of (6), we can assume that $\dim(A/I)=1$. Take
$P\in\operatorname{Ass}(A/I^{n})$; we need to show that
$P\in\operatorname{Ass}(I^{n-1}/I^{n})$.
If $\dim(A/P)=1$, then as $\dim(A/I)=1$, $P\in\operatorname{Min}(A/I^{n})$.
Arguing as for case (5), we get $P\in\operatorname{Ass}(I^{n-1}/I^{n})$.
If $\dim(A/P)=0$, then $P={\mathfrak{m}}$, the graded maximal ideal of $A$.
Since ${\mathfrak{m}}\in\operatorname{Ass}(A/I^{n})$, by Lemma 2.3,
${\mathfrak{m}}\in\operatorname{Ass}(I^{n-1}/I^{n})$.
(8): It is harmless to assume that $I\neq 0$. If $\dim(A/I)\leq 1$ then we are
done by (7). Assume that $\dim(A/I)\geq 2$, then the hypothesis forces $\dim
A=3$ and $\operatorname{ht}I=1$. Thus we can write $I=xL$ where $x$ is a form
of degree at least 1, and $L=A$ or $\operatorname{ht}L\geq 2$. The result is
clear when $L=A$, so it remains to assume that $L$ is proper of height $\geq
2$. In particular $\dim(A/L)\leq 1$, and by (7), for all $n\geq 1$,
$\operatorname{Ass}(A/L^{n})=\operatorname{Ass}(L^{n-1}/L^{n}).$
Take ${\mathfrak{p}}\in\operatorname{Ass}(A/I^{n})$. Since $A/I^{n}$ and
$I^{n-1}/I^{n}$ have the same minimal primes, we can assume
$\operatorname{ht}{\mathfrak{p}}\geq 2$. From the exact sequence
$0\to A/L^{n}\xrightarrow{\cdot x^{n}}A/I^{n}\to A/(x^{n})\to 0$
it follows that ${\mathfrak{p}}\in\operatorname{Ass}(A/L^{n})$, since every associated prime of $A/(x^{n})$ has height one. Thus
${\mathfrak{p}}\in\operatorname{Ass}(L^{n-1}/L^{n})$. There is an exact
sequence
$0\to L^{n-1}/L^{n}\xrightarrow{\cdot x^{n}}I^{n-1}/I^{n}$
so ${\mathfrak{p}}\in\operatorname{Ass}(I^{n-1}/I^{n})$, as claimed. This
concludes the proof. ∎
###### Example 3.3.
Here is an example of a ring $A$ and an ideal $I$ not satisfying any of the
conditions (1)–(8) in Theorem 3.2. Let
$I=(x^{4}+y^{3}z,x^{3}y,x^{2}t^{2},y^{4},y^{2}z^{2})\subseteq A=k[x,y,z,t]$.
Then $\operatorname{depth}\operatorname{gr}_{I}(A)=0$ as
$x^{2}y^{3}z\in(I^{2}:I)\setminus I$. So $I$ satisfies neither (1) nor (2).
Note that $\sqrt{I}=(x,y)$, so $\dim(A/I)=2$. Let ${\mathfrak{m}}=(x,y,z,t)$.
Since $x^{2}y^{3}zt\in(I:{\mathfrak{m}})\setminus I$,
$\operatorname{depth}(A/I)=0$, hence $A/I$ is not unmixed. Thus $I$ satisfies
neither (5) nor (7). By the proof of Theorem 3.2, $I$ satisfies none of the
conditions (3), (4), (6).
Unfortunately, experiments with Macaulay2 [6] suggest that $I$ satisfies the
conclusion of Question 3.1, namely for all $n\geq 1$,
$\operatorname{Ass}(A/I^{n})=\operatorname{Ass}(I^{n-1}/I^{n})=\\{(x,y),(x,y,z),(x,y,t),(x,y,z,t)\\}.$
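For the record, here is a Macaulay2 sketch of this kind of experiment (our
reconstruction, not the authors' script; applying associatedPrimes to a module
requires a recent Macaulay2, where the PrimaryDecomposition package accepts
modules):
```
A = QQ[x,y,z,t];
I = ideal(x^4 + y^3*z, x^3*y, x^2*t^2, y^4, y^2*z^2);
f = x^2*y^3*z;
f % I == 0                  -- false: f does not lie in I
f % quotient(I^2, I) == 0   -- true: f lies in I^2 : I, so depth gr_I(A) = 0
associatedPrimes(I^2)       -- Ass(A/I^2)
associatedPrimes(I/I^2)     -- Ass(I/I^2); compare the two lists
```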
###### Remark 3.4.
In view of Lemma 2.3 and Question 3.1, one might ask whether, in the case
$\operatorname{char}k=0$, the equality
$\operatorname{Ass}(A/I)=\operatorname{Ass}(\partial(I)/I)$ holds for every
homogeneous ideal $I$ in a polynomial ring $A$.
Unfortunately, this has a negative answer. Let
$A=\mathbb{Q}[x,y,z],f=x^{5}+x^{4}y+y^{4}z,L=(x,y)$ and $I=fL$. Then we can
check with Macaulay2 [6] that $\partial(I):f=L$. In particular,
$\operatorname{Ass}(\partial(I)/I)=\\{(f)\\}\neq\operatorname{Ass}(A/I)=\\{(f),(x,y)\\}.$
Indeed, if $L=(x,y)\in\operatorname{Ass}(\partial(I)/I)$ then
$\operatorname{Hom}_{A}(A/L,\partial(I)/I)=(\partial(I)\cap(I:L))/I=(\partial(I)\cap(f))/I\neq
0$, so that $\partial(I):f\neq L$, a contradiction.
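A sketch of the Macaulay2 check behind this remark (our reconstruction; over
$\mathbb{Q}$, Euler's formula implies that $\partial(I)$ is generated by the
partial derivatives of a set of generators of $I$, i.e., by the entries of the
Jacobian matrix):
```
A = QQ[x,y,z];
f = x^5 + x^4*y + y^4*z;
L = ideal(x, y);
I = f*L;
dI = ideal jacobian I;       -- in char 0 this ideal equals partial(I)
quotient(dI, ideal f) == L   -- the computation partial(I) : f == L cited above
```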
### 3.1. Partial answer to Question 3.1 in dimension four
We prove that if $\operatorname{char}k=0$ and $\dim A=4$, the equality
$\operatorname{Ass}(A/I^{2})=\operatorname{Ass}(I/I^{2})$ always holds, in
support of a positive answer to Question 3.1. The proof requires the structure
theory of perfect Gorenstein ideals of height three and their second powers.
###### Theorem 3.5.
Assume $\operatorname{char}k=0$. Let $(A,{\mathfrak{m}})$ be a four
dimensional standard graded polynomial ring over $k$. Then for any proper
homogeneous ideal $I$ of $A$, there is an equality
$\operatorname{Ass}(A/I^{2})=\operatorname{Ass}(I/I^{2})$.
###### Proof.
It is harmless to assume that $I\neq 0$. If $\operatorname{ht}I\geq 3$
then $\dim(A/I)\leq 1$, and we are done by Theorem 3.2(7).
If $\operatorname{ht}I=1$, then $I=fL$, where $f\in A$ is a form of positive
degree and $L=A$ or $\operatorname{ht}L\geq 2$; if $L=A$, then $I$ is principal
and the claim is immediate, so assume $\operatorname{ht}L\geq 2$. The exact
sequence
$0\to\frac{A}{L}\xrightarrow{\cdot f}\frac{A}{I}\to\frac{A}{(f)}\to 0,$
yields
$\operatorname{Ass}(A/I)=\operatorname{Ass}(A/L)\bigcup\operatorname{Ass}(A/(f))$,
as $\operatorname{Min}(I)\supseteq\operatorname{Ass}(A/(f))$. An analogous
formula holds for $\operatorname{Ass}(A/I^{2})$, as $I^{2}=f^{2}L^{2}$. If we
can show that
$\operatorname{Ass}(A/L^{2})\subseteq\operatorname{Ass}(L/L^{2})$, then from
the injection $L/L^{2}\xrightarrow{\cdot f^{2}}I/I^{2}$ we have
$\displaystyle\operatorname{Ass}(A/I^{2})$
$\displaystyle=\operatorname{Ass}(A/L^{2})\bigcup\operatorname{Ass}(A/(f))$
$\displaystyle=\operatorname{Ass}(L/L^{2})\bigcup\operatorname{Ass}(A/(f))\subseteq\operatorname{Ass}(I/I^{2}).$
Hence it suffices to consider the case $\operatorname{ht}I=2$. Assume that
there exists
${\mathfrak{p}}\in\operatorname{Ass}(A/I^{2})\setminus\operatorname{Ass}(I/I^{2})$.
The exact sequence
$0\to I/I^{2}\to A/I^{2}\to A/I\to 0$
implies ${\mathfrak{p}}\in\operatorname{Ass}(A/I)$.
By Lemma 2.3, ${\mathfrak{p}}\neq{\mathfrak{m}}$. Since
$\operatorname{Min}(I)=\operatorname{Min}(I/I^{2})\subseteq\operatorname{Ass}(I/I^{2})$
and ${\mathfrak{p}}\notin\operatorname{Ass}(I/I^{2})$, we get
$\operatorname{ht}{\mathfrak{p}}=3$. Localizing yields
${\mathfrak{p}}A_{\mathfrak{p}}\in\operatorname{Ass}(A_{\mathfrak{p}}/I_{\mathfrak{p}}^{2})\setminus\operatorname{Ass}(I_{\mathfrak{p}}/I_{\mathfrak{p}}^{2})$.
Then there exists
$a\in(I_{\mathfrak{p}}^{2}:{\mathfrak{p}}A_{\mathfrak{p}})\setminus
I_{\mathfrak{p}}^{2}$. On the other hand, since $A_{\mathfrak{p}}$ is a
regular local ring of dimension 3 containing $1/2$, Lemma 3.6 below implies
$I_{\mathfrak{p}}^{2}:{\mathfrak{p}}A_{\mathfrak{p}}\subseteq
I_{\mathfrak{p}}$, so $a\in I_{\mathfrak{p}}\setminus I_{\mathfrak{p}}^{2}$.
Hence
${\mathfrak{p}}A_{\mathfrak{p}}\in\operatorname{Ass}(I_{\mathfrak{p}}/I_{\mathfrak{p}}^{2})$.
This contradiction finishes the proof. ∎
To complete the proof of Theorem 3.5, it remains to prove the following lemma.
###### Lemma 3.6.
Let $(R,{\mathfrak{m}})$ be a three dimensional regular local ring such that
$1/2\in R$. Then for any ideal $I$ of $R$, there is a containment
$I^{2}:{\mathfrak{m}}\subseteq I$.
We will deduce it from the following result.
###### Proposition 3.7.
Let $(R,{\mathfrak{m}})$ be a regular local ring such that $1/2\in R$. Let $J$
be a perfect Gorenstein ideal of height $3$. Then for all $i\geq 0$, the map
$\operatorname{Tor}^{R}_{i}(J^{2},k)\to\operatorname{Tor}^{R}_{i}(J,k)$
is zero. In particular, there is a containment $J^{2}:{\mathfrak{m}}\subseteq
J$.
###### Proof.
We note that the second assertion follows from the first. Indeed, the
hypotheses imply that $\dim(R)=d\geq 3$. Using the Koszul complex of $R$, we
see that
$\operatorname{Tor}^{R}_{d-1}(J,k)\cong\operatorname{Tor}^{R}_{d}(R/J,k)\cong\frac{J:{\mathfrak{m}}}{J}.$
Since the map
$\operatorname{Tor}^{R}_{i}(J^{2},k)\to\operatorname{Tor}^{R}_{i}(J,k)$ is
zero for $i=d-1$, the conclusion is $J^{2}:{\mathfrak{m}}\subseteq J$. Hence
it remains to prove the first assertion. We do this by exploiting the
structure of the minimal free resolution of $J$ and $J^{2}$, and constructing
a map between these complexes.
Since $J$ is Gorenstein of height three, it has a minimal free resolution
$P:0\to R\xrightarrow{\delta}F^{*}\xrightarrow{\rho}F\to 0.$
Here $F=Re_{1}\oplus\cdots\oplus Re_{g}$ is a free $R$-module of rank $g$ – an
odd integer. The map $\tau:F\to J$ maps $e_{i}$ to $f_{i}$, where
$J=(f_{1},\ldots,f_{g})$. The free $R$-module $F^{*}$ has basis
$e_{1}^{*},\ldots,e_{g}^{*}$. The map $\rho:F^{*}\to F$ is alternating with
matrix $(a_{i,j})_{g\times g}$, namely $a_{i,i}=0$ for $1\leq i\leq g$ and
$a_{i,j}=-a_{j,i}$ for $1\leq i<j\leq g$, and
$\rho(e_{i}^{*})=\sum_{j=1}^{g}a_{j,i}e_{j}\quad\text{for all $i$}.$
The map $\delta:R\to F^{*}$ has the matrix $(f_{1}\ldots f_{g})^{T}$, i.e. it
is given by $\delta(1)=f_{1}e_{1}^{*}+\cdots+f_{g}e_{g}^{*}$.
It is known that if $J$ is Gorenstein of height three, then
$J\otimes_{R}J\cong J^{2}$, and by constructions due to Kustin and Ulrich [15,
Definition 5.9, Theorems 6.2 and 6.17], $J^{2}$ has a minimal free resolution
$Q$ as below. Note that in the terminology of [15] and thanks to the
discussion after Theorem 6.22 in that work, $J$ satisfies $\text{SPC}_{g-2}$,
hence Theorem 6.17, parts (c)(i) and (d)(ii) in _ibid._ are applicable. The
resolution $Q$ given in the following is taken from (2.7) and Definition 2.15
in Kustin and Ulrich’s paper.
$Q:0\to\wedge^{2}F^{*}\xrightarrow{d_{2}}(F\otimes
F^{*})/\eta\xrightarrow{d_{1}}S_{2}(F)\xrightarrow{d_{0}}J^{2}\to 0.$
Here $S_{2}(F)=\bigoplus_{1\leq i\leq j\leq g}R(e_{i}\otimes e_{j})$ is the
second symmetric power of $F$, $\eta=R(e_{1}\otimes
e_{1}^{*}+\cdots+e_{g}\otimes e_{g}^{*})\subseteq F\otimes F^{*}$, and
$\wedge^{2}F^{*}$ is the second exterior power of $F^{*}$.
The maps $d_{0}:S_{2}(F)\to J^{2}$, $d_{1}:(F\otimes F^{*})/\eta\to S_{2}(F)$,
$d_{2}:\wedge^{2}F^{*}\to(F\otimes F^{*})/\eta$ are given by:
$\displaystyle d_{0}(e_{i}\otimes e_{j})$
$\displaystyle=f_{i}f_{j}\quad\text{for $1\leq i,j\leq g$},$ $\displaystyle
d_{1}(e_{i}\otimes e_{j}^{*}+\eta)$
$\displaystyle=\sum_{l=1}^{g}a_{l,j}(e_{i}\otimes e_{l})\quad\text{for $1\leq
i,j\leq g$},$ $\displaystyle d_{2}(e_{i}^{*}\wedge e_{j}^{*})$
$\displaystyle=\sum_{l=1}^{g}a_{l,i}(e_{l}\otimes
e_{j}^{*})-\sum_{v=1}^{g}a_{v,j}(e_{v}\otimes e_{i}^{*})+\eta\quad\text{for
$1\leq i<j\leq g$}.$
We construct a lifting $\alpha:Q\to P$ of the natural inclusion map $J^{2}\to
J$ such that $\alpha(Q)\subseteq{\mathfrak{m}}P$.
$\begin{array}{cccccccccc}
Q: & 0\longrightarrow & \wedge^{2}F^{*} & \xrightarrow{\;d_{2}\;} & (F\otimes F^{*})/\eta & \xrightarrow{\;d_{1}\;} & S_{2}(F) & \xrightarrow{\;d_{0}\;} & J^{2} & \longrightarrow 0\\\ & & \Big\downarrow{\scriptstyle\alpha_{2}} & & \Big\downarrow{\scriptstyle\alpha_{1}} & & \Big\downarrow{\scriptstyle\alpha_{0}} & & \Big\downarrow{\scriptstyle\iota} & \\\ P: & 0\longrightarrow & R & \xrightarrow{\;\delta\;} & F^{*} & \xrightarrow{\;\rho\;} & F & \xrightarrow{\;\tau\;} & J & \longrightarrow 0.
\end{array}$
In detail, this lifting is
* •
$\alpha_{0}(e_{i}\otimes e_{j})=\dfrac{f_{i}e_{j}+f_{j}e_{i}}{2}\quad\text{for
$1\leq i,j\leq g$},$
* •
$\alpha_{1}(e_{i}\otimes
e_{j}^{*}+\eta)=\begin{cases}\dfrac{f_{i}e_{j}^{*}}{2},&\text{if
$(i,j)\neq(g,g)$},\\\ \dfrac{-\sum_{v=1}^{g-1}f_{v}e_{v}^{*}}{2},&\text{if
$(i,j)=(g,g)$},\end{cases}$
* •
$\alpha_{2}(e_{i}^{*}\wedge e_{j}^{*})=\begin{cases}0,&\text{if $1\leq i<j\leq
g-1$},\\\ \dfrac{-a_{g,i}}{2},&\text{if $1\leq i\leq g-1,j=g$}.\end{cases}$
Note that $\alpha_{1}$ is well-defined since
$\alpha_{1}(e_{1}\otimes e_{1}^{*}+\cdots+e_{g}\otimes
e_{g}^{*}+\eta)=\dfrac{\sum_{v=1}^{g-1}f_{v}e_{v}^{*}}{2}+\dfrac{-\sum_{v=1}^{g-1}f_{v}e_{v}^{*}}{2}=0.$
Observe that $\alpha(Q)\subseteq{\mathfrak{m}}P$ since
$f_{i},a_{i,j}\in{\mathfrak{m}}$ for all $i,j$. It remains to check that the
map $\alpha:Q\to P$ is a lifting for $J^{2}\hookrightarrow J$. For this, we
have:
* •
$\tau(\alpha_{0}(e_{i}\otimes
e_{j}))=\tau\left(\dfrac{f_{i}e_{j}+f_{j}e_{i}}{2}\right)=f_{i}f_{j}=\iota(d_{0}(e_{i}\otimes
e_{j}))$.
Next we compute
$\displaystyle\alpha_{0}(d_{1}(e_{i}\otimes e_{j}^{*}+\eta))$
$\displaystyle=\alpha_{0}\left(\sum_{l=1}^{g}a_{l,j}(e_{i}\otimes
e_{l})\right)=\sum_{l=1}^{g}a_{l,j}\dfrac{f_{i}e_{l}+f_{l}e_{i}}{2}$
$\displaystyle=\dfrac{f_{i}\left(\sum_{l=1}^{g}a_{l,j}e_{l}\right)}{2}\quad\text{(since
$\sum_{l=1}^{g}a_{l,j}f_{l}=0$)}.$
* •
If $(i,j)\neq(g,g)$ then
$\displaystyle\rho(\alpha_{1}(e_{i}\otimes e_{j}^{*}+\eta))$
$\displaystyle=\rho(f_{i}e_{j}^{*}/2)=\dfrac{f_{i}\left(\sum_{l=1}^{g}a_{l,j}e_{l}\right)}{2}.$
* •
If $(i,j)=(g,g)$ then
$\displaystyle\rho(\alpha_{1}(e_{g}\otimes e_{g}^{*}+\eta))$
$\displaystyle=\rho\left(\dfrac{-\sum_{v=1}^{g-1}f_{v}e_{v}^{*}}{2}\right)=\dfrac{-\sum_{v=1}^{g-1}f_{v}(\sum_{l=1}^{g}a_{l,v}e_{l})}{2}$
$\displaystyle=\dfrac{\sum\limits_{l=1}^{g}(\sum\limits_{v=1}^{g-1}a_{v,l}f_{v})e_{l}}{2}\quad(\text{since
$a_{v,l}=-a_{l,v}$})$
$\displaystyle=\dfrac{-\sum\limits_{l=1}^{g}(a_{g,l}f_{g})e_{l}}{2}\quad\text{(since
$\sum_{v=1}^{g}a_{v,l}f_{v}=0$)}$
$\displaystyle=\dfrac{f_{g}\left(\sum_{l=1}^{g}a_{l,g}e_{l}\right)}{2}\quad(\text{since
$a_{g,l}=-a_{l,g}$})$
* •
Hence in both cases, $\alpha_{0}(d_{1}(e_{i}\otimes
e_{j}^{*}+\eta))=\rho(\alpha_{1}(e_{i}\otimes e_{j}^{*}+\eta))$.
Next, for $1\leq i<j\leq g-1$, we compute
$\displaystyle\alpha_{1}(d_{2}(e_{i}^{*}\wedge e_{j}^{*}))$
$\displaystyle=\alpha_{1}\left(\sum_{l=1}^{g}a_{l,i}(e_{l}\otimes
e_{j}^{*})-\sum_{v=1}^{g}a_{v,j}(e_{v}\otimes e_{i}^{*})+\eta\right)$
$\displaystyle=\dfrac{\left(\sum_{l=1}^{g}a_{l,i}f_{l}\right)e_{j}^{*}}{2}-\dfrac{\left(\sum_{v=1}^{g}a_{v,j}f_{v}\right)e_{i}^{*}}{2}$
(since neither $(l,j)$ nor $(v,i)$ is $(g,g)$)
$\displaystyle=0\quad\text{(since $\sum_{v=1}^{g}a_{v,l}f_{v}=0$)}$
$\displaystyle=\delta(\alpha_{2}(e_{i}^{*}\wedge e_{j}^{*})).$
Finally, for $1\leq i\leq g-1,j=g$, we have
$\displaystyle\alpha_{1}(d_{2}(e_{i}^{*}\wedge e_{g}^{*}))$
$\displaystyle=\alpha_{1}\left(\sum_{l=1}^{g}a_{l,i}(e_{l}\otimes
e_{g}^{*})-\sum_{v=1}^{g}a_{v,g}(e_{v}\otimes e_{i}^{*})+\eta\right)$
$\displaystyle=\dfrac{\left(\sum_{l=1}^{g-1}a_{l,i}f_{l}\right)e_{g}^{*}}{2}-\dfrac{\sum_{v=1}^{g-1}a_{g,i}f_{v}e_{v}^{*}}{2}-\dfrac{\left(\sum_{v=1}^{g}a_{v,g}f_{v}\right)e_{i}^{*}}{2}$
(the formula for $\alpha_{1}(e_{l}\otimes e_{g}^{*})$ depends on whether $l=g$
or not)
$\displaystyle=\dfrac{-a_{g,i}f_{g}e_{g}^{*}}{2}-\dfrac{\sum_{v=1}^{g-1}a_{g,i}f_{v}e_{v}^{*}}{2}\quad\text{(since
$\sum_{v=1}^{g}a_{v,l}f_{v}=0$)}$
$\displaystyle=\dfrac{-a_{g,i}\left(\sum_{v=1}^{g}f_{v}e_{v}^{*}\right)}{2}.$
We also have
$\delta(\alpha_{2}(e_{i}^{*}\wedge
e_{g}^{*}))=\delta(-a_{g,i}/2)=\dfrac{-a_{g,i}\left(\sum_{v=1}^{g}f_{v}e_{v}^{*}\right)}{2}.$
Hence $\alpha:Q\to P$ is a lifting of the inclusion map $J^{2}\to J$.
Since $\alpha(Q)\subseteq{\mathfrak{m}}P$, it follows that
$\alpha\otimes(R/{\mathfrak{m}})=0$. Hence
$\operatorname{Tor}^{R}_{i}(J^{2},k)\to\operatorname{Tor}^{R}_{i}(J,k)$ is the
zero map for all $i$. The proof is concluded. ∎
###### Proof of Lemma 3.6.
It is harmless to assume that $I\subseteq{\mathfrak{m}}$. We can write $I$ as
a finite intersection $I_{1}\cap\cdots\cap I_{d}$ of irreducible ideals. If we
can show the lemma for each of the components $I_{j}$, then
$I^{2}:{\mathfrak{m}}\subseteq(I_{1}^{2}:{\mathfrak{m}})\cap\cdots\cap(I_{d}^{2}:{\mathfrak{m}})\subseteq\bigcap_{j=1}^{d}I_{j}=I.$
Hence we can assume that $I$ is an irreducible ideal. Being irreducible, $I$
is a primary ideal. If $\sqrt{I}\neq{\mathfrak{m}}$, then
$I^{2}:{\mathfrak{m}}\subseteq I:{\mathfrak{m}}=I$. Therefore we assume that
$I$ is an ${\mathfrak{m}}$-primary irreducible ideal. Let
$k=R/{\mathfrak{m}}$. It is a folklore and simple result that any
${\mathfrak{m}}$-primary irreducible ideal must satisfy
$\dim_{k}(I:{\mathfrak{m}})/I=1$. Note that $R$ is a regular local ring, so
$R/I$, being a Cohen-Macaulay module of dimension zero, is perfect. Hence $I$
is a perfect Gorenstein ideal of height three. It then remains to use the
second assertion of Proposition 3.7. ∎
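Both Lemma 3.6 and Proposition 3.7 are easy to test numerically. A minimal
Macaulay2 sketch (our own toy instance, not from the text): an
${\mathfrak{m}}$-primary complete intersection is in particular a perfect
Gorenstein ideal of height three, and the containment
$J^{2}:{\mathfrak{m}}\subseteq J$ can be checked directly.
```
R = QQ[x,y,z];
m = ideal vars R;              -- the maximal ideal (x,y,z)
J = ideal(x^2, y^2, z^2);      -- m-primary complete intersection, hence a
                               -- perfect Gorenstein ideal of height 3
isSubset(quotient(J^2, m), J)  -- expect true, as Proposition 3.7 predicts
```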
In view of Lemma 3.6, it seems natural to ask the following
###### Question 3.8.
Let $(R,{\mathfrak{m}})$ be a three dimensional regular local ring containing
$1/2$. Let $I$ be an ideal of $R$. Is it true that for all $n\geq 2$,
$I^{n}:{\mathfrak{m}}\subseteq I^{n-1}$?
For regular local rings of dimension at most two, Ahangari Maleki has proved
that Question 3.8 has a positive answer regardless of the characteristic [1,
Proof of Theorem 3.7]. Nevertheless, if the dimension is not restricted, the
analogue of Question 3.8 has a negative answer in positive characteristic.
Here is a
counterexample in dimension 9(!).
###### Example 3.9.
Choose $\operatorname{char}k=2$,
$A=k[x_{1},x_{2},x_{3},\ldots,z_{1},z_{2},z_{3}]$ and
$M=\begin{pmatrix}x_{1}&x_{2}&x_{3}\\\ y_{1}&y_{2}&y_{3}\\\
z_{1}&z_{2}&z_{3}\end{pmatrix}.$
Let $I_{2}(M)$ be the ideal generated by the 2-minors of $M$, and
$I=I_{2}(M)+\sum_{i=1}^{3}(x_{i},y_{i},z_{i})^{2}+(x_{1},x_{2},x_{3})^{2}+(y_{1},y_{2},y_{3})^{2}+(z_{1},z_{2},z_{3})^{2}.$
Denote ${\mathfrak{m}}=A_{+}$. The Betti table of $A/I$, computed by Macaulay2
[6], is
0 1 2 3 4 5 6 7 8 9
total: 1 36 160 315 404 404 315 160 36 1
0: 1 . . . . . . . . .
1: . 36 160 315 288 116 . . . .
2: . . . . 116 288 315 160 36 .
3: . . . . . . . . . 1
Therefore $I$ is an ${\mathfrak{m}}$-primary, binomial, quadratic, Gorenstein
ideal. Also, the relation
$x_{1}y_{2}z_{3}+x_{2}y_{3}z_{1}+x_{3}y_{1}z_{2}\in(I^{2}:{\mathfrak{m}})\setminus
I$ implies $I^{2}:{\mathfrak{m}}\not\subseteq I$. This means that the map
$\operatorname{Tor}^{A}_{8}(k,I^{2})\to\operatorname{Tor}^{A}_{8}(k,I)$ is not
zero. In particular, this gives a negative answer to [1, Question 3.6] in
positive characteristic.
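A Macaulay2 sketch reproducing this computation (our reconstruction of the
authors' experiment; the resolution step may take some time):
```
kk = ZZ/2;
A = kk[x_1..x_3, y_1..y_3, z_1..z_3];
M = matrix{{x_1,x_2,x_3},{y_1,y_2,y_3},{z_1,z_2,z_3}};
I = minors(2, M) + sum(1..3, i -> (ideal(x_i, y_i, z_i))^2)
    + (ideal(x_1,x_2,x_3))^2 + (ideal(y_1,y_2,y_3))^2 + (ideal(z_1,z_2,z_3))^2;
betti res comodule I                   -- the Betti table displayed above
g = x_1*y_2*z_3 + x_2*y_3*z_1 + x_3*y_1*z_2;
g % I == 0                             -- false: g does not lie in I
g % quotient(I^2, ideal vars A) == 0   -- true: g lies in I^2 : m
```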
## 4\. Powers of sums and associated primes
### 4.1. Bounds for associated primes
The second main result of this paper is the following. Its part (3)
generalizes [7, Lemma 3.4], which deals only with squarefree monomial ideals.
###### Theorem 4.1.
Let $A,B$ be commutative Noetherian algebras over $k$ such that
$R=A\otimes_{k}B$ is Noetherian. Let $I,J$ be proper ideals of $A,B$,
respectively.
1. (1)
For all $n\geq 1$, we have inclusions
$\displaystyle\bigcup_{i=1}^{n}\mathop{\bigcup_{{\mathfrak{p}}\in\operatorname{Ass}_{A}(I^{i-1}/I^{i})}}_{{\mathfrak{q}}\in\operatorname{Ass}_{B}(J^{n-i}/J^{n-i+1})}\operatorname{Min}_{R}(R/{\mathfrak{p}}+{\mathfrak{q}})$
$\displaystyle\subseteq\operatorname{Ass}_{R}\frac{R}{(I+J)^{n}},$
$\displaystyle\operatorname{Ass}_{R}\frac{R}{(I+J)^{n}}$
$\displaystyle\subseteq\bigcup_{i=1}^{n}\mathop{\bigcup_{{\mathfrak{p}}\in\operatorname{Ass}_{A}(A/I^{i})}}_{{\mathfrak{q}}\in\operatorname{Ass}_{B}(J^{n-i}/J^{n-i+1})}\operatorname{Min}_{R}(R/{\mathfrak{p}}+{\mathfrak{q}}).$
2. (2)
If moreover $\operatorname{Ass}(A/I^{n})=\operatorname{Ass}(I^{n-1}/I^{n})$
for all $n\geq 1$, then both inclusions in (1) are equalities.
3. (3)
In particular, if $A$ and $B$ are polynomial rings and $I$ and $J$ are
monomial ideals, then for all $n\geq 1$, we have an equality
$\operatorname{Ass}_{R}\frac{R}{(I+J)^{n}}=\bigcup_{i=1}^{n}\mathop{\bigcup_{{\mathfrak{p}}\in\operatorname{Ass}_{A}(A/I^{i})}}_{{\mathfrak{q}}\in\operatorname{Ass}_{B}(B/J^{n-i+1})}\\{{\mathfrak{p}}+{\mathfrak{q}}\\}.$
###### Proof.
(1) Denote $Q=I+J$. By Proposition 2.1, we have
$Q^{n-1}/Q^{n}=\bigoplus_{i=1}^{n}(I^{i-1}/I^{i}\otimes_{k}J^{n-i}/J^{n-i+1}).$
Hence
(1)
$\bigcup_{i=1}^{n}\operatorname{Ass}_{R}(I^{i-1}/I^{i}\otimes_{k}J^{n-i}/J^{n-i+1})=\operatorname{Ass}_{R}(Q^{n-1}/Q^{n})\subseteq\operatorname{Ass}_{R}(R/Q^{n}).$
For each $1\leq i\leq n$, we have $J^{n-i}Q^{i}\subseteq
J^{n-i}(I^{i}+J)=J^{n-i}I^{i}+J^{n-i+1}$. We claim that
$(J^{n-i}I^{i}+J^{n-i+1})/J^{n-i}Q^{i}\cong J^{n-i+1}/J^{n-i+1}Q^{i-1}$, so
that there is an exact sequence
(2)
$0\longrightarrow\frac{J^{n-i+1}}{J^{n-i+1}Q^{i-1}}\longrightarrow\frac{J^{n-i}}{J^{n-i}Q^{i}}\longrightarrow\frac{J^{n-i}}{J^{n-i+1}+J^{n-i}I^{i}}\cong\frac{A}{I^{i}}\otimes_{k}\frac{J^{n-i}}{J^{n-i+1}}\longrightarrow
0.$
For the claim, we have
$\displaystyle(J^{n-i}I^{i}+J^{n-i+1})/J^{n-i}Q^{i}$
$\displaystyle=\frac{J^{n-i}I^{i}+J^{n-i+1}}{J^{n-i}(I^{i}+JQ^{i-1})}=\frac{J^{n-i}I^{i}+J^{n-i+1}}{J^{n-i}I^{i}+J^{n-i+1}Q^{i-1}}$
$\displaystyle=\frac{(J^{n-i}I^{i}+J^{n-i+1})/J^{n-i}I^{i}}{(J^{n-i}I^{i}+J^{n-i+1}Q^{i-1})/J^{n-i}I^{i}}$
$\displaystyle\cong\frac{J^{n-i+1}/J^{n-i+1}I^{i}}{J^{n-i+1}Q^{i-1}/J^{n-i+1}I^{i}}\cong\frac{J^{n-i+1}}{J^{n-i+1}Q^{i-1}}.$
In the display, the first isomorphism follows from the fact that
$J^{n-i+1}\cap J^{n-i}I^{i}=J^{n-i+1}I^{i}=J^{n-i}I^{i}\cap J^{n-i+1}Q^{i-1},$
which holds since by Proposition 2.1,
$J^{n-i+1}I^{i}\subseteq J^{n-i}I^{i}\cap J^{n-i+1}Q^{i-1}\subseteq
J^{n-i}I^{i}\cap J^{n-i+1}\subseteq I^{i}\cap J^{n-i+1}=J^{n-i+1}I^{i}.$
Now for $i=n$, the exact sequence (2) yields
$\operatorname{Ass}_{R}(R/Q^{n})\subseteq\operatorname{Ass}_{R}(J/JQ^{n-1})\cup\operatorname{Ass}_{R}(A/I^{n}\otimes_{k}B/J).$
The cases $2\leq i\leq n-1$ and $i=1$ are similar. Putting everything
together,
(3)
$\operatorname{Ass}_{R}(R/Q^{n})\subseteq\bigcup_{i=1}^{n}\operatorname{Ass}_{R}(A/I^{i}\otimes_{k}J^{n-i}/J^{n-i+1}).$
Combining (1), (3) and Theorem 2.2, we finish the proof of (1).
(2) If $\operatorname{Ass}_{A}(A/I^{n})=\operatorname{Ass}_{A}(I^{n-1}/I^{n})$
for all $n\geq 1$, then clearly the upper bound and lower bound for
$\operatorname{Ass}(R/(I+J)^{n})$ in part (1) coincide. The conclusion
follows.
(3) In this situation, every associated prime of $A/I^{i}$ is generated by
variables. In particular, ${\mathfrak{p}}+{\mathfrak{q}}$ is a prime ideal of
$R$ for any
${\mathfrak{p}}\in\operatorname{Ass}(A/I^{i}),{\mathfrak{q}}\in\operatorname{Ass}_{B}(B/J^{j})$
and $i,j\geq 1$. The conclusion follows from part (2). ∎
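The monomial case in part (3) is easy to test in examples. Here is a small
Macaulay2 sanity check (the ideals are our own choices; we realize
$R=A\otimes_{k}B$ as a single polynomial ring, so that the associated primes
of the extended powers of $I$ and $J$ are the extensions of the associated
primes computed over $A$ and $B$):
```
R = QQ[a,b,x,y,z];
I = ideal(a^2, a*b);   -- extension of a monomial ideal of A = QQ[a,b]
J = ideal(x*y, y*z);   -- extension of a monomial ideal of B = QQ[x,y,z]
n = 3;
lhs = associatedPrimes((I+J)^n);
rhs = unique flatten flatten for i from 1 to n list
    for p in associatedPrimes(I^i) list
        for q in associatedPrimes(J^(n-i+1)) list trim(p + q);
all(lhs, P -> any(rhs, Q -> P == Q)) and all(rhs, Q -> any(lhs, P -> P == Q))
```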
###### Remark 4.2.
If Question 3.1 has a positive answer, then we can strengthen the conclusion
of Theorem 4.1: Let $A,B$ be standard graded polynomial rings over $k$. Let
$I,J$ be proper homogeneous ideals of $A,B$, respectively. Then for all $n\geq
1$, there is an equality
$\operatorname{Ass}_{R}\frac{R}{(I+J)^{n}}=\bigcup_{i=1}^{n}\mathop{\bigcup_{{\mathfrak{p}}\in\operatorname{Ass}_{A}(A/I^{i})}}_{{\mathfrak{q}}\in\operatorname{Ass}_{B}(B/J^{n-i+1})}\operatorname{Min}(R/({\mathfrak{p}}+{\mathfrak{q}})).$
###### Example 4.3.
In general, for singular base rings, each of the inclusions of Theorem 4.1 can
be strict. First, take $A=k[a,b,c]/(a^{2},ab,ac),I=(b)$, $B=k,J=(0)$. Then
$R=A,Q=I=(b)$ and $I^{2}=(b^{2})$. Let ${\mathfrak{m}}=(a,b,c)$. One can check
that $a\in(I^{2}:{\mathfrak{m}})\setminus I^{2}$ and $I/I^{2}\cong
A/(a,b)\cong k[c]$, whence
$\operatorname{depth}(A/I^{2})=0<\operatorname{depth}(I/I^{2})$. In
particular,
${\mathfrak{m}}\in\operatorname{Ass}_{A}(A/I^{2})\setminus\operatorname{Ass}_{A}(I/I^{2})$.
Thus the lower bound for $\operatorname{Ass}_{R}(R/Q^{2})$ is strict in this
case.
Second, take $A,I$ as above and $B=k[x,y,z]$,
$J=(x^{4},x^{3}y,xy^{3},y^{4},x^{2}y^{2}z).$ In this case
$Q=(b,x^{4},x^{3}y,xy^{3},y^{4},x^{2}y^{2}z)\subseteq
k[a,b,c,x,y,z]/(a^{2},ab,ac)$. Then $c+z$ is $(R/Q^{2})$-regular, so
$\operatorname{depth}R/Q^{2}>0=\operatorname{depth}A/I^{2}+\operatorname{depth}B/J$.
Hence $(a,b,c,x,y,z)$ does not lie in $\operatorname{Ass}_{R}(R/Q^{2})$, but
it belongs to the upper bound for $\operatorname{Ass}_{R}(R/Q^{2})$ in Theorem
4.1(1).
### 4.2. Asymptotic primes
Recall that if $I\neq A$, $\operatorname{grade}(I,A)$ denotes the maximal
length of an $A$-regular sequence consisting of elements in $I$; and if $I=A$,
by convention, $\operatorname{grade}(I,A)=\infty$ (see [3, Section 1.2] for
more details). Let $\operatorname{astab}^{*}(I)$ denote the minimal integer
$m\geq 1$ such that both $\operatorname{Ass}_{A}(A/I^{i})$ and
$\operatorname{Ass}_{A}(I^{i-1}/I^{i})$ are constant sets for all $i\geq m$.
By a result due to McAdam and Eakin [16, Corollary 13], for all
$i\geq\operatorname{astab}^{*}(I)$,
$\operatorname{Ass}_{A}(A/I^{i})\setminus\operatorname{Ass}_{A}(I^{i-1}/I^{i})$
consists only of prime divisors of $(0)$. Hence if
$\operatorname{grade}(I,A)\geq 1$, i.e. $I$ contains a non-zerodivisor, then
$\operatorname{Ass}_{A}(A/I^{i})=\operatorname{Ass}_{A}(I^{i-1}/I^{i})$ for
all $i\geq\operatorname{astab}^{*}(I)$. Denote
$\operatorname{Ass}_{A}^{*}(I)=\bigcup_{i\geq
1}\operatorname{Ass}_{A}(A/I^{i})=\bigcup_{i=1}^{\operatorname{astab}^{*}(I)}\operatorname{Ass}_{A}(A/I^{i})$
and
$\operatorname{Ass}_{A}^{\infty}(I)=\operatorname{Ass}_{A}(A/I^{i})\quad\text{for
any $i\geq\operatorname{astab}^{*}(I)$.}$
The following folklore lemma will be useful.
###### Lemma 4.4.
For any $n\geq 1$, we have
$\bigcup_{i=1}^{n}\operatorname{Ass}_{A}(A/I^{i})=\bigcup_{i=1}^{n}\operatorname{Ass}_{A}(I^{i-1}/I^{i}).$
In particular, if $\operatorname{grade}(I,A)\geq 1$ then
$\operatorname{Ass}_{A}^{*}(I)=\bigcup_{i=1}^{\operatorname{astab}^{*}(I)}\operatorname{Ass}_{A}(I^{i-1}/I^{i})=\bigcup_{i\geq
1}\operatorname{Ass}_{A}(I^{i-1}/I^{i}).$
###### Proof.
For the first assertion: Clearly the left-hand side contains the right-hand
one. Conversely, we deduce from the inclusion
$\operatorname{Ass}_{A}(A/I^{i})\subseteq\operatorname{Ass}_{A}(I^{i-1}/I^{i})\cup\operatorname{Ass}_{A}(A/I^{i-1})$
for $2\leq i\leq n$ that the other containment is valid as well.
The remaining assertion is clear. ∎
Now we describe the asymptotic associated primes of $(I+J)^{n}$ for $n\gg 0$
and provide an upper bound for $\operatorname{astab}^{*}(I+J)$ under certain
conditions on $I$ and $J$.
###### Theorem 4.5.
Assume that $\operatorname{grade}(I,A)\geq 1$ and
$\operatorname{grade}(J,B)\geq 1$, e.g. $A$ and $B$ are domains and $I,J$ are
proper ideals. Then for all
$n\geq\operatorname{astab}^{*}(I)+\operatorname{astab}^{*}(J)-1$, we have
$\displaystyle\operatorname{Ass}_{R}\frac{R}{(I+J)^{n}}$
$\displaystyle=\operatorname{Ass}_{R}\frac{(I+J)^{n-1}}{(I+J)^{n}}$
$\displaystyle=\mathop{\bigcup_{{\mathfrak{p}}\in\operatorname{Ass}^{*}_{A}(I)}}_{{\mathfrak{q}}\in\operatorname{Ass}^{\infty}_{B}(J)}\operatorname{Min}_{R}(R/{\mathfrak{p}}+{\mathfrak{q}})\bigcup\mathop{\bigcup_{{\mathfrak{p}}\in\operatorname{Ass}^{\infty}_{A}(I)}}_{{\mathfrak{q}}\in\operatorname{Ass}^{*}_{B}(J)}\operatorname{Min}_{R}(R/{\mathfrak{p}}+{\mathfrak{q}}).$
In particular,
$\operatorname{astab}^{*}(I+J)\leq\operatorname{astab}^{*}(I)+\operatorname{astab}^{*}(J)-1$
and
$\operatorname{Ass}^{\infty}_{R}(I+J)=\mathop{\bigcup_{{\mathfrak{p}}\in\operatorname{Ass}^{*}_{A}(I)}}_{{\mathfrak{q}}\in\operatorname{Ass}^{\infty}_{B}(J)}\operatorname{Min}_{R}(R/{\mathfrak{p}}+{\mathfrak{q}})\bigcup\mathop{\bigcup_{{\mathfrak{p}}\in\operatorname{Ass}^{\infty}_{A}(I)}}_{{\mathfrak{q}}\in\operatorname{Ass}^{*}_{B}(J)}\operatorname{Min}_{R}(R/{\mathfrak{p}}+{\mathfrak{q}}).$
###### Proof.
Denote $Q=I+J$. It suffices to prove that for
$n\geq\operatorname{astab}^{*}(I)+\operatorname{astab}^{*}(J)-1$, both the
lower bound (which is nothing but $\operatorname{Ass}_{R}(Q^{n-1}/Q^{n})$) and
upper bound for $\operatorname{Ass}_{R}(R/Q^{n})$ in Theorem 4.1 are equal to
$\mathop{\bigcup_{{\mathfrak{p}}\in\operatorname{Ass}^{*}_{A}(I)}}_{{\mathfrak{q}}\in\operatorname{Ass}^{\infty}_{B}(J)}\operatorname{Min}_{R}(R/{\mathfrak{p}}+{\mathfrak{q}})\bigcup\mathop{\bigcup_{{\mathfrak{p}}\in\operatorname{Ass}^{\infty}_{A}(I)}}_{{\mathfrak{q}}\in\operatorname{Ass}^{*}_{B}(J)}\operatorname{Min}_{R}(R/{\mathfrak{p}}+{\mathfrak{q}}).$
First, for the lower bound, we need to show that for
$n\geq\operatorname{astab}^{*}(I)+\operatorname{astab}^{*}(J)-1$,
(4)
$\displaystyle\bigcup_{i=1}^{n}\mathop{\bigcup_{{\mathfrak{p}}\in\operatorname{Ass}_{A}(I^{i-1}/I^{i})}}_{{\mathfrak{q}}\in\operatorname{Ass}_{B}(J^{n-i}/J^{n-i+1})}\operatorname{Min}_{R}(R/{\mathfrak{p}}+{\mathfrak{q}})$
$\displaystyle=\mathop{\bigcup_{{\mathfrak{p}}\in\operatorname{Ass}^{*}_{A}(I)}}_{{\mathfrak{q}}\in\operatorname{Ass}^{\infty}_{B}(J)}\operatorname{Min}_{R}(R/{\mathfrak{p}}+{\mathfrak{q}})\bigcup\mathop{\bigcup_{{\mathfrak{p}}\in\operatorname{Ass}^{\infty}_{A}(I)}}_{{\mathfrak{q}}\in\operatorname{Ass}^{*}_{B}(J)}\operatorname{Min}_{R}(R/{\mathfrak{p}}+{\mathfrak{q}}).$
If $i\leq\operatorname{astab}^{*}(I)$, then $n-i+1\geq\operatorname{astab}^{*}(J)$,
hence
$\operatorname{Ass}_{B}(J^{n-i}/J^{n-i+1})=\operatorname{Ass}^{\infty}_{B}(J)$.
In particular,
$\displaystyle\bigcup_{i=1}^{\operatorname{astab}^{*}(I)}\mathop{\bigcup_{{\mathfrak{p}}\in\operatorname{Ass}_{A}(I^{i-1}/I^{i})}}_{{\mathfrak{q}}\in\operatorname{Ass}_{B}(J^{n-i}/J^{n-i+1})}\operatorname{Min}_{R}(R/{\mathfrak{p}}+{\mathfrak{q}})$
$\displaystyle=\bigcup_{i=1}^{\operatorname{astab}^{*}(I)}\mathop{\bigcup_{{\mathfrak{p}}\in\operatorname{Ass}_{A}(I^{i-1}/I^{i})}}_{{\mathfrak{q}}\in\operatorname{Ass}^{\infty}_{B}(J)}\operatorname{Min}_{R}(R/{\mathfrak{p}}+{\mathfrak{q}})=\mathop{\bigcup_{{\mathfrak{p}}\in\operatorname{Ass}^{*}_{A}(I)}}_{{\mathfrak{q}}\in\operatorname{Ass}^{\infty}_{B}(J)}\operatorname{Min}_{R}(R/{\mathfrak{p}}+{\mathfrak{q}}),$
where the second equality follows from Lemma 4.4.
If $i\geq\operatorname{astab}^{*}(I)$ then
$\operatorname{Ass}_{A}(A/I^{i})=\operatorname{Ass}^{\infty}_{A}(I)$, $1\leq
n+1-i\leq n+1-\operatorname{astab}^{*}(I)$. Hence
$\displaystyle\bigcup_{i=\operatorname{astab}^{*}(I)}^{n}\mathop{\bigcup_{{\mathfrak{p}}\in\operatorname{Ass}_{A}(I^{i-1}/I^{i})}}_{{\mathfrak{q}}\in\operatorname{Ass}_{B}(J^{n-i}/J^{n-i+1})}\operatorname{Min}_{R}(R/{\mathfrak{p}}+{\mathfrak{q}})$
$\displaystyle=\bigcup_{i=1}^{n+1-\operatorname{astab}^{*}(I)}\mathop{\bigcup_{{\mathfrak{p}}\in\operatorname{Ass}^{\infty}_{A}(I)}}_{{\mathfrak{q}}\in\operatorname{Ass}_{B}(J^{i-1}/J^{i})}\operatorname{Min}_{R}(R/{\mathfrak{p}}+{\mathfrak{q}})=\mathop{\bigcup_{{\mathfrak{p}}\in\operatorname{Ass}^{\infty}_{A}(I)}}_{{\mathfrak{q}}\in\operatorname{Ass}^{*}_{B}(J)}\operatorname{Min}_{R}(R/{\mathfrak{p}}+{\mathfrak{q}}).$
The second equality follows from the inequality
$n+1-\operatorname{astab}^{*}(I)\geq\operatorname{astab}^{*}(J)$ and Lemma
4.4. Putting everything together, we get (4). The argument for the equality of
the upper bound is entirely similar. The proof is concluded. ∎
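The stabilization predicted by Theorem 4.5 can be watched numerically; a quick
Macaulay2 loop (the ideals are our own sample choices):
```
R = QQ[a,b,x,y];
Q = ideal(a^2, a*b) + ideal(x*y^2, x^2*y);  -- I + J in disjoint sets of variables
for n from 1 to 5 do print(n, associatedPrimes(Q^n))
```
By Brodmann's theorem the printed sets are eventually constant, and Theorem
4.5 bounds where the stabilization happens in terms of the two summands.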
## 5\. The persistence property of sums
Recall that an ideal $I$ in a Noetherian ring $A$ has the _persistence
property_ if
$\operatorname{Ass}(A/I^{n})\subseteq\operatorname{Ass}(A/I^{n+1})$ for all
$n\geq 1$. There exist ideals which fail the persistence property: A well-
known example is $I=(a^{4},a^{3}b,ab^{3},b^{4},a^{2}b^{2}c)\subseteq
k[a,b,c]$, for which $I^{n}=(a,b)^{4n}$ for $n\geq 2$ and
$(a,b,c)\in\operatorname{Ass}(A/I)\setminus\operatorname{Ass}(A/I^{n})$ for
all $n\geq 2$. (For the equality $I^{n}=(a,b)^{4n}$ for all $n\geq 2$, note
that
$U=(a^{4},a^{3}b,ab^{3},b^{4})\subseteq I\subseteq(a,b)^{4}.$
Hence $U^{n}\subseteq I^{n}\subseteq(a,b)^{4n}$ for all $n$, and it remains to
check that $U^{n}=(a,b)^{4n}$ for all $n\geq 2$. By direct inspection, this
holds for $n\in\\{2,3\\}$. For $n\geq 4$, since $U^{n}=U^{2}U^{n-2}$, we are
done by induction.) However, in contrast to the case of monomial ideals, it is
still challenging to find a homogeneous prime ideal without the persistence
property (if it exists).
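Checks of this kind reduce to finite computations with exponent vectors and are easy to script. Below is a minimal sketch (in Python; a computer algebra system such as Macaulay2 [6] would serve equally well) that verifies $U^{n}=(a,b)^{4n}$ for small $n$: identifying a monomial $a^{i}b^{j}$ with its exponent pair, $U^{n}$ is generated by all sums of $n$ exponent vectors of the generators of $U$, all of total degree $4n$, so the equality holds precisely when every pair $(i,4n-i)$ arises as such a sum.

```python
from itertools import combinations_with_replacement

# Exponent vectors (in a and b) of the generators of U = (a^4, a^3*b, a*b^3, b^4).
GENS = [(4, 0), (3, 1), (1, 3), (0, 4)]

def power_exponents(n):
    """Exponent vectors of the generators of U^n: all sums of n generator
    exponent vectors (with repetition)."""
    return {(sum(g[0] for g in c), sum(g[1] for g in c))
            for c in combinations_with_replacement(GENS, n)}

def equals_full_power(n):
    """Check U^n = (a,b)^(4n).  Both ideals are generated in degree 4n, so
    equality holds iff every monomial a^i * b^(4n-i) lies in U^n."""
    return power_exponents(n) == {(i, 4 * n - i) for i in range(4 * n + 1)}

for n in range(1, 5):
    print(n, equals_full_power(n))
# Prints False for n = 1 (the pair (2,2), i.e. a^2*b^2, is missing) and True
# for n = 2, 3, 4; combined with the containments U^n <= I^n <= (a,b)^(4n),
# this confirms the claimed equality for small n.
```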
I. Swanson and R.M. Walker raised the question [22, Question 1.6] of whether,
given two ideals $I$ and $J$ living in different polynomial rings that both
have the persistence property, the sum $I+J$ has the persistence property as
well. The third main result answers [22, Question 1.6] in the positive in many
new cases. In fact, its case (ii) subsumes [22, Corollary 1.7].
###### Corollary 5.1.
Let $A$ and $B$ be standard graded polynomial rings over $k$, and let $I$ and
$J$ be proper homogeneous ideals of $A$ and $B$, respectively. Assume that $I$ has
the persistence property, and
$\operatorname{Ass}(A/I^{n})=\operatorname{Ass}(I^{n-1}/I^{n})$ for all $n\geq
1$. Then $I+J$ has the persistence property. In particular, this is the case
if any of the conditions hold:
(i) $I$ is a monomial ideal satisfying the persistence property;
(ii) $I^{n+1}:I=I^{n}$ for all $n\geq 1$;
(iii) $I^{n}$ is unmixed for all $n\geq 1$;
(iv) $\operatorname{char}k=0$, $\dim(A/I)\leq 1$ and $I$ has the persistence
property.
###### Proof.
The hypothesis $\operatorname{Ass}(A/I^{n})=\operatorname{Ass}(I^{n-1}/I^{n})$
for all $n\geq 1$, together with Theorem 4.1(2), yields for all such $n$ the equality
(5)
$\operatorname{Ass}_{R}\frac{R}{(I+J)^{n}}=\bigcup_{i=1}^{n}\mathop{\bigcup_{{\mathfrak{p}}\in\operatorname{Ass}_{A}(A/I^{i})}}_{{\mathfrak{q}}\in\operatorname{Ass}_{B}(J^{n-i}/J^{n-i+1})}\operatorname{Min}_{R}(R/{\mathfrak{p}}+{\mathfrak{q}}).$
Take $P\in\operatorname{Ass}_{R}\dfrac{R}{(I+J)^{n}}$; then for some $1\leq
i\leq n$, ${\mathfrak{p}}\in\operatorname{Ass}_{A}(A/I^{i})$ and
${\mathfrak{q}}\in\operatorname{Ass}_{B}(J^{n-i}/J^{n-i+1})$, we get
$P\in\operatorname{Min}_{R}(R/{\mathfrak{p}}+{\mathfrak{q}})$.
Since $I$ has the persistence property, it follows that
$\operatorname{Ass}(A/I^{i})\subseteq\operatorname{Ass}(A/I^{i+1})$, so
${\mathfrak{p}}\in\operatorname{Ass}(A/I^{i+1})$. Hence thanks to (5),
$P\in\mathop{\bigcup_{{\mathfrak{p}}_{1}\in\operatorname{Ass}_{A}(A/I^{i+1})}}_{{\mathfrak{q}}_{1}\in\operatorname{Ass}_{B}(J^{n-i}/J^{n-i+1})}\operatorname{Min}_{R}(R/{\mathfrak{p}}_{1}+{\mathfrak{q}}_{1})\subseteq\operatorname{Ass}_{R}\frac{R}{(I+J)^{n+1}}.$
Therefore $I+J$ has the persistence property.
The second assertion is a consequence of the first, Theorem 3.2 and Lemma 2.4.
∎
## Acknowledgments
The first author (HDN) and the second author (QHT) are supported by Vietnam
National Foundation for Science and Technology Development (NAFOSTED) under
grant numbers 101.04-2019.313 and 1/2020/STS02, respectively. Nguyen is also
grateful to the support of Project CT0000.03/19-21 of the Vietnam Academy of
Science and Technology. Tran also acknowledges the partial support of the Core
Research Program of Hue University, Grant No. NCM.DHH.2020.15. Finally, part
of this work was done while the second author was visiting the Vietnam
Institute for Advanced Study in Mathematics (VIASM). He would like to thank the
VIASM for its hospitality and financial support.
The authors are grateful to the anonymous referee for his/her careful reading
of the manuscript and many useful suggestions.
## References
* [1] R. Ahangari Maleki, _The Golod property of powers of ideals and Koszul ideals_ , J. Pure Appl. Algebra 223 (2019), no. 2, 605–618.
* [2] M. Brodmann, _Asymptotic stability of $\operatorname{Ass}(M/I^{n}M)$_, Proc. Amer. Math. Soc. 74 (1979), no. 1, 16–18.
* [3] W. Bruns and J. Herzog, _Cohen-Macaulay rings. Rev. ed._. Cambridge Studies in Advanced Mathematics 39, Cambridge University Press (1998).
* [4] D. Eisenbud, _Commutative algebra. With a view toward algebraic geometry_. Graduate Texts in Mathematics 150. Springer-Verlag, New York (1995).
* [5] C. Francisco, H.T. Hà, and A. Van Tuyl, _A conjecture on critical graphs and connections to the persistence of associated primes_. Discrete Math. 310 (2010), no. 15–16, 2176–2182.
* [6] D. Grayson and M. Stillman, _Macaulay2, a software system for research in algebraic geometry._ Available at http://www.math.uiuc.edu/Macaulay2.
* [7] H.T. Hà and S. Morey, _Embedded associated primes of powers of square-free monomial ideals_. J. Pure Appl. Algebra 214 (2010), 301–308.
* [8] H.T. Hà, H.D. Nguyen, N.V. Trung and T.N. Trung, _Symbolic powers of sums of ideals_. Math. Z. 294 (2020), 1499–1520.
* [9] H.T. Hà, N.V. Trung and T.N. Trung, _Depth and regularity of powers of sums of ideals_. Math. Z. 282 (2016), 819–838.
* [10] W. Heinzer, D. Lantz, and K. Shah, _The Ratliff-Rush ideals in a Noetherian ring_. Comm. Algebra 20 (1992), no. 2, 591–622.
* [11] J. Herzog and A.A. Qureshi, _Persistence and stability properties of powers of ideals._ J. Pure Appl. Algebra 219 (2015), no. 3, 530–542.
* [12] L.T. Hoa, _Powers of monomial ideals and combinatorics_. In _New Trends in Algebras and Combinatorics_ , Proceedings of the 3rd International Congress in Algebras and Combinatorics (ICAC2017), Eds: K.P. Shum, E. Zelmanov, P. Kolesnikov, and S.M. Anita Wong, World Scientific (2020), pp. 149–178.
* [13] S. Huckaba and T. Marley, _Depth formulas for certain graded rings associated to an ideal_. Nagoya Math. J. 133 (1994), 57–69.
* [14] T. Kaiser, M. Stehlík, and R. Škrekovski, _Replication in critical graphs and the persistence of monomial ideals_. J. Combin. Theory Ser. A 123 (2014), 239–251.
* [15] A. Kustin and B. Ulrich, _A family of complexes associated to an almost alternating map, with applications to residual intersection_. Mem. Amer. Math. Soc. 95, no. 461 (1992).
* [16] S. McAdam and P. Eakin, _The asymptotic Ass_. J. Algebra 61 (1979), 71–81.
* [17] S. Morey and R.H. Villarreal, _Edge ideals: algebraic and combinatorial properties_. In: Progress in Commutative Algebra, Combinatorics and Homology, Vol. 1 (C. Francisco, L. C. Klingler, S. Sather-Wagstaff and J. C. Vassilev, Eds.), De Gruyter, Berlin, 2012, 85–126.
* [18] H.D. Nguyen and T. Vu, _Powers of sums and their homological invariants_. J. Pure Appl. Algebra 223 (2019), no. 7, 3081–3111.
* [19] L. J. Ratliff, Jr., _On prime divisors of $I^{n}$, $n$ large_, Michigan Math. J. 23 (1976), 337–352.
* [20] M.E. Rossi and I. Swanson, _Notes on the behavior of the Ratliff-Rush filtration._ Commutative algebra (Grenoble/Lyon, 2001), 313–328, Contemp. Math., 331, Amer. Math. Soc., Providence, RI, 2003.
* [21] H. Sabzrou, M. Tousi, and S. Yassemi, _Simplicial join via tensor products_. Manuscripta Math. 126 (2008), 255–272.
* [22] I. Swanson and R.M. Walker, _Tensor-multinomial sums of ideals: Primary decompositions and persistence of associated primes_. Proc. Amer. Math. Soc. 147 (2019), no. 12, 5071–5082.
|
Anantha Aiyyer and Terrell Wade
Department of Marine, Earth and Atmospheric Sciences, North Carolina State University
A. Aiyyer<EMAIL_ADDRESS>
# Acceleration of Tropical Cyclones as a Proxy for Extratropical Interactions: Synoptic-Scale Patterns and Long-Term Trends
###### Abstract
It is well known that rapid changes in tropical cyclone motion occur during
interaction with extratropical waves. While the translation speed has received
much attention in the published literature, acceleration has not. Using a
large data sample of Atlantic tropical cyclones, we formally examine the
composite synoptic-scale patterns associated with _tangential_ and _curvature_
components of their acceleration. During periods of rapid tangential
acceleration, the composite tropical cyclone moves poleward between an
upstream trough and downstream ridge of a developing extratropical wavepacket.
The two systems subsequently merge in a manner that is consistent with
extratropical transition. During rapid curvature acceleration, a prominent
downstream ridge promotes recurvature of the tropical cyclone. In contrast,
during rapid tangential or curvature deceleration, a ridge is located directly
poleward of the tropical cyclone. Locally, this arrangement takes the form of
a cyclone-anticyclone vortex pair somewhat akin to a dipole block. On average,
the tangential acceleration peaks 18 hours prior to extratropical transition
while the curvature acceleration peaks at recurvature. These findings confirm
that rapid acceleration of tropical cyclones is mediated by interaction with
extratropical baroclinic waves. Furthermore, the tails of the distributions of
acceleration and translation speed show a robust reduction over the past five
decades. We speculate that these trends may reflect the poleward shift and
weakening of extratropical Rossby waves.
The track and movement of tropical cyclones are known to be governed by the
background environment (e.g., Hodanish and Gray, 1993). It was recognized
early on that the translation speed of a tropical cyclone can be approximated
by the surrounding wind field (Emanuel, 2018). A tropical cyclone, however, is
not an isolated vortex that is passively carried by the current. The
background environment comprises synoptic- and large-scale circulation
features, with attendant gradients of potential vorticity, moisture, and
deformation. The tropical cyclone actively responds to these external stimuli.
The large scale environment is also impacted by the interaction. For example,
the generation of $\beta$-_gyres_, which influence tropical cyclone motion,
is a response to the background potential vorticity gradient (e.g., Chan and
Williams, 1987; Wu and Emanuel, 1993). Similarly, vertical wind shear can
limit the intensification of tropical cyclones by importing low entropy air
(e.g., Tang and Emanuel, 2010). This, in turn, can impact subsequent tropical
cyclone motion. On the other hand, a tropical cyclone can influence the large-
scale flow by exciting waves in the extratropical stormtrack, leading to rapid
downstream development and rearrangement of the flow (e.g., Jones et al.,
2003). It can be contended that a tropical cyclone is always interacting with
its environment, and the interaction is partly manifested in its motion.
Track and translation speed are two aspects of tropical cyclone motion that
are particularly important for operational forecasts. The track garners much
attention for obvious reasons – it informs potential locations that will be
affected by a storm. The translation speed impacts intensity change, storm
surge and local precipitation amount. There is a large body of published
literature and review articles dealing with various research and operational
problems related to tropical cyclone track and speed (e.g., Chan, 2005;
Emanuel, 2018). When tropical cyclones move poleward, they often encroach upon
the extratropical stormtrack. This leads to their interaction with baroclinic
eddies of the stormtrack. The outcome of the interaction is varied. Some
tropical cyclones weaken and dissipate while others strengthen and retain
their tropical nature for an additional period. In the North Atlantic, around
50% of tropical cyclones eventually experience extratropical transition - a
complex process during which the warm-core tropical cyclone evolves into an
extratropical cyclone and becomes part of an extratropical stormtrack (e.g.,
Hart and Evans, 2001). Several recent reviews have extensively documented the
research history and dynamics of extratropical transition (e.g., Evans et al.,
2017; Keller et al., 2019). Forecasters have long known that tropical cyclones
accelerate forward during extratropical transition, but relatively less
attention has been devoted to the details of this acceleration in the research
literature.
This paper has two main themes. The first concerns the synoptic-scale flow
associated with rapid acceleration and deceleration of tropical cyclones. This
is addressed via composites of global reanalysis fields and observed tropical
cyclone tracks from 1980–2016. We consider both tangential and curvature
accelerations. To our knowledge, such formal delineation of extratropical
interaction based on categories of tropical cyclone acceleration has not been
presented in the literature. Of note, however, is the recent work of Riboldi
et al. (2019) that is relevant to this paper. That study examined the
interaction of accelerating and decelerating upper-level troughs and recurving
western North Pacific typhoons. Their key findings are: (a) In the majority of
cases, a recurving tropical cyclone is associated with a decelerating upper-
level trough that remains upstream; (b) The upper-level trough appears to
phase lock with the tropical cyclone; and (c) Recurvatures featuring such
trough deceleration are frequently associated with downstream atmospheric
blocking. As we shown in subsequent sections, many of these results can be
recovered independently when we approach the problem from the perspective of
tropical cyclone motion. As such, part of our work complements the findings of
Riboldi et al. (2019).
The second theme concerns the identification of long-term trends in tropical
cyclone motion. While we focus on acceleration, we begin with translation
speed to place our results within the context of related recent work. Kossin
(2018) reported that the translation speed of tropical cyclones over the
period 1949–2016 has reduced by about 10% over the globe, and about 6% over
the North Atlantic. Without directly attributing it, Kossin (2018) noted that
the trend is consistent with the observed slow-down of the atmospheric
circulation due to anthropogenic climate change. In addition to the
circulation changes, the general warming of the atmosphere is associated with
an increase in water vapor content per the Clausius-Clapeyron scaling (CCS).
This translates to an increase in precipitation rates that can locally exceed the
CCS (e.g., Nie et al., 2018). Kossin (2018) made the point that the slowing of
tropical cyclones may compound the problem of heavy precipitation in a warmer
climate. However, Moon et al. (2019) and Lanzante (2019) argued that the
historical record of tropical cyclone track data, particularly prior to the
advent of the satellite era around the mid-1960s, is likely incomplete. They
also showed that annual-mean tropical cyclone translation speed exhibits step-
like changes and questioned the existence of a true monotonic trend. They
attributed these discrete changes to natural factors (e.g., regional climate
variability) as well as omissions and errors due to lack of direct observation
prior to the availability of extensive remote sensing tools.
We revisit the issue of trends in tropical cyclone motion, but restrict our
attention to the Atlantic basin and the years since 1966. This year is
considered to be the beginning of the _satellite era_ , at least for the
Atlantic (Landsea, 2007). In contrast, global coverage by geostationary
satellites began much later, in 1981 (e.g., Schreck et al., 2014). Lanzante
(2019) found a change point in 1965 for the timeseries of annual average
tropical cyclone speed for the North Atlantic basin. Lanzante (2019) also
reported that accounting for the change point dramatically reduces the trend
from the value reported by Kossin (2018). In light of the issues raised about
the reliability of tropical cyclone track data prior to the _satellite era_ ,
we use 1966 as our starting point and seek to ascertain whether robust trends
exist in the observed record of tropical cyclone acceleration.
## 1 Data
We use the IBTrACS v4 (Knapp et al., 2010) for tropical cyclone locations. The
information in this database typically spans the tropical depression stage to
extratropical cyclone and/or dissipation. Kossin (2018) used all locations in
the IBTrACS as long as a storm lasted more than 3 days and did not apply a
threshold for maximum sustained winds. We follow the same method with one
major difference. We only consider those instances of the track information
wherein a given storm was still classified as tropical. We recognize the fact
that the nature of a storm is fundamentally altered after it loses its surface
enthalpy flux-driven, warm-core structure. After extratropical transition,
evolution is governed by stormtrack dynamics of baroclinic eddies. We wish to
avoid conflating the motions of two dynamically distinct weather phenomena.
For the same reason, we also omit subtropical storms which are catalogued in
the track database. We argue that, by restricting our track data to only
instances that were deemed to be tropical in nature, we can paint a more
appropriate picture of the composite environment and trends associated with
the rapid acceleration of tropical cyclones.
To ascertain whether a tropical storm underwent extratropical transition, we
use the _nature_ designation in the IBTrACS database that relies on the
judgement of the forecasters from one or more agencies responsible for an
ocean basin. In IBTrACS, the nature flag is set to "ET" after the transition
is complete. Admittedly, there will be some subjectivity in this designation.
An alternative would be to employ a metric such as the cyclone phase space
(Hart, 2003), usually calculated using meteorological fields from global
numerical weather prediction models (e.g., reanalysis). As we wish to remain
independent of modeled products to characterize the storms, we rely on the
forecaster designated storm nature. Furthermore, Bieli et al. (2019) found
that the phase space calculation was sensitive to the choice of the reanalysis
model, which would add another source of uncertainty. Occasionally, the nature
of the storm is not recorded (NR) or, if there is an inconsistency among the
agencies, it is designated as mixed (MX). In the North Atlantic, only a small
fraction of track data ($\approx 0.5\%$) in the IBTrACS is designated as NR or
MX. This fraction is higher in other basins (e.g., $\approx 14\%$ in the
western North Pacific). This is another reason why we restrict our attention
to the Atlantic basin. Henceforth, we will collectively refer to the
designations NR, MX and ET as _non-tropical_ , and unless explicitly stated,
omit the associated track data in our calculations.
For the trends and basic statistics, we focus on the years 1966–2019, within
the ongoing _satellite era_ for the North Atlantic basin (e.g. Landsea, 2007;
Vecchi and Knutson, 2008). Compared to years prior, tropical cyclone data is
deemed more reliable once satellite-based observations became available. For
the composites, we use the European Center for Medium Range Weather
Forecasting (ECMWF) ERA-Interim (ERAi) reanalysis (Dee et al., 2011) for the
period 1981–2016. In subsequent sections, we focus on the region 20–50°N,
wherein tropical cyclones are more likely to interact with extratropical
baroclinic waves.
## 2 Tangential and curvature acceleration
The acceleration of a hypothetical fluid element moving with the center of a
tropical cyclone can be written as:
$\mathbf{a}=\frac{dV}{dt}\mathbf{\hat{s}}+\frac{V^{2}}{R}\mathbf{\hat{n}}$ (1)
where $V$ is the forward speed and $R$ is the radius of curvature of the track
at a given location. Here, $\mathbf{\hat{s}}$ and $\mathbf{\hat{n}}$ are
orthogonal unit vectors in the so-called _natural coordinate_ system. The
former is directed along the tropical cyclone motion. The latter is directed
along the radius of curvature of the track. The first term in eq. 1 is the
tangential acceleration and the second is the curvature or normal
acceleration. The speed $V_{j}$ at any track location (given by index j) is
calculated as:
$V_{j}=\frac{(D_{j,j-1}+D_{j,j+1})}{\delta t}$ (2)
where $D$ refers to the distance between the two consecutive points, indexed
as shown above, along the track. Since we used 3-hourly reports from the
IBTrACS, $\delta t=6$ h. The tangential acceleration was calculated from the
speed using centered differences.
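For concreteness, these two computations can be sketched as follows (Python assumed; the helper names are ours and not part of any IBTrACS tooling). Leg distances come from the haversine formula, the speed from eq. (2), and the tangential acceleration from centered differences of the speed, converted to km/hr day-1:

```python
import numpy as np

R_EARTH = 6371.0  # mean Earth radius in km

def great_circle_km(lat1, lon1, lat2, lon2):
    """Haversine distance (km) between points given in degrees."""
    p1, p2 = np.radians(lat1), np.radians(lat2)
    dphi = p2 - p1
    dlam = np.radians(lon2) - np.radians(lon1)
    h = np.sin(dphi / 2) ** 2 + np.cos(p1) * np.cos(p2) * np.sin(dlam / 2) ** 2
    return 2.0 * R_EARTH * np.arcsin(np.sqrt(h))

def speed_and_tangential_accel(lat, lon, dt_hours=3.0):
    """Speed (km/hr) at interior track points via eq. (2), i.e., the path
    length from point j-1 to j+1 divided by delta-t = 2*dt_hours, and the
    tangential acceleration (km/hr per day) via centered differences."""
    lat, lon = np.asarray(lat), np.asarray(lon)
    d = great_circle_km(lat[:-1], lon[:-1], lat[1:], lon[1:])  # leg distances
    v = (d[:-1] + d[1:]) / (2.0 * dt_hours)        # speed at points 1 .. N-2
    a_t = (v[2:] - v[:-2]) / (2.0 * dt_hours) * 24.0  # per hour -> per day
    return v, a_t
```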
To calculate the curvature acceleration, it is necessary to first determine
the radius of curvature, R. A standard approach to calculating R, given a set
of discrete points along a curve – in our case, a tropical cyclone track – is
to fit a circle through three consecutive points. For a curved line on a
sphere, it can be shown that:
$R=R_{e}\sin^{-1}{\left(\sqrt{\frac{2d_{12}d_{13}d_{23}}{(d_{12}+d_{13}+d_{23})^{2}-2(d_{12}^{2}+d_{13}^{2}+d_{23}^{2})}}\right)}$ (3)

where $R$ is the radius of curvature, $R_{e}$ is the radius of the Earth, and the $d$ terms are expressed as follows:

$\displaystyle d_{12}=1-(\cos{T_{1}}\cos{T_{2}}\cos{(N_{2}-N_{1})}+\sin{T_{1}}\sin{T_{2}})$

$\displaystyle d_{13}=1-(\cos{T_{1}}\cos{T_{3}}\cos{(N_{3}-N_{1})}+\sin{T_{1}}\sin{T_{3}})$

$\displaystyle d_{23}=1-(\cos{T_{2}}\cos{T_{3}}\cos{(N_{3}-N_{2})}+\sin{T_{2}}\sin{T_{3}})$

where $T_{1}$, $T_{2}$, and $T_{3}$ are the latitudes of the three points and $N_{1}$, $N_{2}$, and $N_{3}$ are the longitudes. The center of the circle is given by the coordinates:

$\displaystyle\tan{\mathbf{T}}=\pm{\frac{\cos{T_{1}}\cos{T_{2}}\sin{(N_{2}-N_{1})}+\cos{T_{1}}\cos{T_{3}}\sin{(N_{1}-N_{3})}+\cos{T_{2}}\cos{T_{3}}\sin{(N_{3}-N_{2})}}{\sqrt{\alpha^{2}+\beta^{2}}}}$ (4)

$\displaystyle\tan{\mathbf{N}}=-{\frac{\alpha}{\beta}}$

where $\mathbf{T}$ and $\mathbf{N}$ are the latitude and longitude, respectively, of the circle’s center, and $\alpha$ and $\beta$ are obtained using:

$\displaystyle\alpha=\cos{T_{1}}(\sin{T_{2}}-\sin{T_{3}})\cos{N_{1}}+\cos{T_{2}}(\sin{T_{3}}-\sin{T_{1}})\cos{N_{2}}+\cos{T_{3}}(\sin{T_{1}}-\sin{T_{2}})\cos{N_{3}}$

$\displaystyle\beta=\cos{T_{1}}(\sin{T_{2}}-\sin{T_{3}})\sin{N_{1}}+\cos{T_{2}}(\sin{T_{3}}-\sin{T_{1}})\sin{N_{2}}+\cos{T_{3}}(\sin{T_{1}}-\sin{T_{2}})\sin{N_{3}}$
Figure 1: Illustration of the circle-fit and radius of curvature calculations
at five selected locations along the track of hurricanes Katrina (2005) and
Irma (2017).
While the expressions above were derived independently for this work, we make
no claim of originality since they are based on elementary principles of
geometry. Figure 1 shows some examples of the radius of curvature calculation
for hurricanes Katrina (2005) and Irma (2017).
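A direct transcription of these expressions might look as follows (a sketch in the same notation; the clip on the arcsine argument is our own guard against floating-point round-off when the three points are nearly collinear). The curvature (normal) acceleration then follows from eq. (1) as $V^{2}/R$:

```python
import numpy as np

R_EARTH = 6371.0  # mean Earth radius in km

def radius_of_curvature(lats, lons):
    """Radius of curvature (km) of the circle through three consecutive
    track points, transcribing the spherical circle-fit expressions above.
    lats, lons: three latitudes/longitudes in degrees."""
    T = np.radians(np.asarray(lats, dtype=float))
    N = np.radians(np.asarray(lons, dtype=float))

    def d(i, j):
        return 1.0 - (np.cos(T[i]) * np.cos(T[j]) * np.cos(N[j] - N[i])
                      + np.sin(T[i]) * np.sin(T[j]))

    d12, d13, d23 = d(0, 1), d(0, 2), d(1, 2)
    denom = (d12 + d13 + d23) ** 2 - 2.0 * (d12 ** 2 + d13 ** 2 + d23 ** 2)
    arg = np.sqrt(2.0 * d12 * d13 * d23 / denom)
    return R_EARTH * np.arcsin(np.clip(arg, 0.0, 1.0))

def curvature_accel(v_kmhr, r_km):
    """Curvature (normal) acceleration V^2/R from eq. (1), converted from
    km/hr per hour to km/hr per day."""
    return v_kmhr ** 2 / r_km * 24.0
```

As a consistency check on the formula, for nearly collinear points the fitted circle approaches a great circle and $R$ tends to $\pi R_{e}/2$, the largest radius representable on the sphere.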
## 3 Basic Statistics
Over the years 1966–2019, 689 storms in the North Atlantic meet the 3-day
threshold. Figure 2 shows some basic statistics for speed and accelerations as
a function of latitude for Atlantic storms. Tropical cyclone locations are
sorted in $10^{\circ}$-wide latitude bins and all instances classified as non-
tropical were excluded. Table 1 provides the same data. The average speed of
all North Atlantic tropical cyclones (including over-land) is about 21 km/hr.
As expected, tropical cyclone speed clearly varies with latitude. It is lower
in the subtropics as compared to other latitudes. It increases sharply in the
vicinity of $40^{\circ}$N. The tangential acceleration, as defined in eq. 1, can
be positive or negative. The mean and median tangential acceleration remain
near-zero equatorward of $30^{\circ}$N. This suggests that tropical cyclones
in this region tend to translate steadily, or are equally likely to accelerate
and decelerate. The tangential acceleration is substantially positive poleward
of $30^{\circ}$N. Tropical cyclones in these latitudes are subject to rapid
acceleration. For example, the mean tangential acceleration in the
$30^{\circ}-40^{\circ}$N latitude band is about 5.3 km/hr day-1. The curvature
acceleration, by our definition, takes only positive values and steadily
increases with latitude. The distributions of tropical cyclone speed and
tangential acceleration are relatively symmetric about the median value as
compared to the curvature acceleration.
Figure 2: Distribution of (a) speed (km hr-1); (b) tangential acceleration (km hr-1 day-1); and (c) curvature acceleration (km hr-1 day-1) of Atlantic TCs as a function of latitude. Storm instances classified as ET or NR were excluded. Data from 1966–2019 were binned within 10°-wide overlapping latitude bins. Statistics shown are: median (horizontal line within the box), mean (dot), and 10th, 25th, 75th and 90th percentiles.

Table 1: Speed (km hr-1) and tangential and curvature acceleration (km hr-1 day-1) of Atlantic TCs in the IBTrACS database as a function of latitude. Storm instances classified as ET or NR were excluded. N refers to the number of 3-hourly track positions in each latitude bin over the period 1966–2019.

| | Speed | Tang. Accel | Curv. Accel
---|---|---|---
Latitude | N | Mean | Median | Std Dev. | Mean | Median | Std Dev. | Mean | Median | Std Dev.
Full Basin | 38822 | 20.85 | 18.87 | 12.2 | 2.27 | 0.68 | 19.5 | 16.42 | 10.88 | 19.0
5–15 | 5870 | 23.53 | 23.4 | 9.4 | 0.18 | 0.0 | 14.7 | 11.65 | 8.2 | 13.1
10–20 | 13087 | 21.10 | 20.6 | 9.3 | -0.13 | -0.0 | 15.3 | 12.29 | 8.5 | 13.3
15–25 | 13656 | 18.66 | 18.1 | 8.6 | 0.26 | -0.0 | 16.0 | 13.90 | 9.4 | 15.6
20–30 | 13074 | 17.21 | 16.4 | 8.6 | 1.40 | 0.4 | 17.1 | 16.48 | 11.2 | 19.0
25–35 | 13905 | 17.74 | 16.2 | 10.2 | 2.74 | 1.3 | 19.7 | 18.04 | 12.2 | 20.6
30–40 | 10815 | 21.37 | 18.6 | 13.5 | 5.33 | 3.0 | 22.8 | 19.34 | 13.4 | 21.0
35–45 | 5272 | 29.41 | 26.7 | 18.0 | 8.27 | 4.8 | 27.4 | 22.41 | 15.8 | 22.9
40–50 | 1607 | 42.37 | 41.0 | 20.8 | 8.75 | 6.4 | 34.9 | 28.45 | 19.9 | 29.0
45–55 | 329 | 55.69 | 53.9 | 19.4 | 6.81 | 6.3 | 37.8 | 36.50 | 25.9 | 38.9
## 4 Ensemble average flow
We now examine the flow pattern associated with rapid tangential and curvature
acceleration of tropical cyclones. For this, storm-relative ensemble average
fields are constructed using ERA-interim reanalysis over the period 1980–2016.
We use the method outlined as follows.
* All tropical cyclone track locations are binned into 10°-wide latitude strips
(e.g., 10–20°N, 20–30°N, 30–40°N). Instances where storms are classified as
_non-tropical_ (cf. section 1) are excluded. For brevity, only results for
the 30–40°N bin are discussed here. A total of 3515 track points are
identified for this latitude bin. Note that a particular tropical cyclone
could appear more than once in a latitude bin at different times.
* The tangential accelerations in each bin are separated into two categories:
rapid acceleration and rapid deceleration. The rapid acceleration composite is
calculated using all instances of acceleration exceeding a threshold: i.e.,
$a\geq a_{\tau}$, where $\tau$ refers to a specified quantile of the
acceleration distribution within the latitude bin. Similarly, the rapid
tangential deceleration composite is based on all instances where $a\leq
a_{\tau}$. We tried a variety of thresholds for $\tau$ (e.g., $0.80-0.95$ for
rapid tangential acceleration, and $0.05-0.20$ for rapid tangential
deceleration). Our conclusions for the composites are not sensitive to the
exact choice of the threshold values as long as they are sufficiently far from
the median.
* We also use the same method to create categories of curvature accelerations.
For this, since we only have positive values, we interpret these two quantile-
based categories as rapid acceleration and near-zero acceleration
respectively.
* For each category, we compute an ensemble average composite field of the
geopotential at selected isobaric levels (a schematic sketch of the selection
and grid-shifting steps follows this list). The composites are calculated
after shifting the grids such that the centers of all storms were coincident.
The centroid position of storms was used for the composite storm center, and
the corresponding time was denoted as Day-0. Lag-composites were created by
shifting the dates backward and forward relative to Day-0.
* For anomaly fields, we subtract the long-term synoptic climatology from the
total field. The climatology is calculated for each day of the year by
averaging data for that day over the years 1980-2015. This is followed by a
7-day running mean smoother. To account for diurnal fluctuations, the daily
climatology is calculated for each available synoptic hour in the ERA-interim
data (00, 06, 12, and 18 UTC).
* For brevity, we only show the results for $\tau=0.9$ for rapid acceleration
and $\tau=0.1$ for rapid deceleration. For reference, within the 30–40°N latitude
range, $\tau=0.9$ corresponds to $a=32$ km/hr day-1 and $\tau=0.1$
corresponds to $a=-18$ km/hr day-1. For curvature acceleration, they are,
respectively, 48 and 32 km/hr day-1. The sample size of each composite was 352
($\approx 10\%$ of the total number of track points in this latitude range).
These correspond to 196 and 168 unique storms for rapid acceleration and rapid
deceleration respectively.
* Statistical significance of the anomaly fields shown in this section is evaluated
by comparing them against 1000 composites created by randomly drawing 352
dates for each composite from the period July–October, 1980–2015. A two-tailed
significance is evaluated at the 95% confidence level with the null hypothesis
being that the anomalies could have resulted from a random draw.
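The selection and grid-shifting steps referenced in the list above can be summarized schematically (a simplified numpy sketch with our own variable names, not the processing code used for the paper; np.roll stands in for the storm-centering shift and assumes the shifted region of interest stays clear of the wrap-around edges):

```python
import numpy as np

def rapid_event_indices(accel, tau=0.9, deceleration=False):
    """Track-point indices whose tangential acceleration lies beyond the
    tau-quantile threshold (or below the (1 - tau)-quantile for the
    rapid-deceleration category)."""
    if deceleration:
        return np.where(accel <= np.quantile(accel, 1.0 - tau))[0]
    return np.where(accel >= np.quantile(accel, tau))[0]

def storm_relative_composite(grids, irow, jcol, ic, jc):
    """Average 2-D anomaly fields after shifting each grid so that the storm
    center (irow[k], jcol[k]) lands on a common reference point (ic, jc)."""
    shifted = [np.roll(np.roll(g, ic - i, axis=0), jc - j, axis=1)
               for g, i, j in zip(grids, irow, jcol)]
    return np.mean(shifted, axis=0)
```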
### 4.1 Tangential Acceleration
#### 4.1.1 Day 0 (reference day) composite
Figure 3: Storm-relative composite average geopotential heights (thick orange
lines) and anomalies (color shaded) for all TCs located in the latitude bin
30–40°N over the Atlantic. The composite fields are shown for three levels –
300 hPa, 500 hPa and 850 hPa. In each panel, the composite 1000 hPa anomalous
geopotential is shown using thin black contours. All anomalies are defined
relative to a long-term synoptic climatology. The contour intervals are: 12
dam, 6 dam and 3 dam for the three levels respectively. The shading interval
in dam for the 300 hPa anomaly fields is shown in the label-bar. It is half of
the value for the other two levels. The left column is for rapid tangential
acceleration and the right column is for rapid tangential deceleration.
Figure 3 shows storm-centered composite geopotential heights (thick orange
lines) and their anomalies (color shaded) for all Atlantic tropical cyclones
located within 30–40°N. Two categories of tangential acceleration are shown:
rapid acceleration (left column) and rapid deceleration (right column). The
fields are shown at three levels – 300 hPa, 500 hPa and 850 hPa. In each
panel, the anomalous 1000 hPa geopotential is shown using thinner black
contours. It highlights the composite tropical cyclone and the surface
development within the extratropical stormtrack.
The ensemble average for rapid tangential acceleration (left column of Fig. 3)
shows the composite tropical cyclone interacting with a well-defined
extratropical wavepacket. The tropical cyclone is straddled by an upstream
trough and a downstream ridge. At 500 hPa, the geopotential anomalies of the
tropical cyclone and the upstream trough are close and, consequently, appear
to be connected. This yields a negative tilt in the horizontal and indicates
the onset of cyclonic wrap-up of the trough around the tropical cyclone. The
1000-hPa geopotential anomaly field is dominated by the composite tropical
cyclone. It also shows the relatively weaker near-surface cyclones and
anticyclones of the extratropical stormtrack. The entire wavepacket shows
upshear tilt of geopotential anomalies with height, indicating baroclinic
growth. This arrangement of the tropical cyclone and the extratropical
wavepacket is consistent with the synoptic-scale flow that is typically
associated with extratropical transition (e.g., Bosart and Lackmann, 1995;
Klein et al., 2000; McTaggart-Cowan et al., 2003; Riemer et al., 2008; Riemer
and Jones, 2014; Keller et al., 2019). At this point, all storms in the
ensemble were still classified as _tropical_. Thus, we interpret this
composite as pre-extratropical transition completion state.
Figure 4: Storm-relative average 500-hPa geopotential heights (thick orange
lines) and anomalies (color shaded) for all TCs located in the latitude bin
30-40oN over the Atlantic. The fields are shown for lags Day-2 to Day+2. In
each panel, the composite 1000 hPa anomalous geopotential is shown using thin
black contours. All anomalies are defined relative to a long-term synoptic
climatology. The contour interval is 6 dam and shading interval in dam is
shown in the label-bar. The plus symbol shows the location of the composite TC
at Day 0 and the hurricane symbol shows the approximate location at each lags.
The left column is for rapid tangential acceleration and the right column is
for rapid tangential deceleration
The ensemble average for rapid tangential deceleration cases (right column of
Fig. 3) shows an entirely different synoptic-scale pattern. The extratropical
wavepacket is substantially poleward, with a ridge immediately north of the
composite tropical cyclone. The geopotential anomalies of the extratropical
wavepacket and the composite tropical cyclone appear to be distinct at all
three levels, with no evidence of merger. The prominent synoptic structure is
the cyclone-anticyclone dipole formed by the tropical cyclone and the
extratropical ridge.
#### 4.1.2 Lag Composites
To get a sense of the temporal evolution of the entire system, we show lag
composites for Day-2 to Day+2 in Fig. 4. As in the previous figure, the two
categories of acceleration are arranged in the respective columns. The rows
now show 500-hPa geopotential height (thick contours) and anomalies (color
shaded). In each panel, the corresponding 1000-hPa geopotential height
anomalies are shown by thin black contours.
The ensemble average for rapid tangential acceleration (left column of Fig. 4)
shows a tropical cyclone moving rapidly towards an extratropical wavepacket.
At day-2, the tropical cyclone circulation is relatively symmetric as depicted
by the contours of 1000 hPa geopotential anomalies. The downstream
extratropical ridge is prominent, but the upstream trough is much weaker at
this time. On Day-1, the entire extratropical wavepacket has amplified and the
500 hPa geopotential anomalies of the tropical cyclone and a developing
upstream trough have merged. This process continues through Day 0. By Day+1,
the composite storm has moved further poleward and eastward and is now located
between the upper-level upstream trough and downstream ridge in a position
that is optimal for further baroclinic development. The 1000 hPa geopotential
field is now asymmetric with a characteristic signal of a cold front.
Figure 5: Track of the composite tropical cyclone (blue) and the downstream
500-hPa extratropical ridge (orange) from Day-2 to Day +2 for (a) rapid
tangential acceleration, and (b) rapid tangential deceleration. The composites
are based on all TC tracks locations within 30–40oN. Day 0 is the reference
day for the composites in Fig. 4
The picture that is evident from these 500-hPa composite fields is that, over
the course of the 4 days, downstream ridge-trough couplet amplifies while
simultaneously propagating eastward. The upstream trough cyclonically wraps
around the tropical cyclone and the two have merged by Day 1. The geopotential
gradient poleward of the storm is also enhanced, indicating a strengthening
jet streak. These features are consistent with the process of extratropical
transition (e.g., Keller et al., 2019). The poleward moving tropical cyclone
may either interact with an existing wavepacket or perturb the extratropical
flow and excite a Rossby wavepacket that disperses energy downstream (e.g.,
Riemer and Jones, 2014). The outflow of the tropical cyclone is a source of
low potential vorticity (PV) air that further reinforces the downstream ridge
(e.g., Riemer et al., 2008).
To further illustrate the interaction, the tracks of the tropical cyclone and
the 500-hPa ridge of the extratropical wavepacket are presented in Fig. 5a. It
can be clearly seen that the tropical cyclone merges with the extratropical
stormtrack. Furthermore, the 500 hPa ridge has rapidly moved downstream during
the 4 day period, indicating a very progressive pattern. The eastward phase
speed of the extratropical wavepacket, as inferred from the track of the
ridge, is $\approx 7$ ms-1. The tropical cyclone speed, averaged over the same
4-day period, is $\approx 6$ ms-1. The close correspondence between the two
and the merger of the tracks further supports the notion that the synoptic-
scale evolution during rapid acceleration cases is consistent with the
canonical pattern associated with extratropical transition.
On the other hand, during rapid deceleration (right column Fig. 4), the
composite tropical cyclone remains equatorward of the extratropical wavepacket
and maintains a nearly symmetric structure throughout the period. The
arrangement of the tropical cyclone and the extratropical ridge is akin to a
vortex dipole. The extratropical wavepacket is not as progressive as in the
rapid acceleration case. This is seen clearly from the tracks of the tropical
cyclone and the ridge (Fig. 5b). The phase speed of the extratropical
wavepacket is $\approx 3$ ms-1 while the tropical cyclone speed is $\approx
1.5$ ms-1. The phasing of the tropical cyclone and the extratropical
wavepacket has led to the formation of a cyclone-anticyclone vortex dipole. We
return to this point and relate it to similar findings in Riboldi et al.
(2019) in a later section.
### 4.2 Curvature Acceleration
Figure 6: As in Fig. 3, but for rapid and near-zero curvature acceleration
#### 4.2.1 Day 0 (reference day) composite
As in the previous section, Figure 6 shows storm-centered ensemble averages,
but this time for the two categories of curvature acceleration. The composite
for rapid acceleration (left column) shows a tropical cyclone that is
primarily interacting with an extratropical ridge that is poleward and
downstream of it. The upstream trough in the extratropics is weaker and
farther westward as compared to the rapid tangential acceleration composite
(Fig. 6a). Furthermore, instead of the upstream trough wrapping cyclonically,
in this case, we see the downstream ridge wrapping anticyclonically around the
tropical cyclone. This is similar to the composite 500-hPa fields that were
based on recurving tropical cyclones as shown in Fig. 5 of Aiyyer (2015).
Thus, in an ensemble average sense, rapid curvature acceleration appears to
mark the point of the recurvature of tropical cyclones.
The composite for near-zero curvature acceleration (right column) is quite
similar to the composite for rapid tangential deceleration (Fig. 6d–f). The
extratropical wavepacket is poleward and the tropical cyclone-ridge system
appears as a vortex dipole.
Figure 7: As in Fig. 4, except for rapid (left column) and near-zero (right
column) curvature acceleration.
#### 4.2.2 Lag Composites
The temporal evolution of the entire system for the two categories of
curvature acceleration is shown in Fig. 7. For rapid curvature acceleration,
we see a tropical cyclone that is moving poleward towards an extratropical
ridge. During the subsequent days, the ridge moves eastward initially and
begins to wrap around the tropical cyclone. This arrangement promotes the
recurving of the tropical cyclone. By Day+2, the anticyclonic wrapping and
thinning has resulted in a significantly weaker ridge as compared to a few
days prior. For the near-zero curvature cases ( Fig. 7f–g), the initial
movement of the tropical cyclone is also directly poleward towards the
extratropical ridge. However, in this case, the ridge remains poleward of the
tropical cyclone. There is also significantly less anticyclonic wrapping of
ridge. The tropical cyclone-ridge system takes the form of a cyclonic-
anticyclonic vortex pair similar to the rapid tangential deceleration
composite.
The tracks in Fig. 8 clearly show how the tropical and extratropical systems
propagate. For rapid curvature acceleration (Fig. 8a), the tropical cyclone
track shows a recurving tropical cyclone. The track of the ridge confirms the
initial eastward motion, followed by a poleward shift after parts of it wrap
around the tropical cyclone as noted from Fig. 7a-e. By Day+2, we do not
observe a merger of the tracks that happens in the case of rapid tangential
acceleration (Fig. 5a).
The tracks for near-zero curvature acceleration (Fig. 8a) are somewhat similar
to the rapid tangential deceleration (Fig. 5b). The key point here is that,
although the tropical cyclone moves poleward, tropical cyclone-ridge system
acts like a vortex dipole and is nearly stationary in the zonal direction.
This arrangement of the tropical cyclone and the extratropical wavepacket is
similar to the composite fields in Fig. 10 of Riboldi et al. (2019), where
they show upper-level potential vorticity (PV), 850-hPa potential temperature
and sea-level pressure. The difference is that their composite was conditioned
on the acceleration of the upstream trough for recurving western Pacific
typhoons. It is, however, not surprising that we can recover a similar pattern
when we condition our composites on the basis of tropical cyclone
acceleration. Since the the extratropical wavepacket and the tropical cyclone
are actively interacting, they influence each other’s motion. Riboldi et al.
(2019) referred to this as a phase-locking of the upstream trough and the
tropical cyclone while we have viewed this as a phase-lock between the ridge
and the tropical cyclone. The two are not mutually exclusive since the trough
and the ridge are part of the same wavepacket.
Figure 8: Track of the composite tropical cyclone (blue) and the downstream
500-hPa extratropical ridge (orange) from Day-2 to Day +2 for (a) rapid
curvature acceleration, and (b) near-zero curvature acceleration. The
composites are based on all TC tracks locations within 30–40oN. Day 0 is the
reference day for the composites in Fig. 7
## 5 Extratropical Transition
In the previous section, we showed that the composite synoptic-scale flow
associated with rapid tangential acceleration resembles a pattern that is
favorable for extratropical transition. However, this does not imply that all
storms in the composite underwent extratropical transition. Some tropical
cyclones may begin the process of extratropical transition but dissipate
before its completion (e.g., Kofron et al., 2010). We now consider tropical
cyclone motion from a different perspective by considering only those storms
that completed the transformation from being tropical to extratropical. Hart
et al. (2006) found that the time taken for extratropical transition
completion can vary considerably. For the storms that they examined, this
ranged from 12–168 hours. To get a sense of the temporal evolution relative to
extratropical transition completion, we examine composite tropical cyclone
speed and acceleration as a function of time. For this, we only considered
those Atlantic storms during 1966–2019 that were classified as _tropical_ at
some time and subsequently underwent extratropical transition. Of the 689
candidate storms that passed the three-day threshold, 18 storms were never
classified as _tropical_. Of the remaining 671 storms, 274 were eventually
classified as _extratropical_. This yields a climatological extratropical
transition fraction of 41%. However, in the data record, a few instances exist
where a storm was flagged as _extratropical_ earlier than _tropical_. If we
remove these instances, the extratropical transition fraction slightly reduces
to $\approx 38\%$. These estimates are lower than the fraction of 44% during
1979–2017 in Bieli et al. (2019), and 46% during 1950-1993 in Hart and Evans
(2001). The mean and median latitude of extratropical transition completion in
our data set were, respectively, 40.5∘N and 41.5∘ N. This is consistent with
Hart and Evans (2001) who found that the highest frequency of extratropical
transition in the Atlantic occurs between the latitudes of 35oN–45oN.
Figure 9: Composite speed and accelerations relative to time of (a)
Extratropical transition; and (b) Recurvature. A single pass of 5-point
running average was applied to the speed and tangential acceleration curves.
Two passes of the same filter were applied to the curvature acceleration.
Fig. 9a shows the composite accelerations and speed relative to the time of
extratropical transition. Hour 0 is defined as the first instance in the
IBTrACS where the storm nature is designated as extratropical transition. We
interpret this as the nearest time after extratropical transition has been
completed. In an ensemble-averaged sense, the forward speed of transitioning
tropical storms is seen to reach its peak around the time of extratropical
transition completion. The tangential acceleration peaks about 18 hours prior
to that. The curvature acceleration appears to steadily increase up to the
time of extratropical transition and stabilizes thereafter. The point here is
that the peak tangential acceleration of tropical cyclones precedes
extratropical transition completion. The rapid increase in the speed prior to
extratropical transition completion time is a direct outcome of the
interaction with the extratropical baroclinic wavepacket.
## 6 Recurvature
In the previous section, we found that the composite synoptic-scale flow
associated with rapid curvature acceleration closely matches the pattern
associated with recurving of tropical cyclones (Aiyyer, 2015). To further
explore this connection, Fig. 9b shows the acceleration and speed composite
timeseries relative to recurvature. We follow the method described in Aiyyer
(2015) to determine the location of recurvature. A total of 653 recurvature
points were found for Atlantic tropical storms over the period 1966–2019. Note
that a given storm could have more than one instance of recurvature. Fig 9b
confirms that, in an ensemble average sense, the time of recurvature is
associated with the highest curvature acceleration. Furthermore, it is also
associated with the lowest forward speed and a period of rapid increase in
tangential acceleration.
## 7 Trends
We first examine the trends in the annual-mean translation speed to place our
results within the context of recent studies of tropical cyclone motion. As
noted in the introduction, Kossin (2018) found a decreasing trend in annual-
mean tropical cyclone speed during 1949–2018 over most of the globe. That
study considered all storms in the IBTrACS dataset as long as they survived at
least three days. We revisit this for the Atlantic and test the sensitivity of
the trend when we exclude non-tropical systems. The rationale for this was
discussed earlier in section 2. Figure 10 shows the annual-mean speed of
tropical cyclones for two categories: All storms (grey) and storms excluding
NR and extratropical transition designations (orange). Panel (a) shows this
for the entire Atlantic and Panel (b) for the 20–40oN band.
Figure 10: Annual-mean speed and linear trend for (a) The entire Atlantic; and (b) 20–50oN latitude band. The grey curve is for all storms in the IBTRaCS dataset while the orange curve excludes instances when the storm was classified as ET or NR. Table 2: Trends in Speed (km hr-1 year-1). _All storms_ refers to all instances of a system recorded in the IBTraCs. ET refers to storm nature designated as extratropical, while NR refers to instances when the storm nature was not recorded. | 1966–2019 | 1949–2019 | 1949–2016
---|---|---|---
| LR | MK-TS | LR | MK-TS | LR | MK-TS
| Trend | p-value | Trend | p-value | Trend | p-value | Trend | p-value | Trend | p-value | Trend | p-value
Atlantic (All storms)
Full basin | 0.029 | 0.19 | 0.028 | 0.15 | -0.016 | 0.28 | -0.016 | 0.32 | -0.019 | 0.24 | -0.021 | 0.25
20–50 | 0.041 | 0.15 | 0.035 | 0.11 | -0.011 | 0.56 | -0.019 | 0.29 | -0.012 | 0.55 | -0.023 | 0.26
Atlantic (Excluding ET,NR)
Full basin | -0.007 | 0.70 | -0.008 | 0.62 | -0.004 | 0.77 | -0.007 | 0.48 | -0.002 | 0.90 | -0.006 | 0.63
20–50oN | 0.008 | 0.76 | 0.002 | 0.93 | <0.001 | 1.00 | -0.009 | 0.52 | 0.005 | 0.79 | -0.007 | 0.72
Table 2 contains the trends calculated using linear regression and the Thiel-
Sen estimate. We also include the trends for 1949–2016 to compare our
calculations with Kossin (2018). For 1949–2016, when we consider all tropical
cyclones, the linear regression and Thiel-Sen estimates of the trends in
annual-mean speed are $-0.019$ and $-0.021$ km hr-1 year -1. These are
practically identical to the value of $-0.02$ km hr-1 year -1 reported by
Kossin (2018). However, the trend for the satellite era over the entire basin
switches to a positive value of $\approx 0.028$ km hr-1 year -1. The
sensitivity to the choice of the years is consistent with Lanzante (2019) who
showed that the negative trend of the annual-mean speed over the longer period
1949–2016 was reduced in magnitude by accounting for the change points such as
those associated with the advent of satellite-based weather monitoring. When
we remove non-tropical data, the trends for various periods and regions are
generally lower. Furthermore, none of the trends shown in Table 2 can be
deemed significant if we use a p-value of 0.05 as the cut-off. We return to
this point in the following section.
Figure 11: Cumulative distributions of tangential acceleration (km hr-1 day-1
) for July-October, 1966–2019 for (a) The entire Atlantic; (b) 0–20oN; and (c)
20–50oN Figure 12: Cumulative distributions of curvature acceleration (km
hr-1 day-1 ) for July-October, 1966–2019 for (a) The entire Atlantic; (b)
0–20oN; and (c) 20–50oN
### 7.1 Quantile regression
Figure 11 compares the cumulative probability distribution (CDF) of tangential
accelerations over three 10-year periods: 1966–1975, 1988-1997, and 2010–2019.
The data covers the peak hurricane season: July–October (JASO). Three Atlantic
regions are shown - the entire basin, 0–20oN, and 20–50oN. In all three CDFs,
the lower and upper-tail probability of the distribution appear to show a
shift towards the median. The direction of the shift indicates a reduction in
the frequency of both rapid tangential acceleration ($a\geq 15$ km/hr day-1 )
and rapid tangential deceleration ($a\leq-15$ km/hr day-1 ) from the earlier
to recent decades. This is most pronounced over the 20–50oN latitude band. The
CDFs for curvature acceleration (bottom row of Fig. 12) show a similar shift
towards less frequent rapid acceleration. The CDF over the entire year shows
similar shifts. When we consider a smaller subset of months, we find that the
shifts are more pronounced when we omit October and November (not shown).
In the preceding sections, we showed that rapid acceleration or deceleration
of tropical cyclones are typically associated with interactions with the
extratropical baroclinic stormtrack. The attendant synoptic-scale pattern are
distinct in the phasing of the tropical cyclone and the extratropical
wavepacket. It is of interest to determine if the shift in CDFs of
acceleration (Figs. 11, 12) are related to long-term trends. The motivation
being that it can inform us about potential changes in the nature of tropical
cyclone-baroclinic stormtrack interaction. Given that we are interested in the
long-term trends of rapid acceleration and deceleration – i.e., the tails of
the probability distribution – we use quantile regressions (QR) as developed
by Koenker and Bassett (1978). QR is a useful tool to model the behavior of
the entire probability distribution and has been used in diverse fields and
applications (e.g., Koenker and Hallock, 2001). In atmospheric sciences, QR
has been applied to examine trends in extreme precipitation and temperature
(e.g., Koenker and Schorfheide, 1994; Barbosa et al., 2011; Gao and Franzke,
2017; Lausier and Jain, 2018; Passow and Donner, 2019).
The standard form of the simple linear regression model for a response
variable $Y$ in terms of its predictor $X$ is written as:
$\mu\\{Y|X\\}=\beta_{o}+\beta_{1}X$ (5)
where $\mu\\{Y|X\\}$ is the conditional mean of $Y$ given a variable $X$, and
$\beta_{o}$ and $\beta_{1}$ are, respectively, the intercept and the slope.
This linear model fits the mean of a response variable under the assumption of
constant variance over the entire range of the predictor. However, when the
data are heteroscedastic and there is interest in characterizing the entire
distribution, not just the mean, QR is more appropriate and insightful.
The standard form of QR is written as follows (e.g., Lausier and Jain, 2018):
$Y(\tau|X)=\beta_{o}^{(\tau)}+\beta_{1}^{(\tau)}X+\epsilon^{(\tau)}$ (6)
where $Y(\tau|X)$ denotes the conditional estimate of Y at the quantile $\tau$
for a given X. By definition, $0<\tau<1$. In our case, Y is the timeseries
vector of either acceleration or speed, and X is the vector comprised of the
dates of the individual storm positions. Here, $\beta_{o}^{(\tau)}$ and
$\beta_{1}^{(\tau)}$ denote the intercept and slope, while the
$\epsilon^{(\tau)}$ denotes the error. As noted in previous studies cited
above, QR does not make any assumption about the distribution of parameters
and is known to be relatively robust to outliers. To determine the trends, we
fit the quantile regression model for a range of quantiles between 0.05 and
0.95. Instead of calculating annual averages to get one value of acceleration
or speed per year, we retain all of the individual values for the tropical
cyclones. The corresponding time for each data point is assigned as a
fractional year.
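A sketch of the quantile-regression fit of eq. (6) using the statsmodels package (a tooling assumption on our part; the paper does not name its software):

```python
import numpy as np
import statsmodels.api as sm

def quantile_trends(frac_year, accel, quantiles=(0.05, 0.1, 0.5, 0.9, 0.95)):
    """Fit eq. (6) quantile by quantile: acceleration as a linear function of
    time (fractional years).  Returns the slope beta_1 for each quantile,
    i.e., the trend in km/hr day-1 per year."""
    X = sm.add_constant(np.asarray(frac_year, dtype=float))
    y = np.asarray(accel, dtype=float)
    return {q: sm.QuantReg(y, X).fit(q=q).params[1] for q in quantiles}
```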
Figure 13: Quantile regressions of tangential acceleration for regions and
months shown on the panels: The left columns show the acceleration (light blue
circles) for all TCs (NR and ET excluded) as a function of time. The dotted
magenta lines show the linear fits for quantiles ranging from 0.05 to 0.95. The
red line shows the ordinary least squares fit for the mean. The right columns
show the estimates of the slope (i.e., the trend in km/hr day-1 year-1) for
each quantile, along with the 95% confidence band (dotted magenta). Also shown
is the ordinary least squares estimate of the trend (red line) and its 95%
confidence band for each quantile.
Figure 13 shows the results of QR for tangential acceleration. The panels on
the left show the acceleration (light blue circles) at individual track
locations from 1966-2019 (ET and NR excluded). The dashed magenta lines are
the linear fits for the quantiles ranging from 0.5 to 0.95. The panels on the
right show the slope (trend; km/hr day-1 year-1) of the linear line as a
function of the quantile. These figures include the best fit using ordinary
least squares (OLS; red line) that models the mean of the distribution. Also
included are the associated 95% confidence bounds. The top row includes data
from all months over the entire Atlantic. The middle row is for 20–50oN over
the peak tropical cyclone months (July-October), and the bottom row restricts
the data to August-September. The latter two illustrate some of the
sensitivity to the choice of domain and months of analysis. They also focus
our attention on the region where tropical cyclones are most likely to
interact with extratropical systems. The corresponding numerical values are
shown in Table 3. Recall from Table 1 that the median value ($\tau=0.5$) of
tangential acceleration is a small positive number. As such, $\tau<0.5$ is
indicative of deceleration, while $\tau\geq 0.5$ is indicative of
acceleration.
Table 3: Quantile trends of tangential acceleration over 1966–2019 (km hr-1 day-1 year-1) for the months and regions labeled below. Trends of magnitude below $0.01$ are not reported.

| Entire Atlantic | 20–50°N | 20–50°N
---|---|---
| All Months | Jul-Oct | Aug-Sep
$\tau$ | Trend | Change | p | 95% conf. | Trend | Change | p | 95% conf. | Trend | Change | p | 95% conf.
OLS | 0.01 | – | 0.01 | 0.00, 0.03 | 0.01 | 10 | 0.50 | -0.01, 0.02 | – | 4 | 0.83 | -0.01, 0.02
0.05 | 0.12 | 23 | <0.01 | 0.09, 0.16 | 0.07 | 13 | <0.01 | 0.02, 0.11 | 0.14 | 25 | <0.01 | 0.09, 0.20
0.10 | 0.06 | 18 | <0.01 | 0.04, 0.08 | 0.03 | 10 | 0.03 | 0.00, 0.07 | 0.07 | 19 | <0.01 | 0.03, 0.10
0.15 | 0.03 | 12 | <0.01 | 0.01, 0.05 | – | 0 | 0.98 | -0.02, 0.02 | 0.01 | 5 | 0.35 | -0.02, 0.04
0.20 | 0.02 | 9 | 0.02 | <0.01, 0.03 | – | -2 | 0.76 | -0.02, 0.02 | – | 2 | 0.70 | -0.02, 0.03
0.30 | 0.01 | 8 | 0.19 | -0.00, 0.02 | — | 4 | 0.60 | -0.01, 0.02 | 0.01 | 6 | 0.53 | -0.01, 0.02
0.50 | 0.03 | – | <0.01 | 0.02, 0.03 | 0.03 | – | <0.01 | 0.02, 0.04 | 0.01 | – | 0.04 | 0.00, 0.03
0.70 | 0.02 | 18 | <0.01 | 0.01, 0.03 | 0.03 | 19 | <0.01 | 0.01, 0.05 | – | 2 | 0.75 | -0.02, 0.02
0.80 | 0.02 | 9 | 0.01 | 0.00, 0.04 | 0.02 | 9 | 0.04 | 0.00, 0.05 | -0.01 | -5 | 0.40 | -0.04, 0.02
0.85 | – | -2 | 0.73 | -0.02, 0.02 | 0.01 | 1 | 0.64 | -0.02, 0.04 | -0.04 | -11 | 0.05 | -0.07, -0.00
0.90 | -0.02 | -5 | 0.13 | -0.04, 0.01 | -0.03 | -7 | 0.09 | -0.07, 0.00 | -0.07 | -15 | <0.01 | -0.12, -0.02
0.95 | -0.11 | -18 | <0.01 | -0.15, -0.07 | -0.14 | -18 | <0.01 | -0.20, -0.07 | -0.17 | -23 | <0.01 | -0.25, -0.10
From Fig. 13 and Table 3, we note that the OLS estimate of the trend is weakly
positive when data from all months over the entire Atlantic are considered.
However, the OLS estimate is not statistically significant for the 20–50°N
region. As expected, the regression quantiles show a fuller picture. The
slopes of the individual quantiles provide estimates of the trends in the
specific portions of the probability distribution. The key finding here is
that the magnitudes of both rapid deceleration and rapid acceleration show a
statistically significant decreasing trend. This is reflected in the positive
slope for $\tau\leq 0.15$ and the negative slope for $\tau\geq 0.85$ (Table 3
and left columns of Fig. 13). The trends for $\tau<0.5$ are generally
positive, implying a reduction in the magnitude of tangential deceleration at
all quantile thresholds. On the other hand, the positive slopes seen for
$0.5<\tau<0.8$ suggest an increasing trend in the values of tangential
acceleration that are closer to the median. This shift towards less-extreme
acceleration is noted in all three regional categories, albeit with varying
degrees of statistical significance. The trends in these quantiles are,
however, weaker than those of the tails. From the right column of Fig. 13, it
is clear that the trends in the tails of the distribution are significant and
fall outside the 95% confidence bounds of the OLS estimate of the trend.
Figure 14: As in Fig. 13, but for curvature acceleration.

Table 4: Quantile trends of curvature acceleration over 1966–2019 (km hr-1 day-1 year-1). Trends of magnitude below $0.01$ are not reported. In each row, the first group of four value columns is for the entire Atlantic (all months), the second for 20–50°N (Jul–Oct), and the third for 20–50°N (Aug–Sep).

$\tau$ | Trend | Change | p | 95% conf. | Trend | Change | p | 95% conf. | Trend | Change | p | 95% conf.
---|---|---|---|---|---|---|---|---|---|---|---|---
OLS | -0.02 | -5 | <0.01 | -0.03, -0.01 | -0.04 | -11 | <0.01 | -0.06, -0.02 | -0.06 | -15 | <0.01 | -0.08, -0.04
0.05 | 0.01 | 42 | <0.01 | 0.01, 0.01 | 0.01 | 40 | <0.01 | 0.00, 0.01 | 0.01 | 27 | 0.04 | 0.00, 0.01
0.10 | 0.01 | 32 | <0.01 | 0.01, 0.02 | 0.01 | 25 | <0.01 | 0.00, 0.02 | 0.01 | 16 | 0.05 | 0.00, 0.01
0.15 | 0.01 | 23 | <0.01 | 0.01, 0.02 | 0.01 | 14 | 0.01 | 0.00, 0.01 | <0.01 | 7 | 0.25 | -0.00, 0.01
0.20 | 0.01 | 17 | <0.01 | 0.01, 0.02 | 0.01 | 13 | <0.01 | 0.00, 0.02 | 0.01 | 7 | 0.22 | -0.00, 0.01
0.30 | 0.02 | 15 | <0.01 | 0.01, 0.02 | 0.01 | 10 | 0.01 | 0.00, 0.02 | 0.01 | 7 | 0.11 | -0.00, 0.02
0.50 | 0.02 | 8 | <0.01 | 0.01, 0.03 | <0.01 | 1 | 0.55 | -0.01, 0.02 | -0.01 | -4 | 0.37 | -0.02, 0.01
0.70 | – | 0 | 0.72 | -0.01, 0.02 | -0.03 | -7 | 0.01 | -0.05, -0.01 | -0.04 | -11 | <0.01 | -0.06, -0.02
0.80 | -0.03 | -7 | <0.01 | -0.05, -0.01 | -0.06 | -11 | <0.01 | -0.08, -0.03 | -0.08 | -15 | <0.01 | -0.11, -0.05
0.85 | -0.06 | -11 | <0.01 | -0.08, -0.03 | -0.09 | -14 | <0.01 | -0.13, -0.05 | -0.12 | -20 | <0.01 | -0.16, -0.08
0.90 | -0.09 | -13 | <0.01 | -0.12, -0.06 | -0.12 | -15 | <0.01 | -0.16, -0.07 | -0.18 | -22 | <0.01 | -0.23, -0.12
0.95 | -0.15 | -15 | <0.01 | -0.20, -0.09 | -0.22 | -20 | <0.01 | -0.31, -0.13 | -0.33 | -29 | <0.01 | -0.43, -0.22
The QR results for curvature acceleration (Fig. 14 and Table 4) show
statistically significant, weak positive trends for $0<\tau<0.5$. However, the
trends switch to increasingly negative values above the median. As in the case
of tangential acceleration, the decelerating trends in the upper quantiles of
the distribution ($\tau\geq 0.8$) are statistically significant and fall
outside the 95% confidence bounds of the OLS estimate of the trend in the
mean.
Figure 15: As in Fig. 13, but for translation speed.

Table 5: Quantile trends of translation speed over 1966–2019 (km hr-1 year-1). Trends of magnitude below $0.01$ are not reported. In each row, the first group of four value columns is for the entire Atlantic (all months), the second for 20–50°N (Jul–Oct), and the third for 20–50°N (Aug–Sep).

$\tau$ | Trend | Change | p | 95% conf. | Trend | Change | p | 95% conf. | Trend | Change | p | 95% conf.
---|---|---|---|---|---|---|---|---|---|---|---|---
OLS | -0.01 | -2 | 0.05 | -0.01, 0.00 | -0.01 | -3 | 0.04 | -0.02, -0.00 | -0.04 | -9 | <0.01 | -0.05, -0.02
0.05 | – | -13 | <0.01 | -0.02, -0.01 | -0.02 | -17 | <0.01 | -0.03, -0.01 | -0.02 | -17 | <0.01 | -0.03, -0.01
0.10 | – | -6 | 0.02 | -0.01, -0.00 | -0.01 | -8 | 0.02 | -0.02, -0.00 | -0.01 | -9 | 0.04 | -0.02, -0.00
0.15 | – | -2 | 0.46 | -0.01, 0.00 | – | -1 | 0.98 | -0.01, 0.01 | – | -1 | 0.93 | -0.01, 0.01
0.20 | – | -2 | 0.52 | -0.01, 0.01 | – | -2 | 0.66 | -0.01, 0.01 | – | -2 | 0.73 | -0.01, 0.01
0.30 | – | -1 | 0.61 | -0.01, 0.01 | -0.01 | -3 | 0.25 | -0.02, 0.00 | – | -7 | 0.01 | -0.03, -0.00
0.50 | – | -1 | 0.99 | -0.01, 0.01 | -0.01 | -5 | 0.01 | -0.02, -0.00 | -0.04 | -12 | <0.01 | -0.05, -0.03
0.70 | – | -2 | 0.34 | -0.01, 0.01 | -0.01 | -3 | 0.18 | -0.02, 0.00 | -0.04 | -10 | <0.01 | -0.06, -0.03
0.80 | -0.01 | -2 | 0.32 | -0.02, 0.01 | -0.02 | -5 | 0.02 | -0.04, -0.00 | -0.07 | -13 | <0.01 | -0.09, -0.05
0.85 | -0.03 | -5 | <0.01 | -0.04, -0.01 | -0.03 | -5 | 0.01 | -0.05, -0.01 | -0.09 | -15 | <0.01 | -0.12, -0.07
0.90 | -0.03 | -4 | <0.01 | -0.04, -0.01 | – | – | 0.96 | -0.03, 0.03 | -0.09 | -14 | <0.01 | -0.13, -0.06
0.95 | 0.02 | 2 | 0.20 | -0.01, 0.05 | -0.01 | -2 | 0.70 | -0.07, 0.04 | -0.08 | -10 | <0.01 | -0.14, -0.03
For completeness, we also show the corresponding QR results for translation
speed (Fig. 15 and Table 5). The OLS estimate of the trend is nearly the same
value as it was for the annual-mean speeds. However, it is now statistically
significant. The change in the p-value reflects the much larger sample size,
since the data are not averaged annually. When we consider the entire
Atlantic, estimated trends from QR are nearly the same as the OLS trend, with
the exception of $\tau=0.95$. However, when we consider the subsets of the
data for 20–50°N, there are some notable differences. In particular, for
August–September, the trends in the fastest translation speeds are even more
negative.
Tables 3, 4 and 5 also include the percent changes defined using the first and
last values of the linear fit over the period 1966–2019. If we subjectively
assume that a statistically significant change of magnitude of at least 10%
over the past 54 years can be deemed _robust_, then the key outcome of the QR
is the following: the trends in the tails of the distributions of the
accelerations and speeds are most robust for the August–September months. For
the extratropical region (20–50°N), both rapid tangential acceleration and
deceleration show robust reductions. This indicates a general narrowing of the
tangential acceleration distribution over time. The curvature acceleration
shows an increase for the lower quantiles and a reduction for the upper
quantiles. This suggests a shift in the curvature acceleration towards smaller
values, consistent with the results for tangential acceleration. Forward speed
shows mostly decreasing trends for both the upper and lower tails, indicating
that extremes in speed are diminishing over time.
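Since the percent changes above are defined from the endpoints of each linear fit, the underlying arithmetic is simple; a minimal sketch follows, where the slope and intercept values are hypothetical rather than fitted.

```python
# Sketch of the percent-change computation used in Tables 3-5: the change
# between the first and last values of the linear fit over 1966-2019.
# The slope and intercept below are hypothetical, not fitted values.
def percent_change(slope, intercept, t0=1966, t1=2019):
    first = intercept + slope * t0   # fitted value in 1966
    last = intercept + slope * t1    # fitted value in 2019
    return 100.0 * (last - first) / abs(first)

print(percent_change(slope=-0.11, intercept=230.0))
```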
## 8 Discussion
Ensemble-average composites of atmospheric fields show distinct synoptic-scale
patterns when they are categorized on the basis of the acceleration of
tropical cyclones. The composites for rapid tangential acceleration outside
the deep tropics depict a synoptic-scale pattern that is consistent with the
extratropical transition of tropical cyclones. This is unsurprising since it
is generally known that tropical storms speed up during extratropical
transition. The novel aspect here is that we have recovered the signal of
extratropical transition from the perspective of acceleration. The composites
show a poleward moving tropical cyclone that is straddled by an upstream
trough and a downstream ridge. Subsequently, the tropical cyclone merges with
the extratropical wavepacket ahead of the trough in an arrangement that is
conducive for further baroclinic development. Features commonly associated
with extratropical transition such as the downstream ridge-building,
amplification of the upper-level jet streak and downstream development can be
clearly seen in the composite maps (Fig. 3 and Fig. 4a-e). The composites for
rapid curvature acceleration also show the impact of the phasing of the
tropical cyclone and the extratropical wavepacket. For this category, we
recover a synoptic-scale pattern that is similar to the one obtained in
composites based on the recurvature of tropical cyclones (Aiyyer, 2015).
In contrast, the composite fields for rapid tangential and curvature
deceleration show a tropical cyclone that approaches an extratropical ridge.
The upstream trough remains at a distance and the tropical cyclone does not
merge with the extratropical wavepacket, but instead remains equatorward of it
over the following few days. The tropical cyclone and the extratropical ridge
– at least locally – can be viewed as a vortex dipole. The combined system
remains relatively stationary compared to the progressive pattern for rapid
acceleration. This arrangement qualitatively resembles a dipole block — an
important mode of persistent anomalies in the atmosphere (e.g., McWilliams,
1980; Pelly and Hoskins, 2003). The canonical dipole block is depicted as a
vortex pair comprised of a warm anticyclone and a low-latitude cut-off cyclone
(e.g., Haines and Marshall, 1987; McTaggart-Cowan et al., 2006). The dynamics
of blocked flows are rich and the subject of a variety of theories that are
far from settled (e.g. Woollings and Barriopedro, 2018). In the present case,
the slowly propagating cyclone-anticyclone pair is likely an outcome of a
fortuitous phasing of the tropical cyclone and the extratropical ridge.
Our study takes a complementary view of the extratropical interaction
described in Riboldi et al. (2019). In particular, we note the close
correspondence between their composite sea-level pressure and potential
vorticity composites (their Fig. 10) for decelerating troughs and our
geopotential composites for rapidly decelerating tropical cyclones (right
columns of Fig. 3 and Fig. 6). The vortex dipole can be seen in all three
figures. Riboldi et al. (2019) conditioned their composites on the basis of
the acceleration of the upstream trough interacting with recurving typhoons in
the western North Pacific. We recover the same pattern when we condition the
composites based on rapidly decelerating tropical cyclones in the North
Atlantic.
The interaction between the tropical cyclone and the extratropical wavepacket
that leads to the deceleration of the entire system can be viewed from a PV
perspective. As noted by Riboldi et al. (2019), both adiabatic and diabatic
pathways are active in this interaction. In the former, the induced flow from
the cyclonic vortex (tropical cyclone) will be westward within the poleward
anticyclonic vortex (ridge). The induced flow from the ridge will also be
westward at the location of the tropical cyclone. The combined effect will be
a mutual westward advection, and thus reduced eastward motion in the earth-
relative frame. The latter, diabatic pathway relies on the amplification of
the ridge through the action of precipitating convection in the vicinity of
the tropical cyclone. The negative vertical gradient of diabatic heating in
upper levels of the tropical cyclone implies that its anticyclonic outflow is
a source of low PV air (e.g., Wu and Emanuel, 1993). The advection of this low
PV air by the irrotational component of the outflow and its role in ridge-
building has been extensively documented (e.g., Atallah et al., 2007; Riemer
et al., 2008; Grams et al., 2013; Archambault et al., 2015). Riboldi et al.
(2019) also showed that the ridge is more amplified for rapidly decelerating
troughs as compared to accelerating troughs. They implicated stronger latent
heating and irrotational outflow for this difference. Our composites also show
a stronger ridge for rapid tropical cyclone deceleration as compared to rapid
acceleration. This can be noted by comparing the left and right columns of
Figs. 4 and 7. The amplification of the ridge also results in the slow-down of
the upstream trough, and as shown by Riboldi et al. (2019) yields frequent
downstream atmospheric blocking events.
The tracks of TCs in the vicinity of extratropical wavetrains and subtropical
ridges are sensitive to the existence of bifurcation points (e.g., Grams et
al., 2013; Riemer and Jones, 2014), and small shifts in positions can yield
different outcomes for motion and extratropical transition. Bifurcation points
also exist in the case of tropical-cyclone cut-off low interactions (Pantillon
et al., 2016). Our acceleration-based composites have further highlighted the
impact of the phasing of the tropical cyclone and the extratropical wavepacket
in mediating the interactions between them.
While there is some storm-to-storm variability, in an average sense the
tangential acceleration peaks 18 hours prior to completion of extratropical
transition. Interestingly, the forward speed peaks around the time of
completion of extratropical transition. Curvature acceleration increases
rapidly prior to extratropical transition and remains nearly steady after this
time. Composite time series also show that the curvature acceleration peaks at
track recurvature while the forward speed is nearly at its minimum. The
tangential acceleration shows a sharp, steady increase around recurvature.
This is consistent with the observations that extratropical transition is
typically completed within 2–4 days of recurvature. A related question is the
following: Is rapid tangential acceleration a sign of imminent
extratropical transition? We found that $\approx 65\%$ of the storms that
comprised the composites for rapid acceleration (Left column: Fig. 3)
completed the transition within 3 days of the reference time (day 0). This is
substantially higher than the climatological fraction of $\approx 50\%$ for
extratropical transition over the entire basin and storm lifetime. On the
other hand, only $\approx 39\%$ of the storms that comprised the composites
for rapid deceleration (Right column: Fig. 3) completed the transition within
a similar time range. This fraction is substantially lower than the
climatological fraction. It is, however, consistent with the observation made
earlier that the synoptic-scale pattern for rapid deceleration promotes
recurvature rather than extratropical transition. Furthermore, not all
recurving storms become extratropical (Evans et al., 2017).
The composites show that rapid acceleration and deceleration can be viewed as
a proxy for distinct types of tropical cyclone-extratropical interactions. It
is of interest, therefore, to ascertain whether any trends in tropical cyclone
accelerations can be found in the track data. For this, we use quantile
regression to examine the linear trends over the entire distribution. The
tails of acceleration and translation speed show statistically significant
trends. The trends for the extratropical regions of the Atlantic (20–50°N) are
most robust for August-September. Both rapid tangential acceleration and
deceleration have reduced over the past five decades. The trends for curvature
acceleration show increases for the lower quantiles and decreases for the
upper quantiles. The forward speed, particularly values above the median, also
shows robust decreases for the August-September months. This supports the
general conclusion of Kossin (2018) that tropical cyclones have slowed down in
the past few decades.
We have not explored the physical basis for the trends discussed above. We,
however, speculate that they are indicative of systematic changes in the
interaction between tropical cyclones and extratropical waves. It is, however,
unclear from our preliminary examination whether the trends reflect changes in
the frequency or some measure of the _strength_ of the interactions. We also
recognize that the notion of strength of interaction needs a firm and
objective definition. Nevertheless, there are some recent findings related to
the atmospheric general circulation that may be relevant to this point. First,
there is growing evidence for a poleward shift in the storm track of
extratropical baroclinic eddies. As noted by Tamarin and Kaspi (2017) and
references therein, this shift has been found in both reanalysis data and
climate model simulations. Separately, Coumou et al. (2015) found a robust
decrease of the zonal wind and the amplitude of synoptic-scale Rossby waves in
the ERA-interim reanalysis over the Northern hemisphere during the months
June-August. We hypothesize that the poleward shift of the extratropical waves
and their weakening could potentially account for the acceleration trends
reported here. This, however, needs to be examined further if any robust
conclusion regarding attribution to climate change is to be made.
When we separate tropical cyclones on the basis of their acceleration and
consider ensemble average composites of atmospheric fields (e.g.,
geopotential), we get two broad sets of synoptic-scale patterns. The composite
for rapid tangential acceleration shows a poleward moving tropical cyclone
straddled by an upstream trough and a downstream ridge. The subsequent merger
of the tropical cyclone and the developing extratropical wavepacket is
consistent with the process of extratropical transition. The composite for
rapid curvature acceleration shows a prominent downstream ridge that promotes
recurvature. On the other hand, the synoptic-scale pattern for rapid
tangential as well as curvature deceleration takes the form of a cyclone-
anticyclone dipole with a ridge directly poleward of the tropical cyclone. We
note the qualitative resemblance of this arrangement with the canonical dipole
block. Some of our findings from the perspective of tropical cyclone
acceleration closely match those of Riboldi et al. (2019), who conditioned
their diagnostics on the basis of the acceleration of the upstream trough and
recurving western North Pacific typhoons.
Accelerations and speed show robust trends in the tails of their distribution.
For the extratropical region of the Atlantic (20–50°N), and particularly for
the months August-September, peak acceleration/deceleration, as well as speeds
of tropical cyclones, have reduced over the past 5 decades. The reduction in
the tails of the speed distribution provides complementary evidence for a
general slowing trend of tropical cyclones reported by Kossin (2018). We also
suggest that the robust reduction in the tails of the acceleration
distribution is indicative of a systematic change in the interaction of
tropical cyclones with extratropical baroclinic waves. We have not, however,
examined the underlying processes. We speculate that poleward shift and
decreasing amplitude of extratropical Rossby waves found in other studies may
account for the acceleration trends. Detailed modeling and observational
studies are needed to better understand the source of these trends.
## 9 Data and code availability
The reanalysis data used here can be obtained from the European Center for
Medium-Range Weather Forecasts archived at
https://www.ecmwf.int/en/forecasts/datasets/reanalysis-datasets/era-interim/.
The tracks of Atlantic tropical cyclones were obtained from the IBTrACS
database available from https://www.ncdc.noaa.gov/ibtracs/. The authors will
provide the computer code developed for this paper to anyone who is interested.
## 10 Author contributions
AA wrote the computer code for all analysis and visualization, and wrote the
text of the paper. TW derived the expression for the radius of curvature, and
assisted with editing the text and interpretation of the results.
## 11 Competing interests
The authors declare that they have no conflict of interest.
## 12 Financial Support
This work was supported by NSF through award #1433763.
## References
* Aiyyer (2015) Aiyyer, A.: Recurving western North Pacific tropical cyclones and midlatitude predictability, Geophysical Research Letters, 42, 7799–7807, 10.1002/2015GL065082, 2015.
* Archambault et al. (2015) Archambault, H. M., Keyser, D., Bosart, L. F., Davis, C. A., and Cordeira, J. M.: A Composite Perspective of the Extratropical Flow Response to Recurving Western North Pacific Tropical Cyclones, Monthly Weather Review, 143, 1122–1141, 10.1175/MWR-D-14-00270.1, 2015.
* Atallah et al. (2007) Atallah, E., Bosart, L. F., and Aiyyer, A. R.: Precipitation Distribution Associated with Landfalling Tropical Cyclones over the Eastern United States, Mon. Wea. Rev., 135, 2185, 10.1175/MWR3382.1, 2007.
* Barbosa et al. (2011) Barbosa, S. M., Scotto, M. G., and Alonso, A. M.: Summarising changes in air temperature over central europe by quantile regression and clustering, Nat. Hazards Earth Syst. Sci., 11, 3227–3233, 10.5194/nhess-11-3227-2011, 2011.
* Bieli et al. (2019) Bieli, M., Camargo, S. J., Sobel, A. H., Evans, J. L., and Hall, T.: A Global Climatology of Extratropical Transition. Part I: Characteristics across Basins, Journal of Climate, 32, 3557–3582, 10.1175/JCLI-D-17-0518.1, 2019.
* Bosart and Lackmann (1995) Bosart, L. F. and Lackmann, G. M.: Postlandfall Tropical Cyclone Reintensification in a Weakly Baroclinic Environment: A Case Study of Hurricane David (September 1979), Monthly Weather Review, 123, 3268, 10.1175/1520-0493(1995)123<3268:PTCRIA>2.0.CO;2, 1995.
* Chan (2005) Chan, J. C.: The Physics of Tropical Cyclone Motion, Ann. Rev. of Fluid Mech., 37, 99–128, 10.1146/annurev.fluid.37.061903.175702, URL https://doi.org/10.1146/annurev.fluid.37.061903.175702, 2005.
* Chan and Williams (1987) Chan, J. C. L. and Williams, R. T.: Analytical and Numerical Studies of the Beta-Effect in Tropical Cyclone Motion. Part I: Zero Mean Flow., Journal of Atmospheric Sciences, 44, 1257–1265, 10.1175/1520-0469(1987)044<1257:AANSOT>2.0.CO;2, 1987.
* Coumou et al. (2015) Coumou, D., Lehmann, J., and Beckmann, J.: The weakening summer circulation in the Northern Hemisphere mid-latitudes, Science, 348, 324–327, 10.1126/science.1261768, 2015.
* Dee et al. (2011) Dee, D. P., Uppala, S. M., Simmons, A. J., Berrisford, P., Poli, P., Kobayashi, S., Andrae, U., Balmaseda, M. A., Balsamo, G., Bauer, P., Bechtold, P., Beljaars, A. C. M., van de Berg, L., Bidlot, J., Bormann, N., Delsol, C., Dragani, R., Fuentes, M., Geer, A. J., Haimberger, L., Healy, S. B., Hersbach, H., Hólm, E. V., Isaksen, L., Kållberg, P., Köhler, M., Matricardi, M., McNally, A. P., Monge-Sanz, B. M., Morcrette, J.-J., Park, B.-K., Peubey, C., de Rosnay, P., Tavolato, C., Thépaut, J.-N., and Vitart, F.: The ERA-Interim reanalysis: configuration and performance of the data assimilation system, Quart. J. Roy. Meteor. Soc., 137, 553–597, 10.1002/qj.828, 2011.
* Emanuel (2018) Emanuel, K.: 100 Years of Progress in Tropical Cyclone Research, vol. 59, pp. 15.1–15.68, 10.1175/AMSMONOGRAPHS-D-18-0016.1, 2018.
* Evans et al. (2017) Evans, C., Wood, K. M., Aberson, S. D., Archambault, H. M., Milrad, S. M., Bosart, L. F., Corbosiero, K. L., Davis, C. A., Dias Pinto, J. R., Doyle, J., Fogarty, C., Galarneau, Thomas J., J., Grams, C. M., Griffin, K. S., Gyakum, J., Hart, R. E., Kitabatake, N., Lentink, H. S., McTaggart-Cowan, R., Perrie, W., Quinting, J. F. D., Reynolds, C. A., Riemer, M., Ritchie, E. A., Sun, Y., and Zhang, F.: The Extratropical Transition of Tropical Cyclones. Part I: Cyclone Evolution and Direct Impacts, Monthly Weather Review, 145, 4317–4344, 10.1175/MWR-D-17-0027.1, 2017.
* Gao and Franzke (2017) Gao, M. and Franzke, C. L. E.: Quantile Regression-Based Spatiotemporal Analysis of Extreme Temperature Change in China, J. Climate, 30, 9897–9914, 10.1175/JCLI-D-17-0356.1, 2017.
* Grams et al. (2013) Grams, C. M., Jones, S. C., and Davis, C. A.: The impact of Typhoon Jangmi (2008) on the midlatitude flow. Part II: Downstream evolution, Quarterly Journal of the Royal Meteorological Society, 139, 2165–2180, 10.1002/qj.2119, 2013.
* Haines and Marshall (1987) Haines, K. and Marshall, J.: Eddy-forced coherent structures as a prototype of atmospheric blocking, Quarterly Journal of the Royal Meteorological Society, 113, 681–704, 10.1002/qj.49711347613, 1987.
* Hart (2003) Hart, R. E.: A Cyclone Phase Space Derived from Thermal Wind and Thermal Asymmetry, Monthly Weather Review, 131, 585, 10.1175/1520-0493(2003)131<0585:ACPSDF>2.0.CO;2, 2003.
* Hart and Evans (2001) Hart, R. E. and Evans, J. L.: A Climatology of the Extratropical Transition of Atlantic Tropical Cyclones., Journal of Climate, 14, 546–564, 10.1175/1520-0442(2001)014<0546:ACOTET>2.0.CO;2, 2001.
* Hart et al. (2006) Hart, R. E., Evans, J. L., and Evans, C.: Synoptic Composites of the Extratropical Transition Life Cycle of North Atlantic Tropical Cyclones: Factors Determining Posttransition Evolution, Monthly Weather Review, 134, 553, 10.1175/MWR3082.1, 2006.
* Hodanish and Gray (1993) Hodanish, S. and Gray, W. M.: An Observational Analysis of Tropical Cyclone Recurvature, Monthly Weather Review, 121, 2665, 10.1175/1520-0493(1993)121<2665:AOAOTC>2.0.CO;2, 1993.
* Jones et al. (2003) Jones, S. C., Harr, P. A., Abraham, J., Bosart, L. F., Bowyer, P. J., Evans, J. L., Hanley, D. E., Hanstrum, B. N., Hart, R. E., Lalaurette, F., Sinclair, M. R., Smith, R. K., and Thorncroft, C.: The Extratropical Transition of Tropical Cyclones: Forecast Challenges, Current Understanding, and Future Directions, Weather and Forecasting, 18, 1052–1092, 10.1175/1520-0434(2003)018<1052:TETOTC>2.0.CO;2, 2003.
* Keller et al. (2019) Keller, J. H., Grams, C. M., Riemer, M., Archambault, H. M., Bosart, L., Doyle, J. D., Evans, J. L., Galarneau, Thomas J., J., Griffin, K., Harr, P. A., Kitabatake, N., McTaggart-Cowan, R., Pantillon, F., Quinting, J. F., Reynolds, C. A., Ritchie, E. A., Torn, R. D., and Zhang, F.: The Extratropical Transition of Tropical Cyclones. Part II: Interaction with the Midlatitude Flow, Downstream Impacts, and Implications for Predictability, Monthly Weather Review, 147, 1077–1106, 10.1175/MWR-D-17-0329.1, 2019.
* Klein et al. (2000) Klein, P. M., Harr, P. A., and Elsberry, R. L.: Extratropical Transition of Western North Pacific Tropical Cyclones: An Overview and Conceptual Model of the Transformation Stage, Weather and Forecasting, 15, 373–396, 10.1175/1520-0434(2000)015<0373:ETOWNP>2.0.CO;2, 2000.
* Knapp et al. (2010) Knapp, K. R., Kruk, M. C., Levinson, D. H., Diamond, H. J., and Neumann, C. J.: The International Best Track Archive for Climate Stewardship (IBTrACS), Bulletin of the American Meteorological Society, 91, 363–376, 10.1175/2009BAMS2755.1, 2010.
* Koenker and Bassett (1978) Koenker, R. and Bassett, G.: Regression quantiles, Econometrica, 46, 33–50, doi: 10.2307/1913643, 1978.
* Koenker and Hallock (2001) Koenker, R. and Hallock, K.: Quantile Regression, Journal of Economic Perspectives, 15, 143–156, 2001.
* Koenker and Schorfheide (1994) Koenker, R. and Schorfheide, F.: Quantile spline models for global temperature change, Climatic Change, 28, 395–404, 10.1007/BF01104081, 1994.
* Kofron et al. (2010) Kofron, D. E., Ritchie, E. A., and Tyo, J. S.: Determination of a Consistent Time for the Extratropical Transition of Tropical Cyclones. Part II: Potential Vorticity Metrics, Monthly Weather Review, 138, 4344–4361, 10.1175/2010MWR3181.1, 2010.
* Kossin (2018) Kossin, J. P.: A global slowdown of tropical-cyclone translation speed, nature, 558, 104–107, 10.1038/s41586-018-0158-3, 2018.
* Landsea (2007) Landsea, C. W.: Counting Atlantic Tropical Cyclones Back to 1900, EOS Transactions, 88, 197–202, 10.1029/2007EO180001, 2007.
* Lanzante (2019) Lanzante, J. R.: Uncertainties in tropical-cyclone translation speed, nature, 570, E6–E15, 10.1038/s41586-019-1223-2, 2019.
* Lausier and Jain (2018) Lausier, A. M. and Jain, S.: Overlooked Trends in Observed Global Annual Precipitation Reveal Underestimated Risks, Scientific Reports, 8, 16746, 10.1038/s41598-018-34993-5, 2018.
* McTaggart-Cowan et al. (2003) McTaggart-Cowan, R., Gyakum, J. R., and Yau, M. K.: The Influence of the Downstream State on Extratropical Transition: Hurricane Earl (1998) Case Study, Monthly Weather Review, 131, 1910, 10.1175//2589.1, 2003.
* McTaggart-Cowan et al. (2006) McTaggart-Cowan, R., Bosart, L. F., Davis, C. A., Atallah, E. H., Gyakum, J. R., and Emanuel, K. A.: Analysis of Hurricane Catarina (2004), Monthly Weather Review, 134, 3029, 10.1175/MWR3330.1, 2006.
* McWilliams (1980) McWilliams, J. C.: An application of equivalent modons to atmospheric blocking, Dynamics of Atmospheres and Oceans, 5, 43–66, 10.1016/0377-0265(80)90010-X, 1980.
* Moon et al. (2019) Moon, I.-J., Kim, S.-H., and Chan, J. C. L.: Climate change and tropical cyclone trend, nature, 570, E3–E5, 10.1038/s41586-019-1222-3, 2019.
* Nie et al. (2018) Nie, J., Sobel, A. H., Shaevitz, D. A., and Wang, S.: Dynamic amplification of extreme precipitation sensitivity, Proceedings of the National Academy of Science, 115, 9467–9472, 10.1073/pnas.1800357115, 2018.
* Pantillon et al. (2016) Pantillon, F., Chaboureau, J.-P., and Richard, E.: Vortex-vortex interaction between Hurricane Nadine (2012) and an Atlantic cut-off dropping the predictability over the Mediterranean, Quarterly Journal of the Royal Meteorological Society, 142, 419–432, 10.1002/qj.2635, 2016.
* Passow and Donner (2019) Passow, C. and Donner, R. V.: A Rigorous Statistical Assessment of Recent Trends in Intensity of Heavy Precipitation Over Germany, Frontiers in Environmental Science, 7, 143, 10.3389/fenvs.2019.00143, URL https://www.frontiersin.org/article/10.3389/fenvs.2019.00143, 2019\.
* Pelly and Hoskins (2003) Pelly, J. L. and Hoskins, B. J.: A New Perspective on Blocking., Journal of Atmospheric Sciences, 60, 743–755, 10.1175/1520-0469(2003)060<0743:ANPOB>2.0.CO;2, 2003.
* Riboldi et al. (2019) Riboldi, J., Grams, C. M., Riemer, M., and Archambault, H. M.: A Phase Locking Perspective on Rossby Wave Amplification and Atmospheric Blocking Downstream of Recurving Western North Pacific Tropical Cyclones, Monthly Weather Review, 147, 567–589, 10.1175/MWR-D-18-0271.1, 2019.
* Riemer and Jones (2014) Riemer, M. and Jones, S. C.: Interaction of a tropical cyclone with a high-amplitude, midlatitude wave pattern: Waviness analysis, trough deformation and track bifurcation, Quarterly Journal of the Royal Meteorological Society, 140, 1362–1376, 10.1002/qj.2221, 2014.
* Riemer et al. (2008) Riemer, M., Jones, S. C., and Davis, C. A.: The impact of extratropical transition on the downstream flow: An idealized modelling study with a straight jet, Quarterly Journal of the Royal Meteorological Society, 134, 69–91, 10.1002/qj.189, 2008.
* Schreck et al. (2014) Schreck, Carl J., I., Knapp, K. R., and Kossin, J. P.: The Impact of Best Track Discrepancies on Global Tropical Cyclone Climatologies using IBTrACS, Monthly Weather Review, 142, 3881–3899, 10.1175/MWR-D-14-00021.1, 2014.
* Tamarin and Kaspi (2017) Tamarin, T. and Kaspi, Y.: The poleward shift of storm tracks under global warming: A Lagrangian perspective, Geophysical Research Letters, 44, 10,666–10,674, 10.1002/2017GL073633, 2017.
* Tang and Emanuel (2010) Tang, B. and Emanuel, K.: Midlevel Ventilation’s Constraint on Tropical Cyclone Intensity, Journal of Atmospheric Sciences, 67, 1817–1830, 10.1175/2010JAS3318.1, 2010.
* Vecchi and Knutson (2008) Vecchi, G. A. and Knutson, T. R.: On Estimates of Historical North Atlantic Tropical Cyclone Activity*, Journal of Climate, 21, 3580, 10.1175/2008JCLI2178.1, 2008.
* Woollings and Barriopedro (2018) Woollings, T. and Barriopedro, D.: Blocking and its Response to Climate Change, Curr. Clim. Change Rep., 4, 287–300, 10.1007/s40641-018-0108-z, 2018.
* Wu and Emanuel (1993) Wu, C.-C. and Emanuel, K. A.: Interaction of a Baroclinic Vortex with Background Shear: Application to Hurricane Movement, Journal of Atmospheric Sciences, 50, 62–76, 10.1175/1520-0469(1993)050<0062:IOABVW>2.0.CO;2, 1993.
Data-driven discovery of multiscale chemical reactions
governed by the law of mass action
Juntao Huang [Department of Mathematics, Michigan State University, East Lansing, MI 48824, USA. E-mail<EMAIL_ADDRESS> Yizhou Zhou [School of Mathematical Sciences, Peking University, Beijing, China. E-mail<EMAIL_ADDRESS>Wen-An Yong [Department of Mathematical Sciences, Tsinghua University, Beijing, China. E-mail<EMAIL_ADDRESS>
In this paper, we propose a data-driven method to discover multiscale chemical reactions governed by the law of mass action.
First, we use a single matrix to represent the stoichiometric coefficients for both the reactants and products in a system without catalysis reactions.
The negative entries in the matrix denote the stoichiometric coefficients for the reactants and the positive ones for the products.
Second, we find that the conventional optimization methods usually get stuck in local minima and cannot find the true solution in learning multiscale chemical reactions. To overcome this difficulty, we propose a partial-parameters-freezing (PPF) technique to progressively determine the network parameters by using the fact that the stoichiometric coefficients are integers. With such a technique, the dimension of the searching space is gradually reduced in the training process and the global minima can be eventually obtained. Several numerical experiments including the classical Michaelis–Menten kinetics, the hydrogen oxidation reactions and the simplified GRI-3.0 mechanism verify the good performance of our algorithm in learning the multiscale chemical reactions. The code is available at <https://github.com/JuntaoHuang/multiscale-chemical-reaction>.
Key Words:
Chemical Reactions; Multiscale; Machine Learning; Nonlinear Regression; Ordinary Differential Equations
§ INTRODUCTION
Chemical reactions are fundamental in many scientific fields including biology, material science, chemical engineering and so on.
To identify the reactions from experimental data, the traditional methods are mainly based on empirical laws and expert knowledge [10]. Recently, thanks to the rapid development of machine learning [19] and data-driven modeling [34, 22, 6, 3, 29, 20, 30, 14], it is desirable to develop a data-driven method of discovering the underlying chemical reactions from massive data automatically.
Consider a reaction system with $n_s$ species participating in $n_r$ reactions:
\begin{equation*}
\nu_{i1}'\mathcal{S}_1 + \nu_{i2}'\mathcal{S}_2 + \cdots + \nu_{in_s}'\mathcal{S}_{n_s} \ch{<=>[$k_{if}$][$k_{ir}$]}
\nu_{i1}''\mathcal{S}_1 + \nu_{i2}''\mathcal{S}_2 + \cdots + \nu_{in_s}''\mathcal{S}_{n_s}
\end{equation*}
for $i = 1,2, \cdots, n_r$. Here $\mathcal{S}_k$ is the chemical symbol for the $k$-th species, the nonnegative integers $\nu_{ik}'$ and $\nu_{ik}''$ are the stoichiometric coefficients of the $k$-th species in the $i$-th reaction, and $k_{if}$ and $k_{ir}$ are the forward and reverse reaction rates of the $i$-th reaction. The reaction is reversible if both $k_{if}$ and $k_{ir}$ are positive. Strictly speaking, all elementary chemical reactions are reversible due to microscopic reversibility. However, in real applications, some of the rate constants are negligible, thus the corresponding reactions can be omitted and the retained ones can be considered as irreversible.
Denote by $u_k = u_k(t)$ the concentration of the $k$-th species at time $t$ for $k=1,2,\cdots,n_s$.
According to the law of mass action [38], the evolution of $u_k$ obeys the ordinary differential equations (ODEs) [27]
\begin{equation}\label{eq:reaction-ODE}
\frac{du_k}{dt} = \sum_{i=1}^{n_r}(\nu_{ik}''-\nu_{ik}')\left(k_{if}\prod_{j=1}^{n_s}u_j^{\nu_{ij}'}-k_{ir}\prod_{j=1}^{n_s}u_j^{\nu_{ij}''} \right),
\end{equation}
for $k=1,2,\cdots,n_s$.
Given the concentration time series data $\{u_k(t_{n}), \ k=1,\cdots,n_s, \ n=1,\cdots,N \}$, our goal is to learn the stoichiometric coefficients $\nu_{ik}'$, $\nu_{ik}''$ and reaction rates $k_{if}$ and $k_{ir}$.
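For readers who want to generate such data, the following minimal sketch evaluates the right-hand side of the ODE system above and integrates it with SciPy. The two-reaction chain and the rate values are illustrative assumptions for this sketch, not one of the mechanisms studied below.

```python
# Sketch: evaluate the mass-action right-hand side of the ODE above and
# integrate it to produce concentration data. The 2-reaction system below
# is a hypothetical example.
import numpy as np
from scipy.integrate import solve_ivp

V_prime = np.array([[1, 0, 0],    # nu'_{ik}: reactant coefficients
                    [0, 1, 0]])
V_dprime = np.array([[0, 1, 0],   # nu''_{ik}: product coefficients
                     [0, 0, 1]])
k_f = np.array([1.0, 1e3])        # forward rates k_{if}
k_r = np.array([1.0, 1e3])        # reverse rates k_{ir}

def rhs(t, u):
    fwd = k_f * np.prod(u ** V_prime, axis=1)    # k_{if} * prod_j u_j^{nu'_{ij}}
    rev = k_r * np.prod(u ** V_dprime, axis=1)   # k_{ir} * prod_j u_j^{nu''_{ij}}
    return (V_dprime - V_prime).T @ (fwd - rev)  # du_k/dt by the law of mass action

# Stiff integrator, matching the multiscale character of the rates
sol = solve_ivp(rhs, (0.0, 1.0), y0=[1.0, 0.0, 0.0],
                method="Radau", t_eval=np.linspace(0.0, 1.0, 11))
print(sol.y[:, -1])
```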
In the literature there are already some works on this topic.
In [5], the authors applied linear regressions to infer the chemical reactions, under the assumption that the reactions result from at most bimolecular collisions and the total reaction order is not greater than two. In [40], linear regression was utilized with an L1 objective, which transforms the problem into a mixed-integer linear program (MILP). This approach suffers from the same restrictive assumptions as in [5]. In [18], the authors presented an approach to infer the stoichiometric subspace of a chemical reaction network from steady-state concentration data profiles, which is then cast as a series of MILPs.
In [25], some chemically reasonable requirements were considered such as the mass conservation and the principle of detailed balance.
The deep neural networks (DNNs) were applied to extract the chemical reaction rate information in [31, 32], but the weights are difficult to interpret physically.
In [13], the authors adapted the sparse identification of nonlinear dynamics (SINDy) method [4, 9] to the present problem. However, the approach relies on expert knowledge, which precludes the application in a new reaction system with unknown reaction pathways.
Within the framework of SINDy, other works include [2, 1, 24]. To improve the performance of SINDy, two additional steps in the identification, least-squares regression and stepwise regression, based on traditional statistical methods, were introduced in [2]. In [1], SINDy was combined with DNNs to adaptively model and control the process dynamics. An implicit SINDy was proposed and applied to infer the Michaelis-Menten enzyme kinetics in [24].
Additionally, a statistical learning framework based on group-sparse regression, which leverages prior knowledge from physical principles, was proposed in [23]. For example, mass conservation is enforced in the JAK-STAT reaction pathway for signal transduction in [23].
Our work is mainly motivated by [15], where the authors proposed a Chemical Reaction Neural Network (CRNN) by exploiting the structure of the equations in (<ref>). The discovery of chemical reactions usually involves two steps: the identification of the reaction pathways (i.e., the stoichiometric coefficients) and the determination of the reaction rates. For complex reaction processes, one cannot even identify the reaction pathways and has to infer both the stoichiometric coefficients and the rate constants from data.
The work in [15] presents a neural network approach for discovering unknown reaction pathways from concentration data. The parameters in CRNN correspond to the stoichiometric coefficients and reaction rates and the network has only one hidden layer with the exponential activation functions.
Different from the CRNN in [15], we use a single matrix of order $n_r\times n_s$ to represent the stoichiometric coefficients for both the forward and reverse reactions by assuming no catalysis reactions. The negative entries in the matrix denote the stoichiometric coefficients for the reactants and the positive ones for the products.
On the other hand, the reaction rates often differ by several orders of magnitude, which causes considerable difficulty in learning multiscale chemical reactions.
To provide some insight into this difficulty, we design a nonlinear regression problem to fit a polynomial with two terms, see (<ref>) in Section <ref>. The given coefficients of the polynomial differ by several orders of magnitude and the polynomial degrees are to be determined. We find numerically that the conventional optimization algorithm usually gets stuck in local minima and cannot find the true solution. Another observation in the numerical experiment is that the learned polynomial degree of the term with the larger coefficient is close to the true solution. Inspired by this observation, we propose a partial-parameters-freezing (PPF) technique to escape from the local minima. Specifically, we perform a rounding operation on the learned polynomial degrees which are close to integers in the optimization process if the loss function does not decrease. The revised algorithm works well for this problem. Some theoretical analysis is also provided to explain the numerical phenomenon.
We then generalize the PPF technique to learn the multiscale chemical reactions. Notice that the stoichiometric coefficients are integers. In the training process, if the loss function stops decreasing, the stoichiometric coefficients which are close to integers are rounded and then frozen afterwards. With such a treatment, the stoichiometric coefficients are gradually determined, the dimension of the searching space is reduced in the training process, and eventually the global minima can be obtained. Several numerical experiments including the classical Michaelis–Menten kinetics, the hydrogen oxidation reactions and the simplified GRI-3.0 mechanism verify that our method performs much better in learning the multiscale chemical reactions.
This paper is organized as follows. In Section <ref>, we investigate a multiscale nonlinear regression problem numerically and theoretically. Our algorithm for learning the multiscale chemical reactions is presented in Section <ref>. In Section <ref>, the performance of the algorithm is validated through several numerical examples. Finally, conclusions and the outlook of future work are presented in Section <ref>.
§ MULTISCALE NONLINEAR REGRESSION PROBLEM
To provide some insights into the difficulties in learning the multiscale chemical reactions, we consider a nonlinear regression problem to fit the following function:
\begin{equation}\label{eq:regression-function}
y = f(x;\theta_1, \theta_2) = c_1 x^{\theta_1} + c_2 x^{\theta_2}.
\end{equation}
Here $c_1$ and $c_2$ are two given constants satisfying $\abs{c_1}\ll\abs{c_2}$, and $\theta_1, \theta_2$ are two integers to be determined. This simple toy model captures two key features of the multiscale chemical reactions. The first feature is that the right-hand side of the chemical reaction ODEs (<ref>) is polynomial and the stoichiometric coefficients are integers. The second one is that the multiscale chemical reactions often have reaction rates which differ by several orders of magnitude.
Given the dataset $\{(x_i, y_i): ~ i=1,\cdots,N \}$, we define the loss function to be the mean squared error (MSE):
\begin{equation}\label{eq:regression-loss}
\mathcal{L}(\theta_1, \theta_2) = \frac{1}{N}\sum_{i=1}^N(f(x_i; \theta_1, \theta_2) - y_i)^2,
\end{equation}
to estimate the parameters $\theta_1$ and $\theta_2$. Conventional optimization methods can then be used to minimize this loss and obtain estimates of $\theta_1$ and $\theta_2$.
In the numerical experiment, we take $c_1=1$ and $c_2=100$. The ground truth solutions are $\theta_1=1$ and $\theta_2=2$. The data $x_i$ for $i=1,\cdots,N$ are randomly sampled from a uniform distribution in $(0, 1)$ with the number of data $N=1000$, and $y_i=c_1x_i+ c_2x_i^2$. The Adam optimization method [17] is applied with full-batch gradient descent. The learning rate is taken to be $10^{-4}$. The initial guesses of $\theta_1$ and $\theta_2$ are randomly chosen in $(-1, 1)$.
For this toy model, we numerically find that the naive implementation gets stuck in the local minimum $(\theta_1, \theta_2) = (3.8286, 1.9745)$ and cannot find the true solution. The history of the loss function and the parameters $\theta_1$ and $\theta_2$ during training is presented in Figure <ref>, see the dashed lines.
Although the naive optimization cannot find the global minimum, we notice that $\theta_2=1.9745$ at this local minimum is close to the true solution $\theta_2=2$. Inspired by this observation, we propose a partial-parameters-freezing (PPF) technique to escape from the local minima. To be more specific, we keep track of the loss function during training. If the loss does not decrease, we check the parameters $\theta_1$ and $\theta_2$: if either is within a given threshold of its nearest integer, we round it to that integer and do not update it in the subsequent optimization.
For comparison, we also plot the history of the loss function and the parameters with the PPF technique in Figure <ref>, see the solid lines. The threshold is taken to be 0.05 in this test. The loss stops decreasing with the epoch around 7000. Then $\theta_2$ is rounded to 2 and only $\theta_1$ is updated afterwards. The true solution is eventually obtained when the epoch is around 10000.
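A minimal PyTorch sketch of the PPF loop on this toy problem is given below; the bookkeeping is simplified relative to the actual experiments, and the variable names are ours.

```python
# Minimal sketch of the PPF technique on the toy problem; lr=1e-4 and the
# rounding threshold 0.05 match the experiment described above.
import torch

c1, c2 = 1.0, 100.0
x = torch.rand(1000)
y = c1 * x + c2 * x**2                       # ground truth: theta = (1, 2)

theta = torch.empty(2).uniform_(-1, 1).requires_grad_(True)
frozen = torch.zeros(2, dtype=torch.bool)    # which exponents are fixed
opt = torch.optim.Adam([theta], lr=1e-4)
prev_loss = float("inf")

for epoch in range(200_000):
    opt.zero_grad()
    loss = ((c1 * x**theta[0] + c2 * x**theta[1] - y) ** 2).mean()
    loss.backward()
    opt.step()
    with torch.no_grad():
        if loss.item() >= prev_loss:             # loss stopped decreasing:
            near = (theta - theta.round()).abs() <= 0.05
            frozen |= near                       # freeze near-integer exponents
        theta[frozen] = theta.round()[frozen]    # snap frozen entries to integers
    prev_loss = loss.item()

print(theta.detach())
```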
Multiscale nonlinear regression problem: the history of the loss function in (<ref>) and the parameters $\theta_1$ and $\theta_2$ in the training process (left: loss vs. epoch; right: parameters $\theta_1$ and $\theta_2$ vs. epoch). Solid lines: the method with the PPF technique; dashed lines: the method without the PPF technique.
To better understand why the optimization easily gets stuck in local minima without the PPF treatment, we investigate the landscape of the loss function. In Figure <ref>, we plot the 3D surface and the contour map of the loss as a function of $(\theta_1,\theta_2)$. In Figure <ref> (a), it is observed that the loss function has several local minima in which $\theta_2$ is close to 2. Moreover, the local minimum $(\theta_1, \theta_2) = (3.8286, 1.9745)$ reached by the naive implementation is labeled in Figure <ref> (b).
Multiscale nonlinear regression problem: the landscape of the loss function in (<ref>). Left: loss function surface plot (in log scale); right: loss function contour map (in log scale), with the local minimum $(\theta_1, \theta_2) = (3.8286, 1.9745)$ labeled.
We also plot the profiles of the loss function with fixed $\theta_2=1.99$, 2 and 2.01 in Figure <ref>. It is observed that slight perturbations in $\theta_2$ have a considerable impact on the minima of the loss function. Moreover, the loss as a 1D function with fixed $\theta_2=2$ is well-behaved. This explains why our algorithm easily finds the global minimum after freezing the integer parameter $\theta_2$.
Multiscale nonlinear regression problem: loss function in (<ref>) with fixed parameters $\theta_2=1.99$, 2 and 2.01.
We mention that we also test other cases with different coefficients $c_1$ and $c_2$ satisfying $\abs{{c_2}/{c_1}}=10^3, 10^4, 10^5$ and different integers $\theta_1$ and $\theta_2$. The results are similar and thus omitted here.
We conclude this section with some theoretical analysis to explain the local minima phenomenon observed above. Taking the gradient of the loss function in (<ref>), we have
\begin{equation}\label{eq:regression-grad-loss}
\frac{\partial\mathcal{L}}{\partial\theta_j} = \frac{2c_jc_2 }{N}\sum_{i=1}^N \brac{\frac{c_1}{c_2} (x_i^{\theta_1} - x_i^{\theta_1^{\textrm{e}}}) + (x_i^{\theta_2} - x_i^{\theta_2^{\textrm{e}}})} x_i^{\theta_j} \ln x_i, \quad j=1,2.
\end{equation}
Here $\theta_i^{\textrm{e}}$ denotes the true solution of the parameter $\theta_i$ for $i=1,2$.
From the expression (<ref>), we can provide some insight into the phenomenon that, at the local minima, $\theta_2$ is close to the true solution $\theta_2^{\textrm{e}}$. To reach a local minimum, the gradient must vanish. Since $\abs{c_1/c_2}\ll 1$, the gradient in (<ref>) is close to zero mainly when $\theta_2$, rather than $\theta_1$, is close to the ground truth.
§ ALGORITHM
In this section, we present our algorithm for learning the multiscale chemical reactions. First, we use a single matrix to represent the stoichiometric coefficients for both the reactants and products. Each row of the matrix represents one reaction, where the negative entries denote the stoichiometric coefficients for the reactants and the positive ones for the products. This setup is valid for systems without catalysis reactions. In addition, we adapt the PPF technique for the multiscale nonlinear regression problem proposed in Section <ref> to learn the multiscale chemical reactions (<ref>).
We assume that the data are given in the form of the concentrations and their time derivatives at different time snapshots, $\{(u_k(t_{n}), u_k'(t_{n})), \ k=1,\cdots,n_s, \ n=1,\cdots,N \}$, and our goal is to learn the stoichiometric coefficients and the reaction rates. Realistically, often only $u_k(t_{n})$ is available, and the time derivatives $u_k'(t_{n})$ can be approximated using numerical differentiation [33, 7].
To better illustrate the algorithm, we firstly introduce some vector notations. We denote the forward and reverse reaction rates in (<ref>) by $\bm{k}_f = (k_{1f},k_{2f},...,k_{n_rf})$ and $\bm{k}_r = (k_{1r},k_{2r},...,k_{n_rr})$. The stoichiometric coefficients in (<ref>) are collected in two matrices:
\begin{equation}
\bm{V}' =
\begin{pmatrix}
\nu_{11}' & \nu_{12}' & \cdots & \nu_{1n_s}'\\
\nu_{21}' & \nu_{22}' & \cdots & \nu_{2n_s}'\\
\vdots & \vdots & \ddots & \vdots\\
\nu_{n_r1}' & \nu_{n_r2}' & \cdots & \nu_{n_rn_s}'
\end{pmatrix}
, \qquad
\bm{V}'' =
\begin{pmatrix}
\nu_{11}'' & \nu_{12}'' & \cdots & \nu_{1n_s}''\\
\nu_{21}'' & \nu_{22}'' & \cdots & \nu_{2n_s}''\\
\vdots & \vdots & \ddots & \vdots\\
\nu_{n_r1}'' & \nu_{n_r2}'' & \cdots & \nu_{n_rn_s}''
\end{pmatrix}.
\end{equation}
Assume that there are no catalysis reactions. Therefore, only one of $\nu_{ik}'$ and $\nu_{ik}''$ can be non-zero for any $(i,k)$.
In this case, the matrix $\bm{V}=(\nu_{ik}):=\bm{V}''-\bm{V}'$ satisfies
\begin{equation}
\nu_{ik}=
\begin{cases}
\nu_{ik}'', & \text{if} \quad \nu_{ik}\ge 0,\\
-\nu_{ik}', & \text{if} \quad \nu_{ik}<0.
\end{cases}
\end{equation}
According to this property, we only need to pin down the matrix $\bm{V}$. Then $\bm{V}'$ and $\bm{V}''$ can be recovered by $\nu_{ik}''=\max(0,\nu_{ik})$ and $\nu_{ik}'=-\min(0,\nu_{ik})$, respectively.
Next, we define the neural network $\mathcal{N}=\mathcal{N}(u_1,\cdots,u_{n_s}):\mathbb{R}^{n_s}\rightarrow\mathbb{R}^{n_s}$ which has the input $\bm{u}:=(u_1,\cdots,u_{n_s})$ and the parameters $\bm{l}_f = (l_{1f}, l_{2f},..., l_{n_rf})$, $\bm{l}_r = (l_{1r}, l_{2r},..., l_{n_rr})$ and $\bm{V}$:
\begin{equation*}
\mathcal{N}(u_1,\cdots,u_{n_s})_k = \sum_{i=1}^{n_r}\nu_{ik}\left(\exp({l_{if}})\prod_{j=1}^{n_s}u_j^{-\min(0,\nu_{ij})} - \exp({l_{ir}})\prod_{j=1}^{n_s}u_j^{\max(0,\nu_{ij})} \right)
\end{equation*}
for $k=1,\cdots,n_s$.
Here the parameters $l_{if}$ and $l_{ir}$ denote the logarithms of the reaction rates $k_{if}$ and $k_{ir}$ [15]. This change of variables has two advantages. The first is that the positivity of the reaction rates is guaranteed automatically. The second is that the reaction rates of multiscale chemical reactions usually differ by several orders of magnitude; slight changes of $l_{if}$ and $l_{ir}$ make $k_{if}$ and $k_{ir}$ change a lot, which could potentially make the training of the neural network more robust.
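A possible PyTorch realization of this network is sketched below. It is our paraphrase of the construction above, not the authors' released code; it evaluates the products in log space, which assumes strictly positive concentrations.

```python
# Sketch of the reaction network N: the trainable parameters are the matrix V
# and the log-rates l_f, l_r; no hidden layers are needed.
import torch

class ReactionNet(torch.nn.Module):
    def __init__(self, n_r, n_s):
        super().__init__()
        self.V = torch.nn.Parameter(torch.empty(n_r, n_s).uniform_(-0.5, 0.5))
        self.l_f = torch.nn.Parameter(torch.empty(n_r).uniform_(-0.5, 0.5))
        self.l_r = torch.nn.Parameter(torch.empty(n_r).uniform_(-0.5, 0.5))

    def forward(self, u):                     # u: (batch, n_s), assumed > 0
        nu_p = torch.clamp(-self.V, min=0.0)  # nu'  = -min(0, nu)
        nu_pp = torch.clamp(self.V, min=0.0)  # nu'' =  max(0, nu)
        log_u = torch.log(u)
        fwd = torch.exp(self.l_f + log_u @ nu_p.T)   # k_f * prod_j u_j^{nu'_ij}
        rev = torch.exp(self.l_r + log_u @ nu_pp.T)  # k_r * prod_j u_j^{nu''_ij}
        return (fwd - rev) @ self.V           # (batch, n_s): predicted du/dt
```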
The loss function is defined as the mean squared error (MSE) between the data for the time derivatives and the output of the neural network:
\begin{equation}\label{3.2}
\mathcal{L} = \frac{1}{N}\sum_{n=1}^N\sum_{k=1}^{n_s}\brac{\mathcal{N}(u_1(t_{n}),\cdots,u_{n_s}(t_{n}))_k - u_k'(t_{n})}^2 + \lambda \mathcal{L}_r.
\end{equation}
Here $\lambda \mathcal{L}_r$ is a regularization term with $\lambda>0$ the regularization constant and
\begin{equation}
\mathcal{L}_r = \sum_{i=1}^{n_r}\sum_{k=1}^{n_s} \abs{\nu_{ik}} + \sum_{i=1}^{n_r}\sum_{k=1}^{n_s} \nu_{ik}^2 + \sum_{i=1}^{n_r}(\abs{l_{if}} + \abs{l_{ir}}) + \sum_{i=1}^{n_r}(l_{if}^2 + l_{ir}^2).
\end{equation}
Here both $L_1$ and $L_2$ regularization terms are included.
This neural network works quite well for non-stiff chemical reactions. However, for stiff reactions, we observe that the optimization usually gets stuck in local minima during training and cannot find the true solution. Common techniques such as mini-batching and reducing the learning rate do not work in this situation. To address this problem, we adapt the PPF technique proposed in the previous section.
The training procedure is split into two parts. The first part is to learn the matrix $\bm{V}$. To better illustrate the algorithm, we introduce some notation. Denote the vector in the $j$-th row of $\bm{V}$ by $\bm{v}_j$ for $j=1,\cdots,n_r$. Define the distance to the nearest integer for any vector $\bm{v}\in\mathbb{R}^{n_s}$ as
\begin{equation}\label{3.4}
d_{\textrm{int}}(\bm{v}) := \norm{\bm{v} - \nint{\bm{v}}}_{\infty} = \max_{i\in\{1,\cdots,n_s\}} \abs{v_i - \nint{v_i}},
\end{equation}
where $\nint{}$ denotes the function rounding an arbitrary real number to its nearest integer; it is defined to work element-wise on vectors. We keep track of the loss function in the training process. If the loss function stops decreasing, we check whether any row of $\bm{V}$ is close to the nearest integers, i.e., $d_{\textrm{int}}(\bm{v}_j)\le\epsilon$. Here, $\epsilon>0$ is a hyperparameter and we take $\epsilon=0.05$ in all the numerical examples in Section <ref>. If the $j$-th row of $\bm{V}$ satisfies the condition $d_{\textrm{int}}(\bm{v}_j)\le\epsilon$, then we round $\bm{v}_j$ to $\nint{\bm{v}_j}$ and do not update it in the subsequent training. In addition, to help the optimization algorithm escape from local minima, we randomly reinitialize the other non-integer entries in $\bm{V}$ whenever the loss stops decreasing. After all the entries in $\bm{V}$ become integers, we freeze them and then learn the parameters $\bm{l}_f$ and $\bm{l}_r$ related to the reaction rates. We remark that the SINDy algorithms [4, 13] can also be applied to learn the reaction rates once the stoichiometric coefficients $\bm{V}$ are known. The algorithm is summarized in Algorithm <ref>.
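The rounding-and-freezing step can be sketched as follows; this is an illustrative implementation of the check described above, with our own helper names, invoked whenever the loss stops decreasing.

```python
# Sketch of the PPF bookkeeping for the matrix V: d_int measures the distance
# of a row to its nearest integer vector; near-integer rows are rounded and
# frozen, and the remaining rows are randomly reinitialized.
import torch

def d_int(v):
    """Distance to the nearest integer vector, as defined above."""
    return (v - v.round()).abs().max()

def ppf_update(V, frozen_rows, eps=0.05):
    with torch.no_grad():
        for j in range(V.shape[0]):
            if j in frozen_rows:
                continue
            if d_int(V[j]) <= eps:
                V[j] = V[j].round()       # round and freeze this reaction
                frozen_rows.add(j)
            else:                         # help escape local minima:
                V[j].uniform_(-2.0, 2.0)  # reinitialize non-integer row
    return frozen_rows
```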
Here we assume that all the reactions are reversible. However, the algorithm can also be applied to irreversible reactions without any modification. The expected result is that the learned reverse reaction rates for the irreversible reactions will be close to zero. This will be demonstrated numerically in Example <ref> in Section <ref>.
The number of reactions can be learned by repeatedly executing the algorithm with different $n_r$. The ground truth of $n_r$ can be inferred from the best one. This will be shown in the numerical examples in the next section.
In many chemical reaction systems, the rate constants usually depend on the temperature. For example, the Arrhenius law can describe such a dependence:
\begin{equation}
k = A \exp\brac{-\frac{E_a}{RT}},
\end{equation}
where $k$ is the reaction rate, $A$ is the pre-exponential factor, $E_a$ is the activation energy and $R$ is the gas constant. In this case, the unknown parameters will include the pre-exponential factor, the activation energy and the stoichiometric coefficients. Our PPF technique can be directly applied without much modification. The performance will be verified numerically in the test in the next section.
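A minimal sketch of such a temperature-dependent parameterization is given below; the parameter names and values are illustrative assumptions, not the paper's settings.

```python
# Hedged sketch: Arrhenius-law rates with log(A) and E_a as trainable
# parameters; positivity of the rates is automatic. Values are illustrative.
import torch

R = 8.314  # gas constant, J/(mol K)

log_A = torch.nn.Parameter(torch.zeros(2))  # log pre-exponential factors
E_a = torch.nn.Parameter(torch.ones(2))     # activation energies

def rates(T):
    # k_i(T) = A_i * exp(-E_a_i / (R * T))
    return torch.exp(log_A - E_a / (R * T))

print(rates(torch.tensor(300.0)))
```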
Algorithm 1: Learning chemical reactions with the PPF technique.

Input: time series data $\{(u_k(t_{n}), u_k'(t_{n})), \ k=1,\cdots,n_s, \ n=1,\cdots,N \}$.
Output: stoichiometric coefficient matrix $\bm{V}$, chemical reaction rates $\bm{k}_f$ and $\bm{k}_r$.

1. Initialize the hyperparameters: number of reactions $n_r$, total number of epochs $M$, learning rate $lr$, regularization coefficient $\lambda$, integer threshold $\epsilon$.
2. Initialize the parameters $\bm{V}$, $\bm{l}_f$ and $\bm{l}_r$; set $S_{\textrm{int}}=\emptyset$ and allocate $L_{\textrm{rec}}$ to record the loss function in each epoch.
3. Step 1: learning $\bm{V}$. For epoch $i=1,\cdots,M$:
(a) compute the loss $\mathcal{L}$ and the gradient $\frac{\partial \mathcal{L}}{\partial\theta}$ by backpropagation; update the parameters (excluding the integer entries in $\bm{V}$) by the Adam method; record $L_{\textrm{rec}}[i]=\mathcal{L}$;
(b) if $L_{\textrm{rec}}[i] \ge L_{\textrm{rec}}[i-1]$ (i.e., the loss increases), check whether any row of $\bm{V}$ is close to integers: for each $j \notin S_{\textrm{int}}$ with $d_{\textrm{int}}(\bm{v}_j)\le\epsilon$, round $\bm{v}_j \leftarrow \nint{\bm{v}_j}$ and add $j$ to $S_{\textrm{int}}$; randomly reinitialize the remaining non-integer entries of $\bm{V}$ in $(-2, 2)$;
(c) if $S_{\textrm{int}} = \{ 1,\cdots,n_r \}$, i.e., all the entries in $\bm{V}$ are integers, stop learning $\bm{V}$.
4. Step 2: learning $\bm{k}_f$ and $\bm{k}_r$. For the remaining epochs, compute the loss $\mathcal{L}$ and the gradient $\frac{\partial \mathcal{L}}{\partial\theta}$ by backpropagation, and update the parameters (excluding $\bm{V}$) by the Adam method.
5. Return $\bm{V}$, $k_{if} = \exp({l_{if}})$ and $k_{ir} = \exp({l_{ir}})$.
§ NUMERICAL RESULTS
Here the performance of our algorithm will be shown with five examples. The first example is an artificial reaction mechanism with two reactions [21]. The second one is the well-known Michaelis-Menten kinetics [16] in biochemistry. The third one is the hydrogen oxidation reactions [11, 8]. The fourth one is the extended Zeldovich mechanism, a typical chemical mechanism describing the oxidation of nitrogen and NOx formation [41]. The last one is the simplified GRI-3.0 mechanism, a chemical mechanism describing the methane oxidation [37].
In each numerical example, we randomly take 100 different initial conditions to generate the data. For each initial condition, we take uniform time snapshots at $t_n=n\Delta t$ with $n=0,\dots,10$ and $\Delta t=0.1$. The data are generated by solving the governing ODEs numerically using an implicit Runge-Kutta method of the Radau IIA family of fifth order [39] with small enough tolerance. The datasets are randomly split into training datasets and validation datasets with a ratio of 4:1. It is worth noting that we do not take $\Delta t$ to be too small, so that the datasets could potentially be replaced by experimental data in the future. The algorithm is implemented with PyTorch [28].
Now we present some details of the training and validation for the following numerical tests. In the training process, all the parameters in the neural network are first randomly initialized from the uniform distribution on the interval $(-0.5, 0.5)$. Then, we update the parameters by minimizing the loss in (<ref>) using the standard Adam algorithm [17]. The learning rate is taken to be $10^{-3}$ and the regularization coefficient $\lambda$ in (<ref>) is $10^{-8}$. Following the training method described after (<ref>), we take the integer threshold to be 0.05. The total number of epochs is $10^6$, and mini-batch gradient descent is applied with a batch size of 10.
For the validation, we use the following relative $L^2$ error:
\begin{equation*}
E = \sqrt{\frac{\sum_{n=1}^N\sum_{k=1}^{n_s}\abs{\mathcal{N}(u_1(t_{n}),\cdots,u_{n_s}(t_{n}))_k - u_k'(t_{n})}^2}{\sum_{n=1}^N\sum_{k=1}^{n_s}\abs{ u_k'(t_{n})}^2}}.
\end{equation*}
Here the $(u_k(t_{n}), u_k'(t_{n}))$'s
come from the validation dataset.
For the other details, we refer the interested readers to our code in <https://github.com/JuntaoHuang/multiscale-chemical-reaction>.
[hypothetical stiff reaction network]
The first test case is an artificial reaction network with two reactions, taken from [21]:
\begin{align}
\ch{F <=>[$k\sb{1}\sp{+}$][$k\sb{1}\sp{-}$] R} \label{eq:artificial-reaction-1} \\
\ch{R <=>[$k\sb{2}\sp{+}$][$k\sb{2}\sp{-}$] P} \label{eq:artificial-reaction-2}
\end{align}
Here F, R and P indicate the fuel, radical and product in combustion, respectively. The reaction rates are taken to be $k_1^+ = k_1^- = 1$ and $k_2^+ = k_2^- = 10^3$. The two reversible reactions in (<ref>) have dramatically different reaction rates. Thus, the second reaction (<ref>) quickly approaches equilibrium after a transient period, after which the first one (<ref>) becomes rate-limiting. This simple model is chosen to test the correctness of our code for stiff reactions.
The corresponding ODE system for (<ref>) is linear. The eigenvalues of the coefficient matrix are $\lambda_1=-2000$, $\lambda_2=-1.5$ and $\lambda_3=0$, which differ by several orders of magnitude. This indicates that the ODE system is stiff [39].
To illustrate the advantage of the PPF technique, we compare the performance of the algorithm with and without it. The history of the training and validation errors is shown in Figure <ref>. The relative error stays around $10^{-3}$ without this technique and decreases to $10^{-6}$ after applying it. The learned parameters are listed in Table <ref>. The upper part of the table shows the parameters learned with the PPF technique, which agree well with the ground truth in (<ref>). By contrast, the algorithm without this technique could not produce the correct result. Moreover, it is interesting to see that, without the technique, the learned stoichiometric coefficients in the first and second reactions have opposite signs. We also notice that the sum of the forward rate $k_f$ of the first reaction and the reverse rate $k_r$ of the second one is close to the true reaction rate $10^3$; the same holds for the reverse rate of the first reaction and the forward rate of the second one. This indicates that the combined effect of these two learned reactions is identical to the fast reaction (<ref>), while the slow reaction (<ref>) is not captured here. This is similar to the phenomenon we observed in the multiscale nonlinear regression problem in Section <ref>.
Example <ref>: the history of the relative error for the training data and the verification data. Solid line: the method with the PPF technique; dashed line: the method without the PPF technique.
freezing $x_1$ $x_2$ $x_3$ $k_f$ $k_r$
$1$ $0.000$ $1.000$ $-1.000$ 1.000e+03 1.000e+03
$2$ $-1.000$ $1.000$ $0.000$ 1.000e+00 1.000e+00
no freezing $x_1$ $x_2$ $x_3$ $k_f$ $k_r$
$1$ $-0.001$ $0.999$ $-0.999$ 7.448e+02 5.731e+02
$2$ $0.000$ $-0.999$ $0.999$ 4.277e+02 2.559e+02
Example <ref>: learned parameters. Upper part: with the PPF technique; lower part: without the PPF technique. Here $(x_1, x_2, x_3)$ denotes the row vector of the matrix $\bm{V}$.
Next, we test the algorithm with different numbers of chemical reactions. We take the number of reactions ranging from 1 to 4. The relative errors on the training data and the validation data are shown in Figure <ref>. The relative error decreases by three orders of magnitude when the number of proposed reactions is increased from one to two, and reaches a plateau after that. Moreover, it is observed from Table <ref> that some of the learned stoichiometric coefficients or reaction rates are close to zero if the number of reactions is larger than two. It can then be inferred that the kinetics is well described by two reactions.
reaction num 1 $x_1$ $x_2$ $x_3$ $k_f$ $k_r$
$1$ $0.000$ $-1.000$ $1.000$ 1.000e+03 1.000e+03
reaction num 2 $x_1$ $x_2$ $x_3$ $k_f$ $k_r$
$1$ $0.000$ $1.000$ $-1.000$ 1.000e+03 1.000e+03
$2$ $-1.000$ $1.000$ $0.000$ 1.000e+00 1.000e+00
reaction num 3 $x_1$ $x_2$ $x_3$ $k_f$ $k_r$
$1$ $0.000$ $1.000$ $-1.000$ 1.000e+03 1.000e+03
$2$ $-1.000$ $1.000$ $0.000$ 9.217e+02 1.218e+02
$3$ $0.750$ $-0.101$ $0.384$ 7.887e-04 1.813e-03
reaction num 4 $x_1$ $x_2$ $x_3$ $k_f$ $k_r$
$1$ $0.000$ $-1.000$ $1.000$ 1.000e+03 1.000e+03
$2$ $0.000$ $0.000$ $0.000$ 5.931e+01 5.929e+01
$3$ $0.000$ $0.000$ $0.000$ 2.526e+01 2.524e+01
$4$ $1.000$ $-1.000$ $0.000$ 1.000e+00 1.000e+00
Example <ref>: learned parameters with different number of reactions. Here $(x_1, x_2, x_3)$ denotes the row vector of the matrix $\bm{V}$.
Example <ref>: relative error for the training data and the validation data with different number of reactions.
[enzyme kinetics]
In this example, we consider the Michaelis–Menten kinetics [16], one of the best-known models of enzyme kinetics in biochemistry. It involves an enzyme E binding to a substrate S to form a complex ES, which in turn releases a product P, regenerating the original enzyme. This can be represented schematically as [16]
\begin{equation}\label{eq:enzyme-reaction}
\ch{E + S {<=>[$k\sb{f}$][$k\sb{r}$]} ES {->[$k\sb{cat}$]} E + P}
\end{equation}
Here $k_f$ denotes the forward rate constant, $k_r$ the reverse rate constant, and $k_{cat}$ the catalytic rate constant. This model is used in a variety of biochemical situations other than enzyme-substrate interaction, including antigen–antibody binding, DNA-DNA hybridization, and protein–protein interaction [26]. Moreover, the reaction rates vary widely between different enzymes. In our test case, we follow [36] and take $k_f=10^6$, $k_r=10^3$ and $k_{cat}=10$.
Note that the second reaction in (<ref>) is not reversible. Here, we show that exactly the same algorithm can be applied in this situation. The results with and without the PPF technique are listed in Table <ref>. In the upper part of the table, the reverse rate for the second reaction is $1.949\times10^{-4}$. It can then be inferred that the system is well described by two reactions with the second one irreversible. Again, the algorithm without this treatment could only recover the first, faster reaction in (<ref>). The evolution of the loss function is similar to that in Example <ref> and is thus omitted here.
freezing $x_1$ $x_2$ $x_3$ $x_4$ $k_f$ $k_r$
$1$ $-1.000$ $-1.000$ $1.000$ $0.000$ 1.000e+06 1.000e+03
$2$ $1.000$ $0.000$ $-1.000$ $1.000$ 1.000e+01 1.949e-04
no freezing $x_1$ $x_2$ $x_3$ $x_4$ $k_f$ $k_r$
$1$ $-0.999$ $-0.999$ $0.989$ $0.000$ 9.921e+05 9.929e+02
$2$ $-1.001$ $-1.000$ $2.385$ $0.000$ 7.956e+03 1.392e+01
Example <ref>: learned parameters. Upper part: with the PPF technique; lower part: without the PPF technique. Here $(x_1, x_2, x_3, x_4)$ denotes the row vector of the matrix $\bm{V}$.
Next, we test the performance of the algorithm when the reaction rates depend on temperature. We assume that the rate constants in (<ref>) satisfy the Arrhenius law:
\begin{equation}
k_f = A_f \exp\brac{-\frac{E_{a,f}}{R T}}, \quad k_r = A_r \exp\brac{-\frac{E_{a,r}}{R T}}, \quad k_{cat} = A_{cat} \exp\brac{-\frac{E_{a,cat}}{R T}}
\end{equation}
where the pre-exponential factors are given by
\begin{equation}
A_f = 1, \quad A_r = 4, \quad A_{cat} = 10^3
\end{equation}
and the activation energies are
\begin{equation}
E_{a,f} = 1600, \quad E_{a,r} = 3680, \quad E_{a,cat} = 2240
\end{equation}
and the gas constant is $R = 8.3145$. The temperature is sampled from a uniform distribution on the interval $[200, 400]$. In this case, the unknown parameters include the pre-exponential factors, the activation energies in the Arrhenius law, and the stoichiometric coefficients. Our PPF technique can again be applied directly without much modification.
We compare the performance of the algorithm with and without the PPF technique. The history of the relative error for the training data and the verification data with variable temperature is shown in Figure <ref>. We see clearly that the errors with the PPF technique are much smaller than those without it. We also show the learned parameters in Table <ref>. The upper part of the table shows the parameters learned with the PPF technique, which agree well with the ground truth. By contrast, the algorithm without this technique could not generate the correct result.
Example <ref>: the history of the relative error for the training data and the verification data with variable temperature. Solid line: the method with the PPF technique; dashed line: the method without the PPF technique.
freezing $x_1$ $x_2$ $x_3$ $x_4$ $A_f$ $A_r$ $E_f$ $E_r$
$1$ $1.000$ $0.000$ $-1.000$ $1.000$ 1.000e+03 1.801e-05 2.240e+03 5.203e+03
$2$ $-1.000$ $-1.000$ $1.000$ $0.000$ 1.000e+00 4.000e+00 1.600e+03 3.680e+03
no freezing $x_1$ $x_2$ $x_3$ $x_4$ $A_f$ $A_r$ $E_f$ $E_r$
$1$ $1.061$ $0.001$ $-1.000$ $2.887$ 3.391e+02 1.665e-06 2.240e+03 2.244e+03
$2$ $0.969$ $0.002$ $-1.000$ $0.032$ 6.640e+02 2.689e-01 6.727e+01 6.684e+02
Example <ref>: learned parameters with variable temperatures. Upper part: with the PPF technique; lower part: without the PPF technique. Here $(x_1, x_2, x_3, x_4)$ denotes the row vector of the matrix $\bm{V}$.
[hydrogen oxidation reaction]
In this example, we consider a model for hydrogen oxidation reaction where six species H2 (hydrogen), O2 (oxygen), H2O (water), H, O, OH (radicals) are involved in six steps in a closed system under constant volume and temperature [11, 8]:
\begin{align}
\ch{H2 &<=>[$k\sb{1}\sp{+}$][$k\sb{1}\sp{-}$] 2 H} \\
\ch{O2 &<=>[$k\sb{2}\sp{+}$][$k\sb{2}\sp{-}$] 2 O} \\
\ch{H2O &<=>[$k\sb{3}\sp{+}$][$k\sb{3}\sp{-}$] H + OH} \\
\ch{H2 + O &<=>[$k\sb{4}\sp{+}$][$k\sb{4}\sp{-}$] H + OH} \\
\ch{O2 + H &<=>[$k\sb{5}\sp{+}$][$k\sb{5}\sp{-}$] O + OH} \\
\ch{H2 + O &<=>[$k\sb{6}\sp{+}$][$k\sb{6}\sp{-}$] H2O}
\end{align}
with the reaction rates $k_1^+=2$, $k_2^+=k_3^+=1$, $k_4^+=k_5^+=1\times10^3$, $k_6^+=1\times10^2$, $k_1^- = 2.16\times10^2$, $k_2^- = 3.375\times10^2$, $k_3^- = 1.4\times10^3$, $k_4^- = 1.08\times10^4$, $k_5^- = 3.375\times10^4$, $k_6^- = 7.714285714285716\times10^{-1}$.
The system (<ref>) corresponds to a simplified picture of this chemical process, and the reaction rates reflect only orders of magnitude for relevant real-world systems.
The magnitudes of the reaction rates vary from $10^{-1}$ to $10^{4}$, which leads to multiscale phenomena. This reaction network has many more reactions and is more realistic than the first two test cases.
We first compare the performance of our algorithm with and without the PPF treatment. The history of the training and validation errors is shown in Figure <ref>. Again, we observe that this technique greatly reduces both errors. The learned parameters are listed in Table <ref>. The algorithm generates the correct result with this technique. On the other hand, without it, the phenomenon of opposite signs observed in Table <ref> appears again.
Example <ref>: the history of the relative error for the training data and the verification data. Solid line: the method with the PPF technique; dashed line: the method without the PPF technique.
freezing $x_1$ $x_2$ $x_3$ $x_4$ $x_5$ $x_6$ $k_f$ $k_r$
$1$ $0.000$ $1.000$ $0.000$ $1.000$ $-1.000$ $-1.000$ 3.375e+04 1.000e+03
$2$ $1.000$ $0.000$ $0.000$ $-1.000$ $1.000$ $-1.000$ 1.080e+04 1.000e+03
$3$ $0.000$ $0.000$ $1.000$ $-1.000$ $0.000$ $-1.000$ 1.400e+03 1.000e+00
$4$ $0.000$ $1.000$ $0.000$ $0.000$ $-2.000$ $0.000$ 3.375e+02 1.000e+00
$5$ $1.000$ $0.000$ $0.000$ $-2.000$ $0.000$ $0.000$ 2.160e+02 2.000e+00
$6$ $-1.000$ $0.000$ $1.000$ $0.000$ $-1.000$ $0.000$ 1.000e+02 7.714e-01
no freezing $x_1$ $x_2$ $x_3$ $x_4$ $x_5$ $x_6$ $k_f$ $k_r$
$1$ $0.000$ $0.938$ $0.000$ $0.923$ $-1.000$ $-1.001$ 1.971e+04 5.512e+02
$2$ $0.000$ $1.087$ $0.000$ $1.108$ $-0.999$ $-0.999$ 1.407e+04 4.488e+02
$3$ $0.885$ $0.000$ $0.115$ $-1.000$ $0.885$ $-1.000$ 1.217e+04 2.112e+01
$4$ $-1.004$ $0.000$ $0.087$ $0.910$ $-1.008$ $0.913$ 1.077e+03 2.926e+01
$5$ $0.000$ $0.996$ $0.008$ $0.000$ $-1.997$ $0.000$ 3.379e+02 8.912e-01
$6$ $0.987$ $0.000$ $0.007$ $-1.984$ $0.004$ $0.004$ 2.170e+02 3.377e+00
Example <ref>: learned parameters. Upper part: with the PPF technique; lower part: without the PPF technique. Here $(x_1, x_2, x_3, x_4, x_5, x_6)$ denotes the row vector of the matrix $\bm{V}$.
We also test the performance of the algorithm with Gaussian noise. The algorithm obtains the correct prediction of the stoichiometric coefficients at the noise levels $10^{-4}$ and $10^{-3}$. The learned reaction rates with noise are shown in Table <ref>. The relative errors in the reaction rates are typically below the order of $10^{-2}$ for $10^{-3}$ noise and $10^{-3}$ for $10^{-4}$ noise.
noise $10^{-3}$ $k_f$ relative error $k_r$ relative error
1 3.375e+04 5.706e-05 1.002e+03 2.097e-03
2 1.080e+04 2.789e-04 1.001e+03 1.374e-03
3 1.399e+03 4.413e-04 9.631e-01 3.836e-02
4 3.399e+02 7.103e-03 9.235e-01 8.278e-02
5 2.161e+02 6.927e-04 2.130e+00 6.107e-02
6 9.764e+01 2.417e-02 8.047e-01 4.137e-02
noise $10^{-4}$ $k_f$ relative error $k_r$ relative error
1 3.375e+04 6.482e-06 1.000e+03 2.099e-04
2 1.080e+04 2.803e-05 1.000e+03 1.379e-04
3 1.400e+03 4.491e-05 9.963e-01 3.707e-03
4 3.377e+02 7.161e-04 9.923e-01 7.717e-03
5 2.160e+02 6.965e-05 2.013e+00 6.469e-03
6 9.976e+01 2.366e-03 7.748e-01 4.294e-03
Example <ref>: learned reaction rates with noise.
Moreover, we plot the evolution of the concentrations of the six species at the noise level $10^{-3}$ in Figure <ref>. We observe a good agreement between the solution generated by our learned model and the exact solution. We also measure the prediction errors of the learned model at 100 uniformly spaced points in the time interval $[0, 10]$. The prediction errors are $1.953\times10^{-6}$ with zero noise, $9.152\times10^{-4}$ with noise level $10^{-4}$ and $8.710\times10^{-4}$ with noise level $10^{-3}$.
Example <ref>: the evolution of the concentrations of the 6 species in the hydrogen oxidation reaction problem obtained by solving the original ODEs (<ref>) and our learned ODEs, at noise level $10^{-3}$.
[extended Zeldovich mechanism]
In this example, we test our algorithm on the extended Zeldovich mechanism, which is a chemical mechanism describing the oxidation of nitrogen and NOx formation [41]. Similar to Example <ref>, this is another realistic test case. The reaction mechanisms read as
\begin{align}
\ch{N2 + O &<=>[$k\sb{1}\sp{+}$][$k\sb{1}\sp{-}$] NO + N} \\
\ch{N + O2 &<=>[$k\sb{2}\sp{+}$][$k\sb{2}\sp{-}$] NO + O} \\
\ch{N + OH &<=>[$k\sb{3}\sp{+}$][$k\sb{3}\sp{-}$] NO + H}
\end{align}
and the reaction rates are given by the Arrhenius law [12]:
\begin{equation}
\begin{aligned}
k_1^+ &= 1.8\times10^{11} \exp(-38370/T), \quad k_1^- = 3.8\times10^{10} \exp(-425/T), \\
k_2^+ &= 1.8\times10^{7} \exp(-4680/T), \quad k_2^- = 3.8\times10^{6} \exp(-20820/T), \\
k_3^+ &= 7.1\times10^{10} \exp(-450/T), \quad k_3^- = 1.7\times10^{11} \exp(-24560/T),
\end{aligned}
\end{equation}
with $T$ the temperature.
In the numerical test, we fix the temperature to be $T=3000$, which is a reasonable temperature in real applications [12]. At this temperature, the reaction rates are
\begin{equation}
\begin{aligned}
& k_1^+ = 5.019\times10^5, \quad k_2^+ = 3.782\times10^6, \quad k_3^+ = 6.111\times10^{10}, \\
& k_1^- = 3.298\times10^{10}, \quad k_2^- = 3.679\times10^3, \quad k_3^- = 4.732\times10^7.
\end{aligned}
\end{equation}
Then, we follow the same procedure as in the previous examples to generate the data and execute the algorithm to discover the stoichiometric coefficients and the reaction rates. Again, the algorithm with the PPF treatment predicts the correct result, which is shown in Table <ref>. We observe that accurate reaction rates are obtained.
freezing $x_1$ $x_2$ $x_3$ $x_4$ $x_5$ $x_6$ $x_7$ $k_f$ $k_r$
$1$ $0.000$ $0.000$ $1.000$ $-1.000$ $0.000$ $-1.000$ $1.000$ 6.111e+10 4.732e+07
$2$ $1.000$ $1.000$ $-1.000$ $-1.000$ $0.000$ $0.000$ $0.000$ 3.298e+10 5.019e+05
$3$ $0.000$ $1.000$ $1.000$ $-1.000$ $-1.000$ $0.000$ $0.000$ 3.782e+06 3.931e+03
Example <ref>: learned parameters with the PPF technique. Here $(x_1, x_2, x_3, x_4, x_5, x_6, x_7)$ denotes the row vector of the matrix $\bm{V}$.
[simplified GRI-3.0 mechanism]
In this example, we test our algorithm on the simplified GRI-3.0 mechanism, which is a chemical mechanism describing the methane oxidation [37]. This is the most complicated reaction system tested in the paper. The mechanism includes 16 species with 12 reactions and reads as
\begin{align}
\ch{CH_4 + H & ->[$k\sb{1}$] CH_3 + H_2} \\
\ch{CH_2O + H_2 & ->[$k\sb{2}$] CH_3 + OH} \\
\ch{CH_2O & ->[$k\sb{3}$] CO + H_2} \\
\ch{C_2H_6 & ->[$k\sb{4}$] C_2H_4 + H_2} \\
\ch{C_2H_4 + OH & ->[$k\sb{5}$] CH_3 + CO + H_2} \\
\ch{2 CO + H_2 & ->[$k\sb{6}$] C_2H_2 + O_2} \\
\ch{CO + OH + H & ->[$k\sb{7}$] CO_2 + H_2} \\
\ch{H + OH & ->[$k\sb{8}$] H_2O} \\
\ch{2 H + 2 OH & ->[$k\sb{9}$] 2 H_2 + O_2} \\
\ch{H_2 & ->[$k\sb{10}$] 2 H} \\
\ch{H_2 + O_2 & ->[$k\sb{11}$] HO_2 + H} \\
\ch{H_2O_2 + H & ->[$k\sb{12}$] H_2 + HO_2}
\end{align}
The reaction rates are given in [37], which are derived from the reaction rates of the standard GRI-3.0 Mech [35]. We compute the reaction rates with the temperature $T=3000$ and list them in Table 4.8. Here, the reaction rates are normalized such that the smallest one is of order 1.
$k_1$ $k_2$ $k_3$ $k_4$ $k_5$ $k_6$
5.088e+00 1.891e+00 2.607e+00 6.268e+00 5.446e+00 1.283e+01
$k_7$ $k_8$ $k_9$ $k_{10}$ $k_{11}$ $k_{12}$
1.349e+00 5.264e+03 3.268e+01 4.873e+03 2.978e+02 5.227e+03
Example <ref>: reaction rates in simplified GRI-3.0 Mech
Note that none of the reactions in (<ref>) is reversible. Here, we apply exactly the same algorithm in this situation, as in Example <ref>.
To illustrate the advantage of the PPF technique, we first compare the performance of the algorithm with and without this technique. The history of the training and validation errors is shown in Figure <ref>. The relative error stays around $10^{-3}$ without this technique, and decreases to $10^{-6}$ after applying this technique.
Example <ref>: the history of the relative error for the training data and the verification data. Solid line: the method with the PPF technique; dashed line: the method without the PPF technique.
$k^+_1$ $k^+_2$ $k^+_3$ $k^+_4$ $k^+_5$ $k^+_6$
5.088e+00 1.891e+00 2.607e+00 6.268e+00 5.446e+00 1.283e+01
$k^+_7$ $k^+_8$ $k^+_9$ $k^+_{10}$ $k^+_{11}$ $k^+_{12}$
1.349e+00 5.264e+03 3.268e+01 4.873e+03 2.978e+02 5.227e+03
$k^-_1$ $k^-_2$ $k^-_3$ $k^-_4$ $k^-_5$ $k^-_6$
2.546e-04 1.695e-04 1.091e-04 1.751e-04 1.103e-04 9.052e-06
$k^-_7$ $k^-_8$ $k^-_9$ $k^-_{10}$ $k^-_{11}$ $k^-_{12}$
8.462e-05 1.146e-05 5.472e-04 2.625e-07 8.566e-07 3.863e-04
Example <ref>: learned reaction rates in simplified GRI-3.0 Mech. Upper part: reaction rates in the forward reaction; lower part: reaction rates in the reverse reaction.
We also list the learned parameters with the PPF technique in Table 4.9.
The learned stoichiometric coefficients are identical to the true coefficients in (<ref>) and are therefore omitted here. The upper part of the table shows the learned rates of the forward reactions with the PPF technique, which agree well with the ground truth in Table 4.8.
The learned rates of the reverse reactions are of magnitude $10^{-7}$ to $10^{-4}$. It can then be inferred that the system is well described using only forward reactions. By contrast, the algorithm without this technique could not generate the correct result, and we omit those results here.
§ CONCLUSION
In this paper, we propose a data-driven method to discover multiscale chemical reactions governed by the law of mass action. The method contains two novel points.
First, we use a single matrix to represent the stoichiometric coefficients of both the reactants and the products in a system without catalysis reactions.
The negative entries in the matrix denote the stoichiometric coefficients of the reactants and the positive ones those of the products.
Second, by considering a multiscale nonlinear regression problem, we find that conventional optimization methods usually get stuck in local minima and cannot find the true solution. To escape from the local minima, we propose a PPF technique, which exploits the fact that the stoichiometric coefficients are integers. In the training process, if the loss function stops decreasing, the stoichiometric coefficients that are close to integers are rounded and then frozen afterwards. With this treatment, the stoichiometric coefficients are gradually determined, the dimension of the search space is reduced during training, and eventually the global minimum can be obtained. Several numerical experiments, including the classical Michaelis–Menten kinetics, the hydrogen oxidation reactions and the simplified GRI-3.0 mechanism, verify the validity of our algorithm in learning multiscale chemical reactions.
There are still some problems to be addressed in order to develop a robust and general framework for discovering multiscale chemical reactions from data. We highlight some of the challenges that could guide future advances. First, it would be interesting to generalize the PPF technique to catalysis reactions.
Second, the number of species $n_s$ cannot be determined by our algorithm. In principle, to infer an unknown chemical reaction system, we should have concentration time series data for all the species; our algorithm cannot treat the problem when the concentrations of some of the species are unknown. This difficulty may be overcome by combining the current algorithm with the neural ODE approach in [15].
The third challenge is that for very complex reaction networks with a large number of reactions (hundreds or thousands), our algorithm may not always find the correct solution. New ideas are needed at this point.
[1]
B. Bhadriraju, M. S. F. Bangi, A. Narasingam, and S. I. Kwon.
Operable adaptive sparse identification of systems (OASIS):
application to chemical processes.
AIChE Journal, 66(11):e16980, 2020.
[2]
B. Bhadriraju, A. Narasingam, and J. S.-I. Kwon.
Machine learning-based adaptive model identification of systems:
Application to a chemical process.
Chemical Engineering Research and Design, 152:372–383, 2019.
[3]
S. L. Brunton, B. R. Noack, and P. Koumoutsakos.
Machine learning for fluid mechanics.
Annual Review of Fluid Mechanics, 52:477–508, 2020.
[4]
S. L. Brunton, J. L. Proctor, and J. N. Kutz.
Discovering governing equations from data by sparse identification of
nonlinear dynamical systems.
Proceedings of the National Academy of Sciences,
113(15):3932–3937, 2016.
[5]
S. C. Burnham, D. P. Searson, M. J. Willis, and A. R. Wright.
Inference of chemical reaction networks.
Chemical Engineering Science, 63(4):862–873, 2008.
[6]
K. Champion, B. Lusch, J. N. Kutz, and S. L. Brunton.
Data-driven discovery of coordinates and governing equations.
Proceedings of the National Academy of Sciences,
116(45):22445–22451, 2019.
[7]
R. Chartrand.
Numerical differentiation of noisy, nonsmooth data.
ISRN Applied Mathematics, 2011, 2011.
[8]
E. Chiavazzo and I. V. Karlin.
Quasi-equilibrium grid algorithm: Geometric construction for model reduction.
Journal of Computational Physics, 227(11):5535–5560, 2008.
[9]
B. de Silva, K. Champion, M. Quade, J.-C. Loiseau, J. N. Kutz, and S. Brunton.
PySINDy: A Python package for the sparse identification of
nonlinear dynamical systems from data.
Journal of Open Source Software, 5(49):1–4, 2020.
[10]
C. W. Gao, J. W. Allen, W. H. Green, and R. H. West.
Reaction mechanism generator: Automatic construction of chemical
kinetic mechanisms.
Computer Physics Communications, 203:212–225, 2016.
[11]
A. N. Gorban and I. V. Karlin.
Invariant manifolds for physical and chemical kinetics.
Springer Science & Business Media, 2005.
[12]
R. K. Hanson and S. Salimian.
Survey of rate constants in the n/h/o system.
In Combustion chemistry, pages 361–421. Springer, 1984.
[13]
M. Hoffmann, C. Fröhner, and F. Noé.
Reactive SINDy: Discovering governing reactions from concentration data.
The Journal of Chemical Physics, 150(2):025101, 2019.
[14]
J. Huang, Z. Ma, Y. Zhou, and W.-A. Yong.
Learning thermodynamically stable and Galilean invariant partial
differential equations for non-equilibrium flows.
Journal of Non-Equilibrium Thermodynamics, 2021.
[15]
W. Ji and S. Deng.
Autonomous discovery of unknown reaction pathways from data by
chemical reaction neural network.
The Journal of Physical Chemistry A, 125(4):1082–1092, 2021.
[16]
J. P. Keener and J. Sneyd.
Mathematical Physiology, volume 1.
Springer, 1998.
[17]
D. P. Kingma and J. Ba.
Adam: A method for stochastic optimization.
arXiv preprint arXiv:1412.6980, 2014.
[18]
D. Langary and Z. Nikoloski.
Inference of chemical reaction networks based on concentration
profiles using an optimization framework.
Chaos: An Interdisciplinary Journal of Nonlinear Science,
29(11):113121, 2019.
[19]
Y. LeCun, Y. Bengio, and G. Hinton.
Deep learning.
Nature, 521(7553):436–444, 2015.
[20]
L. Lu, X. Meng, Z. Mao, and G. E. Karniadakis.
DeepXDE: A deep learning library for solving differential equations.
arXiv preprint arXiv:1907.04502, 2019.
[21]
T. Lu and C. K. Law.
On the applicability of directed relation graphs to the reduction of
reaction mechanisms.
Combustion and Flame, 146(3):472–483, 2006.
[22]
B. Lusch, J. N. Kutz, and S. L. Brunton.
Deep learning for universal linear embeddings of nonlinear dynamics.
Nature Communications, 9(1):1–10, 2018.
[23]
S. Maddu, B. L. Cheeseman, C. L. Müller, and I. F. Sbalzarini.
Learning physically consistent mathematical models from data using
group sparsity.
arXiv preprint arXiv:2012.06391, 2020.
[24]
N. M. Mangan, S. L. Brunton, J. L. Proctor, and J. N. Kutz.
Inferring biological networks by sparse identification of nonlinear dynamics.
IEEE Transactions on Molecular, Biological and Multi-Scale
Communications, 2(1):52–63, 2016.
[25]
T. Nagy, J. Tóth, and T. Ladics.
Automatic kinetic model generation and selection based on
concentration versus time curves.
International Journal of Chemical Kinetics, 52(2):109–123, 2020.
[26]
D. L. Nelson, A. L. Lehninger, and M. M. Cox.
Lehninger principles of biochemistry.
Macmillan, 2008.
[27]
H. G. Othmer.
Analysis of complex reaction networks.
Lecture Notes, School of Mathematics, University of Minnesota.
[28]
A. Paszke, S. Gross, F. Massa, A. Lerer, J. Bradbury, G. Chanan, T. Killeen,
Z. Lin, N. Gimelshein, L. Antiga, et al.
Pytorch: An imperative style, high-performance deep learning library.
Advances in Neural Information Processing Systems,
32:8026–8037, 2019.
[29]
M. Raissi, P. Perdikaris, and G. E. Karniadakis.
Physics-informed neural networks: A deep learning framework for
solving forward and inverse problems involving nonlinear partial differential equations.
Journal of Computational Physics, 378:686–707, 2019.
[30]
M. Raissi, A. Yazdani, and G. E. Karniadakis.
Hidden fluid mechanics: Learning velocity and pressure fields from
flow visualizations.
Science, 367(6481):1026–1030, 2020.
[31]
R. Ranade, S. Alqahtani, A. Farooq, and T. Echekki.
An ANN based hybrid chemistry framework for complex fuels.
Fuel, 241:625–636, 2019.
[32]
R. Ranade, S. Alqahtani, A. Farooq, and T. Echekki.
An extended hybrid chemistry framework for complex hydrocarbon fuels.
Fuel, 251:276–284, 2019.
[33]
L. I. Rudin, S. Osher, and E. Fatemi.
Nonlinear total variation based noise removal algorithms.
Physica D: Nonlinear Phenomena, 60(1-4):259–268, 1992.
[34]
S. H. Rudy, S. L. Brunton, J. L. Proctor, and J. N. Kutz.
Data-driven discovery of partial differential equations.
Science Advances, 3(4):e1602614, 2017.
[35]
G. P. Smith, D. M. Golden, M. Frenklach, N. W. Moriarty, B. Eiteneer,
M. Goldenberg, C. T. Bowman, R. K. Hanson, S. Song, W. C. J. Gardiner, V. V.
Lissianski, and Z. Qin.
GRI-Mech home page, http://www.me.berkeley.edu/gri_mech/.
[36]
V. Srinivasan and R. Aiken.
Stage-wise parameter estimation for stiff differential equations.
AIChE journal, 32(2):195–199, 1986.
[37]
C. Sung, C. Law, and J.-Y. Chen.
Augmented reduced mechanisms for no emission in methane oxidation.
Combustion and Flame, 125(1-2):906–919, 2001.
[38]
E. O. Voit, H. A. Martens, and S. W. Omholt.
150 years of the mass action law.
PLoS Comput Biol, 11(1):e1004012, 2015.
[39]
G. Wanner and E. Hairer.
Solving Ordinary Differential Equations II.
Springer Berlin Heidelberg, 1996.
[40]
M. J. Willis and M. von Stosch.
Inference of chemical reaction networks using mixed integer linear programming.
Computers & Chemical Engineering, 90:31–43, 2016.
[41]
I. Zeldovich, G. I. Barenblatt, V. Librovich, and G. Makhviladze.
Mathematical theory of combustion and explosions. Consultants Bureau, New York, 1985.
# Unsteady motion past a sphere translating steadily in wormlike micellar
solutions: A numerical analysis
Chandi Sasmal1<EMAIL_ADDRESS>1Soft Matter Engineering and
Microfluidics Lab, Department of Chemical Engineering, Indian Institute of
Technology Ropar, Rupnagar, India-140001
###### Abstract
This study numerically investigates the flow characteristics past a solid and
smooth sphere translating steadily along the axis of a cylindrical tube filled
with wormlike micellar solutions in the creeping flow regime. The two-species
VCM (Vasquez-Cook-McKinley) and single-species Giesekus constitutive models
are used to characterize the rheological behaviour of micellar solutions. Once
the Weissenberg number exceeds a critical value, an unsteady motion downstream
of the sphere is observed in the case of the two-species model. We provide
evidence that this unsteady motion downstream of the sphere is caused by
the sudden rupture of long and stretched micelles in this region, resulting
from an increase in the extensional flow strength. The corresponding single-
species Giesekus model for the wormlike micellar solution, with no breakage
and reformation, predicts a steady flow field under otherwise identical
conditions. Therefore, it further ascertains the evidence presented herein for
the onset of this unsteady motion. Furthermore, we find that the onset of this
unsteady motion downstream of the sphere is delayed as the sphere to tube
diameter ratio decreases. A similar kind of unsteady motion has also been
observed in several earlier experiments on a sphere sedimenting in a tube
filled with wormlike micellar solutions. We find a remarkable
qualitative similarity in the flow characteristics between the present
numerical results on the steadily translating sphere and prior experimental
results on the falling sphere.
###### keywords:
Wormlike micelles, sphere, unsteady motion, VCM model
## 1 Introduction
A solid sphere translating in a cylindrical tube filled with a quiescent
liquid has been one of the classical benchmark problems in the realm of transport
phenomena for many decades. It represents an idealization of many industrially
relevant processes, for instance, fluidized and fixed bed reactors, slurry
reactors, falling ball viscometer, equipment for separating solid-liquid
mixture in mining, and petrochemical industries, processing of polymer melts,
etc. Not only of practical significance, but this problem is also of
fundamental interest in its own right. As a result, this problem has been
extensively investigated in the research community, and much has been written
about it in the literature for both Newtonian and non-Newtonian fluids
(McKinley, 2002; Chhabra, 2006; Michaelides, 2006). Earlier investigations on
this problem were restricted to simple Newtonian fluids like water, and it has
been then gradually extended to complex non-Newtonian fluids like polymer
solutions and melts due to their overwhelming applications in scores of
industrial settings like food, petrochemical, personal care products, etc
(Chhabra, 2006). Earlier investigations revealed that both the blockage ratio
(ratio of the sphere to tube diameter) and non-linear rheological properties
of fluids like shear-thinning, shear-thickening, visco-plasticity, etc.
greatly influenced the flow characteristics like the drag force, wake length,
etc., in comparison to that for an unconfined situation and for Newtonian
fluids. In addition to the investigations carried out for the generalized
Newtonian fluids, many studies have also been presented for viscoelastic
fluids. Some typical and complex flow features were seen in these fluids as
compared to that seen either in Newtonian or any GNF fluid. This complexity
was not only observed in the variation of the integral parameters like the
drag force but also seen in the flow fields near the sphere. For instance, a
downward or upward shifting in the axial velocity profile along the upstream
or downstream axis of the sphere has been observed both experimentally and
numerically for viscoelastic fluids in comparison to that seen in Newtonian
fluids (Arigo et al., 1995; Arigo & McKinley, 1998; Bush, 1994). Additionally,
a flow reversal phenomenon and/or the presence of a “negative wake” downstream
of the sphere has also been observed in viscoelastic fluids (Harlen, 2002;
Bisgaard, 1983).
The next generation of studies on this benchmark problem has considered a
solution comprised of various types of surfactant molecules. When these
molecules dissolve in water above a critical concentration, they spontaneously
self-assemble into large and flexible aggregates of micelles of different
shapes like spherical, ellipsoidal, wormlike, or lamellae (Moroi, 1992). The
rheological properties of these wormlike micellar (WLM) solutions were found
to be more complex than that seen for polymer solutions or melts (Rothstein,
2008, 2003). This is due to the fact that these wormlike micelles can undergo
continuous scission and reformation in an imposed shear or extensional flow
field, unlike polymer molecules, which are unlikely to break due to the
presence of a strong covalent backbone. Because of their extensive
applications over a wide range of industrial settings, a considerable number
of studies have also been performed on the falling sphere problem in these
fluids in the creeping flow regime. For instance, Jayaraman and Belmonte
(Jayaraman & Belmonte, 2003) conducted an experimental investigation on this
problem in CTAB (Cetyl trimethyl ammonium bromide)/NaSal(Sodium salicylate)
wormlike micellar solution. They found an unsteady motion of the sphere in the
direction of its sedimentation. They proposed that this instability was
caused by the destruction of the flow-induced structure (FIS)
formed in the sphere's vicinity. However, in a later study with the same WLM
solution, Chen and Rothstein (Chen & Rothstein, 2004) claimed that this
instability was due to the sudden rupture of the long micelles downstream of
the sphere. This reason was further established in a later study with CPyCl
(Cetylpyridinium chloride)/NaSal WLM solution by Wu and Mohammadigoushki (Wu &
Mohammadigoushki, 2018). The unsteady motion of the falling sphere has also
been observed in the study by Kumar et al. (Kumar et al., 2012) with CTAT
(Cetyl trimethyl ammonium p-toluenesulphonate)/NaCl (Sodium chloride) micellar
solution and in a recent study by Wang et al. (Wang et al., 2020) with OTAC
(Octadecyl trimethyl ammonium chloride)/NaSal micellar solution. To
characterize the falling sphere’s onset of this unsteady motion,
Mohammadigoushki and Muller (Mohammadigoushki & Muller, 2016) and Zhang and
Muller (Zhang & Muller, 2018) derived a criterion by calculating the local
extensional Weissenberg number downstream of the sphere based on the local
maximum extension rate. This criterion is found to be universally valid, as
they discovered that it does not depend on the micelle chemistry or the
rheological behaviour of the solution, for instance, whether or not the
solution shows shear banding. Furthermore, their predictions for the unsteady motion were
in line with that predicted by Chen and Rothstein (Chen & Rothstein, 2004) and
Wu and Mohammadigoushki (Wu & Mohammadigoushki, 2018).
Therefore, most of the studies proposed that the unsteady motion of a
sedimenting sphere in wormlike micellar solutions is due to the presence of
strong extensional flow downstream of the sphere, causing the sudden rupture
of highly aligned and stretched micelles in this region (Rothstein, 2008).
However, there is no direct evidence for this; it has only been inferred
indirectly from FIB (flow induced birefringence) and PIV (particle
image velocimetry) experiments (Chen & Rothstein, 2004; Wu & Mohammadigoushki,
2018). The present study aims to establish this hypothesis using numerical
simulations based on the Vasquez-Cook-McKinley (VCM) constitutive model
(Vasquez et al., 2007) for the wormlike micellar solution. However, it should
be mentioned here that the problem considered in this study is not the exact
representation of the prior experimental settings wherein the sphere is
allowed to sediment in the tube due to its own weight (Chen & Rothstein, 2004;
Wu & Mohammadigoushki, 2018; Zhang & Muller, 2018; Jayaraman & Belmonte,
2003). The sphere may rotate or undergo lateral motion during the
sedimentation, or it may not even reach a terminal velocity
(Mohammadigoushki & Muller, 2016). Therefore, in actual experiments, the flow
may become three-dimensional and non-axisymmetric. To realize the
corresponding experimental conditions accurately, one has to solve numerically
the full governing field equations, namely, continuity, momentum and micellar
constitutive equations in a three dimensional computational domain along with
an equation of the sphere motion. In the present simulations, we consider a
problem wherein the sphere is translating steadily along the axis of a tube;
this corresponds to the stage of the falling sphere experiments at which the
sphere has reached a terminal velocity. Although this is not always the case
in actual experiments, this simplified problem allows us to show that the
unsteady motion downstream of the sphere is, indeed, caused by the breakage
of micelles. Therefore, this will further
establish the hypothesis for the unsteady motion of the sphere falling in
wormlike micellar solutions, as observed in prior experiments (Chen &
Rothstein, 2004; Wu & Mohammadigoushki, 2018; Zhang & Muller, 2018; Jayaraman
& Belmonte, 2003).
To prove the aforementioned hypothesis, as stated above, the present study
plans to use the two-species VCM constitutive model for characterizing the
rheological behaviour of wormlike micellar solutions. This model considers the
micelles as elastic segments composed of Hookean springs, which all together
form an elastic network. The breakage and reformation dynamics were
incorporated in this model based on Cate’s original reversible breaking theory
for wormlike micelles (Cates, 1987). For different viscometric flows, a very
good agreement has been found between the predictions obtained with the VCM
model and the corresponding experimental results (Pipe et al., 2010; H.
Mohammadigoushki & Cook, 2019) whereas for a non-viscometric complex flow, a
good qualitative correspondence has been seen in recent studies (Sasmal, 2020;
Khan & Sasmal, 2020; Kalb et al., 2017, 2018). Therefore, this VCM model’s
capability to predict the flow behaviour of wormlike micellar solution in
various flow fields is well established. We also use the single-species
Giesekus constitutive equation in our analysis to show the importance of
breakage and reformation of micelles for the onset of this unsteady motion.
## 2 Problem formulation and governing equations
The problem considered herein is the study of the flow characteristics of a
sphere of diameter $d$, translating steadily along the axis of a cylindrical
tube of diameter $D$ filled with an incompressible wormlike micellar solution
in the creeping flow regime, as schematically shown in figure 1(a).
Figure 1: (a) Schematic of the present problem with both Cartesian and
spherical coordinates (b) Different mesh densities used in the present study
with a zoomed view near the sphere surface (c) Implication of the wedge
boundary condition to approximate two-dimensional and axisymmetric condition
of the present problem in OpenFOAM.
The present problem is solved in an Eulerian reference frame wherein the
coordinate system is centered on and traveling with the sphere. In this
coordinate system, the velocity vector is assumed to be zero on the sphere
surface. Furthermore, at the inlet and tube walls, the dimensionless fluid
axial velocity is set to be unity, and the radial velocity is set to be zero
(discussed in detail in the subsequent section) as schematically shown in
figure 1(a). Furthermore, the flow is assumed to be two-dimensional and
axisymmetric in nature.
Two values of the blockage ratio (the ratio of the sphere diameter to that of
the tube diameter, i.e., $d/D$), namely, 0.33 and 0.1, are considered in this
study, whereas the upstream $(L_{u})$ and downstream $(L_{d})$ lengths of the
tube are chosen as 65$d$. These values are sufficiently high so that the end
effects are negligible. This was further confirmed by performing a systematic
domain independence study.
### 2.1 Flow equations
Under the circumstances mentioned above, the flow field will be governed by
the following equations written in their dimensionless forms as follows:
Equation of continuity
$\bm{\nabla}\cdot\bm{U}=0$ (1)
Cauchy momentum equation
$El^{-1}\frac{D\bm{U}}{Dt}=-\nabla P+\nabla\cdot\bm{\tau}$ (2)
In the above equations, $\bm{U}$, $t$ and $\bm{\tau}$ are the velocity vector, time
and total extra stress tensor, respectively, whereas $El$ is the elasticity
number defined at the end of this section. For an inertialess flow, the left-hand
side of equation 2 is essentially zero. The total extra stress tensor,
$\bm{\tau}$, for a wormlike micellar solution is given as:
$\bm{\tau}=\bm{\tau_{w}}+\bm{\tau_{s}}$ (3)
where $\bm{\tau_{w}}$ is the non-Newtonian contribution from the wormlike
micelles whereas $\bm{\tau_{s}}$ is the contribution from that of the
Newtonian solvent which is equal to $\beta\dot{\bm{\gamma}}$. Here the
parameter $\beta$ is the ratio of the solvent viscosity to that of the zero-
shear rate viscosity of the wormlike micellar solution and
$\dot{\bm{\gamma}}=\nabla\bm{U}+\nabla\bm{U}^{T}$ is the strain-rate tensor.
For the two-species VCM model, the total extra stress tensor is given by
$\bm{\tau}=\bm{\tau}_{w}^{VCM}+\bm{\tau_{s}}=(\bm{A}+2\bm{B})-\left(n_{A}+n_{B}\right)\bm{I}+\beta_{VCM}\dot{\bm{\gamma}}$
(4)
Here $n_{A}$ and $\bm{A}$ are the number density and conformation tensor of
the long worm A respectively, whereas $n_{B}$ and $\bm{B}$ are those of the
short worm B in the two-species model. The equations for the temporal and
spatial evolution of the number density and conformation tensor of the worms are written in the
following subsection. For the single-species Giesekus model, this is given by
$\bm{\tau}=\bm{\tau}_{w}^{G}+\bm{\tau_{s}}=(\bm{A}-\bm{I})+\beta_{G}\dot{\bm{\gamma}}$
(5)
Note that here all the lengths, velocity, time and conformation tensors are
non-dimensionalized using $d$, $d/\lambda_{eff}$, $\lambda_{eff}$, and
$G_{0}^{-1}$ respectively, where
$\lambda_{eff}=\frac{\lambda_{A}}{1+c_{Aeq}^{{}^{\prime}}\lambda_{A}}$ is the
effective relaxation time in the two-species VCM model, $G_{0}$ is the elastic
modulus, $\lambda_{A}$ and $c_{Aeq}^{{}^{\prime}}$ are the dimensional
relaxation time and equilibrium breakage rate of the long chain A
respectively. In case of the single species model, $\lambda_{eff}$ is replaced
by the Maxwell relaxation time $\lambda$ during the non-dimensionalization.
The elasticity number is defined as $El=\frac{Wi_{S}}{Re}$, where
$Wi_{S}=\frac{\lambda_{eff}U}{d}$ is the shear Weissenberg number and
$Re=\frac{dU\rho}{\eta_{0}}$ is the Reynolds number.
### 2.2 Two-species VCM constitutive equations
The VCM constitutive equations provide the species conservation equations for
the long $(n_{A})$ and short worms $(n_{B})$ along with the equations for the
evolution of the conformation tensors of the long $(\bm{A})$ and short worms
$(\bm{B})$. According to this model, the equations for the variation of
$n_{A}$, $n_{B}$, $\bm{A}$ and $\bm{B}$ are given in their non-dimensional
forms as follows (Vasquez et al., 2007):
$\mu\frac{Dn_{A}}{Dt}-2\delta_{A}\nabla^{2}n_{A}=\frac{1}{2}c_{B}n_{B}^{2}-c_{A}n_{A}$
(6)
$\mu\frac{Dn_{B}}{Dt}-2\delta_{B}\nabla^{2}n_{B}=-c_{B}n_{B}^{2}+2c_{A}n_{A}$
(7)
$\mu\bm{A}_{(1)}+\bm{A}-n_{A}\bm{I}-\delta_{A}\nabla^{2}\bm{A}=c_{B}n_{B}\bm{B}-c_{A}\bm{A}$
(8)
$\epsilon\mu\bm{B}_{(1)}+\bm{B}-\frac{n_{B}}{2}\bm{I}-\epsilon\delta_{B}\nabla^{2}\bm{B}=-2\epsilon c_{B}n_{B}\bm{B}+2\epsilon c_{A}\bm{A}$ (9)
Here the subscript $()_{(1)}$ denotes the upper-convected derivative which is
given as $\frac{\partial()}{\partial
t}+\bm{U}\cdot\nabla()-\left((\nabla\bm{U})^{T}\cdot()+()\cdot\nabla\bm{U}\right)$.
The non-dimensional parameters $\mu$, $\epsilon$ and $\delta_{A,B}$ are given
as $\frac{\lambda_{A}}{\lambda_{eff}}$, $\frac{\lambda_{B}}{\lambda_{A}}$ and
$\frac{\lambda_{A}D_{A,B}}{d^{2}}$ respectively where $\lambda_{B}$ is the
relaxation time of the short chain $B$ and $D_{A,B}$ are the dimensional
diffusivities of the long and short species A and B respectively. Furthermore,
according to the VCM model, the non-dimensional breakage rate $(c_{A})$ of the
long chain A into two equally sized small chains B depends on the local state
of the stress field, and it is given by the expression as
$c_{A}=c_{Aeq}+\mu\frac{\xi}{3}\left(\dot{\bm{\gamma}}:\frac{\bm{A}}{n_{A}}\right)$
whereas the reforming rate of the long chain A from the two short chains B is
assumed to be constant which is given by the equilibrium reforming rate, i.e.,
$c_{B}=c_{Beq}$. Here the non-linear parameter $\xi$ is the scission energy
required to break a long micelle chain into two shorter chains. The
significance of this parameter is that as its value increases, the amount of
stress needed to break the chain increases.
### 2.3 Single-species Giesekus constitutive equation
In the single-species constitutive equation, the number density of the
wormlike micelles remains constant due to the absence of breakage and
reformation, and hence one doesn’t need to solve any species conservation
equation, as solved in the two-species VCM model. However, one has to solve
the equation for the evolution of the polymer conformation tensor (which is
related to the stresses, as mentioned above) as follows
$\bm{A}_{(1)}+\bm{A}-\bm{I}=-\alpha\left(\bm{A}-\bm{I}\right)\cdot\left(\bm{A}-\bm{I}\right)$
(10)
The dimensionless parameter $\alpha$ is known as the Giesekus mobility factor,
and for $0<\alpha<1$, the above equation is known as the Giesekus constitutive
equation. This equation is derived based on the kinetic theory of closely
packed polymer chains, and the mobility factor $\alpha$ is introduced in this
model in order to take into account the anisotropic hydrodynamic drag on the
polymer molecules (Giesekus, 1982).
## 3 Numerical details
All the governing equations, namely, mass, momentum, Giesekus, and VCM
constitutive equations, have been solved using the finite volume method based
open-source computational fluid dynamics code OpenFOAM (Weller et al., 1998).
In particular, the recently developed rheoFoam solver available in rheoTool
(Pimenta & Alves, 2016) has been used in the present study. A detailed
discussion of the present numerical set up and its validation has been
presented in our recent studies (Sasmal, 2020; Khan & Sasmal, 2020), and hence
it is not repeated here. The following boundary conditions were employed in
order to solve the present problem. On the sphere surface, the standard no-
slip boundary condition for the velocity, i.e., $U_{x}=U_{y}=0$, is imposed
whereas a no-flux boundary condition is assumed for both the stress and
micellar number density, i.e.,
$\bm{n}\cdot\nabla\bm{A}=\bm{n}\cdot\nabla\bm{B}=0$ and $\bm{n}\cdot\nabla
n_{A}=\bm{n}\cdot\nabla n_{B}=0$. It should be mentioned here that micelles
may undergo a slip flow at the sphere surface, particularly if the sphere
surface is roughened in nature (Mohammadigoushki & Muller, 2018). However, in
the present study, the sphere is assumed to be solid with a smooth surface,
and hence the application of the no-slip boundary condition is justified at
this stage. On the tube wall, $U_{x}=U$ and $U_{y}=0$, and again no-flux
boundary conditions for the stress and micellar number density are imposed. At
the tube outlet, a Neumann type of boundary condition is applied for all
variables except for the pressure for which a zero value is assigned here. A
uniform velocity of $U_{x}=U$, a zero gradient for the pressure, and a fixed
value for the micellar number density are employed at the tube inlet.
Furthermore, the whole computational domain was sub-divided into 8 blocks in
order to mesh it, as shown in figure 1(b). Three different hexahedral
block-structured meshes, namely, M1, M2, and M3, with different numbers of cells on
the sphere surface $(N_{s})$ as well as in the whole computational domain
$(N_{t})$, were created for each blockage ratio. A schematic of three
different mesh densities is shown in figure 1 for $BR=0.33$. In creating any
mesh density, the cells were further compressed towards the sphere surface in
order to capture the steep gradients of velocity, stress, and micellar
concentration. After performing the standard mesh independent study, the mesh
M2 (with $N_{s}=240$ and $N_{t}=74200$ for $BR=0.33$ and $N_{s}=240$ and
$N_{t}=78600$ for $BR=0.1$) was found to be adequate to capture the flow
physics for the whole range of conditions encompassed here for both the
blockage ratios. Similarly, a time step size of $\Delta t=0.000055$ was found
to be suitable to carry out the present study. Finally, the two-dimensional
and axisymmetric problem is realized in OpenFOAM by applying the standard
wedge boundary condition (with wedge angle less than $5^{\circ}$) on the front and
back surfaces of the computational domain, as schematically shown in figure
1(c). The computational domain is kept one cell thick in the $\theta$
direction, and the axis of the wedge lies on the x-coordinate, as per the
requirement for applying the wedge boundary condition (OpenFOAM, 2020). Such
simplification does not compromise the accuracy of the results as long as the
flow is two-dimensional and axisymmetric, and also drastically reduces the
computational cost compared to a full three-dimensional simulation.
## 4 Results and discussion
The VCM model parameters chosen in the present study are as $\mu=5.7$,
$\epsilon=4.5\times 10^{-4}$, $\beta_{VCM}=6.8\times 10^{-5}$, $\xi=0.7$,
$n_{B}^{0}=1.13$, $\delta_{A}=\delta_{B}=\delta=10^{-3}$. These values are
obtained by fitting the experimental results on small amplitude oscillatory
shear (SAOS) and step strain experiments for a mixture of CPCl/NaSal added to
water (Pipe et al., 2010; Zhou et al., 2014). The rheological characteristics
of the WLM solution with these parameter values in homogeneous shear and
uniaxial extensional flows are shown in figure 2. One can clearly see the two
typical properties of a WLM solution, namely, the shear-thinning in shear
flows and the extensional hardening and subsequent thinning in extensional
flows in this figure. Additionally, the present WLM solution also shows the
shear-banding phenomenon. The corresponding parameters for the single-species
Giesekus model are chosen as $\beta_{G}=4.98\times 10^{-3}$ and $\alpha=0.2$
and 0.8. For the single-species model, at $\alpha=0.8$, the solution shows the
shear banding and extensional thinning properties.
Figure 2: Variations of the non-dimensional shear stress with the shear rate
(a) and non-dimensional first normal stress difference with the extension rate
(b) in homogeneous shear and extensional flows respectively. Here the inset
figures show the corresponding variations in the shear and extensional
viscosities.
Simulations were carried out for the shear Weissenberg number $(Wi_{S})$ of up
to 2 for both the two-species VCM and single-species Giesekus models in the
creeping flow regime.
Figure 3: Representative streamlines and velocity magnitude plots for the
Giesekus model (a) and for the VCM model at three different times, namely,
$t=50.7$ (b), $t=50.8$ (c), and $t=50.9$ (d) at $Wi_{S}=2.0$. Temporal variation
of the stream-wise velocity component $(U_{X})$ for both Giesekus and VCM
models (e). Power spectrum plot of the velocity fluctuations obtained with the
VCM model at $Wi_{S}=1.0$ (f) and $Wi_{S}=2.0$ (g).
Up to a shear Weissenberg number of 0.6 (not shown here), the streamlines
are attached to the sphere surface and follow smooth, ordered paths
without crossing each other for both the single-species Giesekus and two-
species VCM models. Hence there is perfect fore-and-aft symmetry in
the streamline patterns, and the flow remains steady up to this value of
the Weissenberg number.
However, as the Weissenberg number gradually starts to increase, clear
differences have been observed in the flow patterns obtained with the Giesekus
and VCM models. For instance, at $Wi_{\text{S}}=2.0$ (sub figure 3(a)), the
streamlines are still attached to the sphere surface for the single-species
Giesekus model, thereby suggesting no boundary layer separation happens for
this model. Furthermore, the flow remains in the steady-state at this value of
the Weissenberg number. This is confirmed by plotting the temporal variation
of the stream-wise velocity at a probe location downstream of the sphere
$(X=1.0,Y=0)$, sub figure 3(e). One can see that the velocity reaches a steady
value with time. On the other hand, at the same Weissenberg number, for the
two-species VCM model, the boundary layer separates, and as a
result, a small recirculation region is seen to form downstream of the sphere,
sub figure 3(b). As time progresses further, the wake detaches from the
sphere surface and shrinks (sub figure 3(c)), and ultimately it
disappears, as can be seen in sub figure 3(d). This formation and
disappearance of the wake repeats with time. It should be mentioned
here that a weak recirculation region was seen beside the sphere, but not
downstream of the sphere for the falling sphere problem (Chen & Rothstein,
2004). All these observations suggest that the flow becomes unsteady at this
Weissenberg number. This is further confirmed by plotting the stream-wise
velocity in sub figure 3(e) at the same probe location as that obtained for
the Giesekus model, and one can clearly see how the velocity fluctuates with
time. In fact, the unsteadiness in the flow field appears at a much lower
Weissenberg number of about $Wi_{S}=0.6$ for the VCM model. Furthermore, a
region of very high velocity magnitude is seen to appear at about one sphere
diameter away downstream of the sphere at $t=50.7$ (sub figure 3(b)). This
indicates the presence of a “negative wake” downstream of the sphere. As time
gradually increases, the magnitude of this region progressively decreases, and
it is further shifted towards the downstream of the sphere, sub figure 3(b).
Finally, it is vanished at a time $t=50.9$, sub figure 3(c). On further
increasing the time, it appears again, and these processes of appearance and
disappearance of the negative wake downstream of the sphere repeat with time.
This was also observed in the experiments for the falling sphere problem by
Chen and Rothstein (Chen & Rothstein, 2004).
To further analyze the nature of the unsteadiness in the flow field, the power spectrum of these velocity fluctuations is plotted in sub figures 3(f) and (g) at shear Weissenberg numbers of 1.0 and 2.0, respectively, for the VCM model. At $Wi_{S}=1.0$, the velocity fluctuations are characterized by a single large, narrow peak in the frequency spectrum, suggesting that the flow field is periodic. On the other hand, at $Wi_{S}=2.0$, the frequency spectrum of the velocity fluctuations is characterized by a broad band with significant contributions from higher-order frequencies, sub figure 3(g). This suggests that the flow becomes quasi-periodic at this value of the Weissenberg number. Therefore, as the shear Weissenberg number gradually increases, the flow transitions from steady to periodic and then from periodic to quasi-periodic for the two-species VCM model. This transition in the flow regime with the shear Weissenberg number is entirely in line with that observed experimentally for the falling sphere problem by Zhang and Muller (Zhang & Muller, 2018). Their experiments further revealed an irregular flow pattern after the quasi-periodic regime on further increasing the Weissenberg number. However, due to the high Weissenberg number numerical stability problem, we were unable to run simulations at Weissenberg numbers beyond 2, and hence this irregular flow pattern was not observed within the range of conditions encompassed in this study.
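The power spectra in sub figures 3(f) and (g) are standard FFT diagnostics. A minimal sketch of how such a spectrum can be computed from a probe time series is given below; this is our illustration only, and the signal here is a synthetic stand-in, not data from this study:

```python
import numpy as np

dt = 0.01                                   # sampling interval (placeholder value)
t = np.arange(0.0, 100.0, dt)
# Stand-in for the probed stream-wise velocity U_X(t): a single tone would give
# one narrow peak; adding an incommensurate tone mimics quasi-periodicity.
u_probe = (1.0 + 0.10 * np.sin(2 * np.pi * 0.5 * t)
               + 0.05 * np.sin(2 * np.pi * 0.5 * np.sqrt(2.0) * t))

u = u_probe - u_probe.mean()                # keep only the fluctuations
freq = np.fft.rfftfreq(u.size, d=dt)        # one-sided frequency axis
power = np.abs(np.fft.rfft(u))**2           # power spectrum, as in figures 3(f,g)

dominant = freq[np.argmax(power[1:]) + 1]   # skip the zero-frequency bin
print(f"dominant frequency ~ {dominant:.3f}")
```

A single dominant bin indicates a periodic state, while comparable power spread over several incommensurate peaks indicates quasi-periodicity.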
Next, we explore the reason behind this unsteady motion, which is observed for the two-species VCM model but not for the single-species Giesekus model over the range of conditions considered in this study. The explanation goes as follows: at high values of the Weissenberg number, for instance at $Wi_{S}=2.0$ and $t=50.7$, the long micelles tend to break into smaller ones downstream of the sphere due to the high flow strength in this region; see sub figures 4(a)-(c), where surface plots of the long micelle number density are presented at the same three times as those used for the streamline and velocity magnitude plots in figure 3. To present this more quantitatively, the variation of the long micelle number density along the downstream axis is plotted in sub figure 4(d). It can be seen that the number density of long micelles shows a minimum at about $X=0.5d$. The corresponding variation of the stream-wise velocity component along the downstream axis of the sphere is depicted in sub figure 4(e). Due to the breakage of the long micelles, the extensional load previously carried by them cannot be sustained by the short micelles. This results in the formation of a small recirculation region downstream of the sphere. Due to the formation of this recirculation region, the velocity first gradually decreases, attains a minimum, and then gradually starts to increase. It then shows a peak and ultimately reaches the velocity far downstream of the sphere. This stage of the unsteady motion was termed the acceleration stage in the case of the falling sphere problem (Chen & Rothstein, 2004; Mohammadigoushki & Muller, 2016). During this stage, a negative wake forms, as shown in sub figure 3(b). The presence of a velocity overshoot in sub figure 4(e) further confirms the existence of this negative wake downstream of the sphere. The region of velocity overshoot lies just beside the region where the concentration of long micelles is minimum downstream of the sphere. As time progresses further, new long micelles enter the downstream region, and the extensional stresses again start to develop. The velocity downstream of the sphere gradually starts to decrease, and the position at which the velocity changes its sign moves further downstream, as can be seen in sub figure 4(e) at $t=50.8$. Ultimately, at $t=50.9$, the velocity gradually reaches the far-downstream velocity without showing any minimum or peak in its profile. This means that at this time no recirculation region and no negative wake are present in the flow field (which is also evident in the streamline and velocity magnitude plots shown in figure 3), and under these conditions the fluid behaves like a highly elastic Boger fluid. This was called the deceleration stage of the unsteady motion for the falling sphere problem (Chen & Rothstein, 2004; Mohammadigoushki & Muller, 2016). Due to the deceleration in the flow field at later times, the breakage of long micelles downstream of the sphere also decreases, as can be seen in sub figure 4(d). On further increasing the time, the acceleration stage appears again, and it repeats with time. Therefore, this study provides evidence that the acceleration and deceleration motion past a steadily translating sphere in wormlike micellar solutions is caused solely by the breakage of long micelles downstream of the sphere. This is further confirmed by the fact that the single-species Giesekus model shows a steady flow field under otherwise identical conditions. The explanation presented herein was also proposed by Chen and Rothstein (Chen & Rothstein, 2004) in their experimental investigation of the unsteady motion in the falling sphere problem in WLM solutions. The present study further confirms their hypothesis about the onset of this instability using the two-species VCM model.
Figure 4: Surface plot of the long micelles number density at $Wi_{S}=2.0$ and
at three different times, namely, 50.7 (a), 50.8 (b) and 50.9 (c). Variation
of the number density of long micelles (d) and stream-wise velocity component
along the downstream axis of the sphere (e).
Some studies of the falling sphere problem revealed that the onset of the sphere’s unsteady motion is directly associated with the strong extensional flow downstream of the sphere, which ultimately leads to the breakage of long micelles (Mohammadigoushki & Muller, 2016; Zhang & Muller, 2018). Therefore, we also calculate the extension rate along the downstream axis of the steadily translating sphere, defined as $\dot{\epsilon}_{XX}=\frac{\partial U_{X}}{\partial X}$. This is presented in sub figure 5(a), again at three different times, namely, $t=50.7$, 50.8 and 50.9, and at $Wi_{S}=2.0$. One can see that there is a large temporal variation in the extension rate downstream of the sphere up to a distance of around $X=1d$ from the rear stagnation point, beyond which it becomes almost zero. At $t=50.7$, the variation in the strain rate is higher than at later times; at this time, the maximum breakage of long micelles occurs downstream of the sphere, as shown in sub figure 4(a). The temporal variation of the maximum strain rate $(\dot{\epsilon}_{M})$ along the downstream axis of the sphere is shown in sub figure 5(b). It can be clearly seen that the maximum strain rate varies quasi-periodically, with a large variation in its value. This is again at least qualitatively in line with that observed experimentally for the falling sphere problem (Mohammadigoushki & Muller, 2016; Zhang & Muller, 2018).
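A minimal sketch (ours, not from the study) of extracting such an extension-rate profile from a sampled axis velocity is shown below; the profile is a synthetic stand-in chosen only to resemble the overshoot of sub figure 4(e):

```python
import numpy as np

d = 1.0                                      # sphere diameter (placeholder)
X = np.linspace(0.0, 3.0 * d, 301)           # distance from the rear stagnation point
# Stand-in axis profile U_X(X): recovery towards the far-field value with an
# overshoot, qualitatively like sub figure 4(e) during the acceleration stage.
U_X = np.tanh(2.0 * X) + 0.15 * np.exp(-((X - 1.0 * d) / 0.3) ** 2)

eps_xx = np.gradient(U_X, X)                 # extension rate = dU_X/dX
i_max = np.argmax(eps_xx)
print(f"max extension rate {eps_xx[i_max]:.3f} at X = {X[i_max]:.2f}d")
```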
Figure 5: (a) Variation of the extension rate along the downstream axis of the sphere at three different times and at $Wi_{\text{S}}=2.0$. (b) Temporal variation of the maximum extension rate downstream of the sphere at $Wi_{\text{S}}=2.0$. (c) Variation of the extensional Weissenberg number $(Wi_{\text{Ext}})$ versus the shear Weissenberg number $(Wi_{\text{S}})$.
Like the experimental investigations of the falling sphere problem, we also define an extensional Weissenberg number based on the time-averaged value of this maximum strain rate as $Wi_{\text{Ext}}=\lambda_{eff}\dot{\epsilon}_{M}$. The variation of this extensional Weissenberg number with the corresponding shear Weissenberg number is shown in sub figure 5(c). From this figure, it is seen that the extensional Weissenberg number increases with the shear Weissenberg number, and that the transition from steady to unsteady periodic flow is marked by a sharp increase in its value, which is again in line with the corresponding experimental observations for the falling sphere problem (Mohammadigoushki & Muller, 2016; Zhang & Muller, 2018).
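Assuming the maximum strain rate downstream of the sphere has been recorded in time (cf. sub figure 5(b)), the extensional Weissenberg number defined above is just $\lambda_{eff}$ times the time average of that series. A minimal sketch with a synthetic stand-in series (placeholder values, ours only):

```python
import numpy as np

lam_eff = 1.0                                 # effective relaxation time (placeholder)
t = np.arange(0.0, 100.0, 0.1)
# Stand-in for the quasi-periodic maximum strain rate of sub figure 5(b).
eps_max_t = (1.5 + 0.5 * np.sin(2 * np.pi * 0.5 * t)
                 + 0.2 * np.sin(2 * np.pi * 0.5 * np.sqrt(2.0) * t))

Wi_ext = lam_eff * eps_max_t.mean()           # Wi_Ext = lambda_eff * <eps_dot_M>_t
print(f"Wi_Ext = {Wi_ext:.3f}")
```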
Finally, the effect of the sphere-to-tube diameter ratio on the onset and development of this unsteady motion past the sphere is discussed. Simulations were carried out at another value, $\frac{d}{D}=0.1$, in order to compare with the flow characteristics discussed above at $\frac{d}{D}=0.33$ under otherwise identical conditions. Figure 6(a) shows the temporal variation of the stream-wise velocity at the two sphere-to-tube diameter ratios at a probe location $(X=1,Y=0)$ downstream of the sphere and at a shear Weissenberg number of $Wi_{S}=2.0$. One can see that the velocity field again shows a similar quasi-periodic nature at $\frac{d}{D}=0.1$ to that seen at $\frac{d}{D}=0.33$. However, the magnitude of the velocity during the acceleration stage decreases, whereas it increases during the deceleration stage. Moreover, the magnitude of the velocity fluctuations slightly decreases with decreasing sphere-to-tube diameter ratio. This is clearly evident in the power spectrum plot (sub figure 6(b)), wherein the amplitude of the dominant peak in the frequency spectrum is slightly higher at $\frac{d}{D}=0.33$ than at $\frac{d}{D}=0.1$. At $\frac{d}{D}=0.1$, a similar transition in the velocity field is seen as that observed at $\frac{d}{D}=0.33$, i.e., the flow transitions from steady to unsteady periodic to unsteady quasi-periodic upon increasing the Weissenberg number. However, the onset of the unsteady motion is slightly delayed as the sphere-to-tube diameter ratio decreases. For instance, at $\frac{d}{D}=0.33$, the unsteady
motion starts at $Wi_{S}=0.6$, whereas it starts at around $Wi_{S}=1.0$ for the sphere-to-tube diameter ratio of 0.1.
Figure 6: Effect of the sphere-to-tube diameter ratio or the blockage ratio (BR) on the temporal variation of the stream-wise velocity (a), the power spectrum plot (b) and the temporal variation of the maximum extension rate (c) downstream of the sphere at $Wi_{S}=2.0$. Effect of the blockage ratio on the variation of the extensional Weissenberg number with the shear Weissenberg number (d).
The reason behind this can be explained as
follows: as the sphere-to-tube diameter ratio decreases, the extensional flow strength downstream of the sphere also decreases because of the weaker confinement. This can be seen in sub figure 6(c), wherein the temporal variation of the maximum extension rate downstream of the sphere is smaller at $\frac{d}{D}=0.1$ than at $\frac{d}{D}=0.33$. As a result, the time-averaged extensional Weissenberg number is also lower for the former than for the latter under otherwise identical conditions, sub figure 6(d). This lowering of the extensional flow strength downstream of the sphere tends to reduce the breakage of micelles in this region, which in turn delays the onset of the unsteady motion. This further confirms the hypothesis that the unsteady motion past a sphere translating steadily in micellar solutions is caused by the breakage of stretched micelles downstream of the sphere.
## 5 Conclusions
This study presents an extensive numerical investigation of the flow characteristics past a sphere translating steadily along the axis of a cylindrical tube filled with wormlike micellar solutions. To do so, the present study uses the two-species VCM (Vasquez-Cook-McKinley) and single-species Giesekus constitutive models to represent the rheological behaviour of wormlike micellar solutions. Over the range of conditions encompassed in this study, a transition of the flow field downstream of the sphere from a steady to an unsteady periodic and then to an unsteady quasi-periodic regime is seen as the shear Weissenberg number gradually increases. A similar transition in the velocity field was also observed in experiments on the sedimentation of a sphere in wormlike micellar solutions. The onset of this unsteady motion is marked by a steep increase in the value of the extensional Weissenberg number, defined downstream of the sphere based on the maximum extension rate, once again in accordance with the experiments on the falling sphere problem. Due to this increase in the extensional flow strength downstream of the sphere, breakage of long micelles occurs, which in turn causes the unsteady motion in the flow field downstream of the sphere. This is further confirmed by the fact that the single-species Giesekus model for the wormlike micellar solution predicts a steady velocity field under otherwise identical conditions. This explanation is in line with that proposed by earlier experimental investigations in the literature on the sedimentation of a sphere in wormlike micellar solutions. Furthermore, it is seen that the onset of this unsteady motion is delayed as the sphere-to-tube diameter ratio decreases, owing to the decrease in the extensional flow strength downstream of the sphere.
Although very good qualitative agreement is found between the present numerical predictions for the steadily translating sphere and the experimental findings on the sedimentation of a sphere, it should be mentioned here that the present simulation setup does not mimic the exact experimental settings of the falling sphere problem. In the experiments, the sphere may rotate, undergo lateral motion, or even fail to reach a terminal velocity. Therefore, to realize the experimental settings of the falling sphere problem, the governing equations, namely, the continuity, momentum and micellar constitutive equations, need to be solved in a fully three-dimensional numerical setting along with an equation for the sphere motion. In contrast, in the present study the sphere is assumed to translate steadily along the axis of the tube, and the problem is solved in a coordinate system that is centered on and travelling with the sphere. This corresponds to the situation in the falling sphere experiments in which the sphere has reached its terminal velocity without rotation or lateral motion. Although this is hardly the case in actual experiments, with this simplified problem we have, for the first time, provided evidence that the unsteady motion past a sphere is caused by the breakage of long micelles, resulting from an increase in the extensional flow strength downstream of it. We believe that the analysis presented in this study will further support the hypothesis proposed earlier for the unsteady motion of a falling sphere in micellar solutions. In a future study, we plan to carry out fully three-dimensional numerical simulations with the exact settings followed in the experiments on the sedimentation of a sphere in micellar solutions. This will give us the opportunity to conduct a more accurate and quantitative comparison with the experiments carried out for the falling sphere problem.
## 6 Acknowledgements
The author would like to thank IIT Ropar for providing funding through the ISIRD research grant (Establishment1/2018/IITRPR/921) to carry out this work. The author would also like to thank Mr. Anant Chauhan for his help with the meshing of the geometry in OpenFOAM.
## References
* Arigo & McKinley (1998) Arigo, M. T. & McKinley, G. H. 1998 An experimental investigation of negative wakes behind spheres settling in a shear-thinning viscoelastic fluid. Rheo. Acta 37, 307–327.
* Arigo et al. (1995) Arigo, M. T., Rajagopalan, D., Shapley, N. & McKinley, G. H. 1995 The sedimentation of a sphere through an elastic fluid. part 1. steady motion. J. Non-Newt. Fluid Mech. 60 (2-3), 225–257.
* Bisgaard (1983) Bisgaard, C. 1983 Velocity fields around spheres and bubbles investigated by laser-Doppler anemometry. J. Non-Newt. Fluid Mech. 12 (3), 283–302.
* Bush (1994) Bush, M. B. 1994 On the stagnation flow behind a sphere in a shear-thinning viscoelastic liquid. J. Non-Newt. Fluid Mech. 55 (3), 229–247.
* Cates (1987) Cates, M. E. 1987 Reptation of living polymers: dynamics of entangled polymers in the presence of reversible chain-scission reactions. Macromolecules 20, 2289–2296.
* Chen & Rothstein (2004) Chen, S. & Rothstein, J. P. 2004 Flow of a wormlike micelle solution past a falling sphere. J. Non-Newt. Fluid Mech. 116, 205–234.
* Chhabra (2006) Chhabra, R. P. 2006 Bubbles, Drops, and Particles in Non-Newtonian Fluids. CRC press.
* Giesekus (1982) Giesekus, H. 1982 A simple constitutive equation for polymer fluids based on the concept of deformation-dependent tensorial mobility. J. Non-Newt. Fluid Mech. 11, 69–109.
* Mohammadigoushki et al. (2019) Mohammadigoushki, H., Dalili, A., Zhou, L. & Cook, P. 2019 Transient evolution of flow profiles in a shear banding wormlike micellar solution: Experimental results and a comparison with the VCM model. Soft Mat. 15, 5483–5494.
* Harlen (2002) Harlen, O. G. 2002 The negative wake behind a sphere sedimenting through a viscoelastic fluid. J. Non-Newt. Fluid Mech. 108, 411–430.
* Jayaraman & Belmonte (2003) Jayaraman, A. & Belmonte, A. 2003 Oscillations of a solid sphere falling through a wormlike micellar fluid. Phy. Rev. E 67, 065301.
* Kalb et al. (2017) Kalb, A., L. A. Villasmil, U. & Cromer, M. 2017 Role of chain scission in cross-slot flow of wormlike micellar solutions. Phy. Rev. Fluids 2, 071301.
* Kalb et al. (2018) Kalb, A., L. A. Villasmil, U. & Cromer, M. 2018 Elastic instability and secondary flow in cross-slot flow of wormlike micellar solutions. J. Non-Newt. Fluid Mech. 262, 79–91.
* Khan & Sasmal (2020) Khan, M. B. & Sasmal, C. 2020 Effect of chain scission on flow characteristics of wormlike micellar solutions past a confined microfluidic cylinder: A numerical analysis. Soft Mat. 16, 5261–5272.
* Kumar et al. (2012) Kumar, N., Majumdar, S., Sood, A., Govindarajan, R., Ramaswamy, S. & Sood, A. K. 2012 Oscillatory settling in wormlike-micelle solutions: Bursts and a long time scale. Soft Mat. 8, 4310–4313.
* McKinley (2002) McKinley, G. H. 2002 Steady and transient motion of spherical particles in viscoelastic liquids. Transport Processes in Bubble, Drops, and Particles pp. 338–375.
* Michaelides (2006) Michaelides, E. 2006 Particles, Bubbles & Drops: Their Motion, Heat and Mass transfer. World Scientific.
* Mohammadigoushki & Muller (2016) Mohammadigoushki, H. & Muller, S. J. 2016 Sedimentation of a sphere in wormlike micellar fluids. J. Rheol. 60, 587–601.
* Mohammadigoushki & Muller (2018) Mohammadigoushki, H. & Muller, S. J. 2018 Creeping flow of a wormlike micelle solution past a falling sphere: Role of boundary conditions. J. Non-Newt. Fluid Mech. 257, 44–49.
* Moroi (1992) Moroi, Y. 1992 Micelles: Theoretical and Applied Aspects. Springer Science & Business Media.
* OpenFOAM (2020) OpenFOAM 2020 Openfoam user guide. https://www.openfoam.com/documentation/user-guide/.
* Pimenta & Alves (2016) Pimenta, F. & Alves, M.A. 2016 rheoTool. https://github.com/fppimenta/rheoTool.
* Pipe et al. (2010) Pipe, C. J., Kim, N. J., Vasquez, P. A., Cook, L. P. & McKinley, G. H. 2010 Wormlike micellar solutions: II. Comparison between experimental data and scission model predictions. J. Rheol. 54, 881–913.
* Rothstein (2003) Rothstein, J. P. 2003 Transient extensional rheology of wormlike micelle solutions. J. Rheol. 47, 1227–1247.
* Rothstein (2008) Rothstein, J. P. 2008 Strong flows of viscoelastic wormlike micelle solutions. Rheol. Rev. 2008, 1–46.
* Sasmal (2020) Sasmal, C. 2020 Flow of wormlike micellar solutions through a long micropore with step expansion and contraction. Phys. Fluids 32, 013103.
* Vasquez et al. (2007) Vasquez, P. A., McKinley, G. H. & Cook, L. P. 2007 A network scission model for wormlike micellar solutions: I. Model formulation and viscometric flow predictions. J. Non-Newt. Fluid Mech. 144, 122–139.
* Wang et al. (2020) Wang, Z., Wang, S., Xu, L., Dou, Y. & Su, X. 2020 Extremely slow settling behavior of particles in dilute wormlike micellar fluid with broad spectrum of relaxation times. J. Dis. Sci. Tech. 41, 639–647.
* Weller et al. (1998) Weller, H. G., Tabor, G., Jasak, H. & Fureby, C. 1998 A tensorial approach to computational continuum mechanics using object-oriented techniques. Comp. Phys. 12, 620–631.
* Wu & Mohammadigoushki (2018) Wu, S. & Mohammadigoushki, H. 2018 Sphere sedimentation in wormlike micelles: Effect of micellar relaxation spectrum and gradients in micellar extensions. J. Rheol. 62, 1061–1069.
* Zhang & Muller (2018) Zhang, Y. & Muller, S. J. 2018 Unsteady sedimentation of a sphere in wormlike micellar fluids. Phys. Rev. Fluids 3, 043301.
* Zhou et al. (2014) Zhou, L., McKinley, G. H. & Cook, L. P. 2014 Wormlike micellar solutions: III. VCM model predictions in steady and transient shearing flows. J. Non-Newt. Fluid Mech. 211, 70–83.
# Black hole quasinormal modes and isospectrality in Deser-Woodard nonlocal
gravity
Che-Yu Chen <EMAIL_ADDRESS>, Institute of Physics, Academia Sinica, Taipei, Taiwan 11529
Sohyun Park <EMAIL_ADDRESS>, Theoretical Physics Department, CERN, CH-1211 Genève 23, Switzerland
###### Abstract
We investigate the gravitational perturbations of the Schwarzschild black hole
in the nonlocal gravity model recently proposed by Deser and Woodard (DW-II).
The analysis is performed in the localized version in which the nonlocal
corrections are represented by some auxiliary scalar fields. We find that the
nonlocal corrections do not affect the axial gravitational perturbations, and
hence the axial modes are completely identical to those in General Relativity
(GR). However, the polar modes differ from their GR counterparts when the scalar fields are excited at the background level. In such a case, the
polar modes are sourced by an additional massless scalar mode and, as a
result, the isospectrality between the axial and the polar modes breaks down.
We also perform a similar analysis for the predecessor of this model (DW-I)
and arrive at the same conclusion for it.
Preprint: CERN-TH-2021-011
## I Introduction
The direct detection of gravitational waves emitted from binary black hole
mergers Abbott:2016blz ; LIGOScientific:2018mvr ; Abbott:2020niy ushers in an
exciting era of gravitational wave astronomy. More surprisingly, the current
observing sensitivity already allows the detection of roughly one candidate event per week on average Abbott:2020niy , not to mention the amount of data that will become available with further upgrades and improvements of the detectors. The gravitational waves emitted from these events carry useful information about spacetime in strong gravitational fields. Therefore, such detections enable us to probe the physics of extreme environments such as black holes, and to test whether our current understanding of black holes and gravity based on general relativity (GR) is correct.
Essentially, the gravitational wave signals associated with a binary black
hole merger consist of three stages. The first is the inspiral stage during
which the binary black holes rotate around each other. At this stage, the
gravitational waves can be modeled well by the post-Newtonian approaches. The
second is the merger stage, in which the gravitational fields are so strong
that purely numerical analysis is inevitable. The third stage, which is called
the ringdown, corresponds to the post-merger phase of the event. At this stage, the binary black holes have coalesced, and the distortion of the final black hole is gradually settling. During this ringdown stage the black hole loses energy by emitting gravitational waves whose signal is a superposition of decaying oscillations, i.e., the quasinormal modes (QNMs) Kokkotas:1999bd ; Berti:2009kk ; Konoplya:2011qq . The ringing black hole and the associated QNMs can be described well by black hole perturbation theory.
An important realization is that the QNM spectrum depends only on the parameters quantifying the black hole, such as its mass, spin, and whatever other parameters characterize the black hole in a particular gravitational theory. For instance, the QNM spectrum of a Kerr black hole in GR is
completely determined by its mass and spin. The spectrum does not depend on
what initially drives the ringings. This means that by examining the QNM
frequencies one can test the no-hair theorem in the underlying theory of
gravity. That is, the QNM spectrum is an observable to distinguish between GR
and other gravitational theories. In this respect, the investigation of black
hole perturbations and the associated QNM spectra in the context of theories
beyond GR has been an intensive field of research, see for instance Refs.
Kobayashi:2012kh ; Kobayashi:2014wsa ; Blazquez-Salcedo:2016enn ;
Bhattacharyya:2017tyc ; Glampedakis:2017dvb ; Chen:2018mkf ; Chen:2018vuw ;
Chen:2019iuo ; Moulin:2019ekf ; Chen:2020evr .
In addition to the no-hair theorem, there are other fundamental properties
which, by examining black hole QNMs, can be used to discriminate GR from other
theories. For example, it is well-known that in the eikonal approximation, the
QNM frequencies for stationary, axisymmetric, and asymptotically flat black
holes in GR are tightly related to the properties of the unstable photon
orbits around the black hole Cardoso:2008bp ; Dolan:2010wr ; Yang:2012he . The
eikonal QNMs are also related to the apparent size of black hole shadows
Cuadros-Melgar:2020kqn . Therefore, apart from extracting the mode frequencies
and comparing the spectra with those in GR, one can test gravitational
theories by examining these properties. Any breaking of them would be a smoking gun for physics beyond Einstein’s gravity Konoplya:2017wot ;
Churilova:2019jqx ; Glampedakis:2019dqh ; Chen:2019dip .
Another important feature of QNMs in GR is the isospectrality. For the
Schwarzschild black hole in GR, the gravitational perturbations of the axial
(parity-odd) Regge:1957td and the polar (parity-even) modes Zerilli:1970se
share an identical spectrum, even though their master equations are completely
different from each other Chandrabook . This interesting property also holds
for the Reissner-Nordström, Kerr, as well as Kerr-Newman (to linear-order in
the spin) black holes Pani:2013ija ; Pani:2013wsa . It was discovered also by
Chandrasekhar Chandrabook , in particular for Schwarzschild black holes, that
the master equations governing these two modes are related by a specific
transformation. It was not realized until very recently Glampedakis:2017rar
that the transformation introduced in Ref. Chandrabook is actually a
particular subclass of the Darboux transformation Darboux:1882 . The
identification of this relation and its generalization Glampedakis:2017rar ;
Yurov:2018ynn helps to investigate the isospectrality of more complex
systems, such as the Kerr black hole Teukolsky:1973ha ; Sasaki:1981sx . In
fact, the isospectrality of black hole QNMs is a longstanding issue
Moulin:2019bfh . For example, the physical origin of this property is unclear.
In addition, although the isospectrality holds for most of the astrophysically
relevant black holes in GR, not all black hole solutions in GR share this
property Ferrari:2000ep ; Cardoso:2001bb ; Berti:2003ud ; Blazquez-
Salcedo:2019nwd and the range of its viability is still an open question. In
many theories beyond GR, such as $f(R)$ gravity Bhattacharyya:2017tyc ;
Bhattacharyya:2018qbe ; Datta:2019npq , dynamical Chern-Simons gravity
Bhattacharyya:2018hsj , and scalar-tensor theories Kobayashi:2012kh ;
Kobayashi:2014wsa , the isospectrality is generically broken. Furthermore, it
has been shown recently using a parameterization approach that the
isospectrality is actually very fragile Cardoso:2019mqo . Therefore, any
observational indication of the isospectrality breaking would strongly hint
toward new physics beyond GR Shankaranarayanan:2019yjx .
In this work, we will focus on the possibility of testing gravitational
theories by examining the isospectrality of black hole QNMs. In particular, we
will investigate the axial and the polar gravitational perturbations of the
Schwarzschild black hole in the Deser-Woodard nonlocal gravity models I and II
(DW-I Deser:2007jk and DW-II Deser:2019lmm ). This class of nonlocal gravity
models was proposed by Deser and Woodard Deser:2007jk ; Deser:2019lmm with
the motivation that quantum loop effects of infrared gravitons abundant in the
early universe are essentially nonlocal and the effects might persist in the
late universe Woodard:2014iga ; Woodard:2018gfj . Deser and Woodard
particularly aimed at a phenomenological model that can generate the current
phase of cosmic acceleration without invoking dark energy.[1] Other classes of
nonlocal gravity models with the same goal of reproducing the late-time
acceleration without dark energy have been proposed and studied extensively
Barvinsky:2011hd ; Maggiore:2013mea ; Maggiore:2014sia ; Vardanyan:2017kal ;
Amendola:2017qge ; Tian:2018bmn . Their first model, DW-I Deser:2007jk , was very successful in various aspects. It reproduced a background expansion identical to the $\Lambda$CDM expansion (without $\Lambda$) in GR Deffayet:2009ca . In addition, once the background was fixed, the large scale structure formation predicted by the model was comparable to that in GR Park:2012cp ; Dodelson:2013sma ; Park:2016jym ; Nersisyan:2017mgj ; Park:2017zls ; Amendola:2019fhc . Even though the localized version Nojiri:2007uq of this model possesses a scalar ghost Zhang:2016ykx ; Nojiri:2010pw ; Zhang:2011uv , the constraints of the model preclude explosive growth of the ghost Park:2019btx . However, a recent analysis showed that DW-I violates the solar system constraints Belgacem:2018wtb . Deser and Woodard then proposed an improved model (DW-II) Deser:2019lmm that might pass the solar system constraints. Therefore, in this paper we focus on the DW-II gravity and briefly remark on the DW-I gravity at the end.
For the Schwarzschild QNMs in the DW-II model, we first find that the axial modes are not affected by the nonlocal corrections. Furthermore, because of the special role played by the nonlocal terms, namely that whether the auxiliary scalar fields are excited depends on the boundary conditions, we find that the master equation of the polar modes is sourced by an additional scalar mode corresponding to the auxiliary scalar fields. However, this scalar mode is not excited as long as the auxiliary scalar fields are quiescent at the unperturbed level. If the scalar mode is not excited, the QNMs of the axial and the polar modes reduce to those in GR and the isospectrality is preserved; otherwise the isospectrality is broken. As a remark, since the DW-I and DW-II models are similar in their mathematical structures, the same conclusion can be drawn for the DW-I model.
The rest of this paper is organized as follows. In Section II, we briefly
review the DW-II nonlocal gravity including its action, equations of motion,
and localization. In Section III, we discuss the Schwarzschild black hole in
the DW-II nonlocal gravity. Then, we perturb the Schwarzschild black hole in
Section IV, and derive the master equations of the axial and polar
perturbations. The results will be compared with those in GR. In Section V, we
perform a similar analysis in the framework of DW-I nonlocal gravity. Finally,
Section VI presents our conclusions and discussions.
## II Deser-Woodard-II nonlocal gravity
The action of the DW-II nonlocal gravity is Deser:2019lmm
$\mathcal{S}_{\textrm{DW-II}}=\frac{1}{16\pi}\int
d^{4}x\sqrt{-g}R\left[1+f\left(\zeta\right)\right]+\mathcal{S}_{m}\,,$ (1)
where $\mathcal{S}_{m}$ is the matter action. The nonlocal distortion function
$f$ is an arbitrary function of the inverse scalar d’Alembertian acting on
$(\partial\phi)^{2}\equiv g^{\mu\nu}\partial_{\mu}\phi\partial_{\nu}\phi$,
where $\phi$ is the inverse scalar d’Alembertian acting on the Ricci scalar
$R$. More precisely, the scalar fields $\zeta$ and $\phi$ can be written
explicitly as follows:
$\displaystyle\zeta$
$\displaystyle\equiv\Box^{-1}\left(\partial\phi\right)^{2}\,,$ (2)
$\displaystyle\phi$ $\displaystyle\equiv\Box^{-1}R\,.$ (3)
The distortion function $f$ is thus a function of $\zeta$. The presence of the
distortion function adds nonlocal corrections to the Einstein-Hilbert action.
Note that we have assumed $c=G=1$ in this paper, where $c$ is the speed of
light and $G$ is the gravitational constant.
As has been shown in Ref. Deser:2019lmm , the DW-II model can be localized by
introducing two additional Lagrange multipliers $\Xi$ and $\psi$ as
follows222The notations for the auxiliary scalar fields used in Ref.
Deser:2019lmm ($X,Y,U,V$) and ours ($\phi,\zeta,\Xi,\psi$) are different.
They can be directly converted as follows: $X\rightarrow\phi$,
$Y\rightarrow\zeta$, $U\rightarrow-\Xi$, and $V\rightarrow-\psi$.
$\mathcal{S}_{\textrm{DW-II}}=\frac{1}{16\pi}\int d^{4}x\sqrt{-g}\left[R\left(1+f-\Xi\right)-\partial_{\mu}\Xi\partial^{\mu}\phi-\psi\left(\partial\phi\right)^{2}-\partial_{\mu}\psi\partial^{\mu}\zeta\right]+\mathcal{S}_{m}\,.$ (4)
The equations of motion are derived by varying the action (4) with respect to
$\Xi$, $\psi$, $\phi$, $\zeta$, and the metric $g_{\mu\nu}$. Varying the
action with respect to the Lagrange multipliers $\Xi$ and $\psi$ gives
$\Box\phi=R\,,$ (5)
$\Box\zeta=\left(\partial\phi\right)^{2}\,,$ (6)
which correspond to the definition of the scalar fields $\phi$ and $\zeta$
given by Eqs. (3) and (2), respectively. Then, the variation with respect to
the scalar fields $\phi$ and $\zeta$ gives
$\Box\Xi=-2\nabla_{\mu}\left(\psi\nabla^{\mu}\phi\right)\,,$ (7)
$\Box\psi=-R\frac{df}{d\zeta}\,.$ (8)
Finally, the equation of motion derived by varying the action with respect to
the metric reads
$\left(G_{\mu\nu}+g_{\mu\nu}\Box-\nabla_{\mu}\nabla_{\nu}\right)\left(1+f-\Xi\right)+\frac{1}{2}g_{\mu\nu}\left[\partial_{\alpha}\Xi\partial^{\alpha}\phi+\partial_{\alpha}\psi\partial^{\alpha}\zeta+\psi\left(\partial\phi\right)^{2}\right]-\partial_{(\mu}\Xi\partial_{\nu)}\phi-\partial_{(\mu}\psi\partial_{\nu)}\zeta-\psi\partial_{\mu}\phi\partial_{\nu}\phi=8\pi T_{\mu\nu}\,,$ (9)
where $G_{\mu\nu}$ and $T_{\mu\nu}$ stand for the Einstein tensor and the
energy-momentum tensor, respectively.
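As a quick consistency check (our remark, not part of the original derivation), switching off the nonlocal sector recovers GR: setting $f=\Xi=\psi=\zeta=0$ in Eq. (9) makes $1+f-\Xi=1$, so the $\Box$ and $\nabla_{\mu}\nabla_{\nu}$ terms act on a constant and drop out, all scalar bilinears vanish, and one is left with
$G_{\mu\nu}=8\pi T_{\mu\nu}\,,$
i.e., the Einstein equations.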
## III Schwarzschild spacetime in DW-II gravity
Before investigating black hole perturbations in the DW-II nonlocal gravity,
it is necessary to specify the background spacetime that is going to be
perturbed. In this paper, we are going to consider the perturbations of the
Schwarzschild black hole in the DW-II nonlocal gravity. In this section, we
will use the requirement that the Schwarzschild metric is an exact vacuum
solution ($T_{\mu\nu}=0$) of the DW-II nonlocal gravity to constrain the
expressions of the auxiliary scalar fields. By doing so, we thus exhibit that
the Schwarzschild solution does satisfy the field equations of the DW-II
nonlocal gravity.
We consider the Schwarzschild metric
$ds^{2}=-e^{2\bar{\nu}}dt^{2}+e^{-2\bar{\nu}}dr^{2}+r^{2}d\Omega_{2}^{2}\,,$
(10)
where $e^{2\bar{\nu}}=1-2m/r$ with $m$ being the mass of the black hole. From
now on, quantities with a bar represent the quantities at the background
level, in order to distinguish them from their linear perturbations. Since the
Ricci scalar $R$ is identically zero, the scalar field equations (5) and (8)
can be written as
$\bar{\phi}_{,rr}+2\left(\frac{1}{r}+\bar{\nu}_{,r}\right)\bar{\phi}_{,r}=0\,,$ (11)
$\bar{\psi}_{,rr}+2\left(\frac{1}{r}+\bar{\nu}_{,r}\right)\bar{\psi}_{,r}=0\,.$ (12)
Note that we have assumed that the scalar fields are functions of $r$ only at
the background level. The above equations (11) and (12) can be solved to get
$\bar{\phi}_{,r}=C_{\phi}\frac{e^{-2\bar{\nu}}}{r^{2}}\,,\qquad\bar{\psi}_{,r}=C_{\psi}\frac{e^{-2\bar{\nu}}}{r^{2}}\,,$
(13)
where $C_{\phi}$ and $C_{\psi}$ are integration constants. Furthermore, using
Eqs. (6) and (7), the other two scalar fields can be solved as
$\bar{\zeta}_{,r}=\left(C_{\phi}\bar{\phi}+C_{\zeta}\right)\frac{e^{-2\bar{\nu}}}{r^{2}}\,,$ (14)
$\bar{\Xi}_{,r}=\left(-2C_{\phi}\bar{\psi}+C_{\Xi}\right)\frac{e^{-2\bar{\nu}}}{r^{2}}\,,$ (15)
where $C_{\zeta}$ and $C_{\Xi}$ are integration constants as well.
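For concreteness, Eq. (13) can be integrated in closed form; the following explicit expression is our own integration, with the additive integration constants fixed by demanding that the fields vanish at spatial infinity:
$\bar{\phi}(r)=\frac{C_{\phi}}{2m}\ln\left(1-\frac{2m}{r}\right)\,,\qquad\bar{\psi}(r)=\frac{C_{\psi}}{2m}\ln\left(1-\frac{2m}{r}\right)\,,$
so a nonzero $C_{\phi}$ or $C_{\psi}$ yields a profile that decays like $-C_{\phi}/r$ or $-C_{\psi}/r$ at large $r$ but diverges logarithmically at the horizon $r=2m$.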
To proceed, we define $K\equiv f-\Xi$ and $\bar{K}\equiv\bar{f}-\bar{\Xi}$.
Using the gravitational equation (9), we find that
$\bar{K}=\textrm{constant}\,,$ (16)
$-C_{\phi}^{2}\bar{\psi}+C_{\phi}C_{\psi}\bar{\phi}+C_{\phi}C_{\Xi}+C_{\psi}C_{\zeta}=0\,,$ (17)
in which the latter can be treated as a constraint to be satisfied by the
integration constants. Since $\bar{\phi}$ and $\bar{\psi}$ satisfy Eq. (13),
the constraint Eq. (17) can be rewritten as
$C_{\phi}C_{\Xi}+C_{\psi}C_{\zeta}=C_{\infty}\,,$ (18)
where $C_{\infty}$ is a constant, whose value is determined by the asymptotic
values of $\bar{\phi}$ and $\bar{\psi}$.[3] In fact, the asymptotic values of
the scalar fields may be set to zero in order to satisfy the asymptotic
flatness condition. Note that at the weak-field regime of the DW-I model, the
values of the scalar fields, as well as that of the distortion function
$\bar{f}$, are supposed to approach zero asymptotically Chu:2018mld . We
expect this conclusion is also true for the DW-II model.
Essentially, if any of the scalar fields ($\phi$, $\psi$, $\Xi$, $\zeta$) is
excited at the background level, it could be a varying function of $r$.
Whether the field is excited or not is determined by the value of the
associated integration constants ($C_{\phi}$, $C_{\psi}$, $C_{\Xi}$,
$C_{\zeta}$). We will show explicitly later how the excited fields are related
to the black hole perturbations within the DW-II nonlocal gravity.
## IV Black hole perturbations
In this section, we will investigate the gravitational perturbations of the
Schwarzschild black hole in the DW-II nonlocal gravity. Without loss of
generality, the perturbed spacetime can be described by a non-stationary and
axisymmetric metric in which the symmetrical axis is turned in such a way that
no $\varphi$ (the azimuthal angle) dependence appears in the metric functions.
In general, the metric can be written as follows Chandrabook :
$\displaystyle ds^{2}=$
$\displaystyle-e^{2\nu}\left(dx^{0}\right)^{2}+e^{2\mu_{1}}\left(dx^{1}-\sigma
dx^{0}-q_{2}dx^{2}-q_{3}dx^{3}\right)^{2}$
$\displaystyle+e^{2\mu_{2}}\left(dx^{2}\right)^{2}+e^{2\mu_{3}}\left(dx^{3}\right)^{2}\,.$
(19)
Up to their first-order perturbations, the metric functions
$\displaystyle e^{2\nu}$
$\displaystyle=e^{2\bar{\nu}}\left(1+2\delta\nu\right)\,,$ $\displaystyle
e^{2\mu_{1}}$
$\displaystyle=r^{2}\sin^{2}\theta\left(1+2\delta\mu_{1}\right)\,,$
$\displaystyle e^{2\mu_{2}}$
$\displaystyle=e^{-2\bar{\nu}}\left(1+2\delta\mu_{2}\right)\,,$ $\displaystyle
e^{2\mu_{3}}$ $\displaystyle=r^{2}\left(1+2\delta\mu_{3}\right)\,,$ (20)
are functions of time $t$ ($t=x^{0}$), radial coordinate $r$ ($r=x^{2}$), and
polar angle $\theta$ ($\theta=x^{3}$). As we have mentioned, the perturbed
metric is axisymmetric, therefore, we will assume that the metric functions
are independent of the azimuthal angle $\varphi$ ($\varphi=x^{1}$). The
quantities with a delta denote the corresponding first-order terms of the
fields, and contribute to the polar perturbations in general. On the other
hand, the functions $\sigma$, $q_{2}$, and $q_{3}$ are also functions of $t$,
$r$, and $\theta$, and they correspond to the axial perturbations of the
metric. We will discuss these two types (the polar and axial types) of
perturbations in more detail later. In addition, the fields that induce
nonlocal corrections should be perturbed, hence we have $K=\bar{K}+\delta K$,
$\phi=\bar{\phi}+\delta\phi$, $\psi=\bar{\psi}+\delta\psi$,
$\Xi=\bar{\Xi}+\delta\Xi$, and $\zeta=\bar{\zeta}+\delta\zeta$. We would like
to emphasize again that the background spacetime that we are considering is
the Schwarzschild metric where $e^{2\bar{\nu}}=1-2m/r$, and the scalar fields
at the background level are functions of $r$ only. The field $\bar{K}$ is a
constant, as shown in Eq. (16).
### IV.1 Tetrad formalism
To study the perturbations of the spacetime metric (19), we use the tetrad
formalism in which one defines a basis associated with the metric (19)
Chandrabook :
$\displaystyle e^{\mu}_{(0)}$ $\displaystyle=\left(e^{-\nu},\quad\sigma
e^{-\nu},\quad 0,\quad 0\right)\,,$ $\displaystyle e^{\mu}_{(1)}$
$\displaystyle=\left(0,\quad e^{-\mu_{1}},\quad 0,\quad 0\right)\,,$
$\displaystyle e^{\mu}_{(2)}$ $\displaystyle=\left(0,\quad
q_{2}e^{-\mu_{2}},\quad e^{-\mu_{2}},\quad 0\right)\,,$ $\displaystyle
e^{\mu}_{(3)}$ $\displaystyle=\left(0,\quad q_{3}e^{-\mu_{3}},\quad 0,\quad
e^{-\mu_{3}}\right)\,,$ (21)
and
$\displaystyle e_{\mu}^{(0)}$ $\displaystyle=\left(e^{\nu},\quad 0,\quad
0,\quad 0\right)\,,$ $\displaystyle e_{\mu}^{(1)}$
$\displaystyle=\left(-\sigma e^{\mu_{1}},\quad e^{\mu_{1}},\quad-
q_{2}e^{\mu_{1}},\quad-q_{3}e^{\mu_{1}}\right)\,,$ $\displaystyle
e_{\mu}^{(2)}$ $\displaystyle=\left(0,\quad 0,\quad e^{\mu_{2}},\quad
0\right)\,,$ $\displaystyle e_{\mu}^{(3)}$ $\displaystyle=\left(0,\quad
0,\quad 0,\quad e^{\mu_{3}}\right)\,,$ (22)
where the tetrad indices are enclosed in parentheses to distinguish them from
the tensor indices. The tetrad basis then satisfies
$\displaystyle e_{\mu}^{(a)}e^{\mu}_{(b)}$
$\displaystyle=\delta^{(a)}_{(b)}\,,\quad
e_{\mu}^{(a)}e^{\nu}_{(a)}=\delta^{\nu}_{\mu}\,,$ $\displaystyle
e_{\mu}^{(a)}$ $\displaystyle=g_{\mu\nu}\eta^{(a)(b)}e^{\nu}_{(b)}\,,$
$\displaystyle g_{\mu\nu}$
$\displaystyle=\eta_{(a)(b)}e_{\mu}^{(a)}e_{\nu}^{(b)}\equiv
e_{(a)\mu}e_{\nu}^{(a)}\,.$ (23)
Essentially, in the tetrad formalism we project the relevant quantities
defined on the coordinate basis of $g_{\mu\nu}$ onto a chosen basis of
$\eta_{(a)(b)}$ by constructing the tetrad basis correspondingly. Usually,
$\eta_{(a)(b)}$ is assumed to be the Minkowskian matrix
$\eta_{(a)(b)}=\eta^{(a)(b)}=\textrm{diag}\left(-1,1,1,1\right)\,.$ (24)
In this regard, any vector or tensor field can be projected onto the tetrad
frame, in which the field can be expressed through its tetrad components:
$\displaystyle A_{\mu}$ $\displaystyle=e_{\mu}^{(a)}A_{(a)}\,,\quad
A_{(a)}=e_{(a)}^{\mu}A_{\mu}\,,$ $\displaystyle B_{\mu\nu}$
$\displaystyle=e_{\mu}^{(a)}e_{\nu}^{(b)}B_{(a)(b)}\,,\quad
B_{(a)(b)}=e_{(a)}^{\mu}e_{(b)}^{\nu}B_{\mu\nu}\,.$ (25)
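As a concrete illustration (ours), at the background level, where $\sigma=q_{2}=q_{3}=0$, the co-tetrad (22) reduces to
$e_{\mu}^{(0)}=\left(e^{\bar{\nu}},0,0,0\right)\,,\quad e_{\mu}^{(1)}=\left(0,r\sin\theta,0,0\right)\,,\quad e_{\mu}^{(2)}=\left(0,0,e^{-\bar{\nu}},0\right)\,,\quad e_{\mu}^{(3)}=\left(0,0,0,r\right)\,,$
and $g_{\mu\nu}=\eta_{(a)(b)}e_{\mu}^{(a)}e_{\nu}^{(b)}$ indeed reproduces the Schwarzschild metric (10) with the coordinate ordering $(x^{0},x^{1},x^{2},x^{3})=(t,\varphi,r,\theta)$.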
One should notice that in the tetrad formalism, the covariant (partial)
derivative in the original coordinate frame is replaced by the intrinsic
(directional) derivative in the tetrad frame. For instance, the derivatives of
an arbitrary rank-two object $H_{\mu\nu}$ in the two frames are related as
follows Chandrabook
$\displaystyle\,H_{(a)(b)|(c)}\equiv
e^{\lambda}_{(c)}H_{\mu\nu;\lambda}e_{(a)}^{\mu}e_{(b)}^{\nu}$
$\displaystyle=$ $\displaystyle\,H_{(a)(b),(c)}$
$\displaystyle-\eta^{(m)(n)}\left(\gamma_{(n)(a)(c)}H_{(m)(b)}+\gamma_{(n)(b)(c)}H_{(a)(m)}\right)\,,$
(26)
where a vertical rule and a comma denote the intrinsic and directional
derivative with respect to the tetrad indices, respectively. A semicolon
denotes a covariant derivative with respect to the tensor indices.
Furthermore, the Ricci rotation coefficients are defined by
$\gamma_{(c)(a)(b)}\equiv e_{(b)}^{\mu}e_{(a)\nu;\mu}e_{(c)}^{\nu}\,,$ (27)
and their components corresponding to the metric (19) are given in Ref.
Chandrabook .
In the tetrad frame, the gravitational equation (9) can be written as
$R_{(a)(b)}\left(1+K\right)-e_{(a)}^{\mu}\left(K_{,(b)}\right)_{,\mu}+\gamma_{(c)(b)(a)}K_{,(d)}\eta^{(c)(d)}+\eta_{(a)(b)}\left\{\Box K-\frac{R}{2}\left(1+K\right)+\frac{1}{2}\left[\partial_{\rho}\Xi\partial^{\rho}\phi+\partial_{\rho}\psi\partial^{\rho}\zeta+\psi\left(\partial\phi\right)^{2}\right]\right\}-e^{\rho}_{((a)}e^{\lambda}_{(b))}\left(\partial_{\rho}\Xi\partial_{\lambda}\phi+\partial_{\rho}\psi\partial_{\lambda}\zeta+\psi\partial_{\rho}\phi\partial_{\lambda}\phi\right)=0\,.$ (28)
Note that we have assumed a vacuum spacetime: $T_{\mu\nu}=0$.
### IV.2 Axial perturbations
According to how they react with respect to the parity change, the
gravitational perturbations of a spherically symmetric black hole can be
classified into two parts: the axial perturbations and the polar
perturbations. The master equation governing the axial perturbations can be
derived by perturbing Eq. (28) and considering its ($1,3$) and $(1,2)$
components (or equivalently the ($\varphi,\theta$) and $(\varphi,r)$
components).
Since the perturbed metric is axisymmetric and $\bar{K}$ is a constant, one
can easily verify that the master equation of the axial perturbations
$\Psi_{\textrm{axial}}$ is described by
$R_{(1)(2)}=R_{(1)(3)}=0\,.$ (29)
Therefore, the nonlocal terms do not contribute to the axial perturbations,
and the master equation is the same as the one for the Schwarzschild black
hole in GR:
$\left(\frac{d^{2}}{dr_{*}^{2}}+\omega^{2}\right)\Psi_{\textrm{axial}}=V_{RW}\Psi_{\textrm{axial}}\,,$ (30)
where $r_{*}$ is the tortoise coordinate, defined by $dr_{*}=e^{-2\bar{\nu}}dr$, and $\omega$ is the quasinormal frequency. The effective potential $V_{RW}$, or the so-called Regge-Wheeler
potential, is defined as
${V_{RW}=\frac{e^{2\bar{\nu}}}{r^{2}}\left[l(l+1)-\frac{6m}{r}\right]}\,,\quad
e^{2\bar{\nu}}=1-\frac{2m}{r}\,,$ (31)
where $l$ is the multipole number. The master equation (30) is equivalent to
the Regge-Wheeler equation in GR Regge:1957td , which describes the axial
perturbations of the Schwarzschild black hole. Therefore, we conclude that the
nonlocal terms do not change the axial modes.
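Since Eq. (30) is exactly the Regge-Wheeler equation, its spectrum can be estimated with any standard QNM technique. The sketch below is our illustration only (not part of the original analysis); it applies the lowest-order WKB formula of Schutz and Will, $\omega^{2}\approx V_{0}-i\left(n+\frac{1}{2}\right)\left(-2\,d^{2}V/dr_{*}^{2}\right)^{1/2}$, evaluated at the peak of the potential:

```python
import numpy as np
from scipy.optimize import minimize_scalar

m, l, n = 1.0, 2, 0            # mass (c = G = 1), multipole number, overtone

def f(r):                      # e^{2 nu-bar} = 1 - 2m/r
    return 1.0 - 2.0 * m / r

def V_RW(r):                   # Regge-Wheeler potential, Eq. (31)
    return f(r) * (l * (l + 1) - 6.0 * m / r) / r**2

# The potential peaks near r = 3m; locate the maximum numerically.
r0 = minimize_scalar(lambda r: -V_RW(r), bounds=(2.01 * m, 10.0 * m),
                     method="bounded").x

# Second derivative with respect to the tortoise coordinate r_*.
# Since d/dr_* = f(r) d/dr and dV/dr = 0 at the peak, d2V/dr_*^2 = f^2 d2V/dr^2.
h = 1e-4 * m
d2V_dr2 = (V_RW(r0 + h) - 2.0 * V_RW(r0) + V_RW(r0 - h)) / h**2
d2V_drstar2 = f(r0) ** 2 * d2V_dr2

omega = np.sqrt(V_RW(r0) - 1j * (n + 0.5) * np.sqrt(-2.0 * d2V_drstar2))
print(omega)                   # ~ 0.40 - 0.09i for l = 2, n = 0 (units of 1/m)
```

Already at this crude order, the fundamental $l=2$ mode comes out within roughly $7\%$ of the accurate Schwarzschild value $\omega\simeq 0.3737-0.0890i$; by the argument above, the same numbers apply to the axial sector of the DW-II model.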
### IV.3 Polar perturbations
In GR, the polar mode $\Psi_{\textrm{polar}}$ of the spacetime metric (19) is
essentially a linear combination of the metric perturbations $\delta\nu$,
$\delta\mu_{1}$, $\delta\mu_{2}$, and $\delta\mu_{3}$. In the DW-II nonlocal
gravity, the polar mode may also depend on the perturbations of the nonlocal
terms, such as $\delta K$, $\delta\phi$, $\delta\psi$, $\delta\Xi$, and
$\delta\zeta$. In this subsection, we will use the tetrad formalism to derive
the master equation of the polar perturbations for the Schwarzschild black
hole in the DW-II nonlocal gravity. After obtaining the master equation, one
can see how the nonlocal terms affect the polar modes. Regarding the
derivation of the master equations of the Schwarzschild black hole in GR using
the tetrad formalism, we refer the readers to Ref. Chandrabook .
First, the $(0,2)$ component of the linearized Eq. (28) can be written as
$\displaystyle\left(1+\bar{K}\right)\left[\left(\delta\mu_{1}+\delta\mu_{3}\right)_{,r}+\left(\frac{1}{r}-\bar{\nu}_{,r}\right)\left(\delta\mu_{1}+\delta\mu_{3}\right)-\frac{2}{r}\delta\mu_{2}\right]$
$\displaystyle+\,e^{\bar{\nu}}\left(e^{-\bar{\nu}}\delta K\right)_{,r}$
$\displaystyle+\frac{1}{2}\left(\bar{\phi}_{,r}\delta\Xi+\bar{\Xi}_{,r}\delta\phi+\bar{\psi}_{,r}\delta\zeta+\bar{\zeta}_{,r}\delta\psi+2\bar{\psi}\bar{\phi}_{,r}\delta\phi\right)=0\,.$
(32)
Note that we can drop the derivatives with respect to $x^{0}$ in the Fourier
space.
The $(0,3)$, $(2,3)$, and $(1,1)$ components of the linearized Eq. (28) read
$\displaystyle\left(1+\bar{K}\right)$
$\displaystyle\,\left[\left(\delta\mu_{1}+\delta\mu_{2}\right)_{,\theta}+\left(\delta\mu_{1}-\delta\mu_{3}\right)\cot\theta\right]=-\delta
K_{,\theta}\,,$ (33) $\displaystyle\left(1+\bar{K}\right)\,$
$\displaystyle\Bigg{[}\left(\delta\mu_{1}+\delta\nu\right)_{,r\theta}+\left(\delta\mu_{1}-\delta\mu_{3}\right)_{,r}\cot\theta$
$\displaystyle+\left(\bar{\nu}_{,r}-\frac{1}{r}\right)\delta\nu_{,\theta}-\left(\bar{\nu}_{,r}+\frac{1}{r}\right)\delta\mu_{2,\theta}\Bigg{]}$
$\displaystyle=$ $\displaystyle-\delta K_{,r\theta}+\frac{1}{r}\delta
K_{,\theta}-\frac{1}{2}\big{(}\bar{\phi}_{,r}\delta\Xi_{,\theta}+\bar{\Xi}_{,r}\delta\phi_{,\theta}$
$\displaystyle+\bar{\psi}_{,r}\delta\zeta_{,\theta}+\bar{\zeta}_{,r}\delta\psi_{,\theta}+2\bar{\psi}\bar{\phi}_{,r}\delta\phi_{,\theta}\big{)}\,,$
(34)
and
$\displaystyle
e^{2\bar{\nu}}\,\Bigg{[}\delta\mu_{1,rr}+2\left(\frac{1}{r}+\bar{\nu}_{,r}\right)\delta\mu_{1,r}$
$\displaystyle+\frac{1}{r}\left(\delta\mu_{1}+\delta\nu+\delta\mu_{3}-\delta\mu_{2}\right)_{,r}-\frac{2\delta\mu_{2}}{r}\left(\frac{1}{r}+2\bar{\nu}_{,r}\right)\Bigg{]}$
$\displaystyle+$
$\displaystyle\,\frac{1}{r^{2}}\left[\delta\mu_{1,\theta\theta}+\cot\theta\left(2\delta\mu_{1}+\delta\nu+\delta\mu_{2}-\delta\mu_{3}\right)_{,\theta}+2\delta\mu_{3}\right]$
$\displaystyle-e^{-2\bar{\nu}}\delta\mu_{1,tt}$ $\displaystyle=$
$\displaystyle\,\frac{-1}{1+\bar{K}}\left(\frac{\Box\delta
K}{2}+\frac{e^{2\bar{\nu}}}{r}\delta K_{,r}+\frac{\cot\theta}{r^{2}}\delta
K_{,\theta}\right)\,,$ (35)
respectively. Finally, the $(2,2)$ component of linearized Eq. (28) can be
written as
$\displaystyle e^{2\bar{\nu}}$
$\displaystyle\left[\frac{2}{r}\delta\nu_{,r}+\left(\frac{1}{r}+\bar{\nu}_{,r}\right)\left(\delta\mu_{1}+\delta\mu_{3}\right)_{,r}-2\delta\mu_{2}\left(\frac{1}{r^{2}}+\frac{2\bar{\nu}_{,r}}{r}\right)\right]$
$\displaystyle+\frac{1}{r^{2}}\left[\left(\delta\mu_{1}+\delta\nu\right)_{,\theta\theta}+\left(2\delta\mu_{1}+\delta\nu-\delta\mu_{3}\right)_{,\theta}\cot\theta+2\delta\mu_{3}\right]$
$\displaystyle-e^{-2\bar{\nu}}\left(\delta\mu_{1}+\delta\mu_{3}\right)_{,tt}$
$\displaystyle=$ $\displaystyle\,\frac{-1}{1+\bar{K}}\Bigg{[}\Box\delta
K-e^{\bar{\nu}}\left(\delta
K_{,r}e^{\bar{\nu}}\right)_{,r}-\frac{e^{2\bar{\nu}}}{2}\big{(}\bar{\phi}_{,r}\delta\Xi_{,r}+\delta\phi_{,r}\bar{\Xi}_{,r}$
$\displaystyle+\bar{\psi}_{,r}\delta\zeta_{,r}+\delta\psi_{,r}\bar{\zeta}_{,r}+\delta\psi\bar{\phi}_{,r}^{2}+2\bar{\psi}\bar{\phi}_{,r}\delta\phi_{,r}\big{)}\Bigg{]}\,.$
(36)
To proceed further, we consider the following field decompositions Chandrabook
:
$\delta\nu=N(r)P_{l}e^{i\omega t}\,,\qquad\delta\mu_{2}=L(r)P_{l}e^{i\omega t}\,,$
$\delta\mu_{3}=\left[T(r)P_{l}+V(r)P_{l,\theta\theta}\right]e^{i\omega t}\,,$
$\delta\mu_{1}=\left[T(r)P_{l}+V(r)P_{l,\theta}\cot\theta\right]e^{i\omega t}\,,$
$\delta\mu_{1}+\delta\mu_{3}=\left[2T-l(l+1)V\right]P_{l}e^{i\omega t}\,,$
$\delta K=(1+\bar{K})\delta\tilde{K}(r)P_{l}e^{i\omega t}\,,$ (37)
where $P_{l}$ is the Legendre polynomial. Furthermore, according to Eqs. (13), (14), and (15), we find it convenient to define a new scalar field and make the following decomposition
$U=\frac{\tilde{U}(r)}{r}P_{l}e^{i\omega t}\equiv\frac{C_{\phi}\delta\Xi+C_{\Xi}\delta\phi+C_{\psi}\delta\zeta+\left(C_{\phi}\bar{\phi}+C_{\zeta}\right)\delta\psi}{2\left(1+\bar{K}\right)}\,.$ (38)
Then, Eqs. (32), (33), (34), (36) can be written as
$\displaystyle\left[\frac{d}{dr}+\left(\frac{1}{r}-\bar{\nu}_{,r}\right)\right]$
$\displaystyle\left[2T-l(l+1)V\right]-\frac{2}{r}L$ $\displaystyle=$
$\displaystyle-e^{\bar{\nu}}\left(e^{-\bar{\nu}}\delta\tilde{K}\right)_{,r}-\frac{e^{-2\bar{\nu}}}{r^{3}}\tilde{U}\,,$
(39) $\displaystyle T-V+L=$ $\displaystyle-\delta\tilde{K}\,,$ (40)
$\displaystyle\left(T-V+N\right)_{,r}$
$\displaystyle-\left(\frac{1}{r}-\bar{\nu}_{,r}\right)N-\left(\frac{1}{r}+\bar{\nu}_{,r}\right)L$
$\displaystyle=$
$\displaystyle-\delta\tilde{K}_{,r}+\frac{1}{r}\delta\tilde{K}-\frac{e^{-2\bar{\nu}}}{r^{3}}\tilde{U}\,,$
(41)
and
$\displaystyle
e^{2\bar{\nu}}\Bigg{[}\frac{2}{r}N_{,r}+\left(\frac{1}{r}+\bar{\nu}_{,r}\right)\left(2T-l(l+1)V\right)_{,r}$
$\displaystyle-\frac{2}{r}\left(\frac{1}{r}+2\bar{\nu}_{,r}\right)L\Bigg{]}-\frac{l(l+1)N}{r^{2}}-\frac{(l+2)(l-1)}{r^{2}}T$
$\displaystyle+e^{-2\bar{\nu}}\omega^{2}\left(2T-l(l+1)V\right)$
$\displaystyle=$
$\displaystyle\,-e^{-2\bar{\nu}}\omega^{2}\delta\tilde{K}-\left(\frac{2}{r}+\bar{\nu}_{,r}\right)e^{2\bar{\nu}}\delta\tilde{K}_{,r}$
$\displaystyle+\frac{l(l+1)}{r^{2}}\delta\tilde{K}+\frac{1}{r^{2}}\left({\frac{\tilde{U}}{r}}\right)_{,r}\,,$
(42)
respectively.
Defining $X\equiv nV$ where $n\equiv(l+2)(l-1)/2$, one gets
$2T-l(l+1)V=-2(L+X+\delta\tilde{K})\,,$ (43)
where we have used Eq. (40). Using Eq. (43), one can rewrite Eqs. (39) and
(41) as
$\displaystyle\left(L+X+\frac{\delta\tilde{K}}{2}\right)_{,r}=$
$\displaystyle-\left(\frac{1}{r}-\bar{\nu}_{,r}\right)\left(L+X+\frac{\delta\tilde{K}}{2}\right)$
$\displaystyle-\frac{L}{r}$
$\displaystyle-\frac{1}{2}\left(\frac{\delta\tilde{K}}{r}-\frac{e^{-2\bar{\nu}}}{r^{3}}\tilde{U}\right)\,,$
(44) $\displaystyle N_{,r}-L_{,r}=\left(\frac{1}{r}-\bar{\nu}_{,r}\right)N$
$\displaystyle+\left(\frac{1}{r}+\bar{\nu}_{,r}\right)L$
$\displaystyle+\frac{1}{r}\delta\tilde{K}-\frac{e^{-2\bar{\nu}}}{r^{3}}\tilde{U}\,.$
(45)
Then, subtracting Eqs. (35) from (36), and taking only the terms proportional
to $P_{l,\theta}\cot\theta$, one gets
$\displaystyle V_{,rr}$
$\displaystyle+2\left(\frac{1}{r}+\bar{\nu}_{,r}\right)V_{,r}$
$\displaystyle+\frac{e^{-2\bar{\nu}}}{r^{2}}\left(N+L+\delta\tilde{K}\right)+\omega^{2}e^{-4\bar{\nu}}V=0\,.$
(46)
Furthermore, one can remove all the $\delta\tilde{K}$ terms by making the
following field redefinitions:
$\tilde{L}\equiv L+\frac{\delta\tilde{K}}{2}\,,\qquad\tilde{N}\equiv
N+\frac{\delta\tilde{K}}{2}\,,$ (47)
and the above equations can be rewritten as
$\displaystyle\left(\tilde{L}+X\right)_{,r}=-\left(\frac{1}{r}-\bar{\nu}_{,r}\right)\left(\tilde{L}+X\right)-\frac{\tilde{L}}{r}+\frac{e^{-2\bar{\nu}}}{2r^{3}}\tilde{U}\,,$
(48)
$\displaystyle\tilde{N}_{,r}-\tilde{L}_{,r}=\left(\frac{1}{r}-\bar{\nu}_{,r}\right)\tilde{N}+\left(\frac{1}{r}+\bar{\nu}_{,r}\right)\tilde{L}-\frac{e^{-2\bar{\nu}}}{r^{3}}\tilde{U}\,,$
(49) $\displaystyle
V_{,rr}+2\left(\frac{1}{r}+\bar{\nu}_{,r}\right)V_{,r}+\frac{e^{-2\bar{\nu}}}{r^{2}}\left(\tilde{N}+\tilde{L}\right)+\omega^{2}e^{-4\bar{\nu}}V=0\,.$
(50)
Also, Eq. (42) can be written as
$\displaystyle\frac{2}{r}\tilde{N}_{,r}-2\left(\frac{1}{r}+\bar{\nu}_{,r}\right)\left(\tilde{L}+X\right)_{,r}-\frac{2}{r}\left(\frac{1}{r}+2\bar{\nu}_{,r}\right)\tilde{L}$
$\displaystyle-$
$\displaystyle\,\frac{l(l+1)}{r^{2}}e^{-2\bar{\nu}}\tilde{N}-\frac{2n}{r^{2}}\left(V-\tilde{L}\right)e^{-2\bar{\nu}}-2\omega^{2}e^{-4\bar{\nu}}\left(\tilde{L}+X\right)$
$\displaystyle=$
$\displaystyle\,\left(\frac{e^{-2\bar{\nu}}}{r^{2}}-\frac{1}{r^{2}}-\frac{2\bar{\nu}_{,r}}{r}\right)\delta\tilde{K}+\frac{e^{-2\bar{\nu}}}{r^{2}}\left({\frac{\tilde{U}}{r}}\right)_{,r}$
$\displaystyle=$
$\displaystyle\,\frac{e^{-2\bar{\nu}}}{r^{2}}\left({\frac{\tilde{U}}{r}}\right)_{,r}\,,$
(51)
where we have used $e^{2\bar{\nu}}=1-2m/r$ in the last equality.
Then, using Eqs. (48), (49), and (51), one can obtain
$\displaystyle\tilde{N}_{,r}=$
$\displaystyle\,\alpha\tilde{N}+\beta\tilde{L}+\gamma X$
$\displaystyle+\frac{e^{-2\bar{\nu}}}{2r^{3}}\left[\left(1+r\bar{\nu}_{,r}\right)\tilde{U}+r^{2}\left(\frac{\tilde{U}}{r}\right)_{,r}\right]\,,$
(52) $\displaystyle\tilde{L}_{,r}=$
$\displaystyle\,\left(\alpha-\frac{1}{r}+\bar{\nu}_{,r}\right)\tilde{N}+\left(\beta-\frac{1}{r}-\bar{\nu}_{,r}\right)\tilde{L}+\gamma
X$
$\displaystyle+\frac{e^{-2\bar{\nu}}}{2r^{3}}\left[\left(3+r\bar{\nu}_{,r}\right)\tilde{U}+r^{2}\left(\frac{\tilde{U}}{r}\right)_{,r}\right]\,,$
(53) $\displaystyle X_{,r}=$
$\displaystyle-\left(\alpha-\frac{1}{r}+\bar{\nu}_{,r}\right)\tilde{N}-\left(\beta-2\bar{\nu}_{,r}+\frac{1}{r}\right)\tilde{L}$
$\displaystyle-\left(\gamma+\frac{1}{r}-\bar{\nu}_{,r}\right)X$
$\displaystyle-\frac{e^{-2\bar{\nu}}}{2r^{3}}\left[\left(2+r\bar{\nu}_{,r}\right)\tilde{U}+r^{2}\left(\frac{\tilde{U}}{r}\right)_{,r}\right]$
$\displaystyle=$ $\displaystyle\,\frac{\lambda
e^{-2\bar{\nu}}}{r^{2}}\tilde{N}-\left(\gamma+\frac{1}{r}-\bar{\nu}_{,r}\right)\left(\tilde{L}+X\right)+\alpha\tilde{L}$
$\displaystyle-\frac{e^{-2\bar{\nu}}}{2r^{3}}\left[\left(2+r\bar{\nu}_{,r}\right)\tilde{U}+r^{2}\left(\frac{\tilde{U}}{r}\right)_{,r}\right]\,,$
(54)
where
$\displaystyle\alpha$ $\displaystyle=\frac{l(l+1)}{2r}e^{-2\bar{\nu}}\,,$
$\displaystyle\beta$
$\displaystyle=-\frac{1}{r}+r\left(\bar{\nu}_{,r}\right)^{2}+\bar{\nu}_{,r}-\frac{ne^{-2\bar{\nu}}}{r}+re^{-4\bar{\nu}}\omega^{2}\,,$
$\displaystyle\gamma$
$\displaystyle=-\frac{1}{r}+r\left(\bar{\nu}_{,r}\right)^{2}+\frac{e^{-2\bar{\nu}}}{r}+re^{-4\bar{\nu}}\omega^{2}\,,$
$\displaystyle\lambda$ $\displaystyle=-nr-3m\,.$ (55)
Defining
$\Psi_{\textrm{polar}}\equiv\frac{r}{n}X+\frac{r^{2}}{\lambda}\left(\tilde{L}+X\right)\,,$
(56)
and differentiating it with respect to the tortoise radius $r_{*}$ twice, we
get the master equation
$\displaystyle\left(\frac{d^{2}}{dr_{*}^{2}}+\omega^{2}\right)\Psi_{\textrm{polar}}=$
$\displaystyle\,V_{Z}\Psi_{\textrm{polar}}$ $\displaystyle+\,e^{2\bar{\nu}}$
$\displaystyle\,\left[\frac{3m+2nr}{r^{2}\left(3m+nr\right)^{2}}\right]\tilde{U}\,,$
(57)
where the Zerilli potential $V_{Z}$ is Zerilli:1970se
$V_{Z}=\frac{2e^{2\bar{\nu}}\left[n^{2}(n+1)r^{3}+3mn^{2}r^{2}+9m^{2}nr+9m^{3}\right]}{r^{3}\left(nr+3m\right)^{2}}\,.$
(58)
Therefore, one can see that the polar modes are sourced by the scalar mode
$\tilde{U}$. If the scalar mode is not excited, the master equation reduces to
the Zerilli equation in GR Zerilli:1970se .
From the linearized scalar field equations (5), (6), (7), and (8), one finds
that the scalar mode $U$ satisfies a massless Klein-Gordon equation (the
constraint Eq. (17) has to be used in order to prove Eq. (59)):
$\Box U=0\,,$ (59)
which, using the decomposition given by the first line of Eq. (38), can be
written as
$\left(\frac{d^{2}}{dr_{*}^{2}}+\omega^{2}\right)\tilde{U}=V_{s}\tilde{U}\,,$
(60)
where the effective potential is
$V_{s}=\frac{e^{2\bar{\nu}}}{r^{2}}\left[l(l+1)+\frac{2m}{r}\right]\,.$ (61)
It should be noted that even if the scalar mode $U$ satisfies the massless
Klein-Gordon equation, it is excited only when at least one of the scalar
fields ($\bar{\phi},\bar{\Xi},\bar{\psi},\bar{\zeta}$) is excited already at
the background level. More precisely, as one can see from Eq. (38), the scalar
mode is excited when there exists at least one non-zero integration constant
in the set ($C_{\phi},C_{\Xi},C_{\psi},C_{\zeta}$). If so, the scalar mode
$\tilde{U}$ is dynamical and it would source the polar modes. As a result, the
polar modes will be affected by the scalar mode and the isospectrality breaks
down. On the other hand, if all the scalar fields are quiescent at the
background level ($C_{\phi}=C_{\Xi}=C_{\psi}=C_{\zeta}=0$), then the scalar
mode vanishes and the Schwarzschild black hole in the DW-II nonlocal gravity
is completely indistinguishable from its GR counterpart even at the
perturbative level.
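For orientation, the potentials governing the three master equations are easy to compare numerically. The following is a minimal Python sketch, not part of the derivation above: it assumes geometric units with the black hole mass set to one, an illustrative harmonic index $l=2$, and the standard Zerilli combination $n=(l-1)(l+2)/2$, which is consistent with the form of (58) but is not spelled out in the text.

```python
import numpy as np

m_bh = 1.0                        # black hole mass (geometric units; assumption)
l = 2                             # harmonic index (illustrative)
n = (l - 1) * (l + 2) / 2.0       # standard Zerilli combination (assumption)

r = np.linspace(2.001 * m_bh, 60.0 * m_bh, 4000)
f = 1.0 - 2.0 * m_bh / r          # e^{2 nu_bar} = 1 - 2m/r
rstar = r + 2.0 * m_bh * np.log(r / (2.0 * m_bh) - 1.0)   # tortoise coordinate

# Zerilli potential (58) appearing in the polar master equation (57):
V_Z = (2.0 * f * (n**2 * (n + 1) * r**3 + 3 * m_bh * n**2 * r**2
                  + 9 * m_bh**2 * n * r + 9 * m_bh**3)
       / (r**3 * (n * r + 3 * m_bh)**2))

# Effective potential (61) of the scalar mode:
V_s = f / r**2 * (l * (l + 1) + 2.0 * m_bh / r)

# Regge-Wheeler potential (70), quoted in the next section for the axial modes:
V_RW = f / r**2 * (l * (l + 1) - 6.0 * m_bh / r)

# All three tend to l(l+1)/r^2 at large r and differ only near the horizon:
print(V_Z[-1], V_s[-1], V_RW[-1])
```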
## V Remarks for DW-I nonlocal model
In this section, we briefly show that in the DW-I nonlocal gravity the master
equations of the gravitational perturbations share the same qualitative
properties as those in the DW-II gravity. The resemblance between the two
models results from the structural similarity of their equations of motion. In
short, the conclusions remain true for the DW-I model: the axial gravitational
perturbations are the same as in GR, and the polar gravitational perturbations
are sourced by a massless scalar mode whose excitation depends on whether the
scalar fields are excited at the background level.
The action of the DW-I nonlocal gravity is Deser:2007jk
$\mathcal{S}=\frac{1}{16\pi}\int
d^{4}x\sqrt{-g}R\left[1+f\left(\Box^{-1}R\right)\right]+\mathcal{S}_{m}\,.$
(62)
Following the procedure in Ref. Nojiri:2007uq , one can localize the
gravitational action (62) by introducing an auxiliary scalar field $\phi$, as
in Eq. (3), and a Lagrange multiplier $\xi$, such that the action can be
written as
$\mathcal{S}=\frac{1}{16\pi}\int
d^{4}x\sqrt{-g}\left[R\left(1+f\right)-\partial_{\mu}\xi\partial^{\mu}\phi-\xi
R\right]+\mathcal{S}_{m}\,.$ (63)
The equations of motion are derived by varying the action (63) with respect to
$\xi$, $\phi$, and the metric $g_{\mu\nu}$. Varying the action with respect to
the Lagrange multiplier $\xi$ essentially gives Eq. (5). Then, after varying
the action with respect to the auxiliary scalar field $\phi$, one can obtain
$\Box\xi=-R\frac{df}{d\phi}\,.$ (64)
Finally, the variation of the action (63) with respect to the metric
$g_{\mu\nu}$ leads to the modified Einstein field equation
$G_{\mu\nu}+\Delta G_{\mu\nu}=8\pi T_{\mu\nu}\,,$ (65)
where
$\displaystyle\Delta G_{\mu\nu}=$
$\displaystyle\,\left(G_{\mu\nu}+g_{\mu\nu}\Box-\nabla_{\mu}\nabla_{\nu}\right)\left(f-\xi\right)$
$\displaystyle+\frac{1}{2}g_{\mu\nu}\partial_{\rho}\xi\partial^{\rho}\phi-\partial_{(\mu}\xi\partial_{\nu)}\phi\,.$
(66)
For the non-perturbed Schwarzschild metric with the Ricci scalar being zero,
one gets
$\bar{\phi}_{,r}=C_{\phi}\frac{e^{-2\bar{\nu}}}{r^{2}}\,,\qquad\bar{\xi}_{,r}=C_{\xi}\frac{e^{-2\bar{\nu}}}{r^{2}}\,,$
(67)
where $C_{\xi}$ is an integration constant. Furthermore, at the background
level, the gravitational equation (65) implies that
$\bar{F}\equiv\bar{f}-\bar{\xi}=\textrm{constant}\,,\quad
C_{\phi}C_{\xi}=0\,.$ (68)
Therefore, in the DW-I model, at least one of the scalar fields ($\bar{\phi}$
and $\bar{\xi}$) has to be a constant.
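For orientation, the background relations (67) integrate in closed form. Using $e^{-2\bar{\nu}}=\left(1-2m/r\right)^{-1}$, one finds
$\bar{\phi}(r)=\bar{\phi}_{\infty}+\frac{C_{\phi}}{2m}\ln\left(1-\frac{2m}{r}\right)\,,$
and similarly for $\bar{\xi}$, so a nonvanishing integration constant produces a profile that decays as $-C_{\phi}/r$ at large $r$ but diverges logarithmically at the horizon.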
For the perturbations of the Schwarzschild black hole, it is easy to find that
the master equation of the axial perturbations is again the Regge-Wheeler
equation
$\left(\frac{d^{2}}{dr_{*}^{2}}+\omega^{2}\right)\Psi_{\textrm{axial}}=V_{RW}\Psi_{\textrm{axial}}\,,$
(69)
where the effective potential $V_{RW}$ is
${V_{RW}=\frac{e^{2\bar{\nu}}}{r^{2}}\left[l(l+1)-\frac{6m}{r}\right]}\,,\quad
e^{2\bar{\nu}}=1-\frac{2m}{r}\,.$ (70)
As for the polar perturbations, we find that upon defining the following
scalar mode
$W\equiv\frac{\tilde{W}(r)}{r}P_{l}e^{i\omega
t}\equiv\frac{1}{2\left(1+\bar{F}\right)}\left(C_{\phi}\delta\xi+C_{\xi}\delta\phi\right)\,,$
(71)
where we have defined $F\equiv f-\xi$ and decomposed $F$ as $F=\bar{F}+\delta
F$. The master equation of the polar perturbations in the DW-I gravity can
then be derived following the recipe of the DW-II gravity, with the
replacement $(\bar{K},\delta K,\tilde{U})\rightarrow(\bar{F},\delta F,\tilde{W})$.
Therefore, the master equation of the polar perturbations takes the following
form:
$\displaystyle\left(\frac{d^{2}}{dr_{*}^{2}}+\omega^{2}\right)\Psi_{\textrm{polar}}=$
$\displaystyle\,V_{Z}\Psi_{\textrm{polar}}$ $\displaystyle+\,e^{2\bar{\nu}}$
$\displaystyle\,\left[\frac{3m+2nr}{r^{2}\left(3m+nr\right)^{2}}\right]\tilde{W}\,,$
(72)
where the Zerilli potential $V_{Z}$ is given by
$V_{Z}=\frac{2e^{2\bar{\nu}}\left[n^{2}(n+1)r^{3}+3mn^{2}r^{2}+9m^{2}nr+9m^{3}\right]}{r^{3}\left(nr+3m\right)^{2}}\,.$
(73)
Similar to what we have done in the DW-II nonlocal gravity, by using the
linearized Eqs. (5), (64), and the constraint $C_{\phi}C_{\xi}=0$, it can be
proven that $\Box W=0$. Therefore, the polar modes are sourced by a massless
scalar mode $W$, which is defined by Eq. (71). One can see that whether the
scalar mode is excited or not depends on whether the scalar fields
$(\phi,\xi)$ are excited or not at the background level. More specifically, if
none of them is excited at the background level ($C_{\phi}=C_{\xi}=0$), the
polar gravitational perturbations are completely the same as those in GR. On
the other hand, if one of the integration constants ($C_{\phi},C_{\xi}$) is
not zero, the polar modes would be sourced by the massless scalar mode $W$,
and the isospectrality would break.
## VI Conclusions
The DW-I nonlocal gravity was originally proposed in order to explain the
late-time acceleration of the universe without resorting to any mysterious
dark energy. The recent discovery Belgacem:2018wtb that the DW-I is conflict
with the solar system tests has prompted the authors of Ref. Deser:2019lmm to
improve the model and propose the DW-II nonlocal gravity. These two theories
share an interesting property, which is that the auxiliary scalar fields in
the localized framework are not completely dynamical, but subject to the
initial data. It is thus interesting to see whether these nonlocal corrections
would alter the QNMs or not, as compared with those in GR.
In this paper, we investigated the gravitational perturbations of the
Schwarzschild black hole in the DW-II nonlocal gravity. The gravitational
perturbations of the Schwarzschild black hole in GR are governed by the Regge-
Wheeler equation (axial modes) and the Zerilli equation (polar modes), and the
two corresponding modes share the same QNM spectrum. In fact, the
isospectrality between the axial and the polar modes has been shown to be
very fragile Cardoso:2019mqo , in the sense that it easily breaks when the
underlying theory is slightly changed. For instance, the isospectrality is no
longer satisfied in the presence of a dynamical scalar field, such as in the
$f(R)$ gravity Bhattacharyya:2017tyc ; Datta:2019npq .
Essentially, in this work we showed that the axial gravitational perturbations
in the DW-II gravity are completely identical with those in GR. On the other
hand, the polar modes are sourced by a massless scalar mode, if there is at
least one auxiliary scalar field being excited at the background level.
However, if all the auxiliary scalar fields stay quiescent at the background
level, the scalar mode disappears and the polar modes reduce to those of the
Schwarzschild black hole in GR. Whether the auxiliary fields are excited or
not depends on the values of their corresponding integration constants
($C_{\phi},C_{\psi},C_{\Xi},C_{\zeta}$). We also performed a similar analysis
in the DW-I gravity and, thanks to their structural resemblance, a similar
conclusion was drawn.
Remarkably, although the excitation of the auxiliary scalar fields does break
the isospectrality, meaning that observations of isospectrality breaking can
constrain the integration constants of the auxiliary scalar fields, the
question of whether one can really falsify the DW-II gravity via
isospectrality tests should be answered with great care. The reason is that
what one really constrains through isospectrality tests are some integration
constants that do not directly characterize the theory. This is different from
the situation in some other models (e.g. $f(R)$ gravity or dynamical
Chern-Simons gravity), in which isospectrality tests directly constrain the
coupling constants of the theory. This makes the DW-II gravity intrinsically
more tenacious against isospectrality tests. Therefore, at this point, since
the DW-I model has been ruled out, we want to point out that the DW-II
nonlocal gravity seems not only successful in describing the accelerating
expansion of our universe on cosmological scales, but also theoretically
consistent down to small scales such as those of black holes. From an
astrophysical point of view, to confirm the validity of the DW model on a more
concrete basis, one should consider rotating black holes, since astrophysical
black holes are usually spinning. It would be interesting to see whether, in
such cases, the DW model still gives predictions identical to those made in
GR, or whether some distinctive features emerge. We leave these issues for
future work.
## Acknowledgments
CYC is supported by the Institute of Physics of Academia Sinica.
## References
* (1) B. P. Abbott et al. [LIGO Scientific and Virgo], Phys. Rev. Lett. 116 (2016) no.6, 061102.
* (2) B. P. Abbott et al. [LIGO Scientific and Virgo], Phys. Rev. X 9 (2019) no.3, 031040.
* (3) R. Abbott et al. [LIGO Scientific and Virgo], [arXiv:2010.14527 [gr-qc]].
* (4) K. D. Kokkotas and B. G. Schmidt, Living Rev. Rel. 2 (1999), 2.
* (5) E. Berti, V. Cardoso and A. O. Starinets, Class. Quant. Grav. 26, 163001 (2009).
* (6) R. A. Konoplya and A. Zhidenko, Rev. Mod. Phys. 83 (2011), 793-836.
* (7) T. Kobayashi, H. Motohashi and T. Suyama, Phys. Rev. D 85, 084025 (2012) [erratum: Phys. Rev. D 96, no.10, 109903 (2017)].
* (8) T. Kobayashi, H. Motohashi and T. Suyama, Phys. Rev. D 89, no.8, 084042 (2014).
* (9) J. L. Blázquez-Salcedo, C. F. B. Macedo, V. Cardoso, V. Ferrari, L. Gualtieri, F. S. Khoo, J. Kunz and P. Pani, Phys. Rev. D 94, no.10, 104024 (2016).
* (10) S. Bhattacharyya and S. Shankaranarayanan, Phys. Rev. D 96 (2017) no.6, 064044.
* (11) K. Glampedakis, G. Pappas, H. O. Silva and E. Berti, Phys. Rev. D 96, no.6, 064054 (2017).
* (12) C. Y. Chen and P. Chen, Phys. Rev. D 98, no.4, 044042 (2018).
* (13) C. Y. Chen, M. Bouhmadi-López and P. Chen, Eur. Phys. J. C 79, no.1, 63 (2019).
* (14) C. Y. Chen and P. Chen, Phys. Rev. D 99, no.10, 104003 (2019).
* (15) F. Moulin, A. Barrau and K. Martineau, Universe 5, no.9, 202 (2019).
* (16) C. Y. Chen, Y. H. Kung and P. Chen, Phys. Rev. D 102, 124033 (2020).
* (17) V. Cardoso, A. S. Miranda, E. Berti, H. Witek and V. T. Zanchin, Phys. Rev. D 79, 064016 (2009).
* (18) S. R. Dolan, Phys. Rev. D 82, 104003 (2010).
* (19) H. Yang, D. A. Nichols, F. Zhang, A. Zimmerman, Z. Zhang and Y. Chen, Phys. Rev. D 86, 104006 (2012).
* (20) B. Cuadros-Melgar, R. D. B. Fontana and J. de Oliveira, Phys. Lett. B 811, 135966 (2020).
* (21) R. A. Konoplya and Z. Stuchlík, Phys. Lett. B 771, 597-602 (2017).
* (22) M. S. Churilova, Eur. Phys. J. C 79, no.7, 629 (2019).
* (23) K. Glampedakis and H. O. Silva, Phys. Rev. D 100, no.4, 044040 (2019).
* (24) C. Y. Chen and P. Chen, Phys. Rev. D 101, no.6, 064021 (2020).
* (25) S. Chandrasekhar, The Mathematical Theory of Black Holes, Oxford University Press, Oxford (1992).
* (26) P. Pani, E. Berti and L. Gualtieri, Phys. Rev. Lett. 110, no.24, 241103 (2013).
* (27) P. Pani, E. Berti and L. Gualtieri, Phys. Rev. D 88, 064048 (2013).
* (28) K. Glampedakis, A. D. Johnson and D. Kennefick, Phys. Rev. D 96, no.2, 024036 (2017).
* (29) G. Darboux, C. R. Academy Sci. (Paris) 94, 1456 (1882).
* (30) A. V. Yurov and V. A. Yurov, Phys. Lett. A 383, no.22, 2571-2578 (2019).
* (31) S. A. Teukolsky, Astrophys. J. 185, 635-647 (1973).
* (32) M. Sasaki and T. Nakamura, Prog. Theor. Phys. 67, 1788 (1982).
* (33) F. Moulin and A. Barrau, Gen. Rel. Grav. 52, no.8, 82 (2020).
* (34) V. Ferrari, M. Pauri and F. Piazza, Phys. Rev. D 63, 064009 (2001).
* (35) V. Cardoso and J. P. S. Lemos, Phys. Rev. D 64, 084017 (2001).
* (36) E. Berti and K. D. Kokkotas, Phys. Rev. D 67, 064020 (2003).
* (37) J. L. Blázquez-Salcedo, S. Kahlen and J. Kunz, Eur. Phys. J. C 79, no.12, 1021 (2019).
* (38) S. Bhattacharyya and S. Shankaranarayanan, Eur. Phys. J. C 78 (2018) no.9, 737.
* (39) S. Datta and S. Bose, Eur. Phys. J. C 80 (2020) no.1, 14.
* (40) S. Bhattacharyya and S. Shankaranarayanan, Phys. Rev. D 100, no.2, 024022 (2019).
* (41) V. Cardoso, M. Kimura, A. Maselli, E. Berti, C. F. B. Macedo and R. McManus, Phys. Rev. D 99, no.10, 104077 (2019).
* (42) S. Shankaranarayanan, Int. J. Mod. Phys. D 28, no.14, 1944020 (2019).
* (43) S. Deser and R. P. Woodard, Phys. Rev. Lett. 99 (2007) 111301.
* (44) S. Deser and R. P. Woodard, JCAP 06, 034 (2019).
* (45) R. P. Woodard, Found. Phys. 44 (2014), 213.
* (46) R. P. Woodard, Universe 4 (2018), 88.
* (47) A. O. Barvinsky, Phys. Lett. B 710 (2012) 12.
* (48) M. Maggiore, Phys. Rev. D 89 (2014) 043008.
* (49) M. Maggiore and M. Mancarella, Phys. Rev. D 90 (2014) 023005.
* (50) V. Vardanyan, Y. Akrami, L. Amendola and A. Silvestri, JCAP 1803 (2018) 048.
* (51) L. Amendola, N. Burzilla and H. Nersisyan, Phys. Rev. D 96 (2017) 084031.
* (52) S. Tian, Phys. Rev. D 98 (2018) 084040.
* (53) C. Deffayet and R. P. Woodard, JCAP 08 (2009), 023.
* (54) S. Park and S. Dodelson, Phys. Rev. D 87 (2013) 024003.
* (55) S. Dodelson and S. Park, Phys. Rev. D 90 (2014) 043535, Erratum: [Phys. Rev. D 98 (2018) 029904].
* (56) S. Park and A. Shafieloo, Phys. Rev. D 95 (2017) 064061.
* (57) H. Nersisyan, A. F. Cid and L. Amendola, JCAP 04 (2017), 046.
* (58) S. Park, Phys. Rev. D 97 (2018) 044006.
* (59) L. Amendola, Y. Dirian, H. Nersisyan and S. Park, JCAP 03 (2019), 045.
* (60) S. Nojiri and S. D. Odintsov, Phys. Lett. B 659, 821-826 (2008).
* (61) Y. l. Zhang, K. Koyama, M. Sasaki and G. B. Zhao, JHEP 1603 (2016) 039.
* (62) S. Nojiri, S. D. Odintsov, M. Sasaki and Y. l. Zhang, Phys. Lett. B 696 (2011) 278.
* (63) Y. l. Zhang and M. Sasaki, Int. J. Mod. Phys. D 21 (2012) 1250006.
* (64) S. Park and R. P. Woodard, Phys. Rev. D 99 (2019) 024014.
* (65) E. Belgacem, A. Finke, A. Frassino and M. Maggiore, JCAP 1902 (2019) 035.
* (66) Y. Z. Chu and S. Park, Phys. Rev. D 99 (2019) 044052.
* (67) T. Regge and J. A. Wheeler, Phys. Rev. 108 (1957) 1063.
* (68) F. J. Zerilli, Phys. Rev. Lett. 24, 737-738 (1970).
# High-Frequency Instabilities of the Kawahara Equation: A Perturbative
Approach
Ryan Creedon1, Bernard Deconinck2, Olga Trichtchenko3
1 Dept. of Applied Mathematics, U. of Washington, Seattle, WA, 98105, USA
<EMAIL_ADDRESS>
2 Dept. of Applied Mathematics, U. of Washington, Seattle, WA, 98105, USA
<EMAIL_ADDRESS>
3 Dept. of Physics and Astronomy, U. of Western Ontario, London, ON, N6A 3K7,
CA, <EMAIL_ADDRESS>
January 17, 2021
###### Abstract.
We analyze the spectral stability of small-amplitude, periodic, traveling-wave
solutions of the Kawahara equation. These solutions exhibit high-frequency
instabilities when subject to bounded perturbations on the whole real line. We
introduce a formal perturbation method to determine the asymptotic growth
rates of these instabilities, among other properties. Explicit numerical
computations are used to verify our asymptotic results.
###### Key words and phrases:
Kawahara equation, spectral stability, high-frequency instabilities,
perturbation methods, dispersive Hamiltonian systems, Stokes waves
## 1\. Introduction
We investigate small-amplitude, $L$-periodic, traveling-wave solutions of the
Kawahara equation
$\displaystyle u_{t}=\alpha u_{xxx}+\beta u_{5x}+\sigma(u^{2})_{x},$ (1.1)
where $\alpha$, $\beta$, and $\sigma$ are nonzero, real parameters [20].
Similar to Stokes waves of the Euler water wave problem [26, 29], these
solutions are obtained order by order as power series in a small parameter
that scales with the amplitude of the solutions; see [14] and Section 2 below
for more details. We refer to these solutions as the Stokes waves of the
Kawahara equation.
The Kawahara equation is dispersive with linear dispersion relation
$\displaystyle\omega(k)=\alpha k^{3}-\beta k^{5}.$ (1.2)
The equation is Hamiltonian,
$\displaystyle u_{t}=\partial_{x}\frac{\delta\mathcal{H}}{\delta u},$ (1.3)
with
$\displaystyle\mathcal{H}=\int_{0}^{L}\left(-\frac{\alpha}{2}u_{x}^{2}+\frac{\beta}{2}u_{xx}^{2}+\frac{\sigma}{3}u^{3}\right)dx.$
(1.4)
In an appropriate traveling frame, the Stokes waves of (1.1) are critical
points of the Hamiltonian, prompting an investigation of the flow generated by
(1.4) about the Stokes wave solutions.
Perturbing the Stokes waves by functions bounded in space and exponential in
time yields a spectral problem whose spectral elements characterize the
temporal growth rates of the perturbations; see Section 3 for more details. We
refer to this collection of spectral elements as the stability spectrum of the
Stokes waves.
A standard argument [18] shows that the stability spectrum is purely
continuous, but Floquet theory can decompose the spectrum into an uncountably
infinite collection of point spectra. Each point spectrum is indexed by a real
number, called the Floquet exponent, that is contained within a compact
interval of the real line [15, 17].
For the Euler water wave problem, these point spectra depend analytically on
the amplitude of the Stokes waves [2, 3]. Based on numerical experiments [28],
similar results appear to hold for the Kawahara equation. The spectrum also
exhibits four-fold symmetry due to the underlying Hamiltonian nature of (1.1)
[15], [23]. Therefore, for a Stokes wave with given amplitude to be spectrally
stable, all point spectra must be on the imaginary axis. Otherwise, there
exist perturbations to the Stokes waves that grow exponentially in time, and
the Stokes waves are spectrally unstable.
In contrast with the completely integrable KdV equation ($\beta=0$) [8, 24,
25], considerably less is known about the stability spectrum of Stokes waves
to (1.1). Haragus, Lombardi, and Scheel [14] prove that this spectrum lies on
the imaginary axis for small-amplitude Stokes waves in a particular scaling
regime. Such solutions are, therefore, spectrally stable. Work by
Trichtchenko, Deconinck, and Kollár [28] develops necessary criteria for the
stability spectrum of a broader class of small-amplitude Stokes waves to leave
the imaginary axis and provide numerical evidence for the high-frequency
instabilities that result.
High-frequency instabilities arise from pairwise collisions of nonzero,
imaginary elements of the stability spectrum. Upon colliding, these elements
may symmetrically bifurcate from the imaginary axis as the amplitude of the
Stokes wave grows, resulting in instability [11],[23]. An example of a high-
frequency instability for a small-amplitude Stokes wave of (1.1) is seen in
Figure 1. We refer to the locus of spectral elements off the imaginary axis
and bounded away from the origin as high-frequency isolas. The isolas of
Figure 1, as well as the rest of the stability spectrum, are obtained
numerically using the Floquet-Fourier-Hill (FFH) method; see [9] for a
detailed description of this method.
Figure 1. (Left) A stability spectrum of Stokes wave solutions of (1.1) with
$\alpha=1$, $\beta=0.7$, $\sigma=1$, and small-amplitude parameter
$\varepsilon=10^{-3}$, computed using the FFH method. A uniform grid of
$10^{3}$ Floquet exponents between $[-1/2,1/2]$ approximates purely imaginary
point spectra but misses the high-frequency isolas. A uniform grid of $4\times
10^{3}$ Floquet exponents in the interval described by (4.13), obtained in
Section 4, captures these isolas. (Right) Zoom-in of the high-frequency isola
boxed in the left plot (with fewer point spectra shown for ease of
visibility). The red curve is obtained in Section 5 and approximates the
isola.
High-frequency instabilities are not as well-studied as the modulational (or
Benjamin-Feir) instability that arises from collisions of spectral elements at
the origin of the complex spectral plane [5],[7]. Current understanding of
high-frequency instabilities is limited mostly to numerical experiments.
Exceptions include the works of Akers [4] and Trichtchenko, Deconinck, and
Kollár [28], which obtain asymptotic information about the high-frequency
isolas for the Euler problem in infinitely deep water and for the Kawahara
equation, respectively.
The purpose of our present work is to build on these results. In particular,
for sufficiently small-amplitude solutions, we seek the following:
1. (i)
the asymptotic range of Floquet exponents that parameterize the high-frequency
isolas observed in numerical computations of the stability spectrum,
2. (ii)
asymptotic estimates of the most unstable spectral elements of the high-
frequency isolas, and
3. (iii)
expressions for curves asymptotic to these isolas, as seen in Figure 1.
To obtain these quantities, we develop a perturbation method inspired by [4]
that readily extends to higher-order calculations. Asymptotic results obtained
by this method are then compared with numerical results from the FFH method.
## 2\. Small-Amplitude Stokes Waves
We move to a frame traveling with velocity $c$ so that $x\rightarrow x-ct$.
Equation (1.1) becomes
$\displaystyle u_{t}=cu_{x}+\alpha u_{xxx}+\beta u_{5x}+\sigma(u^{2})_{x}.$
(2.1)
We seek $L$-periodic, steady-state solutions of (2.1). Equating time
derivatives to zero and integrating in $x$, we arrive at
$\displaystyle cu+\alpha u_{xx}+\beta u_{4x}+\sigma u^{2}=\mathcal{C},$ (2.2)
where $\mathcal{C}$ is a constant of integration. Using the Galilean symmetry
of (1.1), there exists a boost $\xi$ such that, with $c\rightarrow c+\xi$ and
$u\rightarrow u+\xi$, $\mathcal{C}$ can be omitted from (2.2):
$\displaystyle cu+\alpha u_{xx}+\beta u_{4x}+\sigma u^{2}=0.$ (2.3)
Scaling $x\rightarrow 2\pi x/L$ and $u\rightarrow 2\pi u/(\alpha L)$ allows us
to consider $2\pi$-periodic solutions of
$\displaystyle cu+u_{xx}+\beta u_{4x}+\sigma u^{2}=0,$ (2.4)
without loss of generality, provided $c$, $\beta$, and $\sigma$ are
appropriately redefined.
Let $u=u_{S}(x;\varepsilon)$ be a one-parameter family of $2\pi$-periodic
solutions of (2.4) with corresponding velocity $c=c(\varepsilon)$. The
existence of such a family is rigorously justified by Lyapunov-Schmidt
reduction; see [14]. In what follows, we define the parameter $\varepsilon$ as
twice the first Fourier coefficient of $u_{S}(x;\varepsilon)$:
$\displaystyle\varepsilon:=2\widehat{u_{S}(x;\varepsilon)}_{1}=\frac{1}{\pi}\int_{0}^{2\pi}u_{S}(x;\varepsilon)e^{ix}dx,$
(2.5)
where $~{}\widehat{\cdot}~{}$ is the Fourier transform on the interval
$(0,2\pi)$. Because the $\textrm{L}^{2}(0,2\pi)$ norm of
$u_{S}(x;\varepsilon)$ scales like $\varepsilon$ when $|\varepsilon|\ll 1$, we
call $\varepsilon$ the small-amplitude parameter.
From [14], expansions for $u_{S}(x;\varepsilon)$ and $c(\varepsilon)$ take the
form
$\displaystyle u_{S}(x;\varepsilon)$
$\displaystyle=\sum_{k=1}^{\infty}u_{k}(x)\varepsilon^{k},$ (2.6a)
$\displaystyle c(\varepsilon)$
$\displaystyle=\sum_{k=0}^{\infty}c_{2k}\varepsilon^{2k},$ (2.6b)
where $u_{k}(x)$ is analytic and $2\pi$-periodic for each $k$. Exploiting the
invariance of (2.4) under $x\rightarrow-x$ and $x\rightarrow x+\phi$, we
require $u_{k}(x)=u_{k}(-x)$ so that $u_{S}(x;\varepsilon)$ is even in $x$
without loss of generality. Substituting these expansions into (2.4) and
following a Poincaré-Lindstedt perturbation method [29], one finds corrections
to $u_{S}(x;\varepsilon)$ and $c(\varepsilon)$ order by order.
One difficulty occurs at leading order of the Poincaré-Lindstedt method.
Substituting expansions (2.6) into (2.4) and collecting terms of
$\mathcal{O}(\varepsilon)$, we find
$\displaystyle\left[c_{0}+\partial_{x}^{2}+\beta\partial_{x}^{4}\right]u_{1}(x)=0.$
(2.7)
From (2.5), $\widehat{u_{1}(x)}_{1}=1/2$. Taking the Fourier transform of
(2.7) and evaluating at the first mode, we find
$\displaystyle\left[c_{0}-1+\beta\right]\widehat{u_{1}(x)}_{1}=\frac{1}{2}(c_{0}-1+\beta)=0,$
(2.8)
which implies that
$\displaystyle c_{0}=1-\beta.$ (2.9)
By inspection,
$\displaystyle u_{1}(x)=\cos(x)$ (2.10)
is a solution to (2.7) that is analytic, $2\pi$-periodic, even in $x$, and
satisfies the normalization $\widehat{u_{1}(x)}_{1}=1/2$. If
$\beta=1/(1+N^{2})$ for any integer $N>1$, then
$\displaystyle u_{1}(x)=\cos(x)+C_{N}\cos(Nx),$ (2.11)
where $C_{N}$ is an arbitrary real constant, is an equally valid solution to
(2.7) with the requisite properties. In this case, the Stokes waves are said
to be resonant and exhibit Wilton ripples [30]. Expansions (2.6) must be
modified as a result; see [1, 16], for instance.
In this manuscript, we restrict to nonresonant Stokes waves:
$\displaystyle\beta\neq\frac{1}{1+N^{2}},$ (2.12)
for $N$ stated above, and (2.9) and (2.10) are the unique leading-order
behaviors of $c(\varepsilon)$ and $u_{S}(x;\varepsilon)$, respectively. The
remainder of the Poincaré-Lindstedt method follows as usual. We terminate the
method after third-order corrections, as this is sufficient for our
calculations that follow. We find
$\displaystyle u_{S}(x;\varepsilon)$ $\displaystyle=\varepsilon
u_{1}(x)+\varepsilon^{2}u_{2}(x)+\varepsilon^{3}u_{3}(x)+\mathcal{O}(\varepsilon^{4})$
(2.13a)
$\displaystyle=\varepsilon\cos(x)+\varepsilon^{2}\frac{\sigma}{2}\left(-\frac{1}{c_{0}}+\frac{2}{\Omega(2)}\cos(2x)\right)+\varepsilon^{3}\frac{3\sigma^{2}}{\Omega(2)\Omega(3)}\cos(3x)+\mathcal{O}(\varepsilon^{4}),$
$\displaystyle c(\varepsilon)$
$\displaystyle=c_{0}+c_{2}\varepsilon^{2}+\mathcal{O}(\varepsilon^{4})$
(2.13b)
$\displaystyle=1-\beta+\sigma^{2}\left(\frac{1}{c_{0}}-\frac{1}{\Omega(2)}\right)\varepsilon^{2}+\mathcal{O}(\varepsilon^{4}),$
where $\Omega(\cdot)$ is the linear dispersion relation of the Kawahara
equation (1.1) (with $\alpha=1$) in a frame traveling at velocity
$c(\varepsilon)$:
$\displaystyle\Omega(k)=-c_{0}k+k^{3}-\beta k^{5}.$ (2.14)
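For reference, the truncation (2.13) is immediate to evaluate numerically. The following is a minimal Python sketch; the parameter values, chosen to match Figure 1, are illustrative, and the helper name `stokes_wave` is ours.

```python
import numpy as np

def stokes_wave(x, eps, beta, sigma):
    """Third-order Stokes wave (2.13a) and velocity (2.13b) of the scaled
    Kawahara equation (2.4)."""
    c0 = 1.0 - beta
    Omega = lambda k: -c0 * k + k**3 - beta * k**5   # dispersion relation (2.14)
    u = (eps * np.cos(x)
         + eps**2 * (sigma / 2.0) * (-1.0 / c0 + 2.0 * np.cos(2 * x) / Omega(2))
         + eps**3 * (3.0 * sigma**2 / (Omega(2) * Omega(3))) * np.cos(3 * x))
    c = c0 + sigma**2 * (1.0 / c0 - 1.0 / Omega(2)) * eps**2
    return u, c

x = np.linspace(0.0, 2.0 * np.pi, 256, endpoint=False)
u, c = stokes_wave(x, eps=1e-3, beta=0.7, sigma=1.0)
print(c)   # c = 0.3 + O(eps^2) for beta = 0.7
```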
## 3\. Necessary Conditions for High-Frequency Instability
### 3.1. The Stability Spectrum
We consider a perturbation to $u_{S}(x;\varepsilon)$ of the form
$\displaystyle u(x,t)=u_{S}(x;\varepsilon)+\rho v(x,t)+\mathcal{O}(\rho^{2}),$
(3.1)
where $|\rho|\ll 1$ is a small parameter independent of $\varepsilon$ and
$v(x,t)$ is a sufficiently smooth, bounded function of $x$ on the whole real
line for each $t\geq 0$. Substituting (3.1) into (1.1) (with $\alpha=1$) and
collecting terms of $\mathcal{O}(\rho)$, we find, by formally separating variables,
$\displaystyle v(x,t)=e^{\lambda t}W(x)+c.c.,$ (3.2)
where $c.c.$ denotes complex conjugation of what precedes and $W(x)$ satisfies
the spectral problem
$\displaystyle\lambda
W(x)=\mathcal{L}(u_{S}(x;\varepsilon),c(\varepsilon),\beta,\sigma)W(x),$ (3.3)
with
$\displaystyle\mathcal{L}(u_{S}(x;\varepsilon),c(\varepsilon),\beta,\sigma)=c(\varepsilon)\partial_{x}+\partial_{x}^{3}+\beta\partial_{x}^{5}+2\sigma
u_{S}(x;\varepsilon)\partial_{x}+2\sigma u_{S}^{\prime}(x;\varepsilon),$ (3.4)
where primes denote differentiation with respect to $x$. From Floquet theory
[15], all solutions of (3.3) that are bounded over $\mathbb{R}$ take the form
$\displaystyle W(x)=e^{i\mu x}w(x),$ (3.5)
where $\mu\in[-1/2,1/2]$ is the Floquet exponent and $w(x)$ is $2\pi$-periodic
in an appropriately chosen function space.
Remark. The conjugate of $W(x)$ is a solution of (3.3) with spectral parameter
$\overline{\lambda}$. Since the spectrum of $\mathcal{L}$ is invariant under
conjugation according to [15], one can restrict $\mu$ to the interval
$[0,1/2]$ without loss of generality.
Substituting (3.5) into (3.3), our spectral problem becomes a one-parameter
family of spectral problems:
$\displaystyle\lambda^{\mu}w(x)=\mathcal{L}^{\mu}(u_{S}(x;\varepsilon),c(\varepsilon),\beta,\sigma)w(x),$
(3.6)
where $\mathcal{L}^{\mu}$ is $\mathcal{L}$ with $\partial_{x}\rightarrow
i\mu+\partial_{x}$. In light of (3.6), we require
$w(x)\in\textrm{H}^{5}_{\textrm{per}}(0,2\pi)$ so that $\mathcal{L}^{\mu}$ is
a closed operator densely defined on the separable Hilbert space
$\textrm{L}_{\textrm{per}}^{2}(0,2\pi)$ for a given $\mu$. Then,
$\mathcal{L}^{\mu}$ has a discrete spectrum of eigenvalues $\lambda^{\mu}$ for
each $\mu$ and the union of $\lambda^{\mu}$ over all $\mu\in[0,1/2]$ yields
the purely continuous spectrum of $\mathcal{L}$, which is the stability
spectrum of Stokes waves. See [15] for more details.
As stated in the introduction, if there exists $\lambda^{\mu}$ with
$\textrm{Re}\left(\lambda^{\mu}\right)>0$, then there exists a perturbation to
the Stokes wave that grows exponentially in time, and we say that the Stokes
wave is spectrally unstable. Otherwise, the wave is spectrally stable. Since
(3.6) is obtained from a linearization of a Hamiltonian system (1.1), the
stability spectrum is invariant under conjugation and negation. As a result,
spectral stability implies that all eigenvalues of $\mathcal{L}^{\mu}$ are
purely imaginary.
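The FFH computations referenced in the introduction [9] can be mimicked with a truncated Fourier discretization of $\mathcal{L}^{\mu}$: in the basis $e^{ikx}$, the constant-coefficient part of $\mathcal{L}^{\mu}$ is diagonal with symbol $i\left[c(\mu+k)-(\mu+k)^{3}+\beta(\mu+k)^{5}\right]$, while the terms $2\sigma u_{S}\partial_{x}+2\sigma u_{S}^{\prime}$ couple mode $k$ to mode $k'=k+j$ with weight $2i\sigma(\mu+k')\widehat{u_{S}}_{j}$. The following Python sketch is a minimal version of that idea, not the implementation of [9]: it uses only the $\mathcal{O}(\varepsilon^{2})$ truncation (2.13) of the Stokes wave (a full FFH computation would use a numerically converged wave), and the truncation size $N$, the sample value of $\mu$, and the parameter values are illustrative assumptions.

```python
import numpy as np

def ffh_spectrum(mu, eps, beta, sigma, N=32):
    """Eigenvalues of the Fourier truncation of L^mu in (3.6), modes |k| <= N,
    with u_S and c replaced by their O(eps^2) truncations from (2.13)."""
    c0 = 1.0 - beta
    Om2 = -2.0 * c0 + 8.0 - 32.0 * beta                 # Omega(2), from (2.14)
    c = c0 + sigma**2 * (1.0 / c0 - 1.0 / Om2) * eps**2
    u_hat = {0: -eps**2 * sigma / (2 * c0),             # Fourier modes of u_S
             1: eps / 2.0, -1: eps / 2.0,
             2: eps**2 * sigma / (2 * Om2), -2: eps**2 * sigma / (2 * Om2)}
    ks = np.arange(-N, N + 1)
    L = np.zeros((2 * N + 1, 2 * N + 1), dtype=complex)
    for a, kp in enumerate(ks):
        # constant-coefficient part: c d_x + d_x^3 + beta d_x^5, d_x -> i(mu+k)
        L[a, a] += 1j * (c * (mu + kp) - (mu + kp)**3 + beta * (mu + kp)**5)
        # variable-coefficient part 2 sigma (u_S d_x + u_S'): couples mode
        # k = kp - j to mode kp with weight 2 i sigma (mu + kp) u_hat[j]
        for j, uj in u_hat.items():
            b = a - j
            if 0 <= b <= 2 * N:
                L[a, b] += 2j * sigma * (mu + kp) * uj
    return np.linalg.eigvals(L)

# mu ~ 0.1727 sits near the Delta n = 1 eigenvalue collision identified in
# Section 3.2 for beta = 0.7 (approximate value; illustrative)
lam = ffh_spectrum(mu=0.1727, eps=1e-3, beta=0.7, sigma=1.0)
print(lam[np.argmax(lam.real)])   # most unstable eigenvalue at this mu
```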
### 3.2. The Necessary Conditions for High-Frequency Instability
For fixed $\mu$, the operator $\mathcal{L}^{\mu}$ depends implicitly on the
small parameter $\varepsilon$ through its dependence on $u_{S}(x;\varepsilon)$
and $c(\varepsilon)$. If $\varepsilon=0$, $\mathcal{L}^{\mu}$ reduces to
$\displaystyle\mathcal{L}^{\mu}_{0}=c_{0}(i\mu+\partial_{x})+(i\mu+\partial_{x})^{3}+\beta(i\mu+\partial_{x})^{5},$
(3.7)
a constant-coefficient operator, and its eigenvalues $\lambda^{\mu}_{0}$ are
explicitly given by
$\displaystyle\lambda^{\mu}_{0,n}$ $\displaystyle=-i\Omega(\mu+n),$ (3.8)
where $n\in\mathbb{Z}$. For all $\mu$ and $n$, these eigenvalues are purely
imaginary, implying that the zero-amplitude solution of the Kawahara equation
is spectrally stable.
Importantly, not all eigenvalues given by (3.8) are simple. Using the theory
outlined in [21] and [28], one has
Theorem 1. For each $\Delta n\in\mathbb{N}$, there exists a unique Floquet
exponent $\mu_{0}\in[0,1/2]$ and unique integers $m$ and $n$ such that
$m-n=\Delta n$ and
$\displaystyle\lambda^{\mu_{0}}_{0,n}=\lambda^{\mu_{0}}_{0,m}\neq 0,$ (3.9)
provided that the parameter $\beta$ is nonresonant (2.12) and that $\beta$
satisfies the following inequality (a similar statement holds for $\Delta
n<0$; this yields the complex conjugate eigenvalues that satisfy (3.9)):
$\displaystyle\textrm{max}\left(\frac{3}{5(\Delta n)^{2}},\frac{1}{1+(\Delta
n)^{2}}\right)<$ $\displaystyle~{}\beta<\textrm{min}\left(\frac{6}{5(\Delta
n)^{2}},\frac{1}{\left(\frac{\Delta n}{2}\right)^{2}+1}\right),\quad\Delta
n<3,$ (3.10a) $\displaystyle\frac{1}{1+(\Delta n)^{2}}<$
$\displaystyle~{}\beta<\frac{1}{1+\left(\frac{\Delta
n}{2}\right)^{2}},\quad\Delta n\geq 3.$ (3.10b)
The proof is found in the Appendix.
The eigenfunctions of these nonsimple eigenvalues take the form
$\displaystyle w_{0}(x)=\gamma_{0}e^{imx}+\gamma_{1}e^{inx},$ (3.11)
where $\gamma_{0},\gamma_{1}$ are arbitrary, complex constants. We assume the
eigenvalues that satisfy (3.9) are semi-simple with geometric and algebraic
multiplicity two. Then, these eigenvalues represent the collision of two
simple eigenvalues at $\varepsilon=0$, and (3.9) is referred to as the
collision condition.
Collision of eigenvalues away from the origin is a necessary condition for the
development of high-frequency instabilities. Inequality (3.10) guarantees that
there are a finite number of such collisions for a given $\beta$ (for $\Delta
n$ sufficiently large, $\beta$ fails to satisfy inequality (3.10), and no
high-frequency instabilities occur): this is in contrast to the water wave
problem, where a countably infinite number of collisions occur [11]. Each
collision site can be enumerated by $\Delta n$. The largest high-frequency
isola occurs from the $\Delta n=1$ collision, which we study in Section 4.1.
A second condition for high-frequency instabilities necessitates that the
Krein signatures [22] of the two collided eigenvalues have opposite signs
[23]. It is shown in [11, 21, 28] that this condition is equivalent to
$\displaystyle(\mu_{0}+n)(\mu_{0}+m)<0,$ (3.12)
where $\mu_{0}$, $m$, and $n$ are obtained from the collision condition (3.9).
For any $\beta$ that satisfies conditions (2.12) and (3.10) and any $\mu_{0}$,
$m$, and $n$ that satisfies the condition (3.9), (3.12) is automatically
satisfied; see [21] and [28] for the proof.
As $|\varepsilon|$ increases in magnitude, a neighborhood of spectral elements
around the collided eigenvalues of $\mathcal{L}^{\mu_{0}}_{0}$ (3.7) can leave
the imaginary axis, generating high-frequency instabilities. This is seen
explicitly in Figure 1 for the parameter choice $\beta=0.7$, where a $\Delta
n=1$ collision occurs at $\varepsilon=0$.
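Since, for fixed $\Delta n$, the collision condition (3.9) is a scalar root-finding problem in $\mu$, the collision is easy to locate numerically. The following is a minimal Python sketch; the scan range over $n$ and the tolerance are illustrative assumptions, and Theorem 1 guarantees that the located pair is unique.

```python
import numpy as np
from scipy.optimize import brentq

beta = 0.7
c0 = 1.0 - beta
Omega = lambda k: -c0 * k + k**3 - beta * k**5   # dispersion relation (2.14)

# Scan candidate pairs with m - n = 1; Theorem 1 guarantees uniqueness.
for n in range(-6, 6):
    m = n + 1
    g = lambda mu: Omega(mu + m) - Omega(mu + n)
    grid = np.linspace(1e-6, 0.5, 2001)
    vals = g(grid)
    for i in np.where(np.sign(vals[:-1]) * np.sign(vals[1:]) < 0)[0]:
        mu0 = brentq(g, grid[i], grid[i + 1])
        nonzero = abs(Omega(mu0 + n)) > 1e-10          # exclude the origin
        krein = (mu0 + n) * (mu0 + m) < 0              # condition (3.12)
        if nonzero and krein:
            print(f"n={n}, m={m}, mu0={mu0:.6f}, "
                  f"lambda0={-1j * Omega(mu0 + n):.6f}")
```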
## 4\. Asymptotics of High-Frequency Instabilities
We obtain spectral data of $\mathcal{L}^{\mu}$ as a power series expansion in
$\varepsilon$ about the collided eigenvalues of $\mathcal{L}^{\mu_{0}}_{0}$.
First, we apply our method to the largest high-frequency instability
corresponding to $\Delta n=1$. Then, we consider $\Delta n\geq 2$.
### 4.1. High-Frequency Instabilities: $\Delta n=1$
Let $m$ and $n$ be the unique integers that satisfy the collision condition
(3.9) with $m-n=1$, and let $\mu_{0}$ be the corresponding unique Floquet
exponent in $[0,1/2]$. Then, the spectral data of $\mathcal{L}^{\mu_{0}}_{0}$
that gives rise to a $\Delta n=1$ high-frequency instability is
$\displaystyle\lambda_{0}:=\lambda^{\mu_{0}}_{0,n}$
$\displaystyle=-i\Omega(\mu_{0}+n)=\lambda^{\mu_{0}}_{0,m}=-i\Omega(\mu_{0}+m)\neq
0,$ (4.1a) $\displaystyle w_{0}(x)$
$\displaystyle:=\gamma_{0}e^{imx}+\gamma_{1}e^{inx}.$ (4.1b)
As $|\varepsilon|$ increases, we assume these data depend analytically on
$\varepsilon$:
$\displaystyle\lambda(\varepsilon)$
$\displaystyle=\lambda_{0}+\varepsilon\lambda_{1}+\mathcal{O}(\varepsilon^{2}),$
(4.2a) $\displaystyle w(x;\varepsilon)$ $\displaystyle=w_{0}(x)+\varepsilon
w_{1}(x)+\mathcal{O}(\varepsilon^{2}),$ (4.2b)
where $\lambda(\varepsilon)$ and $w(x;\varepsilon)$ solve the spectral problem
(3.6).
If $\lambda_{0}$ is a semi-simple and isolated eigenvalue of
$\mathcal{L}^{\mu_{0}}_{0}$, (4.2a) and (4.2b) may be justified using results
of analytic perturbation theory, provided $\mu_{0}$ is fixed [19]. Fixing the
Floquet exponent in this way, however, gives at most two elements on the high-
frequency isola (provided $|\varepsilon|$ is sufficiently small) and these
elements do not, in general, correspond to the spectral elements of largest
real part on the isola. For these reasons, we expand the Floquet exponent
about its resonant value as well:
$\displaystyle\mu=\mu(\varepsilon)=\mu_{0}+\varepsilon\mu_{1}+\mathcal{O}(\varepsilon^{2}).$
(4.3)
As we shall see, $\mu_{1}$ is constrained to an interval of values that
parameterizes an ellipse asymptotic to the high-frequency isola.
Like Akers [4], we impose the following normalization condition on our
eigenfunction $w(x;\varepsilon)$ for uniqueness:
$\displaystyle\widehat{w(x;\varepsilon)}_{n}=1.$ (4.4)
Substituting (4.2b) into this normalization condition, we find
$\widehat{w_{0}(x)}_{n}=1$ and $\widehat{w_{j}(x)}_{n}=0$ for
$j\in\mathbb{N}$, meaning $w_{0}(x)$ fully supports the $n^{\textrm{th}}$
Fourier mode of the eigenfunction $w(x;\varepsilon)$. As a consequence, (4.1)
becomes
$\displaystyle w_{0}(x)=e^{inx}+\gamma_{0}e^{imx}.$ (4.5)
Although $w_{0}(x)$ does not appear unique at this order, we find an
expression for $\gamma_{0}$ at the next order.
The $\mathcal{O}(\varepsilon)$ Problem
Substituting (4.2a), (4.2b), and (4.3) into (3.6) and collecting terms of
$\mathcal{O}(\varepsilon)$ yields
$\displaystyle(\mathcal{L}_{0}^{\mu_{0}}-\lambda_{0})w_{1}(x)=\lambda_{1}w_{0}(x)-\mathcal{L}_{1}w_{0}(x),$
(4.6)
where
$\displaystyle\mathcal{L}_{1}=ic_{0}\mu_{1}+3i\mu_{1}(i\mu_{0}+\partial_{x})^{2}+5i\beta\mu_{1}(i\mu_{0}+\partial_{x})^{4}+2\sigma
u_{1}(x)(i\mu_{0}+\partial_{x})+2\sigma u_{1}^{\prime}(x).$ (4.7)
Using (2.13) to replace $u_{1}(x)$, (4.5) to replace $w_{0}(x)$, and $m-n=1$,
(4.6) becomes
$\displaystyle(\mathcal{L}^{\mu_{0}}_{0}-\lambda_{0})w_{1}(x)=$
$\displaystyle\left[\lambda_{1}+i\mu_{1}c_{g}(\mu_{0}+n)-i\sigma\gamma_{0}(\mu_{0}+n)\right]e^{inx}$
(4.8)
$\displaystyle+\left[\gamma_{0}\left(\lambda_{1}+i\mu_{1}c_{g}(\mu_{0}+m)\right)-i\sigma(\mu_{0}+m)\right]e^{imx}$
$\displaystyle-i\sigma(\mu_{0}+n-1)e^{i(n-1)x}-i\sigma\gamma_{0}(\mu_{0}+m+1)e^{i(m+1)x},$
where $c_{g}(k)=\Omega^{\prime}(k)$ is the group velocity of $\Omega$.
If (4.8) can be solved for
$w_{1}(x)\in\textrm{H}^{5}_{\textrm{per}}\left(0,2\pi\right)$, the Fredholm
alternative necessitates that the inhomogeneous terms on the RHS of (4.8) must
be orthogonal333In the $\textrm{L}^{2}_{\textrm{per}}\left(0,2\pi\right)$
sense to the nullspace of $(\mathcal{L}^{\mu_{0}}_{0}-\lambda_{0})^{\dagger}$,
the hermitian adjoint of $\mathcal{L}^{\mu_{0}}_{0}-\lambda_{0}$. A quick
computation shows that $\mathcal{L}^{\mu_{0}}_{0}-\lambda_{0}$ is skew-
Hermitian, and so its nullspace coincides with that of its Hermitian adjoint.
The nullspace of $\mathcal{L}^{\mu_{0}}_{0}-\lambda_{0}$ is, by construction,
$\displaystyle\textrm{Null}(\mathcal{L}^{\mu_{0}}_{0}-\lambda_{0})=\textrm{Span}\left(e^{inx},e^{imx}\right).$
(4.9)
Thus, the solvability conditions for (4.8) are
$\displaystyle\left<e^{inx},\left[\lambda_{1}+i\mu_{1}c_{g}(\mu_{0}+n)-i\sigma\gamma_{0}(\mu_{0}+n)\right]e^{inx}\right>$
$\displaystyle=0,$ (4.10a)
$\displaystyle\left<e^{imx},\left[\gamma_{0}\left(\lambda_{1}+i\mu_{1}c_{g}(\mu_{0}+m)\right)-i\sigma(\mu_{0}+m)\right]e^{imx}\right>$
$\displaystyle=0,$ (4.10b)
where $\left<\cdot,\cdot\right>$ is the standard inner product on
$\textrm{L}_{\textrm{per}}^{2}(0,2\pi)$.
Remark. Both solvability conditions can be reinterpreted as removing secular
terms from (4.8). Moreover, solvability condition (4.10a) coincides with the
normalization condition $\widehat{w_{1}(x)}_{n}=0$.
The solvability conditions (4.10a) and (4.10b) yield a nonlinear system for
$\lambda_{1}$ and $\gamma_{0}$ with solution
$\displaystyle\lambda_{1}=$
$\displaystyle-i\mu_{1}\left(\frac{c_{g}(\mu_{0}+m)+c_{g}(\mu_{0}+n)}{2}\right)$
(4.11a)
$\displaystyle\quad\pm\sqrt{-\mu_{1}^{2}\left[\frac{c_{g}(\mu_{0}+m)-c_{g}(\mu_{0}+n)}{2}\right]^{2}-\sigma^{2}(\mu_{0}+m)(\mu_{0}+n)},$
$\displaystyle\gamma_{0}=$
$\displaystyle\frac{i\sigma(\mu_{0}+m)}{\lambda_{1}+i\mu_{1}c_{g}(\mu_{0}+m)}.$
(4.11b)
If $\mu_{1}\in(-M_{1},M_{1})$ with
$\displaystyle
M_{1}=\frac{2|\sigma|\sqrt{-(\mu_{0}+m)(\mu_{0}+n)}}{\left|c_{g}(\mu_{0}+m)-c_{g}(\mu_{0}+n)\right|},$
(4.12)
it follows that $\lambda_{1}$ has nonzero real part, since
$(\mu_{0}+m)(\mu_{0}+n)<0$ by the choice of $\beta$. Therefore, to
$\mathcal{O}(\varepsilon)$, the $\Delta n=1$ high-frequency instability is
parameterized by
$\displaystyle\mu\in(\mu_{0}-\varepsilon M_{1},\mu_{0}+\varepsilon M_{1}).$
(4.13)
This interval is asymptotically close to the numerically observed interval of
Floquet exponents that parameterize the high-frequency isola for
$|\varepsilon|\ll 1$; see Figure 2.
Remark. The quantity $M_{1}$ is well-defined since $c_{g}(\mu_{0}+m)\neq
c_{g}(\mu_{0}+n)$. See the Appendix for the proof. The quantity $\gamma_{0}$
is also well-defined, as $\lambda_{1}+i\mu_{1}c_{g}(\mu_{0}+m)$ is guaranteed
to be a complex number with nonzero real part.
Setting $\mu_{1}=0$ maximizes the real part of $\lambda_{1}$ in (4.11a).
Thus, the Floquet exponent that corresponds to the most unstable spectral
element of the high-frequency isola has asymptotic expansion
$\displaystyle\mu_{*}=\mu_{0}+\mathcal{O}(\varepsilon^{2}).$ (4.14)
The corresponding real and imaginary components of this spectral element have
asymptotic expansions
$\displaystyle\lambda_{*,r}$
$\displaystyle=\varepsilon|\sigma|\sqrt{-(\mu_{0}+m)(\mu_{0}+n)}+\mathcal{O}(\varepsilon^{2}),$
(4.15a) $\displaystyle\lambda_{*,i}$
$\displaystyle=-\Omega(\mu_{0}+n)+\mathcal{O}(\varepsilon^{2}),$ (4.15b)
respectively. The former of these expansions provides an estimate for the
growth rate of the $\Delta n=1$ high-frequency instabilities. Figure 2
compares these expansions with numerical results from FFH. Observe that, while
the expansion for the real part is accurate, the expansion for the imaginary
part requires a higher-order calculation; see Section 5.
Figure 2. (Left) Interval of Floquet exponents that parameterize the $\Delta
n=1$ high-frequency isola for parameters $\alpha=1$, $\beta=0.7$, and
$\sigma=1$ as a function of $\varepsilon$. Solid blue curves indicate the
asymptotic boundaries of this interval according to (4.13). Blue circles
indicate the numerical boundaries computed using FFH. The solid red curve
gives the Floquet exponent corresponding to the most unstable spectral element
of the isola according to (4.14). The red circles indicate the same but
computed numerically using FFH. (Right) The real (blue) and imaginary (red)
parts of the most unstable spectral element of the isola as a function of
$\varepsilon$. Solid curves illustrate asymptotic result (4.15). Circles
illustrate results of FFH.
If $\lambda$ is written as a sum of its real and imaginary components,
$\lambda_{r}$ and $\lambda_{i}$, respectively, then eliminating dependence on
$\mu_{1}$ between these quantities yields
$\displaystyle\frac{\lambda_{r}^{2}}{\varepsilon^{2}}+\frac{\left(\lambda_{i}+\Omega(\mu_{0}+n)\right)^{2}}{\varepsilon^{2}\left(\frac{c_{g}(\mu_{0}+m)+c_{g}(\mu_{0}+n)}{c_{g}(\mu_{0}+m)-c_{g}(\mu_{0}+n)}\right)^{2}}=-\sigma^{2}(\mu_{0}+m)(\mu_{0}+n)+\mathcal{O}(\varepsilon).$
(4.16)
Thus, the $\Delta n=1$ high-frequency isola is an ellipse to
$\mathcal{O}(\varepsilon)$ with center at the collision site of eigenvalues
$\lambda^{\mu_{0}}_{0,n}$ and $\lambda^{\mu_{0}}_{0,m}$ and with semi-major
and -minor axes
$\displaystyle a_{1}$
$\displaystyle=\varepsilon|\sigma|\sqrt{-(\mu_{0}+m)(\mu_{0}+n)},$ (4.17a)
$\displaystyle b_{1}$
$\displaystyle=a_{1}\left|\frac{c_{g}(\mu_{0}+m)+c_{g}(\mu_{0}+n)}{c_{g}(\mu_{0}+m)-c_{g}(\mu_{0}+n)}\right|,$
(4.17b)
respectively.
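The leading-order isola geometry is explicit once the collision data are known. The following is a minimal Python sketch assembling (4.12), (4.13), (4.15a), and (4.17) for the $\beta=0.7$ example of Figures 1-3; the root bracket and parameter values are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import brentq

beta, sigma, eps = 0.7, 1.0, 1e-3
c0 = 1.0 - beta
Omega = lambda k: -c0 * k + k**3 - beta * k**5      # (2.14)
cg = lambda k: -c0 + 3 * k**2 - 5 * beta * k**4     # group velocity Omega'(k)

n, m = -1, 0                                        # Delta n = 1 collision pair
mu0 = brentq(lambda mu: Omega(mu + m) - Omega(mu + n), 0.05, 0.45)

rad = np.sqrt(-(mu0 + m) * (mu0 + n))
M1 = 2 * abs(sigma) * rad / abs(cg(mu0 + m) - cg(mu0 + n))               # (4.12)
a1 = eps * abs(sigma) * rad                                              # (4.17a)
b1 = a1 * abs((cg(mu0 + m) + cg(mu0 + n)) / (cg(mu0 + m) - cg(mu0 + n))) # (4.17b)

print(f"Floquet interval (4.13): ({mu0 - eps * M1:.6f}, {mu0 + eps * M1:.6f})")
print(f"growth rate (4.15a) ~ {a1:.3e}; isola center at i*({-Omega(mu0 + n):.5f})")
```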
Our asymptotic predictions agree well with numerics, particularly for the real
component of the isola. There is some discrepancy between asymptotic and
numerical results of the Floquet exponents and imaginary component of the
isola, even when $\varepsilon=10^{-3}$; see Figure 3. As noted before, this
discrepancy is resolved in Section 5.
Figure 3. (Left) $\Delta n=1$ high-frequency isola for $\alpha=1$,
$\beta=0.7$, $\sigma=1$, and $\varepsilon=10^{-3}$. The solid red curve is
ellipse (4.16). Blue circles are a subset of spectral elements from the
numerically computed isola using FFH. (Right) Floquet parameterization of the
real (blue) and imaginary (red) parts of the isola. Solid curves illustrate
asymptotic result (4.11a). Circles indicate results of FFH.
### 4.2. High-Frequency Instabilities: $\Delta n=2$
Suppose $m$, $n$, and $\mu_{0}$ satisfy the collision condition (3.9) for
$\Delta n=2$ and appropriately chosen $\beta$ parameter. Then, (4.1) gives a
semi-simple eigenpair of $\mathcal{L}^{\mu_{0}}_{0}$, and we assume (4.2a),
(4.2b), and (4.3) remain valid expansions for the eigenvalue, eigenfunction,
and Floquet exponents in the vicinity of this semi-simple eigenpair,
respectively. We obtain the coefficients of these expansions order by order in
much the same way as for the $\Delta n=1$ high-frequency instabilities.
The $\mathcal{O}(\varepsilon)$ Problem
Substituting expansions (4.2a), (4.2b), and (4.3) into the spectral problem
(3.6) and collecting $\mathcal{O}(\varepsilon)$ terms gives
$\displaystyle(\mathcal{L}^{\mu_{0}}_{0}-\lambda_{0})w_{1}(x)=$
$\displaystyle-i\sigma(\mu_{0}+n-1)e^{i(n-1)x}+\left[\lambda_{1}+i\mu_{1}c_{g}(\mu_{0}+n)\right]e^{inx}$
(4.18)
$\displaystyle-i\sigma(\mu_{0}+n+1)(1+\gamma_{0})e^{i(n+1)x}+\gamma_{0}\left[\lambda_{1}+i\mu_{1}c_{g}(\mu_{0}+m)\right]e^{imx}$
$\displaystyle-i\sigma(\mu_{0}+m+1)e^{i(m+1)x},$
where we have used (2.13) to replace $u_{1}(x)$. Though equation (4.18) shares
similar features with (4.8), $m-n\neq 1$ in this case. Thus, (4.18) cannot be
simplified further.
The solvability conditions of (4.18) are
$\displaystyle\lambda_{1}+i\mu_{1}c_{g}(\mu_{0}+n)$ $\displaystyle=0,$ (4.19a)
$\displaystyle\gamma_{0}\left[\lambda_{1}+i\mu_{1}c_{g}(\mu_{0}+m)\right]$
$\displaystyle=0.$ (4.19b)
Since $c_{g}(\mu_{0}+m)\neq c_{g}(\mu_{0}+n)$ by the corollary provided in the
Appendix and $\gamma_{0}\neq 0$ (otherwise, our unperturbed eigenfunction
$w_{0}(x)$ is not a superposition of two distinct Fourier modes), we must have
$\displaystyle\lambda_{1}=\mu_{1}=0.$ (4.20)
Solving (4.18) for $w_{1}(x)$ by the method of undetermined coefficients,
one finds
$\displaystyle
w_{1}(x)=\tau_{1,n-1}e^{i(n-1)x}+\tau_{1,n+1}e^{i(n+1)x}+\tau_{1,m+1}e^{i(m+1)x}+\gamma_{1}e^{imx},$
(4.21)
where $\gamma_{1}$ is a constant to be determined at higher order,
$\displaystyle\tau_{1,n-1}$ $\displaystyle=Q_{n,n-1},$ (4.22a)
$\displaystyle\tau_{1,n+1}$ $\displaystyle=(1+\gamma_{0})Q_{n,n+1},$ (4.22b)
$\displaystyle\tau_{1,m+1}$ $\displaystyle=\gamma_{0}Q_{n,m+1},$ (4.22c)
and
$\displaystyle
Q_{N,M}=\frac{\sigma(\mu_{0}+M)}{\Omega(\mu_{0}+M)-\Omega(\mu_{0}+N)}.$ (4.23)
Note that $w_{1}(x)$ does not have an $n^{\textrm{th}}$ Fourier mode, which is
a consequence of the normalization (4.4).
The $\mathcal{O}(\varepsilon^{2})$ Problem
Substituting (4.2a), (4.2b), and (4.3) into (3.6) and collecting terms of
$\mathcal{O}(\varepsilon^{2})$ yields
$\displaystyle(\mathcal{L}_{0}^{\mu_{0}}-\lambda_{0})w_{1}(x)=\lambda_{2}w_{0}(x)-\mathcal{L}_{2}|_{\mu_{1}=0}w_{0}(x)-\mathcal{L}_{1}|_{\mu_{1}=0}w_{1}(x),$
(4.24)
where $\mathcal{L}_{1}|_{\mu_{1}=0}$ is as before (but evaluated at
$\mu_{1}=0$) and
$\displaystyle\mathcal{L}_{2}|_{\mu_{1}=0}=ic_{0}\mu_{2}+c_{2}(i\mu_{0}+\partial_{x})+3i\mu_{2}(i\mu_{0}+\partial_{x})^{2}+5i\beta\mu_{2}(i\mu_{0}+\partial_{x})^{4}+2\sigma
u_{2}(x)(i\mu_{0}+\partial_{x})+2\sigma u_{2}^{\prime}(x).$ (4.25)
As in the previous order, we evaluate the RHS of (4.24) using (2.13) to
replace $u_{1}(x)$ and $u_{2}(x)$, (4.5) to replace $w_{0}(x)$, (4.21) to
replace $w_{1}(x)$, and $m-n=2$ to combine terms with exponential arguments
proportional to $m-1$ and $n+1$. After some work, one arrives at the
solvability conditions (to obtain (4.26a) and (4.26b), one also needs
evenness of $u_{2}(x)$ so that
$\widehat{u_{2}(x)}_{-2}=\widehat{u_{2}(x)}_{2}$):
$\displaystyle\lambda_{2}+i\mathcal{C}_{\mu_{2},\mu_{0}}^{n}$
$\displaystyle=i\gamma_{0}\mathcal{S}_{2}(\mu_{0}+n),$ (4.26a)
$\displaystyle\gamma_{0}\left[\lambda_{2}+i\mathcal{C}_{\mu_{2},\mu_{0}}^{m}\right]$
$\displaystyle=i\mathcal{S}_{2}(\mu_{0}+m),$ (4.26b)
where
$\displaystyle\mathcal{C}_{\mu_{2},\mu_{0}}^{N}$
$\displaystyle=\mu_{2}c_{g}(\mu_{0}+N)-\mathcal{P}^{N}_{\mu_{0}},$ (4.27a)
$\displaystyle\mathcal{P}^{N}_{\mu_{0}}$
$\displaystyle=(\mu_{0}+N)\left[\sigma(Q_{n,N-1}+Q_{n,N+1}+2\widehat{u_{2}(x)}_{0})+c_{2}\right],$
(4.27b) $\displaystyle\mathcal{S}_{2}$
$\displaystyle=\sigma(Q_{n,n+1}+2\widehat{u_{2}(x)}_{2}).$ (4.27c)
Similar to the $\Delta n=1$ case, (4.26a) and (4.26b) are a nonlinear system
for $\lambda_{2}$ and $\gamma_{0}$. The solution of this system is
$\displaystyle\lambda_{2}=$
$\displaystyle~{}-i\left(\frac{\mathcal{C}_{\mu_{2},\mu_{0}}^{m}+\mathcal{C}_{\mu_{2},\mu_{0}}^{n}}{2}\right)\pm\sqrt{-\left[\frac{\mathcal{C}_{\mu_{2},\mu_{0}}^{m}-\mathcal{C}_{\mu_{2},\mu_{0}}^{n}}{2}\right]^{2}-\mathcal{S}_{2}^{2}(\mu_{0}+m)(\mu_{0}+n)},$
(4.28a) $\displaystyle\gamma_{0}=$
$\displaystyle~{}\frac{i\sigma(\mu_{0}+m)\left(Q_{n,n+1}+2\widehat{u_{2}(x)}_{2}\right)}{\lambda_{2}+i\mathcal{C}_{\mu_{2},\mu_{0}}^{m}}.$
(4.28b)
Provided $\mathcal{S}_{2}\neq 0$, there exists an interval of
$\mu_{2}\in(M_{2,-},M_{2,+})$, where
$\displaystyle
M_{2,\pm}=\frac{\mathcal{P}^{m}_{\mu_{0}}-\mathcal{P}^{n}_{\mu_{0}}}{c_{g}(\mu_{0}+m)-c_{g}(\mu_{0}+n)}\pm
2\left|\frac{\mathcal{S}_{2}}{c_{g}(\mu_{0}+m)-c_{g}(\mu_{0}+n)}\right|\sqrt{-(\mu_{0}+m)(\mu_{0}+n)},$
(4.29)
such that $\lambda_{2}$ has a nonzero real part. It is shown in the Appendix
that $\mathcal{S}_{2}\neq 0$ for all relevant values of $\beta$. Thus, the
interval of Floquet exponents that parameterizes the $\Delta n=2$ high-
frequency isola to $\mathcal{O}(\varepsilon^{2})$ is
$\displaystyle\mu\in\left(\mu_{0}+\varepsilon^{2}M_{2,-},\mu_{0}+\varepsilon^{2}M_{2,+}\right).$
(4.30)
Unlike when $\Delta n=1$ (4.13), the center of this interval changes at the
same rate as its width, and this width is an order of magnitude smaller than
for the $\Delta n=1$ instabilities. This explains why numerical detection of
$\Delta n=2$ instabilities presents a greater challenge than for $\Delta n=1$
instabilities; see Figure 4.
Figure 4. (Left) Interval of Floquet exponents that parameterize the $\Delta
n=2$ high-frequency isola for parameters $\alpha=1$, $\beta=0.25$, and
$\sigma=1$ as a function of $\varepsilon$. ($\beta=0.7$ only gives rise to a
$\Delta n=1$ isola: $\beta$ must be changed to satisfy (3.10) for a $\Delta
n=2$ isola to arise.) Solid blue curves indicate the asymptotic boundaries of
this interval according to (4.30). Blue circles indicate the numerical
boundaries computed using FFH. The solid red curve gives the Floquet exponent
corresponding to the most unstable spectral element of the isola according to
(4.31). The red circles indicate the same but computed numerically using FFH.
(Right) The real (blue) and imaginary (red) parts of the most unstable
spectral element of the isola as a function of $\varepsilon$. Solid curves
illustrate asymptotic result (4.32). Circles illustrate results of FFH.
From the results above, we obtain an asymptotic expansion for the Floquet
exponent of the most unstable spectral element of the $\Delta n=2$ high-
frequency isola:
$\displaystyle\mu_{*}=\mu_{0}+\frac{\mathcal{P}^{m}_{\mu_{0}}-\mathcal{P}^{n}_{\mu_{0}}}{c_{g}(\mu_{0}+m)-c_{g}(\mu_{0}+n)}\varepsilon^{2}+\mathcal{O}(\varepsilon^{3}).$
(4.31)
Asymptotic expansions for the real and imaginary component of this spectral
element are
$\displaystyle\lambda_{*,r}$
$\displaystyle=\varepsilon^{2}|\mathcal{S}_{2}|\sqrt{-(\mu_{0}+m)(\mu_{0}+n)}+\mathcal{O}(\varepsilon^{3}),$
(4.32a) $\displaystyle\lambda_{*,i}$
$\displaystyle=-\Omega(\mu_{0}+n)-\left[\frac{\mathcal{P}_{\mu_{0}}^{m}c_{g}(\mu_{0}+n)-\mathcal{P}^{n}_{\mu_{0}}c_{g}(\mu_{0}+m)}{c_{g}(\mu_{0}+m)-c_{g}(\mu_{0}+n)}\right]\varepsilon^{2}+\mathcal{O}(\varepsilon^{3}).$
(4.32b)
These expansions are in excellent agreement with numerical computations from
FFH, as is seen in Figure 4. This is a consequence of resolving quadratic
corrections to the real and imaginary components of $\Delta n=2$ high-
frequency isolas simultaneously, unlike in the $\Delta n=1$ case.
Analogous to the derivation of (4.16), the ellipse given by
$\displaystyle\frac{\lambda_{r}^{2}}{\varepsilon^{4}}+$
$\displaystyle\frac{\left[\lambda_{i}+\Omega(\mu_{0}+n)+\varepsilon^{2}\left(\frac{\mathcal{P}_{\mu_{0}}^{m}c_{g}(\mu_{0}+n)-\mathcal{P}^{n}_{\mu_{0}}c_{g}(\mu_{0}+m)}{c_{g}(\mu_{0}+m)-c_{g}(\mu_{0}+n)}\right)\right]^{2}}{\varepsilon^{4}\left(\frac{c_{g}(\mu_{0}+m)+c_{g}(\mu_{0}+n)}{c_{g}(\mu_{0}+m)-c_{g}(\mu_{0}+n)}\right)^{2}}=-\mathcal{S}_{2}^{2}(\mu_{0}+m)(\mu_{0}+n)+O(\varepsilon)$
(4.33)
is asymptotic to the $\Delta n=2$ high-frequency isola. This ellipse has
center that drifts from the collision site at a rate comparable to its semi-
major and -minor axes,
$\displaystyle a_{2}$
$\displaystyle=\varepsilon^{2}|\mathcal{S}_{2}|\sqrt{-(\mu_{0}+m)(\mu_{0}+n)}$
(4.34a) $\displaystyle b_{2}$
$\displaystyle=a_{2}\left|\frac{c_{g}(\mu_{0}+m)+c_{g}(\mu_{0}+n)}{c_{g}(\mu_{0}+m)-c_{g}(\mu_{0}+n)}\right|,$
(4.34b)
respectively. This behavior contrasts with that seen in the $\Delta n=1$ case,
where the center drifts slower than the axes grow. Comparison with numerical
computations using FFH show that (4.33) is an excellent approximation for
$\Delta n=2$ high-frequency isolas; see Figure 5.
Figure 5. (Left) $\Delta n=2$ high-frequency isola for $\alpha=1$,
$\beta=0.25$, $\sigma=1$, and $\varepsilon=10^{-3}$. The solid red curve is
ellipse (4.33). Blue circles are a subset of spectral elements from the
numerically computed isola using FFH. (Right) Floquet
parameterization of the real (blue) and imaginary (red) parts of the isola.
Solid curves illustrate asymptotic result (4.28a). Circles indicate results of
FFH.
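The $\Delta n=2$ quantities are equally explicit. The following is a minimal Python sketch assembling (4.23), (4.27), (4.29), and (4.34) for the $\beta=0.25$ example of Figures 4-5; the collision pair $n=-2$, $m=0$ and the root bracket were located numerically as in the sketch of Section 3, and the parameter values are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import brentq

beta, sigma, eps = 0.25, 1.0, 1e-3     # beta = 0.25 admits a Delta n = 2 collision
c0 = 1.0 - beta
Omega = lambda k: -c0 * k + k**3 - beta * k**5
cg = lambda k: -c0 + 3 * k**2 - 5 * beta * k**4

n, m = -2, 0                           # collision pair located as in Section 3
mu0 = brentq(lambda mu: Omega(mu + m) - Omega(mu + n), 0.3, 0.45)

Q = lambda N, M: sigma * (mu0 + M) / (Omega(mu0 + M) - Omega(mu0 + N))   # (4.23)
u2_0, u2_2 = -sigma / (2 * c0), sigma / (2 * Omega(2))  # Fourier modes of u_2(x)
c2 = sigma**2 * (1.0 / c0 - 1.0 / Omega(2))             # from (2.13b)

S2 = sigma * (Q(n, n + 1) + 2 * u2_2)                                    # (4.27c)
P = lambda N: (mu0 + N) * (sigma * (Q(n, N - 1) + Q(n, N + 1) + 2 * u2_0) + c2)

dcg = cg(mu0 + m) - cg(mu0 + n)
root = 2 * abs(S2 / dcg) * np.sqrt(-(mu0 + m) * (mu0 + n))
M2m, M2p = (P(m) - P(n)) / dcg - root, (P(m) - P(n)) / dcg + root        # (4.29)
a2 = eps**2 * abs(S2) * np.sqrt(-(mu0 + m) * (mu0 + n))                  # (4.34a)
b2 = a2 * abs((cg(mu0 + m) + cg(mu0 + n)) / dcg)                         # (4.34b)
print(mu0, (mu0 + eps**2 * M2m, mu0 + eps**2 * M2p), a2, b2)
```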
### 4.3. High-Frequency Instabilities: $\Delta n\geq 3$
The approach used to obtain the leading-order behavior of the $\Delta n=1,2$
high-frequency isolas generalizes to higher-order isolas. The method consists
of the following steps, each of which is readily implemented in a symbolic
programming language (a minimal symbolic sketch of the
$\mathcal{O}(\varepsilon)$ solvability step is given after the list):
1. (i)
Given $\Delta n\in\mathbb{N}$, determine the unique $\mu_{0}$, $m$, and $n$ to
satisfy collision condition (3.9), assuming $\beta$ satisfies (3.10).
2. (ii)
Expand about the collided eigenvalues in a formal power series of
$\varepsilon$ and similarly expand their corresponding eigenfunctions and
Floquet exponents. To maintain uniqueness of the eigenfunctions, choose the
normalization (4.4).
3. (iii)
Substitute these expansions into the spectral problem (3.6). Collecting like
powers of $\varepsilon$, construct a hierarchy of inhomogeneous linear
problems to solve.
4. (iv)
Proceed order by order. At each order, impose solvability and normalization
conditions. Invert the linear operator against its range using the method of
undetermined coefficients. Use previous normalization and solvability
conditions as well as the collision condition to simplify problems if
necessary.
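As an illustration of step (iv), the following sympy sketch solves the $\mathcal{O}(\varepsilon)$ solvability system (4.10) for $\lambda_{1}$ and $\gamma_{0}$. The symbol names are ours, and the abbreviations $\mu_{n}=\mu_{0}+n$, $\mu_{m}=\mu_{0}+m$, $c_{g,n}=c_{g}(\mu_{0}+n)$, and $c_{g,m}=c_{g}(\mu_{0}+m)$ are shorthand, not notation from the text.

```python
import sympy as sp

lam1, gam0 = sp.symbols('lambda_1 gamma_0')
mu1, sigma, mun, mum, cgn, cgm = sp.symbols('mu_1 sigma mu_n mu_m cg_n cg_m',
                                            real=True)
# Shorthand (ours): mu_n = mu0 + n, mu_m = mu0 + m, cg_n = c_g(mu0 + n), etc.

# Solvability conditions (4.10a)-(4.10b) of the O(eps) problem:
eq1 = sp.Eq(lam1 + sp.I * mu1 * cgn - sp.I * sigma * gam0 * mun, 0)
eq2 = sp.Eq(gam0 * (lam1 + sp.I * mu1 * cgm) - sp.I * sigma * mum, 0)

for sol in sp.solve([eq1, eq2], [lam1, gam0], dict=True):
    print(sp.simplify(sol[lam1]))   # the two branches of (4.11a)
```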
We conjecture that this method yields the first nonzero real part correction
to the $\Delta n^{\textrm{th}}$ high-frequency isola at
$\mathcal{O}(\varepsilon^{\Delta n})$. We have shown that this conjecture
holds for $\Delta n=1,2$. For $\Delta n=3$, one can show that the high-
frequency isola is asymptotic to the ellipse
$\displaystyle\frac{\lambda_{r}^{2}}{\varepsilon^{6}}+\frac{\left(\lambda_{i}+\Omega(\mu_{0}+n)+\varepsilon^{2}\left[\frac{\mathcal{P}^{m}_{\mu_{0}}c_{g}(\mu_{0}+n)-\mathcal{P}^{n}_{\mu_{0}}c_{g}(\mu_{0}+m)}{c_{g}(\mu_{0}+m)-c_{g}(\mu_{0}+n)}\right]\right)^{2}}{\varepsilon^{6}\left(\frac{c_{g}(\mu_{0}+m)+c_{g}(\mu_{0}+n)}{c_{g}(\mu_{0}+m)-c_{g}(\mu_{0}+n)}\right)^{2}}=-\mathcal{S}_{3}^{2}(\mu_{0}+m)(\mu_{0}+n)+O(\varepsilon),$
(4.35)
where
$\displaystyle\mathcal{S}_{3}=\sigma\left[Q_{n,n+1}Q_{n,n+2}+2\widehat{u_{2}(x)}_{2}(Q_{n,n+1}+Q_{n,n+2})+2\widehat{u_{3}(x)}_{3}\right].$
(4.36)
The semi-major and -minor axes of (4.35) scale as
$\mathcal{O}(\varepsilon^{3})$, as the conjecture predicts. If true for all
$\Delta n$, this conjecture explains why higher-order isolas are challenging
to detect both in numerical and perturbation computations of the stability
spectrum.
One notices that the center of (4.35) drifts similarly to that of the $\Delta
n=2$ high-frequency isolas (4.33). In fact, centers of higher-order isolas
(beyond $\Delta n=1$) all drift at a similar rate, as these isolas all satisfy
the same $\mathcal{O}(\varepsilon^{2})$ problem and, hence, yield corrections
at this order. Consequently, one can expect to incur corrections to the
imaginary component of the high-frequency isola before reaching
$\mathcal{O}(\varepsilon^{\Delta n})$, making it more difficult to prove our
conjecture about the first occurrence of a nonzero real part correction.
## 5\. $\Delta n=1$ High-Frequency Instabilities at Higher-Order
As we saw in Section 4.1, the asymptotic formulas for the $\Delta n=1$ high-
frequency isola fail to capture its $\mathcal{O}(\varepsilon^{2})$ drift along
the imaginary axis. This is expected, as we only considered the
$\mathcal{O}(\varepsilon)$ problem. In this section, we go beyond the leading-
order behavior of these instabilities. We expect similar calculations to arise
if one considered the $\mathcal{O}(\varepsilon^{p})$ problem for a generic
$\Delta n$ isola, where $p>\Delta n$.
### 5.1. The $\mathcal{O}(\varepsilon)$ Problem Revisited
Finishing our work from Section 4.1, we solve for $w_{1}(x)$ in (4.8). We find
$\displaystyle
w_{1}(x)=Q_{n,n-1}e^{i(n-1)x}+\gamma_{0}Q_{n,m+1}e^{i(m+1)x}+\gamma_{1}e^{imx},$
(5.1)
where $Q_{N,M}$ is defined as in (4.23) and $\gamma_{1}$ is an undetermined
constant at this order.
### 5.2. The $\mathcal{O}(\varepsilon^{2})$ Problem
At $\mathcal{O}(\varepsilon^{2})$, we have
$\displaystyle(L_{0}-\lambda_{0})w_{2}(x)=$
$\displaystyle~{}\lambda_{2}w_{0}(x)+\lambda_{1}w_{1}(x)-ic_{0}(\mu_{1}w_{1}(x)+\mu_{2}w_{0}(x))-c_{2}(i\mu_{0}+\partial_{x})w_{0}(x)$
(5.2)
$\displaystyle-3i(i\mu_{0}+\partial_{x})^{2}(\mu_{1}w_{1}(x)+\mu_{2}w_{0}(x))+3\mu_{1}^{2}(i\mu_{0}+\partial_{x})w_{0}(x)$
$\displaystyle-5\beta
i(i\mu_{0}+\partial_{x})^{4}(\mu_{1}w_{1}(x)+\mu_{2}w_{0}(x))+10\beta\mu_{1}^{2}(i\mu_{0}+\partial_{x})^{3}w_{0}(x)$
$\displaystyle-2\sigma(i\mu_{0}+\partial_{x})(u_{1}(x)w_{1}(x)+u_{2}(x)w_{0}(x))-2\sigma
i\mu_{1}u_{1}(x)w_{0}(x).$
After substituting $w_{0}(x)$, $w_{1}(x)$, $u_{1}(x)$, and $u_{2}(x)$ into
(5.2), solvability conditions of the second-order problem form the linear
system
$\displaystyle\displaystyle\begin{pmatrix}1&-i\sigma(\mu_{0}+n)\\\
\gamma_{0}&\lambda_{1}+i\mu_{1}c_{g}(\mu_{0}+m)\end{pmatrix}\begin{pmatrix}\lambda_{2}\\\
\gamma_{1}\end{pmatrix}=i\begin{pmatrix}\sigma\gamma_{0}\mu_{1}-\tilde{\mathcal{C}}_{\mu_{2},\mu_{1},\mu_{0}}^{n,-1}\\\
\sigma\mu_{1}-\gamma_{0}\tilde{\mathcal{C}}_{\mu_{2},\mu_{1},\mu_{0}}^{m,1}\end{pmatrix},$
(5.3)
where
$\displaystyle\tilde{\mathcal{C}}_{\mu_{2},\mu_{1},\mu_{0}}^{N,k}$
$\displaystyle=\mu_{2}c_{g}(\mu_{0}+N)-\tilde{\mathcal{P}}_{\mu_{0}}^{N,k}+\mu_{1}^{2}\mathcal{D}^{N}_{\mu_{0}},$
(5.4a) $\displaystyle\tilde{\mathcal{P}}_{\mu_{0}}^{N,k}$
$\displaystyle=(\mu_{0}+N)\left[\sigma(Q_{n,N+k}+2\upsilon_{2,0})+c_{2}\right],$
(5.4b) $\displaystyle\mathcal{D}_{\mu_{0}}^{N}$
$\displaystyle=3(\mu_{0}+N)-10\beta(\mu_{0}+N)^{3}.$ (5.4c)
For $\mu_{1}\in(-M_{1},M_{1})$ (4.12), one can show that
$\displaystyle\textrm{det}\begin{pmatrix}1&-i\sigma(\mu_{0}+n)\\\
\gamma_{0}&\lambda_{1}+i\mu_{1}c_{g}(\mu_{0}+m)\end{pmatrix}$
$\displaystyle=2\lambda_{1,r}.$ (5.5)
Since $\lambda_{1,r}\neq 0$ for this interval of $\mu_{1}$, it follows that
(5.3) is invertible. Using Cramer’s rule and (4.10a) and (4.10b) (the
solvability conditions at $\mathcal{O}(\varepsilon)$), we find
$\displaystyle\lambda_{2}=-\frac{i}{2\lambda_{1,r}}\left(\mathcal{A}\lambda_{1}+i\mu_{1}\mathcal{B}\right),$
(5.6)
where
$\displaystyle\mathcal{A}$
$\displaystyle=\tilde{\mathcal{C}}_{\mu_{2},\mu_{1},\mu_{0}}^{m,1}+\tilde{\mathcal{C}}_{\mu_{2},\mu_{1},\mu_{0}}^{n,-1},$
(5.7a) $\displaystyle\mathcal{B}$
$\displaystyle=c_{g}(\mu_{0}+m)\tilde{\mathcal{C}}_{\mu_{2},\mu_{1},\mu_{0}}^{n,-1}+c_{g}(\mu_{0}+n)\tilde{\mathcal{C}}_{\mu_{2},\mu_{1},\mu_{0}}^{m,1}-\sigma^{2}(2\mu_{0}+m+n).$
(5.7b)
### 5.3. Determination of $\mu_{2}$: The Regular Curve Condition
A quick calculation shows that $\lambda_{2}$ has two branches, $\lambda_{2,+}$
and $\lambda_{2,-}$, and, for any $\mu_{2}\in\mathbb{R}$,
$\lambda_{2,+}=-\overline{\lambda_{2,-}}$. Consequently, (5.6) results in a
spectrum that is symmetric about the imaginary axis regardless of $\mu_{2}$.
We want this spectrum to be a continuous, closed curve about the imaginary
axis. As we shall see, this additional constraint is enough to determine
$\mu_{2}$ uniquely. We call this additional constraint the regular curve
condition.
To motivate the regular curve condition, consider the real and imaginary parts
of (5.6):
$\displaystyle\lambda_{2,r}$
$\displaystyle=\frac{1}{2\lambda_{1,r}}\left(\mathcal{A}\lambda_{1,i}+\mu_{1}\mathcal{B}\right),$
(5.8a) $\displaystyle\lambda_{2,i}$ $\displaystyle=-\frac{\mathcal{A}}{2}.$
(5.8b)
As $|\mu_{1}|$ approaches $M_{1}$, $\lambda_{1,r}$ approaches zero. To avoid
unwanted blow-up of $\lambda_{2,r}$, we must impose
$\displaystyle\displaystyle\lim_{|\mu_{1}|\rightarrow
M_{1}}\left(\mathcal{A}\lambda_{1,i}+\mu_{1}\mathcal{B}\right)=0.$ (5.9)
Since $\mu_{1}$ appears in $\mathcal{A}$ only as $\mu_{1}^{2}$, we can rewrite
(5.9) with the help of (4.11a) as
$\displaystyle\lim_{\mu_{1}^{2}\rightarrow
M_{1}^{2}}\left(-\frac{\mathcal{A}}{2}(c_{g}(\mu_{0}+m)+c_{g}(\mu_{0}+n))+\mathcal{B}\right)=0.$
(5.10)
Equation (5.10) is the regular curve condition for second-order corrections to
the $\Delta n=1$ isola. Unpacking the definitions of $\mathcal{A}$ and
$\mathcal{B}$ above, the regular curve condition implies that
$\displaystyle\mu_{2}=\frac{\textrm{P}^{m,1}_{\mu_{0}}-\textrm{P}^{n,-1}_{\mu_{0}}}{c_{g}(\mu_{0}+m)-c_{g}(\mu_{0}+n)}-\frac{2\sigma^{2}(2\mu_{0}+m+n)}{(c_{g}(\mu_{0}+m)-c_{g}(\mu_{0}+n))^{2}},$
(5.11)
where
$\displaystyle\textrm{P}^{N,k}_{\mu_{0}}=\tilde{\mathcal{P}}^{N,k}_{\mu_{0}}+\frac{4\sigma^{2}(\mu_{0}+m)(\mu_{0}+n)}{(c_{g}(\mu_{0}+m)-c_{g}(\mu_{0}+n))^{2}}\mathcal{D}^{N}_{\mu_{0}}.$
(5.12)
Therefore, to $\mathcal{O}(\varepsilon^{2})$, the asymptotic interval of
Floquet exponents that parameterizes the $\Delta n=1$ high-frequency isola is
$\displaystyle\mu\in\left(\mu_{0}-\varepsilon
M_{1}+\varepsilon^{2}\mu_{2},\mu_{0}+\varepsilon
M_{1}+\varepsilon^{2}\mu_{2}\right).$ (5.13)
### 5.4. The Most Unstable Eigenvalue
To $\mathcal{O}(\varepsilon)$, the expression for the real part of the $\Delta
n=1$ high-frequency isola is
$\displaystyle\lambda^{(1)}_{r}:=\varepsilon\lambda_{1,r}=\pm\varepsilon\sqrt{-\mu_{1}^{2}\left[\frac{c_{g}(\mu_{0}+m)-c_{g}(\mu_{0}+n)}{2}\right]^{2}-\sigma^{2}(\mu_{0}+m)(\mu_{0}+n)}.$
(5.14)
The most unstable eigenvalue of (5.14) occurs when $\mu_{1}=\mu_{*,1}$, where
$\mu_{*,1}$ is a critical point of $\lambda^{(1)}_{r}$:
$\displaystyle\frac{\partial\lambda^{(1)}_{r}}{\partial\mu_{1}}\biggr{|}_{\mu_{*,1}}=0.$
(5.15)
Solving (5.15), one finds $\mu_{*,1}=0$, and we conclude that the Floquet
exponent that corresponds to the most unstable eigenvalue is
$\mu_{*}=\mu_{0}+O(\varepsilon^{2})$, as found in Section 4.1.
To $\mathcal{O}(\varepsilon^{2})$, the real part of our isola is
$\lambda^{(2)}_{r}:=\varepsilon\lambda_{1,r}+\varepsilon^{2}\lambda_{2,r},$
(5.16)
where $\lambda_{1,r}$ is given in (5.14) and $\lambda_{2,r}$ is given in
(5.8a). Without loss of generality, we choose the positive branch of
$\lambda_{1,r}$.
Taking inspiration from (5.15), we consider the critical points of (5.16):
$\frac{\partial\lambda_{r}^{(2)}}{\partial\mu_{1}}\biggr{|}_{\mu_{*,1}}=0.$
(5.17)
After some tedious calculations, (5.17) yields the following equation for
$\mu_{*,1}$:
$\displaystyle-\mu_{*,1}\lambda_{*,1,r}^{2}\left(\frac{c_{g}(\mu_{0}+m)-c_{g}(\mu_{0}+n)}{2}\right)^{2}+\frac{\varepsilon}{2}\biggr{[}\lambda_{*,1,r}^{2}\biggr{(}\lambda_{*,1,i}\frac{\partial\mathcal{A}}{\partial\mu_{1}}\biggr{|}_{\mu_{*,1}}+\mathcal{A}_{*}\frac{\partial\lambda_{1,i}}{\partial\mu_{1}}\biggr{|}_{\mu_{*,1}}$
(5.18)
$\displaystyle\phantom{[}\phantom{(}+\mu_{*,1}\frac{\partial\mathcal{B}}{\partial\mu_{1}}\biggr{|}_{\mu_{*,1}}+\mathcal{B}_{*}\biggr{)}+\mu_{*,1}\left(\mathcal{A}_{*}\lambda_{*,1,i}+\mu_{*,1}\mathcal{B}_{*}\right)\left(\frac{c_{g}(\mu_{0}+m)-c_{g}(\mu_{0}+n)}{2}\right)^{2}\biggr{]}=0,$
where it is understood that starred variables are evaluated at $\mu_{*,1}$.
Unpacking the definitions of $\mathcal{A}$, $\mathcal{B}$, $\lambda_{1,r}$,
and $\lambda_{1,i}$ reveals that (5.18) is a quartic equation for $\mu_{*,1}$
with the highest degree coefficient multiplied by the small parameter
$\varepsilon$. Rather than solving for $\mu_{*,1}$ directly, we obtain the
roots perturbatively.
An application of the method of dominant balance to (5.18) shows that all of
its roots have leading order behavior $\mathcal{O}(\varepsilon^{-1})$, except
for one. Because we anticipate that $\lim_{\varepsilon\rightarrow
0}\mu_{*,1}=0$ to match results at the previous order, it is this non-singular
root that we expect to yield the next order correction for $\mu_{*,1}$.
Therefore, we need not concern ourselves with singular perturbation methods
and, instead, make the following ansatz:
$\displaystyle\mu_{*,1}=0+\varepsilon\mu_{*,1,1}+O(\varepsilon^{2}).$ (5.19)
Plugging our ansatz into (5.18) and keeping terms of lowest power in
$\varepsilon$, we find the following linear equation to solve for
$\mu_{*,1,1}$:
$\displaystyle-\mu_{*,1,1}\left(\frac{c_{g}(\mu_{0}+m)-c_{g}(\mu_{0}+n)}{2}\right)^{2}+\frac{1}{2}\left(\mathcal{B}_{0}-\mathcal{A}_{0}\right)=0,$
(5.20)
where $\mathcal{A}_{0}$ and $\mathcal{B}_{0}$ are $\mathcal{A}$ and
$\mathcal{B}$ evaluated at $\mu_{1}=0$, respectively. Using the definition of
$\mathcal{A}$ and $\mathcal{B}$ together with the expression for $\mu_{2}$ in
(5.11) above, one finds that
$\displaystyle\mu_{*,1,1}=-4\sigma^{2}(\mu_{0}+m)(\mu_{0}+n)\left(\frac{\mathcal{D}^{m}_{\mu_{0}}-\mathcal{D}^{n}_{\mu_{0}}}{(c_{g}(\mu_{0}+m)-c_{g}(\mu_{0}+n))^{3}}\right).$
(5.21)
It follows that the Floquet exponent corresponding to the most unstable
eigenvalue of the $\Delta n=1$ high-frequency isola is
$\displaystyle\mu_{*}=\mu_{0}+\varepsilon^{2}(\mu_{2}+\mu_{*,1,1})+O(\varepsilon^{3}).$
(5.22)
The most unstable eigenvalue is then
$\displaystyle\lambda_{*}=\lambda_{0}+\varepsilon\lambda_{1}|_{\mu_{1}=0}+\varepsilon^{2}\lambda_{2}|_{\mu_{1}=0,\mu_{2}=\mu_{2}+\mu_{*,1,1}}+O(\varepsilon^{3}).$
(5.23)
Figure 6 and Figure 7 show improvements to results in Figure 2 and Figure 3,
respectively, as a result of our higher-order calculations.
Figure 6. (Left) Interval of Floquet exponents that parameterize the $\Delta
n=1$ high-frequency isola for parameters $\alpha=1$, $\beta=0.7$, and
$\sigma=1$ as a function of $\varepsilon$. Solid blue curves indicate the
asymptotic boundaries of this interval according to (5.13), while the dotted
blue curves give the $\mathcal{O}(\varepsilon)$ result. Blue circles indicate
the numerical boundaries computed using FFH. The solid red curve gives the
Floquet exponent corresponding to the most unstable spectral element of the
isola according to (5.22), while the dotted red gives the
$\mathcal{O}(\varepsilon)$ result. The red circles indicate the same but
computed numerically using FFH. (Right) The real (blue) and imaginary (red)
parts of the most unstable spectral element of the isola as a function of
$\varepsilon$. Solid curves illustrate asymptotic result (5.23). Dotted curves
illustrate the asymptotic results only to $\mathcal{O}(\varepsilon)$. Circles
illustrate results of FFH.
Figure 7. (Left) $\Delta n=1$ high-frequency isola
for $\alpha=1$, $\beta=0.7$, $\sigma=1$, and $\varepsilon=10^{-3}$. The solid
red curve is parameterized by (5.6). This curve is no longer an ellipse, but a
more complicated algebraic curve. For comparison, the dotted red curve is the
ellipse found at $\mathcal{O}(\varepsilon)$. Blue circles are a subset of
spectral elements from the numerically computed isola using FFH. (Right)
Floquet parameterization of the real (blue) and imaginary (red) parts of the
isola. Solid curves illustrate asymptotic result (5.6). Dotted curves
illustrate the asymptotic results only to $\mathcal{O}(\varepsilon)$. Circles
indicate results of FFH.
## 6\. Conclusions
In this work, we investigate the asymptotic behavior of high-frequency
instabilities of small-amplitude Stokes waves of the Kawahara equation. For
the largest of these instabilities ($\Delta n=1,2$), we introduce a
perturbation method to compute explicitly
1. (i)
the asymptotic interval of Floquet exponents that parameterize the high-
frequency isola,
2. (ii)
the leading-order behavior of its most unstable spectral elements, and
3. (iii)
the leading-order curve asymptotic to the isola.
We outline the procedure to compute these quantities for higher-order isolas.
For the first time, we compare these asymptotic results with numerical results
and find excellent agreement between the two. We also obtain higher-order
asymptotic results for the $\Delta n=1$ high-frequency isolas by introducing
the regular curve condition.
The perturbation method used throughout our investigation holds only for
nonresonant Stokes waves (2.12). Resonant waves require a modified Stokes
expansion, and as a result of this modification, the leading-order behavior of
the high-frequency isolas will change. Some numerical work has been done to
investigate this effect [28], but no perturbation methods have been proposed.
## 7\. Appendix
Theorem 1. For each $\Delta n\in\mathbb{N}$, if $\beta$ satisfies (2.12) and
(3.10), there exists a unique $\mu_{0}\in[0,1/2]$ and unique $m\neq
n\in\mathbb{Z}$ such that the collision condition (3.9) is satisfied.
Proof. Define
$\displaystyle F(k;\Delta n)=\frac{\Omega(k+\Delta n)-\Omega(k)}{\Delta n}.$
(7.1)
Using the definition of the dispersion relation $\Omega$,
$\displaystyle F(k;\Delta n)=$ $\displaystyle~{}5\beta k^{4}+10\beta\Delta
nk^{3}+(10\beta(\Delta n)^{2}-3)k^{2}+(5\beta(\Delta n)^{3}-3\Delta n)k$ (7.2)
$\displaystyle+\beta(\Delta n)^{4}-(\Delta n)^{2}+1-\beta.$
A direct calculation shows that
$\displaystyle F(k;\Delta n)=F(-(k+\Delta n);\Delta n).$ (7.3)
Hence, the graph of $F$ is symmetric about $k=-\Delta n/2$. We prove the
desired result for the various cases of $\Delta n$.
Case 1. Suppose $\Delta n=1$. Then, $k_{1}=0$ and $k_{2}=-1$ are roots of $F$
by inspection. The remaining roots are
$\displaystyle k_{3,4}=\frac{-1\pm\sqrt{\frac{12}{5\beta}-3}}{2}.$ (7.4)
Because $\beta$ satisfies (3.10), one can show that
$\displaystyle 0<\frac{12}{5\beta}-3<1,$ (7.5)
so that $k_{3,4}\in(k_{2},k_{1})$. Because $F$ is symmetric about $k=-\Delta
n/2$, we have $k_{3}\in\left(-1/2,0\right)$ and
$k_{4}\in\left(-1,-1/2\right)$.
Each of these wavenumbers $k_{j}$ is mapped to a Floquet exponent
$\mu\in(-1/2,1/2]$ according to
$\displaystyle\mu(k)=k-[k],$ (7.6)
where $[\cdot]$ denotes the nearest integer function (our convention is
$\left[p/2\right]=(p-1)/2$ for $p$ odd). Both $k_{1}$ and $k_{2}$ map to
$\mu=0$. One checks that $\mu(k_{3})=-\mu(k_{4})\neq 0$ and
$|\mu(k_{3})|=|\mu(k_{4})|<1/2$, since $k_{4}=-(k_{3}+1)$ and $-1/2<k_{3}<0$.
Thus, the requisite $\mu_{0}\in(0,1/2)$ is $\mu(k_{j})$, where $j$ is either
$3$ or $4$ depending on which has the correct sign. Then, $n=[k_{j}]$ and
$m=n+\Delta n$. These are unique by the uniqueness of $k_{j}$.
Case 2. Suppose $\Delta n=2$. A calculation of $F(-1;2)$ and $F_{k}(-1;2)$
shows that $k_{1,2}=-1$ is a double root. The remaining roots are
$\displaystyle k_{3,4}=-1\pm\sqrt{\frac{3}{5\beta}-2}.$ (7.7)
Clearly, $\mu(k_{1,2})=0$. Since $k_{4}=-(k_{3}+2)$, we again have
$\mu(k_{3})=-\mu(k_{4})$. Also, from the formula for $k_{3}$ above, we have
that $-1<k_{3}<0$ by (3.10), so $\mu(k_{3})$ is nonzero. Thus, $\mu(k_{j})$ is
the requisite $\mu_{0}\in(0,1/2]$, where $j$ is either $3$ or $4$ depending on
which has the correct sign. Again, $n=[k_{j}]$ and $m=n+\Delta n$ are uniquely
defined.
Remark. Unlike in the first case, we cannot guarantee $\mu_{0}\neq 1/2$.
Indeed, this value can be achieved when $\beta=4/15$.
Case 3. Suppose $\Delta n\geq 3$. The discriminant of $F(k;\Delta n)$ with
respect to $k$ is
$\displaystyle\Delta_{k}[F]=5\beta\left[(\Delta
n)^{2}-4\right]\left[\beta((\Delta
n)^{2}+4)-4\right]\left[5\beta\left(\beta\left((\Delta
n)^{4}+4\right)-2\left((\Delta n)^{2}+2\right)\right)+9\right]^{2}.$ (7.8)
For $\beta$ satisfying inequality (3.10), we have $\Delta_{k}[F]<0$, implying
there are two distinct real roots of $F$. These roots must be nonpositive by
an application of Descartes’ Rule of Signs on $F$. Without loss of generality,
suppose $k_{2}<k_{1}$. Then, by the symmetry of $F$ about $k=-\Delta n/2$,
$k_{2}=-(k_{1}+\Delta n)$. It follows that $\mu(k_{1})=-\mu(k_{2})$. Thus,
$\mu(k_{j})$ is the requisite value of $\mu_{0}\in[0,\frac{1}{2}]$, where
$j=1$ or $2$ depending on which has the correct sign. The integers $n$ and $m$
are uniquely defined as before.
Remark. In [21], $\beta=1/((\Delta n/2)^{2}+1)$ is included in inequality
(3.10) when $\Delta n\geq 3$. For this $\beta$, $F(k;\Delta n)$ has a double
root at $k_{*}=-\Delta n/2$. However, $\Omega(k_{*})=0$, which corresponds to
an eigenvalue collision at the origin. Eigenvalue collisions at the origin do
not satisfy (3.9).
In each of these cases, we have found $k_{j}<0$ such that $F(k_{j};\Delta
n)=0$. Importantly, one must check that $\Omega(k_{j})\neq 0$ for such
$k_{j}$. Indeed, suppose $\Omega(k_{j})=0$. A direct calculation shows that
$k_{j}=\pm 1$, $0$, or $k_{j}^{2}=(1-\beta)/\beta$. Clearly $k_{j}=0$ or
$k_{j}=1$ contradicts $k_{j}<0$. If $k_{j}=-1$, then $F(-1;\Delta n)=0$ implies
$\beta=1/[(\Delta n-1)^{2}+1]$, which contradicts (2.12) when $\Delta n\neq
2$. If $\Delta n=2$, $\beta=1/2$, which contradicts (3.10).
It remains to be seen if $k_{j}^{2}=(1-\beta)/\beta$ leads to contradiction.
Indeed, a straightforward (although tedious) calculation shows that, if
$k_{j}^{2}=(1-\beta)/\beta$, $F(k_{j};\Delta n)=0$ implies $\beta=0$,
$\beta=1/[1+(\Delta n/2)^{2}]$, $\beta=1/[1+(\Delta n-1)^{2}]$, or
$\beta=1/[1+(\Delta n+1)^{2}]$. All of these lead to contradictions of (2.12)
or (3.10). Therefore, we must have
$\Omega(k_{j})=\Omega(\mu_{0}+n)=\Omega(\mu_{0}+m)\neq 0$ in all cases, as
desired.
$\square$
In expressions for the isolas derived in Sections 4 and 5, factors of
$c_{g}(\mu_{0}+m)-c_{g}(\mu_{0}+n)$ appear in denominators. A consequence of
Theorem 1 is that this factor is never zero:
Corollary 1. Fix $\Delta n\in\mathbb{N}$ and choose $\beta$ to satisfy (2.12)
and (3.10). Consider $\mu_{0}\in[0,1/2]$ that solves
$\Omega(\mu_{0}+m)=\Omega(\mu_{0}+n)$ for unique integers $m,n$ such that
$m=n+\Delta n$. Suppose, in addition, that $\mu_{0}$ solves
$c_{g}(\mu_{0}+m)=c_{g}(\mu_{0}+n)$, where $c_{g}(k)=\Omega^{\prime}(k)$.
Then, $\Delta n=2$ and $\mu_{0}=0$.
Proof. If $\Omega(\mu_{0}+m)=\Omega(\mu_{0}+n)$ and
$c_{g}(\mu_{0}+m)=c_{g}(\mu_{0}+n)$, then $k_{0}=\mu_{0}+m$ is a double root
of $F(k;\Delta n)$. From the proof of the theorem above, the only double root
is $k_{0}=-1$ (i.e. $\mu_{0}=0$) when $\Delta n=2$.
$\square$
The corresponding eigenvalue collision for this $\Delta n$ and $\mu_{0}$
happens at the origin in the complex spectral plane and is not of interest to
us. Thus, $c_{g}(\mu_{0}+m)\neq c_{g}(\mu_{0}+n)$.
In Section 4.2, the quantity $\mathcal{S}_{2}$ (4.27c) must be nonzero in
order for $\Delta n=2$ high-frequency instabilities to exist at
$\mathcal{O}\left(\varepsilon^{2}\right)$. The following corollary shows
$\mathcal{S}_{2}\neq 0$ for $\beta$ satisfying inequality (3.10).
Corollary 2. For $\mathcal{S}_{2}$ defined in (4.27c) and $\beta$ satisfying
inequality (3.10) for $\Delta n=2$, $\mathcal{S}_{2}\neq 0$.
Proof. Since $\Delta n=2$, we have from (3.10) that $1/5<\beta<3/10$. In
addition, from Theorem 1 and Corollary 1, we know $k_{1,2}=-1$ is a double root
of $F(k;\Delta n)$ for all $\beta$ in this interval, and the remaining roots
of $F$ are
$\displaystyle k_{3,4}=-1\pm\sqrt{\frac{3}{5\beta}-2}.$ (7.9)
These remaining roots correspond to the nonzero eigenvalue collisions that
give rise to the $\Delta n=2$ high-frequency instability.
The quantity $\mathcal{S}_{2}$ can be written in terms of $k_{3,4}$ as
$\displaystyle\mathcal{S}_{2}=\sigma^{2}\left[\frac{k_{3,4}+1}{\Omega(k_{3,4}+1)-\Omega(k_{3,4})}+\frac{1}{\Omega(2)}\right].$
(7.10)
Because $k_{3,4}$ are symmetric about $k=-1$ (from the symmetry of $F$), the
value of $\mathcal{S}_{2}$ is independent of the choice of $k_{3,4}$. Using
the definition of the dispersion relation $\Omega$ (2.14), (7.10) simplifies
to
$\displaystyle\mathcal{S}_{2}=\frac{\sigma^{2}}{2(1-5\beta)},$ (7.11)
which is nonzero for $1/5<\beta<3/10$.
$\square$
## References
* [1] B. Akers and W. Gao. Wilton ripples in weakly nonlinear model equations. _Commun. Math. Sci._ 10(3): 1015-1024, 2012.
* [2] B. Akers and D. P. Nicholls. Spectral stability of deep two-dimensional gravity water waves: repeated eigenvalues. _SIAM J. Appl. Math._ , 130(2): 81-107, 2012.
* [3] B. Akers and D. P. Nicholls. The spectrum of finite depth water waves. _European Journal of Mechanics-B/Fluids_ , 46: 181-189, 2014.
* [4] B. Akers. Modulational instabilities of periodic traveling waves in deep water. _Physica D_ , 300: 26-33, 2015.
* [5] T. B. Benjamin. Instability of periodic wave trains in nonlinear dispersive systems. _Proceedings, Royal Society of London, A,_ 299:59-79, 1967.
* [6] T. B. Benjamin and J. E. Feir. The disintegration of wave trains on deep water. part i. theory. _Journal of Fluid Mechanics_ , 27:417-430, 1967.
* [7] T. H. Bridges and A. Mielke. A proof of the Benjamin-Feir instability. _Archive for Rational Mechanics and Analysis_ , 133:145-198, 1995.
* [8] B. Deconinck and T. Kapitula. The orbital stability of the cnoidal waves of the Korteweg-deVries equation. _Phys. Letters A_ , 374: 4018-4022, 2010.
* [9] B. Deconinck and J. N. Kutz. Computing spectra of linear operators using the Floquet-Fourier-Hill method. _Journal of Computational Physics_ , 219(1): 296-321, 2006.
* [10] B. Deconinck and K. Oliveras. The instability of periodic surface gravity waves. _Journal of Fluid Mechanics_ , 675:141-167, 2011.
* [11] B. Deconinck and O. Trichtchenko. High-frequency instabilities of small-amplitude solutions of Hamiltonian PDEs. _Discrete & Continuous Dynamical Systems-A_ , 37(3):1323-1358, 2017.
* [12] F. Dias and C. Kharif. Nonlinear gravity and gravity-capillary waves. _Annu. Rev. Fluid Mech._ , 31: 331-341, 1999.
* [13] J. Hammack and D. Henderson. Resonant interactions among surface water waves. _Annu. Rev. Fluid Mech._ , 25:55-97, 1993.
* [14] M. Haragus, E. Lombardi, and A. Scheel. Spectral stability of wave trains in the Kawahara equation. _Journal of Mathematical Fluid Mechanics_ , 8(4):482-509, 2006.
* [15] M. Haragus and T. Kapitula. On the spectra of periodic waves for infinite-dimensional Hamiltonian systems, _Phys. D_ , 237: 2649-2671, 2008.
* [16] S. E. Haupt and J. P. Boyd. Modeling nonlinear resonance: a modification to the Stokes’ perturbation expansion. _Wave Motion_ , 10(1):83-98, 1988.
* [17] M. A. Johnson, K. Zumbrun, and J. C. Bronski. On the modulation equations and stability of periodic generalized Korteweg–de Vries waves via Bloch decompositions. _Phys. D_ , 239: 2057-2065, 2010.
* [18] T. Kapitula and K. Promislow. _Spectral and Dynamical Stability of Nonlinear Waves._ Springer, New York, 2013.
* [19] T. Kato. _Perturbation Theory for Linear Operators._ Springer-Verlag, Berlin, 1966.
* [20] T. Kawahara. Oscillatory solitary waves in dispersive media. _J. Phys. Soc. Jpn._ , 33: 1015-1024, 1972.
* [21] R. Kollár, B. Deconinck, and O. Trichtchenko. Direct characterization of spectral stability of small-amplitude periodic waves in scalar Hamiltonian problems via dispersion relation. _SIAM Journal on Mathematical Analysis_ , 51(4): 3145-3169, 2019.
* [22] M. G. Krein. On the application of an algebraic proposition in the theory of matrices of monodromy. _Uspehi Matem. Nauk (N.S.)_ , 6(1(41)):171-177, 1951.
* [23] R. S. MacKay and P. G. Saffman. Stability of water waves. _Proceedings of the Royal Society A_ , 406(1830):115-125, 1986.
* [24] M. Nivala and B. Deconinck. Periodic finite-genus solutions of the KdV equation are orbitally stable. _Physica D_ , 239(13): 1147-1158, 2010.
* [25] J. Pava and F. Natali. (Non)linear instability of periodic traveling waves: Klein-Gordon and KdV type equations. _Adv. Nonlinear Anal._ , 3(2): 95-123, 2014.
* [26] G. G. Stokes. On the theory of oscillatory waves. _Trans. Camb. Phil. Soc._ , 8:441-455, 1847.
* [27] O. Trichtchenko, B. Deconinck, and J. Wilkening. The instability of Wilton ripples. _Wave Motion_ , 66: 147-155, 2016.
* [28] O. Trichtchenko, B. Deconinck, and R. Kollár. Stability of periodic traveling wave solutions to the Kawahara equation. _SIAM Journal on Applied Dynamical Systems_ , 17(4): 2761-2783, 2018.
* [29] G. B. Whitham. Non-linear dispersion of water waves. _Journal of Fluid Mechanics_ , 27:399-412, 1967.
* [30] J. R. Wilton. On ripples. _Phil. Mag._ , Ser. 6, 29(173): 688-700, 1915.
Modules induced from a normal subgroup of prime index
###### Abstract.
Let $G$ be a finite group and $H$ a normal subgroup of prime index $p$. Let
$V$ be an irreducible $\mathbb{F}H$-module and $U$ a quotient of the induced
$\mathbb{F}G$-module $V{\uparrow}$. We describe the structure of $U$, which is
semisimple when $\textup{char}(\mathbb{F})\neq p$ and uniserial if
$\textup{char}(\mathbb{F})=p$. Furthermore, we describe the division rings
arising as endomorphism algebras of the simple components of $U$. We use
techniques from noncommutative ring theory to study
$\textup{End}_{\mathbb{F}G}(V{\uparrow})$ and relate the right ideal structure
of $\textup{End}_{\mathbb{F}G}(V{\uparrow})$ to the submodule structure of
$V{\uparrow}$.
2000 Mathematics subject classification: 20C40, 16S35
## 1\. Introduction
Throughout this paper $G$ will denote a finite group and $H$ will denote a
normal subgroup of prime index $p$. Furthermore, $V$ will denote an
irreducible (right) $\mathbb{F}H$-module, and
$V{\uparrow}=V\otimes_{\mathbb{F}H}\mathbb{F}G$ is the associated induced
$\mathbb{F}G$-module. Let $a$ be an element of $G$ not in $H$, and let
$\Delta:=\textup{End}_{\mathbb{F}H}(V)$ and
$\Gamma:=\textup{End}_{\mathbb{F}G}(V{\uparrow})$.
This paper is motivated by the following problem: “Given an irreducible
$\mathbb{F}H$-module $V$, where $\mathbb{F}$ is an arbitrary field, and a
quotient $U$ of $V{\uparrow}$, determine the submodule structure of $U$ and
the endomorphism algebras of the simple modules.” By Schur’s lemma, $\Delta$
is a division algebra over $\mathbb{F}$, so we shall need techniques from
noncommutative ring theory.
We determine the submodule structure of $U$ by explicitly realizing
$\textup{End}_{\mathbb{F}G}(U)$ as a direct sum of minimal right ideals, or as
a local ring. It suffices to solve our problem in the case when
$U=V{\uparrow}$. Henceforth $U=V{\uparrow}$.
In the case when $\mathbb{F}$ is algebraically closed of characteristic zero,
it is well known that two cases arise. Either $V$ is $G$-stable and
$V{\uparrow}$ is irreducible, or $V$ is not $G$-stable and $V{\uparrow}$ is a
direct sum of $p$ pairwise nonisomorphic irreducible submodules. In [GK96] the
structure of $V{\uparrow}$ is analyzed in the case when $\mathbb{F}$ is an
arbitrary field satisfying $\textup{char}(\mathbb{F})\neq 0$. The assumption
that $\textup{char}(\mathbb{F})\neq 0$ was made to ensure that $\Delta$ is a
field. The main theorem of [GK96] states that the structure of $V{\uparrow}$
is divided into five cases when $V$ is $G$-stable. In this paper, we drop the
hypothesis that $\textup{char}(\mathbb{F})\neq 0$, and even more cases arise
in the stable case (Theorems 5, 8 and 9). Fortunately, all these cases can be
unified by considering the factorization of a certain binomial $t^{p}-\lambda$
in a twisted polynomial ring $\Delta[t;\alpha]$, which is a (left and right)
principal ideal domain.
As we will focus on the case when $\mathbb{F}$ need not be algebraically
closed, a crucial role will be played by the endomorphism algebra
$\Delta=\textup{End}_{\mathbb{F}H}(V)$. In [GK96] the submodules of
$V{\uparrow}$ are described _up to isomorphism_. As this paper is motivated by
computational applications we will strive towards a higher standard: an
explicit description of the vectors in the submodule, and an explicit
description of the matrices in the endomorphism algebra of the submodule. This
is easily achieved in the non-stable case, which we describe for the sake of
completeness.
## 2\. The non-stable case
Let $e_{0},e_{1},\dots,e_{d-1}$ be an $\mathbb{F}$-basis for $V$ and let
$\sigma\colon H\to\textup{GL}(V)$ be the representation afforded by the
irreducible $\mathbb{F}H$-module $V$ relative to this basis. The
_$g$-conjugate_ of $\sigma$ ($g\in G$) is the representation
$h\mapsto(ghg^{-1})\sigma$, and we say that $\sigma$ is _$G$-stable_ if for
each $g\in G$, $\sigma$ is equivalent to its $g$-conjugate. In this section we
shall assume that $\sigma$ is _not_ $G$-stable.
Let $\sigma{\uparrow}\colon G\to\textup{GL}(V{\uparrow})$ be the
representation afforded by $V{\uparrow}$ relative to the basis
$e_{0},\dots,e_{d-1},e_{0}a,\dots,e_{d-1}a,\dots,e_{0}a^{p-1},\dots,e_{d-1}a^{p-1}.$
Note that $G/H=\langle aH\rangle$ has order $p$, and we are writing
$e_{i}a^{j}$ rather than $e_{i}\otimes a^{j}$. Then
$a\sigma{\uparrow}=\begin{pmatrix}0&I&&0\\\ &&\ddots&\\\ 0&0&&I\\\
a^{p}\sigma&0&&0\end{pmatrix},\qquad
h\sigma{\uparrow}=\begin{pmatrix}h\sigma&&&\\\ &({}^{a}h)\sigma&&\\\
&&\ddots&\\\ &&&({}^{a^{p-1}}h)\sigma\end{pmatrix}$
where $h\in H$ and ${}^{a^{i}}h=a^{i}ha^{-i}$. The elements of
$\textup{End}_{\mathbb{F}G}(V{\uparrow})$ are the matrices commuting with
$G\sigma{\uparrow}$, namely the $p\times p$ block scalar matrices ${\rm
diag}(\delta,\dots,\delta)$ where $\delta\in\Delta$.
We shall henceforth assume that $V$ is $G$-stable. In particular, assume that
we know $\alpha\in\textup{Aut}_{\mathbb{F}}(V)$ satisfying
$(aha^{-1})\sigma=\alpha(h\sigma)\alpha^{-1}\qquad\text{for all $h\in H$.}$ (1)
There are ‘practical’ methods for computing $\alpha$. A crude method involves
viewing $(ah_{i}a^{-1})\sigma\alpha=\alpha(h_{i}\sigma)$, where $H=\langle
h_{1},\dots,h_{r}\rangle$, as a system of $(d/e)^{2}r$ homogeneous linear
equations over $\Delta$ in $(d/e)^{2}$ unknowns where $e=|\Delta:\mathbb{F}|$.
The solution space is $1$-dimensional if $V$ is $G$-stable, and
$0$-dimensional otherwise. A more sophisticated method, especially when
$\textup{char}(\mathbb{F})\neq 0$, involves using the meat-axe algorithm, see
[P84], [HR94], [IL00], and [NP95]. There is also a recursive method for
finding $\alpha$ which we shall not discuss here.
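As an illustration, the crude method translates into the following sketch (the
helper `find_alpha` and the SVD-based nullspace extraction are our own
scaffolding, written with NumPy over $\mathbb{R}$; over other fields one would
substitute exact linear algebra):

```python
import numpy as np

def find_alpha(sig_h, sig_aha):
    """Given the matrices h*sigma and (a h a^{-1})*sigma for generators h of
    H, solve (a h a^{-1})sigma . alpha = alpha . (h sigma) as one homogeneous
    linear system in the entries of alpha."""
    d = sig_h[0].shape[0]
    blocks = []
    for A, B in zip(sig_aha, sig_h):
        # vec(A X - X B) = (I kron A - B^T kron I) vec(X), column-stacked.
        blocks.append(np.kron(np.eye(d), A) - np.kron(B.T, np.eye(d)))
    M = np.vstack(blocks)
    _, s, vh = np.linalg.svd(M)
    if s[-1] > 1e-8:   # 0-dimensional solution space: V is not G-stable
        return None
    return vh[-1].reshape((d, d), order='F')   # 1-dimensional: G-stable
```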
## 3\. The elements of $\textup{End}_{\mathbb{F}G}(V{\uparrow})$
In this section we explicitly describe the matrices in
$\Gamma=\textup{End}_{\mathbb{F}G}(V{\uparrow})$ and give an isomorphism
$\Gamma\to(\Delta,\alpha,\lambda)$ where $(\Delta,\alpha,\lambda)$ is a
general cyclic $\mathbb{F}$-algebra, see [L91, 14.5]. It is worth recalling
that if $\textup{char}(\mathbb{F})\neq 0$, then the endomorphism algebra
$\Delta$ is commutative. Even in this case, though, $\Gamma$ can be
noncommutative.
###### Lemma 1.
If $i\in\mathbb{Z}$, then $\alpha^{-i}a^{i}\colon V\to Va^{i}$ is an
$\mathbb{F}H$-isomorphism between the submodules $V$ and $Va^{i}$ of
$V{\uparrow}{\downarrow}$.
###### Proof.
It follows from Eqn (1) that
$va^{i}ha^{-i}=v\alpha^{i}h\alpha^{-i}\qquad\text{($i\in\mathbb{Z}$, $v\in
V$)}$
Replacing $v$ by $v\alpha^{-i}$ gives $v\alpha^{-i}a^{i}h=vh\alpha^{-i}a^{i}$.
Hence $\alpha^{-i}a^{i}$ is an $\mathbb{F}H$-homomorphism, and since it is
invertible, it is an $\mathbb{F}H$-isomorphism. ∎
Conjugation by $\alpha$ induces an automorphism of $\Delta$, which we also
call $\alpha$. [Proof: Conjugating the equation $h\delta=\delta h$ by $\alpha$
and using (1) shows that $\alpha^{-1}\delta\alpha\in\Delta$.] We abbreviate
$\alpha^{-1}\delta\alpha=\delta^{\alpha}$ by $\alpha(\delta)$. The reader can
determine from the context whether the symbol $\alpha$ refers to an element of
$\textup{Aut}_{\mathbb{F}}(V)$, or $\textup{Aut}_{\mathbb{F}}(\Delta)$.
It follows from Eqn (1) that
$(a^{p}ha^{-p})\sigma=\alpha^{p}h\sigma\alpha^{-p}$
for all $h\in H$. Hence $\alpha^{-p}(a^{p}\sigma)$ centralizes $H$. Therefore
$\lambda:=\alpha^{-p}(a^{p}\sigma)$ lies in $\Delta^{\times}$. Setting
$h=a^{p}$ in Eqn (1) shows $a^{p}\sigma=\alpha(a^{p}\sigma)\alpha^{-1}$. Thus
$\alpha(\lambda)=\lambda^{\alpha}=(\alpha^{-p}(a^{p}\sigma))^{\alpha}=\alpha^{-p}(a^{p}\sigma)=\lambda.$
Conjugating by $\alpha^{p}=(a^{p}\sigma)\lambda^{-1}$ induces an inner
automorphism:
$\alpha^{p}(\delta)=\delta^{\alpha^{p}}=\delta^{(a^{p}\sigma)\lambda^{-1}}=\delta^{\lambda^{-1}}=\lambda\delta\lambda^{-1}\qquad\text{($\delta\in\Delta$).}$
In summary, we have proved
###### Lemma 2.
The element $\alpha\in\textup{Aut}_{\mathbb{F}}V$ satisfying Eqn (1) induces
via conjugation an automorphism of $\Delta=\textup{End}_{\mathbb{F}H}(V)$,
also called $\alpha$. There exists $\lambda\in\Delta^{\times}$ satisfying
$\alpha^{-p}(a^{p}\sigma)=\lambda,\quad(2a)\qquad\alpha(\lambda)=\lambda,\quad(2b)\qquad\alpha^{p}(\delta)=\lambda\delta\lambda^{-1}\quad(2c)$
for all $\delta\in\Delta$.
###### Theorem 3.
The representation $\sigma{\uparrow}\colon G\to\textup{GL}(V{\uparrow})$
afforded by $V{\uparrow}$ relative to the $\mathbb{F}$-basis
$e_{0},e_{1},\dots,e_{d-1},\dots,e_{0}\alpha^{-i}a^{i},e_{1}\alpha^{-i}a^{i},\dots,e_{d-1}\alpha^{-i}a^{i},\dots,e_{0}\alpha^{-(p-1)}a^{p-1},e_{1}\alpha^{-(p-1)}a^{p-1},\dots,e_{d-1}\alpha^{-(p-1)}a^{p-1}$ (3)
for $V{\uparrow}$ is given by
$a\sigma{\uparrow}=\begin{pmatrix}0&\alpha&&0\\\ &&\ddots&\\\ 0&0&&\alpha\\\
\alpha\lambda&0&&0\end{pmatrix},\quad(4a)\qquad
h\sigma{\uparrow}=\begin{pmatrix}h\sigma&&&\\\ &h\sigma&&\\\ &&\ddots&\\\
&&&h\sigma\end{pmatrix}\quad(4b)$
where $h\in H$. Moreover, there is an isomorphism from the general cyclic
algebra $(\Delta,\alpha,\lambda)$ to $\textup{End}_{\mathbb{F}G}(V{\uparrow})$
$(\Delta,\alpha,\lambda)\to\textup{End}_{{\mathbb{F}G}}(V{\uparrow})\colon\sum_{i=0}^{p-1}\delta_{i}x^{i}\mapsto\sum_{i=0}^{p-1}D(\delta_{i})X^{i}$
where
$X=\begin{pmatrix}0&I&&0\\\ &&\ddots&\\\ 0&0&&I\\\
\lambda&0&&0\end{pmatrix},\quad(5a)\qquad D(\delta)=\begin{pmatrix}\delta&&&\\\
&\alpha(\delta)&&\\\ &&\ddots&\\\ &&&\alpha^{p-1}(\delta)\end{pmatrix}\quad(5b)$
and $\delta\in\Delta$.
###### Proof.
By Lemma 1, $\alpha^{-i}a^{i}\colon V\to Va^{i}$ is an
$\mathbb{F}H$-isomorphism. Hence $h\sigma{\uparrow}$ is the $p\times p$ block
scalar matrix given by Eqn (4b). Similarly, $a\sigma{\uparrow}$ is given by
Eqn (4a) as
$(v\alpha^{-i}a^{i})a=v\alpha\alpha^{-(i+1)}a^{i+1}\quad\text{and}\quad(v\alpha^{-(p-1)}a^{p-1})a=v\alpha(\alpha^{-p}a^{p})=v\alpha\lambda$
where the last step follows from Eqn (2a).
We follow [J96] and write $R=\Delta[t;\alpha]$ for the twisted polynomial ring
with the usual addition, and multiplication determined by
$t\delta=\alpha(\delta)t$ for $\delta\in\Delta$. The right ideal
$(t^{p}-\lambda)R$ is two-sided as
$t(t^{p}-\lambda)=(t^{p}-\lambda)t\quad\text{and}\quad\delta(t^{p}-\lambda)=(t^{p}-\lambda)\lambda^{-1}\delta\lambda$
by virtue of Eqns (2b) and (2c). The general cyclic algebra
$(\Delta,\alpha,\lambda)$ is defined to be the quotient ring
$R/(t^{p}-\lambda)R$. Since $R$ is a (left) euclidean domain, the elements of
$(\Delta,\alpha,\lambda)$ may be written uniquely as
$\sum_{i=0}^{p-1}\delta_{i}x^{i}$ where $x=t+(t^{p}-\lambda)R$, and
multiplication is determined by the rules $x^{p}=\lambda$ and
$x\delta=\alpha(\delta)x$ where $\delta\in\Delta$.
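To make the multiplication rules concrete, the following sketch (function name
ours) multiplies two elements of $(\Delta,\alpha,\lambda)$ written as
coefficient lists $[\delta_{0},\dots,\delta_{p-1}]$; the caller supplies
$\alpha$ as a callable and $\lambda$ as an element of $\Delta$, and `*` must
respect the (possibly noncommutative) multiplication of $\Delta$. For
instance, taking $p=2$, $\Delta=\mathbb{C}$ over $\mathbb{F}=\mathbb{R}$,
$\alpha$ complex conjugation and $\lambda=-1$ realizes the real quaternions:

```python
def cyclic_mul(u, v, alpha, lam, p):
    # Multiply u = sum u[i] x^i by v = sum v[j] x^j using x d = alpha(d) x
    # and x^p = lam (recall alpha(lam) = lam, so lam pulls through powers).
    w = [0] * p
    for i, ui in enumerate(u):
        for j, vj in enumerate(v):
            c = vj
            for _ in range(i):          # x^i vj = alpha^i(vj) x^i
                c = alpha(c)
            if i + j < p:
                w[i + j] = w[i + j] + ui * c
            else:                       # x^(i+j) = lam x^(i+j-p)
                w[i + j - p] = w[i + j - p] + ui * c * lam
    return w

# Quaternions as (C, conjugation, -1) with p = 2; x plays the role of j.
j_ = [0, 1]
print(cyclic_mul(j_, j_, lambda z: z.conjugate(), -1, 2))  # [-1, 0]: j*j = -1
```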
The matrices commuting with $H\sigma{\uparrow}$ are precisely the block
matrices $(\delta_{i,j})_{0\leq i,j<p}$ where $\delta_{i,j}\in\Delta$. To
compute $\textup{End}_{\mathbb{F}G}(V{\uparrow})$, we determine the matrices
$(\delta_{i,j})$ that commute with $a\sigma{\uparrow}$. If $0<i<p-1$, then
comparing row $i-1$ of both sides of
$(a\sigma{\uparrow})(\delta_{i,j})=(\delta_{i,j})(a\sigma{\uparrow})$ shows
how to express $\delta_{i,j}$ in terms of the $\delta_{i-1,k}$. Similarly, row
$p-1$ shows how to express $\delta_{p-1,j}$ in terms of $\delta_{0,k}$. It
follows that a matrix $(\delta_{i,j})$ commuting with $a\sigma{\uparrow}$ is
completely determined once we know the $0$th row $\delta_{0,j}$. We show that
the $0$th row can be arbitrary. The $0$th row of
$\sum_{i=0}^{p-1}D(\delta_{i})X^{i}$ is
$(\delta_{0},\delta_{1},\dots,\delta_{p-1})$. This element lies in $\Gamma$ as
we show that $D(\delta_{i}),X\in\Gamma$.
To see that $D(\delta)\in\Gamma$, we show that
$(a\sigma{\uparrow})D(\delta)=D(\delta)(a\sigma{\uparrow})$. The first product
equals
$(a\sigma{\uparrow})D(\delta)=\begin{pmatrix}0&\delta\alpha&&0\\\ &&\ddots&\\\
0&0&&\alpha^{-(p-2)}\delta\alpha^{p-1}\\\
\alpha\lambda\delta&0&&0\end{pmatrix}$
and the second product is identical if
$\alpha\lambda\delta=\delta^{\alpha^{p-1}}\alpha\lambda$. However, this is
true by Eqn (2c). To see that $X\in\Gamma$, write $a\sigma{\uparrow}=AX$ where
$A=\textup{diag}(\alpha,\dots,\alpha)$. It follows from Eqn (2b) that $A$ and
X$ commute. Therefore $a\sigma{\uparrow}=AX$ commutes with $X$.
In summary, elements of $\Gamma$ may be written uniquely as
$\sum_{i=0}^{p-1}D(\delta_{i})X^{i}$ where $\delta_{i}\in\Delta$. Since
$X^{p}=\lambda I$ and $XD(\delta)=D(\alpha(\delta))X$ it follows that the map
$\sum_{i=0}^{p-1}\delta_{i}x^{i}\mapsto\sum_{i=0}^{p-1}D(\delta_{i})X^{i}$ is
an isomorphism $(\Delta,\alpha,\lambda)\to\Gamma$ as claimed. ∎
A consequence of Eqn (2c) is that $\alpha$ has order $p$ or $1$ modulo the
inner automorphisms of $\Delta$. It follows from the Skolem-Noether theorem
[CR90,3.62] that the order of $\alpha$ modulo inner automorphisms is precisely
the order of the restriction $\alpha|Z$, where $Z=Z(\Delta)$ is the centre of
$\Delta$.
## 4\. The case when $\alpha|Z$ has order $p$
In this section we determine the structure of
$\Gamma:=\textup{End}_{\mathbb{F}G}(V{\uparrow})$ in the case when $\alpha$
induces an automorphism of order $p$ on the field $Z(\Delta)$.
Of primary interest to us is Part (a) of the following classical theorem.
Although this result can be deduced from [J96, Theorem 1.1.22] and the fact
that $t^{p}-\lambda$ is a ‘two-sided maximal’ element of $\Delta[t;\alpha]$,
we prefer to give an elementary proof which generalizes [L91, Theorem 14.6].
###### Theorem 4.
Let $\Gamma$ be the general cyclic algebra $(\Delta,\alpha,\lambda)$ where
$\lambda\neq 0$ and $\alpha(\lambda)=\lambda$. Suppose that $\alpha|Z(\Delta)$
has order $p$, and fixed subfield $Z_{0}$. Then
(a) $\Gamma$ is a simple $Z_{0}$-algebra,
(b) $C_{\Gamma}(\Delta)=Z(\Delta)$,
(c) $Z(\Gamma)=Z_{0}$, and
(d) $|\Gamma:Z_{0}|=(p\textup{Deg}(\Delta))^{2}$ where
$|\Delta:Z(\Delta)|=\textup{Deg}(\Delta)^{2}$.
###### Proof.
The following proof does not assume that $p$ is prime. Let
$\gamma=\gamma_{1}x^{i_{1}}+\cdots+\gamma_{r}x^{i_{r}}$ be a nonzero element
of an ideal $I$ of $\Gamma$, where $0\leq i_{1}<\cdots<i_{r}<p$,
$\gamma_{i}\in\Delta$, and $r$ is chosen minimal. By minimality, each
$\gamma_{i}$ is nonzero. To prove Part (a) it suffices to prove that $r=1$.
Then $I=\Gamma$ as $\gamma_{1}x^{i_{1}}\in I$ is a unit because $\gamma_{1}$
and $x$ are both units. Assume now that $r>1$. Then
$(\gamma_{1}\alpha^{i_{1}}(\delta)\gamma_{1}^{-1})\gamma-\gamma\delta=\sum_{k=2}^{r}\left(\gamma_{1}\alpha^{i_{1}}(\delta)\gamma_{1}^{-1}\gamma_{k}-\gamma_{k}\alpha^{i_{k}}(\delta)\right)x^{i_{k}}$
lies in $I$ for each $\delta\in\Delta$. By the minimality of $r$, each
coefficient of $x^{i_{k}}$ is zero. This implies that $\alpha^{i_{1}}$ equals
$\alpha^{i_{k}}$ modulo inner automorphisms for $k=2,\dots,r$, which is
impossible since $0\leq i_{1}<i_{k}<p$ and $\alpha$ has order $p$ modulo inner
automorphisms. This contradiction proves Part (a).
The proofs of Parts (b) and (c) are straightforward, so we omit them. Part (d)
follows from $|\Gamma:\Delta|=|Z(\Delta):Z_{0}|=p$, and
$|\Delta:Z(\Delta)|=\textup{Deg}(\Delta)^{2}$ is a square. ∎
Before proceeding to Theorem 5, we define the _left-_ and _right-twisted
powers_ , $\mu^{{\sqsupset}i}$ and $\mu^{i{\sqsubset}}$, where $\mu\in\Delta$
and $i\in\mathbb{Z}$. These expressions are like norms, indeed Jacobson [J96]
uses the notation $N_{i}(\mu)$ to suggest this. These “norms”, however, are
not multiplicative in general. Consider the twisted polynomial ring
$\Delta[t;\alpha]$ and define
$(\mu
t)^{i}=\mu^{{\sqsupset}i}t^{i},\quad\textup{and}\quad(t\mu)^{i}=t^{i}\mu^{i{\sqsubset}}$
for $\mu\in\Delta$ and $i\in\mathbb{Z}$. It follows from the power laws $(\mu
t)^{i}(\mu t)^{j}=(\mu t)^{i+j}$ and $((\mu t)^{i})^{j}=(\mu t)^{ij}$ that
$\mu^{{\sqsupset}i}\alpha^{i}(\mu^{{\sqsupset}j})=\mu^{{\sqsupset}(i+j)},\quad\textup{and}\quad\mu^{{\sqsupset}i}\alpha^{i}(\mu^{{\sqsupset}i})\cdots\alpha^{i(j-1)}(\mu^{{\sqsupset}i})=\mu^{{\sqsupset}(ij)}$
for $i,j\in\mathbb{Z}$. Similar laws hold for right-twisted powers. The left-
twisted powers of nonnegative integers can be defined by the recurrence
relation
$\mu^{{\sqsupset}0}=1,\quad\textup{and}\quad\mu^{{\sqsupset}(i+1)}=\mu^{{\sqsupset}i}\alpha^{i}(\mu)=\mu\alpha(\mu^{{\sqsupset}i})\quad\textup{for
$i\geq 0$},$
and negative powers can be defined by
$\mu^{{\sqsupset}-i}=\alpha^{i}(\mu^{{\sqsupset}i})^{-1}$.
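For computations, the recurrence translates directly into code; a minimal
sketch (function name ours; `alpha` is the automorphism of $\Delta$ supplied
as a callable, and `*` is the multiplication of $\Delta$):

```python
def left_twisted_power(mu, i, alpha):
    # mu^{|> i} = mu * alpha(mu) * ... * alpha^{i-1}(mu), computed via
    # mu^{|> 0} = 1 and mu^{|> (k+1)} = mu^{|> k} * alpha^k(mu).
    result, factor = 1, mu
    for _ in range(i):
        result = result * factor
        factor = alpha(factor)   # advance alpha^k(mu) to alpha^{k+1}(mu)
    return result

# Deciding whether lambda^{-1} has a left-twisted p-th root (cf. Theorem 5
# below) then amounts to testing left_twisted_power(mu, p, alpha) == lam_inv
# for candidate mu in Delta.
```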
It is important in the sequel whether or not $\lambda^{-1}$ has a left-twisted
$p$th root.
###### Theorem 5.
Let $V$ be a $G$-stable irreducible $\mathbb{F}H$-module where $H\triangleleft
G$ and $|G/H|=p$ is prime. Let $\alpha$ and $\lambda$ be as in Lemma 2.
Suppose that $\alpha|Z$ has order $p$ where $Z=Z(\Delta)$ and
$\Delta=\textup{End}_{{\mathbb{F}G}}(V)$.
(a) If the equation $\mu^{{\sqsupset}p}=\lambda^{-1}$ has no solution for
$\mu\in\Delta^{\times}$, then $V{\uparrow}$ is irreducible, and
$\textup{End}_{\mathbb{F}G}(V{\uparrow})$ is isomorphic to the general cyclic
algebra $(\Delta,\alpha,\lambda)$ as per Theorem 3.
(b) If $\mu\in\Delta^{\times}$ satisfies $\mu^{{\sqsupset}p}=\lambda^{-1}$,
then $V{\uparrow}=U(\mu_{0})\dotplus\cdots\dotplus U(\mu_{p-1})$ where
$U(\mu_{j})=V\sum_{i=0}^{p-1}\mu_{j}^{{\sqsupset}i}\alpha^{-i}a^{i}\qquad\qquad(j=0,1,\dots,p-1)$
are isomorphic irreducible submodules satisfying $U(\mu_{j}){\downarrow}\cong
V$, and where $\mu_{j}^{{\sqsupset}p}=\lambda^{-1}$. Moreover, if $\rho\colon
G\to\textup{GL}(U(\mu))$ is the representation afforded by $U(\mu)$ relative
to the basis $e^{\prime}_{0},\dots,e^{\prime}_{d-1}$ where
$e^{\prime}_{j}=e_{j}\sum_{i=0}^{p-1}\mu^{{\sqsupset}i}\alpha^{-i}a^{i}\qquad(j=0,1,\dots,d-1),$
then $a\rho=\alpha\mu^{-1}$, $h\rho=h\sigma$ for $h\in H$, and
$\textup{End}_{{\mathbb{F}G}}(U(\mu))=C_{\Delta}(\alpha\mu^{-1})=\\{\delta\in\Delta\mid\delta^{\alpha}=\delta^{\mu}\\}.$
###### Proof.
By Theorem 4(a), $(\Delta,\alpha,\lambda)$ is a simple ring. In Part (a) more
is true: $(\Delta,\alpha,\lambda)$ is a division ring by [J96, Theorem
1.3.16]. By Theorem 3, $(\Delta,\alpha,\lambda)$ is isomorphic to
$\textup{End}_{\mathbb{F}G}(V{\uparrow})$ and so we have proved that
$V{\uparrow}$ is irreducible as desired.
Consider Part (b). Let $s=\mu t$ be an element of the twisted polynomial ring
$\Delta[t;\alpha]$, then $s^{i}=(\mu t)^{i}=\mu^{{\sqsupset}i}t^{i}$ and
$s\delta=\mu t\delta=(\mu\alpha(\delta)\mu^{-1})\mu
t=\mu\alpha(\delta)\mu^{-1}s.$
Therefore the map
$\Delta[t;\alpha]\to\Delta[s;\alpha\mu^{-1}]\colon\sum_{i=0}^{p-1}\delta_{i}t^{i}\mapsto\sum_{i=0}^{p-1}\delta_{i}(\mu^{{\sqsupset}i})^{-1}s^{i}$
is an isomorphism. We are abusing notation here by identifying
$\alpha\mu^{-1}$ in $\textup{Aut}_{\mathbb{F}}(V)$ with
$\delta\mapsto\delta^{\alpha\mu^{-1}}$ in $\textup{Aut}_{\mathbb{F}}(\Delta)$.
If $y=\mu x$, then
$y^{p}=(\mu x)^{p}=\mu^{{\sqsupset}p}\lambda=\lambda^{-1}\lambda=1.$
By taking quotients we get an isomorphism
$(\Delta,\alpha,\lambda)\to(\Delta,\alpha\mu^{-1},1)$ given by
$\sum_{i=0}^{p-1}\delta_{i}x^{i}\mapsto\sum_{i=0}^{p-1}\delta_{i}(\mu^{{\sqsupset}i})^{-1}y^{i}$
where $x=t+(t^{p}-\lambda)$ and $y=s+(s^{p}-1)$.
As $y^{p}-1=(y-1)(y^{p-1}+\cdots+y+1)$, and $\Delta[s;\alpha\mu^{-1}]$ is
right euclidean it follows that $(y-1)(\Delta,\alpha\mu^{-1},1)$ is a maximal
right ideal of $(\Delta,\alpha\mu^{-1},1)$. Now $y-1$ corresponds to $\mu x-1$
which corresponds to $D(\mu)X-1$ whose kernel gives rise to the irreducible
submodule $U(\mu)$ of $V{\uparrow}$ in the statement of Part (b). We shall
reprove this, and prove a little more, using a more elementary argument.
Let $U$ be a submodule of $V{\uparrow}$ satisfying $U{\downarrow}\cong V$. Let
$\phi\colon V\to V{\uparrow}$ be an $\mathbb{F}H$-homomorphism such that
$V\phi=U{\downarrow}$. Let $\pi_{i}\colon V{\uparrow}\to Va^{i}$ be the
$\mathbb{F}H$-epimorphism given by
$(\sum_{j=0}^{p-1}v_{j}\alpha^{-j}a^{j})\pi_{i}=v_{i}\alpha^{-i}a^{i}$. Then
$\delta_{i}=\phi\pi_{i}a^{-i}\alpha^{i}$ is an $\mathbb{F}H$-homomorphism
$V\to V$, or an element of $\Delta$. Since $\pi_{0}+\pi_{1}+\cdots+\pi_{p-1}$
is the identity map $1\colon V{\uparrow}\to V{\uparrow}$, it follows that
$\phi=\phi
1=\phi(\pi_{0}+\pi_{1}+\cdots+\pi_{p-1})=\sum_{i=0}^{p-1}\delta_{i}\alpha^{-i}a^{i}.$
We now view $\phi$ as a map $V\to U$ and note that $U=Ua$. Then
$\alpha^{-1}a\colon V\to Va$, $a^{-1}\phi a\colon Va\to Ua$ and
$\phi^{-1}\colon Ua\to V$ are each ${\mathbb{F}H}$-isomorphisms. Hence their
composite, $(\alpha^{-1}a)(a^{-1}\phi a)\phi^{-1}$ is an isomorphism $V\to V$,
denoted $\mu^{-1}$ where $\mu\in\Delta^{\times}$. Rearranging gives $\phi
a=\alpha\mu^{-1}\phi$. Therefore,
$(v\phi)a=\left(v\sum_{i=0}^{p-1}\delta_{i}\alpha^{-i}a^{i}\right)a=v\alpha\mu^{-1}\sum_{i=0}^{p-1}\delta_{i}\alpha^{-i}a^{i}$
for all $v\in V$. The expression $(v\delta_{i}\alpha^{-i}a^{i})a$ equals
$v\delta_{i}\alpha\alpha^{-(i+1)}a^{i+1}=v\alpha\delta_{i}^{\alpha}\alpha^{-(i+1)}a^{i+1}=v\alpha\mu^{-1}\delta_{i+1}\alpha^{-(i+1)}a^{i+1}.$
Setting $i=p-1$ gives
$(v\delta_{p-1}\alpha^{-(p-1)}a^{p-1})a=v\alpha\delta_{p-1}^{\alpha}\alpha^{-p}a^{p}=v\alpha\delta_{p-1}^{\alpha}\lambda=v\alpha\mu^{-1}\delta_{0}.$
Therefore $\delta_{i}^{\alpha}=\mu^{-1}\delta_{i+1}$ for $i=0,\dots,p-2$ and
$\delta^{\alpha}_{p-1}\lambda=\mu^{-1}\delta_{0}$. If $\delta_{0}=0$, then
each $\delta_{i}=0$ and $\phi=0$, a contradiction. Thus $\delta_{0}\neq 0$ and
as $V\delta_{0}^{-1}\phi=U$, we may assume that $\delta_{0}=1$. It follows
from Eqn (6) that $\delta_{i}=\mu^{{\sqsupset}i}$ is the solution to the
recurrence relation: $\delta_{0}=1$ and $\mu\delta_{i}^{\alpha}=\delta_{i+1}$
for $i\geq 0$. Furthermore $\mu\delta^{\alpha}_{p-1}=\lambda^{-1}$ implies
that $\mu^{{\sqsupset}p}=\lambda^{-1}$. In summary, any submodule $U$ of
$V{\uparrow}$ satisfying $U{\downarrow}\cong V$ equals $U(\mu)$ for some $\mu$
satisfying $\mu^{{\sqsupset}p}=\lambda^{-1}$. Furthermore, by retracing the
above argument, if $\mu^{{\sqsupset}p}=\lambda^{-1}$, then $U(\mu)$ is an
irreducible submodule of $V{\uparrow}$ satisfying $U{\downarrow}\cong V$.
As $\textup{End}_{{\mathbb{F}G}}(V{\uparrow})$ is a simple ring, $V{\uparrow}$
is a direct sum of isomorphic simple submodules. Therefore,
$V{\uparrow}=U(\mu_{0})\dotplus\cdots\dotplus U(\mu_{p-1})$ as desired. It
follows from Lemma 1 that the representation $\rho\colon G\to\textup{GL}(U(\mu))$
satisfies $a\rho=\alpha\mu^{-1}$ and $h\rho=h\sigma$ for $h\in H$.
Consequently, the matrices commuting with $G\rho$ equal the elements of
$\Delta$ centralizing $a\rho$. Hence
$\textup{End}_{\mathbb{F}G}(U(\mu))=C_{\Delta}(\alpha\mu^{-1})$ as claimed. ∎
## 5\. The case when $\alpha$ is inner
In this section assume that $\alpha|Z(\Delta)$ has order $1$, or equivalently
by the Skolem-Noether theorem, that $\alpha$ is inner. Fix
$\varepsilon\in\Delta^{\times}$ such that $\alpha$ is the inner automorphism
$\alpha(\delta)=\varepsilon^{-1}\delta\varepsilon$. Clearly
$\alpha(\varepsilon)=\varepsilon$ and by Eqn (2c)
$\varepsilon^{-p}\delta\varepsilon^{p}=\alpha^{p}(\delta)=\lambda\delta\lambda^{-1}$.
Therefore, $\eta=\varepsilon^{p}\lambda\in Z(\Delta)$. If $y=\varepsilon x$,
then $y^{p}=\varepsilon^{{\sqsupset}p}x^{p}=\varepsilon^{p}\lambda=\eta$ and
$y\delta=\varepsilon
x\delta=\varepsilon\delta^{\varepsilon}x=\delta\varepsilon x=\delta y$. Hence
$(\Delta,\alpha,\lambda)\to(\Delta,1,\eta)\colon\sum_{i=0}^{p-1}\delta_{i}x^{i}\mapsto\sum_{i=0}^{p-1}\delta_{i}\varepsilon^{-i}y^{i}$
is an isomorphism. Thus we may untwist
$\textup{End}_{{\mathbb{F}G}}(V{\uparrow})$.
###### Theorem 6.
Let $V$ be a $G$-stable irreducible $\mathbb{F}H$-module where $H\triangleleft
G$ and $|G/H|=p$ is prime. Suppose that $\alpha$ induces the inner
automorphism $\alpha(\delta)=\delta^{\varepsilon}$ of the division algebra
$\Delta=\textup{End}_{\mathbb{F}H}(V)$. Then $\eta=\varepsilon^{p}\lambda\in
Z^{\times}$ where $Z=Z(\Delta)$. Suppose that $s^{p}-\eta=\nu(s)\mu(s)$ where
$\mu(s)=\sum_{i=0}^{m}\mu_{i}s^{i}$ and $\nu(s)=\sum_{i=0}^{p-m}\nu_{i}s^{i}$
are monic polynomials in $\Delta[s]$. Then
$W_{\mu}=\sum_{i=0}^{m-1}V\sum_{j=0}^{p-m}\nu_{j}\varepsilon^{i+j}\alpha^{-(i+j)}a^{i+j}$
is a submodule of $V{\uparrow}$. Let $\rho\colon G\to\textup{GL}(W_{\mu})$ be
the representation afforded by $W_{\mu}$ relative to the basis
$e^{\prime}_{0},\dots,e^{\prime}_{d-1},\dots,e^{\prime}_{j}(\varepsilon
X)^{k},\dots,e^{\prime}_{0}(\varepsilon
X)^{m-1},\dots,e^{\prime}_{d-1}(\varepsilon X)^{m-1}$
where
$e^{\prime}_{k}=e_{k}\sum_{j=0}^{p-m}\nu_{j}\varepsilon^{j}\alpha^{-j}a^{j}=e_{k}\sum_{j=0}^{p-m}\nu_{j}(\varepsilon
X)^{j},$
and $X$ is given by Eqn (5a). Then
$a\rho=\alpha\varepsilon^{-1}\begin{pmatrix}0&1&&0\\\ &&\ddots&\\\ 0&0&&1\\\
-\mu_{0}&-\mu_{1}&&-\mu_{m-1}\end{pmatrix},$
and $h\rho=\textup{diag}(h\sigma,\dots,h\sigma)$ where $h\in H$. Moreover,
$\textup{End}_{{\mathbb{F}G}}(W_{\mu})=\left\\{\sum_{i=0}^{m-1}\delta_{i}(a\rho)^{i}\mid\delta_{i}\in\Delta\right\\}.$
If $\mu(s)\in Z[s]$, then
$\textup{End}_{{\mathbb{F}G}}(W_{\mu})\cong\Delta[s]/\mu(s)\Delta[s]\cong\Delta\otimes_{Z}\mathbb{K}$
where $\mathbb{K}=Z[s]/\mu(s)Z[s]$.
###### Proof.
Arguing as in Theorem 5, we have a series of right ideals:
$\nu(s)\Delta[s]\subseteq\Delta[s],\quad\nu(y)(\Delta,1,\eta)\subseteq(\Delta,1,\eta),\quad\nu(\varepsilon x)(\Delta,\alpha,\lambda)\subseteq(\Delta,\alpha,\lambda),$
and $\sum_{i=0}^{n}D(\nu_{i})(\varepsilon X)^{i}\Gamma$, where $n=p-m$, is a right ideal of
$\Gamma=\textup{End}_{\mathbb{F}G}(V{\uparrow})$. This right ideal corresponds
to the submodule $V{\uparrow}\sum_{i=0}^{n}D(\nu_{i})(\varepsilon
X)^{i}\Gamma$ of $V{\uparrow}$. It follows from Eqn (5a) and $(\varepsilon
X)^{p}-\eta=0$ that the minimum polynomial of $\varepsilon X$ equals
$s^{p}-\eta$.
Let $v^{\prime}=v\nu(\varepsilon X)$ where $v\in V$. Then
$v^{\prime}\mu(\varepsilon X)=v\nu(\varepsilon X)\mu(\varepsilon
X)=v((\varepsilon X)^{p}-\eta)=v\thinspace 0=0.$
This proves that (9) is a basis for
$W_{\mu}=\text{im}\;\nu(\varepsilon X)=\ker\mu(\varepsilon X)=\sum_{i=0}^{m-1}V\sum_{j=0}^{n}\nu_{j}\varepsilon^{i+j}\alpha^{-(i+j)}a^{i+j}.$
It follows from Lemma 1 that $h\rho=\textup{diag}(h\sigma,\dots,h\sigma)$ is a
block scalar matrix ($h\in H$). Since $a=\alpha X$,
$v^{\prime}(\varepsilon X)^{i}a=v^{\prime}(\varepsilon X)^{i}\alpha
X=v^{\prime}\alpha\varepsilon^{-1}(\varepsilon X)^{i+1}.$
It follows from Eqns (11) and (12) that the matrix for $a\rho$ is correct.
It is now a simple matter to show that
$\left\\{\sum_{i=0}^{m-1}\delta_{i}(a\rho)^{i}\mid\delta_{i}\in\Delta\right\\}$
is contained in $\textup{End}_{{\mathbb{F}G}}(W_{\mu})$. A familiar
calculation shows that an element of $\textup{End}_{{\mathbb{F}G}}(W_{\mu})$
is determined by the entries in its top row. As this may be arbitrary, we have
found all the elements of $\textup{End}_{{\mathbb{F}G}}(W_{\mu})$. ∎
It follows from Theorem 6 that a necessary condition for $W_{\mu}$ to be
irreducible is that $\mu(s)$ is irreducible in $\Delta[s]$. Lemma 7 describes
an important case when $\textup{End}_{\mathbb{F}G}(W_{\mu})$ is a division
ring, and hence $W_{\mu}$ is irreducible. The following proof follows Prof.
Deitmar’s suggestion [D02].
###### Lemma 7.
Let $\Delta$ be a division algebra with center $\mathbb{F}$, and let
$\mu(s)\in\mathbb{F}[s]$ be irreducible of prime degree. Suppose that no
$\delta\in\Delta$ satisfies $\mu(\delta)=0$. Then the quotient ring
$\Delta[s]/\mu(s)\Delta[s]$ is a division algebra.
###### Proof.
Let $\mathbb{K}=\mathbb{F}[s]/\mu(s)\mathbb{F}[s]$. Then $\mathbb{K}$ is a
field and $|\mathbb{K}:\mathbb{F}|=\deg\mu(s)$ is prime. Clearly
$\mu(s)\Delta[s]$ is a two-sided ideal of $\Delta[s]$, and
$\Delta[s]/\mu(s)\Delta[s]$ is isomorphic to
$\Delta_{\mathbb{K}}=\Delta\otimes_{\mathbb{F}}\mathbb{K}$. By [L91, 15.1(3)],
$\Delta_{\mathbb{K}}$ is a central simple $\mathbb{K}$-algebra, and hence is
isomorphic to $M_{n}(D)$ for some division algebra $D$ over $\mathbb{K}$. The
_degree_ of $D$ and the _Schur index_ of $\Delta_{\mathbb{K}}$ are defined as
follows
$\textup{Deg}(D)=(\dim_{\mathbb{K}}D)^{1/2}\quad\text{and}\quad\textup{Ind}(\Delta_{\mathbb{K}})=\textup{Deg}(D).$
By [P82, Prop. 13.4], $\textup{Ind}(\Delta_{\mathbb{K}})$ divides
$\textup{Ind}(\Delta)$, and $\textup{Ind}(\Delta)$ divides
$|\mathbb{K}:\mathbb{F}|\,\textup{Ind}(\Delta_{\mathbb{K}})$. Thus either
$\textup{Ind}(\Delta_{\mathbb{K}})=\textup{Ind}(\Delta)=\textup{Deg}(\Delta)=\textup{Deg}(\Delta_{\mathbb{K}})$
and $\Delta_{\mathbb{K}}$ is a division algebra by [P82, Prop. 13.4(ii)], or
$\textup{Ind}(\Delta)$ equals
$|\mathbb{K}:\mathbb{F}|\,\textup{Ind}(\Delta_{\mathbb{K}})$. If the second
case occurred, then by [P82, Cor. 13.4], $\mathbb{K}$ is isomorphic to a
subfield of $\Delta$, and so $\mu(s)$ has a root in $\Delta$, contrary to our
hypothesis. ∎
If $\eta\not\in\Delta^{p}$, then $\eta\not\in Z^{p}$ and so $s^{p}-\eta$ is
irreducible in $Z[s]$, and it follows from Lemma 7 that
$V{\uparrow}=W_{s^{p}-\eta}$ is irreducible. Note that
$\textup{End}_{{\mathbb{F}G}}(V{\uparrow})\cong\Delta\otimes Z[\eta^{1/p}]$ is
a division algebra.
## 6\. The case when $\alpha$ is inner and $\xi^{p}=\eta$
In this section we shall assume that $\xi\in\Delta^{\times}$ satisfies
$\xi^{p}-\eta=0$. Let $y=\varepsilon x$ and $z=\xi^{-1}y=\xi^{-1}\varepsilon
x$. It is useful to consider the isomorphisms
$(\Delta,\alpha,\lambda)\to(\Delta,1,\eta)\to(\Delta,1,1)$ defined by
$x\mapsto\varepsilon^{-1}y$ and $y\mapsto\xi z$. Note $y$ and $z$ are central
in $(\Delta,1,\eta)$ and $(\Delta,1,1)$ respectively, and $y^{p}=\eta$ and
$z^{p}=1$.
###### Theorem 8.
Let $V$ be a $G$-stable irreducible $\mathbb{F}H$-module where $H\triangleleft
G$ and $|G/H|=p$ is prime. Suppose that $\alpha$ induces the inner
automorphism $\alpha(\delta)=\delta^{\varepsilon}$ of the division algebra
$\Delta=\textup{End}_{\mathbb{F}H}(V)$. Set $\eta=\varepsilon^{p}\lambda$ and
let $\xi,\omega\in\Delta$ satisfy $\xi^{p}=\eta$ and $\omega^{p}=1$. Then
$\xi\in Z=Z(\Delta)$.
(a) If $\textup{char}(\mathbb{F})\neq p$ and $\omega\neq 1$, then
$V{\uparrow}$ is an internal direct sum
$V{\uparrow}=U(\xi)\dotplus U(\xi\omega)\dotplus\cdots\dotplus
U(\xi\omega^{p-1})$
where
$U(\xi\omega^{j})=V\sum_{i=0}^{p-1}(\xi\omega^{j})^{-i}\varepsilon^{i}\alpha^{-i}a^{i}$
is irreducible, and $U(\xi\omega)\cong U(\xi\omega^{\prime})$ if and only if
$\omega$ and $\omega^{\prime}$ are conjugate in $\Delta$. If $\mu(s)$ is an
irreducible factor of $s^{p}-\eta$ in $Z[s]$, then $W_{\mu}$ defined in
Theorem 6 is a Wedderburn component of $V{\uparrow}$, and
$W_{\mu}=U(\theta_{1})\dotplus\cdots\dotplus U(\theta_{n})$ where
$\theta_{1},\dots,\theta_{n}$ are the roots of $\mu(s)$ in the field
$Z(\xi,\omega)$. In addition, the representation $\rho_{\theta}\colon
G\to\textup{GL}(U(\theta))$ afforded by $U(\theta)$ relative to the basis
$e^{\prime}_{0},\dots,e^{\prime}_{d-1}$ where
$e^{\prime}_{j}=e_{j}\sum_{i=0}^{p-1}\theta^{-i}\varepsilon^{i}\alpha^{-i}a^{i}$
satisfies
$a\rho_{\theta}=\alpha\varepsilon^{-1}\theta\qquad\text{and}\qquad
h\rho_{\theta}=h\sigma$
for $h\in H$, and
$\textup{End}_{{\mathbb{F}G}}(U(\theta))=C_{\Delta}(\theta)$.
(b) If $\textup{char}(\mathbb{F})=p$, then $\omega=1$ and $V{\uparrow}$ is
uniserial with unique composition series $\\{0\\}=W_{0}\subset
W_{1}\subset\cdots\subset W_{p}=V{\uparrow}$ where
$W_{k}=\sum_{i=1}^{k}V\sum_{j=0}^{p-i}{i+j-1\choose
j}\xi^{-j}\varepsilon^{j}\alpha^{-j}a^{j}.$
Moreover, $W_{k}/W_{k-1}\cong U(\xi)$ for $k=1,\dots,p$ and
$\textup{End}_{{\mathbb{F}G}}(U(\xi))=\Delta$.
###### Proof.
Since $z\delta=\delta z$, we see that $(\xi^{-1}\varepsilon
x)\delta=\delta(\xi^{-1}\varepsilon x)$. This implies that
$\xi^{-1}\delta=\delta\xi^{-1}$ and so $\xi\in Z$.
Case (a): Now $(\xi\omega)^{p}=\xi^{p}\omega^{p}=\eta$, hence
$y^{p}-\eta=y^{p}-(\xi\omega)^{p}=(y-\xi\omega)\left(\sum_{i=0}^{p-1}(\xi\omega)^{p-1-i}y^{i}\right).$ (13)
Therefore $V{\uparrow}\sum_{i=0}^{p-1}(\xi\omega)^{p-1-i}(\varepsilon
X)^{i}\Gamma$ is a submodule of $V{\uparrow}$ where $X$ is given by Eqn (5a).
We show directly that $U(\xi\omega)$ is a submodule of $V{\uparrow}$. This
follows from
$\displaystyle(v(\xi\omega)^{-i}\varepsilon^{i}\alpha^{-i}a^{i})a$
$\displaystyle=v\alpha(\alpha^{-1}(\xi\omega)^{-i}\varepsilon^{i}\alpha)\alpha^{-(i+1)}a^{i+1}$
(14)
$\displaystyle=v\alpha\varepsilon^{-1}\xi\omega(\xi\omega)^{-(i+1)}\varepsilon^{i+1}\alpha^{-(i+1)}a^{i+1}$
and setting $i=p-1$ in the right-hand side of Eqn (14) gives
$v\alpha\varepsilon^{-1}\xi\omega(\xi\omega)^{-p}\varepsilon^{p}\alpha^{-p}a^{p}=v\alpha\varepsilon^{-1}\xi\omega\eta^{-1}\varepsilon^{p}\lambda=v\alpha\varepsilon^{-1}\xi\omega.$
As $U(\xi\omega){\downarrow}\cong V$, we see that $U(\xi\omega)$ is an
irreducible ${\mathbb{F}G}$-submodule of $V{\uparrow}$. Setting
$\theta=\xi\omega$ establishes the truth of Eqns (12a,b).
We may calculate $\textup{Hom}(U(\xi\omega),U(\xi\omega^{\prime}))$ directly
by finding all $\delta$ in $\textup{End}_{\mathbb{F}}(V)$ that intertwine
$\rho_{\xi\omega}$ and $\rho_{\xi\omega^{\prime}}$. As $\delta$ intertwines
$h\rho_{\xi\omega}$ and $h\rho_{\xi\omega^{\prime}}$, it follows that $\delta$
commutes with $H\sigma$, and hence $\delta\in\Delta$. Also
$\delta(\alpha\varepsilon^{-1}\xi\omega)=(\alpha\varepsilon^{-1}\xi\omega^{\prime})\delta$
so $\delta^{\alpha\varepsilon^{-1}}\xi\omega=\xi\omega^{\prime}\delta$. Since
$\xi\in Z^{\times}$ and $\delta^{\alpha\varepsilon^{-1}}=\delta$, this amounts
to $\delta\omega=\omega^{\prime}\delta$. Setting $\omega^{\prime}=\omega$ shows that
$\textup{End}_{{\mathbb{F}G}}(U(\xi\omega))=C_{\Delta}(\omega)$.
The Galois group $\textup{Gal}(Z(\omega):Z)$ is cyclic of order dividing
$p-1$. Also $\omega$ and $\omega^{\prime}$ are conjugate in
$\textup{Gal}(Z(\omega):Z)$ if and only if they share the same minimal
polynomial over $Z$. The latter holds by Dickson’s Theorem [L91, 16.8] if and
only if $\omega$ and $\omega^{\prime}$ are conjugate in $\Delta$. Note that
$\omega$ and $\omega^{\prime}$ share the same minimal polynomial over $Z$
precisely when $\xi\omega$ and $\xi\omega^{\prime}$ share the same minimal
polynomial. This proves that $W_{\mu}$ is a Wedderburn component of
$V{\uparrow}$.
Case (b): Suppose now that $\textup{char}(\mathbb{F})=p$. Then $\omega=1$ and
Eqn (13) becomes $y^{p}-\eta=(y-\xi)^{p}=(y-\xi)(\sum_{i=0}^{p-1}{p-1\choose
i}(-\xi)^{p-1-i}y^{i})$. As
$\Gamma=\textup{End}_{\mathbb{F}G}(V{\uparrow})\cong(\Delta,\alpha,\lambda)\cong(\Delta,1,\eta)\cong(\Delta,1,1)\cong\Delta[z]/(z-1)^{p}\Delta[z]$
has a unique composition series, so too does $V{\uparrow}$. By noting that
$z=\xi^{-1}\varepsilon x$ and $D(\xi^{-1}\varepsilon)=\xi^{-1}\varepsilon$, we
see that $W_{i}=V{\uparrow}(\xi^{-1}\varepsilon X-1)^{p-i}\Gamma$ defines the
unique composition series for $V{\uparrow}$ where $X$ is given by Eqn (5a).
Let $R$ be the diagonal matrix
$\textup{diag}(1,\xi^{-1}\varepsilon,\dots,(\xi^{-1}\varepsilon)^{p-1})$, and
let $S$ be the matrix whose $(i,j)$th block is the binomial coefficient
${i\choose j}$ where $0\leq i,j<p$. A direct calculation verifies that
$R(\xi^{-1}\varepsilon X)R^{-1}=C$ and $S^{-1}CS=J$ where
$C=\begin{pmatrix}0&1&&\\\ &\ddots&\ddots&\\\ &&0&1\\\
1&&&0\end{pmatrix}\quad\text{and}\quad J=\begin{pmatrix}1&1&&\\\ &\ddots&\ddots&\\\
&&1&1\\\ &&&1\end{pmatrix}.$
Therefore $\xi^{-1}\varepsilon X-1=T^{-1}(J-1)T$ where $T=S^{-1}R$, and hence
$W_{k}=V{\uparrow}(\xi^{-1}\varepsilon
X-1)^{p-k}=V{\uparrow}\;T^{-1}(J-1)^{p-k}T=V{\uparrow}(J-1)^{p-k}T.$
It is easily seen that $\textup{im}(J-1)^{p-k}=\ker(J-1)^{k}$ is the subspace
$(0,\dots,0,V,\dots,V)$ where the first $V$ is in column $p-k$. The $(i,j)$th
entry of $T=S^{-1}R$ is $(-1)^{i+j}{i\choose j}(\xi^{-1}\varepsilon)^{j}$. The
last row of $T$ gives
$W_{1}=V\sum_{j=0}^{p-1}(-1)^{p-1+j}{p-1\choose
j}(\xi^{-1}\varepsilon)^{j}\alpha^{-j}a^{j}.$
More generally, the last $k$ rows of $T$ give
$W_{k}=\sum_{i=1}^{k}V\sum_{j=0}^{p-i}(-1)^{p-i+j}{p-i\choose
j}(\xi^{-1}\varepsilon)^{j}\alpha^{-j}a^{j}.$
Since $p-i-\ell=-(i+\ell)$ in a field (such as $\mathbb{F}$) of characteristic
$p$, we see that ${p-i\choose j}=(-1)^{j}{i+j-1\choose j}$ and the formula for
$W_{k}$ simplifies to
$W_{k}=\sum_{i=1}^{k}V\sum_{j=0}^{p-i}{i+j-1\choose
j}\xi^{-j}\varepsilon^{j}\alpha^{-j}a^{j}.$
Setting $k=1$ shows $W_{1}=U(\xi)$. A direct calculation shows that
$W_{i}/W_{i-1}\cong U(\xi)$. We showed in Part (a) that
$\textup{End}_{\mathbb{F}G}(U(\xi))$ equals $C_{\Delta}(\xi)=\Delta$. ∎
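As an aside (ours, not part of the proof), the field-level conjugation $S^{-1}CS=J$ used in Case (b) is easy to check numerically; the following Python sketch verifies it modulo $p$ for $p=5$, using Pascal's rule for the first $p-1$ rows and $\binom{p}{j}\equiv 0\pmod p$ for the last:

```python
# Sanity check (not part of the proof) of S^{-1} C S = J modulo a prime p.
import numpy as np
from math import comb

p = 5
# C: cyclic matrix with 1s on the superdiagonal and in the bottom-left corner;
# J: the p x p unipotent Jordan block.
C = np.zeros((p, p), dtype=int)
for i in range(p - 1):
    C[i, i + 1] = 1
C[p - 1, 0] = 1
J = np.eye(p, dtype=int) + np.diag(np.ones(p - 1, dtype=int), k=1)
# S has (i, j) entry binom(i, j).
S = np.array([[comb(i, j) for j in range(p)] for i in range(p)])
# S^{-1} C S = J is equivalent to C S = S J; verify it modulo p.
assert np.array_equal((C @ S) % p, (S @ J) % p)
print("C S = S J (mod %d) verified" % p)
```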
In Case (a), $C_{\Delta}(\xi\omega)$ equals $\Delta$ precisely when $\omega\in
Z$. If $\Delta$ is the rational quaternions, and $\omega$ is a primitive cube
root of unity, then $C_{\Delta}(\omega)$ equals $\mathbb{Q}(\omega)$. There
are infinitely many primitive cube roots of 1 in this case, and they form a
conjugacy class of $\Delta$ by Dickson’s Theorem (as they all satisfy the
irreducible polynomial $s^{2}+s+1$ over $\mathbb{Q}$). Thus isomorphism of the
submodules $U(\xi\omega)$ is governed by conjugacy in $\Delta$, and not
conjugacy in $\textup{Gal}(\mathbb{Q}(\omega):\mathbb{Q})$.
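The remark above is easy to verify numerically. The following Python sketch (ours) samples quaternions $\omega=-\tfrac{1}{2}+\tfrac{\sqrt{3}}{2}u$, with $u$ a unit pure quaternion, and checks that each satisfies $s^{2}+s+1$, hence is a primitive cube root of 1:

```python
# Numeric illustration of the remark: the quaternion algebra contains
# infinitely many primitive cube roots of 1, one for each unit pure quaternion.
import numpy as np

def qmul(q, r):
    """Hamilton product of quaternions given as arrays [w, x, y, z]."""
    w1, x1, y1, z1 = q
    w2, x2, y2, z2 = r
    return np.array([
        w1*w2 - x1*x2 - y1*y2 - z1*z2,
        w1*x2 + x1*w2 + y1*z2 - z1*y2,
        w1*y2 - x1*z2 + y1*w2 + z1*x2,
        w1*z2 + x1*y2 - y1*x2 + z1*w2,
    ])

one = np.array([1.0, 0.0, 0.0, 0.0])
rng = np.random.default_rng(0)
for _ in range(5):
    u = np.concatenate([[0.0], rng.normal(size=3)])
    u[1:] /= np.linalg.norm(u[1:])              # random unit pure quaternion
    omega = -0.5 * one + (np.sqrt(3) / 2) * u   # candidate cube root of 1
    # omega satisfies the irreducible polynomial s^2 + s + 1 over Q:
    assert np.allclose(qmul(omega, omega) + omega + one, 0, atol=1e-12)
    assert np.allclose(qmul(omega, qmul(omega, omega)), one, atol=1e-12)
print("all sampled omega are primitive cube roots of 1")
```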
Finally, it remains to generalize Theorem 8(a) to allow for the possibility
that $\Delta$ may not contain a primitive $p$th root of 1.
###### Theorem 9.
Let $V$ be a $G$-stable irreducible $\mathbb{F}H$-module where $H\triangleleft
G$ and $|G/H|=p$ is prime. Suppose that $\varepsilon,\xi\in\Delta$ satisfy
$\alpha(\delta)=\delta^{\varepsilon}$ ($\delta\in\Delta$) and $\xi^{p}-\eta=0$
where $\eta=\varepsilon^{p}\lambda\in Z=Z(\Delta)$. In addition, suppose that
$\textup{char}(\mathbb{F})\neq p$. Then $V{\uparrow}$ is an internal direct
sum
$V{\uparrow}=W_{\mu_{1}}\dotplus\cdots\dotplus W_{\mu_{r}}$
where $s^{p}-\eta=\mu_{1}(s)\cdots\mu_{r}(s)$ is a factorization into monic
irreducibles over $Z$, and where $W_{\mu}$ is as defined in Theorem 6. If $\mu(s)$
is a monic irreducible factor of $s^{p}-\eta$, and
$\mu(s)=\nu_{1}(s)\cdots\nu_{n}(s)$ where the $\nu_{i}(s)$ are monic and
irreducible in $\Delta[s]$, then $W_{\mu}$ is a Wedderburn component of
$V{\uparrow}$, and $W_{\mu}\cong W_{\nu_{n}}^{\oplus n}$ where $W_{\nu_{n}}$
is an irreducible ${\mathbb{F}G}$-module and
$\textup{End}_{{\mathbb{F}G}}(W_{\nu_{n}})$ is given in Theorem 6. In
addition,
$\textup{End}_{{\mathbb{F}G}}(W_{\nu_{n}})\cong B/\nu_{n}(s)\Delta[s]$
where
$B=\\{\delta(s)\in\Delta[s]\mid\delta(s)\nu_{n}(s)\in\nu_{n}(s)\Delta[s]\\}$
is the idealizer of the right ideal $\nu_{n}(s)\Delta[s]$.
###### Proof.
Since $\textup{char}(\mathbb{F})\neq p$, the monic polynomials
$\mu_{1}(s),\dots,\mu_{r}(s)$ are distinct and pairwise coprime in $Z[s]$.
From this it follows that $V{\uparrow}$ equals
$W_{\mu_{1}}\dotplus\cdots\dotplus W_{\mu_{r}}$. By Theorem 6,
$\textup{End}_{\mathbb{F}G}(W_{\mu})\cong\Delta_{\mathbb{K}}$ where
$\Delta_{\mathbb{K}}\cong\Delta[s]/\mu(s)\Delta[s]\cong\Delta\otimes_{Z}\mathbb{K}$,
and $\mathbb{K}$ is the field $Z[s]/\mu(s)Z[s]$. By [L91, 15.1(3)],
$\Delta_{\mathbb{K}}$ is a simple ring. Therefore $\mu(s)\Delta[s]$ is a two-
sided maximal ideal of $\Delta[s]$, and so $\mu(s)$ is called a two-sided
maximal element of $\Delta[s]$. By [J96, Theorem 1.2.19(b)],
$\Delta_{\mathbb{K}}\cong M_{n}(D)$ where $D$ is the division ring
$B/\nu_{n}(s)\Delta[s]$. Moreover, $Z(\Delta_{\mathbb{K}})\cong Z(M_{n}(D))$
so $\mathbb{K}\cong Z(D)$. Thus $W_{\mu}\cong W_{\nu_{n}}^{\oplus n}$ where
$W_{\nu_{n}}$ is an irreducible submodule of $V{\uparrow}$ and
$\textup{End}_{\mathbb{F}G}(W_{\nu_{n}})\cong D$. In addition,
$\nu_{1},\dots,\nu_{n}$ are similar [J96, Def. 1.2.7], and
$W_{\nu_{1}},\dots,W_{\nu_{n}}$ are isomorphic.
If $\mu(s),\mu^{\prime}(s)$ are distinct monic irreducible factors of
$s^{p}-\eta$ in $Z[s]$ and $\nu(s),\nu^{\prime}(s)$ in $\Delta[s]$ are monic
irreducible factors of $\mu(s)$ and $\mu^{\prime}(s)$ respectively, then it
follows from [J96, Def. 1.2.7] that $\nu(s)$ and $\nu^{\prime}(s)$ are not
similar. This means that an irreducible summand of $W_{\mu}$ is not isomorphic
to an irreducible summand of $W_{\mu^{\prime}}$. Hence the $W_{\mu}$ are
Wedderburn components as claimed. ∎
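The commutative first step — factoring $s^{p}-\eta$ into monic irreducibles over the centre — can be illustrated with sympy when $Z=\mathbb{Q}$ (our example, with $p=5$; the noncommutative refinement over $\Delta[s]$ is not captured here):

```python
# Illustration (Z = Q, p = 5) of the factorization of s^p - eta into monic
# irreducibles over the centre, the first step of Theorem 9.
import sympy as sp

s = sp.symbols('s')
print(sp.factor(s**5 - 1))  # (s - 1)*(s**4 + s**3 + s**2 + s + 1): two W_mu
print(sp.factor(s**5 - 2))  # s**5 - 2 stays irreducible over Q (Eisenstein at 2)
```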
## Acknowledgment
I am very grateful to Prof. A. D. H. Deitmar for providing a proof [D02] of
Lemma 7 in the case when $s^{p}-\eta$ has no root in $\Delta$.
## References
* [CR90]
C.W. Curtis and I. Reiner, Methods of Representation Theory: with Applications
to Finite Groups and Orders, Vol. 1, Classic Library Edn, John Wiley and Sons,
1990.
* [D02]
A. D. H. Deitmar, sci.math.research, September 6, 2002.
* [GK96]
S.P. Glasby and L.G. Kovács, _Irreducible modules and normal subgroups of
prime index_ , Comm. Algebra 24 (1996), no. 4, 1529–1546. (MR 97a:20012)
* [IL00]
G. Ivanyos and K. Lux, Treating the exceptional cases of the MeatAxe,
Experiment. Math. 9 (2000), no. 3, 373–381. (MR 2001j:16067)
* [HR94]
D.F. Holt and S. Rees, _Testing modules for irreducibility_ , J. Austral.
Math. Soc. Ser. A 57 (1994), no. 1, 1–16. (MR 95e:20023)
* [J96]
N. Jacobson, Finite-Dimensional Division Algebras over Fields, Springer-
Verlag, 1996.
* [L91]
T.Y. Lam, A First Course in Noncommutative Rings, Graduate Texts in
Mathematics 131, Springer-Verlag, 1991.
* [NP95]
P.M. Neumann and C.E. Praeger, _Cyclic matrices over finite fields_ , J.
London Math. Soc. 52 (1995), no. 2, 263–284. (MR 96j:15017)
* [P84]
R.A. Parker, The computer calculation of modular characters (the meat-axe),
Computational group theory (Durham, 1982), 267–274, Academic Press, London,
1984. (MR 84k:20041)
* [P82]
R. S. Pierce, Associative Algebras, Graduate Texts in Mathematics 88,
Springer-Verlag, 1982.
S. P. Glasby
---
Department of Mathematics
Central Washington University
WA 98926-7424, USA
<EMAIL_ADDRESS>
|
Multi-Body Segmentation and Motion Estimation via 3D Scan Synchronization
Jiahui Huang^1 He Wang^2 Tolga Birdal^2 Minhyuk Sung^3 Federica Arrigoni^4
Shi-Min Hu^1 Leonidas Guibas^2
^1Tsinghua University ^2Stanford University ^3KAIST ^4 University of Trento
[1]
Federica Arrigoni and Andrea Fusiello.
Synchronization problems in computer vision with closed-form solutions.
International Journal of Computer Vision, Sep 2019.
[2]
Federica Arrigoni, Eleonora Maset, and Andrea Fusiello.
Synchronization in the symmetric inverse semigroup.
In International Conference on Image Analysis and Processing,
pages 70–81. Springer, 2017.
[3]
Federica Arrigoni and Tomas Pajdla.
Motion segmentation via synchronization.
In Proceedings of the IEEE International Conference on Computer
Vision Workshops, 2019.
[4]
Federica Arrigoni, Beatrice Rossi, and Andrea Fusiello.
Spectral synchronization of multiple views in SE(3).
SIAM Journal on Imaging Sciences, 9(4):1963–1990, 2016.
[5]
Aseem Behl, Despoina Paschalidou, Simon Donné, and Andreas Geiger.
Pointflownet: Learning representations for rigid motion estimation
from point clouds.
In Proceedings of the IEEE Conference on Computer Vision and
Pattern Recognition, pages 7962–7971, 2019.
[6]
Florian Bernard, Johan Thunberg, Peter Gemmar, Frank Hertel, Andreas Husch, and
Jorge Goncalves.
A solution for multi-alignment by transformation synchronisation.
In Proceedings of the IEEE Conference on Computer Vision and
Pattern Recognition, 2015.
[7]
Berta Bescos, Carlos Campos, Juan D Tardós, and José Neira.
Dynaslam ii: Tightly-coupled multi-object tracking and slam.
arXiv preprint arXiv:2010.07820, 2020.
[8]
Paul J Besl and Neil D McKay.
Method for registration of 3-d shapes.
In Sensor fusion IV: control paradigms and data structures,
volume 1611, pages 586–606. International Society for Optics and Photonics, 1992.
[9]
JiaWang Bian, Wen-Yan Lin, Yasuyuki Matsushita, Sai-Kit Yeung, Tan-Dat Nguyen,
and Ming-Ming Cheng.
Gms: Grid-based motion statistics for fast, ultra-robust feature correspondence.
In Proceedings of the IEEE Conference on Computer Vision and
Pattern Recognition, 2017.
[10]
Tolga Birdal, Michael Arbel, Umut Simsekli, and Leonidas J Guibas.
Synchronizing probability measures on rotations via optimal transport.
In Proceedings of the IEEE Conference on Computer Vision and
Pattern Recognition, 2020.
[11]
Tolga Birdal, Umut Şimşekli, M. Onur Eken, and Slobodan Ilic.
Bayesian Pose Graph Optimization via Bingham Distributions and
Tempered Geodesic MCMC.
In Advances in Neural Information Processing Systems, 2018.
[12]
Tolga Birdal and Slobodan Ilic.
Cad priors for accurate and flexible instance reconstruction.
In Proceedings of the IEEE International Conference on Computer
Vision, 2017.
[13]
Tolga Birdal and Umut Simsekli.
Probabilistic permutation synchronization using the riemannian
structure of the birkhoff polytope.
In Proceedings of the IEEE Conference on Computer Vision and
Pattern Recognition, 2019.
[14]
Jesus Briales and Javier Gonzalez-Jimenez.
Cartan-sync: Fast and global SE(d)-synchronization.
IEEE Robotics and Automation Letters, 2(4):2127–2134, 2017.
[15]
Cesar Cadena, Luca Carlone, Henry Carrillo, Yasir Latif, Davide Scaramuzza,
José Neira, Ian Reid, and John J Leonard.
Past, present, and future of simultaneous localization and mapping:
Toward the robust-perception age.
IEEE Transactions on robotics, 32(6):1309–1332, 2016.
[16]
Luca Carlone, Roberto Tron, Kostas Daniilidis, and Frank Dellaert.
Initialization techniques for 3d slam: a survey on rotation
estimation and its use in pose graph optimization.
In International Conference on Robotics and Automation, pages
4597–4604. IEEE, 2015.
[17]
Angel X Chang, Thomas Funkhouser, Leonidas Guibas, Pat Hanrahan, Qixing Huang,
Zimo Li, Silvio Savarese, Manolis Savva, Shuran Song, Hao Su, et al.
Shapenet: An information-rich 3d model repository.
arXiv preprint arXiv:1512.03012, 2015.
[18]
Avishek Chatterjee and Venu Madhav Govindu.
Robust relative rotation averaging.
IEEE transactions on pattern analysis and machine intelligence,
40(4):958–972, 2017.
[19]
Kunal N Chaudhury, Yuehaw Khoo, and Amit Singer.
Global registration of multiple point clouds using semidefinite programming.
SIAM Journal on Optimization, 25(1), 2015.
[20]
Christopher Choy, JunYoung Gwak, and Silvio Savarese.
4d spatio-temporal convnets: Minkowski convolutional neural networks.
In Proceedings of the IEEE Conference on Computer Vision and
Pattern Recognition, 2019.
[21]
João Paulo Costeira and Takeo Kanade.
A multibody factorization method for independently moving objects.
International Journal of Computer Vision, 29(3), 1998.
[22]
Zheng Dang, Kwang Moo Yi, Yinlin Hu, Fei Wang, Pascal Fua, and Mathieu Salzmann.
Eigendecomposition-free training of deep networks with zero
eigenvalue-based losses.
In European Conference on Computer Vision, pages 768–783, 2018.
[23]
Zan Gojcic, Caifa Zhou, Jan D Wegner, Leonidas J Guibas, and Tolga Birdal.
Learning multiview 3d point cloud registration.
In Proceedings of the IEEE Conference on Computer Vision and
Pattern Recognition, 2020.
[24]
Stephen Gould, Richard Hartley, and Dylan Campbell.
Deep declarative networks: A new hope.
Technical report, Australian National University (arXiv:1909.04866),
Sep 2019.
[25]
Venu Madhav Govindu.
Lie-algebraic averaging for globally consistent motion estimation.
In Proceedings of the IEEE Computer Society Conference on
Computer Vision and Pattern Recognition, volume 1, pages I–I. IEEE, 2004.
[26]
Venu Madhav Govindu and A Pooja.
On averaging multiview relations for 3d scan registration.
IEEE Transactions on Image Processing, 23(3):1289–1302, 2014.
[27]
Maciej Halber, Yifei Shi, Kai Xu, and Thomas Funkhouser.
Rescan: Inductive instance segmentation for indoor rgbd scans.
In Proceedings of the IEEE International Conference on Computer
Vision, pages 2541–2550, 2019.
[28]
Richard Hartley, Jochen Trumpf, Yuchao Dai, and Hongdong Li.
Rotation averaging.
International journal of computer vision, 103(3), 2013.
[29]
David S Hayden, Jason Pacheco, and John W Fisher.
Nonparametric object and parts modeling with lie group dynamics.
In Proceedings of the IEEE Conference on Computer Vision and
Pattern Recognition, 2020.
[30]
Roger A Horn and Charles R Johnson.
Matrix analysis.
Cambridge university press, 2012.
[31]
Hou-Ning Hu, Qi-Zhi Cai, Dequan Wang, Ji Lin, Min Sun, Philipp Krahenbuhl,
Trevor Darrell, and Fisher Yu.
Joint monocular 3d vehicle detection and tracking.
In Proceedings of the IEEE international conference on computer
vision, pages 5390–5399, 2019.
[32]
Nan Hu, Qixing Huang, Boris Thibert, UG Alpes, and Leonidas Guibas.
Distributable consistent multi-object matching.
In Proceedings of the IEEE Conference on Computer Vision and
Pattern Recognition, 2018.
[33]
Jiahui Huang, Sheng Yang, Tai-Jiang Mu, and Shi-Min Hu.
Clustervo: Clustering moving instances and estimating visual odometry
for self and surroundings.
In Proceedings of the IEEE Conference on Computer Vision and
Pattern Recognition, pages 2168–2177, 2020.
[34]
Jiahui Huang, Sheng Yang, Zishuo Zhao, Yu-Kun Lai, and Shi-Min Hu.
Clusterslam: A slam backend for simultaneous rigid body clustering
and motion estimation.
In Proceedings of the IEEE International Conference on Computer
Vision, pages 5875–5884, 2019.
[35]
Qixing Huang, Zhenxiao Liang, Haoyun Wang, Simiao Zuo, and Chandrajit Bajaj.
Tensor maps for synchronizing heterogeneous shape collections.
ACM Trans. Graph., 38(4):106, 2019.
[36]
Xiangru Huang, Zhenxiao Liang, Xiaowei Zhou, Yao Xie, Leonidas J Guibas, and
Qixing Huang.
Learning transformation synchronization.
In Proceedings of the IEEE Conference on Computer Vision and
Pattern Recognition, pages 8082–8091, 2019.
[37]
Hossam Isack and Yuri Boykov.
Energy-based geometric multi-model fitting.
International journal of computer vision, 97(2):123–147, 2012.
[38]
Li Jiang, Hengshuang Zhao, Shaoshuai Shi, Shu Liu, Chi-Wing Fu, and Jiaya Jia.
Pointgroup: Dual-set point grouping for 3d instance segmentation.
In Proceedings of the IEEE Conference on Computer Vision and
Pattern Recognition, pages 4867–4876, 2020.
[39]
Wolfgang Kabsch.
A solution for the best rotation to relate two sets of vectors.
Acta Crystallographica Section A: Crystal Physics, Diffraction,
Theoretical and General Crystallography, 1976.
[40]
Alex Kendall, Hayk Martirosyan, Saumitro Dasgupta, Peter Henry, Ryan Kennedy,
Abraham Bachrach, and Adam Bry.
End-to-end learning of geometry and context for deep stereo regression.
In Proceedings of the IEEE International Conference on Computer
Vision, pages 66–75, 2017.
[41]
Florian Kluger, Eric Brachmann, Hanno Ackermann, Carsten Rother, Michael Ying
Yang, and Bodo Rosenhahn.
Consac: Robust multi-model fitting by conditional sample consensus.
In Proceedings of the IEEE Conference on Computer Vision and
Pattern Recognition, 2020.
[42]
Loic Landrieu and Martin Simonovsky.
Large-scale point cloud semantic segmentation with superpoint graphs.
In Proceedings of the IEEE Conference on Computer Vision and
Pattern Recognition, pages 4558–4567, 2018.
[43]
Ting Li, Vinutha Kallem, Dheeraj Singaraju, and René Vidal.
Projective factorization of multiple rigid-body motions.
In Proceedings of the IEEE Conference on Computer Vision and
Pattern Recognition, pages 1–6. IEEE, 2007.
[44]
Xiaolong Li, He Wang, Li Yi, Leonidas J Guibas, A Lynn Abbott, and Shuran Song.
Category-level articulated object pose estimation.
In Proceedings of the IEEE Conference on Computer Vision and
Pattern Recognition, 2020.
[45]
Xingyu Liu, Charles R Qi, and Leonidas J Guibas.
Flownet3d: Learning scene flow in 3d point clouds.
In Proceedings of the IEEE Conference on Computer Vision and
Pattern Recognition, pages 529–537, 2019.
[46]
Xingyu Liu, Mengyuan Yan, and Jeannette Bohg.
Meteornet: Deep learning on dynamic 3d point cloud sequences.
In Proceedings of the IEEE International Conference on Computer
Vision, pages 9246–9255, 2019.
[47]
Wei-Chiu Ma, Shenlong Wang, Rui Hu, Yuwen Xiong, and Raquel Urtasun.
Deep rigid instance scene flow.
In Proceedings of the IEEE Conference on Computer Vision and
Pattern Recognition, pages 3614–3622, 2019.
[48]
Luca Magri and Andrea Fusiello.
Fitting multiple heterogeneous models by multi-class cascaded T-linkage.
In Proceedings of the IEEE Conference on Computer Vision and
Pattern Recognition, pages 7460–7468, 2019.
[49]
Eleonora Maset, Federica Arrigoni, and Andrea Fusiello.
Practical and efficient multi-view matching.
In Proceedings of the IEEE International Conference on Computer
Vision, pages 4568–4576, 2017.
[50]
Kwang Moo Yi, Eduard Trulls, Yuki Ono, Vincent Lepetit, Mathieu Salzmann, and
Pascal Fua.
Learning to find good correspondences.
In Proceedings of the IEEE Conference on Computer Vision and
Pattern Recognition, 2018.
[51]
Michael Niemeyer, Lars Mescheder, Michael Oechsle, and Andreas Geiger.
Occupancy flow: 4d reconstruction by learning particle dynamics.
In Proceedings of the IEEE International Conference on Computer
Vision, 2019.
[52]
Deepti Pachauri, Risi Kondor, and Vikas Singh.
Solving the multi-way matching problem by permutation synchronization.
In Advances in neural information processing systems, pages
1860–1868, 2013.
[53]
Adam Paszke, Sam Gross, Francisco Massa, Adam Lerer, James Bradbury, Gregory
Chanan, Trevor Killeen, Zeming Lin, Natalia Gimelshein, Luca Antiga, Alban
Desmaison, Andreas Kopf, Edward Yang, Zachary DeVito, Martin Raison, Alykhan
Tejani, Sasank Chilamkurthy, Benoit Steiner, Lu Fang, Junjie Bai, and Soumith Chintala.
Pytorch: An imperative style, high-performance deep learning library.
In H. Wallach, H. Larochelle, A. Beygelzimer, F. dÁlché Buc, E.
Fox, and R. Garnett, editors, Advances in Neural Information Processing
Systems. Curran Associates, Inc., 2019.
[54]
Pulak Purkait, Tat-Jun Chin, and Ian Reid.
Neurora: Neural robust rotation averaging.
arXiv preprint arXiv:1912.04485, 2019.
[55]
Gilles Puy, Alexandre Boulch, and Renaud Marlet.
FLOT: Scene Flow on Point Clouds Guided by Optimal Transport.
In European Conference on Computer Vision, 2020.
[56]
Charles R Qi, Hao Su, Kaichun Mo, and Leonidas J Guibas.
Pointnet: Deep learning on point sets for 3d classification and segmentation.
In Proceedings of the IEEE Conference on Computer Vision and
Pattern Recognition, 2017.
[57]
Charles R Qi, Li Yi, Hao Su, and Leonidas J Guibas.
Pointnet++: Deep hierarchical feature learning on point sets in a
metric space.
arXiv preprint arXiv:1706.02413, 2017.
[58]
Davis Rempe, Tolga Birdal, Yongheng Zhao, Zan Gojcic, Srinath Sridhar, and
Leonidas J. Guibas.
Caspr: Learning canonical spatiotemporal point cloud representations.
In Advances in Neural Information Processing Systems, 2020.
[59]
David M Rosen, Luca Carlone, Afonso S Bandeira, and John J Leonard.
Se-sync: A certifiably correct algorithm for synchronization over the
special euclidean group.
The International Journal of Robotics Research, 38, 2019.
[60]
Renato F Salas-Moreno, Richard A Newcombe, Hauke Strasdat, Paul HJ Kelly, and
Andrew J Davison.
Slam++: Simultaneous localisation and mapping at the level of objects.
In Proceedings of the IEEE conference on computer vision and
pattern recognition, pages 1352–1359, 2013.
[61]
Michele Schiavinato and Andrea Torsello.
Synchronization over the birkhoff polytope for multi-graph matching.
In International Workshop on Graph-Based Representations in
Pattern Recognition, pages 266–275. Springer, 2017.
[62]
M. Slavcheva, M. Baust, D. Cremers, and S. Ilic.
KillingFusion: Non-rigid 3D Reconstruction without Correspondences.
In Proceedings of the IEEE Conference on Computer Vision and
Pattern Recognition, 2017.
[63]
Pei Sun, Henrik Kretzschmar, Xerxes Dotiwalla, Aurelien Chouard, Vijaysai
Patnaik, Paul Tsui, James Guo, Yin Zhou, Yuning Chai, Benjamin Caine, et al.
Scalability in perception for autonomous driving: Waymo open dataset.
In Proceedings of the IEEE Conference on Computer Vision and
Pattern Recognition, pages 2446–2454, 2020.
[64]
Johan Thunberg, Florian Bernard, and Jorge Goncalves.
Distributed methods for synchronization of orthogonal matrices over graphs.
Automatica, 80:243–252, 2017.
[65]
Ivan Tishchenko, Sandro Lombardi, Martin R Oswald, and Marc Pollefeys.
Self-supervised learning of non-rigid residual flow and ego-motion.
arXiv preprint arXiv:2009.10467, 2020.
[66]
Roberto Tron and Kostas Daniilidis.
Statistical pose averaging with non-isotropic and incomplete relative measurements.
In European Conference on Computer Vision. Springer, 2014.
[67]
Roberto Tron and Rene Vidal.
Distributed 3-d localization of camera sensor networks from 2-d image measurements.
IEEE Transactions on Automatic Control, 59(12), 2014.
[68]
Dimitrios Tzionas and Juergen Gall.
Reconstructing articulated rigged models from rgb-d videos.
In European Conference on Computer Vision Workshops, 2016.
[69]
Sundar Vedula, Simon Baker, Peter Rander, Robert Collins, and Takeo Kanade.
Three-dimensional scene flow.
In Proceedings of the Seventh IEEE International Conference on
Computer Vision, volume 2, pages 722–729. IEEE, 1999.
[70]
Johanna Wald, Armen Avetisyan, Nassir Navab, Federico Tombari, and Matthias Nießner.
Rio: 3d object instance re-localization in changing indoor environments.
In Proceedings of the IEEE International Conference on Computer
Vision, pages 7658–7667, 2019.
[71]
Lanhui Wang and Amit Singer.
Exact and stable recovery of rotations for robust synchronization.
Information and Inference: A Journal of the IMA, 2(2):145–193, 2013.
[72]
Qianqian Wang, Xiaowei Zhou, and Kostas Daniilidis.
Multi-image semantic matching by mining consistent features.
In Proceedings of the IEEE Conference on Computer Vision and
Pattern Recognition, 2018.
[73]
Xiaogang Wang, Bin Zhou, Yahao Shi, Xiaowu Chen, Qinping Zhao, and Kai Xu.
Shape2motion: Joint analysis of motion parts and attributes from 3d shapes.
In Proceedings of the IEEE Conference on Computer Vision and
Pattern Recognition, pages 8876–8884, 2019.
[74]
Zirui Wang, Shuda Li, Henry Howard-Jenkins, Victor Prisacariu, and Min Chen.
Flownet3d++: Geometric losses for deep scene flow estimation.
In Proceedings of the IEEE/CVF Winter Conference on Applications
of Computer Vision, pages 91–98, 2020.
[75]
Joe H Ward Jr.
Hierarchical grouping to optimize an objective function.
Journal of the American statistical association,
58(301):236–244, 1963.
[76]
Thomas Whelan, Stefan Leutenegger, R Salas-Moreno, Ben Glocker, and Andrew Davison.
Elasticfusion: Dense slam without a pose graph.
In Robotics: Science and Systems, 2015.
[77]
Pengxiang Wu, Siheng Chen, and Dimitris N Metaxas.
Motionnet: Joint perception and motion prediction for autonomous
driving based on bird's eye view maps.
In Proceedings of the IEEE Conference on Computer Vision and
Pattern Recognition, pages 11385–11395, 2020.
[78]
Wenxuan Wu, Zhi Yuan Wang, Zhuwen Li, Wei Liu, and Li Fuxin.
Pointpwc-net: Cost volume on point clouds for (self-) supervised
scene flow estimation.
In European Conference on Computer Vision, pages 88–107.
Springer, 2020.
[79]
Fanbo Xiang, Yuzhe Qin, Kaichun Mo, Yikuan Xia, Hao Zhu, Fangchen Liu, Minghua
Liu, Hanxiao Jiang, Yifu Yuan, He Wang, Li Yi, Angel X. Chang, Leonidas J.
Guibas, and Hao Su.
SAPIEN: A simulated part-based interactive environment.
In Proceedings of the IEEE Conference on Computer Vision and
Pattern Recognition, June 2020.
[80]
Binbin Xu, Wenbin Li, Dimos Tzoumanikas, Michael Bloesch, Andrew Davison, and
Stefan Leutenegger.
Mid-fusion: Octree-based object-level multi-instance dynamic slam.
In International Conference on Robotics and Automation, pages
5231–5237. IEEE, 2019.
[81]
Xun Xu, Loong Fah Cheong, and Zhuwen Li.
3d rigid motion segmentation with mixed and unknown number of models.
IEEE Transactions on Pattern Analysis and Machine Intelligence,
[82]
Zihao Yan, Ruizhen Hu, Xingguang Yan, Luanmin Chen, Oliver van Kaick, Hao
Zhang, and Hui Huang.
Rpm-net: Recurrent prediction of motion and parts from point cloud.
ACM Trans. Graph., 38(6):240:1–240:15, 2019.
[83]
Li Yi, Haibin Huang, Difan Liu, Evangelos Kalogerakis, Hao Su, and Leonidas Guibas.
Deep part induction from articulated object pairs.
ACM Trans. Graph., 37(6), 2018.
[84]
Li Yi, Vladimir G Kim, Duygu Ceylan, I-Chao Shen, Mengyan Yan, Hao Su, Cewu Lu,
Qixing Huang, Alla Sheffer, and Leonidas Guibas.
A scalable active framework for region annotation in 3d shape collections.
ACM Trans. Graph., 35(6):1–12, 2016.
[85]
Jin-Gang Yu, Gui-Song Xia, Ashok Samal, and Jinwen Tian.
Globally consistent correspondence of multiple feature sets using
proximal gauss–seidel relaxation.
Pattern Recognition, 51:255–267, 2016.
[86]
Jun Zhang, Mina Henein, Robert Mahony, and Viorela Ila.
Vdo-slam: A visual dynamic object-aware slam system.
arXiv preprint arXiv:2005.11052, 2020.
[87]
Jiahui Zhang, Dawei Sun, Zixin Luo, Anbang Yao, Lei Zhou, Tianwei Shen, Yurong
Chen, Long Quan, and Hongen Liao.
Learning two-view correspondences and geometry using order-aware network.
In Proceedings of the IEEE International Conference on Computer
Vision, 2019.
Multi-Body Segmentation and Motion Estimation via 3D Scan Synchronization — Supplementary Material
In this supplementary material, we first give the proofs of the theorems in <ref>, then provide more details of our implementation and our dataset in <ref>.
Additional ablations and results are shown in <ref>.
§ PROOFS OF THEOREMS
§.§ Theorem 1
The energy function in eq:sync can be written as:
\begin{equation}
\begin{aligned}
E(\ps) =& \sum_{k=1}^K \sum_{l=1}^K w^{kl} \lVert \Pm^k - \Pm^{kl} \Pm^l \rVert_F^2 \\
=& \sum_{k=1}^K \sum_{l=1}^K \sum_{i=1}^N w^{kl} \lVert \Pm^k_{:i} - \Pm^{kl} \Pm^l_{:i} \rVert^2 \\
=& \sum_{i=1}^N \sum_{k=1}^K \sum_{l=1}^K w^{kl} \lVert \Pm^k_{:i} \rVert^2 + w^{lk} \lVert \Pm^l_{:i} \rVert^2 \\
&- w^{kl}(\Pm^k_{:i})^\top(\Pm^{kl} \Pm^l_{:i}) - w^{lk}(\Pm^l_{:i})^\top(\Pm^{lk} \Pm^k_{:i}) \\
=& \sum_{i=1}^N 2\sum_{k=1}^K (\Pm^k_{:i})^\top \left( \sum_{l=1}^K w^{kl} (\Pm^k_{:i} - \Pm^{kl} \Pm^l_{:i} ) \right) \\
=& \sum_{i=1}^N 2\sum_{k=1}^K (\Pm^k_{:i})^\top \left( \left(w^{k} \Id_N \right) \Pm^k_{:i} - \sum_{l\neq k} w^{kl} \Pm^{kl} \Pm^l_{:i} \right) \\
=& 2\sum_{i=1}^N \ps_{:i}^\top \Lap \ps_{:i} = 2 \mathrm{tr}( \ps^\top \Lap \ps ).\nonumber
\end{aligned}
\end{equation}
The spectral solution additionally requires each column of $\ps$ to be of unit norm and orthogonal to the others, relaxing the constraint $\{\Pm^{kl}\in\Man\}_{k,l}$:
\begin{equation}
\min_{\ps} \mathrm{tr}(\ps^\top \Lap \ps) \quad\mathrm{ s.t. }\quad \ps^{\top} \ps = \Id_N.
\end{equation}
This QCQP (Quadratically Constrained Quadratic Program) is known to have a closed-form solution given by the generalized Rayleigh problem [30] (or, similarly, the Courant-Fischer-Weyl min-max principle).
The solution is given by the $N$ eigenvectors of $\Lap$ corresponding to the smallest $N$ eigenvalues.
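As an illustration of this spectral relaxation, here is a minimal numpy sketch (our notation and synthetic data, not the authors' code) for a noiseless instance, where the stacked absolute permutations span the null space of $\Lap$:

```python
# Minimal sketch of the spectral relaxation: in the noiseless case the columns
# of the stacked absolute permutations span the null space of the Laplacian L.
import numpy as np

rng = np.random.default_rng(0)
K, N = 4, 6
P_abs = [np.eye(N)[rng.permutation(N)] for _ in range(K)]   # absolute perms P^k
w = np.ones((K, K)) - np.eye(K)                             # pairwise weights w^{kl}

# Assemble the KN x KN Laplacian: diagonal blocks w^k I_N with w^k = sum_l w^{kl},
# off-diagonal blocks -w^{kl} P^{kl}.
L = np.zeros((K * N, K * N))
for k in range(K):
    L[k*N:(k+1)*N, k*N:(k+1)*N] = w[k].sum() * np.eye(N)
    for l in range(K):
        if l != k:
            P_kl = P_abs[k] @ P_abs[l].T                    # relative perm P^{kl}
            L[k*N:(k+1)*N, l*N:(l+1)*N] = -w[k, l] * P_kl

ps = np.vstack(P_abs)                                       # stacked solution
evals, evecs = np.linalg.eigh(L)
print("smallest N eigenvalues:", np.round(evals[:N], 10))   # all ~0 (noiseless)
print("||L ps|| =", np.linalg.norm(L @ ps))                 # ~0: ps spans null space
```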
§.§ Theorem 2
We first recall the spectral solution of the synchronization problem and then extend the result to the weighted variant we propose. For completeness, here we include $\Zm=\Gs \Gs^\top$, the unweighted motion segmentation matrix:
\begin{equation}
\Zm = \begin{bmatrix}
\zero & \Z^{12} & \dots & \Z^{1K} \\
\Z^{21} & \zero & \dots & \Z^{2K} \\
\vdots & \vdots & \ddots & \vdots \\
\Z^{K1} & \Z^{K2} & \dots & \zero \\
\end{bmatrix}.
\end{equation}
In the noiseless regime and under spectral relaxation, the synchronization problem can be cast as
\begin{equation}
\max_\U \mathrm{tr}(\U^\top \Zm \U) \quad\mathrm{ s.t. }\quad \U^\top \U = \Id_S,
\end{equation}
where $\U \in \R^{KN \times S}$ denotes the sought solution, absolute permutations. Then each column in $\U$ will be one of the $S$ leading eigenvectors of matrix $\Zm$ [3]:
\begin{equation}
\U \cdot \mathrm{diag} (\sqrt{\lambda_1}, \dots, \sqrt{\lambda_S}) \approx \Gs = \begin{bmatrix}
\G^1 \\
\G^2 \\
\vdots \\
\G^K \\
\end{bmatrix},
\end{equation}
where $\lambda_1, \dots, \lambda_S$ are the leading eigenvalues of $\Zm$.
We now recall the weighted synchronization problem.
Here we assume the $\Z^{kl}$ matrices are binary and satisfy the properties listed in [3]. The weighted synchronization matrix $\tilde{\Zm}$ is composed of a set of anisotropically-scaled $\Z^{kl}$ matrices:
\begin{equation}\label{eq:ZW}
\ZW = \begin{bmatrix}
\zero & \frac{1}{\sigma^{12}}\Z^{12} & \dots & \frac{1}{\sigma^{1K}}\Z^{1K} \\
\frac{1}{\sigma^{21}}\Z^{21} & \zero & \dots & \frac{1}{\sigma^{2K}}\Z^{2K} \\
\vdots & \vdots & \ddots & \vdots \\
\frac{1}{\sigma^{K1}}\Z^{K1} & \frac{1}{\sigma^{K2}}\Z^{K2} & \dots & \zero \\
\end{bmatrix}.
\end{equation}
Recall that in the main paper we use the unweighted synchronization (without $\frac{1}{\sigma}$), cancelling the effect of the weights via a normalization. <ref>, which we now state more formally, concerns the linear scaling of the solution induced by the weights in the motion segmentation matrix:
The spectral solution to the weighted version of the synchronization problem
\begin{equation} \label{eq:uzu}
\max_{\UW} \mathrm{tr}(\UW^{\top} \tilde{\Zm} \UW) \quad\mathrm{ s.t. }\quad \UW^{\top} \UW = \Id_S
\end{equation}
is given by the columns of $\Gw$:
\begin{equation}\label{eq:Ug}
\UW \cdot \mathrm{diag} (\sqrt{\evalw_1}, \dots, \sqrt{\evalw_S}) \approx \Gw = \begin{bmatrix}
\G^1 \D^1 \\
\G^2 \D^2 \\
\vdots \\
\G^K \D^K \\
\end{bmatrix},
\end{equation}
Here $\evalw_1, \dots, \evalw_S$ are the leading eigenvalues of $\tilde{\Zm}$, and $\D^1, \dots, \D^K \in \R^{S\times S}$ are diagonal matrices.
In other words, the columns of $\Gw$, being the eigenvectors of $\ZW$, are related to the non-weighted synchronization by a piecewise linear anisotropic scaling.
We begin with the observation that $\K^k=\G^{k\top}\G^k$ is a diagonal matrix where $K^k_{ss}$ counts[According to our assumption, this `count' hereafter is only valid when $\Z^{kl}$s are binary and can be viewed as soft counting when such an assumption is relaxed.] the number of points in point cloud $k$ belonging to part $s$. Hence, each diagonal element of $\Gs^\top\Gs=\sum_{k=1}^K \K^k$ counts the number of points over all point clouds that belong to part $s$. Because $\Zm=\Gs\Gs^\top$, we have the following spectral decomposition $\Zm\Gs=\Gs\evals$ [3]:
\begin{equation}
\Zm\Gs = \Gs\Gs^\top\Gs = \Gs \sum\limits_{k=1}^K \G^{k\top}\G^k = \Gs \evals.
\end{equation}
To simplify notation we overload $w^{kl}$ by setting $w^{kl}=\frac{1}{\sigma^{kl}}$ for the rest of this subsection.
Let us now write $\ZW\Gw$ in a similar fashion and seek a similar emergent eigen-decomposition property:
\begin{align}\label{eq:ZWGw}
\ZW\Gw = \begin{bmatrix}
\sum\limits_{l=1}^K w^{1l} \Z^{1l} \G^l \D^l \\
\sum\limits_{l=1}^K w^{2l} \Z^{2l} \G^l \D^l \\
\vdots \\
\sum\limits_{l=1}^K w^{Kl} \Z^{Kl} \G^l \D^l \\
\end{bmatrix}.
\end{align}
Then, using $\Z^{kl}=\G^k\G^{l\top}$ we can express <ref> as:
\begin{align}\label{eq:Zg}
\ZW\Gw &= \begin{bmatrix}
\sum\limits_{l=1}^K w^{1l} \G^1\G^{l\top} \G^l \D^l \\
\sum\limits_{l=1}^K w^{2l} \G^2\G^{l\top} \G^l \D^l \\
\vdots \\
\sum\limits_{l=1}^K w^{Kl} \G^K\G^{l\top}\G^l \D^l \\
\end{bmatrix}\\
&= \begin{bmatrix}
\G^1\sum\limits_{l=1}^K w^{1l} \G^{l\top} \G^l \D^l \\
\G^2\sum\limits_{l=1}^K w^{2l} \G^{l\top} \G^l \D^l \\
\vdots \\
\G^K\sum\limits_{l=1}^K w^{Kl} \G^{l\top}\G^l \D^l \\
\end{bmatrix} = \begin{bmatrix}
\G^1 \Hw^1 \\
\G^2 \Hw^2 \\
\vdots \\
\G^K \Hw^K
\end{bmatrix}\label{eq:GH}
\end{align}
where
\begin{align}
\Hw^k = \sum\limits_{l=1}^K w^{kl} \G^{l\top} \G^l \D^l.
\end{align}
Each $\Hw^k$ is a diagonal matrix because every $\G^{l\top}\G^l$ and every $\D^l$ is diagonal by assumption.
Note that the first factor in the summation is assumed to be a known[We will see later in <ref> why this is only an assumption.] diagonal matrix (see the beginning of the proof):
\begin{align}\label{eq:EH}
\E^{kl}=w^{kl} \G^{l\top} \G^l.
\end{align}
This form is very similar to <ref> scaled by the corresponding diagonal matrices.
Let us now consider the $s^{\mathrm{th}}$ column of $\Gw$ responsible for part $s$. We are interested in showing that such a column is an eigenvector of $\ZW$:
\begin{align}\label{eq:Zgeigen}
\ZW \Gw^s = \evalw_s \Gw^s.
\end{align}
In other words, we seek the existence of $\evalw_s$ such that <ref> is satisfied. Moreover, a closed form expression of $\evalw_s$ would allow for the understanding of the effect of the weights on the problem. Let us now plug <ref> and <ref> into <ref> to see that:
\begin{align}\label{eq:Zgeigen2}
\begin{bmatrix} (\G^1 \Hw^1)^s \\
(\G^2 \Hw^2)^s \\
\vdots \\
(\G^K \Hw^K)^s
\end{bmatrix} = \evalw_s \begin{bmatrix} (\G^1 \D^1)^s \\
(\G^2 \D^2)^s \\
\vdots \\
(\G^K \D^K)^s
\end{bmatrix}.
\end{align}
As $\G^k$ is a binary matrix, it only acts as a column selector: for a single part $s$, the corresponding column of the motion segmentation $\Gw$ contains only ones. We can use this idea and the diagonal nature of $\ZW \Gw^s$ to cancel $\G^k$ on each side. Re-arranging the problem in terms of scalars on the diagonal yields:
\begin{align}\label{eq:sys}
\begin{cases}
H^1_{ss} = \sum\limits_{l=1}^K E^{1l}_{ss} D^{l}_{ss} = \evalw_s D^1_{ss},\\
H^2_{ss} = \sum\limits_{l=1}^K E^{2l}_{ss} D^{l}_{ss} = \evalw_s D^2_{ss},\\
\qquad\vdots\\
H^K_{ss} = \sum\limits_{l=1}^K E^{Kl}_{ss} D^{l}_{ss} = \evalw_s D^K_{ss},
\end{cases}
\end{align}
where $E$ is as defined in <ref>.
Note that both $D$ and $\evalw_s$ are unknowns in this seemingly non-linear problem. Yet, we can re-arrange
<ref> into another eigen-problem:
\begin{align}\label{eq:sysJEigen}
\J^s\dvec^s=\evalw_s^\prime\dvec^s,
\end{align}
\begin{align}\label{eq:sysJ}
\J^s = \begin{bmatrix}
E^{11}_{ss} & E^{12}_{ss} & \cdots & E^{1K}_{ss}\\
E^{21}_{ss} & E^{22}_{ss} & \cdots & E^{2K}_{ss}\\
\vdots & \vdots & \ddots & \vdots\\
E^{K1}_{ss} & E^{K2}_{ss} & \cdots & E^{KK}_{ss}
\end{bmatrix}, \quad \dvec^s = \begin{bmatrix}
D^{1}_{ss} \\
D^{2}_{ss} \\
\vdots \\
D^{K}_{ss}
\end{bmatrix}.
\end{align}
Hence, we conclude that the eigenvectors of the weighted synchronization have the form of <ref> if and only if we can solve <ref>. This is possible as soon as $\E^{kl}$ are known and $\J^s$ has real eigenvectors. Besides an existence condition, <ref> also provides an explicit closed form relationship between the weights and the eigenvectors once $\E^{kl}$ are available.
Note that the symmetric eigen-problem given in <ref> only requires the matrix $\E^{kl}$ for all $k,l$. By definition, each element along the diagonal of $\E^{kl}=w^{kl} \G^{l\top} \G^l$ denotes the number of points in point cloud $l$ belonging to each part, weighted by $w^{kl}$. Hence, it does not require the complete knowledge of the part segmentation but only the number of points per part. While this is unknown in practice, for the sake of our theoretical analysis, we may assume the availability of this information. Hence, we could speak of solving <ref> for each part $s$.
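To make the existence argument concrete, the following numpy sketch (our construction; sizes, weights, and seeds are arbitrary) builds a small weighted problem, solves the per-part eigenproblem for $\dvec^s$, and verifies that the resulting columns of $\Gw$ are indeed eigenvectors of $\ZW$:

```python
# Numerical check of the claim above: with D^k solving J^s d^s = lambda'_s d^s,
# the columns of G_w = [G^1 D^1; ...; G^K D^K] are eigenvectors of Z_w.
import numpy as np

rng = np.random.default_rng(1)
K, N, S = 3, 8, 2
labels = np.r_[np.zeros(N // 2, dtype=int), np.ones(N - N // 2, dtype=int)]
G = []                                         # one-hot segmentations G^k (N x S)
for _ in range(K):
    rng.shuffle(labels)
    G.append(np.eye(S)[labels])
w = rng.uniform(0.5, 2.0, (K, K)); w = (w + w.T) / 2       # symmetric weights

# Z_w with off-diagonal blocks w^{kl} Z^{kl} = w^{kl} G^k (G^l)^T.
Zw = np.zeros((K * N, K * N))
for k in range(K):
    for l in range(K):
        if l != k:
            Zw[k*N:(k+1)*N, l*N:(l+1)*N] = w[k, l] * G[k] @ G[l].T

for s in range(S):
    # J^s entries: E^{kl}_{ss} = w^{kl} ((G^l)^T G^l)_{ss}, the weighted
    # per-cloud point counts of part s (zero diagonal, matching Z_w's blocks).
    counts = np.array([(Gk.T @ Gk)[s, s] for Gk in G])
    Js = w * counts[None, :] * (1 - np.eye(K))
    lam, dvecs = np.linalg.eig(Js)
    i = int(np.argmax(lam.real))
    d = dvecs[:, i].real                        # the sought entries D^k_{ss}
    gw_s = np.concatenate([d[k] * G[k][:, s] for k in range(K)])
    assert np.allclose(Zw @ gw_s, lam[i].real * gw_s, atol=1e-8)
print("weighted eigenvector structure verified for every part s")
```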
It is also interesting to analyze the scenario where one assumes $\dvec^s=\one$ for each $s$. In fact, this is what would happen if one were to naively use the unweighted solution for a weighted problem, i.e., use $\Gw$ itself as the estimate of the motion segmentation, since our closed-form expression for $\D^k$ (<ref>) cannot be evaluated at test time. Then, assuming $\D^k$ to be the identity, for each $k$ it holds:
\begin{align}
\sum\limits_{l=1}^K E^{kl}_{ss} &= \sum\limits_{l=1}^K w^{kl} \G^{l\top}(\G^l)^s \\
&= w^{k1}\G^{1\top}(\G^1)^s + \dots + w^{kK}\G^{K\top}(\G^K)^s \nonumber \\
&= \w_k \cdot \begin{bmatrix} \G^{1\top}(\G^1)^s & \cdots & \G^{K\top}(\G^K)^s \end{bmatrix} \nonumber \\
&= \w_{k} \bm{\varphi}^s = \evalw_s^\prime \label{eq:wlambda}.
\end{align}
where $(\G^l)^s$ is the $s$-th column of $\G^l$.
The final equality follows directly from <ref> when $D_{ss}=1$.
Note that we can find multiple weights $\w_{k}$ satisfying <ref>. For instance, if $\bm{\varphi}$ and $\lambda$ were known, one solution for any $s$ would be:
\begin{equation}
w^{kl} = \frac{\evalw_s}{K\varphi_k^s}.
\end{equation}
Because (i) we cannot assume a uniform prior on the number of points associated with each part and (ii) it would be costly to perform yet another eigendecomposition, we choose to cancel the effect of the predicted weights $w_{ij}$, as we do in the paper, by a simple normalization procedure. However, such an unweighted solution is only possible because our design encodes the weights in the norm of each entry of the predicted $\Znet^{kl}$.
§ IMPLEMENTATION DETAILS
§.§ Network Structures
§.§.§ Flow Prediction Network
We adapt our own version of flow prediction network $\flownet$ from PointPWC-Net [78] by changing layer sizes and the number of pyramids.
As illustrated in <ref>, the network predicts 3D scene flow in a coarse-to-fine fashion.
Given input $\X^k$ as source point cloud and $\X^l$ as target point cloud, a three-level pyramid is built for them using furthest point sampling as $\{ \X^{k,(0)}=\X^k, \X^{k,(1)}, \X^{k,(2)} \}$ and $\{ \X^{l,(0)}=\X^l, \X^{l,(1)}, \X^{l,(2)} \}$, with point counts being 512, 128, 32, respectively.
Similarly, we denote the flow predicted at each level as $\{ \flow^{kl,(0)},\flow^{kl,(1)},\flow^{kl,(2)} \}$.
Per-point features for all points are then extracted with dimension 128, 192 and 384 for each hierarchy.
A 3D `Cost Volume' [40] is then computed for the source point cloud by aggregating the features from $\X^k$ and $\X^l$ for the point pyramid, with feature dimension 64, 128 and 256.
This aggregation uses the neighborhood information relating the target point cloud and the warped source point cloud in a patch-to-patch manner.
The cost volume, containing valuable information about the correlations between the point clouds, is fed into a scene flow prediction module for final flow prediction.
The predicted flow at the coarser level can be upsampled via interpolation and helps the prediction at the finer level.
Readers are referred to [78] for more details.
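For concreteness, here is a minimal numpy reference implementation (our sketch, not the authors' code) of the farthest point sampling used to build the three-level pyramid:

```python
# Farthest point sampling (FPS), used to build the 512 / 128 / 32 point pyramid.
import numpy as np

def farthest_point_sampling(points: np.ndarray, m: int, seed: int = 0) -> np.ndarray:
    """Return indices of m points chosen greedily to be mutually far apart."""
    n = points.shape[0]
    rng = np.random.default_rng(seed)
    idx = np.empty(m, dtype=int)
    idx[0] = rng.integers(n)                      # arbitrary first point
    dist = np.linalg.norm(points - points[idx[0]], axis=1)
    for i in range(1, m):
        idx[i] = int(dist.argmax())               # farthest from the chosen set
        dist = np.minimum(dist, np.linalg.norm(points - points[idx[i]], axis=1))
    return idx

X = np.random.default_rng(0).normal(size=(512, 3))   # level-0 point cloud
X1 = X[farthest_point_sampling(X, 128)]               # level-1 pyramid
X2 = X1[farthest_point_sampling(X1, 32)]              # level-2 pyramid
print(X.shape, X1.shape, X2.shape)
```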
Our adapted version of PointPWC-Net $\flownet$. Each rectangular block denotes a tensor, whose size is written as $N\times C$ (batch dimension is ignored) below its name, with $N$ being the number of points and $C$ being the feature dimension. The network is composed of 3 hierarchical levels. At each level, features from the two input point clouds are fused via a Cost Volume Layer, which digests warped point cloud and features from the upsampled coarse flow estimated from the last level and provides a cost volume for flow prediction.
§.§.§ Confidence Estimation Network
The confidence estimation network $\confnet$ we use, adapted from OANet (Order-Aware Network) [87], learns inlier probability of point correspondences.
In our case, each correspondence is represented as a $\R^7$ vector as described in the main paper.
Different from other network architectures like PointNet [56], OANet features novel differentiable pooling (DiffPool) and unpooling (DiffUnpool) operations as well as an order-aware filtering block, which are demonstrated to effectively gather local context and are hence useful in geometric learning settings, especially for outlier rejection [9].
The network starts and ends with 6 PointCN [50] layers, which globally exchange point feature information by context normalization (whitening along the channel dimension to build cross-point relationships).
In between the PointCNs lies the combination of DiffPool layer, order-aware filtering block and DiffUnpool layer.
The DiffPool layer learns an $N\times M$ soft assignment matrix, where each row represents the classification score of each point being assigned to one of the $M$ `local clusters'.
These local clusters represent local structures in the correspondence space and are implicitly learned.
As the $M$ clusters are in canonical order, the feature after the DiffPool layer is permutation-invariant, enabling the order-aware filtering block afterward to apply normalization along the spatial dimension (i.e., the `Spatial Correlation Layer') for capturing a more complex global context.
In our $\confnet$, we choose $M=64$.
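For intuition, the context normalization at the heart of each PointCN layer can be sketched in a few lines of numpy (our sketch; the data below are placeholders):

```python
# Context normalization: whiten each feature channel across all correspondences
# of a pair, building cross-point relationships without imposing an order.
import numpy as np

def context_norm(feats: np.ndarray, eps: float = 1e-5) -> np.ndarray:
    """feats: (N, C) per-correspondence features; normalise along N."""
    mu = feats.mean(axis=0, keepdims=True)
    sigma = feats.std(axis=0, keepdims=True)
    return (feats - mu) / (sigma + eps)

corr = np.random.default_rng(0).normal(2.0, 3.0, size=(1000, 7))  # R^7 inputs
out = context_norm(corr)
print(out.mean(axis=0).round(6), out.std(axis=0).round(6))        # ~0 and ~1
```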
§.§.§ Motion Segmentation Network
The architecture of $\segnet$ has been already introduced in the main paper.
Here we elaborate how the transformations $\Tp_i^{kl}$ are estimated by PointNet++.
The input to the network is the stacked $[ (\X^k)^\top \; (\hat{\flow}^{kl})^\top ]^\top \in \R^{6\times N}$ and the output is in $\R^{12 \times N}$, where for each point we take the first 9 dimensions as the elements in the rotation matrix and the last 3 dimensions as the translation vector.
In practice, direct transformation estimation from PointNet++ is not accurate.
Given that we have already obtained the flow vectors, instead of estimating $\Tp^{kl}_i$ directly, we compute a residual motion on top of the given flow, similar to the method in [83].
Specifically, when the actual outputs from the network are $\Rot_\mathrm{net} \in \R^{3\times 3}$ and $\tra_\mathrm{net} \in \mathbb{R}^{3}$, the transformations used in subsequent steps of the pipeline $\Tp^{kl}_i = [ \Rp_i^{kl} | \tp^{kl}_i ]$ are computed as follows:
\begin{equation}
\Rp_i^{kl} = \Rot_\mathrm{net} + \Id_3, \quad
\tp^{kl}_i = \tra_\mathrm{net} - \Rot_\mathrm{net} \x^k_i + \bm{f}^{kl}_i.
\end{equation}
Note that we do not constrain $\Tp_i^{kl}$ to lie in SE(3) with SVD-like techniques.
In fact, the transformation is not directly supervised (neither in this module nor in the entire pipeline); the nearest supervision comes from the $\bm{\beta}^{kl}$ matrix through Eq. (9).
This avoids the effort of finding a delicate weight to balance the rotational and translational parts of the transformation.
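A small numpy helper (our naming, not the paper's code) makes the parameterisation above concrete:

```python
# Residual-motion parameterisation: the network outputs (R_net, t_net) and the
# effective transformation is assembled on top of the predicted flow f at x.
import numpy as np

def residual_to_transform(R_net, t_net, x, f):
    """R_net: (3,3); t_net, x, f: (3,). Returns (R, t) such that
    R @ x + t = x + f + t_net, i.e. with zero network output the
    transformation reproduces the predicted flow at x exactly."""
    R = R_net + np.eye(3)
    t = t_net - R_net @ x + f
    return R, t

rng = np.random.default_rng(0)
R_net = 0.01 * rng.normal(size=(3, 3))          # small residual rotation part
t_net = 0.01 * rng.normal(size=3)
x, f = rng.normal(size=3), rng.normal(size=3)
R, t = residual_to_transform(R_net, t_net, x, f)
assert np.allclose(R @ x + t, x + f + t_net)    # anchor point follows the flow
```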
§.§ Pose Computation and Iterative Refinement
Given the synchronized pairwise flow $\hat{\bm{f}}^{kl}$ and motion segmentation $\G^k$, we estimate the motion separately for each rigid part using a weighted Kabsch algorithm [39].
The weight for point $\x_i^k$ and the rigid motion $s$ between $\X^k$ and $\X^l$ is taken as $c^{kl}_i G^k_{is}$.
We then use similar techniques as in [23, 36] to estimate the motions separately for each part.
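A minimal numpy sketch of the weighted Kabsch step (our implementation, following [39]; in the pipeline the weights would be $c^{kl}_i G^k_{is}$):

```python
# Weighted Kabsch: best rigid (R, t) minimising sum_i w_i ||R x_i + t - y_i||^2.
import numpy as np

def weighted_kabsch(X, Y, w):
    """X, Y: (N,3) corresponding points; w: (N,) nonnegative weights."""
    w = w / w.sum()
    cx, cy = w @ X, w @ Y                        # weighted centroids
    H = (X - cx).T @ np.diag(w) @ (Y - cy)       # weighted cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))       # avoid reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    return R, cy - R @ cx

# Sanity check: recover a known rotation/translation from noiseless points.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))
theta = 0.7
R_gt = np.array([[np.cos(theta), -np.sin(theta), 0],
                 [np.sin(theta),  np.cos(theta), 0],
                 [0, 0, 1]])
t_gt = np.array([1.0, -2.0, 0.5])
Y = X @ R_gt.T + t_gt
R, t = weighted_kabsch(X, Y, rng.uniform(0.1, 1.0, 100))
assert np.allclose(R, R_gt, atol=1e-8) and np.allclose(t, t_gt, atol=1e-8)
```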
Examples from our training set for (a) articulated objects and (b) multiple solid objects. Different colors indicate rigidly moving parts.
Visualization of the dataset. Each row shows 8 different dynamic configurations of the same set of rigid objects. Annotated bounding boxes are parallel to the ground plane and reflect the objects' absolute poses.
The point clouds to register can have a large difference in poses, which is hard for the flow network to recover. This might lead to wrong results in the subsequent steps.
Inspired by point cloud registration works [83, 23], during test time we iterate our pipeline several times to gradually refine the correspondence and segmentation estimation.
In particular, we use the transformation $\T_s^{k^*}(\T_s^k)^{-1}$ estimated at iteration $t-1$ to transform all the points in all point sets belonging to part $s$ to the canonical pose of the $k^*$-th point cloud. Note that the choice of $k^*$ is arbitrary, and we choose $k^*=1$.
Then at iteration $t$, we feed the transformed point clouds to the flow network again to compute the residual flow, which is added back onto the flow predicted at iteration $t-1$ to form the input of the segmentation network.
The process works reciprocally: as differences in the poses of the point clouds are gradually minimized, the flow estimation becomes more accurate, leading to better segmentation and transformations.
In particular, during the first iteration, where pose differences are usually large, we treat the point clouds as if they were composed of only one rigid part, to globally align the shapes.
This will provide a good pose initialization for subsequent iterations.
§.§ Dataset
Training Data
To demonstrate the generalizability of our method across different semantic categories, we ensure the categories used for training, validation and test have no overlap.
For articulated objects, the categories we use are shown in <ref>.
For multiple solid objects, the categories are listed in <ref>.
Examples from our training set are visualized in <ref>.
A full visualization of the dataset with manual annotations is shown in <ref>.
We will make the scans publicly available.
§ ADDITIONAL RESULTS
§.§ Extended Ablations
In this subsection we provide more complete ablations extending subsec:exp:ablation.
A full listing of the baselines we compare is as follows:
* Ours (1 iter): The pipeline is iterated only once, without the global alignment step as described in <ref>.
* Ours (NS, NW): Same as the main paper, we directly feed $\flow^{kl}$ instead of $\hat{\flow}^{kl}$ to the motion network $\segnet$.
* Ours (S, NW): Same as the main paper, we set all weights of the permutation synchronization $w^{kl}=1$.
* Ours (UNZ): The unnormalized matrix $\ZW$ (<ref>) is used as input to the motion segmentation synchronization, i.e., the normalizing factors are set to $\sigma^{kl}=1$.
* Ours (4 iters): Full pipeline of our method, with 4 steps of iterative refinement.
Visual comparisons of the pairwise flow. To visualize the flow we warp the source point cloud and compare its similarity with the target point cloud. The color bar on the right shows the end-point error (EPE3D). `Ours (S, W)' represents the output of our method with the Weighted permutation Synchronization scheme.
We show comparisons of the final rigid flow error using the EPE3D metric on both the SAPIEN dataset and our dataset in <ref>, respectively.
Results indicate that all the components introduced in our algorithm, including iterative refinement, weighted synchronization, and the pre-factoring of the motion segmentation matrix, contribute to the improvement of accuracy under different scenarios.
Note that on our dataset, the performance of `Ours (UNZ)' is very similar to `Ours (4 iters)' because the motion segmentation accuracy is already high (tbl:ours-segm) due to the good quality of each individual $\Znet$ output, rendering normalization optional in practice.
We provide additional visual examples demonstrating the effectiveness of our weighted permutation synchronization in <ref>, where direct flow output fails due to large pose changes between the input clouds, and a naive unweighted synchronization still suffers from such failure because the influence of wrong correspondences is not eliminated.
For completeness we include per-category segmentation accuracy of articulated objects on SAPIEN [79] dataset in <ref>.
The variants of our method perform consistently better than other methods for nearly all categories, showing the robustness of our model for accurate multi-scan motion-based segmentation.
Empirical cumulative distribution function (ECDF) of rigid flow error (EPE3D) on SAPIEN [79] dataset. The higher the curve, the better the results.
Empirical cumulative distribution function (ECDF) of rigid flow error (EPE3D) on our dataset. The higher the curve, the better the results.
Quantitative demonstrations on complex scans. The first row is estimated using our trained articulated-objects model, while the last row is obtained by hierarchically applying our method to each segmented part until convergence. 172–175 indicate scan indices. Best viewed with 200% zoom in.
§.§ Qualitative Results
To provide the readers with a more intuitive understanding of our performance under different cases, we illustrate in <ref> the scenarios with co-existing articulated/solid objects and multiple cars in a scene from the Waymo Open dataset [63] (though the car category is within our training set).
Moreover, we show in <ref> our segmentation and registration results for each category in SAPIEN [79] dataset, covering most of the articulated objects in real world.
Due to the irregular random point sampling pattern and the natural motion ambiguity, in some examples our method may generate excessive rigid parts, which could be eliminated by a carefully designed post-processing step and is beyond the scope of this work.
We also show results from our dataset in <ref>.
Our method can generate robust object associations under challenging settings.
Qualitative results on SAPIEN dataset (1/3).
Qualitative results on SAPIEN dataset (2/3).
Qualitative results on SAPIEN dataset (3/3).
Qualitative results on our dataset.
|
# Network Automatic Pruning: Start NAP and Take a Nap
Wenyuan Zeng
University of Toronto
Uber ATG
<EMAIL_ADDRESS>Yuwen Xiong
University of Toronto
Uber ATG
<EMAIL_ADDRESS>Raquel Urtasun
University of Toronto
Uber ATG
<EMAIL_ADDRESS>
###### Abstract
Network pruning can significantly reduce the computation and memory footprint
of large neural networks. To achieve a good trade-off between model size and
performance, popular pruning techniques usually rely on hand-crafted
heuristics and require manually setting the compression ratio for each layer.
This process is typically time-consuming and requires expert knowledge to
achieve good results. In this paper, we propose NAP, a unified and automatic
pruning framework for both fine-grained and structured pruning. It can
identify unimportant components of a network and automatically decide appropriate
compression ratios for different layers, based on a theoretically sound
criterion. Towards this goal, NAP uses an efficient approximation of the
Hessian for evaluating the importance of components, based on a Kronecker-
factored Approximate Curvature method. Despite its simplicity, NAP
outperforms previous pruning methods by large margins. For fine-grained
pruning, NAP can compress AlexNet and VGG16 by 25x, and ResNet-50 by 6.7x
without loss in accuracy on ImageNet. For structured pruning (e.g. channel
pruning), it can reduce the FLOPs of VGG16 by 5.4x and ResNet-50 by 2.3x with only
1% accuracy drop. More importantly, this method is almost free from hyper-
parameter tuning and requires no expert knowledge. You can start NAP and then
take a nap!
## 1 Introduction
Deep neural networks have proven to be very successful in many artificial
intelligence tasks such as computer vision [11, 17, 32], natural language
processing [33], and robotic control [27]. In exchange for such success, modern
architectures are usually composed of many stacked layers parameterized with a
large number of learnable weights. As a consequence, modern architectures
require considerable memory storage and intensive computation. For example,
the VGG16 network [32] has 138 million parameters in total and requires 16
billion floating point operations (FLOPs) to finish one forward-pass. This is
problematic in applications that need to run on small embedded systems or that
require low latency to make decisions.
Fortunately, we can compress these large networks with little to no loss in
performance by exploiting the fact that many redundancies exist within their
parameters. Several directions have been explored in the compression community
such as quantization [2, 5, 30, 36, 40] and low-rank approximation [5, 15,
18]. Among these directions, pruning techniques [8, 9, 10, 13, 20, 35] have
been widely exploited due to their simplicity and efficacy. They aim at
removing redundant and unimportant parameters from a network, and thus save
storage and computation during inference. More specifically, pruning
techniques can be categorized into two branches. Fine-grained pruning [9]
removes individual parameters from a network, and thus saves storage space
and benefits embedded systems. Structured pruning [13, 25, 35], on the
other hand, typically removes whole channels (filters) from a network and thus
achieves inference speed-up with no need for any hardware specialization. In
this paper, we follow the line of pruning techniques and propose a unified
pruning approach that can be used to prune either individual parameters or
channels.
The main challenges in the pruning task are: 1) how to determine unimportant
parameters (channels) in a layer, 2) how to decide the compression ratio for
each layer in a network. Different pruning methods mainly differ on how they
tackle these two problems. In the fine-grained domain, popular pruning methods
typically rely on heuristics, such as weight magnitude to determine the
importance of a parameter [8, 9]. They manually tune the compression ratios
for different layers, and then prune the smallest number of parameters from
each layer according to the compression ratios. However, a small magnitude
does not necessarily mean an unimportant parameter [10, 20]. Furthermore,
because different layers (and different network architectures) usually have
different sensitivities for compression, manually tuning the compression
ratios is not only time-consuming, but also may lead to sub-optimal results.
In the structured pruning domain, importance of a channel is typically
evaluated by optimizing the reconstruction error of the feature-maps before
and after pruning this channel [13, 24, 25]. However, these methods still
require manually setting the compression ratio for each layer. Some other
efforts are explored to automatically determine these compression ratios for
different layers, such as using deep reinforcement learning [12, 22] or
learning sparse structure networks [23]. However, at the expense of avoiding
per-layer compression ratios, these methods introduce extra hyper-parameters.
In addition, structured pruning methods usually rely on the fact that there
are only a few thousand channels in a neural network, so that the optimization
problem is tractable. Therefore, it is not straightforward to adapt these
structured pruning techniques to the fine-grained scenario, where the search
space (the number of parameters) is so large that the optimization becomes
intractable.
In this paper, we propose NAP, a unified pruning method for both fine-grained
pruning and structured pruning. Our method can automatically decide the
compression ratios for different layers, and can identify unimportant
parameters (channels) based on their effects on the loss function. Compared
with previous methods, NAP is much easier to use (almost no hyper-parameter
tuning), and shows better performance. To this end, we estimate the importance
of parameters using the Taylor expansion of the loss function, similar to
[10], and remove unimportant parameters accordingly. However, this involves
computing the Hessian matrix, which is prohibitively large to form and invert for
modern neural networks. Here, we first notice that the Fisher Information
matrix is close to the Hessian under certain conditions. We then use a
Kronecker-factored Approximate Curvature (K-FAC) [26] method to efficiently
estimate the Fisher matrix and use it as a proxy for the Hessian. In our
experiments, the overhead of estimating this Hessian approximation is small
compared with the pre-training and fine-tuning time, so the method is
efficient in practice. Importantly, the importance criterion is calibrated across
layers, allowing us to automatically get the per-layer compression ratios. We
demonstrate the effectiveness of our method on several benchmark models and
datasets, outperforming previous pruning methods by large margins for both
fine-grained pruning and structured pruning.
## 2 Related work
Figure 1: NAP pipeline. Step 1, Updating Statistics: NAP takes $T$ steps of
normal forward-backward pass to estimate statistics of activation
$\mathbf{a}_{l-1}$ and derivative $\partial\mathbf{s}_{l}$ as
$\mathbb{E}\left[\mathbf{a}_{l-1}\mathbf{a}_{l-1}^{T}\right]$ and
$\mathbb{E}\left[\partial\mathbf{s}_{l}\partial\mathbf{s}_{l}^{T}\right]$
respectively, which are much smaller than the full Hessian. Step 2, Computing
importance: NAP then inverts and diagonalizes
$\mathbb{E}\left[\mathbf{a}_{l-1}\mathbf{a}_{l-1}^{T}\right]$ and
$\mathbb{E}\left[\partial\mathbf{s}_{l}\partial\mathbf{s}_{l}^{T}\right]$,
takes their element-wise product to get $h_{i}$, and computes the importance
$\Delta\mathcal{L}_{i}=w_{i}^{2}/2h_{i}$ for each parameter (see Eq. 6). Step
3, Pruning: NAP finally prunes the fraction $p$ of parameters with the lowest
$\Delta\mathcal{L}_{i}$. These steps can be repeated multiple times. More
details of NAP can be found in Section 3 and Algorithm 1.
##### Model Compression
Model compression aims to compress a large network into a smaller one while
maintaining good performance. There are several popular families of
compression approaches, including pruning, quantization, and low-rank
approximation. Quantization [2, 5, 30, 36, 40] aims to use fewer bits to
encode each parameter, e.g. binary neural network. Low-rank approximation [3,
15, 18, 29] approximates network parameters by low-rank representations,
saving storage and speeding up the network. Pruning, being one of the most
popular methods due to its simplicity and effectiveness, aims to remove
unimportant parameters from a large network. These techniques can further be
combined with one another to achieve better compression ratios [8].
##### Pruning
Han _et al_. [9] first compress modern neural networks with a magnitude-based
pruning method [7, 21, 35]. They iteratively prune parameters with smaller
values and show decent compression results. However, these fine-grained
pruning methods prune individual parameters, leaving layers with irregular
sparsity, which makes it hard for common hardware to achieve real
acceleration. To address this issue, structured pruning proposes to prune
whole channels. Since the search space is much smaller than in fine-grained
pruning, the structured pruning problem is typically formulated as a
minimization of the reconstruction error of the feature-map before and after
pruning, which can be solved approximately via
LASSO [13] or a greedy algorithm [25]. However, these methods require manually
setting compression ratios for different layers, which is time-consuming and
not easy to tune. Some other methods focus on automatically setting these
compression ratios by, for instance, learning the importance of different
channels through back-propagation [23] or deep reinforcement learning [12,
22]. However, these methods bring new hyper-parameters to tune. In addition,
most of these methods can only handle either fine-grained or structured cases.
##### OBS and K-FAC
Although magnitude-based pruning methods are simple and show good results,
they implicitly assume that parameters with smaller values are less important
than others. Such an assumption doesn’t generally hold as noted in [10, 20].
Therefore, Optimal Brain Surgeon (OBS) [10] proposes using Taylor expansion to
estimate the importance of a parameter, and shows good performance on small
networks. Since our work is inspired by OBS, we briefly review the idea here.
Given a well-trained neural network with parameters $\mathbf{W}$, the local
surface of the training loss $\mathcal{L}$ can be characterized by its Taylor
expansion:
$\delta\mathcal{L}=\frac{1}{2}\delta\mathbf{W}^{T}\mathbf{H}\delta\mathbf{W},$
where $\delta\mathbf{W}$ is a small change in the parameter values (e.g.,
setting some parameters to zero) and $\mathbf{H}$ is the Hessian matrix
defined as $\partial^{2}\mathcal{L}/\partial\mathbf{W}^{2}$. Note that at
convergence the first-order term vanishes and higher-order terms can be
neglected, so only the second-order term remains. OBS aims to find a
parameter $w_{q}$ such that when it is removed, the change in the training
loss $\delta\mathcal{L}$ is minimized:
$\min\limits_{q}\left(\min\limits_{\delta\mathbf{W}}\left(\frac{1}{2}\delta\mathbf{W}^{T}\mathbf{H}\delta\mathbf{W}\right)\right),\quad\text{s.t.}\quad\mathbf{e}_{q}^{T}\delta\mathbf{W}+w_{q}=0,$ (1)
where $\mathbf{e}_{q}^{T}$ is the one-hot vector in the parameter space
corresponding to $w_{q}$. The inner minimization problem can be solved by
Lagrangian multipliers. Unfortunately, since the exact solution involves the
inverse of the Hessian matrix, the original OBS is intractable for modern
neural networks that typically contain millions of parameters. Recently, layerwise
OBS [4] uses the layer-wise reconstruction loss instead of the final loss, and
ends up with a smaller Hessian to evaluate. However, similar to magnitude-
based methods, layerwise OBS has to tune compression ratios for different
layers.
In our work, we follow the line of OBS and extend it to modern neural networks
in both fine-grained and structured settings. We utilize K-FAC to approximate
the Fisher matrix, which in turn approximates the exact Hessian matrix. K-FAC
[6, 26] provides an efficient way to estimate and invert an approximation of
the Fisher matrix of a neural network. It first approximates the Fisher by a
block diagonal matrix, and then decomposes each block by two much smaller
matrices via Kronecker-product. This approximation and its variants [1, 37,
38] have shown success in the field of optimization.
## 3 NAP: Network Automatic Pruning
The essence of pruning is to remove unimportant parameters, i.e. parameters
that affect the model performance less but reduce the model complexity more.
Therefore, we formulate pruning as an optimization problem considering both
loss function and model complexity. We first introduce this optimization
problem and its ideal solution in Section 3.1. We then show how to make
approximations to this ideal solution for practical considerations in Section
3.2. Finally, we illustrate how our method can extend to channel pruning for
accelerating modern neural networks in Section 3.3. Our overall method is
summarized in Algorithm 1.
Throughout this section, we consider a neural network with $L$ layers. The
input and output of layer $l$ are $\mathbf{a}_{l-1}$ and $\mathbf{s}_{l}$,
respectively. To indicate whether a parameter is retained or pruned, we
introduce a mask variable
$\mathbf{\Gamma}_{l}$, and thus the forward pass can be written as
$\mathbf{s}_{l}=\left(\mathbf{W}_{l}\odot\mathbf{\Gamma}_{l}\right)\mathbf{a}_{l-1},\quad\mathbf{a}_{l}=\text{Relu}\left(\mathbf{s}_{l}\right),$
(2)
where $\mathbf{W}_{l}$ is the parameter matrix and $\odot$ denotes the
element-wise product. (A CNN can be written in a similar way if we expand the
parameter matrix with duplicated weights.)
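As a concrete illustration, the masked forward pass in Eq. (2) can be sketched in a few lines of NumPy; the layer sizes and values below are illustrative, not taken from the paper.

```python
# A minimal sketch of the masked forward pass in Eq. (2).
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def masked_forward(a_prev, W, Gamma):
    """Compute s_l = (W_l * Gamma_l) a_{l-1} and a_l = ReLU(s_l)."""
    s = (W * Gamma) @ a_prev   # element-wise mask, then matrix-vector product
    return s, relu(s)

W = np.random.randn(4, 3)      # parameter matrix W_l (illustrative shape)
Gamma = np.ones_like(W)        # mask Gamma_l: 1 = retained, 0 = pruned
Gamma[0, 1] = 0.0              # prune one parameter
a_prev = np.random.randn(3)
s, a = masked_forward(a_prev, W, Gamma)
```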
### 3.1 Problem Formulation
Our objective is to prune as much as possible while maintaining good model
performance, e.g. high accuracy in classification. Similar to Minimum
Description Length (MDL, [31]), we formulate our objective $\Psi$ as
$\Psi=\min\limits_{\mathbf{\Theta}}\mathcal{L}(data|\mathbf{\Theta})+\mathcal{D}(\mathbf{\Theta}).$
(3)
The first term in Eq. (3) is the training loss $\mathcal{L}$, and the second
term is a measure of the model complexity. Such measurement can take different
forms, such as 1) number of parameters, which leads to a storage-friendly
small network, or 2) number of FLOPs, which leads to a faster network. If we
use number of parameters as the measurement (we’ll talk about using number of
FLOPs in Section 3.3), the objective can then be written as
$\Psi=\min\limits_{\mathbf{W},\mathbf{\Gamma}}\left[\mathcal{L}\left(y|x,\mathbf{W}\odot\mathbf{\Gamma}\right)+\lambda\sum_{l=1}^{L}\left(\sum_{(i,j)}\gamma_{l}^{(i,j)}\right)\right],$
(4)
where $\gamma_{l}^{(i,j)}$ is the mask of the $(i,j)^{th}$ parameter in the
$l^{th}$ layer, and thus the second term in Eq. 4 is the number of remaining
parameters in this network. Therefore, solving this minimization problem will
naturally prune a network in a fine-grained manner. $\lambda$ is introduced as
a relative weight which characterizes the trade-off between sparsity and
model performance, and controls the final size of the compressed model. (We
explain below how to set the value of $\lambda$ in practice.)
Unfortunately, this objective cannot be directly optimized with vanilla SGD as
$\gamma_{l}^{(i,j)}$ is constrained to be binary. Instead, we consider an
easier case where we first pre-train the model to a local minimum of
$\mathcal{L}$, then update one $\gamma_{l}^{(i,j)}$ from 1 to 0 if it
decreases our objective $\Psi$. Since updating the mask
$\gamma_{l}^{(i,j)}$ is equivalent to pruning the parameter $w_{l}^{(i,j)}$,
we can repeat this procedure until the objective $\Psi$ converges, resulting
in a smaller model.
To evaluate the effect of pruning a parameter upon $\Psi$, we use the Taylor
expansion of Eq. 4 and omit the first-order term similar to OBS [10], because
we assume we start from a local minimum of $\mathcal{L}$. Suppose we want to
prune one parameter $w^{(i,j)}_{l}$; to evaluate the change in $\Psi$, we need
to solve
$\Delta\Psi_{q}=\min\limits_{\mathbf{\delta W}}\left(\frac{1}{2}\mathbf{\delta
W^{T}H\delta W}-\lambda\right),\quad\text{s.t.}\quad\delta w_{l}^{(i,j)}+w_{l}^{(i,j)}=0.$ (5)
The subscript $q$ is simply an index number associated with parameter
$w_{l}^{(i,j)}$. Essentially, we want to perturb the network in such a way
($\delta\mathbf{W}$) that updates $w_{l}^{(i,j)}$ from its original value to
$0$ and minimize $\Delta\Psi_{q}$ at the same time. This optimization problem
in Eq. 5 can be solved with Lagrangian multipliers, resulting in
$\Delta\Psi_{q}=\Delta\mathcal{L}_{q}-\lambda$, and
$\Delta\mathcal{L}_{q}=\frac{1}{2}\frac{\left(w_{l}^{(i,j)}\right)^{2}}{\left[\mathbf{H}^{-1}\right]^{(q,q)}},\quad\mathbf{\delta
W}^{*}=\left[-\frac{w_{l}^{(i,j)}}{\left[\mathbf{H}^{-1}\right]^{(q,q)}}\mathbf{H}^{-1}\right]^{(\cdot,q)}.$
(6)
Therefore, rather than only pruning the parameter $w_{l}^{(i,j)}$, we also
update the other parameters simultaneously according to $\mathbf{\delta W}^{*}$,
which yields a smaller $\Delta\Psi_{q}$. In Eq. 6, superscript $(q,q)$
denotes the element at the $q^{th}$ row and $q^{th}$ column, $(\cdot,q)$
denotes the $q^{th}$ column of that matrix, and $q$ is the index of the
element in $\mathbf{H}$ associated with $w_{l}^{(i,j)}$. The estimation of
$\mathbf{H}^{-1}$ will be introduced in Section 3.2.
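To make Eq. (6) concrete, the following toy NumPy sketch computes the importance score and the optimal perturbation using a small dense Hessian purely for illustration; in NAP the required $\left[\mathbf{H}^{-1}\right]$ entries come from the K-FAC approximation of Section 3.2.

```python
# A toy sketch of Eq. (6): importance dL_q and perturbation dW* for one index q.
import numpy as np

def obs_importance_and_update(w, H_inv, q):
    """Return dL_q = w_q^2 / (2 [H^{-1}]_{qq}) and dW* (Eq. 6)."""
    delta_L = 0.5 * w[q] ** 2 / H_inv[q, q]
    delta_W = -(w[q] / H_inv[q, q]) * H_inv[:, q]  # scaled q-th column of H^{-1}
    return delta_L, delta_W

rng = np.random.default_rng(0)
A = rng.standard_normal((5, 5))
H = A @ A.T + 1e-2 * np.eye(5)        # symmetric positive-definite toy Hessian
H_inv = np.linalg.inv(H)
w = rng.standard_normal(5)
dL, dW = obs_importance_and_update(w, H_inv, q=2)
assert np.isclose((w + dW)[2], 0.0)   # the perturbation zeroes out parameter q
```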
Given the solution of Eq. 5 in Eq. 6, we can then evaluate $\Delta\Psi$ for
all parameters, remove the smallest one, apply $\delta\mathbf{W}^{*}$ to other
parameters, and repeat this procedure until the objective $\Psi$ converges.
However, this is intractable for networks with millions of parameters. For
practical considerations, we simultaneously remove multiple parameters with
low $\Delta\Psi_{q}$: we first evaluate $\Delta\Psi_{q}$ for all parameters,
remove those with $\Delta\Psi_{q}<0$, and update the rest with the
$\delta\mathbf{W^{*}}$ of the removed parameters.
Recall that our ultimate goal is to achieve a small model. To do so, one can
set an appropriate $\lambda$ value at the very beginning and repeat the
aforementioned procedure for several iterations (such an iterative manner is
commonly adopted in pruning methods), until $\Psi$ no longer decreases. However,
the value of $\lambda$ is critical for finding a sweet-point between model
size and model performance, and it’s generally not easy to set an appropriate
$\lambda$ beforehand. Therefore, we instead dynamically adjust the value of
$\lambda$ as pruning proceeds. At each iteration, we set $\lambda$ such that
the aforementioned pruning operation will give us a smaller model with $(1-p)$
times the original size. This is equivalent to setting $\lambda$ to the
$p^{th}$ percentile of $\Delta\mathcal{L}_{q}$. By iteratively doing this, we
can monitor the model size and model performance, and thus easily decide when
to stop pruning. More importantly, this also avoids tuning the hyper-parameter
$\lambda$, and leaves us only with $p$. As we'll show in our ablation study,
our method is robust to the value of $p$, and thus it is easy to apply.
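A minimal sketch of this dynamic-$\lambda$ rule, assuming `importances` holds $\Delta\mathcal{L}_{q}$ for all currently unpruned parameters:

```python
# lambda is the p-th percentile of dL_q, so a fraction p of parameters falls below it.
import numpy as np

def pruning_threshold(importances, p):
    return np.percentile(importances, 100.0 * p)

importances = np.random.rand(1000)            # illustrative dL_q values
lam = pruning_threshold(importances, p=0.01)  # e.g. prune 1% per iteration
prune_mask = importances < lam                # parameters with dPsi_q < 0
```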
After pruning, we fine-tune the remaining parameters using SGD to obtain
better model performance.
### 3.2 Approximating Hessian using Fisher
Performing the pruning operation as in Eq. (6) involves estimating and
inverting the Hessian matrix, which is intractable for modern neural networks
that contain millions of parameters. Therefore, we propose an approximation of
the Hessian to efficiently calculate Eq. (6). We first employ the Fisher
Information matrix as an approximation of the Hessian and further use a
Kronecker-factored Approximate Curvature (K-FAC) method to approximate the
Fisher matrix. The first approximation comes from the fact that if the
training objective is the negative log-likelihood, the Hessian matrix and
Fisher matrix are the expectations of second-order derivatives under the data
distribution and model distribution respectively. Since modern neural networks
usually have strong model capacities, we expect those two distributions are
close for a well-trained model. The second approximation (K-FAC) is
demonstrated to be effective in optimization tasks [6, 26], and will further
help us to calculate Eq. (6) efficiently.
Given a neural network with stacked fully-connected layers (details for
convolutional layers can be found in [6]), the Fisher Information matrix is
defined as
$\mathbf{F}=\mathbb{E}\left[\left(\nabla_{\overrightarrow{\mathbf{W}}}\mathcal{L}\right)\left(\nabla_{\overrightarrow{\mathbf{W}}}\mathcal{L}\right)^{T}\right],$
(7)
where $\overrightarrow{\mathbf{W}}$ is the vectorization of all parameters
$\mathbf{W}$ in this network (unless specified otherwise, the expectation is
taken with respect to the model distribution). Thus, $\mathbf{F}$ has the same
dimension as the number of parameters. The K-FAC method approximates this Fisher
matrix $\mathbf{F}$ using a block-diagonal matrix, and estimates each block by
the Kronecker-product of two much smaller matrices, which are the second-order
statistics of inputs and derivatives. To illustrate the idea of K-FAC, we
first re-write $\mathbf{F}$ in a block-wise manner by partitioning rows and
columns of $\mathbf{F}$ if they correspond to parameters within the same
layer. The $(i,j)$ block is then
$\mathbf{F}_{ij}=\mathbb{E}\left[\left(\nabla_{\overrightarrow{\mathbf{W}_{i}}}\mathcal{L}\right)\left(\nabla_{\overrightarrow{\mathbf{W}_{j}}}\mathcal{L}\right)^{T}\right].$
(8)
As noted by [26], the off-diagonal term $\mathbf{F}_{ij}$ is generally much
smaller than diagonal term $\mathbf{F}_{ii}$. Therefore, $\mathbf{F}$ can be
approximated by a block-diagonal matrix and the $l^{th}$ diagonal block is
$\mathbf{F}_{ll}$.
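A toy NumPy sketch of this block-diagonal structure: the exact per-layer blocks $\mathbf{F}_{ll}$ are built from sampled per-layer gradients, and the off-diagonal blocks are simply dropped. The gradients here are random placeholders; in NAP they come from the model distribution.

```python
# Build exact per-layer Fisher blocks F_ll = E[g_l g_l^T] from sampled
# per-layer gradient vectors; off-diagonal blocks F_ij (i != j) are dropped.
import numpy as np

grads = {"layer1": np.random.randn(100, 12),   # 100 samples, 12 params (toy)
         "layer2": np.random.randn(100, 20)}
F_blocks = {name: g.T @ g / g.shape[0] for name, g in grads.items()}
```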
Using back-propagation, the gradients of layer $l$ can be expressed as
$\nabla_{\overrightarrow{\mathbf{W}_{l}}}\mathcal{L}=\text{vec}\left\\{\left(\nabla_{\mathbf{s}_{l}}\mathcal{L}\right)\left(\mathbf{a}_{l-1}\right)^{T}\right\\}$, where $\text{vec}\\{\cdot\\}$
denotes vectorization. The $l^{th}$ diagonal block $\mathbf{F}_{ll}$ can then
be written as
$\displaystyle\mathbf{F}_{ll}$
$\displaystyle=\mathbb{E}\left[\text{vec}\left\\{\left(\nabla_{\mathbf{s}_{l}}\mathcal{L}\right)\left(\mathbf{a}_{l-1}\right)^{T}\right\\}\text{vec}\left\\{\left(\nabla_{\mathbf{s}_{l}}\mathcal{L}\right)\left(\mathbf{a}_{l-1}\right)^{T}\right\\}^{T}\right]$
(9)
$\displaystyle=\mathbb{E}\left[\mathbf{a}_{l-1}\mathbf{a}_{l-1}^{T}\otimes(\nabla_{\mathbf{s}_{l}}\mathcal{L})(\nabla_{\mathbf{s}_{l}}\mathcal{L})^{T}\right]$
(10)
$\displaystyle\approx\mathbb{E}\left[\mathbf{a}_{l-1}\mathbf{a}_{l-1}^{T}\right]\otimes\mathbb{E}\left[(\nabla_{\mathbf{s}_{l}}\mathcal{L})(\nabla_{\mathbf{s}_{l}}\mathcal{L})^{T}\right],$
(11)
where $\otimes$ denotes the Kronecker product. The second equality comes from
the properties of the Kronecker product:
$\text{vec}\left\\{\mathbf{uv}^{T}\right\\}=\mathbf{v}\otimes\mathbf{u}$ and
$(\mathbf{A}\otimes\mathbf{B})(\mathbf{C}\otimes\mathbf{D})=(\mathbf{A}\mathbf{C})\otimes(\mathbf{B}\mathbf{D})$.
The last approximation is to further accelerate the computation, and has shown
to be effective in optimization domain [26]. Therefore, $\mathbf{F}$ (and thus
$\mathbf{H}$) can be approximated by several much smaller matrices
$\displaystyle\mathbf{A}_{l-1}=\mathbb{E}\left[\mathbf{a}_{l-1}\mathbf{a}_{l-1}^{T}\right],\quad\mathbf{DS}_{l}=\mathbb{E}\left[(\nabla_{\mathbf{s}_{l}}\mathcal{L})(\nabla_{\mathbf{s}_{l}}\mathcal{L})^{T}\right],$
(12)
$\displaystyle\mathbf{F}_{ll}=\mathbf{A}_{l-1}\otimes\mathbf{DS}_{l}.$
(13)
For a typical modern neural network such as AlexNet, the original Hessian
matrix is a $61\text{M}\times 61\text{M}$ matrix, while $\mathbf{A}_{l-1}$ and
$\mathbf{DS}_{l}$ of the largest fully-connected layer have sizes of only
$9216\times 9216$ and $4096\times 4096$ respectively, and we only need to
estimate one $\mathbf{A}_{l-1}$ and one $\mathbf{DS}_{l}$ for each layer. We
use an exponential moving average to estimate the expectations in
$\mathbf{A}_{l-1}$ and $\mathbf{DS}_{l}$, with a decay factor of 0.95 in all
experiments. This adds only a small overhead to the normal forward-backward
pass. Furthermore, the inverse $\mathbf{F}^{-1}$ and the matrix-vector product
$\mathbf{F}^{-1}\mathbf{h}$ can also be efficiently calculated by leveraging
the block-diagonal structure and the properties of the Kronecker product:
$(\mathbf{A}\otimes\mathbf{B})^{-1}=\mathbf{A}^{-1}\otimes\mathbf{B}^{-1}$ and
$(\mathbf{A}\otimes\mathbf{B})\text{vec}\\{\mathbf{X}\\}=\text{vec}\\{\mathbf{BXA}^{T}\\}$.
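The following NumPy sketch puts these pieces together for a single fully-connected layer: it estimates the two factors, inverts them with a damping term (mentioned in Appendix D; the value here is illustrative), and uses the Kronecker identities to obtain the diagonal of $\mathbf{F}^{-1}$ and a matrix-vector product without forming the full matrix. The shapes and the plain batch-mean estimator are illustrative; the paper uses an exponential moving average with decay 0.95.

```python
# A sketch of the K-FAC factors and the Kronecker identities for one FC layer.
import numpy as np

in_dim, out_dim, batch = 8, 6, 32
a = np.random.randn(batch, in_dim)     # activations a_{l-1}
ds = np.random.randn(batch, out_dim)   # derivatives of L w.r.t. s_l

A = a.T @ a / batch                    # E[a a^T]    (in_dim x in_dim)
DS = ds.T @ ds / batch                 # E[ds ds^T]  (out_dim x out_dim)

# Invert the small factors instead of A (x) DS, via (A (x) B)^{-1} = A^{-1} (x) B^{-1}.
damping = 1e-3                         # illustrative damping value
A_inv = np.linalg.inv(A + damping * np.eye(in_dim))
DS_inv = np.linalg.inv(DS + damping * np.eye(out_dim))

# diag(F^{-1}) for this block is the outer product of the two inverse diagonals;
# these are the [H^{-1}]^{(q,q)} entries needed in Eq. (6).
diag_F_inv = np.outer(np.diag(A_inv), np.diag(DS_inv))  # (in_dim x out_dim)

# Matrix-vector product F^{-1} vec{X} via (A (x) B) vec{X} = vec{B X A^T},
# where X has the shape of the weight matrix (out_dim x in_dim).
X = np.random.randn(out_dim, in_dim)
F_inv_x = DS_inv @ X @ A_inv.T
```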
With Eq. 13, we can now compute $\Delta\mathcal{L}_{q}$ in Eq. 6 efficiently.
Ideally, $\Delta\mathcal{L}_{q}$ should capture the effect of pruning upon
training loss, and thus should be calibrated across all layers. For example,
given one neuron, the sum of $\Delta\mathcal{L}_{q}$ for all incoming
parameters in the previous layer should be close to the sum for all outgoing
parameters, because deleting all incoming parameters is the same as deleting
all outgoing parameters. However, we empirically find these sums sometimes
differ by a scale factor from layer to layer. This might be due to the block
diagonal approximation decorrelating some cross-layer influences, which is an
interesting topic for future work. In this paper, we employ a simple yet
effective strategy: after calculating $\Delta\mathcal{L}_{q}$, we normalize it
within the same layer:
$\tilde{\Delta\mathcal{L}_{q}}=\Delta\mathcal{L}_{q}/\sum_{q\in\text{layer
}l}\Delta\mathcal{L}_{q},$ and change our importance measure $\Delta\Psi_{q}$
accordingly. $\delta\mathbf{W^{*}}$ and other steps remain unchanged.
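The per-layer normalization itself is a one-liner; the sketch below applies it to a hypothetical dictionary of per-layer importance arrays.

```python
# Normalized importance: divide each dL_q by the sum over its own layer.
import numpy as np

def normalize_within_layer(delta_L_layer):
    return delta_L_layer / delta_L_layer.sum()

per_layer = {"conv1": np.random.rand(100), "fc1": np.random.rand(500)}  # toy data
normalized = {name: normalize_within_layer(v) for name, v in per_layer.items()}
```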
To summarize, our method first takes $T$ steps of normal forward-backward pass
on training dataset, and estimates statistics $\mathbf{A}_{l-1}$ and
$\mathbf{DS}_{l}$ for each layer. It then combines Eq. 6 and Eq. 13 to compute
$\Delta\mathcal{L}_{q}$ ($\tilde{\Delta\mathcal{L}_{q}}$), removes smallest
$p$ fraction parameters, updates other parameters accordingly, and then does
another iteration. We show our algorithm in Algorithm 1.
Algorithm 1 NAP: Network Automatic Pruning
Require: $\mathbf{W}$, $\mathbf{\Gamma}$, pruning fraction $p$, number of steps $T$ for estimating $\mathbf{A}_{l-1}$ and $\mathbf{DS}_{l}$
1: Pre-train the network
2: for $t=0,\cdots,T$ do
3: Update $\mathbf{A}_{l-1}\leftarrow 0.95\times\mathbf{A}_{l-1}+0.05\times\mathbf{a}_{l-1}\mathbf{a}^{T}_{l-1}$
4: Update $\mathbf{DS}_{l}\leftarrow 0.95\times\mathbf{DS}_{l}+0.05\times(\nabla_{\mathbf{s}_{l}}\mathcal{L})(\nabla_{\mathbf{s}_{l}}\mathcal{L})^{T}$
5: end for
6: Compute the Fisher matrix $\mathbf{F}$ by Eq. (13).
7: Compute the importance measure $\Delta\mathcal{L}_{q}$ for each parameter by Eq. (6), and normalize within each layer to get $\tilde{\Delta\mathcal{L}_{q}}$.
8: Compute the $p^{th}$ percentile of $\tilde{\Delta\mathcal{L}_{q}}$ from all layers as $\lambda$.
9: Update the mask $\gamma_{l}^{(i,j)}$ to $0$ if its corresponding $\tilde{\Delta\mathcal{L}_{q}}$ is smaller than $\lambda$.
10: Compute $\delta\mathbf{W}^{*}$ by Eq. (6) and update $\mathbf{W}\leftarrow\mathbf{W}+\delta\mathbf{W}^{*}$.
11: Fine-tune the network using SGD.
### 3.3 Channel-wise Pruning
In Eq. 4, we use the number of parameters as the model complexity measurement.
This leads us to a small model with irregular sparsity, i.e. some individual
parameters are pruned away. However, this does not necessarily imply faster
inference in practice. First, a small number of parameters does not
necessarily mean a small number of FLOPs: a fully-connected layer usually has
many more parameters than a convolution layer, but far fewer FLOPs. Second,
irregular sparsity typically needs specialized hardware/software design for
acceleration. To achieve faster inference, we now extend our method to
consider the number of FLOPs and channel-wise pruning.
Channel-wise pruning prunes a whole channel from a convolution layer, and thus
directly achieves higher forward and backward speed. To estimate the
importance of a channel $C$ in this network, we need to evaluate $\Delta\Psi$
in Eq. 4, which includes the effect on loss $\Delta\mathcal{L}_{C}$ and the
effect on model complexity. Note that pruning a channel means pruning all the
parameters in that channel. Therefore, the change in training loss
$\Delta\mathcal{L}_{C}$ can be approximated by the sum of
$\tilde{\Delta\mathcal{L}_{q}}$ of all parameters in this channel. The second
term in Eq. 4 is how much we reduce the model complexity. Here, we use number
of FLOPs as a measurement, and the second term becomes how many FLOPs are
reduced by pruning this channel. Similar to the fine-grained pruning setting,
we set $\lambda$ to the $p^{th}$ percentile of
$\Delta\mathcal{L}_{C}$/$\Delta\text{FLOPs}$ over all channels in the network,
and remove the channels whose scores are smaller than $\lambda$. This gives us
a 'thinner' network with fewer FLOPs.
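A sketch of this channel scoring and thresholding, where the per-channel parameter importances and the FLOPs saved per channel are assumed to be precomputed:

```python
# Score each channel by the sum of its parameters' normalized dL_q divided by
# the FLOPs saved when removing it, then threshold at the p-th percentile.
import numpy as np

def channel_scores(delta_L_per_channel, flops_saved):
    return np.array([dl.sum() / df
                     for dl, df in zip(delta_L_per_channel, flops_saved)])

scores = channel_scores(
    delta_L_per_channel=[np.random.rand(64) for _ in range(8)],  # 8 toy channels
    flops_saved=np.full(8, 1.0e6),                               # illustrative FLOPs
)
lam = np.percentile(scores, 1)          # p-th percentile, as in Section 3.1
to_prune = np.flatnonzero(scores < lam)
```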
## 4 Experiments
As illustrated in Section 3, NAP can handle both fine-grained pruning and
structured pruning. Therefore, we demonstrate the effectiveness of NAP in both
of these two settings. We’ll first show in Section 4.1 the results of
structured pruning, and then show our method can also push the limit of fine-
grained pruning in Section 4.2. For structured pruning, we use speed-up ratio
based on FLOPs as our metric and also report the real inference acceleration
tested on GPU. For fine-grained pruning, we use compression ratio based on
number of parameters.
To show the generality of our method, we conduct experiments on six popular
deep network architectures, from shallow to deep, and a number of benchmark
datasets. This includes LeNet-300-100 and LeNet-5 [19] on MNIST, CifarNet [16]
on Cifar-10, and AlexNet [17], VGG16 [32] and ResNet-50 [11] on ImageNet
ILSVRC-2012. These architectures contain both fully-connected layers and
convolution layers, with different sizes (from 267K to 138M parameters) and
different depths (from 3 to 50 layers). In addition, we also test the
generalization of our compressed model on detection benchmarks. More
implementation details and analysis of NAP can be found in the appendix.
Network | Method | $\Delta$ Top1 | $\Delta$ Top5 | Speed up (x)
---|---|---|---|---
VGG16 | TE [28] | - | 4.8 | 3.9
 | FP [21] | - | 8.6 | $\approx 4$
 | Asym [39] | - | 3.8 | $\approx 4$
 | SSS [14] | 3.9 | 2.6 | 4.0
 | CP [13] | 2.7 | 1.7 | 4.5
 | ThiNet [25] | 2.4 | 1.2 | 4.5
 | RNP [22] | - | 3.6 | $\approx 5$
 | AMC [12] | - | 1.4 | $\approx 5$
 | NAP | 2.3 | 1.2 | 5.4
ResNet50 | SSS [14] | 4.3 | 2.1 | 1.8
 | CP [13] | 3.3 | 1.4 | $\approx 2$
 | ThiNet [25] | 4.3 | 2.2 | 2.3
 | NAP | 2.0 | 1.1 | 2.3
Table 1: Increase of classification error on ImageNet with channel-wise
pruning (lower is better). As a reference, our pre-trained VGG16 has an
accuracy of 71.0% / 89.8% and ResNet50 of 75.0% / 92.2% (Top1 / Top5,
single-crop).
### 4.1 Structured Pruning for Model Acceleration
Following our discussion in Section 3.3, we apply NAP to conduct channel-wise
pruning. This is an important application of compression methods, as channel-
wise pruning can reduce FLOPs and directly accelerate models on common
hardware, e.g. GPUs. Here, we follow previous work and demonstrate the
effectiveness of our method on the ImageNet dataset with VGG16 and ResNet50.
We compare our performance with previous channel (filter) pruning methods in
Table 1. These methods are sorted by their FLOPs speed-up ratio, and we use
notation $\approx$ if the exact FLOPs number is not reported in the original
paper.
#### 4.1.1 VGG16 Acceleration
VGG16 has computation-intensive convolutional layers, with large feature maps
and many channels. Therefore, it potentially has substantial redundancy in its
channels, and acceleration is possible. To compress the network, we
iteratively update statistics for 0.2 epoch and prune 1% of remaining channels
until a favorable model size is reached. Note that more steps for updating
statistics and smaller pruning ratio are possible and may help the final
performance (see Appendix B), but will consume more time for pruning. Similar
to ThiNet [25], we fine-tune the final compressed model with learning rate
varying from 1e-3 to 1e-4, and stop at 30 epochs.
As shown in Table 1, NAP outperforms all those recent channel-pruning
methods, with the highest speed-up ratio and the smallest performance drop. In
addition, we notice that: 1) NAP only requires the pruning ratio $p$ and
updating steps $T$ (or fine-tuning steps) in one pruning iteration. As we’ll
show in Section 4.3, NAP is robust to the values of $p$ and $T$, and thus our
method is easy to use in real applications. On the contrary, in addition to
$p$ and $T$, most of the previous methods in Table 1 either require manually
tuned per-layer compression ratios [21, 39, 13, 25, 34], or introduce new
hyper-parameters [22, 12]. 2) Similar to our method, TE [28] approximates the
effect of pruning one parameter using a Taylor expansion, but it uses the
variance of the gradient rather than the Hessian to evaluate the expansion,
and does not perform as well as ours. This empirically suggests that
second-order information is necessary for a Taylor expansion to estimate the
parameters' importance. 3) We also test on GPU the absolute acceleration of
inference time. After compression, our model runs 2.74 times faster than the
original VGG16 (tested with a 1080 Ti, TensorFlow 1.8, CUDA 9.2, cuDNN 7.1,
and batch size 64).
Method | Baseline | Compressed | Speed up (x)
---|---|---|---
CP (2x) [13] | 68.7 | 68.3 | $\approx 2$
CP (4x) [13] | 68.7 | 66.9 | $\approx 4$
Asym [39] (reported in [12]) | 68.7 | 67.8 | $\approx 4$
AMC [12] | 68.7 | 68.8 | $\approx 4$
NAP | 70.0 | 68.8 | 5.4
Table 2: Faster-RCNN detection results on PASCAL VOC 2007, using compressed
VGG16 as the backbone network. We report the mAP (%) for both the baseline
model and the compressed model. Speed-up is computed based on the FLOPs of the
VGG backbone, not the whole detector.
#### 4.1.2 ResNet50 Acceleration
ResNet has fewer FLOPs yet higher performance than VGG16. Therefore, it has
less redundancy in its channels and is more challenging to prune. In addition,
the residual structure typically requires further modifications of pruning
methods [13, 23], because the shortcut layers and branch_c layers need to have
the same channel numbers. To apply our method to ResNet50, we treat a channel
in a shortcut layer and its corresponding channels in the other branch_c
layers of the same stage as one unit (a virtual channel). Thus, pruning one
such unit effectively removes one channel from the shortcut path and maintains
the residual structure. Recall that we compute the importance of a channel by
$\Delta\mathcal{L}_{C}$/$\Delta\text{FLOPs}$ (Section 3.3); we can compute the
importance of such a unit by aggregating over all associated channels, i.e.
the sum of $\Delta\mathcal{L}_{C}$ of a channel in the shortcut layer and its
corresponding channels in branch_c layers, divided by the sum of
$\Delta\text{FLOPs}$ after removing these channels. Each unit then competes
with the other channels on this importance score, and we simply remove the
$k$ least important channels in the network.
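A minimal sketch of this unit-level scoring; the grouping of a shortcut channel with its aligned branch_c channels is assumed to be given.

```python
# Score a "virtual channel": aggregate dL_C and dFLOPs over all grouped channels.
def unit_score(unit_channels, delta_L_C, delta_flops):
    """unit_channels: indices of a shortcut channel and its branch_c peers."""
    total_dL = sum(delta_L_C[c] for c in unit_channels)
    total_flops = sum(delta_flops[c] for c in unit_channels)
    return total_dL / total_flops
```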
The compression result is shown in Table 1. Compared with previous methods, we
achieve a smaller performance drop under a similar speed-up ratio. We notice
that previous methods [13, 25] usually manually define the structure of the
compressed network, e.g. channels in the shortcut path remain unchanged and
blocks closer to max-pooling are pruned less. However, such a structure is not
easy to tune and may not be optimal for model compression, which limits their
performance. Furthermore, these methods also have difficulty achieving real
acceleration on GPU ([13, 23] rely on sampling layers, which makes the
compressed ResNet even slower than the original one). Since our method
directly prunes unimportant shortcut channels as well as other channels, we do
not need any customization for acceleration: when testing inference time on
GPU, our compressed ResNet50 runs more than 1.4 times faster than the original
one.
#### 4.1.3 Generalization to Object Detection
We further test the generalization of our pruned model on a detection task.
After compressing VGG16 on ImageNet, we use it as the backbone network and
train Faster-RCNN on the PASCAL VOC 2007 dataset. As shown in Table 2, our
model matches the absolute performance of the previous best model, with a
higher speed-up ratio. Compared with our uncompressed baseline model, we have
only a 1.2% performance drop under 5.4 times backbone network acceleration.
This suggests that our model indeed preserves enough model capacity and useful
features, and can generalize to other tasks.
### 4.2 Fine-grained Pruning for Model Compression
Our method can also be applied to fine-grained settings. Here, we compare our
method with recent fine-grained pruning methods, including 1) Random Pruning
[4], 2) OBD [20], 3) LWC [9], 4) DNS [7], 5) L-OBS [4] and 6) AMC [12]. We
report the change in top-1 error before and after pruning (except for
ResNet50, where we report top-5, since our baselines [4, 9, 12] only report
top-5 error). Across different datasets and architectures, our method shows
better compression results.
Architecture | Method | $\Delta$ Error | CR
---|---|---|---
LeNet-300-100 | Random [4] | 0.49% | 12.5
 | OBD [20] | 0.20% | 12.5
 | LWC [9] | -0.05% | 12.5
 | DNS [7] | -0.29% | 55.6
 | L-OBS [4] | 0.20% | 66.7
 | NAP | 0.08% | 77.0
LeNet-5 | OBD [20] | 1.38% | 12.5
 | LWC [9] | -0.03% | 12.5
 | DNS [7] | 0.00% | 111
 | L-OBS [4] | 0.39% | 111
 | NAP | 0.04% | 200
CifarNet | LWC [9] | 0.79% | 11.1
 | L-OBS [4] | 0.19% | 11.1
 | NAP | 0.17% | 15.6
Table 3: MNIST and Cifar-10 results. CR means Compression Ratio (higher is
better). $\Delta$ Error is the increase of classification error (lower is
better).
##### MNIST:
We first conduct experiments on the MNIST dataset with LeNet-300-100 and
LeNet-5. LeNet-300-100 has 2 fully-connected layers, with 300 and 100 hidden
units respectively. LeNet-5 is a CNN with 2 convolutional layers followed by
2 fully-connected layers. Table 3 shows that we can compress LeNet-300-100 to
1.3% of its original size with almost no loss in performance. Similarly, we
can compress LeNet-5 to 0.5%, which is much smaller than the previous best
result.
##### Cifar-10:
We conduct experiments on the Cifar-10 image classification benchmark with the
CifarNet architecture. CifarNet is a variant of AlexNet, containing 3
convolutional layers and 2 fully-connected layers. Following previous work
[4], we first pre-train the network to achieve an 18.43% error rate on the
test set. After pruning, our method compresses the original model into one 16
times smaller with a negligible accuracy drop.
Architecture | Method | $\Delta$ Error | CR (x)
---|---|---|---
AlexNet | LWC [9] | -0.01% | 9.1
 | L-OBS [4] | -0.19% | 9.1
 | DNS [7] | -0.33% | 18
 | NAP | -0.03% | 25
VGG16 | LWC [9] | -0.06% | 13
 | L-OBS [4] | 0.36% | 13
 | NAP | -0.17% | 25
ResNet50 | L-OBS [4] | 2.2% | 2.0
 | LWC [9] | -0.1% | 2.7
 | AMC [12] | -0.03% | 5.0
 | NAP | 0.05% | 6.7
Table 4: ImageNet results. CR means Compression Ratio (higher is better).
$\Delta$ Error is the increase of classification error (lower is better).
##### ImageNet:
To demonstrate our method's effectiveness on larger models and datasets, we
prune AlexNet on ImageNet ILSVRC-2012. As shown in Table 4, we achieve 25
times compression of its original size. Different from DNS [7], which allows
pruned parameters to revive in order to recover from wrong pruning decisions,
our method only removes parameters, yet achieves better performance. We think
this empirically suggests that our method proposes a better criterion for
deciding which parameters are unimportant, and makes fewer wrong pruning
decisions. Also note that previous work [7, 9] found it necessary to fix the
convolutional layers' weights when pruning and retraining the fully-connected
layers (and vice versa), in order to recover the performance drop from
pruning. However, we do not observe such difficulty in our experiments, as we
simply prune and retrain all layers simultaneously. This also indicates that
our pruning operation only introduces recoverable errors and still preserves
the model capacity.
Ratio (%) | Epoch | $\Delta$ Top1 | $\Delta$ Top5 | Speed up (x)
---|---|---|---|---
0.5 | 0.1 | 2.4 | 1.4 | 5.4
1 | 0.2 | 2.3 | 1.2 | 5.4
2 | 0.4 | 2.6 | 1.3 | 5.4
5 | 1 | 2.5 | 1.3 | 5.3
10 | 2 | 2.6 | 1.3 | 4.9
Table 5: VGG16 channel-wise pruning results with different hyper-parameters.
The first column is the pruning percentage $p$; the second column is the
number of update steps $T$ (in epochs). The results show that NAP is robust to
the hyper-parameter values.
We then apply our method to a modern neural network architecture, VGG16.
Despite its large number of parameters and considerable depth, we encounter no
difficulty pruning VGG16 with our method. We use the same pruning set-up as in
the AlexNet experiment and achieve the best result, as shown in Table 4.
However, both AlexNet and VGG16 have large fully-connected layers, which
contribute most of the network parameters. It is unclear whether the success
of our method generalizes to architectures consisting mainly of convolutional
layers. Therefore, we further conduct experiments on ResNet50, which has 49
convolutional layers and one relatively small fully-connected layer. Due to
its large number of layers, the search space of per-layer compression ratios
is exponentially larger than that of AlexNet and VGG16. Therefore,
conventional methods have difficulty tuning and setting the compression ratio
for each of those layers. This is supported by the fact that automatic methods
like AMC and ours achieve much better results than manually tuned ones like
L-OBS and LWC, as shown in Table 4. Furthermore, convolutional layers are
usually much less redundant than fully-connected layers, and thus a wrong
choice of parameters to prune may lead to severe and unrecoverable performance
drops. This explains why our method outperforms AMC: though benefiting from
reinforcement learning to predict per-layer compression ratios, AMC defines
unimportant parameters as those with lower magnitudes. In contrast, our method
uses a theoretically sound criterion, resulting in better performance.
### 4.3 Ablation Study
As shown in Table 5, we choose different pruning ratios $p$ and numbers of
update steps $T$ to perform pruning on VGG16. The essential idea is to keep
the total running time (epochs) for pruning roughly the same, while changing
different $p$ and $T$. From the table we can observe that NAP is very robust
to the hyper-parameter values. As long as hyper-parameter values remain in a
reasonable range, NAP can achieve decent results automatically. We provide
more ablation studies in the supplementary materials.
## 5 Conclusion
In this paper we have proposed the Network Automatic Pruning (NAP) method. It
can be run in an almost hyper-parameter-free manner and shows decent
compression results. We have tested its effectiveness in both fine-grained and
structured pruning settings and shown strong performance.
## References
* [1] J. Ba, R. Grosse, and J. Martens. Distributed second-order optimization using kronecker-factored approximations. 2016.
* [2] M. Courbariaux, I. Hubara, D. Soudry, R. El-Yaniv, and Y. Bengio. Binarized neural networks: Training deep neural networks with weights and activations constrained to+ 1 or-1. arXiv, 2016.
* [3] E. L. Denton, W. Zaremba, J. Bruna, Y. LeCun, and R. Fergus. Exploiting linear structure within convolutional networks for efficient evaluation. In NIPS, 2014.
* [4] X. Dong, S. Chen, and S. Pan. Learning to prune deep neural networks via layer-wise optimal brain surgeon. In NIPS, 2017.
* [5] Y. Gong, L. Liu, M. Yang, and L. Bourdev. Compressing deep convolutional networks using vector quantization. arXiv, 2014.
* [6] R. Grosse and J. Martens. A kronecker-factored approximate fisher matrix for convolution layers. In International Conference on Machine Learning, 2016.
* [7] Y. Guo, A. Yao, and Y. Chen. Dynamic network surgery for efficient dnns. In NIPS, 2016.
* [8] S. Han, H. Mao, and W. J. Dally. Deep compression: Compressing deep neural networks with pruning, trained quantization and huffman coding. arXiv, 2015.
* [9] S. Han, J. Pool, J. Tran, and W. Dally. Learning both weights and connections for efficient neural network. In NIPS, 2015.
* [10] B. Hassibi and D. G. Stork. Second order derivatives for network pruning: Optimal brain surgeon. In NIPS, 1993.
* [11] K. He, X. Zhang, S. Ren, and J. Sun. Deep residual learning for image recognition. In CVPR, 2016.
* [12] Y. He, J. Lin, Z. Liu, H. Wang, L.-J. Li, and S. Han. Amc: Automl for model compression and acceleration on mobile devices. In ECCV, 2018.
* [13] Y. He, X. Zhang, and J. Sun. Channel pruning for accelerating very deep neural networks. In ICCV, 2017.
* [14] Z. Huang and N. Wang. Data-driven sparse structure selection for deep neural networks. In ECCV, 2018.
* [15] M. Jaderberg, A. Vedaldi, and A. Zisserman. Speeding up convolutional neural networks with low rank expansions. arXiv, 2014.
* [16] A. Krizhevsky and G. Hinton. Learning multiple layers of features from tiny images. 2009.
* [17] A. Krizhevsky, I. Sutskever, and G. E. Hinton. Imagenet classification with deep convolutional neural networks. In NIPS, 2012.
* [18] V. Lebedev, Y. Ganin, M. Rakhuba, I. Oseledets, and V. Lempitsky. Speeding-up convolutional neural networks using fine-tuned cp-decomposition. arXiv, 2014.
* [19] Y. LeCun, L. Bottou, Y. Bengio, and P. Haffner. Gradient-based learning applied to document recognition. Proceedings of the IEEE, 1998.
* [20] Y. LeCun, J. S. Denker, and S. A. Solla. Optimal brain damage. In NIPS, 1990.
* [21] H. Li, A. Kadav, I. Durdanovic, H. Samet, and H. P. Graf. Pruning filters for efficient convnets. arXiv, 2016.
* [22] J. Lin, Y. Rao, J. Lu, and J. Zhou. Runtime neural pruning. In NIPS, 2017.
* [23] Z. Liu, J. Li, Z. Shen, G. Huang, S. Yan, and C. Zhang. Learning efficient convolutional networks through network slimming. In ICCV, 2017.
* [24] J.-H. Luo and J. Wu. Autopruner: An end-to-end trainable filter pruning method for efficient deep model inference. arXiv, 2018.
* [25] J.-H. Luo, J. Wu, and W. Lin. Thinet: A filter level pruning method for deep neural network compression. In ICCV, 2017.
* [26] J. Martens and R. Grosse. Optimizing neural networks with kronecker-factored approximate curvature. In International conference on machine learning, 2015.
* [27] V. Mnih, K. Kavukcuoglu, D. Silver, A. Graves, I. Antonoglou, D. Wierstra, and M. Riedmiller. Playing atari with deep reinforcement learning. arXiv, 2013.
* [28] P. Molchanov, S. Tyree, T. Karras, T. Aila, and J. Kautz. Pruning convolutional neural networks for resource efficient inference. arXiv, 2016.
* [29] A. Novikov, D. Podoprikhin, A. Osokin, and D. P. Vetrov. Tensorizing neural networks. In NIPS, 2015.
* [30] M. Rastegari, V. Ordonez, J. Redmon, and A. Farhadi. Xnor-net: Imagenet classification using binary convolutional neural networks. In ECCV, 2016.
* [31] J. Rissanen. Modeling by shortest data description. Automatica, 1978.
* [32] K. Simonyan and A. Zisserman. Very deep convolutional networks for large-scale image recognition. arXiv, 2014.
* [33] I. Sutskever, O. Vinyals, and Q. V. Le. Sequence to sequence learning with neural networks. In NIPS, 2014.
* [34] H. Wang, Q. Zhang, Y. Wang, and H. Hu. Structured probabilistic pruning for convolutional neural network acceleration. arXiv, 2017.
* [35] W. Wen, C. Wu, Y. Wang, Y. Chen, and H. Li. Learning structured sparsity in deep neural networks. In NIPS, 2016.
* [36] J. Wu, C. Leng, Y. Wang, Q. Hu, and J. Cheng. Quantized convolutional neural networks for mobile devices. In CVPR, 2016.
* [37] Y. Wu, E. Mansimov, R. B. Grosse, S. Liao, and J. Ba. Scalable trust-region method for deep reinforcement learning using kronecker-factored approximation. In NIPS, 2017.
* [38] G. Zhang, S. Sun, D. Duvenaud, and R. Grosse. Noisy natural gradient as variational inference. arXiv, 2017.
* [39] X. Zhang, J. Zou, K. He, and J. Sun. Accelerating very deep convolutional networks for classification and detection. PAMI, 2016.
* [40] C. Zhu, S. Han, H. Mao, and W. J. Dally. Trained ternary quantization. arXiv, 2016.
## Appendix A Does NAP prune similar parameters as magnitude-based pruning?
Different from conventional pruning methods, NAP selects unimportant
parameters based on their effect upon the final loss function. Here, we
investigate which parameters NAP prunes within a layer. Since both NAP and
magnitude-based pruning can achieve good compression results, it is intriguing
to explore whether NAP prunes similar parameters to magnitude-based pruning,
or totally different ones. We conduct our method in the fine-grained setting,
which provides more fine-grained information.
Figure 2 and Figure 3 show the distribution of parameters' magnitudes in the
first fully-connected layer of AlexNet. Before pruning, the weight
distribution peaks at 0 and drops quickly as the absolute value increases,
much like the Gaussian distribution from which the weights were initialized.
Figure 3 shows the distribution after pruning using our method. It is clear
that parameters with magnitudes close to 0 (center region) are pruned away,
indicating that the parameters our method regards as having a small impact
upon the loss also usually have small magnitudes. Another interesting
observation is that after pruning, the magnitudes of the remaining parameters
are larger than before, i.e. parameters' magnitudes cluster around 0.03,
whereas they were typically smaller than 0.01 before pruning. This may provide
some intuition for future work on designing new initialization distributions.
Figure 2: Weight distribution before pruning (First FC in AlexNet). Figure 3:
Weight distribution after pruning (First FC in AlexNet).
We further explore the correlation between a parameter's magnitude and its
importance measured by NAP. From Figure 4, we can observe that: 1) A
parameter's absolute value indeed has a strong correlation with its importance
computed by NAP. This explains why magnitude-based methods also achieve fairly
good performance. Despite this strong correlation, directly comparing the
magnitudes across all layers gives a different pruning result than our method:
setting the pruning threshold as the median of parameters' magnitudes in
Figure 4(a) will prune 14% of conv5, while the median of our importance will
prune 7%. Therefore, directly comparing magnitudes across all layers and
pruning the smallest ones leads to a sub-optimal architecture and limits
performance, which is why magnitude-based methods prune layers individually.
2) The FC layer initially has smaller importance (larger redundancy) than the
convolutional layer as measured by NAP, and is thus pruned more severely. As
more and more parameters are pruned away (from Figure 4(a) to Figure 4(b)),
the importance of the FC layer becomes closer to that of the convolutional
layer, and both layers are pruned with similar ratios. This shows that our
method can dynamically adjust the compression ratio of each layer as pruning
proceeds.
(a) 100% prune to 50%
(b) 25% prune to 12.5%
Figure 4: Correlation between parameters’ absolute values and importances.
## Appendix B Is NAP sensitive to hyper-parameters?
As already demonstrated in the ablation study, our method is robust to the
choice of hyper-parameters. NAP has two hyper-parameters: the pruning ratio
$p$ at each pruning step, and the number of steps for fine-tuning (updating
stats) $T$ (since we update the stats and fine-tune the network
simultaneously, we use these two terms interchangeably). In the ablation
study, we showed that within a fixed pruning time-budget (the total number of
steps run on the training dataset before achieving $5.4$ times acceleration),
different choices of $p$ and $T$ give similar final performance. We believe
the fixed pruning time-budget is a practical constraint, as one wants to
obtain a small network as soon as possible. Here, we first show more results
with a fixed time-budget, and then show the effect of other hyper-parameter
choices without this constraint.
In addition to similar performances, we also notice that the final model
architectures given by different hyper-parameters are very similar. Figure 7
shows the remained channel index in a pruned VGG16, averaged over experiments
using different hyper-parameters. Same as before, we conduct 5 different
channel-wise pruning experiments, with 1) $p=10\%,T=\text{2 epoch}$, 2)
$p=5\%,T=\text{1 epoch}$, 3) $p=2\%,T=\text{0.4 epoch}$, 4) $p=1\%,T=\text{0.2
epoch}$, 5) $p=0.5\%,T=\text{0.1 epoch}$. These give us 5 pruned VGG16
networks with similar final performance (as shown in the ablation study). We
then average over these 5 architectures on whether a channel is pruned or not.
For instance, if a specific channel is pruned in all 5 models, its
corresponding value is 0; if it is pruned in 4 models, its value is 0.2; and
if it is not pruned in any model, its value is 1. Based on these values, we
plot the heat-map in Figure 7.
Figure 5: Different values of the pruning fraction $p$. Higher sparsity means
fewer parameters. Figure 6: Different values of the fine-tuning step $T$.
Higher sparsity means fewer parameters.
From Figure 7 we can see that the pruned models have very consistent
architectures. Channels are generally dark blue or white, meaning they are
either retained in all 5 models or pruned in all 5 models. This heat-map is
very different from a random pruning pattern, in which each channel would have
probability 0.5 of being pruned or retained, so the associated heat-map would
show an intermediate blue color for all channels. This experimental result
further demonstrates that our method is robust to the choice of
hyper-parameters. Since different hyper-parameters end up with similar
architectures, it is reasonable that the final performances are similar.
Furthermore, all these 5 pruned models originated from the same pre-trained
model. We believe the consistent architectures also indicate that there may
exist a one-to-one correspondence between a set of pre-trained weights and an
effective pruned architecture. We hope this observation will lead to more
discussion in future work.
Next, we show experimental results with other hyper-parameter choices,
regardless of the fixed pruning time-budget. These include effects of
different $p$ and effects of different $T$. Intuitively, smaller $p$ should
give better results, as we will have a more accurate approximation of the
Hessian matrix. Here, we apply our method with fixed $T$ and different $p$ in
the fine-grained setting, using CifarNet. Figure 5 shows the relation of final
accuracy under different sparsity levels. In particular, we apply different
$p$ followed by 20k steps of fine-tuning. We can see that small $p$ has better
results, especially when the sparsity is larger than 90%. On the other hand,
there are negligible differences in a broad range of sparsity, i.e. sparsity
is less than 90%. Therefore, we believe the choice of $p$ will depends on
one’s time-budget for compression; smaller $p$ will give better result but
consume longer time, while larger $p$ will still give relatively good results
using much shorter time.
We also have similar conclusions for different values of $T$. Since we use the
Fisher matrix as a proxy for the Hessian, we assume the model is converged
before each pruning. Thus, longer fine-tuning steps will help estimate a more
accurate Hessian matrix. As shown in Figure 6, larger $T$ yields better
results. Moreover, the value of $T$ has almost no effect when sparsity is
small. Similar to $p$, we believe the choice of $T$ also depends on the time-
budget for compression, and it is not hard to tune.
Figure 7: Retained channel index averaged over 5 experiments with different hyper-parameters. Dark blue means a channel is retained in all 5 experiments, while white indicates a channel is pruned in all 5 experiments. This heatmap shows that, though using different hyper-parameters, our method ends up with consistent model architectures.

Architecture | Layer | Parameters | LWC [9] | DNS [7] | NAP (Ours)
---|---|---|---|---|---
LeNet-300-100 | fc1 | 235K | 8% | 1.8% | 0.73%
 | fc2 | 30K | 9% | 1.8% | 4.74%
 | fc3 | 1K | 26% | 5.5% | 39.4%
 | total | 267K | 8% | 1.8% | 1.3%
LeNet-5 | conv1 | 0.5K | 66% | 14.2% | 41.2%
 | conv2 | 25K | 12% | 3.1% | 3.1%
 | fc1 | 400K | 8% | 0.7% | 0.2%
 | fc2 | 5K | 19% | 4.3% | 8.9%
 | total | 431K | 8% | 0.9% | 0.5%
AlexNet | conv1 | 35K | 84% | 53.8% | 67.2%
 | conv2 | 307K | 38% | 40.6% | 37.8%
 | conv3 | 885K | 35% | 29.0% | 27.7%
 | conv4 | 663K | 37% | 32.3% | 33.2%
 | conv5 | 442K | 37% | 32.5% | 38.6%
 | fc1 | 38M | 9% | 3.7% | 1.5%
 | fc2 | 17M | 9% | 6.6% | 3.2%
 | fc3 | 4M | 25% | 4.6% | 13.7%
 | total | 61M | 11% | 5.7% | 4%
VGG16 | conv1_1 | 2K | 58% | - | 92.1%
 | conv1_2 | 37K | 22% | - | 66.4%
 | conv2_1 | 74K | 34% | - | 55.7%
 | conv2_2 | 148K | 36% | - | 46.9%
 | conv3_1 | 295K | 53% | - | 38.2%
 | conv3_2 | 590K | 24% | - | 30.8%
 | conv3_3 | 590K | 42% | - | 30.5%
 | conv4_1 | 1M | 32% | - | 20.6%
 | conv4_2 | 2M | 27% | - | 12.9%
 | conv4_3 | 2M | 34% | - | 12.8%
 | conv5_1 | 2M | 35% | - | 13.1%
 | conv5_2 | 2M | 29% | - | 13.9%
 | conv5_3 | 2M | 36% | - | 13.3%
 | fc1 | 103M | 4% | - | 0.3%
 | fc2 | 17M | 4% | - | 1.9%
 | fc3 | 4M | 23% | - | 9.2%
 | total | 138M | 7.5% | - | 4%
Table 6: Fine-grained compression ratios for different layers.
## Appendix C Does NAP find reasonable per-layer compression ratios?
We compare the fine-grained compression ratios, layer by layer, between our
method and previous methods, as shown in Table 6. The compression ratios of
LWC [9] and DNS [7] are manually tuned to achieve good results. In contrast,
our method automatically determines the compression ratio for each layer
during the pruning process. We notice that our compression ratios are not the
same as the previous manually-tuned ratios, but share some similarities:
layers with smaller compression ratios in our method also have smaller ratios
in other methods. Also, fully-connected layers are generally pruned much more
severely than convolutional layers, in accord with the observation that
fully-connected layers usually have more redundancy. These results suggest
that our method can find reasonable per-layer compression ratios according to
the sensitivity of each layer.
## Appendix D Implementation Details
Here, we provide more implementation details for reproducibility. Our
implementation of NAP is based on TensorFlow. On ImageNet, we first pretrain
models following the hyper-parameter settings in the Caffe model zoo (AlexNet
and VGG16) or the original paper (ResNet50). Our pretrained models match the
performance of the Caffe model zoo (or the original paper for ResNet50).
Throughout the training pipeline, the training augmentation first resizes the
shortest side to a random size and then randomly crops a 224x224 image with
random horizontal flipping (except for AlexNet, where we resize to a fixed
size of 256 and randomly crop a 227x227 image). For testing, we first resize
the shortest side to 256 and then center-crop a 224x224 image. No other
augmentation is used. For all experiments, we use SGD with a momentum of 0.9.
NAP has an operation to update the second-order stats of input activations and
output gradients. Our implementation of this is based on the open-sourced
K-FAC repository (https://github.com/tensorflow/kfac), with some
modifications. Recall that we can fine-tune the network while we update those
stats. However, updating the stats typically consumes a lot of memory and can
run into out-of-memory issues on GPUs. Therefore, for a large network like
VGG, we update the stats on the CPU. Moreover, the stats-update operation is
conducted asynchronously with the forward-backward pass on the GPU, which
hides the overhead and saves time. When computing the inverse, we also add a
damping term to increase numerical stability.
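The damped inverse itself is a one-line operation; a minimal sketch follows, with an illustrative damping value rather than the one used in our experiments.

```python
# Damped inverse of a K-FAC factor for numerical stability.
import numpy as np

def damped_inv(M, damping=1e-3):
    return np.linalg.inv(M + damping * np.eye(M.shape[0]))
```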
For channel-wise pruning, we remove $1\%$ of the remaining channels from a
pre-trained model, and fine-tune for 0.2 epoch between two subsequent pruning
steps. In total, we run fewer than 8 epochs before achieving a 5.4x
accelerated VGG16. This time budget is similar to or shorter than that of
previous methods. For the last stage of fine-tuning VGG, we use 1e-3 as the
initial learning rate, decay to 1e-4 at 20 epochs, and stop at 30 epochs. The
weight decay term is set to 0. For ResNet, we use 1e-3 as the initial learning
rate, decay by 10 at 10 epochs and 20 epochs, and stop at 30 epochs. In our
experiments, we find that we can achieve better performance if we train
longer. However, the goal of the experiments is to show the effectiveness of
our method, rather than the final fine-tuning schedule. Therefore, we keep our
fine-tuning setting similar to previous work.
For fine-grained pruning, we prune more aggressively at each stage, since a
network is less sensitive to fine-grained pruning. More specifically, we prune
AlexNet and VGG16 iteratively to $50\%$, $25\%$, $12.5\%$, $6.25\%$, $5\%$,
$4\%$. These compression ratios are chosen because we simply choose to halve
the network size at the very beginning, and then prune less aggressively with
an $80\%$ compression ratio in the last two pruning steps. After each pruning,
we retrain the network with the same learning schedule as in the pre-training,
with slightly lower weight decay (2e-4). We empirically find that such a
schedule converges faster than fine-tuning with a small learning rate (e.g.
1e-4, which converges to a similar performance but takes much longer: 200
epochs, as shown in [9]). For ResNet50, we follow a similar setting to [12].
We iteratively prune ResNet50 to $50\%$, $30\%$, $25\%$, $20\%$, $15\%$. The
fine-tuning schedule simply uses 1e-4 and trains until the model converges.
Predictive Processing in Cognitive Robotics: a Review
Alejandra Ciria,1 Guido Schillaci2, Giovanni Pezzulo3, Verena V. Hafner4 and
Bruno Lara5
1Facultad de Psicología. Universidad Nacional Autónoma de México.
2The BioRobotics Institute. Scuola Superiore Sant’Anna. Italy.
3Institute of Cognitive Sciences and Technologies. National Research Council.
Italy.
4Adaptive Systems Group. Department of Computer Science. Humboldt-Universität
zu Berlin. Germany.
5Laboratorio de Robótica Cognitiva. Centro de Investigación en Ciencias.
Universidad Autónoma del Estado de Morelos. Mexico
Keywords: Cognitive robotics, predictive processing, active inference
Abstract
Predictive processing has become an influential framework in cognitive
sciences. This framework turns the traditional view of perception upside down,
claiming that the main flow of information processing is realized in a top-
down hierarchical manner. Furthermore, it aims at unifying perception,
cognition, and action as a single inferential process. However, in the related
literature, the predictive processing framework and its associated schemes
such as predictive coding, active inference, perceptual inference, free-energy
principle, tend to be used interchangeably. In the field of cognitive robotics
there is no clear-cut distinction as to which schemes have been implemented and
under which assumptions. In this paper, working definitions are set with the
main aim of analyzing the state of the art in cognitive robotics research
working under the predictive processing framework as well as some related non-
robotic models. The analysis suggests that, first, research on both cognitive
robotics implementations and non-robotic models needs to be extended to the
study of how multiple exteroceptive modalities can be integrated into
prediction error minimization schemes. Second, a relevant distinction found
here is that cognitive robotics implementations tend to emphasize the learning
of a generative model, while in non-robotic models such learning is almost absent.
Third, despite the relevance for active inference, few cognitive robotics
implementations examine the issues around control and whether it should result
from the substitution of inverse models with proprioceptive predictions.
Finally, limited attention has been placed on precision weighting and the
tracking of prediction error dynamics. These mechanisms should help to explore
more complex behaviors and tasks in cognitive robotics research under the
predictive processing framework.
## 1 Introduction
Predictive processing has become an influential framework in the cognitive
sciences. A defining characteristic of predictive processing is that it
“…depicts perception, cognition, and action as the closely woven product of a
single kind of inferential process.” (Clark,, 2018, p. 522). This idea has
had a profound effect on the models and theories in different research
communities, from neuroscience to psychology, computational modelling and
cognitive robotics. In the literature, terms such as “predictive processing”,
“hierarchical predictive processing”, “active inference”, “predictive coding”
and “free energy principle” are often used interchangeably. Scholars refer to
them either as theories or frameworks, occasionally interweaving their core
ideas.
In cognitive robotics, a number of architectures and models have claimed to
follow the postulates of these frameworks. Research in embodied cognitive
robotics focuses on understanding and modeling perception, cognition, and
action in artificial agents. It is through bodily-interactions with their
environment that agents are expected to learn and then be capable of
performing cognitive tasks autonomously (Lara et al.,, 2018; Schillaci et al.,
2016a, ). The aim of this article is to set working definitions and delimit
the main ideas for each of these frameworks, so as to be able to analyze the
literature of cognitive robotics and the different implementations in the
literature. This should help to highlight what has been done and what is
missing, and above all, what the real impact of these frameworks in the area
of robotics and artificial intelligence is. Finally, this manuscript sets out
the issues and challenges that these new frameworks bring to the table.
The structure of this paper is as follows. Section 2 sets the relevant working
definitions. In Section 3, different models and architectures are analyzed in
the light of the above-mentioned frameworks. Section 4 closes the paper.
## 2 Working definitions
For the purpose of this article, predictive processing is considered to be the
most general set of postulates. Predictive processing proposes to turn the
traditional picture of perception upside down (Clark,, 2015). The standard
picture of perceptual processing is dominated by the bottom-up flow of
information which is transduced from sensory receptors. In this picture of
perception, as information flows upwards, a progressively richer picture of
the world is then constructed from a low-level feature layer processing
perceptual input to a high-level semantics layer interpreting information
(Marr,, 1982). Altogether, predictive processing claims to unify perception,
cognition and action under the same explanatory scope (Clark,, 2013; Hohwy,,
2013).
The predictive processing view of perception states that agents are constantly
and actively predicting sensory stimulation and that only deviations from the
predicted sensory input (prediction errors) are processed bottom-up.
Prediction error is newsworthy sensory information which provides corrective
feedback on top-down predictions and promotes learning. Therefore, in this
view of perception, the core flow of information is top-down and the bottom-up
flow of sensory information is replaced by the upward flow of prediction
error. The core function of the brain is minimizing prediction error. This
process has become known as Prediction Error Minimization (PEM). In a general
sense, PEM has been a scheme used in many machine learning algorithms where
the error between the desired output and the output generated by the network
is used for learning (see, for instance, backpropagation algorithms for
training neural networks). Different strategies of PEM have been used in
models for perception and action control in artificial agents (see Schillaci
et al., 2016a for a review).
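As a toy illustration of PEM in this machine-learning sense (our sketch, not drawn from the reviewed literature), a single prediction can be nudged toward the observation by gradient descent on the squared prediction error:

```python
# Toy sketch of prediction error minimization: a scalar prediction is
# nudged toward the observation by descending the squared error.
prediction, lr = 0.0, 0.1
for observation in [1.0, 1.0, 1.0]:
    error = observation - prediction   # prediction error
    prediction += lr * error           # gradient step on 0.5 * error**2
```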
Going further, predictive processing suggests that the brain is an active
organ that constantly generates explanations about sensory inputs and then
tests these hypotheses against incoming sensory information (Feldman and
Friston,, 2010) – in a way that is coherent with Helmholtz’s view of
perception as an unconscious form of inference.
Figure 1: Schematic representation of hierarchical neuronal message passing
under the predictive processing postulates.
Recurrent neuronal interactions with descending predictions and ascending
prediction errors following the predictive processing postulates are
illustrated in a simplified segment of the cortical hierarchy in Figure 1.A.
Neuronal activity of deep pyramidal cells (represented in black) at higher
layers of the cortex encodes prior beliefs about the expected states of the
superficial pyramidal cells (represented in red) at lower layers. At each
cortical level, prior beliefs encode the more likely neuronal activity at
lower levels. Superficial pyramidal cells compare descending predictions with
the ascending sensory evidence resulting in what is known as prediction error.
The prediction error at superficial pyramidal cells is sent to deep pyramidal
cells for belief updating (posterior belief). In Figure 1.B descending
modulation determines the relative influence of prediction errors at lower
levels of the hierarchy on deep pyramidal cells encoding predictions.
Precision beliefs are encoded by a descending neuromodulatory gating or gain
control (green) of superficial pyramidal cells. In Bayesian inference, beliefs
about precision have a great effect on how posterior beliefs are updated.
Precision beliefs are considered an attentional mechanism which weights
predictions and sensory evidence depending on how certain or useful these are
for a given task and context. Figure 1.C shows a particular example of active
inference for prediction error minimization. Perceptual inferences about
grasping a cup generate visual, cutaneous, and proprioceptive prediction
errors that are then minimized by movement. Descending proprioceptive
predictions should be fulfilled by being highly weighted to incite movement.
Then, proprioceptive prediction errors are generated at the level of the
spinal cord and minimized at the level of peripheral reflexes. At the same
time, when the movement trajectory to grasp the cup is performed, visual and
cutaneous prediction errors are minimized at all levels of the cortical
hierarchy.
Humans and other biological agents have to deal with a world full of sensory
uncertainty. In humans, there is psychophysical evidence that shows how
Bayesian models can account for perceptual and motor biases by encoding
uncertainty in the internal representations of the brain (Knill and Pouget,,
2004).
There are several Bayesian approaches centered on the idea that perceptual and
cognitive processes are supported by internal probabilistic generative models
(Clark,, 2013, 2015; Friston, 2010a, ; Hohwy,, 2013; Rao and Ballard,, 1999).
A generative model is a probabilistic model (joint density), mapping hidden
causes in the environment with sensory consequences from which samples are
generated (Friston, 2010a, ). It is usually specified in terms of the
likelihood probability distribution of observing some sensory information
given its causes, and a prior probability distribution of the beliefs about
the hidden causes of sensory information (before sampling new observations)
(Badcock et al.,, 2017). A posterior density is a posterior belief generated
by combining the prior and the likelihood weighted according to their
precision, defined as the inverse variance (Adams et al., 2013b, ). A
posterior density can be calculated using the Bayes theorem:
$p(s|O)=\frac{p(O|s)p(s)}{p(O)}$ (1)
where $p(s|O)$, also known as the posterior belief, is the probability of
hypothesis $s$ given the evidence or observation $O$. Prior beliefs are
updated (thus becoming posterior beliefs) when sensory evidence (the
likelihood) is available. $p(O|s)$ is the likelihood relating the sensory
observation to the hidden causes, that is, the probability of the specific
evidence $O$ given hypothesis $s$. $p(s)$ is the prior distribution over
hypotheses $s$, or prior belief, and can be seen as the prediction of states.
$p(O)$ is the probability of encountering this evidence or observation.
This calculation is often practically intractable, and variational Bayes is
then used to approximate the posterior. This method introduces an optimization
problem which requires an auxiliary probability density termed the recognition
density (Buckley et al.,, 2017).
Prediction error is the difference between the mean of the prior belief and
the mean of the likelihood in their respective probability distributions.
Information gain is measured as the KL divergence between the prior belief and
the posterior belief. The prior and likelihood distributions have an expected
precision, which is encoded as the inverse of their respective variance. This
precision will bias the posterior belief update. In particular, the posterior
belief is biased towards the prior belief when the prior has a higher expected
precision than the sensory evidence (see
Figure 2A). On the contrary, when the expected precision on prior belief is
low and the expected precision on sensory evidence is high, the prediction is
more uncertain or unreliable, having less of an impact on how the posterior
belief is updated than the sensory evidence (see Figure 2B). In both examples
in Figure 2, although the magnitude of the prediction error is equivalent, the
information gain is greater in B due to the greater divergence between the
prior and the posterior beliefs.
Figure 2: Relevance of the precision of probability distributions in Bayesian
inference.
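For the Gaussian case illustrated in Figure 2, this weighting has a closed form: precisions add, and the posterior mean is the precision-weighted average of the prior and likelihood means. A minimal sketch with made-up numbers:

```python
# Conjugate Gaussian belief update with made-up numbers: the posterior mean
# is a precision-weighted average, so the more precise source dominates.
mu_prior, pi_prior = 0.0, 4.0   # precise prior (variance 0.25)
mu_lik, pi_lik = 1.0, 1.0       # noisy sensory evidence (variance 1.0)

pi_post = pi_prior + pi_lik                                  # precisions add
mu_post = (pi_prior * mu_prior + pi_lik * mu_lik) / pi_post  # = 0.2, near prior
```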
In Bayesian inference, there are beliefs about beliefs (empirical priors) in
terms of having expectations about the beliefs’ precision or uncertainty
(Adams et al., 2013b, ). Here, attention is seen as a selective sampling of
sensory information, in such a way that predictions about the confidence of
the signals are made to enhance or attenuate prediction errors from different
sensory modalities. In order to attain this sampling, this framework proposes
a mechanism known as precision weighting. The information coming from
different modalities is weighted according to the expected confidence given a
certain task in a certain context (Parr and Friston,, 2017; Friston et al.,
2012a, ; Donnarumma et al.,, 2017).
Importantly, precision weights are not only assigned according to their
reliability, but also according to their context-varying usefulness, and are
thus considered to be a mechanism for behavior control (Clark,, 2020). In the
brain, precision weighting might be mediated by a neuromodulatory gain control
which can be conceived as a Bayes-optimal encoding of precision at a synaptic
level of neuronal populations encoding prediction errors (Friston et al.,,
2014). Prediction errors with high precision have a great impact on belief
updating, and priors with high precision are robust in the face of noisy or
irrelevant prediction errors.
Bayesian beliefs are treated as inferences about the posterior probability
distribution (recognition density) via a process of belief updating (Ramstead
et al.,, 2020). The recognition density is an approximate probability
distribution of the causes of sensory information which encodes posterior
beliefs as a product of inverting the generative model (Friston, 2010a, ).
According to the Bayesian brain hypothesis, prior beliefs are encoded as
neuronal representations, and in light of the new evidence beliefs are updated
(posterior density) to produce a posterior belief following Bayes’ rule
(Friston et al.,, 2014). This means that the brain encodes Bayesian
recognition densities within its neural dynamics, which can be conceived as
inferences of the hidden causes to find the best ‘guess’ of the environment
(Demekas et al.,, 2020).
According to Friston et al., (2006), predictive processing must be situated
within the context of the free-energy principle (Williams,, 2018), given that
‘prediction error minimization’, under certain assumptions, corresponds to
minimizing free energy (Friston, 2010b, ). Predictive processing can be seen
as a name for a family of related theories, where the free energy principle
(FEP) provides a mathematical framework to implement the above ideas. The
free-energy principle is a biological and a neuroscientific framework in which
prediction error minimization is conceived as a fundamental process of self-
organizing systems to maintain their sensory states within their physiological
bounds in the face of constant environmental changes (Adams et al., 2013a, ;
Friston,, 2009; Friston, 2010b, ).
Essentially, the free-energy principle is a mathematical formulation of how
biological agents or systems (like brains) resist a natural tendency to
disorder by limiting the repertoire of their physiological and sensory states
that define their phenotypes (Friston, 2010b, ). In other words, to maintain
their structural integrity, the sensory states of any biological system must
have low entropy. Surprise is the negative log-probability of an outcome, and
entropy is the average ‘surprise’ of sensory signals under the generative
model of the causes of the signals (Friston et al.,, 2011).
Therefore, biological systems are obliged to minimize their sensory surprise
(and implicitly entropy) in order to increase the probability of remaining
within their physiological bounds over long timescales (Friston,, 2009).
The main aim of minimizing free energy is to guarantee that biological systems
spend most of their time in their valuable states, those which they expect to
frequent. Prior expectations prescribe a primary repertoire of valuable states
with innate value, inherited through genetic and epigenetic mechanisms
(Friston, 2010b, ).
Agents are constantly trying to maximize the evidence for the generative model
by minimizing surprise. The FEP claims that because biological systems cannot
minimize surprise directly, they need to minimize an upper bound called ‘free
energy’ (Buckley et al.,, 2017). Free energy can be expressed as the Kullback-
Leibler divergence between two probability distributions, subtracted by the
natural log of the probability of possible states. As stated in Sajid et al.,
(2020), free energy can always be written in terms of complexity and accuracy:
$\begin{split}F&=D_{KL}(Q(s)\,||\,P(s|o))-\ln P(o)\\ &=D_{KL}(Q(s)\,||\,P(s))-E_{Q}[\ln P(o|s)]\end{split}$ (2)
where $Q(s)$ is the recognition density or approximate posterior distribution,
and encodes the beliefs an agent possesses about the unknown variables. The
conditional density $P(s|o)$ is the probability of some (hidden) state $s$
given a certain observation $o$, that is, the true posterior under the
generative model. The first line of Eq. 2 can be read as evidence bound minus
log evidence, or divergence minus surprise. Rewritten as in the second line,
it reads as complexity minus accuracy, where complexity is the difference
between the posterior beliefs and the prior beliefs held before new evidence
is available, and accuracy is the expected log likelihood of the sensory
outcomes given some posterior about the causes of the data (Sajid et al.,,
2020).
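A small numerical check of Eq. 2 with made-up numbers (our sketch): both decompositions of the free energy coincide for a discrete hidden state and a single observation.

```python
import numpy as np

# Numerical check of Eq. 2: both lines give the same free energy F for a
# discrete hidden state s and one observed outcome o (made-up numbers).
Q = np.array([0.7, 0.3])            # recognition density Q(s)
prior = np.array([0.5, 0.5])        # prior P(s)
lik = np.array([0.9, 0.2])          # likelihood P(o|s) for the observed o

evidence = lik @ prior              # P(o)
posterior = lik * prior / evidence  # true posterior P(s|o)

kl = lambda a, b: np.sum(a * np.log(a / b))
F1 = kl(Q, posterior) - np.log(evidence)      # divergence minus log evidence
F2 = kl(Q, prior) - np.sum(Q * np.log(lik))   # complexity minus accuracy
assert np.isclose(F1, F2)
```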
The recognition density (coded by the internal states) and the generative
model are necessary to evaluate free energy (Friston, 2010a, ). Variational
free energy (VFE) provides an upper bound on surprise, and it is formally
equivalent to weighted prediction error (Buckley et al.,, 2017). VFE is a
statistical measure of the surprise under a generative model. Negative VFE
provides a lower bound on model evidence. Minimizing VFE with respect to the
recognition density will also minimize the Kullback-Leibler divergence between
the recognition density and the true posterior. Therefore, minimizing VFE
makes the recognition density, the probabilistic representation of the causes
of sensory inputs, an approximation of the true posterior (Friston, 2010a, ).
Optimizing the recognition density makes it a posterior density on the causes
of sensory information.
Biological agents can minimize free energy by means of two strategies:
changing the recognition density or actively changing their sensory inputs.
Changing the recognition density minimizes free energy and thus, reduces the
perceptual divergence. This is a relevant component of the free energy
formulation when expressed as complexity minus accuracy.
Minimizing perceptual divergence increases the complexity of the model,
defined as the difference between the prior density and the posterior beliefs
encoded by the recognition density (Friston, 2010a, ). This first strategy is
known as perceptual inference, that is, when agents change their predictions
to match incoming sensory information. Given that sensory information can be
very noisy and ambiguous, perceptual inferences are necessary to make the
input coherent and meaningful.
The second strategy is the standard approach to action in predictive
processing, known as active inference (Adams et al., 2013a, ; Brown et al.,,
2013), which consists of an agent changing sensory inputs through actions that
conform to predictions. This is the same as minimizing the expected free
energy (Kruglanski et al.,, 2020). When acting on the world, free energy is
minimized by sampling sensory information that is consistent with prior
beliefs. An action can be defined as a set of real states that change hidden
states in the world, which are closely related to control states inferred by
the generative model to explain the consequences of action (Friston et al.,
2012b, ). Therefore, actions directly affect the accuracy of the generative
model, defined as the surprise about sensory information expected under the
recognition density (Friston, 2010a, ). For survival, valuable actions are
those which are expected to provide agents with the capability to avoid states
of surprise.
Every action serves to maximize the evidence of the generative model in such a
way that policies are selected to minimize complexity. The expected action
consequences include the expected inaccuracy or ambiguity, and the expected
complexity or risk, which are combined into the expected free energy
(Kruglanski et al.,, 2020). Thus, expected free energy is the value of a
policy, describing its pragmatic (instrumental) and epistemic value. In other
words, actions are valuable if they maximize the utility by exploitation
(fulfilling preferences), and if they minimize uncertainty by exploration on
model parameters (information gathering, as in intrinsic motivation
strategies) (Seth and Tsakiris,, 2018). Maximizing epistemic value is
associated with selecting actions that increase model complexity by changing
beliefs, whereas maximizing pragmatic value is associated with actions that
change internal states that align with beliefs (Tschantz et al.,, 2020).
Consequently, the minimization of expected free energy occurs when pragmatic
and epistemic value are maximized.
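To make this decomposition concrete, a common discrete-state formulation scores each policy by its expected free energy as risk (divergence of predicted outcomes from preferred ones) plus ambiguity (expected outcome uncertainty). A minimal sketch under assumed inputs (a likelihood matrix `A` with no zero entries, predicted states `Q_s`, and log preferences `log_C`), not tied to any specific implementation reviewed here:

```python
import numpy as np

# Hypothetical discrete sketch of expected free energy G = risk + ambiguity.
# Q_s: predicted state distribution under a policy; A: likelihood P(o|s);
# log_C: log prior preferences over outcomes. Lower G = better policy.
def expected_free_energy(Q_s, A, log_C):
    Q_o = A @ Q_s                               # predicted outcome distribution
    risk = np.sum(Q_o * (np.log(Q_o) - log_C))  # KL[Q(o) || preferred outcomes]
    H = -np.sum(A * np.log(A), axis=0)          # outcome entropy for each state
    ambiguity = Q_s @ H                         # expected outcome uncertainty
    return risk + ambiguity
```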
Priors are constantly optimized because they are linked hierarchically and
informed by sensory data in such a way that learning occurs when a system
effectively minimizes free energy (Friston, 2010b, ). Here, motor commands are
proprioceptive predictions, as specific muscle movements (internal frame of
reference) are mapped onto an external frame of reference (e.g. vision).
Furthermore, it has been suggested that for biological systems “…it becomes
important not only to track the constantly fluctuating instantaneous errors,
but also to pay attention to the dynamics of error reduction over longer time
scales.” (Kiverstein et al.,, 2019, p. 2856). Rate of change in prediction
error is relevant for epistemic value and novelty seeking situations. In other
words, this mechanism permits an agent to monitor how well it is performing
an action, and it has been suggested as the basis for intrinsic motivation and
value related learning (Kiverstein et al.,, 2019; Kaplan and Friston,, 2018).
Therefore, prediction error and its reduction rates might signal the
expectations on the learnability of particular situations (Van de Cruys,,
2017).
Currently, predictive coding is the most accepted candidate to model how
predictive processing principles are manifested in the brain, namely those
laid out by the FEP (Friston,, 2009; Buckley et al.,, 2017). It is a framework
for understanding redundancy reduction and efficient coding in the brain
(Huang and Rao,, 2011) by means of neuronal message passing among different
levels of cortical hierarchies (Rao and Ballard,, 1999). ’Hierarchical
predictive coding’ suggests that the brain predicts its sensory inputs on the
basis of how higher-levels provide predictions about lower-levels activation
until eventually making predictions about incoming sensory information
(Friston,, 2002, 2005). Active inference enables predictive coding in a
prospective way, where actions attempt to fulfill sensory predictions by
minimizing prediction error (Friston et al.,, 2011).
In this framework, the minimization of prediction error occurs through
recurrent message passing within the hierarchical inference (Friston, 2010b,
). Therefore, changes in higher levels are driven by the forward flow of
the resultant prediction errors from lower levels, optimizing top-down
predictions until the prediction error is minimized (Friston,, 2002; Friston,
2010b, ).
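As a toy illustration of this recurrent message passing (a sketch in the spirit of Rao and Ballard, (1999), with assumed one-dimensional states and identity generative weights, not a faithful reproduction of their model): descending predictions and ascending prediction errors jointly settle the states at each level.

```python
import numpy as np

# Toy two-level predictive coding loop: each level holds a state r that
# predicts the level below through an assumed linear generative weight W;
# states settle under the balance of ascending and descending errors.
W1, W2 = np.eye(1), np.eye(1)   # assumed generative weights (identity here)
o = np.array([1.0])             # sensory observation
r1, r2 = np.zeros(1), np.zeros(1)
lr = 0.1
for _ in range(200):
    e0 = o - W1 @ r1            # sensory-level prediction error (ascending)
    e1 = r1 - W2 @ r2           # error between level 1 and level 2
    r1 += lr * (W1.T @ e0 - e1) # corrected bottom-up, constrained top-down
    r2 += lr * (W2.T @ e1)      # level 2 driven by the ascending error
# r1 and r2 converge towards the observation o
```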
Predictive coding is closely related to Bayes formulations, from the
explanation of how “hierarchical probabilistic generative models” are encoded
in the brain to the manner in which the whole system deals with uncertainty.
Furthermore, the PEM hypothesis suggests that the brain can be conceived as
being “literally Bayesian” (Hohwy,, 2013, p. 17).
However, there is an increasing number of predictive coding variants, for
example, there are differences in the algorithms and in the type of generative
model they use (Spratling,, 2017), and in the excitatory or inhibitory
properties of the hierarchical connections (e.g. Rao and Ballard, (1999);
Spratling, (2008), among others). “These issues matter when it comes to
finding definitive empirical evidence for the computational architectures
entailed by predictive coding” (Friston,, 2019, p. 3).
All of these frameworks provide new ways to solve the perception-action
control problem in cognitive robotics (Schillaci et al., 2016a, ). In the last
couple of decades, the standard solution was the use of paired inverse-forward
models in what is known as Optimal Control Theory (OCT). In OCT, a copy of a
motor command predicted by an inverse model or controller is passed to a
forward model that in turn, predicts the sensory consequences of the execution
of the movement (D M Wolpert and Jordan,, 1995; Wolpert and Kawato,, 1998;
Kawato,, 1999). This led to multiple implementations using artificial agents
with different computational approaches (Demiris and Khadhouri,, 2006; Möller
and Schenck,, 2008; Escobar-Juárez et al.,, 2016; Schillaci et al., 2016b, ).
OCT presents a number of difficult issues to solve, such as the ill-posed
problem of learning an inverse model.
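Schematically (a toy sketch with assumed one-dimensional linear dynamics, not any of the cited implementations), the pairing looks as follows: the inverse model maps a desired state to a motor command, and the forward model consumes an efference copy of that command to predict its sensory consequence.

```python
# Toy sketch of OCT-style paired inverse-forward models.
def inverse_model(state, desired):
    return 0.5 * (desired - state)        # controller: command from the error

def forward_model(state, efference_copy):
    return state + efference_copy         # predicts the sensory consequence

command = inverse_model(0.0, 1.0)         # motor command
predicted = forward_model(0.0, command)   # prediction from the efference copy
```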
On the other hand, in predictive processing, optimal movements are understood
in terms of inference and beliefs, and not by the optimization of a value
function of states as being the causal explanation of movement (Friston,,
2011). Therefore, there are no desired consequences, because experience-
dependent learning generates prior expectations, which guide perceptual and
active inference (Friston et al.,, 2011). Contrary to OCT, in predictive
processing there are no rewards or cost functions to optimize behavior.
Optimal behavior minimizes variational free energy, and cost functions are
replaced by priors about sensory states and their transitions (Friston et al.,
2012b, ). Understanding movement as a matter of beliefs for generating
inferences removes the problem of learning an inverse model.
Therefore, predictive processing suggests that there is no need for an inverse
model and, thus, for any efference copy of the motor command as input to a
forward model. The mere existence of the efference copy of the motor command
is nowadays a controversial issue (Dogge et al.,, 2019; Pickering and Clark,,
2014). The core mechanism in predictive processing is an Integral Forward
Model (Pickering and Clark,, 2014), better known as a generative model, in
which motor commands are replaced by proprioceptive top-down predictions,
mapping prior beliefs to sensory consequences (Friston,, 2011; Clark,, 2015;
Friston et al., 2012b, ). Top-down predictions can be seen as control states
based on an extrinsic frame of reference (world-centered-limb position) that
are translated into intrinsic muscle-based coordinates which are then
fulfilled by the classical reflex arcs (Friston,, 2011). Minimizing
proprioceptive prediction error brings the action about, which is enslaved to
fulfill sensory predictions (Friston et al.,, 2011).
## 3 Implementations
In this section, we review implementation studies inspired by the models and
frameworks described in the previous section. Different review papers can be
found in the literature. This work focuses mostly on robotics research, which
has been developing quite rapidly in the last couple of years. We review also
a number of non-robotic studies, in particular those having important aspects
that have not received enough exploration in robotics. By highlighting them,
this work aims at encouraging new experimental research in embodied cognitive
robotics.
We are aware that there may be work which is not mentioned in this
article. Any omission is not intentional. Articles have been selected under
two criteria. First, the authors mention in their work any of the frameworks
described in the previous section. Second, although the authors do not
explicitly mention these frameworks, it is our understanding that these works
could well enter the discussion and bring interesting topics and questions to
the table. This also includes some non-robotic works. Deriving from the
descriptions in the previous section, the following items have been considered
as relevant to analyze the literature in cognitive robotics:
* •
(Bay) Bayesian/Probabilistic framework. Does the study adopt a Bayesian or
probabilistic formalization?
* •
(PW) Precision weights. Top down predictions and bottom-up prediction errors
are dynamically weighted according to their expected reliability.
* •
(FofI) Flow of information. Predictions flow top-down while the difference
between predictions and real sensory information – i.e., prediction error –
flows bottom-up in the model.
* •
(HP) Hierarchical processing. The model presents a hierarchical structure for
the processing of information.
* •
(IM) Inverse model. The work discusses the benefits or challenges of using an
inverse model, as it is the case in OCT.
* •
(Mod) Modalities. Which modalities are tackled in the proposed model.
* •
(BC) Beyond motor control and estimation of body states. Most of the reviewed
studies adopt predictive processing frameworks to control robot movements.
This attribute is defined to highlight those studies that go a step further
by addressing aspects of the framework that may help in understanding or
implementing higher-level cognitive capabilities.
The selected studies are summarized in Tables 1 and 2. In particular, Table 1
classifies each study according to the attributes mentioned above. Table 2
provides an overview of some implementation details of these works:
* •
Training: the generative model used in the study is either pre-coded or
trained. If applicable, this specifies what type of learning regime (i.e.,
on-line or off-line) has been employed;
* •
Data generation: if applicable, this specifies how the training data has been
generated;
* •
Agent: what type of artificial system has been used in the experiment;
* •
Generative model: the name, or acronym, of the generative model that has been
implemented in the study. Some studies may not have implemented any generative
model, but used instead the forward kinematics provided by the robot
manufacturer.
* •
Aim: what cognitive or motor task has been modelled.
Article | Bay | PW | FofI | HP | IM | Mod | BC | Aim
---|---|---|---|---|---|---|---|---
Robotic studies | | | | | | | |
Tani and Nolfi, (1999) | - | - | - | ✓ | - | V | - | Safe navigation
Ahmadi and Tani, (2019) | ✓ | - | ✓ | ✓ | - | PV | - | Movement imitation
Ahmadi and Tani, (2017) | - | - | ✓ | ✓ | - | PV | - | Movement imitation
Baltieri and Buckley, (2017) | ✓ | ✓ | ✓ | - | ✓ | L | - | Gradient following
Hwang et al., (2018) | - | - | ✓ | ✓ | - | PV | - | Gesture imitation
Idei et al., (2018) | ✓ | ✓ | ✓ | - | - | PV | ✓ | Simul. of autistic behav.
Lanillos and Cheng, (2018) | ✓ | - | ✓ | - | - | PV(T) | - | Body pose estimation
Lanillos et al., (2020) | ✓ | - | ✓ | - | - | PV | ✓ | Self-other distinction
Murata et al., (2015) | ✓ | - | ✓ | ✓ | - | PV | - | Human-robot interact.
Ohata and Tani, (2020) | ✓ | - | ✓ | ✓ | ✓ | PV | ✓ | Multimodal imitation
Oliver et al., (2019) | ✓ | - | ✓ | - | - | PV | - | Visuo-motor coordin.
Park et al., (2018) | - | - | ✓ | ✓ | - | PV | - | Arm control
Pezzato et al., (2020) | ✓ | - | ✓ | - | ✓ | P | - | Arm control
Pio-Lopez et al., (2016) | ✓ | ✓ | ✓ | ✓ | ✓ | PV | - | Control and body estim.
Sancaktar and Lanillos, (2019) | ✓ | - | ✓ | - | - | PV | - | Control and body estim.
Schillaci et al., 2020a | - | - | - | - | ✓ | PV | ✓ | Goal regulation,emotion
Annabi et al., (2020) | ✓ | - | - | - | ✓ | PV | - | Simul. arm control
Zhong et al., (2018) | - | - | ✓ | ✓ | - | PV | - | Movement generation
Non robotic studies | | | | | | | |
Allen et al., (2019) | ✓ | ✓ | ✓ | - | - | IV | ✓ | Emotional inference
Baltieri and Buckley, (2019) | ✓ | - | ✓ | - | ✓ | P | - | 1 DoF Control
Friston et al., (2015) | ✓ | ✓ | ✓ | ✓ | - | RO | ✓ | Explorat. vs exploitat.
Huang and Rao, (2011) | ✓ | - | ✓ | ✓ | - | V | - | Visual perception
Oliva et al., (2019) | ✓ | ✓ | - | - | - | V | ✓ | PW development
Philippsen and Nagai, (2019) | ✓ | ✓ | - | - | - | V | ✓ | PW & represent.drawing
Tschantz et al., (2020) | ✓ | - | ✓ | - | - | RO | ✓ | Epistemic behaviours
Table 1: Legend. Bay: Bayesian/probabilistic framework; PW: implements precision-weighting; FofI: tackles bottom-up/top-down flows of information; HP: implements hierarchical processing; IM: discusses the need for inverse models; Mod: modalities addressed in the experiment (P: proprioception; V: visual; T: tactile; I: interoceptive; L: luminance as chemo-trail; RO: simulated rewards and observation); BC: the study goes beyond motor control and estimation of body states.

Article | Train | Data generation | Agent | Generative model
---|---|---|---|---
Robotic studies | | | |
Tani and Nolfi, (1999) | On-line | Direct learning | Mobile.ag. | RNN
Ahmadi and Tani, (2019) | Off-line | Direct teaching | Humanoid | PV-RNN
Ahmadi and Tani, (2017) | Off-line | Direct teaching | Humanoid | MTRNN
Baltieri and Buckley, (2017) | On-line | Exploration | Mobile.ag. | Agent dynamics
Hwang et al., (2018) | Off-line | Direct teaching | Simul.hum. | VMDNN
Idei et al., (2018) | Off-line | Recorded sequences | Humanoid | S-CTRNN with PB
Lanillos and Cheng, (2018) | Off-line | Random movements | Humanoid | Gaussian Process Regress.
Lanillos et al., (2020) | Re-train | left-right arm mov. | Humanoid | Mixt. Dens. Net.,DL class.
Murata et al., (2015) | Off-line | Motionese | Humanoid | S-CTRNN
Ohata and Tani, (2020) | Off-line | Human demonstrations | Humanoid | Multiple PV-RNN
Oliver et al., (2019) | None | N.A. | Humanoid | Forward kinematics
Park et al., (2018) | Dev.learn. | Sets of actions | Humanoid | RNNPB
Pezzato et al., (2020) | None | N.A. | Industr.rob. | Set-points
Pio-Lopez et al., (2016) | None | N.A. | Humanoid | Forward kinematics
Sancaktar and Lanillos, (2019) | Off-line | Rand.expl.,direct teach. | Humanoid | Convolutional decoder
Schillaci et al., 2020a | On-line | Goal-directed expl. | Simul.robot | Conv.AE,SOM, DeepNN
Annabi et al., (2020) | Off-line | Exploration | Simul.arm | SOM, RNN
Zhong et al., (2018) | Off-line | Recorded sequences | Simul.robot | Convolutional LSTM
Non robotic studies | | | |
Allen et al., (2019) | None | N.A. | Minim.agent | Markov Decision Process
Baltieri and Buckley, (2019) | On-line | N.A. | 1 DoF agent | System dynamics
Friston et al., (2015) | None | N.A. | Simul.rat | POMDP
Huang and Rao, (2011) | Off-line | Image dataset | - | Hierarchical neural model
Oliva et al., (2019) | Off-line | Pre-coded trajectories | Sim.drawing | S-CTRNN
Philippsen and Nagai, (2019) | Off-line | Human demonstrations | Sim.drawing | S-CTRNN
Tschantz et al., (2020) | On-line | RL exploration | OpenAIsim | Gaussian,Laplace approx.
Table 2: Legend. Training: which type of training – if applicable – has been
performed on the generative model; Data Generation: how the training data has
been generated; Agent: which type of artificial system has been used;
Generative model: the name of the machine learning tool – if applicable – that
has been adopted for training the generative model; Aim: which cognitive or
motor task has been modelled. N.A.: not applicable.
### 3.1 Robotic implementations
The analysis of the literature starts with one of the first robotic
implementations of predictive processing. Tani and Nolfi, (1999) present a
two-layer hierarchical architecture that self-organizes expert modules. Each
expert module is a Recurrent Neural Network (RNN). The bottom layer of RNNs is
trained and responds to different types of sensory and motor inputs. The upper
set of experts serves as a gating mechanism for the lower level RNNs. The
computational model has been deployed onto a simulated mobile robot for a
navigation task. The architecture is trained in an on-line fashion. After a
short period of time, the gating experts specialize in navigating through
corridors, right and left turns and T-junctions. The free parameters of the
architecture are trained on-line using the back-propagation through time
algorithm (Rumelhart et al.,, 1986). However, as the authors point out, a
limitation of the architecture is that it only uses the bottom-up flow of
information, without integrating top-down predictions to modulate the
activation of lower levels. Tani, (2019) provides a thorough review of related
neurorobotics experiments, many of which were carried out in the authors’
laboratory. A very interesting implementation is described in Hwang et al.,
(2018), which the authors refer to as a predictive coding model. The adopted
network is a multi-layer hierarchical architecture encoding visual and
proprioceptive information. Although the work is far from the formulations
laid out in the free-energy principle (Friston,, 2009), the VMDNN (Predictive
Visuo-Motor Deep Dynamic Neural Network) performs very similar operations.
These include the generation of actions following a prediction error
minimization scheme and the usage of the same model structure for action
generation and recognition. Authors claim that “the proposed model provides an
online prediction error minimization mechanism by which the intention behind
the observed visuo-proprioceptive patterns can be inferred by updating the
neurons’ internal states in the direction of minimizing the prediction error”
(Hwang et al.,, 2018, pp. 3). It is worth noting that such an update does not
refer to model weights but only to the state of the neurons. The training of
the model is performed in a supervised fashion. The error being minimized is
the difference between a signal generated through kinesthetic teaching (i.e.,
where a human experimenter manually directs the movements of the robot limb)
and the model predictions. A very interesting aspect of the network is the
lateral connections between modalities at each layer of the hierarchy.
Another relevant work from the same group (Ahmadi and Tani,, 2019) stands out
for its formulation of active inference and a training strategy based on
variational Bayes Recurrent Neural Networks.
Finally, Ahmadi and Tani, (2017) propose a multiple timescale recurrent neural
network (MTRNN) which consists of multiple levels of sub-networks with
specific temporal constraints on each layer. The model processes data from
three different modalities and is capable of generating long-term predictions
in both open-loop and closed-loop fashions. During closed-loop output
generation, internal states of the network can be inferred through error
regression. The network is trained in an open loop manner, modifying free
parameters using the error between desired states and real activation values.
A common characteristic of the implementations reviewed so far is that
learning and testing are decoupled. During the testing phase, prediction
errors flow bottom-up and the network’s “internal state is modified in the
direction of minimizing prediction error via error regression” (Ahmadi and
Tani,, 2017, pp. 4). This implies that the network’s weights are not modified
after training. In most of their works, Tani and colleagues use mathematical
formulations based on connectionist networks and different from those proposed
by Friston, (2009); nonetheless, the work is conceptually very related to
predictive coding and active inference. In more recent works (e.g. (Matsumoto
and Tani,, 2020; Jung et al.,, 2019)), the authors explicitly use variational
inference. An illustrative architecture that comprises most of the
characteristics of the networks used by these authors can be seen in Figure 1
in Hwang et al., (2018).
A similar approach is presented by Murata et al., (2015), who propose an RNN-
based model named stochastic continuous-time RNN (S-CTRNN). The framework
integrates probabilistic Bayesian schemes in a recurrent neural network.
Network training is performed off-line using temporal sequences under two
learning conditions, i.e., with and without presenting actions that reveal
distinctive characteristics amplifying or exaggerating meaning and structure
within bodily motions (also named motionese (Brand et al.,, 2002)). Training
data is obtained through kinesthetic teaching on the robot directed by an
experimenter. The loss function of the optimization process considers the sum
of log-uncertainty and precision-weighted prediction error. This is formally
equivalent to free energy as proposed in active inference.
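Such a loss is formally a Gaussian negative log-likelihood; a minimal sketch of a sum of log-uncertainty and precision-weighted prediction error (our illustration, with assumed names, not the authors' code):

```python
import numpy as np

# Minimal sketch of a precision-weighted loss: the network predicts both a
# mean y_pred and a log-variance log_var per output; the loss sums the
# log-uncertainty and the precision-weighted squared prediction error.
def precision_weighted_loss(y, y_pred, log_var):
    precision = np.exp(-log_var)                 # inverse variance
    return 0.5 * np.sum(log_var + precision * (y - y_pred) ** 2)
```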
In trying to explain the underlying mechanisms causing different types of
behavioral rigidity of the autism spectrum, Idei et al., (2018) adopt a
S-CTRNN with parametric bias (PB) as the computational model for simulating
aberrant sensory precision in a humanoid robot. In this study, the S-CTRNN
learns to estimate sensory variance (the inverse of precision) and to adapt
to different
environments using prediction error minimization schemes. Learning is
performed in an off-line fashion using pre-recorded perceptual sequences. “The
objective of the learning is to find the optimal values of the parameters
(synaptic weights, biases, and internal states of PB units) minimizing
negative log-likelihood, or precision weighted prediction error”. Once
trained, the network is capable of reproducing target visuo-proprioceptive
sequences. In the test phase following the learning one, only the internal
states of the PB units are updated in an online fashion, while keeping the
other parameters fixed. The study simulates increased and decreased sensory
precision by altering estimated sensory variance (inverse of their precision).
This is performed by modulating a constant in the activation function of the
variance units of the trained model. Interestingly, the authors report
abnormal behaviors in the robot, such as freezing and inappropriate repetitive
behaviors, correlated to specific modulation of the sensory variance. In
particular, increased sensory variance reduces the precision of prediction
error, thus freezing the PB states of the network and, consequently, the robot
behavior. Decreasing sensory variance, instead, leads to unlearned repetitive
behavior, likely due to the fixation of the PB states on sub-optimal local
solutions during prediction error minimization.
Ohata and Tani, (2020) extend the Predictive coding-inspired Variational
Recurrent Neural Network (PV-RNN) presented by Ahmadi and Tani, (2019) in a
multimodal imitative interaction experiment with a humanoid robot. Modalities
(proprioception and vision) – each encoded with a multi-layered PV-RNN – are
connected through an associative PV-RNN module. The associative module
generates the top-down prior, which is then fed to both the proprioception and
vision modules. Each sensory module also generates top-down priors conditioned
by the other flows. Authors show how meta-priors assigned to the
proprioception and vision modules impact the learning process and the
performance of the error regression. Modulating the Kullback-Leibler
divergence (KLD) term in the error minimization scheme leads to a better
regulation of multimodal perception, which would be otherwise biased towards a
single modality. Stronger regulation of the KLD term also leads to higher
adaptivity in a human-robot imitation experiment.
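A rough sketch of how such a meta-prior can enter the objective (our illustration, assuming a Gaussian approximate posterior against a unit Gaussian prior, not the PV-RNN equations): a weight `w` scales the KLD term against the reconstruction term.

```python
import numpy as np

# Hypothetical meta-prior weighted objective: w scales the KL divergence
# KL[N(mu, exp(log_var)) || N(0, 1)] against the reconstruction error.
def weighted_objective(x, x_pred, mu, log_var, w=0.1):
    recon = 0.5 * np.sum((x - x_pred) ** 2)
    kld = 0.5 * np.sum(np.exp(log_var) + mu ** 2 - 1.0 - log_var)
    return recon + w * kld
```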
Park et al., (2012) propose an architecture based on self-organizing maps and
transition matrices for studying three different capabilities and phenomena,
i.e., performing trajectories, object permanence and imitation. Interestingly,
the architecture features a hierarchical self-organized representation of
state spaces. However, no bidirectional (top-down/bottom-up) flow of
information as in the previous studies is implemented. Moreover, the models
are in part pre-coded. In a more recent study, Park et al., (2018) adopt a
recurrent neural network with parametric bias (RNNPB) with recurrent feedback
from the output layer to the input layer. As in (Tani,, 2019), training and
testing are decoupled and the optimization is based on the back-propagation
through time algorithm. The optimization of the network parameters uses the
prediction error between a generated motor action and a reference action.
Remarkably, this work analyses the developmental dynamics of the parameter
space in terms of prediction error. Experiments are carried out on a simulated
two degrees-of-freedom robot arm and on a Nao humanoid robot, where goal-
directed actions are generated using the RNNPB.
An interesting series of studies has been produced by Lanillos and colleagues.
Lanillos and Cheng, (2018) present an architecture that combines generative
models and a probabilistic framework inspired by some of the principles of
predictive processing. The architecture is employed to estimate body
configurations of a humanoid robot, using three modalities (proprioceptive,
vision and touch). In the literature, the way the brain integrates multimodal
streams in similar error minimization schemes is still under debate.
Some authors suggest that the integration of different streams of unimodal
sensory surprise occurs in hierarchically higher multimodal areas (Limanowski
and Blankenburg,, 2013; Apps and Tsakiris,, 2014; Clark,, 2013; Pezzulo et
al.,, 2015), and therefore multimodal predictions and prediction errors would
be generated (Friston,, 2012). Lanillos and Cheng, (2018) apply an additive
formulation of unimodal prediction errors: (i) prior error, i.e. the “…error
between the most plausible value of the body configuration and its prior
belief”; (ii) proprioceptive error, i.e. the distance between joint angle
readings and joint angle samples generated by a Normal distribution; (iii)
visual error, i.e. the distance between observed end-effector image
coordinates and those predicted by a visual generative model.
The proposed minimization scheme adjusts the prior on body configuration by
summing up the additive multimodal error, while the system is exposed to
multimodal observations. As in Tani’s work, training and testing are
decoupled. The generative models are pre-trained using Gaussian Process
Regression. In particular, a visual forward model maps proprioceptive data
(position of three joints) to visual data (image coordinates of the end
effector), whereas a proprioceptive model generates joint angles from a Normal
distribution representing the joint states. Training data is recorded offline
from a humanoid robot executing random trajectories. Another generative model
is created for the tactile modality as a function of the visual generative
model. This model is used in a second experiment to translate the end-effector
positions to the spatial locations on the robot arm touched by an
experimenter, in order to correct visual estimations.
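A rough sketch of such an additive scheme (our illustration under assumed models, not the published code): the belief `mu` over the body configuration descends the sum of prior, proprioceptive and visual errors, with the visual error mapped back through the Jacobian of an assumed visual generative model `g`.

```python
import numpy as np

# Hypothetical additive multimodal update: mu is the believed body
# configuration; q a joint-angle reading; v an observed visual feature.
# g and its Jacobian J are assumed to come from a trained generative model.
def update_belief(mu, mu_prior, q, v, g, J, lr=0.1,
                  var_prior=1.0, var_q=1.0, var_v=1.0):
    e_prior = (mu - mu_prior) / var_prior       # prior error
    e_prop = (mu - q) / var_q                   # proprioceptive error
    e_vis = J(mu).T @ ((g(mu) - v) / var_v)     # visual error, mapped back
    return mu - lr * (e_prior + e_prop + e_vis)
```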
A follow-up work (Oliver et al.,, 2019) applies an active inference model for
visuomotor coordination in the humanoid robot iCub. The framework controls two
sub-systems of the robot body, i.e., the head and one arm. An attractor model
drives actions towards goals. Goals are specified in a visual domain – encoded
as linear velocity vectors towards a goal, whose 3D position is estimated
using stereo vision and a color marker – and transformed using a Moore-Penrose
pseudoinverse Jacobian matrix into linear velocities in the 4D joint space of
the robot. Similarly, visual goals are transformed into joint velocity goals
for the head sub-system. Authors assume normally distributed noise in the
sensory inputs. Sensor variances and action gains are pre-tuned and fixed
during the experiments. Although no generative models are trained in this
experiment (iCub’s forward kinematics functions are used), authors show that
minimizing Laplace-encoded free energy through gradient descent leads to
reaching behaviours and visuo-motor coordination. Similarly, Pezzato et al.,
(2020) present an active inference framework using a pre-coded controller and
a generative function. The study aims at controlling the movements of an
industrial robotic platform using active inference and at comparing its
adaptivity and robustness to another state-of-the-art controller for robotic
manipulators, namely the model reference adaptive controller (MRAC).
Lanillos et al., (2020) extend the active inference implementation presented
in Oliver et al., (2019). In this study, the visual generative model is pre-
trained using a probabilistic neural network (Mixture Density Network, MDN).
Inverse mapping is performed through the backward pass of the MDN of the most
plausible Gaussian kernel. The system re-trains the network from scratch
whenever the sensory inputs are too far from its predictions. Differently from
(Oliver et al.,, 2019), visual inputs consist of movements estimated through
an optical flow algorithm. The generative model thus maps joint angles to the
2D centroid of a moving blob detected from the camera. A deep learning
classifier is then trained to label joint velocities and optical flow inputs
as self-generated or not.
Sancaktar and Lanillos, (2019) apply a similar approach on the humanoid robot
Nao. The minimization scheme uses a pre-trained generative model for the
visual input, i.e., a convolutional decoder-like neural network. Training data
are collected through a combination of random babbling and kinesthetic
teaching. The generative model maps joint angles to visual inputs, as in
Lang et al., (2018). When computing the likelihood for the gradient descent,
the density defining the visual input is created as a collection of
independent Gaussian distributions centered at each pixel. In the minimization
scheme, the visual prediction error multiplied by the inverse of the variance
is calculated by applying a forward pass and a backward pass to the
convolutional decoder. The approach is interesting, but studies have pointed
at questionable aspects of the biological plausibility of back-propagating
errors. This refers, in particular, to the lack of local error representations
in ANNs and to the symmetry between forward and backward weights, which is
not always present in cortical networks (Whittington and Bogacz,, 2019). As in
the previous series of experiments, active inference is used to control the
robot arm movement in a reaching experiment.
Pio-Lopez et al., (2016) present a proof-of-concept implementation of a
control scheme based on active inference using the 7 degrees-of-freedom arm of
a simulated PR2 humanoid robot. The control scheme is adopted to perform
trajectories towards predefined goals. Authors highlight that such a scheme
eliminates the need for an inverse model for motor control as “action realizes
the (sensory) consequences of (prior) causes” (Pio-Lopez et al.,, 2016, pp 9).
A generative model maps causes to actions, where causes are seen as “forces
that have some desired fixed point or orbit” (Pio-Lopez et al.,, 2016, pp 9),
as sensed by proprioception. Proprioceptive predictions are thus realized in
an open-loop fashion, by means of reflex arcs.
This framework – which employs a hierarchical generative model – minimizes the
KL-divergence between the distribution of the agent’s priors and that of the
true posterior distribution, which represents the updated belief given the
evidence. Authors point out that more complex behaviours require the design of
equations of motion. The question of the scalability of such an approach for
cognitive robotics remains open.
Although not adopting an active inference approach, Schillaci et al., 2020a
present a study where intrinsically motivated behaviors are driven by error
minimization schemes in a simulated robot. The proposed architecture generates
exploratory behaviors towards self-generated goals, leverages computational
resources and regulates goal selection and the balance between exploitation
and exploration through a multi-level monitoring of prediction error dynamics.
The work is framed within the study of the underlying mechanisms of motivation
and the emergence of emotions that drive behaviors and goal selection to
promote learning. Scholars such as Van de Cruys, (2017), Kiverstein et al.,
(2019) and Hsee and Abelson, (1991) argue that what motivates engagement in a
behavior is not just the final outcome, but the satisfaction that emerges from
the pattern and the velocity of an outcome over time.1 “If one […] assumes
that people not only passively experience satisfaction, but actively seek
satisfaction, then one can infer an interesting corollary from the velocity
relation: People engage in a behavior not just to seek its actual outcome, but
to seek a positive velocity of outcomes that the behavior creates over time”
(Hsee and Abelson,, 1991, pp. 346).

1 Here we intend the desired outcome of an event or of an activity. As for the
velocity of an outcome, we intend the velocity, or the rate, at which such a
desired goal is achieved. In the context of learning, a goal could be merely
the reduction of prediction error; the velocity of the outcome would then
correspond to the rate of reduction of the prediction error, i.e., how fast or
slowly prediction error is minimised.
The system proposed by Schillaci et al., 2020a monitors prediction error
dynamics over time and at different levels, driving behaviours towards those
goals that are associated with specific patterns of prediction error dynamics.
The system also modulates exploration noise and leverages computational
resources according to the dynamics of the overall learning performance.
Learning is performed in an online fashion, where image features – compressed
using a pre-trained convolutional autoencoder – are fed into a self-organizing
neural network for unsupervised goal generation and into an inverse-forward
models pair for movement generation and prediction error monitoring. The
models are updated in an online fashion and an episodic memory system is
adopted to reduce catastrophic forgetting issues. Actions are generated
towards goals associated with the steepest descent in low-level prediction
error dynamics.
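A minimal sketch of this kind of prediction-error-dynamics monitoring (our illustration under assumed data structures, not the authors' implementation): the slope of a linear fit over recent errors acts as a learning-progress signal, and the goal with the steepest descent is selected.

```python
import numpy as np

# Hypothetical sketch of monitoring prediction-error dynamics: the slope of
# a linear fit over recent errors acts as a learning-progress signal, and
# the goal with the steepest error descent is selected for the next action.
def error_slope(errors, window=20):
    recent = np.asarray(errors[-window:], dtype=float)
    t = np.arange(len(recent))
    return np.polyfit(t, recent, 1)[0]   # negative = error decreasing

def select_goal(error_histories):
    # error_histories: dict mapping goal id -> list of past prediction errors
    return min(error_histories, key=lambda g: error_slope(error_histories[g]))
```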
A similar approach for the self-generation of goals has been employed by
Annabi et al., (2020) in a simulated experiment where a two degrees-of-freedom
robotic arm has to learn how to write digits. The proposed architecture learns
sequences of motor primitives based on a free energy minimization approach.
The system combines recurrent neural networks for trajectories encoding with a
self-organising system for goal estimation, which is trained on data generated
through random behaviours. In the experiments, the system incrementally learns
motor primitives and policies, using a predefined generative forward model.
Free energy minimization is used for action selection.
Zhong et al., (2018) present a hierarchical model consisting of a series of
repeated stacked modules to implement active inference in simulated agents.
Each layer of the network contains different modules, including generative
units implemented as convolutional recurrent networks (Long Short-Term Memory
networks, LSTM). In the hierarchical architecture, predictions and prediction
errors flow in top-down and bottom-up directions, respectively. Generative
units are trained in an off-line learning session during two simulated
experiments.
It is worth noting that all the works reviewed in this section make use of
different forms of prediction error minimization schemes to obtain working
models and controllers.
### 3.2 Non-robotic implementations
A large number of non-robotic studies on predictive processing have been
produced in recent years. This section opens only a small window on this
literature. Nevertheless, promising directions for cognitive robotics research
on predictive processing can be characterised from the few samples reported
here.
The issue of scalability highlighted in the active inference study of Pio-
Lopez et al., (2016) is also apparent in the work of Baltieri and Buckley,
(2019), where the authors design an active inference based linear quadratic
Gaussian controller to manipulate a one degree-of-freedom system. The study
aims at showing that such a controller can achieve goal positions without the
need for an efference copy, as used in optimal control theory (OCT).
Similar basic proofs-of-concept are presented by Tschantz et al., (2020) and
Baltieri and Buckley, (2017), where active inference is used to model
bacterial chemo-taxis in a minimal simulated agent. Tschantz et al., (2020)
focus on an action-oriented model that employs goal-directed (instrumental) and
information-seeking (epistemic) behaviors when learning a generative model.
Different error minimization strategies are tested, generating epistemic,
instrumental, random behaviours or expected free energy driven ones. Authors
show that active inference balances exploration and exploitation and suggest
that “[they] are both complementary perspectives of the same objective
function – the minimization of expected free energy.” (Tschantz et al.,, 2020,
pp.19). The model is not hierarchical, but it fully exploits the proposals of
active inference. In another interesting proof-of-concept, Baltieri and
Buckley, (2017) present a Braitenberg-like vehicle where behaviors are
modulated according to predefined precision weights.
Friston et al., (2015) also address the exploration-exploitation dilemma.
Authors argue that, when adopting Bayes optimal behavior under the free energy
principle, epistemic, intrinsic value is maximized until there is no further
information gain, after which exploitation is assured through maximization of
extrinsic value, i.e., the utility of the result of an action. In fact,
epistemic actions can bring the agent far from a goal. Nonetheless, they can
be used to plan a path to a goal with greater confidence. Adopting the
formalism of partially observed Markov decision processes, authors present a
simulated experiment where an agent, i.e., a rat, navigates through a T-shaped
maze, to show the role of epistemic value in resolving uncertainty about goal-
directed behavior. Moreover, the authors discuss an aspect of the Bayesian
framework, that is, the role of the precision – i.e., the inverse of the
variance – of the posterior belief – which is estimated from the prior belief
and the likelihood of the evidence – about control states as a message
passing channel (in the generative model, a control state corresponds to the
hidden cause of an action: “This means the agent has to infer its behavior by
forming beliefs about control states, based upon the observed consequences of
its action” (Friston et al.,, 2015, p. 190)). Under this view, precision is
associated with dopaminergic responses, which has been interpreted in terms of
changes in expected value (e.g. reward prediction errors). In brief, changes
in precision would correlate with changes in exploratory or exploitative
behaviors.
In a follow-up study, Schwartenbeck et al., (2019) present an architecture
that has an implicit weighting of the exploitation and (goal-directed)
exploration tendencies, determined by the precision of prior beliefs and the
degree of uncertainty about the world. Two mechanisms for goal-directed
exploration are implemented in the rat-within-a-maze simulated setup: model
parameter exploration and hidden state exploration. In the former active
learning strategy, the agents forage for information about the correct
parameterization of the observation model, in the study represented as a
Markovian model. Here, parameters are the set of arrays encoding the Markovian
transition probabilities, i.e., the mapping between hidden states and
observations and the transition between hidden states. In the latter active
inference strategy, agents aim at gathering information about the current
(hidden) state of the world, for example the current context. In particular,
they sample the outcomes that are associated with a high uncertainty, only
when these are informative for the representation of the task structure.
Similarly to a standard intrinsic motivation approach, the authors appeal to
the need for random sampling when the uncertainty about model parameters and hidden
states (goal-exploration strategies) fails to inform behavior. The aim of this
work is to understand “the generative mechanisms that underlie information
gain and its trade-off with reward maximization” (Schwartenbeck et al.,, 2019,
pp. 45), but, as authors notice, how to scale up these mechanisms to more
complicated tasks is still an open challenge.
Precision weighting is also one of the main focuses of the predictive coding
study carried out by Oliva et al., (2019). Interestingly, the authors analyze
the variations of the precision of prior prediction of a recurrent (S-CTRNN)
generative model over a developmental process. The model learns to estimate
stochastic time series (two-dimensional trajectory drawings), thus providing
an estimate of the variance of the input data. The framework “shares crucial
properties with the developmental process of humans in that it naturally
switches from a strong reliance on sensory input at an early learning stage to
a proper integration of sensory input and own predictions at later learning
stages” (Oliva et al.,, 2019, pp. 254). This is correlated to a reduction of
the prediction error and the estimated (prior) variance over time during
learning. Some formulations of the problem in this work are, however,
problematic: in particular, in (Oliva et al.,, 2019) the posterior is
computed naively by multiplying the likelihood and the prior using the basic
Bayesian formula, and learning is performed only to maximize the
likelihood. In a follow-up work (Philippsen and Nagai,, 2019), the framework
is applied to simulate the generation of representational drawings – i.e.,
drawings that represent objects – in infants and chimpanzees. The authors observe
that stronger reliance on the prior (hyper-prior) enables the network to
perform representational drawings as those produced by children, whereas a
weak reliance on the prior produces highly accurate lines but fails to produce
missing parts of the representational drawings, as observed in chimpanzees.
Results suggest that chimpanzees’ and humans’ “differences in representational
drawing behavior might be explainable by the degree to which they take prior
information into account” (Philippsen and Nagai,, 2019, pp. 176).
Allen et al., (2019) study active inference in a multimodal domain, simulating
interactions between interoceptive cardiac cycle and exteroceptive (visual)
perception. The work hypothesizes that effects of cardiac timing on perception
could arise as a function of periodic sensory attenuation. This study does not
involve any robotic implementation, nor any learning or control task. However,
related implementations are largely missing from the literature; we therefore
believe it is worth mentioning in this review.
## 4 Discussion
This work has reviewed a series of robotics and non-robotics studies that have
adopted the paradigm of predictive processing under different forms. Tables 1
and 2 provide a general overview of the main aspects as well as the
differences of these studies.
It certainly stands out to what extent robotics research and non-robotics
models have addressed tasks that go beyond perception and motor control,
which have traditionally been the focus of predictive processing studies.
Limited cognitive robotics research has addressed the scaling up of
the predictive processing paradigm towards higher cognitive capabilities.
Computational studies on minimal simulated systems have suggested that
specific aspects, such as precision weighting, may bridge this gap.
Embodied robotic systems seem to be the most appropriate experimental
platforms not only for studying cognitive development within the predictive
processing framework, but also for extending this framework to a broader range
of modalities and behavioral possibilities. In fact, another aspect of the
robotics researches reviewed in this paper worth highlighting, is that almost
the totality of them333Lanillos and Cheng, (2018) address also the tactile
modality in their study, but do not fully integrate it in the error
minimization scheme. address only proprioception and a single exteroceptive
modality, i.e., vision. Little attention in the robotics community has been
paid to how multiple exteroceptive modalities – for example, vision, haptic,
auditory, etc. –, as well as interoceptive ones (Seth and Tsakiris,, 2018),
can be integrated in prediction error minimization schemes. Studies such as
those from Tschantz et al., (2020), Friston et al., (2015), Schwartenbeck et
al., (2019) and Schillaci et al., 2020a have discussed epistemic and
emotional value, homeostatic drives and intrinsic motivation that regulate
behaviors. Interesting research directions for robotics should include
extending this to multimodal self-generated goals and to combinations of fixed
homeostatic goals and dynamic ones.
Another important point concerns precision weighting, as in predictive
processing this is assigned a prominent role in behavior and goal regulation,
as well as in perceptual optimization processes. Further cognitive robotics
studies should explore this path. Most of the non-robotic implementations adopt
a Bayesian or probabilistic formalization of error minimization schemes. This
allows an elegant formulation of the precision in weighting schemes, which
consists of the inverse of the variance of the prior and posterior
distributions. However, alternative strategies are available for implementing
precision weighting-like processes in non-probabilistic models, including the
modulation of neuronal activation or of synaptic weights in artificial neural
networks, modulation of firing rates in spiking neural networks, dopaminergic
modulation, and the like. There is a wide literature on sensor fusion
techniques in the machine learning community which focuses on very related
challenges, such as the learning and modulation of the relevance of single
sensors in multimodal and predictive settings (Fayyad et al.,, 2020).
A common denominator in all the reviewed implementations is the use of
predictions for guiding behavior. However, the implementations adopt different
machine learning tools. Works that strictly follow the active inference
principles use Bayesian inference as their main tool. It is still an open
question how all other approaches should be considered in the wider
predictive processing framework. So far, most robotics implementations make
use of non-variational deep networks as their main tool. However, the bias
towards the Bayesian framework in non-robotics implementations might hinder
the search for other approaches that could have advantages, notably in terms
of computational cost and of the complexity of designing generative models
that produce coherent and scaled-up behaviors.
Predictive processing emphasizes the prediction-based learning of a generative
model, which predicts incoming sensory signals (Clark,, 2015). In optimal
control theory, a high computational complexity is required for learning to
predict sensory consequences by means of the efference copy and the inverse
model. In predictive processing accounts, this complexity is mapped to the
learning of a generative model during hierarchical perceptual and active
inference (e.g. Friston, (2011); Friston et al., 2012b ). In this regard, it
is still unclear how generative models should be learned, due to the
complexity that implies modeling the richness of the entire environment
(Tschantz et al.,, 2020). Action-oriented models are a common approach to
solve this issue by learning and generating inferences that allow adaptive
behavior, even when the world is not modelled in a precise manner (e.g.
Tschantz et al., (2020); Baltieri and Buckley, (2017); Pezzulo et al.,
(2017)). It is worth highlighting that, despite the relevance of learning for
belief updating, most non-robotic computational work focuses on inference
rather than on learning; indeed, learning is almost entirely absent there.
The few non-robotic models that focus on learning of generative models are
based on the expected free energy formulations and use very simplified agents
and behaviors (Tschantz et al.,, 2020; Baltieri and Buckley,, 2017;
Ueltzhöffer,, 2018; Millidge,, 2020). By contrast, some cognitive robotics
implementations do shift the emphasis towards the learning of
generative models (e.g. Lanillos et al., (2020); Ahmadi and Tani, (2017); Idei
et al., (2018); Schillaci et al., 2020a ; Schillaci et al., 2020b ). Yet,
learning and testing are decoupled in many of these studies and, in
particular, in those adopting probabilistic methods. This is likely due to the
challenges in implementing online learning of probabilistic models, especially
in the context of high-dimensional sensory and motor spaces.
It is worth pointing out that, in cognitive robotics, a variety of learning
methods are used and only a few of these are equivalent to the free energy
principle formulations. Nonetheless, the agents and behaviors used are much
more complex. For cognitive robotics, it is very relevant to explore the reach
and possibilities of using generative models for perception, action, and
planning. More importantly, there is a special interest in the tools and
methods that can be used for the learning of these models, an area that has
received little attention in non-robotic models using predictive processing
principles.
Finally, limited attention has been paid to the temporal aspect of prediction
error dynamics (Kiverstein et al.,, 2019; Tschantz et al.,, 2020). Prediction
error patterns may be associated with emotional experience (Joffily and
Coricelli,, 2013). In artificial systems, they are essential components for
implementing intrinsically motivated exploration behaviors and artificial
curiosity (Oudeyer et al.,, 2007; Schillaci et al., 2020b, ; Baldassarre and
Mirolli,, 2013; Graziano et al.,, 2011). Recent studies suggest that error
dynamics may influence the regulation of computational resources (Schillaci et
al., 2020a, ) and the emotional valence of actions (Joffily and Coricelli,,
2013). We believe that prediction error dynamics represent a promising tool in
the exploration of more complex behaviours and tasks in cognitive robotics
under the predictive processing paradigm.
### Acknowledgments
Guido Schillaci has received funding from the European Union’s Horizon 2020
research and innovation programme under the Marie Sklodowska-Curie grant
agreement No. 838861 (Predictive Robots). Predictive Robots is an associated
project of the Deutsche Forschungsgemeinschaft (DFG, German Research
Foundation) Priority Programme “The Active Self”.
Verena Hafner has received funding from the Deutsche Forschungsgemeinschaft
(DFG, German Research Foundation) Priority Programme “The Active Self” -
402790442 (Prerequisites for the Development of an Artificial Self).
Bruno Lara and Alejandra Ciria have received funding from the Alexander von
Humboldt Foundation from the project “Predictive Autonomous Behaviour Internal
Models and Predictive Self-regulation”.
The authors would like to thank the anonymous reviewer for his/her thorough
reading of our manuscript. His/her comments helped greatly to improve the
first version submitted.
## References
* Adams et al., (2013a) Adams, R. A., Shipp, S., and Friston, K. J. (2013a). Predictions not commands: active inference in the motor system. Brain Structure and Function, 218(3):611–643.
* Adams et al., (2013b) Adams, R. A., Stephan, K. E., Brown, H. R., Frith, C. D., and Friston, K. J. (2013b). The computational anatomy of psychosis. Frontiers in psychiatry, 4:47.
* Ahmadi and Tani, (2017) Ahmadi, A. and Tani, J. (2017). How can a recurrent neurodynamic predictive coding model cope with fluctuation in temporal patterns? robotic experiments on imitative interaction. Neural Networks, 92:3–16.
* Ahmadi and Tani, (2019) Ahmadi, A. and Tani, J. (2019). A novel predictive-coding-inspired variational rnn model for online prediction and recognition. Neural computation, 31(11):2025–2074.
* Allen et al., (2019) Allen, M., Levy, A., Parr, T., and Friston, K. J. (2019). In the body’s eye: The computational anatomy of interoceptive inference. BioRxiv, page 603928.
* Annabi et al., (2020) Annabi, L., Pitti, A., and Quoy, M. (2020). Autonomous learning and chaining of motor primitives using the free energy principle. arXiv preprint arXiv:2005.05151.
* Apps and Tsakiris, (2014) Apps, M. A. and Tsakiris, M. (2014). The free-energy self: a predictive coding account of self-recognition. Neuroscience & Biobehavioral Reviews, 41:85–97.
* Badcock et al., (2017) Badcock, P. B., Davey, C. G., Whittle, S., Allen, N. B., and Friston, K. J. (2017). The depressed brain: an evolutionary systems theory. Trends in Cognitive Sciences, 21(3):182–194.
* Baldassarre and Mirolli, (2013) Baldassarre, G. and Mirolli, M. (2013). Intrinsically motivated learning in natural and artificial systems. Springer.
* Baltieri and Buckley, (2017) Baltieri, M. and Buckley, C. L. (2017). An active inference implementation of phototaxis. In Artificial Life Conference Proceedings 14, pages 36–43. MIT Press.
* Baltieri and Buckley, (2019) Baltieri, M. and Buckley, C. L. (2019). Active inference: Computational models of motor control without efference copy. researchgate.
* Brand et al., (2002) Brand, R. J., Baldwin, D. A., and Ashburn, L. A. (2002). Evidence for ‘motionese’: modifications in mothers’ infant-directed action. Developmental Science, 5(1):72–83.
* Brown et al., (2013) Brown, H., Adams, R. A., Parees, I., Edwards, M., and Friston, K. (2013). Active inference, sensory attenuation and illusions. Cognitive processing, 14(4):411–427.
* Buckley et al., (2017) Buckley, C. L., Kim, C. S., McGregor, S., and Seth, A. K. (2017). The free energy principle for action and perception: A mathematical review. Journal of Mathematical Psychology, 81:55–79.
* Clark, (2013) Clark, A. (2013). Whatever next? predictive brains, situated agents, and the future of cognitive science. Behavioral and brain sciences, 36(3):181–204.
* Clark, (2015) Clark, A. (2015). Embodied prediction. Open MIND. Frankfurt am Main: MIND Group.
* Clark, (2018) Clark, A. (2018). A nice surprise? predictive processing and the active pursuit of novelty. Phenomenology and the Cognitive Sciences, 17(3):521–534.
* Clark, (2020) Clark, A. (2020). Beyond desire? agency, choice, and the predictive mind. Australasian Journal of Philosophy, 98(1):1–15.
* Wolpert et al., (1995) Wolpert, D. M., Ghahramani, Z., and Jordan, M. I. (1995). An internal model for sensorimotor integration. Science, 269(5232):1880–1882.
* Demekas et al., (2020) Demekas, D., Parr, T., and Friston, K. J. (2020). An investigation of the free energy principle for emotion recognition. Frontiers in Computational Neuroscience, 14.
* Demiris and Khadhouri, (2006) Demiris, Y. and Khadhouri, B. (2006). Hierarchical attentive multiple models for execution and recognition of actions. Robotics and Autonomous Systems, 54(5):361 – 369. The Social Mechanisms of Robot Programming from Demonstration.
* Dogge et al., (2019) Dogge, M., Custers, R., and Aarts, H. (2019). Moving forward: On the limits of motor-based forward models. Trends in Cognitive Sciences, 23(9):743–753.
* Donnarumma et al., (2017) Donnarumma, F., Costantini, M., Ambrosini, E., Friston, K., and Pezzulo, G. (2017). Action perception as hypothesis testing. Cortex, 89:45–60.
* Escobar-Juárez et al., (2016) Escobar-Juárez, E., Schillaci, G., Hermosillo-Valadez, J., and Lara-Guzmán, B. (2016). A self-organized internal models architecture for coding sensory–motor schemes. Frontiers in Robotics and AI, 3:22.
* Fayyad et al., (2020) Fayyad, J., Jaradat, M. A., Gruyer, D., and Najjaran, H. (2020). Deep learning sensor fusion for autonomous vehicle perception and localization: A review. Sensors, 20(15):4220.
* Feldman and Friston, (2010) Feldman, H. and Friston, K. (2010). Attention, uncertainty, and free-energy. Frontiers in human neuroscience, 4:215.
* Friston, (2002) Friston, K. (2002). Functional integration and inference in the brain. Progress in neurobiology, 68(2):113–143.
* Friston, (2005) Friston, K. (2005). A theory of cortical responses. Philosophical transactions of the Royal Society B: Biological sciences, 360(1456):815–836.
* Friston, (2009) Friston, K. (2009). The free-energy principle: a rough guide to the brain? Trends in cognitive sciences, 13(7):293–301.
* Friston, (2010a) Friston, K. (2010a). The free-energy principle: a unified brain theory? Nature Reviews Neuroscience, 11(2):127–138.
* Friston, (2010b) Friston, K. (2010b). Is the free-energy principle neurocentric? Nature Reviews Neuroscience, 11(8):605.
* Friston, (2011) Friston, K. (2011). What is optimal about motor control? Neuron, 72(3):488–498.
* Friston, (2012) Friston, K. (2012). Prediction, perception and agency. International Journal of Psychophysiology, 83(2):248–252.
* Friston et al., (2012a) Friston, K., Adams, R., Perrinet, L., and Breakspear, M. (2012a). Perceptions as hypotheses: saccades as experiments. Frontiers in psychology, 3:151.
* Friston et al., (2006) Friston, K., Kilner, J., and Harrison, L. (2006). A free energy principle for the brain. Journal of Physiology-Paris, 100(1-3):70–87.
* Friston et al., (2011) Friston, K., Mattout, J., and Kilner, J. (2011). Action understanding and active inference. Biological cybernetics, 104(1-2):137–160.
* Friston et al., (2015) Friston, K., Rigoli, F., Ognibene, D., Mathys, C., Fitzgerald, T., and Pezzulo, G. (2015). Active inference and epistemic value. Cognitive neuroscience, 6(4):187–214.
* Friston et al., (2012b) Friston, K., Samothrakis, S., and Montague, R. (2012b). Active inference and agency: optimal control without cost functions. Biological cybernetics, 106(8-9):523–541.
* Friston, (2019) Friston, K. J. (2019). Waves of prediction. PLoS biology, 17(10).
* Friston et al., (2014) Friston, K. J., Stephan, K. E., Montague, R., and Dolan, R. J. (2014). Computational psychiatry: the brain as a phantastic organ. The Lancet Psychiatry, 1(2):148–158.
* Graziano et al., (2011) Graziano, V., Glasmachers, T., Schaul, T., Pape, L., Cuccu, G., Leitner, J., and Schmidhuber, J. (2011). Artificial curiosity for autonomous space exploration. Acta Futura, 4:41–51.
* Hohwy, (2013) Hohwy, J. (2013). The predictive mind. Oxford University Press.
* Hsee and Abelson, (1991) Hsee, C. K. and Abelson, R. P. (1991). Velocity relation: Satisfaction as a function of the first derivative of outcome over time. Journal of Personality and Social Psychology, 60(3):341.
* Huang and Rao, (2011) Huang, Y. and Rao, R. P. (2011). Predictive coding. Wiley Interdisciplinary Reviews: Cognitive Science, 2(5):580–593.
* Hwang et al., (2018) Hwang, J., Kim, J., Ahmadi, A., Choi, M., and Tani, J. (2018). Dealing with large-scale spatio-temporal patterns in imitative interaction between a robot and a human by using the predictive coding framework. IEEE Transactions on Systems, Man, and Cybernetics: Systems.
* Idei et al., (2018) Idei, H., Murata, S., Chen, Y., Yamashita, Y., Tani, J., and Ogata, T. (2018). A neurorobotics simulation of autistic behavior induced by unusual sensory precision. Computational Psychiatry, 2:164–182.
* Joffily and Coricelli, (2013) Joffily, M. and Coricelli, G. (2013). Emotional valence and the free-energy principle. PLoS Comput Biol, 9(6):e1003094.
* Jung et al., (2019) Jung, M., Matsumoto, T., and Tani, J. (2019). Goal-directed behavior under variational predictive coding: Dynamic organization of visual attention and working memory. arXiv preprint arXiv:1903.04932.
* Kaplan and Friston, (2018) Kaplan, R. and Friston, K. J. (2018). Planning and navigation as active inference. Biological cybernetics, 112(4):323–343.
* Kawato, (1999) Kawato, M. (1999). Internal models for motor control and trajectory planning. Current Opinion in Neurobiology, 9(6):718–727.
* Kiverstein et al., (2019) Kiverstein, J., Miller, M., and Rietveld, E. (2019). The feeling of grip: novelty, error dynamics, and the predictive brain. Synthese, 196(7):2847–2869.
* Knill and Pouget, (2004) Knill, D. C. and Pouget, A. (2004). The bayesian brain: the role of uncertainty in neural coding and computation. TRENDS in Neurosciences, 27(12):712–719.
* Kruglanski et al., (2020) Kruglanski, A. W., Jasko, K., and Friston, K. (2020). All thinking is ‘wishful’thinking. Trends in Cognitive Sciences.
* Lang et al., (2018) Lang, C., Schillaci, G., and Hafner, V. V. (2018). A deep convolutional neural network model for sense of agency and object permanence in robots. In 2018 Joint IEEE 8th International Conference on Development and Learning and Epigenetic Robotics (ICDL-EpiRob), pages 257–262. IEEE.
* Lanillos and Cheng, (2018) Lanillos, P. and Cheng, G. (2018). Adaptive robot body learning and estimation through predictive coding. In 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pages 4083–4090. IEEE.
* Lanillos et al., (2020) Lanillos, P., Cheng, G., et al. (2020). Robot self/other distinction: active inference meets neural networks learning in a mirror. arXiv preprint arXiv:2004.05473.
* Lara et al., (2018) Lara, B., Astorga, D., Mendoza-Bock, E., Pardo, M., Escobar, E., and Ciria, A. (2018). Embodied cognitive robotics and the learning of sensorimotor schemes. Adaptive Behavior, 26(5):225–238.
* Limanowski and Blankenburg, (2013) Limanowski, J. and Blankenburg, F. (2013). Minimal self-models and the free energy principle. Frontiers in human neuroscience, 7:547.
* Marr, (1982) Marr, D. (1982). Vision: A computational investigation into the human representation and processing of visual information. The MIT Press.
* Matsumoto and Tani, (2020) Matsumoto, T. and Tani, J. (2020). Goal-directed planning for habituated agents by active inference using a variational recurrent neural network. Entropy, 22(5):564.
* Millidge, (2020) Millidge, B. (2020). Deep active inference as variational policy gradients. Journal of Mathematical Psychology, 96:102348.
* Möller and Schenck, (2008) Möller, R. and Schenck, W. (2008). Bootstrapping cognition from behavior a computerized thought experiment. Cognitive Science, 32(3):504–542.
* Murata et al., (2015) Murata, S., Tomioka, S., Nakajo, R., Yamada, T., Arie, H., Ogata, T., and Sugano, S. (2015). Predictive learning with uncertainty estimation for modeling infants’ cognitive development with caregivers: A neurorobotics experiment. In 2015 Joint IEEE International Conference on Development and Learning and Epigenetic Robotics (ICDL-EpiRob), pages 302–307. IEEE.
* Ohata and Tani, (2020) Ohata, W. and Tani, J. (2020). Investigation of multimodal and agential interactions in human-robot imitation, based on frameworks of predictive coding and active inference. arXiv preprint arXiv:2002.01632.
* Oliva et al., (2019) Oliva, D., Philippsen, A., and Nagai, Y. (2019). How development in the bayesian brain facilitates learning. In 2019 Joint IEEE 9th International Conference on Development and Learning and Epigenetic Robotics (ICDL-EpiRob), pages 1–7. IEEE.
* Oliver et al., (2019) Oliver, G., Lanillos, P., and Cheng, G. (2019). Active inference body perception and action for humanoid robots. arXiv preprint arXiv:1906.03022.
* Oudeyer et al., (2007) Oudeyer, P.-Y., Kaplan, F., and Hafner, V. V. (2007). Intrinsic motivation systems for autonomous mental development. IEEE transactions on evolutionary computation, 11(2):265–286.
* Park et al., (2018) Park, J.-C., Kim, D.-S., and Nagai, Y. (2018). Learning for goal-directed actions using rnnpb: Developmental change of “what to imitate”. IEEE Transactions on Cognitive and Developmental Systems, 10(3):545–556.
* Park et al., (2012) Park, J.-C., Lim, J. H., Choi, H., and Kim, D.-S. (2012). Predictive coding strategies for developmental neurorobotics. Frontiers in psychology, 3:134.
* Parr and Friston, (2017) Parr, T. and Friston, K. J. (2017). Working memory, attention, and salience in active inference. Scientific reports, 7(1):1–21.
* Pezzato et al., (2020) Pezzato, C., Ferrari, R., and Corbato, C. H. (2020). A novel adaptive controller for robot manipulators based on active inference. IEEE Robotics and Automation Letters, 5(2):2973–2980.
* Pezzulo et al., (2017) Pezzulo, G., Donnarumma, F., Iodice, P., Maisto, D., and Stoianov, I. (2017). Model-based approaches to active perception and control. Entropy, 19(6):266.
* Pezzulo et al., (2015) Pezzulo, G., Rigoli, F., and Friston, K. (2015). Active inference, homeostatic regulation and adaptive behavioural control. Progress in neurobiology, 134:17–35.
* Philippsen and Nagai, (2019) Philippsen, A. and Nagai, Y. (2019). A predictive coding model of representational drawing in human children and chimpanzees. In 2019 Joint IEEE 9th International Conference on Development and Learning and Epigenetic Robotics (ICDL-EpiRob), pages 171–176. IEEE.
* Pickering and Clark, (2014) Pickering, M. J. and Clark, A. (2014). Getting ahead: forward models and their place in cognitive architecture. Trends in cognitive sciences, 18(9):451–456.
* Pio-Lopez et al., (2016) Pio-Lopez, L., Nizard, A., Friston, K., and Pezzulo, G. (2016). Active inference and robot control: a case study. Journal of The Royal Society Interface, 13(122):20160616.
* Ramstead et al., (2020) Ramstead, M. J., Kirchhoff, M. D., and Friston, K. J. (2020). A tale of two densities: Active inference is enactive inference. Adaptive Behavior, 28(4):225–239.
* Rao and Ballard, (1999) Rao, R. P. and Ballard, D. H. (1999). Predictive coding in the visual cortex: a functional interpretation of some extra-classical receptive-field effects. Nature neuroscience, 2(1):79.
* Rumelhart et al., (1986) Rumelhart, D. E., Hinton, G. E., and Williams, R. J. (1986). Learning representations by back-propagating errors. Nature, 323(6088):533–536.
* Sajid et al., (2020) Sajid, N., Parr, T., Hope, T. M., Price, C. J., and Friston, K. J. (2020). Degeneracy and redundancy in active inference. Cerebral Cortex.
* Sancaktar and Lanillos, (2019) Sancaktar, C. and Lanillos, P. (2019). End-to-end pixel-based deep active inference for body perception and action. arXiv preprint arXiv:2001.05847.
* Schillaci et al., (2020a) Schillaci, G., Ciria, A., and Lara, B. (2020a). Tracking emotions: Intrinsic motivation grounded on multi-level prediction error dynamics. Proceedings of the 10th Joint International Conference on Development and Learning and Epigenetic Robotics (IEEE ICDL-EpiRob 2020). arXiv preprint arXiv:2007.14632.
* Schillaci et al., (2016a) Schillaci, G., Hafner, V. V., and Lara, B. (2016a). Exploration behaviors, body representations, and simulation processes for the development of cognition in artificial agents. Frontiers in Robotics and AI, 3:39.
* Schillaci et al., (2020b) Schillaci, G., Pico Villalpando, A., Hafner, V. V., Hanappe, P., Colliaux, D., and Wintz, T. (2020b). Intrinsic motivation and episodic memories for robot exploration of high-dimensional sensory spaces. Adaptive Behavior, page 1059712320922916.
* Schillaci et al., (2016b) Schillaci, G., Ritter, C.-N., Hafner, V. V., and Lara, B. (2016b). Body representations for robot ego-noise modelling and prediction. Towards the development of a sense of agency in artificial agents. Artificial Life Conference Proceedings, (28):390–397.
* Schwartenbeck et al., (2019) Schwartenbeck, P., Passecker, J., Hauser, T. U., FitzGerald, T. H., Kronbichler, M., and Friston, K. J. (2019). Computational mechanisms of curiosity and goal-directed exploration. Elife, 8:e41703.
* Seth and Tsakiris, (2018) Seth, A. K. and Tsakiris, M. (2018). Being a beast machine: the somatic basis of selfhood. Trends in cognitive sciences, 22(11):969–981.
* Spratling, (2008) Spratling, M. W. (2008). Predictive coding as a model of biased competition in visual attention. Vision research, 48(12):1391–1408.
* Spratling, (2017) Spratling, M. W. (2017). A review of predictive coding algorithms. Brain and cognition, 112:92–97.
* Tani, (2019) Tani, J. (2019). Accounting for the minimal self and the narrative self: Robotics experiments using predictive coding. In AAAI Spring Symposium: Towards Conscious AI Systems.
* Tani and Nolfi, (1999) Tani, J. and Nolfi, S. (1999). Learning to perceive the world as articulated: an approach for hierarchical learning in sensory-motor systems. Neural Networks, 12(7-8):1131–1141.
* Tschantz et al., (2020) Tschantz, A., Seth, A. K., and Buckley, C. L. (2020). Learning action-oriented models through active inference. PLoS computational biology, 16(4):e1007805.
* Ueltzhöffer, (2018) Ueltzhöffer, K. (2018). Deep active inference. Biological cybernetics, 112(6):547–573.
* Van de Cruys, (2017) Van de Cruys, S. (2017). Affective value in the predictive mind. Johannes Gutenberg-Universität Mainz.
* Whittington and Bogacz, (2019) Whittington, J. C. and Bogacz, R. (2019). Theories of error back-propagation in the brain. Trends in cognitive sciences.
* Williams, (2018) Williams, D. (2018). Predictive processing and the representation wars. Minds and Machines, 28(1):141–172.
* Wolpert and Kawato, (1998) Wolpert, D. M. and Kawato, M. (1998). Multiple paired forward and inverse models for motor control. Neural Netw., 11(7-8):1317–1329.
* Zhong et al., (2018) Zhong, J., Cangelosi, A., Zhang, X., and Ogata, T. (2018). Afa-prednet: The action modulation within predictive coding. In 2018 International Joint Conference on Neural Networks (IJCNN), pages 1–8. IEEE.
# Solving QSAT problems with neural MCTS
Ruiyang Xu and Karl Lieberherr
Khoury College of Computer Sciences
Northeastern University
<EMAIL_ADDRESS>
<EMAIL_ADDRESS>
###### Abstract
Recent achievements from AlphaZero using self-play have shown remarkable
performance on several board games. It is plausible to think that self-play,
starting from zero knowledge, can gradually approximate a winning strategy for
certain two-player games after a sufficient amount of training. In this paper, we try to
leverage the computational power of neural Monte Carlo Tree Search (neural
MCTS), the core algorithm from AlphaZero, to solve Quantified Boolean Formula
Satisfaction (QSAT) problems, which are PSPACE-complete. Since every
QSAT problem is equivalent to a QSAT game, the game outcome can be used to
derive the solution of the original QSAT problem. We propose a way to encode
Quantified Boolean Formulas (QBFs) as graphs and apply a graph neural network
(GNN) to embed the QBFs into the neural MCTS. After training, an off-the-shelf
QSAT solver is used to evaluate the performance of the algorithm. Our result
shows that, for problems within a limited size, the algorithm learns to solve
the problem correctly merely from self-play.
## 1 Introduction
The past several years have witnessed the progress and success of self-play.
The combination of classical MCTS (?) algorithms with newly developed deep
learning techniques gives a stunning performance on complex board games like
Go and Chess (?; ?; ?). One common but outstanding feature of such an
algorithm is the tabula-rasa style of learning. In other words, it learns to
play the game with zero knowledge (except the game rules). Such tabula-rasa
learning is regarded as an approach to general artificial intelligence.
Given such achievements, it is interesting to see whether the algorithm’s
superhuman capability can be used to solve problems in other domains.
Specifically, we apply neural MCTS (?; ?) to solve the QSAT problem through
self-play on the QSAT game. Our experiment shows that, even though the QSAT
game is fundamentally different from traditional board games (see section
4.2), the algorithm is still able to determine the truthfulness of the
corresponding QSAT problem through the dominant player. Furthermore, the
trained algorithm can be used to approximate the solution (or show the non-
existence of a solution) of the original problem through competitions against
an enumerator. However, our objective is not necessarily to improve the state-
of-the-art of hand-crafted problem solvers in specific areas but to illustrate
that there is a generic algorithm (neural MCTS) that can solve well-known
problems tabula-rasa.
In this work, we make two main contributions: 1. We propose a way to turn QBFs
into graphs so that they can be embedded with a graph neural network; 2. We
implemented a variant of the neural MCTS algorithm, which has two independent
neural networks (designed explicitly for the QSAT games), one for each
player. Our result shows that the algorithm can determine the truthfulness of
the QSAT problem correctly. The remainder of this paper is organized as
follows. Section 2 shows some related works which inspired our work. Section 3
presents essential preliminaries on neural MCTS, the QSAT problems, and graph
neural networks. Section 4 introduces our approach to encode QBFs as graphs
and the architecture of our implementation. Section 5 gives our correctness
measurement and presents experimental results. Sections 6 and 7 present a
discussion and our conclusions.
## 2 Related Work
In terms of combining a QSAT solver with machine learning, Janota built a
competitive QSAT solver, QFUN (?), based on counterexample guided refinement
and machine learning. Although, as in our work, the QSAT problem is treated
as a game, their learning does not depend on the game state (i.e., the QBF)
but focuses on the assignment pairs from the two players in two consecutive
moves (i.e., a move by the existential player, and a countermove by the
universal player). By supervised learning a decision tree classifier, the
learning algorithm categorizes the move-countermove pairs into two classes:
feasible countermove and infeasible countermove. QFUN progressively solves a
QBF by learning moves for the existential player so that there are no feasible
countermoves for the universal player. While the performance is compelling,
their solver is largely based on a counterexample guided abstraction
refinement algorithm (?), whose design requires human insight and hence
cannot be regarded as tabula-rasa.
As an alternative methodology, NeuroSAT (?) provides another insight into
applying machine learning to such problems. By leveraging graph neural
networks (?) and a message passing process (?), they developed a single-bit
supervised SAT solver. The algorithm depends on zero knowledge and learns
purely from the input formula. In NeuroSAT, Boolean formulas are encoded as
graphs so that a specially designed graph neural network can be applied to
those graphs. The target value of the graph neural network is a single bit
representing the satisfiability of the input SAT problem. It has been shown
that NeuroSAT performs adequately on SAT problems within a reasonable size.
When it comes to applying neural MCTS to solve problems in other domains, Xu
et al. use a technique called Zermelo Gamification to turn specific
combinatorial optimization problems into games so that they can be solved
through AlphaZero like algorithms (?). They applied their method to a
particular combinatorial optimization problem called HSR. Their result shows
that the algorithm can accurately solve such a problem within a given size.
Although they only applied their method to one specific problem, their
experimental results endorse the idea that there is a generic algorithm (neural
MCTS) that can solve well-known problems tabula-rasa. To this extent, our work
can be seen as an extension of theirs.
## 3 Preliminaries
### 3.1 Neural Monte Carlo Tree Search
The PUCT (Predictor + UCT) algorithm implemented in AlphaZero (?; ?) is
essentially a neural MCTS algorithm which uses PUCB Predictor + UCB (?) as its
confidence upper bound (?) and uses the neural prediction $P_{\phi}(a|s)$ as
the predictor. The algorithm runs through multiple search iterations
to decide the optimal action for the current state. Each iteration
consists of four phases:
1. SELECT: At the beginning of each iteration, the algorithm selects a path from
the root (current game state) to a leaf (either a terminal state or an
unvisited state) in the tree according to the PUCB (see (?) for a detailed
explanation for terms used in the formula). Specifically, suppose the root is
$s_{0}$, we have 111Theoretically, the exploratory term should be
$\sqrt{\frac{\sum_{a^{\prime}}N(s_{i-1},a^{\prime})}{N(s_{i-1},a)+1}}$,
however, the AlphaZero used the variant
$\frac{\sqrt{\sum_{a^{\prime}}N(s_{i-1},a^{\prime})}}{N(s_{i-1},a)+1}$ without
any explanation. We tried both in our implementation, and it turns out that
the AlphaZero one performs much better.:
$a_{i}=\operatorname*{arg\,max}_{a}\left[Q(s_{i},a)+cP_{\phi}(a|s_{i})\frac{\sqrt{\sum_{a^{\prime}}N(s_{i},a^{\prime})}}{N(s_{i},a)+1}\right]$
$Q(s_{i},a)=\frac{W(s_{i},a)}{N(s_{i},a)+1}$ $s_{i+1}=move(s_{i},a_{i})$
2. EXPAND: Once the select phase ends at a non-terminal leaf, the leaf will be
fully expanded and marked as an internal node of the current tree. All its
children nodes will be considered as leaf nodes during the next iteration of
selection.
3. ROLL-OUT: Normally, starting from the expanded leaf node chosen in the
previous phase, the MCTS algorithm uses a random policy to roll out the rest of the
phases, the MCTS algorithm uses a random policy to roll out the rest of the
game (?). The algorithm simulates the actions of each player randomly until it
arrives at a terminal state, which means the game has ended. The algorithm
then uses the outcome of the game as the result evaluation for the expanded
leaf node.
However, a random roll-out usually becomes time-consuming when the tree is
deep. A neural MCTS algorithm, instead, uses a neural network $V_{\phi}$ to
predict the result evaluation, so that the algorithm saves the time otherwise
spent on rolling out.
4. BACKUP: This is the last phase of an iteration, where the algorithm
recursively backs up the result evaluation along the tree edges. Specifically,
suppose the path found in the select phase is
$\{(s_{0},a_{0}),(s_{1},a_{1}),...,(s_{l-1},a_{l-1}),(s_{l},\_)\}$; then for
each edge $(s_{i},a_{i})$ in the path, we update the statistics as:
$W^{new}(s_{i},a_{i})=W^{old}(s_{i},a_{i})+V_{\phi}(s_{l})$
$N^{new}(s_{i},a_{i})=N^{old}(s_{i},a_{i})+1$
However, in practice, considering a Laplace smoothing in the expression of Q,
the following updates are actually applied:
$Q^{new}(s_{i},a_{i})=\frac{Q^{old}(s_{i},a_{i})\times N^{old}(s_{i},a_{i})+V_{\phi}(s_{l})}{N^{old}(s_{i},a_{i})+1}$
$N^{new}(s_{i},a_{i})=N^{old}(s_{i},a_{i})+1$
Once the given number of iterations has been reached, the algorithm returns a
vector of action probabilities for the current state (root $s_{0}$), where each
action probability is computed as
$\pi(a|s_{0})=\frac{N(s_{0},a)}{\sum_{a^{\prime}}N(s_{0},a^{\prime})}$. The
real action played by the neural MCTS is then sampled from the action
probability vector $\pi$. In this way, neural MCTS simulates the action for
each player alternately until the game ends. This process is called neural
MCTS simulation, which is the core of self-play.
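For concreteness, the following minimal Python sketch (ours; state and action representations, and the networks behind the `priors` dict and `leaf_value`, i.e. $P_{\phi}$ and $V_{\phi}$, are left abstract) implements the select and backup arithmetic above, using the AlphaZero variant of the exploratory term and the Laplace-smoothed update of $Q$:

```python
import math
from collections import defaultdict

C_PUCT = 1.0            # exploration constant c
Q = defaultdict(float)  # mean action value Q(s, a); states must be hashable
N = defaultdict(int)    # visit count N(s, a)

def select_action(state, priors):
    """One PUCB selection step: argmax of Q(s, a) plus the prior-scaled
    exploration bonus (AlphaZero variant, sqrt over the parent visit count)."""
    total = sum(N[(state, b)] for b in priors)
    def pucb(a):
        return Q[(state, a)] + C_PUCT * priors[a] * math.sqrt(total) / (N[(state, a)] + 1)
    return max(priors, key=pucb)

def backup(path, leaf_value):
    """Push the leaf evaluation V_phi(s_l) up the selected path, using the
    Laplace-smoothed incremental form of the Q update shown above."""
    for s, a in path:
        Q[(s, a)] = (Q[(s, a)] * N[(s, a)] + leaf_value) / (N[(s, a)] + 1)
        N[(s, a)] += 1

def search_policy(state, actions):
    """After the search iterations: pi(a|s) proportional to N(s, a)."""
    total = sum(N[(state, a)] for a in actions)
    return {a: N[(state, a)] / total for a in actions}
```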
### 3.2 QSAT Problems and QSAT games
A quantified Boolean formula (QBF) is a formula in the following form:
$\exists x_{1}\forall x_{2}...\exists x_{n}.\Phi(x_{1},...,x_{n})$
where $x_{i}$ are distinct Boolean variables. The sequence of quantifiers and
variables is called the prefix of a QBF. The propositional formula $\Phi$ is
called the matrix of a QBF, which only uses the variables in $\{x_{i}\}$. A
QBF can evaluate to either true or false since there are no free variables;
it is solvable if it evaluates to true and unsolvable otherwise.
The problem of determining the truthfulness of a QBF is called the QSAT
problem, which is known to be PSPACE-complete.
A QSAT problem can be seen as a game between two players: the existential
player (the Proponent (P)) who assigns values to the existentially quantified
variables, and the universal player (the Opponent (OP)) who assigns values to
the universally quantified variables. The two players make moves by assigning
values to the variables alternately following the sequence of quantifiers in
the prefix. The existential player (P) wins if the formula evaluates to True
and the universal player (OP) wins if it evaluates to False.
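To illustrate the game semantics, the following brute-force evaluator (a sketch of ours; the `prefix`/`clauses` encoding is our own convention, and the exhaustive enumeration is feasible only for tiny instances) lets P maximize over assignments of existential variables and OP minimize over universal ones:

```python
def qbf_value(prefix, clauses, assignment=()):
    """Evaluate a QBF by playing the QSAT game exhaustively.

    prefix  : list of ('E', i) / ('A', i) quantifier-variable pairs
    clauses : CNF matrix, each clause a list of signed variable indices
    Returns True iff the existential player (P) has a winning strategy.
    """
    if not prefix:
        # Matrix evaluation: every clause needs at least one true literal.
        values = dict(assignment)
        return all(any(values[abs(l)] == (l > 0) for l in c) for c in clauses)
    (q, var), rest = prefix[0], prefix[1:]
    branches = (qbf_value(rest, clauses, assignment + ((var, b),))
                for b in (True, False))
    return any(branches) if q == 'E' else all(branches)

# exists x1 forall x2 exists x3 . (x1 v x2 v -x3) & (x2 v x3) & (x1 v x3)
print(qbf_value([('E', 1), ('A', 2), ('E', 3)],
                [[1, 2, -3], [2, 3], [1, 3]]))  # prints True
```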
### 3.3 Gated Graph Neural Networks
In this work, QBFs are encoded as graphs, and a Gated Graph Neural Network
(GGNN) (?; ?) is applied to embed the QBFs into the neural MCTS framework.
Notice that the GGNN is not the only option and there are alternatives (?; ?);
we choose the GGNN for the sake of its easy implementation.
The forward pass of the GGNN can be described as follows:
$m_{v}^{t+1}=\sum_{e}\sum_{w\in N(v)}A_{e_{wv}}h_{w}^{t},\quad t=0..T$

$h_{v}^{t+1}=GRU(h_{v}^{t},m_{v}^{t+1}),\quad t=0..T$

$R=\sum_{v\in V}\sigma(f(h_{v}^{T},h_{v}^{0}))\odot g(h_{v}^{T})$
where $e$ is the edge type in a multigraph, $A_{e}$ is the edge-weight matrix
to be learned during the training, $h_{v}^{t}$ is the hidden representation of
node $v$ at message passing iteration $t$, and $m_{v}^{t}$ is called the
message from node $v$ at iteration $t$. $R$ is called the read-out which
aggregates information from each node to generate a global feature target
(notice that $\sigma$ means the sigmoid activation function, $f$ and $g$ are
MLPs, and $\odot$ means element-wise product).
The message passing process iterates a given number of times $T$; during each
iteration, each node $v$ computes its message using the hidden representations
from the neighbor nodes $N(v)$. After that, a Gated Recurrent Unit (GRU) is
used to update the hidden representation of the node $v$. The message passing
process allows each node’s hidden representation to capture the global
structure information of the entire input graph. Finally, the read-out process
$R$ is applied to all the nodes to compute the global target of the input
graph. The GGNN is invariant under graph isomorphism, which makes it
well-suited to capture the symmetry properties among the QBFs.
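The forward pass above can be condensed into a short PyTorch sketch (ours; the dense per-edge-type adjacency tensor and the single-layer choices for $f$ and $g$ are simplifying assumptions, since these architectural details are not fixed by the equations):

```python
import torch
import torch.nn as nn

class GGNN(nn.Module):
    """Minimal sketch of the gated graph network forward pass: per-edge-type
    linear message passing, a GRU node update, and the gated read-out R."""

    def __init__(self, hidden=128, num_edge_types=4, steps=10, out=1):
        super().__init__()
        self.A = nn.ModuleList(nn.Linear(hidden, hidden, bias=False)
                               for _ in range(num_edge_types))
        self.gru = nn.GRUCell(hidden, hidden)
        self.f = nn.Linear(2 * hidden, out)   # gating network f(h_T, h_0)
        self.g = nn.Linear(hidden, out)       # feature network g(h_T)
        self.steps = steps

    def forward(self, h0, adj):
        """h0:  [num_nodes, hidden] initial node features;
        adj: [num_edge_types, num_nodes, num_nodes] adjacency per edge type."""
        h = h0
        for _ in range(self.steps):
            # m_v = sum over edge types e and neighbours w of A_e h_w
            m = sum(adj[e] @ self.A[e](h) for e in range(len(self.A)))
            h = self.gru(m, h)
        gate = torch.sigmoid(self.f(torch.cat([h, h0], dim=-1)))
        return (gate * self.g(h)).sum(dim=0)  # read-out R over all nodes
```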
## 4 Implementation
### 4.1 QBF Graphs
Although the QSAT problem has a simple syntactic structure, symmetries induced
by the semantics of propositional logic should not be ignored (?). The fact
that symmetric QBFs are equivalent can improve learning efficiency. In this
work, we specially designed a graph encoding of the QBFs, which helps us catch
those symmetries through graph isomorphism.
After using the Tseitin transformation to re-write $\Phi$ in conjunctive normal
form (CNF), a QBF is represented as an undirected multigraph (Fig. 1) with two
nodes for every variable (one for each literal and its negation) and one node
for every clause. There are four types of edges in this multigraph: 1. E2A
edge, an edge between every consecutive existential literal and universal
literal; 2. A2E edge, an edge between every consecutive universal literal and
existential literal; 3. L2C edge, an edge between every literal and every
clause it appears in; 4. reflexive edge, an edge between each literal and its negation.
The design is motivated by three considerations: 1. the sequential
information of the prefix is essential to identify the solution of a QBF: even
if two QBFs have the same matrix $\Phi$, a different variable sequence in the
prefix might lead to a massive difference in the solution; therefore, we use
the E2A edges and A2E edges to track such sequential information. 2. In a QBF,
variables only appear as positive literals in the prefix; however, they can
appear both positively and negatively in the matrix $\Phi$. Hence we naturally
represent each variable as two nodes, i.e., a pair of complementary literals.
3. Since any literal and its complement are coupled, we use a reflexive edge
to capture such entanglement (a construction sketch follows Fig. 1).
Figure 1: An example of graph encoding for the QBF: $\exists x_{1}\forall
x_{2}\exists x_{3}(x_{1}\vee x_{2}\vee\neg x_{3})\wedge(x_{2}\vee
x_{3})\wedge(x_{1}\vee x_{3})$. Notice that there are four types of edges, and
two types of nodes.
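The following sketch builds the four edge lists from the prefix/clause representation used in the earlier evaluator sketch. The node-indexing scheme and the attachment of the E2A/A2E edges to the positive-literal nodes are our own assumptions; the text above only fixes the four edge types:

```python
def qbf_to_graph(prefix, clauses):
    """Build the four edge lists of the QBF multigraph described above.

    Node ids (our convention): 2*(v-1) for the positive literal of variable v,
    2*(v-1)+1 for its negation, then one node per clause after all literals.
    """
    pos = lambda v: 2 * (v - 1)
    neg = lambda v: 2 * (v - 1) + 1
    n_lits = 2 * len(prefix)
    edges = {'E2A': [], 'A2E': [], 'L2C': [], 'reflexive': []}
    # 1-2. prefix-order edges between consecutive quantified variables
    for (q1, v1), (q2, v2) in zip(prefix, prefix[1:]):
        if (q1, q2) == ('E', 'A'):
            edges['E2A'].append((pos(v1), pos(v2)))
        elif (q1, q2) == ('A', 'E'):
            edges['A2E'].append((pos(v1), pos(v2)))
    # 3. literal-to-clause edges
    for c, clause in enumerate(clauses):
        for lit in clause:
            node = pos(lit) if lit > 0 else neg(-lit)
            edges['L2C'].append((node, n_lits + c))
    # 4. reflexive edges pairing each literal with its negation
    for _, v in prefix:
        edges['reflexive'].append((pos(v), neg(v)))
    return edges

# The QBF of Fig. 1: exists x1 forall x2 exists x3, with three clauses
g = qbf_to_graph([('E', 1), ('A', 2), ('E', 3)], [[1, 2, -3], [2, 3], [1, 3]])
```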
### 4.2 Architecture
In our design, the policy-evaluation neural network of the neural MCTS becomes
two GGNNs (see section 3.3), one for each player. The reason why we use two
independent neural networks instead of one is that the QSAT game is asymmetric
in terms of the winning condition. As we introduced in section 3.2, P
wins the game if and only if the QBF evaluates to true, while OP wins the game
if and only if the QBF evaluates to false. On the other hand, when it comes to
the outcome of the GGNN for two consecutive moves by different players, we
noticed that the hidden representations sometimes show no significant
difference between the two players, so the GGNN becomes confused by the input
graphs. This issue can be resolved only by separating the neural networks, so
that both players can learn and progress mutually and consistently.
Another fact to notice is that we treat every single QSAT problem as an
independent game. During the self-play phase, the neural MCTS algorithm
(section 3.1) simulates the move for each player based on the player’s GGNN.
The neural MCTS takes in the current game state (the QBF graph) and uses the
current player’s GGNN to do the selection and rollout. After a certain number
(25 in our case) of iterations, neural MCTS will return the action probability
distribution for the current state. The player will sample her next move from
this distribution. The simulation alternates between the two players until the
game ends, where the game outcome will be evaluated and stored for the
training phase. To call the neural network, the hidden representation
$h_{v}^{0}$ of each node $v$ is initialized with the type of the node.
Specifically, for an existential literal node, the hidden representation is
$[1,0,0,...,0]$; for a universal literal node, the hidden representation is
$[0,1,0,...,0]$; and for a CNF clause node, the hidden representation is
$[0,0,1,...,0]$. Notice that we use $0$’s to pad the vector to a given
length. Another fact to notice is that there are two read-out tasks ($P_{\phi}$
and $V_{\phi}$); hence we use a different set of aggregation MLPs for each
task:
$R_{i}=\sum_{v\in V}\sigma(f_{i}(h_{v}^{T},h_{v}^{0}))\odot g_{i}(h_{v}^{T})$
$P_{\phi}=R_{1},V_{\phi}=R_{2}$
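A minimal sketch of the one-hot initialization just described (assuming NumPy; treating both literal nodes of a variable as sharing their quantifier's type is our reading of the text):

```python
import numpy as np

def init_node_features(prefix, num_clauses, hidden=128):
    """Zero-padded one-hot initial hidden representations h_v^0 by node type:
    existential literal -> [1,0,0,...], universal literal -> [0,1,0,...],
    clause -> [0,0,1,...]. Both literal nodes of a variable share the
    quantifier's type (our assumption)."""
    types = []
    for q, _ in prefix:
        t = 0 if q == 'E' else 1
        types += [t, t]             # positive and negative literal nodes
    types += [2] * num_clauses      # one node per clause
    h0 = np.zeros((len(types), hidden))
    h0[np.arange(len(types)), types] = 1.0
    return h0

h0 = init_node_features([('E', 1), ('A', 2), ('E', 3)], num_clauses=3)
print(h0.shape)  # (9, 128): 6 literal nodes + 3 clause nodes
```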
After each self-play simulation, we store the game trace of each player
separately as a set of tuples in the form of ($s$, $\pi$, $v$), where $s$ is
the game state (the QBF graph), $\pi$ is the action probability distribution
generated by neural MCTS based on the current state, and $v$ is the game result
from the perspective of the current player based on the game outcome. We run
simulation several times (in our case, ten times) to retrieve enough training
data. After that, we train the GGNN independently for each of the players
using those training data collected during self-play. After training, we use
the newly trained GGNNs to play against each other for 20 rounds and collect
the performance data for evaluation and analysis; this is called the arena
phase.
## 5 Experiment
### 5.1 Experiment Setup
The hyperparameters are set as follows: the number of search iterations for
neural MCTS is set to 25, and the number of simulations is set to 100; the
number of message passing steps $T$ is set to 10 for the GGNN; the size of the
hidden representation of the GGNN is set to 128.
Considering the capacity and computation power of our machine, we generate 20
random QBFs (10 solvable and 10 unsolvable) which have 51 nodes (the prefix
has 21 quantified variables, and the matrix has 9 clauses. So there are 42
literal nodes and 9 clause nodes.) after encoding as graphs. Each QBF is
regarded as a single game to be played and learned by the neural MCTS. We run
the learning iteration (i.e., self-play, training, and arena) for 32 epochs,
and collect the performance data in the arena phase during each iteration.
### 5.2 Performance Measurement
To measure the performance of the algorithm, we use two metrics: the local
correctness ratio and the global correctness ratio. We compute the local
correctness ratio of the two players during the arena phase where the two
players compete with each other for 20 rounds. An action is locally correct
if it preserves a winning position. It is straightforward to check the local
correctness of actions using a QSAT solver: GhostQ (?). We collect the local
correctness ratio of the two players after each round of competing in arena
phase. Then we take the average value of their local correctness ratio as the
performance measurement for the current training iteration.
###### Definition 5.1.
Local Correctness for P
Given a QBF $\exists x_{1}\forall x_{2}...\exists x_{n}.\Phi$, an action
$x^{*}$ is locally correct if and only if $\forall x_{2}...\exists
x_{n}.\Phi[x_{1}\setminus x^{*}]$ evaluates to True.
###### Definition 5.2.
Local Correctness for OP
Given a QBF $\forall x_{1}\exists x_{2}...\exists x_{n}.\Phi$, an action
$x^{*}$ is locally correct if and only if $\exists x_{2}...\exists
x_{n}.\Phi[x_{1}\setminus x^{*}]$ evaluates to False.
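For small instances, these definitions can be checked directly with the brute-force evaluator sketched in section 3.2 (the paper uses the GhostQ solver instead; this stand-in is ours and only illustrative):

```python
def locally_correct(prefix, clauses, move_value, player):
    """Check Definitions 5.1 / 5.2: substitute the chosen value x* for the
    leading quantified variable and evaluate the residual QBF with the
    brute-force qbf_value sketched earlier."""
    (_, var), rest = prefix[0], prefix[1:]
    residual = qbf_value(rest, clauses, assignment=((var, move_value),))
    # P needs the residual formula to remain True; OP needs it to be False.
    return residual if player == 'P' else not residual
```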
Since the two neural networks might be inductively biased towards each other,
a locally correct solution could still be incorrect. To see whether the neural
MCTS learns the correct solution, we measure the global correctness ratio by
testing the algorithm against an enumerator. To be specific, if a QBF is
satisfiable, we enumerate all possible moves for OP (the universal player) and
use the enumerator to play against P's neural network; vice-versa for an
unsatisfiable QBF. Theoretically, the tested neural network fails to solve the
QBF if there is any chance that the opposing enumerator can win the game. We
count the number of winning games for each player and use it to measure the
global correctness ratio. A 100% global correctness ratio not only means the
neural MCTS has found the correct solution, but also provides full support for
that solution (represented as a winning strategy encoded in the neural
network). On the other hand, a non-100% global correctness ratio can be
treated as a measure of approximation of the algorithm.
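A sketch of this enumerator-based test, reusing the conventions of the earlier sketches (`p_policy` is a hypothetical stand-in for the trained player's move choice; it is not the paper's actual interface):

```python
def globally_correct(prefix, clauses, p_policy, solvable):
    """Play the tested network against an enumerator of all opponent replies.

    p_policy(assignment, var) -> bool stands in for the trained player's
    move choice. For a solvable QBF the tested player is existential (P);
    for an unsolvable QBF it is universal (OP). In either case the tested
    player must win against every enumerated branch.
    """
    def play(rest, assignment):
        if not rest:
            values = dict(assignment)
            sat = all(any(values[abs(l)] == (l > 0) for l in c) for c in clauses)
            return sat == solvable  # did the tested player win this playout?
        (q, var), tail = rest[0], rest[1:]
        if (q == 'E') == solvable:  # the tested player moves
            return play(tail, assignment + ((var, p_policy(assignment, var)),))
        # the enumerator tries both values; the tested player must beat both
        return all(play(tail, assignment + ((var, b),)) for b in (True, False))
    return play(prefix, ())
```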
### 5.3 General Result
Our experiment shows that the algorithm can correctly determine the
truthfulness of all 20 test QBFs. We notice that, for a solvable QBF, the
existential player quickly dominates the game and wins most of the time,
and vice-versa for the universal player in an unsolvable case. The result
indicates that for a solvable/ an unsolvable QSAT problem, the existential/
universal player has a higher chance to win the corresponding QSAT game
against the universal/ existential player.
We also measured the algorithm’s global correctness ratio for all test cases,
and we noticed an asymmetry between the solvable and unsolvable cases. To be
specific, we computed the average global correctness ratio (AGC) for all
solvable and unsolvable QBFs respectively, and it turns out that the AGC for
solvable cases is 87%, while the AGC for unsolvable cases is 85%. This fact
indicates that neural MCTS can still be an adequate approximator to QSAT
problem, even if it cannot derive a 100% correct strategy.
### 5.4 Two Examples
In this section, for illustration purposes, we show the experiment results
for a solvable QSAT and an unsolvable QSAT (described in Fig. 2 and Fig. 3
where, due to limited space, we only show the matrix of the QBF). One can see,
in Fig. 2, that the local correctness ratio of the existential player (P) soars
after the first epoch, while in Fig. 3 the local correctness ratio of
the universal player (OP) increases rapidly. Even though there are
fluctuations, one of the players always dominates the game; this phenomenon is
treated as an indicator of the truthfulness of the QSAT. Also, notice that the
curves in the unsolvable case fluctuate more violently than the ones in the
solvable case. This fact suggests that, even though one player dominates the
game, dominating an unsolvable QSAT game might be harder than a solvable one.
In terms of the global correctness ratio, both got 100% correctness, which
means the neural MCTS not only makes the correct decision but also
constructively supports its decision.
Figure 2: Local correctness ratio measured for a solvable QBF. The matrix of
the QBF is listed on the right side in QDIMACS format.

Figure 3: Local correctness ratio measured for an unsolvable QBF. The matrix
of the QBF is listed on the right side in QDIMACS format.
## 6 Discussion
### 6.1 Exploration v.s. Exploitation
One of the known issues of self-play is that the two players will always
mutually bias their strategies to fit each other's through exploiting their
experiences. This mutual inductive bias facilitates the learning process
of the players when they are at the same pace. However, once the learning
speeds are unbalanced, the mutual inductive bias foils the improvement of
players’ performance by stagnating their strategies in a local optimum. To
understand this issue, one can think about a game between an expert and a
newbie. Since the expert can easily find a strategy to win against the newbie,
the newbie will always lose the game. And because there is no positive
feedback at all, the newbie will build a biased belief that there is no way to
win the game. Such a belief can be strengthened during self-play, and finally,
it leads to some fixed losing strategy. While on the other side, since the
opponent is not so challenging, the expert will also stay with the current
strategy without any attempt to improve it.
Nevertheless, we notice that neural MCTS is resilient to mutual inductive
bias. Whenever the learning paces are unbalanced, the weaker player's decisions
become indifferent (i.e., no matter what moves she takes, she will always lose
the game). On the other hand, neural MCTS pushes those indifferent actions
towards a uniform distribution, thus encouraging exploration through random
moves. Consequently, neural MCTS adaptively balances exploration and
exploitation, thereby jumping out of the local optimum.
### 6.2 State Space Coverage
Neural MCTS is capable of handling a large state space (?). Such an algorithm
must search only a small portion of the state space and make the decisions
from those limited observations. To measure the state space coverage, we
recorded the number of states accessed during the experiment: in each QSAT
game, we count the total number of states accessed during each game in
self-play, and we compute the 10-game moving average of states accessed over
all self-plays (Fig. 4). This result indicates an implicit adaptive pruning
mechanism behind the neural MCTS algorithm, which can be regarded as a
justification for its capability of handling large state spaces.
Figure 4: Average states accessed during self-play for QSAT problem described
in Fig. 2. As a comparison, there are 226599 states in total.
### 6.3 Limitation
Our test cases are restricted to a limited size because QSAT is known to be
PSPACE-complete, and verifying the correctness of the algorithm is time-consuming.
In our experiment, there are 10 to 11 moves for each player. Hence to verify
the correctness of the algorithm, it roughly takes $2^{10}$ to $2^{11}$ tests.
And the verification time increases exponentially with the number of variables
in the QBF.
On the other hand, the strategy learned by the neural MCTS algorithm is
implicitly encoded inside the neural network, and there is no way to extract
such a strategy so that they can be explicitly verified by any more efficient
approaches from the formal method. Therefore, using an enumerator to verify
the correctness is inevitable for the time being. As a result, even though
neural MCTS can handle a deep game tree hence a large number of variables, it
is still hard or even impossible to verify the learning outcome.
## 7 Conclusion
In this work, intrigued by the astonishing achievements of AlphaZero, we
attempt to leverage the computational power of the neural MCTS algorithm to
solve a practical problem: QSAT. We make two main contributions. First, we
propose a way to encode QBFs as undirected multigraphs, which bridges the
logic-formula representation of QBFs with the graph neural network input.
Second, we use two separate graph neural networks to build our neural MCTS
variant. Such a design can significantly reduce the learning confusion caused
by the asymmetry between the two players. Our evaluation is based on two
metrics: the local and global correctness ratios. The local metric, by
utilizing an off-the-shelf QSAT solver, only focuses on correctness in a
single game, yet it imposes no constraints on the number of variables in the
formula. The global metric, which relies on an enumerator, can determine the
exact correctness of the learned neural MCTS, but it is sensitive to the
number of variables in the formula. Our experimental results are positive on
the given limited-size test cases, which justifies the feasibility of our
idea to some extent. For future work, it may be worthwhile to figure out how
to explain the learned neural MCTS or how to extract the generated strategy
from the neural network. It would also be useful to study how to optimize the
current algorithm so that it can handle larger cases. Our objective is not
necessarily to improve the state of the art of hand-crafted problem solvers
in specific areas but to illustrate that there is a generic algorithm (neural
MCTS) that can solve well-known problems tabula rasa. The hope is that neural
MCTS will help solve future algorithmic problems that have not yet been
solved by humans. We view neural MCTS as a helper in human solving of
algorithmic problems in the future. We also hope our research sheds some
light on the remarkable but mysterious learning ability of the neural MCTS
algorithm from AlphaZero.
## References
* [Anthony, Tian, and Barber 2017] Anthony, T.; Tian, Z.; and Barber, D. 2017\. Thinking fast and slow with deep learning and tree search. In Advances in Neural Information Processing Systems, 5360–5370.
* [Battaglia et al. 2018] Battaglia, P. W.; Hamrick, J. B.; Bapst, V.; Sanchez-Gonzalez, A.; Zambaldi, V.; Malinowski, M.; Tacchetti, A.; Raposo, D.; Santoro, A.; Faulkner, R.; et al. 2018\. Relational inductive biases, deep learning, and graph networks. arXiv preprint arXiv:1806.01261.
* [Browne et al. 2012] Browne, C.; Powley, E. J.; Whitehouse, D.; Lucas, S. M.; Cowling, P. I.; Rohlfshagen, P.; Tavener, S.; Liebana, D. P.; Samothrakis, S.; and Colton, S. 2012\. A Survey of Monte Carlo Tree Search Methods. IEEE Trans. Comput. Intellig. and AI in Games 4(1):1–43.
* [Gilmer et al. 2017] Gilmer, J.; Schoenholz, S. S.; Riley, P. F.; Vinyals, O.; and Dahl, G. E. 2017\. Neural message passing for quantum chemistry. In Proceedings of the 34th International Conference on Machine Learning-Volume 70, 1263–1272. JMLR. org.
* [Janota et al. 2012] Janota, M.; Klieber, W.; Marques-Silva, J.; and Clarke, E. 2012\. Solving QBF with counterexample guided refinement. In Proceedings of the 15th International Conference on Theory and Applications of Satisfiability Testing, SAT’12, 114–128. Springer-Verlag.
* [Janota 2018] Janota, M. 2018\. Towards generalization in QBF solving via machine learning. In Proceedings of the Thirty-Second AAAI Conference on Artificial Intelligence, (AAAI-18), 6607–6614.
* [Kauers and Seidl 2018] Kauers, M., and Seidl, M. 2018\. Symmetries of quantified boolean formulas. ArXiv abs/1802.03993.
* [Klieber et al. 2010] Klieber, W.; Sapra, S.; Gao, S.; and Clarke, E. 2010\. A non-prenex, non-clausal QBF solver with game-state learning. In Proceedings of the 13th International Conference on Theory and Applications of Satisfiability Testing, SAT’10, 128–142. Springer-Verlag.
* [Kocsis and Szepesvari 2006] Kocsis, L., and Szepesvari, C. 2006\. Bandit Based Monte-Carlo Planning. In ECML, volume 4212 of Lecture Notes in Computer Science, 282–293. Springer.
* [Li et al. 2016] Li, Y.; Tarlow, D.; Brockschmidt, M.; and Zemel, R. S. 2016\. Gated graph sequence neural networks. In 4th International Conference on Learning Representations, ICLR 2016, San Juan, Puerto Rico, May 2-4, 2016, Conference Track Proceedings.
* [Rosin 2011] Rosin, C. D. 2011\. Multi-armed bandits with episode context. Annals of Mathematics and Artificial Intelligence 61(3):203–230.
* [Selsam et al. 2018] Selsam, D.; Lamm, M.; Bunz, B.; Liang, P.; de Moura, L.; and Dill, D. L. 2018\. Learning a sat solver from single-bit supervision. arXiv preprint arXiv:1802.03685.
* [Silver et al. 2016] Silver, D.; Huang, A.; Maddison, C. J.; Guez, A.; Sifre, L.; van den Driessche, G.; Schrittwieser, J.; Antonoglou, I.; Panneershelvam, V.; Lanctot, M.; Dieleman, S.; Grewe, D.; Nham, J.; Kalchbrenner, N.; Sutskever, I.; Lillicrap, T.; Leach, M.; Kavukcuoglu, K.; Graepel, T.; and Hassabis, D. 2016\. Mastering the game of Go with deep neural networks and tree search. Nature 529:484.
* [Silver et al. 2017a] Silver, D.; Hubert, T.; Schrittwieser, J.; Antonoglou, I.; Lai, M.; Guez, A.; Lanctot, M.; Sifre, L.; Kumaran, D.; Graepel, T.; Lillicrap, T. P.; Simonyan, K.; and Hassabis, D. 2017a. Mastering Chess and Shogi by Self-Play with a General Reinforcement Learning Algorithm. CoRR abs/1712.01815.
* [Silver et al. 2017b] Silver, D.; Schrittwieser, J.; Simonyan, K.; Antonoglou, I.; Huang, A.; Guez, A.; Hubert, T.; Baker, L.; Lai, M.; Bolton, A.; Chen, Y.; Lillicrap, T.; Hui, F.; Sifre, L.; van den Driessche, G.; Graepel, T.; and Hassabis, D. 2017b. Mastering the game of Go without human knowledge. Nature 550:354.
* [Silver et al. 2018] Silver, D.; Hubert, T.; Schrittwieser, J.; Antonoglou, I.; Lai, M.; Guez, A.; Lanctot, M.; Sifre, L.; Kumaran, D.; Graepel, T.; et al. 2018\. A general reinforcement learning algorithm that masters chess, shogi, and go through self-play. Science 362(6419):1140–1144.
* [Xu and Lieberherr 2019] Xu, R., and Lieberherr, K. 2019\. Learning self-game-play agents for combinatorial optimization problems. In Proceedings of the 18th International Conference on Autonomous Agents and MultiAgent Systems, AAMAS ’19, 2276–2278. IFAAMS.
# Planet Occurrence Rate Correlated to Stellar Dynamical History: Evidence
from Kepler and Gaia
Yuan-Zhe Dai, Hui-Gen Liu, Dong-Sheng An, Ji-Lin Zhou

School of Astronomy and Space Science, Nanjing University, 163 Xianlin
Avenue, Nanjing, 210023, People’s Republic of China

Key Laboratory of Modern Astronomy and Astrophysics, Ministry of Education,
Nanjing, 210023, People’s Republic of China
###### Abstract
The dynamical history of stars influences the formation and evolution of
planets significantly. To explore the influence of dynamical history on
planet formation and evolution from observations, we assume that stars which
experienced significantly different dynamical histories tend to have
different relative velocities. Utilizing the accurate Gaia-Kepler Stellar
Properties Catalog, we select single main-sequence stars and divide them into
three groups according to their relative velocities, i.e. high-V, medium-V,
and low-V stars. After considering the known biases in the Kepler data and
adopting prior and posterior corrections to minimize the influence of stellar
properties on the planet occurrence rate, we find that high-V stars have a
lower occurrence rate of super-Earths and sub-Neptunes (1–4 $R_{\oplus}$,
P $<$ 100 days) and a higher occurrence rate of sub-Earths (0.5–1
$R_{\oplus}$, P $<$ 30 days) than low-V stars. Additionally, high-V stars
have a lower occurrence rate of hot-Jupiter-sized planets (4–20 $R_{\oplus}$,
P $<$ 10 days) and a slightly higher occurrence rate of warm- or
cold-Jupiter-sized planets (4–20 $R_{\oplus}$, 10 $<$ P $<$ 400 days). After
investigating multiplicity and eccentricity, we find that high-V planet hosts
have a higher fraction of multi-planet systems and lower average
eccentricity, which is consistent with the eccentricity-multiplicity
dichotomy of Kepler planetary systems. All these statistical results favor
the scenario that high-V stars with large relative velocities may have
experienced fewer gravitational events, while low-V stars may have been
influenced significantly by stellar clustering.
Astrostatistics — Exoplanet astronomy: Exoplanet catalogs — planet hosting
stars — Stellar kinematics: stellar motion — Planet formation

Journal: AJ. Software: Astropy (Astropy Collaboration et al., 2013),
Matplotlib (Hunter, 2007), Scipy, Pandas (Wes McKinney, 2010; pandas
development team, 2020), Scikit-learn (Pedregosa et al., 2011).
## 1 Introduction
Since the first discovery of an exoplanet around a solar-type star in 1995
(Mayor & Queloz, 1995), more than 4300 exoplanets have been detected
(https://exoplanetarchive.ipac.caltech.edu/). The Kepler space telescope has
discovered more than 2291 exoplanets and 1786 candidates (based on Kepler
DR25; Thompson et al. 2018), a rich transiting-exoplanet sample from a single
telescope. To refine the parameters of Kepler planets, different groups have
performed spectral follow-ups, e.g. the California-$Kepler$ Survey (CKS)
(Johnson et al., 2017) and the Large Sky Area Multi-Object Fiber
Spectroscopic Telescope survey (LAMOST) (Cui et al., 2012; Zhao et al., 2012;
Luo et al., 2012, 2015). Thanks to the precise astrometric data from Gaia DR2
(Gaia Collaboration et al., 2018), stellar parameters such as effective
temperature and stellar radius can be refined more accurately, and the planet
parameters become more accurate in turn. With these accurate planet
parameters, we are now in a great epoch to explore the correlations between
stellar parameters and planet parameters or planetary-system architectures.
Stars usually form in clustering environments (Lada et al., 1993; Carpenter,
2000; Lada & Lada, 2003). Due to galactic tides, most clustering stars
eventually become field stars. The differences between the occurrence rates
of planets around stars in clusters and around field stars are crucial for
understanding planetary formation and evolution in cluster environments. From
the observational side, several surveys have monitored stars in young,
metal-rich open clusters and old, metal-poor globular clusters (GCs).
However, only tens of exoplanets have been found in open clusters, and only
two pulsar planet hosts have been detected in GCs. Simply comparing the total
number of planets in clusters and around field stars (considering that most
Kepler stars are field stars), planets in clusters make up only a small
fraction of the known exoplanet sample, so it seems that planets are rare in
clusters. Some recent works hold a different view. van Saders & Gaudi (2011)
indicated that low detection probabilities of planets in distant open
clusters may cause a significant apparent lack of hot Jupiters compared with
hot Jupiters around field stars. Meibom et al. (2013) suggested that both the
orbital properties and the frequency of planets in open clusters are
consistent with those in the field of the Milky Way. Brucalassi et al. (2017)
found that the occurrence rate of hot Jupiters in open clusters is slightly
higher than that in the field. All these previous planet searches in open
clusters indicated that planets discovered in open clusters appear to have
properties in common with those found around field stars.
Theoretically, both the UV radiation from nearby stars (Johnstone et al.,
1998; Matsuyama et al., 2003; Dai et al., 2018; Winter et al., 2018; van
Terwisga et al., 2020) and gravitational perturbations during frequent close
stellar encounters (Olczak et al., 2006; Spurzem et al., 2009; Liu et al.,
2013; Cai et al., 2017; Hamers & Tremaine, 2017) will probably affect planet
formation and evolution in the complicated clustering environment. In
planet-forming disks, disk dispersal is essential, especially for the
formation of gas giants. According to the classical core-accretion model (Ida
& Lin, 2004), the disk lifetime determines whether a proto-planet can grow
into a gas giant, or how massive the planet can finally become. We proposed a
viscous photo-evaporative disk model combining the photo-evaporation of
external flyby stars and host stars (Dai et al., 2018), and applied this
model to the clustering environment to explain the rarity of gas giants in
dense globular clusters or clusters with massive stars. Apart from the
radiation environment in clusters, the gravitational perturbations of
external stars also affect planet formation, both during gas-disk evolution
and afterward. Olczak et al. (2006) showed that nearly 90% of the disk mass
can be removed during a close encounter in extreme cases. Spurzem et al.
(2009) and Liu et al. (2013) used N-body simulations to show that the
instability of planets in clusters plays an important role; a large fraction
of planets very close to the host star are probably stable in open clusters
or even in the outer regions of GCs. In particular, Fujii & Hori (2019)
showed that the survival rate of planets in clusters decreases with
increasing semi-major axis.
Based on current studies, it is under debate whether the occurrence rates of
planets around stars in open clusters and around field stars are the same.
Due to the small number of planets in stellar clusters, the resulting large
statistical uncertainties make conclusive results hard to achieve. However,
Gaia provides unique data to describe stellar motions thanks to its extremely
high-precision astrometry. Utilizing Gaia DR2 (Gaia Collaboration et al.,
2018), we can obtain accurate stellar motion parameters in the Kepler field.
Gaia DR2 provides accurate astrometric data, and Kepler DR25 provides planet
properties. Utilizing Gaia DR2 and planet host stars, Winter et al. (2020)
found that stellar clustering shapes the architecture of planetary systems,
and Kruijssen et al. (2020) argued that stellar clustering is a key driver
for turning sub-Neptunes into super-Earths. These two works open a new window
connecting planet formation and evolution, star and stellar-cluster
formation, and galaxy evolution. Similarly, using Gaia DR2 and Kepler data,
we can study the correlations between stellar relative velocity and the
planet occurrence rate (the average number of planets per star).
Recently, two groups investigated the correlations between stellar motion and
planet occurrence. Bashi & Zucker (2019) used a cross-matched catalog
including Gaia DR2, Kepler DR25, and LAMOST DR4 (http://dr4.lamost.org/).
They found that planets around stars with lower iron metallicity and higher
total speed tend to have a higher occurrence rate. However, due to the
limited number of stars with three-dimensional velocities, these correlations
may be coupled with the influence of effective temperature and other
properties, i.e. they may be biased. McTier & Kipping (2019) also
investigated the correlation between the velocities of Kepler field stars and
Kepler planet hosts and argued that planet occurrence is independent of
stellar velocity. However, they did not actually calculate the occurrence
rate of planets around stars with different velocities, nor did they consider
the potential observational biases in the Kepler data. It is therefore
necessary to revisit the correlations between stellar velocity and the planet
occurrence rate.
Additionally, some other factors need to be included, e.g. the effective
temperature and metallicity of stars. Howard et al. (2012) and Mulders et al.
(2015) found that the occurrence rate of planets with radii of 1–4
$R_{\oplus}$ decreases with increasing stellar effective temperature; in
other words, planets are more common around cool stars. Several works show
that the planet occurrence rate has a positive correlation with stellar
metallicity (Ida & Lin, 2004; Wang & Fischer, 2015; Mulders et al., 2016;
Zhu, 2019), which supports the classical core-accretion scenario. Using the
proper motions and parallaxes of all the Kepler stars, we can obtain
two-dimensional projected velocities in barycentric right ascension (RA) and
barycentric declination (DEC). Comparing these two-dimensional velocities
with those of nearby stars, we can compute relative velocities and carry out
a statistical analysis based on them. Given the previous studies on
correlations between stellar properties and the planet occurrence rate, we
must treat the influence of stellar properties very carefully when
investigating the correlation between the planet occurrence rate and stellar
relative velocity.
This paper is organized as follows. In Section 2 we introduce our
methodology: in subsection 2.1 we describe the selection of main-sequence
stars and the planets around them, and in subsection 2.2 we describe the
calculation of the two-dimensional velocity and split the stellar sample
according to relative velocities. In Section 3 we show our main results: the
correlations between stellar properties and relative velocities; the
correlations between the occurrence rates of different-sized planets and
stellar relative velocities; and the correlations between eccentricity,
multiplicity, and stellar relative velocities. In Section 4 we discuss
several scenarios to explain our statistical results. The conclusions are
summarized in Section 5.
## 2 Methodology
### 2.1 Sample selection
The preliminary catalog we use is based on the table from Berger et al.
(2020), the Gaia-Kepler Stellar Properties Catalog, a set of the most
accurate stellar properties homogeneously derived from isochrones and
broadband photometry. Since we aim to unearth the correlations between
stellar relative velocity and the planet occurrence rate, we need the
two-dimensional velocity of Kepler stars. The criteria for the stellar sample
used to calculate the two-dimensional velocities are as follows:

* The selected stars are probably main-sequence stars.
* The selected stars are probably single stars.
We choose main-sequence stars to exclude potential systematic biases
depending on the stellar evolutionary stage, and we choose single stars
because the gravitational effects of stars in binary or multi-star systems
may influence the stellar proper motion. Before calculating relative
velocities, we define nearby stars as the stars within 100 pc of a chosen
Kepler star. We then derive the average velocity and velocity dispersion of
these stars around a given Kepler target.
Berger et al. (2018) (hereafter B2018) revised the stellar radii using the
astrometry and photometry data from Gaia DR2. Here, we exclude stars flagged
as sub-giants or red giants in B2018 and only choose main-sequence stars. We
also exclude potential cool binary stars due to their inflated radii.
Additionally, according to Berger et al. (2020) (hereafter B2020), we remove
Kepler stars around which Gaia-detected companions within 4 arcsec may be
binaries that contaminate secondary Ks magnitudes. Furthermore, B2020
suggests that stars with RUWE $\gtrsim$ 1.2, where RUWE is the magnitude- and
color-independent re-normalization of the astrometric $\chi^{2}$ of Gaia DR2
(unit-weight error, UWE), are likely to be binaries (A. Kraus et al., in
prep). To be cautious with the sample selection, we exclude all stars with
RUWE $\gtrsim$ 1.2, because binary stars may not only influence our
calculation of the velocity dispersion but also intrinsically have
significant impacts on planet formation. For instance, planetary systems
around binary stars differ from those around single stars: companion stars
can tidally truncate the gas disk (Xie et al., 2010; Silsbee & Rafikov,
2015), and gravitational secular perturbations may significantly change the
architectures of planetary systems, e.g. Kozai-Lidov oscillations coupled to
tidal friction for close binaries (Ngo et al., 2016; Fontanive et al., 2019).
To sum up, we adopt three criteria for excluding potential binaries or stars
in multi-star systems: 1. stars flagged as cool binaries in B2018; 2. stars
flagged with RUWE $\gtrsim$ 1.2 in B2020; 3. stars with contaminated Ks
magnitudes.
Figure 1: Evolutionary state classifications of all Kepler targets based on
flags described in Berger et al. (2018) and Berger et al. (2020). We find
that $\simeq$ 67.5% of Kepler targets (stars in the Kepler Input Catalogue)
are main-sequence stars: $\simeq$ 62.0% of Kepler targets are main-sequence
stars with RUWE $<$ 1.2 (blue), $\simeq$ 5.4% are main-sequence stars with
RUWE $\gtrsim$ 1.2 (red), and $\simeq$ 2% (purple) are main-sequence stars
flagged as cool binaries. Other evolved stars, including subgiants and red
giants, make up $\simeq$ 32.5% (green) of Kepler targets. We also mark Kepler
stars (yellow) around which Gaia-detected companions within 4 arcsec may be
binaries and contaminate secondary Ks magnitudes.
Fig 1 shows the evolutionary state classifications of all Kepler targets
based on the flags described in B2018 and B2020. Here we select probable
main-sequence stars according to B2018. The aim of B2020 is to rederive the
stellar properties utilizing Gaia DR2 parallaxes, homogeneous stellar g and
Ks photometry, and spectroscopic metallicity. B2020 flags evolved stars in
the RGB or clump stage but does not flag main-sequence stars, so we use the
main-sequence stars flagged in B2018, while we use B2020 to exclude stars
that are likely in multi-stellar systems according to RUWE and to whether the
$K_{\rm s}$ magnitudes of stars are contaminated. Although the binary
main-sequence stars identified in B2018 are not prominent in B2020, we
cautiously exclude the cool binaries listed in B2018. Of all stellar targets
in the Kepler Input Catalogue (KIC), we find that $\simeq$ 67.5% are
main-sequence stars: $\simeq$ 62.0% are main-sequence stars with RUWE $<$ 1.2
(blue), $\simeq$ 5.4% are main-sequence stars with RUWE $\gtrsim$ 1.2 (red),
and $\simeq$ 2% (purple) are cool binary main-sequence stars. Other evolved
stars such as subgiants and red giants make up $\simeq$ 32.5% (green) of
Kepler targets.
Table 1: Sample selection with data of B2020

Selection step | Star | Planet
---|---|---
Kepler DR25 | 199991 | 8054
Cross-matching with B2020 | 186301 | …
Main sequence | 119779 | …
Excluding cool binaries listed in B2018 | 116387 | …
Duty cycle (a) $\geq$ 0.6 and data span (b) $>$ 2 yr | 100594 | …
RUWE $<$ 1.2 | 92681 | 4197
Excluding stars with contaminated Ks-mag (c) | 77508 | 3579
Distance $\leq$ 2000 pc | 75778 | 3533
FGK stars: 3850 K $\leq T_{\rm eff}\leq$ 7500 K | 74002 | 3441
Disposition score (d) $\geq$ 0.9 | … | 1910

Notes. (a) Duty cycle is the fraction of data cadences within the span of
observations that contain valid data and contribute toward the detection of
transit signals. (b) Data span is the time elapsed in days between the first
and last cadences containing valid data. (c) Ks-mag flags such as
”BinaryCorr” and ”TertiaryCorr” indicate potential binarity, because
Gaia-detected companions within 4 arcsec of a given Kepler star may
contaminate secondary Ks magnitudes. (d) Disposition score indicates the
confidence in the KOI disposition.
Table 1 lists the numbers of stars and planets after every selection step. We
follow several steps: cross-matching with B2020 (186301 stars left),
selecting main-sequence stars (119779 stars left), excluding cool binaries
flagged in B2018 (116387 stars left), keeping only stars with duty cycle
$\geq$ 0.6 and data span $>$ 2 yr (100594 stars left) (Narang et al., 2018;
Yang et al., 2020), excluding stars with RUWE $\gtrsim$ 1.2 (92681 stars
left), and excluding stars with contaminated Ks-mag flags (77508 stars left).
Around these 77508 probable main-sequence single stars, there are 3579
corresponding Kepler Objects of Interest (KOIs) left. Furthermore, we exclude
stars with distances larger than 2 kpc and select FGK stars (3850 K–7500 K)
using the spectral type-temperature relationship of Pecaut & Mamajek (2013)
(74002 stars left). Finally, we follow the criterion in Mulders et al. (2018)
and select 1910 reliable planet candidates whose $Robovetter$ disposition
scores are larger than 0.9.
### 2.2 Stellar relative velocity
#### 2.2.1 The calculation of velocity dispersion and relative velocity
Gaia DR2 provides the most accurate astrometric data of Milky Way stars to
date. With these precise astrometric data, including the position, distance,
and proper motion of stars, we can easily calculate the velocity in both the
right ascension and declination directions, i.e. $\upsilon_{\rm ra}$ and
$\upsilon_{\rm de}$:

$\upsilon_{\rm i}=\frac{\mu_{\rm i}}{\pi}$ (1)

where $i=ra,dec$ denotes the direction of the stellar velocity, i.e. the
right ascension and declination directions, respectively, $\mu$ is the proper
motion, and $\pi$ is the parallax.
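For concreteness, a minimal Python sketch of Eq. (1) follows. Note that Eq.
(1) is written without an explicit unit conversion; with Gaia proper motions
in mas yr$^{-1}$ and parallaxes in mas, the conventional factor of 4.74
converts the ratio to km s$^{-1}$. The function name and example values are
our own.

```python
K = 4.74  # km/s per (mas/yr)/(mas): standard tangential-velocity factor

def sky_velocity(mu_mas_yr, parallax_mas):
    """Eq. (1): velocity in one sky direction (RA or DEC) from a Gaia
    proper-motion component [mas/yr] and parallax [mas], in km/s."""
    return K * mu_mas_yr / parallax_mas

# Hypothetical Gaia DR2 values for a single star.
v_ra = sky_velocity(5.2, 2.1)    # RA-direction velocity
v_dec = sky_velocity(-3.8, 2.1)  # DEC-direction velocity
```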
Because we aim to explore the correlations between the planet occurrence rate
and stellar relative velocity, we need to calculate the stellar velocity
relative to nearby stars. However, stars in the Milky Way have different
rotational speeds at different galactocentric distances, according to the
well-known Milky Way rotation curve (Sofue et al., 2009). For most targets in
the Kepler region, the distance from the center of the Milky Way is about
8–10 kpc, and the difference in rotational speed may be up to several tens of
kilometers per second (see the Milky Way rotation curve in Sofue et al.
(2009)). Thus, if we used all the stars in the Kepler region, a systematic
bias would be introduced into our calculation. Instead, when we calculate the
relative velocity of a given Kepler star, we select its nearby stars within
100 pc to compute the average velocity and the velocity dispersion of these
stars. The velocity dispersion is

$\sigma_{\rm i}\equiv\langle(\upsilon_{\rm i}-\langle\upsilon_{\rm
i}\rangle)^{2}\rangle^{1/2},$ (2)

where $\sigma_{\rm i}$ is the stellar velocity dispersion, the subscript
$i=ra,dec$ denotes the right ascension (RA) and declination (DEC) directions,
respectively, and $\upsilon_{\rm i}$ is the velocity in the RA or DEC
direction. Calculating the velocity dispersion from nearby stars minimizes
the effect of the stellar rotation speed varying with position in the Milky
Way.
#### 2.2.2 The sample classification via relative velocity
Here we define a quantity $q$ that describes the deviation of a star's
velocity from the average velocity of the stars around a given Kepler star:

$q\equiv\sqrt{\left(\boldsymbol{V}-\boldsymbol{\mu}\right)^{T}\boldsymbol{C}^{-1}\left(\boldsymbol{V}-\boldsymbol{\mu}\right)}$
(3)

$\boldsymbol{V}=\left(\begin{matrix}V_{\rm ra}\\\ V_{\rm
dec}\end{matrix}\right),\,\,\boldsymbol{\mu}=\left(\begin{matrix}\mu_{\rm
ra}\\\ \mu_{\rm dec}\end{matrix}\right)$ (4)

where $\boldsymbol{V}$ is the 2D velocity vector, $\boldsymbol{\mu}$ is the
average 2D velocity vector, and $\boldsymbol{C}$ is the covariance matrix of
$\boldsymbol{V}$. We divide the 74,002 single main-sequence stars in our
catalog into three groups: high-V stars (9754), i.e. stars with high relative
velocity compared with stars in their proximity; medium-V stars (28049), i.e.
stars with intermediate relative velocity; and low-V stars (36199), i.e.
stars with low relative velocity. As shown in Fig 2, KIC 757280 is a low-V
star with $0<q\leqslant 1$. The groups are defined as follows (a sketch of
the $q$ computation is given after this list):
* high-V stars: $q>2$;
* medium-V stars: $1<q\leqslant 2$;
* low-V stars: $0<q\leqslant 1$.
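The following minimal Python sketch evaluates Eqs. (2)-(4) for a single star:
the covariance matrix $\boldsymbol{C}$, whose diagonal generalizes the
dispersion of Eq. (2), is estimated from the 2D velocities of the neighbors
within 100 pc, and $q$ is the resulting Mahalanobis distance. The function
name and array layout are our own.

```python
import numpy as np

def q_value(v_star, v_neighbors):
    """Eq. (3): deviation of a star's 2D velocity (v_ra, v_dec) from
    the mean velocity of its neighbors within 100 pc.

    v_star      : array of shape (2,)
    v_neighbors : array of shape (N, 2)
    """
    mu = v_neighbors.mean(axis=0)            # average 2D velocity, Eq. (4)
    cov = np.cov(v_neighbors, rowvar=False)  # diagonal ~ Eq. (2) squared
    diff = v_star - mu
    return np.sqrt(diff @ np.linalg.inv(cov) @ diff)

# Classification thresholds from the list above:
# low-V: 0 < q <= 1, medium-V: 1 < q <= 2, high-V: q > 2.
```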
Figure 2: KIC 757280 as an example of how we classify stars into three groups
according to relative velocity. We select stars within 100 pc of KIC 757280
from our catalog; KIC 757280 itself is indicated with a black star. High-V
stars (blue circles) lie outside the green ellipse; medium-V stars (green
circles) lie between the red and green ellipses; low-V stars (red circles)
lie inside the red ellipse.
Additionally, previous studies of the solar neighborhood have found that the
stellar velocity dispersion correlates with spectral type or effective
temperature. More specifically, when the B-V color index is smaller than
0.61, the stellar velocity dispersion has a strongly positive correlation
with B-V; when B-V is larger than 0.61, based on Hipparcos data, the stellar
velocity dispersion reaches a plateau (Dehnen & Binney, 1998). Considering
these factors that may influence the calculation of the velocity dispersion,
we could have selected only nearby stars with similar effective temperature
(deviation of $T_{\rm eff}$ less than 500 K) within 100 pc of a given target
star to estimate the average velocity and dispersion. However, we do not add
this criterion in our calculation. On the one hand, this similar-temperature
criterion would reduce the number of stars available for calculating the
velocity dispersion, which would increase the statistical uncertainty. On the
other hand, the majority ($\sim$ 90%) of high-V stars are the same whether or
not we impose the temperature criterion. Therefore, the different definitions
of the velocity dispersion have only a limited influence on the
identification of high-V stars.
Figure 3: The distance and q-value distributions of Kepler stars. The upper
panel shows the distance distribution of Kepler stars. The lower ten panels
show the q-value distribution of Kepler stars at different distances,
increasing from left to right. Three colors (purple, blue, and green) show
the different ranges of stars selected to calculate the relative velocity of
a given star.
In this paper, we separate the Kepler stellar sample using the proper motions
of stars rather than 3D velocities, so there may be projection effects at
small distances. To test the robustness of the high-, medium-, and low-V
classification against different distance limits for nearby stars, we show
the distance and q-value distributions of Kepler stars. For a given star, we
select stars within 30 pc (green), 50 pc (blue), and 100 pc (purple) of that
star to calculate the relative velocity. As shown in Figure 3, the q-value
distributions of Kepler stars are nearly the same for the different ranges,
except for Kepler stars at distances of 1.8–2.0 kpc. Given that the number of
Kepler stars at 1.8–2.0 kpc is rather small compared to the total sample, we
neglect the differences due to our method of separating stars.
## 3 Statistical Results
Many previous works have studied correlations between stellar properties and
the planet occurrence rate. In this section, we control for the dependence on
other stellar properties and try to obtain a more convincing correlation
between the occurrence rate and the stellar relative velocity. In subsection
3.1, we show the correlations between stellar properties and relative
velocity. In subsection 3.2, we calculate the planet occurrence rate
according to the methods in Appendix B and compare our results with previous
studies to validate our sample selection and calculation. Because planets of
different sizes have different occurrence rates, we divide the whole sample
into several groups by planet size to carefully discuss the correlations
between planet occurrence rates and stellar relative velocities. In
subsection 3.3, we show the occurrence rates of planets with radii of 0.5–4
$R_{\oplus}$ around high-V, medium-V, and low-V stars, respectively. Because
stellar properties influence the calculation of the planet occurrence rate,
in subsection 3.4 we show the results after correcting the planet occurrence
rate. In subsection 3.5, we show the correlations between the stellar
relative velocities of planet hosts and the multiplicities and average
eccentricities of planetary systems.
### 3.1 Correlations between stellar properties and relative velocity
Of the 74002 Kepler single main-sequence stars, 9754 are high-V stars, 28049
are medium-V stars, and 36199 are low-V stars. The occurrence rate of
Kepler-like planets is anti-correlated with effective temperature (Howard et
al., 2012; Mulders et al., 2015; Yang et al., 2020) and positively correlated
with metallicity (Mulders et al., 2016; Zhu, 2019). To find a robust
correlation between the planet occurrence rate and stellar relative velocity,
we should try our best to exclude other factors influencing the planet
occurrence rate, e.g. stellar effective temperature and metallicity.
Therefore, we first discuss the correlations between stellar properties and
relative velocity. Fig 4 shows the probability distribution functions (PDFs)
of the stellar parameters of high-V, medium-V, and low-V stars.
Figure 4: The distribution of stellar properties of 9754 high-V stars, 28049
medium-V stars, and 36199 low-V stars. Panels (a)(b)(c)(d) show the
distributions of stellar mass, effective temperature, metallicity from B2020,
and metallicity from LAMOST DR4, respectively. We use the LAMOST DR4
metallicity from the second release of value-added catalogs of the LAMOST
Spectroscopic Survey of the Galactic Anticentre (LSS-GAC DR2) (Xiang et al.,
2017). Green denotes high-V stars, blue medium-V stars, and purple low-V
stars. Green, blue, and purple dashed lines represent the average values of
the stellar properties of stars with different relative velocities; the
correspondingly colored shaded regions show the standard deviations.
Fig 4 shows the distributions of the stellar parameters of high-V, medium-V,
and low-V stars. Panel (a) shows the distribution of stellar mass. Different
colors denote different relative velocities: green is high-V stars, blue is
medium-V stars, and purple is low-V stars. Solid lines show the probability
distribution functions; dashed lines show the average values of the stellar
properties; and the correspondingly colored shaded regions show the standard
deviations. The average mass of high-V stars is 0.93${}_{\rm-0.16}^{+0.16}$
M⊙, where 0.16 is the standard deviation of the stellar mass. The average
mass of medium-V stars is 0.98${}_{\rm-0.19}^{+0.19}$ M⊙, and the average
mass of low-V stars is 1.02${}_{\rm-0.24}^{+0.24}$ M⊙. Considering that the
average uncertainty in stellar mass is relatively small, about 7%, we can
roughly conclude that the average stellar mass tends to decrease with
increasing relative velocity; equivalently, lower-mass stars prefer to move
with higher relative velocity. Similarly, from panel (b), the distribution of
effective temperature, we conclude that stars with higher relative velocity
prefer a lower effective temperature. More specifically, with 3% $T_{\rm
eff}$ errors, or $\sim 112$ K, the average effective temperature of high-V
stars is lower than that of low-V stars at the $\sim$ 1.5 $\sigma$ level.
This is easy to understand given the strong positive correlation between
stellar mass and effective temperature from the well-known empirical
mass-luminosity relation. Because previous studies argued that the planet
occurrence rate increases with decreasing stellar mass or effective
temperature (Howard et al., 2012; Mulders et al., 2015; Yang et al., 2020),
we should account for the potential influence of stellar mass or effective
temperature in the following calculations of the planet occurrence rate.
Panel (c) shows the distribution of stellar metallicity from B2020. The
average metallicity of high-V stars is -0.08${}_{\rm-0.18}^{+0.18}$, that of
medium-V stars is 0.04${}_{\rm-0.14}^{+0.14}$, and that of low-V stars is
0.03${}_{\rm-0.13}^{+0.13}$. Although the average metallicity decreases
slightly with increasing stellar relative velocity, considering the
relatively large standard error of the metallicity, stars with different
relative velocities have similar stellar metallicities based on the B2020
data (with differences of 0.4 $\sigma$).
Dong et al. (2014) pointed out that the metallicities of Kepler stars (Huber
et al., 2014) have significant systematic biases. To check whether the
metallicities calculated in B2020 have systematic biases, we cautiously use
the LAMOST data as a comparison. Since LAMOST was designed for a stellar
spectral survey of the Milky Way, it is ideal for measuring the metallicity
of stars in the Kepler region, although LAMOST does not cover the entire
Kepler stellar sample. After cross-matching LAMOST DR4 with our selected
stellar sample, we find $\sim$ 22000 stars (28%) with LAMOST DR4
metallicities, among which there are about 2219 high-V stars, 7294 medium-V
stars, and 11377 low-V stars. If the cross-matched catalog had significantly
different fractions of stars with different relative velocities from our
selected stellar sample, it would suggest that high-V, medium-V, and low-V
stars are not homogeneously distributed in the Kepler region. Therefore,
before using the Kepler-LAMOST cross-matched data, we first check the
fractions of stars with different relative velocities in both the
Kepler-LAMOST cross-matched catalog and our selected stellar catalog. We find
similar fractions in the two catalogs, i.e. high-V stars: 13% (B2020) vs 11%
(LAMOST DR4); medium-V stars: 38% (B2020) vs 35% (LAMOST DR4); and low-V
stars: 49% (B2020) vs 54% (LAMOST DR4). Therefore, we can roughly conclude
that stars with different relative velocities are homogeneously distributed,
and we can use the metallicity obtained from LAMOST DR4 to represent the
whole main-sequence sample in the Kepler region.
Panel (d) shows the distribution of stellar metallicity obtained from LAMOST
DR4. The average metallicity of high-V stars is -0.25${}_{\rm-0.34}^{+0.34}$,
that of medium-V stars is -0.09${}_{\rm-0.26}^{+0.26}$, and that of low-V
stars is 0.04${}_{\rm-0.25}^{+0.25}$. The standard error of the metallicity
in LAMOST DR4 is about 0.1 dex. Similar to the analysis above, we find that
high-V stars prefer lower metallicity than low-V stars, with a $\sim$ 1.7
$\sigma$ difference.
The correlation between stellar metallicity and relative velocity shows
significantly different trends in panels (c) and (d) (more details in
Appendix A). The difference may be attributed to the stars without
spectroscopic metallicity constraints. In the following subsections, we
discuss the influence of metallicity from both tables (the metallicity of
B2020 and the metallicity of LAMOST DR4) on the correlations between the
planet occurrence rate and stellar relative velocity.

To sum up, we find correlations between stellar properties and stellar
relative velocity: high-V stars, with higher relative velocity, have smaller
average stellar mass, lower average effective temperature, and lower average
stellar metallicity (LAMOST DR4).
### 3.2 Comparison with previous results
Figure 5: Relation between the normalized planet occurrence rate
($f_{\rm occ}$/dlnRp) and planet radius (Rp). The black bars show the
occurrence rates estimated using the method in Appendix C. The green, blue,
and purple bars show the results calculated in Dong & Zhu (2013) and Mulders
et al. (2018).
Previous work shows that the planet occurrence rate correlates with both
planet radius and orbital period. In this section, we choose planets with
periods less than 400 days to estimate the occurrence rate. Furthermore, to
minimize the influence of the bin size, we calculate the normalized planet
occurrence rate (i.e. $f_{\rm occ}$/dlnRp, the number of planets per star
normalized by the bin size in lnRp space), as sketched below.
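The normalization itself is a one-line operation; the following sketch (with
hypothetical bin edges and per-bin rates) shows the intended bookkeeping.

```python
import numpy as np

def normalized_occurrence(f_occ_per_bin, radius_bin_edges):
    # Divide each per-bin occurrence rate by the bin width in ln(Rp)
    # so that bins of different sizes are directly comparable.
    dlnRp = np.diff(np.log(radius_bin_edges))
    return f_occ_per_bin / dlnRp

# Hypothetical radius bin edges (Earth radii) and per-bin rates.
edges = np.array([0.5, 1.0, 2.0, 4.0, 10.0, 20.0])
f_occ = np.array([0.12, 0.30, 0.25, 0.05, 0.02])
f_occ_dlnRp = normalized_occurrence(f_occ, edges)
```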
Fig 5 shows the relation between the normalized planet occurrence rate and
planet radius (Rp). Our planet occurrence rates are nearly the same as those
in Mulders et al. (2018), but slightly higher than the planet occurrence
rates calculated by Dong & Zhu (2013). Our sample includes 74002 stars and
1910 planets, while the sample used by Dong & Zhu (2013) includes $\sim$
120000 stars and 2300 planet candidates; thus, our sample has a slightly
higher average number of planets per star, i.e. (1910/74002)/(2300/120000)
$\sim$ 1.35.
We find a bi-modal structure in the range 1–4 $R_{\rm\oplus}$ in the
distribution of planet radius and planet occurrence rate, which is highly
consistent with the well-known 1.8 $R_{\oplus}$ planet-radius gap found by
Fulton et al. (2017). After the second peak around $\sim$ 2.5–3 $R_{\oplus}$,
the normalized planet occurrence rate declines rapidly with increasing planet
radius. This rapid decline of the planet occurrence rate may be associated
with the planet desert caused by the radiation of host stars. Additionally,
there is an ambiguous plateau in the range of $\sim$ 4–10 $R_{\oplus}$.
Although there are only a few points, our work (black dots), Dong & Zhu
(2013) (purple dots), and Mulders et al. (2018) (blue dots) all show a
similar relation. The plateau indicates the different characteristics of
giant planets versus Kepler-like planets, i.e. super-Earths and sub-Neptunes.
Kepler has not detected many planet candidates in some ranges of orbital
periods, especially cold gas giants, so the occurrence rates of planets with
longer periods are significantly underestimated. The Gaia spacecraft may
detect more cold Jupiters with long periods through the astrometric method.
More cold Jupiters can be used to check whether there is a plateau for gas
giants in the plane of planet occurrence rate versus planet radius, and to
improve our knowledge of the formation and evolution of cold Jupiters.
### 3.3 Occurrence rates of planets with 0.5–4 $R_{\oplus}$
Figure 6: The relation between the normalized planet occurrence rate
($f_{\rm occ}$/dlnP) and stellar relative velocity, as a function of planet
orbital period. The red, yellow, and blue symbols show planets around stars
with high, medium, and low relative velocity, respectively. Panel (a) shows
the relation for planets with radii of 1–4 $R_{\oplus}$; panel (b) shows the
relation for planets with radii of 0.5–4 $R_{\oplus}$.
To investigate the correlation between the occurrence rate and planet orbital
period, we choose planets with radii of 1–4 $R_{\oplus}$ and 0.5–4
$R_{\oplus}$ and calculate the normalized planet occurrence rate in different
ranges of orbital period ($f_{\rm occ}$/dlnP). In Fig 6, we show the relation
between planet occurrence rates and stellar relative velocity. Panel (a)
shows the results for planets with radii of 1–4 $R_{\oplus}$, and panel (b)
shows the results for planets with radii of 0.5–4 $R_{\oplus}$. In both
panels, no matter whether the planet hosts are high-V, medium-V, or low-V
stars, the planet occurrence rate increases with orbital period within 10
days ($\sim$ 0.1 au for a solar-mass star). For orbital periods 10 $<$ P $<$
400 days, the planet occurrence rate remains constant as a plateau. This
broken power law of the occurrence rate is consistent with Howard et al.
(2012); Dong & Zhu (2013); Mulders et al. (2015). Several mechanisms may
explain the break around 10 days. The break in the planet occurrence rate at
$\sim$ 10 days can be attributed to the truncation of protoplanetary disks by
their host stars' magnetospheres at the co-rotation radius (Mulders et al.,
2015; Lee & Chiang, 2017): under the in-situ formation scenario, less solid
material left in the protoplanetary disk within 10 days results in a lower
planet occurrence rate. Additionally, for solar-mass stars, an orbital period
P $\sim$ 10 days, i.e. a semi-major axis a $\sim$ 0.1 au, is near the
location of the silicate sublimation front (Flock et al., 2019). Both the
in-situ formation and migration scenarios can explain this break at $\sim$ 10
days: inward-drifting pebbles can accumulate and form planets at the pressure
maximum a short distance outside the dust sublimation front (Flock et al.,
2019), or, for inward migration of multiple planets, the first planet is
trapped at the inner disk edge and halts the migration of the other planets
(Cossou et al., 2014).
Besides the broken power law, we find interesting correlations between planet
occurrence rates and stellar relative velocity. In panel (a), planets with
radii of 1–4 $R_{\oplus}$ around high-V stars have a lower occurrence rate
than those around low-V stars, with a 0.85 $\sigma$ difference on average. In
panel (b), planets within 1 day with radii of 0.5–4 $R_{\oplus}$ around
high-V stars have a slightly higher occurrence rate than planets around low-V
stars, while planets outside 1 day around high-V stars still have a lower
occurrence rate on average, i.e. a 0.59 $\sigma$ difference compared with
planets around low-V stars. The difference in the occurrence rates of planets
within 1 day between panels (a) and (b) indicates that short-period
sub-Earth-sized planets (0.5–1 $R_{\oplus}$) prefer to orbit high-V stars.
Figure 7: The orbital period and planet radius distribution. Here we select
planets with disposition score $>$ 0.9. The upper green PDF shows the
distribution of the orbital period, while the right red PDF shows the
distribution of the planet radius. The contour lines (yellow to red) in the
middle panel represent detection completeness $\eta$ of 90%, 50%, 10%, and 1%
(for a planet of given radius and orbital period, the completeness is the
number of Kepler main-sequence single stars around which the planet could be
detected over the total number of Kepler main-sequence single stars in the
different V bins). The definition of completeness is also given in Appendix
B.
Here we define the detection completeness $\eta$ of a given planet as the
fraction of stars around which the planet could be detected (more details in
Appendix B). In our calculation, a small $\eta$ results in large
uncertainties in the planet occurrence rate under the assumption of a Poisson
distribution (see the sketch after this paragraph). To minimize the influence
of planets with low detection completeness, we set another cutting criterion,
i.e. we select planets with completeness $\eta>$ 0.1. Fig 7 shows the
detection completeness of highly reliable planet candidates, whose
$Robovetter$ disposition scores are larger than 0.9, in the plane of planet
radius and orbital period for stars with different relative velocities. For
planets with radii of 1–4 $R_{\oplus}$, we select planets with period P $<$
100 days, while for planets with radii of 0.5–1 $R_{\oplus}$, we select
planets within 30 days. Fig 8 shows the occurrence rates of planets around
high-V, medium-V, and low-V stars after the detection-completeness cut, i.e.
the orbital-period cut. For super-Earths with radii of 1–4 $R_{\oplus}$
around high-V stars, the occurrence rate is significantly lower than that of
planets around low-V stars, with a 2.5 $\sigma$ difference. For sub-Earths
with radii of 0.5–1 $R_{\oplus}$ around high-V stars, the occurrence rate is
slightly higher (1.5 $\sigma$) than that of planets around low-V stars. These
results are consistent with those in Fig 6. However, we have not yet excluded
the influence of stellar properties such as stellar mass and metallicity on
the planet occurrence rate, i.e. the planet occurrence rate is
anti-correlated with effective temperature (Howard et al., 2012; Mulders et
al., 2015; Yang et al., 2020) and positively correlated with metallicity
(Mulders et al., 2016; Zhu, 2019). In Appendix C, we introduce two methods to
minimize the influence of stellar properties, i.e. prior correction and
posterior correction.
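Appendix B is not reproduced here, but a common form of such a
completeness-weighted estimate, consistent with the description above, is
sketched below; the Poisson error propagation and the example numbers are our
assumptions, not the authors' exact code.

```python
import numpy as np

def occurrence_rate(eta_detected, n_stars):
    """Inverse-detection-efficiency estimate: each detected planet with
    completeness eta stands in for 1/eta planets; the uncertainty assumes
    Poisson statistics for the detections."""
    eta = np.asarray(eta_detected, dtype=float)
    f_occ = np.sum(1.0 / eta) / n_stars
    sigma = np.sqrt(np.sum(1.0 / eta**2)) / n_stars
    return f_occ, sigma

# Hypothetical completeness values (all > 0.1) for detections
# around the 9754 high-V stars.
f, df = occurrence_rate([0.8, 0.55, 0.3, 0.95], n_stars=9754)
```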
Figure 8: Occurrence rates of planets around high-V, medium-V, and low-V
stars. The left panel shows the result for super-Earth- and sub-Neptune-sized
planets (1–4 $R_{\oplus}$, P $<$ 100 days). The right panel shows the result
for sub-Earth-sized planets (0.5–1 $R_{\oplus}$, P $<$ 30 days).
### 3.4 Planet occurrence rate after corrections
Figure 9: The correlation between the planet occurrence rate and stellar
relative velocity. We show the results for planets with radii $R_{\rm p}$
$\in$ 1–4 $R_{\oplus}$ and orbital periods P $<$ 100 days. Blue squares show
the unrevised planet occurrence rate. Purple circles show the planet
occurrence rate after posterior correction. Orange triangles show the planet
occurrence rate after prior correction utilizing the metallicity in B2020.
Green triangles show the planet occurrence rate after prior correction
utilizing the metallicity in LAMOST DR4.
Table 2: Values in Fig. 9
Case | high-V stars | medium-V stars | low-V stars
---|---|---|---
Unrevised, $f_{\rm occ}$ | 0.545 | 0.645 | 0.665
Unrevised, $\sigma_{\rm f_{\rm occ}}$ | 0.042 | 0.027 | 0.024
Unrevised, differences | | 2.47 $\sigma$ |
Posterior correction, $f_{\rm occ}$ | 0.545 | 0.594 | 0.619
Posterior correction, $\sigma_{\rm f_{\rm occ}}$ | 0.042 | 0.025 | 0.023
Posterior correction, differences | | 1.55 $\sigma$ |
Prior correction B2020, $f_{\rm occ}$ | 0.545 | 0.541 | 0.650
Prior correction B2020, $\sigma_{\rm f_{\rm occ}}$ | 0.042 | 0.036 | 0.042
Prior correction B2020, differences | | 1.78 $\sigma$ |
Prior correction LAMOST, $f_{\rm occ}$ | 0.383 | 0.556 | 0.654
Prior correction LAMOST, $\sigma_{\rm f_{\rm occ}}$ | 0.062 | 0.076 | 0.087
Prior correction LAMOST, differences | | 2.55 $\sigma$ |
Prior correction LAMOST + Posterior correction, $f_{\rm occ}$ | 0.383 | 0.512 | 0.610
Prior correction LAMOST + Posterior correction, $\sigma_{\rm f_{\rm occ}}$ | 0.062 | 0.070 | 0.081
Prior correction LAMOST + Posterior correction, differences | | 2.23 $\sigma$ |
Note. — We list the values of the planet occurrence rate ($f_{\rm occ}$) and
its errors ($\sigma_{\rm f_{\rm occ}}$). The confidence level is defined as
the difference between the occurrence rates of planets around high-V and
low-V stars. We list not only the values of the cases named ”Unrevised”,
”Posterior correction”, ”Prior correction B2020”, and ”Prior correction
LAMOST”, but also the case named ”Prior correction LAMOST + Posterior
correction”. As mentioned in subsection 3.4, for the case of ”Prior
correction LAMOST”, because we are unable to select nearest samples that pass
the KS two-sample test (i.e. P $>$ 0.05), we combine the prior correction and
the posterior correction to get a convincing value.
We use both posterior correction and prior correction to exclude the
influence of stellar properties on the planet occurrence rate and to confirm
whether stellar relative velocity is another factor correlated with the
planet occurrence rate. Because of the difference between the metallicities
in B2020 and LAMOST DR4, we use both metallicities for the prior correction,
i.e. Prior correction B2020 and Prior correction LAMOST. Because the
empirical correlations between the planet occurrence rate and stellar
properties differ for planets of different sizes, we show the results after
correction separately for planets of different radii, i.e. corrections of the
occurrence rates of planets with radii of 1–4 $R_{\oplus}$, 0.5–1
$R_{\oplus}$, and 4–20 $R_{\oplus}$.
#### 3.4.1 Correction of occurrence rate of planets with radius of 1–4
$R_{\oplus}$
Figure 10: The cumulative distribution functions (CDFs) of stellar mass
(panel (a)), radius (panel (b)), and metallicity of B2020 (panel (c)), for
high-V, medium-V, and low-V stars, respectively. We select medium-V and low-V
stars with the nearest stellar properties to each given high-V star. Green,
blue, and purple show high-V stars and the selected medium-V and low-V stars,
respectively. In each panel, we also list the average values of the stellar
properties and the p values of the KS two-sample test. H&M denotes the KS
two-sample test between high-V stars and the selected medium-V stars; H&N
denotes that between high-V stars and the selected low-V stars. Panels
(d)(e)(f) are analogous to panels (a)(b)(c), except that they use stars with
metallicity from LAMOST DR4 instead of B2020.
In Figure 9, we show the results for planets with radii of 1–4 $R_{\oplus}$
after correction. Different colors show different cases. Blue squares show
the unrevised planet occurrence rate; purple circles show the planet
occurrence rate after posterior correction; and orange and green triangles
show the planet occurrence rate after prior correction utilizing the
metallicity of B2020 and of LAMOST DR4, respectively. All these results show
an anti-correlation between stellar relative velocity and the occurrence rate
of planets with radii of 1–4 $R_{\oplus}$.

In the posterior correction, we use the empirical $f_{\rm occ}-T_{\rm eff}$
and $f_{\rm occ}-[Fe/H]$ correlations to correct for the influence of $T_{\rm
eff}$ and $[Fe/H]$. Here the stellar effective temperatures are taken from
B2020 and the metallicities from LAMOST DR4. Medium-V and low-V stars
originally have higher average $T_{\rm eff}$ and higher average metallicity
than high-V stars. To correct the planet occurrence rate for these different
distributions of stellar properties, we use efficiencies calculated with
Equations (C5) and (C6), i.e. $C_{\rm medium\Rightarrow high}$ = 0.92 and
$C_{\rm low\Rightarrow high}$ = 0.93. Before the posterior correction, the
difference between the occurrence rates of planets around high-V and low-V
stars is about 2.47 $\sigma$; after the correction, the difference is about
1.55 $\sigma$, which indicates that stellar relative velocity is likely
another factor influencing the planet occurrence rate. In the metallicity
correction, we use the data from figure 3 of Zhu (2019). Because those data
include large planets, the empirical Equation (C4) overestimates the positive
correlation between the planet occurrence rate and metallicity; if data
excluding large planets were used, the $f_{\rm occ}-[Fe/H]$ correlation would
be even weaker. Consequently, the difference between the occurrence rates of
planets around high-V and low-V stars after posterior correction would be a
little larger than 1.55 $\sigma$.
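The posterior correction itself then amounts to rescaling by these
efficiencies. A minimal sketch, using the unrevised rates from Table 2,
reproduces the posterior-correction row of that table:

```python
# Efficiencies quoted above (Equations (C5) and (C6) of Appendix C).
C_medium_to_high = 0.92
C_low_to_high = 0.93

# Unrevised occurrence rates from Table 2.
f_occ = {"high": 0.545, "medium": 0.645, "low": 0.665}

f_corrected = {
    "high": f_occ["high"],                         # reference group
    "medium": f_occ["medium"] * C_medium_to_high,  # -> ~0.594
    "low": f_occ["low"] * C_low_to_high,           # -> ~0.619
}
```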
In the prior correction B2020, we use the NearestNeighbors function in scikit-learn (Pedregosa et al., 2011) to select 12856 medium-V and 12908 low-V stars whose stellar properties are the nearest to those of the 9754 high-V stars. Correspondingly, we select 225 planets with radii of 1–4 $R_{\oplus}$ around medium-V stars, 241 such planets around low-V stars, and 38 such planets around high-V stars. In Fig 10, panels (a), (b), and (c) show the cumulative distribution function (CDF) of stellar mass, stellar radius, and metallicity of B2020, respectively. Although the KS two-sample tests show low p-values (i.e. $P_{\rm KS}\leqslant 0.05$), the D values are small (i.e. the largest difference between the two cumulative distribution functions is lower than 5%). This, together with the small differences in average stellar mass, stellar radius, and metallicity (lower than 5%), indicates that the distributions are statistically similar (i.e. even if we applied another posterior correction, the results would be nearly the same). After the prior correction B2020, the difference in occurrence rates of planets around high-V and low-V stars is about 1.78 $\sigma$.
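This KS comparison can be reproduced with scipy; the arrays below are random placeholders standing in for the matched stellar properties, so only the call pattern, not the numbers, reflects our analysis.

```python
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)
# Placeholder draws standing in for, e.g., the stellar masses of the
# high-V sample and the nearest-neighbour-matched medium-V sample.
mass_high = rng.normal(1.0, 0.2, 9754)
mass_medium = rng.normal(1.0, 0.2, 12856)

D, p = ks_2samp(mass_high, mass_medium)
# For large samples a low p can coexist with a small D (the largest
# CDF gap), which is why the text inspects both statistics.
print(f"D = {D:.3f}, p = {p:.3g}")
```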
In the prior correction LAMOST, we only select 2835 medium-V and 2621 low-V stars whose stellar properties are the nearest to those of the 2219 high-V stars. Correspondingly, we select 53 planets with radii of 1–4 $R_{\oplus}$ around medium-V stars, 57 such planets around low-V stars, and 38 such planets around high-V stars. Although the occurrence rates after the prior correction LAMOST have relatively large uncertainties due to the limited numbers of stars and planets, the difference in occurrence rates of planets around high-V and low-V stars is significant (2.55 $\sigma$). We use NearestNeighbors to select stars with similar metallicity; however, these three stellar groups still show a significant difference in the distribution of metallicity, although it is smaller than that in Fig 4 (panels (d), (e), and (f) in Fig 10). If we apply another posterior correction, the difference declines to 2.23 $\sigma$. The values are listed in Table 2.
The occurrence rates of planets ($R_{\rm p}$ $\in$ 1–4 $R_{\oplus}$, P $<$ 100 days), both with and without correction, show an anti-correlation between planet occurrence rate and stellar relative velocity, which indicates that stellar relative velocity is likely another factor influencing the planet occurrence rate. For the posterior correction and the prior correction B2020, the differences in occurrence rates of planets around high-V and low-V stars are 1.55 $\sigma$ and 1.78 $\sigma$, respectively, while for the prior correction LAMOST the difference is even more significant (2.55 $\sigma$). Different methods of correction show results with a similar trend; therefore, the anti-correlation between planet occurrence rates and stellar relative velocities is relatively robust.
#### 3.4.2 Correction of occurrence rate of planets with radius of 0.5–1
$R_{\oplus}$
Figure 11: The correlation between the planet occurrence rate and stellar relative velocity. We show the results for planets with radii $R_{\rm p}$ $\in$ 0.5–1 $R_{\oplus}$ and orbital periods P $<$ 30 days. Blue squares show the unrevised planet occurrence rate. Purple circles show the occurrence rate after the posterior correction. Orange triangles show the occurrence rate after the prior correction utilizing the metallicity of B2020.
In addition to the correction of the occurrence rate of planets with radii of 1–4 $R_{\oplus}$, we also correct the occurrence rate of planets with radii of 0.5–1 $R_{\oplus}$. In Fig 11, blue squares show the unrevised planet occurrence rate, purple circles show the occurrence rate after the posterior correction, and orange and green triangles show the occurrence rate after the prior correction utilizing the metallicity of B2020 and of LAMOST DR4, respectively.
In the posterior correction (purple circles in Fig 11), we use the same methods and the same sample selection as in Fig 9. After the correction, the difference in occurrence rates of planets around high-V and low-V stars is 1.73 $\sigma$, which is slightly higher than the difference for the unrevised data (1.5 $\sigma$).
In the prior correction B2020, we also select 12908 medium-V and 12865 low-V stars (with planets in the same parameter ranges as described in Fig 11) whose stellar properties are the nearest to those of the 9765 high-V stars. After the correction, the difference in occurrence rates of planets around high-V and low-V stars is 1.53 $\sigma$, which is similar to the case of the unrevised data.
Similarly, with the prior correction LAMOST, the difference in occurrence rates of sub-Earth-sized planets (0.5–1 $R_{\oplus}$, P $<$ 30 days, and $\eta$ $>$ 0.1) around high-V and low-V stars is only 0.77 $\sigma$. This small difference may be attributed to the small number of planets selected in this method of correction. The occurrence rate of sub-Earth-sized planets around medium-V stars is smaller than that around low-V stars, which means the correlation between the occurrence rate of sub-Earth-sized planets and stellar relative velocity may not be a simple anti- or positive correlation; we therefore cannot conclude that the occurrence rate of sub-Earth-sized planets decreases with decreasing stellar relative velocity. However, the occurrence rate of sub-Earth-sized planets around high-V stars is higher than that around both medium-V and low-V stars, no matter which correction we use. Thus, this conclusion is relatively robust.
#### 3.4.3 Correction of occurrence rate of planets with radius of 4–20
$R_{\oplus}$
Figure 12: The occurrence rate of planets with radii of 4–20 $R_{\oplus}$ around stars with different relative velocities. The occurrence rates are calculated for orbital periods shorter than 400 days. Blue squares show the unrevised planet occurrence rate. Purple circles show the occurrence rate after the posterior correction. Orange triangles show the occurrence rate with the prior correction B2020.
For planets with $R_{\rm p}$ $\in$ 1–4 $R_{\oplus}$ and P $<$ 100 days, the planet occurrence rate is anti-correlated with stellar relative velocity, while for planets with $R_{\rm p}$ $\in$ 0.5–1 $R_{\oplus}$ and P $<$ 30 days, high-V stars have a higher occurrence rate than low-V stars. Here we also investigate the correlation between the occurrence rate of planets with radii of 4–20 $R_{\oplus}$ and stellar relative velocity. For relatively small planets with radii of 0.5–4 $R_{\oplus}$, the dependence of the occurrence rate on metallicity is much weaker than for gas giants (Zhu, 2019; Wu, 2019; Kutra & Wu, 2020). Therefore, we replace the empirical Equation (C4) fitted from Zhu (2019) with the empirical $f_{\rm occ}-[Fe/H]$ relation proposed by Johnson et al. (2010),
$\footnotesize f(M_{\rm*},[Fe/H])=0.07\pm 0.01\times\left(\frac{M_{\rm*}}{M_{\rm\odot}}\right)^{1.0\pm 0.3}\times 10^{(1.2\pm 0.2)[Fe/H]}.$ (5)
We use planets with radii of 4–20 $R_{\oplus}$ because of the limited number of planets with radii of 6–20 $R_{\oplus}$. Here, we assume that planets with radii of 4–20 $R_{\oplus}$ share a similar $f_{\rm occ}-[Fe/H]$ relation with Jovian-sized planets. Under this assumption, we follow the methods in Appendix C to obtain the planet occurrence rate after correction.
Several works have investigated the correlation between the occurrence of hot Jupiters and stellar velocities. For example, Hamer & Schlaufman (2019) showed that hot Jupiter (HJ) host stars have a smaller Galactic velocity dispersion than a similar population of stars without HJs, which implies that stars with a smaller Galactic velocity dispersion (similar to the low-V stars in this paper) may have a higher occurrence rate of HJs. Interestingly, Winter et al. (2020) found that HJs prefer to exist around host stars in phase space overdensities.
Here we simply regard planets with radii of 4–20 $R_{\oplus}$ and P $<$ 10 days as HJs, and planets with radii of 4–20 $R_{\oplus}$ and 10 $<$ P $<$ 400 days as warm Jupiters (WJs) or cold Jupiters (CJs). Fig 12 shows the correlation between the occurrence rate of Jupiter-sized planets and stellar relative velocity before and after the correction. Panel (a) shows the case of HJs and panel (b) shows the case of WJs or CJs.
Before the correction, the occurrence rates of HJs around medium-V and low-V stars are higher than that around high-V stars at the 2.99 $\sigma$ and 2.46 $\sigma$ levels, respectively. Because the occurrence rate of Jupiter-sized planets has a significantly positive correlation with metallicity, both the posterior correction and the prior correction reduce the occurrence rate of HJs around medium-V and low-V stars when high-V stars are set as the control sample. After the posterior correction, the occurrence rates of HJs around medium-V and low-V stars are higher than that around high-V stars at the 2.29 $\sigma$ and 1.36 $\sigma$ levels, respectively; after the prior correction B2020, the differences reduce to 0.98 $\sigma$ and 0.56 $\sigma$. In panel (b), we can see that before the correction, the occurrence rates of WJs or CJs are nearly the same across stellar relative velocities, yet after the posterior correction, the occurrence rate of WJs or CJs around high-V stars is slightly higher than that around medium-V and low-V stars.
Although the occurrence rate of WJs and CJs is not significantly different, the enhanced occurrence rate of HJs around stars with lower relative velocities (similar to stars in phase space overdensities) is consistent with previous works, e.g. Winter et al. (2020). Therefore, our results imply that a clustering environment may play an important role in the formation and evolution of HJs.
### 3.5 Statistical results of multiplicity and eccentricity
Recently, Longmore et al. (2021) found that stars in high stellar phase space density environments (overdensities) have a factor of 1.6–2.0 excess in the number of single-planet systems compared to stars in low stellar phase space density environments (i.e. field stars). Their result suggests that stellar clustering may play an important role in shaping the properties of planetary systems, e.g. multiplicity, average eccentricity, and mutual inclination. Here, we investigate the correlations between the multiplicities and average eccentricities of planetary systems and stellar relative velocities. We use two definitions to describe the multiplicity of a planetary system: one is the fraction of multi-planet systems over the fraction of single-planet systems, the other is the average number of planets per system. We will discuss the average number of planets per system, the fraction of multi-planet systems over the fraction of single-planet systems, and the average eccentricity of planetary systems, respectively, in the following subsections.
#### 3.5.1 Average planet numbers per planetary system
Figure 13: The average number of planets per planetary system for planet hosts with different relative velocities. Blue symbols show the results for planets with koi score $>$ 0.9, while orange symbols show the results for planets without the koi score filter. The error bars are calculated according to the Poisson distribution.
The average number of planets per planetary system, $\bar{N}_{\rm p}$, is
$\bar{N}_{\rm p}=\frac{\sum_{i=1}^{n_{\rm host}}N_{i}}{n_{\rm host}},$ (6)
where $n_{\rm host}$ is the total number of planet hosts in the stellar sample and $N_{i}$ is the number of transiting planets around a given planet host. We use the Poisson distribution to calculate the error. In Fig 13, considering the relatively large error of $\bar{N}_{\rm p}$ for low-V planet hosts, whether using the koi score filter or not, the average number of planets per planetary system is nearly unchanged with stellar relative velocity. It seems that the average number of planets per planetary system has no significant correlation with stellar relative velocity. However, because planets with high mutual inclinations may not be detected by transits, our calculated value of the average number of planets per planetary system is underestimated. In other words, "single" planet systems may not be single, and multi-planet systems may have other planets that have not been detected due to the limitation of the observation time. Furthermore, if the stellar relative velocity is correlated with this underestimation, the statistical results may also be biased.
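A minimal sketch of Equation (6), assuming the error comes from Poisson counting of the total number of detected planets; the input counts are hypothetical.

```python
import numpy as np

def average_multiplicity(planets_per_host):
    """Equation (6): average number of transiting planets per planet
    host, with a Poisson error on the total planet count (assumed)."""
    n_host = len(planets_per_host)
    total = np.sum(planets_per_host)
    return total / n_host, np.sqrt(total) / n_host

# Hypothetical example: five hosts with 1, 1, 2, 1, and 3 planets.
print(average_multiplicity(np.array([1, 1, 2, 1, 3])))
```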
Therefore, in order to minimize observational selection effects, in the next subsection we will discuss another definition of the multiplicity of planetary systems, i.e. the fraction of multi-planet systems over the fraction of single-planet systems, which is a relative value.
#### 3.5.2 The fraction of multi-planet systems over the fraction of single-planet systems
Figure 14: The correlations between the multiplicities of planetary systems, the average eccentricities of planets, and stellar relative velocities. Blue hollow squares show the uncorrected result for planets with koi score $>$ 0.9, while orange hollow squares show the uncorrected result for planets without the koi score filter. The solid symbols show the results of the prior correction utilizing the metallicity of B2020. The red hollow triangles show the average eccentricities of planetary systems, i.e. the eccentricity-velocity correlation. The error bars are calculated according to the Poisson distribution. We select planets with the following criteria: (i) $R_{\rm p}$ $\in$ [0.5–20] $R_{\oplus}$; (ii) P $<$ 400 days; (iii) detection completeness $\eta$ $>$ 0.1.
Here we define multi-planet systems as stars that host more than one Kepler transiting planet, and single-planet systems as stars that host only one Kepler transiting planet. We calculate $f_{\rm multi}/f_{\rm single}$, a relative value which can avoid the influence of the different detection efficiencies of the stellar samples:
$\frac{f_{\rm multi}}{f_{\rm single}}=\frac{n_{\rm multi}/n_{\rm total}}{n_{\rm single}/n_{\rm total}},$ (7)
where $n_{\rm multi}$, $n_{\rm single}$ and $n_{\rm total}$ are the numbers of multi-planet systems, single-planet systems, and total planet systems in the three stellar sub-samples.
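Since $n_{\rm total}$ cancels in Equation (7), the ratio reduces to $n_{\rm multi}/n_{\rm single}$; the sketch below also propagates Poisson errors on the two counts, which is our assumption for how the quoted uncertainties are built.

```python
def multi_to_single_ratio(n_multi, n_single):
    """Equation (7): f_multi / f_single = n_multi / n_single after the
    common normalization n_total cancels."""
    ratio = n_multi / n_single
    # Poisson errors on both counts, propagated in quadrature (assumed).
    err = ratio * ((1.0 / n_multi) + (1.0 / n_single)) ** 0.5
    return ratio, err

# Illustrative counts only:
print(multi_to_single_ratio(n_multi=60, n_single=360))
```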
In practice, some planets with high mutual inclinations cannot be discovered by transits, so these "single" planet systems are not really single. However, because we use a relative value to discuss the correlation between the multiplicity of a planetary system and stellar relative velocity, even if the transit method has some unknown preference on stellar relative velocity, this relative value can minimize such a secondary influence on the correlation. Thus we can draw a relatively robust conclusion.
In Fig 14, with three selection criteria, i.e. (i) $R_{\rm p}$ $\in$ [0.5–20] $R_{\oplus}$, (ii) P $<$ 400 days, and (iii) detection completeness $\eta$ $>$ 0.1, we show the correlation between $f_{\rm multi}/f_{\rm single}$ and stellar relative velocity both before and after the correction. Both the hollow symbols (before correction) and the solid symbols (after correction) show that planetary systems around high-V stars have a higher multiplicity than those around low-V stars. The case with the koi score filter shows a significantly higher $f_{\rm multi}/f_{\rm single}$ than the case without. Comparing the blue hollow squares with the blue solid squares, we can easily find that after the correction the difference between the multiplicities of planetary systems around high-V and low-V stars is even larger, i.e. 1.3 $\sigma$ before the correction and 1.8 $\sigma$ after the correction.
Figure 15: The cumulative distribution function (CDF) of stellar mass (panel (a)), radius (panel (b)), and metallicity of B2020 (panel (c)) for high-V, medium-V, and low-V planet hosts, respectively. We select medium-V and low-V planet hosts with the nearest stellar properties to each given high-V planet host. Green, blue, and purple lines show the high-V and the selected medium-V and low-V planet hosts, respectively. Panels (d), (e), and (f) are similar to panels (a), (b), and (c); the difference lies in the metallicity, i.e. panels (d), (e), and (f) use planet hosts with metallicities from LAMOST DR4, while panels (a), (b), and (c) use planet hosts with metallicities from B2020.
The different stellar properties in the different stellar samples may influence our results. In order to minimize such impact, here we also use the NearestNeighbors function in scikit-learn (Pedregosa et al., 2011) to choose the two medium-V or low-V planet hosts with the nearest values of stellar mass, radius, and metallicity of B2020 for every selected high-V planet host, i.e. the prior correction B2020. Taking the case without the koi score filter as an example, after selecting the nearest medium-V or low-V planet hosts for every high-V planet host, we calculate $f_{\rm multi}/f_{\rm single}$: for medium-V, $f_{\rm multi}/f_{\rm single}$ = 0.164${}_{\rm-0.024}^{+0.024}$; for low-V, $f_{\rm multi}/f_{\rm single}$ = 0.167${}_{\rm-0.024}^{+0.024}$. These two values are nearly the same as those calculated with the total medium-V and low-V planet hosts. In Fig 15, we use the KS two-sample test to compare the distributions of stellar mass, radius, and metallicity for the high-V and the selected medium-V and low-V planet hosts, obtaining p-values of 0.87 or 0.59, 0.99 or 0.95, and 0.66 or 0.13, respectively. Therefore we confirm that the distributions are statistically similar. The solid symbols clearly show that high-V planet hosts have a higher $f_{\rm multi}/f_{\rm single}$ than low-V planet hosts after the prior correction B2020. Here we do not list the results of the prior correction LAMOST because of the limited number of selected planet hosts with LAMOST DR4 metallicities. If we do not take into account the large uncertainty due to the small numbers, after the prior correction with the metallicity of LAMOST, the fractions of multi-planet systems over the fraction of single-planet systems (koi score $>$ 0.9) are 0.5, 0.44, and 0.3 for high-V, medium-V, and low-V planet hosts, respectively. The KS two-sample test also shows that for the prior correction LAMOST (Fig 15), our selected medium-V and low-V planet hosts have statistically similar distributions to the high-V planet hosts.
#### 3.5.3 The average eccentricity of planets
Here we use a robust general method to derive the eccentricity distribution, based on the statistics of the transit duration, i.e. the time for a transiting planet to cross the stellar disk (Ford et al., 2008). Kepler's second law states that eccentric planets vary their velocity throughout their orbit, which results in a different transit duration relative to the circular-orbit case. Using equation (1) in Xie et al. (2016), we infer the eccentricity distribution from the statistics of $t_{\rm dur}/t_{\rm dur,0}$ ($t_{\rm dur,0}$ being the transit duration for a circular orbit). We can then obtain the average eccentricity of planets around stars with different relative velocities.
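A sketch of the forward-modelling step, assuming the standard eccentric-transit relation $t_{\rm dur}/t_{\rm dur,0}=\sqrt{1-e^{2}}/(1+e\sin\omega)$ (our reading of equation (1) in Xie et al. (2016)): trial eccentricity distributions are drawn, the model statistics of the duration ratio are computed, and the distribution that best matches the observed statistics is kept. The Rayleigh draw below is an illustrative choice, not the distribution adopted in the paper.

```python
import numpy as np

def duration_ratio(e, omega):
    """Transit duration relative to the circular case,
    t_dur / t_dur0 = sqrt(1 - e^2) / (1 + e * sin(omega))."""
    return np.sqrt(1.0 - e**2) / (1.0 + e * np.sin(omega))

rng = np.random.default_rng(1)
# Trial eccentricities and random arguments of periastron.
e = np.clip(rng.rayleigh(scale=0.1, size=100000), 0.0, 0.95)
omega = rng.uniform(0.0, 2.0 * np.pi, size=100000)
ratio = duration_ratio(e, omega)
print(ratio.mean(), ratio.std())  # compare with the observed statistics
```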
The eccentricity results are also shown in Fig 14. The average eccentricity of planets increases with decreasing stellar relative velocity, showing a clear anti-correlation with the trend of multiplicity. More specifically, planets around low-V stars have a larger average eccentricity than planets around high-V stars at the 2 $\sigma$ level. The eccentricity dichotomy states that Kepler singles are on eccentric orbits with $\bar{e}\approx 0.3$, while the multiples are on nearly circular ($\bar{e}=0.04_{\rm-0.04}^{+0.03}$) and coplanar ($\bar{i}=1.4_{\rm-1.1}^{+0.8}$ degrees) orbits (Xie et al., 2016). Therefore, our result is consistent with the eccentricity dichotomy.
Recently, Longmore et al. (2021) found that single planetary systems prefer to exist around stars in overdensities, using the full Winter et al. (2020) sample. Here we find, for the first time, an anti-correlation between eccentricity and stellar relative velocity. Again, under the assumption that low-V stars are similar to stars in overdensities and high-V stars are similar to field stars, our findings also imply a significant influence of stellar clustering on the architecture of planetary systems.
## 4 Discussion
In this section, we mainly discuss the potential interpretations of these statistical results.
Recently, several works have pointed out that stellar clustering may play an important role in the formation and evolution of planets (Winter et al., 2020; Kruijssen et al., 2020; Longmore et al., 2021; Chevance et al., 2021). Furthermore, Liu et al. (2013) found that most planetary systems ejected from open clusters maintain their original planetary architectures, while the planetary systems that remain in clusters may be significantly influenced by dynamical mechanisms. Here we try to establish the correspondence between our separations (i.e. high-V, medium-V, and low-V stars) and those classifications (i.e. stars in phase space overdensities and field stars) qualitatively. Stars in phase space overdensities are stars that have similar positions and 3D velocities. In our sample selection, high-V, medium-V, and low-V stars have similar positions but different relative 2D velocities. Therefore, we can assume that the low-V stars are similar to stars in phase space overdensities and the high-V stars are similar to field stars.
With this assumption, our results for Jupiter-sized planets are consistent with previous works. For example, Winter et al. (2020) found that HJs are almost exclusively found in overdensities, and we find a slight enhancement of the occurrence rate of HJs around low-V stars. Additionally, we also find that high-V stars have a slightly higher occurrence rate of WJs or CJs than low-V stars.
There are two simple ways to explain our statistical results from the origin of Jupiter-sized planets. One is that high-V stars have a higher in-situ formation rate of WJs or CJs and a lower in-situ formation rate of HJs. The other is ex-situ formation (including disk migration and high-eccentricity migration), i.e. high-V stars have a lower efficiency of converting CJs into HJs than low-V stars. Here we focus on high-eccentricity migration: dynamical perturbations can easily trigger differences in the eccentricity and multiplicity of planetary systems, so high-eccentricity migration may be a more reasonable scenario to explain several of our statistical results.
In subsection 4.1, we will discuss several mechanisms that can trigger high-eccentricity migration. In subsections 4.2 and 4.3, we will use our results on small planets, multiplicity, and eccentricity to test these mechanisms.
### 4.1 High-eccentricity migration correlated to stellar relative velocities
According to the source of the dynamical perturbation, we classify the high-eccentricity migration of HJs into three types: stellar binary Kozai (Muñoz et al., 2016), stellar flyby Kozai (Hamers & Tremaine, 2017; Rodet et al., 2021), and planet-planet interactions, namely planet secular coplanar (Petrovich, 2015), planet-planet Kozai (Petrovich & Tremaine, 2016), and planet-planet scattering (Beaugé & Nesvorný, 2012). Because our difference in the occurrence rate of HJs is correlated with stellar relative velocity, we prefer scenarios involving external dynamical perturbations, i.e. stellar binary Kozai and stellar flyby Kozai. We also do not focus on the planet secular coplanar mechanism, since it does not require a proto-eccentric planet or an external source of perturbation. Mechanisms such as planet-planet Kozai and planet-planet scattering may operate after a preceding external perturbation; e.g. a stellar flyby can strongly excite the orbital elements of planets in the outermost orbit, and the effect propagates to the entire planetary system through secular planet-planet interactions (Cai et al., 2017).
Hamer & Schlaufman (2019) found that hot Jupiter host stars have a smaller Galactic velocity dispersion than a similar population of stars without hot Jupiters. According to the age-velocity dispersion relation (AVR) (Strömberg, 1946; Wielen, 1977; Nordström et al., 2004; Yu & Liu, 2018), hot Jupiter host stars may be on average younger than field stars. For our stellar separation, we find that high-V stars are on average older than low-V stars (see Fig 16). Therefore, tidal migration of planets, which is correlated with age, may be another scenario to interpret our results for HJs.
We prefer the dynamical perturbation scenario over age-dependent tidal dissipation. One reason is the large uncertainty of the age measurements. Furthermore, some clues suggest that low-V stars may experience more environmental influence. Tarricq et al. (2021) showed that the heating rate of the open cluster population is significantly lower for the vertical component compared to field stars, and the ages of clusters indicate only a weak age dependence of the known moving groups. Additionally, both simulations and observations suggest that a high fraction of comoving stars with small physical and 3D velocity separations are conatal (Kamdar et al., 2019). These works imply that star groups can keep low relative velocities for timescales as long as several Gyr. Thus the influence of stellar age on the relative velocity dispersion is considered less significant in this paper. Additionally, the dynamical perturbation scenario can easily explain our results on eccentricity and multiplicity (more details in section 4.2).
Figure 16: The distribution of stellar ages listed in Berger et al. (2020). Green, blue, and purple show high-V, medium-V, and low-V stars, respectively. Solid lines are probability distribution functions. Dashed lines show the average age of the stars. The colored regions show the standard deviation of the stellar age.
#### 4.1.1 Binary induced high-eccentricity migration
Recently, Li et al. (2020) have shown that binaries are an important ingredient in understanding the importance of dynamical perturbations to planetary systems. Encounter rates for binaries may be larger than for single stars. In typical open clusters, nearly 10 per cent of Sun–Jupiter pairs acquire a stellar companion during scatterings. Such simulation results imply that binary-induced high-eccentricity migration is common in clusters.
Although there is no evidence that low-V stars have a denser birth environment (e.g. dense clusters) than high-V stars, we can qualitatively estimate the effective interaction rate for disruption under their current stellar environment. As described in Li & Adams (2015), the effective interaction rate for disruption is
$\Gamma=n_{*}\langle\sigma\rangle\langle v\rangle,$ (8)
where $n_{*}$ is the mean density of stars in the environment, $\langle v\rangle$ is the mean relative velocity between systems, and $\langle\sigma\rangle$ is the cross-section for the given mode of disruption. Because of the velocity dependence of the cross-sections (i.e. $\langle\sigma\rangle_{v}\equiv\langle\sigma v\rangle/\langle v\rangle$), high-V stars, with higher speeds and lower effective interaction rates, are less affected by passing flybys than low-V stars.
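A toy illustration of the velocity dependence in Equation (8), assuming a hard-encounter scaling $\sigma(v)\propto v^{-2}$ for the disruption cross-section (our assumption, chosen only to show the direction of the effect): the rate then scales as $\Gamma\propto n_{*}/v$, so faster stars see fewer effective encounters.

```python
def effective_rate(n_star, v_rel, sigma0=1.0, v0=1.0):
    """Equation (8) with sigma(v) = sigma0 * (v0 / v)**2, a toy
    hard-encounter cross-section scaling (assumption)."""
    sigma = sigma0 * (v0 / v_rel) ** 2
    return n_star * sigma * v_rel  # Gamma ~ n / v

# Relative comparison only; units cancel in the ratio. Under this
# scaling a 60 km/s star sees about a third of the effective
# encounter rate of a 20 km/s star.
print(effective_rate(1.0, 60.0) / effective_rate(1.0, 20.0))
```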
Similarly, according to equation (11) in Li et al. (2020), the rate at which Sun–Jupiter pairs acquire a stellar companion during scatterings also depends on velocity. Therefore, high-V stars, with higher speeds, acquire companions with lower efficiency than low-V stars, and consequently have a lower occurrence rate of HJs induced by binary-companion perturbations. Additionally, Jupiters may be directly ejected during such close encounters, although these ejected Jupiters make up only 1% of cases. Such ejection events may happen more frequently around low-V stars due to the velocity dependence, which would also support our statistical results for Jupiter-sized planets.
#### 4.1.2 Flyby induced high-eccentricity migration
Li et al. (2020) argued that encounter rates for binaries may be larger than for single stars, i.e. binaries may be the dominant source of perturbation during flyby events. Yet single flyby events still account for a significant fraction. Hamers & Tremaine (2017) proposed that HJs could be driven by high-eccentricity migration in globular clusters: they found that $\sim$ 2% of giant planets are converted to HJs for an intermediate stellar density of $10^{4}$ pc${}^{-3}$, i.e. flyby-induced high-eccentricity migration (Rodet et al., 2021), and half of these giant planets may be ejected from their original systems. Similar to the analysis of the binary-induced high-eccentricity migration scenario, high-V stars with higher relative velocities will have a lower rate of both giant-planet ejection and of HJ formation from CJs than low-V stars. Thus, flyby-induced high-eccentricity migration may be another channel to explain our statistical results for Jupiter-sized planets (under the assumption that stellar relative velocities correlate with stellar clustering).
#### 4.1.3 Planet-planet interaction
In addition to dynamical perturbations from external sources, planet-planet interactions, e.g. planet-planet Kozai (Petrovich & Tremaine, 2016) and planet-planet scattering (Beaugé & Nesvorný, 2012), can also contribute to the formation of HJs. However, these two mechanisms have preconditions: planet-planet Kozai requires initial mutual inclinations between the two giant planets, while planet-planet scattering requires three giant planets in a system that usually goes unstable. Here we focus on planet-planet interactions after flyby events in clusters. For example, Cai et al. (2017) found that during flyby events the orbital elements are most strongly excited in the outermost orbit and the effect propagates to the entire planetary system through secular evolution. Such planet-planet interactions after flyby events, or after some other dynamical perturbation, could be another route to HJ formation. Due to the additional conditions required compared to the other two scenarios, we consider planet-planet interactions a secondary influence. However, planet-planet interactions may play an important role in explaining the statistical results for smaller planets and for the multiplicity and eccentricity of planetary systems.
### 4.2 Explaining the results of multiplicity and eccentricity
Perturbations from external sources can excite the mutual inclinations and eccentricities of planetary systems. Cai et al. (2017) found that clusters with higher stellar density produce higher average eccentricities of planetary systems, with a larger eccentricity dispersion, than clusters with lower stellar density. Additionally, they also found an eccentricity-multiplicity dichotomy similar to the observational results of Xie et al. (2016) and consistent with our statistical results.
If we assume the same velocity dispersion and the same mean relative velocity for stars in clusters with different stellar densities, the rate of flyby events will be dominated by the stellar density (see Equation 8). Therefore, the higher the rate of flyby events, the larger the mutual inclinations and average eccentricities of planetary systems. Although we cannot link the stellar relative velocity of stars to the stellar density of their parental clusters directly, we can use the rate of flyby events to connect these two parameters. As discussed above, high-V stars, with higher relative velocities, have a lower effective rate of flyby events. Winter et al. (2020) suggested that stars may remain in a co-moving state for Gyrs after the disruption of their clusters, i.e. at a relatively higher stellar density compared with other field stars. Therefore, our eccentricity and multiplicity results imply that low-V stars may be related to a higher density of their parental clusters, or at least may have experienced more dynamical perturbation events.
### 4.3 Explaining the results of occurrence rate of small planets
High-eccentricity migration may contribute to close-in planets (i.e. hot super-Earths and hot Neptunes) (Muñoz et al., 2016). If high-eccentricity migration, whether flyby induced or binary induced, is the dominant channel for forming such close-in planets (1–4 $R_{\oplus}$, P $<$ 100 days), we would expect results similar to those for HJs, i.e. high-V stars have a lower occurrence rate of close-in super-Earths (P $<$ 30 days) and a potentially higher occurrence rate of cold super-Earths (P $>$ 400 days). Additionally, the scenario of planet-planet interactions acting on outer small planets after a dynamical perturbation may be another channel to explain our statistical results. Eccentricity growth via scattering is limited to an epicyclic velocity corresponding to the escape velocity from the surface of the planet (e.g. Ida et al. (2013); Petrovich et al. (2014)),
$\begin{split}e_{\rm scatter}&\lesssim\frac{\sqrt{2GM_{\rm p}/R_{\rm p}}}{2\pi a/P}\\\ &=0.2\left(\frac{M_{\rm p}}{0.5M_{\rm Jup}}\right)^{1/2}\left(\frac{2R_{\rm Jup}}{R_{\rm p}}\right)^{1/2}\left(\frac{P}{3\,{\rm days}}\right)^{1/3}.\end{split}$ (9)
Once the eccentricity reaches this value, the cross-section for collisions exceeds the cross-section for scattering, and planets tend to merge rather than scatter during close encounters (Dawson & Johnson, 2018). Given that sub-Earths are smaller than super-Earths, cold sub-Earths are more likely to merge than super-Earths during planet-planet scattering. This mechanism may contribute to the decline of the occurrence rate of sub-Earths with decreasing stellar relative velocity.
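Evaluating Equation (9) in its scaling form makes the comparison concrete; the input values for a cold sub-Earth below are hypothetical.

```python
def e_scatter(m_p_mjup, r_p_rjup, period_days):
    """Equation (9): epicyclic eccentricity limit set by the planet's
    surface escape velocity, in the quoted scaling form."""
    return (0.2 * (m_p_mjup / 0.5) ** 0.5
                * (2.0 / r_p_rjup) ** 0.5
                * (period_days / 3.0) ** (1.0 / 3.0))

# Hypothetical cold sub-Earth (~1.6 M_Earth, ~1.1 R_Earth, P = 100 d);
# once e approaches this limit, collisions win over further scattering.
print(e_scatter(m_p_mjup=0.005, r_p_rjup=0.1, period_days=100.0))
```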
Additionally, sub-Earths may be excited more easily than larger planets, especially in single planetary systems, so, qualitatively speaking, small planets may reach higher mutual inclinations. Some single planetary systems may therefore host a sub-Earth with a relatively high mutual inclination that cannot be detected by transits. Such excitation of inclination may also contribute to the lower occurrence rate of sub-Earths around low-V stars.
## 5 Conclusion
Stars that experience extreme dynamical interactions may have their planet formation and evolution significantly influenced. To explore the influence of such a dynamical history on planet formation and evolution, we choose the relative velocity as a diagnostic of its correlation with the planet occurrence rate, multiplicity, and average eccentricity of planetary systems.
We carefully select 74002 main-sequence single Kepler stars (FGK type) and 1910 reliable planets with disposition score larger than 0.9 in subsection 2.1. Then, we calculate the two-dimensional relative velocities of these selected stars based on Gaia DR2 in subsection 2.2. We divide the stars into three groups, i.e. high-V stars, medium-V stars, and low-V stars, according to their relative velocities. There are some correlations between stellar properties and stellar relative velocity, i.e. high-V stars, with higher relative velocities, have smaller average stellar mass, lower average effective temperature, and lower average stellar metallicity. Considering the correlations between stellar properties and relative velocity (e.g. stellar effective temperature and stellar metallicity), and the influence of stellar properties on the planet occurrence rate, we utilize two methods to correct for these selection biases, i.e. the prior correction and the posterior correction.
After these corrections, we calculate the occurrence rates of planets around high-V, medium-V, and low-V stars, and find some interesting correlations between stellar relative velocity and planet occurrence rate, as well as multiplicity and eccentricity, in section 3. The main statistical results are listed in the following:
* •
high-V stars have a lower occurrence rate of super-Earth and sub-Neptune-sized planets (1–4 $R_{\oplus}$, P $<$ 100 days, $\eta$ $>$ 0.1) than low-V stars on average.
* •
high-V stars have a higher occurrence rate of sub-Earth-sized planets (0.5–1 $R_{\oplus}$, P $<$ 30 days, $\eta$ $>$ 0.1) than low-V stars on average.
* •
high-V stars have a slightly lower occurrence rate of HJs (4–20 $R_{\oplus}$, P $<$ 10 days) than low-V stars on average, while for WJs or CJs (4–20 $R_{\oplus}$, 10 $<$ P $<$ 400 days), high-V stars have a slightly higher occurrence rate than low-V stars on average.
* •
The multiplicity of planetary systems increases with increasing stellar relative velocity, while the average eccentricity of planets shows an anti-correlation with stellar relative velocity, consistent with the eccentricity-multiplicity dichotomy.
In section 4, we discuss several scenarios to explain our statistical results. Considering the age-velocity relation, i.e. that older stars tend to have larger velocity dispersions, the enhancement of HJs around low-V stars is consistent with previous studies. High-eccentricity migration may be a possible mechanism to explain the formation of HJs, because such dynamical mechanisms may be related to stellar clustering, whether binary induced (Li et al., 2020) or flyby induced (Rodet et al., 2021). Cluster evolution coupled to high-eccentricity migration may be a likely channel to explain the results for Jupiter-sized planets.
Furthermore, external dynamical perturbations, e.g. close encounters in clusters, can easily excite mutual inclinations and eccentricities, which may account for the eccentricity-multiplicity dichotomy (Cai et al., 2017). Interestingly, our findings on the multiplicity of planetary systems and the average eccentricity of planets may be related to clustering environments (see also Longmore et al. (2021), whose results are consistent with ours).
Stellar clustering may also play an important role in the formation and evolution of smaller planets, e.g. super-Earths and sub-Neptunes (Kruijssen et al., 2020; Chevance et al., 2021; Longmore et al., 2021). Similar to the channel of HJ formation, the enhancement of super-Earths and sub-Neptunes around low-V stars may be related to high-eccentricity migration, to which at least some of the close-in planets can be attributed. The lower occurrence rate of sub-Earth-sized planets around low-V stars might be caused by higher mutual inclination angles, in which case the transit method can hardly discover them. Yet whether our findings about planets with radii of 0.5–4 $R_{\oplus}$ are correlated with stellar clustering requires more observational and theoretical work.
In the future, with TESS and PLATO, more planets will be discovered. A larger sample of exoplanets will help to confirm the correlation between occurrence rate and stellar relative velocity. More planets in different clusters will also help us understand the essential influences of cluster environments on planet formation and evolution.
Fortunately, with the data releases from Gaia, more giant planets can be detected at relatively larger semi-major axes compared with current transiting planets. These longer-period planets can not only extend our knowledge of giant planet formation but also validate our results, i.e. the correlation between giant planet occurrence rate and stellar relative velocity.
We thank the anonymous referee for helpful comments and feedback. We also
thank Dr. Andrew Winter and Ji-Wei Xie for helpful comments and suggestions.
This work is supported by the National Natural Science Foundation of China
(Grant No. 11973028, 11933001, 11803012, 11673011), and the National Key R&D
Program of China (2019YFA0706601). This work is also supported by the civil spaceflight advanced research project D030201 of the National Defense Science and Engineering Bureau (the technology of Space Telescope Detecting Exoplanet and Life). This work made use of the stellar properties catalog of LAMOST DR4.
Guoshoujing Telescope (the Large Sky Area Multi-Object Fiber Spectroscopic
Telescope LAMOST) is a National Major Scientific Project built by the Chinese
Academy of Sciences. Funding for the project has been provided by the National
Development and Reform Commission. LAMOST is operated and managed by the
National Astronomical Observatories, Chinese Academy of Sciences. This
research made use of the cross-match service provided by CDS, Strasbourg. This
research has made use of the NASA Exoplanet Archive, which is operated by the
California Institute of Technology, under contract with the National
Aeronautics and Space Administration under the Exoplanet Exploration Program.
## Appendix A The difference between metallicity of B2020 and metallicity of
LAMOST DR4
Figure A1: Comparison between the stellar metallicities listed in LAMOST DR4 and B2020. Panel (a) shows the distributions of metallicity listed in LAMOST DR4 and B2020; the blue and purple colors show the data of LAMOST DR4 and B2020, respectively. The solid lines are the distributions of metallicity, the dashed lines are the average values, and the colored regions show the standard deviations. Panel (b) is a scatter diagram of the stellar metallicity from LAMOST DR4 (x-axis) against the metallicity from B2020 (y-axis). Panel (c) shows the difference between the stellar metallicity from LAMOST DR4 and that from B2020. We bin the stellar metallicity from LAMOST DR4 in steps of 0.1 dex (the average standard error of the LAMOST DR4 metallicity). The error bars are the standard deviations of the B2020 metallicity.
In Fig A1, panel (a) shows the distributions of stellar metallicity from LAMOST DR4 (blue) and B2020 (purple). The average stellar metallicity of LAMOST DR4 is $-0.078_{\rm-0.28}^{+0.28}$, which is comparable with that of B2020, $-0.061_{\rm-0.21}^{+0.21}$. However, the stellar metallicity of B2020 has a higher fraction of values around zero. This is clearer in panel (b), which shows the scatter diagram, where the x-axis shows the metallicity ([Fe/H]) of LAMOST DR4 and the y-axis shows the metallicity of B2020. Most of the values are distributed around the line y = x; however, there is an abnormal branch around $[Fe/H]_{\rm B2020}=0$. Berger et al. (2020) mentioned that metallicities outside the range of [-0.5, 0.5] (i.e. [Fe/H] $<$ -0.5 or [Fe/H] $>$ 0.5) have larger uncertainties. Consequently, this large uncertainty of the input metallicity and the uncertainty of the model will result in different distributions of stellar metallicity (e.g. LAMOST DR4 versus B2020) in some specific sub-samples, although the overall distributions of metallicity are nearly the same. In panel (c), we bin the LAMOST DR4 metallicity within 0.1 dex, which is close to the standard error of the LAMOST DR4 metallicity. The error bar is the standard deviation of the metallicity from the two catalogs in each bin. The dashed line fits well for metallicity $\in$ [-0.5, 0.5], which means the stellar metallicities in this range are consistent in both catalogs, while for metallicities outside the range of [-0.5, 0.5], LAMOST DR4 and B2020 show a significant difference.
For stars that do not have spectroscopic metallicity constraints ($\sim$120,000), Berger et al. (2020) used a prior assumption that those stars have solar metallicity with a standard deviation of $\sim$0.20 dex. Thus, for those stars without constraints, the metallicity of B2020 will concentrate near zero, whereas for stars having LAMOST metallicity constraints, the metallicity from B2020 should be similar to that from LAMOST DR4. LAMOST DR4 provides several catalogs of stellar metallicity from different pipelines; in our paper, we use the revised metallicities from Xiang et al. (2017), which may differ from the metallicities of B2020. Different values of metallicity may lead to different correlations; therefore, in section 3, we discuss the influence of the metallicities from both catalogs on the correlations between planet occurrence rate and stellar relative velocity.
## Appendix B Planet occurrence rate
The planet occurrence rate is the average number of planets per star. We follow Mulders et al. (2015) to calculate the planet occurrence rate. Here we take into account the actual observation time of every selected Kepler star and calculate the signal-to-noise ratio and detection efficiency of planets with long orbital periods. Apart from that, we derive the formula for the occurrence rate of planets around stars with different relative velocities.
The detection efficiency of a planet differs depending on which star it orbits. Before calculating the planet occurrence rate, we should first model the signal-to-noise ratio of a given planet around each of the selected Kepler stars. The stellar noise present in a light curve, the so-called Combined Differential Photometric Precision (CDPP; Christiansen et al., 2012), is time-varying. Here we use the robCDPP (Burke et al., 2015), because the non-robust rms CDPP (rmsCDPP) statistic typically reported by the Q1–Q16 pipeline data products can be biased. Some works use a Poisson distribution to fit the noise of stars; we instead follow Mulders et al. (2015) and assume the noise of stars follows a decaying power law. We use the 3, 6, and 12 hour robCDPP data to fit the function between the stellar noise $\sigma_{\rm*}$ and the transit duration timescale $t$:
$\sigma_{\rm*}=\sigma_{\rm LC}\left(\frac{t}{t_{\rm LC}}\right)^{cdpp_{\rm
index}},$ (B1)
where $\sigma_{\rm LC}$ is the normalized stellar noise in the long cadence mode and $t_{\rm LC}$ (1765.5 s) is the long cadence integration time. For a planet with a given orbital period and planet radius, whether it can be detected is determined by the signal-to-noise ratio (SNR). The noise is the stellar noise $\sigma_{\rm*}$; the signal can be simply taken as the transit depth $\delta$,
$\delta=\left(R_{\rm p}/R_{\rm*}\right)^{2},$ (B2)
where $R_{\rm p}$ is the planet radius and $R_{\rm*}$ is the stellar radius. The stellar noise is evaluated at the transit duration $t_{\rm dur}$, and the signal is approximately proportional to the square root of the number of transits $n$,
$n=\frac{t_{\rm obv}}{P},$ (B3)
where $t_{\rm obv}$ is the observation time of a given Kepler star and P is the orbital period of the planet. $t_{\rm obv}$ is written as
$t_{\rm obv}={\rm dutycycle}\times{\rm dataspan}.$ (B4)
The signal-to-noise ratio can then be written as
$SNR=\frac{\delta n^{0.5}}{\sigma_{\rm*}\left(t_{\rm dur}\right)}.$ (B5)
The planet transit duration $t_{\rm dur}$ is written as
$t_{\rm dur}=\frac{PR_{\rm*}\sqrt{1-e^{2}}}{\pi a},$ (B6)
where $R_{\rm*}$ is the stellar radius and $e$ is the planet orbital eccentricity. The semi-major axis $a$ can be written as
$a=\sqrt[3]{\frac{GM_{\rm*}P^{2}}{4\pi^{2}}}.$ (B7)
Using this method, we can correct for the systematic increase in detection efficiency for planets with small orbital periods. Here we do not take into account the impact parameter $b$. The calculation of the transit duration needs the orbital eccentricity; unfortunately, there are few planetary systems with measured eccentricities. However, because the eccentricities of most planets are less than 0.3, the overall difference in transit duration caused by eccentricity is not larger than 5%. Thus we simply fix the average eccentricity of Kepler planetary systems to 0.1, which is in the range of eccentricities given by Moorhead et al. (2011) (0.1–0.25).
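A compact sketch tying Equations (B1)-(B7) together, in SI units; the normalized noise $\sigma_{\rm LC}$ and the cdpp index would come from the robCDPP fit described above, so the values passed in would be placeholders.

```python
import numpy as np

G = 6.674e-11  # gravitational constant, SI units

def transit_snr(P, R_p, R_star, M_star, sigma_lc, cdpp_index,
                dutycycle, dataspan, e=0.1, t_lc=1765.5):
    """Model SNR of Equations (B1)-(B7). P, dataspan, and t_lc in
    seconds; radii in m; M_star in kg; sigma_lc is the fitted
    normalized noise."""
    a = (G * M_star * P**2 / (4 * np.pi**2)) ** (1 / 3)      # (B7)
    t_dur = P * R_star * np.sqrt(1 - e**2) / (np.pi * a)     # (B6)
    sigma = sigma_lc * (t_dur / t_lc) ** cdpp_index          # (B1)
    delta = (R_p / R_star) ** 2                              # (B2)
    n = dutycycle * dataspan / P                             # (B3)-(B4)
    return delta * np.sqrt(n) / sigma                        # (B5)
```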
In the Kepler pipeline, confirming a potential planet candidate requires observing three transits. Here we follow the formula in Beaugé & Nesvorný (2013), where this efficiency increases linearly with the number of transits. The efficiency determined by the number of transits, $f_{\rm n}$, can be written as
$\begin{split}t_{\rm obv}\leq 2P&:f_{\rm n}=0,\\\ 2P<t_{\rm obv}<3P&:f_{\rm n}=\left(t_{\rm obv}/P-2\right),\\\ t_{\rm obv}\geq 3P&:f_{\rm n}=1.\end{split}$ (B8)
The Kepler pipeline defines a transit signal when the SNR is larger than 7.1. Although this selection criterion can exclude many false positive signals, it may also exclude some potentially real signals with low signal-to-noise ratios because of the limited observation time. Here we follow Mulders et al. (2015), in which the detection efficiency $f_{\rm eff}$ depends on the SNR and is assumed to be linear for SNR in the range of 6 to 12:
$\begin{split}\rm{SNR}\leq 6&:f_{\rm eff}=0,\\\ 6<\rm{SNR}\leq 12&:f_{\rm
eff}=\frac{{SNR}-6}{6},\\\ \rm{SNR}>12&:f_{\rm eff}=1.\end{split}$ (B9)
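The two piecewise efficiencies of Equations (B8) and (B9) translate directly into code:

```python
def f_n(t_obv, P):
    """Equation (B8): efficiency from requiring three observed transits."""
    if t_obv <= 2 * P:
        return 0.0
    if t_obv < 3 * P:
        return t_obv / P - 2.0
    return 1.0

def f_eff(snr):
    """Equation (B9): detection efficiency, linear between SNR 6 and 12."""
    if snr <= 6.0:
        return 0.0
    if snr <= 12.0:
        return (snr - 6.0) / 6.0
    return 1.0
```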
Then, we can calculate the number of Kepler stars around which a given planet could be detected, i.e. the stellar number $N_{\rm*}$,
$N_{\rm*}\left(q,R_{\rm p},P\right)=\sum_{i=1}^{N_{\rm star}}\left(f_{\rm eff,i}\cdot f_{\rm n,i}\right),$ (B10)
where we round $N_{\rm*}$ to an integer. $N_{\rm*}$ is a function of $q$, the planet radius $R_{\rm p}$, and the orbital period $P$.
The detection completeness is $\eta=N_{\rm*}\left(q,R_{\rm p},P\right)/N_{\rm star}\left(q\right)$, where $N_{\rm star}$ is the total number of selected Kepler main-sequence single stars in a specific range of relative stellar velocity (i.e. with a given $q$). The detection completeness is defined per planet: the detection completeness of a given planet is the fraction of stars around which we could detect the planet.
Besides the efficiencies related to the number of transits and the SNR, when we calculate the planet occurrence, the transit probability caused by the geometric configuration of a planetary system must also be taken into account. The formula for the transit probability is
$f_{\rm geo}=\frac{R_{\rm p}+R_{\rm*}}{a\left(1-e^{2}\right)},$ (B11)
where $\left(1-e^{2}\right)$ is a correction for eccentric orbits (Burke, 2008).
For a planet with a given planet radius and orbital period, its occurrence rate can be written as
$f_{\rm occ}\left(q,R_{\rm p},P\right)=\frac{1}{f_{\rm geo}N_{\rm*}\left(q,R_{\rm p},P\right)}.$ (B12)
For planets in a given range of planet radius and orbital period, we sum the calculated occurrence rates $f_{\rm occ}\left(q,R_{\rm p},P\right)$ cumulatively. The dominant source of error is the Poisson error, as opposed to measurement errors; therefore we estimate the confidence interval with the usual $1/\sqrt{N_{\rm exp}}$ approach (where $N_{\rm exp}$ is the number of planets in the given range of planet radius and orbital period).
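Putting Equations (B10)-(B12) together, a per-planet rate is the inverse of the geometric probability times the number of stars around which the planet was detectable, and the rates in a radius-period box are summed with a Poisson fractional error; a minimal sketch:

```python
import numpy as np

def per_planet_rate(f_eff_i, f_n_i, f_geo):
    """Equations (B10) and (B12): detectability summed over stars,
    with N_* rounded to an integer as in the text."""
    n_star = int(round(np.sum(f_eff_i * f_n_i)))  # (B10)
    return 1.0 / (f_geo * n_star)                 # (B12)

def box_occurrence(per_planet_rates):
    """Cumulative occurrence in a radius-period box, with the
    1/sqrt(N_exp) Poisson fractional error."""
    rates = np.asarray(per_planet_rates)
    f = rates.sum()
    return f, f / np.sqrt(rates.size)
```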
## Appendix C Correction of occurrence rate of planets due to different
stellar properties
In this section, we introduce two simple ways to correct for the influence of different stellar properties on the planet occurrence rate. One is the prior correction, i.e. we minimize the influence of stellar properties before the calculation of the occurrence rate; the other is the posterior correction, i.e. we correct for the influence of stellar properties utilizing empirical relations after the calculation of the occurrence rate.
### C.1 Prior correction
For the prior correction, we select stars with stellar properties similar to those of the high-V stars from both the medium-V and low-V stars to minimize the influence of stellar properties. Since we select stars with similar distributions of stellar properties such as stellar radius, stellar mass, and metallicity, the difference in other parameters, such as the planet occurrence rate, can probably be attributed to the difference in stellar relative velocity. Here we use the NearestNeighbors function in scikit-learn (Pedregosa et al., 2011) to choose the two nearest medium-V or low-V stars having similar stellar mass, radius, and metallicity for every high-V star. After the stellar selection, we use the methods described in Appendix B to calculate the planet occurrence rate. As a consequence, we obtain the planet occurrence rate after the prior correction, as sketched below.
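A minimal sketch of the matching step, assuming the three properties are standardized so they carry comparable weight in the Euclidean metric (the scaling step is our assumption; the paper does not state one):

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors
from sklearn.preprocessing import StandardScaler

def match_control_sample(high_v, other, k=2):
    """For each high-V star, pick the k nearest medium-V or low-V stars
    in (mass, radius, [Fe/H]) space; returns unique control indices.
    Both inputs are (n, 3) arrays of [mass, radius, feh]."""
    scaler = StandardScaler().fit(np.vstack([high_v, other]))
    nn = NearestNeighbors(n_neighbors=k).fit(scaler.transform(other))
    _, idx = nn.kneighbors(scaler.transform(high_v))
    return np.unique(idx.ravel())
```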
### C.2 Posterior correction
For the posterior correction, we minimize the influence of stellar effective temperature and stellar metallicity through empirical relations. The correlations between planet occurrence and stellar effective temperature and stellar metallicity have been studied extensively. Here we correct for the influence of the stellar effective temperature instead of the stellar mass because the uncertainty of the stellar effective temperature (3%, or 112 K) is lower than that of the stellar mass (7%).
We assume the planet occurrence rate to be a function of planet radius $R_{\rm p}$, orbital period $P$, stellar effective temperature $T_{\rm eff}$, and stellar metallicity $[Fe/H]$, i.e. $f_{\rm occ}(R_{\rm p}\,,P\,,T_{\rm eff}\,,[Fe/H])$. Because we focus on the occurrence rate of planets in a specific radius range, after integration we can rewrite $f_{\rm occ}$ as $f_{\rm occ}(P\,,T_{\rm eff}\,,[Fe/H])$. If we assume that the orbital period $P$, stellar effective temperature $T_{\rm eff}$, and stellar metallicity $[Fe/H]$ are three independent variables, $f_{\rm occ}$ can be written as
$f_{\rm occ}(P,T_{\rm eff},[Fe/H])=f_{\rm 1}(P)f_{\rm 2}(T_{\rm eff})f_{\rm 3}([Fe/H]),$ (C1)
where $f_{\rm 1}(P)$ is a broken power law, as shown in many previous studies (Silburt et al., 2015; Mulders et al., 2018; Neil & Rogers, 2020):
$f_{\rm 1}(P)=c_{\rm 1}\begin{cases}(P/P_{\rm 0})^{a}\,,&P\,\leqslant\,P_{\rm
0},\\\ (P/P_{\rm 0})^{b}\,,&P\,>\,P_{\rm 0},\end{cases}$ (C2)
where $P_{\rm 0}$ is the break point at which the power-law index changes, $a$ and $b$ are the power-law indices, and $c_{\rm 1}$ is a constant. However, we do not focus on the dependence of the planet occurrence rate on orbital period; we are interested in $f_{\rm 2}(T_{\rm eff})$ and $f_{\rm 3}([Fe/H])$.
For $f_{\rm 2}(T_{\rm eff})$, here we use the empirical relation between
planet occurrence rate and effective temperature proposed by Yang et al.
(2020),
$f_{\rm 2}\left(T_{\rm eff}\right)=c_{\rm
2}\left(0.30+\frac{0.43}{1+exp\left(\frac{T_{\rm
eff}-6061}{161}\right)}\right).$ (C3)
This formula describes the average planet multiplicity $\bar{N}_{\rm p}$, while in our definition of the planet occurrence, $f_{\rm occ}=F_{\rm Kep}\bar{N}_{\rm p}$, where $F_{\rm Kep}$ is the fraction of Kepler stars hosting planetary systems. Because $F_{\rm Kep}$ and $\bar{N}_{\rm p}$ have similar correlations with $T_{\rm eff}$, as shown in Yang et al. (2020), we simply use the formula for $\bar{N}_{\rm p}$ (with a normalization constant $c_{\rm 2}$), which is also consistent with the relation between $f_{\rm occ}$ and $T_{\rm eff}$ (see figure 9 in Yang et al. (2020)). The majority of the planets detected by Kepler are in the range of 0.5–4 $R_{\oplus}$, so Equation (C3) is reasonable for our samples. Although different studies derive different empirical $f_{\rm occ}-T_{\rm eff}$ relations, the overall correlations between planet occurrence rate and stellar effective temperature are similar, and utilizing a different formula would only influence our results slightly. Therefore, we typically use Equation (C3).
For $f_{\rm 3}([Fe/H])$, we derive the function from a recent study of the correlation between planet occurrence rate and stellar metallicity (Zhu, 2019). Zhu (2019) found that the occurrence rates of Kepler-like planets around solar-like stars have a slightly positive correlation with metallicity. We use the data in figure 3 of Zhu (2019) and simply fit them with a function of the form
$f_{\rm 3}\left([Fe/H]\right)=c_{\rm 3}[Fe/H]^{\alpha}+c_{\rm 4},$ (C4)
where $\alpha$ is the power-law index and $c_{\rm 3}$ and $c_{\rm 4}$ are constants. In the following calculation, we fix $\alpha$ to a typical value, $\alpha=1$; in other words, it is a linear relation. Note that the data in figure 3 of Zhu (2019) do not exclude the influence of multi-planet systems or giant planets; thus they give the upper limit of the positive correlation between planet occurrence rate and metallicity.
We take into consideration the distributions of stellar effective temperature and stellar metallicity, together with the functional form of the planet occurrence rate, in order to correct for the influence of stellar effective temperature and stellar metallicity. Equation (C1) can thus be rewritten as
$\begin{split}f_{\rm occ}&=\iint f_{\rm 1}(P)f_{\rm 2}\left(T_{\rm
eff}\right)PDF\left(T_{\rm eff}\right)f_{\rm
3}\left([Fe/H]\right)PDF\left([Fe/H]\right)dT_{\rm eff}d[Fe/H]\\\
&=C_{\rm[Fe/H]}C_{\rm T_{\rm eff}}f_{\rm 1}(P),\end{split}$ (C5)
where PDF$(T_{\rm eff})$ is the probability distribution function of stellar
effective temperature, and PDF([Fe/H]) is the probability distribution
function of stellar metallicity.
If we want to obtain the integrated planet occurrence rate after correction, we only need to calculate the efficiencies $C_{\rm[Fe/H]}$ and $C_{\rm T_{\rm eff}}$. For example, if we know the original stellar sample with PDF${}_{\rm 0}(T_{\rm eff})$ and PDF${}_{\rm 0}([Fe/H])$ and its planet occurrence rate $f_{\rm occ,0}$, and we want to obtain the occurrence rate of planets around stars with PDF${}_{\rm 1}(T_{\rm eff})$ and PDF${}_{\rm 1}([Fe/H])$, i.e. $f_{\rm occ,1}$, then
$f_{\rm occ,1}=C_{\rm 0\Rightarrow 1}f_{\rm occ,0},$ (C6)
where $C_{\rm 0\Rightarrow 1}=\frac{C_{\rm T_{\rm eff},1}C_{\rm[Fe/H],1}}{C_{\rm T_{\rm eff},0}C_{\rm[Fe/H],0}}$ is the correction efficiency, and $C_{\rm T_{\rm eff},0}$, $C_{\rm[Fe/H],0}$, $C_{\rm T_{\rm eff},1}$, and $C_{\rm[Fe/H],1}$ are the efficiencies related to the distributions of stellar effective temperature and metallicity before and after the correction, respectively.
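A sketch of the whole posterior correction, approximating the PDF-weighted integrals of Equation (C5) by sample means over the stars of each group; the normalization $c_{\rm 2}$ cancels in the ratio, and the constants $c_{\rm 3}$ and $c_{\rm 4}$ below are placeholders, not the values fitted to Zhu (2019).

```python
import numpy as np

def f2(teff):
    """Equation (C3) up to the normalization c2 (which cancels below)."""
    return 0.30 + 0.43 / (1.0 + np.exp((teff - 6061.0) / 161.0))

def f3(feh, c3=0.5, c4=1.0):
    """Equation (C4) with alpha = 1; c3 and c4 are placeholders."""
    return c3 * feh + c4

def correction_efficiency(teff_0, feh_0, teff_1, feh_1):
    """Equations (C5)-(C6): sample means over each group's stars stand
    in for the PDF-weighted integrals; returns C_{0 => 1}."""
    C0 = f2(teff_0).mean() * f3(feh_0).mean()
    C1 = f2(teff_1).mean() * f3(feh_1).mean()
    return C1 / C0

# Usage: f_occ_1 = correction_efficiency(teff_0, feh_0, teff_1, feh_1) * f_occ_0
```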
## References
* Astropy Collaboration et al. (2013) Astropy Collaboration, Robitaille, T. P., Tollerud, E. J., et al. 2013, aap, 558, A33, doi: 10.1051/0004-6361/201322068
* Bashi & Zucker (2019) Bashi, D., & Zucker, S. 2019, aj, 158, 61, doi: 10.3847/1538-3881/ab27c9
* Beaugé & Nesvorný (2012) Beaugé, C., & Nesvorný, D. 2012, ApJ, 751, 119, doi: 10.1088/0004-637X/751/2/119
* Beaugé & Nesvorný (2013) —. 2013, apj, 763, 12, doi: 10.1088/0004-637X/763/1/12
* Berger et al. (2018) Berger, T. A., Huber, D., Gaidos, E., & van Saders, J. L. 2018, apj, 866, 99, doi: 10.3847/1538-4357/aada83
* Berger et al. (2020) Berger, T. A., Huber, D., van Saders, J. L., et al. 2020, AJ, 159, 280, doi: 10.3847/1538-3881/159/6/280
* Brucalassi et al. (2017) Brucalassi, A., Koppenhoefer, J., Saglia, R., et al. 2017, A&A, 603, A85, doi: 10.1051/0004-6361/201527562
* Burke (2008) Burke, C. J. 2008, ApJ, 679, 1566, doi: 10.1086/587798
* Burke et al. (2015) Burke, C. J., Christiansen, J. L., Mullally, F., et al. 2015, ApJ, 809, 8, doi: 10.1088/0004-637X/809/1/8
* Cai et al. (2017) Cai, M. X., Kouwenhoven, M. B. N., Portegies Zwart, S. F., & Spurzem, R. 2017, MNRAS, 470, 4337, doi: 10.1093/mnras/stx1464
* Carpenter (2000) Carpenter, J. M. 2000, AJ, 120, 3139, doi: 10.1086/316845
* Chevance et al. (2021) Chevance, M., Kruijssen, J. M. D., & Longmore, S. N. 2021, ApJ, 910, L19, doi: 10.3847/2041-8213/abee20
* Christiansen et al. (2012) Christiansen, J. L., Jenkins, J. M., Caldwell, D. A., et al. 2012, PASP, 124, 1279, doi: 10.1086/668847
* Cossou et al. (2014) Cossou, C., Raymond, S. N., Hersant, F., & Pierens, A. 2014, A&A, 569, A56, doi: 10.1051/0004-6361/201424157
* Cui et al. (2012) Cui, X.-Q., Zhao, Y.-H., Chu, Y.-Q., et al. 2012, Research in Astronomy and Astrophysics, 12, 1197, doi: 10.1088/1674-4527/12/9/003
* Dai et al. (2018) Dai, Y.-Z., Liu, H.-G., Wu, W.-B., et al. 2018, MNRAS, 480, 4080, doi: 10.1093/mnras/sty2142
* Dawson & Johnson (2018) Dawson, R. I., & Johnson, J. A. 2018, ARA&A, 56, 175, doi: 10.1146/annurev-astro-081817-051853
* Dehnen & Binney (1998) Dehnen, W., & Binney, J. J. 1998, MNRAS, 298, 387, doi: 10.1046/j.1365-8711.1998.01600.x
* Dong & Zhu (2013) Dong, S., & Zhu, Z. 2013, ApJ, 778, 53, doi: 10.1088/0004-637X/778/1/53
* Dong et al. (2014) Dong, S., Zheng, Z., Zhu, Z., et al. 2014, ApJL, 789, L3, doi: 10.1088/2041-8205/789/1/L3
* Flock et al. (2019) Flock, M., Turner, N. J., Mulders, G. D., et al. 2019, A&A, 630, A147, doi: 10.1051/0004-6361/201935806
* Fontanive et al. (2019) Fontanive, C., Rice, K., Bonavita, M., et al. 2019, MNRAS, 485, 4967, doi: 10.1093/mnras/stz671
* Ford et al. (2008) Ford, E. B., Quinn, S. N., & Veras, D. 2008, ApJ, 678, 1407, doi: 10.1086/587046
* Fujii & Hori (2019) Fujii, M. S., & Hori, Y. 2019, A&A, 624, A110, doi: 10.1051/0004-6361/201834677
* Fulton et al. (2017) Fulton, B. J., Petigura, E. A., Howard, A. W., et al. 2017, AJ, 154, 109, doi: 10.3847/1538-3881/aa80eb
* Gaia Collaboration et al. (2018) Gaia Collaboration, Brown, A. G. A., Vallenari, A., et al. 2018, A&A, 616, A1, doi: 10.1051/0004-6361/201833051
* Hamer & Schlaufman (2019) Hamer, J. H., & Schlaufman, K. C. 2019, AJ, 158, 190, doi: 10.3847/1538-3881/ab3c56
* Hamers & Tremaine (2017) Hamers, A. S., & Tremaine, S. 2017, AJ, 154, 272, doi: 10.3847/1538-3881/aa9926
* Howard et al. (2012) Howard, A. W., Marcy, G. W., Bryson, S. T., et al. 2012, ApJS, 201, 15, doi: 10.1088/0067-0049/201/2/15
* Huber et al. (2014) Huber, D., Silva Aguirre, V., Matthews, J. M., et al. 2014, ApJS, 211, 2, doi: 10.1088/0067-0049/211/1/2
* Hunter (2007) Hunter, J. D. 2007, Computing in Science & Engineering, 9, 90, doi: 10.1109/MCSE.2007.55
* Ida & Lin (2004) Ida, S., & Lin, D. N. C. 2004, ApJ, 616, 567, doi: 10.1086/424830
* Ida et al. (2013) Ida, S., Lin, D. N. C., & Nagasawa, M. 2013, ApJ, 775, 42, doi: 10.1088/0004-637X/775/1/42
* Johnson et al. (2010) Johnson, J. A., Aller, K. M., Howard, A. W., & Crepp, J. R. 2010, PASP, 122, 905, doi: 10.1086/655775
* Johnson et al. (2017) Johnson, J. A., Petigura, E. A., Fulton, B. J., et al. 2017, AJ, 154, 108, doi: 10.3847/1538-3881/aa80e7
* Johnstone et al. (1998) Johnstone, D., Hollenbach, D., & Bally, J. 1998, ApJ, 499, 758, doi: 10.1086/305658
* Kamdar et al. (2019) Kamdar, H., Conroy, C., Ting, Y.-S., et al. 2019, ApJ, 884, L42, doi: 10.3847/2041-8213/ab4997
* Kruijssen et al. (2020) Kruijssen, J. M. D., Longmore, S. N., & Chevance, M. 2020, ApJ, 905, L18, doi: 10.3847/2041-8213/abccc3
* Kutra & Wu (2020) Kutra, T., & Wu, Y. 2020, arXiv e-prints, arXiv:2003.08431. https://arxiv.org/abs/2003.08431
* Lada & Lada (2003) Lada, C. J., & Lada, E. A. 2003, ARA&A, 41, 57, doi: 10.1146/annurev.astro.41.011802.094844
* Lada et al. (1993) Lada, E. A., Strom, K. M., & Myers, P. C. 1993, in Protostars and Planets III, ed. E. H. Levy & J. I. Lunine, 245
* Lee & Chiang (2017) Lee, E. J., & Chiang, E. 2017, ApJ, 842, 40, doi: 10.3847/1538-4357/aa6fb3
* Li et al. (2020) Li, D., Mustill, A. J., & Davies, M. B. 2020, MNRAS, 499, 1212, doi: 10.1093/mnras/staa2945
* Li & Adams (2015) Li, G., & Adams, F. C. 2015, MNRAS, 448, 344, doi: 10.1093/mnras/stv012
* Liu et al. (2013) Liu, H.-G., Zhang, H., & Zhou, J.-L. 2013, ApJ, 772, 142, doi: 10.1088/0004-637X/772/2/142
* Longmore et al. (2021) Longmore, S. N., Chevance, M., & Kruijssen, J. M. D. 2021, arXiv e-prints, arXiv:2103.01974. https://arxiv.org/abs/2103.01974
* Luo et al. (2012) Luo, A. L., Zhang, H.-T., Zhao, Y.-H., et al. 2012, Research in Astronomy and Astrophysics, 12, 1243, doi: 10.1088/1674-4527/12/9/004
* Luo et al. (2015) Luo, A. L., Zhao, Y.-H., Zhao, G., et al. 2015, Research in Astronomy and Astrophysics, 15, 1095, doi: 10.1088/1674-4527/15/8/002
* Matsuyama et al. (2003) Matsuyama, I., Johnstone, D., & Hartmann, L. 2003, ApJ, 582, 893, doi: 10.1086/344638
* Mayor & Queloz (1995) Mayor, M., & Queloz, D. 1995, Nature, 378, 355, doi: 10.1038/378355a0
* McTier & Kipping (2019) McTier, M. A. S., & Kipping, D. M. 2019, MNRAS, 489, 2505, doi: 10.1093/mnras/stz2088
* Meibom et al. (2013) Meibom, S., Torres, G., Fressin, F., et al. 2013, Nature, 499, 55, doi: 10.1038/nature12279
* Moorhead et al. (2011) Moorhead, A. V., Ford, E. B., Morehead, R. C., et al. 2011, ApJS, 197, 1, doi: 10.1088/0067-0049/197/1/1
* Muñoz et al. (2016) Muñoz, D. J., Lai, D., & Liu, B. 2016, MNRAS, 460, 1086, doi: 10.1093/mnras/stw983
* Mulders et al. (2015) Mulders, G. D., Pascucci, I., & Apai, D. 2015, ApJ, 798, 112, doi: 10.1088/0004-637X/798/2/112
* Mulders et al. (2018) Mulders, G. D., Pascucci, I., Apai, D., & Ciesla, F. J. 2018, AJ, 156, 24, doi: 10.3847/1538-3881/aac5ea
* Mulders et al. (2016) Mulders, G. D., Pascucci, I., Apai, D., Frasca, A., & Molenda-Żakowicz, J. 2016, AJ, 152, 187, doi: 10.3847/0004-6256/152/6/187
* Narang et al. (2018) Narang, M., Manoj, P., Furlan, E., et al. 2018, AJ, 156, 221, doi: 10.3847/1538-3881/aae391
* Neil & Rogers (2020) Neil, A. R., & Rogers, L. A. 2020, ApJ, 891, 12, doi: 10.3847/1538-4357/ab6a92
* Ngo et al. (2016) Ngo, H., Knutson, H. A., Hinkley, S., et al. 2016, ApJ, 827, 8, doi: 10.3847/0004-637X/827/1/8
* Nordström et al. (2004) Nordström, B., Mayor, M., Andersen, J., et al. 2004, A&A, 418, 989, doi: 10.1051/0004-6361:20035959
* Olczak et al. (2006) Olczak, C., Pfalzner, S., & Spurzem, R. 2006, ApJ, 642, 1140, doi: 10.1086/501044
* pandas development team (2020) pandas development team, T. 2020, pandas-dev/pandas: Pandas, latest, Zenodo, doi: 10.5281/zenodo.3509134
* Pecaut & Mamajek (2013) Pecaut, M. J., & Mamajek, E. E. 2013, ApJS, 208, 9, doi: 10.1088/0067-0049/208/1/9
* Pedregosa et al. (2011) Pedregosa, F., Varoquaux, G., Gramfort, A., et al. 2011, Journal of Machine Learning Research, 12, 2825. http://jmlr.org/papers/v12/pedregosa11a.html
* Petrovich (2015) Petrovich, C. 2015, ApJ, 805, 75, doi: 10.1088/0004-637X/805/1/75
* Petrovich & Tremaine (2016) Petrovich, C., & Tremaine, S. 2016, ApJ, 829, 132, doi: 10.3847/0004-637X/829/2/132
* Petrovich et al. (2014) Petrovich, C., Tremaine, S., & Rafikov, R. 2014, ApJ, 786, 101, doi: 10.1088/0004-637X/786/2/101
* Rodet et al. (2021) Rodet, L., Su, Y., & Lai, D. 2021, arXiv e-prints, arXiv:2102.07898. https://arxiv.org/abs/2102.07898
* Silburt et al. (2015) Silburt, A., Gaidos, E., & Wu, Y. 2015, ApJ, 799, 180, doi: 10.1088/0004-637X/799/2/180
* Silsbee & Rafikov (2015) Silsbee, K., & Rafikov, R. R. 2015, ApJ, 798, 71, doi: 10.1088/0004-637X/798/2/71
* Sofue et al. (2009) Sofue, Y., Honma, M., & Omodaka, T. 2009, PASJ, 61, 227, doi: 10.1093/pasj/61.2.227
* Spurzem et al. (2009) Spurzem, R., Giersz, M., Heggie, D. C., & Lin, D. N. C. 2009, ApJ, 697, 458, doi: 10.1088/0004-637X/697/1/458
* Strömberg (1946) Strömberg, G. 1946, ApJ, 104, 12, doi: 10.1086/144830
* Tarricq et al. (2021) Tarricq, Y., Soubiran, C., Casamiquela, L., et al. 2021, A&A, 647, A19, doi: 10.1051/0004-6361/202039388
* Thompson et al. (2018) Thompson, S. E., Coughlin, J. L., Hoffman, K., et al. 2018, ApJS, 235, 38, doi: 10.3847/1538-4365/aab4f9
* van Saders & Gaudi (2011) van Saders, J. L., & Gaudi, B. S. 2011, ApJ, 729, 63, doi: 10.1088/0004-637X/729/1/63
* van Terwisga et al. (2020) van Terwisga, S. E., van Dishoeck, E. F., Mann, R. K., et al. 2020, A&A, 640, A27, doi: 10.1051/0004-6361/201937403
* Wang & Fischer (2015) Wang, J., & Fischer, D. A. 2015, AJ, 149, 14, doi: 10.1088/0004-6256/149/1/14
* Wes McKinney (2010) Wes McKinney. 2010, in Proceedings of the 9th Python in Science Conference, ed. Stéfan van der Walt & Jarrod Millman, 56 – 61, doi: 10.25080/Majora-92bf1922-00a
* Wielen (1977) Wielen, R. 1977, A&A, 60, 263
* Winter et al. (2018) Winter, A. J., Clarke, C. J., Rosotti, G., et al. 2018, MNRAS, 478, 2700, doi: 10.1093/mnras/sty984
* Winter et al. (2020) Winter, A. J., Kruijssen, J. M. D., Longmore, S. N., & Chevance, M. 2020, Nature, 586, 528, doi: 10.1038/s41586-020-2800-0
* Wu (2019) Wu, Y. 2019, ApJ, 874, 91, doi: 10.3847/1538-4357/ab06f8
* Xiang et al. (2017) Xiang, M. S., Liu, X. W., Yuan, H. B., et al. 2017, MNRAS, 467, 1890, doi: 10.1093/mnras/stx129
* Xie et al. (2010) Xie, J.-W., Zhou, J.-L., & Ge, J. 2010, ApJ, 708, 1566, doi: 10.1088/0004-637X/708/2/1566
* Xie et al. (2016) Xie, J.-W., Dong, S., Zhu, Z., et al. 2016, Proceedings of the National Academy of Science, 113, 11431, doi: 10.1073/pnas.1604692113
* Yang et al. (2020) Yang, J.-Y., Xie, J.-W., & Zhou, J.-L. 2020, AJ, 159, 164, doi: 10.3847/1538-3881/ab7373
* Yu & Liu (2018) Yu, J., & Liu, C. 2018, MNRAS, 475, 1093, doi: 10.1093/mnras/stx3204
* Zhao et al. (2012) Zhao, G., Zhao, Y.-H., Chu, Y.-Q., Jing, Y.-P., & Deng, L.-C. 2012, Research in Astronomy and Astrophysics, 12, 723, doi: 10.1088/1674-4527/12/7/002
* Zhu (2019) Zhu, W. 2019, ApJ, 873, 8, doi: 10.3847/1538-4357/ab0205
# $r$-mode instability of neutron stars in Low-mass X-ray binaries: effects
of Fermi surface depletion and superfluidity of dense matter
J. M. Dong Institute of Modern Physics, Chinese Academy of Sciences, Lanzhou
730000, China School of Physics, University of Chinese Academy of Sciences,
Beijing 100049, China
###### Abstract
The nucleon-nucleon correlation leads to Fermi surface depletion, measured by
a $Z$-factor in the momentum distribution of dense nuclear matter. The roles
of the Fermi surface depletion effect ($Z$-factor effect) and of the neutron
triplet superfluidity it quenches in the viscosity, and hence in the
gravitational-wave-driven $r$-mode instability of neutron stars (NSs), are
investigated. The bulk viscosity is reduced by both effects, especially the
superfluid effect at low temperatures, which is also able to reduce the
inferred core temperatures of NSs. Intriguingly, due to the neutron
superfluidity, the core temperatures of the NSs in known low-mass X-ray
binaries (LMXBs) are found to be clearly divided into two groups: high and low
temperatures, which correspond to NSs with short and long recurrence times for
nuclear-powered bursts, respectively. Yet, a large number of NSs in these
LMXBs are still located in the $r$-mode instability region. If the
density-dependent symmetry energy is stiff enough, the occurrence of the
direct Urca process reduces the inferred core temperature by about one order
of magnitude. Accordingly, the contradiction between the predictions and
observations is alleviated to some extent, but some NSs are still located
inside the unstable region.
Key words: gravitational waves – stars: neutron – stars: oscillations
## I Introduction
A great deal of attention has been paid to gravitational waves following
their discovery from the binary black hole merger (Abbott et al. 2016) and the
binary neutron star (NS) merger (Abbott et al. 2017). As a species of compact
objects, NSs themselves can radiate gravitational waves due to, for example,
magnetic deformation (Bonazzola & Gourgoulhon 1996; Regimbau & de Freitas
Pacheco 2001; Stella et al. 2005; Dall'Osso, Shore & Stella 2009; Marassi et
al. 2011; Cheng et al. 2015, 2017) and the $r$-mode instability (Andersson 1998).
The $r$ mode is a class of fluid oscillation mode with the Coriolis force as
the restoring force, analogous to Earth's Rossby waves; it leads to the
emission of gravitational waves in hot and rapidly rotating NSs through the
Chandrasekhar-Friedmann-Schutz instability, and hence prevents pulsars
(rotating NSs) from reaching their Kepler rotational frequency
$\Omega_{\text{Kepler}}$. The emission of gravitational waves in turn excites
$r$ modes in the NS core and causes the oscillation amplitude to grow,
resulting in a positive feedback. Such gravitational radiation induced by the
$r$-mode instability is perhaps detectable with ground-based instruments in
the coming years, and thus potentially provides a probe of the interior of
NSs. On the other hand, the temperature-dependent damping mechanisms
attributed to the bulk and shear viscosities hinder the growth of the $r$
mode. The $r$-mode instability window is bounded by a critical curve,
determined by the balance of evolution time scales (the $r$-mode driving and
viscous damping time scales are equal) in the frequency-temperature
($\nu_{s}-T$) plane. Above this critical curve the $r$-mode instability is
active. In other words, the unstable window depends on the competition between
gravitational radiation and viscous dissipation (see, e.g., Andersson &
Kokkotas 2001; Haskell 2015; Kokkotas & Schwenzer 2016 for reviews).
A low-mass X-ray binary (LMXB) is a binary system in which a compact object,
such as a NS (the case we discuss in the present work), accretes matter from
its low-mass companion that fills its Roche lobe. Ho et al. (2011) assumed
that the spin-up torque from accretion is balanced by the spin-down torque
from gravitational radiation due to the unstable $r$-mode, and that the
associated heating is balanced by cooling via neutrino emission, and therefore
concluded that many NSs are located in the $r$-mode instability region. Since
the $r$-mode instability limits the spin-up of accretion-powered millisecond
pulsars in LMXBs, the most rapidly rotating pulsars, such as PSR J1748-2446ad
rotating at 716 Hz, are difficult to understand. A better understanding of the
relevant damping mechanisms is therefore particularly necessary.
Bulk viscosity arises if the perturbations in pressure and density induced by
the $r$-mode oscillations drive the dense matter away from
$\beta$-equilibrium. Consequently, energy is dissipated as the system tries to
restore its chemical equilibrium through weak interactions. The bulk viscosity
due to the modified Urca reactions (or perhaps the direct Urca for large-mass
NSs) provides the dominant dissipation mechanism at high temperatures
($\gtrsim 10^{9}$ K). However, at low temperatures ($\lesssim 10^{9}$ K), the
shear viscosity caused by neutron scattering and electron scattering, or by
the crust-core interface, is the primary mechanism for the damping of $r$
modes. The viscosity is expected to be affected significantly by
superfluidity.
Superfluidity is an intriguing feature of dense nuclear matter and has
received great interest since it plays an essential role in NS thermal
evolution (Yakovlev et al. 1999, 2001; Page et al. 2004, 2006). Dong et al.
(2013, 2016) found that the neutron ${}^{3}PF_{2}$ superfluidity of pure
neutron matter or $\beta$-stable nuclear matter is strongly reduced, by about
one order of magnitude, by the Fermi surface depletion effect (i.e., the
$Z$-factor effect), with the help of the microscopic Brueckner theory starting
from bare nucleon-nucleon interactions. The nucleon-nucleon correlation, in
particular the short-range correlation including the short-range repulsion and
the tensor interaction, is so strong that it creates a high-momentum tail,
giving rise to the $Z$-factor. The $Z$-factor at the Fermi surface is equal to
the discontinuity of the occupation number in the momentum distribution,
according to the Migdal-Luttinger theorem (Migdal 1957), as shown in the inset
of Fig. 1(a). Therefore, it characterizes the deviation from the step-function
momentum distribution of a perfectly degenerate Fermi gas at zero temperature,
and it hinders particle transitions around the Fermi surface, which can affect
many properties of fermion systems related to particle-hole excitations. For
instance, the $Z$-factor has far-reaching impact on nuclear structure (Subedi
et al. 2008; Hen et al. 2014), the superfluidity of dense nuclear matter (Dong
et al. 2013, 2016), NS thermal evolution (Dong et al. 2016), and the European
Muon Collaboration effect (Hen et al. 2017; Duer et al. 2018), highlighting
its fundamental importance. In the present work, the influences of the
$Z$-factor, along with the superfluidity it quenches, on the $r$-mode
instability of NSs in LMXBs are investigated in detail.
## II Effects of the $Z$-factor and its quenched superfluidity on viscosity
In addition to suppressing the superfluidity, the $Z$-factor effect is also
able to change the viscosity directly because of the particle-number depletion
at the Fermi surface. For the neutron and proton $Z$-factors at the Fermi
surfaces of $\beta$-stable NS matter at different nucleon number densities
$\rho$, we use simple parameterized formulas to fit the results of Dong et al.
(2016), which were obtained from Brueckner theory using the Argonne V18
nucleon-nucleon interaction plus a microscopic three-body force:
$\displaystyle Z_{F,n}(\rho)=0.907-0.233\rho-0.480\rho^{2}+0.481\rho^{3},\qquad Z_{F,p}(\rho)=\begin{cases}0.351+2.332\rho,&\rho\leq 0.15\text{ fm}^{-3},\\ 0.656+0.451\rho-1.151\rho^{2}+0.576\rho^{3},&\rho>0.15\text{ fm}^{-3},\end{cases}$ (1)
for the sake of application, where the fraction of each component in
$\beta$-stable matter is determined by the well-known variational APR equation
of state (EOS) (Akmal et al. 1998). The non-rotating NS maximum mass of
$2.2M_{\odot}$, the canonical NS radius of $11.6$ km, and the stellar thermal
evolution obtained from this APR EOS are compatible with astrophysical
observations (Page et al. 2004), and it is one of the most popular EOSs
employed to study NS interior physics. The isospin-dependent part of the EOS,
i.e., the symmetry energy, from the Brueckner-Hartree-Fock approach is so
stiff that the direct Urca (DUrca) reaction occurs even in $1.2M_{\odot}$
low-mass NSs (Yin & Zuo 2013), which is not consistent with the current
understanding that the DUrca process does not occur in $1.4M_{\odot}$
canonical NSs (Lattimer & Prakash 2004; Page et al. 2004; Brown et al. 2018).
Fortunately, the $Z$-factor-quenched superfluidity is so weak that the energy
gap is no longer very sensitive to the nucleon-nucleon interaction and the
single-particle potential, which is beneficial for obtaining reliable
superfluid gaps. For example, the inclusion of the three-body force does not
change the weak neutron ${}^{3}PF_{2}$ superfluidity gap very much (Dong et
al. 2013). The $Z_{F}$ itself in neutron-rich matter is not very sensitive to
the isospin asymmetry $\beta$ at the densities of interest if $\beta>0.5$ (Yin
et al. 2013). Therefore, the inconsistent treatment here is not expected to
change the final conclusions. Because the Fermi surface depletion hinders
particle-hole excitations around the Fermi level, the bulk viscosity $\xi$,
being related to neutrino emission, is expected to be reduced when the
$Z$-factor is included. The momentum distribution function $n(k)$ of nucleons
near the Fermi surface at finite (not too high) temperature $T$ is given by
(Dong et al. 2016)
$n(k)=\frac{Z_{F}}{1+\exp\left(\frac{\omega-\mu}{T}\right)},\quad k\approx k_{F},$ (2)
where $\omega$ is the single-particle energy and $\mu$ the chemical potential.
The nucleon-nucleon correlation quenches the occupation probability by a
factor of $Z_{F}$ at the Fermi surface.
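For reference, the fits of Eq. (1) and the quenched distribution of Eq. (2) are straightforward to evaluate; a minimal sketch (densities in fm$^{-3}$; $\omega$, $\mu$ and $T$ in the same energy units):

```python
import numpy as np

def z_neutron(rho):
    """Neutron Z-factor at the Fermi surface, Eq. (1); rho in fm^-3."""
    return 0.907 - 0.233*rho - 0.480*rho**2 + 0.481*rho**3

def z_proton(rho):
    """Proton Z-factor at the Fermi surface, piecewise fit of Eq. (1)."""
    rho = np.asarray(rho, dtype=float)
    return np.where(rho <= 0.15,
                    0.351 + 2.332*rho,
                    0.656 + 0.451*rho - 1.151*rho**2 + 0.576*rho**3)

def n_k(omega, mu, T, z_f):
    """Quenched momentum distribution near k_F, Eq. (2)."""
    return z_f / (1.0 + np.exp((omega - mu) / T))

# Near saturation density the neutron Z-factor is ~0.86, the proton ~0.70.
print(z_neutron(0.17), z_proton(0.17))
```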
Figure 1: (Color online) (a): $Z$-factor at the Fermi surface as a function of
nucleon number density $\rho$ in $\beta$-stable NS matter (Dong et al. 2016).
The inset shows a schematic illustration of the momentum distribution due to
the $Z$-factor effect. (b): neutron ${}^{3}PF_{2}$ superfluid gap vs.
nucleonic density in $\beta$-stable matter (Dong et al. 2016; Li et al. 2020),
compared with the fitting results denoted by the solid curves.
The rapid cooling of the NS in Cas A has been revealed through ten years of
observations (Heinke & Ho 2010), which helps one to extract the interior
information of NSs (Page et al. 2011; Shternin et al. 2011; Blaschke et al.
2012, 2013; Sedrakian 2013; Newton et al. 2013; Bonanno et al. 2014; Ho et al.
2015). Combined with theoretical analysis, Page et al. (2011) claimed that
this is the first direct evidence that neutron triplet superfluidity and
proton singlet superconductivity occur at supranuclear densities within NSs,
with a critical temperature for neutron ${}^{3}P_{2}$ superfluidity of
$T_{\text{cn,max}}=5\times 10^{8}$ K (corresponding to a superfluid gap of
0.08 MeV). However, Posselt et al. (2013) reported that a statistically
significant temperature drop for the NS in Cas A is not present. Therefore, a
reliable theoretical calculation of the superfluid gap is especially
necessary. When the $Z$-factor effect is taken into account, the peak value of
the neutron ${}^{3}PF_{2}$ superfluid gap is about 0.04 MeV for pure neutron
matter as well as for $\beta$-stable NS matter (Dong et al. 2013, 2016), in
agreement with other predictions (Ding et al. 2016) but much lower than the
value of 0.08 MeV constrained by Page et al. (2011). The superfluid gap
quenches all processes that involve elementary excitations around the Fermi
surface, leading to remarkable effects on the neutrino emissivity, heat
capacity, thermal conductivity, and hence NS thermal evolution. Since the core
temperatures of NSs in LMXBs were inferred to be $(1\sim 3)\times 10^{8}$ K
(Ho et al. 2011), which is mostly below the critical temperature of neutron
${}^{3}PF_{2}$ superfluidity, this superfluidity is expected to affect the
$r$-mode instability distinctly.
For the angular-averaged neutron ${}^{3}PF_{2}$ superfluid gap and the proton
${}^{1}S_{0}$ superconducting gap quenched by the $Z$-factor effect at zero
temperature in $\beta$-stable matter, we fit the results of Dong et al. (2016)
and Dong (2019), obtained with the generalized Bardeen-Cooper-Schrieffer
method combined with the Brueckner theory, as summarized in Fig. 1(b); the
fits take the form
$\displaystyle\Delta_{n}(\rho)=(0.943\rho-0.050)\exp\left[-\left(\frac{\rho}{0.177}\right)^{1.665}\right],$ (3)
$\displaystyle\Delta_{p}(\rho)=(1.015\rho-0.078)\exp\left[-\left(\frac{\rho}{0.136}\right)^{2.823}\right],$ (4)
as functions of the nucleon number density $\rho$, with the corresponding
critical temperature $T_{c}=0.57\Delta$ (Page et al. 2004). These results are
perhaps also useful for understanding pulsar glitches. The proton
${}^{1}S_{0}$ gap is much smaller than the neutron ${}^{3}PF_{2}$ gap, and its
superfluid domain is rather narrow. In addition, the proton fraction is much
smaller than the neutron one in $\beta$-stable NS matter. Therefore, proton
superconductivity will not be considered in the following discussion.
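A small sketch evaluating the fits (3)-(4) and the corresponding critical temperatures; the MeV-to-kelvin constant is standard, and everything else follows directly from the equations above.

```python
import numpy as np

MEV_TO_K = 1.1605e10  # 1 MeV expressed in kelvin

def gap_neutron(rho):
    """Neutron 3PF2 gap in MeV, Eq. (3); rho in fm^-3."""
    return (0.943*rho - 0.050) * np.exp(-(rho/0.177)**1.665)

def gap_proton(rho):
    """Proton 1S0 gap in MeV, Eq. (4)."""
    return (1.015*rho - 0.078) * np.exp(-(rho/0.136)**2.823)

def t_crit(delta_mev):
    """Critical temperature T_c = 0.57 Delta (Page et al. 2004), in K."""
    return 0.57 * delta_mev * MEV_TO_K

rho = 0.17
# The neutron gap here is ~0.04 MeV, i.e. T_c of a few 10^8 K,
# consistent with the discussion in the text.
print(gap_neutron(rho), t_crit(gap_neutron(rho)))
```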
We now explore the influence of the $Z$-factor and superfluidity on the
viscosity. The bulk viscosity of $npe\mu$ matter is mainly determined by the
DUrca process $n\rightarrow p+l+\overline{\nu_{l}},\ p+l\rightarrow n+\nu_{l}$
together with the modified Urca (MUrca) processes $n+N\rightarrow
p+N+l+\overline{\nu_{l}},\ p+N+l\rightarrow n+N+\nu_{l}$, where $N$ denotes a
neutron ($n$) or proton ($p$) and $l$ denotes an electron ($e$) or muon
($\mu$). The DUrca process is the most efficient for neutrino ($\nu$)
emission, but it occurs only if the proton fraction is high enough to reach a
threshold. Frankfurt et al. (2008) concluded that the high-momentum tail of
the nucleon momentum distribution induced by the short-range correlation leads
to a significant enhancement of the neutrino emissivity of the DUrca process,
and that this rapid process can be opened even at low proton fraction.
However, Dong et al. (2016) reached the opposite conclusion, namely, that the
neutrino emissivity is reduced instead of enhanced and the threshold condition
for the DUrca reaction is almost unchanged. The $\beta$-decay of neutrons and
its inverse process can be cyclically driven just by thermal excitations, and
the high-momentum tail cannot participate in the DUrca process or the MUrca
processes. An explicit discussion is presented in Dong et al. (2016).
The partial bulk viscosity of $npe\mu$ matter induced by a non-equilibrium
DUrca process without the $Z$-factor effect ($Z=1$) is written as (Haensel et
al. 2000)
$\displaystyle\xi_{l,0}^{(D)}=K\int_{0}^{\infty}dx_{\nu}\,x_{\nu}^{2}\int dx_{n}\,dx_{p}\,dx_{l}\,f(x_{n})f(x_{p})f(x_{l})\left[\delta(x_{n}+x_{p}+x_{l}-x_{\nu}+\zeta)-\delta(x_{n}+x_{p}+x_{l}-x_{\nu}-\zeta)\right],$ (5)
after a lengthy derivation, where $f(x)=1/(1+e^{x})$ in the phase-space
integral is a Fermi-Dirac function and $K$ is related to the temperature $T$,
the $r$-mode angular frequency $\omega$, the nuclear matter EOS, and the
possible superfluid gap. $\zeta$ is a parameter measuring the deviation of the
system from $\beta$-equilibrium. Due to the strong degeneracy of nucleons and
electrons, the main contribution to the above integral comes from the very
narrow regions of momentum space near the corresponding Fermi surfaces
$k_{F}$, just as in the calculation of the neutrino emissivity of the DUrca
process in Yakovlev et al. (2001). If the $Z$-factor effect is included, the
above Fermi-Dirac distribution $f(x)$ near the Fermi surface for nucleons
should be replaced by Eq. (2), i.e., $Z_{F}/(1+e^{x})=Z_{F}f(x)$, resulting in
an additional factor $Z_{F,n}Z_{F,p}$ appearing on the right-hand side of Eq.
(5). Therefore, the bulk viscosity induced by the DUrca reaction is given by
$\xi_{l}^{(D)}=Z_{F,n}Z_{F,p}\xi_{l,0}^{(D)}.$ (6)
The explicit expression of $\xi_{l,0}^{(D)}$ takes the form (Haensel et al.
2000)
$\displaystyle\xi_{l,0}^{(D)}=8.553\times 10^{24}\,\frac{m_{n}^{\ast}}{m_{n}}\frac{m_{p}^{\ast}}{m_{p}}\left(\frac{n_{e}}{0.16}\right)^{1/3}\frac{T_{9}^{4}}{\omega_{4}^{2}}\left(\frac{C_{l}}{100\text{ MeV}}\right)^{2}\Theta_{npl}\ \text{g cm}^{-1}\text{s}^{-1},$ (7)
with $\omega_{4}=\omega/(10^{4}\ \text{s}^{-1})$, $T_{9}=T/(10^{9}\ \text{K})$
and $C_{l}=4(1-2Y_{p})\rho\,dS(\rho)/d\rho-k_{F,l}^{2}/[12(1-2Y_{p})S(\rho)]$,
where $Y_{p}$ and $S(\rho)$ are the proton fraction and the density-dependent
symmetry energy, respectively. $n_{e}$ and $m^{\ast}$ are the electron number
density in units of fm${}^{-3}$ and the nucleonic effective mass,
respectively. The step function is $\Theta_{npl}=1$ if the DUrca process
opens, i.e., for $k_{F,n}<(k_{F,l}+k_{F,p})$. The $\xi_{l,0}^{(D)}$ can
include the superfluid effect through a multiplying control function, whose
tedious expression can be found in Haensel et al. (2000). The angular
frequency $\omega$ of the $r$-mode ($l=m=2$) in the corotating frame is given
by $\omega=2m\Omega/l(l+1)$ (Andersson 2001), where $\Omega$ is the angular
velocity of the rotating star. Similarly, the distribution function
$f(x_{n})f(x_{p})f(x_{N})f(x_{N^{\prime}})$ that appears in the phase-space
integral (Haensel et al. 2001) for the bulk viscosity produced by the MUrca
processes is replaced by
$[Z_{F,n}f(x_{n})][Z_{F,p}f(x_{p})][Z_{F,N}f(x_{N})][Z_{F,N^{\prime}}f(x_{N^{\prime}})]$,
and thus we obtain
$\displaystyle\xi_{l}^{(Mn)}=Z_{F,n}^{3}Z_{F,p}\,\xi_{l,0}^{(Mn)},$ (8)
$\displaystyle\xi_{l}^{(Mp)}=Z_{F,n}Z_{F,p}^{3}\,\xi_{l,0}^{(Mp)},$ (9)
where the superscript $N=n\,(p)$ denotes the neutron (proton) branch of the
MUrca processes. The $\xi_{l,0}^{(Mn)}$ and $\xi_{l,0}^{(Mp)}$ are given by
(Haensel et al. 2001)
$\displaystyle\xi_{e,0}^{(Mn)}=1.49\times 10^{19}\left(\frac{m_{n}^{\ast}}{m_{n}}\right)^{3}\frac{m_{p}^{\ast}}{m_{p}}\left(\frac{n_{p}}{0.16}\right)^{1/3}\frac{T_{9}^{6}}{\omega_{4}^{2}}\left(\frac{C_{e}}{100\text{ MeV}}\right)^{2}\alpha_{n}\beta_{n}\ \text{g cm}^{-1}\text{s}^{-1},$ (10)
$\displaystyle\xi_{e,0}^{(Mp)}=\xi_{e,0}^{(Mn)}\left(\frac{m_{p}^{\ast}}{m_{n}^{\ast}}\right)^{2}\frac{(3k_{F,p}+k_{F,e}-k_{F,n})^{2}}{8k_{F,p}k_{F,e}}\Theta_{pe},$ (11)
$\displaystyle\xi_{\mu,0}^{(Mn)}=\xi_{e,0}^{(Mn)}\left(\frac{k_{F,\mu}}{k_{F,e}}\right)\left(\frac{C_{\mu}}{C_{e}}\right)^{2},$ (12)
$\displaystyle\xi_{\mu,0}^{(Mp)}=\xi_{e,0}^{(Mn)}\left(\frac{C_{\mu}m_{p}^{\ast}}{C_{e}m_{n}^{\ast}}\right)^{2}\frac{(3k_{F,p}+k_{F,\mu}-k_{F,n})^{2}}{8k_{F,p}k_{F,e}}\Theta_{p\mu},$ (13)
with $\Theta_{pl}=1$ for $k_{F,n}<(k_{F,l}+3k_{F,p})$ and $\Theta_{pl}=0$
otherwise. We use $\alpha_{n}=1.76-0.63(1.68\ \text{fm}^{-1}/k_{F,n})^{2}$ and
$\beta_{n}=0.68$ from Page et al. (2004). The tedious control functions
measuring the superfluid effect can be found in Haensel et al. (2001).
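The $Z$-factor corrections of Eqs. (6), (8) and (9) are simple multiplicative quenching factors; a minimal sketch using the fits of Eq. (1):

```python
def xi_durca(xi0, z_n, z_p):
    """Eq. (6): DUrca bulk viscosity quenched by Z_n Z_p."""
    return z_n * z_p * xi0

def xi_murca_neutron(xi0, z_n, z_p):
    """Eq. (8): neutron-branch MUrca viscosity, factor Z_n^3 Z_p."""
    return z_n**3 * z_p * xi0

def xi_murca_proton(xi0, z_n, z_p):
    """Eq. (9): proton-branch MUrca viscosity, factor Z_n Z_p^3."""
    return z_n * z_p**3 * xi0

# At rho = 0.17 fm^-3, Eq. (1) gives Z_n ~ 0.86 and Z_p ~ 0.70, so the
# neutron MUrca branch is quenched by ~0.86**3 * 0.70 ~ 0.44, roughly the
# ~50% reduction quoted in the next paragraph.
```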
Figure 2: (Color online) The logarithm of $\xi\ (\text{g cm}^{-1}\text{s}^{-1})$
as a function of temperature $T$ (in units of $10^{8}$ K). The angular
frequency is $\Omega=10^{4}$ s${}^{-1}$. The results calculated without the
superfluidity (SF) and $Z$-factor, with the $Z$-factor only, with the
${}^{3}PF_{2}$ superfluidity only, and with both the superfluidity and the
$Z$-factor are shown for comparison.
The calculated $\xi$, involving the electron and muon branches, as a function
of the temperature $T$, without and with the $Z$-factor and the superfluidity
it quenches, is displayed in Fig. 2, taking the density $\rho=0.17$
fm${}^{-3}$ and rotation angular frequency $\Omega=10^{4}\ \text{s}^{-1}$ as
an example. The $Z$-factor reduces $\xi$ by about $50\%$ at $\rho=0.17$
fm${}^{-3}$, and this $Z$-factor effect becomes more substantial at high
densities, according to Eqs. (1), (8) and (9). When the temperature drops
below the neutron ${}^{3}PF_{2}$ superfluid critical temperature, $\xi$ is
significantly reduced; for instance, at $T=10^{7}$ K, $\xi$ is lowered by
about six orders of magnitude. The corresponding damping time scale
$\tau_{\xi}$ is thus enlarged, especially at low temperatures. However, the
bulk viscosity dominates the damping mechanism only at high temperatures
($\gtrsim 10^{9}$ K), at which the neutron ${}^{3}PF_{2}$ superfluidity
vanishes. It has been concluded that the shear viscosity primarily stems from
electron scattering instead of neutron scattering (Shternin & Yakovlev 2008;
Vidana 2012) at temperatures $T\gtrsim 10^{7}$ K. Shang et al. (unpublished)
found that the neutron shear viscosity $\eta_{n}$ is enhanced by the
$Z$-factor effect but is lowered much more strongly by the onset of neutron
triplet superfluidity. As a result, the electron shear viscosity $\eta_{e}$ is
generally larger than the neutron one $\eta_{n}$ even at very low
temperatures. We therefore still employ the widely used formulas for the
electron shear viscosity $\eta_{e}$ (Shternin & Yakovlev 2008; Alford et al.
2012) and the unimportant neutron shear viscosity $\eta_{n}$ (Cutler &
Lindblom 1987), which are respectively given by
$\displaystyle\eta_{e}=4\times 10^{-26}\left(Y_{p}n_{b}\right)^{14/9}T^{-5/3}\ \text{g cm}^{-1}\text{s}^{-1},$ (14)
$\displaystyle\eta_{n}=2\times 10^{18}\rho_{15}^{9/4}T_{9}^{-2}\ \text{g cm}^{-1}\text{s}^{-1},$ (15)
where $\rho_{15}$ and $T_{9}$ are the density and temperature in units of
$10^{15}$ g cm${}^{-3}$ and $10^{9}$ K, respectively, and $n_{b}$ is the
baryon number density in units of cm${}^{-3}$.
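Direct transcriptions of Eqs. (14)-(15), with inputs following the unit conventions just stated:

```python
def eta_electron(y_p, n_b, temp):
    """Electron shear viscosity, Eq. (14); n_b in cm^-3, temp in K,
    result in g cm^-1 s^-1 (Shternin & Yakovlev 2008; Alford et al. 2012)."""
    return 4.0e-26 * (y_p * n_b)**(14.0/9.0) * temp**(-5.0/3.0)

def eta_neutron(rho15, t9):
    """Neutron shear viscosity, Eq. (15); rho15 in 10^15 g cm^-3,
    t9 in 10^9 K (Cutler & Lindblom 1987)."""
    return 2.0e18 * rho15**(9.0/4.0) * t9**(-2.0)
```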
The time scales for bulk viscosity (Lindblom et al. 2002; Nayyar et al. 2006),
shear viscosity (Lindblom et al. 1998), and the $r$-mode growth due to
gravitational radiation (Lindblom et al. 1998) are respectively given by
$\displaystyle\frac{1}{\tau_{\xi}}=\frac{4\pi}{690}\left(\frac{\Omega^{2}}{\pi G\overline{\rho}}\right)^{2}R^{2l-2}\left(\int_{0}^{R}\rho r^{2l+2}dr\right)^{-1}\int_{0}^{R}\xi\left(\frac{r}{R}\right)^{6}\left[1+0.86\left(\frac{r}{R}\right)^{2}\right]r^{2}dr,$ (16)
$\displaystyle\frac{1}{\tau_{\eta}}=(l-1)(2l+1)\left(\int_{0}^{R}\rho r^{2l+2}dr\right)^{-1}\int_{0}^{R}\eta r^{2l}dr,$ (17)
$\displaystyle\frac{1}{\tau_{\text{GW}}}=\frac{32\pi G\Omega^{2l+2}}{c^{2l+3}}\frac{(l-1)^{2l}}{\left[(2l+1)!!\right]^{2}}\left(\frac{l+2}{l+1}\right)^{2l+2}\int_{0}^{R}\rho r^{2l+2}dr,$ (18)
where $\overline{\rho}$ is the average density and $G$ is the gravitational
constant. The viscous dissipation at the viscous boundary layer (VBL) between
a perfectly rigid crust and the fluid core has been proposed as the primary
damping mechanism in some of the literature, and its time scale is (Bildsten &
Ushomirsky 2000; Lindblom et al. 2000)
$\tau_{\text{VBL}}=\frac{1}{2\Omega}\frac{2^{l+3/2}(l+1)!}{l(2l+1)!!I_{l}}\sqrt{\frac{2\Omega R_{c}^{2}\rho_{c}}{\eta_{c}}}\int_{0}^{R_{c}}\frac{\rho}{\rho_{c}}\left(\frac{r}{R_{c}}\right)^{2l+2}\frac{dr}{R_{c}},$ (19)
where $\rho_{c}$, $R_{c}$, and $\eta_{c}$ are the density, radius, and shear
viscosity of the NS matter at the outer edge of the core (or equivalently the
inner edge of the crust). For the $l=m=2$ mode, $I_{2}=0.80411$ (Lindblom et
al. 2000; Rieutord 2001). Yet a rigid crust is an idealized model: the
relative motion (slippage) between the crust and core reduces the damping by
as much as a factor of $f=10^{2}-10^{3}$ (Levin & Ushomirsky 2001). In the
present work, we select $f=10^{2}$, as in Ho et al. (2011), corresponding to a
slippage factor $\mathcal{S}=0.1$. Such a role is questioned if the core-crust
boundary is defined by a continuous transition from non-uniform to uniform
matter through 'nuclear pasta' phases with different shapes (Pethick &
Potekhin 1998), in which case the VBL is smeared out (Gearheart et al. 2011).
An overall time scale of the $r$-mode, which includes the exponential growth
induced by the Chandrasekhar-Friedmann-Schutz mechanism and the decay due to
viscous damping, is given by
$1/\tau=-1/\tau_{\text{GW}}+1/\tau_{\xi}+1/\tau_{\eta}+1/f\tau_{\text{VBL}}$.
If $1/\tau<0$, the mode will grow exponentially, while it will quickly decay
if $1/\tau>0$. Therefore, a NS will be stable against the $r$-mode instability
if its angular velocity $\Omega$ is smaller than a critical value
$\Omega_{c}$. A star with $\Omega>\Omega_{c}$ will lose angular momentum
through gravitational radiation until $\Omega$ falls below $\Omega_{c}$.
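Given implementations of the time scales (16)-(19) for a chosen stellar model, the critical curve follows from a one-dimensional root-find in $\Omega$ at each temperature; a schematic sketch, with all rate functions left as user-supplied callables:

```python
from scipy.optimize import brentq

def growth_rate(omega, temp, inv_tau_gw, inv_tau_xi, inv_tau_eta,
                inv_tau_vbl, f=100.0):
    """Net r-mode growth rate: 1/tau_GW minus the damping rates of
    Eqs. (16), (17) and (19); the mode is unstable where this is > 0.
    The slippage factor f reduces the VBL damping rate."""
    return (inv_tau_gw(omega, temp) - inv_tau_xi(omega, temp)
            - inv_tau_eta(omega, temp) - inv_tau_vbl(omega, temp) / f)

def critical_omega(temp, rates, omega_lo=1.0, omega_hi=1.0e4):
    """Critical angular velocity Omega_c(T), where the growth rate
    changes sign; assumes a single root inside the bracket."""
    return brentq(lambda w: growth_rate(w, temp, *rates),
                  omega_lo, omega_hi)
```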
## III Effects of the $Z$-factor and the superfluidity on $r$-mode
instability
Usually the stellar surface temperature can be obtained from fitting
black-body spectra of LMXBs in quiescence, whereas the core temperature cannot
be obtained readily. Ho et al. (2012) assumed that the stellar interior is
isothermal and that the heating is balanced by neutrino cooling in
steady-state NSs of LMXBs, i.e., $L_{\text{heat}}=L_{\nu}$, to infer the
stellar core temperature for a NS with spin frequency $\nu_{s}=\Omega/2\pi$.
The $L_{\text{heat}}$ is given by
$L_{\text{heat}}=0.065(\nu_{s}/300\ \text{Hz})L_{\text{acc}}$ (Brown &
Ushomirsky 2000), where $L_{\text{acc}}$ is the accretion luminosity computed
using the observed flux and distance. Usually, one assumes that a canonical NS
cools via the MUrca processes when the core temperature is $T\gtrsim 10^{8}$
K. Such a neutrino emission process can be suppressed substantially by the
neutron ${}^{3}PF_{2}$ superfluidity when the core temperature is below the
critical temperature $T_{c}$. On the other hand, the superfluidity is able to
enhance the emission through the Cooper pair breaking and formation (PBF)
process when the temperature is slightly below $T_{c}$. This PBF process tends
to be more efficient than the MUrca process, which can result in rapid NS
cooling (Page et al. 2011; Shternin et al. 2011).
We calculate the neutrino emissivity $Q$ for $1.4M_{\odot}$ canonical NSs with
the inclusion of the $Z$-factor and neutron superfluid effects. The neutrino
emissivities for the DUrca, MUrca and PBF processes are given by
$Q^{(D)}=Z_{F,n}Z_{F,p}Q_{0}^{(D)}$,
$Q^{(Mn)}=Z_{F,n}^{3}Z_{F,p}Q_{0}^{(Mn)}$,
$Q^{(Mp)}=Z_{F,n}Z_{F,p}^{3}Q_{0}^{(Mp)}$ and
$Q^{(PBF)}=Z_{F,n}^{2}Q_{0}^{(PBF)}$ (Dong et al. 2016). Here $Q_{0}^{(D)}$,
$Q_{0}^{(Mn)}$, $Q_{0}^{(Mp)}$ and $Q_{0}^{(PBF)}$ are the well-established
emissivities that include the possible superfluid effect through control
functions (see Yakovlev et al. 1999, 2001 for details) but without the
$Z$-factor effect ($Z=1$). The corresponding luminosity is calculated by the
volume integral $L=\int Q\,dV$. The NS interior is assumed to be composed of
$npe\mu$ dense matter without exotic degrees of freedom. The stellar structure
for a non-rotating NS is established by integrating the
Tolman-Oppenheimer-Volkoff (TOV) equation with the EOS from the user-friendly
IMP1 Skyrme energy density functional (Dong & Shang 2020). This EOS is close
to the APR EOS, because the APR EOS for pure neutron matter served as a
calibration in building the IMP1 Skyrme interaction (Dong & Shang 2020), and
it provides a good description of neutron star properties. With this EOS, the
results do not depend substantially on the assumed stellar mass.
Figure 3: (Color online) The calculated neutrino luminosities from the MUrca
and PBF processes for canonical NSs, where the $Z$-factor effect and the
neutron ${}^{3}PF_{2}$ superfluidity it quenches are included. The stellar
structure is constructed based on the EOS from the IMP1 Skyrme interaction.
The dashed horizontal lines denote the $L_{\text{heat}}$ obtained for NSs in
known LMXBs using the observed flux, distance and spin frequency (Watts et al.
2008). The intersection $L_{\text{heat}}=L_{\nu}$ yields the stellar core
temperature for each NS.
The calculated luminosities $L_{\text{MUrca}}$, $L_{\text{PBF}}$ and the total
neutrino luminosity $L_{\nu}=L_{\text{MUrca}}+L_{\text{PBF}}$ are presented in
Fig. 3. At temperatures lower than the superfluid critical temperature
$T_{c}$, $L_{\text{PBF}}$ is about $2\sim 3$ orders of magnitude larger than
$L_{\text{MUrca}}$, resulting in a lower inferred core temperature $T$. The
observed fluxes, distances and spin frequencies from Watts et al. (2008) are
used to obtain $L_{\text{heat}}$ for the NSs in known LMXBs. The intersection
of the curves $L_{\nu}$ and $L_{\text{heat}}$ gives the stellar core
temperature for each NS. Unlike in Ho et al. (2011a, 2011b), each NS discussed
here has a unique inferred core temperature, and it lies in the range
$(1\sim 3)\times 10^{8}$ K.
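Numerically, the inferred core temperature is the root of $L_{\nu}(T)=L_{\text{heat}}$; a minimal sketch, with $L_{\nu}$ supplied as a callable built from the volume-integrated emissivities:

```python
from scipy.optimize import brentq

def heating_luminosity(nu_s_hz, l_acc):
    """L_heat = 0.065 (nu_s / 300 Hz) L_acc (Brown & Ushomirsky 2000)."""
    return 0.065 * (nu_s_hz / 300.0) * l_acc

def core_temperature(l_heat, l_nu, t_lo=1.0e7, t_hi=1.0e10):
    """Solve L_nu(T) = L_heat for the steady-state core temperature.
    l_nu must be a monotonically increasing callable of T (erg/s),
    e.g. the sum L_MUrca(T) + L_PBF(T) shown in Fig. 3."""
    return brentq(lambda t: l_nu(t) - l_heat, t_lo, t_hi)
```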
Figure 4: The observed spin frequency $\nu_{s}$ and the inferred core
temperature $T$ for NSs in known LMXBs. The hollow squares do not include the
VBL damping, i.e., $1/\tau_{\text{GW}}=1/\tau_{\xi}+1/\tau_{\eta}$. The black
squares include the VBL damping, i.e.,
$1/\tau_{\text{GW}}=1/\tau_{\xi}+1/\tau_{\eta}+1/f\tau_{\text{VBL}}$, with
$f=100$. The regions above the critical curves are the $r$-mode instability
regions for $M=1.4M_{\odot}$ canonical NSs without (a) and with (b) the
$Z$-factor and its quenched superfluidity (SF).
Figure 4 exhibits the $\nu_{s}-T$ plot, where the core temperatures $T$ are
inferred from $L_{\nu}=L_{\text{heat}}$ and the spin frequencies $\nu_{s}$
have been observed for the NSs in these LMXBs. The error bar on the inferred
$T$ is not included here because it does not affect the following discussion.
The $r$-mode instability window is determined by the time-scale balance
$1/\tau_{\text{GW}}=1/\tau_{\xi}+1/\tau_{\eta}+1/f\tau_{\text{VBL}}$. This
critical curve is almost unaffected by the $Z$-factor and superfluid effects,
because the electron shear viscosity contributes most of the viscous damping
in this temperature range. When the $Z$-factor and superfluidity are excluded,
most of these millisecond pulsars are located in the instability window, as
shown in Fig. 4(a). When these two effects are included, the inferred core
temperatures of some rapidly rotating NSs are reduced by a factor of around 2,
as displayed in Fig. 4(b), attributable to the enhanced neutrino emissivity
from the PBF process (the $Z$-factor itself does not contribute distinctly).
Nevertheless, this does not substantially modify the conclusion that many
systems are located inside the unstable region, even when the damping due to
the VBL is included. So far, it is not fully understood what causes LMXBs to
undergo long versus short recurrence-time bursts. Intriguingly, as shown in
Fig. 4(b), these NSs are clearly divided into two categories: a
high-temperature branch ($\sim 3\times 10^{8}$ K) and a low-temperature branch
($\sim 10^{8}$ K). The LMXBs with short recurrence times may have higher core
temperatures ($\sim 3\times 10^{8}$ K) than those with long recurrence times
($\sim 10^{8}$ K), as proposed by Ho et al. (2011). A higher temperature is
able to shorten the time intervals between ignitions of nuclear burning.
The EOS used above is calculated by employing the IMP1 Skyrme force. Its
symmetry energy is not sufficiently stiff to allow the onset of the DUrca
process in canonical NSs or even in massive NSs. Yet Brown et al. (2018)
recently showed that the NS in the transient system MXB 1659-29 has a core
neutrino luminosity that is consistent with the DUrca reaction occurring in a
small fraction of the core and substantially exceeding the MUrca processes,
and they fitted an NS mass of $M\sim 1.6M_{\odot}$ in their model. We
investigate the role of the symmetry energy in the $\nu_{s}-T$ relation,
employing the density-dependent symmetry energy written as
$\displaystyle S(\rho)=13.0\left(\frac{\rho}{\rho_{0}}\right)^{2/3}+C_{1}\left(\frac{\rho}{\rho_{0}}\right)+C_{2}\left(\frac{\rho}{\rho_{0}}\right)^{\gamma},\qquad C_{1}=19.4-\frac{18.3}{2.06-3\gamma\cdot 0.69^{\gamma}},\qquad C_{2}=19.4-C_{1},$ (20)
to supplement the EOS of symmetric matter from IMP1. It can be considered an
extension of the DDM3Y-shape expression (Mukhopadhyay et al. 2007; Dong et al.
2012; Dong et al. 2013; Fan et al. 2014) and enables us to reproduce the
symmetry energy $S=32.4$ MeV at the nuclear saturation density $\rho_{0}$ and
the slope parameter $L=42$ MeV at $\rho=0.11$ fm${}^{-3}$ (Dong et al. 2018).
The single free parameter $\gamma$, or equivalently the slope parameter $L$
(distinguished from the aforementioned luminosity $L$) at $\rho_{0}$, controls
its density dependence (i.e., whether the symmetry energy is stiff or soft).
Vidana (2012) and Wen et al. (2012) discussed the effects of the symmetry
energy on the $r$-mode instability and reached opposite conclusions. Figure 5
presents the calculated $r$-mode instability critical curve and the location
of each NS with $M_{\text{TOV}}=1.6M_{\odot}$ in LMXBs in the $\nu_{s}-T$
plot, taking $L=50$, $60$ and $80$ MeV as examples. If the DUrca process
opens, as in the $L=60$ MeV case, the inferred core $T$ is reduced by about
one order of magnitude because of the high efficiency of the DUrca reaction.
Although the neutrino emissivity of the DUrca reaction can be about eight
orders of magnitude larger than that of the MUrca process, its influence on
the inferred core temperature is only about one order of magnitude because of
the strong temperature scaling in the $L_{\nu}-T$ relation. Many NSs then lie
inside the stable region, and the others are located closer to the stability
boundary. Therefore, the presence of the DUrca process could alleviate the
disagreement between the observed and predicted results to a large extent.
Yet, if the VBL is smeared out because of the nuclear pasta phases, the DUrca
process still cannot modify the conclusion that most NSs lie well inside the
unstable region.
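Equation (20) is self-contained once $\gamma$ is chosen; the sketch below also shows the built-in constraint $S(\rho_{0})=13.0+C_{1}+C_{2}=32.4$ MeV (the saturation density $\rho_{0}=0.16$ fm$^{-3}$ is an assumed value).

```python
def symmetry_energy(rho, gamma, rho0=0.16):
    """Density-dependent symmetry energy of Eq. (20), in MeV;
    rho and rho0 in fm^-3."""
    c1 = 19.4 - 18.3 / (2.06 - 3.0 * gamma * 0.69**gamma)
    c2 = 19.4 - c1
    u = rho / rho0
    return 13.0 * u**(2.0/3.0) + c1 * u + c2 * u**gamma

# At rho = rho0 the parameterization returns 13.0 + 19.4 = 32.4 MeV
# for any gamma, as required.
print(symmetry_energy(0.16, gamma=0.5))
```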
Additional damping mechanisms or physics are perhaps required to reconcile
theory and observations. The mutual friction due to vortices in a rotating
superfluid (electrons scattered off magnetized vortices) is claimed to be
unlikely to suppress the $r$-mode instability in rapidly spinning NSs, but for
a large 'drag' parameter $\mathcal{R}$ the mutual friction is sufficiently
strong to suppress this instability completely as soon as the core becomes
superfluid (Haskell et al. 2009). In any case, this vortex-mediated
mutual-friction damping is a rather complicated mechanism that is difficult to
calculate precisely, calling for further investigation. In addition, the
hyperonic bulk viscosity caused by nonleptonic weak interactions has also been
found to stabilize the oscillation mode effectively (Jha et al. 2010). Gusakov
et al. (2014) argued that finite-temperature effects in the superfluid core
lead to a resonance coupling and enhanced damping of oscillation modes at
certain temperatures, so that rapidly rotating NSs may spend a long time at
these resonance temperatures. Moreover, the theoretical understanding can be
consistent with observations if the $r$-mode saturation amplitude is so small
that the gravitational-wave torque cannot counteract the accretion torque,
even though the $r$-mode heating is balanced by stellar cooling. In that case,
the $r$-mode instability has no impact on the spin or thermal evolution of
NSs. Recent investigations indeed suggest low saturation amplitudes
$\alpha_{m}$. For instance, Haskell & Patruno (2017) constrained the amplitude
of an unstable $r$-mode from the spin-down of PSR J1023+0038 and obtained
$\alpha_{m}\approx 5\times 10^{-8}$. Haskell et al. (2012) concluded that for
most known LMXBs in the unstable region one has $\alpha_{m}=10^{-9}\sim
10^{-8}$. Using known X-ray upper bounds on the temperatures and luminosities
of several non-accreting millisecond radio pulsars, Schwenzer et al. (2017)
derived $r$-mode amplitudes as low as $\alpha_{m}\lesssim 10^{-8}$.
Figure 5: (Color online) Similar to Fig. 4, but the NS masses are assumed to
be $M_{\text{TOV}}=1.6M_{\odot}$. EOSs with various density-dependent
symmetry energies are employed, where the onset of the DUrca process due to a
stiff symmetry energy can lead to a lower core temperature. The dashed (solid)
curves are the critical curves without (with) the VBL damping, and $f=10^{2}$.
## IV Summary
We have investigated the bulk viscosity $\xi$ and the $r$-mode instability
under the influence of both the $Z$-factor and the neutron superfluidity it
quenches, employing recently calculated superfluid gaps and $Z$-factors at the
Fermi surfaces obtained within a microscopic nuclear many-body approach. The
$Z$-factor effect reduces the bulk viscosity $\xi$ by at most several times,
while the neutron ${}^{3}PF_{2}$ superfluidity is able to reduce $\xi$ by
several orders of magnitude when the core temperature $T$ is lower than the
critical temperature. With the inclusion of superfluidity, the PBF process
opens when the stellar core temperature is slightly lower than the critical
temperature, leading to enhanced neutrino emission. As a result, the
superfluidity noticeably decreases the inferred core temperatures of some
relatively low-temperature NSs in LMXBs. Interestingly, because of the neutron
${}^{3}PF_{2}$ superfluidity, the core temperatures of the NSs in the LMXBs
discussed here are divided into two groups, high and low. The NSs with long
recurrence times for nuclear-powered bursts are found to have lower core
temperatures ($10^{8}$ K), while those with short recurrence times have higher
core temperatures ($3\times 10^{8}$ K). However, most NSs are still predicted
to be $r$-mode unstable; in other words, the introduction of the $Z$-factor
and neutron triplet superfluidity cannot solve this problem fundamentally. If
the DUrca process occurs due to a sufficiently stiff symmetry energy, the
inferred core temperature is reduced by about one order of magnitude; many NSs
are then located in the $r$-mode stable window and the others are closer to it
when the VBL damping is taken into account. In other words, more NSs will lie
inside the $r$-mode stability window for interactions that give larger values
of the symmetry energy slope $L$. However, the existence of the most rapidly
rotating NSs, such as the 716 Hz PSR J1748-2446ad, remains a puzzle. If the
$r$-mode saturation amplitude is too small to affect the spin or thermal
evolution of NSs, as suggested by the inferred $\alpha\approx 5\times 10^{-8}$
from the spin-down of PSR J1023+0038 by Haskell & Patruno (2017), the
theoretical understanding can be consistent with observations.
## Acknowledgement
J. M. Dong would like to thank L. J. Wang for helpful suggestions. This work
was supported by the National Natural Science Foundation of China under Grant
No. 11775276, by the Strategic Priority Research Program of the Chinese
Academy of Sciences under Grant No. XDB34000000, and by the Youth Innovation
Promotion Association of the Chinese Academy of Sciences under Grant No.
Y201871.
## Data Availability
The data used to support the findings of this study are available from the
corresponding author upon request.
## References
* (1) Akmal A., Pandharipande V. R., Ravenhall D. G. 1998, Phys. Rev. C, 58, 1804
* (2) Alford M. G., Mahmoodifar S., Schwenzer K., 2012, Phys. Rev. D, 85, 024007
* (3) Andersson N., 2001, Class. Quant. Grav., 20, R105
* (4) Andersson N., Kokkotas K. D., 2001, IJMPD, 10, 381
* (5) Abbott B. P. et al., 2016, Phys. Rev. Lett., 116, 061102
* (6) Abbott B. P. et al., 2017, Phys. Rev. Lett., 119, 161101
* (7) Blaschke D., Grigorian H., Voskresensky D. N., Weber F. 2012, Phys. Rev. C, 85, 022802(R)
* (8) Blaschke D., Grigorian H., Voskresensky D. N. 2013, Phys. Rev. C, 88, 065805
* (9) Bildsten L., Ushomirsky G., 2000, ApJ, 529, L33
* (10) Bonanno A., Baldo M., Burgio G. F., Urpin V. 2014, A&A, 561, L5
* (11) Bonazzola S., Gourgoulhon E., 1996, A&A, 312, 675
* (12) Brown, E. F., Cumming, A., Fattoyev, F. J., Horowitz, C. J., Page, D., Reddy, S., 2018, Phys. Rev. Lett., 120, 182701
* (13) Brown E. F., Ushomirsky G., 2000, ApJ, 536, 915
* (14) Cheng Q., Yu Y. W., Zheng X. P., 2015, MNRAS 454, 2299
* (15) Cheng Q., Zhang S. N., Zheng X. P., 2017, Phys. Rev. D, 95, 083003
* (16) Cutler C., Lindblom L., 1987, ApJ, 314, 234
* (17) Dall’Osso S., Shore S. N., Stella L., 2009, MNRAS, 398, 1869
* (18) Ding D., Rios A., Dussan H., Dickhoff W. H., Witte S. J., Carbone A., Polls A., 2016, Phys. Rev. C, 94, 025802
* (19) Dong J., Zhang H., Wang L., Zuo W., 2013, Phys. Rev. C, 88, 014302
* (20) Dong J. M., Lombardo U., Zhang H. F., Zuo W., 2016, ApJ, 817, 6
* (21) Dong J., Zuo W., Gu J., Lombardo U., 2012, Phys. Rev. C, 85, 034308
* (22) Dong J. M., 2019, Pairing and neutron star cooling, private communication
* (23) Dong J. M., Lombardo U., Zuo W., 2013, Phys. Rev. C, 87, 062801(R)
* (24) Dong J. M., Shang X. L., 2020, Phys. Rev. C, 101, 014305
* (25) Dong J. M., Wang L. J., Zuo W., Gu J. Z., 2018, Phys. Rev. C, 97, 034318
* (26) Duer M. et al., 2018, Nature, 560, 617
* (27) Fan X., Dong J., Zuo W., 2014, Phys. Rev. C, 89, 017305
* (28) Frankfurt L., Sargsian M., Strikman M., 2008, IJMPA, 23, 2991
* (29) Gearheart M., Newton W. G., Hooker J., Li B., 2011, MNRAS, 418, 2343
* (30) Gusakov M. E., Chugunov A. I., Kantor E. M., 2014, Phys. Rev. Lett., 112, 151101
* (31) Haensel P., Levenfish K. P., Yakovlev D. G., 2000, A&A, 357, 1157
* (32) Haensel P., Levenfish K. P., Yakovlev D. G., 2001, A&A, 372, 130
* (33) Haskell B., 2015, IJMPE, 24, 1541007
* (34) Haskell B., Andersson N., Passamonti A., 2009, MNRAS, 397, 1464
* (35) Haskell B., Degenaar N., Ho W. C. G., 2012, MNRAS, 424, 93
* (36) Haskell B., Patruno A., 2017, Phys. Rev. Lett., 119, 161103
* (37) Heinke C. O., Ho W. C. G., 2010, ApJL, 719, L167
* (38) Hen O. et al., 2014, Science, 346, 614
* (39) Hen O., Miller G. A., Piasetzky E., Weinstein L. B., 2017, Rev. Mod. Phys., 89, 045002
* (40) Hessels J. W. T. et al., 2006, Science, 311, 1901
* (41) Ho W. C. G., 2011, MNRAS, 418, L99
* (42) Ho W. C. G., Andersson N., Haskell B., 2011, Phys. Rev. Lett., 107, 101101
* (43) Ho W. C. G., Elshamouty K. G., Heinke C. O., Potekhin A. Y., 2015, Phys. Rev. C, 91, 015806
* (44) Jha T. K., Mishra H., Sreekanth V., 2010, Phys. Rev. C, 82, 025803
* (45) Kadanoff L. P., Baym G., 1962, Quantum Statistical Mechanics, (New York)
* (46) Kokkotas K. D., Schwenzer K., 2016, Eur. Phys. J. A, 52, 38
* (47) Lattimer J. M., Prakash M., 2004, Science, 304, 536
* (48) Levin Y., Ushomirsky G., 2001, MNRAS, 324, 917
* (49) Lindblom L., Owen B. J., Ushomirsky G., 2000, Phys. Rev. D, 62, 084030
* (50) Lindblom L., Owen B. J., 2002, Phys. Rev. D, 65, 063006
* (51) Lindblom L., Owen B. J., Morsink S. M., 1998, Phys. Rev. Lett., 80, 4843
* (52) Luttinger J. M., 1960, Phys. Rev., 119, 1153
* (53) Marassi S., Ciolfi R., Schneider R., Stella L., Ferrari V., 2011, MNRAS, 411, 2549
* (54) Migdal A. B., 1957, Sov. Phys. JETP, 5, 333
* (55) Mukhopadhyay T., Basu D. N., 2007, Nucl. Phys. A, 789, 201
* (56) Nayyar M., Owen B. J., 2006, Phys. Rev. D, 73, 084001
* (57) Newton W. G., Murphy K., Hooker J., Li B.-A., 2013, ApJL, 779, L4
* (58) Page D., Lattimer J. M., Prakash M., Steiner A. W., 2004, ApJS, 155, 623
* (59) Page D., Geppert U., Weber F., 2006, Nucl. Phys. A, 777, 497
* (60) Page D., Prakash M., Lattimer J. M., Steiner A. W., 2011, Phys. Rev. Lett., 106, 081101
* (61) Pethick C., Potekhin A. Y., 1998, Phys. Lett. B, 427, 7
* (62) Posselt B., Pavlov G. G., Suleimanov V., Kargaltsev O., 2013, ApJ, 779, 186
* (63) Regimbau T., de Freitas Pacheco J. A., 2001, A&A, 374, 182
* (64) Rieutord M., 2001, ApJ, 550, 443
* (65) Schwenzer K., Boztepe T., Güver T., Vurgun E., 2017, MNRAS, 466, 2560
* (66) Sedrakian A., 2013, A&A, 555, L10
* (67) Shang X. L., et al., (unpublished)
* (68) Shternin P. S. et al., 2011, MNRAS, 412, L108
* (69) Shternin P. S., Yakovlev D. G., 2008, Phys. Rev. D, 78, 063006
* (70) Subedi R. et al., 2008, Science, 320, 1476
* (71) Vidana I., 2012, Phys. Rev. C, 85, 045808
* (72) Watts A. L., Krishnan B., Bildsten L., Schutz B. F., 2008, MNRAS, 389, 839
* (73) Wen D. -H., Newton W. G., Li B.-A., 2012, Phys. Rev. C, 85, 025801
* (74) Yakovlev D. G., Levenfish K. P., Shibanov Y. A., 1999, Phys. Uspekhi, 42, 737
* (75) Yakovlev D. G., Kaminker A. D., Gnedin O. Y., Haensel P., 2001, Phys. Rep., 354, 1
* (76) Yin P., Li J. Y., Wang P., Zuo W., 2013, Phys. Rev. C, 87, 014314
* (77) Yin P., Zuo W., 2013, Phys. Rev. C, 88, 015804
# Martingale solution of stochastic hybrid Korteweg - de Vries - Burgers
equation
Anna Karczewska Faculty of Mathematics, Computer Science and Econometrics
University of Zielona Góra, Szafrana 4a, 65-516 Zielona Góra, Poland
<EMAIL_ADDRESS>and Maciej Szczeciński Faculty of Management,
Computer Science and Finance
Wrocław University of Economics, Komandorska 118/120, 53-345 Wrocław, Poland
<EMAIL_ADDRESS>
###### Abstract.
In the paper, we consider a stochastic hybrid Korteweg - de Vries - Burgers
type equation with multiplicative noise in the form of a cylindrical Wiener
process. We prove the existence of a martingale solution to the equation
studied. The proof of the existence of the solution is based on two
studied. The proof of the existence of the solution is based on two
approximations of the considered problem and the compactness method. First, we
introduce an auxiliary problem corresponding to the equation studied. Then, we
prove the existence of a martingale solution to this problem. Finally, we show
that the solution of the auxiliary problem converges, in some sense, to the
solution of the equation under consideration.
###### Key words and phrases:
KdV equation, Burgers equation, mild solution
###### 2010 Mathematics Subject Classification:
93B05, 93C25, 45D05, 47H08, 47H10
## 1\. Introduction
The deterministic hybrid Korteweg - de Vries - Burgers (hKdVB for short)
equation has been derived by Misra, Adhikary and Shukla [10] and Elkamash and
Kourakis [5] in the context of shock excitations in multicomponent plasma. The
hKdVB equation, derived in the stretched coordinates
$\xi=\epsilon^{\frac{1}{2}}(x-vt),~{}\tau=\epsilon^{\frac{3}{2}}t$ (where $v$
is the phase velocity of the wave), has the form
(1.1) $u_{\tau}+Au\,u_{\xi}+Bu_{3\xi}=Cu_{2\xi}-Du.$
In (1.1), $u(\xi,\tau)$ represents the electrostatic potential or electric
field pulse in the reference frame moving with the velocity $v$. Indices
denote partial derivatives, that is, $u_{\tau}=\partial u/\partial\tau$,
$u_{2\xi}=\partial^{2}u/\partial\xi^{2}$, and so on. The constants $A,B,C,D$
are related to parameters describing the properties of the plasma [5, Eq. (27)].
Although equation (1.1) was derived for dissipative dispersive waves in
multicomponent plasma, it can be applied to several other physical systems,
e.g., surface water waves and the motion of optical impulses in fibers. For
particular values of the constants $A,B,C,D$, the hKdVB equation (1.1) reduces
to the following special cases:
* •
the Korteweg - de Vries equation, when $C=D=0$;
* •
the damped (dissipative) KdV equation, when $C=0$;
* •
the Burgers equation, when $B=D=0$;
* •
the KdV-Burgers equation, when $D=0$;
* •
the damped Burgers equation, when $B=0$.
The term with $A\neq 0$ introduces nonlinearity, that with $B\neq 0$ is
responsible for dispersion, $C\neq 0$ supplies the diffusive term, and $D\neq 0$
introduces damping. All equations of these kinds were widely studied 30-40
years ago, and most of the physical ideas involved are already well understood (see, e.g.,
Lev Ostrovsky’s book [11] and references therein). On the other hand, during
the last few years, one can observe a renewal of interest in this field, mostly
due to extensions to higher-order equations.
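For orientation, the qualitative behaviour of (1.1) is easy to reproduce numerically. The following minimal pseudospectral sketch (ours, not from [5, 10]; periodic boundary conditions and illustrative coefficient values are assumed) integrates the deterministic equation with a Fourier discretization and classical RK4 time stepping:

```python
import numpy as np

# Minimal periodic pseudospectral sketch of the deterministic hKdVB equation
# u_t + A u u_x + B u_3x = C u_2x - D u (Eq. (1.1)).
# The coefficients below are illustrative placeholders, not plasma values.
A, B, C, D = 1.0, 1.0, 0.1, 0.05
N, L = 256, 40.0
x = np.linspace(0.0, L, N, endpoint=False)
k = 2.0 * np.pi * np.fft.fftfreq(N, d=L / N)   # angular wavenumbers

def rhs(u):
    u_hat = np.fft.fft(u)
    ux = np.real(np.fft.ifft(1j * k * u_hat))
    uxx = np.real(np.fft.ifft(-(k ** 2) * u_hat))
    uxxx = np.real(np.fft.ifft(-1j * k ** 3 * u_hat))
    return -A * u * ux - B * uxxx + C * uxx - D * u

u = np.exp(-((x - L / 2.0) ** 2))   # smooth pulse as initial datum
dt, nsteps = 1e-4, 5000
for _ in range(nsteps):             # classical fourth-order Runge-Kutta
    k1 = rhs(u)
    k2 = rhs(u + 0.5 * dt * k1)
    k3 = rhs(u + 0.5 * dt * k2)
    k4 = rhs(u + dt * k3)
    u = u + dt / 6.0 * (k1 + 2.0 * k2 + 2.0 * k3 + k4)
print("L2 norm after evolution:", np.sqrt(np.sum(u ** 2) * L / N))
```

Setting $C=D=0$ in the script recovers the dispersive KdV dynamics, while $C>0$ and $D>0$ visibly smooth and damp the pulse, in line with the classification above.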
Studies of the full generalized hybrid KdV-Burgers equation (1.1) have
appeared in the physical literature only in [5, 10]. Some approximate analytic
solutions and several cases of numerical solutions to (1.1) were subjects of
recent studies in [6].
The paper deals with a stochastic hybrid Korteweg - de Vries - Burgers type
equation. The presence of stochastic noise has deep physical grounds. In the
case of waves in plasma, it can be caused by thermal fluctuations, whereas in
the case of water surface waves by air pressure fluctuations due to the wind.
To the best of our knowledge, our paper is the first one which deals with the
stochastic hKdVB equation.
The main result of the paper, Theorem 2.2, provides the existence of a
martingale solution to the equation (2.1), which is the stochastic version of
the equation (1.1).
The idea of the proof of the existence of a martingale solution to (2.1) is
the following. First, we introduce an auxiliary problem (2.6) which we can
call $\varepsilon$-approximation of the equation (2.1). Then, in Lemma 2.3 we
prove that the problem (2.6) has a martingale solution. Here we use the
Galerkin approximation (4.1) of (2.6) and the tightness of the family of
distributions of the solutions to the approximation (4.1). Next, in Lemma 2.4
we show two estimates used in the proof of Theorem 2.2. Lemma 2.5 guarantees
the tightness of the family of distributions of solutions to the problem (2.6)
in a proper space. Finally, we prove that the solution to (2.6) converges, in
some sense, to the solution of the equation (2.1).
The paper is organized as follows. In section 2 we define the martingale
solution to some kind of stochastic hybrid Korteweg - de Vries - Burgers
equation (2.1) with a multiplicative Wiener noise on the interval $[0,T]$.
Then we formulate and prove Theorem 2.2. In the proof, some methods introduced
in [7] and extended in [4] have been adapted to the problem considered.
In section 3 Lemmas 2.4 and 2.5 used in the proof of Theorem 2.2 are proved.
Lemma 2.4 contains a version of estimates which are analogous to those
presented in [4] and [7].
In section 4 we give the detailed proof of Lemma 2.3. This lemma formulates
sufficient conditions for the existence of martingale solutions for the
$m$-dimensional Galerkin approximation of the Korteweg - de Vries - Burgers
equation with a multiplicative Wiener noise for arbitrary $m\in\mathbb{N}$.
## 2\. Existence of martingale solution
Denote $X:=[x_{1},x_{2}]\subset\mathbb{R}$, where
$-\infty<x_{1}<0<x_{2}<\infty$. We consider the following initial value
problem for the hybrid Korteweg - de Vries - Burgers type equation
(2.1)
$\begin{cases}du(t,x)+\big(Au(t,x)u_{x}(t,x)+Bu_{3x}(t,x)-Cu_{2x}(t,x)+Du(t,x)\big)dt=\Phi\left(u(t,x)\right)dW(t)\\ u(0,x)=u_{0}(x),\quad x\in X,\quad t\geq 0.\end{cases}$
In (2.1), $W(t)$, $t\geq 0$, is a cylindrical Wiener process adapted to the
filtration $\left\\{\mathscr{F}_{t}\right\\}_{t\geq 0}$, $u_{0}\in L^{2}(X)$
is a deterministic real-valued function. In (2.1)
$u(\omega,\cdot,\cdot):\mathbb{R}_{+}\times\mathbb{R}\rightarrow\mathbb{R}$
for all $\omega\in\Omega$. We assume that there exists $\lambda_{X}>0$ such
that
(2.2)
$\left|u(t,x)\right|<\lambda_{X}<\infty\quad\mbox{for all }t\in\mathbb{R}_{+}\mbox{ and all }x\in X.$
This assumption reflects the finiteness of the solutions to the deterministic
equation (1.1) on the finite interval $X$ (see, e.g., [10, 6]).
By $H^{1}(X),H^{2}(X),H^{s}(X),s<0$ we denote the Sobolev spaces according to
definitions in [1]. We assume that $\Phi:H^{2}(X)\rightarrow
L_{0}^{2}(L^{2}(X))$ is a continuous mapping which for all $u\in H^{2}(X)$
fulfills the following conditions:
(2.3)
$\displaystyle\exists_{\kappa_{1},\kappa_{2}>0}\quad\left\|\Phi(u(x))\right\|_{L_{0}^{2}(L^{2}(X))}\leq\kappa_{1}\min\left\{\left|u(x)\right|^{2}_{L^{2}(X)},\left|u(x)\right|_{L^{2}(X)}\right\}+\kappa_{2};$
(2.4) there exist functions $a,b\in L^{2}(X)$ with compact support such that the mapping
$u\mapsto\left(\Phi(u)a,\Phi(u)b\right)_{L^{2}(X)}$ is continuous in $L^{2}(X)$.
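A simple operator satisfying (2.3)-(2.4) — our illustration, not one fixed in the paper — is the diagonal multiplicative noise
$\Phi(u)h:=\sum_{i=1}^{\infty}\lambda_{i}\left(h,e_{i}\right)_{L^{2}(X)}u\,e_{i},\qquad\Lambda^{2}:=\sum_{i=1}^{\infty}\lambda_{i}^{2}\sup_{x\in X}\left|e_{i}(x)\right|^{2}<\infty,$
for an orthonormal basis $\left\{e_{i}\right\}$ of $L^{2}(X)$ with uniformly bounded elements. Then
$\left\|\Phi(u)\right\|^{2}_{L_{0}^{2}(L^{2}(X))}=\sum_{i=1}^{\infty}\lambda_{i}^{2}\left|u\,e_{i}\right|^{2}_{L^{2}(X)}\leq\Lambda^{2}\left|u\right|^{2}_{L^{2}(X)},$
so (2.3) holds with $\kappa_{1}=\kappa_{2}=\Lambda$, since $a\leq\min\{a^{2},a\}+1$ for every $a\geq 0$; condition (2.4) holds because $u\mapsto\left(\Phi(u)a,\Phi(u)b\right)_{L^{2}(X)}=\int_{X}u^{2}g_{a}g_{b}\,dx$, with bounded $g_{a},g_{b}$, is continuous on $L^{2}(X)$.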
###### Definition 2.1.
We say that the problem (2.1) has a martingale solution on the interval
$[0,T]$, $0<T<\infty$, if there exists a stochastic basis
$(\Omega,\mathscr{F},\left\\{\mathscr{F}_{t}\right\\}_{t\geq
0},\mathbb{P},\left\\{W_{t}\right\\}_{t\geq 0})$, where
$\left\\{W_{t}\right\\}_{t\geq 0}$ is a cylindrical Wiener process, and there
exists the process $\left\\{u(t,x)\right\\}_{t\geq 0}$ adapted to the
filtration $\left\\{\mathscr{F}_{t}\right\\}_{t\geq 0}$ with trajectories in
the space
$L^{\infty}(0,T;L^{2}(X))\cap L^{2}(0,T;L^{2}(X))\cap C(0,T;H^{s}(X)),\quad
s<0,\quad\mathbb{P}-a.s.,$
such that
$\displaystyle\left\langle u(t,x);v(x)\right\rangle+\int_{0}^{t}\left\langle Au(s,x)u_{x}(s,x)+Bu_{3x}(s,x)-Cu_{2x}(s,x)+Du(s,x);v(x)\right\rangle ds$
$\displaystyle=\left\langle u_{0}(x);v(x)\right\rangle+\left\langle\int_{0}^{t}\Phi(u(s,x))dW(s);v(x)\right\rangle,\quad\mathbb{P}-a.s.,$
for all $t\in[0,T]$ and $v\in H^{1}(X)$.
In our consideration we shall assume that the coefficients of the equation
(2.1) fulfill the following condition
(2.5) $B,C,D\geq 0\qquad\mbox{with}\qquad 3B\geq A+1.$
The physical meaning of the coefficients $A,B,C,D$, together with the fact that $A$ can be
positive or negative (see, e.g., [5, 6, 10]), shows that the condition (2.5)
admits a broad class of physically meaningful equations, containing all the
particular cases listed in section 1.
###### Theorem 2.2.
If the conditions (2.2)-(2.5) hold then for all real-valued $u_{0}\in
L^{2}(X)$ and $0<T<\infty$ there exists a martingale solution to (2.1).
###### Proof.
Let $\varepsilon>0$. Consider the following auxiliary problem
(2.6) $\begin{cases}du^{\varepsilon}(t,x)+[\varepsilon
u^{\varepsilon}_{4x}(t,x)+Au^{\varepsilon}(t,x)u^{\varepsilon}_{x}(t,x)+Bu^{\varepsilon}_{3x}(t,x)-Cu^{\varepsilon}_{2x}(t,x)\\\
\hskip
51.6665pt+Du^{\varepsilon}(t,x)]dt=\Phi\left(u^{\varepsilon}(t,x)\right)dW(t)\\\
u^{\varepsilon}_{0}(x)=u^{\varepsilon}(0,x),\quad\varepsilon>0.\end{cases}$
In the proof of the theorem we shall use the following lemmas.
###### Lemma 2.3.
For any $\varepsilon>0$ there exists a martingale solution to the problem
(2.6) if the conditions (2.3), (2.4) and (2.5) hold.
###### Lemma 2.4.
There exists $\varepsilon_{0}>0$ such that
(2.7)
$\displaystyle\exists_{\tilde{C}_{1}>0}\forall_{0<\varepsilon<\varepsilon_{0}}\quad\varepsilon\,\mathbb{E}\left(\left|u^{\varepsilon}(t,x)\right|^{2}_{L^{2}(0,T;H^{2}(X))}\right)\leq\tilde{C}_{1},$
(2.8)
$\displaystyle\forall_{k\in X_{k}}\exists_{\tilde{C}_{2}(k)>0}\forall_{0<\varepsilon<\varepsilon_{0}}\quad\mathbb{E}\left(\left|u^{\varepsilon}(t,x)\right|^{2}_{L^{2}(0,T;H^{1}(-k,k))}\right)\leq\tilde{C}_{2}(k),$
where
$X_{k}=\big\{k>0:\left|k\right|\leq\min\left\{-x_{1},x_{2}\right\}\big\}$.
###### Lemma 2.5.
Let $\mathscr{L}(u^{\varepsilon})$ denote the family of distributions of the
solutions $u^{\varepsilon}$ to (2.6). Then the family
$\mathscr{L}(u^{\varepsilon})$ is tight in $L^{2}(0,T;L^{2}(X))\cap
C(0,T;H^{-3}(X))$.
Now, substitute in Prohorov’s theorem (e.g., see Theorem 5.1 in [2]),
$S:=L^{2}(0,T;L^{2}(X))\cap C(0,T;H^{-3}(X))$ and
$\mathscr{K}:=\left\\{\mathscr{L}(u^{\varepsilon})\right\\}_{\varepsilon>0}$.
Since $\mathscr{K}$ is tight in $S$, it is relatively compact, so there
exists a subsequence of
$\left\{\mathscr{L}(u^{\varepsilon})\right\}_{\varepsilon>0}$ converging to
some measure $\mu$ in $\bar{\mathscr{K}}$. Because this subsequence is
convergent, in Skorohod’s theorem (e.g., see Theorem 6.7 in [2]) one can substitute
$\mu_{\varepsilon}:=\left\{\mathscr{L}(u^{\varepsilon})\right\}_{\varepsilon>0}$
and $\mu:=\lim_{\varepsilon\rightarrow 0}\mu_{\varepsilon}$. Then there exists
a space
$(\bar{\Omega},\bar{\mathscr{F}},\left\{\bar{\mathscr{F}}_{t}\right\}_{t\geq 0},\bar{\mathbb{P}})$ and random variables $\bar{u}^{\varepsilon},\bar{u}$ with values in
$L^{2}(0,T;L^{2}(X))\cap C(0,T;H^{-3}(X))$, such that
$\bar{u}^{\varepsilon}\rightarrow\bar{u}$ in $L^{2}(0,T;L^{2}(X))$ and
$\bar{u}^{\varepsilon}\rightarrow\bar{u}$ in $C(0,T;H^{-3}(X))$. Moreover,
$\mathscr{L}(\bar{u}^{\varepsilon})=\mathscr{L}(u^{\varepsilon})$.
Then due to Lemma 2.4, for any $p\in\mathbb{N}$ there exist constants
$\tilde{C}_{1}(p)$, $\tilde{C}_{2}$ such that
$\displaystyle\mathbb{E}(\sup_{t\in[0,T]}\left|\bar{u}^{\varepsilon}(t,x)\right|_{L^{2}(X)}^{2p})\leq\tilde{C}_{1}(p),\quad\mathbb{E}(\left|\bar{u}^{\varepsilon}(t,x)\right|^{2}_{L^{2}(0,T;H^{2}(X))})\leq\tilde{C}_{2}$
and $\bar{u}^{\varepsilon}(t,x)\in L^{2}(0,T;H^{1}(-k,k))\cap
L^{\infty}(0,T;L^{2}(X)),~{}\mathbb{P}-a.s.$ Then one can conclude that
$\bar{u}^{\varepsilon}\rightarrow\bar{u}$ weakly in
$L^{2}(\bar{\Omega},L^{2}(0,T;H^{1}(-k,k)))$.
Let $x\in\mathbb{R}$ be fixed. Denote
$\displaystyle M^{\varepsilon}(t):=u^{\varepsilon}(t,x)-u_{0}^{\varepsilon}(x)+\int_{0}^{t}\bigg[\varepsilon u^{\varepsilon}_{4x}(s,x)+Au^{\varepsilon}(s,x)u^{\varepsilon}_{x}(s,x)+Bu^{\varepsilon}_{3x}(s,x)-Cu^{\varepsilon}_{2x}(s,x)+Du^{\varepsilon}(s,x)\bigg]ds,$
$\displaystyle\bar{M}^{\varepsilon}(t):=\bar{u}^{\varepsilon}(t,x)-\bar{u}_{0}^{\varepsilon}(x)+\int_{0}^{t}\bigg[A\bar{u}^{\varepsilon}(s,x)\bar{u}^{\varepsilon}_{x}(s,x)+B\bar{u}^{\varepsilon}_{3x}(s,x)-C\bar{u}^{\varepsilon}_{2x}(s,x)+D\bar{u}^{\varepsilon}(s,x)\bigg]ds.$
Note that, since $u^{\varepsilon}$ solves (2.6),
$\displaystyle M^{\varepsilon}(t)=u_{0}^{\varepsilon}(x)-\int_{0}^{t}\bigg[\varepsilon u^{\varepsilon}_{4x}(s,x)+Au^{\varepsilon}(s,x)u^{\varepsilon}_{x}(s,x)+Bu^{\varepsilon}_{3x}(s,x)-Cu^{\varepsilon}_{2x}(s,x)+Du^{\varepsilon}(s,x)\bigg]ds$
$\displaystyle+\int_{0}^{t}\left(\Phi\left(u^{\varepsilon}(s,x)\right)\right)dW(s)-u_{0}^{\varepsilon}(x)+\int_{0}^{t}\bigg[\varepsilon u^{\varepsilon}_{4x}(s,x)+Au^{\varepsilon}(s,x)u^{\varepsilon}_{x}(s,x)+Bu^{\varepsilon}_{3x}(s,x)-Cu^{\varepsilon}_{2x}(s,x)+Du^{\varepsilon}(s,x)\bigg]ds$
$\displaystyle=\int_{0}^{t}\left(\Phi\left(u^{\varepsilon}(s,x)\right)\right)dW(s).$
So, $M^{\varepsilon}(t)$, $t\geq 0$, is a square integrable martingale with
values in $L^{2}(X)$, adapted to the filtration
$\sigma\left\{u^{\varepsilon}(s,x),0\leq s\leq t\right\}$, with quadratic
variation equal to
$\left[M^{\varepsilon}\right](t):=\int_{0}^{t}\Phi(u^{\varepsilon}(s,x))\left[\Phi(u^{\varepsilon}(s,x))\right]^{*}ds.$
Substituting $M_{t}:=M^{\varepsilon}(t)$ into the Doob maximal inequality (e.g., see Theorem 2.2 in [8]) with exponent $p>1$ gives
(2.9)
$\mathbb{E}\left[\sup_{t\in[0,T]}\left|M^{\varepsilon}(t)\right|_{L^{2}(X)}^{p}\right]\leq\left(\frac{p}{p-1}\right)^{p}\mathbb{E}\left(\left|M^{\varepsilon}(T)\right|^{p}_{L^{2}(X)}\right).$
Assume $0\leq s\leq t\leq T$ and let $\varphi$ be a bounded continuous
function on $L^{2}(0,s;L^{2}(X))$ or $C(0,s;H^{-3}(X))$. Let $a,b\in
H^{3}_{0}(-k,k)$, $k\in\mathbb{N}$, be arbitrary and fixed. Since
$M^{\varepsilon}(t)$ is a martingale and
$\mathscr{L}(\bar{u}^{\varepsilon})=\mathscr{L}(u^{\varepsilon})$, then (see
[7], p. 377-378)
$\mathbb{E}\Big{(}\left\langle
M^{\varepsilon}(t)-M^{\varepsilon}(s);a\right\rangle\varphi\left(u^{\varepsilon}(t,x)\right)\Big{)}=0\quad\mbox{and}\quad\mathbb{E}\Big{(}\left\langle\bar{M}^{\varepsilon}(t)-\bar{M}^{\varepsilon}(s);a\right\rangle\varphi\left(\bar{u}^{\varepsilon}(t,x)\right)\Big{)}=0.$
Moreover
$\displaystyle\mathbb{E}$ $\displaystyle\bigg{\\{}\bigg{[}\left\langle
M^{\varepsilon}(t);a\right\rangle\left\langle
M^{\varepsilon}(t);b\right\rangle-\left\langle
M^{\varepsilon}(s);a\right\rangle\left\langle
M^{\varepsilon}(s);b\right\rangle$
$\displaystyle-\int_{s}^{t}\left\langle\left[\Phi\left(u^{\varepsilon}(\xi,x)\right)\right]^{*}a;\left[\Phi\left(u^{\varepsilon}(\xi,x)\right)\right]^{*}b\right\rangle
d\xi\bigg{]}\varphi(u^{\varepsilon}(t,x))\bigg{\\}}=0$
$\displaystyle\mbox{and}\quad\mathbb{E}$
$\displaystyle\bigg{\\{}\bigg{[}\left\langle\bar{M}^{\varepsilon}(t);a\right\rangle\left\langle\bar{M}^{\varepsilon}(t);b\right\rangle-\left\langle\bar{M}^{\varepsilon}(s);a\right\rangle\left\langle\bar{M}^{\varepsilon}(s);b\right\rangle$
$\displaystyle-\int_{s}^{t}\left\langle\left[\Phi\left(\bar{u}^{\varepsilon}(\xi,x)\right)\right]^{*}a;\left[\Phi\left(\bar{u}^{\varepsilon}(\xi,x)\right)\right]^{*}b\right\rangle
d\xi\bigg{]}\varphi(\bar{u}^{\varepsilon}(t,x))\bigg{\\}}=0.$
Denote
$\bar{M}(t):=\bar{u}(t,x)-u_{0}(x)+\int_{0}^{t}\bigg[A\bar{u}(s,x)\bar{u}_{x}(s,x)+B\bar{u}_{3x}(s,x)-C\bar{u}_{2x}(s,x)+D\bar{u}(s,x)\bigg]ds$.
If $\varepsilon\rightarrow 0$, then
$\bar{M}^{\varepsilon}(t)\rightarrow\bar{M}(t)$ and
$\bar{M}^{\varepsilon}(s)\rightarrow\bar{M}(s)$, $\bar{\mathbb{P}}$-a.s. in
$H^{-3}(X)$. Moreover, since $\varphi$ is continuous,
$\varphi(\bar{u}^{\varepsilon}(s,x))\rightarrow\varphi(\bar{u}(s,x))$,
$\bar{\mathbb{P}}$-a.s. So, if $\varepsilon\rightarrow 0$, then
$\displaystyle\mathbb{E}$
$\displaystyle\Big{(}\left\langle\bar{M}^{\varepsilon}(t)-\bar{M}^{\varepsilon}(s);a\right\rangle\varphi(\bar{u}^{\varepsilon}(t,x))\Big{)}\rightarrow\mathbb{E}\Big{(}\left\langle\bar{M}(t)-\bar{M}(s);a\right\rangle\varphi(\bar{u}(t,x))\Big{)}.$
Moreover, since $\Phi$ is a continuous operator in the $L^{2}(X)$ topology and
(2.9) holds, if $\varepsilon\rightarrow 0$, then
$\displaystyle\left\langle\left(\Phi(\bar{u}^{\varepsilon}(s,x))\right)^{*}a;\left(\Phi(\bar{u}^{\varepsilon}(s,x))\right)^{*}b\right\rangle\rightarrow\left\langle\left(\Phi(\bar{u}(s,x))\right)^{*}a;\left(\Phi(\bar{u}(s,x))\right)^{*}b\right\rangle$
and
$\displaystyle\mathbb{E}$
$\displaystyle\bigg{\\{}\bigg{[}\left\langle\bar{M}^{\varepsilon}(t);a\right\rangle\left\langle\bar{M}^{\varepsilon}(t);b\right\rangle-\left\langle\bar{M}^{\varepsilon}(s);a\right\rangle\left\langle\bar{M}^{\varepsilon}(s);b\right\rangle$
$\displaystyle-\int_{s}^{t}\left\langle\left[\Phi\left(\bar{u}^{\varepsilon}(\xi,x)\right)\right]^{*}a;\left[\Phi\left(\bar{u}^{\varepsilon}(\xi,x)\right)\right]^{*}b\right\rangle
d\xi\bigg{]}\varphi(\bar{u}^{\varepsilon}(t,x))\bigg{\\}}$
$\displaystyle\rightarrow\mathbb{E}$
$\displaystyle\bigg{\\{}\bigg{[}\left\langle\bar{M}(t);a\right\rangle\left\langle\bar{M}(t);b\right\rangle-\left\langle\bar{M}(s);a\right\rangle\left\langle\bar{M}(s);b\right\rangle$
$\displaystyle-\int_{s}^{t}\left\langle\left[\Phi\left(\bar{u}(\xi,x)\right)\right]^{*}a;\left[\Phi\left(\bar{u}(\xi,x)\right)\right]^{*}b\right\rangle
d\xi\bigg{]}\varphi(\bar{u}(t,x))\bigg{\\}}.$
Then $\bar{M}(t),t\geq 0,$ is also a square integrable martingale adapted to
the filtration
$\sigma\left\\{\bar{u}(s),0\leq s\leq t\right\\}$ with quadratic variation
$\int_{0}^{t}\Phi(\bar{u}(s,x))\left(\Phi(\bar{u}(s,x))\right)^{*}ds$.
Substitute in the representation theorem (e.g., see Theorem 8.2 in [3]),
$M_{t}:=\bar{M}(t)$,
$[M_{t}]:=\int_{0}^{t}\Phi(\bar{u}(s,x))\times\left(\Phi(\bar{u}(s,x))\right)^{*}ds$
and $\Phi(s):=\Phi(\bar{u}(s,x))$.
Then there exists a process
$\tilde{M}(t)=\int_{0}^{t}\Phi(\bar{u}(s,x))dW(s)$ such that
$\tilde{M}(t)=\bar{M}(t)$, $\mathbb{\bar{P}}$-a.s., and
$\displaystyle\bar{u}(t,x)-u_{0}(x)+\int_{0}^{t}\bigg[A\bar{u}(s,x)\bar{u}_{x}(s,x)+B\bar{u}_{3x}(s,x)-C\bar{u}_{2x}(s,x)+D\bar{u}(s,x)\bigg]ds$
$\displaystyle=\int_{0}^{t}\Phi(\bar{u}(s,x))dW(s).$
This implies that
$\displaystyle\bar{u}(t,x)=u_{0}(x)-\int_{0}^{t}\bigg[A\bar{u}(s,x)\bar{u}_{x}(s,x)+B\bar{u}_{3x}(s,x)-C\bar{u}_{2x}(s,x)+D\bar{u}(s,x)\bigg]ds$
$\displaystyle+\int_{0}^{t}\Phi(\bar{u}(s,x))dW(s),$
so $\bar{u}(t,x)$ is a solution to (2.1), which finishes the proof of Theorem
2.2. ∎
## 3\. Proofs of Lemma 2.4 and Lemma 2.5
###### Proof of Lemma 2.4.
Let $p:\mathbb{R}\rightarrow\mathbb{R}$ be a smooth function fulfilling the
following conditions
* (i)
$p$ is increasing on $X$;
* (ii)
$p(x_{1})=\delta>0$;
* (iii)
$p^{\prime}(x)>\alpha_{X}$ for all $x\in X$;
* (iv)
$Bp^{\prime\prime\prime}(x)+Cp^{\prime\prime}(x)\leq\gamma<-1$ for all $x\in
X$.
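An explicit weight with these properties is easy to write down; for instance (our construction, assuming $B>0$), take $\lambda>C/B$ and set
$p(x)=\delta+a\,(x-x_{1})+b\left(e^{-\lambda x}-e^{-\lambda x_{1}}\right),\qquad x\in X.$
Then $Bp^{\prime\prime\prime}(x)+Cp^{\prime\prime}(x)=b\lambda^{2}e^{-\lambda x}\left(C-B\lambda\right)\leq b\lambda^{2}\left(C-B\lambda\right)e^{-\lambda x_{2}}$, so condition (iv) holds with $\gamma=-2$ once $b\geq 2e^{\lambda x_{2}}/\left(\lambda^{2}(B\lambda-C)\right)$; choosing afterwards $a>\alpha_{X}+b\lambda e^{-\lambda x_{1}}$ guarantees (i) and (iii), while (ii) holds by construction.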
Additionally, let
$F(u^{\varepsilon}):=\int_{X}p(x)(u^{\varepsilon}(x))^{2}dx$. Application of
the Itô formula for $F(u^{\varepsilon})$ yields the formula
$\displaystyle dF(u^{\varepsilon}(t,x))=$ $\displaystyle\left\langle
F^{\prime}(u^{\varepsilon}(t,x));\Phi(u^{\varepsilon}(t,x))\right\rangle
dW(t)-\left\langle F^{\prime}(u^{\varepsilon}(t,x));\varepsilon
u^{\varepsilon}_{4x}+Au^{\varepsilon}(t,x)u_{x}^{\varepsilon}(t,x)\right.$
(3.1)
$\displaystyle\left.+Bu^{\varepsilon}_{3x}(t,x)-Cu^{\varepsilon}_{2x}(t,x)+Du^{\varepsilon}(t,x)\right\rangle
dt$
$\displaystyle+\frac{1}{2}\text{Tr}\left\\{F^{\prime\prime}(u^{\varepsilon}(t,x))\Phi(u^{\varepsilon}(t,x))\left[\Phi(u^{\varepsilon}(t,x))\right]^{*}\right\\}dt,$
where $\left\langle F^{\prime}(u^{\varepsilon}(t,x));v(t,x)\right\rangle=2\int_{X}p(x)u^{\varepsilon}(t,x)v(t,x)dx$
and $F^{\prime\prime}(u^{\varepsilon}(t,x))v(t,x)=2p(x)v(t,x)$.
We will use the following auxiliary result.
###### Lemma 3.1.
([4], p. 242) There exist positive constants $C_{1},C_{2},C_{3}$ such that
$\displaystyle\int_{X}p(x)u^{\varepsilon}(t,x)u^{\varepsilon}_{4x}(t,x)dx\geq$
$\displaystyle\frac{1}{2}\int_{X}p(x)\left[u^{\varepsilon}_{2x}(t,x)\right]^{2}dx-
C_{1}\left|u^{\varepsilon}(t,x)\right|^{2}_{L^{2}(X)}$ $\displaystyle-
C_{2}\int_{X}p^{\prime}(x)\left[u_{x}(t,x)\right]^{2}dx;$
$\displaystyle\int_{X}p(x)u^{\varepsilon}(t,x)u^{\varepsilon}_{3x}(t,x)dx=$
$\displaystyle\frac{3}{2}\int_{X}p^{\prime}(x)\left[u^{\varepsilon}_{x}(t,x)\right]^{2}dx-\frac{1}{2}\int_{X}p^{\prime\prime\prime}(x)\left[u(t,x)\right]^{2}dx;$
$\displaystyle\int_{X}p(x)\left[u^{\varepsilon}(t,x)\right]^{2}u^{\varepsilon}_{x}(t,x)dx\geq$
$\displaystyle-
C_{3}\left(1+\left|u^{\varepsilon}(t,x)\right|_{L^{2}(X)}^{6}\right)-\frac{1}{2}\int_{X}p^{\prime}(x)\left[u_{x}(t,x)\right]^{2}dx.$
Arguing as in Lemma 3.1, one also has
$\displaystyle\int_{X}p(x)u^{\varepsilon}(t,x)u^{\varepsilon}_{2x}(t,x)dx=\frac{1}{2}\int_{X}p^{\prime\prime}(x)\left[u^{\varepsilon}(t,x)\right]^{2}dx-\int_{X}p(x)\left[u^{\varepsilon}_{x}(t,x)\right]^{2}dx.$
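For completeness, this identity follows from two integrations by parts, assuming the boundary contributions vanish (as for the decaying solutions considered in [4]):
$\int_{X}p\,u\,u_{2x}\,dx=-\int_{X}p^{\prime}u\,u_{x}\,dx-\int_{X}p\,u_{x}^{2}\,dx=\frac{1}{2}\int_{X}p^{\prime\prime}u^{2}\,dx-\int_{X}p\,u_{x}^{2}\,dx,$
where the middle step uses $p^{\prime}u\,u_{x}=\frac{1}{2}p^{\prime}\left(u^{2}\right)_{x}$.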
These estimates imply
$\displaystyle\left\langle F^{\prime}(u^{\varepsilon}(t,x));\varepsilon u_{4x}^{\varepsilon}(t,x)+Au^{\varepsilon}(t,x)u_{x}^{\varepsilon}(t,x)+Bu^{\varepsilon}_{3x}(t,x)-Cu^{\varepsilon}_{2x}(t,x)+Du^{\varepsilon}(t,x)\right\rangle$
$\displaystyle\geq\varepsilon\int_{X}p(x)\left[u^{\varepsilon}_{2x}(t,x)\right]^{2}dx-2\varepsilon C_{1}\left|u^{\varepsilon}(t,x)\right|^{2}_{L^{2}(X)}-2\varepsilon C_{2}\int_{X}p^{\prime}(x)\left[u^{\varepsilon}_{x}(t,x)\right]^{2}dx$
$\displaystyle\quad+3B\int_{X}p^{\prime}(x)\left[u^{\varepsilon}_{x}(t,x)\right]^{2}dx-B\int_{X}p^{\prime\prime\prime}(x)\left[u^{\varepsilon}(t,x)\right]^{2}dx-2AC_{3}\left(1+\left|u^{\varepsilon}(t,x)\right|_{L^{2}(X)}^{6}\right)$
$\displaystyle\quad-A\int_{X}p^{\prime}(x)\left[u^{\varepsilon}_{x}(t,x)\right]^{2}dx-C\int_{X}p^{\prime\prime}(x)\left[u^{\varepsilon}(t,x)\right]^{2}dx+2C\int_{X}p(x)\left[u^{\varepsilon}_{x}(t,x)\right]^{2}dx+2D\int_{X}p(x)\left[u^{\varepsilon}(t,x)\right]^{2}dx$
$\displaystyle\geq\varepsilon\int_{X}p(x)\left[u^{\varepsilon}_{2x}(t,x)\right]^{2}dx-2\varepsilon C_{1}\left|u^{\varepsilon}(t,x)\right|^{2}_{L^{2}(X)}+\left(3B-A-2\varepsilon C_{2}\right)\int_{X}p^{\prime}(x)\left[u^{\varepsilon}_{x}(t,x)\right]^{2}dx$
$\displaystyle\quad-\int_{X}\left[Bp^{\prime\prime\prime}(x)+Cp^{\prime\prime}(x)\right]\left[u^{\varepsilon}(t,x)\right]^{2}dx-2AC_{3}\left(1+\left|u^{\varepsilon}(t,x)\right|_{L^{2}(X)}^{6}\right)$
$\displaystyle\geq\varepsilon\int_{X}p(x)\left[u^{\varepsilon}_{2x}(t,x)\right]^{2}dx+\left(3B-A-2\varepsilon C_{2}\right)\int_{X}p^{\prime}(x)\left[u^{\varepsilon}_{x}(t,x)\right]^{2}dx$
$\displaystyle\quad-\left(\gamma+2\varepsilon C_{1}\right)\left|u^{\varepsilon}(t,x)\right|^{2}_{L^{2}(X)}-2AC_{3}\left(1+\left|u^{\varepsilon}(t,x)\right|_{L^{2}(X)}^{6}\right).$
Let
$\varepsilon\leq\min\left\\{\frac{3B-A-1}{2C_{2}},-\frac{1+\gamma}{2C_{1}}\right\\}$.
Then
(3.2) $\displaystyle\left\langle F^{\prime}(u^{\varepsilon}(t,x));\varepsilon
u_{4x}^{\varepsilon}(t,x)+Au^{\varepsilon}(t,x)u_{x}^{\varepsilon}(t,x)+Bu^{\varepsilon}_{3x}(t,x)-Cu^{\varepsilon}_{2x}(t,x)+Du^{\varepsilon}(t,x)\right\rangle$
$\displaystyle\geq$
$\displaystyle\varepsilon\int_{X}p(x)\left[u^{\varepsilon}_{2x}(t,x)\right]^{2}dx+\int_{X}p^{\prime}(x)\left[u_{x}(t,x)\right]^{2}dx+\left|u^{\varepsilon}(t,x)\right|^{2}_{L^{2}(X)}$
$\displaystyle-2AC_{3}\left(1+\left|u^{\varepsilon}(t,x)\right|_{L^{2}(X)}^{6}\right)$
$\displaystyle\geq$
$\displaystyle\varepsilon\int_{X}p(x)\left[u^{\varepsilon}_{2x}(t,x)\right]^{2}dx+\int_{X}p^{\prime}(x)\left[u_{x}(t,x)\right]^{2}dx-2AC_{3}\left(1+\left|u^{\varepsilon}(t,x)\right|_{L^{2}(X)}^{6}\right).$
Let $\left\\{e_{i}\right\\}_{i\in\mathbb{N}}$ be an orthonormal basis in
$L^{2}(X)$. Then, there exists a constant $C_{4}>0$ such that
(3.3)
$\displaystyle\text{Tr}\left(F^{\prime\prime}(u)\Phi(u)\left[\Phi(u)\right]^{*}\right)=$
$\displaystyle
2\sum_{i\in\mathbb{N}}\int_{X}p(x)\left|\Phi\left(u^{\varepsilon}(t,x)\right)e_{i}(x)\right|^{2}dx\leq
C_{4}\left|\Phi\left(u^{\varepsilon}(t,x)\right)\right|^{2}_{L_{0}^{2}\left(L^{2}(X)\right)}$
$\displaystyle\leq$ $\displaystyle
C_{4}\left(\kappa_{1}\left|u^{\varepsilon}(t,x)\right|_{L^{2}(X)}^{2}+\kappa_{2}\right)^{2}.$
From (3.2) and (3.3) we have
$\displaystyle\mathbb{E}F(u^{\varepsilon}(t,x))\leq F\left(u^{\varepsilon}_{0}\right)-\varepsilon\mathbb{E}\int_{0}^{t}\int_{X}p(x)\left[u^{\varepsilon}_{2x}(s,x)\right]^{2}dx\,ds-\mathbb{E}\int_{0}^{t}\int_{X}p^{\prime}(x)\left[u^{\varepsilon}_{x}(s,x)\right]^{2}dx\,ds$
$\displaystyle\quad+2AC_{3}\mathbb{E}\int_{0}^{t}\left(1+\left|u^{\varepsilon}(s,x)\right|_{L^{2}(X)}^{6}\right)ds+C_{4}\mathbb{E}\int_{0}^{t}\left(\kappa_{1}\left|u^{\varepsilon}(s,x)\right|^{2}_{L^{2}(X)}+\kappa_{2}\right)^{2}ds,$
so
$\displaystyle\mathbb{E}F(u^{\varepsilon}(t,x))+\varepsilon\mathbb{E}\int_{0}^{t}\int_{X}p(x)\left[u^{\varepsilon}_{2x}(s,x)\right]^{2}dx\,ds+\mathbb{E}\int_{0}^{t}\int_{X}p^{\prime}(x)\left[u^{\varepsilon}_{x}(s,x)\right]^{2}dx\,ds$
$\displaystyle\leq F\left(u^{\varepsilon}_{0}\right)+2AC_{3}\mathbb{E}\int_{0}^{T}\left(1+\left|u^{\varepsilon}(s,x)\right|_{L^{2}(X)}^{6}\right)ds+C_{4}\mathbb{E}\int_{0}^{T}\left(\kappa_{1}\left|u^{\varepsilon}(s,x)\right|^{2}_{L^{2}(X)}+\kappa_{2}\right)^{2}ds$
$\displaystyle\leq F\left(u^{\varepsilon}_{0}\right)+2AC_{3}T\left(1+C_{5}\right)+C_{6}\leq C_{7},$
where the last line uses the uniform bound (2.2).
Let $\varepsilon_{0}>0$ be fixed. Working with the equivalent norm
$|v|^{2}_{H^{2}}:=|v|^{2}_{L^{2}}+|v_{2x}|^{2}_{L^{2}}$ on $H^{2}(X)$, for all $0<\varepsilon<\varepsilon_{0}$
one has
$\displaystyle\varepsilon\,\mathbb{E}\left(\left|u^{\varepsilon}(t,x)\right|^{2}_{L^{2}(0,T;H^{2}(X))}\right)=\varepsilon\mathbb{E}\int_{0}^{T}\int_{X}\left[u^{\varepsilon}(t,x)\right]^{2}dx\,dt+\varepsilon\mathbb{E}\int_{0}^{T}\int_{X}\left[u_{2x}^{\varepsilon}(t,x)\right]^{2}dx\,dt$
$\displaystyle\leq\varepsilon C_{8}+\varepsilon\mathbb{E}\int_{0}^{T}\int_{X}\frac{1}{p(x)}p(x)\left[u_{2x}^{\varepsilon}(t,x)\right]^{2}dx\,dt\leq\varepsilon C_{8}+\frac{1}{\delta}\varepsilon\mathbb{E}\int_{0}^{T}\int_{X}p(x)\left[u_{2x}^{\varepsilon}(t,x)\right]^{2}dx\,dt$
$\displaystyle\leq\varepsilon C_{8}+\frac{1}{\delta}C_{7}\leq C_{9}\Big(\varepsilon+\frac{1}{\delta}\Big)\leq C_{9}\Big(\varepsilon_{0}+\frac{1}{\delta}\Big),$
which proves the formula (2.7). Moreover, we have
$\displaystyle\mathbb{E}\left(\left|u^{\varepsilon}(t,x)\right|^{2}_{L^{2}(0,T;H^{1}(-k,k))}\right)=\mathbb{E}\int_{0}^{T}\int_{-k}^{k}\left[u^{\varepsilon}(t,x)\right]^{2}dx\,dt+\mathbb{E}\int_{0}^{T}\int_{-k}^{k}\left[u_{x}^{\varepsilon}(t,x)\right]^{2}dx\,dt$
$\displaystyle\leq C_{10}+\mathbb{E}\int_{0}^{T}\int_{X}\left[u_{x}^{\varepsilon}(t,x)\right]^{2}dx\,dt\leq C_{10}+\mathbb{E}\int_{0}^{T}\int_{X}\frac{1}{p^{\prime}(x)}p^{\prime}(x)\left[u_{x}^{\varepsilon}(t,x)\right]^{2}dx\,dt.$
Since $p^{\prime}(x)$ is bounded from below on the compact set $X$ by the
positive number $\alpha_{X}>0$, we obtain
$\displaystyle\mathbb{E}\left(\left|u^{\varepsilon}(t,x)\right|^{2}_{L^{2}(0,T;H^{1}(-k,k))}\right)$
$\displaystyle\leq
C_{10}+\frac{1}{\alpha_{X}}\mathbb{E}\int_{0}^{T}\int_{X}p^{\prime}(x)\left[u_{x}^{\varepsilon}(t,x)\right]^{2}dx\leq
C_{10}+\frac{1}{\alpha_{X}}C_{7}\leq C_{11}.$
This proves inequality (2.8). ∎
###### Proof of Lemma 2.5.
Let $k\in X_{k}$ be arbitrary and fixed and let
$0<\varepsilon<\varepsilon_{0}\leq 1$. Then
(3.4) $\displaystyle u^{\varepsilon}(t,x)$
$\displaystyle=u_{0}^{\varepsilon}(x)-\int_{0}^{t}\bigg[\varepsilon u^{\varepsilon}_{4x}(s,x)+Au^{\varepsilon}(s,x)u^{\varepsilon}_{x}(s,x)+Bu^{\varepsilon}_{3x}(s,x)$
$\displaystyle\qquad- Cu^{\varepsilon}_{2x}(s,x)+Du^{\varepsilon}(s,x)\bigg]ds+\int_{0}^{t}\left(\Phi(u^{\varepsilon}(s,x))\right)dW(s).$
Denote
$\displaystyle J_{1}:=u_{0}^{\varepsilon}(x);\qquad J_{2}:=-\varepsilon\int_{0}^{t}u^{\varepsilon}_{4x}(s,x)ds;$
$\displaystyle J_{3}:=-A\int_{0}^{t}u^{\varepsilon}(s,x)u^{\varepsilon}_{x}(s,x)ds;\qquad J_{4}:=-B\int_{0}^{t}u^{\varepsilon}_{3x}(s,x)ds;$
$\displaystyle J_{5}:=C\int_{0}^{t}u^{\varepsilon}_{2x}(s,x)ds;\qquad J_{6}:=-D\int_{0}^{t}u^{\varepsilon}(s,x)ds;$
$\displaystyle J_{7}:=\int_{0}^{t}\left(\Phi(u^{\varepsilon}(s,x))\right)dW(s).$
Now, we start estimating the above terms.
By assumption on the initial datum, there exists $C_{1}>0$ such that
$\mathbb{E}\left|J_{1}\right|^{2}_{W^{1,2}(0,T;H^{-2}(-k,k))}\leq C_{1}$.
Next, there exists a constant $C_{2}>0$ such that
$\left|-\varepsilon u^{\varepsilon}_{4x}(s,x)\right|_{H^{-2}(-k,k)}=\varepsilon\left|u^{\varepsilon}_{4x}(s,x)\right|_{H^{-2}(-k,k)}\leq C_{2}\varepsilon\left|u^{\varepsilon}(s,x)\right|_{H^{2}(-k,k)}.$
So, due to Lemma 2.4, there exists a constant $C_{3}(k)>0$ such that
$\displaystyle\mathbb{E}\left|-\varepsilon u^{\varepsilon}_{4x}(s,x)\right|^{2}_{L^{2}(0,T;H^{-2}(-k,k))}=\mathbb{E}\int_{0}^{T}\left|-\varepsilon u^{\varepsilon}_{4x}(s,x)\right|^{2}_{H^{-2}(-k,k)}ds$
$\displaystyle\leq C_{2}^{2}\varepsilon^{2}\mathbb{E}\int_{0}^{T}\left|u^{\varepsilon}(s,x)\right|^{2}_{H^{2}(-k,k)}ds\leq C_{3}(k).$
Therefore we can write $\mathbb{E}\left|J_{2}\right|^{2}_{W^{1,2}(0,T,H^{-2}(-k,k))}\leq C_{4}(k)$, where $C_{4}(k)>0$.
Now, we will use the following result from [4].
###### Lemma 3.2.
([4], p. 243) There exists a constant $C_{5}(k)$ such that the following
inequality holds
$\quad\left|u^{\varepsilon}(s,x)u^{\varepsilon}_{x}(s,x)\right|_{H^{-1}(-k,k)}\leq
C_{5}(k)\left|u^{\varepsilon}(s,x)\right|^{\frac{3}{2}}_{L^{2}(-k,k)}\left|u^{\varepsilon}(s,x)\right|^{\frac{1}{2}}_{H^{1}(-k,k)}.$
Due to Lemma 3.2 there exist positive constants $C_{6},C_{7}(k),C_{8}(k)$ such
that
$\displaystyle|-Au^{\varepsilon}(s,x)u^{\varepsilon}_{x}(s,x)|_{H^{-2}(-k,k)}=A\left|u^{\varepsilon}(s,x)u^{\varepsilon}_{x}(s,x)\right|_{H^{-2}(-k,k)}\leq C_{6}A\left|u^{\varepsilon}(s,x)u^{\varepsilon}_{x}(s,x)\right|_{H^{-1}(-k,k)}$
$\displaystyle\leq AC_{7}(k)\left|u^{\varepsilon}(s,x)\right|^{\frac{3}{2}}_{L^{2}(-k,k)}\left|u^{\varepsilon}(s,x)\right|^{\frac{1}{2}}_{H^{1}(-k,k)}=AC_{7}(k)\left|u^{\varepsilon}(s,x)\right|_{L^{2}(-k,k)}\left|u^{\varepsilon}(s,x)\right|^{\frac{1}{2}}_{L^{2}(-k,k)}\left|u^{\varepsilon}(s,x)\right|^{\frac{1}{2}}_{H^{1}(-k,k)}$
$\displaystyle\leq AC_{7}(k)\left(2k\lambda_{X}^{2}\right)^{\frac{1}{2}}\left|u^{\varepsilon}(s,x)\right|^{\frac{1}{2}}_{L^{2}(-k,k)}\left|u^{\varepsilon}(s,x)\right|^{\frac{1}{2}}_{H^{1}(-k,k)}\leq AC_{8}(k)\lambda_{X}\left|u^{\varepsilon}(s,x)\right|_{H^{1}(-k,k)},$
where we used (2.2) in the form $\left|u^{\varepsilon}(s,x)\right|_{L^{2}(-k,k)}\leq(2k\lambda_{X}^{2})^{\frac{1}{2}}$ and the bound $\left|\cdot\right|_{L^{2}(-k,k)}\leq\left|\cdot\right|_{H^{1}(-k,k)}$.
Due to Lemma 2.4 there exists a constant $C_{9}(k)>0$ such that
$\displaystyle\mathbb{E}\left|-Au^{\varepsilon}(s,x)u^{\varepsilon}_{x}(s,x)\right|^{2}_{L^{2}(0,T;H^{-2}(-k,k))}=\mathbb{E}\int_{0}^{T}\left|-Au^{\varepsilon}(s,x)u^{\varepsilon}_{x}(s,x)\right|^{2}_{H^{-2}(-k,k)}ds$
$\displaystyle\leq A^{2}C_{8}^{2}(k)\lambda_{X}^{2}\mathbb{E}\int_{0}^{T}\left|u^{\varepsilon}(s,x)\right|^{2}_{H^{1}(-k,k)}ds=A^{2}C_{8}^{2}(k)\lambda_{X}^{2}\mathbb{E}\left|u^{\varepsilon}(s,x)\right|^{2}_{L^{2}(0,T;H^{1}(-k,k))}\leq A^{2}C_{9}(k)\lambda_{X}^{2}.$
Therefore, we obtain $\mathbb{E}\left|J_{3}\right|^{2}_{W^{1,2}(0,T,H^{-2}(-k,k))}\leq C_{10}(k)$, where $C_{10}(k)>0$.
Next, there exist constants $C_{11},C_{12}>0$ such that
$\displaystyle\left|-Bu^{\varepsilon}_{3x}(s,x)\right|_{H^{-2}(-k,k)}=B\left|u^{\varepsilon}_{3x}(s,x)\right|_{H^{-2}(-k,k)}\leq BC_{11}\left|u^{\varepsilon}(s,x)\right|_{H^{1}(-k,k)}\leq BC_{12}\left|u^{\varepsilon}(s,x)\right|_{H^{2}(-k,k)}.$
Lemma 2.4 implies the existence of a constant $C_{13}(k)>0$ such that
$\displaystyle\mathbb{E}\left|-Bu^{\varepsilon}_{3x}(s,x)\right|^{2}_{L^{2}(0,T;H^{-2}(-k,k))}=\mathbb{E}\int_{0}^{T}\left|-Bu^{\varepsilon}_{3x}(s,x)\right|^{2}_{H^{-2}(-k,k)}ds$
$\displaystyle\leq B^{2}C^{2}_{12}\mathbb{E}\int_{0}^{T}\left|u^{\varepsilon}(s,x)\right|^{2}_{H^{2}(-k,k)}ds=B^{2}C^{2}_{12}\mathbb{E}\left|u^{\varepsilon}(s,x)\right|^{2}_{L^{2}(0,T;H^{2}(-k,k))}$
$\displaystyle\leq B^{2}C^{2}_{12}\mathbb{E}\left|u^{\varepsilon}(s,x)\right|^{2}_{L^{2}(0,T;H^{2}(X))}\leq B^{2}C_{13}(k).$
So, we obtain $\mathbb{E}\left|J_{4}\right|^{2}_{W^{1,2}(0,T,H^{-2}(-k,k))}\leq C_{14}$, where $C_{14}>0$.
For some constants $C_{15},C_{16}>0$, we have
$\left|Cu^{\varepsilon}_{2x}(s,x)\right|_{H^{-2}(-k,k)}=C\left|u^{\varepsilon}_{2x}(s,x)\right|_{H^{-2}(-k,k)}\leq CC_{15}\left|u^{\varepsilon}(s,x)\right|_{L^{2}(-k,k)}\leq CC_{16}\left|u^{\varepsilon}(s,x)\right|_{H^{2}(-k,k)}.$
Lemma 2.4 implies the existence of a constant $C_{17}(k)>0$ such that
$\displaystyle\mathbb{E}\left|Cu^{\varepsilon}_{2x}(s,x)\right|^{2}_{L^{2}(0,T;H^{-2}(-k,k))}=\mathbb{E}\int_{0}^{T}\left|Cu^{\varepsilon}_{2x}(s,x)\right|^{2}_{H^{-2}(-k,k)}ds$
$\displaystyle\leq C^{2}C^{2}_{16}\mathbb{E}\int_{0}^{T}\left|u^{\varepsilon}(s,x)\right|^{2}_{H^{2}(-k,k)}ds=C^{2}C^{2}_{16}\mathbb{E}\left|u^{\varepsilon}(s,x)\right|^{2}_{L^{2}(0,T;H^{2}(-k,k))}$
$\displaystyle\leq C^{2}C^{2}_{16}\mathbb{E}\left|u^{\varepsilon}(s,x)\right|^{2}_{L^{2}(0,T;H^{2}(X))}\leq C^{2}C_{17}(k).$
Hence, we obtain $\mathbb{E}\left|J_{5}\right|^{2}_{W^{1,2}(0,T,H^{-2}(-k,k))}\leq C_{18}$, where $C_{18}>0$.
There exists a constant $C_{19}>0$ such that
$\left|-Du^{\varepsilon}(s,x)\right|_{H^{-2}(-k,k)}=D\left|u^{\varepsilon}(s,x)\right|_{H^{-2}(-k,k)}\leq DC_{19}\left|u^{\varepsilon}(s,x)\right|_{H^{2}(-k,k)}.$
Due to Lemma 2.4, for some constant $C_{20}(k)>0$ we obtain
$\displaystyle\mathbb{E}\left|-Du^{\varepsilon}(s,x)\right|^{2}_{L^{2}(0,T;H^{-2}(-k,k))}=\mathbb{E}\int_{0}^{T}\left|-Du^{\varepsilon}(s,x)\right|^{2}_{H^{-2}(-k,k)}ds$
$\displaystyle\leq D^{2}C^{2}_{19}\mathbb{E}\int_{0}^{T}\left|u^{\varepsilon}(s,x)\right|^{2}_{H^{2}(-k,k)}ds=D^{2}C^{2}_{19}\mathbb{E}\left|u^{\varepsilon}(s,x)\right|^{2}_{L^{2}(0,T;H^{2}(-k,k))}$
$\displaystyle\leq D^{2}C^{2}_{19}\mathbb{E}\left|u^{\varepsilon}(s,x)\right|^{2}_{L^{2}(0,T;H^{2}(X))}\leq D^{2}C_{20}(k).$
This implies $\mathbb{E}\left|J_{6}\right|^{2}_{W^{1,2}(0,T,H^{-2}(-k,k))}\leq C_{21}$, where $C_{21}>0$.
Insert in Lemma 2.1 from [7] $f(s):=\Phi(u^{\varepsilon}(s,x))$ and $K=H=L^{2}(X)$.
Then $\mathscr{I}(f)(t)=\int_{0}^{t}\Phi(u^{\varepsilon}(s,x))dW(s)$, and for all
$p\geq 1$ and $\alpha<\frac{1}{2}$ there exists a constant
$C_{22}(p,\alpha)>0$ such that
$\mathbb{E}\left|\int_{0}^{t}\Phi(u^{\varepsilon}(s,x))dW(s)\right|^{2p}_{W^{\alpha,2p}(0,T;L^{2}(X))}\leq C_{22}(p,\alpha)\,\mathbb{E}\left(\int_{0}^{T}\left|\Phi(u^{\varepsilon}(s,x))\right|^{2p}_{L_{2}^{0}(L^{2}(X))}ds\right).$
Therefore, due to the condition (2.3), we can write
$\mathbb{E}\left|\int_{0}^{t}\Phi(u^{\varepsilon}(s,x))dW(s)\right|^{2p}_{W^{\alpha,2p}(0,T;L^{2}(X))}\leq C_{23}(p,\alpha),\quad\mbox{where }C_{23}>0.$
Substituting $p:=1$ in the above inequality yields
(3.5)
$\mathbb{E}\left|J_{7}\right|^{2}_{W^{\alpha,2}(0,T;L^{2}(X))}=\mathbb{E}\left|\int_{0}^{t}\Phi(u^{\varepsilon}(s,x))dW(s)\right|^{2}_{W^{\alpha,2}(0,T;L^{2}(X))}\leq C_{23}(1,\alpha)=C_{24}(\alpha).$
Let $\beta\in\left(0,\frac{1}{2}\right)$ and
$\alpha\in\left(\beta+\frac{1}{2},\infty\right)$ be arbitrary and fixed. Note
that the following embeddings hold:
* $W^{\alpha,2}(0,T;L^{2}(\mathbb{R}))\subset W^{\alpha,2}(0,T;H^{-2}(-k,k))\quad$ and
* $W^{1,2}(0,T,H^{-2}(-k,k))\subset W^{\alpha,2}(0,T,H^{-2}(-k,k))$.
Therefore, there exists a constant $C_{25}(\alpha)>0$ such that, by the
Cauchy-Schwarz inequality,
$\displaystyle\mathbb{E}\left|u^{\varepsilon}(s,x)\right|_{W^{\alpha,2}(0,T,H^{-2}(-k,k))}^{2}=\mathbb{E}\left|\sum_{i=1}^{7}J_{i}\right|_{W^{\alpha,2}(0,T,H^{-2}(-k,k))}^{2}\leq 7\sum_{i=1}^{7}\mathbb{E}\left|J_{i}\right|^{2}_{W^{\alpha,2}(0,T,H^{-2}(-k,k))}\leq C_{25}(\alpha).$
Moreover, one has
* $W^{\alpha,2}(0,T,H^{-2}(-k,k))\subset C^{\beta}(0,T;H^{-3}(-k,k))\quad$ and
* $W^{\alpha,2}(0,T,H^{-2}(\mathbb{R}))\subset W^{\alpha,2}(0,T,H^{-2}(-k,k))$.
So, there exist constants $C_{26},C_{27}(k,\alpha),C_{28}(k,\alpha)>0$ such that
(3.6)
$\displaystyle\mathbb{E}\left|u^{\varepsilon}(s,x)\right|_{C^{\beta}(0,T;H^{-3}(-k,k))}^{2}\leq C_{26}\,\mathbb{E}\left|u^{\varepsilon}(s,x)\right|_{W^{\alpha,2}(0,T,H^{-2}(-k,k))}^{2}\leq C_{27}(k,\alpha),$
$\displaystyle\mathbb{E}\left|u^{\varepsilon}(s,x)\right|^{2}_{W^{\alpha,2}(0,T,H^{-2}(-k,k))}\leq C_{28}(k,\alpha).$
Let $\eta>0$ be arbitrary and fixed. Lemma 2.4 implies the existence of a constant
$C_{30}(k)>0$ such that
(3.7)
$\displaystyle\mathbb{E}\left|u^{\varepsilon}(s,x)\right|^{2}_{L^{2}(0,T;H^{1}(-k,k))}\leq\tilde{C}_{2}(k)=:C_{30}(k).$
Substituting in [4, Lemma 2.1]
$\alpha_{k}:=\eta^{-1}2^{k}\left(C_{30}(k)+C_{27}(k,\alpha)+C_{28}(k,\alpha)\right)$
and using the Markov inequality [12, p. 114] with
$X:=\left|u^{\varepsilon}(s,x)\right|^{2}_{L^{2}(0,T;H^{1}(-k,k))}+\left|u^{\varepsilon}(s,x)\right|^{2}_{W^{\alpha,2}(0,T;H^{-2}(-k,k))}+\left|u^{\varepsilon}(s,x)\right|_{C^{\beta}(0,T;H^{-3}(-k,k))}^{2}$
and threshold $\alpha_{k}$,
one obtains
$\displaystyle\mathbb{P}\Big(u^{\varepsilon}\in A\left(\left\{\alpha_{k}\right\}\right)\Big)\geq 1-\sum_{k}\mathbb{P}\Big(X\geq\eta^{-1}2^{k}\left(C_{30}(k)+C_{27}(k,\alpha)+C_{28}(k,\alpha)\right)\Big)$
$\displaystyle\geq 1-\sum_{k}\frac{C_{30}(k)+C_{27}(k,\alpha)+C_{28}(k,\alpha)}{\eta^{-1}2^{k}\left(C_{30}(k)+C_{27}(k,\alpha)+C_{28}(k,\alpha)\right)}=1-\sum_{k}\frac{\eta}{2^{k}}\geq 1-\eta.$
Let $K$ be a mapping such that for $\eta>0$,
$K\left(\eta\right):=A\left(\left\\{a_{k}^{(\eta)}\right\\}\right)$, where
$\left\\{a_{k}^{(\eta)}\right\\}$ is an increasing sequence of positive
numbers, which can, but does not have to, depend on $\eta$. Note that, due to
[4, Lemma 2.1], the set $K(\eta)$ is compact for all $\eta>0$. Moreover,
since $\mathbb{P}\left\{K\left(\eta\right)\right\}\geq 1-\eta$ for every $\eta>0$,
the family $\mathscr{L}\left(u^{\varepsilon}\right)$ is tight. ∎
## 4\. Proof of Lemma 2.3
###### Proof.
Let $\left\{e_{i}\right\}_{i\in\mathbb{N}}$ be an orthonormal basis in the space
$L^{2}(X)$. Denote by $P_{m}$, for all $m\in\mathbb{N}$, the orthogonal
projection onto $\mathrm{span}(e_{1},...,e_{m})$. Consider the finite-dimensional
Galerkin approximation of the problem (2.6) in the space $P_{m}L^{2}(X)$ in the form
(4.1)
$\begin{cases}du^{m,\varepsilon}(t,x)+\left(\varepsilon\theta\left(\frac{\left|u^{m,\varepsilon}_{4x}(t,x)\right|^{2}}{m}\right)u^{m,\varepsilon}_{4x}(t,x)+A\theta\left(\frac{\left|u^{m,\varepsilon}_{x}(t,x)\right|^{2}}{m}\right)u^{m,\varepsilon}(t,x)u^{m,\varepsilon}_{x}(t,x)\right.\\\
\left.+B\theta\left(\frac{\left|u^{m,\varepsilon}_{3x}(t,x)\right|^{2}}{m}\right)u^{m,\varepsilon}_{3x}(t,x)-C\theta\left(\frac{\left|u^{m,\varepsilon}_{2x}(t,x)\right|^{2}}{m}\right)u^{m,\varepsilon}_{2x}(t,x)+Du^{m,\varepsilon}(t,x)\right)dt\\\
=P_{m}\Phi\left(u^{m,\varepsilon}(t,x)\right)dW^{m}(t)\\\
u^{m,\varepsilon}_{0}(x)=P_{m}u^{\varepsilon}(0,x),\end{cases}$
where $\theta\in C^{\infty}(\mathbb{R})$ fulfills conditions
(4.2) $\begin{cases}\theta(\xi)=1,\quad&\textrm{when}\quad\xi\in[0,1]\\\
\theta(\xi)\in[0,1],\quad&\textrm{when}\quad\xi\in(1,2)\\\
\theta(\xi)=0,\quad&\textrm{when}\quad\xi\in\left.[2,\infty)\right..\end{cases}$
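A function $\theta$ with the properties (4.2) can be built from the standard $C^{\infty}$ bump $f(s)=e^{-1/s}$ for $s>0$, $f(s)=0$ otherwise; the short sketch below (our construction, one of many possible) also checks the three regimes numerically:

```python
import numpy as np

# Smooth cutoff theta satisfying (4.2), built from f(s) = exp(-1/s) for s > 0.
def f(s):
    s = np.asarray(s, dtype=float)
    return np.where(s > 0.0, np.exp(-1.0 / np.maximum(s, 1e-300)), 0.0)

def theta(xi):
    # theta = 1 on [0, 1], decreases smoothly through (1, 2), and is 0 on
    # [2, inf).  The denominator never vanishes, because f(2 - xi) and
    # f(xi - 1) cannot both be zero for the same xi.
    return f(2.0 - xi) / (f(2.0 - xi) + f(xi - 1.0))

print(theta(np.array([0.5, 1.0, 1.5, 2.0, 3.0])))  # -> approx. [1, 1, 0.5, 0, 0]
```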
Let $m\in\mathbb{N}$ be arbitrary and fixed, and let
$\displaystyle b(u(t,x)):=$
$\displaystyle\varepsilon\theta\left(\frac{\left|u^{m,\varepsilon}_{4x}(t,x)\right|^{2}}{m}\right)u^{m,\varepsilon}_{4x}(t,x)+A\theta\left(\frac{\left|u^{m,\varepsilon}_{x}(t,x)\right|^{2}}{m}\right)u^{m,\varepsilon}(t,x)u^{m,\varepsilon}_{x}(t,x)$
$\displaystyle+B\theta\left(\frac{\left|u^{m,\varepsilon}_{3x}(t,x)\right|^{2}}{m}\right)u^{m,\varepsilon}_{3x}(t,x)-C\theta\left(\frac{\left|u^{m,\varepsilon}_{2x}(t,x)\right|^{2}}{m}\right)u^{m,\varepsilon}_{2x}(t,x)+Du^{m,\varepsilon}(t,x),$
$\displaystyle\sigma(u(t,x)):=$ $\displaystyle
P_{m}\Phi(u^{m,\varepsilon}(t,x)).$
Then
$\displaystyle\left|b(u(t,x))\right|_{L^{2}(X)}\leq\varepsilon\left|\theta\left(\frac{\left|u^{m,\varepsilon}_{4x}(t,x)\right|^{2}}{m}\right)u^{m,\varepsilon}_{4x}(t,x)\right|_{L^{2}(X)}$
$\displaystyle+A\left|\theta\left(\frac{\left|u^{m,\varepsilon}_{x}(t,x)\right|^{2}}{m}\right)u^{m,\varepsilon}(t,x)u^{m,\varepsilon}_{x}(t,x)\right|_{L^{2}(X)}+B\left|\theta\left(\frac{\left|u^{m,\varepsilon}_{3x}(t,x)\right|^{2}}{m}\right)u^{m,\varepsilon}_{3x}(t,x)\right|_{L^{2}(X)}$
$\displaystyle+C\left|\theta\left(\frac{\left|u^{m,\varepsilon}_{2x}(t,x)\right|^{2}}{m}\right)u^{m,\varepsilon}_{2x}(t,x)\right|_{L^{2}(X)}+D\left|u^{m,\varepsilon}(t,x)\right|_{L^{2}(X)}$
$\displaystyle=:\varepsilon J_{1}+AJ_{2}+BJ_{3}+CJ_{4}+DJ_{5}.$
Note, that
$J_{2}=\begin{cases}0,\quad\textrm{when}\quad\left|u^{m,\varepsilon}_{x}(t,x)\right|\geq\sqrt{2m}\\\
\lambda\left|u^{m,\varepsilon}(t,x)u^{m,\varepsilon}_{x}(t,x)\right|_{L^{2}(X)},\quad\textrm{when}\quad\left|u^{m,\varepsilon}_{x}(t,x)\right|\leq\sqrt{2m},\end{cases}$
where $\lambda\in[0,1]$, so
$\displaystyle
J_{2}\leq\left|u^{m,\varepsilon}(t,x)u^{m,\varepsilon}_{x}(t,x)\right|_{L^{2}(X)}\leq\sqrt{2m}\left|u^{m,\varepsilon}(t,x)\right|_{L^{2}(X)}.$
Similarly,
$J_{1}=\begin{cases}0,\quad\textrm{when}\quad\left|u^{m,\varepsilon}_{4x}(t,x)\right|\geq\sqrt{2m}\\\
\lambda\left|u^{m,\varepsilon}_{4x}(t,x)\right|_{L^{2}(X)},\quad\textrm{when}\quad\left|u^{m,\varepsilon}_{4x}(t,x)\right|\leq\sqrt{2m},\end{cases}$
$J_{3}=\begin{cases}0,\quad\textrm{when}\quad\left|u^{m,\varepsilon}_{3x}(t,x)\right|\geq\sqrt{2m}\\\
\lambda\left|u^{m,\varepsilon}_{3x}(t,x)\right|_{L^{2}(X)},\quad\textrm{when}\quad\left|u^{m,\varepsilon}_{3x}(t,x)\right|\leq\sqrt{2m},\end{cases}$
and
$J_{4}=\begin{cases}0,\quad\textrm{when}\quad\left|u^{m,\varepsilon}_{2x}(t,x)\right|\geq\sqrt{2m}\\\
\lambda\left|u^{m,\varepsilon}_{2x}(t,x)\right|_{L^{2}(X)},\quad\textrm{when}\quad\left|u^{m,\varepsilon}_{2x}(t,x)\right|\leq\sqrt{2m},\end{cases}$
where $\lambda\in[0,1]$, so
$J_{1},J_{3},J_{4}\leq\sqrt{2m}.$
Therefore one gets
$\displaystyle\left|b(u^{m,\varepsilon}(t,x))\right|_{L^{2}(X)}=\varepsilon
J_{1}+AJ_{2}+BJ_{3}+CJ_{4}+DJ_{5}$ $\displaystyle\leq$
$\displaystyle\,\varepsilon\sqrt{2m}+A\sqrt{2m}\left|u^{m,\varepsilon}(t,x)\right|_{L^{2}(X)}+B\sqrt{2m}+C\sqrt{2m}+D\left|u^{m,\varepsilon}(t,x)\right|_{L^{2}(X)}$
$\displaystyle=$
$\displaystyle\left(A\sqrt{2m}+D\right)\left|u^{m,\varepsilon}(t,x)\right|_{L^{2}(X)}+\sqrt{2m}\left(\varepsilon+B+C\right).$
Moreover, due to the condition (2.3), there exist constants
$\kappa_{1},\kappa_{2}>0$, such that
$\left\|\Phi(u^{m}(t,x))\right\|_{L_{0}^{2}(L^{2}(X))}\leq\kappa_{1}\left|u^{m}(t,x)\right|_{L^{2}(X)}+\kappa_{2},$
so
$\displaystyle\left|b(u^{m,\varepsilon}(t,x))\right|_{L^{2}(X)}+\left\|\sigma(u^{m}(t,x))\right\|_{L_{0}^{2}(L^{2}(X))}$
$\displaystyle\leq$
$\displaystyle\left(A\sqrt{2m}+D\right)\left|u^{m,\varepsilon}(t,x)\right|_{L^{2}(X)}+\sqrt{2m}\left(\varepsilon+B+C\right)+\kappa_{1}\left|u^{m}(t,x)\right|_{L^{2}(X)}+\kappa_{2}$
$\displaystyle=$
$\displaystyle\left(A\sqrt{2m}+D+\kappa_{1}\right)\left|u^{m,\varepsilon}(t,x)\right|_{L^{2}(X)}+\sqrt{2m}\left(\varepsilon+B+C\right)+\kappa_{2}.$
Let $\kappa:=\max\left\\{\kappa_{1},\kappa_{2}\right\\}$ and
$\Lambda=\max\left\\{A,\varepsilon+B+C\right\\}$. Then
$\displaystyle\left|b(u^{m,\varepsilon}(t,x))\right|_{L^{2}(X)}+\left\|\sigma(u^{m}(t,x))\right\|_{L_{0}^{2}(L^{2}(X))}$
$\displaystyle\leq$
$\displaystyle\left(\Lambda\sqrt{2m}+\kappa+D\right)\left|u^{m,\varepsilon}(t,x)\right|_{L^{2}(X)}+\Lambda\sqrt{2m}+\kappa+D$
$\displaystyle=$
$\displaystyle\left(\Lambda\sqrt{2m}+\kappa+D\right)\left(\left|u^{m,\varepsilon}(t,x)\right|_{L^{2}(X)}+1\right).$
Therefore, by [9, Proposition 3.6] and [9, Proposition 4.6], for all
$m\in\mathbb{N}$ there exists a martingale solution to (4.1). Moreover,
applying the same methods as in section 3 one can show that for all $m$ the
following inequalities hold
$\displaystyle\exists_{\tilde{C}_{1}(\varepsilon)>0}\quad\mathbb{E}\left(\left|u^{m,\varepsilon}(t,x)\right|^{2}_{L^{2}(0,T;H^{2}(X))}\right)\leq\tilde{C}_{1}(\varepsilon),$
$\displaystyle\forall_{k\in X_{k}}\exists_{\tilde{C}_{2}(k,\varepsilon)>0}\quad\mathbb{E}\left(\left|u^{m,\varepsilon}(t,x)\right|^{2}_{L^{2}(0,T;H^{1}(-k,k))}\right)\leq\tilde{C}_{2}(k,\varepsilon)$
and the family of distributions $\mathscr{L}(u^{m,\varepsilon})$ is tight in
$L^{2}(0,T;L^{2}(X))\cap C(0,T;H^{-3}(X))$. Then application of the same
methods as in the proof of Theorem 2.2 and in section 3 leads to the existence
of a martingale solution to (2.6) under the conditions (2.3), (2.4) and (2.5). ∎
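To complement the abstract existence argument, the following hypothetical Euler-Maruyama sketch integrates a spectrally truncated analogue of (4.1) on a periodic grid. The diagonal noise $\sigma_{i}(u)=\lambda_{i}u$ and all numerical values are our assumptions for illustration only, since the paper does not fix a concrete $\Phi$; the cutoffs $\theta$ are omitted for brevity (they equal one for moderate data):

```python
import numpy as np

rng = np.random.default_rng(0)

# Euler-Maruyama time stepping for a Fourier-Galerkin truncation of (2.6)/(4.1).
A, B, C, D, eps = 1.0, 1.0, 0.1, 0.05, 1e-3
N, L, m = 128, 40.0, 16                       # m = Galerkin level
x = np.linspace(0.0, L, N, endpoint=False)
k = 2.0 * np.pi * np.fft.fftfreq(N, d=L / N)
keep = np.abs(np.fft.fftfreq(N) * N) <= m     # projection P_m onto low modes
lam = 0.05 / (1.0 + np.arange(N))             # square-summable noise weights

def deriv(u, order):
    # Spectral derivative followed by the Galerkin projection P_m.
    return np.real(np.fft.ifft(np.where(keep, (1j * k) ** order * np.fft.fft(u), 0.0)))

u = np.exp(-((x - L / 2.0) ** 2))
dt, nsteps = 1e-4, 2000
for _ in range(nsteps):
    drift = -(eps * deriv(u, 4) + A * u * deriv(u, 1)
              + B * deriv(u, 3) - C * deriv(u, 2) + D * u)
    dW = rng.standard_normal(N) * np.sqrt(dt)  # Wiener increments
    noise = np.real(np.fft.ifft(np.where(keep, np.fft.fft(lam * u * dW), 0.0)))
    u = u + dt * drift + noise
print("mean square of the truncated solution:", np.mean(u ** 2))
```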
## References
* [1] Adams R.A., Sobolev Spaces, Academic Press, New York, San Fransisco, London, 1975.
* [2] Billingsley P., Convergence of Probability Measures, John Wiley & Sons, New York, 1999.
* [3] Da Prato G., Zabczyk J., Stochastic equations in infinite dimensions, Cambridge University Press, Cambridge, 1992.
* [4] de Bouard A., Debussche A., On the Stochastic Korteweg - de Vries Equation, Journal of Functional Analysis 15, (1998) 215-251.
* [5] Elkamash I.S., Kourakis I., Electrostatic shock structures in dissipative multi-ion dusty plasmas, Physics of Plasmas 25, 062104 (2018).
* [6] Elkamash I.S., Kraenkel R.A., Verheest F., Coutinho R.M., Reville B., Kourakis I., Generalized hybrid Korteweg - de Vries - Burgers type equation for propagating shock structures in non-integrable systems, Nonlinear Dynamics, 2018, (unpublished).
* [7] Flandoli F., Gątarek D., Martingale and stationary solutions for stochastic Navier-Stokes equations, Probability Theory and Related Fields 102, (1995) 367-391.
* [8] Gawarecki L., Mandrekar V., Stochastic differential equations in infinite dimensions, Springer, New York, 2011.
* [9] Karatzas I., Shreve S., Brownian motion and stochastic calculus, second edition, Springer, 1991.
* [10] Misra, A.P., Adhikary, N.C., Shukla, P.K., Ion-acoustic solitary waves and shocks in a collisional dusty negative-ion plasma, Phys. Rev. E 86, 056406 (2012).
* [11] Ostrovsky L., Asymptotic Perturbation Theory of Waves, Imperial College Press, 2015.
* [12] Papoulis A., Probability, Random Variables, and Stochastic Processes, 3rd ed., McGraw-Hill, New York, 1991.
1 Key Laboratory of Dark Matter and Space Astronomy, Purple Mountain Observatory, CAS, Nanjing 210023, PR China, e-mail: <EMAIL_ADDRESS>
2 Shandong Provincial Key Laboratory of Optical Astronomy and Solar-Terrestrial Environment, Institute of Space Sciences, Shandong University, Weihai 264209, PR China
3 CAS Key Laboratory of Solar Activity, National Astronomical Observatories, Chinese Academy of Sciences, Beijing 100101, PR China
4 School of Astronomy and Space Science, University of Chinese Academy of Sciences, Beijing 100049, PR China
# Spectroscopic observations of a flare-related coronal jet
Q. M. Zhang1,3, Z. H. Huang2, Y. J. Hou3,4, D. Li1,3, Z. J. Ning1, and Z. Wu2
(Received; accepted)
###### Abstract
Context. Coronal jets are ubiquitous in active regions (ARs) and coronal
holes.
Aims. In this paper, we study a coronal jet related to a C3.4 circular-ribbon
flare in active region 12434 on 2015 October 16.
Methods. The flare and jet were observed in ultraviolet (UV) and extreme
ultraviolet (EUV) wavelengths by the Atmospheric Imaging Assembly (AIA) on
board the Solar Dynamics Observatory (SDO). The line-of-sight (LOS)
magnetograms of the photosphere were observed by the Helioseismic and Magnetic
Imager (HMI) on board SDO. The whole event was covered by the Interface Region
Imaging Spectrograph (IRIS) during its imaging and spectroscopic observations.
Soft X-ray (SXR) fluxes of the flare were recorded by the GOES spacecraft.
Hard X-ray (HXR) fluxes at 4$-$50 keV were obtained from observations of
RHESSI and Fermi. Radio dynamic spectra of the flare were recorded by the
ground-based stations belonging to the e-Callisto network.
Results. Two minifilaments were located under a 3D fan-spine structure before
the flare. The flare was generated by the eruption of one filament. The kinetic
evolution of the jet was divided into two phases: a slow rise phase at a speed
of $\sim$131 km s$^{-1}$ and a fast rise phase at a speed of $\sim$363 km s$^{-1}$ in
the plane-of-sky. The slow rise phase may correspond to the impulsive
reconnection at the breakout current sheet. The fast rise phase may correspond
to magnetic reconnection at the flare current sheet. The transition between
the two phases occurred at $\sim$09:00:40 UT. The blueshifted Doppler
velocities of the jet in the Si iv 1402.80 Å line range from $-$34 to $-$120 km
s$^{-1}$. The accelerated high-energy electrons are composed of three groups. Those
propagating upward along open field generate type III radio bursts, while
those propagating downward produce HXR emissions and drive chromospheric
condensation observed in the Si iv line. The electrons trapped in the rising
filament generate a microwave burst lasting for $\leq$40 s. Bidirectional
outflows at the base of the jet are manifested by significant line broadenings of
the Si iv line. The blueshifted Doppler velocities of the outflows range from
$-$13 to $-$101 km s$^{-1}$, and the redshifted Doppler velocities range from
$\sim$17 to $\sim$170 km s$^{-1}$.
Conclusions. Our multiwavelength observations of the flare-related jet are in
favor of the breakout jet model and are important for understanding the
acceleration and transport of nonthermal electrons.
###### Key Words.:
Line: profiles – Magnetic reconnection – Sun: flares – Sun: filaments,
prominences – Sun: UV radiation
## 1 Introduction
Jet-like activities as a result of magnetic reconnection are ubiquitous in the
solar atmosphere. Small-scale jets with lower energy budgets and shorter
lifetimes include spicules (De Pontieu et al., 2004; Samanta et al., 2019),
chromospheric jets (Shibata et al., 2007; Liu et al., 2009; Tian et al.,
2014a), bidirectional plasma jets related to explosive events (Innes et al.,
1997; Li et al., 2018a), coronal nanojets (Antolin et al., 2020) and mini-jets
(Chen et al., 2020). Large-scale jets with higher energy budgets and longer
lifetimes include H$\alpha$ surges (Schmieder et al., 1995; Liu & Kurokawa,
2004) and coronal jets (Cirtain et al., 2007; Savcheva et al., 2007; Shimojo
et al., 2007; Raouafi et al., 2016; Shen, 2021). Coronal jets are transient
collimated outflows propagating along open magnetic field or large-scale
closed loops (Shibata et al., 1992, 1994; Zhang et al., 2012; Huang et al.,
2012, 2020). The speeds of jets can reach hundreds of km s$^{-1}$ (Shimojo et al.,
1996; Culhane et al., 2007; Lu et al., 2019). The presence of open field
facilitates the escape of electron beams accelerated by reconnection at the
jet base and the generation of type III radio bursts (e.g., Krucker et al.,
2011; Glesener et al., 2012; Glesener & Fleishman, 2018).
Although the morphology of jets varies from case to case, most of them show an
inverse-Y shape or a two-sided shape (Moore et al., 2010; Shen et al., 2019).
The temperature of a jet decreases from $\sim$10 MK at the base (Shimojo &
Shibata, 2000; Chifor et al., 2008; Bain & Fletcher, 2009) to a few MK along
the spire (Zhang & Ji, 2014b). Sometimes, the hot, fast extreme ultraviolet
(EUV) jet is adjacent to or mixed with the cool, slow H$\alpha$ jet (Shen et
al., 2017; Sakaue et al., 2017, 2018; Hou et al., 2020). The electron
densities of jets range from $\geq$10$^{8}$ cm$^{-3}$ (Young & Muglach, 2014) to
$\geq$10$^{10}$ cm$^{-3}$ (Mulay et al., 2017). Bright and compact blobs (or plasmoids)
are discovered in coronal jets (Zhang & Ji, 2014b; Li & Yang, 2019; Zhang &
Ni, 2019; Joshi et al., 2020a), which are mainly interpreted by magnetic
islands as a result of tearing-mode instability in the current sheet (Wyper et
al., 2016; Ni et al., 2017). Recurring jets produced by successive energy
release at the same region are common (Chifor et al., 2008; Hong et al., 2019;
Lu et al., 2019). Aside from the radial motion, untwisting motions have been
detected in helical jets, implying rapid release and transfer of magnetic
helicity (Chen et al., 2012; Zhang & Ji, 2014a; Cheung et al., 2015; Doyle et
al., 2019; Joshi et al., 2020b).
In spite of substantial investigations of jets using multiwavelength
observational data, the triggering mechanism remains an important open issue.
To date, several mechanisms have been proposed, such as
magnetic flux emergence and reconnection with pre-existing magnetic fields
(Yokoyama & Shibata, 1996; Archontis & Hood, 2013), magnetic cancellation
(Panesar et al., 2016; Sterling et al., 2017), minifilament eruption (Sterling
et al., 2015; Hong et al., 2017; Li et al., 2018b), and photospheric rotation
(Pariat et al., 2009). Wyper et al. (2017) performed three-dimensional (3D)
magnetohydrodynamics (MHD) numerical simulations of a coronal jet driven by
filament ejection, whereby a region of strongly sheared magnetic field near
the solar surface becomes unstable and erupts. The authors concluded that
energy is initially released via magnetic reconnection at the thin breakout
current sheet above the flux rope, which is followed by continuing energy
release at the thin flare current sheet beneath the erupting filament (or flux
rope). The kinetic evolution of the jet is apparently divided into a slow rise
phase before the flux rope opens up and a fast rise phase after the rope
totally opens up, which correspond to magnetic reconnections at the breakout
current sheet and flare current sheet, respectively (Wyper et al., 2018, see
their Fig. 5). The breakout jet model is verified observationally by the
signatures of a rotating jet and fast degradation of the circular flare ribbon
following the coherent brightenings of the ribbon associated with the jet
(Zhang et al., 2020). Breakout reconnection at the null point of a fan-spine
structure is recently evidenced by the bidirectional outflows ejected from the
reconnection site (Yang et al., 2020). However, reconnection at the flare
current sheet below the jet has not been noticed.
In this paper, we report our multiwavelength observations of a coronal jet related to a
C3.4 circular-ribbon flare that induced simultaneous transverse oscillations
of a coronal loop and a filament (Zhang, 2020). The flare occurred in NOAA
active region (AR) 12434 where a series of homologous flares were produced
(Zhang et al., 2016a, b). This paper is arranged as follows. Data analysis is
described in Sect. 2. The results are presented in Sect. 3 and compared with
previous works in Sect. 4. Finally, a brief summary is given in Sect. 5.
## 2 Observations and data analysis
The C3.4 flare and jet were observed by the Atmospheric Imaging Assembly (AIA;
Lemen et al., 2012) on board the Solar Dynamics Observatory (SDO) on 2015
October 16. SDO/AIA takes full-disk images in two ultraviolet (UV; 1600 and
1700 Å) and seven EUV (94, 131, 171, 193, 211, 304, and 335 Å) wavelengths.
The line-of-sight (LOS) magnetograms of the photosphere were observed by the
Helioseismic and Magnetic Imager (HMI; Scherrer et al., 2012) on board SDO.
The level_1 data of AIA and HMI were calibrated using the standard solar
software (SSW) programs aia_prep.pro and hmi_prep.pro, respectively.
The flare and jet were also observed by the Interface Region Imaging
Spectrograph (IRIS; De Pontieu et al., 2014) Slit-Jaw Imager (SJI) in 1330 Å
($\log T\approx 4.4$) and 1400 Å ($\log T\approx 4.8$) with a field of view
(FOV) of 134$\arcsec\times$129$\arcsec$. Spectroscopic observation of the
flare was in the “large coarse 8-step raster” mode using four spectral windows
(C ii, Mg ii, O i, and Si iv). Each raster had 8 steps from east to west and
covered an area of 14$\arcsec\times$129$\arcsec$. The step cadence and
exposure time were $\sim$10 s and 8 s. The spectra were preprocessed using the
standard SSW programs iris_orbitvar_corr_l2.pro and iris_prep_despike.pro. The
Si iv 1402.80 Å line ($\log T\approx 4.8$) is optically thin and can be fitted
by single or multicomponent Gaussian functions. The reference line center of
Si iv is set to be 1402.80 Å (Li et al., 2015a; Yu et al., 2020a). The
uncertainty in the Doppler velocity is $\sim$2 km s$^{-1}$.
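As an aside, the conversion from a fitted Si iv line centre to a Doppler velocity, $v=c\,(\lambda_{c}-\lambda_{0})/\lambda_{0}$, can be sketched as follows; the profile below is synthetic and the fitting setup is our assumption, not the actual IRIS reduction pipeline:

```python
import numpy as np
from scipy.optimize import curve_fit

# Single-Gaussian fit of a (synthetic) Si IV 1402.80 A profile and conversion
# of the fitted line centre to a Doppler velocity.
c_kms = 2.99792458e5            # speed of light [km/s]
lam0 = 1402.80                  # reference line centre [Angstrom]

def gauss(lam, amp, cen, sigma, bg):
    return amp * np.exp(-0.5 * ((lam - cen) / sigma) ** 2) + bg

lam = np.linspace(1402.0, 1403.6, 80)
cen_true = lam0 * (1.0 - 60.0 / c_kms)      # an assumed -60 km/s blueshift
profile = gauss(lam, 120.0, cen_true, 0.12, 5.0)
profile += np.random.default_rng(1).normal(0.0, 2.0, lam.size)  # noise

popt, _ = curve_fit(gauss, lam, profile, p0=[100.0, lam0, 0.1, 0.0])
v_doppler = c_kms * (popt[1] - lam0) / lam0
print(f"fitted Doppler velocity: {v_doppler:.1f} km/s")  # ~ -60 km/s
```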
Soft X-ray (SXR) light curves of the flare in 0.5$-$4 and 1$-$8 Å were
recorded by the GOES spacecraft. Hard X-ray (HXR) fluxes of the flare at
different energy bands were obtained from observations of the Reuven Ramaty
High-Energy Solar Spectroscopic Imager (RHESSI; Lin et al., 2002) and the
Gamma-ray Burst Monitor (GBM; Meegan et al., 2009) on board the Fermi
spacecraft. The time cadence of Fermi/GBM switched from an ordinary value of
0.256 s before flare to 0.064 s during the flare. Radio dynamic spectra of the
flare were recorded by the ground-based stations belonging to the e-Callisto
network111http://www.e-callisto.org. The observational parameters are listed
in Table 1.
To obtain the 3D magnetic configuration of the AR before flare, we utilize the
“weighted optimization” method (Wiegelmann, 2004; Wiegelmann et al., 2012) to
perform a nonlinear force-free field (NLFFF) extrapolation based on the
photospheric vector magnetograms observed by HMI at 08:48 UT. The azimuthal
component of the inverted vector magnetic field was processed to correct the
180$\degr$ ambiguity (Leka et al., 2009). The vector field in the image plane
was transformed to the heliographic plane (Gary & Hagyard, 1990). The
extrapolation was carried out within a box of 380$\times$400$\times$512
uniformly spaced grid points with $dx=dy=dz=1.2\arcsec$. The
squashing factor $Q$ (Demoulin et al., 1996; Titov et al., 2002) and twist
number $\mathcal{T}_{w}$ (Berger & Prior, 2006) were calculated with the code
developed by Liu et al. (2016).
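As a rough illustration of how $\mathcal{T}_{w}$ is accumulated once a field line has been traced through the extrapolated cube, the minimal sketch below evaluates the Berger & Prior (2006) integral along a single line. It assumes the tracing and the interpolation of $\vec{B}$ and $\nabla\times\vec{B}$ to the sample points have already been done, and it is not the actual code of Liu et al. (2016).

```python
import numpy as np

def twist_number(B, curl_B, ds):
    """Twist number of one field line (Berger & Prior 2006):
    T_w = (1/4pi) * Integral[ (curl B . B) / |B|^2 ] dl.

    B, curl_B : (N, 3) arrays sampled along the traced field line
    ds        : (N,) arc-length elements between the samples
    """
    integrand = (np.einsum('ij,ij->i', curl_B, B)
                 / np.einsum('ij,ij->i', B, B))
    return np.sum(integrand * ds) / (4.0 * np.pi)
```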
Table 1: Description of the observational parameters.
Instru. | $\lambda$ | Time | Cad. | Pix. Size
---|---|---|---|---
| (Å) | (UT) | (s) | (″)
AIA | 94$-$304 | 08:50-09:18 | 12 | 0.6
AIA | 1600 | 08:50-09:18 | 24 | 0.6
HMI | 6173 | 08:48-09:18 | 45 | 0.6
SJI | 1330 | 08:50-09:18 | 9, 10 | 0.166
SJI | 1400 | 08:50-09:18 | 19, 20 | 0.166
GOES | 0.5$-$4 | 08:58-09:18 | 2.05 | …
GOES | 1$-$8 | 08:58-09:18 | 2.05 | …
RHESSI | 6$-$50 keV | 09:00-09:10 | 4.0 | …
GBM | 4$-$50 keV | 08:58-09:10 | 0.064 | …
KRIM | 250$-$350 MHz | 08:58-09:01 | 0.25 | …
blen5m | 0.98$-$1.27 GHz | 09:00-09:02 | 0.25 | …
ZSTS | 1.41$-$1.43 GHz | 09:00-09:02 | 0.25 | …
Figure 1: (a) SXR light curves of the C3.4 flare in 0.5$-$4 Å (cyan line) and
1$-$8 Å (magenta line). (b-c) HXR light curves recorded by RHESSI and Fermi at
different energy bands. (d) Radio light curves recorded by e-Callisto/KRIM at
280, 300, 320, and 340 MHz.
Figure 2: (a-f) Snapshots of the AIA 304 Å images.
The arrows point to two minifilaments (F1 and F2), flare, and jet. In panel
(d), the curved slice (S1) is used to investigate the radial propagation of
the jet. (g) AIA 1600 Å image at 09:00:39 UT. The intensity contours of the
HXR image are superposed with magenta lines. (h) HMI LOS magnetogram at
08:58:15 UT. The thick green and purple lines represent F1 and F2 in panel
(a). The whole evolution of the event observed in 304 Å is shown in a movie
(anim304.mov) available online.
## 3 Results
### 3.1 Minifilament eruption and circular-ribbon flare
Figure 1(a) shows the SXR light curves of the flare. The SXR emissions started
to rise at $\sim$08:58 UT and reached peak values at $\sim$09:03 UT before
declining gradually until $\sim$09:12 UT. In Fig. 2, the EUV images observed
by AIA in 304 Å illustrate the whole evolution of the event (see also the
online movie anim304.mov). Before the flare, two short minifilaments (F1 and F2)
resided in the AR (panel (a)). With the slow rise of F2, the EUV intensities
of the flare started to increase at $\sim$08:58 UT and reached peak values at
$\sim$09:00 UT (panels (b-c)). Meanwhile, a coronal jet propagated in the
southeast direction, which was also observed in 1600 Å (panel (g)). F1, lying
close to F2, however, was undisturbed and survived the flare (panel (f)). Both
of the minifilaments were located along the polarity inversion lines (panel
(h)). The morphology and evolution of the C3.4 flare were quite analogous to
those of the C4.2 flare starting at $\sim$13:36 UT (Zhang et al., 2016a; Dai et
al., 2020).
HXR light curves of the flare recorded by RHESSI and Fermi at different energy
bands are plotted in Fig. 1(b-c). Note that there was no observation from
RHESSI until 09:00:20 UT. Before 08:59:42 UT, there were two small peaks
(panel (c)). The HXR emissions at 11$-$26 keV and 26$-$50 keV started to
increase sharply at 08:59:42 UT and peaked at 09:00:20 UT before decreasing
until $\sim$09:00:40 UT. The HXR peaks indicate that the most drastic release of
energy and particle acceleration took place during 08:59:42$-$09:00:40 UT
($\sim$60 s). Afterwards, the HXR emissions between 4 and 26 keV increased
gradually and peaked around 09:02:00 UT before decreasing slowly until
$\sim$09:04:30 UT, implying their thermal nature from hot plasma ($\sim$10 MK)
as a result of ongoing chromospheric evaporation.
Figure 3: Top view (a) and side view (b) of the 3D magnetic configuration of
AR 12434. The blue and yellow lines represent the fan-spine field lines. The
green and light violet lines represent the field lines of F1 and F2,
respectively. (c) Close-up of the flare region within the dashed box of panel
(a). (d) Spatial distribution of $\log Q$ at $z=0$ within the flare region,
which is overlapped with the field lines of F1 and F2. The 3D magnetic
configuration from different perspectives is shown in a movie (anim3d.mov)
available online.
Figure 2(h) shows the LOS magnetogram of the flare region observed by HMI at
08:58:15 UT, featuring a central negative polarity surrounded by positive
polarities. Such a magnetic pattern is indicative of the fan-spine topology in
the corona (e.g., Zhang et al., 2012; Li et al., 2018b; Hou et al., 2019; Yang
et al., 2020). In Fig. 3, the left panels demonstrate the top and side views
of the 3D magnetic configuration of AR 12434. The fan-spine topology is clearly
depicted by the blue and yellow lines, and the outer spine is connected to a
remote negative polarity to the southeast of the fan dome. Below the dome, the
green and light violet lines represent the field lines of F1 and F2,
respectively. A close-up of the flare region is displayed in Fig. 2(c). The
magnetic fields supporting the two minifilaments are sheared arcades instead
of twisted flux ropes. The $\mathcal{T}_{w}$ lies in the range of 0.45$-$0.65
for the lower F1 and 0.6$-$0.7 for the upper F2. Figure 3(d) shows the spatial
distribution of $\log Q$ within the flare region, highlighting the closed
ribbon of high $Q$, which is closely cospatial with the bright circular
ribbon (Fig. 2(c)).
The flare was accompanied by type III radio bursts at 250$-$350 MHz. In Fig.
4, the bottom panel shows the radio dynamic spectra of the flare recorded by
e-Callisto/KRIM. The bursts with fast frequency drift rates are noticeable
during 08:59:10$-$09:00:40 UT. The radio fluxes at 280, 300, 320, and 340 MHz
are extracted and plotted with colored lines in Fig. 1(d). The radio peaks are
roughly correlated with the HXR peaks, suggesting
their common origin. The reconnection-accelerated electrons propagate downward
to produce HXR emissions in the chromosphere and propagate upward along open
field to produce type III bursts at the same time (Zhang et al., 2016b).
Combined with the extrapolation results in Fig. 3, the real magnetic
configuration of the flare region is consistent with the previous schematic
illustration (Wang & Liu, 2012, see their Fig. 1).
Figure 4: Radio dynamic spectra of the flare recorded by three ground-based
stations of e-Callisto network. The white dashed line denotes the time at
09:00:40 UT. In panel (c), the black arrow points to the type III bursts with
fast frequency drift rates.
### 3.2 Coronal jet
To investigate the radial propagation of the jet, an artificial slice (S1)
along the jet spire is selected in Fig. 2(d), which is $\sim$80$\arcsec$ in
length. Time-distance diagrams of S1 in 94 Å ($\log T\approx 6.8$), 171 Å
($\log T\approx 5.8$), and 304 Å ($\log T\approx 4.7$) are displayed in Fig.
5. The jet is distinctly observed in all EUV wavelengths, indicating its
multithermal nature (Zhang & Ji, 2014b; Joshi et al., 2020a). The kinetic
evolution is divided into two phases: a slow rise phase at a plane-of-sky
speed of $\sim$131 km s-1 and a fast rise phase at a speed of $\sim$363 km
s-1, respectively. The transition between the two phases occurred at
$\sim$09:00:40 UT, which is denoted by the magenta dashed line. Impulsive
heating to reach a temperature of $\geq$6 MK is concurrent with the turning
point (panel (a)). The evolution of jet is basically consistent with the
breakout model (Wyper et al., 2017, 2018). Magnetic reconnections at the
breakout current sheet and flare current sheet lead to intense electron
acceleration and heating.
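The two plane-of-sky speeds can be read off the time-distance diagram with straight-line fits to the leading edge in each phase. The sketch below shows the procedure on invented sample points; the measured front positions and the break time are hypothetical placeholders, and only the fitting scheme reflects the analysis.

```python
import numpy as np

ARCSEC_KM = 725.0   # 1 arcsec on the Sun corresponds to ~725 km
T_BREAK = 160.0     # transition time in seconds after the start time

# Hypothetical (time, position) samples of the jet front read off the
# 304 A time-distance diagram; real values would be measured by hand.
t = np.array([40., 70., 100., 130., 160., 180., 200., 220., 240.])
s = np.array([8., 13., 19., 24., 30., 40., 50., 60., 70.])  # arcsec

slow = t <= T_BREAK
v_slow = np.polyfit(t[slow], s[slow], 1)[0] * ARCSEC_KM    # km/s
v_fast = np.polyfit(t[~slow], s[~slow], 1)[0] * ARCSEC_KM  # km/s
print(f"slow rise ~{v_slow:.0f} km/s, fast rise ~{v_fast:.0f} km/s")
```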
Figure 5: Time-distance diagrams of S1 in 94, 171, and 304 Å. The magenta
dashed line denotes the time at 09:00:40 UT. The plane-of-sky speeds of the
jet during the slow rise ($\sim$131 km s$^{-1}$) and fast rise ($\sim$363 km s$^{-1}$)
are labeled. On the $y$-axis, $s=0$ and $s=80.4\arcsec$
denote the northwest and southeast endpoints of S1, respectively.
In Fig. 6, the 1330 Å images observed by IRIS/SJI illustrate the evolution of
flare and jet in FUV wavelength, which is similar to that in EUV wavelengths.
Coherent brightenings of the circular ribbon took place around 09:00:07 UT,
indicating null point reconnection (see panel (c) and the online movie
anim1330.mov). Unfortunately, the southern part of the flare and the major part
of the jet spire were not observed due to the limited FOV of IRIS. The raster
observations enable us to carry out spectral analysis of the flare and jet
base. In Fig. 7, the left column shows selected FUV images observed by SJI
when the jet was covered by the slit. The right column shows the corresponding
Si iv spectra of the slit. The spectra of the jet are generally blueshifted,
meaning that the jet materials are moving toward the observer.
Figure 6: Snapshots of the IRIS/SJI 1330 Å images with a FOV of
60$\arcsec\times$40$\arcsec$. The arrows point to the bright circular ribbon
and jet. The evolution of the event observed in 1330 Å is shown in a movie
(anim1330.mov) available online.
Figure 7: Selected FUV images observed by SJI
(left panels) and the corresponding Si iv spectra of the slit (right panels).
The white dashed lines represent the slit positions of raster observations.
The short cyan lines mark the positions of jet for spectral fittings in Fig.
8. The green dashed lines represent the reference line center of Si iv, i.e.,
1402.80 Å.
To precisely quantify the Doppler velocities of the jet, the line profiles of
Si iv at the positions marked with cyan lines are extracted and plotted with
orange lines in Fig. 8. The entirely blueshifted profiles are satisfactorily
fitted with a single-Gaussian function (blue lines in panels (b-e)), and the
calculated Doppler velocities ($v_{b}$) are labeled on top of each panel. The
profiles with an enhanced blue wing are fitted with a double-Gaussian function
(blue and magenta lines in the remaining panels). The sum of the two components
is drawn with cyan dashed lines, which agrees well with the observed
profiles (orange lines), meaning that the results of the double-Gaussian fitting
are acceptable. The calculated Doppler velocities ($v_{b}$) of the blueshifted
component are also labeled on top of each panel. In Fig. 12, the values of
$v_{b}$, ranging from $-34$ to $-120$ km s$^{-1}$, are marked with blue circles.
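A schematic version of this fitting step, written with scipy rather than the authors' actual pipeline, is given below; the initial guesses (one component near the rest wavelength plus one blueshifted by a few tenths of an Å) are assumptions of the sketch.

```python
import numpy as np
from scipy.optimize import curve_fit

C_KMS = 2.998e5     # speed of light, km/s
LAM0 = 1402.80      # Si IV reference line center, Angstrom

def gauss(lam, a, mu, sig):
    return a * np.exp(-0.5 * ((lam - mu) / sig) ** 2)

def double_gauss(lam, a1, mu1, s1, a2, mu2, s2):
    return gauss(lam, a1, mu1, s1) + gauss(lam, a2, mu2, s2)

def doppler(mu):
    """Doppler velocity in km/s; negative values mean blueshift."""
    return C_KMS * (mu - LAM0) / LAM0

def fit_jet_profile(wave, prof):
    """Fit a double Gaussian: one component near rest, one blueshifted."""
    p0 = [prof.max(), LAM0, 0.10, 0.3 * prof.max(), LAM0 - 0.3, 0.10]
    popt, _ = curve_fit(double_gauss, wave, prof, p0=p0)
    return doppler(popt[1]), doppler(popt[4])  # v of the two components
```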
Figure 8: Line profiles of Si iv at the positions of slit, marked with short
cyan lines in Fig. 7. The orange lines represent the observed profiles. The
magenta and blue lines stand for the fitted components. The blueshifted
Doppler velocities ($v_{b}$) of the jet are labeled.
### 3.3 Chromospheric condensation
Chromospheric condensation at circular flare ribbons has been observed and
investigated in the homologous C3.1 and C4.2 flares (Zhang et al., 2016a, b).
The redshifted velocities of the downflow reach up to $\sim$60 km s$^{-1}$ in
observations of the Si iv line. The condensation is primarily driven by
reconnection-accelerated nonthermal electrons (Li et al., 2015b, 2017). In
Fig. 9, likewise, the left column shows selected FUV images observed by SJI
when the circular ribbon was covered by the slit. The right column shows the
corresponding Si iv spectra of the slit. The spectra of ribbon are redshifted,
indicating plasma downflow or chromospheric condensation.
Figure 9: Selected FUV images observed by SJI (left panels) and the
corresponding Si iv spectra of the slit (right panels). The white dashed lines
represent the slit positions of raster observations. The short cyan lines mark
the positions of ribbon for spectral fittings in Fig. 10. In panels (a2),
(a3), (a4), (c2), and (c4), the short green lines mark the positions of jet
base for spectral fittings in Fig. 11. The green dashed lines represent the
reference line center of Si iv.
To quantify the Doppler velocities of condensation, the line profiles of Si iv
at the positions marked with cyan lines are extracted and plotted with orange
lines in Fig. 10. The profiles are fitted with a double-Gaussian function (red
and magenta lines), and the sum of the two components is drawn with cyan dashed
lines, which agrees well with the observed profiles. The calculated
Doppler velocities ($v_{r}$) of the redshifted component are labeled on top of
each panel. In Fig. 12, the values of $v_{r}$, ranging from $\sim$20 to
$\sim$80 km s$^{-1}$, are marked with red circles. The cause of condensation will
be discussed in Sect. 4.2.
Figure 10: Line profiles of Si iv at the positions of slit, marked with short
cyan lines in Fig. 9. The orange lines represent the observed profiles. The
magenta and red lines stand for the fitted components. The redshifted Doppler
velocities ($v_{r}$) of chromospheric condensation are labeled.
### 3.4 Bidirectional outflows
As mentioned before, magnetic reconnection occurs not only at the breakout
current sheet above the eruptive filament (or flux rope) but also at the flare
current sheet below the flux rope (Wyper et al., 2017, 2018). In Fig. 9,
dramatic line broadenings of the Si iv line at the jet base are shown in
panels (b2), (b3), (b4), (d2), and (d4), implying simultaneous bidirectional
reconnection outflows below the jet (e.g., Tian et al., 2014b; Li et al.,
2018c; Xue et al., 2018). To work out the Doppler velocities of outflows, the
line profiles of Si iv at the positions marked with short green lines are
extracted and plotted with orange lines in Fig. 11. The profiles are fitted
with a triple-Gaussian function (blue, magenta, and red lines) except in panel
(c), which is fitted with a double-Gaussian function. The sum of the multiple
components is drawn with cyan dashed lines. The calculated Doppler velocities
of the redshifted component ($v_{r}$) and blueshifted component ($v_{b}$) are
labeled on top of each panel. In Fig. 12, the velocities of the reconnection
upflow, in the range of $-13$ to $-101$ km s$^{-1}$, are marked with blue triangles.
The velocities of the reconnection downflow, in the range of $\sim$17 to $\sim$170
km s$^{-1}$, are marked with red triangles. We note that the fast reconnection
outflows are observed after 09:00:40 UT, when the jet is rising quickly at a
speed of $\sim$363 km s$^{-1}$.
It should be emphasized that the chromospheric condensation takes place at the
circular ribbon as marked by short cyan lines in Fig. 9. Only redshifted
downflow at speeds of 20$-$80 km s$^{-1}$ could be identified in the spectra (Figs.
9, 10). The bidirectional outflows are observed inside the circular ribbon and
close to the jet base. Simultaneous upflow and downflow could be recognized in
the spectra (Figs. 9, 11) and the Doppler velocities of bidirectional outflows
are significantly higher than those of condensation. These are the main
differences between chromospheric condensation and bidirectional outflows.
Figure 11: Line profiles of Si iv at the positions of slit, marked with short
green lines in Fig. 9. The orange lines represent the observed profiles. The
blue, magenta, and red lines stand for the fitted components. The redshifted
($v_{r}$) and blueshifted ($v_{b}$) Doppler velocities of bidirectional
reconnection outflows are labeled.
Figure 12: Time evolution of the Doppler
velocities for jet (blue circles), chromospheric condensation (red circles),
reconnection downflow (red triangles) and upflow (blue triangles),
respectively.
## 4 Discussion
### 4.1 Magnetic reconnection
Magnetic reconnection is believed to play a key role in the energy release of
solar flares (Priest & Forbes, 2002; Priest & Pontin, 2009). Direct evidence
of reconnection at current sheets is abundant, including the bidirectional
inflows and outflows (e.g., Savage et al., 2012; Ning & Guo, 2014; Wu et al.,
2016; Xue et al., 2018; Yan et al., 2018; Chen et al., 2020; Yu et al.,
2020b), change of magnetic topology (Su et al., 2013), localized heating to
$\geq$10 MK (Seaton et al., 2017; Li et al., 2018c; Warren et al., 2018), and
creation of magnetic islands (Li et al., 2016). Using 3D MHD numerical
simulations, Wyper et al. (2017) proposed a universal model for solar
eruptions, including eruptive flares and coronal jets. Breakout reconnection
around an X-type null point in a quadrupolar magnetic configuration is
observed and investigated by Chen et al. (2016). Observations of fast
reconnection at the breakout current sheet of a fan-spine structure are
carried out by Yang et al. (2020).
In our study, impulsive interchange reconnection at the null point was
manifested by coherent brightenings of the circular ribbon, HXR peaks, and
type III radio bursts around 09:00:00 UT (Fig. 1(c-d) and Fig. 6(c)). Line
broadening as a result of bidirectional outflows was absent before the
minifilament broke through the fan surface. During the reconnection at the
flare current sheet underneath the filament, pronounced upflows and downflows
at the jet base were evidenced by significant Doppler line broadenings of Si
iv. The jet did not show untwisting motion during its radial propagation,
which is probably due to the small $\mathcal{T}_{w}$ of the sheared arcade
supporting the minifilament. Taken as a whole, our multiwavelength
observations of the flare-related jet support the breakout jet model (Wyper et
al., 2018).
Using combined observations of a small prominence eruption on 2014 May 1,
Reeves et al. (2015) found evidence for reconnection between the prominence
magnetic field and the overlying field. Reconnection outflows at a plane-of-sky
speed of $\sim$300 km s$^{-1}$ and a Doppler velocity of $\sim$200 km s$^{-1}$ were
detected by SDO/AIA and IRIS, respectively. Moreover, a possible reconnection
site below the prominence was found (see their Fig. 10). The authors, however,
concluded that the reconnection was triggered not by breakout reconnection,
but by reconnection occurring along and beneath the prominence.
Using multiwavelength observations from SDO, Hinode/XRT, IRIS, and the DST of
Hida Observatory, Sakaue et al. (2018) analyzed a jet-related C5.4 flare on
2014 November 11. The morphology and magnetic configuration of the jet were
somewhat similar to the jet in our study. However, the C5.4 flare and jet were
caused by magnetic reconnection between the emerging magnetic flux of the
satellite spots and the pre-existing ambient fields (Sakaue et al., 2017).
Part of the cool H$\alpha$ jet experienced secondary acceleration between the
trajectories of the H$\alpha$ jet and the hot SXR jet after it had been
ejected from the lower atmosphere, which is explained by magnetic reconnection
between the preceding H$\alpha$ jet and the plasmoid in the subsequent SXR
jet. In our case, the circular-ribbon flare was caused by the eruption of a
minifilament underlying the null point (Figs. 2, 3). The fast rise of the jet
after $\sim$09:00:40 UT may result from quick ejection of F2 after null-point
reconnection (Wyper et al., 2018). The reconnection upflow at the flare
current sheet may also contribute to the acceleration of jet, as in the case
of coronal mass ejections (CMEs; Zhang et al., 2004).
### 4.2 Cause of chromospheric condensation
As mentioned above, chromospheric condensation could be driven by electron
beam heating (e.g., Li et al., 2015a, b; Yu et al., 2020a). During the C4.2
flare on 2015 October 16, explosive chromospheric evaporation took place,
which was characterized by plasma upflow at speeds of 35$-$120 km s$^{-1}$ in the
Fe xxi 1354.09 Å line ($\log T\approx 7.05$) and downflow at speeds of 10$-$60
km s$^{-1}$ in the Si iv 1393.77 Å line (Zhang et al., 2016a). The estimated
nonthermal energy flux above 20 keV exceeds the threshold ($\sim$$10^{10}$ erg s$^{-1}$
cm$^{-2}$) for explosive evaporation (Fisher et al., 1985). Condensation during the
C3.1 flare associated with a type III burst is also believed to be driven by
electron beams (Zhang et al., 2016b).
To explore the cause of condensation during the C3.4 flare, we focus on
nonthermal electrons as usual. In Fig. 2(g), the AIA 1600 Å image at 09:00:39
UT is superposed with intensity contours of HXR emission at 12$-$25 keV during
09:00:20$-$09:00:40 UT (magenta lines). The centroid of the single HXR source
is cospatial with the bright inner ribbon, where the majority of nonthermal
electrons precipitate. In Fig. 13, the HXR spectrum obtained from RHESSI
observation is fitted with a thermal component plus a thick-target, nonthermal
component consisting of a broken power law (Rubio da Costa et al., 2015):
$F(E)=(\delta-1)\frac{\mathcal{F}_{c}}{E_{c}}(\frac{E}{E_{c}})^{-\delta},$ (1)
where $\delta$ and $E_{c}$ represent the spectral index and cutoff energy of
nonthermal electrons. $\mathcal{F}_{c}=\int_{E_{c}}^{\infty}F(E)dE$ denotes
the total electron flux above $E_{c}$. The fitted thermal component is plotted
with a red dashed line, with the values of the thermal temperature ($T$) and
emission measure (EM) labeled. The fitted nonthermal component is plotted with a
blue dashed line, with the values of $\delta$ and $E_{c}$ labeled as
well. The sum of the two components (magenta dashed line) agrees with the observed
spectrum.
Figure 13: Results of RHESSI spectral fitting during 09:00:20$-$09:00:40 UT.
The data points with horizontal and vertical error bars represent the observed
data. The spectra for the thermal and nonthermal components are plotted with red
and blue dashed lines, respectively. The sum of two components is shown with a
magenta dashed line. The fitted parameters, including the thermal temperature
($T$), emission measure (EM), electron spectral index ($\delta$), and low-
energy cutoff ($E_{c}$), are labeled.
Before $\sim$09:00:40 UT, the HXR emissions were predominantly produced by
nonthermal electrons (Fig. 1(c)). The power of injected electrons above
$E_{c}$ is expressed as:
$P_{nth}=\int_{E_{c}}^{\infty}F(E)EdE=\frac{\delta-1}{\delta-2}\mathcal{F}_{c}E_{c}.$
(2)
Substituting the parameters obtained from HXR spectral fitting in Fig. 13:
$\delta=4.6$, $\mathcal{F}_{c}=5.6\times 10^{35}$ electrons s$^{-1}$, and $E_{c}=10.3$
keV, $P_{nth}$ is estimated to be $\sim$$1.3\times 10^{28}$ erg s$^{-1}$. A lower
limit of the total nonthermal energy in electrons is $\sim$$2.6\times 10^{29}$
erg, since there was no RHESSI observation before 09:00:20 UT. Considering
that the area of the HXR source is in the range of $(0.25-1)\times 10^{18}$ cm$^{2}$, the
total nonthermal energy flux is estimated to be $(1.3-5.2)\times 10^{10}$ erg
s$^{-1}$ cm$^{-2}$, which is adequate to drive chromospheric condensation at the flare
ribbon.
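The arithmetic can be checked directly; in the snippet below the only inputs beyond the quoted fit parameters are the keV-to-erg conversion and the assumed 20 s duration of the fitted interval (09:00:20$-$09:00:40 UT).

```python
KEV_TO_ERG = 1.602e-9                        # 1 keV in erg

delta, F_c, E_c = 4.6, 5.6e35, 10.3          # RHESSI fit (Fig. 13)
P_nth = (delta - 1) / (delta - 2) * F_c * E_c * KEV_TO_ERG
print(f"P_nth ~ {P_nth:.1e} erg/s")          # ~1.3e28 erg/s

E_nth = P_nth * 20.0                         # over the 20 s fit interval
print(f"E_nth >= {E_nth:.1e} erg")           # ~2.6e29 erg

for area in (0.25e18, 1.0e18):               # HXR source area, cm^2
    print(f"flux ~ {P_nth / area:.1e} erg/s/cm^2")   # (1.3-5.2)e10
```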
After $\sim$09:00:40 UT, the HXR emission at 25$-$50 keV decreased to the pre-
eruption level, while the emission at 12$-$25 keV increased gradually with
episodic spikes until $\sim$09:02:00 UT. To investigate the role of heat
conduction in driving condensation, we estimate the energy flux of heat
conduction using the expression:
$F_{cond}=\kappa_{0}\frac{T^{7/2}}{L},$ (3)
where $\kappa_{0}\approx 10^{-6}$ erg s$^{-1}$ cm$^{-1}$ K$^{-7/2}$, $T\approx 14.5$ MK
denotes the thermal temperature in the corona, and $L\approx\frac{\pi d}{4}$
denotes the length scale of the temperature gradient ($d\approx 36\arcsec$
represents the equivalent diameter of the circular ribbon). $F_{cond}$ is
estimated to be $\sim$$5.5\times 10^{9}$ erg s$^{-1}$ cm$^{-2}$. Therefore, the condensation
after $\sim$09:00:40 UT should be driven by a combination of nonthermal
electrons and heat conduction (Sadykov et al., 2015).
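A numerical check of equation (3), taking one arcsecond as $7.27\times 10^{7}$ cm on the solar disk (an assumption of the sketch), reproduces the quoted flux:

```python
import numpy as np

KAPPA0 = 1.0e-6              # Spitzer conductivity, erg/s/cm/K^(7/2)
T = 14.5e6                   # coronal temperature, K
d = 36.0 * 7.27e7            # ribbon diameter: 36 arcsec in cm
L = np.pi * d / 4.0          # temperature-gradient length scale, cm

F_cond = KAPPA0 * T**3.5 / L
print(f"F_cond ~ {F_cond:.1e} erg/s/cm^2")   # ~5.6e9, close to the
                                             # quoted ~5.5e9
```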
### 4.3 Emission mechanism of microwave burst
As mentioned above, the flare was associated with type III radio bursts, which
are generated by electron beams propagating upward along open field (Innes et
al., 2011; Chen et al., 2018). The top and middle panels of Fig. 4 show radio
dynamic spectra recorded by ZSTS and blen5m during 09:00$-$09:02 UT with the
same cadence as KRIM. An enhancement of the microwave emission at 0.98$-$1.43 GHz
was clearly present before 09:00:40 UT. Contrary to the discrete type
III bursts owing to the coherent plasma radiation mechanism (Dulk, 1985), the
microwave burst seems to be continuous and has no one-to-one correspondence
with the type III bursts, implying that the microwave burst was not produced
by plasma radiation mechanism. To interpret the relationship between the HXR
and radio time profiles of single-loop flares, Kundu et al. (2001) proposed a
simple trap model by introducing a critical pitch angle (i.e. loss cone
angle). Those injected high-energy electrons with smaller pitch angles are not
reflected by the increasing magnetic field as they approach the loop
footpoints and will precipitate on their first approach. The remaining
electrons with pitch angles outside the loss cone will be trapped in the loop
unless pitch angle scattering takes place.
In our case, the accelerated nonthermal electrons before $\sim$09:00:40 UT are
composed of three groups. The first group propagates along open field to
generate type III radio bursts (Fig. 4(c)). The second group precipitates
directly into the chromosphere and generates HXR emissions (Fig.
1(b-c)). The third group is trapped in the rising minifilament and generates
microwave burst through the gyrosynchrotron emission mechanism (Dulk, 1985;
Lee & Gary, 2000; Wu et al., 2016). Of course, the trapped electrons will
eventually precipitate once pitch angle scattering switches on. The existence of
trapped nonthermal electrons in a kink-unstable filament undergoing a failed
eruption has been reported by Guo et al. (2012). After the minifilament breaks
through the null point and totally opens up, there are no trapped electrons
any more and the microwave emission vanishes. This scenario is qualitative and
needs to be validated by in-depth investigations. Additional cases of
microwave bursts like the one in Fig. 4 have been collected, which will be the topic
of our next paper.
## 5 Summary
In this work, a coronal jet related to a C3.4 flare in AR 12434 on 2015
October 16 is studied in detail. The main results are summarized as follows:
1. 1.
Two minifilaments were located under a 3D fan-spine structure before the flare.
The flare was generated by the eruption of one of them (F2). The kinematic
evolution of the jet was divided into two phases: a slow rise phase at a speed of
$\sim$130 km s$^{-1}$ and a fast rise phase at a speed of $\sim$360 km s$^{-1}$ in the
plane of the sky. The slow rise phase may correspond to breakout reconnection at
the breakout current sheet, and the fast rise phase may correspond to
reconnection at the flare current sheet. The transition between the two phases
took place at $\sim$09:00:40 UT. The blueshifted Doppler velocities of the jet
in the Si iv 1402.80 Å line range from $-34$ to $-120$ km s$^{-1}$.
2. 2.
The accelerated high-energy electrons are composed of three groups. Those
propagating upward along open field generate type III radio bursts, while
those propagating downward produce HXR emissions and drive chromospheric
condensation. The electrons trapped in the rising filament generate a
microwave burst lasting for $\leq$40 s.
3. 3.
Bidirectional outflows at the jet base are manifested by significant line
broadenings of the Si iv line. The blueshifted Doppler velocities range from
$-13$ to $-101$ km s$^{-1}$. The redshifted Doppler velocities range from $\sim$17 to
$\sim$170 km s$^{-1}$. Our multiwavelength observations of the flare-related jet
are in favor of the breakout jet model and shed light on the acceleration and
transport of nonthermal electrons.
###### Acknowledgements.
The authors thank the referee for constructive suggestions and comments. The
authors appreciate Drs. Xiaoli Yan in Yunnan Observatories, Ying Li and Lei Lu
in Purple Mountain Observatory, Yang Guo in Nanjing University, and Sijie Yu
in New Jersey Institute of Technology for valuable discussions. SDO is a
mission of NASA’s Living With a Star Program. AIA and HMI data are courtesy of
the NASA/SDO science teams. This work is funded by NSFC grants (No. 11773079,
11790302, U1831112, 11903050, 11790304, 11973092, 11573072, and 11703017), the
International Cooperation and Interchange Program (11961131002), the Youth
Innovation Promotion Association CAS, CAS Key Laboratory of Solar Activity,
National Astronomical Observatories (KLSA202003, KLSA202006), and the project
supported by the Specialized Research Fund for State Key Laboratories.
## References
* Antolin et al. (2020) Antolin, P., Pagano, P., Testa, P., et al. 2020, Nature Astronomy. doi:10.1038/s41550-020-1199-8
* Archontis & Hood (2013) Archontis, V., & Hood, A. W. 2013, ApJ, 769, L21. doi:10.1088/2041-8205/769/2/L21
* Bain & Fletcher (2009) Bain, H. M., & Fletcher, L. 2009, A&A, 508, 1443. doi:10.1051/0004-6361/200911876
* Berger & Prior (2006) Berger, M. A., & Prior, C. 2006, Journal of Physics A Mathematical General, 39, 8321. doi:10.1088/0305-4470/39/26/005
* Chen et al. (2012) Chen, H.-D., Zhang, J., & Ma, S.-L. 2012, Research in Astronomy and Astrophysics, 12, 573. doi:10.1088/1674-4527/12/5/009
* Chen et al. (2016) Chen, Y., Du, G., Zhao, D., et al. 2016, ApJ, 820, L37. doi:10.3847/2041-8205/820/2/L37
* Chen et al. (2018) Chen, B., Yu, S., Battaglia, M., et al. 2018, ApJ, 866, 62. doi:10.3847/1538-4357/aadb89
* Chen et al. (2020) Chen, B., Shen, C., Gary, D. E., et al. 2020, Nature Astronomy, doi:10.1038/s41550-020-1147-7
* Chen et al. (2020) Chen, H., Zhang, J., De Pontieu, B., et al. 2020, ApJ, 899, 19. doi:10.3847/1538-4357/ab9cad
* Cheung et al. (2015) Cheung, M. C. M., De Pontieu, B., Tarbell, T. D., et al. 2015, ApJ, 801, 83. doi:10.1088/0004-637X/801/2/83
* Chifor et al. (2008) Chifor, C., Isobe, H., Mason, H. E., et al. 2008, A&A, 491, 279. doi:10.1051/0004-6361:200810265
* Cirtain et al. (2007) Cirtain, J. W., Golub, L., Lundquist, L., et al. 2007, Science, 318, 1580. doi:10.1126/science.1147050
* Culhane et al. (2007) Culhane, L., Harra, L. K., Baker, D., et al. 2007, PASJ, 59, S751. doi:10.1093/pasj/59.sp3.S751
* Dai et al. (2020) Dai, J., Zhang, Q. M., Su, Y. N., et al. 2020, arXiv:2012.07074
* Demoulin et al. (1996) Demoulin, P., Henoux, J. C., Priest, E. R., et al. 1996, A&A, 308, 643
* De Pontieu et al. (2004) De Pontieu, B., Erdélyi, R., & James, S. P. 2004, Nature, 430, 536. doi:10.1038/nature02749
* De Pontieu et al. (2014) De Pontieu, B., Title, A. M., Lemen, J. R., et al. 2014, Sol. Phys., 289, 2733. doi:10.1007/s11207-014-0485-y
* Doyle et al. (2019) Doyle, L., Wyper, P. F., Scullion, E., et al. 2019, ApJ, 887, 246. doi:10.3847/1538-4357/ab5d39
* Dulk (1985) Dulk, G. A. 1985, ARA&A, 23, 169. doi:10.1146/annurev.aa.23.090185.001125
* Fisher et al. (1985) Fisher, G. H., Canfield, R. C., & McClymont, A. N. 1985, ApJ, 289, 425. doi:10.1086/162902
* Gary & Hagyard (1990) Gary, G. A., & Hagyard, M. J. 1990, Sol. Phys., 126, 21. doi:10.1007/BF00158295
* Glesener et al. (2012) Glesener, L., Krucker, S., & Lin, R. P. 2012, ApJ, 754, 9. doi:10.1088/0004-637X/754/1/9
* Glesener & Fleishman (2018) Glesener, L., & Fleishman, G. D. 2018, ApJ, 867, 84. doi:10.3847/1538-4357/aacefe
* Guo et al. (2012) Guo, Y., Ding, M. D., Schmieder, B., et al. 2012, ApJ, 746, 17. doi:10.1088/0004-637X/746/1/17
* Hong et al. (2017) Hong, J., Jiang, Y., Yang, J., et al. 2017, ApJ, 835, 35. doi:10.3847/1538-4357/835/1/35
* Hong et al. (2019) Hong, J., Yang, J., Chen, H., et al. 2019, ApJ, 874, 146. doi:10.3847/1538-4357/ab0c9d
* Hou et al. (2019) Hou, Y., Li, T., Yang, S., et al. 2019, ApJ, 871, 4. doi:10.3847/1538-4357/aaf4f4
* Hou et al. (2020) Hou, Y. J., Li, T., Zhong, S. H., et al. 2020, A&A, 642, A44. doi:10.1051/0004-6361/202038668
* Huang et al. (2012) Huang, Z., Madjarska, M. S., Doyle, J. G., et al. 2012, A&A, 548, A62. doi:10.1051/0004-6361/201220079
* Huang et al. (2020) Huang, Z., Zhang, Q., Xia, L., et al. 2020, ApJ, 897, 113. doi:10.3847/1538-4357/ab96bd
* Innes et al. (1997) Innes, D. E., Inhester, B., Axford, W. I., et al. 1997, Nature, 386, 811. doi:10.1038/386811a0
* Innes et al. (2011) Innes, D. E., Cameron, R. H., & Solanki, S. K. 2011, A&A, 531, L13. doi:10.1051/0004-6361/201117255
* Joshi et al. (2020a) Joshi, R., Chandra, R., Schmieder, B., et al. 2020a, A&A, 639, A22. doi:10.1051/0004-6361/202037806
* Joshi et al. (2020b) Joshi, R., Schmieder, B., Aulanier, G., et al. 2020b, A&A, 642, A169. doi:10.1051/0004-6361/202038562
* Krucker et al. (2011) Krucker, S., Kontar, E. P., Christe, S., et al. 2011, ApJ, 742, 82. doi:10.1088/0004-637X/742/2/82
* Kundu et al. (2001) Kundu, M. R., White, S. M., Shibasaki, K., et al. 2001, ApJ, 547, 1090. doi:10.1086/318422
* Lee & Gary (2000) Lee, J., & Gary, D. E. 2000, ApJ, 543, 457. doi:10.1086/317080
* Leka et al. (2009) Leka, K. D., Barnes, G., Crouch, A. D., et al. 2009, Sol. Phys., 260, 83. doi:10.1007/s11207-009-9440-8
* Lemen et al. (2012) Lemen, J. R., Title, A. M., Akin, D. J., et al. 2012, Sol. Phys., 275, 17. doi:10.1007/s11207-011-9776-8
* Li et al. (2015a) Li, Y., Ding, M. D., Qiu, J., et al. 2015, ApJ, 811, 7. doi:10.1088/0004-637X/811/1/7
* Li et al. (2015b) Li, D., Ning, Z. J., & Zhang, Q. M. 2015, ApJ, 813, 59. doi:10.1088/0004-637X/813/1/59
* Li et al. (2016) Li, L., Zhang, J., Peter, H., et al. 2016, Nature Physics, 12, 847. doi:10.1038/nphys3768
* Li et al. (2017) Li, D., Ning, Z. J., Huang, Y., et al. 2017, ApJ, 841, L9. doi:10.3847/2041-8213/aa71b0
* Li et al. (2018a) Li, D., Li, L., & Ning, Z. 2018, MNRAS, 479, 2382. doi:10.1093/mnras/sty1712
* Li et al. (2018b) Li, T., Yang, S., Zhang, Q., et al. 2018, ApJ, 859, 122. doi:10.3847/1538-4357/aabe84
* Li et al. (2018c) Li, Y., Xue, J. C., Ding, M. D., et al. 2018, ApJ, 853, L15. doi:10.3847/2041-8213/aaa6c0
* Li & Yang (2019) Li, H., & Yang, J. 2019, ApJ, 872, 87. doi:10.3847/1538-4357/aafb3a
* Lin et al. (2002) Lin, R. P., Dennis, B. R., Hurford, G. J., et al. 2002, Sol. Phys., 210, 3. doi:10.1023/A:1022428818870
* Liu & Kurokawa (2004) Liu, Y. & Kurokawa, H. 2004, ApJ, 610, 1136. doi:10.1086/421715
* Liu et al. (2009) Liu, W., Berger, T. E., Title, A. M., et al. 2009, ApJ, 707, L37. doi:10.1088/0004-637X/707/1/L37
* Liu et al. (2016) Liu, R., Kliem, B., Titov, V. S., et al. 2016, ApJ, 818, 148. doi:10.3847/0004-637X/818/2/148
* Lu et al. (2019) Lu, L., Feng, L., Li, Y., et al. 2019, ApJ, 887, 154. doi:10.3847/1538-4357/ab530c
* Meegan et al. (2009) Meegan, C., Lichti, G., Bhat, P. N., et al. 2009, ApJ, 702, 791. doi:10.1088/0004-637X/702/1/791
* Moore et al. (2010) Moore, R. L., Cirtain, J. W., Sterling, A. C., & Falconer, D. A. 2010, ApJ, 720, 757. doi:10.1088/0004-637X/720/1/757
* Mulay et al. (2017) Mulay, S. M., Del Zanna, G., & Mason, H. 2017, A&A, 606, A4. doi:10.1051/0004-6361/201730429
* Ni et al. (2017) Ni, L., Zhang, Q.-M., Murphy, N. A., et al. 2017, ApJ, 841, 27. doi:10.3847/1538-4357/aa6ffe
* Ning & Guo (2014) Ning, Z., & Guo, Y. 2014, ApJ, 794, 79. doi:10.1088/0004-637X/794/1/79
* Panesar et al. (2016) Panesar, N. K., Sterling, A. C., Moore, R. L., et al. 2016, ApJ, 832, L7. doi:10.3847/2041-8205/832/1/L7
* Pariat et al. (2009) Pariat, E., Antiochos, S. K., & DeVore, C. R. 2009, ApJ, 691, 61. doi:10.1088/0004-637X/691/1/61
* Priest & Forbes (2002) Priest, E. R., & Forbes, T. G. 2002, A&A Rev., 10, 313. doi:10.1007/s001590100013
* Priest & Pontin (2009) Priest, E. R., & Pontin, D. I. 2009, Physics of Plasmas, 16, 122101. doi:10.1063/1.3257901
* Raouafi et al. (2016) Raouafi, N. E., Patsourakos, S., Pariat, E., et al. 2016, Space Sci. Rev., 201, 1. doi:10.1007/s11214-016-0260-5
* Reeves et al. (2015) Reeves, K. K., McCauley, P. I., & Tian, H. 2015, ApJ, 807, 7. doi:10.1088/0004-637X/807/1/7
* Rubio da Costa et al. (2015) Rubio da Costa, F., Kleint, L., Petrosian, V., et al. 2015, ApJ, 804, 56. doi:10.1088/0004-637X/804/1/56
* Sadykov et al. (2015) Sadykov, V. M., Vargas Dominguez, S., Kosovichev, A. G., et al. 2015, ApJ, 805, 167. doi:10.1088/0004-637X/805/2/167
* Sakaue et al. (2017) Sakaue, T., Tei, A., Asai, A., et al. 2017, PASJ, 69, 80. doi:10.1093/pasj/psx071
* Sakaue et al. (2018) Sakaue, T., Tei, A., Asai, A., et al. 2018, PASJ, 70, 99. doi:10.1093/pasj/psx133
* Samanta et al. (2019) Samanta, T., Tian, H., Yurchyshyn, V., et al. 2019, Science, 366, 890. doi:10.1126/science.aaw2796
* Savage et al. (2012) Savage, S. L., Holman, G., Reeves, K. K., et al. 2012, ApJ, 754, 13. doi:10.1088/0004-637X/754/1/13
* Savcheva et al. (2007) Savcheva, A., Cirtain, J., Deluca, E. E., et al. 2007, PASJ, 59, S771. doi:10.1093/pasj/59.sp3.S771
* Scherrer et al. (2012) Scherrer, P. H., Schou, J., Bush, R. I., et al. 2012, Sol. Phys., 275, 207. doi:10.1007/s11207-011-9834-2
* Schmieder et al. (1995) Schmieder, B., Shibata, K., van Driel-Gesztelyi, L., et al. 1995, Sol. Phys., 156, 245. doi:10.1007/BF00670226
* Seaton et al. (2017) Seaton, D. B., Bartz, A. E., & Darnel, J. M. 2017, ApJ, 835, 139. doi:10.3847/1538-4357/835/2/139
* Shen et al. (2017) Shen, Y., Liu, Y. D., Su, J., et al. 2017, ApJ, 851, 67. doi:10.3847/1538-4357/aa9a48
* Shen et al. (2019) Shen, Y., Qu, Z., Yuan, D., et al. 2019, ApJ, 883, 104. doi:10.3847/1538-4357/ab3a4d
* Shen (2021) Shen, Y. 2021, arXiv:2101.04846
* Shibata et al. (1992) Shibata, K., Ishido, Y., Acton, L. W., et al. 1992, PASJ, 44, L173
* Shibata et al. (1994) Shibata, K., Nitta, N., Strong, K. T., et al. 1994, ApJ, 431, L51. doi:10.1086/187470
* Shibata et al. (2007) Shibata, K., Nakamura, T., Matsumoto, T., et al. 2007, Science, 318, 1591. doi:10.1126/science.1146708
* Shimojo et al. (1996) Shimojo, M., Hashimoto, S., Shibata, K., et al. 1996, PASJ, 48, 123. doi:10.1093/pasj/48.1.123
* Shimojo & Shibata (2000) Shimojo, M., & Shibata, K. 2000, ApJ, 542, 1100. doi:10.1086/317024
* Shimojo et al. (2007) Shimojo, M., Narukage, N., Kano, R., et al. 2007, PASJ, 59, S745. doi:10.1093/pasj/59.sp3.S745
* Sterling et al. (2015) Sterling, A. C., Moore, R. L., Falconer, D. A., & Adams, M. 2015, Nature, 523, 437. doi:10.1038/nature14556
* Sterling et al. (2017) Sterling, A. C., Moore, R. L., Falconer, D. A., et al. 2017, ApJ, 844, 28. doi:10.3847/1538-4357/aa7945
* Su et al. (2013) Su, Y., Veronig, A. M., Holman, G. D., et al. 2013, Nature Physics, 9, 489. doi:10.1038/nphys2675
* Tian et al. (2014a) Tian, H., DeLuca, E. E., Cranmer, S. R., et al. 2014, Science, 346, 1255711. doi:10.1126/science.1255711
* Tian et al. (2014b) Tian, H., Li, G., Reeves, K. K., et al. 2014, ApJ, 797, L14. doi:10.1088/2041-8205/797/2/L14
* Titov et al. (2002) Titov, V. S., Hornig, G., & Démoulin, P. 2002, Journal of Geophysical Research (Space Physics), 107, 1164. doi:10.1029/2001JA000278
* Wang & Liu (2012) Wang, H., & Liu, C. 2012, ApJ, 760, 101. doi:10.1088/0004-637X/760/2/101
* Warren et al. (2018) Warren, H. P., Brooks, D. H., Ugarte-Urra, I., et al. 2018, ApJ, 854, 122. doi:10.3847/1538-4357/aaa9b8
* Wiegelmann (2004) Wiegelmann, T. 2004, Sol. Phys., 219, 87. doi:10.1023/B:SOLA.0000021799.39465.36
* Wiegelmann et al. (2012) Wiegelmann, T., Thalmann, J. K., Inhester, B., et al. 2012, Sol. Phys., 281, 37. doi:10.1007/s11207-012-9966-z
* Wu et al. (2016) Wu, Z., Chen, Y., Huang, G., et al. 2016, ApJ, 820, L29. doi:10.3847/2041-8205/820/2/L29
* Wyper et al. (2016) Wyper, P. F., DeVore, C. R., Karpen, J. T., et al. 2016, ApJ, 827, 4. doi:10.3847/0004-637X/827/1/4
* Wyper et al. (2017) Wyper, P. F., Antiochos, S. K., & DeVore, C. R. 2017, Nature, 544, 452. doi:10.1038/nature22050
* Wyper et al. (2018) Wyper, P. F., DeVore, C. R., & Antiochos, S. K. 2018, ApJ, 852, 98. doi:10.3847/1538-4357/aa9ffc
* Xue et al. (2018) Xue, Z., Yan, X., Yang, L., et al. 2018, ApJ, 858, L4. doi:10.3847/2041-8213/aabe77
* Yan et al. (2018) Yan, X. L., Yang, L. H., Xue, Z. K., et al. 2018, ApJ, 853, L18. doi:10.3847/2041-8213/aaa6c2
* Yang et al. (2020) Yang, S., Zhang, Q., Xu, Z., et al. 2020, ApJ, 898, 101. doi:10.3847/1538-4357/ab9ac7
* Yokoyama & Shibata (1996) Yokoyama, T., & Shibata, K. 1996, PASJ, 48, 353. doi:10.1093/pasj/48.2.353
* Young & Muglach (2014) Young, P. R., & Muglach, K. 2014, Sol. Phys., 289, 3313. doi:10.1007/s11207-014-0484-z
* Yu et al. (2020a) Yu, K., Li, Y., Ding, M. D., et al. 2020a, ApJ, 896, 154. doi:10.3847/1538-4357/ab9014
* Yu et al. (2020b) Yu, S., Chen, B., Reeves, K. K., et al. 2020b, ApJ, 900, 17. doi:10.3847/1538-4357/aba8a6
* Zhang et al. (2004) Zhang, J., Dere, K. P., Howard, R. A., et al. 2004, ApJ, 604, 420. doi:10.1086/381725
* Zhang et al. (2012) Zhang, Q. M., Chen, P. F., Guo, Y., et al. 2012, ApJ, 746, 19. doi:10.1088/0004-637X/746/1/19
* Zhang & Ji (2014a) Zhang, Q. M., & Ji, H. S. 2014a, A&A, 561, A134. doi:10.1051/0004-6361/201322616
* Zhang & Ji (2014b) Zhang, Q. M., & Ji, H. S. 2014b, A&A, 567, A11. doi:10.1051/0004-6361/201423698
* Zhang et al. (2016a) Zhang, Q. M., Li, D., Ning, Z. J., et al. 2016a, ApJ, 827, 27. doi:10.3847/0004-637X/827/1/27
* Zhang et al. (2016b) Zhang, Q. M., Li, D., & Ning, Z. J. 2016b, ApJ, 832, 65. doi:10.3847/0004-637X/832/1/65
* Zhang & Ni (2019) Zhang, Q. M., & Ni, L. 2019, ApJ, 870, 113. doi:10.3847/1538-4357/aaf391
* Zhang et al. (2020) Zhang, Q. M., Yang, S. H., Li, T., et al. 2020, A&A, 636, L11. doi:10.1051/0004-6361/202038072
* Zhang (2020) Zhang, Q. M. 2020, A&A, 642, A159. doi:10.1051/0004-6361/202038557
11institutetext: Faculty of Mathematics, Computer Science and Econometrics,
University of Zielona Góra, 65-246 Zielona Góra, Poland,
11email<EMAIL_ADDRESS>
WWW home page: http://staff.uz.zgora.pl/akarczew 22institutetext: Faculty of
Physics and Astronomy, University of Zielona Góra,
65-246 Zielona Góra, Poland,
22email<EMAIL_ADDRESS>
# Signatures of chaotic dynamics in wave motion according to the extended KdV
equation
Anna Karczewska 11 Piotr Rozmej 22
###### Abstract
In this communication we test the hypothesis that for some initial conditions
the time evolution of surface waves according to the _extended KdV equation_
(KdV2) exhibits signatures of the deterministic chaos.
###### keywords:
Extended KdV equation, numerical evolution, deterministic chaos
## 1 Introduction
The Korteweg-de Vries equation (KdV) is the most famous nonlinear partial
differential equation modelling long, weakly dispersive gravity waves of
small amplitude on the surface of shallow water. In scaled variables and a fixed
reference frame KdV takes the following form
$\eta_{t}+\eta_{x}+\frac{3}{2}\alpha\eta\eta_{x}+\frac{1}{6}\beta\eta_{xxx}=0.$
(1)
In (1), $\eta(x,t)$ represents the wave profile, $\alpha\\!=\\!A/H$,
$\beta\\!=\\!(H/L)^{2}$, where $A$ is the wave’s amplitude, $L$ is its average
wavelength and $H$ is the water depth. Indexes indicate partial derivatives. The KdV
equation (1) is derived from the Euler equations in a perturbation approach, under
the assumption that the parameters $\alpha\\!\approx\\!\beta$ are small. It has
several analytic solutions: single-soliton, multi-soliton, and periodic
(cnoidal) ones. KdV is integrable. It also has the unique property of an
infinite number of integral invariants, see, e.g., [1].
In 1990, Marchant and Smyth, extending the perturbation approach to second order
in the small parameters $\alpha,\beta$, derived the _extended KdV equation_ (KdV2)
[2]
$\displaystyle\eta_{t}+\eta_{x}$
$\displaystyle+\frac{3}{2}\alpha\eta\eta_{x}+\frac{1}{6}\beta\eta_{xxx}-\frac{3}{8}\alpha^{2}\eta^{2}\eta_{x}$
(2)
$\displaystyle+\alpha\beta\left(\frac{23}{24}\eta_{x}\eta_{xx}+\frac{5}{12}\eta\eta_{xxx}\right)+\frac{19}{360}\beta^{2}\eta_{xxxxx}=0.$
Studying this equation, we showed that KdV2 has only one exact invariant,
representing the mass (volume) of the displaced fluid. The other integral invariants
are only approximate, with deviations of the order of $O(\alpha^{2})$ [3].
Next, we showed that the KdV2 equation, despite being non-integrable, has exact
single-soliton and periodic solutions of the same form as those of the KdV equation,
but with slightly different coefficients [4, 5, 6].
An exact single-soliton solution of KdV2 has the same form as the KdV soliton,
that is,
$\eta(x,t)=A\,{\sf sech}^{2}[B(x-vt)].$ (3)
However, the coefficients $A,B,v$ are uniquely determined by the coefficients of
the KdV2 equation [4]. This property is entirely different from the properties of
KdV solitons, for which there is a one-parameter family of possible solutions.
Therefore the KdV equation admits multi-soliton solutions, whereas the KdV2
equation does not. Since equation (2) is second order in the small parameters
$\alpha,\beta$ (assuming that $\alpha\approx\beta$), it should be a good
approximation for much larger values of the small parameters than KdV.
Figure 1: Snapshots of time evolution according to the KdV2 equation (2).
Subsequent profiles corresponding to times $t_{n}=n*64$, with
$n=0,1,\ldots,5$, are shifted up by one unit to avoid overlaps. The initial
condition is a Gaussian with the same volume, amplitude and velocity as the
soliton.
Figure 2: The same as in Fig. 1 but for an initial condition in the form of a
Gaussian whose volume is three times greater, with the same amplitude and
velocity but with inverted displacement.
However, in the KdV2 case, for some initial conditions we obtained unexpected
results. In particular, when the initial condition was chosen in the form of a
depression (instead of the ‘usual’ elevation), the time evolution calculated
according to the KdV2 equation (2) appeared to be entirely different from that
obtained when the initial conditions do not differ much from the exact soliton. We
first encountered these facts in [7]. We show an example of this behavior in
Figs. 1 and 2.
At first glance, the time evolution presented in Fig. 2 (bottom) looks
_chaotic_. In the next section, we try to verify this observation
quantitatively.
## 2 Dynamics determined by the KdV2 equation
It is well known [8] that _deterministic chaos_ occurs when trajectories
of the system’s motion, starting from very close initial conditions, diverge
exponentially with time. How can we define the distance between the
trajectories? Let $\eta_{1}(x,t)$ and $\eta_{2}(x,t)$ denote two different
trajectories. Define the following measures of the distance between them:
$\displaystyle M_{1}(t)$
$\displaystyle=\int_{-\infty}^{\infty}|\eta_{1}(x,t)-\eta_{2}(x,t)|\,dx$ (4)
$\displaystyle M_{2}(t)$
$\displaystyle=\int_{-\infty}^{\infty}\left[\eta_{1}(x,t)-\eta_{2}(x,t)\right]^{2}dx.$
(5)
In numerical simulations, we utilize the finite difference method (FDM)
described in detail in [4, 7] with $N$ of the order of 5000-10000. Integrals are
approximated by sums, so
$M_{1}(t)\approx\sum_{i=1}^{N}|\eta_{1}(x_{i},t)-\eta_{2}(x_{i},t)|\,dx$ and
$M_{2}(t)\approx\sum_{i=1}^{N}[\eta_{1}(x_{i},t)-\eta_{2}(x_{i},t)]^{2}\,dx$,
where $dx$ is the grid spacing. In the calculations, we use periodic boundary
conditions. Therefore, the interval $x\in[x_{1},x_{2}]$ has to be much wider than
the region where the surface wave is localized. In the numerical calculations
presented in [7], when the initial conditions were exact KdV2 solitons, the
invariant $I_{1}=\int_{-\infty}^{\infty}\eta(x,t)\,dx$ was conserved with a
precision of $10^{-10}-10^{-12}$. In the calculations shown in this note, since
the initial conditions are much different, $I_{1}$ is conserved up to $10^{-7}$.
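For readers who want to experiment, the sketch below implements the right-hand side of the KdV2 equation (2) together with the distance measures (4)-(5). Note that it evaluates the $x$-derivatives spectrally with FFTs on the periodic domain and steps in time with a classical Runge-Kutta scheme, i.e. it is a simplified alternative to the FDM code of [4, 7], not a reproduction of it; the parameter values at the bottom are purely illustrative.

```python
import numpy as np

def kdv2_rhs(eta, k, alpha, beta):
    """eta_t from the KdV2 equation (2); x-derivatives via FFTs."""
    def dnx(f, n):
        return np.real(np.fft.ifft((1j * k) ** n * np.fft.fft(f)))
    ex, exx = dnx(eta, 1), dnx(eta, 2)
    exxx, e5x = dnx(eta, 3), dnx(eta, 5)
    return -(ex + 1.5 * alpha * eta * ex + beta / 6.0 * exxx
             - 0.375 * alpha**2 * eta**2 * ex
             + alpha * beta * (23.0 / 24.0 * ex * exx
                               + 5.0 / 12.0 * eta * exxx)
             + 19.0 / 360.0 * beta**2 * e5x)

def rk4_step(eta, dt, k, alpha, beta):
    f = lambda e: kdv2_rhs(e, k, alpha, beta)
    k1 = f(eta)
    k2 = f(eta + 0.5 * dt * k1)
    k3 = f(eta + 0.5 * dt * k2)
    k4 = f(eta + dt * k3)
    return eta + dt / 6.0 * (k1 + 2.0 * k2 + 2.0 * k3 + k4)

def distances(eta1, eta2, dx):
    """Distance measures M1 (eq. 4) and M2 (eq. 5) on the grid."""
    d = eta1 - eta2
    return np.sum(np.abs(d)) * dx, np.sum(d * d) * dx

# Periodic grid; resolution and time step kept modest so the explicit
# RK4 step stays stable for the illustrative parameter values below.
N, L = 1024, 800.0
x = np.arange(N) * (L / N)
k = 2.0 * np.pi * np.fft.fftfreq(N, d=L / N)
alpha = beta = 0.3                                # illustrative values
eta1 = 0.5 * np.exp(-((x - L / 2.0) / 5.0) ** 2)  # Gaussian distortion
eta2 = eta1 * (1.0 + 1.0e-4)   # crude perturbation (the paper instead
                               # rescales the width to keep the volume)
for _ in range(200):
    eta1 = rk4_step(eta1, 0.05, k, alpha, beta)
    eta2 = rk4_step(eta2, 0.05, k, alpha, beta)
print(distances(eta1, eta2, L / N))
```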
Figure 3: Distances between trajectories, which start from almost identical
initial conditions, as functions of time. Lines marked by S correspond to
initial conditions in the form of the exact KdV2 soliton (3), whereas those marked
by G correspond to initial conditions in the form of Gaussians. Open symbols
indicate $M_{1}$ measures, whereas the filled ones indicate $M_{2}$ measures.
First, we check the divergence of trajectories when the initial conditions are
close to the exact KdV2 soliton. This is presented in Fig. 3 with lines marked
by S. The lines marked by G in Fig. 3 represent the divergence of trajectories
when $\eta_{1}(x,t=0)$ was a Gaussian distortion having the same amplitude
$A_{1}$ as the KdV2 soliton, with the width $\sigma_{1}$ providing the same
volume. Then, for $\eta_{2}(x,t=0)$ we chose a similar Gaussian form but
with slightly changed parameters, namely $A_{2}=A_{1}(1+\varepsilon)$ and
$\sigma_{2}=\sigma_{1}/(1+\varepsilon)$, which ensures the same initial
volume. In both cases, the initial velocity was assumed to be equal to the
soliton’s velocity. Note that the profiles shown in Fig. 2 represent the
evolution of $\eta_{1}(x,t=0)$. The results displayed in Fig. 3 were obtained
for $\varepsilon=10^{-4}$. For $\varepsilon=10^{-3}$ and $\varepsilon=10^{-5}$
the relative distances $M(t)/M(0)$ behave almost exactly the same. It is clear
that in all cases presented in Fig. 3 the distances between trajectories which
start from neighboring initial conditions diverge linearly with time. For
dynamical systems, this means that such initial conditions belong to the
phase-space region in which the motion is regular.
Figure 4: Time dependence of measures $M_{1}$ for $\varepsilon=10^{-4}$ and
different volumes of initial conditions.
In the calculations presented below, we studied the motion according to the
KdV2 equation when the initial displacement has the sign opposite to that of the
soliton. For $\eta_{1}(x,t=0)$ we chose a Gaussian distortion with the
amplitude $-A_{1}$, where $A_{1}$ is the amplitude of the KdV2 soliton, and with
the width $\sigma_{1}$ adjusted to give a multiple of the soliton’s volume. For
$\eta_{2}(x,t=0)$ we chose a similar Gaussian form but with slightly changed
parameters, namely $A_{2}=-A_{1}(1+\varepsilon)$ and
$\sigma_{2}=\sigma_{1}/(1+\varepsilon)$, which ensures the same initial volume.
In both cases, the initial velocity is chosen to be the same as the velocity of
the KdV2 soliton.
Since in all cases shown below the distances between trajectories increased
much faster than linearly with time, the next figures are plotted on a
semilogarithmic scale. In Fig. 4, we show the time dependence of the distance
measures $M_{1}$ for $\varepsilon=10^{-4}$. The notation Vol=$n$, with
$n=1,2,3,4$ denotes the initial volume of the $\eta_{1}(x,t=0)$ in the units
of the KdV2 soliton volume. In Fig. 5 we present the analogous results but for
$M_{2}$ measures. In both figures we observe almost perfect exponential
divergence of trajectories, even for $n=1$. Fits to the plots displayed in
Fig. 4 give the following exponents: 0.00475, 0.00785, 0.0086 and 0.0089 for
Vol=1,2,3,4, respectively. For $M_{2}$ measures the fitted exponents are:
0.00477, 0.0051, 0.00539 and 0.00561, respectively. All these exponents are
obtained by fitting in the interval $t\in[150,300]$.
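On the semilogarithmic scale such an exponent is simply the slope of a straight-line fit of $\log M(t)$ versus $t$; a minimal version of the fit reads:

```python
import numpy as np

def divergence_exponent(t, M, t_min=150.0, t_max=300.0):
    """Fit M(t) ~ M0 * exp(lam * t) on [t_min, t_max]; return lam.
    On the semilog scale this is a linear fit of log M versus t."""
    sel = (t >= t_min) & (t <= t_max)
    lam, log_M0 = np.polyfit(t[sel], np.log(M[sel]), 1)
    return lam
```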
Figure 5: The same as in Fig. 4 but for $M_{2}$ measures.
The results shown above allow us to conclude that for initial conditions
substantially different from the exact KdV2 soliton, the dynamics governed by
the KdV2 equation exhibits exponential divergence of trajectories.
## References
* [1] Drazin, P.G., Johnson, R.S.: _Solitons: An Introduction._ Cambridge University Press, Cambridge 1989.
* [2] Marchant, T.R., Smyth, N.F.: _The extended Korteweg-de Vries equation and the resonant flow of a fluid over topography._ J Fluid Mech 1990:221:263-288.
* [3] Karczewska, A., Rozmej, P., Infeld E., Rowlands G.: _Adiabatic invariants of the extended KdV equation_. Phys Lett A 2017:381:270-275.
* [4] Karczewska, A., Rozmej, P., Infeld, E.: _Shallow water soliton dynamics beyond Korteweg-de Vries equation_. Phys Rev E 2014:90:012907.
* [5] Infeld, E., Karczewska, A., Rowlands, G., Rozmej, P.: _Exact cnoidal solutions of the extended KdV equation_. Acta Phys Pol 2018:133:1191-1199.
* [6] Rozmej, P., Karczewska, A., Infeld E.: _Superposition solutions to the extended KdV equation for water surface waves._ Nonlinear Dynamics 2018:91:1085-1093.
* [7] Karczewska, A., Rozmej, P.: _Generalized KdV-type equations versus Boussinesq’s equations for uneven bottom – numerical study._ Comput Meth Sci Tech 2020:26(4):121-136.
* [8] Alligood, K.T., Sauer, T., Yorke, J.A.: _Chaos: an introduction to dynamical systems._ Springer-Verlag, 1997.
# Time Discretization From Noncommutativity
Fedele Lizzi<EMAIL_ADDRESS>Dipartimento di Fisica “Ettore Pancini”,
Università di Napoli Federico II, Napoli, Italy INFN, Sezione di Napoli,
Italy Departament de Física Quàntica i Astrofísica and Institut de Cíencies
del Cosmos (ICCUB), Universitat de Barcelona. Barcelona, Spain Patrizia
Vitale<EMAIL_ADDRESS>INFN, Sezione di Napoli, Italy Departament
de Física Quàntica i Astrofísica and Institut de Cíencies del Cosmos (ICCUB),
Universitat de Barcelona. Barcelona, Spain
###### Abstract
We show that a particular noncommutative geometry, sometimes called angular or
$\rho$-Minkowski, requires that the spectrum of time be discrete. In this
noncommutative space the time variable is not commuting with the angular
variable in cylindrical coordinates. The possible values that the variable can
take go from minus infinity to plus infinity, equally spaced by the scale of
noncommutativity. Possible self-adjoint extensions of the “time operator” are
discussed. They imply that a measurement of time can give any real value, but
time intervals are still quantized.
In general relativity spacetime itself is dynamical; therefore any theory of
quantum gravity will imply a _quantum spacetime_. There are different ways to
implement this quantization, and one of the most popular ones is to mimic what
has been done for ordinary quantum mechanics, and consider that the algebra
generated by the coordinate functions become noncommutative, thus defining a
_noncommutative geometry_. Different flavours of noncommutative spaces based
on noncommuting coordinates have appeared. The most promising of those have a
deformed symmetry, described by a quantum group, or a Hopf algebra.
We will work in a particular kind of noncommutative spacetime in four
dimensions, based on the following commutation relations among the coordinate
functions:
$\displaystyle[x^{0},x^{i}]$ $\displaystyle=$
$\displaystyle\mathrm{i}\lambda{\epsilon^{i}}_{j3}x^{j}$
$\displaystyle{[}x^{i},x^{j}{]}$ $\displaystyle=$ $\displaystyle 0$ (1)
where $\lambda$ is a constant with the dimensions of length and all other
commutators among the coordinate functions vanish. Notice that the third
coordinate $x^{3}$ is central, i.e. it commutes with all other coordinates.
This form of noncommutativity is a particular kind of Lie algebra type
noncommutativity, the underlying Lie algebra being the Euclidean algebra,
which goes back to at least [1] (also see [2]). In the context of twisted
symmetries it was discussed by Lukierski and Woronowicz in [3]. In [4] it was
christened $\rho$-Minkowski because what we call $\lambda$ is called $\rho$ in
that paper. We changed the notation to reserve the use of $\rho$ for the
radius in cylindrical coordinates. In that paper it was shown that the
principle of relative locality [5] holds. This kind of noncommutativity might
have concrete physical interest [6, 7] and even phenomenological/observational
consequences [8]. A field theory of this space has been constructed in [9].
Commutation relations similar to the ones considered here, and in the above
references, have appeared in the definition of the “noncommutative cylinder”
[10, 11, 12, 13, 14, 15]. This is an example of a two-dimensional space with a
compact dimension, with a commutation relation similar to (1) (or rather (6)
below). In particular in [11] time discretization, one of the results of this
paper, was noted. Our discussion is however in four dimensions, there are no
compact dimensions and the symmetries of the space are recovered in a quantum
manner.
Emergence of a discrete time, which is one of the main points of this paper is
fascinating. Its origin goes back to no less than C.N. Yang in 1947 [16], or
even earlier to Levi [17] who coined the term “Chronon”. Discrete time also
appeared in 2+1 gravity thanks to the work of ’t Hooft [18] (see also [19]).
The point of view presented here is however novel, in that it connects to
deformed symmetries and a promising quantum space.
We study the kinematics of this noncommutative space using the tools developed
for usual quantum mechanics: we quantise the space by associating to it an
algebra of operators, obtain a concrete representation of them on some Hilbert
space, whose vectors are pure states, diagonalise sets of completely commuting
observables, and use the standard measurement theory, namely that the possible
results of a measurement are given by the eigenvalues of the observables, with
probabilities given by the spectral decomposition of self-adjoint operators.
The word “quantum” in this context is ambiguous; we use it in the sense that
our space is described by operators. But Planck’s action constant $\hbar$
plays no role. We are at a purely kinematical level. Incidentally, this will
allow us to freely talk of “time operator”, an object which in usual quantum
mechanics is problematic. For a recent point of view see [20] and references
therein. If we identify $\lambda$ with Planck’s length, then we are in a
situation in which the inverse of the speed of light $c$ and the gravitational
constant cannot be ignored, but the quantum of action can.
This kind of analysis was performed for the better known $\kappa$-Minkowski
spacetime in [21, 22, 23]. The commutation relations in that case are of the
kind
$[x^{0},x^{i}]=\frac{\mathrm{i}}{\kappa}x^{i}\ ;\ [x^{i},x^{j}]=0$ (2)
It was found that only states localised at the origin, identified with the
position of a local observer, can be absolutely localised. States at a
distance cannot be precisely localised. This is a consequence of an
uncertainty principle which reads as
$\Delta x^{0}\Delta x^{i}\geq\frac{1}{2\kappa}\left|\langle
x^{i}\rangle\right|.$ (3)
Let us analyse the case of $\rho$-Minkowski. The relations (1) are clearly of
an angular nature. For this reason we work in cylindrical coordinates defined
as
$\rho=\sqrt{{x^{1}}^{2}+{x^{2}}^{2}}\ ,\quad z=x^{3}\ ,\quad t=\frac{x^{0}}{c}\ ,\quad\varphi=\arctan\frac{x^{2}}{x^{1}}$ (4)
One could be tempted to say that in these coordinates the only nonzero
commutator is
$[t,\varphi]=\mathrm{i}\lambda$ (5)
but this expression clearly does not make sense. The quantity $\varphi$ is not
a single valued function and upon quantisation no self-adjoint operator would
correspond to it. A correct expression is
$[t,{\mathrm{e}}^{\mathrm{i}\varphi}]=-\mathrm{i}\lambda{\mathrm{e}}^{\mathrm{i}\varphi}$
(6)
where ${\mathrm{e}}^{\mathrm{i}\varphi}$ is a legitimate well defined unitary
operator.
As we said, we want to borrow the analysis from the usual quantum mechanics of
point particles in three dimensions. There we have various standard sets of
mutually commuting operators. For example, we can consider the three position
coordinates $q^{i}$ and represent them as multiplicative operators on
functions belonging to $L^{2}(\mathbb{R}^{3})$, i.e. functions on
configuration space. Alternatively, we could take the $p_{i}$ as a complete
set and work with functions in Fourier transform, with position acting as a
differential operator. In both cases the spectrum is continuous (the whole
line for each component), and the eigenstates are improper vectors
(distributions): Dirac $\delta$’s and plane waves respectively. Another
standard choice is the three number operators
choices are the three number operators
$N_{i}=\frac{1}{2}(p_{i}^{2}+q_{i}^{2})-\frac{1}{2}$ (7)
In this case the spectrum is discrete and the eigenfunctions are Hermite
polynomials multiplied by a Gaussian. All sorts of combinations of continuous
and discrete spectra can occur; for example, for the hydrogen atom a complete
set of observables is given by the Hamiltonian itself, the square of the
angular momentum and one of its components. In this case the spectrum has both
continuous and discrete components, and the eigenfunctions are combinations of
Laguerre polynomials, exponential functions and spherical harmonics. Any
complete set will do, as long as the operators belonging to the set are
self-adjoint and commute.
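As a concrete check, here is a minimal sympy sketch (ours, not part of the
original discussion; we set $\hbar=1$ and $p=-\mathrm{i}\,d/dq$) confirming
that a Hermite polynomial times a Gaussian is an eigenfunction of the number
operator (7) with integer eigenvalue:

```python
import sympy as sp

q = sp.symbols('q', real=True)

def N(psi):
    # Number operator of eq. (7) with p = -i d/dq and hbar = 1:
    # N psi = (1/2)(-psi'' + q^2 psi) - psi/2
    return (-sp.diff(psi, q, 2) + q**2 * psi) / 2 - psi / 2

for n in range(5):
    psi_n = sp.hermite(n, q) * sp.exp(-q**2 / 2)  # Hermite times Gaussian
    print(n, sp.simplify(N(psi_n) / psi_n))       # prints n: N psi_n = n psi_n
```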
An important aspect to note is that the quantization of phase space has
representations on square integrable functions of a lower dimensional space.
The choice of the complete set indicates which observables we may
simultaneously measure. Let us consider first the case of
$L^{2}(\mathbb{R}^{3})$ functions and configuration space variables as a
complete set of commuting observables. Classically the momentum of a particle
is related to the velocity, and proportional to it in the absence of magnetic
forces. There is no problem in perfectly localising both position and
momentum: the $\delta$’s are states of the commutative algebra of position and
momentum variables. Quantum mechanically it is still possible to localise the
state in position space, but to do this it is necessary to superimpose
particles of all momenta:
$\delta^{3}(\vec{x})=\frac{1}{(2\pi)^{3}}\int\\!{\mathrm{d}}^{3}k\,{\mathrm{e}}^{\mathrm{i}\vec{k}\cdot\vec{x}}$
(8)
Good knowledge of position implies bad knowledge of momentum, and vice versa.
Both position and momentum are unbounded self-adjoint operators with spectrum
the real line. Up to signs and complex conjugations they enter the theory
symmetrically. Likewise, to obtain the improper eigenfunction $\delta$ as a
superposition of eigenfunctions of the number operators, an infinite series is
necessary, whose coefficients are not particularly simple or illuminating.
Conversely, to express the eigenfunctions of the number operator in the basis
in which position is diagonal we need to give infinite information, i.e. a
function of the variables: in this case a Hermite polynomial times a Gaussian.
All this is of course well known; we stress it only to compare with the
noncommutative cases below.
Let us first review what has been done for $\kappa$-Minkowski space-time in
[21, 22, 23]. In this case the only non trivial commutator can be expressed as
$[t,r]=\frac{\mathrm{i}}{\kappa}r.$ (9)
A possible set of commuting coordinates is thus given by the spatial
coordinates111We have already commented on the fact that $\varphi$ and
$\theta$ are not selfadjoint operators; we nevertheless use notations like
$\\{r,\theta,\varphi\\}$ as a useful shorthand. What we mean is that the
observables are the $\\{x^{i}\\}$’s acting on functions written in spherical
(and later cylindrical) coordinates., and it is possible to represent $t$ as
an operator acting on functions of $r$ as a _dilation_:
$t=\frac{\mathrm{i}}{\kappa}\left(r\partial_{r}+\frac{3}{2}\right)$ (10)
where the $\frac{3}{2}$ factor is necessary for self-adjointness. The time
operator has a continuous spectrum, and the (improper) eigenfunctions are the
distributions
$T_{\tau}=\frac{r^{-\frac{3}{2}-\mathrm{i}\tau}}{\kappa^{\mathrm{i}\tau}}=r^{-\frac{3}{2}}{\mathrm{e}}^{-\mathrm{i}\tau\log\left(r\kappa\right)}$
(11)
which play the same role for the time operator as plane waves do for the
operator $p$ in quantum mechanics. The expansion of functions in this
eigenbasis is therefore provided by monomials, suggesting the use of the
_Mellin_ transform in place of the Fourier transform. Hence, a state can be
written either as a function of $r\kappa$, or of $\tau={x^{0}}\kappa$,
according to:
$\displaystyle\psi(r,\theta,\varphi)$ $\displaystyle=$
$\displaystyle\frac{1}{\sqrt{2\pi}}\int_{-\infty}^{\infty}{\mathrm{d}}\tau\,r^{-\frac{3}{2}}{\mathrm{e}}^{-\mathrm{i}\tau\log\left(r\kappa\right)}\widetilde{\psi}(\tau,\theta,\varphi),$
$\displaystyle\widetilde{\psi}(\tau,\theta,\varphi)$ $\displaystyle=$
$\displaystyle\frac{1}{\sqrt{2\pi}}\int_{0}^{\infty}r^{2}{\mathrm{d}}r\,r^{-\frac{3}{2}}{\mathrm{e}}^{\mathrm{i}\tau\log\left(r\kappa\right)}\psi(r,\theta,\varphi).$
(12)
The transformation is an isometry of $L^{2}$, and $|\psi|^{2}$ and
$|\widetilde{\psi}|^{2}$ are the probability densities to find the particle at
position $r$ or time $\tau$ respectively.
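As a consistency check (our own sketch, not part of the original derivation),
sympy confirms that (11) is an improper eigenfunction of the dilation operator
(10), with eigenvalue $\tau/\kappa$:

```python
import sympy as sp

r, kappa = sp.symbols('r kappa', positive=True)
tau = sp.symbols('tau', real=True)

def t_op(psi):
    # Time operator of eq. (10): t = (i/kappa)(r d/dr + 3/2)
    return sp.I / kappa * (r * sp.diff(psi, r) + sp.Rational(3, 2) * psi)

T = r**sp.Rational(-3, 2) * sp.exp(-sp.I * tau * sp.log(r * kappa))  # eq. (11)
print(sp.simplify(t_op(T) / T))  # prints tau/kappa: t T_tau = (tau/kappa) T_tau
```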
The uncertainty relation (3) means that it is impossible to localise a state
exactly both temporally and radially, except when the state is localised at
$r=0$. The origin is the point at which the observer is located, and it is not
a “special point”: another observer will be located at its own (different)
origin, and will be able to localise states near it.
As for space-time symmetries, let us recall that $\kappa$-Minkowski
commutation relations are not Poincaré invariant; indeed they are
$\kappa$-Poincaré invariant, and translations in this case are not commuting.
Therefore there is no contradiction in the fact that it is impossible for
Alice to locate a state which Bob may. Alice cannot even precisely locate Bob!
To summarise, for $\kappa$-Minkowski space-time we have two complete sets of
operators, $\\{x^{i}\\}=\\{r,\theta,\varphi\\}$, $\\{t,\theta,\varphi\\}$, and
the two variables $t$ and $r$ are connected by a Mellin rather than a Fourier
transform. The noncommuting operators do not appear symmetrically. They are
both unbounded with continuous spectrum, but while the spectrum of $r$ is the
positive real line, that of $t$ is the whole of $\mathbb{R}$. The eigenstates
of the two are related by the Mellin transform and its inverse which, unlike
in the Fourier case, are not symmetric. The final result is that states along the
time axis can be localised, while states at a distance from the origin need to
superimpose states of arbitrary time. We refer to [21, 22, 23] for details.
For the case of $\rho$-Minkowski there are two natural choices of complete
sets of commuting observables. On the one side we have again the three
position variables, which it is convenient to express in cylindrical
coordinates $\\{\rho,z,\varphi\\}$; alternatively, we may choose to have time
among the observables and use $\\{\rho,z,t\\}$.
The three position operators act as multiplication operators, with $\rho$ and
$z$ selfadjoint and ${\mathrm{e}}^{\mathrm{i}\varphi}$ unitary. The pure
states of this algebra, as in the previous cases, are the Dirac $\delta$
functions localised at the points of $\mathbb{R}^{3}$: it is possible to
completely localise a state at any point. The time operator acts as the
angular momentum in the $z$ direction. This leads to the central observation
of this note:
_The spectrum of time, i.e. the possible results of a measurement, is composed
of discrete integer multiples of a quantum of time._
The relation between the two bases is given by the Fourier _series_ expansion
of the angular part:
$\psi(\rho,z,\varphi)=\sum_{n=-\infty}^{\infty}\psi_{n}(\rho,z){\mathrm{e}}^{\mathrm{i}n\varphi}$
(13)
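To make this discreteness explicit, here is a minimal sympy sketch (the
representation $t=-\mathrm{i}\lambda\partial_{\varphi}$, including its sign
convention, is our assumption):

```python
import sympy as sp

phi = sp.symbols('phi', real=True)
lam = sp.symbols('lambda', positive=True)
n = sp.symbols('n', integer=True)

def t_op(psi):
    # Assumed representation: time acts as angular momentum along z
    return -sp.I * lam * sp.diff(psi, phi)

mode = sp.exp(sp.I * n * phi)            # Fourier mode of eq. (13)
print(sp.simplify(t_op(mode) / mode))    # prints lambda*n: spectrum n*lambda
```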
The time and angular variables are dual, but since $\varphi$ is not a good
self-adjoint operator we cannot write a Heisenberg-type uncertainty relation
$\Delta t\,\Delta\varphi$. Nevertheless, a similar reasoning can be made. As
is known, an eigenstate of the angular variable would require all Fourier
modes:
$\delta(\varphi)=\frac{1}{2\pi}\sum_{n=-\infty}^{\infty}{\mathrm{e}}^{\mathrm{i}n\varphi}.$
(14)
On the other hand, after a time measurement which has given the result
$n_{0}\lambda$, the system will be in an eigenstate of the time operator,
namely a single mode ${\mathrm{e}}^{\mathrm{i}n_{0}\varphi}$. This means that
an absolutely precise measurement of time would return a state which is
uniformly spread in the angular variable. If one instead measures time with
some uncertainty, i.e. uses a finite number of Fourier modes to build a state
peaked around some time, then the corresponding uncertainty in the angular
variable comes from the fact that only a finite set of elements of the basis
is available. If one identifies the length $\lambda$ with the Planck length,
converting this quantity into time units by means of the speed of light gives
for the quantum of time a quantity of the order of $5.39\times 10^{-44}\,$s.
The most accurate measurement of time available to date is of the order of
$10^{-19}\,$s [24]. Let us make a very heuristic order-of-magnitude argument.
We may say that such a measurement needs the superposition of something like
$10^{25}$ quanta of time. In order to localise the angular component of a
state with absolute precision all infinitely many Fourier modes are needed,
but a delta function can be well approximated by the Dirichlet kernel
$\delta_{N}=\frac{1}{2\pi}\sum_{n=-N}^{N}{\mathrm{e}}^{\mathrm{i}n\varphi}=\frac{1}{2\pi}\frac{\sin\left(N+\frac{1}{2}\right)\varphi}{\sin\frac{\varphi}{2}}.$
(15)
In Fig. 1 we plot $\delta_{N}$ for three (extremely small!) values of
$N=5,10,15$.
Figure 1: The Dirichlet kernel for some values of $N$.
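A plot like Fig. 1 can be reproduced with a few lines of matplotlib (our own
sketch, using the closed form in (15)):

```python
import numpy as np
import matplotlib.pyplot as plt

phi = np.linspace(-np.pi, np.pi, 2001)

def dirichlet(N, phi):
    # Closed form of eq. (15); the phi -> 0 limit is (2N+1)/(2*pi)
    with np.errstate(divide='ignore', invalid='ignore'):
        out = np.sin((N + 0.5) * phi) / np.sin(phi / 2)
    return np.where(np.abs(phi) < 1e-12, 2 * N + 1, out) / (2 * np.pi)

for N in (5, 10, 15):
    plt.plot(phi, dirichlet(N, phi), label=f'N = {N}')
plt.xlabel(r'$\varphi$')
plt.legend()
plt.show()
```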
This means that the most precise experiment to date is effectively using
$N\sim 10^{25}$. In this case the first zero of the Dirichlet kernel (the
width of the main peak of the function) is at $\varphi\sim 10^{-25}$. We may
take this to be the uncertainty in an angle determination. To translate this
into an uncertainty in position we need an estimate of the radius $\rho$: the
larger $\rho$, the larger the uncertainty. The uncertainty at the edge of the
observable universe ($10^{26}\,$m) is of the order of metres. An uncertainty
in the localisation of objects we can live with!
An important aspect of $\rho$-Minkowski space-time is that, like
$\kappa$-Minkowski [25, 26, 27], it may be regarded as the homogeneous space
of a quantum group (i.e. a quantum Hopf algebra) [28]. Therefore, although its
commutation relations violate standard Poincaré symmetry, they are covariant
under the action of the appropriate deformation of the Poincaré group. The
latter may be described following two approaches, dual to each other: the
deformation of the Lie algebra with its universal enveloping algebra, and the
deformation of the group algebra, namely of the algebra of functions on the
group. They both yield quantum Hopf algebras, dually related, both referred to
as the quantum group $\mathcal{G}_{q}$, with $\mathcal{G}$ the starting Lie
group and $q$ the deformation parameter. For the case at hand the former
approach has been described in [29], and further analysed in [9] in the
context of field theory, whereas the latter has not been investigated up to
now, to our knowledge. Interestingly, the two points of view may be related to
observer-dependent and particle-dependent transformations. Let us see how this
works in the present context.
The noncommutative algebra (1) may be realised on the algebra of functions on
$\mathbb{R}^{4}$ in terms of a star product associated with a twist operator
which reads
$\displaystyle\mathcal{F}$ $\displaystyle=$
$\displaystyle\exp{\left\\{-\frac{\mathrm{i}\lambda}{2}\left[\partial_{0}\otimes\left(x^{2}\partial_{1}-x^{1}\partial_{2}\right)-\left(x^{2}\partial_{1}-x^{1}\partial_{2}\right)\otimes\partial_{0}\right]\right\\}}$
(16) $\displaystyle=$
$\displaystyle\exp{\left\\{\frac{\mathrm{i}\lambda}{2}\left(\partial_{0}\otimes\partial_{\varphi}-\partial_{\varphi}\otimes\partial_{0}\right)\right\\}}.$
The star-product is therefore defined according to
$(f\star g)(x):=\mu_{\star}(f\otimes
g)(x)=\mu_{0}\circ\mathcal{F}^{-1}(f\otimes g)(x).$ (17)
Since the algebra is noncommutative, so is the combination of plane waves, and
the sum rule of momenta:
${\mathrm{e}}^{-\mathrm{i}p\cdot x}\star{\mathrm{e}}^{-\mathrm{i}q\cdot
x}={\mathrm{e}}^{-\mathrm{i}{R_{i}}^{j}(p_{j}+q_{j})x^{i}},$ (18)
with $R$ the following matrix:
$R(t)\equiv\left(\begin{array}[]{cccc}\cos{\left(\frac{\lambda
t}{2}\right)}&0&0&\sin{\left(\frac{\lambda t}{2}\right)}\\\ 0&1&0&0\\\
0&0&1&0\\\ -\sin{\left(\frac{\lambda
t}{2}\right)}&0&0&\cos{\left(\frac{\lambda t}{2}\right)}\end{array}\right).$
(19)
In the twist approach, the Poincaré generators act undeformed on a single copy
of the algebra of observables, with standard Lie brackets; but, in order to
act on products of observables (and therefore on the commutator (1)), the
coproduct of Lie algebra generators has to be twisted for the consistency of
the whole enveloping algebra $\mathcal{U}(\mathfrak{p})$, according to
$\Delta_{\mathcal{F}}=\mathcal{F}\Delta\mathcal{F}^{-1}$. This entails a
twisted Leibniz rule, so that for $X\in\mathfrak{p}$
$X\triangleright f\star g:=\mu_{\star}\circ\Delta_{\mathcal{F}}(X)(f\otimes
g)$ (20)
The twisted coproduct of Poincaré generators [29] is given by:
$\displaystyle\Delta^{\cal F}P_{0}=P_{0}\otimes 1+1\otimes P_{0},$
$\displaystyle\Delta^{\cal F}P_{3}=P_{3}\otimes 1+1\otimes P_{3},$
$\displaystyle\Delta^{\cal
F}P_{1}=P_{1}\otimes\cos\left(\frac{\lambda}{2}P_{0}\right)+\cos\left(\frac{\lambda}{2}P_{0}\right)\otimes
P_{1}+P_{2}\otimes\sin\left(\frac{\lambda}{2}P_{0}\right)-\sin\left(\frac{\lambda}{2}P_{0}\right)\otimes
P_{2},$ $\displaystyle\Delta^{\cal
F}P_{2}=P_{2}\otimes\cos\left(\frac{\lambda}{2}P_{0}\right)+\cos\left(\frac{\lambda}{2}P_{0}\right)\otimes
P_{2}-P_{1}\otimes\sin\left(\frac{\lambda}{2}P_{0}\right)+\sin\left(\frac{\lambda}{2}P_{0}\right)\otimes
P_{1},$ (21)
while the twisted coproduct of Lorentz generators is:
$\displaystyle\Delta^{\cal
F}M_{31}=M_{31}\otimes\cos\left(\frac{\lambda}{2}P_{0}\right)+\cos\left(\frac{\lambda}{2}P_{0}\right)\otimes
M_{31}+M_{32}\otimes\sin\left(\frac{\lambda}{2}P_{0}\right)-\sin\left(\frac{\lambda}{2}P_{0}\right)\otimes
M_{32}$ $\displaystyle\hskip 42.67912pt-
P_{1}\otimes\frac{\lambda}{2}M_{12}\cos\left(\frac{\lambda}{2}P_{0}\right)+\frac{\lambda}{2}M_{12}\cos\left(\frac{\lambda}{2}P_{0}\right)\otimes
P_{1}$ $\displaystyle\hskip 42.67912pt-
P_{2}\otimes\frac{\lambda}{2}M_{12}\sin\left(\frac{\lambda}{2}P_{0}\right)-\frac{\lambda}{2}M_{12}\sin\left(\frac{\lambda}{2}P_{0}\right)\otimes
P_{2},$ $\displaystyle\Delta^{\cal
F}M_{32}=M_{32}\otimes\cos\left(\frac{\lambda}{2}P_{0}\right)+\cos\left(\frac{\lambda}{2}P_{0}\right)\otimes
M_{32}-M_{31}\otimes\sin\left(\frac{\lambda}{2}P_{0}\right)+\sin\left(\frac{\lambda}{2}P_{0}\right)\otimes
M_{31}$ $\displaystyle\hskip 42.67912pt-
P_{2}\otimes\frac{\lambda}{2}M_{12}\cos\left(\frac{\lambda}{2}P_{0}\right)+\frac{\lambda}{2}M_{12}\cos\left(\frac{\lambda}{2}P_{0}\right)\otimes
P_{2}$ $\displaystyle\hskip
42.67912pt+P_{1}\otimes\frac{\lambda}{2}M_{12}\sin\left(\frac{\lambda}{2}P_{0}\right)+\frac{\lambda}{2}M_{12}\sin\left(\frac{\lambda}{2}P_{0}\right)\otimes
P_{1},$ $\displaystyle\Delta^{\cal F}M_{30}=M_{30}\otimes 1+1\otimes
M_{30}-\frac{\lambda}{2}P_{3}\otimes M_{12}+\frac{\lambda}{2}M_{12}\otimes
P_{3},$ $\displaystyle\Delta^{\cal F}M_{12}=M_{12}\otimes 1+1\otimes M_{12},$
$\displaystyle\Delta^{\cal
F}M_{10}=M_{10}\otimes\cos\left(\frac{\lambda}{2}P_{0}\right)+\cos\left(\frac{\lambda}{2}P_{0}\right)\otimes
M_{10}+M_{20}\otimes\sin\left(\frac{\lambda}{2}P_{0}\right)-\sin\left(\frac{\lambda}{2}P_{0}\right)\otimes
M_{20}$ $\displaystyle\Delta^{\cal
F}M_{20}=M_{20}\otimes\cos\left(\frac{\lambda}{2}P_{0}\right)+\cos\left(\frac{\lambda}{2}P_{0}\right)\otimes
M_{20}-M_{10}\otimes\sin\left(\frac{\lambda}{2}P_{0}\right)+\sin\left(\frac{\lambda}{2}P_{0}\right)\otimes
M_{10}.$ (22)
The universal enveloping algebra of $\mathfrak{p}$, endowed with the twisted
coproduct above, an antipode and a co-unit, may be shown to yield a quantum
Hopf algebra, or equivalently, a quantum group, which we shall refer to as the
$\rho$-Poincaré quantum group $U_{\rho}({\mathfrak{p}})$ (remember that the
deformation parameter is $\lambda$ in our case, but we keep the notation
$\rho$ to adhere to the existing literature). For the purposes of this paper
we don’t need to enter into the technical details of the construction. What is
relevant to us is that the commutation relations (1) are twist-covariant,
namely covariant under the action of the Poincaré generators if their action
is implemented through the appropriate co-product
$X\triangleright[f,g]_{*}:=\mu_{\star}\circ\Delta_{\mathcal{F}}(X)(f\otimes
g-g\otimes f).$ (23)
Infinitesimal transformations of the observables, realised through the action
of the deformed Hopf algebra $U_{\rho}({\mathfrak{p}})$, are usually
considered when dealing with active transformations of physical quantities in
a fixed reference frame.
Let us now consider finite transformations, which are especially relevant in
connection with passive, or observer transformations [30, 31]. The deformation
of the co-product in the Lie algebra of the group has its counterpart in the
deformation of the product in the algebra of functions on the group manifold.
In particular, it will affect the parameters of the Poincaré group,
$(\Lambda^{\mu}_{\nu},a^{\mu})\in\mathcal{F}({\mathcal{P}})$. Following [32]
(see also [33]) we first construct the Poisson-Lie bracket for
$\mathcal{F}({\mathcal{P}})$ and then obtain the quantum group
${\mathcal{P}}_{\lambda}$ by replacing the PL-bracket with the commutator. To
this end, we read off the $r$-matrix from the twist operator (16), it being
$\mathcal{F}=1+\frac{1}{2}r+\dots$,
$r=-\mathrm{i}\lambda(P_{0}\wedge M_{12})\in\mathfrak{p}\otimes\mathfrak{p}$
(24)
which satisfies the classical Yang-Baxter equation [32]. In terms of $r$ we
define the Poisson-Lie bracket on the group manifold
$\\{f,g\\}=\lambda\left(X_{0}^{R}\wedge X^{R}_{12}-X_{0}^{L}\wedge
X^{L}_{12}\right)(\mathrm{d}f,\mathrm{d}g)\;,\quad f,g\in\mathcal{F}(\mathcal{P})$ (25)
with $X_{0}^{R,L},X^{R,L}_{12}$ the right and left invariant vector fields
corresponding to the Lie algebra generators $P_{0},M_{12}$. They read
$\begin{aligned}
X_{\alpha}^{L}&=\Lambda^{\sigma}_{\alpha}\frac{\partial}{\partial
a^{\sigma}},\;\;\;&X_{\alpha\beta}^{L}&=\Lambda^{\sigma}_{\alpha}\frac{\partial}{\partial\Lambda^{\sigma\beta}}-\Lambda^{\sigma}_{\beta}\frac{\partial}{\partial\Lambda^{\sigma\alpha}}\\\
X_{\alpha}^{R}&=\frac{\partial}{\partial
a^{\alpha}},\;\;\;&X_{\alpha\beta}^{R}&=\Lambda_{\beta\sigma}\frac{\partial}{\partial\Lambda^{\alpha}_{\sigma}}-\Lambda_{\alpha\sigma}\frac{\partial}{\partial\Lambda^{\beta}_{\sigma}}+a_{\beta}\frac{\partial}{\partial
a_{\alpha}}\end{aligned}$ (26)
and we compute
$\displaystyle\\{{\Lambda^{\mu}}_{\nu},{\Lambda^{\rho}}_{\sigma}\\}$
$\displaystyle=$ $\displaystyle 0$ (27)
$\displaystyle\\{{\Lambda^{\mu}}_{\nu},a^{\rho}\\}$ $\displaystyle=$
$\displaystyle\lambda\big{[}-\delta^{\rho}_{0}(\Lambda_{2\nu}\delta^{\mu}_{1}-\Lambda_{1\nu}\delta^{\mu}_{2})+{\Lambda^{\rho}}_{0}({\Lambda^{\mu}}_{1}\eta_{2\nu}-{\Lambda^{\mu}}_{2}\eta_{1\nu})\big{]}$
(28) $\displaystyle\\{a^{\mu},a^{\nu}\\}$ $\displaystyle=$
$\displaystyle\lambda\big{[}\delta^{\mu}_{0}(a_{2}\delta^{\nu}_{1}-a_{1}\delta^{\nu}_{2})-{\delta^{\nu}}_{0}(a_{2}\delta^{\mu}_{1}-a_{1}\delta^{\mu}_{2})\big{]}.$
(29)
We thus perform the standard quantization by replacing Poisson brackets with
$-\mathrm{i}$ times the commutators. We obtain
$\displaystyle[{\Lambda^{\mu}}_{\nu},{\Lambda^{\rho}}_{\sigma}]$
$\displaystyle=$ $\displaystyle 0$ (30)
$\displaystyle{[}{\Lambda^{\mu}}_{\nu},a^{\rho}{]}$ $\displaystyle=$
$\displaystyle\mathrm{i}\lambda\big{[}-\delta^{\rho}_{0}(\Lambda_{2\nu}\delta^{\mu}_{1}-\Lambda_{1\nu}\delta^{\mu}_{2})+{\Lambda^{\rho}}_{0}({\Lambda^{\mu}}_{1}\eta_{2\nu}-{\Lambda^{\mu}}_{2}\eta_{1\nu})\big{]}$
(31) $\displaystyle{[}a^{\mu},a^{\nu}{]}$ $\displaystyle=$
$\displaystyle\mathrm{i}\lambda\big{[}\delta^{\mu}_{0}(a_{2}\delta^{\nu}_{1}-a_{1}\delta^{\nu}_{2})-\delta^{\nu}_{0}(a_{2}\delta^{\mu}_{1}-a_{1}\delta^{\mu}_{2})\big{]}.$
(32)
Since the composition law of group elements is compatible with Poisson
brackets, the coproduct, antipode and counit are undeformed. The algebra
$\mathcal{F}_{\lambda}(\mathcal{P})$ with the structures here introduced is a
quantum Hopf algebra, the quantum Poincaré group $\mathcal{P}_{\lambda}$ in
the dual picture announced before. Not surprisingly, the commutator of
translation parameters (32) reproduces the $\rho$-Minkowski algebra (1), the
latter being identified with the homogeneous space of the quantum Poincaré
group with respect to the undeformed Lorentz subgroup. In other words, the
$\rho$-Minkowski space-time is co-acted upon according to
${x^{\prime}}^{\mu}={\Lambda^{\mu}}_{\nu}\otimes x^{\nu}+a^{\mu}\otimes 1$
(33)
and commutation relations (1) are covariant under (33), provided (27)-(32)
hold.
This completes the picture which allows for equivalent descriptions of the
$\rho$-Poincaré group and its action on the $\rho$-Minkowski space-time.
The group algebra approach is useful to understand observer-dependent
transformations222A similar analysis has been performed in [21] for
$\kappa$-Minkowski. The most relevant consequence is that the transformations
relating different reference frames belong to a noncommutative algebra; hence
localisability is subject to limitations as well. States of the algebra
generated by the coordinates may be more or less sharply localised, and when
the algebra is noncommutative there may be no states of absolute localisation.
This happens in our case. As a consequence, different observers will not agree
in general on the localizability properties of the same state. We have to
specify the observer making the observations, and we have been implicitly
considering an observer located at the origin. In order to change observer, a
Poincaré transformation is needed. But in our case the symmetry is the quantum
$\rho$-Poincaré. Accordingly it will be impossible to locate the position of
the transformed observer, since translations do not commute.
Since time (and time slices) is discrete, there appears to be a universal
clock whose beats give the allowable instants. This is only partially true.
It is known (see for example [34, 35]) that periodic functions are only one of
the domains of selfadjointness of the operator
$-\mathrm{i}\partial_{\varphi}$. A generic domain in which functions are
periodic up to a phase ${\mathrm{e}}^{\mathrm{i}\alpha}$ is equally good. The
basis of this domain is given by functions of the kind
$e_{\alpha}={\mathrm{e}}^{\mathrm{i}(n+\alpha)\varphi}$ (34)
which, together with the coefficients $\psi_{n}(\rho,z)$ in (13), provide an
expansion for the vectors of the Hilbert space. The spectrum of the time
operator is given by the set $n+\alpha$. The differences between eigenvalues
are unchanged, and the effect is a rigid shift. Of course $\alpha$ is itself
periodic, of period $2\pi$. This, however, means that a different choice of
selfadjointness domain has been made. Time translations are undeformed, and
two time-translated observers will be in different, but equivalent, domains. A
given observer, nevertheless, can only measure quantised time intervals.
We conclude with some comments. There are several noncommutative spaces with
discrete features; reviews and references can be found, for example, in [36,
37]. Although the main motivation of this note was not to present a
phenomenologically viable model, it might nevertheless be developed in a more
physical direction. A field theory has been built in [9], and it was shown
there that particle decays may be affected. The $S$-matrix for these processes
is discussed in [38]. In [8] this kind of noncommutativity was connected to
lensing. We feel that $\rho$-Minkowski can be added to the list of viable
noncommutative spaces, and its peculiar properties deserve further
investigation in a variety of directions.
## Acknowledgments
We would like to thank Giovanni Amelino-Camelia and Jerzy Kowalski-Glikman for
asking one of us to give a talk on $\rho$-Minkowski. This triggered a renewed
interest in this quantum space, and inspired this work. We also wish to thank
Peter Schupp for sharing with us interesting ideas about the interpretation of
time discretisation, and for useful references. We acknowledge support from
the INFN Iniziativa Specifica GeoSymQFT. FL acknowledges the Spanish MINECO
under Project No. MDM-2014-0369 of ICCUB (Unidad de Excelencia ‘Maria de
Maeztu’), Grant No. FPA2016-76005-C2-1-P.
## References
* [1] S. Gutt, _An explicit $*$-product on the cotangent bundle of a Lie group_, _Lett. Math. Phys._ 7 (1983) 249.
* [2] J. M. Gracia-Bondia, F. Lizzi, G. Marmo and P. Vitale, _Infinitely many star products to play with_ , _JHEP_ 04 (2002) 026 [hep-th/0112092].
* [3] J. Lukierski and M. Woronowicz, _New Lie-algebraic and quadratic deformations of Minkowski space from twisted Poincaré symmetries_ , _Phys. Lett. B_ 633 (2006) 116 [hep-th/0508083].
* [4] G. Amelino-Camelia, L. Barcaroli and N. Loret, _Modeling transverse relative locality_ , _Int. J. Theor. Phys._ 51 (2012) 3359 [1107.3334].
* [5] G. Amelino-Camelia, L. Freidel, J. Kowalski-Glikman and L. Smolin, _The principle of relative locality_ , _Phys. Rev. D_ 84 (2011) 084010 [1101.0931].
* [6] M. Dimitrijević Ćirić, N. Konjik and A. Samsarov, _Search for footprints of quantum spacetime in black hole QNM spectrum_ , 1910.13342.
* [7] M. Dimitrijević Ćirić, N. Konjik and A. Samsarov, _Noncommutative scalar field in the nonextremal Reissner-Nordström background: Quasinormal mode spectrum_ , _Phys. Rev. D_ 101 (2020) 116009 [1904.04053].
* [8] G. Amelino-Camelia, L. Barcaroli, S. Bianco and L. Pensato, _Planck-scale dual-curvature lensing and spacetime noncommutativity_ , _Adv. High Energy Phys._ 2017 (2017) 6075920 [1708.02429].
* [9] M. Dimitrijevic Ciric, N. Konjik, M. A. Kurkov, F. Lizzi and P. Vitale, _Noncommutative field theory from angular twist_ , _Phys. Rev. D_ 98 (2018) 085011 [1806.06678].
* [10] M. Chaichian, A. Demichev and P. Presnajder, _Quantum field theory on noncommutative space-times and the persistence of ultraviolet divergences_ , _Nucl. Phys. B_ 567 (2000) 360 [hep-th/9812180].
* [11] M. Chaichian, A. Demichev, P. Presnajder and A. Tureanu, _Space-time noncommutativity, discreteness of time and unitarity_ , _Eur. Phys. J. C_ 20 (2001) 767 [hep-th/0007156].
* [12] B. P. Dolan, K. S. Gupta and A. Stern, _Noncommutative BTZ black hole and discrete time,_ _Class. Quant. Grav._ 24 (2007), 1647 [hep-th/0611233].
* [13] A. P. Balachandran, A. G. Martins and P. Teotonio-Sobrinho, _Discrete time evolution and energy nonconservation in noncommutative physics_ , _JHEP_ 05 (2007) 066 [hep-th/0702076].
* [14] D. Bak and K.-M. Lee, _Noncommutative supersymmetric tubes_ , _Phys. Lett. B_ 509 (2001) 168 [hep-th/0103148].
* [15] H. Steinacker, _Split noncommutativity and compactified brane solutions in matrix models_ , _Prog. Theor. Phys._ 126 (2011) 613 [1106.6153].
* [16] C. Yang, _On quantized space-time_ , _Phys. Rev._ 72 (1947) 874.
* [17] R. Lévi, _Théorie de l’action universelle et discontinue_ , _J. Phys. Radium_ 8 (1927) 182.
* [18] G. ’t Hooft, _Canonical quantization of gravitating point particles in (2+1)-dimensions_ , _Class. Quant. Grav._ 10 (1993) 1653 [gr-qc/9305008].
* [19] A. P. Balachandran and L. Chandar, _Discrete time from quantum physics_ , _Nucl. Phys. B_ 428 (1994) 435 [hep-th/9404193].
* [20] P. Aniello, F. Ciaglia, F. Di Cosmo, G. Marmo and J. Pérez-Pardo, _Time, classical and quantum_ , _Annals Phys._ 373 (2016) 532 [1605.03534].
* [21] F. Lizzi, M. Manfredonia, F. Mercati and T. Poulain, _Localization and Reference Frames in $\kappa$-Minkowski Spacetime_, _Phys. Rev._ D99 (2019) 085003 [1811.08409].
* [22] F. Lizzi, M. Manfredonia and F. Mercati, _Localizability in $\kappa$-Minkowski spacetime_, _Int. J. Geom. Meth. Mod. Phys._ 17 (2020) 2040010 [1912.07098].
* [23] F. Lizzi, M. Manfredonia and F. Mercati, _The momentum spaces of $\kappa$-Minkowski noncommutative spacetime_, _Nucl. Phys. B_ 958 (2020) 115117 [2001.08756].
* [24] S. Grundmann, D. Trabert, K. Fehre, N. Strenger, A. Pier, L. Kaiser et al., _Zeptosecond birth time delay in molecular photoionization_ , _Science_ 370 (2020) 339 [2010.08298].
* [25] J. Lukierski, A. Nowicki and H. Ruegg, _Real forms of complex quantum anti-De Sitter algebra U-q(Sp(4:C)) and their contraction schemes_ , _Phys. Lett._ B271 (1991) 321 [hep-th/9108018].
* [26] J. Lukierski, A. Nowicki and H. Ruegg, _New quantum Poincaré algebra and k deformed field theory_ , _Phys. Lett._ B293 (1992) 344 [hep-th/9108018].
* [27] S. Majid and H. Ruegg, _Bicrossproduct structure of kappa Poincaré group and noncommutative geometry_ , _Phys. Lett._ B334 (1994) 348 [hep-th/9405107].
* [28] V. G. Drinfel’d, _Hopf algebras and the quantum Yang-Baxter equation_ , _Sov. Math. Dokl._ 32 (1985) 254.
* [29] M. Dimitrijević Ćirić, N. Konjik and A. Samsarov, _Noncommutative scalar quasinormal modes of the Reissner–Nordström black hole_ , _Class. Quant. Grav._ 35 (2018) 175005 [1708.04066].
* [30] P. Kosinski, J. Lukierski and P. Maslanka, _Local d=4 field theory on $\kappa$–deformed Minkowski space_, http://arxiv.org/abs/hep-th/9902037v2.
* [31] P. Kosinski, J. Lukierski and P. Maslanka, _$\kappa$ –deformed Wigner construction of relativistic wave functions and free fields on $\kappa$-Minkowski space_, http://arxiv.org/abs/hep-th/0103127v1.
* [32] V. Chari and A. N. Pressley, _A Guide to Quantum Groups_. Cambridge University Press, 1994.
* [33] P. Kosinski and P. Maslanka, _The Kappa-Weyl group and its algebra_ , [q-alg/9512018].
* [34] M. Reed and B. Simon, _Methods of Modern Mathematical Physics. 2. Fourier Analysis, Self-adjointness_. Academic Press, 1, 1975.
* [35] G. Esposito, G. Marmo, G. Miele and G. Sudarshan, _Advanced concepts in quantum mechanics_. Cambridge University Press, 10, 2014, 10.1017/CBO9781139875950.
* [36] F. Lizzi and P. Vitale, _Matrix Bases for Star Products: a Review_ , _SIGMA_ 10 (2014) 086 [1403.0808].
* [37] F. D’Andrea, F. Lizzi and P. Martinetti, _Spectral geometry with a cut-off: topological and metric aspects_ , _J. Geom. Phys._ 82 (2014) 18 [1305.2605].
* [38] O. O. Novikov, _$\mathcal{PT}$ -symmetric quantum field theory on the noncommutative spacetime_, _Mod. Phys. Lett. A_ 35 (2019) 2050012 [1906.05239].
Extended version of the paper accepted at the $35^{th}$ AAAI Conference on
Artificial Intelligence (AAAI 2021).
# Context-aware Attentional Pooling (CAP) for Fine-grained Visual
Classification
Ardhendu Behera, Zachary Wharton, Pradeep Hewage, Asish Bera
###### Abstract
Deep convolutional neural networks (CNNs) have shown a strong ability in
mining discriminative object pose and parts information for image recognition.
For fine-grained recognition, a context-aware, rich feature representation of
the object/scene plays a key role, since the appearance exhibits significant
variance within the same subcategory and only subtle variance among different
subcategories. Finding the subtle variance that fully characterizes the
object/scene is not straightforward. To address this, we propose a novel
context-aware attentional
pooling (CAP) that effectively captures subtle changes via sub-pixel
gradients, and learns to attend informative integral regions and their
importance in discriminating different subcategories without requiring the
bounding-box and/or distinguishable part annotations. We also introduce a
novel feature encoding by considering the intrinsic consistency between the
informativeness of the integral regions and their spatial structures to
capture the semantic correlation among them. Our approach is simple yet
extremely effective and can be easily applied on top of a standard
classification backbone network. We evaluate our approach using six state-of-
the-art (SotA) backbone networks and eight benchmark datasets. Our method
significantly outperforms the SotA approaches on six datasets and is very
competitive with the remaining two.
## Introduction
Over recent years, there has been significant progress in the landscape of
computer vision due to the adaptation and enhancement of a fast, scalable and
end-to-end learning framework, the CNN (LeCun et al. 1998). This is not a
recent invention, but we now see a profusion of CNN-based models achieving
SotA results in visual recognition (He et al. 2016; Huang et al. 2017; Zoph et
al. 2018; Sandler et al. 2018). The performance gain primarily comes from the
model’s ability to reason about image content by disentangling discriminative
object pose and part information from texture and shape. Most discriminative
features are often based on changes in global shape and appearance. They are
often ill-suited to distinguish subordinate categories, involving subtle
visual differences within various natural objects such as bird species (Wah et
al. 2011; Van Horn et al. 2015), flower categories (Nilsback and Zisserman
2008), dog breeds (Khosla et al. 2011), pets (Parkhi et al. 2012) and man-made
objects like aircraft types (Maji et al. 2013), car models (Krause et al.
2013), etc. To address this, a global descriptor is essential, one that
assembles features from multiple local parts and their hierarchy so that
subtle changes can be discriminated as a misalignment of local parts or
patterns. The descriptor should also be able to emphasize the importance of a
part.
There have been some excellent works on fine-grained visual recognition (FGVC)
using weakly-supervised complementary parts (Ge, Lin, and Yu 2019), part
attention (Liu et al. 2016), object-part attention (Peng, He, and Zhao 2018),
multi-agent cooperative learning (Yang et al. 2018), recurrent attention (Fu,
Zheng, and Mei 2017), and destruction and construction learning (Chen et al.
2019). All these approaches avoid part-level annotations and automatically
discriminate local parts in an unsupervised/weakly-supervised manner. Many of
them use a pre-trained object/parts detector and lack rich representation of
regions/parts to capture the object-part relationships better. To truly
describe an image, we need to consider the image generation process from
pixels to objects to the scene in a more fine-grained way, not only regulating
the objects/parts and their spatial arrangements but also defining their
appearances using multiple partial descriptions, as well as their importance
in discriminating subtle changes. These partial descriptions should be rich
and complementary to each other to provide a complete description of the
object/image. In this work, we propose a simple yet compelling approach that
embraces the above properties systematically to address the challenges
associated with FGVC. Thus, it can benefit a wide variety of applications such
as image captioning (Herdade et al. 2019; Huang et al. 2019a; Li et al. 2019),
expert-level image recognition (Valan et al. 2019; Krause et al. 2016), and so
on.
Our work: To describe objects in the conventional CNN fashion while also
maintaining their visual appearance, we design a context-aware attentional
pooling (CAP) that effectively encodes the spatial arrangement and visual
appearance of parts. The module takes as input a convolutional feature map
from a base CNN and learns to emphasize the latent representation of multiple
integral regions (of varying coarseness) to describe hierarchies within
objects and parts. Each region has an anchor in the feature map, and many
regions share the same anchor due to their integral characteristics. These
integral regions are then fed into a recurrent network (e.g. an LSTM) to
capture their spatial arrangement; this design is inspired by the visual
recognition literature, which suggests that humans do not focus their
attention on an entire scene at once. Instead, they attend to different parts
sequentially to extract relevant information (Zoran et al. 2020). A vital
characteristic of our CAP is that it generates a new feature map by focusing
on a given region conditioned on all the other regions and itself. Moreover,
it efficiently captures subtle variations within each region through sub-pixel
gradients via bilinear pooling. Recurrent networks are mainly designed for
sequence analysis/recognition, whereas we aim to capture the subtle changes
between integral regions and their spatial arrangement. Thus, we introduce a
learnable pooling that automatically emphasizes the most informative hidden
states of the recurrent network. It learns to encode the spatial arrangement
of the latent representations of integral regions and uses it to infer the
fine-grained subcategories.
Our contributions: Our main contributions can be summarized as: 1) an easy-to-
use extension to SotA base CNNs by incorporating context-aware attention to
achieve a considerable improvement in FGVC; 2) to discriminate the subtle
changes in an object/scene, context-aware attention-guided rich representation
of integral regions is proposed; 3) a learnable pooling is also introduced to
automatically select the hidden states of a recurrent network to encode
spatial arrangement and appearance features; 4) extensive analysis of the
proposed model on eight FGVC datasets, obtaining SotA results; and 5) analysis
of various base networks for the wider applicability of our CAP.
Figure 1: a) High-level illustration of our model (left). b) The detailed
architecture of our novel CAP (right).
Figure 2: a) Learning pixel-level relationships from the convolutional feature
map of size $W\times H\times C$. b) CAP using integral regions to capture both
self and neighborhood contextual information. c) Encapsulating spatial
structure of the integral regions using an LSTM. d) Classification by
learnable aggregation of hidden states of the LSTM.
## Related Work
Unsupervised/weakly-supervised parts/regions based approaches: Such methods
learn a diverse collection of discriminative parts/regions to represent the
complete description of an image. In (Chen et al. 2019), the global structure
of an image is substantially changed by a random patch-shuffling mechanism to
select informative regions. An adversarial loss is used to learn essential
patches. In (Ge, Lin, and Yu 2019), Mask R-CNN and conditional random field
are used for object detection and segmentation. A bidirectional LSTM is used
to encode rich complementary information from selected part proposals for
classification. A hierarchical bilinear pooling framework is presented in (Yu
et al. 2018a) to learn the inter-layer part feature interaction from
intermediate convolution layers. This pooling scheme enables inter-layer
feature interaction and discriminative part feature learning in a mutually
reinforced manner. In (Cai, Zuo, and Zhang 2017), a higher-order integration
of hierarchical convolutional features is described for representing parts
semantic at different scales. A polynomial kernel-based predictor is defined
for modelling part interaction using higher-order statistics of convolutional
activations. A general pooling scheme is demonstrated in (Cui et al. 2017) to
represent higher-order and nonlinear feature interactions via compact and
explicit feature mapping using kernels. Our approach is complementary to these
methods: it explores integral regions and learns to attend to them using a
bilinear pooling that encodes partial information from multiple integral
regions into a comprehensive feature vector for subordinate classification.
Object and/or part-level attention-based approaches: Recently, there has been
significant progress to include attention mechanisms (Zhao, Jia, and Koltun
2020; Leng, Liu, and Chen 2019; Bello et al. 2019; Parmar et al. 2019) to
boost image recognition accuracy. It is also explored in FGVC (Zheng et al.
2019; Ji et al. 2018; Sun et al. 2018). In (Zheng et al. 2020), a part
proposal network produces several local attention maps, and a part
rectification network learns rich part hierarchies. Recurrent attention in
(Fu, Zheng, and Mei 2017) learns crucial regions at multiple scales. The
attended regions are cropped and scaled up with a higher resolution to compute
rich features. Object-part attention model (OPAM) in (Peng, He, and Zhao 2018)
incorporates object-level attention for object localization, and part-level
attention for the vital parts selection. Both jointly learn multi-view and
multi-scale features to improve performance. In (Liu et al. 2019), a
bidirectional attention-recognition model (BARM) is proposed to optimize the
region proposals via a feedback path from the recognition module to the part
localization module. Similarly, in attention pyramid hierarchy (Ding et al.
2020), top-down and bottom-up attentions are integrated to learn both high-
level semantic and low-level detailed feature representations. In (Rodríguez
et al. 2020), a modular feed-forward attention mechanism consisting of
attention modules and attention gates is applied to learn low-level feature
activations. Our novel paradigm is a step forward and takes inspiration from
these approaches. It is advantageous over the existing methods since it uses a
single network, and the proposed attention mechanism learns to attend to both
appearance and shape information from a single-scale image in a hierarchical
fashion by exploring integral regions. We further extend it by innovating the
classification layer, where the subtle changes in integral regions are learned
by focusing on the most informative hidden states of an LSTM.
## Proposed Approach
The overall pipeline of our model is shown in Fig. 1a. It takes an input image
and provides output as a subordinate class label. To solve this, we are given
$N$ images $I=\\{I_{n}|n=1,\dots,N\\}$ and their respective fine-grained
labels. The aim is to find a mapping function $\mathcal{F}$ that predicts
$\hat{y}_{n}=\mathcal{F}(I_{n})$, which matches the true label $y_{n}$. The
ultimate goal is to learn $\mathcal{F}$ by minimizing a loss
$L(y_{n},\hat{y}_{n})$ between the true and the predicted label. Our model
consists of three elements (Fig. 1a): 1) a base CNN
$\mathcal{F}_{b}(.;\theta_{b})$, and our novel 2) CAP
$\mathcal{F}_{c}(.;\theta_{c})$ and 3) classification
$\mathcal{F}_{d}(.;\theta_{d})$ modules. We aim to learn the model’s
parameters $\theta=\\{\theta_{b},\theta_{c},\theta_{d}\\}$ via end-to-end
training. We use a SotA CNN architecture as the base CNN
$\mathcal{F}_{b}(.;\theta_{b})$ and thus focus on the design and
implementation of the remaining two modules, $\mathcal{F}_{c}(.;\theta_{c})$
and $\mathcal{F}_{d}(.;\theta_{d})$.
### Context-aware attentional pooling (CAP)
It takes the output of a base CNN as an input. Let us consider
$\mathbf{x}=\mathcal{F}_{b}(I_{n};\theta_{b})$ to be the convolutional feature
map as the output of the base network $F_{b}$ for input image $I_{n}$. The
proposed CAP considers contextual information from pixel-level to small
patches to large patches to image-level. The pixel refers to a spatial
location in the convolutional feature map $\mathbf{x}$ of width $W$, height
$H$ and channels $C$. The aim is to capture contextual information
hierarchically to better model the subtle changes observed in FGVC tasks. Our
attention mechanism learns to emphasize pixels, as well as regions of
different sizes located in various parts of the image $I_{n}$. At the pixel
level, we explicitly learn the relationships between pixels, i.e.
$p(\mathbf{x}_{i}|\mathbf{x}_{j};\theta_{p})$ for all $i\neq j$ with
$1\leq i,j\leq W\times H$, even when they are located far apart in
$\mathbf{x}$. It signifies how much the model should attend to the $i^{th}$
location when synthesizing the $j^{th}$ position in $\mathbf{x}$ (Fig. 2a). To
achieve this,
we compute the attention map $\theta_{p}$ by revisiting the self-attention
concept (Zhang et al. 2018) where key
$k(\mathbf{x})=\mathbf{W}_{k}\mathbf{x}$, query
$q(\mathbf{x})=\mathbf{W}_{q}\mathbf{x}$ and value
$v(\mathbf{x})=\mathbf{W}_{v}\mathbf{x}$ in $\mathbf{x}$ are computed using
separate $1\times 1$ convolutions. The attentional output feature map
$\mathbf{o}$ is a dot-product of attention map $\theta_{p}$ and $\mathbf{x}$.
$\theta_{p}=\\{\mathbf{W}_{k}$,
$\mathbf{W}_{q},\mathbf{W}_{v}\\}\in\theta_{c}$ is learned.
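A minimal Keras sketch of this step is given below (our own illustration; the
key/query width `d_k` is a free choice not specified above, and in a real
model the layers would be created once rather than per call):

```python
import tensorflow as tf

def pixel_self_attention(x, d_k=64):
    """Sketch of the pixel-level self-attention: x has shape (B, W, H, C)."""
    W, H, C = x.shape[1], x.shape[2], x.shape[3]
    k = tf.keras.layers.Conv2D(d_k, 1)(x)  # key   k(x) = W_k x  (1x1 conv)
    q = tf.keras.layers.Conv2D(d_k, 1)(x)  # query q(x) = W_q x
    v = tf.keras.layers.Conv2D(C, 1)(x)    # value v(x) = W_v x
    k = tf.reshape(k, (-1, W * H, d_k))
    q = tf.reshape(q, (-1, W * H, d_k))
    v = tf.reshape(v, (-1, W * H, C))
    # theta_p: how much location i is attended when synthesizing location j
    theta_p = tf.nn.softmax(tf.matmul(q, k, transpose_b=True), axis=-1)
    o = tf.matmul(theta_p, v)              # attentional output feature map
    return tf.reshape(o, (-1, W, H, C))

# e.g. o = pixel_self_attention(tf.random.normal((1, 42, 42, 512)))
```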
Proposing integral regions: To learn contextual information efficiently, we
propose multiple integral regions with varying levels of coarseness on the
feature map $\mathbf{o}$. The level of coarseness is captured by rectangular
regions of different sizes. Let us consider the smallest region
$r(i,j,\Delta_{x},\Delta_{y})$ of width $\Delta_{x}$ and height $\Delta_{y}$,
located (top-left corner) at the $i^{th}$ column and $j^{th}$ row of
$\mathbf{o}$. Using $r(i,j,\Delta_{x},\Delta_{y})$, we derive a set of regions
by varying their widths and heights i.e.
$R=\\{r(i,j,m\Delta_{x},n\Delta_{y})\\}$; $m,n=1,2,3,\dots$ and
$i<i+m\Delta_{x}\leq W$, $j<j+n\Delta_{y}\leq H$. This is illustrated in Fig.
1b (left) for a given spatial location ($i,j$). The goal is to generate a
similar set of regions $R$ at various spatial locations ($0<i<W$, $0<j<H$) in
$\mathbf{o}$. In this way, we generate a final set of regions
$\mathcal{R}=\\{R\\}$ located at different places with different sizes and
aspect ratios, as shown in Fig. 1b. The result is a comprehensive,
context-aware representation that hierarchically captures the rich contextual
information characterizing subtle changes in images.
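For illustration, the region set $\mathcal{R}$ can be enumerated as in the
sketch below (the paper reports $|\mathcal{R}|=27$ with its particular
anchor/size selection, which we do not reproduce exactly):

```python
def integral_regions(W, H, dx, dy, step):
    """Enumerate rectangles r(i, j, m*dx, n*dy) that fit in a W x H map."""
    regions = []
    for i in range(0, W, step):          # anchor column
        for j in range(0, H, step):      # anchor row
            m = 1
            while i + m * dx <= W:
                n = 1
                while j + n * dy <= H:
                    regions.append((i, j, m * dx, n * dy))
                    n += 1
                m += 1
    return regions

# e.g. on the 42 x 42 upsampled feature map with the paper's settings:
R = integral_regions(42, 42, dx=7, dy=7, step=7)
```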
Bilinear pooling: There are $|\mathcal{R}|$ regions, with sizes varying from a
minimum of $\Delta_{x}\times\Delta_{y}\times C$ to a maximum of $W\times
H\times C$. The goal is to represent these variable-size regions, $(X\times
Y\times C)\Rightarrow(w\times h\times C)$, with a fixed-size feature map.
Thus, we use bilinear pooling, i.e. bilinear interpolation, to implement
differentiable image transformations, which require an indexing operation. Let
$T_{\psi}(\mathbf{y})$ be the coordinate transformation with parameters $\psi$
and $\mathbf{y}=(i,j)\in\mathbb{R}^{2}$ denotes a region coordinates at which
the feature value is $\mathbf{R}(\mathbf{y})\in\mathbb{R}^{C}$. The
transformed image $\mathbf{\tilde{R}}$ at the target coordinate
$\mathbf{\tilde{y}}$ is:
$\mathbf{\tilde{R}}(\mathbf{\tilde{y}})=\sum_{\mathbf{y}}\mathbf{R}(T_{\psi}(\mathbf{y}))\text{
}K(\mathbf{\tilde{y}},T_{\psi}(\mathbf{y})),$ (1)
where $\mathbf{R}(T_{\psi}(\mathbf{y}))$ is the image indexing operation and
is non-differentiable; thus, the way gradients propagate through the network
depends on the kernel $K(.,.)$. In bilinear interpolation, the kernel
$K(\mathbf{y}_{1},\mathbf{y}_{2})=0$ when $\mathbf{y}_{1}$ and
$\mathbf{y}_{2}$ are not direct neighbors. Therefore, the sub-pixel gradients
(i.e. the feature value difference between neighboring locations in the
original region) flow through during propagation (Jiang et al. 2019). This is
an inherent flaw of bilinear interpolation, since the sub-pixel gradients do
not reflect large-scale changes, which cannot be captured by the immediate
neighborhood of a point. To overcome this, several variants (Jiang et al.
2019; Lin and Lucey 2017) have been proposed. For our work, however, we
exploit this flaw to capture the subtle changes in all regions via sub-pixel
gradients. Note that bilinear interpolation, although not differentiable at
all points due to the floor and ceiling functions, can backpropagate the error
and is differentiable for most inputs, as mentioned in the seminal work on
Spatial Transformer Networks (Jaderberg et al. 2015). We use the bilinear
kernel $K(.,.)$ in (1) to pool fixed-size features $\bar{f}_{r}$ ($w\times
h\times C$) from all $r\in\mathcal{R}$.
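In TensorFlow this pooling step can be sketched with
`tf.image.crop_and_resize`, whose bilinear resampling plays the role of the
kernel $K(.,.)$ in (1) (the box-normalisation convention here is approximate
and our own):

```python
import tensorflow as tf

def pool_regions(o, regions, w=7, h=7):
    """Pool a fixed (h x w x C) feature from each region of the attended
    map o of shape (1, H, W, C) via bilinear interpolation."""
    H, W = o.shape[1], o.shape[2]
    # normalised [y1, x1, y2, x2] boxes from (col, row, width, height)
    boxes = tf.constant([[j / H, i / W, (j + dy) / H, (i + dx) / W]
                         for (i, j, dx, dy) in regions], dtype=tf.float32)
    box_idx = tf.zeros(len(regions), dtype=tf.int32)  # all boxes on image 0
    return tf.image.crop_and_resize(o, boxes, box_idx, crop_size=(h, w))

# e.g. f_bar = pool_regions(o, R) with o from the attention step above
```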
Context-aware attention: In this step, we capture the contextual information
using our novel attention mechanism, which transforms $\bar{f}_{r}$ into a
weighted version of itself, conditioned on the rest of the feature maps
$\bar{f}_{r^{\prime}}$ ($r,r^{\prime}\in\mathcal{R}$). It enables our model to
selectively focus on the more relevant integral regions to generate holistic
context information. The proposed context-aware attention takes a query
$\mathbf{q}(\bar{f}_{r})$, maps it against a set of keys
$\mathbf{k}(\bar{f}_{r^{\prime}})$ associated with the integral regions
$r^{\prime}$ in a given image, and returns as output a context vector
$\mathbf{c}_{r}$ computed as:
$\begin{split}\mathbf{c}_{r}=\sum_{r^{\prime}=1}^{|\mathcal{R}|}&\alpha_{r,r^{\prime}}\bar{f}_{r^{\prime}}\text{
,
}\alpha_{r,r^{\prime}}=\text{softmax}\left(W_{\alpha}\beta_{r,r^{\prime}}+b_{\alpha}\right)\\\
\beta_{r,r^{\prime}}&=\text{tanh}\left(\mathbf{q}(\bar{f}_{r})+\mathbf{k}(\bar{f}_{r^{\prime}})+b_{\beta}\right)\\\
\mathbf{q}(\bar{f}_{r})&=W_{\beta}\bar{f}_{r}\text{ and
}\mathbf{k}(\bar{f}_{r^{\prime}})=W_{\beta^{{}^{\prime}}}\bar{f}_{r^{\prime}},\end{split}$
(2)
where weight matrices $W_{\beta}$ and $W_{\beta^{\prime}}$ are for estimating
the query and key from the respective feature maps $\bar{f}_{r}$ and
$\bar{f}_{r^{\prime}}$; $W_{\alpha}$ is their nonlinear combination;
$b_{\alpha}$ and $b_{\beta}$ are the biases. These matrices and biases
($\\{W_{\beta},W_{\beta^{\prime}},W_{\alpha},b_{\alpha},b_{\beta}\\}\in\theta_{c}$)
are learnable parameters. The context-aware attention element
$\alpha_{r,r^{\prime}}$ captures the similarity between the feature maps
$\bar{f}_{r}$ and $\bar{f}_{r^{\prime}}$ of regions $r$ and $r^{\prime}$,
respectively. The attention-focused context vector $\mathbf{c}_{r}$ determines
the strength of $\bar{f}_{r}$, conditioned on itself and its neighborhood
context. This applies to all integral regions $r$ (see Fig. 2b).
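A numpy sketch of (2) follows (random matrices stand in for the learned
parameters $W_{\beta}$, $W_{\beta^{\prime}}$, $W_{\alpha}$, $b_{\alpha}$,
$b_{\beta}$; the hidden width `d` is our choice):

```python
import numpy as np

def context_vectors(F, d=128, rng=np.random.default_rng(0)):
    """Eq. (2) on a matrix F of shape (|R|, D) of flattened region features."""
    R, D = F.shape
    W_b  = rng.standard_normal((D, d)) * 0.01   # query weights W_beta
    W_bp = rng.standard_normal((D, d)) * 0.01   # key weights W_beta'
    W_a  = rng.standard_normal(d) * 0.01        # combination weights W_alpha
    b_b = b_a = 0.0                             # biases (zero placeholders)
    q, k = F @ W_b, F @ W_bp                    # queries and keys, (R, d)
    beta = np.tanh(q[:, None, :] + k[None, :, :] + b_b)   # (R, R, d)
    logits = beta @ W_a + b_a                              # (R, R)
    logits -= logits.max(axis=1, keepdims=True)            # stable softmax
    alpha = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)
    return alpha @ F                            # c_r = sum_r' alpha_{r,r'} f_r'
```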
Spatial structure encoding: The context vectors
$\mathbf{c}=\\{\mathbf{c}_{r}|r=1\dots|\mathcal{R}|\\}$ characterize the
attention and saliency. To include the structural information involving the
spatial arrangements of regions (see Fig. 1b and 2b), we represent
$\mathbf{c}$ as a sequence of regions (Fig. 2c) and adapt a recurrent network
to capture the structural knowledge using its internal states, which is
modeled via hidden units $h_{r}\in\mathbb{R}^{n}$. Thus, the internal state
representing the region $r$ is updated as:
$h_{r}=\mathcal{F}_{h}(h_{r-1},f_{r};\theta_{h})$, where $\mathcal{F}_{h}$ is
a nonlinear function with learnable parameter $\theta_{h}$. We use a fully-
gated LSTM as $\mathcal{F}_{h}$ (Hochreiter and Schmidhuber 1997) which is
capable of learning long-term dependencies. The parameter
$\theta_{h}\in\theta_{c}$ consists of weight matrices and biases linking
input, forget and output gates, and the cell state of $\mathcal{F}_{h}$. For
simplicity, we omit the equations for these components and refer interested
readers to (Hochreiter and Schmidhuber 1997) for further details.
To improve the generalizability and lower the computational complexity of our
CAP, the context feature $f_{r}$ is extracted from the context vector
$\mathbf{c}_{r}$ via global average pooling (GAP). This results in the
reduction of feature map size from ($w\times h\times C)$ to ($1\times C$). The
sequence of hidden states
$h=(h_{1},h_{2},\dots,h_{r},\dots,h_{|\mathcal{R}|})$ corresponding to the
input sequence of context feature
$f=(f_{1},f_{2},\dots,f_{r},\dots,f_{|\mathcal{R}|})$ (see Fig. 1b) is used by
our classification module $\mathcal{F}_{d}(.;\theta_{d})$.
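The encoding step then reduces to a few lines (a Keras sketch; the hidden size
of 256 is our placeholder, not a value stated above):

```python
import tensorflow as tf

# c: context vectors stacked as (B, |R|, w, h, C), e.g. (1, 27, 7, 7, 512)
c = tf.random.normal((1, 27, 7, 7, 512))        # stand-in for real features
f = tf.reduce_mean(c, axis=[2, 3])              # GAP: (B, |R|, C)
h = tf.keras.layers.LSTM(256, return_sequences=True)(f)  # hidden states h_r
```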
### Classification
To further guide our model to discriminate subtle changes, we propose a
learnable pooling approach (Fig. 2c), which aggregates information by grouping
similar responses from the hidden states $h_{r}$. It is inspired by existing
feature encoding approaches such as NetVLAD (Arandjelovic et al. 2016). We
adapt this differentiable clustering approach for the soft assignment of the
hidden state responses $h_{r}$ to clusters $\kappa$ and their contribution to
the VLAD encoding:
$\begin{split}\gamma_{\kappa}(h_{r})&=\frac{e^{W_{\kappa}^{T}h_{r}+b_{\kappa}}}{\sum_{i=1}^{\mathcal{K}}e^{W_{i}^{T}h_{r}+b_{i}}}\\\
N_{v}(o,\kappa)&=\sum_{r=1}^{|\mathcal{R}|}\gamma_{\kappa}(h_{r})h_{r}(o)\text{,
}\hat{y}=\text{softmax}(W_{N}N_{v})\end{split}$ (3)
where $W_{i}$ and $b_{i}$ are the learnable cluster weights and biases, and
$T$ denotes transpose. The term $\gamma_{\kappa}(h_{r})$ is the soft
assignment of $h_{r}$ to cluster $\kappa$, and $N_{v}$ encodes the hidden
state responses from all the regions $r\in\mathcal{R}$. In the original
implementation of VLAD, the weighted sum of the residuals is used,
i.e.
$\sum_{r=1}^{|\mathcal{R}|}\gamma_{\kappa}(h_{r})\left(h_{r}(o)-\hat{c}_{\kappa}(o)\right)$
in which $\hat{c}_{\kappa}$ is the $\kappa^{th}$ cluster center and $o\in
h_{r}$ is one of the elements in the hidden state response. We adapt the
simplified version that averages the actual responses instead of residuals
(Miech, Laptev, and Sivic 2017), which requires fewer parameters and computing
operations. The encoded response is mapped into prediction probability
$\hat{y}$ by using a learnable weight $W_{N}$ and softmax. The learnable
parameter for the classification module $\mathcal{F}_{d}$ is
$\theta_{d}=\\{W_{i},b_{i},W_{N}\\}$.
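A numpy sketch of (3) is given below (random weights stand in for the learned
$W_{i}$, $b_{i}$, $W_{N}$):

```python
import numpy as np

def learnable_pooling(h, K=32, n_classes=200, rng=np.random.default_rng(0)):
    """Eq. (3): soft-assign hidden states h of shape (|R|, n) to K clusters,
    aggregate the responses (residual-free NetVLAD variant), then classify."""
    R, n = h.shape
    W_k = rng.standard_normal((n, K)) * 0.01
    logits = h @ W_k                                # + b_k, zero here
    logits -= logits.max(axis=1, keepdims=True)
    gamma = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)
    N_v = gamma.T @ h                               # (K, n): sum_r gamma_k(h_r) h_r
    W_N = rng.standard_normal((K * n, n_classes)) * 0.01
    z = N_v.reshape(-1) @ W_N
    z -= z.max()
    return np.exp(z) / np.exp(z).sum()              # prediction probabilities
```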
Dataset | #Train / #Test | #Classes | Our | Past Best (primary) | Past Best (primary + secondary)
---|---|---|---|---|---
Aircraft | 6,667 / 3,333 | 100 | 94.9 | 93.0 (Chen et al. 2019) | 92.9 (Yu et al. 2018b)
Food-101 | 75,750 / 25,250 | 101 | 98.6 | 93.0 (Huang et al. 2019b) | 90.4 (Cui et al. 2018)
Stanford Cars | 8,144 / 8,041 | 196 | 95.7 | 94.6 (Huang et al. 2019b) | 94.8 (Cubuk et al. 2019)
Stanford Dogs | 12,000 / 8,580 | 120 | 96.1 | 93.9 (Ge, Lin, and Yu 2019) | 97.1 (Ge, Lin, and Yu 2019)
CUB-200 | 5,994 / 5,794 | 200 | 91.8 | 90.3 (Ge, Lin, and Yu 2019) | 90.4 (Ge, Lin, and Yu 2019)
Oxford Flower | 2,040 / 6,149 | 102 | 97.7 | 96.4 (Xie et al. 2016) | 97.7 (Chang et al. 2020)
Oxford Pets | 3,680 / 3,669 | 37 | 97.3 | 95.9 (Huang et al. 2019b) | 93.8 (Peng, He, and Zhao 2018)
NABirds | 23,929 / 24,633 | 555 | 91.0 | 86.4 (Luo et al. 2019) | 87.9 (Cui et al. 2018)
Table 1: Dataset statistics and performance evaluation. FGVC accuracy (%) of our model and the previous best using only the primary dataset. The last column involves the transfer/joint learning strategy consisting of more than one dataset.
Aircraft | Food-101 | Stanford Cars
---|---|---
Method | ACC | Method | ACC | Method | ACC
DFL (Wang et al. 2018) | 92.0 | WISeR (Martinel et al., 2018) | 90.3 | BARM (Liu et al. 2019) | 94.3
BARM (Liu et al. 2019) | 92.5 | DSTL∗ (Cui et al. 2018) | 90.4 | MC${}_{Loss}^{*}$ (Chang et al. 2020) | 94.4
GPipe (Huang et al. 2019b) | 92.7 | MSMVFA (Jiang et al. 2020) | 90.6 | DCL(Chen et al. 2019) | 94.5
MC${}_{Loss}^{*}$ (Chang et al. 2020) | 92.9 | JDNet∗ (Zhao et al. 2020) | 91.2 | GPipe (Huang et al. 2019b) | 94.6
DCL (Chen et al. 2019) | 93.0 | GPipe (Huang et al. 2019b) | 93.0 | AutoAug∗ (Cubuk et al. 2019) | 94.8
Proposed | 94.9 | Proposed | 98.6 | Proposed | 95.7
CUB-200 | Oxford-IIIT Pets | NABirds
iSQRT (Li et al. 2018) | 88.7 | NAC (Simon and Rodner 2015) | 91.6 | T-Loss (Taha et al. 2020) | 79.6
DSTL∗ (Cui et al. 2018) | 89.3 | TL-Attn∗ (Xiao et al. 2015) | 92.5 | PC-CNN (Dubey et al. 2018a) | 82.8
DAN (Hu et al. 2019) | 89.4 | InterAct (Xie et al. 2016) | 93.5 | MaxEnt∗ (Dubey et al. 2018b) | 83.0
BARM (Liu et al. 2019) | 89.5 | OPAM∗ (Peng, He, and Zhao 2018) | 93.8 | Cross-X (Luo et al. 2019) | 86.4
CPM∗ (Ge, Lin, and Yu 2019) | 90.4 | GPipe (Huang et al. 2019b) | 95.9 | DSTL∗ (Cui et al. 2018) | 87.9
Proposed | 91.8 | Proposed | 97.3 | Proposed | 91.0
Table 2: Accuracy (%) comparison with the recent top-five SotA approaches.
Methods marked with * involve a transfer/joint learning strategy for
objects/patches/regions consisting of more than one dataset (primary and
secondary). Please refer to the supplementary section at the end for the
results of Stanford Dogs and Oxford Flowers.
## Experiments and Discussion
We comprehensively evaluate our model on eight widely used benchmark FGVC
datasets: Aircraft (Maji et al. 2013), Food-101 (Bossard, Guillaumin, and Gool
2014), Stanford Cars (Krause et al. 2013), Stanford Dogs (Khosla et al. 2011),
Caltech Birds (CUB-200) (Wah et al. 2011), Oxford Flower (Nilsback and
Zisserman 2008), Oxford-IIIT Pets (Parkhi et al. 2012), and NABirds (Van Horn
et al. 2015). We do not use any bounding box/part annotation. Thus, we do not
compare with methods which rely on these. Statistics of datasets and their
train/test splits are shown in Table 1. We use the top-1 accuracy (%) for
evaluation.
Experimental settings: In all our experiments, we resize images to size
$256\times 256$, apply data augmentation techniques of random rotation ($\pm
15$ degrees), random scaling ($1\pm 0.15$) and then random cropping to select
the final size of $224\times 224$ from $256\times 256$. The last Conv layer of
the base CNN (e.g. $7\times 7$ pixels) is increased to $42\times 42$ by using
an upsampling layer (as in GAN) and then fed into our CAP (Fig. 1a) to pool
features from multiple integral regions $\mathcal{R}$. We fix bilinear pooling
size of $w=h=7$ for each region with minimum width $\Delta_{x}=7$ and height
$\Delta_{y}=7$. We use a spatial gap of 7 pixels between consecutive
anchors to generate $|\mathcal{R}|=27$ integral regions. This is decided
experimentally by considering the trade-off between accuracy and computational
complexity. We set the cluster size to 32 in our learnable pooling approach.
We apply Stochastic Gradient Descent (SGD) optimizer to optimize the
categorical cross-entropy loss function. The SGD is initialized with a
momentum of 0.99, and initial learning rate 1e-4, which is multiplied by 0.1
after every 50 epochs. The model is trained for 150 epochs using an NVIDIA
Titan V GPU (12 GB). We use Keras+Tensorflow to implement our algorithm.
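A hedged Keras/TensorFlow (>=2.6) sketch of this setup is given below; the specific layer choices (`Resizing`, `RandomRotation`, `RandomZoom`, `RandomCrop`, `UpSampling2D`) are our reading of the description above, not the authors' released code.

```python
# Sketch of the settings above: 256x256 resize, +/-15 degree rotation,
# 1 +/- 0.15 scaling, 224x224 random crop, and bilinear upsampling of the
# base CNN's last 7x7 feature map to 42x42.
import tensorflow as tf
from tensorflow.keras import layers

augment = tf.keras.Sequential([
    layers.Resizing(256, 256),
    layers.RandomRotation(15 / 360),   # factor is a fraction of 2*pi -> +/-15 deg
    layers.RandomZoom(0.15),           # random scaling of 1 +/- 0.15
    layers.RandomCrop(224, 224),
])

base = tf.keras.applications.ResNet50(include_top=False,
                                      input_shape=(224, 224, 3))
x = layers.UpSampling2D(size=6, interpolation="bilinear")(base.output)  # 7 -> 42
model = tf.keras.Model(base.input, x)  # features fed to CAP (not shown here)

opt = tf.keras.optimizers.SGD(learning_rate=1e-4, momentum=0.99)  # per the text
print(model.output_shape)              # (None, 42, 42, 2048)
```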
Quantitative results and comparison to the SotA approaches: Overall, our model
outperforms the SotA approaches by a clear margin on all datasets except the
Stanford Dogs (Khosla et al. 2011) and Oxford Flowers (Nilsback and Zisserman
2008) (Table 1). In Table 1, we compare our performance with the two previous
bests (last two columns). The first uses only the target dataset (primary) for
training and evaluation, which is also the setting of our model. The other
(last column) uses the primary plus additional secondary datasets (e.g.
ImageNet, COCO, iNat) for joint/transfer learning of objects/patches/regions
during training. It is worth mentioning that we use only the primary datasets,
and our performance on most datasets is significantly better than that of
approaches using additional datasets. This demonstrates the benefit of the proposed approach
for discriminating fine-grained changes in recognizing subordinate categories.
Moreover, we use only one network for end-to-end training, and our novel CAP
and classification layers are added on top of a base CNN. Therefore, the major
computations are associated with the base CNNs.
Using our model, the two highest gains are 5.6% and 3.1% in the respective
Food-101 (Bossard, Guillaumin, and Gool 2014) and NABirds (Van Horn et al.
2015) datasets. In Dogs, our method (96.1%) is significantly better than the
best SotA approach (93.9%) (Ge, Lin, and Yu 2019) using only primary data.
However, their accuracy increases to 97.1% when joint fine-tuning with
selected ImageNet images is used. Similarly, in Flowers, our accuracy (97.7%)
is the same as in (Chang et al. 2020) which uses both primary and secondary
datasets, and we achieve an improvement of 1.3% compared to the best SotA
approach in (Xie et al. 2016) using only primary data. We also compare our
model’s accuracy with the top-five SotA approaches on each dataset in Table 2.
Our accuracy is significantly higher than SotA methods using primary data in
all six datasets in Table 2 and two in the supplementary (provided at the end).
Furthermore, it is also considerably higher than SotA methods, which use both
primary and secondary data in six datasets (Aircraft, Food-101, Cars, CUB-200,
Pets and NABirds). This clearly demonstrates our model’s ability to
discriminate subtle changes in recognizing subordinate categories without
requiring additional datasets and/or subnetworks, and thus it offers easy
implementation and little computational overhead in solving FGVC.
Base CNN | Plane | Cars | Dogs | CUB | Flowers | Pets
---|---|---|---|---|---|---
ResNet-50 | 94.9 | 94.9 | 95.8 | 90.9 | 97.5 | 96.7
Incep. V3 | 94.8 | 94.8 | 95.7 | 91.4 | 97.6 | 96.2
Xception | 94.1 | 95.7 | 96.1 | 91.8 | 97.7 | 97.0
DenseNet | 94.6 | 93.6 | 95.5 | 91.6 | 97.6 | 96.9
NASNet-M | 93.8 | 93.7 | 96.0 | 89.7 | 97.7 | 97.3
Mob-NetV2 | 94.4 | 94.0 | 95.9 | 89.2 | 97.4 | 96.4
Table 3: Our model’s accuracy (%) with different SotA base CNN architectures.
Previous best accuracies for these results are: Aircraft: 93.0 (Chen et al.
2019), Cars: 94.6 (Huang et al. 2019b), Dogs: 93.9 (Ge, Lin, and Yu 2019),
CUB: 90.3 (Ge, Lin, and Yu 2019), Flowers: 96.4 (Xie et al. 2016), and Pets:
95.9 (Huang et al. 2019b). The result for the NABirds dataset is included in
the supplementary document at the end.
Ablation study: We compare the performance of our approach using the
benchmarked base CNN architectures such as ResNet-50 (He et al. 2016),
Inception-V3 (Szegedy et al. 2016), Xception (Chollet 2017) and DenseNet121
(Huang et al. 2017), as well as SotA lightweight architectures such as
NASNetMobile (Zoph et al. 2018) and MobileNetV2 (Sandler et al. 2018). The
performance is shown in Table 3. In all datasets, both standard and
lightweight architectures have performed exceptionally well when our proposed
CAP and classification modules are incorporated. Our model even outperforms
the previous best (primary data) with both standard and lightweight base CNNs,
except on the Cars and CUB-200 datasets, where only our model with standard
base CNNs exceeds the previous best. Our results in Table 1 & 2 are the best accuracy
among these backbones. Nevertheless, the accuracy of our model using any
standard backbones (ResNet50 / Inception V3 / Xception; Table 3) is better
than the SotA. In Flowers and Pets datasets, the lightweight NASNetMobile is
the best performer, and the MobileNetV2 is not far behind (Table 3). This
could be linked to the dataset size, since these two are the smallest in
comparison to the rest (Table 1). However, in other datasets (e.g. Aircraft,
Cars and Dogs), there is only a small gap in performance between standard and
lightweight CNNs. These lightweight CNNs involve significantly less
computational costs, and by adding our modules, the performance can be as
competitive as the standard CNNs. This proves the importance of our modules in
enhancing performance and its broader applicability.
We have also evaluated the above base CNNs (B), and the influence of our novel
CAP (+C) and the classification module (+E) in the recognition accuracy on
Aircraft, Cars and Pets datasets (more in the supplementary at the end). The
results are shown in Table 4. It is evident that the accuracy improves as we
add our modules to the base networks, i.e., (B+C+E) $>$ (B+C) $>$ (B+E) $>$ B,
resulting in the largest gain contributed by our novel CAP (B+C). This
signifies the impact of our CAP. In B+C, the minimum gain is 7.2%, 5.7% and
5.1% on the respective Aircraft, Cars and Pets datasets for the Inception-V3
as a base CNN. Similarly, the highest gain is 12.5% and 11.3% in Aircraft and
Cars, respectively. These two datasets are relatively larger than the Pets
(Table 1) in which the highest gain (7.9%) is achieved by using ResNet-50 as a
base CNN. We also observe that there is a significant gap in baseline accuracy
between lightweight and standard base CNNs in larger (Aircraft and Cars)
datasets. These gaps are considerably reduced when our CAP is added. There is
a further increase in accuracy when we add the classification module (B+C+E).
This justifies the inclusion of our novel encoding, which groups hidden
responses using residual-less NetVLAD and then infers the class probability
from these encoded responses using learnable pooling. For base CNNs, we use the
standard transfer learning by fine-tuning it on the target dataset using the
same data augmentation and hyper-parameters. For our models, we use pre-
trained weights for faster convergence. We experimentally found that random
initialization takes nearly twice as many iterations as pre-trained weights to
converge to similar accuracy. A similar observation is reported in (He,
Girshick, and Dollár 2019).
| Aircraft/Planes | Stanford Cars | Oxford-IIIT Pets | Param | Time
---|---|---|---|---|---
Base CNN | Base | B+C | B+E | B+C+E | Base | B+C | B+E | B+C+E | Base | B+C | B+E | B+C+E | (M) | ms
ResNet-50 | 79.7 | 88.8 | 81.1 | 94.9 | 84.7 | 91.5 | 85.7 | 94.9 | 86.8 | 94.7 | 86.3 | 96.7 | 36.9 | 4.1
Incep. V3 | 82.4 | 89.6 | 83.3 | 94.8 | 85.7 | 91.4 | 85.7 | 94.8 | 90.2 | 95.3 | 92.4 | 96.2 | 35.1 | 3.8
Xception | 79.5 | 89.5 | 89.3 | 94.1 | 84.8 | 91.6 | 89.1 | 95.7 | 91.0 | 96.2 | 96.0 | 97.0 | 34.2 | 4.2
NASNet-M | 77.1 | 89.6 | 80.4 | 93.8 | 80.4 | 91.7 | 82.7 | 93.7 | 89.9 | 95.6 | 94.9 | 97.3 | 9.7 | 3.5
Table 4: Performance (accuracy in %) of our model with the addition of our
novel CAP (+C) and classification (+E) module to various SotA base (B) CNNs.
The observed accuracy trend is (B+C+E) $>$ (B+C) $>$ (B+E) $>$ B for all base
CNNs. Final model’s (B+C+E) trainable parameters (Param) are given in million
(M) and the respective per-frame inference time in millisecond (ms).
(a) Base CNN (b) Impact on base CNN
(c) CAP (d) CAP + Encoding
(e) $\alpha_{r,r^{\prime}}$ for class 1
(f) $\alpha_{r,r^{\prime}}$ for class 2
(g) $\mathbf{c}_{r}$ of region 1
(h) $\mathbf{c}_{r}$ of region 20
(i) $\mathbf{c}_{r}$ of class 1
(j) t-SNE plot of $\mathbf{c}_{r}$
Figure 3: Discriminability using t-SNE to visualize class separability and compactness (a-d). Aircraft test images using Xception: a) base CNN’s output, b) our CAP’s impact on the base CNN’s output, c) our CAP’s output, and d) our model’s final output. Our CAP’s class-specific attention-aware response for class 1 (e) and class 2 (f) to capture the similarity between 27 integral regions (27$\times$27). Class-specific $\mathbf{c}_{r}$ in (2) for 9 classes ($3\times 3$) from region 1 (g) and 20 (h). Blue to red represents class-specific less to more attention towards that region. Class-specific individual feature response within $\mathbf{c}_{r}$ of the region 1 and class 4 (i). t-SNE plot of $\mathbf{c}_{r}$ representing images from the above 9 classes (j).
 | Aircraft | Cars
---|---|---
Base CNN | #9 | #27 | #36 | #9 | #27 | #36
ResNet-50 | 85.9 | 94.9 | 91.2 | 92.9 | 94.9 | 91.9
Xception | 87.8 | 94.1 | 90.0 | 93.9 | 95.7 | 92.6
NASNet-M | 92.7 | 93.8 | 90.3 | 92.4 | 93.7 | 90.9
Table 5: Accuracy (%) of our model with a varying number of integral regions.
More results are provided in the supplementary document at the end.
We also compare our model’s accuracy using different numbers of regions
$|\mathcal{R}|$, a hyper-parameter determined by $\Delta_{x}$ and
$\Delta_{y}$. The results are shown in Table 5 (best $|\mathcal{R}|=27$). We
have also provided results for top-N accuracy in the supplementary document
at the end. The top-2 accuracy is around 99% and is independent of
the CNN types.
Model complexity: It is measured as the number of trainable parameters in
millions and the per-image inference time in milliseconds (Table 4). It also
depends on the base CNN type (e.g. standard vs lightweight). Given the
number of trainable parameters (9.7M) and inference time (3.5ms), the
performance of the lightweight NASNetMobile is very competitive in comparison
to the rest. The role of secondary data has improved accuracy in (Chang et al.
2020; Cubuk et al. 2019; Ge, Lin, and Yu 2019; Ge and Yu 2017). However, such
models involve multiple resource-intensive steps, making them difficult to
implement. For example, there are 3 steps in (Ge, Lin, and Yu 2019): 1) object
detection and instance segmentation (Mask R-CNN and CRF), 2) complementary
part mining (512 ROIs) and 3) classification using context gating. The model
is trained using 4 GPUs. In contrast, our model can be trained on a single GPU
(12 GB). The per-image inference time is 4.1ms. In (Ge, Lin, and Yu 2019), it
is 27ms for step 3 and additional 227ms in step 2. FCANs (Liu et al. 2016)
reported its inference time as 150ms. Using 27 integral regions and ResNet50
as a base, the training time for the Aircraft is $\sim$4.75 hrs for 150 epochs
(12 batch size). It is $\sim$5.7 hrs for Cars and $\sim$8.5 hrs for Dogs.
Qualitative analysis: To understand the discriminability of our model, we use
t-SNE (Van Der Maaten 2014) to visualize the class separability and
compactness in the features extracted from a base CNN, and our novel CAP and
classification modules. We also analyze the impact of our CAP in enhancing the
discriminability of a base CNN. We use test images in Aircraft and Xception as
a base CNN. In Fig. 3(a-d), it is evident that when we include our CAP +
encoding modules, the clusters are farther apart and compact, resulting in a
clear distinction of various clusters representing different subcategories.
Moreover, the discriminability of the base CNN is significantly improved (Fig.
3b) in comparison to without our modules shown in Fig. 3a. More results are
shown in the supplementary material at the end. We have also looked inside
our CAP by visualizing its class-specific attention-aware response
using $\alpha_{r,r^{\prime}}$ and context vector $\mathbf{c}_{r}$ in (2).
Aircraft images (randomly selected 9 classes) are used in Fig. 3(e-j). Such
results clearly show our model’s power in capturing the context information
for discriminating subtle changes in FGVC problems. We have also included, in
the supplementary information at the end, some examples that are incorrectly
classified by our model, together with an explanation.
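For reference, a minimal sketch of this t-SNE analysis is shown below, using scikit-learn with random placeholder features standing in for the responses actually extracted from the base CNN, CAP, and final layers.

```python
# Embed pooled features with t-SNE to inspect class separability/compactness.
import numpy as np
import matplotlib.pyplot as plt
from sklearn.manifold import TSNE

rng = np.random.default_rng(0)
features = rng.normal(size=(500, 64))    # stand-in for extracted features
labels = rng.integers(0, 9, size=500)    # e.g. 9 randomly selected classes

emb = TSNE(n_components=2, perplexity=30, init="pca").fit_transform(features)
plt.scatter(emb[:, 0], emb[:, 1], c=labels, cmap="tab10", s=8)
plt.title("t-SNE of pooled features")
plt.show()
```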
## Conclusion
We have proposed a novel approach for recognizing subcategories by introducing
a simple formulation of context-aware attention via learning where to look
when pooling features across an image. Our attention allows for explicit
integration of bottom-up saliency by taking advantage of integral regions and
their importance, without requiring the bounding box/part annotations. We have
also proposed a feature encoding by considering the semantic correlation among
the regions and their spatial layouts to encode complementary partial
information. Finally, our model’s SotA results on eight benchmarked datasets,
quantitative/qualitative results and ablation study justify the efficiency of
our approach. Code is available at https://ardhendubehera.github.io/cap/.
## Acknowledgments
This research is supported by the UKIERI-DST grant CHARM (DST
UKIERI-2018-19-10). The GPU used in this research is generously donated by the
NVIDIA Corporation.
> ## References
>
> * Arandjelovic et al. (2016) Arandjelovic, R.; Gronat, P.; Torii, A.;
> Pajdla, T.; and Sivic, J. 2016. NetVLAD: CNN architecture for weakly
> supervised place recognition. In _Proceedings of the IEEE conference on
> computer vision and pattern recognition_ , 5297–5307.
> * Bello et al. (2019) Bello, I.; Zoph, B.; Vaswani, A.; Shlens, J.; and
> Le, Q. V. 2019. Attention augmented convolutional networks. In _IEEE
> International Conference on Computer Vision_ , 3286–3295.
> * Bossard, Guillaumin, and Gool (2014) Bossard, L.; Guillaumin, M.; and
> Gool, L. V. 2014. Food-101 - Mining Discriminative Components with Random
> Forests. In _Proc. 13th Eur. Conf., Part VI_ , volume 8694, 446–461.
> * Cai, Zuo, and Zhang (2017) Cai, S.; Zuo, W.; and Zhang, L. 2017.
> Higher-order integration of hierarchical convolutional activations for fine-
> grained visual categorization. In _Proceedings of the IEEE International
> Conference on Computer Vision_ , 511–520.
> * Chang et al. (2020) Chang, D.; Ding, Y.; Xie, J.; Bhunia, A. K.; Li, X.;
> Ma, Z.; Wu, M.; Guo, J.; and Song, Y.-Z. 2020. The devil is in the
> channels: Mutual-channel loss for fine-grained image classification. _IEEE
> Trans. on Image Processing_ 29: 4683–4695.
> * Chen et al. (2019) Chen, Y.; Bai, Y.; Zhang, W.; and Mei, T. 2019.
> Destruction and construction learning for fine-grained image recognition.
> In _IEEE Conference on Computer Vision and Pattern Recognition_ , 5157–5166.
> * Chollet (2017) Chollet, F. 2017. Xception: Deep learning with depthwise
> separable convolutions. In _IEEE conference on computer vision and pattern
> recognition_ , 1251–1258.
> * Cubuk et al. (2019) Cubuk, E. D.; Zoph, B.; Mane, D.; Vasudevan, V.; and
> Le, Q. V. 2019. Autoaugment: Learning augmentation strategies from data.
> In _Proceedings of the IEEE CVPR_ , 113–123.
> * Cui et al. (2018) Cui, Y.; Song, Y.; Sun, C.; Howard, A.; and Belongie,
> S. 2018. Large scale fine-grained categorization and domain-specific
> transfer learning. In _IEEE CVPR_ , 4109–4118.
> * Cui et al. (2017) Cui, Y.; Zhou, F.; Wang, J.; Liu, X.; Lin, Y.; and
> Belongie, S. 2017. Kernel pooling for convolutional neural networks. In
> _IEEE conference on computer vision and pattern recognition_ , 2921–2930.
> * Ding et al. (2020) Ding, Y.; Wen, S.; Xie, J.; Chang, D.; Ma, Z.; Si,
> Z.; and Ling, H. 2020. Weakly Supervised Attention Pyramid Convolutional
> Neural Network for Fine-Grained Visual Classification. _arXiv preprint
> arXiv:2002.03353_ .
> * Dubey et al. (2018a) Dubey, A.; Gupta, O.; Guo, P.; Raskar, R.; Farrell,
> R.; and Naik, N. 2018a. Pairwise confusion for fine-grained visual
> classification. In _Euro. Conf. on Computer Vision_ , 70–86.
> * Dubey et al. (2018b) Dubey, A.; Gupta, O.; Raskar, R.; and Naik, N.
> 2018b. Maximum-entropy fine grained classification. In _Advances in Neural
> Information Processing Systems_ , 637–647.
> * Fu, Zheng, and Mei (2017) Fu, J.; Zheng, H.; and Mei, T. 2017. Look
> closer to see better: Recurrent attention convolutional neural network for
> fine-grained image recognition. In _IEEE conference on computer vision and
> pattern recognition_ , 4438–4446.
> * Ge, Lin, and Yu (2019) Ge, W.; Lin, X.; and Yu, Y. 2019. Weakly
> supervised complementary parts models for fine-grained image classification
> from the bottom up. In _Proceedings of the IEEE Conference on Computer
> Vision and Pattern Recognition_ , 3034–3043.
> * Ge and Yu (2017) Ge, W.; and Yu, Y. 2017. Borrowing treasures from the
> wealthy: Deep transfer learning through selective joint fine-tuning. In
> _Proceedings of the IEEE conference on computer vision and pattern
> recognition_ , 1086–1095.
> * He, Girshick, and Dollár (2019) He, K.; Girshick, R.; and Dollár, P.
> 2019. Rethinking imagenet pre-training. In _Proceedings of the IEEE
> international conference on computer vision_ , 4918–4927.
> * He et al. (2016) He, K.; Zhang, X.; Ren, S.; and Sun, J. 2016. Deep
> residual learning for image recognition. In _Proc. IEEE conf. Comp. Vis.
> Patt. Recog. (CVPR)_ , 770–778.
> * Herdade et al. (2019) Herdade, S.; Kappeler, A.; Boakye, K.; and Soares,
> J. 2019. Image Captioning: Transforming Objects into Words. In _Advances
> in Neural Info. Processing Systems_ , 11135–11145.
> * Hochreiter and Schmidhuber (1997) Hochreiter, S.; and Schmidhuber, J.
> 1997. Long short-term memory. _Neural computation_ 9(8): 1735–1780.
> * Hu et al. (2019) Hu, T.; Qi, H.; Huang, Q.; and Lu, Y. 2019. See better
> before looking closer: Weakly supervised data augmentation network for fine-
> grained visual classification. _arXiv preprint arXiv:1901.09891_ .
> * Huang et al. (2017) Huang, G.; Liu, Z.; Van Der Maaten, L.; and
> Weinberger, K. Q. 2017. Densely connected convolutional networks. In
> _Proceedings of the IEEE conference on computer vision and pattern
> recognition_ , 4700–4708.
> * Huang et al. (2019a) Huang, L.; Wang, W.; Chen, J.; and Wei, X.-Y.
> 2019a. Attention on attention for image captioning. In _IEEE International
> Conference on Computer Vision_ , 4634–4643.
> * Huang et al. (2019b) Huang, Y.; Cheng, Y.; Bapna, A.; Firat, O.; Chen,
> D.; Chen, M.; Lee, H.; Ngiam, J.; Le, Q. V.; Wu, Y.; et al. 2019b. Gpipe:
> Efficient training of giant neural networks using pipeline parallelism. In
> _Advances in Neural Information Processing Systems_ , 103–112.
> * Jaderberg et al. (2015) Jaderberg, M.; Simonyan, K.; Zisserman, A.; et
> al. 2015. Spatial transformer networks. In _Advances in neural information
> processing systems_ , 2017–2025.
> * Ji et al. (2018) Ji, Z.; Fu, Y.; Guo, J.; Pang, Y.; Zhang, Z. M.; et al.
> 2018. Stacked semantics-guided attention model for fine-grained zero-shot
> learning. In _Advances in Neural Information Processing Systems_ ,
> 5995–6004.
> * Jiang et al. (2020) Jiang, S.; Min, W.; Liu, L.; and Luo, Z. 2020.
> Multi-Scale Multi-View Deep Feature Aggregation for Food Recognition. _IEEE
> Transaction on Image Processing_ 29: 265–276.
> * Jiang et al. (2019) Jiang, W.; Sun, W.; Tagliasacchi, A.; Trulls, E.;
> and Yi, K. M. 2019. Linearized Multi-Sampling for Differentiable Image
> Transformation. In _Proceedings of the IEEE International Conference on
> Computer Vision_ , 2988–2997.
> * Khosla et al. (2011) Khosla, A.; Jayadevaprakash, N.; Yao, B.; and Li,
> F.-F. 2011. Novel dataset for fine-grained image categorization: Stanford
> dogs. In _Proc. CVPR Workshop on Fine-Grained Visual Categorization (FGVC)_
> , volume 2.
> * Krause et al. (2016) Krause, J.; Sapp, B.; Howard, A.; Zhou, H.; Toshev,
> A.; Duerig, T.; Philbin, J.; and Fei-Fei, L. 2016. The unreasonable
> effectiveness of noisy data for fine-grained recognition. In _European
> Conference on Computer Vision_ , 301–320.
> * Krause et al. (2013) Krause, J.; Stark, M.; Deng, J.; and Fei-Fei, L.
> 2013. 3d object representations for fine-grained categorization. In _IEEE
> international conf. on computer vision workshops_ , 554–561.
> * LeCun et al. (1998) LeCun, Y.; Bottou, L.; Bengio, Y.; and Haffner, P.
> 1998. Gradient-based learning applied to document recognition.
> _Proceedings of the IEEE_ 86(11): 2278–2324.
> * Leng, Liu, and Chen (2019) Leng, J.; Liu, Y.; and Chen, S. 2019.
> Context-aware attention network for image recognition. _Neural Computing
> and Applications_ 31(12): 9295–9305.
> * Li et al. (2019) Li, G.; Zhu, L.; Liu, P.; and Yang, Y. 2019. Entangled
> Transformer for Image Captioning. In _Proceedings of the IEEE International
> Conference on Computer Vision_ , 8928–8937.
> * Li et al. (2018) Li, P.; Xie, J.; Wang, Q.; and Gao, Z. 2018. Towards
> faster training of global covariance pooling networks by iterative matrix
> square root normalization. In _IEEE Conference on Computer Vision and
> Pattern Recognition_ , 947–955.
> * Lin and Lucey (2017) Lin, C.-H.; and Lucey, S. 2017. Inverse
> compositional spatial transformer networks. In _IEEE CVPR_ , 2568–2576.
> * Liu et al. (2019) Liu, C.; Xie, H.; Zha, Z.-J.; Yu, L.; Chen, Z.; and
> Zhang, Y. 2019. Bidirectional Attention-Recognition Model for Fine-grained
> Object Classification. _IEEE Trans. on Multimedia_ .
> * Liu et al. (2016) Liu, X.; Xia, T.; Wang, J.; Yang, Y.; Zhou, F.; and
> Lin, Y. 2016. Fully convolutional attention networks for fine-grained
> recognition. _arXiv preprint arXiv:1603.06765_ .
> * Luo et al. (2019) Luo, W.; Yang, X.; Mo, X.; Lu, Y.; Davis, L. S.; Li,
> J.; Yang, J.; and Lim, S.-N. 2019. Cross-X Learning for Fine-Grained Visual
> Categorization. In _IEEE ICCV_ , 8242–8251.
> * Maji et al. (2013) Maji, S.; Rahtu, E.; Kannala, J.; Blaschko, M.; and
> Vedaldi, A. 2013. Fine-grained visual classification of aircraft. _arXiv
> preprint arXiv:1306.5151_ .
> * Miech, Laptev, and Sivic (2017) Miech, A.; Laptev, I.; and Sivic, J.
> 2017. Learnable pooling with context gating for video classification.
> _arXiv preprint arXiv:1706.06905_ .
> * Nilsback and Zisserman (2008) Nilsback, M.-E.; and Zisserman, A. 2008.
> Automated flower classification over a large number of classes. In _Indian
> Conf. on Comp. Vision, Graphics & Image Processing_, 722–729.
> * Parkhi et al. (2012) Parkhi, O. M.; Vedaldi, A.; Zisserman, A.; and
> Jawahar, C. 2012. Cats and dogs. In _IEEE CVPR_ , 3498–3505.
> * Parmar et al. (2019) Parmar, N.; Ramachandran, P.; Vaswani, A.; Bello,
> I.; Levskaya, A.; and Shlens, J. 2019. Stand-alone self-attention in vision
> models. In _Proceedings of NeurIPS_ , 68–80.
> * Peng, He, and Zhao (2018) Peng, Y.; He, X.; and Zhao, J. 2018. Object-
> part attention model for fine-grained image classification. _IEEE
> Transactions on Image Processing_ 27(3): 1487–1500.
> * Rodríguez et al. (2020) Rodríguez, P.; Velazquez, D.; Cucurull, G.;
> Gonfaus, J. M.; Roca, F. X.; and Gonzàlez, J. 2020. Pay attention to the
> activations: a modular attention mechanism for fine-grained image
> recognition. _IEEE Trans. on Multimedia_ 22(2): 502–514.
> * Sandler et al. (2018) Sandler, M.; Howard, A.; Zhu, M.; Zhmoginov, A.;
> and Chen, L.-C. 2018. Mobilenetv2: Inverted residuals and linear
> bottlenecks. In _IEEE CVPR_ , 4510–4520.
> * Simon and Rodner (2015) Simon, M.; and Rodner, E. 2015. Neural
> activation constellations: Unsupervised part model discovery with
> convolutional networks. In _IEEE Intl. Conf. on Comp. Vision_ , 1143–1151.
> * Sun et al. (2018) Sun, M.; Yuan, Y.; Zhou, F.; and Ding, E. 2018.
> Multi-attention multi-class constraint for fine-grained image recognition.
> In _European Conf. on Computer Vision_ , 805–821.
> * Szegedy et al. (2016) Szegedy, C.; Vanhoucke, V.; Ioffe, S.; Shlens, J.;
> and Wojna, Z. 2016. Rethinking the inception architecture for computer
> vision. In _IEEE CVPR_ , 2818–2826.
> * Taha et al. (2020) Taha, A.; Chen, Y.-T.; Misu, T.; Shrivastava, A.; and
> Davis, L. 2020. Boosting Standard Classification Architectures Through a
> Ranking Regularizer. In _The IEEE Winter Conference on Applications of
> Computer Vision_ , 758–766.
> * Valan et al. (2019) Valan, M.; Makonyi, K.; Maki, A.; Vondráček, D.; and
> Ronquist, F. 2019. Automated taxonomic identification of insects with
> expert-level accuracy using effective feature transfer from convolutional
> networks. _Systematic biology_ 68(6): 876–895.
> * Van Der Maaten (2014) Van Der Maaten, L. 2014. Accelerating t-SNE using
> tree-based algorithms. _The Journal of Machine Learning Research_ 15(1):
> 3221–3245.
> * Van Horn et al. (2015) Van Horn, G.; Branson, S.; Farrell, R.; Haber,
> S.; Barry, J.; Ipeirotis, P.; Perona, P.; and Belongie, S. 2015. Building a
> bird recognition app and large scale dataset with citizen scientists: The
> fine print in fine-grained dataset collection. In _IEEE Conf. on Computer
> Vision and Pattern Recognition_ , 595–604.
> * Wah et al. (2011) Wah, C.; Branson, S.; Welinder, P.; Perona, P.; and
> Belongie, S. 2011. The Caltech-UCSD Birds-200-2011 dataset.
> * Xiao et al. (2015) Xiao, T.; Xu, Y.; Yang, K.; Zhang, J.; Peng, Y.; and
> Zhang, Z. 2015. The application of two-level attention models in deep
> convolutional neural network for fine-grained image classification. In
> _Proceedings of the IEEE CVPR_ , 842–850.
> * Xie et al. (2016) Xie, L.; Zheng, L.; Wang, J.; Yuille, A. L.; and Tian,
> Q. 2016. Interactive: Inter-layer activeness propagation. In _IEEE Conf.
> on Computer Vision and Pattern Recognition_ , 270–279.
> * Yang et al. (2018) Yang, Z.; Luo, T.; Wang, D.; Hu, Z.; Gao, J.; and
> Wang, L. 2018. Learning to navigate for fine-grained classification. In
> _European Conference on Computer Vision (ECCV)_ , 420–435.
> * Yu et al. (2018a) Yu, C.; Zhao, X.; Zheng, Q.; Zhang, P.; and You, X.
> 2018a. Hierarchical bilinear pooling for fine-grained visual recognition.
> In _European Conference on Computer Vision_ , 574–589.
> * Yu et al. (2018b) Yu, F.; Wang, D.; Shelhamer, E.; and Darrell, T.
> 2018b. Deep layer aggregation. In _IEEE CVPR_ , 2403–2412.
> * Zhang et al. (2018) Zhang, H.; Goodfellow, I.; Metaxas, D.; and Odena,
> A. 2018. Self-attention generative adversarial networks. _arXiv preprint
> arXiv:1805.08318_ .
> * Zhao, Jia, and Koltun (2020) Zhao, H.; Jia, J.; and Koltun, V. 2020.
> Exploring self-attention for image recognition. In _IEEE Conference on
> Computer Vision and Pattern Recognition_ , 10076–10085.
> * Zhao et al. (2020) Zhao, H.; Yap, K.-H.; Kot, A. C.; and Duan, L. 2020.
> JDNet: A Joint-learning Distilled Network for Mobile Visual Food
> Recognition. _IEEE Journal of Selected Topics in Signal Processing_ .
> * Zheng et al. (2019) Zheng, H.; Fu, J.; Zha, Z.-J.; and Luo, J. 2019.
> Looking for the devil in the details: Learning trilinear attention sampling
> network for fine-grained image recognition. In _IEEE Conf. on Computer
> Vision and Pattern Recognition_ , 5012–5021.
> * Zheng et al. (2020) Zheng, H.; Fu, J.; Zha, Z.-J.; Luo, J.; and Mei, T.
> 2020. Learning rich part hierarchies with progressive attention networks
> for fine-grained image recognition. _IEEE Transactions on Image Processing_
> 29: 476–488.
> * Zoph et al. (2018) Zoph, B.; Vasudevan, V.; Shlens, J.; and Le, Q. V.
> 2018. Learning transferable architectures for scalable image recognition.
> In _IEEE CVPR_ , 8697–8710.
> * Zoran et al. (2020) Zoran, D.; Chrzanowski, M.; Huang, P.-S.; Gowal, S.;
> Mott, A.; and Kohli, P. 2020. Towards robust image classification using
> sequential attention models. In _IEEE Conference on Computer Vision and
> Pattern Recognition_ , 9483–9492.
>
## Supplementary Document
In this document, we have included the remaining quantitative and qualitative
results, which we could not include in the main paper.
Remaining results of Table 2: The performance comparison (accuracy in %) using
the remaining two datasets (Stanford Dogs and Oxford Flowers) for Table 2 in
the main paper. It is presented in Table 6 below.
Table 6: Performance comparison with the recent top-five SotA approaches on each dataset. Methods marked with * involve a transfer/joint learning strategy for objects/patches/regions consisting of more than one dataset (primary and secondary).
Stanford Dogs | Oxford Flowers
---|---
Method | Accuracy (%) | Method | Accuracy (%)
FCANs (Liu et al. 2016) | 89.0 | InterAct (Xie et al. 2016) | 96.4
SJFT∗ (Ge and Yu 2017) | 90.3 | SJFT∗ (Ge and Yu 2017) | 97.0
DAN (Hu et al. 2019) | 92.2 | OPAM∗ (Peng, He, and Zhao 2018) | 97.1
WARN (Rodríguez et al. 2020) | 92.9 | DSTL∗ (Cui et al. 2018) | 97.6
CPM∗ (Ge, Lin, and Yu 2019) | 97.1 | MC${}_{Loss}^{*}$ (Chang et al. 2020) | 97.7
Proposed | 96.1 | Proposed | 97.7
Remaining results of Table 3: The accuracy of the proposed method is evaluated
on the NABirds dataset using six different SotA base CNNs for Table 3 in the
main paper. It is presented in Table 7 below.
Table 7: Our model’s accuracy (%) on the NABirds dataset with different SotA base CNN architectures. Previous best accuracy is 86.4% (Luo et al. 2019) for primary only and 87.9% (Cui et al. 2018) for combined primary and secondary datasets.
Base CNN | Accuracy (%)
---|---
ResNet-50 | 88.8
Inception V3 | 89.1
Xception | 91.0
DenseNet-121 | 88.3
NASNet-Mobile | 88.7
MobileNet V2 | 89.1
Remaining results of Table 4: In the ablation study (Table 4 of the main paper),
we have presented the performance of the proposed model (with the addition of
our novel context-aware attentional pooling (+C) and classification (+E)
module) on the Aircraft, Stanford Cars and Oxford-IIIT Pets datasets. The same
evaluation procedure is performed on the Stanford Dogs, Oxford Flowers and
Caltech Birds (CUB-200) datasets and the recognition accuracy (%) is presented
in Table 8. As in Table 4, a similar trend is observed in the improvement of
accuracy when our context-aware attentional pooling (+C) and classification
(+E) modules are added to various SotA base CNN architectures (B).
Table 8: Accuracy (%) of the proposed model with the addition of our novel context-aware attentional pooling (+C) and classification (+E) module to various SotA base (B) CNN architectures. It presents the remaining evaluation of Table 4.
 | Stanford Dogs | Oxford Flowers | Caltech Birds: CUB-200
---|---|---|---
Base CNN | B | B+C | B+C+E | B | B+C | B+C+E | B | B+C | B+C+E
Inception-V3 | 78.7 | 94.2 | 95.7 | 92.3 | 94.9 | 97.6 | 76.0 | 87.1 | 91.4
Xception | 82.7 | 94.8 | 96.1 | 91.9 | 94.9 | 97.7 | 75.6 | 87.4 | 91.8
DenseNet-121 | 79.5 | 94.5 | 95.5 | 94.4 | 95.1 | 97.6 | 79.1 | 87.2 | 91.6
NASNet-Mobile | 79.5 | 94.7 | 96.0 | 90.7 | 95.0 | 97.7 | 73.0 | 86.8 | 89.7
MobileNetV2 | 76.5 | 94.3 | 95.9 | 92.3 | 95.0 | 97.4 | 74.5 | 87.0 | 89.2
Previous Best | (Ge, Lin, and Yu 2019) | 93.9 | (Xie et al. 2016) | 96.4 | (Ge, Lin, and Yu 2019) | 90.3
Remaining results of Table 5: The performance is evaluated using a different
number of integral regions on the Aircraft and Stanford Cars datasets (Table
5). The same experiment is also carried out on the Stanford Dogs dataset, and
the results are given in Table 9 below.
Table 9: Accuracy (%) of our model with 9, 27, and 36 integral regions on the Stanford Dogs dataset.
Base CNN | #9 | #27 | #36
---|---|---|---
ResNet-50 | 90.5 | 95.8 | 92.1
Xception | 95.3 | 96.1 | 95.2
NASNet-M | 91.7 | 96.0 | 93.3
Top-N Accuracy (%): We have also evaluated the proposed approach using the
top-N accuracy metric on the Oxford-IIIT Pets, Stanford Cars and Aircraft datasets. The
performance of our modules on top of various base architectures is presented
in Table 10 below. On all three datasets, the top-2 accuracy is around 99% and
is independent of the type of base CNN architecture used. Moreover, the top-5
accuracy is nearly 100%. This justifies the significance of our novel
attentional pooling and encoding modules in enhancing performance and their
wider applicability.
Table 10: Top-N accuracy (%) of the proposed model using different base architectures on the Oxford-IIIT Pets, Stanford Cars and Aircraft datasets. The top-2 accuracy is around 99% and is independent of the type of base CNN architecture used. The top-5 accuracy is nearly 100%. This shows the significance of the proposed attentional pooling and encoding modules.
Dataset | Base CNN architecture | Top 1 | Top 2 | Top 3 | Top 5
---|---|---|---|---|---
Oxford-IIIT Pets | Inception-V3 | 96.2 | 99.0 | 99.5 | 99.9
| Xception | 97.0 | 99.7 | 99.9 | 99.9
| DenseNet121 | 96.9 | 99.2 | 99.6 | 99.7
| NASNetMobile | 97.3 | 99.4 | 99.8 | 99.9
| MobileNetV2 | 96.4 | 98.9 | 99.5 | 99.6
Stanford Cars | Inception-V3 | 94.8 | 99.4 | 99.7 | 99.8
| Xception | 95.7 | 99.3 | 99.7 | 99.8
| DenseNet121 | 93.6 | 98.7 | 99.5 | 99.9
| NASNetMobile | 93.7 | 99.1 | 99.7 | 99.8
| MobileNetV2 | 94.0 | 99.3 | 99.8 | 99.9
Aircraft | Inception-V3 | 94.8 | 99.1 | 99.7 | 99.8
| Xception | 94.1 | 98.9 | 99.2 | 99.5
| DenseNet121 | 94.6 | 98.8 | 99.3 | 99.4
| NASNetMobile | 93.8 | 99.4 | 99.8 | 99.8
| MobileNetV2 | 94.4 | 99.1 | 99.7 | 99.8
Additional Qualitative Analysis:
We provide additional qualitative analysis of our model’s performance by
selecting a few example images that are wrongly classified, shown against the
label they are mistaken for (selected from the mistaken subcategories). This
is presented in Figure 4. It is evident that the mistaken
labels come from classes with extremely similar features, often being from the
same manufacturer (Boeing 747, Audi, etc.). We have also noticed that
subcategories can have very specific defining features that are not clearly
visible in every image due to poor angles or lighting conditions (e.g. the
chin of a Ragdoll and the legs of a Birman cat shown in Fig. 4g).
(Figure 4 panels (a)–(o))
Figure 4: Some of the example images, which are incorrectly classified by our
model (left) against the label they are mistaken for (right - selected from
the mistaken subcategories): Aircraft (a-c), Stanford Cars (d-f), Oxford-IIIT
Pets (g-i), Caltech-UCSD Birds - CUB-200 (j-l), and Oxford Flowers (m-o). It
can be seen that the mistaken labelling comes from classes with extremely
similar appearance features and/or perspective changes, often being from the
same manufacturer (Boeing 747, Audi, etc.). We have also noticed that
subcategories can have very specific defining features that are not clearly
visible in every image due to poor angles or lighting conditions (e.g. the
chin of a Ragdoll and the legs of a Birman cat).
We have also included additional qualitative analysis of the discriminating
ability of our model (Figures 5 to 9), using t-SNE to visualize class
separability and compactness on the different datasets as well as various
backbone CNNs.
(a) (b) (c) (d) (e) (f)
Figure 5: Qualitative analysis of discriminating ability using t-SNE to
monitor class separability and compactness. Visualization of Aircraft test
images using Inception-V3 and NASNetMobile as a base CNN: (a & d) output of
the base CNN, (b & e) feature maps from our attentional pooling (CAP), and (c
& f) our model’s final feature maps (CAP+Encoding). Best view in color.
(a) (b) (c) (d) (e) (f)
Figure 6: Qualitative analysis of discriminating ability using t-SNE to
monitor class separability and compactness. Visualization of Stanford Cars
test images using Inception-V3 and NASNetMobile as a base CNN: (a & d) output
of the base CNN, (b & e) feature maps from our attentional pooling (CAP), and
(c & f) our model’s final feature maps (CAP+Encoding). Best view in color.
(a) (b) (c) (d) (e) (f)
Figure 7: Qualitative analysis of discriminating ability using t-SNE to
monitor class separability and compactness. Visualization of Oxford-IIIT Pets
test images using Inception-V3 and NASNetMobile as a base CNN: (a & d) output
of the base CNN, (b & e) feature maps from our attentional pooling (CAP), and
(c & f) our model’s final feature maps (CAP+Encoding). Best view in color.
(a) (b) (c) (d) (e) (f)
Figure 8: Qualitative analysis of discriminating ability using t-SNE to
monitor class separability and compactness. Visualization of Oxford Flowers
test images using MobileNetV2 and NASNetMobile as a base CNN: (a & d) output
of the base CNN, (b & e) feature maps from our attentional pooling (CAP), and
(c & f) our model’s final feature maps (CAP+Encoding). Best view in color.
(a) (b) (c) (d) (e) (f)
Figure 9: Qualitative analysis of discriminating ability using t-SNE to
monitor class separability and compactness. Visualization of the Caltech-UCSD
Birds (CUB-200) test images using MobileNetV2 and NASNetMobile as a base CNN:
(a & d) output of the base CNN, (b & e) feature maps from our attentional
pooling (CAP), and (c & f) our model’s final feature maps (CAP+Encoding). Best
view in color.
Rabia Azzi and Gayo Diallo
BPH Center/INSERM U1219, Univ. Bordeaux, F-33000, France
# AMALGAM: A Matching Approach to fairfy tabuLar data with knowledGe grAph
Model
Rabia Azzi Gayo Diallo
###### Abstract
In this paper we present AMALGAM, a matching approach to fairify tabular data
with the use of a knowledge graph. The ultimate goal is to provide a fast and
efficient approach to annotate tabular data with entities from a background
knowledge source. The approach combines lookup and filtering services with
text pre-processing techniques. Experiments conducted in the context of the
2020 Semantic Web Challenge on Tabular Data to Knowledge Graph Matching, with
both Column Type Annotation and Cell Type Annotation tasks, showed promising
results.
###### keywords:
Tabular Data, Knowledge Graph, Entity Linking
## 1 Introduction
Making web data comply with the FAIR (Findable, Accessible, Interoperable and
Reusable) principles has become a necessity in order to facilitate their
discovery and reuse [1]. Implementing FAIR increases the value of data for
knowledge discovery by supporting data integration, data cleaning, data
mining and machine learning tasks. Successfully implemented FAIR principles
will improve the value of data by making them findable and accessible and by
resolving semantic ambiguities. Good data
management is not a goal in itself, but rather is the key conduit leading to
knowledge discovery and acquisition, and to subsequent data and knowledge
integration and reuse by the community after the data publication process [2].
Semantic annotation could be considered as a particular knowledge acquisition
task [3, 4, 5]. The semantic annotation process may rely on formal metadata
resources described with an Ontology, even sometimes with multiple ontologies
thanks to the use of semantic repositories [6]. In recent years, tables have
become one of the most widely used formats for sharing results and data. In this field, a
set of systems for matching web tables to knowledge bases has been developed
[7, 8]. They can be categorized in two main tasks: structure and semantic
annotation. The structure annotation deals with tasks such as data type
prediction and table header annotation [9]. Semantic annotation involves
matching table elements into KG [10] e.g., columns to class and cells to
entities [11, 12].
Recent years have seen an increasing number of works on Semantic Table
Interpretation. In this context, SemTab 2020
(http://www.cs.ox.ac.uk/isg/challenges/sem-tab/2020/index.html) has emerged as
an initiative which aims at benchmarking systems that deal with annotating
tabular data with entities from a KG, referred to as table annotation
[13]. SemTab is organised into three tasks, each one with several evaluation
rounds. For the 2020 edition for instance, it involves: (i) assigning a
semantic type (e.g., a KG class) to a column (CTA); (ii) matching a cell to a
KG entity (CEA); (iii) assigning a KG property to the relationship between two
columns (CPA).
Our goal is to automatically annotate tabular data on the fly. Thus, our
annotation approach is fully automated, as it does not need prior information
regarding entities or metadata standards. It is fast and easy to deploy, as
it takes advantage of existing systems such as Wikidata and Wikipedia to
access entities.
## 2 Related Work
Various research works have addressed the issue of semantic table annotation.
The most popular approaches which deal with the three above mentioned tasks
rely on a supervised learning setting, where candidate entities are selected
by classification models [14]. Such systems include (i) MTab [15], which
combines a voting algorithm and the probability models to solve critical
problems of the matching tasks, (ii) DAGOBAH [16] aiming at semantically
annotating tables with Wikidata and DBpedia entities; more precisely it
performs cell and column annotation and relationship identification, via a
pipeline starting from a pre-processing step to enriching an existing
knowledge graph using the table information; (iii) ADOG [17] is a system
focused on leveraging the structure of a well-connected ontology graph which
is extracted from different Knowledge Graphs to annotate structured or semi-
structured data. In the latter approach, they combine in novel ways a set of
existing technologies and algorithms to automatically annotate structured and
semi-structured records. It takes advantage of the native graph structure of
ontologies to build a well-connected network on ontologies from different
sources; (iv) Another example is described in [18]. Its process is split into
a Candidate Generation and a Candidate Selection phases. The former involves
looking for relevant entities in knowledge bases, while the latter involves
picking the top candidate using various techniques such as heuristics (the
‘TF-IDF’ approach) and machine learning (the Neural Network Ranking model).
In [19] the authors present TableMiner, a learning approach for a semantic
table interpretation. This is essentially done by improving annotation
accuracy by making innovative use of various types of contextual information
both inside and outside tables as features for inference. Then, it reduces
computational overheads by adopting an incremental, bootstrapping approach
that starts by creating preliminary and partial annotations of a table using
‘sample’ data in the table, then using the outcome as ‘seed’ to guide
interpretation of remaining contents. Following also a machine learning
approach, [20] proposes Meimei. It combines a latent probabilistic model with
multi-label classifiers.
Other alternative approaches address only a single specific task. Thus, in the
work of [21], the authors focus on column type prediction for tables without
any metadata. Unlike traditional lexical matching-based methods, they follow a
deep prediction model that can fully exploit tables’ contextual semantics,
including table locality features learned by a Hybrid Neural Network (HNN),
and inter-column semantics features learned by a knowledge base (KB) lookup
and query answering algorithm. It exhibits good performance not only on
individual table sets, but also when transferring from one table set to
another. In the same vein, a work conducted by [22] proposes Taipan, which is
able to recover the semantics of tables by identifying subject columns using a
combination of structural and semantic features.
From Web tables point of view, various works could be mentioned. Thus, in [23]
an iterative matching approach is described. It combines both schema and
entity matching and is dedicated to matching large set of HTML tables with a
cross-domain knowledge base. Similarly, TabEL uses a collective classification
technique to disambiguate all mentions in a given table [24]. Instead of using
a strict mapping of types and relations into a reference knowledge base, TabEL
uses soft constraints in its graphical model to sidestep errors introduced by
an incomplete or noisy KB. It outperforms previous work on multiple datasets.
Overall, all the above mentioned approaches are based on a learning strategy.
However, for real-time applications, there is a need to get the result as
fast as possible. Another main limitation of these approaches is their
reproducibility. Indeed, key explicit information concerning study parameters
(particularly randomization control) and the software environment is often lacking.
The ultimate goal with AMALGAM, which could be categorized as a tabular data
to KG matching system, is to provide a fast and efficient approach to this
matching task.
## 3 The AMALGAM approach
AMALGAM is designed according to the workflow in Fig. 1. There are three main
phases which consist in, respectively, pre-processing, context annotation and
tabular data to KG matching. The first two steps are identical for both CEA
and CTA tasks.
Figure 1: Workflow of AMALGAM.
Tables Pre-Processing. It is common to have missing values in datasets.
Besides, the content of a table can have different types (string, date,
float, etc.). The aim of the pre-processing step is to ensure that tables are
loaded without any error. For instance, a textual encoding where some
characters are loaded as noisy sequences, or a text field with an unescaped
delimiter causing the considered record to have an extra column, can occur.
Loading with an incorrect encoding might strongly affect the lookup
performance. To overcome this issue, AMALGAM relies on the Pandas library
(https://pandas.pydata.org/) to fix all noisy textual data in the tables
being processed.
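A minimal sketch of such a loading step is shown below, assuming illustrative Pandas options rather than AMALGAM's exact configuration.

```python
# Load a possibly noisy CSV, tolerating bad encodings, and strip stray
# whitespace so later lookups are not polluted by extraction noise.
import pandas as pd

def load_table(path):
    try:
        df = pd.read_csv(path, dtype=str, keep_default_na=False, encoding="utf-8")
    except UnicodeDecodeError:
        df = pd.read_csv(path, dtype=str, keep_default_na=False, encoding="latin-1")
    return df.apply(lambda col: col.str.strip())  # normalise cell text
```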
Figure 2: Illustration of a table structure.
Annotation context. We consider a table as a two-dimensional tabular structure
(see Fig. 2(A)) which is composed of an ordered set of $x$ rows and $y$
columns. Each intersection between a row and a column determines a cell
$c_{ij}$ with the value $v_{ij}$, where $1\leq i\leq x,1\leq j\leq y$. To
identify the attribute label of a column, also referred to as header
detection (CTA task), the idea consists in annotating all the items of the
column using entity linking. Then, the attribute label is estimated from the
linked entities. The annotation context is represented by the list of items
in the same column (see Fig. 2(B)). For example, the context of the first
column in Fig. 2 is represented by the following items: [A1,B1,…,n].
Following the same logic, we consider that all cells in the same row describe
the same context. More specifically, the first cell of the row describes the
entity and the following cells the associated properties. For instance, the
context of the first row in Fig. 2 is represented by the following list of
items: [A1,A2,A3,A4].
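In code, the two contexts can be read directly off a loaded DataFrame; this small sketch uses our own helper names on a toy table.

```python
# Illustrative extraction of the column context (CTA) and row context (CEA).
import pandas as pd

df = pd.DataFrame([["A1", "A2", "A3", "A4"],
                   ["B1", "B2", "B3", "B4"]])

def column_context(df, j):
    """All items of column j, e.g. [A1, B1, ...] -- the CTA context."""
    return df.iloc[:, j].tolist()

def row_context(df, i):
    """All cells of row i; the first cell names the entity -- the CEA context."""
    return df.iloc[i, :].tolist()

print(column_context(df, 0))  # ['A1', 'B1']
print(row_context(df, 0))     # ['A1', 'A2', 'A3', 'A4']
```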
Assigning a semantic type to a column (CTA). The CTA task consists in
assigning a Wikidata KG entity to a given column. It can be performed by
exploiting the process described in Fig. 3. The Wikidata KG allows looking up
a Wikidata item by the title of its corresponding page on Wikipedia or other
Wikimedia family sites using a dedicated API
(https://www.wikidata.org/w/api.php). In our case, the main information
needed from the entity is the list of its instance of (P31), subclass of
(P279) and part of (P361) statements. To do so, a parser was developed to
retrieve this information from the built Wikidata request. For example,
”Grande Prairie” provides the following results: [list of towns in
Alberta:Q15219391, village in Alberta:Q6644696, city in Alberta:Q55440238].
To achieve this, our methodology combines the wbsearchentities and parse
actions provided by the API. It could be observed that in this task, many
items were not annotated, because tables contain incorrectly spelled terms.
Therefore, before implementing the other tasks, a spell check component is
required.
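The lookup itself can be sketched against the public Wikidata API as follows; the helper names (`wd_lookup`, `wd_type_claims`) are ours, and error handling is omitted for brevity.

```python
# Title search via wbsearchentities, then P31/P279/P361 claims via wbgetentities.
import requests

API = "https://www.wikidata.org/w/api.php"

def wd_lookup(label, lang="en"):
    r = requests.get(API, params={"action": "wbsearchentities", "search": label,
                                  "language": lang, "format": "json"}).json()
    return [hit["id"] for hit in r.get("search", [])]

def wd_type_claims(qid):
    r = requests.get(API, params={"action": "wbgetentities", "ids": qid,
                                  "props": "claims", "format": "json"}).json()
    claims = r["entities"][qid]["claims"]
    out = []
    for prop in ("P31", "P279", "P361"):  # instance of / subclass of / part of
        for st in claims.get(prop, []):
            snak = st["mainsnak"]
            if snak.get("snaktype") == "value":
                out.append(snak["datavalue"]["value"]["id"])
    return out

# e.g. wd_type_claims(wd_lookup("Grande Prairie")[0]) -> typing statements
```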
As per the literature [25], the spell-checker is a crucial language tool of
natural language processing (NLP), used in applications like information
extraction, proofreading, information retrieval, social media and search
engines. In our case, we compared several approaches and libraries: Textblob
(https://textblob.readthedocs.io/en/dev/), Spark NLP
(https://nlp.johnsnowlabs.com/), Gurunudi
(https://github.com/guruyuga/gurunudi), the Wikipedia API
(https://wikipedia.readthedocs.io/en/latest/code.html), Pyspellchecker
(https://github.com/barrust/pyspellchecker), and Serpapi
(https://serpapi.com/spell-check). A comparison of these approaches can be
found in Table 1.
Table 1: Comparison of approaches and libraries related to spell-checking.
Name | Category | Strengths/Limitations
---|---|---
Textblob | NLP | Spelling correction, Easy-to-use
Spark NLP | NLP | Pre-trained, Text analysis
Gurunudi | NLP | Pre-trained, Text analysis, Easy-to-use
Wikipedia api | Search engines | Search/suggestion, Easy-to-use, Unlimited access
Pyspellchecker | Spell checking | Simple algorithm, No pre-trained, Easy-to-use
Serpapi | Search engines | Limited access for free
Figure 3: Assigning a semantic type to a column (CTA).
Our choice is oriented towards Gurunudi and the Wikidata API, with a
post-processing step consisting in validating the output using fuzzywuzzy
(https://github.com/seatgeek/fuzzywuzzy) to keep only the results whose ratio
is greater than the threshold of 90%. For example, for the expression “St
Peter’s Seminarz”, the Wikidata API returns “St Peter’s seminary”, and the
fuzzy string matching ratio is 95%.
We are now able to perform the CTA task. In the trivial case, the result of
an item lookup is a single record, and the best matching entity is chosen as
the result. In the other cases, where the result contains more than one
record, no annotation is produced for the CTA task. Finally, if there is no
result after the lookup, another lookup is performed using the spell-checked
version of the item. At the end of these lookups, the matched (item:claims)
pairs are stored in a nested dictionary. The most relevant candidate,
determined by counting the number of occurrences, is selected.
Input: Table T
Output: Annotated Table T′
foreach col i ∈ T do
  candidates_col ← ∅
  foreach el ∈ col do
    label ← el.value
    candidates ← wd-lookup(label)
    if candidates.size = 1 then
      candidates_col(k, candidates)
    else if candidates.size = 0 then
      new_label ← spell-check(label)
      candidates ← wd-lookup(new_label)
      if candidates.size = 1 then
        candidates_col(k, candidates)
      end if
    end if
  end foreach
  annotate(T′.col.i, getMostCommonClass(candidates_col))
end foreach
Algorithm 1: CTA task
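A Python rendering of Algorithm 1 could look as follows, reusing the hedged `wd_lookup`/`wd_type_claims` helpers sketched earlier; `spell_check` is a placeholder for the Gurunudi/Wikipedia correction step described above.

```python
# Per-column CTA: vote over the typing claims of unambiguous cell lookups.
from collections import Counter

def cta_column(cells, spell_check=lambda s: s):
    votes = Counter()
    for label in cells:
        candidates = wd_lookup(label)
        if not candidates:                 # retry once with corrected spelling
            candidates = wd_lookup(spell_check(label))
        if len(candidates) == 1:           # keep only unambiguous lookups
            votes.update(wd_type_claims(candidates[0]))
    # the most frequent P31/P279/P361 class across the column wins
    return votes.most_common(1)[0][0] if votes else None
```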
Matching a cell to a KG entity (CEA). The CEA task can be performed by
exploiting the process described in Fig. 4. Our approach reuses the process
of the CTA task with the necessary adaptations. The first step is to get all
the statements for the first item of the context list. The process is the
same as for CTA; the only difference is when the result provides more than
one record. In this case, we create a nested dictionary with all candidates.
Then, to disambiguate the candidate entities, we use the concept of the
column generated by the CTA task. Next, a lookup is performed by searching
for the other items of the context list in the claims of the first item. If
an item is found, it is selected as the target entity; if not, the lookup is
performed for the item using the Wikidata API (if the result is empty, no
annotation is produced).
With this process, it is possible to reduce errors associated with the
lookup. Take, for instance, the value “650” in row 0 of the table in Fig. 4.
If we look it up directly in Wikidata, we can get many results. However, if
we first check the statements of the first item of the list, “Grande
Prairie”, we are more likely to successfully identify the item.
Figure 4: Matching a cell to a KG entity (CEA).
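The claims-first lookup order can be sketched as follows; the dictionary shape of `first_entity_claims` (property id mapped to stringified claim values) is an illustrative assumption, not the exact AMALGAM data structure.

```python
# CEA for one cell: try the row entity's claims first, then a direct lookup.
def cea_cell(value, first_entity_claims):
    for prop, claim_values in first_entity_claims.items():
        if value in claim_values:          # found among the row entity's claims
            return ("claim", prop)
    hits = wd_lookup(value)                # fallback (wd_lookup sketched earlier)
    return ("lookup", hits[0]) if hits else None  # empty -> no annotation

# e.g. cea_cell("650", {"P2044": ["650"]}) matches the elevation claim first
```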
Input: Table T, TColsContext
Output: Annotated Table T′
foreach row i ∈ T do
  FirstEl_properties ← ∅
  foreach el ∈ row do
    label ← el.value
    if el = 0 then
      FirstEl_properties ← GetProperties(label, ColsContext)
    end if
    if Prop-lookup(label) ≠ ∅ then
      annotate(T′.row.i.el, candidates.value)
    else
      candidates ← wd-lookup(label, ColsContext)
      if candidates.size = 1 then
        annotate(T′.row.i.el, candidates.value)
      else if candidates.size = 0 then
        new_label ← spell-check(label)
        candidates ← wd-lookup(new_label, ColsContext)
        if candidates.size = 1 then
          annotate(T′.row.i.el, candidates.value)
        end if
      end if
    end if
  end foreach
end foreach
Algorithm 2: CEA processing task
## 4 Experimental Results
The evaluation of AMALGAM is done in the context of the SemTab 2020
challenge. This challenge is subdivided into 4 successive rounds containing
respectively 34294, 12173, 62614 and 22387 CSV tables to annotate. For
example, Table 2
lists all Alberta towns with additional information such as the country and
the elevation above sea level. The evaluation metrics are respectively the F1
score and the Precision [26].
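For reference, and assuming the standard SemTab definitions of these metrics:

$P=\frac{\#\text{correct annotations}}{\#\text{submitted annotations}},\qquad R=\frac{\#\text{correct annotations}}{\#\text{target annotations}},\qquad F_{1}=\frac{2\,P\,R}{P+R}.$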
Table 2: List of Alberta towns, extracted from SemTab Round 1.
col0 | col1 | col2 | col3 | col4 | col5
---|---|---|---|---|---
Grande Prairie | city in Alberta | canada | Sexsmith | 650 | Alberta
Sundre | town in Alberta | canada | Mountain View County | 1093 | Alberta
Peace River | town in clberta | Canada | Northern Sunrise County | 330 | Alberta
Vegreville | town in Alberta | canada | Mundare | 635 | Alberta
Tables 3, 4, 5 and 6 report the evaluation of CTA and CEA for rounds 1, 2, 3
and 4 respectively. It can be observed that AMALGAM handles both tasks
properly, in particular the CEA task. Regarding the CTA task, the lower
scores can be explained by new revisions created in the Wikidata item
revision history and by possible spelling errors in the contents of the
tables. For instance, ”rural district of Lower Saxony” became ”district of
Lower Saxony” after the 16th April 2020 revision. A possible solution to this
issue is to retrieve the history of the different revisions, by parsing
Wikidata history dumps, and to use them in the lookup. This is a possible
extension of this work. Another observed issue is that spelling errors
greatly impact the lookup efficiency.
Table 3: Results of Round 1.
TASK | F1 Score | Precision
---|---|---
CTA | 0.724 | 0.727
CEA | 0.913 | 0.914
Table 4: Results of Round 2.
TASK | F1 Score | Precision
---|---|---
CTA | 0.926 | 0.928
CEA | 0.921 | 0.927
Table 5: Results of Round 3.
TASK | F1 Score | Precision
---|---|---
CTA | 0.869 | 0.873
CEA | 0.877 | 0.892
Table 6: Results of Round 4.
TASK | F1 Score | Precision
---|---|---
CTA | 0.858 | 0.861
CEA | 0.892 | 0.914
From the round 1 experience, we specifically focused on the spell-check process
of items to improve the results of the CEA and CTA tasks in round 2. Two API
services, from Wikipedia and Gurunudi (presented in Sect. 3) respectively, were
used for spelling correction. According to the results in Table 4, both the
F1-Score and the Precision improved. From these rounds, we observed that a term
consisting of a single word is often ambiguous, as it may refer to more than
one entity. In Wikidata, there is only one article (one entry) per concept.
However, there can be many equivalent titles for a concept, owing to the
existence of synonyms, etc. These synonymy and ambiguity issues make it
difficult to match the correct item. For example, the term “Paris” may refer to
various concepts such as “the capital and largest city of France”, “son of
Priam, king of Troy”, or “county seat of Lamar County, Texas, United States”.
This led us to introduce a disambiguation process during rounds 3 and 4. For
these last two rounds, we updated the annotation algorithm by integrating the
column concept obtained during the CTA task into the linking phase. We showed
that the two tasks can be performed relatively successfully with AMALGAM,
achieving a precision higher than 0.86. However, the automated disambiguation
of items proved to be a more challenging task.
## 5 Conclusion and Future Works
In this paper, we described AMALGAM, a matching approach for making tabular
datasets FAIR-compliant by annotating them with a knowledge graph, in our case
Wikidata. Its advantage is that it performs both the CTA and CEA tasks in a
timely manner. These tasks are accomplished through the combination of lookup
services and spell-check techniques. The results achieved in the context of the
SemTab 2020 challenge show that AMALGAM handles table annotation tasks with
promising performance. Our findings suggest that the matching process is very
sensitive to spelling errors. Thus, as future work, improved spell-checking
techniques will be investigated. In particular, processing such errors calls
for context-based spell-checkers: often the string is very close in spelling to
several words, and context can help reveal which one makes the most sense.
Moreover, the approach will be improved by finding a trade-off between
effectiveness and efficiency.
## References
* [1] Wilkinson, M., Dumontier, M., Aalbersberg, I. et al. The FAIR Guiding Principles for scientific data management and stewardship. Sci Data 3, 160018 (2016).
* [2] Wilkinson, M.-D., et al.: The FAIR Guiding Principles for scientific data management and stewardship. Scientific Data 3(1), 1–9 (2016).
* [3] Diallo G., Simonet M., Simonet A. (2006) An Approach to Automatic Ontology-Based Annotation of Biomedical Texts. In: Ali M., Dapoigny R. (eds) Advances in Applied Artificial Intelligence. IEA/AIE 2006. Lecture Notes in Computer Science, vol 4031. Springer, Berlin, Heidelberg.
* [4] Dramé, K., Mougin F., Diallo, G. Large scale biomedical texts classification: a kNN and an ESA-based approaches. J. Biomedical Semantics. Vol:7(40), 2016.
* [5] Handschuh, S.: Semantic Annotation of Resources in the Semantic Web. Semantic Web Services. Springer, Berlin, Heidelberg, 135–155 (2007).
* [6] Diallo, G. ”Efficient Building of Local Repository of Distributed Ontologies,” 7th IEEE International Conference on SITIS, Dijon, 2011, pp. 159-166.
* [7] Subramanian, A., Srinivasa, S.: Semantic Interpretation and Integration of Open Data Tables. In: Geospatial Infrastructure, Applications and Technologies: India Case Studies, pp. 217–233. Springer Singapore (2018).
* [8] Taheriyan, M., Knoblock, C.-A., Szekely, P., Ambite, J.-L.: Learning the semantics of structured data sources. Web Semantics: Science, Services and Agents on the World Wide Web 37(38), 152–169 (2016)
* [9] Zhang, L., Wang, T., Liu, Y., Duan, Q.: A semi-structured information semantic annotation method for Web pages. Neur. Comp. and App. 32(11), 6491–6501(2019)
* [10] Ji, S., Pan, S., Cambria, E., Marttinen, P., Yu, P.S.: A survey on knowledge graphs: Representation, acquisition and applications. CoRRabs/2002.00388(2020)
* [11] Efthymiou, V., Hassanzadeh, O., Rodriguez-Muro, M., Christophides, V.: Matching Web Tables with Knowledge Base Entities: From Entity Lookups to Entity Embeddings. In: LNCS, pp. 260–277. Springer Int. Publishing (2017).
* [12] Eslahi, Y., Bhardwaj, A., Rosso, P., Stockinger, K., Cudre-Mauroux, P.: Annotating Web Tables through Knowledge Bases: A Context-Based Approach. In: 2020 7th Swiss Conference on Data Science (SDS), pp. 29–34. IEEE (2020).
* [13] Hassanzadeh, O., Efthymiou, V., Chen, C., Jimenez-Ruiz, E., Srinivas, K.: SemTab2020: Semantic Web Challenge on Tabular Data to Knowledge Graph Matching - 2020 Data Sets, October 2020.
* [14] Hassanzadeh, O., Efthymiou, V., Chen, C., Jimenez-Ruiz, E., Srinivas, K.: SemTab2019: Semantic Web Challenge on Tabular Data to Knowledge Graph Matching - 2019 Data Sets (Version 2019).
* [15] Nguyen, P., Kertkeidkachorn, N., Ichise, R., Takeda, H.: MTab: Matching Tabular Data to Knowledge Graph using Probability Models. Proceedings of the SemTab Challenge co-located with the 18th ISWC conference, 2019.
* [16] Chabot, Y., Labbe, T., Liu, J., Troncy, R.: DAGOBAH: An End-to-End Context-Free Tabular Data Semantic Annotation System. Proceedings of the SemTab Challenge co-located with the 18th ISWC conference, 2019.
* [17] Oliveira, D., Aquin, M.: ADOG-Annotating Data with Ontologies and Graphs. Proc. of the SemTab Challenge co-located with the 18th ISWC conference, 2019.
* [18] Thawani, A., Hu, M., Hu, E., Zafar, H., Divvala, N-.T., Singh, A., Qasemi, E., Szekely, P., Pujara, J.: Entity Linking to Knowledge Graphs to Infer Column Types and Properties. Proc. of the SemTab Challenge co-located with ISWC’19, 2019.
* [19] Zhang, Z.: Effective and efficient Semantic Table Interpretation using TableMiner+. Semantic Web (IOS Press) 8(6), 921–957 (2017).
* [20] Takeoka, K., Oyamada, M., Nakadai, S., Okadome, T.: Meimei: An Efficient Probabilistic Approach for Semantically Annotating Tables. Proceedings of the AAAI Conference on Artificial Intelligence 33, 281–288 (2019).
* [21] Chen, J., Jimenez-Ruiz, E., Horrocks, I., Sutton, C.: Learning Semantic Annotations for Tabular Data. Proceedings of the 28th International Joint Conference on Artificial Intelligence (IJCAI-19), 2088–2094 (2019).
* [22] Ermilov, I., Ngomo, AC.N.: TAIPAN: Automatic Property Mapping for Tabular Data. In: Blomqvist E., Ciancarini P., Poggi F., Vitali F. (eds) Knowledge Engineering and Knowledge Management. EKAW 2016. Lecture Notes in Computer Science, vol 10024. Springer, Cham.
* [23] Ritze, D., Lehmberg, O., Bizer, C.: Matching HTML Tables to DBpedia. In: Proceedings of the 5th International Conference on Web Intelligence, Mining and Semantics - WIMS '15, pp. 1–6. ACM Press (2015).
* [24] Bhagavatula, C.-S., Noraset, T., Downey, D.: TabEL: Entity Linking in Web Tables. In: Proceedings of the The Semantic Web - ISWC 2015, Springer International Publishing, pp. 425–441 (2015).
* [25] Shashank, S., Shailendra, S.: Systematic review of spell-checkers for highly inflectional languages. Artificial Intelligence Review 53(6), 4051–4092 (2019)
* [26] Jiménez-Ruiz, E., Hassanzadeh, O., Efthymiou, V., Chen, J., Srinivas, K.: SemTab 2019: Resources to Benchmark Tabular Data to Knowledge Graph Matching Systems. In: Harth A. et al. (eds) The Semantic Web. ESWC (2020).
# Surface-Wave Propagation on Non-Hermitian Metasurfaces with Extreme
Anisotropy
Marino Coppolaro, Massimo Moccia, Giuseppe Castaldi, Andrea Alù, and Vincenzo Galdi
M. Coppolaro, M. Moccia, G. Castaldi, and V. Galdi are with the Fields & Waves
Lab, Department of Engineering, University of Sannio, I-82100 Benevento, Italy
(e-mail: <EMAIL_ADDRESS>; <EMAIL_ADDRESS>; <EMAIL_ADDRESS>;
[email protected]). A. Alù is with the Photonics Initiative, Advanced Science
Research Center, the Physics Program, Graduate Center, and the Department of
Electrical Engineering, City College, all at the City University of New York,
New York, NY 10031, USA (e-mail: <EMAIL_ADDRESS>).
###### Abstract
Electromagnetic metasurfaces enable the advanced control of surface-wave
propagation by spatially tailoring the local surface reactance. Interestingly,
tailoring the surface resistance distribution in space provides new, largely
unexplored degrees of freedom. Here, we show that suitable spatial modulations
of the surface resistance between positive (i.e., loss) and negative (i.e.,
gain) values can induce peculiar dispersion effects, far beyond a mere
compensation. Taking inspiration from the parity-time symmetry concept in
quantum physics, we put forward and explore a class of non-Hermitian
metasurfaces that may exhibit extreme anisotropy mainly induced by the gain-
loss interplay. Via analytical modeling and full-wave numerical simulations,
we illustrate the associated phenomenon of surface-wave canalization, explore
nonlocal effects and possible departures from the ideal conditions, and
address the feasibility of the required constitutive parameters. Our results
suggest intriguing possibilities to dynamically reconfigure the surface-wave
propagation, and are of potential interest for applications to imaging,
sensing and communications.
###### Index Terms:
Metasurfaces, non-Hermitian, surface waves, anisotropic materials, extreme
parameters.
## I Introduction
Surface electromagnetics is a research topic of longstanding interest in
microwave and antenna engineering, which is experiencing a renewed vitality
(see, e.g., [1] for a recent review) in view of the widespread applications of
artificial (metasurfaces) [2] and natural (e.g., graphene) [3] low-dimensional
(2-D) materials.
In addition to enabling advanced wavefront manipulations [4], metasurfaces can
support the propagation of tightly bound surface waves [5, 6, 7, 8] which can
be finely controlled via transformation-optics approaches [9, 10, 11, 12, 13,
14] conceptually similar to those applied for volumetric metamaterials [15,
16]. Likewise, exotic phenomena observed in volumetric metamaterials can be
transposed to “flatland” scenarios. These include, for instance, hyperbolic
propagation (characterized by open dispersion characteristics) [17, 18, 19,
20, 21, 22], topological transitions (from closed elliptic-like to open
hyperbolic-like dispersion characteristics) [18], extreme anisotropy (i.e.,
very elongated dispersion characteristics) [23], and canalization (i.e.,
diffractionless propagation of subwavelength beams) [18, 19, 20, 23].
Interestingly, new intriguing concepts and effects are emerging that are
specific to 2-D materials. Among these, it is worth mentioning the “line
waves”, localized both in-plane and out-of-plane around a surface reactance
discontinuity with dual character (capacitive/inductive) [24, 25], and the
rich moiré physics observed in rotated, evanescently coupled metasurfaces [26,
27].
The above studies have focused on passive scenarios, wherein unavoidable
losses are undesired and their effects need to be minimized. In fact, it has
been shown that, in certain parameter regimes, losses can be beneficially
exploited to enhance the canalization effects [23]. In this study, we further
leverage and generalize this concept by exploring a class of metasurfaces
characterized by tailored spatial modulations of loss and gain. Inspired by
quantum-physics concepts such as “parity-time” (${\cal PT}$) symmetry [28],
these non-Hermitian configurations are garnering a growing interest in several
branches of physics [29], including electromagnetics [30]. In quantum
mechanics, a ${\cal PT}$-symmetric operator is characterized by a potential
function that satisfies the condition $V\left(-x\right)=V^{*}\left(x\right)$,
with $x$ and ∗ denoting a spatial coordinate and complex conjugation,
respectively [28]. In view of the well-known analogies, in electromagnetic
scenarios, this translates into a refractive-index distribution
$n\left(-x\right)=n^{*}\left(x\right)$, which implies an imaginary part with
odd symmetry, corresponding to alternated loss and gain regions. Likewise,
when referred to metasurfaces, of specific interest in our study, this
condition translates as $Z\left(-x\right)=-Z^{*}\left(x\right)$ and
$\sigma\left(-x\right)=-\sigma^{*}\left(x\right)$ for the surface impedance
and conductivity, respectively. Within this framework, several non-Hermitian
metasurface scenarios have been explored, with possible applications to
negative refraction [31], cloaking [32], imaging [33, 34], sensing [35, 36,
37], low-threshold laser and coherent perfect absorbers [38], line waves [39],
and unconventional lattice resonances [40]. However, in most studies gain and
loss are distributed on separate metasurfaces, with out-of-plane coupling, and
there are only a few examples of metasurfaces featuring in-plane modulation of
gain and loss. In fact, the effect of the gain-loss interplay on surface-wave
propagation remains largely unexplored.
Here, inspired by recent findings for volumetric metamaterials [41], we show
that the judicious tailoring of the gain-loss interplay can induce extreme-
anisotropy responses in non-Hermitian metasurfaces. The anisotropy is
particularly pronounced under ${\cal PT}$-symmetry conditions, and yields
strong surface-wave canalization effects that significantly depend on the
gain-loss level. The possibility to control the gain (e.g., via solid-state
amplifiers or optical pumping, depending on the operational frequency) opens
the door to intriguing strategies for dynamic reconfiguration of the response.
The rest of the paper is structured as follows. In Sec. II, we outline the
problem formulation and its geometry. In Sec. III, we derive the parameter
regimes of interest and illustrate some representative results, obtained via
an effective-medium theory (EMT) and full-wave simulations, with details
relegated to two Appendices. Specifically, we study the extreme anisotropy and
associated canalization phenomena that can occur under ${\cal PT}$-symmetry
conditions, as well as the effects of nonlocality and possible departures from
these conditions. In Sec. IV, we explore the practical feasibility of the
required gain levels, also addressing the stability issues. Finally, in Sec.
V, we draw some brief conclusions and discuss possible perspectives.
Figure 1: (a) Problem geometry (details in the text). (b) Effective-parameter
representation.
## II Problem Geometry and Formulation
### II-A Geometry
The problem geometry is schematically illustrated in Fig. 1a. We consider, in
free space, a metasurface of infinite extent in the $x-y$ plane, featuring a
1-D periodic modulation of the surface conductivity along the $y$-direction,
with alternating values $\sigma_{a}$ and $\sigma_{b}$ (and sub-periods $d_{a}$
and $d_{b}$, respectively). Assuming an implicit $\exp(-i\omega t)$ time-
harmonic dependence, the generally complex-valued conductivities are written
as
${\sigma_{a}}={\sigma^{\prime}_{a}}+i{\sigma^{\prime\prime}_{a}},\quad{\sigma_{b}}=-{\sigma^{\prime}_{b}}+i{\sigma^{\prime\prime}_{b}},$
(1)
with the prime and double-prime symbols tagging the real and imaginary parts,
respectively. Throughout the study, we focus on the parameter regime
$\sigma^{\prime}_{a,b}>0,\quad\sigma^{\prime\prime}_{a}\sigma^{\prime\prime}_{b}>0.$
(2)
In view of the assumed time-dependence, the first condition implies that the
“$a$”- and “$b$”-type constituents exhibit loss and gain, respectively. The
second condition instead guarantees that both constituents are either of
inductive ($\sigma^{\prime\prime}_{a,b}>0$) or capacitive
($\sigma^{\prime\prime}_{a,b}<0$) nature; this rules out the possibility of a
hyperbolic response, which is not of interest here since it has already been
studied [19, 20].
We highlight that the surface-conductivity parameterization in (1) and (2) is
especially suited for 2-D materials (e.g., graphene), and can be readily
related to more conventional constitutive parameters. For instance, it can be
obtained from the surface impedance as [42]
$\sigma=\frac{2}{Z},$ (3)
with the factor 2 accounting for the two-faced character of the sheet.
Moreover, for a very thin dielectric layer of thickness $\Delta\ll\lambda$
(with $\lambda=2\pi c/\omega$ denoting the free-space wavelength and $c$ the
corresponding wavespeed), it can be approximately related to the relative
permittivity via [9, 43]
$\sigma\approx\frac{i\left({1-\varepsilon}\right)k\Delta}{\eta},$ (4)
where $k=\omega/c=2\pi/\lambda$ and $\eta=\sqrt{\mu_{0}/\varepsilon_{0}}$
denote the free-space wavenumber and characteristic impedance, respectively.
### II-B Formulation
In [44], it was shown that, in the limit of deeply subwavelength modulation
periods $d=d_{a}+d_{b}\ll\lambda$, a structure like the one in Fig. 1a could
be effectively modeled by a homogeneous, uniaxially anisotropic effective
surface conductivity with relevant components
${\bar{\sigma}_{xx}}={f_{a}}{\sigma_{a}}+{f_{b}}{\sigma_{b}},\quad{\bar{\sigma}_{yy}}=\frac{{{\sigma_{a}}{\sigma_{b}}}}{{{f_{b}}{\sigma_{a}}+{f_{a}}{\sigma_{b}}}},$
(5)
with $f_{a}=d_{a}/d$ and $f_{b}=1-f_{a}$ denoting the filling fractions. These
mixing formulae closely resemble those occurring in the EMT modeling of
multilayered volumetric metamaterials [45].
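As a quick numerical illustration of (5), the following Python snippet (in
normalized units, $\eta=1$, using the ${\cal PT}$-symmetric constituent values
that will reappear in Fig. 3c) confirms that the effective parameters can turn
out purely reactive:

sa = 1.0 + 0.1j   # sigma_a * eta (lossy constituent)
sb = -1.0 + 0.1j  # sigma_b * eta (gain constituent)
fa = 0.5
fb = 1.0 - fa

sigma_xx = fa * sa + fb * sb               # first mixing formula in (5)
sigma_yy = sa * sb / (fb * sa + fa * sb)   # second mixing formula in (5)
print(sigma_xx, sigma_yy)  # -> 0.1j and 10.1j: purely imaginary, cf. Eq. (13)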
We are interested in studying the propagation of surface waves along this
metasurface. In mathematical terms, this entails finding nontrivial source-
free solutions $\propto\exp\left[i\left(k_{x}x+k_{y}y+k_{z}z\right)\right]$
which are evanescent along the $z$-direction and propagating in the $x-y$
plane (see Fig. 1). By assuming for now the EMT model in (5), it can be shown
(see [5, 6, 7] for details) that the wavenumbers must satisfy the dispersion
equation
$kk_{z}\left(4+\eta^{2}\bar{\sigma}_{xx}\bar{\sigma}_{yy}\right)+2\eta\left[k^{2}\left(\bar{\sigma}_{xx}+\bar{\sigma}_{yy}\right)-\bar{\sigma}_{xx}k_{x}^{2}-\bar{\sigma}_{yy}k_{y}^{2}\right]=0,$ (6a)
with the constraints
$k_{x}^{2}+k_{y}^{2}+k_{z}^{2}=k^{2},\quad\operatorname{Im}\left(k_{z}\right)\geq 0.$ (6b)
The numerical solution of the nonlinear system (6a)–(6b) can be efficiently
carried out following the approach described in [19]. The resulting modes are
generally hybrid, and contain as special cases the transverse-electric and
transverse-magnetic modes that can be supported by isotropic, inductive and
capacitive metasurfaces, respectively. Moreover, it is readily verified that
the dispersion equation in (6a) is invariant under the duality transformations
${\bar{\sigma}_{xx}}\to\frac{4}{{{\eta^{2}}{{\bar{\sigma}}_{yy}}}},\quad{\bar{\sigma}_{yy}}\to\frac{4}{{{\eta^{2}}{{\bar{\sigma}}_{xx}}}}.$
(7)
The above relationships somewhat resemble those observed for self-complementary
metasurfaces [46, 47, 48], which exploit Babinet’s principle. In our case,
rather than geometrical self-complementarity, we rely on ${\cal PT}$ symmetry.
In what follows, we will study the effects of the gain-loss interplay on
surface-wave propagation, via the approximate EMT modeling and full-wave
numerical simulations.
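As a concrete illustration of this procedure, the Python sketch below
(normalized units $k=\eta=1$; a simple bracketing root search, offered as an
illustration rather than the solver of [19]) traces the propagating branch of
the EFC for the purely reactive, strongly anisotropic parameter pair used later
in Fig. 3c:

import numpy as np
from scipy.optimize import brentq

k = eta = 1.0
sxx, syy = 0.1j, 10.1j  # reactive effective conductivities (times eta)

def residual(ky, kx):
    # Bound-wave branch of (6b): kz = i*sqrt(kx^2 + ky^2 - k^2), Im(kz) >= 0.
    kz = 1j * np.sqrt(kx**2 + ky**2 - k**2 + 0j)
    F = (k * kz * (4 + eta**2 * sxx * syy)
         + 2 * eta * (k**2 * (sxx + syy) - sxx * kx**2 - syy * ky**2))
    return F.imag  # (6a) is purely imaginary for reactive sheets

efc = []
for kx in np.linspace(0.0, 15.0, 31):
    lo = np.sqrt(max(k**2 - kx**2, 0.0)) + 1e-9  # just outside the light cone
    kys = np.linspace(lo, 25.0 * k, 2000)
    vals = [residual(ky, kx) for ky in kys]
    for a, b, va, vb in zip(kys[:-1], kys[1:], vals[:-1], vals[1:]):
        if va * vb < 0:  # a sign change brackets a root of (6a)
            efc.append((kx, brentq(residual, a, b, args=(kx,))))
            break

for kx, ky in efc[::6]:
    print(f"kx = {kx:5.2f} k  ->  ky = {ky:6.3f} k")

For these parameters, real roots are found up to the spatial bandwidth
quantified later in (15), here $k_{x}^{\left(\max\right)}\approx 20k$, and the
resulting contour is nearly flat, anticipating the canalization effects
discussed in Sec. III.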
## III Modeling and Results
### III-A Loss-Gain Compensation in Effective Parameters
Looking at the mixing formulae in (5), an interesting question is whether
there exist specific combinations of the constituents (in terms of
$\sigma_{a}$, $\sigma_{b}$ and $f_{a}$) that render the effective parameters
purely reactive. Within the limitations of the EMT, this would imply a perfect
balancing of the loss and gain effects, which in the language of ${\cal
PT}$-symmetry, corresponds to the symmetric phase [28]. By substituting the
complex-valued conductivities (1) in the mixing formulae (5), and enforcing
the zeroing of the effective-parameter real parts, we obtain the conditions
(see Appendix A for details)
$f_{a}=\frac{\sigma^{\prime}_{b}}{\sigma^{\prime}_{a}+\sigma^{\prime}_{b}},$ (8a)
${\sigma^{\prime\prime}_{a}}=\pm\sqrt{\left|\sigma_{b}\right|^{2}-\left(\sigma^{\prime}_{a}\right)^{2}}.$ (8b)
In view of the assumption $\sigma^{\prime}_{a,b}>0$, the condition in (8a) is
always feasible ($0\leq f_{a}\leq 1$). On the other hand, we must enforce the
additional constraint
$\left|\sigma_{b}\right|>\sigma^{\prime}_{a},$ (9)
in order to ensure that ${\sigma^{\prime\prime}_{a}}$ in (8b) is consistently
real-valued. Moreover, in view of our assumption
$\sigma^{\prime\prime}_{a}\sigma^{\prime\prime}_{b}>0$, the sign determination
in (8b) must be chosen consistently with the sign of
$\sigma^{\prime\prime}_{b}$. By enforcing the conditions (8) in (5), after
some algebra, we obtain
${\bar{\sigma}}_{xx}=i\frac{\sigma^{\prime}_{a}\sigma^{\prime\prime}_{b}\pm\sigma^{\prime}_{b}\sqrt{\left|\sigma_{b}\right|^{2}-\left(\sigma^{\prime}_{a}\right)^{2}}}{\sigma^{\prime}_{a}+\sigma^{\prime}_{b}},$ (10a)
${\bar{\sigma}}_{yy}=i\frac{\sigma^{\prime}_{a}\sigma^{\prime\prime}_{b}\mp\sigma^{\prime}_{b}\sqrt{\left|\sigma_{b}\right|^{2}-\left(\sigma^{\prime}_{a}\right)^{2}}}{\sigma^{\prime}_{a}-\sigma^{\prime}_{b}},$ (10b)
which identify a class of parameter combinations that yield purely imaginary
effective parameters, i.e., loss-gain balance. This result is not necessarily
surprising, as a balanced amount of gain and loss may be expected to
compensate each other, but we will show hereafter that the gain-loss interplay
may have much deeper implications on the modes supported by this metasurface.
Figure 2: Anisotropy ratio
$\left|{\bar{\sigma}}_{yy}\right|/\left|{\bar{\sigma}}_{xx}\right|$ for ${\cal
PT}$-symmetric configurations, as a function of the gain-loss parameter
$\sigma^{\prime}\eta$, for representative values of the normalized
susceptance: $\sigma^{\prime\prime}\eta=1$ (red-solid curve),
$\sigma^{\prime\prime}\eta=0.1$ (blue-solid), and
$\sigma^{\prime\prime}\eta=0.01$ (green-dotted). Note the log-log scale.
Figure 3: Examples of EFCs for ${\cal PT}$-symmetric inductive configurations
with $\sigma^{\prime\prime}\eta=0.1$. (a) $\sigma^{\prime}=0$
(${\bar{\sigma}}_{xx}\eta={\bar{\sigma}}_{yy}\eta=i0.1$,
$\left|{\bar{\sigma}}_{yy}\right|/\left|{\bar{\sigma}}_{xx}\right|=1$). (b)
$\sigma^{\prime}\eta=0.3$ (${\bar{\sigma}}_{xx}\eta=i0.1$,
${\bar{\sigma}}_{yy}\eta=i$,
$\left|{\bar{\sigma}}_{yy}\right|/\left|{\bar{\sigma}}_{xx}\right|=10$). (c)
$\sigma^{\prime}\eta=1$ (${\bar{\sigma}}_{xx}\eta=i0.1$,
${\bar{\sigma}}_{yy}\eta=i10.1$,
$\left|{\bar{\sigma}}_{yy}\right|/\left|{\bar{\sigma}}_{xx}\right|=101$). (d)
$\sigma^{\prime}\eta=3$, (${\bar{\sigma}}_{xx}\eta=i0.1$,
${\bar{\sigma}}_{yy}\eta=i90.1$,
$\left|{\bar{\sigma}}_{yy}\right|/\left|{\bar{\sigma}}_{xx}\right|=901$). In
view of the inherent symmetry, only the $k_{y}>0$ branches are shown. Note the
different scales on the vertical axes.
Figure 4: Examples of EFCs for ${\cal PT}$-symmetric capacitive configurations.
(a) $\sigma^{\prime}=0$ and
$\sigma^{\prime\prime}\eta=-40$
(${\bar{\sigma}}_{xx}\eta={\bar{\sigma}}_{yy}\eta=-i40$,
$\left|{\bar{\sigma}}_{yy}\right|/\left|{\bar{\sigma}}_{xx}\right|=1$). (b)
$\sigma^{\prime}\eta=12$ and $\sigma^{\prime\prime}\eta=-5$
(${\bar{\sigma}}_{xx}\eta=-i5$, ${\bar{\sigma}}_{yy}\eta=-i33.8$,
$\left|{\bar{\sigma}}_{yy}\right|/\left|{\bar{\sigma}}_{xx}\right|=6.76$). (c)
$\sigma^{\prime}\eta=4$ and $\sigma^{\prime\prime}\eta=-0.5$
(${\bar{\sigma}}_{xx}\eta=-i0.5$, ${\bar{\sigma}}_{yy}\eta=-i32.5$,
$\left|{\bar{\sigma}}_{yy}\right|/\left|{\bar{\sigma}}_{xx}\right|=65$). (d)
$\sigma^{\prime}\eta=0.2$ and $\sigma^{\prime\prime}\eta=-0.001$
(${\bar{\sigma}}_{xx}\eta=-i0.001$, ${\bar{\sigma}}_{yy}\eta=-i40$,
$\left|{\bar{\sigma}}_{yy}\right|/\left|{\bar{\sigma}}_{xx}\right|=4000$). In
view of the inherent symmetry, only the $k_{y}>0$ branches are shown. Note the
different scales on the vertical axes.
### III-B ${\cal PT}$-Symmetry-Induced Extreme Anisotropy
By inspection of (10), it can be observed that the conductivity components can
differ substantially in the limit
$\sigma^{\prime}_{a}\rightarrow\sigma^{\prime}_{b}$. From (8), this yields
$f_{a}=f_{b}=0.5$ and (with the sign determination of interest here)
${\sigma^{\prime\prime}_{a}}={\sigma^{\prime\prime}_{b}}$. In other words, by
removing irrelevant superscripts, we obtain
$\sigma_{a}=\sigma^{\prime}+i\sigma^{\prime\prime},\quad\sigma_{b}=-\sigma^{\prime}+i\sigma^{\prime\prime},$
(11)
which, with a suitable choice of the reference-system origin, corresponds to
the aforementioned ${\cal PT}$-symmetry condition
$\sigma\left(-y\right)=-\sigma^{*}\left(y\right).$ (12)
It can be shown (see Appendix A for details) that, under these conditions, the
effective parameters reduce to
${\bar{\sigma}}_{xx}=i\sigma^{\prime\prime},\quad{\bar{\sigma}}_{yy}=i\frac{\left|\sigma\right|^{2}}{\sigma^{\prime\prime}}.$
(13)
The remarkably simple expressions in (13) clearly show that the gain-loss
interplay can significantly affect the effective-parameter anisotropy. In
particular, it is evident that the conditions
$\left|\sigma^{\prime\prime}\right|\eta\ll
1,\quad\sigma^{\prime}\gg\left|\sigma^{\prime\prime}\right|$ (14)
would lead to
$\left|{\bar{\sigma}}_{xx}\right|\ll\left|{\bar{\sigma}}_{yy}\right|$, i.e.,
extreme anisotropy. For a quantitative illustration, Fig. 2 shows the
anisotropy ratio
$\left|{\bar{\sigma}}_{yy}\right|/\left|{\bar{\sigma}}_{xx}\right|$ as a
function of the gain-loss parameter $\sigma^{\prime}\eta$ for various
values of the normalized susceptance $\sigma^{\prime\prime}\eta$; note that
results do not depend on the sign of $\sigma^{\prime\prime}$, i.e., on the
inductive/capacitive character. However, for a given anisotropy ratio, the
dispersion characteristics do depend on the reactive character of the
metasurface.
For example, Fig. 3 shows some representative dispersion characteristics, in
terms of equi-frequency contours (EFCs), for an inductive
($\sigma^{\prime\prime}>0$) scenario, by fixing
$\sigma^{\prime\prime}\eta=0.1$ and varying the gain-loss parameter
$\sigma^{\prime}\eta$. As can be observed, the field exhibits a propagating
character within a spectral wavenumber region that can be estimated
analytically from (6b) (see Appendix A for details), viz.
$\left|k_{x}\right|\leq
k_{x}^{\left({\max}\right)}=k\sqrt{1-\frac{4}{{\bar{\sigma}_{xx}^{2}{\eta^{2}}}}}\approx\frac{{2k}}{{\sigma^{\prime\prime}}\eta},$
(15)
with the approximate equality holding in the limit
$\sigma^{\prime\prime}\eta\ll 1$. Quite interestingly, this spatial bandwidth
is essentially controlled by the normalized susceptance
$\sigma^{\prime\prime}\eta$. Outside this region, the field is evanescent,
with a purely imaginary $k_{y}$ (not shown for brevity). The gain-loss
parameter $\sigma^{\prime}\eta$ controls instead the anisotropy degree.
Specifically, starting from the trivial ($\sigma^{\prime}=0$) isotropic case
(Fig. 3a), the anisotropy becomes increasingly pronounced by increasing
$\sigma^{\prime}$ (Figs. 3b and 3c), and the EFCs approach a limiting curve
for $\sigma^{\prime}\eta\gg 1$ (Fig. 3d).
The responses for capacitive ($\sigma^{\prime\prime}<0$) scenarios can
be in principle obtained via the duality transformations in (7). However, for
completeness, they are also exemplified in Fig. 4 directly in terms of the
constituent parameters $\sigma^{\prime}$ and $\sigma^{\prime\prime}$. In this
case, the propagating spectral region is given by (see Appendix A for details)
$\left|k_{x}\right|\leq
k_{x}^{\left({\max}\right)}=k\sqrt{1-\frac{\bar{\sigma}_{yy}^{2}{\eta^{2}}}{4}}\approx\frac{{k\eta\left(\sigma^{\prime}\right)^{2}}}{2\left|\sigma^{\prime\prime}\right|},$
(16)
where the approximate equality holds in the asymptotic limit
$\eta\left(\sigma^{\prime}\right)^{2}\gg\left|\sigma^{\prime\prime}\right|$.
We observe that, unlike the inductive case, the spatial bandwidth now depends
on both $\sigma^{\prime}$ and $\sigma^{\prime\prime}$; the representative
values considered in Fig. 4 are chosen so as to maintain the spatial bandwidth
$k_{x}^{\left(\max\right)}\approx 20k$, progressively moving from perfect isotropy (Fig.
4a) to extreme anisotropy (Fig. 4d).
Figure 5: Examples of canalization effects. (a) Numerically computed in-plane
field map [$\mbox{Re}\left(H_{z}\right)$], in false-color scale, pertaining to
a metasurface with effective parameters ${\bar{\sigma}_{xx}}\eta=i0.1$,
${\bar{\sigma}_{yy}}\eta=i10.1$. (b) Corresponding results for the actual
${\cal PT}$-symmetric conductivity modulation, with $\sigma_{a}\eta=1+i0.1$,
$\sigma_{b}\eta=-1+i0.1$, $f_{a}=f_{b}=0.5$, and $d=0.025\lambda$. Fields are
excited by a $z$-directed elementary magnetic dipole located at $x=0$,
$y=-0.1\lambda$, $z=0.001\lambda$, and are computed at $z=0.01\lambda$. (c)
Spatial spectrum (2-D Fourier transform) magnitude, in false color scale, of
the field map in panel (a). Also shown (white dashed curve), as a reference,
is the theoretical EFC from Fig. 3c.
### III-C Canalization Effects
Canalization effects, intended as the diffractionless transfer of
subwavelength features over distances of several wavelengths, have been
observed in hyperbolic [18, 19] and extreme-anisotropy [23] metasurfaces, and
loss-induced canalization has also been demonstrated [49].
These effects can be intuitively understood by looking at the examples in
Figs. 3 and 4 with higher anisotropy (e.g., Figs. 3c, 3d, 4c and 4d). It is
apparent that a significant fraction of high-$k_{x}$ spectral components can
propagate as unattenuated surface waves in the $x-y$ plane and, in view of the
pronounced flatness of the EFCs, their group velocities (normal to the EFCs)
are predominantly $y$-directed.
For illustration, Fig. 5 shows the surface-wave propagation along an
inductive, non-Hermitian metasurface (with parameters as in Fig. 3c), excited
by a $z$-directed elementary dipole. Specifically, Figs. 5a and 5b show the
numerically computed (see Appendix B for details) field maps at a close
distance from the metasurface, by considering the EMT model and actual
conductivity modulation, respectively. Results are in fair agreement, with the
differences attributable to nonlocal effects (see Sec. III-E below). In
particular, they clearly display the aforementioned canalization effect, with
unattenuated, diffractionless propagation of sub-wavelength features. This is
markedly different from the conventional cylindrical wavefronts that would be
observed in a homogeneous, isotropic case. As further quantitative evidence,
Fig. 5c shows the spatial spectrum (2-D Fourier transform) of the EMT field
map, which is essentially peaked around the theoretical EFC. Similar results,
not shown for brevity, are observed for capacitive scenarios as well.
Canalization effects like those in Fig. 5 are of great interest for
applications to high-resolution imaging. Although the above phenomena may
appear qualitatively similar to the canalization effects observed in
hyperbolic metasurfaces [18, 19], we stress that the underlying physics is
completely different. While in the hyperbolic case these effects are induced
by the dual character (inductive/capacitive) of the two constituents, in our
case there is no contrast between the reactive parts, and the extreme
anisotropy is solely induced by the gain-loss interplay.
Figure 6: Representative EFCs (top panels: real parts; bottom panels:
imaginary parts) for non-${\cal PT}$-symmetric configurations. (a), (b) No
gain: $\sigma_{a}\eta=1+i0.1$, $\sigma_{b}\eta=i0.1$, $f_{a}=f_{b}=0.5$
(${\bar{\sigma}}_{xx}\eta=0.5+i0.1$, ${\bar{\sigma}}_{yy}\eta=0.019+i0.196$).
(c), (d) Gain-loss-balanced: $\sigma_{a}\eta=0.8+i0.608$,
$\sigma_{b}\eta=-1+i0.1$, $f_{a}=0.556$, $f_{b}=0.444$
(${\bar{\sigma}}_{xx}\eta=i0.382$, ${\bar{\sigma}}_{yy}\eta=i2.641$). (e), (f)
Imperfect ${\cal PT}$ symmetry: $\sigma_{a}\eta=1.2+i0.1$,
$\sigma_{b}\eta=-1+i0.1$, $f_{a}=f_{b}=0.5$
(${\bar{\sigma}}_{xx}\eta=0.1+i0.1$, ${\bar{\sigma}}_{yy}\eta=-5.95+i6.15$).
(g), (h) Imperfect ${\cal PT}$ symmetry: $\sigma_{a}\eta=1+i0.1$,
$\sigma_{b}\eta=-1+i0.15$, $f_{a}=f_{b}=0.5$
(${\bar{\sigma}}_{xx}\eta=i0.125$, ${\bar{\sigma}}_{yy}\eta=0.4+i8.12$). In
view of the inherent symmetry, only the $\mbox{Re}\left(k_{y}\right)>0$
branches are shown. Note the different scales on the vertical axes.
### III-D Effects of Departure from ${\cal PT}$ Symmetry
To better understand the crucial role played by ${\cal PT}$ symmetry in
establishing the extreme-anisotropy response, it is insightful to explore
different configurations that do not fulfill such condition. Within this
framework, it is also worth highlighting that the unavoidable material
dispersion dictates (via causality) that the ${\cal PT}$-symmetry condition
may occur only at isolated frequencies [35, 50], and therefore the above
phenomena are inherently narrowband.
Figure 6 shows the EFCs pertaining to four representative non-${\cal
PT}$-symmetric configurations of interest. Specifically, Figs. 6a and 6b show
the (real and imaginary, respectively) results for the inductive parameter
configuration in Fig. 3c, but in the absence of gain ($\sigma_{a}\eta=1+i0.1$,
$\sigma_{b}\eta=i0.1$). As also evident from the effective parameters
(${\bar{\sigma}}_{xx}\eta=0.5+i0.1$, ${\bar{\sigma}}_{yy}\eta=0.019+i0.196$),
the extreme anisotropy is now lost and, most importantly, the propagation is
severely curtailed in view of the substantial values of
$\mbox{Im}\left(k_{y}\right)$. This is clearly visible in the corresponding
field map shown in Fig. 7a. Such a stark difference from the ${\cal
PT}$-symmetric case suggests interesting possibilities for dynamically
reconfiguring the surface-wave response, e.g., switching between a
propagating, canalized regime and a strong damping by (de)activating the gain.
Figures 6c and 6d illustrate another interesting example with parameters
$\sigma_{a}\eta=0.8+i0.608$, $\sigma_{b}\eta=-1+i0.1$, $f_{a}=0.556$,
$f_{b}=0.444$ satisfying the gain-loss balance conditions in (8) but not the
${\cal PT}$-symmetry condition. As expected, since the effective parameters
are purely reactive (inductive), the EFCs exhibit propagating and evanescent
regions as in Fig. 3. However, although the gain constituent is the same as in
Fig. 3c, the spatial bandwidth and the anisotropy are less pronounced. As can
be observed in the corresponding field map shown in Fig. 7b, this results in a
general deterioration of the canalization effects. Generally speaking, it can
be verified numerically that, for a fixed gain constituent, among the infinite
parameter configurations that fulfill the gain-loss balance conditions in (8),
the ${\cal PT}$-symmetry condition in (12) guarantees the maximum anisotropy.
As previously mentioned, when material dispersion is taken into account, the
${\cal PT}$-symmetry condition can only be perfectly satisfied at isolated
frequencies. It is therefore instructive to look at the effects of moderate
mismatches that can occur at close-by frequencies. For illustration, Fig. 6e
and 6f show the EFCs for parameters as in Fig. 3c when the susceptances are
perfectly balanced, but there is a moderate gain-loss mismatch
($\sigma_{a}\eta=1.2+i0.1$, $\sigma_{b}\eta=-1+i0.1$). Interestingly, the
resulting effective parameters (${\bar{\sigma}}_{xx}\eta=0.1+i0.1$,
${\bar{\sigma}}_{yy}\eta=-5.95+i6.15$) exhibit simultaneously loss and gain
along different directions. Such “indefinite“ non-Hermitian character is also
observed in volumetric metamaterials [51, 41]. The EFCs differ substantially
from the perfectly ${\cal PT}$-symmetric reference case in Fig. 3c, especially
in light of an imaginary part which assumes small negative values at lower
wavenumbers and increasingly positive values for higher wavenumbers. Figure 7c
shows the corresponding field map, from which we still observe some
canalization effects, though with a more visible spreading and attenuation by
comparison with Fig. 5.
For the same parameter configuration, Figs. 6g and 6h show instead the effects
of an imbalance in the susceptances (while maintaining the inductive
character), but assuming now perfect gain-loss balance ($\sigma_{a}\eta=1+i0.1$,
$\sigma_{b}\eta=-1+i0.15$). In spite of the sizable imbalance, the EFC departures
from the perfectly ${\cal PT}$-symmetric reference case in Fig. 3c seem less
dramatic. As can be observed, there is still an extended spectral region
characterized by a small positive imaginary part
[$\mbox{Im}\left(k_{y}\right)\lesssim 0.02k$] and pronounced anisotropy,
although the spatial bandwidth is moderately reduced. Quite interestingly,
outside this region we observe $\mbox{Im}\left(k_{y}\right)<0$ which, assuming
the conventional choice $\mbox{Re}\left(k_{y}\right)>0$ for the branch-cut,
would imply amplification. However, this is not necessarily the case since, in
structures mixing gain and loss, an a priori choice of the branch-cut is not
obvious in the presence of unbounded domains. In fact, a counterintuitive sign
flip in the propagation constant has been observed at certain critical
incidence conditions [52] (see also the discussion in [41]). In our specific
case, the numerical simulations in Fig. 7d indicate the presence of
canalization effects accompanied by a mild attenuation, thereby suggesting
that $\mbox{Re}\left(k_{y}\right)<0$ may be the proper choice for the branch-
cut in the regions where $\mbox{Im}\left(k_{y}\right)<0$.
To sum up, the above examples indicate that, for moderate departures from the
${\cal PT}$-symmetry conditions, canalization effects are still attainable but
with reduced resolution and propagation distance. In this respect, the gain-
loss (im)balance turns out to be more critical than that in the susceptance.
Figure 7: Numerically computed in-plane field maps
[$\mbox{Re}\left(H_{z}\right)$], in false-color scale, pertaining to the
non-${\cal PT}$-symmetric configurations in Fig. 6. (a) No-gain, as in Figs.
6a and 6b. (b) Gain-loss-balanced, as in Figs. 6c and 6d. (c) Imperfect ${\cal
PT}$ symmetry, as in Figs. 6e and 6f. (d) Imperfect ${\cal PT}$ symmetry, as
in Figs. 6g and 6h. Fields are excited by a $z$-directed elementary magnetic
dipole located at $x=0$, $y=-0.1\lambda$, $z=0.001\lambda$, and are computed
at $z=0.01\lambda$.
Figure 8: EFCs pertaining to a ${\cal PT}$-symmetric parameter configuration
with $\sigma_{a}\eta=1+i0.1$, $\sigma_{b}\eta=-1+i0.1$, and $f_{a}=f_{b}=0.5$.
Comparison between the EMT prediction (red-solid curve;
${\bar{\sigma}}_{xx}\eta=i0.1$, ${\bar{\sigma}}_{yy}\eta=i10.1$) and full-wave
numerical simulations for $d=0.001\lambda$ (blue-dashed), $d=0.01\lambda$
(green-dotted), and $d=0.02\lambda$ (purple-dashed-dotted). In view of the
inherent symmetry, only the $k_{y}>0$ branches are shown.
### III-E Nonlocal Effects
As previously mentioned, the EMT approximation in (5) is generally accurate
for deeply subwavelength modulation periods $d\ll\lambda$. For a more
quantitative assessment, Fig. 8 compares its predictions with the rigorous
EFCs obtained via full-wave numerical simulations (see Appendix B), for
different values of the modulation period $d$. We observe that the results for
$d=0.001\lambda$ are hardly distinguishable from the EMT prediction, whereas
some visible differences progressively appear for $d=0.01\lambda$ and
$d=0.02\lambda$, with changes in the local curvature and moderate reductions
in the spatial bandwidth. Interestingly, the propagation constant remains
purely real, thereby indicating that the ${\cal PT}$-symmetry-induced gain-loss
compensation extends beyond the range of validity of the EMT approximation.
The observed departures from the EMT predictions indicate that nonlocal
effects (i.e., spatial dispersion) are no longer negligible. In principle,
these effects can be captured by introducing in the effective parameters some
wavevector-dependent correction terms, e.g., along the lines of [53].
## IV Feasibility Issues
### IV-A Possible Implementation Strategies
Although this study is essentially focused on exploring the basic
phenomenology, and a practical implementation requires further investigations,
one might wonder to what extent the parameter configurations required for
${\cal PT}$-symmetry-induced extreme anisotropy are feasible. Within this
framework, the most critical element is the gain constituent, whose
implementation varies with the operational frequency of interest. For
instance, at microwave frequencies, active metasurfaces typically rely on
negative-resistance elements based on amplifiers [54] or tunnel diodes [55].
At terahertz frequencies, an optically pumped graphene monolayer may be a
viable option, as it can support population inversion and a negative dynamic
conductivity [56]. Specifically, for typical parameters found in the
literature [35, 39], the real part can reach values
$\sigma_{g}^{\prime}\approx-0.02/\eta$, while the imaginary part
$\sigma_{g}^{\prime\prime}$ can be tuned between positive and negative values
(ranging approximately from $-0.05/\eta$ to $0.05/\eta$) by acting on the
frequency and quasi-Fermi energy. These figures make it possible, in principle,
to attain the large anisotropy ratios of interest here.
At optical wavelengths, gain media are typically obtained by doping host media
with organic dyes [57, 58] or quantum dots [59, 60, 61]. To derive some basic
quantitative estimates, we consider the thin-dielectric-layer model in (4),
and invert for the complex-valued relative permittivity
$\varepsilon=\varepsilon^{\prime}+i\varepsilon^{\prime\prime}=1+i\frac{\sigma\eta}{{k\Delta}}.$
(17)
Accordingly, the conditions in (14) for extreme anisotropy translate into
$\left|1-\varepsilon^{\prime}\right|k\Delta\ll
1,\quad\left|1-\varepsilon^{\prime}\right|\ll\left|\varepsilon^{\prime\prime}\right|.$
(18)
For instance, assuming $k\Delta=0.1$, in the capacitive case, a relative
permittivity $\varepsilon=1.01-i2$ would yield a normalized conductivity
$\sigma\eta=-0.2-i0.001$, like the one considered in the extreme-anisotropy
case in Fig. 4d. These permittivity values are in line with those attainable
at infrared wavelengths by doping transparent conductive oxides (such as
indium tin oxide) with lanthanides [41, 62, 63, 64]. Similar considerations
hold for the inductive case too.
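The quoted figures can be double-checked with a two-line evaluation of (4) in
normalized units (an illustrative snippet):

k_delta = 0.1                         # k * Delta
eps = 1.01 - 2j                       # relative permittivity of the gain layer
sigma_eta = 1j * (1 - eps) * k_delta  # (sigma * eta) per Eq. (4)
print(sigma_eta)                      # -> (-0.2-0.001j), as quoted above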
As for the practical realizability of the required gain-loss spatial
modulation, one possibility could be to rely on high-resolution selective
optical pumping [65], possibly based on digital spatial light modulators [66].
Alternatively, one could think of relying on a uniform optical pumping, and
patterning a thin layer of gain material with thin, lossy strips, so as to
suitably overcompensate the gain in certain selected regions.
The above considerations suggest that the gain-loss configurations of interest
are within reach with current or near-future technologies, although further
studies are needed to develop some practical designs. In particular, the
presence of a substrate should also be taken into account, and a simple EMT
modeling may not be applicable, thereby requiring extensive numerical
optimization driven by full-wave simulations. These aspects are beyond the
scope of the present investigation, and will be the subject of forthcoming
studies.
Figure 9: (a), (b) Real and imaginary parts, respectively, of dispersive
models for loss and gain constituents, from (19). Parameters are chosen as:
$A_{L}=A_{G}=2.02\cdot 10^{-2}\omega_{0}$, $\omega_{L}=0.999\omega_{0}$,
$\omega_{G}=1.001\omega_{0}$, $\Gamma_{L}=\Gamma_{G}=0.02\omega_{0}$, in order
to satisfy the conditions in Fig. 5, $\sigma_{a}\eta=1+i0.1$,
$\sigma_{b}\eta=-1+i0.1$, at the operational radian frequency $\omega_{0}$.
Figure 10: Representative stability maps for the parameter configuration in
Fig. 9. (a), (b), (c), (d) Inverse residual (magnitude of left-hand-side
reciprocal) pertaining to the dispersion equation (6a) for $k_{x}=0$,
$k_{x}=5k_{0}$, $k_{x}=10k_{0}$, $k_{x}=15k_{0}$, respectively, in the complex
${\tilde{\omega}}=\omega^{\prime}+i\omega^{\prime\prime}$ plane;
$k_{0}=\omega_{0}/c$ denotes the wavenumber at the operational frequency,
$k_{y}$ and $k_{z}$ are determined from the nominal EFC in Fig. 3c and from
(6b), respectively. The insets show magnified views around
($\omega^{\prime}=\omega_{0}$, $\omega^{\prime\prime}=0$). Note the presence
of surface-wave poles on the real axis at $\omega^{\prime}=\omega_{0}$, and
the absence of poles in the upper half-plane $\omega^{\prime\prime}>0$.
### IV-B Stability
An important issue in non-Hermitian configurations featuring gain is the
potential onset of instability, manifested as self-oscillations supported by
the system [67]. In previous studies dealing with planar [33] and cylindrical
[34] non-Hermitian metasurfaces, the stability issue was addressed by
introducing a physical, dispersive model for the gain constituent, and by
looking at the poles of a relevant transfer function analytically continued in
the complex frequency plane. Here, we follow a similar approach, and consider
for the conductivities in the loss and gain regions a Lorentzian (standard and
inverted, respectively) model [68]
$\sigma_{a}\left(\omega\right)=\frac{iA_{L}\omega}{\eta\left(\omega^{2}-\omega_{L}^{2}+i\Gamma_{L}\omega\right)},$ (19a)
$\sigma_{b}\left(\omega\right)=-\frac{iA_{G}\omega}{\eta\left(\omega^{2}-\omega_{G}^{2}+i\Gamma_{G}\omega\right)},$ (19b)
where $A_{L,G}$, $\omega_{L,G}$ and $\Gamma_{L,G}$ denote some dimensional
constants, peak radian frequencies, and damping factors, respectively. Figure
9 shows the above dispersion laws with parameters chosen so as to attain at an
operational radian frequency $\omega_{0}$ the nominal values
$\sigma_{a}\eta=1+i0.1$, $\sigma_{b}\eta=-1+i0.1$ considered in the
canalization example of Fig. 5.
We then consider the EMT dispersion equation (6a), which can be viewed as a
pole condition for the reflection/transmission coefficient under plane-wave
illumination from free space, and substitute the dispersive effective
parameters computed from (10) with (19). Finally, for fixed values of the
wavenumbers, we look for roots of (6a) (i.e., poles) in the complex
${\tilde{\omega}}=\omega^{\prime}+i\omega^{\prime\prime}$ plane. In view of
the assumed time-harmonic convention, instability corresponds to complex poles
lying in the upper half-plane $\omega^{\prime\prime}>0$.
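A minimal Python sketch of this scan is given below (normalized units
$c=\eta=\omega_{0}=1$; the grid extents and the sample wavenumber point are
illustrative choices, not those of Fig. 10):

import numpy as np

w0 = c = eta = 1.0
AL = AG = 2.02e-2 * w0               # Lorentzian parameters of Fig. 9
wL, wG = 0.999 * w0, 1.001 * w0
GL = GG = 0.02 * w0

def sig_a(w):  # standard Lorentzian (loss), Eq. (19a)
    return 1j * AL * w / (eta * (w**2 - wL**2 + 1j * GL * w))

def sig_b(w):  # inverted Lorentzian (gain), Eq. (19b)
    return -1j * AG * w / (eta * (w**2 - wG**2 + 1j * GG * w))

def F(w, kx, ky):
    # Left-hand side of (6a), with dispersive EMT parameters (f_a = f_b = 0.5)
    # and kz recomputed from (6b) at each complex frequency.
    k = w / c
    sa, sb = sig_a(w), sig_b(w)
    sxx = 0.5 * (sa + sb)
    syy = sa * sb / (0.5 * (sa + sb))
    kz = 1j * np.sqrt(kx**2 + ky**2 - k**2)
    return (k * kz * (4 + eta**2 * sxx * syy)
            + 2 * eta * (k**2 * (sxx + syy) - sxx * kx**2 - syy * ky**2))

kx, ky = 0.0, 1.01 * w0 / c              # illustrative point near the nominal EFC
wr = np.linspace(0.99, 1.01, 401) * w0   # Re(omega) sweep around w0
wi = np.linspace(1e-4, 5e-3, 101) * w0   # Im(omega) > 0: instability region
W = wr[None, :] + 1j * wi[:, None]
inverse_residual = 1.0 / np.abs(F(W, kx, ky))
print("max 1/|F| for Im(omega) > 0:", inverse_residual.max())

Sharp peaks of the inverse residual at $\omega^{\prime\prime}>0$ would flag
unstable poles; their absence over the scanned grid is consistent with the
stable behavior discussed next.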
Figure 10 shows stability maps for representative values of the wavenumbers on
the nominal EFC in Fig. 3c. We observe the expected presence of the surface-
wave pole on the real axis at the operational radian frequency
$\omega^{\prime}=\omega_{0}$, and the absence of poles in the upper half-plane
$\omega^{\prime\prime}>0$, which indicates that the system is unconditionally
stable for any temporal excitation. The above examples only serve to
illustrate that it is possible, in principle, to attain stability via suitable
dispersion engineering. Clearly, different parameter choices in the dispersion
models and/or the anisotropy ratios may induce the transition of some poles to
the upper half-plane $\omega^{\prime\prime}>0$, thereby causing instabilities.
## V Conclusions and Perspectives
In conclusion, we have studied the surface-wave propagation for a class of
non-Hermitian metasurfaces based on ${\cal PT}$-symmetric modulations of the
surface conductivity. Via a simple EMT model, we have shown that a suitably
tailored gain-loss interplay can induce extreme anisotropy, giving rise to
interesting canalization effects. These theoretical predictions are in good
agreement with numerical full-wave simulations. Moreover, we have explored the
effects of possible departures from perfect ${\cal PT}$ symmetry, as well as
of nonlocality. Finally, we have addressed some preliminary feasibility
issues, including the stability.
The outcomes from this study open new perspectives in the design of
metasurfaces. Within this enlarged framework, the parameter space extends over
the entire complex plane of the complex conductivity, and losses are not
treated as second-order, detrimental effects to be minimized or compensated.
Instead, their interplay with gain is harnessed to attain exotic dispersion
effects, which can be dynamically controlled and/or reconfigured by acting on
the gain level (e.g., via optical pumping), while maintaining a strong out-of-
plane confinement. This can find a variety of potential applications in
scenarios including sensing, sub-diffractive imaging and communications.
Current and future studies are aimed at exploring such applications and
designing some practical implementations of the idealized configuration
considered here. Also of great interest is the study of exceptional points and
lasing conditions in these metasurfaces [30].
## Appendix A Details on EMT Modeling
By substituting (1) in (5), we derive the real parts of the effective
parameters
${\bar{\sigma}}_{xx}^{\prime}=f_{a}\sigma_{a}^{\prime}-\left(1-f_{a}\right)\sigma_{b}^{\prime},$ (20a)
${\bar{\sigma}}_{yy}^{\prime}=\frac{f_{a}\sigma_{a}^{\prime}\left|\sigma_{b}\right|^{2}-\left(1-f_{a}\right)\sigma_{b}^{\prime}\left|\sigma_{a}\right|^{2}}{D\left(\sigma_{a},\sigma_{b},f_{a}\right)},$ (20b)
with
$D\left(\sigma_{a},\sigma_{b},f_{a}\right)=\left(1-f_{a}\right)^{2}\left|\sigma_{a}\right|^{2}+f_{a}^{2}\left|\sigma_{b}\right|^{2}+2f_{a}\left(1-f_{a}\right)\left(\sigma_{a}^{\prime}\sigma_{b}^{\prime}+\sigma_{a}^{\prime\prime}\sigma_{b}^{\prime\prime}\right).$ (21)
By recalling that, in view of the assumptions (2),
$D\left(\sigma_{a},\sigma_{b},f_{a}\right)$ in (21) is a sum of non-negative
terms, and hence never vanishes, Eqs. (8)–(10) follow from zeroing (20a) and
the numerator of (20b).
In the ${\cal PT}$-symmetric case, the expressions in (13) can be directly
obtained by substituting (11) in (5) or, equivalently, by particularizing
(10). In this last case, a limit operation is entailed in (10b), which yields
a $0/0$ indeterminate form that can be evaluated by means of the L’Hôpital’s
rule.
The spatial bandwidths in (15) and (16) are calculated by solving the
dispersion equation (6a) for $k_{y}=0$. Solving with respect to $k_{z}$, we
find two solutions
find two solutions
$k_{zi}=-\frac{2k}{\eta{\bar{\sigma}}_{xx}},\quad
k_{zc}=-\frac{k\eta{\bar{\sigma}}_{yy}}{2}.$ (22)
Recalling the assumed branch-cut $\mbox{Im}\left(k_{z}\right)\geq 0$, it is
readily verified that $k_{zi}$ is the proper solution in the inductive case
(${\bar{\sigma}}_{xx}^{\prime\prime}>0$), whereas $k_{zc}$ should be selected
in the capacitive case (${\bar{\sigma}}_{yy}^{\prime\prime}<0$). The
equalities in (15) and (16) follow from solving the dispersion equation (6a)
with respect to $k_{x}$, with the proper choice of $k_{z}$ in (22).
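As a quick numerical sanity check of (22) in normalized units (an illustrative
snippet, not part of the original derivation):

k = eta = 1.0
sxx, syy = 0.1j, 10.1j      # inductive example of Fig. 3c
kzi = -2 * k / (eta * sxx)  # -> 20j: Im(kz) >= 0, proper (inductive) branch
kzc = -k * eta * syy / 2    # -> -5.05j: improper here, proper if capacitive
print(kzi, kzc)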
## Appendix B Details on Numerical Simulations
The field maps in Figs. 5 and 7, as well as the rigorous EFCs in Fig. 8, are
computed via finite-element numerical simulations with the commercial software
package COMSOL Multiphysics [69].
For the dipole-excited configurations in Figs. 5 and 7, we consider a 3-D
computational domain of total size $3.2\lambda\times 3.2\lambda\times
0.35\lambda$. The metasurface is modeled via an impedance boundary condition
at $z=0$ enforced in terms of a surface current density
($J_{x}=\sigma_{xx}E_{x},J_{y}=\sigma_{yy}E_{y},J_{z}=0$). For the
configuration in Fig. 5b, we model the actual conductivity modulation, with
$d=0.04\lambda$ and a total of 55 periods. To minimize the finite-size effects,
the metasurfaces are terminated in-plane with fictitious sections of length
$0.4\lambda$ with electrical conductivity tapered so as to match the free-
space level. The domain is terminated (with the exception of the $z=0$ face)
by perfectly matched layers of thickness $0.25\lambda$, and is discretized
with an adaptive mesh, resulting in $\sim 2.3$ million degrees of freedom.
We carry out a frequency-domain analysis by means of the Pardiso direct
solver (with default parameters).
The spatial spectrum in Fig. 5c is computed from the calculated field
distribution at $z=0.01\lambda$, by means of a $2048\times 512$ 2-D fast
Fourier transform implemented in a Python code via the routine fft2 available
in the NumPy package [70].
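In code, this step amounts to the short NumPy sketch below (placeholder data
stand in for the exported field map; grid sizes match the text):

import numpy as np

ny, nx = 256, 256
dx = dy = 3.2 / nx  # grid steps over the 3.2*lambda domain, in units of lambda
hz = np.random.randn(ny, nx) + 1j * np.random.randn(ny, nx)  # placeholder Hz map

# Zero-padded 2048 x 512 2-D FFT, shifted so that k = 0 sits at the center.
spectrum = np.abs(np.fft.fftshift(np.fft.fft2(hz, s=(512, 2048))))
kx = 2 * np.pi * np.fft.fftshift(np.fft.fftfreq(2048, d=dx))  # rad per lambda
ky = 2 * np.pi * np.fft.fftshift(np.fft.fftfreq(512, d=dy))
print(spectrum.shape)  # -> (512, 2048) spectral samples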
For the EFCs in Fig. 8, we consider instead a 2-D computational domain
(assuming an infinite extent along the $x$-direction) including only one
period of the conductivity modulation. The structure is terminated by phase-
shift walls along the $y$-direction, and includes a free-space layer
terminated by a perfectly matched layer, both of thickness $0.5\lambda$. Once
again, an adaptive mesh is applied, which yields $\sim 1.7$ million degrees of
freedom. In this case, we utilize the Modal Analysis, with the MUMPS direct
solver and default parameters. To calculate the EFCs, we scan the wavenumber
$k_{y}$ over the real axis, and compute the corresponding eigenvalue $k_{x}$.
## References
* [1] F. Yang and Y. Rahmat-Samii, _Surface Electromagnetics: With Applications in Antenna, Microwave, and Optical Engineering_. Cambridge University Press, 2019.
* [2] C. L. Holloway, E. F. Kuester, J. A. Gordon, J. O’Hara, J. Booth, and D. R. Smith, “An overview of the theory and applications of metasurfaces: The two-dimensional equivalents of metamaterials,” _IEEE Antennas Propagat. Mag._ , vol. 54, no. 2, pp. 10–35, Apr. 2012.
* [3] Q. Bao and H. Hoh, _2D Materials for Photonic and Optoelectronic Applications_. Elsevier Science & Technology, 2019.
* [4] N. Yu, P. Genevet, M. A. Kats, F. Aieta, J.-P. Tetienne, F. Capasso, and Z. Gaburro, “Light propagation with phase discontinuities: Generalized laws of reflection and refraction,” _Science_ , vol. 334, no. 6054, pp. 333–337, Oct. 2011.
* [5] H. J. Bilow, “Guided waves on a planar tensor impedance surface,” _IEEE Trans. Antennas Propagat._ , vol. 51, no. 10, pp. 2788–2792, Oct. 2003.
* [6] A. M. Patel and A. Grbic, “Modeling and analysis of printed-circuit tensor impedance surfaces,” _IEEE Trans. Antennas Propagat._ , vol. 61, no. 1, pp. 211–220, Jan. 2013.
* [7] R. Quarfoth and D. Sievenpiper, “Artificial tensor impedance surface waveguides,” _IEEE Trans. Antennas Propagat._ , vol. 61, no. 7, pp. 3597–3606, Jul. 2013.
* [8] M. Mencagli, E. Martini, and S. Maci, “Surface wave dispersion for anisotropic metasurfaces constituted by elliptical patches,” _IEEE Trans. Antennas Propagat._ , vol. 63, no. 7, pp. 2992–3003, Jul. 2015.
* [9] A. Vakil and N. Engheta, “Transformation optics using graphene,” _Science_ , vol. 332, no. 6035, pp. 1291–1294, Jun. 2011.
* [10] R. Yang and Y. Hao, “An accurate control of the surface wave using transformation optics,” _Opt. Express_ , vol. 20, no. 9, pp. 9341–9350, Apr. 2012.
* [11] A. M. Patel and A. Grbic, “Transformation electromagnetics devices based on printed-circuit tensor impedance surfaces,” _IEEE Trans. Microwave Theory Tech._ , vol. 62, no. 5, pp. 1102–1111, May 2014.
* [12] M. Mencagli, E. Martini, D. González-Ovejero, and S. Maci, “Metasurfing by transformation electromagnetics,” _IEEE Antennas Wireless Propagat. Lett._ , vol. 13, pp. 1767–1770, 2014.
* [13] E. Martini, M. Mencagli, and S. Maci, “Metasurface transformation for surface wave control,” _Philos. Trans. R. Soc. A_ , vol. 373, no. 2049, p. 20140355, Aug. 2015.
* [14] M. McCall, J. B. Pendry, V. Galdi, Y. Lai, S. A. R. Horsley, J. Li, J. Zhu, R. C. Mitchell-Thomas, O. Quevedo-Teruel, P. Tassin, V. Ginis, E. Martini, G. Minatti, S. Maci, M. Ebrahimpouri, Y. Hao, P. Kinsler, J. Gratus, J. M. Lukens, A. M. Weiner, U. Leonhardt, I. I. Smolyaninov, V. N. Smolyaninova, R. T. Thompson, M. Wegener, M. Kadic, and S. A. Cummer, “Roadmap on transformation optics,” _J. Opt._ , vol. 20, no. 6, p. 063001, May 2018.
* [15] J. B. Pendry, D. Schurig, and D. R. Smith, “Controlling electromagnetic fields,” _Science_ , vol. 312, no. 5781, pp. 1780–1782, Jun. 2006.
* [16] U. Leonhardt, “Optical conformal mapping,” _Science_ , vol. 312, no. 5781, pp. 1777–1780, Jun. 2006.
* [17] O. Y. Yermakov, A. I. Ovcharenko, M. Song, A. A. Bogdanov, I. V. Iorsh, and Y. S. Kivshar, “Hybrid waves localized at hyperbolic metasurfaces,” _Phys. Rev. B_ , vol. 91, p. 235423, Jun. 2015.
* [18] J. S. Gomez-Diaz, M. Tymchenko, and A. Alù, “Hyperbolic plasmons and topological transitions over uniaxial metasurfaces,” _Phys. Rev. Lett._ , vol. 114, p. 233901, Jun. 2015.
* [19] J. S. Gomez-Diaz, M. Tymchenko, and A. Alù, “Hyperbolic metasurfaces: surface plasmons, light-matter interactions, and physical implementation using graphene strips,” _Opt. Mater. Express_ , vol. 5, no. 10, pp. 2313–2329, Oct. 2015.
* [20] J. S. Gomez-Diaz and A. Alù, “Flatland optics with hyperbolic metasurfaces,” _ACS Photonics_ , vol. 3, no. 12, pp. 2211–2224, Dec. 2016\.
* [21] Y. Yang, L. Jing, L. Shen, Z. Wang, B. Zheng, H. Wang, E. Li, N.-H. Shen, T. Koschny, C. M. Soukoulis, and H. Chen, “Hyperbolic spoof plasmonic metasurfaces,” _NPG Asia Mater._ , vol. 9, no. 8, p. e428, Aug. 2017.
* [22] O. Y. Yermakov, D. V. Permyakov, F. V. Porubaev, P. A. Dmitriev, A. K. Samusev, I. V. Iorsh, R. Malureanu, A. V. Lavrinenko, and A. A. Bogdanov, “Effective surface conductivity of optical hyperbolic metasurfaces: from far-field characterization to surface wave analysis,” _Sci. Rep._ , vol. 8, no. 1, p. 14135, Sep. 2018.
* [23] D. Correas-Serrano, A. Alù, and J. S. Gomez-Diaz, “Plasmon canalization and tunneling over anisotropic metasurfaces,” _Phys. Rev. B_ , vol. 96, p. 075436, Aug. 2017.
* [24] S. A. R. Horsley and I. R. Hooper, “One dimensional electromagnetic waves on flat surfaces,” _J. Phys. D Appl. Phys._ , vol. 47, no. 43, p. 435103, Oct. 2014.
* [25] D. J. Bisharat and D. F. Sievenpiper, “Guiding waves along an infinitesimal line between impedance surfaces,” _Phys. Rev. Lett._ , vol. 119, p. 106802, Sep. 2017.
* [26] G. Hu, A. Krasnok, Y. Mazor, C.-W. Qiu, and A. Alù, “Moiré hyperbolic metasurfaces,” _Nano Lett._ , vol. 20, no. 5, pp. 3217–3224, May 2020.
* [27] G. Hu, Q. Ou, G. Si, Y. Wu, J. Wu, Z. Dai, A. Krasnok, Y. Mazor, Q. Zhang, Q. Bao, C.-W. Qiu, and A. Alù, “Topological polaritons and photonic magic angles in twisted $\alpha$-MoO 3 bilayers,” _Nature_ , vol. 582, no. 7811, pp. 209–213, Jun. 2020.
* [28] C. M. Bender and S. Boettcher, “Real spectra in non-Hermitian Hamiltonians having PT symmetry,” _Phys. Rev. Lett._ , vol. 80, pp. 5243–5246, Jun. 1998.
* [29] R. El-Ganainy, K. G. Makris, M. Khajavikhan, Z. H. Musslimani, S. Rotter, and D. N. Christodoulides, “Non-Hermitian physics and PT symmetry,” _Nat. Phys._ , vol. 14, no. 1, pp. 11–19, Jan. 2018.
* [30] L. Feng, R. El-Ganainy, and L. Ge, “Non-Hermitian photonics based on parity–time symmetry,” _Nat. Photonics_ , vol. 11, no. 12, pp. 752–762, Jan. 2017.
* [31] R. Fleury, D. L. Sounas, and A. Alù, “Negative refraction and planar focusing based on parity-time symmetric metasurfaces,” _Phys. Rev. Lett._ , vol. 113, p. 023903, Jul. 2014.
* [32] D. L. Sounas, R. Fleury, and A. Alù, “Unidirectional cloaking based on metasurfaces with balanced loss and gain,” _Phys. Rev. Appl._ , vol. 4, p. 014005, Jul. 2015.
* [33] F. Monticone, C. A. Valagiannopoulos, and A. Alù, “Parity-time symmetric nonlocal metasurfaces: All-angle negative refraction and volumetric imaging,” _Phys. Rev. X_ , vol. 6, p. 041018, Oct. 2016.
* [34] S. Savoia, C. A. Valagiannopoulos, F. Monticone, G. Castaldi, V. Galdi, and A. Alù, “Magnified imaging based on non-Hermitian nonlocal cylindrical metasurfaces,” _Phys. Rev. B_ , vol. 95, p. 115114, Mar. 2017.
* [35] P.-Y. Chen and J. Jung, “PT symmetry and singularity-enhanced sensing based on photoexcited graphene metasurfaces,” _Phys. Rev. Appl._ , vol. 5, p. 064018, Jun. 2016.
* [36] M. Sakhdari, M. Farhat, and P.-Y. Chen, “PT-symmetric metasurfaces: wave manipulation and sensing using singular points,” _New J. Phys._ , vol. 19, no. 6, p. 065002, Jun. 2017.
* [37] M. Farhat, M. Yang, Z. Ye, and P.-Y. Chen, “PT-symmetric absorber-laser enables electromagnetic sensors with unprecedented sensitivity,” _ACS Photonics_ , vol. 7, no. 8, pp. 2080–2088, Aug. 2020.
* [38] M. Sakhdari, N. M. Estakhri, H. Bagci, and P.-Y. Chen, “Low-threshold lasing and coherent perfect absorption in generalized $\mathcal{P}\mathcal{T}$-symmetric optical structures,” _Phys. Rev. Appl._ , vol. 10, p. 024030, Aug. 2018.
* [39] M. Moccia, G. Castaldi, A. Alù, and V. Galdi, “Line waves in non-Hermitian metasurfaces,” _ACS Photonics_ , vol. 7, no. 8, pp. 2064–2072, Aug. 2020.
* [40] R. Kolkowski and A. F. Koenderink, “Lattice resonances in optical metasurfaces with gain and loss,” _P. IEEE_ , vol. 108, no. 5, pp. 795–818, May 2020.
* [41] M. Coppolaro, M. Moccia, V. Caligiuri, G. Castaldi, N. Engheta, and V. Galdi, “Extreme-parameter non-Hermitian dielectric metamaterials,” _ACS Photonics_ , vol. 7, no. 9, pp. 2578–2588, Sep. 2020.
* [42] D. J. Bisharat and D. F. Sievenpiper, “Manipulating line waves in flat graphene for agile terahertz applications,” _Nanophotonics_ , vol. 7, no. 5, pp. 893–903, May 2018.
* [43] M. Mattheakis, C. A. Valagiannopoulos, and E. Kaxiras, “Epsilon-near-zero behavior from plasmonic dirac point: Theory and realization using two-dimensional materials,” _Phys. Rev. B_ , vol. 94, p. 201404, Nov. 2016\.
* [44] E. Forati, G. W. Hanson, A. B. Yakovlev, and A. Alù, “Planar hyperlens based on a modulated graphene monolayer,” _Phys. Rev. B_ , vol. 89, p. 081410, Feb. 2014.
* [45] A. H. Sihvola, _Electromagnetic Mixing Formulas and Applications_. IEE, 1999.
* [46] J. D. Ortiz, J. D. Baena, V. Losada, F. Medina, R. Marqués, and J. L. A. Quijano, “Self-complementary metasurface for designing narrow band pass/stop filters,” _IEEE Microw. Wirel. Compon. Lett._ , vol. 23, no. 6, pp. 291–293, Jun. 2013.
* [47] D. González-Ovejero, E. Martini, and S. Maci, “Surface waves supported by metasurfaces with self-complementary geometries,” _IEEE Trans. Antennas Propagat._ , vol. 63, no. 1, pp. 250–260, Jan. 2015.
* [48] J. D. Baena, S. B. Glybovski, J. P. del Risco, A. P. Slobozhanyuk, and P. A. Belov, “Broadband and thin linear-to-circular polarizers based on self-complementary zigzag metasurfaces,” _IEEE Trans. Antennas Propagat._ , vol. 65, no. 8, pp. 4124–4133, Aug. 2017.
* [49] H. Jiang, W. Liu, K. Yu, K. Fang, Y. Sun, Y. Li, and H. Chen, “Experimental verification of loss-induced field enhancement and collimation in anisotropic $\mu$-near-zero metamaterials,” _Phys. Rev. B_ , vol. 91, p. 045302, Jan. 2015.
* [50] A. A. Zyablovsky, A. P. Vinogradov, A. V. Dorofeenko, A. A. Pukhov, and A. A. Lisyansky, “Causality and phase transitions in $\mathcal{PT}$-symmetric optical systems,” _Phys. Rev. A_ , vol. 89, p. 033808, Mar. 2014.
* [51] T. G. Mackay and A. Lakhtakia, “Dynamically controllable anisotropic metamaterials with simultaneous attenuation and amplification,” _Phys. Rev. A_ , vol. 92, p. 053847, Nov. 2015.
* [52] H. Herzig Sheinfux, B. Zhen, I. Kaminer, and M. Segev, “Total internal reflection in gain media,” in _CLEO: 2015_. Optical Society of America, 2015, p. FM2D.3.
* [53] D. Correas-Serrano, J. S. Gomez-Diaz, M. Tymchenko, and A. Alù, “Nonlocal response of hyperbolic metasurfaces,” _Opt. Express_ , vol. 23, no. 23, pp. 29 434–29 448, Nov. 2015.
* [54] L. Chen, Q. Ma, H. B. Jing, H. Y. Cui, Y. Liu, and T. J. Cui, “Space-energy digital-coding metasurface based on an active amplifier,” _Phys. Rev. Applied_ , vol. 11, p. 054051, May 2019.
* [55] D. Ye, K. Chang, L. Ran, and H. Xin, “Microwave gain medium with negative refractive index,” _Nat. Commun._ , vol. 5, p. 5841, Dec. 2014.
* [56] V. Ryzhii, M. Ryzhii, and T. Otsuji, “Negative dynamic conductivity of graphene with optical pumping,” _J. Appl. Phys._ , vol. 101, no. 8, p. 083114, Apr. 2007.
* [57] S. Campione, M. Albani, and F. Capolino, “Complex modes and near-zero permittivity in 3D arrays of plasmonic nanoshells: loss compensation using gain,” _Opt. Mater. Express_ , vol. 1, no. 6, pp. 1077–1089, Oct. 2011.
* [58] V. Caligiuri, L. Pezzi, A. Veltri, and A. De Luca, “Resonant gain singularities in 1D and 3D metal/dielectric multilayered nanostructures,” _ACS Nano_ , vol. 11, no. 1, pp. 1012–1025, Jan. 2017.
* [59] P. Holmström, L. Thylén, and A. Bratkovsky, “Composite metal/quantum-dot nanoparticle-array waveguides with compensated loss,” _Appl. Phys. Lett._ , vol. 97, no. 7, p. 073110, Aug. 2010.
* [60] I. Moreels, D. Kruschke, P. Glas, and J. W. Tomm, “The dielectric function of PbS quantum dots in a glass matrix,” _Opt. Mater. Express_ , vol. 2, no. 5, pp. 496–500, May 2012.
* [61] S. D. Campbell and R. W. Ziolkowski, “The performance of active coated nanoparticles based on quantum-dot gain media,” _Adv. Optoelectron._ , vol. 2012, pp. 1–6, Jan. 2012.
* [62] K. Binnemans, “Lanthanide-based luminescent hybrid materials,” _Chem. Rev._ , vol. 109, no. 9, pp. 4283–4374, Aug. 2009.
* [63] W. Shao, G. Chen, A. Kuzmin, H. L. Kutscher, A. Pliss, T. Y. Ohulchanskyy, and P. N. Prasad, “Tunable narrow band emissions from dye-sensitized core/shell/shell nanocrystals in the second near-infrared biological window,” _J. Am. Chem. Soc._ , vol. 138, no. 50, pp. 16 192–16 195, Dec. 2016.
* [64] H. Lin, D. Xu, Y. Li, L. Yao, L. Xu, Y. Ma, S. Yang, and Y. Zhang, “Intense red upconversion luminescence in Er3+-sensitized particles through confiniing the 1532 nm excitation energy,” _J. Lumin._ , vol. 216, p. 116731, Dec. 2019.
* [65] X. Fang, K. Wei, T. Zhao, Y. Zhai, D. Ma, B. Xing, Y. Liu, and Z. Xiao, “High spatial resolution multi-channel optically pumped atomic magnetometer based on a spatial light modulator,” _Opt. Express_ , vol. 28, no. 18, pp. 26 447–26 460, Aug. 2020.
* [66] N. Savage, “Digital spatial light modulators,” _Nat. Photonics_ , vol. 3, no. 3, pp. 170–172, Mar. 2009.
* [67] Y. Zhiyenbayev, Y. Kominis, C. Valagiannopoulos, V. Kovanis, and A. Bountis, “Enhanced stability, bistability, and exceptional points in saturable active photonic couplers,” _Phys. Rev. A_ , vol. 100, p. 043834, Oct. 2019.
* [68] S. Chen, P. Kühne, V. Stanishev, S. Knight, R. Brooke, I. Petsagkourakis, X. Crispin, M. Schubert, V. Darakchieva, and M. P. Jonsson, “On the anomalous optical conductivity dispersion of electrically conducting polymers: ultra-wide spectral range ellipsometry combined with a Drude–Lorentz model,” _J. Mater. Chem. C_ , vol. 7, pp. 4350–4362, Apr. 2019.
* [69] COMSOL Group, _COMSOL Multiphysics: Version 5.1_. COMSOL, Stockholm, 2015.
* [70] T. Oliphant, “NumPy: A guide to NumPy,” USA: Trelgol Publishing, 2006. [Online]. Available: http://www.numpy.org/
|
# An Assessment of Different Electronic Structure Approaches for Modeling
Time-Resolved X-ray Absorption Spectroscopy
Shota Tsuru (current address: Arbeitsgruppe Quantenchemie, Ruhr-Universität Bochum, D-44780 Bochum, Germany; <EMAIL_ADDRESS>), Marta L. Vidal, and Mátyás Pápai (current address: Wigner Research Centre for Physics, Hungarian Academy of Sciences, P.O. Box 49, H-1525 Budapest, Hungary)
DTU Chemistry, Technical University of Denmark, Kemitorvet Bldg 207, DK-2800 Kgs. Lyngby, Denmark
Anna I. Krylov
Department of Chemistry, University of Southern California, Los Angeles, California 90089, United States
Klaus B. Møller and Sonia Coriani (<EMAIL_ADDRESS>)
DTU Chemistry, Technical University of Denmark, Kemitorvet Bldg 207, DK-2800 Kgs. Lyngby, Denmark
###### Abstract
We assess the performance of different protocols for simulating excited-state
X-ray absorption spectra. We consider three different protocols based on
equation-of-motion coupled-cluster singles and doubles, two of them combined
with the maximum overlap method. The three protocols differ in the choice of a
reference configuration used to compute target states. Maximum-overlap-method
time-dependent density functional theory is also considered. The performance
of the different approaches is illustrated using uracil, thymine, and
acetylacetone as benchmark systems. The results provide guidance for
selecting an electronic structure method for modeling time-resolved X-ray
absorption spectroscopy.
Preprint: AIP/Special issue
## I Introduction
Since the pioneering study by Zewail’s group in the mid-eighties, Scherer _et al._ (1985), ultrafast dynamics has been an active area of experimental research. Advances in light sources provide new means for probing dynamics by
utilizing core-level transitions. X-ray free electron lasers (XFELs) and
instruments based on high-harmonic generation (HHG) enable spectroscopic
measurements on the femtosecond Young _et al._ (2018); Chergui and Collet
(2017); Ueda (2018) and attosecond Calegari _et al._ (2016); Ramasesha, Leone,
and Neumark (2016); Ischenko, Weber, and Miller (2017); Villeneuve (2018)
timescales. Methods for investigating femtosecond dynamics can be classified
into two categories: $(i)$ methods that track the electronic structure as
parametrically dependent on the nuclear dynamics, such as time-resolved
photoelectron spectroscopy (TR-PES), Schuurman and Stolow (2018); Adachi and
Suzuki (2018); Suzuki (2019); Liu _et al._ (2020) and $(ii)$ methods that
directly visualize the nuclear dynamics, such as ultrafast X-ray scattering
Glownia _et al._ (2016); Stankus _et al._ (2019); Ruddock _et al._ (2019);
Stankus _et al._ (2020) and ultrafast electron diffraction. Yang _et al._
(2016); Liu _et al._ (2020) Time-resolved X-ray absorption spectroscopy (TR-
XAS) belongs to the former category. Similarly to X-ray photoelectron
spectroscopy (XPS), XAS is also element and chemical-state specific Stöhr
(1992) but is able to resolve the underlying electronic states better than TR-
XPS. On the other hand, TR-XPS allows photoelectron detection from all the
involved electronic states with higher yield. XAS has been used to probe the local structure of bulk-solvated systems, which are typical of chemical reactions in the laboratory and in the cytoplasm. TR-XAS has been employed to track photo-
induced dynamics in organic molecules Pertot _et al._ (2017); Attar _et al._
(2017); Wolf _et al._ (2017); Bhattacherjee _et al._ (2017) and transition
metal complexes. Chen, Zhang, and Shelby (2014); Chergui (2016); Chergui and
Collet (2017); Wernet (2019) With the aid of simulations, Katayama _et al._
(2019) nuclear dynamics can be extracted from experimental TR-XAS spectra.
Similar to other time-resolved experimental methods from category $(i)$,
interpretation of TR-XAS relies on computational methods for simulating
electronic structure and nuclear wave-packet dynamics. In this context,
electronic structure calculations should be able to provide: (1) XAS of the
ground states; (2) a description of the valence-excited states involved in the
dynamics; (3) XAS of the valence-excited states.
Quantum chemistry has made major progress in simulations of XAS spectra of
ground states. Norman and Dreuw (2018); Bokarev and Kühn (2020) Among the
currently available methods, the transition-potential density functional
theory (TP-DFT) with the half core-hole approximation Triguero, Pettersson,
and Ågren (1998); Leetmaa _et al._ (2010) is widely used to interpret the XAS
spectra of ground states. Vall-llosera _et al._ (2008); Perera and Urquhart
(2017) Ehlert et al. extended the TP-DFT method to core excitations from
valence-excited states, Ehlert, Gühr, and Saalfrank (2018) and implemented it
in PSIXAS, Ehlert and Klamroth (2020) a plugin to the Psi4 code. TP-DFT is
capable of simulating (TR-)XAS spectra of large molecules with reasonable
accuracy, as long as the core-excited states can be described by a single
electronic configuration. Other extensions of Kohn-Sham DFT, suitable for
calculating the XAS spectra of molecules in their ground states, also exist.
Michelitsch and Reuter (2019) Linear response (LR) time-dependent (TD) DFT, a
widely used method for excited states, Dreuw and Head-Gordon (2005); Luzanov
and Zhikol (2012); Laurent and Jacquemin (2013); Ferré, Filatov, and Huix-
Rotllant (2016) has been extended to the calculation of core-excited states
Stener, Fronzoni, and de Simone (2003); Besley and Asmuruf (2010) by means of
the core-valence separation (CVS) scheme, Cederbaum, Domcke, and Schirmer
(1980) a specific type of truncated single excitation space (TRNSS) approach.
Besley (2004) In the CVS approach, configurations that do not involve core
orbitals are excluded from the excitation space; this is justified because the
respective matrix elements are small, owing to the localized nature of the
core orbitals and the large energetic gap between the core and the valence
orbitals.
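Schematically, CVS acts as a projector on the excitation manifold. A minimal sketch for the singles block, with hypothetical orbital counts (occupied orbitals indexed 0..nocc-1, the first ncore of them being core orbitals):

```python
from itertools import product

nocc, nvirt, ncore = 12, 30, 2   # hypothetical orbital counts

# Full singles manifold: every occupied -> virtual configuration (i, a).
singles = list(product(range(nocc), range(nvirt)))

# CVS projection: retain only configurations that create a core hole;
# pure valence-valence excitations are projected out of the eigenproblem.
cvs_singles = [(i, a) for (i, a) in singles if i < ncore]

print(len(singles), "configurations ->", len(cvs_singles), "after CVS")
```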
Core-excitation energies calculated using TDDFT show errors up to $\approx$20
eV when standard exchange-correlation (xc) functionals such as B3LYP Becke
(1993) are used. The errors can be reduced by using specially designed xc-
functionals, such as those reviewed in Sec. 3.4.4 of Ref. 27. Hait and Head-
Gordon recently developed a square gradient minimum (SGM) algorithm for
excited-state orbital optimization to obtain spin-pure restricted open-shell
Kohn-Sham (ROKS) energies of core-excited states; they reported sub-eV errors
in XAS transition energies. Hait and Head-Gordon (2020a)
The maximum overlap method (MOM) Gilbert, Besley, and Gill (2008) provides
access to excited-state self-consistent field (SCF) solutions and, therefore,
could be used to compute core-level states. More importantly, MOM can also be combined with TDDFT to compute core excitations from a valence-excited state.
Attar _et al._ (2017); Bhattacherjee _et al._ (2017); Northey _et al._
(2020) MOM-TDDFT is an attractive method for simulating TR-XAS spectra,
because it is computationally cheap and may provide excitation energies
consistent with the TDDFT potential energy surfaces, which are often used in
the nuclear dynamics simulations. However, in MOM calculations the initial
valence-excited states are independently optimized and thus not orthogonal to
each other. This non-orthogonality may lead to flipping of the energetic order
of the states. Moreover, open-shell Slater determinants provide a spin-
incomplete description of excited states (the initial state in an excited-
state XAS calculation), which results in severe spin contamination of all
states and may affect the quality of the computed spectra. Hait and Head-
Gordon have presented SGM as an alternative general excited-state orbital-
optimization method Hait and Head-Gordon (2020b) and applied it to compute XAS
spectra of radicals. Hait _et al._ (2020)
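A minimal sketch of the overlap criterion at the heart of MOM (not the actual Q-Chem implementation; `C_ref_occ`, `C_new`, and `S` are hypothetical NumPy arrays holding the reference occupied orbitals, the newly obtained orbitals, and the AO overlap matrix):

```python
import numpy as np

def mom_select(C_ref_occ, C_new, S, nocc):
    """Select which of the new MOs to occupy: pick the nocc orbitals with
    the largest projection onto the span of the reference occupied space,
    which may yield a non-Aufbau (excited-state) occupation."""
    O = C_ref_occ.T @ S @ C_new            # (nocc x nmo) overlap block
    p = np.linalg.norm(O, axis=0)          # projection of each new MO
    return np.sort(np.argsort(p)[-nocc:])  # indices of the MOs to occupy
```

In the original MOM the reference occupied orbitals are updated at every SCF cycle, whereas IMOM (used in Sec. II.2 below) keeps the initial-guess orbitals as a fixed reference, which is intended to prevent a gradual drift back to the ground state.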
Applications of methods containing some empirical component, such as TDDFT,
require benchmarking against the spectra computed with a reliable wave-
function method, whose accuracy can be systematically assessed. Among various
post-HF methods, coupled-cluster (CC) theory yields a hierarchy of size-
consistent ansätze for the ground state, with the CC singles and doubles
(CCSD) method being the most practical. Bartlett and Musiał (2007) CC theory
has been extended to excited states via the linear response Koch and Jørgensen
(1990); Christiansen, Jørgensen, and Hättig (1998); Sneskov and Christiansen
(2012) and the equation-of-motion for excited states (EOM-EE) Stanton and
Bartlett (1993); Krylov (2008); Bartlett (2012); Coriani _et al._ (2016)
formalisms. Both approaches have been adapted to treat core-excited states by
using the CVS scheme, Coriani and Koch (2015) including calculations of
transition dipole moments and other properties. Vidal _et al._ (2019); Tsuru
_et al._ (2019); Faber _et al._ (2019); Nanda _et al._ (2020); Faber and
Coriani (2020); Vidal, Krylov, and Coriani (2020a); Vidal _et al._ (2020a)
The benchmarks illustrate that CVS-enabled EOM-CC methods describe well the
relaxation effects caused by the core hole as well as differential correlation
effects. Given their robustness and reliability, CC-based methods provide
high-quality XAS spectra, which can be used to benchmark other methods. Beside
several CCSD investigations, Coriani _et al._ (2012); Coriani and Koch
(2015); Frati _et al._ (2019); Carbone _et al._ (2019); Wolf _et al._
(2017); Peng _et al._ (2015); Tsuru _et al._ (2019); Fransson _et al._
(2013); Vidal _et al._ (2019); Sarangi _et al._ (2020); Tenorio _et al._
(2019); Moitra _et al._ (2020); Vidal _et al._ (2020a, b) core excitation
and ionization energies have also been reported at the CC2, Coriani _et al._
(2012); Carbone _et al._ (2019); Frati _et al._ (2019); Costantini _et al._
(2019); Moitra _et al._ (2020) CC3 Wolf _et al._ (2017); Myhre _et al._
(2018); Folkestad _et al._ (2020); Paul, Myhre, and Koch (2021),
CCSDT, Carbone _et al._ (2019); Myhre _et al._ (2018); Matthews (2020)
CCSDR(3), Coriani _et al._ (2012); Matthews (2020); Moitra _et al._ (2020)
and EOM-CCSD* Matthews (2020) levels of theory. XAS spectra have also been
simulated with a linear-response (LR-) density cumulant theory (DCT), Peng,
Copan, and Sokolov (2019) which is closely related to the LR-CC methods.
The algebraic diagrammatic construction (ADC) approach Schirmer (1982); Dreuw
and Wormit (2015) has also been used to model inner-shell spectroscopy. The
second-order variant ADC(2) Barth and Schirmer (1985) yields valence-
excitation energies with an accuracy and a computational cost [$O(N^{5})$]
similar to CC2 (coupled-cluster singles and approximate doubles),
Christiansen, Koch, and Jørgensen (1995) but within the Hermitian formalism.
ADC(2) was extended to core excitations by the CVS scheme. Schirmer _et al._
(1993); Wenzel, Wormit, and Dreuw (2014) Because ADC(2) is inexpensive and is
capable of accounting for dynamic correlation when calculating potential
energy surfaces, Plasser _et al._ (2014a) it holds promise of delivering
reasonably accurate time-resolved XAS spectra at a low cost at each step of
the nuclear dynamics simulation. Neville et al. simulated TR-XAS spectra with
ADC(2), Neville _et al._ (2016a, b, 2018) while using multi-reference first-
order configuration interaction (MR-FOCI) in their nuclear dynamics
simulations. Neville and Schuurman also reported an approach to simulate XAS
spectra using electronic wave packet autocorrelation functions based on TD-
ADC(2). Neville and Schuurman (2018) An ad hoc extension of ADC(2), ADC(2)-x,
Trofimov and Schirmer (1995) is known to give ground-state XAS spectra with
relatively high accuracy (better than ADC(2)) employing small basis sets such
as 6-31+G, Plekan _et al._ (2008) but the improvement comes with a higher
computational cost $[O(N^{6})]$. List et al. have recently used ADC(2)-x,
along with RASPT2, to study competing relaxation pathways in malonaldehyde by
TR-XAS simulations. List _et al._ (2020)
An important limitation of single-reference methods (at least those including only single and double excitations) is that they can reliably treat only singly excited states. While transitions to the singly occupied molecular
orbitals (SOMO) result in target states that are formally singly excited from
the ground-state reference state, other final states accessible by core
excitation from a valence-excited state can be dominated by configurations of
double or higher excitation character relative to the ground-state reference.
Consequently, these states are not well described by conventional response
methods such as TDDFT, LR/EOM-CCSD, or ADC(2) (see Fig. 2 in Sec. II.1). Tsuru _et al._ (2019); List _et al._ (2020) This is the main rationale for using MOM within TDDFT.
To overcome this problem while retaining a low computational cost, Seidu et
al. Seidu _et al._ (2019) suggested to combine DFT and multireference
configuration interaction (MRCI) with the CVS scheme, which led to the CVS-
DFT/MRCI method. The authors demonstrated that the semi-empirical Hamiltonian
adjusted to describe the Coulomb and exchange interactions of the valence-
excited states Lyskov, Kleinschmidt, and Marian (2016) works well for the
core-excited states too.
In the context of excited-state nuclear dynamics simulations based on complete
active-space SCF (CASSCF) or CAS second-order perturbation theory (CASPT2),
popular choices for computing core excitations from a given valence-excited
state are restricted active-space SCF (RASSCF) Olsen _et al._ (1988);
Malmqvist, Rendell, and Roos (1990) or RAS second-order perturbation theory
(RASPT2). Malmqvist _et al._ (2008) Delcey et al. have clearly summarized how
to apply RASSCF for core excitations. Delcey _et al._ (2019) XAS spectra of
valence-excited states computed by RASSCF/RASPT2 have been presented by
various authors. Hua, Mukamel, and Luo (2019); Segatta _et al._ (2020);
Northey _et al._ (2020) RASSCF/RASPT2 schemes are sufficiently flexible and
even work in the vicinity of conical intersections; they also can tackle
different types of excitations, including, for example, those with multiply
excited character. Schweigert and Mukamel (2007) However, the accuracy of these
methods depends strongly on an appropriate selection of the active space,
which makes their application system-specific. In addition, RASSCF simulations
might suffer from insufficient description of dynamic correlation whereas the
applicability of RASPT2 may be limited by its computational cost.
Many of the methods mentioned above are available in standard quantum
chemistry packages. Hence, an assessment of their performance would be a valuable aid for computational chemists who want to use these methods to
analyze the experimental TR-XAS spectra. Since experimental TR-XAS spectra are
still relatively scarce, we set out assessing the performance of four selected
single-reference methods from the perspective of the three requirements stated
above. That is, they should be able to accurately describe the core and
valence excitations from the ground state, to give the transition strengths
between the core-excited and valence-excited states, and yield the XAS spectra
of the valence-excited states over the entire pre-edge region, i.e., describe
the spectral features due to the transitions of higher excitation character.
More specifically, we extend the use of the MOM approach to the CCSD framework
and evaluate its accuracy relative to standard fc-CVS-EOM-EE-CCSD and to MOM-
TDDFT. We note that MOM has been used in combination with CCSD to calculate
double core excitations. Lee, Small, and Head-Gordon (2019) For selected
ground-state XAS simulations, we also consider ADC(2) results.
We use the following systems to benchmark the methodology: uracil, thymine,
and acetylacetone (Fig. 1).
Figure 1: Structures of (a) uracil, (b) thymine, and (c) acetylacetone. Atom
numbering follows the IUPAC standard.
Experimental TR-XAS spectra have not been recorded for uracil yet, but its
planar symmetry at the Franck-Condon (FC) geometry and its similarities with
thymine make it a computationally attractive model system. Experimental TR-XAS
data are available at the O K-edge of thymine and at the C K-edge of
acetylacetone.
The paper is organized as follows: First, we describe the methodology and
computational details. We then compare the results obtained with the CVS-
ADC(2), CVS-EOM-CCSD, and TDDFT methods against the experimental ground-state
XAS spectra. Feyer _et al._ (2010); Wolf _et al._ (2017); Attar _et al._
(2017); Bhattacherjee _et al._ (2017) We also compare the computed valence-
excitation energies with UV absorption and electron energy loss spectroscopy
(EELS, often called electron impact spectroscopy when it is applied to gas-
phase molecules). Trajmar (1980) We then present the XAS spectra of the
valence-excited states obtained with different CCSD-based protocols and
compare them with experimental TR-XAS spectra when available. Wolf _et al._
(2017); Attar _et al._ (2017); Bhattacherjee _et al._ (2017) Finally, we
evaluate the performance of MOM-TDDFT.
## II Methodology
### II.1 Protocols for Computing XAS
We calculated the energies and oscillator strengths for core and valence
excitations from the ground states by standard linear-response/equation-of-
motion methods: ADC(2), Schirmer (1982); Trofimov and Schirmer (1995); Dreuw
and Wormit (2015) EOM-EE-CCSD, Christiansen _et al._ (1996); Hald, Hättig,
and Jørgensen (2000); Bartlett and Musiał (2007); Stanton and Bartlett (1993);
Krylov (2008); Bartlett (2012); Coriani _et al._ (2016) and TDDFT. In the
ADC(2) and CCSD calculations of the valence-excited states, we employ the
frozen core (fc) approximation. CVS Wenzel, Wormit, and Dreuw (2014); Coriani
and Koch (2015); Vidal _et al._ (2019) was applied to obtain the core-excited
states within all methods. Within the fc-CVS-EOM-EE-CCSD framework, Vidal _et
al._ (2019) we explored three different strategies to obtain the excitation
energies and oscillator strengths for selected core-valence transitions, as
summarized in Fig. 2. In the first one, referred to as standard CVS-EOM-CCSD,
we assume that the final core-excited states belong to the set of excited
states that can be reached by core excitation from the ground states (see Fig.
2, top panel). Accordingly, we use the HF Slater determinant, representing the
ground state ($|\Phi_{0}\rangle$) as the reference
($|\Phi_{\mathrm{ref}}\rangle$) for the CCSD calculation; the (initial)
valence-excited and (final) core-excited states are then computed with EOM-EE-
CCSD and fc-CVS-EOM-EE-CCSD, respectively. The transition energies for core-
valence excitations are subsequently computed as the energy differences
between the final core states and the initial valence state. The oscillator
strengths for the transitions between the two excited states are obtained from
the transition moments between the EOM states, according to EOM-EE theory.
Stanton and Bartlett (1993); Bartlett and Musiał (2007); Vidal _et al._
(2019) In this approach, both the initial and the final states are spin-pure
states. However, the final core-hole states that have multiple excitation
character with respect to the ground state are either not accessed or
described poorly by this approach (the respective configurations are crossed
in Fig. 2).
Figure 2: Schematics of the standard CVS-EOM-CCSD, LSOR-CCSD, and HSOR-CCSD
protocols. The crossed configurations are formally doubly excited with respect
to the ground-state reference.
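Schematically, the quantities assembled in this first protocol are (atomic units; the isotropic oscillator strength carries the standard 2/3 prefactor, with the product of left and right EOM transition moments replacing the squared moment in the biorthogonal EOM framework):

$$\omega_{fi}=E_{f}^{\mathrm{CVS\text{-}EOM}}-E_{i}^{\mathrm{EOM}},\qquad f_{fi}=\frac{2}{3}\,\omega_{fi}\sum_{\alpha=x,y,z}\langle\Psi_{i}|\hat{\mu}_{\alpha}|\Psi_{f}\rangle\langle\Psi_{f}|\hat{\mu}_{\alpha}|\Psi_{i}\rangle .$$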
In the second approach, named high-spin open-shell reference (HSOR) CCSD, we
use as a reference ($|\Phi_{\mathrm{ref}}\rangle$) for the CCSD calculations a
high-spin open-shell HF Slater determinant that has the same electronic
configuration as the initial singlet valence-excited state to be probed in the
XAS step. Tsuru _et al._ (2019); Vidal, Krylov, and Coriani (2020a, b) This
approach is based on the assumption that the exchange interactions, which are
responsible for the energy gap between singlets and triplets, cancel out in
calculations of the transition energies and oscillator strengths. An
attractive feature of this approach is that the reference is spin complete (as
opposed to a low-spin open-shell determinant of the same occupation) and that
the convergence of the SCF procedure is usually robust. A drawback of this
approach is the inability to distinguish between the singlet and triplet
states with the same electronic configurations.
In the third approach, we use a low-spin ($M_{s}=0$) MOM reference for singlet excited states and a high-spin ($M_{s}=1$) MOM reference for triplet excited states. We refer to this approach as low-spin open-shell reference (LSOR) CCSD.
In both HSOR-CCSD and LSOR-CCSD, the calculation begins with an SCF
optimization targeting the dominant configuration of the initial valence-
excited state by means of the MOM algorithm, and the resulting Slater
determinant is then used as the reference $|\Phi_{\mathrm{ref}}\rangle$ in the
subsequent CCSD calculation. Core-excitation energies and oscillator strengths
from the high-spin and the low-spin references are computed with standard CVS-
EOM-EE-CCSD. Such MOM-based CCSD calculations can describe all target core-
hole states, provided that they have singly excited character with respect to
the chosen reference. Furthermore, in principle, initial valence-excited
states of different spin symmetry can be selected. However, in calculations
using low-spin open-shell references (LSOR-CCSD states), variational collapse
might occur. Moreover, the LSOR-CCSD treatment of singlet excited states
suffers from spin contamination, as the underlying open-shell reference is not
spin complete (the well known issue of spin-completeness in calculations using
open-shell references is discussed in detail in recent review articles. Krylov
(2017); Casanova and Krylov (2020))
We note that the HSOR-CCSD ansatz for a spin-singlet excited state is identical to the LSOR-CCSD ansatz of an $M_{s}=1$ spin-triplet state having the same electronic configuration as the spin-singlet excited state (see Fig. 2).
In addition to the three CCSD-based protocols described above, we also
considered MOM-TDDFT, which is often used for simulation of the TR-NEXAFS
spectra. Attar _et al._ (2017); Bhattacherjee _et al._ (2017); Northey _et
al._ (2020) We employed the B3LYP xc-functional, Becke (1993) as in Refs. 20, 22, and 47.
### II.2 Computational Details
The equilibrium geometry of uracil was optimized at the MP2/cc-pVTZ level. The
equilibrium geometries of thymine and acetylacetone were taken from the
literature; Wolf _et al._ (2017); Faber _et al._ (2019) they were optimized
at the CCSD(T)/aug-cc-pVDZ and CCSD/aug-cc-pVDZ level, respectively. These
structures represent the molecules at the Franck-Condon (FC) points. The
structures of the T1($\pi\pi^{\ast}$) and S1(n$\pi^{\ast}$) states of
acetylacetone, and of the S1(n$\pi^{\ast}$) state of thymine were optimized at
the EOM-EE-CCSD/aug-cc-pVDZ level. Faber _et al._ (2019)
We calculated near-edge X-ray absorption fine structure (NEXAFS) of the ground
state of all three molecules using CVS-ADC(2), CVS-EOM-CCSD, and TDDFT/B3LYP.
The excitation energies of the valence-excited states were calculated with
ADC(2), EOM-EE-CCSD, and TDDFT/B3LYP. The XAS spectra of the
T1($\pi\pi^{\ast}$), T2(n$\pi^{\ast}$), S1(n$\pi^{\ast}$) and
S2($\pi\pi^{\ast}$) states of uracil were calculated at the FC geometry. We
used the FC geometry for all states in order to make a coherent comparison of
the MOM-based CCSD methods with the standard CCSD method, and to ensure that
the final core-excited states are the same in the ground state XAS and
transient state XAS calculations using standard CCSD. The spectra of thymine
in the S1(n$\pi^{\ast}$) state were calculated at the potential energy minimum
of the S1(n$\pi^{\ast}$) state. The spectra of acetylacetone in the
T1($\pi\pi^{\ast}$) and S2($\pi\pi^{\ast}$) states were calculated at the
potential energy minima of the T1($\pi\pi^{\ast}$) and S1(n$\pi^{\ast}$)
states, respectively. Our choice of geometries for acetylacetone is based on
the fact that the S2($\pi\pi^{\ast}$)-state spectra were measured during wave
packet propagation from the S2($\pi\pi^{\ast}$) minimum (planar) towards the
S1(n$\pi^{\ast}$) minimum (distorted) and the ensemble was in equilibrium when
the T1($\pi\pi^{\ast}$)-state spectra were measured. Bhattacherjee _et al._
(2017)
The XAS spectra of the valence-excited states were computed with CVS-EOM-CCSD,
HSOR-CCSD, and LSOR-CCSD. Pople’s 6-311++G** basis set was used throughout. In
each spectrum, the oscillator strengths were convoluted with a Lorentzian
function (FWHM = 0.4 eV, unless otherwise specified). We used the natural
transition orbitals (NTOs) Luzanov, Sukhorukov, and Umanskii (1976); Martin
(2003); Luzanov and Zhikol (2012); Plasser, Wormit, and Dreuw (2014); Plasser
_et al._ (2014b); Bäppler _et al._ (2014); Mewes _et al._ (2018); Kimber and
Plasser (2020); Krylov (2020) to determine the character of the excited
states.
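The stick-to-spectrum convolution is a simple post-processing step; a minimal sketch (energies in eV; the example sticks are the two main O1s transitions of uracil from Table 1):

```python
import numpy as np

def lorentzian_spectrum(grid, energies, strengths, fwhm=0.4):
    """Broaden (excitation energy, oscillator strength) sticks with
    area-normalized Lorentzians of the given FWHM (in eV)."""
    gamma = fwhm / 2.0  # half width at half maximum
    spec = np.zeros_like(grid)
    for e0, f in zip(energies, strengths):
        spec += f * (gamma / np.pi) / ((grid - e0) ** 2 + gamma**2)
    return spec

grid = np.linspace(528.0, 540.0, 2400)
spec = lorentzian_spectrum(grid, [533.17, 534.13], [0.0367, 0.0343])
```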
All calculations were carried out with the Q-Chem 5.3 electronic structure
package. Shao _et al._ (2015) The initial guesses $[\mathrm{HOMO}(\beta)]^{1}[\mathrm{LUMO}(\alpha)]^{1}$ and $[\mathrm{HOMO}(\alpha)]^{1}[\mathrm{LUMO}(\alpha)]^{1}$ were used in MOM-SCF for the spin-singlet and triplet states dominated by the $(\mathrm{HOMO})^{1}(\mathrm{LUMO})^{1}$ configuration, respectively. The SOMOs of the initial guess in
a MOM-SCF process are the canonical orbitals (or the Kohn-Sham orbitals) which
resemble the hole and particle NTO of the transition from the ground state to
the valence-excited state. One should pay attention to the order of the
orbitals obtained in the ground-state SCF, especially when the basis set has
diffuse functions. In the LSOR-CCSD calculations, the SCF convergence threshold had
to be set to $10^{-9}$ Hartree. To ensure convergence to the dominant
electronic configuration of the desired electronic state, we used the initial
MOM (IMOM) algorithm Barca, Gilbert, and Gill (2018) instead of regular MOM;
this is especially important for cases when the desired state belongs to the
same irreducible representation as the ground state.
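As a sketch, these non-Aufbau guess occupations can be written as simple index lists (hypothetical closed-shell molecule with `nocc` doubly occupied orbitals; HOMO = `nocc-1`, LUMO = `nocc`):

```python
nocc = 21  # hypothetical number of doubly occupied ground-state orbitals

# Ground-state (Aufbau) occupation.
ground  = {"alpha": list(range(nocc)), "beta": list(range(nocc))}

# Singlet (Ms = 0) guess [HOMO(beta)]^1 [LUMO(alpha)]^1:
# the alpha HOMO electron is promoted to the LUMO, the beta HOMO stays.
singlet = {"alpha": list(range(nocc - 1)) + [nocc],
           "beta":  list(range(nocc))}

# Triplet (Ms = 1) guess [HOMO(alpha)]^1 [LUMO(alpha)]^1:
# one beta electron is removed from the HOMO and placed in the alpha LUMO.
triplet = {"alpha": list(range(nocc)) + [nocc],
           "beta":  list(range(nocc - 1))}
```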
## III Results and Discussion
### III.1 Ground-State NEXAFS
Fig. 3 shows the O K-edge NEXAFS spectra of uracil in the ground state
computed by CVS-EOM-CCSD, CVS-ADC(2), and TDDFT/B3LYP. Table 1 shows NTOs of
the core-excited states calculated at the CVS-EOM-CCSD/6-311++G** level, where
$\sigma_{K}$ are the singular values for a given NTO pair (their renormalized
squares give the weights of the respective configurations in the transition).
Luzanov, Sukhorukov, and Umanskii (1976); Martin (2003); Luzanov and Zhikol
(2012); Plasser, Wormit, and Dreuw (2014); Plasser _et al._ (2014b); Bäppler
_et al._ (2014); Mewes _et al._ (2018); Kimber and Plasser (2020); Krylov
(2020) The NTOs for the other two methods are collected in the SI. Panel (d)
of Fig. 3 shows the experimental spectrum (digitized from Ref. 105). The
experimental spectrum has two main peaks at 531.3 and 532.2 eV, assigned to
core excitations to the $\pi^{\ast}$ orbitals from O4 and O2, respectively.
Beyond these peaks, the intensity remains low up to 534.4 eV. The next notable
spectral feature, attributed to Rydberg excitations, emerges at around 535.7
eV, just before the first core-ionization onset (indicated as IE). The
separation of $\sim$0.9 eV between the two main peaks is reproduced at all
three levels of theory. The NTO analysis at the CCSD level (cf. Table 1)
confirms that the excitation to the 6A" state has Rydberg character and, after
the uniform shift, the peak assigned to this excitation falls in the Rydberg
region of the experimental spectrum. ADC(2) also yields a 6A" transition of
Rydberg character, but it is significantly red-shifted relative to the
experiment. No Rydberg transitions are found at the TDDFT level. Only CVS-EOM-
CCSD reproduces the separation between the 1A" and the 6A" peaks with
reasonable accuracy, 4.91 eV versus 4.4 eV in the experimental spectrum. The
shoulder structure of the experimental spectrum in the region between 532.2
and 534.4 eV is attributed to vibrational excitations or shake-up transitions.
Stöhr (1992); Rehr _et al._ (1978)
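The $\sigma_{K}$ weights quoted in Table 1 follow from the singular value decomposition of the one-particle transition density matrix; a minimal sketch, with a random matrix standing in for a computed transition density:

```python
import numpy as np

rng = np.random.default_rng(0)
T = rng.standard_normal((10, 40))  # stand-in occupied x virtual transition density

# Hole NTOs live in the columns of U, particle NTOs in the rows of Vt;
# the renormalized squared singular values are the NTO pair weights.
U, s, Vt = np.linalg.svd(T, full_matrices=False)
weights = s**2 / np.sum(s**2)
print("leading NTO pair weight sigma_K^2 =", round(float(weights[0]), 3))
```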
Figure 3: Uracil. Ground-state NEXAFS at the oxygen K-edge calculated with: (a) ADC(2); (b) CVS-EOM-CCSD; (c) TDDFT/B3LYP. The calculated IEs are 539.68 and 539.86 eV (fc-CVS-EOM-IP-CCSD/6-311++G**). In panel (d) the computed spectrum of (b) is shifted by $-$1.8 eV and superposed with the experimental spectrum Feyer _et al._ (2010) (black curve). Basis set: 6-311++G**.

Table 1: Uracil. CVS-EOM-CCSD/6-311++G** energies, strengths and NTOs of the O1s core excitations from the ground state at the FC geometry (NTO isosurface is 0.04 for the Rydberg transition and 0.05 for the rest; the hole and particle NTO images are not reproduced here).

Final state | $E^{\mathrm{ex}}$ (eV) | Osc. strength | $\sigma_{K}^{2}$
---|---|---|---
1A" | 533.17 | 0.0367 | 0.78
2A" | 534.13 | 0.0343 | 0.79
3A" | 537.55 | 0.0003 | 0.76
4A" | 537.66 | 0.0004 | 0.78
6A" | 538.08 | 0.0022 | 0.82
Fig. 4 shows the ground-state NEXAFS spectra of thymine at the O K-edge. For
the construction of the theoretical absorption spectra, we used a FWHM of 0.6 eV for
the Lorentzian convolution function. Panel (d) shows the experimental spectrum
(digitized from Ref. 21). Both the experimental and calculated spectra exhibit
fine structures, similar to those of uracil. Indeed, the first and second
peaks at 531.4 and 532.2 eV of the experimental spectrum were assigned to
O1s-hole states having the same electronic configuration characters as the two
lowest-lying O1s-hole states of uracil. The NTOs of thymine can be found in
the SI. Again, only CVS-EOM-CCSD reproduces with reasonable accuracy the
Rydberg region after 534 eV. The separation of the two main peaks is well
reproduced at all three levels of theory.
Figure 4: Thymine. Ground-state oxygen K-edge NEXAFS calculated with: (a)
ADC(2), (b) CVS-EOM-CCSD, (c) TDDFT/B3LYP. The computed ionization energies
(IEs) are 539.67 and 539.73 eV (fc-CVS-EOM-IP-CCSD). In panel (d), the CVS-
EOM-CCSD spectrum of (b) is shifted by $-$1.7 eV and superposed with the
experimental one Wolf _et al._ (2017) (black curve). Basis set: 6-311++G**.
FWHM of the Lorentzian convolution function is 0.6 eV.
Fig. 5 shows the C K-edge ground-state NEXAFS spectra of acetylacetone; the
NTOs of the core excitations obtained at the CVS-EOM-CCSD/6-311++G** level are
collected in Table 2. The experimental spectrum, plotted in panel (d) of Fig.
5, was digitized from Ref. 22. Table 2 shows that the first three core
excitations are dominated by the transitions to the LUMO from the 1$s$
orbitals of the carbon atoms C2, C3, and C4. Transition from the central
carbon atom, C3, appears as the first relatively weak peak at 284.4 eV. We
note that acetylacetone may exhibit keto–enol tautomerism. In the keto form,
atoms C2 and C4 are equivalent. Therefore, transitions from these carbon atoms
appear as quasi-degenerate main peaks at $\approx$286.6 eV. The region around
288.2 eV is attributed to Rydberg transitions. The $\sim$2 eV separation
between the first peak and the main peak due to the two quasi-degenerate
transitions is well reproduced by ADC(2) and TDDFT/B3LYP, and slightly
underestimated by CVS-EOM-CCSD (1.6 eV). On the other hand, the separation of
$\sim$1.6 eV between the main peak and the Rydberg resonance region is
accurately reproduced only by CVS-EOM-CCSD.
Figure 5: Acetylacetone. Ground-state NEXAFS at the carbon K-edge calculated with: (a) ADC(2); (b) CVS-EOM-CCSD; (c) TDDFT/B3LYP. The ionization energies (IEs) are 291.12, 291.88, 292.11, 294.10, and 294.56 eV (fc-CVS-EOM-IP-CCSD). In panel (d), the computational result of (b) is shifted by $-0.9$ eV and superposed with the experimental spectrum Bhattacherjee _et al._ (2017) (black curve). Basis set: 6-311++G**.

Table 2: Acetylacetone. CVS-EOM-CCSD/6-311++G** NTOs of the C1s core excitations from the ground state at the FC geometry (NTO isosurface is 0.03 for the Rydberg transition and 0.05 for the rest; the NTO images are not reproduced here).

Final state | $E^{\mathrm{ex}}$ (eV) | Osc. strength | $\sigma_{K}^{2}$
---|---|---|---
1A | 285.88 | 0.0133 | 0.76
2A | 287.36 | 0.0671 | 0.82
3A | 287.53 | 0.0673 | 0.81
9A | 288.63 | 0.0213 | 0.79
11A | 289.13 | 0.0202 | 0.82
13A | 289.27 | 0.0205 | 0.83
14A | 289.28 | 0.0175 | 0.82
15A | 289.30 | 0.0174 | 0.81
The results for the three considered molecules illustrate that CVS-EOM-CCSD
describes well the entire pre-edge region of the NEXAFS spectrum. CVS-ADC(2)
and TDDFT/B3LYP describe well the core excitations to the LUMO and LUMO+1
(apart from an overall systematic shift), but generally fail to describe the
transitions at higher excitation energies.
### III.2 Valence-Excited States
Table 3 shows the excitation energies of the two lowest triplet states, the
three lowest singlet states, plus the S5($\pi\pi^{\ast}$) state of uracil,
calculated at the FC geometry, along with the values derived from the EELS
Chernyshova _et al._ (2012) and UV absorption experiments. Clark, Peschel,
and Tinoco (1965) The EOM-EE-CCSD/6-311++G** NTOs are collected in Table 4 and
the NTOs for other methods are given in the SI. We refer to Ref. 125 for an
extensive benchmark study of the one-photon absorption and excited-state
absorption of uracil.
In EELS, the excited states are probed by measuring the kinetic energy change
of a beam of electrons after inelastic collision with the probed molecular
sample. Trajmar (1980) In the limit of high incident energy or small
scattering angle, the transition amplitude takes a dipole form and the
selection rules are same as those of UV-Vis absorption. Otherwise, the
selection rules are different and optically dark states can be detected.
Furthermore, spin-orbit coupling enables excitation into triplet states.
Assignment of the EELS spectral signatures is based on theoretical
calculations. Note that excitation energies obtained with EELS may be blue-
shifted compared to those from UV-Vis absorption due to momentum transfer
between the probing electrons and the probed molecule.
EOM-EE-CCSD excitation energies for all valence states of uracil agree well
with the experimental values from EELS. Both the EOM-EE-CCSD and EELS values
slightly overestimate the UV-Vis results. For the two triplet states and the
S1(A′′,n$\pi^{\ast}$) and S2(A′,$\pi\pi^{\ast}$) states, ADC(2) also gives
fairly accurate excitation energies. ADC(2)-x, on the other hand, seems
unbalanced for the valence excitations (regardless of the basis set). The
TDDFT/B3LYP excitation energies are red-shifted with respect to the EELS
values, but the energy differences between the T1(A′, $\pi\pi^{\ast}$),
T2(A′′, n$\pi^{\ast}$), S1(A′′, n$\pi^{\ast}$), and S2(A′, $\pi\pi^{\ast}$)
states are in reasonable agreement with the corresponding experimentally
derived values.
Table 3: Uracil. Excitation energies (eV) at the FC geometry and comparison with experimental values from EELS Chernyshova _et al._ (2012) and UV absorption spectroscopy. Clark, Peschel, and Tinoco (1965)

State | ADC(2) | ADC(2)-x | EOM-CCSD | TDDFT | EELS | UV
---|---|---|---|---|---|---
T1($\pi\pi^{\ast}$) | 3.91 | 3.36 | 3.84 | 3.43 | 3.75 |
T2(n$\pi^{\ast}$) | 4.47 | 3.79 | 4.88 | 4.27 | 4.76 |
S1(n$\pi^{\ast}$) | 4.68 | 3.93 | 5.15 | 4.65 | 5.2 |
S2($\pi\pi^{\ast}$) | 5.40 | 4.70 | 5.68 | 5.19 | 5.5 | 5.08
S3($\pi\mathrm{Ryd}$) | 5.97 | 5.39 | 6.07 | 5.70 | - |
S5($\pi\pi^{\ast}$) | 6.26 | 5.32 | 6.74 | 5.90 | 6.54 | 6.02
Table 4: Uracil. EOM-EE-CCSD/6-311++G** NTOs for the transitions from the ground state to the lowest valence-excited states at the FC geometry (NTO isosurface is 0.05; the hole and particle NTO images are not reproduced here).

Final state | $E^{\mathrm{ex}}$ (eV) | Osc. strength | $\sigma_{K}^{2}$
---|---|---|---
T1(A′,$\pi\pi^{\ast}$) | 3.84 | - | 0.82
T2(A′′,n$\pi^{\ast}$) | 4.88 | - | 0.82
S1(A′′,n$\pi^{\ast}$) | 5.15 | 0.0000 | 0.81
S2(A′,$\pi\pi^{\ast}$) | 5.68 | 0.2386 | 0.75
S3(A′′,$\pi\mathrm{Ryd}$) | 6.07 | 0.0027 | 0.85
S5(A′,$\pi\pi^{\ast}$) | 6.74 | 0.0573 | 0.73
Table 5 shows the excitation energies of the five lowest triplet and singlet
states of thymine, along with the experimental values obtained by EELS.
Chernyshova _et al._ (2013) We did not find literature data for the UV
absorption of thymine in the gas phase. The energetic order is based on EOM-
EE-CCSD. Here, we reassign the peaks of the EELS spectra Chernyshova _et al._
(2013) on the basis of the following considerations: $(i)$ optically bright
transitions also exhibit strong peaks in the EELS spectra; $(ii)$ the
excitation energy of a triplet state is lower than the excitation energy of
the singlet state with the same electronic configuration; $(iii)$ the
strengths of the transitions to triplet states are smaller than the strengths
of the transitions to singlet states; $(iv)$ among the excitations enabled by
spin-orbit coupling, $\pi\to\pi^{\ast}$ transitions have relatively large
transition moment.
Except for T1($\pi\pi^{\ast}$), the ADC(2) excitation energies are red-shifted
relative to EOM-CCSD. As a result, the ADC(2) excitation energies of the
states considered here are closest, in absolute values, to the experimental
values from Table 5. However, the energy differences between the singlet
states (S1, S2, S4, and S5) are much better reproduced by EOM-CCSD.
TDDFT/B3LYP accurately reproduces the excitation energies of the
T2(n$\pi^{\ast}$), S1(n$\pi^{\ast}$), and S2($\pi\pi^{\ast}$) states.
Table 5: Thymine. Excitation energies (eV) at the FC geometry compared with the experimental values from EELS. Chernyshova _et al._ (2013) The oscillator strengths are from EOM-EE-CCSD and are used for the re-assignment.

State | ADC(2) | EOM-CCSD | TDDFT | EELS | Osc. strength
---|---|---|---|---|---
T1($\pi\pi^{\ast}$) | 3.70 | 3.63 | 3.19 | 3.66 | -
T2(n$\pi^{\ast}$) | 4.39 | 4.81 | 4.25 | 4.20 | -
S1(n$\pi^{\ast}$) | 4.60 | 5.08 | 4.64 | 4.61 | 0.0000
S2($\pi\pi^{\ast}$) | 5.18 | 5.48 | 4.90 | 4.96 | 0.2289
T3($\pi\pi^{\ast}$) | 5.27 | 5.32 | 4.61 | 5.41 | -
T4($\pi\mathrm{Ryd}$) | 5.66 | 5.76 | 5.39 | - | -
S3($\pi\mathrm{Ryd}$) | 5.71 | 5.82 | 5.46 | - | 0.0005
T5($\pi\pi^{\ast}$) | 5.87 | 5.91 | 5.10 | 5.75 | -
S4(n$\pi^{\ast}$) | 5.95 | 6.45 | 5.72 | 5.96 | 0.0000
S5($\pi\pi^{\ast}$) | 6.15 | 6.63 | 5.87 | 6.17 | 0.0679
Table 6 shows the excitation energies of the two lowest triplet and singlet
states, and the lowest Rydberg states of acetylacetone, along with the
experimental values obtained from EELS Walzl, Xavier, and Kuppermann (1987)
and UV absorption Nakanishi, Morita, and Nagakura (1977) (the exact state
ordering of states in the singlet Rydberg manifold is unknown). Table 7 shows
the NTOs obtained at the EOM-EE-CCSD/6-311++G** level. Remarkably, for this
molecule the excitation energies from EELS agree well with those from UV
absorption. Note that the EELS spectra of acetylacetone were recorded with
incident electron energies of 25 and 100 eV, Walzl, Xavier, and Kuppermann
(1987) whereas those for uracil Chernyshova _et al._ (2012) were obtained
with 0–8.0 eV. The higher incident electron energies reduce the effective
acceptance angle of the electrons, which may hinder the detection of electrons
that have undergone momentum transfer. The transitions to the
T1($\pi\pi^{\ast}$) and T2(n$\pi^{\ast}$) states appeared only with the 25 eV
incident electron energy and a scattering angle of 90° (see Fig. 3 of Ref.
127). The peaks were broad and, furthermore, an order of magnitude less
intense than the S${}_{0}\to~{}$S2($\pi\pi^{\ast}$) transition. Consequently,
it is difficult to resolve the excitation energies of T1($\pi\pi^{\ast}$) and
T2(n$\pi^{\ast}$). ADC(2) yields the best match with the experimental results
for acetylacetone.
Table 6: Acetylacetone. Excitation energies (eV) at the FC geometry compared with the values obtained in EELS Walzl, Xavier, and Kuppermann (1987) and UV absorption spectroscopy. Nakanishi, Morita, and Nagakura (1977)

State | ADC(2) | ADC(2)-x | EOM-CCSD | TDDFT | EELS | UV
---|---|---|---|---|---|---
T1($\pi\pi^{\ast}$) | 3.76 | 3.16 | 3.69 | 3.23 | 3.57? | -
T2(n$\pi^{\ast}$) | 3.79 | 3.13 | 4.11 | 3.75 | ? | -
S1(n$\pi^{\ast}$) | 4.03 | 3.29 | 4.39 | 4.18 | 4.04 | 4.2
S2($\pi\pi^{\ast}$) | 4.96 | 4.28 | 5.24 | 5.08 | 4.70 | 4.72
T${}_{3}(\pi\mathrm{Ryd})$ | 5.91 | 5.45 | 6.02 | 5.66 | 5.52 | -
S${}_{3?}(\pi\mathrm{Ryd})$ | 5.98 | 5.53 | 6.13 | 5.72 | 5.84 | 5.85
S${}_{5?}(\pi\mathrm{Ryd})$ | 6.87 | 6.30 | 7.06 | 6.64 | 6.52 | 6.61
Table 7: Acetylacetone. EOM-EE-CCSD/6-311++G** NTOs of the excitations from the ground state to the lowest-lying valence-excited states at the FC geometry (NTO isosurface is 0.03 for the Rydberg transitions and 0.05 for the rest; the NTO images are not reproduced here).

Final state | $E^{\mathrm{ex}}$ (eV) | Osc. strength | $\sigma_{K}^{2}$
---|---|---|---
T1(A′,$\pi\pi^{\ast}$) | 3.69 | - | 0.82
T2(A′′,n$\pi^{\ast}$) | 4.11 | - | 0.82
S1(A′′,n$\pi^{\ast}$) | 4.39 | 0.0006 | 0.81
S2(A′,$\pi\pi^{\ast}$) | 5.24 | 0.3299 | 0.77
T${}_{3}[\pi\mathrm{Ryd}(s)]$ | 6.02 | - | 0.86
S${}_{3?}[\pi\mathrm{Ryd}(s)]$ | 6.13 | 0.0072 | 0.86
S${}_{5?}[\pi\mathrm{Ryd}(p)]$ | 7.06 | 0.0571 | 0.85
These results indicate that the excitation energies of the valence-excited
states computed by EOM-EE-CCSD, ADC(2), and TDDFT/B3LYP are equally
(in)accurate. Which method yields the best match with experiment depends on
the molecule.
### III.3 Core Excitations from the Valence-Excited States
In Secs. III.1 and III.2, we analyzed two of our three desiderata for a good
electronic structure method for TR-XAS—that is, the ability to yield accurate
results for ground-state XAS as well as for the valence-excited states
involved in the dynamics. In this subsection, we focus on the remaining item,
i.e., the ability to yield accurate XAS of valence-excited states.
For uracil, we confirmed that EOM-CCSD and CVS-EOM-CCSD yield fairly accurate
results for the valence-excited T1($\pi\pi^{\ast}$),
T2($\mathrm{n}\pi^{\ast}$), S1($\mathrm{n}\pi^{\ast}$), and
S2($\pi\pi^{\ast}$) states and for the (final) singlet (O1s) core-excited
states at the FC geometry, respectively. It is thus reasonable to consider the
oxygen K-edge XAS spectra of the S1($\mathrm{n}\pi^{\ast}$) and
S2($\pi\pi^{\ast}$) states of uracil obtained from CVS-EOM-CCSD as our
reference, even though CVS-EOM-CCSD only yields the peaks of the core-to-SOMO
transitions.
Fig. 6 shows the oxygen K-edge XAS of uracil in the (a)
S1($\mathrm{n}\pi^{\ast}$), (b) S2($\pi\pi^{\ast}$), (c)
T2($\mathrm{n}\pi^{\ast}$), and (d) T1($\pi\pi^{\ast}$) states, calculated
using CVS-EOM-CCSD (blue curve) and LSOR-CCSD (red curve) at the FC geometry.
Note that the HSOR-CCSD spectra of S1($\mathrm{n}\pi^{\ast}$) and
S2($\pi\pi^{\ast}$) are identical to the LSOR-CCSD spectra for the
T2($\mathrm{n}\pi^{\ast}$) and T1($\pi\pi^{\ast}$) states, respectively,
because their orbital electronic configurations are the same, see Table 4. The
ground-state spectrum (green curve) is included in all panels for comparison.
The LSOR-CCSD NTOs of the transitions underlying the peaks in the
S1($\mathrm{n}\pi^{\ast}$), S2($\pi\pi^{\ast}$) and T1($\pi\pi^{\ast}$)
spectra are given in Tables 8, 9, and 10, respectively.
Figure 6: Uracil. Oxygen K-edge NEXAFS of the four lowest-lying valence states: (a) S1($\mathrm{n}\pi^{\ast}$); (b) S2($\pi\pi^{\ast}$); (c) T2($\mathrm{n}\pi^{\ast}$); and (d) T1($\pi\pi^{\ast}$). The blue and red curves correspond to the CVS-EOM-CCSD and LSOR-CCSD results, respectively. Note that the HSOR spectra for S1 and S2 are identical to the LSOR-CCSD spectra for T2 and T1. Basis set: 6-311++G**. FC geometry. The ground-state XAS (green curve) is included for comparison.

Table 8: Uracil. LSOR-CCSD/6-311++G** NTOs of the O1s core excitations from the S1(n$\pi^{\ast}$) state at the FC geometry (NTO isosurface value is 0.05; the NTO images are not reproduced here).

$E^{\mathrm{ex}}$ (eV) | Osc. strength | Spin | $\sigma_{K}^{2}$
---|---|---|---
526.39 | 0.0451 | $\alpha$ | 0.86
534.26 | 0.0323 | $\alpha$ | 0.56
 | | $\beta$ | 0.23
Table 9: Uracil. LSOR-CCSD/6-311++G** NTOs of the O1s core excitations from the S2($\pi\pi^{\ast}$) state at the FC geometry (NTO isosurface value is 0.05; the NTO images are not reproduced here).

$E^{\mathrm{ex}}$ (eV) | Osc. strength | Spin | $\sigma_{K}^{2}$
---|---|---|---
530.16 | 0.0102 | $\alpha$ | 0.68
530.54 | 0.0131 | $\alpha$ | 0.67
532.96 | 0.0186 | $\beta$ | 0.74
534.74 | 0.0155 | $\beta$ | 0.80
535.70 | 0.0076 | $\alpha$ | 0.77
535.88 | 0.0085 | $\alpha$ | 0.76
Table 10: Uracil. LSOR-CCSD/6-311++G** NTOs of the O1s core excitations from the T1($\pi\pi^{\ast}$) state at the FC geometry (NTO isosurface is 0.05; the NTO images are not reproduced here).

$E^{\mathrm{ex}}$ (eV) | Osc. strength | Spin | $\sigma_{K}^{2}$
---|---|---|---
529.81 | 0.0212 | $\beta$ | 0.79
532.39 | 0.0115 | $\beta$ | 0.78
534.15 | 0.0187 | $\alpha$ | 0.76
535.09 | 0.0100 | $\alpha$ | 0.73
535.58 | 0.0062 | $\beta$ | 0.77
535.61 | 0.0081 | $\beta$ | 0.72
The CVS-EOM-CCSD spectrum of S1($\mathrm{n}\pi^{\ast}$) exhibits a relatively
intense peak at 528.02 eV, and tiny peaks at 532.40 and 532.52 eV. The intense
peak is due to the transition from the 1$s$ orbital of O4 to the SOMO, which is a lone-pair-type orbital localized on O4. The tiny peak at 532.40 eV is assigned to the transition to the SOMO from the 1$s$ orbital of O2, whereas the peak at 532.52 eV is assigned to a transition with multiply excited character. The
LSOR-CCSD spectrum exhibits the strong core-to-SOMO transition peak at 526.39
eV, which is red-shifted from the corresponding CVS-EOM-CCSD one by 1.63 eV.
As Table 8 shows, the peak at 534.26 eV is due to transition from the 1$s$
orbital of O2 to a $\pi^{\ast}$ orbital, and it corresponds to the second peak
in the ground-state spectrum. In the S1($\mathrm{n}\pi^{\ast}$) XAS spectrum
there is no peak corresponding to the first band in the ground-state spectrum,
assigned there to the O4 1$s\to\pi^{\ast}$ transition. This suggests that this
transition is suppressed by the positive charge localized on O4 in the
S1($\mathrm{n}\pi^{\ast}$) state.
The S1($\mathrm{n}\pi^{\ast}$) state from LSOR-CCSD is spin-contaminated, with
$\langle S^{2}\rangle=1.033$. The spectra of S1($\mathrm{n}\pi^{\ast}$)
yielded by LSOR-CCSD [panel (a)] and by HSOR-CCSD [panel (c)] are almost
identical. This is not too surprising, as the spectra of
S1($\mathrm{n}\pi^{\ast}$) and T2($\mathrm{n}\pi^{\ast}$) from CVS-EOM-CCSD
are also almost identical. This is probably a consequence of small exchange
interactions in the two states (the singlet and the triplet), due to
negligible spatial overlap between the lone pair (n) and $\pi^{\ast}$
orbitals.
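In a two-open-shell, single-configuration picture this argument can be made quantitative: the singlet–triplet gap is approximately twice the exchange integral between the two singly occupied orbitals, and this integral vanishes as their spatial overlap goes to zero. This is a textbook relation restated here for clarity, not a quantity computed in this work:

$$\Delta E_{\mathrm{ST}}=E_{\mathrm{S}}-E_{\mathrm{T}}\approx 2K_{ij},\qquad K_{ij}=\iint\phi_{i}^{*}(\mathbf{r}_{1})\phi_{j}(\mathbf{r}_{1})\,\frac{1}{r_{12}}\,\phi_{j}^{*}(\mathbf{r}_{2})\phi_{i}(\mathbf{r}_{2})\,\mathrm{d}\mathbf{r}_{1}\,\mathrm{d}\mathbf{r}_{2},$$

with $\phi_{i}=\mathrm{n}$ and $\phi_{j}=\pi^{\ast}$ for the states discussed here.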
In the CVS-EOM-CCSD spectrum of S2($\pi\pi^{\ast}$), see panel (b), the peaks
due to the core-to-SOMO ($\pi$) transitions from O4 and O2 occur at 527.50 and
531.87 eV, respectively. The additional peak at 531.99 eV is assigned to a
transition with multiply excited character. In the LSOR-CCSD spectrum, the
core-to-SOMO peaks appear at 530.16 and 530.54 eV, respectively.
As shown in Table 9, we assign the peaks at 532.96 and 534.74 eV in the LSOR-
CCSD spectrum to transitions from the 1$s$ orbitals of the two oxygens to the
$\pi^{\ast}$ orbital, which is half occupied in S2($\pi\pi^{\ast}$). The NTO
analysis reveals that they correspond to the first and second peak of the
ground-state spectrum. Note that $\langle S^{2}\rangle$ = 1.326 for the
S2($\pi\pi^{\ast}$) state obtained from LSOR-CCSD.
In the HSOR-CCSD spectrum of the S2($\pi\pi^{\ast}$) state [which is equal to
the LSOR-CCSD spectrum of the T1($\pi\pi^{\ast}$) state in panel (d)], the
peaks of the core-to-SOMO ($\pi$) transitions from O4 and O2 appear at 529.81
and 532.39 eV, respectively (see Table 10). They are followed by transitions
to the half-occupied $\pi^{\ast}$ orbital at 534.15 and 535.09 eV,
respectively. In contrast to what we observed in the
S1($\mathrm{n}\pi^{\ast}$) spectra, the LSOR-CCSD and HSOR-CCSD spectra of the
S2($\pi\pi^{\ast}$) state are qualitatively different. This can again be
explained in terms of the importance of exchange interactions in the initial
and final states. On the one hand, the T1($\pi\pi^{\ast}$) (initial) state is
stabilized relative to the S2($\pi\pi^{\ast}$) state by the exchange
interaction, as the overlap between the $\pi$ and $\pi^{\ast}$ orbitals is not
negligible. The
exchange interaction between the strongly localized core-hole orbital and the
half-occupied valence/virtual orbital in the final core-excited state, on the
other hand, is expected to be small.
Figure 7: Thymine. (a) Oxygen K-edge NEXAFS in the S1(n$\pi^{\ast}$) state at
its potential energy minimum. Blue: CVS-EOM-CCSD. Red: LSOR-CCSD. Thin green
line: ground-state spectrum at the FC geometry. (b) Thick black: Experimental
spectrum at the delay time of 2 ps (Wolf _et al._, 2017). Blue: computational
spectrum made from the blue and green curves of (a), shifted by $-$1.7 eV.
Red: computational spectrum made from the red and green curves of (a), shifted
by $-$1.7 eV. The blue and red curves from (a) were scaled by 0.2 in (b). The
ground-state spectrum from (a) was scaled by 0.8 in (b). FWHM of the
Lorentzian convolution function is 0.6 eV.
To evaluate the accuracy of the excited-state XAS spectra from CVS-EOM-CCSD
and LSOR-CCSD, we also calculated the XAS spectra of the S1(n$\pi^{\ast}$)
state of thymine at the potential energy minimum of S1(n$\pi^{\ast}$), see
panel (a) of Fig. 7. To construct the surface cut of the theoretical
absorption spectra, we chose a FWHM of 0.6 eV for the Lorentzian convolution
function. Panel (b) shows the spectrum of S1(n$\pi^{\ast}$) multiplied by 0.2
and added to the ground-state spectrum multiplied by 0.8; these factors were
chosen for the best fit with the experimental spectrum. A surface
cut of the experimental TR-NEXAFS spectrum at the delay time of 2 ps
(Wolf _et al._, 2017) is also shown in panel (b) of Fig. 7. The reconstructed
computational spectra are shifted by $-$1.7 eV. In the experimental spectrum,
the core-to-SOMO transition peak occurs at 526.4 eV. In the reconstructed
theoretical spectrum, the core-to-SOMO transition peaks appear at 526.62 and
524.70 eV for CVS-EOM-CCSD and LSOR-CCSD, respectively. Thus, the superposed
CVS-EOM-CCSD spectrum agrees slightly better with experiment than the
LSOR-CCSD one. Nonetheless, the LSOR-CCSD spectrum remains in reasonable
agreement with the experimental data.
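As an illustration of how such composite surface-cut spectra can be assembled, the following minimal Python sketch broadens two stick spectra with a Lorentzian of 0.6 eV FWHM, combines them with the 0.2/0.8 weights, and applies a global shift. The stick energies and strengths below are hypothetical placeholders, not values from this work; the transient (difference) spectra discussed later follow the same pattern with weights (0.75, $-$0.25).

```python
import numpy as np

def lorentzian_broaden(energies, strengths, grid, fwhm=0.6):
    """Convolve a stick spectrum with a Lorentzian of the given FWHM (eV)."""
    gamma = fwhm / 2.0  # half-width at half-maximum
    spec = np.zeros_like(grid)
    for e, f in zip(energies, strengths):
        spec += f * (gamma / np.pi) / ((grid - e) ** 2 + gamma ** 2)
    return spec

grid = np.arange(520.0, 545.0, 0.01)  # energy grid in eV

# Hypothetical placeholder sticks: (energies in eV, oscillator strengths)
excited_sticks = ([526.4, 534.3], [0.045, 0.032])  # excited-state XAS
ground_sticks = ([531.5, 534.0], [0.050, 0.040])   # ground-state XAS

shift = -1.7  # global shift of the reconstructed spectrum (eV)
# Evaluating at grid - shift moves every feature by `shift` along the grid.
composite = (0.2 * lorentzian_broaden(*excited_sticks, grid - shift)
             + 0.8 * lorentzian_broaden(*ground_sticks, grid - shift))
```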
Due to the lack of experimental data, not much can be said about the accuracy
of CVS-EOM-CCSD and LSOR-CCSD/HSOR-CCSD for core excitations from a triplet
excited state in uracil and thymine. Furthermore, we are unable to
unambiguously clarify, using uracil and thymine as model systems, which of the
two methods, LSOR-CCSD or HSOR-CCSD, should be considered more reliable when
they give qualitatively different spectra for the singlet excited states.
Therefore, we turn our attention to the carbon K-edge spectra of acetylacetone
and show, in Fig. 8, the spectra obtained using CVS-EOM-CCSD (blue), LSOR-CCSD
(red), and HSOR-CCSD (magenta) for the T1($\pi\pi^{\ast}$) [panel (a)] and
S2($\pi\pi^{\ast}$) [panel (b)] states. The T1($\pi\pi^{\ast}$) spectra were
obtained at the potential energy minimum of T1($\pi\pi^{\ast}$). The spectra
of S2($\pi\pi^{\ast}$) were calculated at the potential energy minimum of the
S1(n$\pi^{\ast}$) state. In doing so, we assume that the nuclear wave packet
propagates on the S2($\pi\pi^{\ast}$) surface toward the potential energy
minimum of the S1(n$\pi^{\ast}$) surface. Note that CVS-EOM-CCSD does not
describe all the core excitations from a valence-excited state (see Fig. 2).
In panels (c) and (d), the LSOR-CCSD spectra were multiplied by 0.75, the
ground-state spectrum scaled by 0.25 was subtracted from them, and the results
were superposed on the surface cuts of the experimental transient-absorption
NEXAFS at delay times of 7-10 ps and 120-200 fs, respectively. The calculated
transient-absorption spectra were shifted by $-$0.9 eV, i.e., by the same
amount as the spectrum of the ground state [see panel (b) of Fig. 5]. To
construct the surface cut of the theoretical transient-absorption spectra, we
used a FWHM of 0.6 eV for the Lorentzian convolution function. The scaling
factors 0.75 and 0.25 were chosen to yield the best fit with the experimental
spectra.
The NTOs of the core excitations from T1($\pi\pi^{\ast}$) and
S2($\pi\pi^{\ast}$) are shown in Tabs. 11 and 12, respectively. In the
experimental study (Bhattacherjee _et al._, 2017), it was concluded that
S2($\pi\pi^{\ast}$) is populated on the shorter time scale, whereas on the
longer time scale it is T1($\pi\pi^{\ast}$) that becomes populated.
The surface cut of the experimental transient-absorption spectra at longer
times (7-10 ps) features two peaks at 281.4 and 283.8 eV. In panel (a) of Fig.
8, the CVS-EOM-CCSD spectrum of T1($\pi\pi^{\ast}$) shows the core-to-SOMO
transition peaks at 282.69 and 284.04 eV, whereas the LSOR-CCSD ones appear at
281.76 and 283.94 eV. The LSOR-CCSD spectrum also shows a peak corresponding
to a transition from C4 to the half-occupied $\pi^{\ast}$ orbital at 286.96 eV
(see Table 11). The separation of 2.4 eV between the two core-to-SOMO peaks in
the experiment is well reproduced by LSOR-CCSD. Spin contamination is small,
$\langle S^{2}\rangle$=2.004 for the T1($\pi\pi^{\ast}$) state obtained using
LSOR-CCSD. Therefore, it is safe to say that LSOR-CCSD accurately describes
core excitations from the low-lying triplet states.
The surface cut of the transient-absorption spectra at shorter times, 120-200
fs, features relatively strong peaks at 284.7 and 285.9 eV and a ground-state
bleach at 286.6 eV. The CVS-EOM-CCSD spectrum of the S2($\pi\pi^{\ast}$) state
shows the core-to-SOMO peak at 280.77 eV. The LSOR-CCSD spectrum (red) has core-to-SOMO
transition peaks at 281.30 and 283.69 eV, plus the peaks due to the
transitions from the core of C2, C4 and C3 to the half-occupied $\pi^{\ast}$
orbital at 285.43, 286.07 and 287.39 eV, respectively (see Table 12). Note
that the peaks at 285.43 and 286.07 eV correspond to the main degenerate peaks
of the ground-state spectrum, as revealed by inspection of the NTOs. The HSOR-
CCSD spectrum (magenta) exhibits the core-to-SOMO transition peaks at 281.99
and 283.17 eV, followed by only one of the quasi-degenerate peaks
corresponding to transitions to the half-occupied $\pi^{\ast}$ orbital, at
287.95 eV. Since the experimental surface-cut spectrum does not clearly show
the core-to-SOMO transition peaks, it is difficult to assess the accuracy of
these peaks as obtained in the calculations. When it comes to the experimental
peaks at 284.7 and 285.9 eV, only LSOR-CCSD reproduces them with reasonable
accuracy. The experimental peak at 288.4 eV is not reproduced. In the case of
acetylacetone, the HSOR-CCSD approximation fails to correctly mimic the
spectrum of S2($\pi\pi^{\ast}$), since it does not give the peaks at 284.7 and
285.9 eV. The differences between LSOR-CCSD and HSOR-CCSD spectra for
S2($\pi\pi^{\ast}$) can be rationalized as done for uracil.
We emphasize that the assignment of the transient absorption signal at shorter
times to S2($\pi\pi^{\ast}$) is based on peaks assigned to transitions to the
$\pi^{\ast}$ orbitals (almost degenerate in the ground state), which cannot be
described by CVS-EOM-CCSD (see Fig. 2 in Sec. II.1).
Figure 8: Acetylacetone. Carbon K-edge NEXAFS from the T1($\pi\pi^{\ast}$) (a) and S2($\pi\pi^{\ast}$) (b) states. The spectra of T1($\pi\pi^{\ast}$) were computed at the potential energy minimum of T1($\pi\pi^{\ast}$). The spectra of S2($\pi\pi^{\ast}$) were computed at the potential energy minimum of S1(n$\pi^{\ast}$). Blue: CVS-EOM-CCSD. Red: LSOR-CCSD. Magenta: HSOR-CCSD. Green: ground-state spectrum at the FC geometry. (c), (d) Black: Experimental transient absorption spectra at the delay times of 7-10 ps and 120-200 fs (Bhattacherjee _et al._, 2017), respectively. Red: computational transient absorption spectra made from the red and the green curves of (a) and (b), respectively, shifted by $-$0.9 eV as the spectrum of the ground state [see panel (b) of Fig. 5]. The red curves of panels (a) and (b) were scaled by 0.75 and from these, the green GS spectrum, scaled by 0.25, was subtracted. FWHM of the Lorentzian convolution function is 0.4 eV for panels (a) and (b), 0.6 eV for panels (c) and (d), respectively. Basis set: 6-311++G**.
Table 11: Acetylacetone. LSOR-CCSD/6-311++G** NTOs of the C1s core excitations from the T1 state at its potential energy minimum (NTO isosurface value is 0.05).
$E^{\mathrm{ex}}$ (eV) | Osc. strength | Spin | Hole | $\sigma_{K}^{2}$ | Particle
---|---|---|---|---|---
281.76 | 0.0347 | $\beta$ | | 0.86 |
283.94 | 0.0318 | $\beta$ | | 0.84 |
285.69 | 0.0036 | $\beta$ | | 0.72 |
286.96 | 0.0334 | $\alpha$ | | 0.65 |
| | $\beta$ | | 0.14 |
Table 12: Acetylacetone. LSOR-CCSD/6-311++G** NTOs of the C1s core excitations from the S2 state at the potential energy minimum of S1 (NTO isosurface value is 0.05).
$E^{\mathrm{ex}}$ (eV) | Osc. strength | Spin | Hole | $\sigma_{K}^{2}$ | Particle
---|---|---|---|---|---
281.30 | 0.0228 | $\alpha$ | | 0.77 |
283.69 | 0.0085 | $\alpha$ | | 0.71 |
285.43 | 0.0269 | $\beta$ | | 0.76 |
286.07 | 0.0381 | $\beta$ | | 0.76 |
287.39 | 0.0057 | $\beta$ | | 0.64 |
On the basis of the above analysis, we conclude that, despite spin
contamination, LSOR-CCSD describes the XAS of singlet valence-excited states
with reasonable accuracy. LSOR-CCSD could even be used as a benchmark for other
levels of theory, especially when experimental TR-XAS spectra are not
available.
We conclude this section by analyzing the MOM-TDDFT results for the transient
absorption. As seen in Secs. III.1 and III.2, ADC(2) and TDDFT/B3LYP yield
reasonable results for the lowest-lying core-excited states and for the
valence-excited states of interest in the nuclear dynamics. The next question
is thus whether MOM-TDDFT/B3LYP can reproduce the main peaks of the time-
resolved spectra with reasonable accuracy. We attempt to answer this question
by comparing the MOM-TDDFT/B3LYP spectra of thymine and acetylacetone with the
surface cuts of the experimental spectra.
The MOM-TDDFT/B3LYP O K-edge NEXAFS spectrum of thymine in the
S1(n$\pi^{\ast}$) state is shown in Fig. 9, panel (a). To construct the
surface cut of the theoretical absorption spectra, we used a FWHM of 0.6 eV
for the Lorentzian convolution function. A theoretical surface-cut spectrum
was constructed as the sum of the MOM-TDDFT spectrum and the standard TDDFT
spectrum of the ground state, scaled by 0.2 and 0.8, respectively. This is shown in
panel (b), together with the experimental surface-cut spectrum at 2 ps delay
(Wolf _et al._, 2017). The MOM-TDDFT/B3LYP peaks due to the core
transitions from O4 and O2 to SOMO (n) are found at 511.82 and 513.50 eV,
respectively. The peak corresponding to the first main peak of the ground-
state spectrum is missing, and the one corresponding to the second main peak
in the GS appears at 517.71 eV. These features are equivalent to what we
observed in the LSOR-CCSD case (see Fig. 7). Thus, the separation between the
core-to-SOMO peak and the ground-state main peaks is accurately reproduced.
Figure 9: (a) Red: Oxygen K-edge NEXAFS for thymine in the S1(n$\pi^{\ast}$)
state calculated at the MOM-TDDFT/B3LYP/6-311++G** level at the potential
energy minimum. Green: Ground-state spectrum. (b) Black: Experimental spectrum
at the delay time of 2 ps (Wolf _et al._, 2017). Red: computational spectrum
made from the red and the green curves of (a), shifted by $+$14.8 eV. The red
curve of (a) was scaled by 0.2. The green curve of (a) was scaled by 0.8. FWHM
of the Lorentzian convolution function is 0.6 eV.
Next, we consider the carbon K-edge spectra of acetylacetone in the
T1($\pi\pi^{\ast}$) [at the minimum of T1($\pi\pi^{\ast}$)] and
S2($\pi\pi^{\ast}$) [at the minimum of S1(n$\pi^{\ast}$)] states, as obtained
from MOM-TDDFT. They are plotted in panels (a) and (b) of Fig. 10,
respectively. Surface cuts of the transient-absorption NEXAFS spectra were
constructed by subtracting the TDDFT ground-state spectrum, scaled by 0.25,
from the MOM-TDDFT spectra, scaled by 0.75. For this construction, we convoluted the
oscillator strengths with a Lorentzian function (FWHM = 0.6 eV), and chose the
factors 0.75 and 0.25 for the best fit with the experimental spectra. They are
superposed with those from experiment at delay times of 7-10 ps and 120-200 fs
in Fig. 10, panels (c) and (d). The MOM-TDDFT spectrum of T1($\pi\pi^{\ast}$)
exhibits the core-to-SOMO transition peaks at 270.88 and 272.41 eV. A peak due
to the transition to the half-occupied $\pi^{\ast}$ orbital occurs at 274.16
eV. All peaks observed in the LSOR-CCSD spectrum were also obtained by MOM-
TDDFT. The fine structure of the surface-cut transient absorption spectrum is
qualitatively reproduced.
The MOM-TDDFT spectrum of S2($\pi\pi^{\ast}$) exhibits the core-to-
SOMO($\pi^{\ast}$) transition peaks at 269.94 and 271.73 eV. The peaks due to
the transitions to the half-occupied $\pi^{\ast}$ orbital appear at 274.17 and
274.98 eV. The reconstructed transient-absorption spectrum agrees well with
the experimental surface-cut spectrum.
Figure 10: (a), (b) Carbon K-edge NEXAFS for acetylacetone in the
T1($\pi\pi^{\ast}$) and S2($\pi\pi^{\ast}$) states calculated at the MOM-
TDDFT/B3LYP/6-311++G** level at the potential energy minima of
T1($\pi\pi^{\ast}$) and S1(n$\pi^{\ast}$), respectively. The green curve is
the ground-state spectrum. In panels (c) and (d), the experimental transient
absorption spectra at delay times of 7-10 ps and 120-200 fs are reported with
black lines (Bhattacherjee _et al._, 2017). In red are the computational
transient absorption spectra reconstructed from the red and green curves of
panels (a) and (b), respectively, shifted by $+$10.9 eV. The red curves of (a)
and (b) were scaled by 0.75, and the green curves, scaled by 0.25, were
subtracted from them. FWHM of the Lorentzian convolution function is 0.4 eV for
panels (a) and (b), 0.6 eV for panels (c) and (d), respectively.
## IV Summary and Conclusions
We have analyzed the performance of different single-reference electronic
structure methods for excited-state XAS calculations. The analysis was carried
out in three steps. First, we compared the results for the ground-state XAS
spectra of uracil, thymine, and acetylacetone computed using CVS-ADC(2), CVS-
EOM-CCSD, and TDDFT/B3LYP, and with the experimental spectra. Second, we
computed the excitation energies of the valence-excited states presumably
involved in the dynamics at ADC(2), EOM-EE-CCSD, and TDDFT/B3LYP levels, and
compared them with the experimental data from EELS and UV absorption. Third,
we analyzed different protocols for the XAS spectra of the lowest-lying
valence-excited states based on the CCSD ansatz, namely, regular CVS-EOM-CCSD
for transitions between excited states, and EOM-CCSD applied to an
excited-state reference optimized by imposing the MOM constraint. The results for
thymine and acetylacetone were evaluated by comparison with the experimental
time-resolved spectra. Finally, the performance of MOM-TDDFT/B3LYP for TR-XAS
was evaluated, again on thymine and acetylacetone, by comparison with the
LSOR-CCSD and the experimental spectra.
In the first step, we found that CVS-EOM-CCSD reproduces well the entire pre-
edge region of the ground-state XAS spectra. On the other hand, CVS-ADC(2) and
TDDFT/B3LYP only describe the lowest-lying core excitations with reasonable
accuracy, while the Rydberg region is not captured. In the second step, we
observed that EOM-EE-CCSD, ADC(2), and TDDFT/B3LYP treat the valence-excited
states with a comparable accuracy.
Among the methods analyzed in the third step, only LSOR-CCSD and MOM-TDDFT can
reproduce the entire pre-bleaching region of the excited-state XAS spectra for
thymine and acetylacetone, despite spin contamination of the singlet excited
states. LSOR-CCSD could be used as the reference when evaluating the
performance of other electronic structure methods for excited-state XAS,
especially if no experimental spectra are available. For the spectra of the
spin-singlet states, CVS-EOM-CCSD yields slightly better core$\to$SOMO peak
positions.
We note that the same procedure can be used to assess the performance of other
xc-functionals or post-HF methods for TR-XAS calculations. We also note that
the description of an initial state with the MOM algorithm is reasonably
accurate only when the initial state has a single-configurational
wave-function character. The low computational scaling and reasonable accuracy
of MOM-TDDFT make it rather attractive for on-the-fly calculation of TR-XAS
spectra in excited-state nuclear dynamics simulations.
## Supplementary material
See supplementary material for the NTOs of all core and valence excitations.
## Data availability statement
The data that support the findings of this study are available within the
article and its supplementary material.
## Acknowledgments
This research was funded by the European Union’s Horizon 2020 research and
innovation program under the Marie Skłodowska-Curie Grant Agreements No.
713683 (COFUNDfellowsDTU) and No. 765739 (COSINE – COmputational Spectroscopy
In Natural sciences and Engineering), by DTU Chemistry, and by the Danish
Council for Independent Research (now Independent Research Fund Denmark),
Grant Nos. 7014-00258B, 4002-00272, 014-00258B, and 8021-00347B. A.I.K. was
supported by the U.S. National Science Foundation (No. CHE-1856342).
## Conflicts of interest
The authors declare the following competing financial interest(s): A. I. K. is
president and part-owner of Q-Chem, Inc.
## References
* Scherer _et al._ (1985) N. F. Scherer, J. L. Knee, D. D. Smith, and A. H. Zewail, “Femtosecond photofragment spectroscopy: The reaction ICN $\to$ CN + I,” J. Phys. Chem. 89, 5141–5143 (1985).
* Young _et al._ (2018) L. Young, K. Ueda, M. Gühr, P. H. Bucksbaum, M. Simon, S. Mukamel, N. Rohringer, K. C. Prince, C. Masciovecchio, M. Meyer, A. Rudenko, D. Rolles, C. Bostedt, M. Fuchs, D. A. Reis, R. Santra, H. Kapteyn, M. Murnane, H. Ibrahim, F. Légaré, M. Vrakking, M. Isinger, D. Kroon, M. Gisselbrecht, A. L’Huillier, H. J. Wörner, and S. R. Leone, “Roadmap of ultrafast x-ray atomic and molecular physics,” J. Phys. B 51, 032003 (2018).
* Chergui and Collet (2017) M. Chergui and E. Collet, “Photoinduced structural dynamics of molecular systems mapped by time-resolved x-ray methods,” Chem. Rev. 117, 11025–11065 (2017).
* Ueda (2018) K. Ueda, _X-ray Free Electron Lasers_ (MDPI, Basel, Switzerland, 2018).
* Calegari _et al._ (2016) F. Calegari, G. Sansone, S. Stagira, C. Vozzi, and M. Nisoli, “Advances in attosecond science,” J. Phys. B 49, 062001 (2016).
* Ramasesha, Leone, and Neumark (2016) K. Ramasesha, S. R. Leone, and D. M. Neumark, “Real-time probing of electron dynamics using attosecond time-resolved spectroscopy,” Annu. Rev. Phys. Chem. 67, 41–63 (2016).
* Ischenko, Weber, and Miller (2017) A. A. Ischenko, P. M. Weber, and R. J. D. Miller, “Capturing chemistry in action with electrons: Realization of atomically resolved reaction dynamics,” Chem. Rev. 117, 11066–11124 (2017).
* Villeneuve (2018) D. M. Villeneuve, “Attosecond science,” Contemp. Phys. 59, 47–61 (2018).
* Schuurman and Stolow (2018) M. S. Schuurman and A. Stolow, “Dynamics at conical intersections,” Ann. Rev. Phys. Chem. 69, 427–450 (2018).
* Adachi and Suzuki (2018) S. Adachi and T. Suzuki, “UV-driven harmonic generation for time-resolved photoelectron spectroscopy of polyatomic molecules,” Appl. Sci. 8, 1784 (2018).
* Suzuki (2019) T. Suzuki, “Ultrafast photoelectron spectroscopy of aqueous solutions,” J. Chem. Phys. 151, 090901 (2019).
* Liu _et al._ (2020) Y. Liu, S. L. Horton, J. Yang, J. P. F. Nunes, X. Shen, T. J. A. Wolf, R. Forbes, C. Cheng, B. Moore, M. Centurion, K. Hegazy, R. Li, M.-F. Lin, A. Stolow, P. Hockett, T. Rozgonyi, P. Marquetand, X. Wang, and T. Weinacht, “Spectroscopic and structural probing of excited-state molecular dynamics with time-resolved photoelectron spectroscopy and ultrafast electron diffraction,” Phys. Rev. X 10, 021016 (2020).
* Glownia _et al._ (2016) J. M. Glownia, A. Natan, J. P. Cryan, R. Hartsock, M. Kozina, M. P. Minitti, S. Nelson, J. Robinson, T. Sato, T. van Driel, G. Welch, C. Weninger, D. Zhu, and P. H. Bucksbaum, “Self-referenced coherent diffraction x-ray movie of Ångstrom- and femtosecond-scale atomic motion,” Phys. Rev. Lett. 117, 153003 (2016).
* Stankus _et al._ (2019) B. Stankus, H. Yong, N. Zotev, J. M. Ruddock, D. Bellshaw, T. J. Lane, M. Liang, S. Boutet, S. Carbajo, J. S. Robinson, W. Du, N. Goff, Y. Chang, J. E. Koglin, M. P. Minitti, A. Kirrander, and P. M. Weber, “Ultrafast X-ray scattering reveals vibrational coherence following Rydberg excitation,” Nat. Chem. 11, 716–721 (2019).
* Ruddock _et al._ (2019) J. M. Ruddock, H. Yong, B. Stankus, W. Du, N. Goff, Y. Chang, A. Odate, A. M. Carrascosa, D. Bellshaw, N. Zotev, M. Liang, S. Carbajo, J. Koglin, J. S. Robinson, S. Boutet, A. Kirrander, M. P. Minitti, and P. M. Weber, “A deep UV trigger for ground-state ring-opening dynamics of 1,3-cyclohexadiene,” Sci. Adv. 5, eaax6625 (2019).
* Stankus _et al._ (2020) B. Stankus, H. Yong, J. Ruddock, L. Ma, A. M. Carrascosa, N. Goff, S. Boutet, X. Xu, N. Zotev, A. Kirrander, M. P. Minitti, and P. M. Weber, “Advances in ultrafast gas-phase x-ray scattering,” J. Phys. B 53, 234004 (2020).
* Yang _et al._ (2016) J. Yang, M. Guehr, X. Shen, R. Li, T. Vecchione, R. Coffee, J. Corbett, A. Fry, N. Hartmann, C. Hast, K. Hegazy, K. Jobe, I. Makasyuk, J. Robinson, M. S. Robinson, S. Vetter, S. Weathersby, C. Yoneda, X. Wang, and M. Centurion, “Diffractive imaging of coherent nuclear motion in isolated molecules,” Phys. Rev. Lett. 117, 153002 (2016).
* Stöhr (1992) J. Stöhr, _NEXAFS Spectroscopy_ (Springer-Verlag, Berlin, 1992).
* Pertot _et al._ (2017) Y. Pertot, C. Schmidt, M. Matthews, A. Chauvet, M. Huppert, V. Svoboda, A. von Conta, A. Tehlar, D. Baykusheva, J.-P. Wolf, and H. J. Wörner, “Time-resolved x-ray absorption spectroscopy with a water window high-harmonic source,” Science 355, 264–267 (2017).
* Attar _et al._ (2017) A. R. Attar, A. Bhattacherjee, C. D. Pemmaraju, K. Schnorr, K. D. Closser, D. Prendergast, and S. R. Leone, “Femtosecond x-ray spectroscopy of an electrocyclic ring-opening reaction,” Science 356, 54–59 (2017).
* Wolf _et al._ (2017) T. J. A. Wolf, R. H. Myhre, J. P. Cryan, S. Coriani, R. J. Squibb, A. Battistoni, N. Berrah, C. Bostedt, P. Bucksbaum, G. Coslovich, R. Feifel, K. J. Gaffney, J. Grilj, T. J. Martinez, S. Miyabe, S. P. Moeller, M. Mucke, A. Natan, R. Obaid, T. Osipov, O. Plekan, S. Wang, H. Koch, and M. Gühr, “Probing ultrafast $\pi\pi^{\ast}$/n$\pi^{\ast}$ internal conversion in organic chromophores via K-edge resonant absorption,” Nat. Commun. 8, 29 (2017).
* Bhattacherjee _et al._ (2017) A. Bhattacherjee, C. D. Pemmaraju, K. Schnorr, A. R. Attar, and S. R. Leone, “Ultrafast intersystem crossing in acetylacetone via femtosecond X-ray transient absorption at the carbon K-edge,” J. Am. Chem. Soc. 139, 16576–16583 (2017).
* Chen, Zhang, and Shelby (2014) L. X. Chen, X. Zhang, and M. L. Shelby, “Recent advances on ultrafast X-ray spectroscopy in the chemical sciences,” Chem. Sci. 5, 4136–4152 (2014).
* Chergui (2016) M. Chergui, “Time-resolved X-ray spectroscopies of chemical systems: New perspectives,” Struct. Dyn. 3, 031001 (2016).
* Wernet (2019) P. Wernet, “Chemical interactions and dynamics with femtosecond X-ray spectroscopy and the role of X-ray free-electron lasers,” Phil. Trans. R. Soc. A 377, 20170464 (2019).
* Katayama _et al._ (2019) T. Katayama, T. Northey, W. Gawelda, C. J. Milne, G. Vankó, F. A. Lima, R. Bohinc, Z. Németh, S. Nozawa, T. Sato, D. Khakhulin, J. Szlachetko, T. Togashi, S. Owada, S.-i. Adachi, C. Bressler, M. Yabashi, and T. J. Penfold, “Tracking multiple components of a nuclear wavepacket in photoexcited Cu(I)-phenanthroline complex using ultrafast X-ray spectroscopy,” Nat. Commun. 10, 3606 (2019).
* Norman and Dreuw (2018) P. Norman and A. Dreuw, “Simulating x-ray spectroscopies and calculating core-excited states of molecules,” Chem. Rev. 118, 7208–7248 (2018).
* Bokarev and Kühn (2020) S. I. Bokarev and O. Kühn, “Theoretical x-ray spectroscopy of transition metal compounds,” WIREs Comput. Mol. Sci. 10, e1433 (2020).
* Triguero, Pettersson, and Ågren (1998) L. Triguero, L. G. M. Pettersson, and H. Ågren, “Calculations of near-edge x-ray-absorption spectra of gas-phase and chemisorbed molecules by means of density-functional and transition-potential theory,” Phys. Rev. B 58, 8097–8110 (1998).
* Leetmaa _et al._ (2010) M. Leetmaa, M. Ljungberg, A. Lyubartsev, A. Nilsson, and L. G. M. Pettersson, “Theoretical approximations to X-ray absorption spectroscopy of liquid water and ice,” J. Electron Spectrosc. Relat. Phenom. 177, 135–157 (2010).
* Vall-llosera _et al._ (2008) G. Vall-llosera, B. Gao, A. Kivimäki, M. Coreno, J. Álvarez Ruiz, M. de Simone, H. Ågren, and E. Rachlew, “The C 1$s$ and N 1$s$ near edge x-ray absorption fine structure spectra of five azabenzenes in the gas phase,” J. Chem. Phys. 128, 044316 (2008).
* Perera and Urquhart (2017) S. D. Perera and S. G. Urquhart, “Systematic investigation of $\pi$–$\pi$ interactions in near-edge x-ray fine structure (NEXAFS) spectroscopy of paracyclophanes,” J. Phys. Chem. A 121, 4907–4913 (2017).
* Ehlert, Gühr, and Saalfrank (2018) C. Ehlert, M. Gühr, and P. Saalfrank, “An efficient first principles method for molecular pump-probe NEXAFS spectra: Application to thymine and azobenzene,” J. Chem. Phys. 149, 144112 (2018).
* Ehlert and Klamroth (2020) C. Ehlert and T. Klamroth, “PSIXAS: A Psi4 plugin for efficient simulations of X-ray absorption spectra based on the transition-potential and $\Delta$-Kohn–Sham method,” J. Comput. Chem. 41, 1781–1789 (2020).
* Michelitsch and Reuter (2019) G. S. Michelitsch and K. Reuter, “Efficient simulation of near-edge x-ray absorption fine structure (NEXAFS) in density-functional theory: Comparison of core-level constraining approaches,” J. Chem. Phys. 150, 074104 (2019).
* Dreuw and Head-Gordon (2005) A. Dreuw and M. Head-Gordon, “Single-reference ab initio methods for the calculation of excited states of large molecules,” Chem. Rev. 105, 4009–4037 (2005).
* Luzanov and Zhikol (2012) A. V. Luzanov and O. A. Zhikol, “Excited state structural analysis: TDDFT and related models,” in _Practical Aspects of Computational Chemistry_, Vol. I, edited by J. Leszczynski and M. K. Shukla (Springer, Heidelberg, Germany, 2012) pp. 415 – 449.
* Laurent and Jacquemin (2013) A. D. Laurent and D. Jacquemin, “TD-DFT benchmarks: A review,” Int. J. Quantum Chem. 113, 2019–2039 (2013).
* Ferré, Filatov, and Huix-Rotllant (2016) N. Ferré, M. Filatov, and M. Huix-Rotllant, eds., _Density-Functional Methods for Excited States_ (Springer, Cham, Switzerland, 2016).
* Stener, Fronzoni, and de Simone (2003) M. Stener, G. Fronzoni, and M. de Simone, “Time dependent density functional theory of core electrons excitations,” Chem. Phys. Lett. 373, 115–123 (2003).
* Besley and Asmuruf (2010) N. A. Besley and F. A. Asmuruf, “Time-dependent density functional theory calculations of the spectroscopy of core electrons,” Phys. Chem. Chem. Phys. 12, 12024–12039 (2010).
* Cederbaum, Domcke, and Schirmer (1980) L. S. Cederbaum, W. Domcke, and J. Schirmer, “Many-body theory of core holes,” Phys. Rev. A 22, 206–222 (1980).
* Besley (2004) N. A. Besley, “Calculation of the electronic spectra of molecules in solution and on surfaces,” Chem. Phys. Lett. 390, 124 – 129 (2004).
* Becke (1993) A. D. Becke, “Density-functional thermochemistry. III. The role of exact exchange,” J. Chem. Phys. 98, 5648–5652 (1993).
* Hait and Head-Gordon (2020a) D. Hait and M. Head-Gordon, “Highly accurate prediction of core spectra of molecules at density functional theory cost: Attaining sub-electronvolt error from a restricted open-shell Kohn–Sham approach,” J. Phys. Chem. Lett. 11, 775–786 (2020a).
* Gilbert, Besley, and Gill (2008) A. T. B. Gilbert, N. A. Besley, and P. M. W. Gill, “Self-consistent field calculations of excited states using the maximum overlap method (MOM),” J. Phys. Chem. A 112, 13164–13171 (2008).
* Northey _et al._ (2020) T. Northey, J. Norell, A. E. A. Fouda, N. A. Besley, M. Odelius, and T. J. Penfold, “Ultrafast nonadiabatic dynamics probed by nitrogen K-edge absorption spectroscopy,” Phys. Chem. Chem. Phys. 22, 2667–2676 (2020).
* Hait and Head-Gordon (2020b) D. Hait and M. Head-Gordon, “Excited state orbital optimization via minimizing the square of the gradient: General approach and application to singly and doubly excited states via density functional theory,” J. Chem. Theory Comput. 16, 1699–1710 (2020b).
* Hait _et al._ (2020) D. Hait, E. A. Haugen, Z. Yang, K. J. Oosterbaan, S. R. Leone, and M. Head-Gordon, “Accurate prediction of core-level spectra of radicals at density functional theory cost via square gradient minimization and recoupling of mixed configurations,” J. Chem. Phys. 153, 134108 (2020).
* Bartlett and Musiał (2007) R. J. Bartlett and M. Musiał, “Coupled-cluster theory in quantum chemistry,” Rev. Mod. Phys. 79, 291–352 (2007).
* Koch and Jørgensen (1990) H. Koch and P. Jørgensen, “Coupled cluster response functions,” J. Chem. Phys. 93, 3333–3344 (1990).
* Christiansen, Jørgensen, and Hättig (1998) O. Christiansen, P. Jørgensen, and C. Hättig, “Response functions from Fourier component variational perturbation theory applied to a time-averaged quasienergy,” Int. J. Quantum Chem. 68, 1–52 (1998).
* Sneskov and Christiansen (2012) K. Sneskov and O. Christiansen, “Excited state coupled cluster methods,” WIREs Comput. Mol. Sci. 2, 566–584 (2012).
* Stanton and Bartlett (1993) J. F. Stanton and R. J. Bartlett, “The equation of motion coupled-cluster method. A systematic biorthogonal approach to molecular excitation energies, transition probabilities, and excited state properties,” J. Chem. Phys. 98, 7029–7039 (1993).
* Krylov (2008) A. I. Krylov, “Equation-of-Motion Coupled-Cluster Methods for Open-Shell and Electronically Excited Species: The Hitchhiker’s Guide to Fock Space,” Ann. Rev. Phys. Chem. 59, 433–462 (2008).
* Bartlett (2012) R. J. Bartlett, “Coupled-cluster theory and its equation-of-motion extensions,” WIREs Comput. Mol. Sci. 2, 126–138 (2012).
* Coriani _et al._ (2016) S. Coriani, F. Pawłowski, J. Olsen, and P. Jørgensen, “Molecular response properties in equation of motion coupled cluster theory: A time-dependent perspective,” J. Chem. Phys. 144, 024102 (2016).
* Coriani and Koch (2015) S. Coriani and H. Koch, “Communication: X-ray absorption spectra and core-ionization potentials within a core-valence separated coupled cluster framework,” J. Chem. Phys. 143, 181103 (2015).
* Vidal _et al._ (2019) M. L. Vidal, X. Feng, E. Epifanovsky, A. I. Krylov, and S. Coriani, “New and efficient equation-of-motion coupled-cluster framework for core-excited and core-ionized states,” J. Chem. Theory Comput. 15, 3117–3133 (2019).
* Tsuru _et al._ (2019) S. Tsuru, M. L. Vidal, M. Pápai, A. I. Krylov, K. B. Møller, and S. Coriani, “Time-resolved near-edge X-ray absorption fine structure of pyrazine from electronic structure and nuclear wave packet dynamics simulations,” J. Chem. Phys. 151, 124114 (2019).
* Faber _et al._ (2019) R. Faber, E. F. Kjønstad, H. Koch, and S. Coriani, “Spin adapted implementation of EOM-CCSD for triplet excited states: Probing intersystem crossings of acetylacetone at the carbon and oxygen K-edges,” J. Chem. Phys. 151, 144107 (2019).
* Nanda _et al._ (2020) K. D. Nanda, M. L. Vidal, R. Faber, S. Coriani, and A. I. Krylov, “How to stay out of trouble in RIXS calculations within the equation-of-motion coupled-cluster damped response theory? Safe hitchhiking in the excitation manifold by means of core-valence separation,” Phys. Chem. Chem. Phys. 22, 2629–2641 (2020).
* Faber and Coriani (2020) R. Faber and S. Coriani, “Core–valence-separated coupled-cluster-singles-and-doubles complex-polarization-propagator approach to X-ray spectroscopies,” Phys. Chem. Chem. Phys. 22, 2642–2647 (2020).
* Vidal, Krylov, and Coriani (2020a) M. L. Vidal, A. I. Krylov, and S. Coriani, “Dyson orbitals within the fc-CVS-EOM-CCSD framework: theory and application to X-ray photoelectron spectroscopy of ground and excited states,” Phys. Chem. Chem. Phys. 22, 2693–2703 (2020a).
* Vidal _et al._ (2020a) M. L. Vidal, P. Pokhilko, A. I. Krylov, and S. Coriani, “Equation-of-Motion Coupled-Cluster Theory to Model L-Edge X-ray Absorption and Photoelectron Spectra,” J. Phys. Chem. Lett. 11, 8314–8321 (2020a).
* Coriani _et al._ (2012) S. Coriani, O. Christiansen, T. Fransson, and P. Norman, “Coupled-Cluster Response Theory for Near-Edge X-Ray-Absorption Fine Structure of Atoms and Molecules,” Phys. Rev. A 85, 022507 (2012).
* Frati _et al._ (2019) F. Frati, F. de Groot, J. Cerezo, F. Santoro, L. Cheng, R. Faber, and S. Coriani, “Coupled cluster study of the x-ray absorption spectra of formaldehyde derivatives at the oxygen, carbon, and fluorine K-edges,” J. Chem. Phys. 151, 064107 (2019).
* Carbone _et al._ (2019) J. P. Carbone, L. Cheng, R. H. Myhre, D. Matthews, H. Koch, and S. Coriani, “An analysis of the performance of coupled cluster methods for K-edge core excitations and ionizations using standard basis sets,” Adv. Quant. Chem. 79, 241–261 (2019).
* Peng _et al._ (2015) B. Peng, P. J. Lestrange, J. J. Goings, M. Caricato, and X. Li, “Energy-Specific Equation-of-Motion Coupled-Cluster Methods for High-Energy Excited States: Application to $K$-Edge X-Ray Absorption Spectroscopy,” J. Chem. Theory Comput. 11, 4146–4153 (2015).
* Fransson _et al._ (2013) T. Fransson, S. Coriani, O. Christiansen, and P. Norman, “Carbon x-ray absorption spectra of fluoroethenes and acetone: A study at the coupled cluster, density functional, and static-exchange levels of theory,” J. Chem. Phys. 138, 124311 (2013).
* Sarangi _et al._ (2020) R. Sarangi, M. L. Vidal, S. Coriani, and A. I. Krylov, “On the basis set selection for calculations of core-level states: different strategies to balance cost and accuracy,” Mol. Phys. 118, e1769872 (2020).
* Tenorio _et al._ (2019) B. N. C. Tenorio, T. Moitra, M. A. C. Nascimento, A. B. Rocha, and S. Coriani, “Molecular inner-shell photoabsorption/photoionization cross sections at core-valence-separated coupled cluster level: Theory and examples,” J. Chem. Phys. 150, 224104 (2019).
* Moitra _et al._ (2020) T. Moitra, D. Madsen, O. Christiansen, and S. Coriani, “Vibrationally resolved coupled-cluster x-ray absorption spectra from vibrational configuration interaction anharmonic calculations,” J. Chem. Phys. 153, 234111 (2020).
* Vidal _et al._ (2020b) M. L. Vidal, M. Epshtein, V. Scutelnic, Z. Yang, T. Xue, S. R. Leone, A. I. Krylov, and S. Coriani, “Interplay of Open-Shell Spin-Coupling and Jahn-Teller Distortion in Benzene Radical Cation Probed by X-ray Spectroscopy,” J. Phys. Chem. A 124, 9532–9541 (2020b).
* Costantini _et al._ (2019) R. Costantini, R. Faber, A. Cossaro, L. Floreano, A. Verdini, C. Hättig, A. Morgante, S. Coriani, and M. Dell’Angela, “Picosecond timescale tracking of pentacene triplet excitons with chemical sensitivity,” Commun. Phys. 2, 56 (2019).
* Myhre _et al._ (2018) R. H. Myhre, T. J. A. Wolf, L. Cheng, S. Nandi, S. Coriani, M. Gühr, and H. Koch, “A theoretical and experimental benchmark study of core-excited states in nitrogen,” J. Chem. Phys. 148, 064106 (2018).
* Folkestad _et al._ (2020) S. D. Folkestad, E. F. Kjønstad, R. H. Myhre, J. H. Andersen, A. Balbi, S. Coriani, T. Giovannini, L. Goletto, T. S. Haugland, A. Hutcheson, I.-M. Høyvik, T. Moitra, A. C. Paul, M. Scavino, A. S. Skeidsvoll, Å. H. Tveten, and H. Koch, “$e^{T}$ 1.0: An open source electronic structure program with emphasis on coupled cluster and multilevel methods,” J. Chem. Phys. 152, 184103 (2020).
* Paul, Myhre, and Koch (2021) A. C. Paul, R. H. Myhre, and H. Koch, “New and Efficient Implementation of CC3,” J. Chem. Theory Comput. 17, 117–126 (2021).
* Matthews (2020) D. A. Matthews, “EOM-CC methods with approximate triple excitations applied to core excitation and ionisation energies,” Mol. Phys. 118, e1771448 (2020).
* Peng, Copan, and Sokolov (2019) R. Peng, A. V. Copan, and A. Y. Sokolov, “Simulating x-ray absorption spectra with linear-response density cumulant theory,” J. Phys. Chem. A 123, 1840–1850 (2019).
* Schirmer (1982) J. Schirmer, “Beyond the random-phase approximation: A new approximation scheme for the polarization propagator,” Phys. Rev. A 26, 2395–2416 (1982).
* Dreuw and Wormit (2015) A. Dreuw and M. Wormit, “The algebraic diagrammatic construction scheme for the polarization propagator for the calculation of excited states,” WIREs Comput. Mol. Sci. 5, 82–95 (2015).
* Barth and Schirmer (1985) A. Barth and J. Schirmer, “Theoretical core-level excitation spectra of N2 and CO by a new polarisation propagator method,” J. Phys. B 18, 867–885 (1985).
* Christiansen, Koch, and Jørgensen (1995) O. Christiansen, H. Koch, and P. Jørgensen, “The second-order approximate coupled cluster singles and doubles model CC2,” Chem. Phys. Lett. 243, 409 – 418 (1995).
* Schirmer _et al._ (1993) J. Schirmer, A. B. Trofimov, K. J. Randall, J. Feldhaus, A. M. Bradshaw, Y. Ma, C. T. Chen, and F. Sette, “$K$-shell excitation of the water, ammonia, and methane molecules using high-resolution photoabsorption spectroscopy,” Phys. Rev. A 47, 1136 (1993).
* Wenzel, Wormit, and Dreuw (2014) J. Wenzel, M. Wormit, and A. Dreuw, “Calculating core-level excitations and x-ray absorption spectra of medium-sized closed-shell molecules with the algebraic-diagrammatic construction scheme for the polarization propagator,” J. Comput. Chem. 35, 1900–1915 (2014).
* Plasser _et al._ (2014a) F. Plasser, R. Crespo-Otero, M. Pederzoli, J. Pittner, H. Lischka, and M. Barbatti, “Surface hopping dynamics with correlated single-reference methods: 9H-adenine as a case study,” J. Chem. Theory Comput. 10, 1395–1405 (2014a).
* Neville _et al._ (2016a) S. P. Neville, V. Averbukh, S. Patchkovskii, M. Ruberti, R. Yun, M. Chergui, A. Stolow, and M. S. Schuurman, “Beyond structure: ultrafast X-ray absorption spectroscopy as a probe of non-adiabatic wavepacket dynamics,” Faraday Discuss. 194, 117–145 (2016a).
* Neville _et al._ (2016b) S. P. Neville, V. Averbukh, M. Ruberti, R. Yun, S. Patchkovskii, M. Chergui, A. Stolow, and M. S. Schuurman, “Excited state X-ray absorption spectroscopy: Probing both electronic and structural dynamics,” J. Chem. Phys. 145, 144307 (2016b).
* Neville _et al._ (2018) S. P. Neville, M. Chergui, A. Stolow, and M. S. Schuurman, “Ultrafast x-ray spectroscopy of conical intersections,” Phys. Rev. Lett. 120, 243001 (2018).
* Neville and Schuurman (2018) S. P. Neville and M. S. Schuurman, “A general approach for the calculation and characterization of x-ray absorption spectra,” J. Chem. Phys. 149, 154111 (2018).
* Trofimov and Schirmer (1995) A. B. Trofimov and J. Schirmer, “An efficient polarization propagator approach to valence electron excitation spectra,” J. Phys. B 28, 2299–2324 (1995).
* Plekan _et al._ (2008) O. Plekan, V. Feyer, R. Richter, M. Coreno, M. de Simone, K. C. Prince, A. B. Trofimov, E. V. Gromov, I. L. Zaytseva, and J. Schirmer, “A theoretical and experimental study of the near edge X-ray absorption fine structure (NEXAFS) and X-ray photoelectron spectra (XPS) of nucleobases: Thymine and adenine,” Chem. Phys. 347, 360–375 (2008).
* List _et al._ (2020) N. H. List, A. L. Dempwolff, A. Dreuw, P. Norman, and T. J. Martínez, “Probing competing relaxation pathways in malonaldehyde with transient X-ray absorption spectroscopy,” Chem. Sci. 11, 4180–4193 (2020).
* Seidu _et al._ (2019) I. Seidu, S. P. Neville, M. Kleinschmidt, A. Heil, C. M. Marian, and M. S. Schuurman, “The simulation of X-ray absorption spectra from ground and excited electronic states using core-valence separated DFT/MRCI,” J. Chem. Phys. 151, 144104 (2019).
* Lyskov, Kleinschmidt, and Marian (2016) I. Lyskov, M. Kleinschmidt, and C. M. Marian, “Redesign of the DFT/MRCI Hamiltonian,” J. Chem. Phys. 144, 034104 (2016).
* Olsen _et al._ (1988) J. Olsen, B. O. Roos, P. Jørgensen, and H. J. A. Jensen, “Determinant based configuration interaction algorithms for complete and restricted configuration interaction spaces,” J. Chem. Phys. 89, 2185–2192 (1988).
* Malmqvist, Rendell, and Roos (1990) P. Å. Malmqvist, A. Rendell, and B. O. Roos, “The restricted active space self-consistent-field method, implemented with a split graph unitary group approach,” J. Phys. Chem. 94, 5477–5482 (1990).
* Malmqvist _et al._ (2008) P. Å. Malmqvist, K. Pierloot, A. R. M. Shahi, C. J. Cramer, and L. Gagliardi, “The restricted active space followed by second-order perturbation theory method: Theory and application to the study of CuO2 and Cu2O2 systems,” J. Chem. Phys. 128, 204109 (2008).
* Delcey _et al._ (2019) M. G. Delcey, L. K. Sørensen, M. Vacher, R. C. Couto, and M. Lundberg, “Efficient calculations of a large number of highly excited states for multiconfigurational wavefunctions,” J. Comput. Chem. 40, 1789–1799 (2019).
* Hua, Mukamel, and Luo (2019) W. Hua, S. Mukamel, and Y. Luo, “Transient x-ray absorption spectral fingerprints of the S1 dark state in uracil,” J. Phys. Chem. Lett. 10, 7172–7178 (2019).
* Segatta _et al._ (2020) F. Segatta, A. Nenov, S. Orlandi, A. Arcioni, S. Mukamel, and M. Garavelli, “Exploring the capabilities of optical pump X-ray probe NEXAFS spectroscopy to track photo-induced dynamics mediated by conical intersections,” Faraday Discuss. 221, 245–264 (2020).
* Schweigert and Mukamel (2007) I. V. Schweigert and S. Mukamel, “Coherent ultrafast core-hole correlation spectroscopy: X-ray analogues of multidimensional NMR,” Phys. Rev. Lett. 99, 163001 (2007).
* Lee, Small, and Head-Gordon (2019) J. Lee, D. W. Small, and M. Head-Gordon, “Excited states via coupled cluster theory without equation-of-motion methods: Seeking higher roots with application to doubly excited states and double core hole states,” J. Chem. Phys. 151, 214103 (2019).
* Feyer _et al._ (2010) V. Feyer, O. Plekan, R. Richter, M. Coreno, M. de Simone, K. C. Prince, A. B. Trofimov, I. L. Zaytseva, and J. Schirmer, “Tautomerism in Cytosine and Uracil: A Theoretical and Experimental X-ray Absorption and Resonant Auger Study,” J. Phys. Chem. A 114, 10270–10276 (2010).
* Trajmar (1980) S. Trajmar, “Electron impact spectroscopy,” Acc. Chem. Res 13, 14–20 (1980).
* Christiansen _et al._ (1996) O. Christiansen, H. Koch, A. Halkier, P. Jørgensen, T. Helgaker, and A. Sánchez de Merás, “Large-scale calculations of excitation energies in coupled cluster theory: The singlet excited states of benzene,” J. Chem. Phys. 105, 6921–6939 (1996).
* Hald, Hättig, and Jørgensen (2000) K. Hald, C. Hättig, and P. Jørgensen, “Triplet excitation energies in the coupled cluster singles and doubles model using an explicit triplet spin coupled excitation space,” J. Chem. Phys. 113, 7765–7772 (2000).
* Vidal, Krylov, and Coriani (2020b) M. L. Vidal, A. I. Krylov, and S. Coriani, “Correction: Dyson orbitals within the fc-CVS-EOM-CCSD framework: theory and application to X-ray photoelectron spectroscopy of ground and excited states,” Phys. Chem. Chem. Phys. 22, 3744–3747 (2020b).
* Krylov (2017) A. I. Krylov, “The quantum chemistry of open-shell species,” in _Reviews in Computational Chemistry_ (John Wiley & Sons, Ltd, 2017) Chap. 4, pp. 151–224.
* Casanova and Krylov (2020) D. Casanova and A. I. Krylov, “Spin-flip methods in quantum chemistry,” Phys. Chem. Chem. Phys. 22, 4326–4342 (2020).
* Luzanov, Sukhorukov, and Umanskii (1976) A. V. Luzanov, A. A. Sukhorukov, and V. É. Umanskii, “Application of transition density matrix for analysis of excited states,” Theor. Exp. Chem. 10, 354–361 (1976).
* Martin (2003) R. L. Martin, “Natural transition orbitals,” J. Chem. Phys. 118, 4775–4777 (2003).
* Plasser, Wormit, and Dreuw (2014) F. Plasser, M. Wormit, and A. Dreuw, “New tools for the systematic analysis and visualization of electronic excitations. I. Formalism,” J. Chem. Phys. 141, 024106 (2014).
* Plasser _et al._ (2014b) F. Plasser, S. A. Bäppler, M. Wormit, and A. Dreuw, “New tools for the systematic analysis and visualization of electronic excitations. II. Applications,” J. Chem. Phys. 141, 024107 (2014b).
* Bäppler _et al._ (2014) S. A. Bäppler, F. Plasser, M. Wormit, and A. Dreuw, “Exciton analysis of many-body wave functions: Bridging the gap between the quasiparticle and molecular orbital pictures,” Phys. Rev. A 90, 052521 (2014).
* Mewes _et al._ (2018) S. Mewes, F. Plasser, A. I. Krylov, and A. Dreuw, “Benchmarking excited-state calculations using exciton properties,” J. Chem. Theory Comput. 14, 710–725 (2018).
* Kimber and Plasser (2020) P. Kimber and F. Plasser, “Toward an understanding of electronic excitation energies beyond the molecular orbital picture,” Phys. Chem. Chem. Phys. 22, 6058–6080 (2020).
* Krylov (2020) A. I. Krylov, “From orbitals to observables and back,” J. Chem. Phys. 153, 080901 (2020).
* Shao _et al._ (2015) Y. Shao, Z. Gan, E. Epifanovsky, A. T. B. Gilbert, M. Wormit, J. Kussmann, A. W. Lange, A. Behn, J. Deng, X. Feng, D. Ghosh, M. Goldey, P. R. Horn, L. D. Jacobson, I. Kaliman, R. Z. Khaliullin, T. Kuś, A. Landau, J. Liu, E. I. Proynov, Y. M. Rhee, R. M. Richard, M. A. Rohrdanz, R. P. Steele, E. J. Sundstrom, H. L. Woodcock III, P. M. Zimmerman, D. Zuev, B. Albrecht, E. Alguire, B. Austin, G. J. O. Beran, Y. A. Bernard, E. Berquist, K. Brandhorst, K. B. Bravaya, S. T. Brown, D. Casanova, C.-M. Chang, Y. Chen, S. H. Chien, K. D. Closser, D. L. Crittenden, M. Diedenhofen, R. A. DiStasio Jr., H. Do, A. D. Dutoi, R. G. Edgar, S. Fatehi, L. Fusti-Molnar, A. Ghysels, A. Golubeva-Zadorozhnaya, J. Gomes, M. W. D. Hanson-Heine, P. H. P. Harbach, A. W. Hauser, E. G. Hohenstein, Z. C. Holden, T.-C. Jagau, H. Ji, B. Kaduk, K. Khistyaev, J. Kim, J. Kim, R. A. King, P. Klunzinger, D. Kosenkov, T. Kowalczyk, C. M. Krauter, K. U. Lao, A. D. Laurent, K. V. Lawler, S. V. Levchenko, C. Y. Lin, F. Liu, E. Livshits, R. C. Lochan, A. Luenser, P. Manohar, S. F. Manzer, S.-P. Mao, N. Mardirossian, A. V. Marenich, S. A. Maurer, N. J. Mayhall, E. Neuscamman, C. M. Oana, R. Olivares-Amaya, D. P. O’Neill, J. A. Parkhill, T. M. Perrine, R. Peverati, A. Prociuk, D. R. Rehn, E. Rosta, N. J. Russ, S. M. Sharada, S. Sharma, D. W. Small, A. Sodt, T. Stein, D. Stück, Y.-C. Su, A. J. W. Thom, T. Tsuchimochi, V. Vanovschi, L. Vogt, O. Vydrov, T. Wang, M. A. Watson, J. Wenzel, A. White, C. F. Williams, J. Yang, S. Yeganeh, S. R. Yost, Z.-Q. You, I. Y. Zhang, X. Zhang, Y. Zhao, B. R. Brooks, G. K. L. Chan, D. M. Chipman, C. J. Cramer, W. A. Goddard III, M. S. Gordon, W. J. Hehre, A. Klamt, H. F. Schaefer III, M. W. Schmidt, C. D. Sherrill, D. G. Truhlar, A. Warshel, X. Xu, A. Aspuru-Guzik, R. Baer, A. T. Bell, N. A. Besley, J.-D. Chai, A. Dreuw, B. D. Dunietz, T. R. Furlani, S. R. Gwaltney, C.-P. Hsu, Y. Jung, J. Kong, D. S. Lambrecht, W. Liang, C. Ochsenfeld, V. A. Rassolov, L. V. Slipchenko, J. E. Subotnik, T. Van Voorhis, J. M. Herbert, A. I. Krylov, P. M. W. Gill, and M. Head-Gordon, “Advances in molecular quantum chemistry contained in the q-chem 4 program package,” Mol. Phys. 113, 184–215 (2015).
* Barca, Gilbert, and Gill (2018) G. M. J. Barca, A. T. B. Gilbert, and P. M. W. Gill, “Simple models for difficult electronic excitations,” J. Chem. Theory Comput. 14, 1501–1509 (2018).
* Rehr _et al._ (1978) J. J. Rehr, E. A. Stern, R. L. Martin, and E. R. Davidson, “Extended x-ray-absorption fine-structure amplitudes—Wave-function relaxation and chemical effects,” Phys. Rev. B 17, 560–565 (1978).
* Chernyshova _et al._ (2012) I. V. Chernyshova, J. E. Kontros, P. P. Markush, and O. B. Shpenik, “Excitation of lowest electronic states of the uracil molecule by slow electrons,” Opt. Spectrosc. 113, 5–8 (2012).
* Clark, Peschel, and Tinoco (1965) L. B. Clark, G. G. Peschel, and I. Tinoco, “Vapor spectra and heats of vaporization of some purine and pyrimidine bases,” J. Phys. Chem. 69, 3615–3618 (1965).
* Fedotov _et al._ (2020) D. A. Fedotov, A. C. Paul, P. Posocco, F. Santoro, M. Garavelli, H. Koch, S. Coriani, and R. Improta, “Excited State Absorption of Uracil in the Gas Phase: Mapping the Main Decay Paths by Different Electronic Structure Methods,” (2020), 10.26434/chemrxiv.13176554.v1, accepted in J. Chem. Theory Comput.
* Chernyshova _et al._ (2013) I. V. Chernyshova, J. E. Kontros, P. P. Markush, and O. B. Shpenik, “Excitations of lowest electronic states of thymine by slow electrons,” Opt. Spectrosc. 115, 645–650 (2013).
* Walzl, Xavier, and Kuppermann (1987) K. N. Walzl, I. M. Xavier, and A. Kuppermann, “Electron-impact spectroscopy of various diketone compounds,” J. Chem. Phys. 86, 6701–6706 (1987).
* Nakanishi, Morita, and Nagakura (1977) H. Nakanishi, H. Morita, and S. Nagakura, “Electronic structures and spectra of the keto and enol forms of acetylacetone,” Bull. Chem. Soc. Jpn. 50, 2255–2261 (1977).
CO excitation, molecular gas density and interstellar radiation field in local and high-redshift galaxies
Daizhong Liu
Max Planck Institute for Astronomy, Königstuhl 17, D-69117 Heidelberg, Germany
Emanuele Daddi
CEA, Irfu, DAp, AIM, Universitè Paris-Saclay, Universitè de Paris, CNRS, F-91191 Gif-sur-Yvette, France
Eva Schinnerer
Max Planck Institute for Astronomy, Königstuhl 17, D-69117 Heidelberg, Germany
Toshiki Saito
Max Planck Institute for Astronomy, Königstuhl 17, D-69117 Heidelberg, Germany
Adam Leroy
Department of Astronomy, The Ohio State University, 140 West 18th Ave, Columbus, OH 43210, USA
John Silverman
Kavli Institute for the Physics and Mathematics of the Universe, The University of Tokyo (Kavli IPMU, WPI), Kashiwa 277-8583, Japan
Department of Astronomy, School of Science, The University of Tokyo, 7-3-1 Hongo, Bunkyo, Tokyo 113-0033, Japan
Francesco Valentino
Cosmic Dawn Center (DAWN), Copenhagen, Denmark
Niels Bohr Institute, University of Copenhagen, Lyngbyvej 2, DK-2100 Copenhagen Ø, Denmark
Georgios Magdis
Cosmic Dawn Center (DAWN), Copenhagen, Denmark
DTU-Space, Technical University of Denmark, Elektrovej 327, DK-2800 Kgs. Lyngby, Denmark
Niels Bohr Institute, University of Copenhagen, Lyngbyvej 2, DK-2100 Copenhagen Ø, Denmark
Institute for Astronomy, Astrophysics, Space Applications and Remote Sensing, National Observatory of Athens, GR-15236 Athens, Greece
Yu Gao
Department of Astronomy, Xiamen University, Xiamen, Fujian 361005, People's Republic of China
Purple Mountain Observatory & Key Lab of Radio Astronomy, Chinese Academy of Sciences (CAS), Nanjing 210033, People's Republic of China
Shuowen Jin
Instituto de Astrofísica de Canarias (IAC), E-38205 La Laguna, Tenerife, Spain
Universidad de La Laguna, Dpto. Astrofísica, E-38206 La Laguna, Tenerife, Spain
Annagrazia Puglisi
Center for Extragalactic Astronomy, Durham University, South Road, Durham DH13LE, United Kingdom
Brent Groves
Research School of Astronomy and Astrophysics, Australian National University, Canberra ACT, 2611, Australia
International Centre for Radio Astronomy Research, University of Western Australia, Crawley, Perth, Western Australia, 6009, Australia
We study the carbon monoxide (CO) excitation, mean molecular gas density and interstellar radiation field (ISRF) intensity in a comprehensive sample of 76 galaxies from local to high redshift ($z\sim0-6$), selected based on detections of their CO transitions $J=2\to1$ and $5\to4$ and their optical/infrared/(sub-)millimeter spectral energy distributions (SEDs).
We confirm the existence of a tight correlation between CO excitation as traced by the CO(5-4)$/$(2-1) line ratio $\R52$, and the mean ISRF intensity $\Umean$ as derived from infrared SED fitting using dust SED templates.
By modeling the molecular gas density probability distribution function (PDF) in galaxies and predicting CO line ratios with large velocity gradient radiative transfer calculations, we present a framework linking global CO line ratios to the mean molecular hydrogen gas density $\nmean$ and kinetic temperature $\Tkin$.
Mapping in this way observed $\R52$ ratios to $\nmean$ and $\Tkin$ probability distributions, we obtain positive $\Umean$–$\nmean$ and $\Umean$–$\Tkin$ correlations, which imply a scenario in which the ISRF in galaxies is mainly regulated by $\Tkin$ and (non-linearly) by $\nmean$. A small fraction of starburst galaxies showing enhanced $\nmean$ could be due to merger-driven compaction.
Our work demonstrates that ISRF and CO excitation are tightly coupled, and that density-PDF modeling is a promising tool for probing detailed ISM properties inside galaxies.
§ INTRODUCTION
Star formation in galaxies is regulated by their reservoir of molecular gas.
Globally, the star formation rate (SFR) correlates with the total amount of molecular gas mass via the Kennicutt-Schmidt law <cit.>.
Meanwhile, physical properties like density and temperature of the molecular gas also play an important role.
For example, observations of different carbon monoxide (CO) rotational transition ($J$) lines reveal a relatively denser ($\nH2 \sim 10^{3-4} \; \percmcubic$), highly-excited phase of molecular gas in addition to a more diffuse ($\nH2 \sim 10^{2-3} \; \percmcubic$), less-excited phase (e.g., <cit.>).
Observations of rotational lines of high-dipole-moment molecules like hydrogen cyanide (HCN), meanwhile, reveal the densest phase of the gas ($\nH2 \gtrsim 10^{3-4} \; \percmcubic$; e.g., <cit.>).
In turbulent star formation theory, variations of molecular gas properties are naturally created by turbulence, which is ubiquitous in galaxies (e.g., <cit.>).
Turbulence generates certain gas density probability distribution functions (PDFs). At each gas density, CO molecules have different excitation conditions. By solving the radiative transfer equations under the large velocity gradient (LVG) assumption (e.g., <cit.>), CO line fluxes can be calculated for any given state of gas volume density, column density, CO abundance and LVG velocity gradient.
The integrated CO line fluxes from all gas states give the total CO spectral line energy distribution (SLED) as observed.
Therefore, the CO SLED could be a powerful tracer of turbulence and of molecular gas properties.
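To make this idea concrete, the minimal Python sketch below integrates per-density CO line emissivities over a lognormal density PDF to obtain a global line ratio. The emissivity function is a hypothetical placeholder standing in for a full LVG statistical-equilibrium calculation, and the PDF width follows the standard turbulence relation $\sigma_{s}^{2}=\ln(1+b^{2}\mathcal{M}^{2})$; none of the numbers are results of this paper.

```python
import numpy as np

def lognormal_pdf(s, sigma_s):
    """Volume-weighted PDF of s = ln(n/n0), with the mean shifted so <n> = n0."""
    s0 = -0.5 * sigma_s ** 2
    return np.exp(-(s - s0) ** 2 / (2 * sigma_s ** 2)) / np.sqrt(2 * np.pi * sigma_s ** 2)

def emissivity(n, J_up, T_kin):
    """Hypothetical placeholder for the LVG emissivity per H2 molecule.
    A real calculation would solve the CO statistical equilibrium at each density."""
    n_crit = 2e3 * J_up ** 3  # rough scaling of the critical density with J
    return J_up ** 2 * n / (n + n_crit) * np.exp(-2.77 * J_up * (J_up + 1) / T_kin)

def global_line_ratio(n_mean, T_kin, mach=10.0, b=0.5):
    sigma_s = np.sqrt(np.log(1 + b ** 2 * mach ** 2))  # turbulence relation
    s = np.linspace(-10.0, 10.0, 2001)
    n = n_mean * np.exp(s)
    weight = lognormal_pdf(s, sigma_s) * n  # mass-weight each density bin
    f52 = np.trapz(weight * emissivity(n, 5, T_kin), s)
    f21 = np.trapz(weight * emissivity(n, 2, T_kin), s)
    return f52 / f21

print(global_line_ratio(n_mean=300.0, T_kin=25.0))
```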
Meanwhile, dust grains are also important ingredients of the interstellar medium (ISM), mixed with gas. They are exposed to and heated by the interstellar radiation field (ISRF), and their thermal emission dominates the (far-)infrared/(sub-)millimeter part of galaxies' spectral energy distributions (SEDs). Like molecular gas, dust grains do not physically have a single state. Although observational studies sometimes approximate galaxies' dust SEDs by one or two components in modified-blackbody fitting, physical models based on assuming PDFs for the ISRF have been proposed and calculated by <cit.>, <cit.>, <cit.> and <cit.>. See also subsequent applications in <cit.>, <cit.>, <cit.>, <cit.>, <cit.> and <cit.>.
Through the study of both CO excitation and dust SED traced mean ISRF intensity ($\Umean$) in about 20 galaxies, <cit.> found that the CO(5-4)/(2-1) line ratio, $\R52$, is tightly correlated with $\Umean$. This indicates that CO excitation, or its related ISM properties, is indeed sensitive to the ISRF. However, how the underlying gas density and temperature correlate with ISRF, and how this relates to other known correlations like the Kennicutt-Schmidt law is still unclear.
In this work, we study the CO excitation, molecular gas density and ISRF in a large sample of 76 (unlensed) galaxies from local to high redshift.
The sample is selected from a large compilation of local and high-redshift CO observations from the literature, where we require galaxies to have both CO(2-1) and CO(5-4) detections together with well-sampled dust SEDs.
This also includes CO(5-4) observations newly presented here, from the Institut de Radioastronomie Millimétrique (IRAM) Plateau de Bure Interferometer (PdBI; now upgraded to the NOrthern Extended Millimeter Array [NOEMA]) for six starburst-type galaxies at $z\sim1.6$ in the COSMOS field, which have Atacama Large Millimeter/submillimeter Array (ALMA) CO(2-1) observations from <cit.>.
To estimate gas density and temperature from observed line ratios, we model gas density PDFs following <cit.> but with a new approach incorporating assumptions based on the observed correlations between the gas volume density, column density and velocity dispersion.
We propose a conversion method from the line ratio to the mean molecular hydrogen gas density $\nmean$ and kinetic temperature $\Tkin$ for galaxies at global scale [A Python package (co-excitation-gas-modeling) is provided with this paper for the calculation: <https://pypi.org/project/co-excitation-gas-modeling>. It fits an input line ratio with error to our model grid and determines the probable ranges of $\nmean$ and $\Tkin$.].
Our model-predicted $J_{\mathrm{u}}<10$ CO SLEDs also show good agreement with the current data.
The structure of this paper is as follows.
Sect. <ref> describes the sample and data.
Sect. <ref> describes the SED fitting technique for $\Umean$ and other galaxy properties.
In Sect. <ref>, we present correlations between $\R52$ and various galaxy properties.
Then, in Sect. <ref>, we describe details of our gas modeling and the conversion from $\R52$ to $\nmean$ and $\Tkin$, while the resulting correlations between $\nmean$, $\Tkin$ and $\Umean$ are presented in Sect. <ref>.
We discuss the physical meaning of $\Umean$, the connection of the $\Umean$–$\nmean$ and $\Tkin$–$\nmean$ correlations to the Kennicutt-Schmidt law, and the limitations and outlook of our study in Sect. <ref>.
Finally, we summarize in Sect. <ref>.
Throughout this paper, CO line ratios are expressed as ratios of velocity-integrated fluxes in units of $\mathrm{Jy\,km\,s^{-1}}$.
We adopt a flat $\Lambda$CDM cosmology with $H_0=73\;\mathrm{km\,s^{-1}\,Mpc^{-1}}$, $\Omega_M=0.27$, and a <cit.> initial mass function (IMF).
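For reference, the adopted cosmology enters the analysis mainly through luminosity distances when converting line fluxes to luminosities (line ratios themselves are distance-independent). A minimal sketch, assuming the astropy package and the standard Solomon & Vanden Bout (2005) CO line luminosity conversion:

from astropy.cosmology import FlatLambdaCDM

cosmo = FlatLambdaCDM(H0=73.0, Om0=0.27)  # H0 in km/s/Mpc, as adopted here

def co_line_luminosity(flux_jykms, nu_obs_ghz, z):
    """L'_CO in K km/s pc^2 from a velocity-integrated flux in Jy km/s."""
    d_l = cosmo.luminosity_distance(z).value  # luminosity distance in Mpc
    return 3.25e7 * flux_jykms * d_l**2 / ((1.0 + z)**3 * nu_obs_ghz**2)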
§ SAMPLE AND DATA
We search the literature for CO observations of local and high-redshift galaxies, seeking galaxies that have multiple CO line detections. This is not a complete search, but we have included 132 papers presenting CO observations from 1975 to 2020 [A MySQL/MariaDB database is available to interested readers upon request.].
We require a galaxy to have one low-$J$ CO line, CO(2-1), and one mid/high-$J$ CO line, CO(5-4), for this work. This approach is chosen to maximize the sample size while covering most high-redshift main-sequence (MS) [The MS is a relation between galaxies' stellar mass and SFR at each redshift; see <cit.>, <cit.>, <cit.>. In this work we use the <cit.> MS equation.] galaxies' CO observations.
We also require multi-wavelength coverage including optical, near-IR, far-infrared and (sub-)millimeter, in order to fit their panchromatic SEDs and obtain stellar and dust properties.
In this way, we build up a sample of 76 galaxies. They are divided into the following subsamples:
* 22 “local (U)LIRG”: local (ultra-)luminous infrared galaxies with IR luminosity $\LIR \ge 10^{11} \, \Lsun$. Their high-$J$ CO data are from the HerCULES () and GOALS () surveys using the Spectral and Photometric Imaging Receiver (SPIRE; ) Fourier Transform Spectrometer (FTS; ) on board the Herschel Space Observatory (), and were analyzed by <cit.>. Their low-$J$ CO data are from ground-based observations in the literature (see references in Table <ref>).
* 16 “local SFG”: local star-forming galaxies (SFGs), most of which have high-$J$ CO from the KINGFISH () and VNGS (PI: C. Wilson) surveys using Herschel SPIRE FTS, also analyzed by <cit.>. Many of them have low-$J$ CO mapping from the ground-based HERACLES survey (), while the others have CO(2-1) single-pointing observations in the literature.
* 6 “high-$z$ SB FMOS”: redshift $z\sim1.5$ starburst (SB) [A SB galaxy is defined by its SFR being $4\times$ greater than the MS SFR (e.g., from the <cit.> equation) given its redshift and stellar mass; vice versa, a MS galaxy is defined by its SFR being within $4\times$ the MS SFR.] galaxies from the FMOS-COSMOS survey (), where the CO(2-1) data are from <cit.> and the CO(5-4) data are newly presented in this work.
* 4 “high-$z$ MS BzK”: $z\sim1.5$ MS galaxies from <cit.>, selected using $BzK$ color criterion () and representing high-redshift massive star-forming disks.
* 4 “high-$z$ SB SMG”: $z\sim2-6$ starbursty, (sub)millimeter-selected galaxies (SMGs), including GN20 (), AzTEC-3 (), COSBO11 (), and HFLS3 ().
* 8 “high-$z$ MS V20”: high-redshift MS galaxies from <cit.>, with SFR within $4\times$ the MS SFR.
* 16 “high-$z$ SB V20”: high-redshift SB galaxies from <cit.>, with SFR greater than $4\times$ the MS SFR.
Our sample is shown in Table <ref>, where references for the CO(2-1) and CO(5-4) observations are provided. We note that there are also additional interesting galaxies observed in these CO lines: for example, strongly lensed galaxies (e.g., ), or galaxies that have observations of different CO lines (e.g., ). As the sample we compiled in this work already covers a large variety of galaxy types (e.g., MS/SB, local/high-redshift), we chose not to further include these data for simplicity and consistency. Applying our method to an extended sample of galaxies could be the subject of a future study.
In the following, we present more details about the CO and multi-wavelength photometry data for each subsample.
§.§ Local (U)LIRGs and SFGs
For local galaxies, all high-$J$ ($J_{\mathrm{u}}\sim4$ to $13$) CO observations are taken with Herschel SPIRE FTS.
<cit.> explored the full public Herschel Science Archive and reduced the spectra for almost all (167) FTS-observed local galaxies [Their catalog is available at <https://zenodo.org/record/3632388>].
Based on their sample, we select galaxies with CO(5-4) $\SNR>3$ and cross-match them with low-$J$ ($J_{\mathrm{u}}\sim1$ and $2$) observations in the literature (i.e., 132 papers).
There are about 40 galaxies which meet our criterion.
The FTS's spatial pixel (“spaxel”) has a beam size of about 20–40$''$ across its frequency range of 447–1568 GHz. As we attempt to recover the total flux from the finite beam size as reliably as possible, a few interacting galaxies (e.g., NGC 4038/39; Arp 299 A/B/C) and very nearby, large galaxies (e.g., Cen A, NGC 891, M 83) have been excluded.
This gives us a sample of 38 galaxies with both CO(2-1) and CO(5-4) detections, of which 22 are local (U)LIRGs whose CO(5-4) transitions were mainly observed by the HerCULES and GOALS surveys, while ground-based CO(2-1) was provided by various works in the literature (see Table <ref>). Meanwhile, 16 are local star-forming spiral galaxies, whose CO(5-4) data are mostly taken by the KINGFISH and VNGS surveys, and 12 of which have CO(2-1) mapping from the HERACLES survey () [Their data are available at <http://www.mpia.de/HERACLES/Overview.html>].
In Appendix <ref> we provide notes on galaxies that have multiple, possibly inconsistent, CO measurements in the literature. In some cases these early observations do not fully agree with each other, even after accounting for the effect of different beam sizes. This could be due to absolute flux calibration or single-dish baseline issues, so the uncertainty in these CO fluxes could be quite high, e.g., a factor of two.
To correct for the fact that the FTS spaxel beam is smaller than an entire galaxy, <cit.> measured the Herschel PACS 70–160$\,\mu$m aperture photometry within each FTS CO line beam as well as for the entire galaxy, and calculated the ratio between the beam-aperture photometry and the entire-galaxy photometry, namely “BeamFrac”, as listed in the full Table <ref> (online version). This BeamFrac is then used to scale the measured CO line flux in the FTS central spaxel to the entire-galaxy scale. The method is based on the assumption that the PACS 70–160$\,\mu$m luminosity linearly traces the CO(5-4) luminosity, and is also adopted by other works, e.g., <cit.> and <cit.>.
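In code form, this aperture correction is a one-line scaling (the function name here is ours, for illustration):

def scale_to_galaxy(spaxel_co_flux, beam_frac):
    """Scale the FTS central-spaxel CO flux to the entire galaxy, where
    beam_frac = PACS 70-160um flux within the FTS beam / that of the galaxy."""
    return spaxel_co_flux / beam_frac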
For nearby galaxies which have CO(2-1) maps from HERACLES, we measure their CO(2-1) integrated fluxes using our own photometry method, as some of them do not have published line fluxes in <cit.>. Because the signal-to-noise ratio is relatively poor when reaching galaxies' outer disks in the HERACLES data, aperture photometry can be strongly affected by the choice of aperture size. We thus perform a signal masking of the HERACLES moment-0 maps to distinguish pure noise pixels from signal pixels. The mask is iteratively generated, median-filtered and binary-dilated based on pixels above 1-$\sigma$, where $\sigma$ is the rms noise iteratively determined on the pixels outside the signal mask. In this way, we obtain a Gaussian-distributed pixel value histogram outside the mask, and a total CO(2-1) line flux from the sum of pixels within the mask.
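A minimal sketch of this iterative masking photometry, assuming numpy/scipy and a 2D moment-0 map; the filter size, dilation iterations and iteration count are illustrative choices, not the exact values used:

import numpy as np
from scipy.ndimage import median_filter, binary_dilation

def masked_line_flux(mom0, n_iter=5):
    """Iterative signal-mask photometry on a moment-0 map (2D array)."""
    valid = np.isfinite(mom0)
    mask = np.zeros(mom0.shape, dtype=bool)
    for _ in range(n_iter):
        # rms noise determined from pixels outside the current signal mask
        sigma = np.std(mom0[valid & ~mask])
        # pixels above 1 sigma, then median-filtered and binary-dilated
        mask = valid & (np.nan_to_num(mom0) > 1.0 * sigma)
        mask = median_filter(mask.astype(np.uint8), size=3).astype(bool)
        mask = binary_dilation(mask, iterations=2)
    # total line flux from the sum of pixels within the mask
    return np.nansum(mom0[mask]), sigma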
We compared our CO(2-1) line fluxes with those published in <cit.> for the galaxies in common, finding relative differences as small as 5–10%.
To study the dust SED and ISRF of these galaxies, we further collected multi-wavelength photometry data in the literature. In our sample, 22, 15, 7 and 6 galaxies have Herschel far-IR photometry from <cit.>, <cit.>, <cit.> and <cit.>, respectively. Eight have SCUBA2 850 $\mu$m photometry from <cit.>.
Note that <cit.> provide the full UV/optical-to-infrared/sub-mm SEDs [Including GALEX far-UV, near-UV, $B$, $V$, $R$, $I$, $u$, $g$, $r$, $i$, $z$, $J$, $H$, $K$, Spitzer/IRAC 3.6, 4.5, 5.8, 8.0 $\mu$m, WISE 12 $\mu$m, Spitzer/MIPS 24 $\mu$m, Herschel/PACS 70, 100, 160 $\mu$m, Herschel/SPIRE 250, 350, 500 $\mu$m, and JCMT/SCUBA 850 $\mu$m. See their Table 2.].
All of these local galaxies have Herschel PACS 70 or 100 $\mu$m and 160 $\mu$m photometry from <cit.>. Fluxes are consistent among these works: for example, comparing with <cit.>, we find 13 galaxies in common, with a median logarithmic flux ratio of $-0.01$ dex and a scatter of 0.04 dex.
For our SED fitting, we average all available fluxes for each band.
In addition, we cross-matched with <cit.>, <cit.>, <cit.>, and the NASA Extra-galactic Database (NED) for missing optical to near-/mid-infrared photometry.
All local galaxies have 2MASS near-IR photometry from <cit.> except for NGC 2369 and NGC 3256.
For 9 galaxies which do not have any optical photometry from <cit.> and <cit.>, we use the optical/near-/mid-IR photometry from NED [These include NGC 7469.].
Note that we carefully selected photometric data with apertures large enough to cover the entire galaxies.
§.§ High-$z$ SB FMOS galaxies with new PdBI observations
We observed the CO(5-4) line emission in six $z\sim1.6$ starburst galaxies from the FMOS-COSMOS survey () with IRAM PdBI in the winter of 2014 (program ID W14DS).
These galaxies have ALMA CO(2-1) observations presented in <cit.>.
Our PdBI observations are at 1.3 mm. Phase centers are set to the ALMA CO(2-1) emission peak position for each galaxy, and the on-source integration time is 1.5 to 3.1 hrs per source. Sensitivity is 0.6–0.7 $\mathrm{mJy/beam}$ over the expected line widths of 200–600 MHz, depending on the ALMA CO(2-1) line properties of each source.
With robust weighting (robust factor 1), the cleaned images have synthesized beam FWHM of 2.0–3.3$''$.
As the ALMA CO(2-1) data have much higher $\SNR$ than the PdBI CO(5-4) data, we extract the CO(5-4) line fluxes in the $uv$ plane by Gaussian source fitting with fixed CO(2-1) positions and line widths (from ), using the GILDAS [<http://gildas.iram.fr>] MAPPING UV_FIT task.
The achieved line flux $\SNR$ ranges from 1.8 to 5.4 within the subsample.
For two sources, PACS-819 and PACS-830, which are spatially resolved in the ALMA CO(2-1) data, we also fix their CO(5-4) sizes to the measured CO(2-1) sizes ($\sim$0.3 and 1.0$''$, respectively) in the UV_FIT fitting, to account for their being marginally resolved in the PdBI data. The other galaxies have smaller ALMA CO(2-1) sizes, and we consider them unresolved by the PdBI beam.
Furthermore, we partially observed their CO(1-0) line emission with the Very Large Array (VLA; project code 17A-233). The observing program is incomplete and no source received its full integration (PACS-867, 299 and 164 each have about 90 minutes of on-source integration), but we provide face-value measurements extracted in the same way (using the ALMA CO(2-1) priors).
We list the new CO(5-4) and CO(1-0) line fluxes and upper limits together with the <cit.> CO(2-1) line fluxes in Table <ref> in Appendix <ref>.
Multi-wavelength photometry is available from <cit.> and <cit.>, thanks to the rich observational data in the COSMOS deep field (see also ).
§.§ High-$z$ MS BzK galaxies
We include 4 $BzK$ color selected MS galaxies from <cit.> in our sample. They represent typical high-redshift star-forming MS galaxies and are consistent with having a disk-like morphology.
Their CO(5-4) observations were taken with IRAM PdBI in 2009 and 2011 by <cit.>, and CO(2-1) in 2007–2009 by <cit.>.
These galaxies have optical-to-near-IR photometry from <cit.>, far-IR to (sub-)mm and radio photometry from <cit.> based on the Herschel PEP (), HerMES () and GOODS-Herschel surveys (), ground-based SCUBA2 S2CLS () and AzTEC+MAMBO surveys ().
<cit.> presented panchromatic SED fitting similar to ours, using the full dust models (see Sect. <ref>) to estimate the ISRF $\Umean$ and other SED properties, but without including an AGN component in the modeling. Our SED fitting allows for the inclusion of a mid-IR AGN component, but we confirm that such a component is not required, based on the chi-square statistics. We thus obtain results for $\Umean$ similar to those of <cit.>.
§.§ High-$z$ SB SMGs
We include 4 sub-mm selected high-redshift galaxies in our study:
GN20 (), AzTEC-3 (), COSBO11 (), and HFLS3 ().
Due to their sub-mm selection, they usually have very high SFRs compared to MS galaxies with similar stellar masses, therefore we consider them as SB.
We note that there are now more than one hundred sub-mm selected high-redshift ($z\gtrsim1$) galaxies that have CO detections, but only a few tens have both CO(5-4) and (2-1) detections.
We further excluded strongly-lensed galaxies lacking optical/near-IR SEDs, for example those from <cit.>, <cit.>, <cit.>, <cit.>, and <cit.>, despite the fairly good sampling of their CO SLEDs. Their strong magnification ($\gtrsim 10$) greatly reduces the required observing time (by a factor of $\sim$100) for CO observations compared to unlensed targets, yet their optical to mid-IR SEDs are usually not well sampled. <cit.> present a study of CO excitation and far-IR/(sub-)mm dust SED modeling in strongly lensed galaxies, based on a similar gas density PDF modeling.
Among our SMG subsample, GN20 is in the GOODS-North field, and AzTEC-3 and COSBO11 are in the COSMOS field. They have rich multi-wavelength photometry as mentioned earlier.
<cit.> fitted the GN20 SED with templates without an AGN component, and our new fitting to the same photometry data shows that a mid-IR AGN component is indistinguishable from the warm dust component in the models. The inclusion of the AGN component in this work, however, leads to more realistic uncertainties in the derived $\Umean$ parameter.
§.§ High-$z$ MS and SB galaxies from V20
We further include 8 MS and 16 SB galaxies from <cit.> which have both CO(2-1) and CO(5-4) $\SNR>3$ detections and far-IR photometric data. <cit.> surveyed 123, 75 and 15 galaxies with ALMA through Cycle 3, 4, and 7, respectively. Cycle 3 and 4 observations targeted CO(5-4) and CO(2-1), respectively. Their sample is selected from the COSMOS field at $z \approx 1.1 - 1.7$ based on predicted CO line luminosities, which are further based on the CO–IR luminosity correlation (). By this selection, this sample contains both MS and SB galaxies. We divide MS and SB galaxies into two subsamples for illustration in the later sections.
These galaxies have multi-wavelength photometry similar to the other COSMOS galaxies mentioned above, and most of them also have one or more ALMA dust continuum measurements from the public ALMA archive, reduced by <cit.>, and from line-free channels of the CO observations in <cit.>. <cit.> performed multi-component SED fitting including stellar, AGN, and warm and cold dust components following <cit.>. They adopt a slightly different definition of the ISRF, $\Umean_{\mathrm{V20}} = 1/125 \times \LIR / \Mdust$, where their $\LIR$ also includes the AGN contribution.
In this work, we assembled all available ALMA photometry and re-fitted their SEDs with our own code. To be consistent within this work, we still use the $\Umean$ definition of <cit.> (their Eq. 33), and use only the star-forming dust components without the contribution of the AGN torus. Because of the different definition and treatment of the AGN component, there are some noticeable differences in $\Umean$ between <cit.> and our study. However, if we were to adopt the same $\Umean_{\mathrm{V20}}$ definition, the $\Umean$ derivations would be fully consistent.
Sample of galaxies used in this work with measured and derived physical properties.
Columns: Galaxy | Subsample | $z$ | $\R52$ | $\log (\nmean/\percmcubic)$ | $\Umean$ | $\log (L_{\mathrm{IR}}/\mathrm{L_{\odot}})$ | $\log (M_{\star}/\mathrm{M_{\odot}})$ | Ref. CO(5-4) | Ref. CO(2-1)
Arp193 local (U)LIRG $ 0.023 $ $ 1.8 \pm 0.3 $ $ 2.4 \pm 0.4 $ $ 17.0 _{ +0.0} ^{ +2.4} $ $ 11.6 _{ -0.0} ^{ +0.0} $ $ 10.3 _{ -0.0} ^{ +0.0} $ L15 P14
Arp220 local (U)LIRG $ 0.018 $ $ 2.4 \pm 0.5 $ $ 2.7 \pm 0.7 $ $ 20.6 _{ -0.1} ^{ +0.4} $ $ 12.2 _{ -0.0} ^{ +0.0} $ $ 11.0 _{ -0.0} ^{ +0.0} $ L15 K14/G09
IRASF17207-0014 local (U)LIRG $ 0.043 $ $ 3.9 \pm 1.0 $ $ 3.5 \pm 1.3 $ $ 35.0 _{ -0.4} ^{ +0.2} $ $ 12.4 _{ -0.0} ^{ +0.0} $ $ 11.0 _{ -0.0} ^{ +0.1} $ L15 K14/B08/W08/P12
IRASF18293-3413 local (U)LIRG $ 0.018 $ $ 2.2 \pm 0.1 $ $ 2.6 \pm 0.4 $ $ 11.1 _{ -2.9} ^{ +0.9} $ $ 11.7 _{ -0.1} ^{ +0.0} $ $ 9.6 _{ -0.2} ^{ +0.2} $ L15 G93
M82 local SFG $ 0.001 $ $ 1.7 \pm 0.2 $ $ 2.4 \pm 0.3 $ $ 25.4 _{ -0.5} ^{ +0.2} $ $ 10.6 _{ -0.0} ^{ +0.0} $ $ 9.8 _{ -0.0} ^{ +0.0} $ L15 L09
Mrk231 local (U)LIRG $ 0.042 $ $ 3.0 \pm 0.9 $ $ 2.9 \pm 1.3 $ $ 50.0 _{ -2.5} ^{ +0.0} $ $ 12.3 _{ -0.0} ^{ +0.0} $ $ 10.9 _{ -0.0} ^{ +0.0} $ L15 K14/P12/A07
Mrk273 local (U)LIRG $ 0.038 $ $ 3.4 \pm 0.9 $ $ 3.2 \pm 1.3 $ $ 37.7 _{ +0.0} ^{ +5.6} $ $ 12.0 _{ -0.0} ^{ +0.0} $ $ 10.6 _{ -0.2} ^{ +0.0} $ L15 K14/P12
NGC0253 local SFG $ 0.001 $ $ 2.9 \pm 0.7 $ $ 3.0 \pm 1.0 $ $ 6.3 _{ -0.5} ^{ +0.0} $ $ 10.6 _{ -0.0} ^{ +0.0} $ $ 10.9 _{ -0.1} ^{ +0.0} $ L15 K14(43.5)/H99
NGC0828 local (U)LIRG $ 0.018 $ $ 0.9 \pm 0.4 $ $ 2.1 \pm 0.4 $ $ 3.5 _{ -0.0} ^{ +0.5} $ $ 11.3 _{ -0.0} ^{ +0.0} $ $ 11.2 _{ -0.0} ^{ +0.0} $ L15 P12(22)
NGC1068 local (U)LIRG $ 0.004 $ $ 0.8 \pm 0.2 $ $ 2.1 \pm 0.2 $ $ 5.8 _{ -0.7} ^{ +0.2} $ $ 11.2 _{ -0.0} ^{ +0.0} $ $ 10.5 _{ -0.0} ^{ +0.0} $ L15 K14(43.5)/K11/B08
NGC1266 local SFG $ 0.00724 $ $ 2.6 \pm 0.7 $ $ 2.8 \pm 1.0 $ $ 13.3 _{ -2.0} ^{ +3.3} $ $ 10.3 _{ -0.0} ^{ +0.1} $ $ 10.5 _{ -0.0} ^{ +0.0} $ L15 K14(43.5)/A11/Y11
NGC1365 local (U)LIRG $ 0.005 $ $ 1.5 \pm 0.4 $ $ 2.3 \pm 0.5 $ $ 3.5 _{ -0.0} ^{ +0.6} $ $ 11.2 _{ -0.0} ^{ +0.0} $ $ 10.9 _{ -0.0} ^{ +0.0} $ L15 K14(43.5)/S95
NGC1614 local (U)LIRG $ 0.016 $ $ 1.4 \pm 0.3 $ $ 2.3 \pm 0.3 $ $ 31.2 _{ +0.0} ^{ +28.6} $ $ 11.6 _{ -0.0} ^{ +0.1} $ $ 10.5 _{ -0.2} ^{ +0.0} $ L15 A95(22)
NGC2369 local (U)LIRG $ 0.011 $ $ 2.0 \pm 0.4 $ $ 2.5 \pm 0.5 $ $ 5.8 _{ -1.9} ^{ +0.2} $ $ 11.1 _{ -0.1} ^{ +0.0} $ $ 10.5 _{ -0.0} ^{ +0.0} $ L15 A95(22)/B08
NGC2623 local (U)LIRG $ 0.018 $ $ 4.1 \pm 0.8 $ $ 3.6 \pm 1.1 $ $ 19.1 _{ -2.0} ^{ +4.3} $ $ 11.4 _{ -0.0} ^{ +0.0} $ $ 10.3 _{ -0.0} ^{ +0.0} $ L15 P12/W08
NGC2798 local SFG $ 0.00576 $ $ 1.9 \pm 0.3 $ $ 2.5 \pm 0.4 $ $ 13.3 _{ -2.6} ^{ +2.8} $ $ 10.5 _{ -0.0} ^{ +0.0} $ $ 9.7 _{ -0.0} ^{ +0.0} $ L15 L09
NGC3256 local (U)LIRG $ 0.009 $ $ 1.7 \pm 0.2 $ $ 2.4 \pm 0.3 $ $ 23.9 _{ -9.5} ^{ +0.5} $ $ 11.6 _{ -0.1} ^{ +0.0} $ $ 10.4 _{ -0.0} ^{ +0.0} $ L15 A95(24)/B08/G93
NGC3351 local SFG $ 0.0026 $ $ 0.9 \pm 0.2 $ $ 2.1 \pm 0.2 $ $ 2.0 _{ -0.3} ^{ +0.4} $ $ 9.8 _{ -0.0} ^{ +0.0} $ $ 9.8 _{ -0.0} ^{ +0.0} $ L15 L09
NGC3627 local SFG $ 0.00243 $ $ 1.8 \pm 0.4 $ $ 2.4 \pm 0.6 $ $ 3.6 _{ -1.0} ^{ +0.8} $ $ 10.3 _{ -0.0} ^{ +0.0} $ $ 10.1 _{ -0.0} ^{ +0.2} $ L15 L09
NGC4321 local SFG $ 0.00524 $ $ 0.8 \pm 0.1 $ $ 2.1 \pm 0.2 $ $ 1.8 _{ +0.0} ^{ +0.7} $ $ 10.4 _{ -0.0} ^{ +0.0} $ $ 10.5 _{ -0.0} ^{ +0.0} $ L15 L09
NGC4536 local SFG $ 0.00603 $ $ 1.7 \pm 0.3 $ $ 2.4 \pm 0.4 $ $ 3.9 _{ -0.1} ^{ +0.6} $ $ 10.3 _{ -0.0} ^{ +0.0} $ $ 10.1 _{ -0.0} ^{ +0.0} $ L15 L09
NGC4569 local SFG $ -0.00078 $ $ 1.1 \pm 0.1 $ $ 2.2 \pm 0.2 $ $ 1.9 _{ -0.3} ^{ +0.2} $ $ 9.6 _{ -0.0} ^{ +0.0} $ $ 10.5 _{ -0.0} ^{ +0.0} $ L15 L09
NGC4631 local SFG $ 0.00202 $ $ 0.9 \pm 0.2 $ $ 2.1 \pm 0.3 $ $ 2.8 _{ -0.6} ^{ +0.4} $ $ 10.2 _{ -0.0} ^{ +0.0} $ $ 9.4 _{ -0.0} ^{ +0.0} $ L15 L09
NGC4736 local SFG $ 0.00103 $ $ 0.6 \pm 0.1 $ $ 2.0 \pm 0.1 $ $ 4.1 _{ -0.4} ^{ +1.5} $ $ 9.6 _{ -0.0} ^{ +0.0} $ $ 9.8 _{ -0.0} ^{ +0.0} $ L15 L09
NGC4826 local SFG $ 0.00136 $ $ 1.6 \pm 0.3 $ $ 2.4 \pm 0.4 $ $ 3.6 _{ -0.6} ^{ +0.8} $ $ 9.5 _{ -0.0} ^{ +0.0} $ $ 10.4 _{ -0.0} ^{ +0.0} $ L15 A95(28)
NGC4945 local (U)LIRG $ 0.002 $ $ 4.0 \pm 0.8 $ $ 3.6 \pm 1.2 $ $ 7.0 _{ -1.0} ^{ +0.3} $ $ 11.1 _{ -0.0} ^{ +0.0} $ $ 9.7 _{ -0.0} ^{ +1.1} $ L15 W04/B08(22)
NGC5135 local (U)LIRG $ 0.014 $ $ 1.6 \pm 0.4 $ $ 2.4 \pm 0.4 $ $ 8.2 _{ -1.2} ^{ +2.5} $ $ 11.2 _{ -0.1} ^{ +0.0} $ $ 11.1 _{ -0.7} ^{ +0.0} $ L15 P12(22)
NGC5194 local SFG $ 0.002 $ $ 0.7 \pm 0.1 $ $ 2.1 \pm 0.1 $ $ 3.0 _{ -0.2} ^{ +0.7} $ $ 10.2 _{ -0.0} ^{ +0.0} $ $ 9.6 _{ -0.0} ^{ +0.0} $ L15 L09
NGC5713 local SFG $ 0.00633 $ $ 1.0 \pm 0.2 $ $ 2.1 \pm 0.2 $ $ 5.2 _{ -1.1} ^{ +0.6} $ $ 10.4 _{ -0.0} ^{ +0.0} $ $ 10.1 _{ -0.0} ^{ +0.0} $ L15 L09
NGC6240 local (U)LIRG $ 0.024 $ $ 2.8 \pm 0.8 $ $ 2.9 \pm 0.9 $ $ 20.0 _{ -0.5} ^{ +0.2} $ $ 11.7 _{ -0.0} ^{ +0.0} $ $ 10.8 _{ -0.0} ^{ +0.0} $ L15 G09
NGC6946 local SFG $ 0.00013 $ $ 1.1 \pm 0.1 $ $ 2.2 \pm 0.2 $ $ 4.2 _{ -1.1} ^{ +0.4} $ $ 10.4 _{ -0.1} ^{ +0.0} $ $ 10.3 _{ -0.0} ^{ +0.3} $ L15 L09
NGC7469 local (U)LIRG $ 0.016 $ $ 1.1 \pm 0.3 $ $ 2.1 \pm 0.2 $ $ 13.1 _{ +0.0} ^{ +5.1} $ $ 11.6 _{ -0.0} ^{ +0.0} $ $ 10.0 _{ -0.0} ^{ +0.3} $ L15 P12
NGC7552 local (U)LIRG $ 0.005 $ $ 2.4 \pm 0.5 $ $ 2.7 \pm 0.7 $ $ 14.0 _{ -0.5} ^{ +0.2} $ $ 11.1 _{ -0.0} ^{ +0.0} $ $ 10.2 _{ -0.0} ^{ +0.0} $ L15 A95
NGC7582 local SFG $ 0.005 $ $ 1.5 \pm 0.3 $ $ 2.3 \pm 0.4 $ $ 11.7 _{ -0.3} ^{ +0.2} $ $ 10.9 _{ -0.0} ^{ +0.0} $ $ 10.9 _{ -0.0} ^{ +0.0} $ L15 A95
MCG+12-02-001 local (U)LIRG $ 0.016 $ $ 1.6 \pm 0.3 $ $ 2.3 \pm 0.3 $ $ 17.4 _{ -2.1} ^{ +3.6} $ $ 11.5 _{ -0.0} ^{ +0.0} $ $ 11.9 _{ -1.1} ^{ +0.6} $ L15 K16(43.5)
Mrk331 local (U)LIRG $ 0.018 $ $ 2.4 \pm 0.4 $ $ 2.7 \pm 0.7 $ $ 14.7 _{ -2.1} ^{ +0.4} $ $ 11.4 _{ -0.0} ^{ +0.0} $ $ 10.9 _{ -1.5} ^{ +0.0} $ L15 K16(43.5)
NGC7771 local (U)LIRG $ 0.014 $ $ 1.3 \pm 0.2 $ $ 2.2 \pm 0.3 $ $ 7.0 _{ -0.5} ^{ +0.1} $ $ 11.3 _{ -0.0} ^{ +0.0} $ $ 11.4 _{ -0.0} ^{ +0.0} $ L15 K16(43.5)
IC1623 local (U)LIRG $ 0.02 $ $ 1.8 \pm 0.3 $ $ 2.4 \pm 0.5 $ $ 13.2 _{ -2.1} ^{ +0.5} $ $ 11.6 _{ -0.0} ^{ +0.0} $ $ 9.1 _{ -0.0} ^{ +0.0} $ L15 K16(43.5)
BzK16000 high-z MS BzK $ 1.52 $ $ 1.5 \pm 0.3 $ $ 2.3 \pm 0.4 $ $ 15.2 _{ -13.3} ^{ +33.2} $ $ 11.8 _{ -0.0} ^{ +0.0} $ $ 11.0 _{ -0.2} ^{ +0.0} $ D15 D15/M12
BzK17999 high-z MS BzK $ 1.41 $ $ 2.2 \pm 0.2 $ $ 2.6 \pm 0.4 $ $ 14.4 _{ -7.7} ^{ +13.7} $ $ 12.0 _{ -0.0} ^{ +0.0} $ $ 10.7 _{ -0.0} ^{ +0.2} $ D15 D15/M12
BzK21000 high-z MS BzK $ 1.52 $ $ 2.3 \pm 0.2 $ $ 2.6 \pm 0.4 $ $ 25.2 _{ -12.2} ^{ +5.2} $ $ 12.3 _{ -0.0} ^{ +0.0} $ $ 11.0 _{ -0.2} ^{ +0.1} $ D15 D15/M12
BzK4171 high-z MS BzK $ 1.47 $ $ 1.8 \pm 0.2 $ $ 2.4 \pm 0.4 $ $ 16.5 _{ -8.1} ^{ +4.5} $ $ 12.0 _{ -0.0} ^{ +0.0} $ $ 10.7 _{ -0.1} ^{ +0.1} $ D15 D15/M12
GN20 high-z SB SMG $ 4.06 $ $ 3.4 \pm 0.4 $ $ 3.2 \pm 0.7 $ $ 35.4 _{ -8.8} ^{ +4.9} $ $ 13.3 _{ -0.1} ^{ +0.0} $ $ 11.2 _{ -0.1} ^{ +0.0} $ C10 D09
AzTEC-3 high-z SB SMG $ 5.3 $ $ 4.0 \pm 0.2 $ $ 3.6 \pm 0.5 $ $ 120.2 _{ -84.9} ^{ +8.9} $ $ 13.3 _{ -0.1} ^{ +0.0} $ $ 10.8 _{ -0.0} ^{ +0.2} $ R10 R10
COSBO11 high-z SB SMG $ 1.83 $ $ 3.7 \pm 0.1 $ $ 3.3 \pm 0.5 $ $ 20.0 _{ -3.0} ^{ +0.4} $ $ 12.9 _{ -0.0} ^{ +0.0} $ $ 10.8 _{ -0.0} ^{ +0.0} $ A08 A08
HFLS3 high-z SB SMG $ 6.34 $ $ 5.9 \pm 0.3 $ $ 5.0 \pm 0.4 $ $ 68.3 _{ -0.0} ^{ +10.5} $ $ 13.7 _{ -0.0} ^{ +0.1} $ $ 10.5 _{ -0.4} ^{ +0.5} $ R13 R13
PACS-819 high-z SB FMOS $ 1.45 $ $ 3.5 \pm 0.2 $ $ 3.2 \pm 0.5 $ $ 27.7 _{ -0.1} ^{ +5.6} $ $ 12.5 _{ -0.0} ^{ +0.1} $ $ 10.7 _{ -0.1} ^{ +0.1} $ THIS S15
PACS-830 high-z SB FMOS $ 1.46 $ $ 1.6 \pm 0.2 $ $ 2.4 \pm 0.4 $ $ 24.3 _{ -2.3} ^{ +6.7} $ $ 12.4 _{ -0.0} ^{ +0.0} $ $ 11.0 _{ -0.0} ^{ +0.0} $ THIS S15
PACS-867 high-z SB FMOS $ 1.57 $ $ 1.6 \pm 0.3 $ $ 2.4 \pm 0.4 $ $ 2.8 _{ -2.3} ^{ +18.6} $ $ 12.0 _{ -0.0} ^{ +0.0} $ $ 10.8 _{ -0.1} ^{ +0.1} $ THIS S15
PACS-299 high-z SB FMOS $ 1.65 $ $ 2.6 \pm 0.2 $ $ 2.7 \pm 0.4 $ $ 28.3 _{ -19.1} ^{ +38.5} $ $ 12.4 _{ -0.0} ^{ +0.0} $ $ 10.1 _{ -0.0} ^{ +0.4} $ THIS S15
PACS-325 high-z SB FMOS $ 1.65 $ $ 0.0 \pm 3.4 $ -- $ 1.2 _{ -0.7} ^{ +14.6} $ $ 11.8 _{ -0.1} ^{ +0.1} $ $ 10.4 _{ -0.1} ^{ +0.0} $ THIS S15
PACS-164 high-z SB FMOS $ 1.65 $ $ 1.9 \pm 0.4 $ $ 2.4 \pm 0.7 $ $ 18.1 _{ -15.4} ^{ +35.0} $ $ 12.5 _{ -0.0} ^{ +0.0} $ $ 10.2 _{ -0.2} ^{ +0.3} $ THIS S15
V20-ID41458 high-z SB V20 $ 1.29 $ $ 1.8 \pm 0.2 $ $ 2.4 \pm 0.3 $ $ 33.5 _{ +0.0} ^{ +11.0} $ $ 12.5 _{ -0.0} ^{ +0.0} $ $ 11.1 _{ -0.0} ^{ +0.0} $ V20 V20
V20-ID21060 high-z SB V20 $ 1.28 $ $ 3.6 \pm 0.9 $ $ 3.4 \pm 1.3 $ $ 51.5 _{ -1.5} ^{ +0.5} $ $ 12.3 _{ -0.0} ^{ +0.0} $ $ 10.0 _{ -0.0} ^{ +0.1} $ V20 V20
V20-ID51599 high-z SB V20 $ 1.17 $ $ 2.1 \pm 0.2 $ $ 2.5 \pm 0.4 $ $ 14.4 _{ -1.9} ^{ +3.7} $ $ 12.5 _{ -0.0} ^{ +0.0} $ $ 11.1 _{ -0.0} ^{ +0.1} $ V20 V20
V20-ID30694 high-z MS V20 $ 1.16 $ $ 1.2 \pm 0.2 $ $ 2.2 \pm 0.3 $ $ 15.0 _{ -2.9} ^{ +5.5} $ $ 12.0 _{ -0.0} ^{ +0.1} $ $ 10.9 _{ -0.0} ^{ +0.2} $ V20 V20
V20-ID38053 high-z SB V20 $ 1.15 $ $ 1.3 \pm 0.4 $ $ 2.3 \pm 0.4 $ $ 18.9 _{ -0.1} ^{ +11.6} $ $ 12.0 _{ -0.0} ^{ +0.0} $ $ 10.5 _{ -0.0} ^{ +0.0} $ V20 V20
V20-ID48881 high-z SB V20 $ 1.16 $ $ 1.9 \pm 0.5 $ $ 2.5 \pm 0.5 $ $ 42.9 _{ -0.2} ^{ +0.5} $ $ 12.3 _{ -0.0} ^{ +0.0} $ $ 10.6 _{ -0.0} ^{ +0.0} $ V20 V20
V20-ID37250 high-z SB V20 $ 1.15 $ $ 1.1 \pm 0.1 $ $ 2.2 \pm 0.2 $ $ 9.9 _{ -1.1} ^{ +3.7} $ $ 12.2 _{ -0.0} ^{ +0.0} $ $ 11.0 _{ -0.2} ^{ +0.0} $ V20 V20
V20-ID44641 high-z MS V20 $ 1.15 $ $ 1.0 \pm 0.3 $ $ 2.2 \pm 0.5 $ $ 9.4 _{ -2.8} ^{ +2.4} $ $ 12.0 _{ -0.1} ^{ +0.0} $ $ 11.2 _{ -0.3} ^{ +0.0} $ V20 V20
V20-ID51936 high-z SB V20 $ 1.4 $ $ 1.9 \pm 0.3 $ $ 2.4 \pm 0.3 $ $ 5.3 _{ -0.1} ^{ +2.1} $ $ 12.0 _{ -0.0} ^{ +0.0} $ $ 10.5 _{ -0.0} ^{ +0.0} $ V20 V20
V20-ID31880 high-z SB V20 $ 1.4 $ $ 2.2 \pm 0.4 $ $ 2.6 \pm 0.5 $ $ 20.5 _{ +0.0} ^{ +2.8} $ $ 12.3 _{ -0.0} ^{ +0.0} $ $ 11.0 _{ -0.0} ^{ +0.0} $ V20 V20
V20-ID2299 high-z SB V20 $ 1.39 $ $ 3.4 \pm 0.3 $ $ 3.2 \pm 0.5 $ $ 13.8 _{ -0.4} ^{ +0.4} $ $ 12.7 _{ -0.0} ^{ +0.0} $ $ 11.1 _{ -0.1} ^{ +0.0} $ V20 V20
V20-ID21820 high-z MS V20 $ 1.38 $ $ 2.1 \pm 0.4 $ $ 2.5 \pm 0.5 $ $ 15.7 _{ -2.3} ^{ +7.6} $ $ 12.2 _{ -0.0} ^{ +0.0} $ $ 11.0 _{ -0.1} ^{ +0.0} $ V20 V20
V20-ID13205 high-z SB V20 $ 1.27 $ $ 3.0 \pm 0.7 $ $ 3.1 \pm 1.2 $ $ 49.8 _{ -10.8} ^{ +16.6} $ $ 12.3 _{ -0.0} ^{ +0.0} $ $ 11.1 _{ -0.2} ^{ +0.0} $ V20 V20
V20-ID13854 high-z MS V20 $ 1.27 $ $ 1.8 \pm 0.3 $ $ 2.5 \pm 0.5 $ $ 20.0 _{ -3.0} ^{ +0.4} $ $ 12.2 _{ -0.0} ^{ +0.0} $ $ 11.1 _{ -0.0} ^{ +0.0} $ V20 V20
V20-ID19021 high-z SB V20 $ 1.26 $ $ 1.9 \pm 0.3 $ $ 2.4 \pm 0.4 $ $ 25.0 _{ +0.0} ^{ +5.4} $ $ 12.3 _{ -0.0} ^{ +0.0} $ $ 10.4 _{ -0.0} ^{ +0.0} $ V20 V20
V20-ID35349 high-z MS V20 $ 1.26 $ $ 0.8 \pm 0.2 $ $ 2.1 \pm 0.2 $ $ 8.2 _{ -0.6} ^{ +4.0} $ $ 12.0 _{ -0.0} ^{ +0.0} $ $ 11.2 _{ -0.1} ^{ +0.0} $ V20 V20
V20-ID42925 high-z SB V20 $ 1.6 $ $ 2.1 \pm 0.4 $ $ 2.5 \pm 0.5 $ $ 59.9 _{ -18.5} ^{ +0.4} $ $ 12.7 _{ -0.0} ^{ +0.0} $ $ 11.0 _{ -0.0} ^{ +0.0} $ V20 V20
V20-ID38986 high-z MS V20 $ 1.61 $ $ 2.8 \pm 0.9 $ $ 2.8 \pm 1.4 $ $ 19.5 _{ -16.1} ^{+155.1} $ $ 12.0 _{ -0.1} ^{ +0.0} $ $ 11.1 _{ -0.0} ^{ +0.0} $ V20 V20
V20-ID30122 high-z MS V20 $ 1.46 $ $ 2.0 \pm 0.4 $ $ 2.6 \pm 0.6 $ $ 13.4 _{ -4.2} ^{ +1.3} $ $ 12.2 _{ -0.0} ^{ +0.0} $ $ 10.9 _{ -0.0} ^{ +0.1} $ V20 V20
V20-ID41210 high-z SB V20 $ 1.31 $ $ 2.1 \pm 0.2 $ $ 2.5 \pm 0.4 $ $ 25.0 _{ -9.6} ^{ +0.4} $ $ 12.3 _{ -0.1} ^{ +0.0} $ $ 10.6 _{ -0.0} ^{ +0.0} $ V20 V20
V20-ID2993 high-z SB V20 $ 1.19 $ $ 1.4 \pm 0.3 $ $ 2.3 \pm 0.4 $ $ 13.1 _{ -3.0} ^{ +7.4} $ $ 12.2 _{ -0.0} ^{ +0.0} $ $ 11.0 _{ -0.2} ^{ +0.1} $ V20 V20
V20-ID48136 high-z MS V20 $ 1.18 $ $ 1.5 \pm 0.2 $ $ 2.3 \pm 0.3 $ $ 14.9 _{ -3.3} ^{ +3.0} $ $ 12.3 _{ -0.1} ^{ +0.0} $ $ 11.1 _{ -0.0} ^{ +0.1} $ V20 V20
V20-ID51650 high-z SB V20 $ 1.34 $ $ 2.8 \pm 0.4 $ $ 2.9 \pm 0.6 $ $ 21.1 _{ -5.0} ^{ +9.4} $ $ 12.2 _{ -0.0} ^{ +0.1} $ $ 10.9 _{ -0.0} ^{ +0.0} $ V20 V20
V20-ID15069 high-z SB V20 $ 1.21 $ $ 1.7 \pm 0.6 $ $ 2.4 \pm 1.0 $ $ 6.1 _{ -1.2} ^{ +2.3} $ $ 12.0 _{ -0.0} ^{ +0.0} $ $ 10.8 _{ -0.3} ^{ +0.1} $ V20 V20
THIS = This work (see Appendix <ref>);
L15 = <cit.>;
P14 = <cit.>;
K14 = <cit.>;
G09 = <cit.>;
E90 = <cit.>;
I14 = <cit.>;
B08 = <cit.>;
W08 = <cit.>;
P12 = <cit.>;
G93 = <cit.>;
L09 = <cit.>;
B06 = <cit.>;
A07 = <cit.>;
H99 = <cit.>;
B92 = <cit.>;
K11 = <cit.>;
A11 = <cit.>;
Y11 = <cit.>;
S95 = <cit.>;
A95 = <cit.>;
W04 = <cit.>;
K16 = <cit.>;
D15 = <cit.>;
M12 = <cit.>;
D09 = <cit.>;
W12 = <cit.>;
C10 = <cit.>;
R10 = <cit.>;
A08 = <cit.>;
R13 = <cit.>;
S15 = <cit.>;
V20 = <cit.>;
Only a few selected key columns are shown here. The full sample table has more columns, including the warm and cold dust luminosities, AGN luminosities, and the offset from the MS, which are used in Fig. <ref>.
The full machine-readable table is available at <https://doi.org/10.5281/zenodo.3958271>.
Two examples of our SED fitting, for PACS-819 (left) and Arp 193 (right), with our michi2 code as described in Sect. <ref>.
Upper panels show the best-fit SED (black line) and the SED components: stellar (cyan dashed line), mid-IR AGN (yellow dashed line; optional, if an AGN is present), PDR dust (red dashed line) and cold/ambient dust (blue dashed line). Photometric data are shown by circles with error bars, or by downward arrows for upper limits when $\SNR<3$.
Lower panels show $1/\chi^2$ distributions for several galaxy properties from our SED fitting. In each sub-panel, the height of the histogram indicates the highest $1/\chi^2$ in each bin of the $x$-axis galaxy property; a higher $1/\chi^2$ means a better fit. The 68% confidence level of our five-component SED fitting is indicated by the yellow shading.
(Figures for all sources are available at <https://doi.org/10.5281/zenodo.3958271>.)
§ SPECTRAL ENERGY DISTRIBUTION (SED) FITTING: THE MICHI2 CODE
The well-sampled SEDs from optical to far-IR/mm allow us to obtain accurate dust properties by fitting them with SED templates. In particular, since dust grains do not have a single temperature in a galaxy, the mean ISRF intensity, $\Umean$, has been considered a more physical proxy of dust emission properties (<cit.>). It represents the 0–13.6 eV intensity of the interstellar UV radiation in units of the <cit.> ISRF intensity (see <cit.>).
The $\Umean$ parameter has the advantage of describing the mixture of ISRF states, rather than describing galaxy dust SEDs with one or a few dust temperature values. In such dust models, the majority of dust grains are exposed to a minimum ambient ISRF with intensity $\Umin$, while the rest are exposed to the photon-dominated region (PDR) ISRF, with intensities ranging from $\Umin$ to $\Umax$ following a power-law PDF (in mass). The mass fraction of the latter dust grain population (“warm dust” or “PDR dust”) is denoted $\fPDR$ in this work and is a free parameter in the fit. $\Umin$ is another free parameter, while $\Umax$ and the power-law index are empirically fixed (see more detailed introductions in <cit.>).
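For the fixed power-law index of $-2$, the mass-weighted mean of this two-component ISRF model can be written in closed form. A sketch following the Draine & Li (2007)-style definition, with $\Umax=10^{7}$ as adopted below:

import numpy as np

def mean_isrf(u_min, f_pdr, u_max=1e7):
    """Mass-weighted <U> for ambient dust at Umin plus a dM/dU ~ U^-2
    power-law component between Umin and Umax (mass fraction f_pdr)."""
    u_pdr = u_min * np.log(u_max / u_min) / (1.0 - u_min / u_max)
    return (1.0 - f_pdr) * u_min + f_pdr * u_pdr

# e.g., mean_isrf(10.0, 0.1) ~ 22.8: a 10% PDR dust fraction more than
# doubles <U> relative to Umin.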
As pointed out by <cit.>, such a physically motivated dust model effectively fits the mass distribution of molecular clouds (<cit.>).
Based on this model, <cit.> generated SED templates that can then be used for fitting by other works with their own SED fitting codes.
In this work, we use our self-developed SED fitting code, michi2 [<https://github.com/1054/Crab.Toolkit.michi2>], which provides the flexibility of combining multiple SED components and of choosing SED templates for each component. Compared with popular panchromatic (UV-to-mm/radio) SED fitting codes, e.g., MAGPHYS (), LePhare (), CIGALE (), our code fits SEDs well and produces similar best-fit results (see Appendix <ref>).
Our code also performs $\chi^2$-based posterior probability distribution analysis and estimates reasonable (asymmetric) uncertainties for each free or derived parameter (e.g., Fig. <ref>).
Our code can also handle an arbitrary number of SED libraries as components of the whole SED. Here we use five SED libraries/components representing stellar, AGN, warm dust, cold dust, and radio emission (see below). Our code samples their combinations in the five-dimensional space, generates a composite SED (after convolving the model with the filter curves), fits it to the observed photometric data, and obtains the $\chi^2$ statistics. Post-processing of the $\chi^2$ distribution provides the best fit and probability range of each physical parameter in the SED libraries (following <cit.>, chapter 15.6).
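Schematically (this is an illustration of the grid-combination idea, not the actual michi2 implementation), the $\chi^2$ sampling over component combinations looks like the following, with the per-component normalizations solved by non-negative least squares:

import itertools
import numpy as np
from scipy.optimize import nnls

def fit_composite_sed(flux_obs, flux_err, component_libraries):
    """Brute-force chi^2 over combinations of per-component templates.

    component_libraries: list (one per component) of lists of template SEDs,
    each already convolved with the filter curves of the observed bands.
    """
    best = (np.inf, None, None)
    b = flux_obs / flux_err
    for combo in itertools.product(*component_libraries):
        # weighted design matrix: one column per SED component
        a_mat = np.stack(combo, axis=1) / flux_err[:, None]
        amps, resid = nnls(a_mat, b)      # non-negative amplitudes
        chi2 = resid**2                   # resid is ||A x - b||_2
        if chi2 < best[0]:
            best = (chi2, combo, amps)
    return best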
Details of the five SED libraries/components are:
* stellar component: for high-redshift ($z>1$) star-forming galaxies, we use the <cit.> code to generate solar-metallicity, constant star formation history, <cit.>-IMF SED templates, and then apply the <cit.> attenuation law with a range of $\mathrm{E(B-V)} = 0.0$ to $1.0$ to construct our SED library. For local galaxies, we use the FSPS () code to generate solar-metallicity, $\tau$-declining star formation history, <cit.>-IMF SED templates (also with the <cit.> attenuation law), as this produces a larger variety of SED templates which fit local galaxies better.
* mid-IR AGN component: we use the observationally calibrated AGN torus SED templates from <cit.>. They cover $6-100\,\mathrm{\mu m}$ in wavelength, and can fit Type 1, Type 2 and intermediate-type AGNs, as demonstrated by <cit.>.
* warm dust component: for dust grains exposed to the PDR ISRF, with intensity ranging from $\Umin$ to $\Umax=10^{7}$ following a power-law PDF with an index of $-2$ (updated version; see <cit.>). The fraction of dust mass in Polycyclic Aromatic Hydrocarbons (PAHs) is described by $q_{\mathrm{PAH}}$. The mass contribution of such warm dust to the total ISM dust is described by $\fPDR$ in this work (i.e., the $\gamma$ in <cit.>). Free parameters are $\Umin$, $q_{\mathrm{PAH}}$ and $\fPDR$.
* cold dust component: for dust grains exposed to the ambient ISRF with intensity $\Umin$. The $\Umin$ and $q_{\mathrm{PAH}}$ of the cold dust are fixed to be the same as those of the warm dust in our fitting.
* radio component: a simple power law with index $-0.8$ is assumed. Our code has the option to fix the normalization of the radio component at rest-frame 1.4 GHz to the total IR luminosity $L_{\mathrm{IR}\,(8-1000{\mu}{\mathrm{m}})}$ (integrating the warm and cold dust components only) via assumptions about the IR-radio correlation (e.g., <cit.>), when galaxies lack sufficient IR photometric data and display no obvious radio excess due to AGN (e.g., <cit.>). As radio is not the focus of this work, we only use the simple power-law assumption for illustration purposes.
Note that we do not balance the dust-attenuated stellar light with the total dust emission. This has the advantage of allowing for optically thick dust emission that is seen only in the infrared.
Our fitting then outputs $\chi^2$ distributions for the following parameters of interest (see bottom panels in Fig. <ref>):
* Stellar properties, including stellar mass $\Mstar$, dust attenuation $\mathrm{E(B-V)}$, and light-weighted stellar age.
* AGN luminosity $L_{\mathrm{AGN}}$, integrated over the AGN SED component.
* IR luminosities for cold dust ($L_{\mathrm{IR,\,cold\,dust}}$), warm dust ($L_{\mathrm{IR,\,PDR\,dust}}$) and their sum ($L_{\mathrm{IR,\,total\,dust}}$).
* Mean ISRF intensity $\Umean$, minimum ISRF intensity $\Umin$, and the mass fraction of warm/PDR-like dust in the model $\fPDR$.
In Fig. <ref> we show two examples of our SED fitting.
Best-fit parameters and their errors are also listed in our full sample table (Table <ref> online version).
To verify our SED fitting, we also fit our high-$z$ galaxies' SEDs with MAGPHYS and CIGALE
(see more details in Appendix <ref>).
We find that for most high-$z$ galaxies the stellar masses and IR luminosities agree within $\sim 0.2$–$0.3$ dex.
The IR luminosities are more consistent than the stellar masses among the results of the three fitting codes, with a scatter of $\sim 0.2$ dex.
In several outlier cases, our code produces a more reasonable fit to the data (e.g., AzTEC-3, Arp220, NGC0253), likely because we do not impose an energy-balance constraint. Our code shows no systematic offset relative to CIGALE, but there is a noticeable trend for MAGPHYS to fit slightly larger stellar masses than the other two codes. A possible reason is the use of the <cit.> double attenuation law in MAGPHYS (see ) rather than the <cit.> attenuation law in our michi2 and CIGALE fitting.
Given the general agreement between our code and CIGALE/MAGPHYS, and to be consistent within this paper, we fit all SEDs with our michi2 SED fitting code using the five SED libraries mentioned above.
§ INTERSTELLAR RADIATION FIELD TRACES CO EXCITATION: THE $\UMEAN$–$\R52$ CORRELATION
CO(5-4)/CO(2-1) line ratio $\R52$ versus various galaxy properties:
(a) ambient ISRF intensity ($\Umin$);
(b) mean ISRF intensity ($\Umean$);
(c) dust IR luminosity;
(d) stellar mass;
(e) the luminosity fraction of dust exposed to the warm/PDR-like ISRF relative to the total ISM dust (not including the AGN torus);
(f) the luminosity ratio between the mid-IR AGN and the total ISM dust (the AGN luminosity is integrated over all available wavelengths, while the dust luminosity is integrated over rest-frame 8–1000 $\mu$m);
and (g) the offset from the MS in terms of SFR.
The Pearson coefficient $P$ and scatter $\sigma$ for each correlation are shown at the bottom right. We performed orthogonal distance regression (ODR) linear fitting to the data points, accounting for their $x$ and $y$ errors, in panels (a), (b) and (c), where $P>0.5$. Dotted lines are the best fits from this work, with slope $N$ and intercept $A$ shown at the bottom.
The dashed line in panel (b) is the best-fit linear regression from <cit.>.
We use our SED fitting results and the compiled CO data to study the empirical correlation between the CO(5-4)/CO(2-1) line ratio $\R52$ and the mean ISRF intensity $\Umean$. This correlation physically links molecular gas and dust properties together, supporting the idea that gas and dust are generally mixed together at large scales and exposed to the same local ISRF.
In Fig. <ref> we correlate $\R52$ with various galaxy properties derived from our SED fitting. Panel (a) shows a tight correlation between $\R52$ and the ambient ISRF intensity $\Umin$, and panel (b) confirms the tight correlation between $\R52$ and $\Umean$ which was first reported by <cit.>.
Panels (c) and (d) show that CO excitation is also well correlated with galaxies' dust luminosities, but not with their stellar masses. In panels (e) to (g), we show that $\R52$ exhibits no correlation with $\fPDR$ or the mid-IR AGN fraction, while a very weak correlation seems to exist between $\R52$ and the offset from the MS SFR, $\SFR/\SFR_{\mathrm{MS}}$. In each panel, the Pearson correlation coefficient $P$ is computed and shown at the bottom right. These correlations, or the lack thereof, demonstrate that $\R52$, or mid-$J$ CO excitation, is indeed mostly driven by dust-related quantities, i.e., $\LIR$, $\Umean$ and $\Umin$.
Our best-fit $\R52$–$\Umean$ correlation is close to the one found by <cit.>, yet somewhat shallower. <cit.> also reported a shallower slope of the $\R52$–$\Umean$ correlation, consistent with the fact that the high-$z$ V20 sample is used both in their work and in ours. Indeed, the sub-samples behave slightly differently in Fig. <ref>. While local SFGs and local (U)LIRGs scatter well around the average $\R52$–$\Umean$ correlation line, high-$z$ MS and SB galaxies from the FMOS and V20 subsamples tend to lie below it. Given the varied $\SNR$ of the IR data, as reflected by the $\Umean$ error bars, the majority of these high-$z$ galaxies do not have a high-quality constraint on $\Umean$. High-$z$ sample selections for CO observations are usually also biased toward IR-bright galaxies. Therefore, it is difficult to draw a conclusion about any redshift evolution of the $\R52$–$\Umean$ correlation with the current dataset.
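The ODR fits quoted above can be reproduced with scipy; a minimal sketch in log-log space, assuming symmetric errors in dex:

import numpy as np
from scipy import odr

def fit_log_linear(logx, logy, logx_err, logy_err):
    """Orthogonal distance regression of log y = N log x + A."""
    linear = odr.Model(lambda beta, x: beta[0] * x + beta[1])
    data = odr.RealData(logx, logy, sx=logx_err, sy=logy_err)
    output = odr.ODR(data, linear, beta0=[1.0, 0.0]).run()
    (slope_n, intercept_a), errs = output.beta, output.sd_beta
    return slope_n, intercept_a, errs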
From panel (f) of Fig. <ref>, we can see that several galaxies show a high AGN to ISM dust luminosity ratio (note that the AGN luminosity is integrated over all wavelengths, while the IR luminosity is only that of the warm+cold dust integrated over 8–1000$\,\mu$m). The three galaxies with $L_{\mathrm{AGN},\,\mathrm{all}\,\lambda} / L_{\mathrm{IR,\,8\textnormal{--}1000{\mu}m}} \gtrsim 0.9$ are V20-ID38986, V20-ID51936 and V20-ID19021, from high to low respectively. They all clearly show power-law shaped SEDs from the near-IR IRAC bands to mid-IR MIPS 24 $\mu$m and PACS 100 $\mu$m [Their SED figures are accessible at the link mentioned in the caption of Fig. <ref>. With high-$\SNR$ IRAC to MIPS 24 $\mu$m data, their mid-IR AGN and any PAH feature, if present, can be well distinguished by our SED fitting. Yet we note that for galaxies with low-$\SNR$ IRAC to MIPS 24 $\mu$m data the uncertainty in the AGN component identification could be high.].
However, their $\R52$ values do not tend to be higher. This supports the view that these mid-$J$ ($J_{\mathrm{u}}\sim5$) CO lines are not overwhelmingly affected by AGN.
We note that the correlations in Fig. <ref> are not the only ones worth exploring. $\R52$ also correlates with dust mass in a way similar to $\Umean$ but with larger scatter, and since $\Umean$ can be considered the ratio $\LIR/\Mdust$, we omit the correlation with $\Mdust$ here. <cit.> also investigated how the SFR surface density ($\Sigma_{\SFR}$), star formation efficiency ($\SFR/\Mgas$), gas-to-dust ratio ($\deltaGDR$) and massive star-forming clumps affect $\Umean$ and $\R52$. Their results support the idea that a larger fraction of massive star-forming clumps with denser molecular gas, relative to the diffuse, low-density molecular gas, is the key to high CO excitation (as proposed by the simulation work of <cit.>). Therefore, to understand the key physical drivers of CO excitation, information on molecular gas density distributions is the most urgently needed.
§ MODELING OF MOLECULAR GAS DENSITY DISTRIBUTION IN GALAXIES
CO line emission in galaxies arises mainly from the cold molecular gas, and CO line ratios/SLEDs are sensitive to the local physical conditions of the molecular gas, i.e., volume density $\nH2$, column density $\NH2$, and kinetic temperature $\Tkin$. These properties typically vary by one to three orders of magnitude within a galaxy, e.g., as seen in observations reviewed by <cit.>, <cit.>, <cit.>, <cit.> and references therein,
and also in modeling and simulations, e.g., by <cit.>, <cit.>, <cit.>, <cit.>, <cit.>, <cit.>, <cit.>, <cit.>, <cit.> and <cit.>.
In practice, studies of the CO SLED at the global galaxy scale or at sub-kpc scales usually require the presence of a relatively dense gas component ($\nH2 \sim 10^{3\textnormal{--}5}\,\mathrm{cm^{-3}}$; $\Tkin \gtrsim 50\textnormal{--}100 \,\mathrm{K}$) in addition to a relatively diffuse one ($\nH2 \sim 10^{2\textnormal{--}3}\,\mathrm{cm^{-3}}$; $\Tkin \sim 20\textnormal{--}100 \,\mathrm{K}$), via non-local thermodynamic equilibrium (non-LTE) LVG radiative transfer modeling (e.g., ). A third component, mostly responsible for the $J_{\mathrm{u}} \gtrsim 10$ CO lines, is also found in the case of AGN (e.g., ) or mechanical heating (e.g., ).
Therefore, a mid-to-low-$J$ CO line ratio like $\R52$ reflects not only the excitation condition of a single gas state, but also the relative amounts of the denser, warmer and the more diffuse gas components.
<cit.> conducted pioneering modeling of the sub-beam gas density PDF to understand the line ratios of CO isotopologues and dense gas tracers. The method consists of constructing a series of one-zone clouds, performing non-LTE LVG calculations, and combining the line fluxes according to the gas density PDF.
They demonstrated that such modeling can successfully reproduce the observed isotopologue-to-CO and dense-gas-tracer-to-CO line ratios.
Inspired by this work, we present in this section similar sub-beam density-PDF gas modeling to study the CO excitation, and propose a useful conversion from $\R52$ observations to $\nmean$ and $\Tkin$ for galaxies at global scales.
Examples of composite gas density PDFs in our modeling, with the log-normal PDF mean gas density $\lgnmean$ varying from 2.0 to 4.5 (in panels from left to right and top to bottom) and a fixed power-law tail threshold gas density $\lgnthresh=4.5$. The values of $\lgnmean$ and $\lgnthresh$ are indicated by the vertical transparent bars and labels in each panel. The thick blue and thin green solid (dashed) lines represent the volume- and mass-weighted PDFs of the log-normal (power-law tail) gas component, respectively.
§.§ Observational evidence of the gas density PDF
Observing the gas density PDF at molecular cloud scales requires high angular resolution (e.g., sub-hundred-pc scales) and full spatial information; it can therefore only be done either with sensitive single-dish mapping in the Galaxy and the nearest large galaxies, or with sensitive interferometric plus total-power observations.
For external galaxies, the MAGMA survey by <cit.>, <cit.> and <cit.> mapped CO(1-0) in the LMC at 11 pc resolution with the Mopra 22m single-dish telescope. <cit.>, <cit.> and <cit.> mapped the M 33 CO(2-1) emission at 50 pc scale with the IRAM 30m single-dish telescope. The PAWS survey provides M 51 CO maps at 40 pc resolution obtained with the IRAM PdBI plus IRAM 30m data (). The ongoing PHANGS-ALMA survey [<http://phangs.org>] maps CO(2-1) at $\sim$60–100 pc scales in more than 70 nearby galaxies using ALMA with total power (Leroy et al., submitted; see also ).
Meanwhile, higher physical resolution observations are also available for Galactic clouds and filaments, e.g., <cit.>, <cit.>, <cit.>, <cit.>.
These observations at large scales reveal a smooth gas density PDF which can be described by a log-normal distribution plus a high-density power-law tail (e.g., ). The width of the log-normal PDF and the slope of the power-law tail vary slightly among galaxies, but the most prominent difference is seen in the mean of the log-normal PDF (hereafter $\nmean$), which changes by more than one order of magnitude (for a relatively small sample of $<10$ spiral galaxies; see Fig. 7 of ).
Interestingly, such a log-normal PDF is consistently predicted by isothermal homogeneous supersonic turbulent theories or diverse cloud models (e.g., ; ; ; ; see also references in ), and the additional power-law PDF is also expected, e.g., for a multi-phase ISM and/or due to the cloud evolution/star formation at late times (e.g., ; ; ; and references therein). Therefore, modeling gas density PDFs assuming a log-normal distribution plus a power-law tail appears to be a very reasonable approach.
§.§ Sub-beam gas density PDF modeling
We thus assume that the line-of-sight volume density of the molecular gas in a galaxy follows a log-normal PDF, with a small portion of the lines of sight following a power-law PDF at the high-density tail.
Representative PDFs are shown in Fig. <ref>.
Each PDF samples the $\nH2$ from $1$ to $10^{7}\;\mathrm{cm^{-3}}$ in 100 bins in logarithmic space.
For each $\nH2$ bin, the height of the PDF is proportional to the number of sight lines with that density. We assume that the CO line emission surface brightness from each line of sight can be computed from an equivalent “one-zone” cloud with a single $\nH2$, $\NH2$, $\Tkin$, velocity gradient and CO abundance. The total CO line emission surface brightness is then the sum over all sight lines in the PDF.
The shape of the gas density PDF is described by the following parameters:
the mean gas density of the log-normal PDF $\nmean$, the threshold density of the power-law tail $\nthresh$,
the width of the log-normal PDF,
and the slope of the power-law tail.
We model a series of PDFs by varying the $\nmean$ from $10^{2.0}$ to $10^{5.0}\;\mathrm{cm^{-3}}$ in steps of 0.25 dex, and $\nthresh$ from $10^{4.0}$ to $10^{5.25}\;\mathrm{cm^{-3}}$ in steps of 0.25 dex, to build our model grid which can cover most situations observed in galaxies.
The slope of the power-law tail is fixed to $-1.5$, which is an intermediate value as indicated by simulations (), also previously adopted by <cit.>.
The width of the log-normal PDF is physically characterized by the Mach number $\mathcal{M}$ of the supersonic turbulent ISM (see and Eq. 5 of ): $\sigma \approx 0.43 \sqrt{\ln (1 + 0.25 \mathcal{M}^2)}$, where $\mathcal{M}$ typically ranges from 4 to 20 in star-forming regions as shown by simulations (e.g., ; ). Here we adopt a fiducial Mach number of 10, as done previously by <cit.>.
Note that high Mach numbers of $\sim80$ are also found in merger systems and starburst galaxies (e.g., ). Such a value corresponds to a log-normal PDF width $1.56\times$ our fiducial one, and marginally affects the CO excitation in a similar way as a higher $\nmean$. Thus, for simplicity, in this work we fix the Mach number and allow $\nmean$ to vary.
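A sketch of how such a composite PDF can be constructed on the 100-bin density grid; whether the fixed $-1.5$ slope applies per $\mathrm{d}n$ or per $\mathrm{d}\log n$ is glossed over here, and we simply apply it in log space as an illustrative assumption:

import numpy as np

def density_pdf(lg_n_mean, lg_n_thresh, mach=10.0, slope=-1.5):
    """Log-normal PDF plus power-law tail over n_H2 = 1 to 1e7 cm^-3."""
    lg_n = np.linspace(0.0, 7.0, 100)
    # log-normal width in dex from the Mach number (see text)
    sigma = 0.43 * np.sqrt(np.log(1.0 + 0.25 * mach**2))
    pdf = np.exp(-0.5 * ((lg_n - lg_n_mean) / sigma)**2)
    # power-law tail joined continuously at the threshold density
    i0 = np.searchsorted(lg_n, lg_n_thresh)
    if i0 < lg_n.size:
        pdf[i0:] = pdf[i0] * 10.0**(slope * (lg_n[i0:] - lg_n[i0]))
    return lg_n, pdf / pdf.sum()   # normalized over the log-spaced bins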
CO(5-4)/CO(2-1) line ratio ($\R52$) from single one-zone LVG calculations. The four panels show the calculations at four representative redshifts, $z=0$, 1.5, 4 and 6, from left to right. Solid, dashed and long-dashed lines are for gas kinetic temperatures $\Tkin=25$, $50$ and $100\;\mathrm{K}$, respectively.
The grey lines in the second, third and fourth panels are the corresponding $z=0$ lines.
$\R52$ as functions of the mean gas density ($\lgnmean$) as predicted from our composite gas modeling. The four panels show the models at four different representative redshifts. In each panel, color indicates the threshold density of the power-law tail ($\lgnthresh$; which alters the line ratio only slightly), and three line styles are models at three representative kinetic temperatures ($\Tkin=25$, $50$ and $100\,\mathrm{K}$ for solid, dashed and long-dashed lines, respectively).
§.§ One-zone gas cloud calculation
For a given gas density PDF, each $\nH2$ bin is composed of identical “one-zone” gas clouds, for which we compute the line surface brightness. A one-zone cloud has a single volume density $\nH2$, column density $\NH2$, gas kinetic temperature $\Tkin$, CO abundance $[\mathrm{CO}/\mathrm{H_2}]$ and velocity gradient $\dvddr$. Note that although an equivalent cloud size $r$ is implied by the ratio of $\NH2$ to $\nH2$, the calculation is 1D, so $r$ should not be taken as a physical cloud size.
Also note that we do not model the 3D distribution of the one-zone models; therefore any radiative coupling between one-zone models along the same line of sight cannot be accounted for. This is likely a minor issue for star-forming disk galaxies given their thin disks (a few hundred pc; ) and systematic rotation, which separates molecular clouds in velocity space for inclined disks, but the actual effects need to be studied with detailed numerical simulations (e.g., ).
Here we use RADEX () to compute the 1D non-LTE radiative transfer. For a given $\nH2$, we loop $\NH2$ from $10^{21}$ to $10^{24}\,\mathrm{cm}^{-2}$, and $r$ is then determined by:
\begin{equation}
N_{\mathrm{H_2}} = 2 \times r \times n_{\mathrm{H_2}} = 6 \times 10^{18} \times \frac{r}{\mathrm{pc}} \times \frac{n_{\mathrm{H_2}}}{\mathrm{cm^{-3}}} \ \ [\mathrm{cm^{-2}}]
\end{equation}
We also loop over $\Tkin$ values of 25, 50, and 100 K, while we fix $[\mathrm{CO}/\mathrm{H_2}] = 5 \times 10^{-5}$, a reasonable value for star-forming clouds (e.g., <cit.>; although it varies from cloud to cloud and depends on the chemistry; e.g., <cit.>).
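The one-zone grid loop then looks schematically as follows. Here run_radex stands for a user-supplied wrapper around RADEX (e.g., via the command-line binary); its interface is our assumption, not an actual RADEX API:

import numpy as np

CO_ABUNDANCE = 5e-5   # fixed [CO/H2] as in the text

def one_zone_sled_grid(lg_n_grid, t_kin, dvdr, run_radex, z=0.0):
    """Loop one-zone calculations over the density bins of the PDF."""
    t_bg = 2.725 * (1.0 + z)   # CMB background temperature at redshift z
    fluxes = []
    for lg_n in lg_n_grid:
        n_h2 = 10.0**lg_n
        n_col_h2 = 1e22        # cm^-2; the choice is immaterial for R52
        r_pc = n_col_h2 / (6e18 * n_h2)   # from N_H2 = 6e18 (r/pc)(n/cm^-3)
        n_col_co = CO_ABUNDANCE * n_col_h2
        # hypothetical wrapper returning CO line surface brightnesses
        fluxes.append(run_radex(n_h2, n_col_co, t_kin, dvdr, t_bg))
    return np.array(fluxes)    # shape: (n_density_bins, n_CO_lines)

# composite SLED: PDF-weighted sum over density bins, e.g.
# sled = (pdf[:, None] * one_zone_sled_grid(lg_n, 25.0, dvdr, run_radex)).sum(axis=0)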
Note that there is one additional free parameter to set, i.e., either the LVG velocity gradient $\dvddr$, or the line width FWHM $\Delta{V}$, or the velocity dispersion $\sigma_{V}$. They are related to each other by:
\begin{equation}
\begin{split}
\Delta{V} = 2 \times r \times \dvddr \ \ [\mathrm{km\,s^{-1}}] \\
\sigma_{V} = \Delta{V} / (2 \sqrt{2 \ln 2}) \ \ [\mathrm{km\,s^{-1}}]
\end{split}
\end{equation}
To determine these quantities and effectively reduce the number of free parameters while remaining consistent with observations, we use an empirical correlation between $\NH2$, $r$, $\sigma_{V}$ and the virial parameter $\alphavir$. $\alphavir$ describes the ratio of a cloud's kinetic energy to its gravitational potential energy (e.g., <cit.>), and can be written as $\frac{5 \sigma_{V}^2 r}{f G M}$, where
$\sigma_{V}$ and $r$ are introduced above,
$G$ is the gravitational constant, $M$ is the cloud mass, and $f$ is a factor accounting for the lack of balance between kinetic and gravitational potential energy (see Eq. 6 of <cit.>).
Observations show that clouds are not always virialized, i.e., $\alphavir$ is not always unity.
Based on $\sim60$ pc CO mapping of 11 galaxies in the PHANGS-ALMA sample, <cit.> reported the following correlation in their Eq. 13 (helium and other heavy elements are included; see also Eq. 2 in the review by <cit.>):
\begin{equation}
\begin{split}
\alpha_{\mathrm{vir}}
&= 5.77 \times
\left(\frac{\sigma_{V}}{\mathrm{km\,s^{-1}}}\right)^2
\left(\frac{\Sigma_{\mathrm{H_2}}}{\mathrm{M_{\odot}\,pc^{-2}}}\right)^{-1}
\left(\frac{r}{40\,\mathrm{pc}}\right)^{-1} \\
&= 5.77 \times
\left(\frac{\sigma_{V}}{\mathrm{km\,s^{-1}}}\right)^2
\left(\frac{N_{\mathrm{H_2}}}{1.55 \times 10^{20}\,\mathrm{cm^{-2}}}\right)^{-1}
\left(\frac{r}{\mathrm{pc}}\right)^{-1} \\
\end{split}
\label{Equation_alphavir}
\end{equation}
They find $\alphavir\approx1.5\,\textnormal{--}\,3.0$ with a $1\sigma$ width of 0.4 $\textnormal{--}$ 0.65 dex.
For simplicity, and with the idea of focusing primarily on the effect of gas density, we adopt a constant $\alphavir$ of 2.3. As shown in later sections, this is already sufficient for our modeling to explain the observed CO line ratios/SLEDs. Note, however, that more comprehensive descriptions of $\alphavir$ can be achieved in simulations and compared with the results of this work, to better understand how a varying $\alphavir$ affects the CO line ratio predictions.
Fig. <ref> presents how $\R52$ changes with the gas densities of one-zone cloud models for four representative redshifts where the cosmic microwave background (CMB) temperatures are different.
We repeat our calculations for three representative $\Tkin$ as labeled in each panel. The comparison shows that $\Tkin$ significantly affects the $\R52$ line ratio, especially at low densities and at low redshifts.
Note that due to the constant $\alphavir$ assumption, for a given $\nH2$, Eq. <ref> implies that $\sigma_{V} \propto r$ and that $\dvddr$ does not vary with $\NH2$. Thus the actual choice of $\NH2$ (or $r$) for each one-zone model does not affect the modeled $\R52$ (or the optical depth $\tau$).
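Concretely, combining Eq. <ref> with the column density relation above, the constant-$\alphavir$ assumption fixes the velocity gradient as a function of $\nH2$ alone; a small sketch of that derivation:

import numpy as np

def lvg_velocity_gradient(n_h2, alpha_vir=2.3):
    """dv/dr in km/s/pc for a one-zone cloud of density n_h2 [cm^-3].

    From alpha_vir = 5.77 sigma_V^2 (N_H2/1.55e20)^-1 (r/pc)^-1 and
    N_H2 = 6e18 (r/pc)(n_H2/cm^-3), sigma_V/r depends on n_h2 only.
    """
    sigma_over_r = np.sqrt(alpha_vir / 5.77 * 6e18 / 1.55e20 * n_h2)
    fwhm_over_r = 2.0 * np.sqrt(2.0 * np.log(2.0)) * sigma_over_r
    return fwhm_over_r / 2.0   # dv/dr = Delta V / (2 r)

# e.g., lvg_velocity_gradient(1e2) ~ 1.5 km/s/pc, scaling as sqrt(n_h2)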
In addition, our modeling is also able to produce reasonable line optical depths ($\tau$) and [C i]/CO line ratios, as presented in Appendix <ref>.
Predicted CO SLEDs in Jansky units, normalized at CO(2-1). From top to bottom, the CO SLEDs are at redshifts $z=0$, $1.5$, $4$ and $6$, respectively; from left to right, the log-normal PDF mean gas density $\lgnmean / \percmcubic$ changes from $2.0$ to $5.0$ in steps of 0.5.
In each panel, solid, dashed and long-dashed lines represent $T_{\mathrm{kin}}=25$, $50$ and $100\,\mathrm{K}$ models, respectively.
Line color coding indicates the threshold gas density of the power-law tail PDF, $\lgnthresh$.
Colored data points are CO line fluxes in the following galaxies, with references in parentheses:
the Milky Way (),
local spiral M51 (),
local ULIRGs Mrk231, Mrk273, IRAS F18293-3413 and IRAS F17207-0014 (; ),
$z=1.5$ BzK galaxies (),
$z=1.5$ starburst galaxies ( and this work),
$z=4.055$ SMG GN20 (; ; ),
$z=4.755$ SMG ALESS73.1 (; ),
$z=5.3$ SMG AzTEC-3 (),
and $z=6.3$ SMG HFLS3 ().
Fitted mean gas density $\nmean$ versus fitted gas kinetic temperature $\Tkin$ (left panel) and gas pressure $P/k$ (right panel; $k$ is the Boltzmann constant) based on $\R52$ and its errors in our galaxy sample. This reflects the internal degeneracy between $\nmean$ and $\Tkin$ in our model grid. See fitting method in Sect. <ref>.
§.§ Converting $\R52$ to $\nmean$ and $\Tkin$ with the model grid
We compute the global line surface brightness by summing one-zone line surface brightnesses at each $\nH2$ bin according to the gas density PDF.
With our assumptions, there are only four free parameters: the mean gas density of the log-normal PDF $\nmean$, the threshold density of the power-law tail $\nthresh$, gas kinetic temperature $\Tkin$, and redshift. Their grids are described in Sect. <ref>.
In Fig. <ref>, we present the predicted $\R52$ as a function of the four free parameters. $\R52$ increases smoothly with $\nmean$ and $\Tkin$, while $\nthresh$ does not substantially alter the $\R52$ ratio, as indicated by the color coding. The minimum $\R52$ at the lowest density ($\lgnmean/\percmcubic \sim 2$) nearly doubles from redshift 0 to 6 due to the increasing CMB temperature, but this redshift effect is less prominent ($<1.5\times$) at both higher densities ($\lgnmean/\percmcubic > 3$) and higher $\Tkin$.
In Fig. <ref>, we further show the full CO SLEDs at $J_{\mathrm{u}}=1$ to $9$ from our model grid, and compare them with a subsample of galaxies with multiple CO transitions at various redshifts. These galaxies are displayed in the panels whose $\nmean$ is closest to their $\R52$-derived $\nmean$ (see below). Our modeling can generally match these CO SLEDs given certain choices of $\nmean$ and $\Tkin$. Yet we caution that this is not a thorough comparison, and our model grid might not fit the CO SLED shapes entirely well due to our simplifying assumptions of a fixed Mach number, power-law tail slope and $\alphavir$. While this work focuses only on $\R52$ with the simplest assumptions, the model predictions already appear quite promising for full CO SLEDs, and can be further improved in future works.
Based on the model grid, we describe below a method to determine the most probable $\nmean$, $\Tkin$ and $\nthresh$ ranges for a given $\R52$ and its error in galaxies with known redshift. This is done with a Monte Carlo approach. We first interpolate our 4D model grid to the exact redshift of each galaxy using the Python scipy.interpolate.LinearNDInterpolator, then resample the 3D model grid to a finer grid, perturb the $\R52$ given its error over a normal distribution for 300 realizations, and find the minimum-$\chi^2$ best fit for each realization. Finally, we combine the best fits to obtain posterior distributions of $\nmean$, $\Tkin$ and $\nthresh$, and determine their medians and 16th (L68) and 84th (H68) percentiles.
This fitting method is implemented in our publicly available Python package co-excitation-gas-modeling.
We note that although there is a single input observable ($\R52$) and three parameters to be determined ($\nmean$, $\Tkin$ and $\nthresh$), our method still produces reasonable results. In particular, it takes into account the internal degeneracy between $\nmean$ and $\Tkin$ inside the model grid, thus yielding realistic probability ranges. Fig. <ref> shows the fitted $\nmean$ and $\Tkin$ for our galaxy sample, which exhibit a non-linear trend between $\nmean$ and $\Tkin$. The galaxy-wide mean gas pressures can also be calculated as $\Tkin \times \nmean$, and are found to agree with estimates in local galaxies <cit.>.
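A schematic sketch of this Monte Carlo procedure is shown below. The array layout and the function name are our own placeholders, not the actual co-excitation-gas-modeling API; the grid is assumed to be already interpolated to the galaxy's redshift, leaving the three parameters ($\log\nmean$, $\Tkin$, $\log\nthresh$):
\begin{verbatim}
import numpy as np
from scipy.interpolate import LinearNDInterpolator

def fit_r52(r52_obs, r52_err, grid_points, grid_r52,
            n_fine=40, n_realizations=300, seed=0):
    """Monte Carlo fit of (log n_mean, T_kin, log n_thresh) to a single
    observed R52.  grid_points: (M, 3) model-grid coordinates at the
    galaxy redshift; grid_r52: (M,) predicted R52 values."""
    rng = np.random.default_rng(seed)
    model = LinearNDInterpolator(grid_points, grid_r52)
    # resample onto a finer regular raster spanning the same ranges
    axes = [np.linspace(grid_points[:, i].min(),
                        grid_points[:, i].max(), n_fine) for i in range(3)]
    fine = np.stack(np.meshgrid(*axes, indexing="ij"), axis=-1).reshape(-1, 3)
    r52_model = model(fine)
    best = []
    for _ in range(n_realizations):
        r52_pert = rng.normal(r52_obs, r52_err)         # perturb observation
        chi2 = ((r52_model - r52_pert) / r52_err) ** 2  # single-datum chi^2
        best.append(fine[np.nanargmin(chi2)])           # min-chi^2 parameters
    # posterior summaries per parameter: 16% (L68), median, 84% (H68)
    return np.nanpercentile(np.array(best), [16, 50, 84], axis=0)
\end{verbatim}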
In Figs. <ref> and <ref>, we present correlations between the $\R52$-fitted $\nmean$ and $\Tkin$, respectively, and various galaxy properties, similarly to what is presented in Fig. <ref> for $\R52$. We discuss them in detail in the next sections (Sect. <ref>).
§ RESULTS ON ISM PHYSICAL PROPERTIES AND DISCUSSION
§.§ The underlying meaning of $\Umean$: a mass-to-light ratio for dust
By definition, $\Umean$ is the mass-weighted ISRF intensity created by UV photons from stars in a galaxy.
As indicated by the model and many of its applications, e.g., <cit.>, <cit.>, <cit.>, <cit.>, <cit.> and <cit.>, $\Umean$ is actually a mass-weighted, average mass-to-light ratio for the mixture of dust grains in a galaxy. It is driven by the young stars emitting most of the UV photons, but also reflects the mean distance between young stars and interstellar dust and the efficiency of UV photons heating the dust.
For a given ISRF distribution power-law index ($=-2$) and $U_{\mathrm{max}}$ ($=10^{7}$; ), $\Umean$ is proportional to the ratio between $\LIR$ and $\Mdust$, with a coefficient $P_0 \approx 138$ from this work, where $P_0$ represents the power absorbed per unit dust mass in a radiation field $U=1$:
\begin{equation}
\begin{split}
& L_{\mathrm{IR,\,8-1000{\mu}m}} = P_0 \cdot \Umean \cdot \Mdust \\[0.5ex]
& \textnormal{where} \ P_0 \approx 120 - 150 \ (\textnormal{mean} = 138)
\end{split}
\label{Equation_Umean_LIR_Mdust}
\end{equation}
Note that the $P_0$ factor is calibrated to be equal to 125 in <cit.> due to a slightly different $U_{\mathrm{max}}=10^{6}$, a small 10% systematic difference.
$\Umean$ is also positively linked to dust temperature, though the exact relation depends on how dust temperature is defined. For example, <cit.> find that $T \approx 17 \cdot U^{1/6} \ \mathrm{[K]}$ for dust grains with sizes greater than 0.03 $\mu$m, whose blackbody radiation peaks around 160 $\mu$m. <cit.> calibrate the light-weighted dust temperature $T_{\mathrm{dust}}^{\mathrm{light}} = 20.0 \cdot U^{1/5.57} \ \mathrm{[K]}$ (and mass-weighted $T_{\mathrm{dust}}^{\mathrm{mass}} = 0.91 \cdot T_{\mathrm{dust}}^{\mathrm{light}}$) by fitting Wien's law to each elementary <cit.> template.
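For reference, these calibrations amount to a few one-line conversions; a minimal sketch with the values quoted above (the helper names are ours):
\begin{verbatim}
def umean_from_lir_mdust(l_ir_lsun, m_dust_msun, p0=138.0):
    """Mean ISRF intensity <U> from L_IR(8-1000um) [Lsun] and
    M_dust [Msun], inverting L_IR = P0 * <U> * M_dust,
    with P0 ~ 120-150 (mean 138) from this work."""
    return l_ir_lsun / (p0 * m_dust_msun)

def tdust_light(u):
    """Light-weighted dust temperature [K], T = 20.0 * U^(1/5.57)."""
    return 20.0 * u ** (1.0 / 5.57)

def tdust_mass(u):
    """Mass-weighted dust temperature [K], 0.91 x light-weighted."""
    return 0.91 * tdust_light(u)
\end{verbatim}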
Studies of $\Tdust$ and $\Umean$ have shown that dust (ISRF) is warmer (stronger) for increasing IR luminosity from local SFGs to (U)LIRGs (e.g., ; ; ), and increases with redshift for the majority of MS galaxies (e.g., ; ; ; ).
Some observations show colder dust temperatures in a few of the most extreme starburst systems (e.g., ; ; ). These are likely due to high dust opacity at shorter wavelengths, which makes the dust SED apparently colder. Observations of SMGs also show colder dust temperatures in some of the less luminous ones, a phenomenon likely driven by the fact that (sub-)mm selection favors cold-dust galaxies whose SEDs peak closer to (sub-)mm wavelengths (e.g., ; ; ; ; ; ).
There is also an interesting finding that for extreme SB galaxies with $\SFR/\SFR_{\mathrm{MS}}>4$, $\Umean$ seems not to evolve with redshift (e.g., ). $\Umean$ in MS galaxies, in contrast, does evolve with redshift, and extrapolation suggests that $\Umean$ in MS galaxies might exceed that in extreme SB galaxies at $z>2.5$, which seems at odds with expectations. Yet this finding might be limited by sample size and selection method, the templates used for SED fitting, as well as the dust optically thin assumption in the templates (e.g., ).
Combined with the results from this work, the $\Tdust$ or $\Umean$ trends are easier to understand when correlated with the molecular gas mean density and temperature. We propose a picture in which the general increase of dust temperature and ISRF is mainly due to the increase in cold molecular gas temperature, caused either by the higher CMB temperature at higher redshifts or by more intense star formation and feedback. While the mean molecular gas density drives $\Umean$ only through a weaker, non-linear trend in most galaxies, merger-driven compaction could strongly increase the gas density and hence boost $\nmean$, $\Tkin$ and $\Umean$ in a small number of SB galaxies. Such an increase in gas density creates more contrast at lower redshifts due to the general decrease of the cosmic molecular gas density and CMB temperature. This could explain why $\Umean$ differs more between MS and SB galaxies at lower redshifts.
§.§ Density or Temperature Regulated Star Formation? The $\Umean$–$\nmean$ and $\Umean$–$\Tkin$ Correlations
Fitted $\nH2$ versus galaxy properties, same as in Fig. <ref>. Data points and error bars show the medians and 1$\sigma$ ranges of the fit of our model grid (Sect. <ref>) to the observed $\R52$.
Fitted $\Tkin$ versus galaxy properties, same as in Fig. <ref>. $\Tkin$ is shown as the median, with error bars representing the 1$\sigma$ range of our model grid fit to the observed $\R52$ (Sect. <ref>).
Figs. <ref> and <ref> show that both $\nmean$ and $\Tkin$ positively correlate with $U$ and $\LIR$ but not with other properties like stellar mass or AGN fraction in our sample.
Yet $\nmean$ correlates with $U$ or $\LIR$ in a non-linear way. Except for high-$z$ SMGs and a few local galaxies with large error bars coming from their large $\R52$ uncertainties, most galaxies are constrained within a narrow range of $\nmean \sim 10^{2}-10^{3} \; \percmcubic$.
Despite the large scatter in the data, we observe a trend with $\nH2 \propto \Umean^{0.70}$ which seems to hold only within the intermediate $\Umean$ range ($\Umean \sim 5 - 20$).
Meanwhile, $\Tkin$ has a tighter correlation ($\sigma\sim0.11$) with $U$ and $\LIR$. We find relations $\Tkin \propto \Umean^{0.33}$ and $\Tkin \propto \LIR^{0.13}$.
Note that by calculating the [C i] $^3P_2$–$^3P_1$ and $^3P_1$–$^3P_0$ excitation temperatures as a probe of the gas kinetic temperature under thermalized conditions, <cit.> and <cit.> also found a positive correlation between the gas kinetic temperature and the dust temperature, the latter being proportional to $\Umean^{0.16}$.
There is also a weak trend that $\Tkin$ increases with $\SFR/\SFR_{\mathrm{MS}}$ (Pearson correlation coefficient $P=0.34$), and the trend between $\nH2$ and $\SFR/\SFR_{\mathrm{MS}}$ is also marginal ($P=0.40$).
Given these results, it is plausible that both the mean gas density and temperature increase from less to more intensely star-forming galaxies.
Yet based on the datasets in this work, it is difficult to statistically decouple $\nmean$ and $\Tkin$ and hence to measure well the shapes of $\Umean$–$\nmean$ and $\Umean$–$\Tkin$ correlations.
However, the non-linear or broken $\Umean$–$\nmean$ correlation and the smoother $\Umean$–$\Tkin$ correlation might imply two scenarios: one for “normal” star-forming galaxies, and one for merger-driven starbursts.
“Normal” galaxies may have a smooth density- and temperature-regulated star formation, whereas strong gas compression in major merger events can induce extraordinarily high $\nmean$ with moderate $\Tkin$ and $\Umean$
(e.g., ).
Further insights will require higher quality, multiple transition CO SLEDs, as we briefly discuss below (Sect. <ref>).
§.§ Implication for star formation law slopes
The star formation (SF) law is the correlation between gas mass (or surface density) and star formation rate (or its surface density), and can be expressed as:
\begin{equation}
\begin{split}
\SFR &= A \cdot \MH2_^{N} \quad \textnormal{or} \\[0.5ex]
\Sigma_{\SFR} &= A \cdot \Sigma_{\mathrm{gas}}^{N}
\end{split}
\end{equation}
where $A$ is the normalization and $N$ is the slope.
After the initial idea presented by <cit.>, <cit.> first systematically measured the SF law to be $\Sigma_{\SFR} \propto \Sigma_{\mathrm{gas}}^{1.4\pm0.15}$ based on observations of nearby spiral and starburst galaxies, where $\Sigma_{\mathrm{gas}}$ is the mass surface density of atomic plus molecular gas, and $\Sigma_{\SFR}$ is the SFR surface density traced by $\mathrm{H}\alpha$ and/or $\LIR$.
This Kennicutt-Schmidt law with $N \approx 1.4$ has been extensively studied in galaxies with $\Sigma_{\mathrm{gas}}\sim1-10^{5}\;\Msun\,\mathrm{pc}^{-2}$ and is widely used in numerical simulations (see reviews by ; ).
However, the actual slope $N$ of the SF law has been long debated. High-resolution (sub-kpc scale) observations in nearby spiral galaxies revealed that atomic gas does not correlate with SFR, whereas only molecular gas traces SFR, and $N$ is close to unity in these galaxies (e.g., ).
Meanwhile, from local SFGs to (U)LIRGs, observations suggest that $N$ is super-linear, ranging from $\sim 1$ to $\sim 2$ (e.g., ; ; ; ; ; ).
Furthermore, <cit.> and <cit.> found that high redshift MS and SB galaxies follow two parallel sequences in the SF law ($M_{\mathrm{H_2}}$–$\SFR$) diagram, each with substantial breadth, and both with $N\sim1.1-1.2$ but with a 0.6 dex mean offset in normalization.
Thus, it is still to be understood why local SFG regions show a linear SF law, while high-$z$ SB galaxies have a much higher $\mathrm{SFE} \equiv \SFR / M_{\mathrm{H_2}}$.
Here we decompose the SF law into $\Umean$ and $\nmean$ to gain some insights. First, it is known that the dust obscured SFR can be traced by the IR luminosity (e.g., ) as
$\SFR = \LIR / C_{\mathrm{IR}}$, where $C_{\mathrm{IR}} \sim 10^{10} \; [\Lsun \, (\Msun \mathrm{yr}^{-1})^{-1}]$ assuming a <cit.> IMF.
Second, as mentioned in the previous section,
$ \Umean = P_0^{-1} \cdot \LIR / \Mdust $.
Third, we use the gas-to-dust ratio $\deltaGDR \equiv \Mgas / \Mdust$ to link gas to dust mass. This ratio varies with metallicity (e.g., ; ; ; ; ); note also that the gas in $\deltaGDR$ is defined as atomic plus molecular. We therefore include an additional molecular hydrogen fraction $\fH2_ \equiv \MH2_ / \Mgas$ with the gas-to-dust ratio, finally obtaining $\Mdust = \MH2_ \cdot (\fH2_ \deltaGDR)^{-1}$.
Fourth, we consider MS galaxies to be disks with radius $r$ and height $h$, and assume that $\nmean$ is the global mean gas density, thus the molecular gas mass can be expressed as the product of the volume and the mean molecular gas density:
$ \MH2_ = \pi \cdot \nmean \cdot r^2 \cdot h $.
Fifth, we ignore atomic gas and consider only the molecular gas SF law.
Then, we rewrite the SF law equation as:
\begin{equation}
\begin{split}
& \SFR = A \cdot \MH2_^{N} \\[0.5ex]
& \implies \Umean \cdot \Mdust = A \cdot P_0^{-1} \cdot C_{\mathrm{IR}} \cdot \MH2_^{N} \\[0.5ex]
& \implies \Umean \cdot (\fH2_ \, \deltaGDR)^{-1} = A \cdot P_0^{-1} \cdot C_{\mathrm{IR}} \cdot \MH2_^{N-1} \\[0.5ex]
& \implies \Umean \cdot (\fH2_ \, \deltaGDR)^{-1} = \\
& \qquad \qquad \qquad A \cdot P_0^{-1} \cdot C_{\mathrm{IR}} \cdot (\pi \cdot \nmean \cdot r^2 \cdot h)^{N-1} \\[0.5ex]
\end{split}
\label{Equation_SFLaw_mol_1}
\end{equation}
Taking the logarithm of both sides, and assuming that $\log \Umean$, $\log (\fH2_ \deltaGDR)$ and $\log (r^2 h)$ are functions of $\log \nmean$, we have:
\begin{equation}
\begin{split}
& {\log \Umean} - {\log (\fH2_ \deltaGDR)} = \\[0.3ex]
& \quad \log (A \, P_0^{-1} \, C_{\mathrm{IR}}) \, + \, (N-1) \left[ \log \nmean + \log (\pi r^2 h) \right] \\[0.6ex]
& \implies N = \frac{ %
\frac{\mathrm{d} \log \Umean}{\mathrm{d} \log \nmean} - %
\frac{\mathrm{d} \log (\fH2_ \deltaGDR)}{\mathrm{d} \log \nmean} %
}{ %
1 + %
\frac{\mathrm{d} \log (r^2 h)}{\mathrm{d} \log \nmean} %
} + 1 \\
\end{split}
\label{Equation_SFLaw_mol}
\end{equation}
Therefore, the SF law slope $N$ depends on how $\Umean$, $\fH2_ \deltaGDR$ (metallicity) and $r^2 h$ (galaxy size) change with $\nmean$, as described by the differentials $\frac{\mathrm{d} \log \Umean}{\mathrm{d} \log \nmean}$, $\frac{\mathrm{d} \log (\fH2_ \deltaGDR)}{\mathrm{d} \log \nmean}$ and $\frac{\mathrm{d} \log (r^2 h)}{\mathrm{d} \log \nmean}$, respectively. These differentials strongly depend on the galaxy sample. When studying sub-kpc regions in local SFGs, if the ISRF, metallicity and galaxy size are similar among these SFGs, $N$ is close to 1. When instead studying a sample including both SFGs and (U)LIRGs, $\Umean$ increases by a factor of a few tens as $\nmean$ changes from $10^{2}$ to $10^{4}\;\percmcubic$, and $r$ decreases by a factor of a few with $\nmean$, as (U)LIRGs are usually smaller and more compact (while the scale height $h$ seems constant, e.g., ). As for the $\fH2_ \deltaGDR$ term, because $\fH2_$ increases with metallicity while $\deltaGDR$ decreases with it, their product likely does not change much. Therefore, $N$ can be much higher than 1. The overall effect is that the SF law does not have a single slope, yet the overall $N$ is about 1–2.
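To make the dependence explicit, the following small numerical sketch evaluates Eq. <ref> with illustrative derivative values; the numbers are ours, chosen only to mimic the two regimes discussed above:
\begin{verbatim}
def sf_law_slope(dlogU_dlogn, dlogGDR_dlogn=0.0, dlogSize_dlogn=0.0):
    """SF-law slope N from Eq. (SFLaw_mol):
    N = (dlogU - dlogGDR) / (1 + dlogSize) + 1,
    with all derivatives taken w.r.t. log n_mean and
    dlogSize = d log(r^2 h) / d log n_mean."""
    return (dlogU_dlogn - dlogGDR_dlogn) / (1.0 + dlogSize_dlogn) + 1.0

# sub-kpc regions of similar SFGs: all derivatives ~ 0  ->  N ~ 1
print(sf_law_slope(0.0))                  # 1.0
# SFGs-to-(U)LIRGs: <U> rises ~x30 over 2 dex in n_mean (dlogU ~ 0.75)
# while r^2 h shrinks (dlogSize ~ -0.25)  ->  N well above 1
print(sf_law_slope(0.75, 0.0, -0.25))     # 2.0
\end{verbatim}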
§.§ Limitations and outlook
We discuss three limitations of this work: the overall quality of current datasets, the assumptions in the gas modeling, and the contamination from AGN. First, CO line ratio or SLED studies require two or more CO line observations. These observations have different observing conditions, beam sizes, flux calibrations, etc., thus uncertainties are very likely underestimated even when the S/N of the line measurements is formally high (e.g., $>3$). For example, for our local SFG subsample, the CO(5-4) data are from the Herschel FTS with a beam size of $\sim40''$, which does not match the mapping area of CO(2-1) from ground-based telescopes. The correction from the FTS beam to the entire galaxy can differ by a factor of two, which is reflected in the scatter of our data points although not fully reflected in their errorbars. The absolute flux calibration uncertainty of the observations in the literature can also be as high as $\sim30\%$, much poorer than current IRAM 30m and ALMA (total power) observations ($<10\%$). This also increases the scatter in our plots and necessarily makes the observed correlations less significant. As for high-redshift galaxies, we use a $\SNR$ of 3 in both CO lines to select our sample, which usually reflects only the quality of the line measurements and does not include the absolute flux calibration uncertainty. Their dust SEDs are also much more poorly covered, thus their $\Umean$ have fairly large uncertainties. Future ALMA Band 3 to 8 mapping of CO lines from $J_{\mathrm{u}}=1$ to $4$ in local galaxies, and VLA plus ALMA observations of suitable galaxies at high redshift with high-quality CO and continuum data, will be the key to both spatially understanding and statistically verifying the correlations between $\Umean$, $\nmean$ and $\Tkin$, as well as to unveiling any evolutionary trend with redshift.
Second, our assumptions in the gas modeling are also simplistic, in order to reflect only the effects of density and temperature on CO excitation. The constant $\alphavir$ assumption does not reflect the real situation in galaxies, e.g., as shown in <cit.>. Doubling the $\alphavir$ value from what we use in this work will result in a 20% lower $\R52$ at $\log_{10} (\nmean/\mathrm{cm^{-3}}) = 3$, $\Tkin = 25\,\mathrm{K}$ and $z=0$. The constant $\Tkin$ assumption for all one-zone clouds in a galaxy is also a simplified “toy model”-like condition. Adopting more realistic assumptions from observations (e.g., ) or from hydrodynamic+chemistry simulations (e.g., ; ) in our gas modeling would naturally be the next step.
Third, it is known that some galaxies host AGNs which significantly contribute to the optical or mid-IR SEDs as well as affect the CO excitation. Our SED fitting already includes a mid-IR AGN component that can dominate the rest-frame $5-50\,\mu\mathrm{m}$ emission. This substantially improves the fitting $\chi^2$ for a number of galaxies showing a mid-IR power-law SED feature, which, however, also brings in larger uncertainties in $\Umean$, as reflected in the errorbars in our plots. The adopted AGN SED templates could also slightly affect our results, although this effect should be well captured by the quoted uncertainties. Additional mid-IR photometry from future space telescopes like the James Webb Space Telescope (JWST) and the Origins Space Telescope (OST) will be key to resolving this degeneracy and providing accurate AGN/ISRF decomposition.
Meanwhile, an AGN can also boost highly-excited CO lines within X-ray dominated regions (XDRs), as shown by $J_{\mathrm{u}} \gtrsim 9$ CO studies (e.g., ). Decomposition of such AGN-dominated CO SLEDs usually requires three components, but the XDR component starts to dominate the CO SLED only at $J_{\mathrm{u}} \gtrsim 9$. Thus, for this work, the AGN contribution at CO(5-4) is likely less than 10% (e.g., see Fig. 2 of ).
§ SUMMARY
In this work, we compiled a comprehensive sample of galaxies from local to high redshift with CO(2-1) and CO(5-4) detections and well-sampled IR SEDs.
This includes our new IRAM PdBI CO(5-4) observations of six $z\sim1.5$ COSMOS starburst galaxies.
With this large sample, we measure their mean ISRF intensity $\Umean$ from dust SED fitting (Sect. <ref>), and their mean molecular gas density $\nmean$ converted from $\R52=S_{\mathrm{CO(5\textnormal{-}4)}}/S_{\mathrm{CO(2\textnormal{-}1)}}$ line ratios based on our density-PDF gas modeling (Sect. <ref>).
Our results can be summarised as follows.
We confirm the tight $\Umean$–$\R52$ correlation first reported by <cit.>, and find that $\Umean$, $U_{\mathrm{min}}$ and $\LIR$ all strongly correlate with $\R52$, while stellar mass, AGN fraction, and the SFR offset to the MS all show weaker or no correlation with $\R52$ (Fig. <ref>).
We conduct density-PDF gas modeling to connect the mean molecular gas density $\nmean$ and kinetic temperature $\Tkin$ to the observable CO line ratio $\R52$. Based on this model grid, we provide a Monte Carlo method (and a Python package co-excitation-gas-modeling) to compute the probability ranges of $\nmean$ and $\Tkin$ for any given $J_{\mathrm{u}}=1-10$ CO line ratio (and for CO SLEDs as the next step; see, e.g., Fig. <ref>).
We find that both $\nmean$ and $\Tkin$ increase with $\Umean$, with $\Tkin$ having a tighter correlation with $\Umean$.
Based on these correlations, we propose a scenario in which the ISRF in the majority of galaxies is more directly regulated by the gas temperature and non-linearly by the gas density. A fraction of SB galaxies have gas densities larger by more than one order of magnitude with respect to MS galaxies and are possibly in a merger-driven compaction stage (Sects. <ref> and <ref>).
We link the $\Umean$–$\nmean$ correlation to the Kennicutt-Schmidt SF law, and discuss how the SF law slope $N$ can be inferred from the slope of the $\Umean$–$\nmean$ correlation and the correlations of other galaxy properties with $\nmean$. We find that $N\sim1-2$ can be inferred from the trends of how $\Umean$ and galaxy size change with $\nmean$ in different galaxy samples (Sect. <ref>).
Our study demonstrates that ISRF and molecular gas are tightly linked to each other, and density-PDF gas modeling is a promising tool for probing detailed ISM physical quantities, i.e., molecular gas density and temperature, from observables like CO line ratios/SLEDs.
Data availability:
Our michi2 SED fitting code is publicly available at <https://ascl.net/code/v/2533>.
Our Python package co-excitation-gas-modeling for computing $\nmean$ and $\Tkin$ from CO line ratios is publicly available at: <https://pypi.org/project/co-excitation-gas-modeling>.
And our SED fitting figures as shown in Fig. <ref> and full Table <ref> are publicly available at: <https://doi.org/10.5281/zenodo.3958271>.
We thank the anonymous referee for helpful comments.
DL, ES and TS acknowledge funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (grant agreement No. 694343).
GEM acknowledges the Villum Fonden research grant 13160 “Gas to stars, stars to dust: tracing star formation across cosmic time” and the Cosmic Dawn Center of Excellence funded by the Danish National Research Foundation under grant No. 140.
YG's research is supported by National Key Basic Research and Development Program of China (grant No. 2017YFA0402700), National Natural Science Foundation of China (grant Nos. 11861131007, 11420101002), and Chinese Academy of Sciences Key Research Program of Frontier Sciences (grant No. QYZDJSSW-SLH008).
SJ acknowledges financial support from the Spanish Ministry of Science, Innovation and Universities (MICIU) under grant AYA2017-84061-P, co-financed by FEDER (European Regional Development Funds).
AP gratefully acknowledges financial support from STFC through grants ST/T000244/1 and ST/P000541/1.
We thank A. Weiss and C. Wilson for helpful discussions.
This work used observations carried out under project number W14DS with the IRAM Plateau de Bure Interferometer (PdBI). IRAM is supported by INSU/CNRS (France), MPG (Germany) and IGN (Spain).
This work used observations carried out under project 17A-233 with the National Radio Astronomy Observatory's Karl G. Jansky Very Large Array (VLA). The National Radio Astronomy Observatory is a facility of the National Science Foundation operated under cooperative agreement by Associated Universities, Inc.
§ IRAM PDBI CO OBSERVATIONS OF $Z\SIM1.5$ FMOS COSMOS GALAXIES
We present the sample table and CO(5-4) imaging of our PdBI observations in Table <ref> and Fig. <ref>. The observations are described in Sect. <ref>.
CO observation results.
Source | R.A.$_\mathrm{CO}$ | Dec.$_{\mathrm{CO}}$ | $z_{\mathrm{CO}}$ | $\Delta{V}_{\mathrm{CO}}$ [$\mathrm{km\,s^{-1}}$] | CO Size [$''$] | $S_{\mathrm{CO}(1-0)}$ [$\mathrm{Jy\,km\,s^{-1}}$] | $S_{\mathrm{CO}(2-1)}$ [$\mathrm{Jy\,km\,s^{-1}}$] | $S_{\mathrm{CO}(5-4)}$ [$\mathrm{Jy\,km\,s^{-1}}$]
(1) | (2) | (3) | (4) | (5) | (6) | (7) | (8) | (9)
PACS-819 | 09:59:55.552 | 02:15:11.70 | 1.4451 | 592 | 0.335 | -- | 1.10 $\pm$ 0.07 | 3.850 $\pm$ 0.922
PACS-830 | 10:00:08.746 | 02:19:01.87 | 1.4631 | 436 | 0.973 | -- | 1.18 $\pm$ 0.10 | 1.876 $\pm$ 0.387
PACS-867 | 09:59:38.078 | 02:28:56.73 | 1.5656 | 472 | -- | 0.119 $\pm$ 0.064 | 0.46 $\pm$ 0.04 | 0.731 $\pm$ 0.218
PACS-299 | 09:59:41.295 | 02:14:43.03 | 1.6483 | 590 | -- | $<$ 0.210 $^{a}$ | 0.67 $\pm$ 0.08 | 1.758 $\pm$ 0.325
PACS-325 | 10:00:05.475 | 02:19:42.61 | 1.6538 | 764 | -- | -- | 0.28 $\pm$ 0.06 | $<$ 0.942 $^{a}$
PACS-164 | 10:01:30.530 | 01:54:12.96 | 1.6481 | 894 | -- | $<$ 0.222 $^{a}$ | 0.61 $\pm$ 0.11 | 1.175 $\pm$ 0.465
Columns (1–6) and (8) are the ALMA CO(2-1) properties reported by <cit.>.
Columns (7) and (9) show the results from this work for VLA CO(1-0) and IRAM PdBI CO(5-4), respectively. Entries marked “--” are not available.
$^{a}$ $3\,\sigma$ upper limits.
CO(5-4) line maps for PACS-819, 830, 867, 299, 325 and 164, respectively. In the last two panels, PACS-325 and 164 are undetected. The field of view is $12''\times12''$ in all panels. Contours have a spacing of $1\,\sigma$ noise in each panel. The cross-hair indicates the phase center and the box indicates the ALMA CO(2-1) emission peak position, which is also the position where we extract the CO(5-4) line fluxes.
§ SOME NOTES ON CO OBSERVATIONS OF INDIVIDUAL NEARBY GALAXIES IN THE LITERATURE
CenA: We excluded this galaxy because its CO(2-1) and CO(5-4) data only cover the center of the galaxy, and a significant correction is needed to recover the entire galaxy. For example, <cit.> applied a correction factor of $1/0.48$, where $0.48$ is the beam-aperture-to-entire-galaxy fraction denoted “BeamFrac” and reported in the full Table <ref> (available online), to convert the CO(2-1) observed at the galaxy center with a beam of $22''$ () to a beam of $43''$ for their study. They derived this factor based on SPIRE $250\,\mu$m image aperture photometry. Based on PACS $70-160\,\mu$m imaging (as presented in ), we obtain correction factors of $1/0.088$ and $1/0.185$ from a $22''$ and a $43''$ beam to the entire galaxy, respectively. Thus the $22''$-to-$43''$ correction factors in the two works fully agree ($0.088/0.185 \approx 0.48$).
Despite the good agreement in beam related correction among these works, we caution that using far-infrared data to correct CO(2-1) is very uncertain as low-$J$ CO lines do not linearly correlate with far-infrared emission ().
M83: We excluded this galaxy in this work as well. <cit.> reported a CO(2-1) line flux of $261 \pm 15 \; \Kkms_$ ($5501 \pm 316 \; \Jykms$) within a $22''$ beam (with SEST 15m) at the M83 galaxy center, <cit.> reported $98.1 \pm 0.8 \; \Kkms_$ ($2068 \pm 17 \; \Jykms$) within a $22''$ aperture (with JCMT 15m) at the same center, and <cit.> reported $67.4 \pm 2.2 \; \Kkms_$ ($2721 \pm 88 \; \Jykms$) within a $30.5''$ beam (with CSO 10.4m), also at the center position. <cit.> adopted the <cit.> line flux and applied a factor of $1/0.76$ correction to obtain the line flux within a $43''$ beam. This correction factor agrees with . However, to obtain the entire flux for M83, we would need to correct the $43''$ flux by a further factor of $1/0.232$ based on the Herschel PACS aperture photometry in . We caution that the uncertainty in such a correction is large, and the CO(2-1) line fluxes at the galaxy center in the literature are already inconsistent by a factor of two.
NGC0253: At the galaxy center position, the reported CO(2-1) line fluxes are:
$6637 \pm 996 \; \Jykms$ within a $12''$ beam (),
$10684 \pm 1602 \; \Jykms$ within a $15''$ beam (),
$17757 \pm 3551 \; \Jykms$ within a $21''$ beam (),
$24428 \pm 2686 \; \Jykms$ within a $23''$ beam (),
$33800 \pm 3200 \; \Jykms$ within a $43.5''$ beam (),
and $34300 \pm 3600 \; \Jykms$ within a $43.5''$ beam (; corrected from the original beam in ).
The beam-to-entire-galaxy fraction, “BeamFrac”, is 0.518 from $43.5''$ to the entire galaxy based on .
These fluxes are roughly consistent, and the “BeamFrac”-based correction factor is only a factor of two, thus we take the last $43.5''$-beam flux and obtain $66216 \pm 15010 \; \Jykms$ as the CO(2-1) flux for the entire NGC0253, where we added a 0.2 dex uncertainty to the $43.5''$-beam flux error. We caution that even with this additional uncertainty, the flux error might still underestimate the true uncertainty, which includes the original flux calibration and measurement error in <cit.>, the correction from the original beam to $43.5''$ by <cit.>, and the correction from the $43.5''$ beam to the entire galaxy.
NGC0891: We excluded this galaxy in this work. <cit.> reported a CO(2-1) line flux of $86 \pm 6 \; \Kkms_$ ($1974 \pm 138 \; \Jykms$) within a convolved $23''$ beam at the galaxy center position. <cit.> converted the same line brightness temperature from <cit.> to a flux of $381 \pm 26 \; \Jykms$, which, however, is lower than our converted value in parentheses, possibly because the original $12''$ beam was mistakenly used in the calculation, while the brightness temperature reported by <cit.> had been convolved to a $23''$ beam, as mentioned in their Table 1 caption.
We note that the correction factor from $12''$ or $23''$ to the entire NGC0891 is as large as $\sim10$, e.g., the “BeamFrac” from $16.9''$ to entire galaxy is 0.112 as measured by . Thus it is too uncertain to consider this galaxy in this work.
As for CO(1-0), <cit.> reported a line flux of $96 \pm 5 \; \Kkms_$ ($551 \pm 28 \; \Jykms$) within a $23''$ beam at the galaxy center position. This can be corrected to the entire galaxy scale as $3908 \pm 204 \; \Jykms$ based on .
<cit.> reported a line flux of $35.5 \pm 5 \; \Kkms_$ ($963 \pm 136 \; \Jykms$) within a $50''$ beam (with FCRAO 14m), and a global scale integrated flux of $3733.7 \; \Jykms$.
They are consistent within errors.
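The $\Kkms_$-to-$\Jykms$ conversions quoted in these notes follow the standard Rayleigh–Jeans relation for a Gaussian beam, $S/T_{\mathrm{mb}} \approx 8.18\times10^{-7}\,(\theta_{\mathrm{FWHM}}/\mathrm{arcsec})^{2}\,(\nu/\mathrm{GHz})^{2}\;\mathrm{Jy\,K^{-1}}$. A small helper (our own utility, not from the cited works) reproduces the numbers above:
\begin{verbatim}
def kkms_to_jykms(t_mb_kkms, beam_fwhm_arcsec, freq_ghz):
    """Convert a velocity-integrated brightness temperature [K km/s]
    to a line flux [Jy km/s] for a Gaussian beam of given FWHM,
    using S/T_mb = 8.18e-7 * theta_as^2 * nu_GHz^2 [Jy/K]."""
    return 8.18e-7 * beam_fwhm_arcsec**2 * freq_ghz**2 * t_mb_kkms

# NGC0891 CO(2-1): 86 K km/s in a 23" beam -> ~1978 Jy km/s (cf. 1974)
print(kkms_to_jykms(86.0, 23.0, 230.538))
# NGC0891 CO(1-0): 96 K km/s in a 23" beam -> ~552 Jy km/s (cf. 551)
print(kkms_to_jykms(96.0, 23.0, 115.271))
\end{verbatim}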
NGC1068: <cit.> reported a CO(2-1) line flux of $240 \pm 10 \; \Kkms_$ ($5488 \pm 229 \; \Jykms$) within a convolved $23''$ beam at the galaxy center position.
<cit.> converted the same line brightness temperature from <cit.> to a flux of $1967.2 \pm 80 \; \Jykms$, which is also inconsistent with our converted value (in parentheses), possibly due to mistakenly using the original $12''$ beam in the calculation.
<cit.> reported a CO(2-1) line flux of $11300 \pm 2200 \; \Jykms$ within the inner $40''$ of NGC1068 (originally from ).
<cit.> reported a CO(2-1) line flux of $8366 \pm 19 \; \Jykms$ within a beam of $30''$ (with CSO 10.4m), which is then corrected to $43''$-beam flux of $11700 \pm 1100 \; \Jykms$ by <cit.>.
<cit.> also cited <cit.>'s flux and reported a $43''$-beam flux of $12600 \pm 2500 \; \Jykms$ converted from a $12''$ beam. But note that <cit.> might have mistakenly used a $12''$ beam in the calculation.
If we directly correct the <cit.> $23''$-beam flux to a $43''$ beam, it is $8669 \; \Jykms$, which, however, is 30% smaller than that in <cit.>.
Meanwhile, if we correct <cit.>'s flux from $30''$-beam to $43''$-beam, it is $10542\; \Jykms$, consistent with both <cit.> and <cit.>.
Given that the difference is only about 30%, in this work we adopt the average of these fluxes,
i.e., $10170 \; \Jykms$ for a $43''$-beam, or $15551 \; \Jykms$ corrected to the entire galaxy scale (based on BeamFrac).
For CO(1-0), we perform our own photometry using the Nobeyama 45m COAtlas Survey data () and obtain a flux of $5228 \; \Jykms$.
This is 40% higher than the global scale line flux of $3651.1 \; \Jykms$ measured by <cit.> using FCRAO mapping observations, but closer to the line flux of $4240 \; \Jykms$ within a $43''$ beam reported by <cit.>, which cites <cit.> and is originally also from <cit.>.
NGC1365: NGC1365 was observed at two positions by the Herschel SPIRE FTS, one at the North-East (NGC1365-NE) and one at the South-West (NGC1365-SW). The two positions have CO(5-4) fluxes consistent within 10%, but the IR luminosities within the apertures differ by 25%. This means our aperture-based beam-to-entire-galaxy correction has at least a 25% uncertainty (the same holds for the independent analysis with a similar method by ). For CO(2-1) we use the same <cit.> SEST 15m ($24''$ beam) data as in <cit.>, and correct it to the entire galaxy scale to match our corrected CO(5-4).
NGC1614: CO(2-1) is from <cit.>, observed with SEST 15m ($22''$ beam; $\eta_{\mathrm{mb}}=0.5$, $\int T_{\mathrm{mb}} \mathrm{d} v = 56 \pm 2 \; \Kkms_$ or line flux $1180 \pm 42 \; \Jykms$). We correct from $22''$ beam to the entire galaxy with a BeamFrac of $0.792$ ().
Meanwhile, note that <cit.> reported an interferometric integrated CO(2-1) flux of $670 \pm 7 \; \Jykms$ (synthesized beam $3.7'' \times 3.3''$). The discrepancy of about 50% is likely due to missing flux in the interferometric observation (see ).
NGC2369: CO(2-1) is from <cit.>, observed with SEST 15m ($22''$ beam; $\eta_{\mathrm{mb}}=0.5$, $\int T_{\mathrm{mb}} \mathrm{d} v = 74 \pm 2.4 \; \Kkms_$ or line flux $1560 \pm 51 \; \Jykms$).
Meanwhile, note that <cit.> reported $959.4 \pm 14.3 \; \Jykms$ which is originally from <cit.> also with SEST 15m ($\int T_{\mathrm{mb}} \mathrm{d} v = 46.8 \pm 0.7 \; \Kkms_$; with $\eta_{\mathrm{mb}}=0.54$). The reason for this factor of two discrepancy is unclear. Here we take their average ($1259.7 \; \Jykms$) and correct from the $22''$ beam to the entire galaxy with a BeamFrac of $0.808$ ().
NGC2623: <cit.> reported an interferometric integrated CO(2-1) flux of $267 \pm 8 \; \Jykms$ observed with SMA. <cit.> cited this flux in their study, and discussed that this flux is unlikely affected by missing flux.
NGC3256: <cit.> reported a CO(2-1) flux of $\int T_{\mathrm{mb}} \mathrm{d} v = 314 \pm 8 \; \Kkms_$ ($6619 \pm 169 \; \Jykms$) observed with SEST 15m ($22''$ beam; $\eta_{\mathrm{mb}}=0.5$).
Meanwhile, <cit.> reported $2980.7 \pm 14.3 \; \Jykms$ which is originally from <cit.> also observed with SEST 15m ($\int T_{\mathrm{mb}} \mathrm{d} v = 145.5 \pm 0.7 \; \Kkms_$; with $\eta_{\mathrm{mb}}=0.7$). Similar to NGC2369, the reason for the factor of two to three discrepancy is unclear.
We take their average ($4799.85 \; \Jykms$) and correct from the $22''$ beam to the entire galaxy with a BeamFrac of $0.744$ ().
NGC3351: We obtain CO(2-1) and CO(1-0) line fluxes for the entire galaxy from our own photometry of the HERACLES data and the Nobeyama 45m COAtlas Survey () data, measuring $2681 \; \Jykms$ and $1138 \; \Jykms$, respectively. Uncertainties contributed by the noise in the moment-0 maps are about 6% of the measured fluxes.
Note that <cit.> observed a CO(2-1) and CO(1-0) flux of about $642 \; \Jykms$ and $97 \; \Jykms$, respectively, convolved to a $23''$ beam. <cit.> reported a CO(2-1) luminosity of $0.78 \times 10^{5} \; \mathrm{K\,km\,s^{-1}\,arcsec^{2}}$, or a line flux of $2808 \; \Jykms$, for the entire galaxy, consistent with ours.
<cit.> reported a CO(1-0) flux of about $210 \; \Jykms$ within a $21.3''$ beam at the central position.
NGC3627: Similar to NGC3351, we obtain the CO(2-1) and CO(1-0) line fluxes for the whole galaxy via our own photometry of the HERACLES and NRO45m COAtlas data, measuring $9219 \; \Jykms$ and $7366 \; \Jykms$, respectively. Note that <cit.> reported a global CO(1-0) flux of $4477 \; \Jykms$, which is about 40% lower than ours.
NGC4321: Similar to NGC3351 and NGC3627, the global CO(2-1) and CO(1-0) line fluxes are obtained as $9088 \; \Jykms$ and $2251 \; \Jykms$, from the HERACLES and the NRO45m COAtlas data, respectively.
Note that <cit.> observed a CO(1-0) flux of $445 \; \Jykms$ within a $23''$ beam, which can be corrected to a consistent entire galaxy flux of $2280 \; \Jykms$ by a BeamFrac of $0.195$ (). While <cit.> observed a CO(1-0) flux of $174 \; \Jykms$ within a $16''$ beam, which is somehow lower than others.
NGC4945: <cit.> observed the central position of NGC4945 with SEST 15m and obtained a CO(2-1) flux of $\int T_{\mathrm{mb}} \mathrm{d} v = 920.9 \pm 0.6 \; \Kkms_$ ($19412 \pm 12.6 \; \Jykms$, for point source response in a $22''$ beam).
<cit.> cited the same <cit.> CO(2-1) flux as $18878.5 \pm 12.3 \; \Jykms$, which is consistent with our conversion.
<cit.> also observed the central position of NGC4945 with SEST 15m. They reported a CO(2-1) flux of $\int T_{\mathrm{mb}} \mathrm{d} v = 740 \pm 40 \; \Kkms_$, about 20% lower than that of <cit.>.
As discussed in <cit.>, the reason for the discrepancy is unclear, but this shows that the uncertainty in the CO(2-1) flux at the galaxy center is at least 20%.
We take the average ($17505 \; \Jykms$) in this work, and estimate the entire galaxy CO(2-1) flux to be $31770 \; \Jykms$ based on a BeamFrac of $0.551$ () from the $22''$ beam.
NGC6946: The global CO(2-1) and CO(1-0) line fluxes are obtained as $36296 \; \Jykms$ and $11454 \; \Jykms$, from the HERACLES and the NRO45m COAtlas data, respectively.
This is in good agreement with the global scale CO(1-0) flux of $11400.5 \; \Jykms$ reported by <cit.> using NRAO 12m mapping data.
§ COMPARISON OF SED FITTING CODES
We performed additional MAGPHYS () and CIGALE () SED fitting to verify our michi2 SED fitting results. We use the updated MAGPHYS version with the high-$z$ extension (<http://www.iap.fr/magphys/download.html>), and CIGALE version 2020.0 (June 29th, 2020) (<https://cigale.lam.fr/download/>). We modified the MAGPHYS FORTRAN source code to allow for longer photometry filter names and a larger filter number. MAGPHYS and CIGALE require a list of preset filters, for which we choose the following list:
GALEX FUV and NUV, KPNO MOSAIC1 $u$, CFHT MegaCam $u$ band, SDSS $ugriz$, Subaru SuprimeCam $BVriz$, GTC $griz$, VISTA VIRCAM $Y,\,J,\,H,\,K_s$, HST ACS F435W/F606W/F755W/F814W and WFC3 F125W/F140W/F160W, Spitzer IRAC ch1/2/3/4, IRS PUI 16 $\mu$m and MIPS 24 $\mu$m, Herschel PACS 70/100/160 and SPIRE 250/350/500 $\mu$m, SCUBA2 450/850 $\mu$m, VLA 3/1.4 GHz, and pseudo 880/1100/1200/2000 $\mu$m filters.
Other photometric data, like sub-mm interferometry data (e.g., from ALMA) and some optical data, are ignored. Note that in our michi2 fitting, bands without a known filter curve are automatically assigned a pseudo delta-function filter curve.
The current MAGPHYS code does not include the fitting of a mid-IR AGN SED component, although such an extension has been used non-publicly in some studies (). MAGPHYS has preset stellar libraries, dust attenuation laws and dust libraries, so no parameters need to be adjusted. We run MAGPHYS only for $z>0.03$ galaxies, however, as MAGPHYS computes the luminosity and mass properties with the luminosity distance, which does not match the physical distance at very low $z$.
CIGALE has the capability of including a mid-IR AGN component, as does our michi2 code. The current version of CIGALE uses AGN emission models computed from physical modeling of the AGN torus by <cit.>. This allows much more freedom than the observationally-derived AGN templates by <cit.> used by michi2. However, it can also easily over-fit the data when there are only a few broadband photometric data points at mid-IR $\sim8-100\,\mu\mathrm{m}$.
For our fitting with CIGALE, we fix several AGN parameters based on the fitting results of starburst galaxies in <cit.>: r_ratio = 60, beta = -1.0, gamma = 6.0, opening_angle = 140.0, and let the following parameters vary: tau = 1.0,3.0,6.0, psy = 0.001, 10.100, 20.100, 30.100, and fracAGN = 0.0,0.2,0.4,0.6.
For the stellar component in our CIGALE fitting of high-redshift galaxies, we use a constant SFH as in our michi2 fitting. This is achieved by adding sfhperiodic into the CIGALE sed_modules, and setting type_bursts = 2, delta_bursts = 200, tau_bursts = 200. To allow the fitting of a range of stellar ages, we set age = 200,300,400,500,600,700,800,900,1000,2000 for the bc03 SED module. We adopt the <cit.> dust attenuation law as in our michi2 fitting by adding dustatt_modified_starburst to the SED modules, and setting E_BV_lines to 0.0 to 2.0 in steps of 0.2.
Meanwhile, for local galaxies ($z<0.03$) in our sample, whose stellar ages are generally older, a constant-SFH stellar component cannot fit the stellar SED well. Thus, we adopt the exponentially declining SFH sfh2exp in CIGALE, and set tau_main = 200,500,1000,2000,4000 and age = 200,500,1000,2000,4000,6000,8000,10000. We turn off the burst model by setting f_burst = 0.0.
The dust template used in the CIGALE fitting is also the same as in our michi2 fitting, i.e., the <cit.> updated DL07 templates. $\Umin$ (umin) is set to vary from 1.0 to 50, and $\fPDR$ (gamma) from 0.0 to 1.0.
We also set lim_flag = True to allow CIGALE to analyze the photometric upper limits (to achieve this we need to flip the sign of the flux errors for photometry with $\SNR<3$). Each galaxy then has about 696960 models fitted. In comparison, MAGPHYS in general fits 13933 optical models and 24999 IR dust models for each galaxy.
In Fig. <ref>, we compare the fitted dust 8–1000 $\mu$m luminosities and stellar masses from the three SED fitting codes. In the left panel of Fig. <ref> we compare the fitted $\Umean$ from michi2 and CIGALE, as MAGPHYS does not have the same <cit.> library. Note that not all fittings show a reasonable $\chi^2$, as can be seen in the right panel of Fig. <ref>, where the histograms of reduced-$\chi^2$ are shown for the three fitting codes. CIGALE fittings in general have a higher reduced-$\chi^2$, meaning poorer fits than michi2, whereas MAGPHYS produces slightly better fits than michi2. However, both MAGPHYS and CIGALE have a number of very poor/failed fitting cases which have reduced-$\chi^2 \gtrsim 10$ and are the outlier data points in Fig. <ref>. The threshold reduced-$\chi^2 \sim 8-10$ is empirically estimated after visually examining the SED fitting results
[All SED fitting figures are available at <https://doi.org/10.5281/zenodo.3958271>.]
. Only two sources (CenA and NGC0253) exhibit reduced-$\chi^2 > 10$ in the michi2 fitting, and their IR-to-mm SEDs are actually well fitted, leaving only the stellar part poorly constrained.
In comparison, there are 12 poor/failed cases in CIGALE fitting,
and 4 in MAGPHYS fitting.
The main reason for these poor/failed cases is likely the energy balance enforced in MAGPHYS and CIGALE. In these cases, the stellar part of the SED fitting gives a dust attenuation that cannot fully balance the far-IR/mm emission; this is also mentioned in other studies of extremely dust-obscured high-redshift galaxies (e.g., ).
Apart from these poor/failed fittings, the fitted IR luminosities and stellar masses agree reasonably well, within about 0.3 dex.
In the comparison of $\Umean$ in Fig. <ref>, we excluded the 12 sources with reduced-$\chi^2 > 10$ in the CIGALE fitting. Although most sources have consistent $\Umean$, a small number do not, and they mostly come from the V20 subsample. This is mainly because they have very poor IR photometry apart from one or two sub-mm interferometric data points, which we chose to skip in our CIGALE fitting tests due to the filter setting. Fitting the sub-mm interferometric photometry would require adding a fake filter in CIGALE and re-running the fitting for each of these sources, after which more consistent results are expected. These comparisons therefore show that, for sources with good reduced-$\chi^2$ and photometric data, michi2 and CIGALE place similar constraints on $\Umean$.
Comparison of the fitted 8–1000 $\mu$m dust luminosities (left panel) and stellar masses (right panel) from three SED fitting codes: michi2, CIGALE and MAGPHYS. X-axes in both panels indicate the fitted parameters from michi2, whereas Y-axes indicate those from either CIGALE (blue circles) or MAGPHYS (orange triangles). The dashed line is the one-to-one relation, and the grey shading indicates a $\pm 0.3\,\mathrm{dex}$ range. Error bars show the fitted 16th and 84th percentiles, and symbols are centered at the minimum-$\chi^2$/highest-probability values.
Left panel shows the comparison of the fitted $\Umean$ from michi2 and CIGALE. Symbols are similar to Fig. <ref>, except that data points with reduced-$\chi^2 > 10$ are excluded.
Right panel shows the histograms of the reduced-$\chi^2$ from the michi2, CIGALE and MAGPHYS SED fittings. Given that our fittings span from UV/optical to mm/radio wavelengths, a reduced-$\chi^2$ larger than unity is not unexpected. A value of a few still indicates a reasonable fit in our cases, but $>10$ usually means a poor or failed fit.
§ GAS MODELING PREDICTION ON LINE OPTICAL DEPTH AND [CI]/CO LINE RATIO
We present the predicted [C i]($^{3}P_1-^{3}P_0$) (hereafter [C i](1-0)) and CO(1-0) line optical depths and the [C i](1-0)/CO(1-0) line ratio in surface brightness unit ($R^{\prime}_{\mathrm{CICO}}$) in Fig. <ref>, Fig. <ref> and Fig. <ref>.
The optical depths shown in Fig. <ref> agree with normal conditions where CO(1-0) is optically thick while [C i](1-0) is roughly optically thin or has $\tau\sim1$.
Fig. <ref> and Fig. <ref> show $R^{\prime}_{\mathrm{CICO}}$ as a function of the one-zone cloud molecular gas density and of the mean molecular gas density of the composite PDF, respectively. As in Fig. <ref> and Fig. <ref>, we show our predictions at four redshifts, $z=0$, $1.5$, $4$ and $6$, and with three representative $\Tkin=25$, $50$, and $100\;\mathrm{K}$.
$R^{\prime}_{\mathrm{CICO}}$ increases with $\nH2$ or $\nmean$, but strongly decreases with $\Tkin$ at intermediate $\nH2\sim10^{3-4}\;\percmcubic$. Future study of this line ratio with our gas modeling will shed light on how to better constrain $\Tkin$ and $\nmean$.
Optical depths ($\tau$) of CO(1-0) and [CI](1-0) as a function of the gas density of each single LVG (one-zone) model in four redshift panels. Blue lines are CO(1-0) and orange lines are [CI](1-0). Line styles (solid, dashed and long-dashed) represent different gas kinetic temperatures as labeled. These show that the derived optical depths from our models (Sect. <ref>) roughly agree with observations which usually show optically thin ($\tau\sim1$) [CI](1-0) and optically thick CO(1-0).
Ratio between [CI](1-0) and CO(1-0) line surface brightness from our single LVG one-zone models. Lines and symbols are similar to those in Fig. <ref>, but note that the ratio in Fig. <ref> is flux ratio while here we show the surface brightness ratio ($R^{\prime}_{\mathrm{CI/CO}} \equiv L^{\prime}_{\mathrm{[CI](1-0)}} / L^{\prime}_{\mathrm{CO(1-0)}}$).
Ratio between [CI](1-0) and CO(1-0) line surface brightness from our density-PDF gas modeling. Lines and symbols are similar to those in Fig. <ref> (see also the note about the different ratio definition in Fig. <ref> caption).
|
# Towards tempered anabelian behaviour of Berkovich annuli
Sylvain Gaulhiac Institut de Mathématiques de Jussieu-Paris Rive Gauche,
Sorbonne-Université, Paris, France
###### Abstract
This work brings to light some partial _anabelian behaviours_ of analytic
annuli in the context of Berkovich geometry. More specifically, if $k$ is a
complete non-archimedean valued field of mixed characteristic which is
algebraically closed, and $\mathcal{C}_{1}$, $\mathcal{C}_{2}$ are two
$k$-analytic annuli with isomorphic tempered fundamental groups, we show that
the lengths of $\mathcal{C}_{1}$ and $\mathcal{C}_{2}$ cannot be too far from
each other. When they are finite, we show that the absolute value of their
difference is bounded above by a constant depending only on the residual
characteristic $p$.
Keywords: Anabelian geometry, Berkovich spaces, tempered fundamental group,
analytic curves, cochain morphism, resolution of non-singularities.
###### Contents
1. 1 Berkovich analytic curves
1. 1.1 Points and skeleton of an analytic curve
2. 1.2 Analytic annuli : functions, length and torsors
3. 1.3 Tempered fundamental group
4. 1.4 Verticial, vicinal and cuspidal subgroups
2. 2 Harmonic cochains and torsors
1. 2.1 Splitting conditions of $\mu_{p}$-torsors
2. 2.2 Cochain morphism
3. 2.3 Cochains and minimality of the splitting radius on an annulus
4. 2.4 Characterisation of $\mu_{p}$-torsors with trivial cochain
3. 3 Resolution of non-singularities
1. 3.1 Definition and properties of solvable points
2. 3.2 Solvability of skeletons of annuli and "threshold" points
3. 3.3 Anabelianity of the triviality of $\mu_{p}$-torsors
4. 4 Partial anabelianity of lengths of annuli
1. 4.1 Lengths and splitting sets
2. 4.2 Results on lengths of annuli
## Introduction
Anabelian geometry is concerned with the following question:
To what extent is a geometric object determined by its fundamental group?
It is within the framework of algebraic geometry that Grothendieck formulated
the first conjectures of anabelian geometry, in a famous letter to Faltings in
$1983$; there the fundamental group is nothing other than the étale one. Some
deep results for hyperbolic curves have been obtained by Tamagawa and
Mochizuki, answering certain conjectures of Grothendieck. However, almost no
results are known in higher dimensions.
In the context of Berkovich analytic geometry, it is possible to define
several "fundamental groups" classifying, for instance, _topological_ , _finite
étale_ or _étale_ (in the sense of [DJg]) coverings. However, the group which
seems to best capture anabelian behaviours of analytic spaces over non-
archimedean fields is the _tempered fundamental group_ , introduced by Yves
André in [And]. This group classifies _tempered coverings_ , defined as étale
coverings which become topological after finite étale base change. Both finite
étale and topological coverings are examples of tempered coverings.
It is Yves André, in [And1], who first obtained results of an anabelian
nature related to the tempered fundamental group. A few years later, a huge
step was made in this direction with results of Shinichi Mochizuki ([Mzk$3$])
followed by Emmanuel Lepage ([Lep1] and [Lep2]). These results relate the
tempered fundamental group of the analytification of an algebraic hyperbolic
curve to the dual graph of its stable reduction. If $X$ is a hyperbolic curve
defined over some non-archimedean complete field $k$, the homotopy type of its
analytification $X^{\mathrm{an}}$ can be described in terms of the stable
model $\mathscr{X}$ of $X$. More precisely, if $\mathscr{X}_{s}$ stands for
the special fibre of $\mathscr{X}$, the _dual graph of the stable reduction_
of $X$, denoted $\mathbb{G}_{X}$, is the finite graph whose vertices are the
irreducible components of $\mathscr{X}_{s}$, and whose edges correspond to the
nodes (ordinary double point singularities) between irreducible components. If
$\overline{X}$ denotes the normal compactification of $X$, a cusp of $X$ is an
element of $\overline{X}\setminus X$. Let us denote by
$\mathbb{G}_{X}^{\mathtt{c}}$ the graph obtained from $\mathbb{G}_{X}$ by
adding one open edge to each cusp of $X$, called the _extended dual graph of
the stable reduction_ of $X$. There exists a canonical topological embedding
$\mathbb{G}_{X}^{\mathtt{c}}\hookrightarrow X^{\mathrm{an}}$ which admits a
topologically proper deformation retraction
$X^{\mathrm{an}}\twoheadrightarrow\mathbb{G}_{X}^{\mathtt{c}}$, thus
$X^{\mathrm{an}}$ and $\mathbb{G}_{X}^{\mathtt{c}}$ have the same homotopy
type.
Using the language of _semi-graphs of anabelioids_ and _temperoids_ introduced
in high generality in [Mzk$2$] and [Mzk$3$], Mochizuki proves in [Mzk$3$] that
the tempered fundamental group of the analytification of a hyperbolic curve
determines the dual graph of its stable reduction:
###### Theorem 0.1 ([Mzk$3$], Corollary $3.11$).
Let $X_{1}$ and $X_{2}$ be two hyperbolic curves over $\mathbb{C}_{p}$. Any
outer isomorphism of groups
$\varphi:\pi_{1}^{\mathrm{temp}}(X_{1}^{\mathrm{an}})\xrightarrow{\sim}\pi_{1}^{\mathrm{temp}}(X_{2}^{\mathrm{an}})$
determines, functorially in $\varphi$, a unique isomorphism of graphs:
$\overline{\varphi}:\mathbb{G}_{X_{1}}^{\mathtt{c}}\xrightarrow{\sim}\mathbb{G}_{X_{2}}^{\mathtt{c}}$.
Mochizuki shows more precisely that it is possible to reconstruct the graph of
the stable reduction $\mathbb{G}_{X}$ of a hyperbolic curve $X$ from a
$(p^{\prime})$-version
$\pi_{1}^{\mathrm{temp},\,(p^{\prime})}(X^{\mathrm{an}})$ of the tempered
fundamental group.
A few years later, Emmanuel Lepage refined this result. He proved that
knowledge of the tempered fundamental group of the analytification of a
hyperbolic curve $X$ makes it possible not only to reconstruct the graph
$\mathbb{G}_{X}$, but also, in some cases, its canonical metric. This metric
is such that the length of an edge corresponding to a node is the width of the
annulus corresponding to the generic fibre of the formal completion at this
node. It is, however, necessary to restrict to _Mumford curves_ , which
are defined as proper algebraic curves $X$ over $\mathbb{C}_{p}$ such that the
normalized irreducible components of the stable reduction are isomorphic to
$\mathbb{P}^{1}$. This is equivalent to saying, in Berkovich language, that the
analytification $X^{\mathrm{an}}$ is locally isomorphic to open subsets of
$\mathbb{P}^{1,\mathrm{an}}$, or that $X^{\mathrm{an}}$ does not contain any
point of genus $>0$.
###### Theorem 0.2 ([Lep2]).
Let $X_{1}$ and $X_{2}$ be two hyperbolic Mumford curves over
$\mathbb{C}_{p}$, and
$\varphi:\pi_{1}^{\mathrm{temp}}(X_{1}^{\mathrm{an}})\xrightarrow{\sim}\pi_{1}^{\mathrm{temp}}(X_{2}^{\mathrm{an}})$
an isomorphism of groups. Then the isomorphism of graphs
$\overline{\varphi}:\mathbb{G}_{X_{1}}\xrightarrow{\sim}\mathbb{G}_{X_{2}}$ is
an isomorphism of metric graphs.
These two results deal with analytic curves of an _algebraic nature_ ,
that is, analytifications of algebraic curves. Yet the theory of Berkovich
analytic spaces is rich enough to contain many curves of a purely
_analytic nature_ , not coming from algebraic curves. The most important
examples of such curves, which are still very simple to define, are _disks_
and _annuli_. In the wake of Mochizuki’s and Lepage’s results, one wonders
whether similar anabelian results exist for more general analytic curves,
without imposing any algebraic nature. For such analytic curves, the
generalisation of Mochizuki’s results was carried out in the article [Gau],
whereas the investigation of an analogue of Lepage’s result is partially
carried out in the present article.
Reconstruction of the analytic skeleton For a quasi-smooth analytic curve $X$,
the right analogue of the extended dual graph of the stable reduction is the
_analytic skeleton_ $S^{\mathrm{an}}(X)$, defined in 1.5. When the skeleton
meets all the connected components of $X$, there exists a canonical
topological embedding $S^{\mathrm{an}}(X)\hookrightarrow X$ which admits a
topologically proper deformation retraction $X\twoheadrightarrow
S^{\mathrm{an}}(X)$. Therefore $X$ and $S^{\mathrm{an}}(X)$ have the same
homotopy type. The restriction $S^{\mathrm{an}}(X)^{\natural}$ obtained from
the skeleton by removing the non-relatively-compact edges is called the
_truncated skeleton_ of $X$ (see 1.8), and is the analogue of the dual graph
of the stable reduction. Let $k$ be a complete algebraically closed
non-archimedean field of residual exponent $p$. In [Gau], $3.29$, a certain
class of $k$-analytic curves, called _$k$-analytically hyperbolic_ , is
defined. Their interest lies in the fact that for a $k$-analytically
hyperbolic curve $X$ it is possible to reconstruct its truncated skeleton
$S^{\mathrm{an}}(X)^{\natural}$ from the tempered group
$\pi_{1}^{\mathrm{temp}}(X)$, or even from a _prime-to-$p$_ version
$\pi_{1}^{\mathrm{temp},\,(p^{\prime})}(X)$, obtained by taking the projective
limit of all quotients of $\pi_{1}^{\mathrm{temp}}(X)$ admitting a normal
torsion-free subgroup of finite index prime to $p$. The reconstruction of
$S^{\mathrm{an}}(X)^{\natural}$ from this group is given by the following:
* •
the vertices correspond to the conjugacy classes of maximal compact subgroups
of $\pi_{1}^{\mathrm{temp},\,(p^{\prime})}(X)$;
* •
the edges correspond to the conjugacy classes of non-trivial intersections of
two maximal compact subgroups of $\pi_{1}^{\mathrm{temp},\,(p^{\prime})}(X)$.
The condition for a quasi-smooth $k$-analytic curve to be analytically
hyperbolic is stated in terms of non-emptiness of the set of nodes of the
skeleton and a combinatorial hyperbolicity condition at each of these nodes.
However, analytic hyperbolicity may not be enough to recover the whole
skeleton. In order to also recover the non-relatively compact edges of
$S^{\mathrm{an}}(X)$, a subclass of $k$-analytically hyperbolic curves called
_$k$ -analytically anabelian_ is defined in [Gau], $3.55$. A
$k$-analytically anabelian curve is a $k$-analytically hyperbolic curve
satisfying a technical condition called _ascendance vicinale_ , which enables
us to reconstruct the open edges of the skeleton:
###### Theorem 0.3 ([Gau], 3.56).
Let $X_{1}$ and $X_{2}$ be two $k$-analytically anabelian curves. Any group
isomorphism
$\varphi:\pi_{1}^{\mathrm{temp}}(X_{1})\xrightarrow{\sim}\pi_{1}^{\mathrm{temp}}(X_{2})$
induces (functorially in $\varphi$) an isomorphism of semi-graphs between the
analytic skeletons:
$S^{\mathrm{an}}(X_{1})\xrightarrow{\sim}S^{\mathrm{an}}(X_{2})$.
Anabelianity of lengths? This present article concentrates on the
potential anabelianity of the lengths of the edges of the skeleton of a $k$-analytic
curve, inspired by the result of Lepage cited above. There is a natural way
to define the length of an analytic annulus (see 1.19), invariant under
automorphisms, which makes the skeleton $S^{\mathrm{an}}(X)$ of a quasi-smooth
$k$-analytic curve $X$ a _metric_ graph. The question naturally arising is the
following:
Does the tempered fundamental group $\pi_{1}^{\mathrm{temp}}(X)$ of a
$k$-analytically anabelian curve $X$ determine $S^{\mathrm{an}}(X)$ as a
metric graph?
Before tackling the general case, it seems a priori simpler to study first
the case of a $k$-analytic annulus, even though the latter is not a
$k$-analytically anabelian curve. The $(p^{\prime})$-tempered group
$\pi_{1}^{\mathrm{temp},\,(p^{\prime})}(\mathcal{C})$ of an annulus is always
isomorphic to the $p^{\prime}$-profinite completion
$\widehat{\mathbb{Z}}^{\,(p^{\prime})}$ of $\mathbb{Z}$, but its total
tempered group $\pi_{1}^{\mathrm{temp}}(\mathcal{C})$ depends on its length
whenever $k$ has mixed characteristic. The new question arising is the
following:
Does the tempered group $\pi_{1}^{\mathrm{temp}}(\mathcal{C})$ of a
$k$-analytic annulus $\mathcal{C}$ determine its length?
In order to investigate this question, one is tempted to follow the scheme of
proof that Lepage develops in [Lep2]. An idea would be to start from an
"ovoid" $\mu_{p}$-covering of the annulus, totally split at the middle of the
skeleton, which would be analytically anabelian. Then knowing how to compute
the length of any cycle would be enough to know the length of the annulus (by
a limit argument). Yet one quickly faces problems of analytic nature that do
not appear with Mumford curves: problems of detection of
$\mu_{p^{h}}$-torsors with trivial $\mathbb{Z}/p^{h}\mathbb{Z}$-cochain.
Indeed, if $Y\to X$ is a $\mu_{n}$-torsor, associating to an edge $e$ of
$S^{\mathrm{an}}(X)$ the growth rate of any analytic function locally defining
this torsor over $e$ leads to a harmonic cochain on the graph
$S^{\mathrm{an}}(X)$ with values in $\mathbb{Z}/n\mathbb{Z}$. This growth rate
corresponds to the degree of the strictly dominant monomial (see remark 1.13)
of the corresponding analytic function. Therefore, when $X$ is a quasi-smooth
$k$-analytic curve, we show in lemma 2.4 that there exists a cochain morphism
$\theta:H^{1}(X,\mu_{n})\to\mathrm{Harm}(S^{\mathrm{an}}(X),\mathbb{Z}/n\mathbb{Z})$
for any $n\in\mathbb{N}^{\times}$. However, when $n=p^{h}$ with $h>1$, it
seems difficult to detect the kernel of $\theta$ from
$\pi_{1}^{\mathrm{temp}}(X)$, which makes the hoped-for scheme of proof
illusory. Nevertheless, the detection of $\ker(\theta)$ when $n=p$ is possible
in some cases:
Theorem 0. Let $X$ be a $k$-analytic curve satisfying one of the two following
conditions:
1. 1.
$X$ is an annulus
2. 2.
$X$ is a $k$-analytically hyperbolic curve, with finite skeleton without bridge,
without any point of genus $>0$, without boundary, with only annular cusps,
and such that no node has strictly more than one cusp abutting to it.
Then the set of $\mu_{p}$-torsors of $X$ with trivial
$\mathbb{Z}/p\mathbb{Z}$-cochain, i.e. the set $H^{1}(X,\mu_{p})\cap\ker(\theta)$,
is completely determined by $\pi_{1}^{\mathrm{temp}}(X)$.
This result uses _resolution of non-singularities_ (section 3) coupled with a
characterisation of the non-triviality of cochains in terms of minimality of the
splitting radius at rigid points (proposition 2.8). This characterisation can
be rephrased set-theoretically in terms of the splitting sets of torsors (corollary
2.10), which can themselves be characterised from the tempered group by means of
the solvability of some "threshold" points (lemma 3.4).
As for the initial question about the potential anabelianity of lengths of
annuli, we found a partial answer, using the solvability of skeletons of
annuli (lemma 3.6) combined with some considerations on splitting sets of
$\mu_{p}$-torsors:
Theorem 1. Let $\mathcal{C}_{1}$ and $\mathcal{C}_{2}$ be two $k$-analytic
annuli whose tempered fundamental groups
$\pi_{1}^{\mathrm{temp}}(\mathcal{C}_{1})$ and
$\pi_{1}^{\mathrm{temp}}(\mathcal{C}_{2})$ are isomorphic. Then
$\mathcal{C}_{1}$ has finite length if and only if $\mathcal{C}_{2}$ has
finite length. In this case:
$|\ell(\mathcal{C}_{1})-\ell(\mathcal{C}_{2})|<\frac{2p}{p-1}.$
We also have $d\left(\ell(\mathcal{C}_{1}),p\mathbb{N}^{\times}\right)>1$ if
and only if $d\left(\ell(\mathcal{C}_{2}),p\mathbb{N}^{\times}\right)>1$, and
in this case:
$|\ell(\mathcal{C}_{1})-\ell(\mathcal{C}_{2})|<\frac{p}{p-1}.$
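For instance, for $p=2$ these two bounds read $|\ell(\mathcal{C}_{1})-\ell(\mathcal{C}_{2})|<4$ and $|\ell(\mathcal{C}_{1})-\ell(\mathcal{C}_{2})|<2$ respectively; as $p$ grows, they decrease towards $2$ and $1$.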
## 1 Berkovich analytic curves
Throughout this text, $k$ will denote a complete algebraically closed non-
archimedean field of mixed characteristic $(0,p)$, i.e. $\mathrm{char}(k)=0$
and $\mathrm{char}(\widetilde{k})=p$, where $\widetilde{k}$ is the residue
field of $k$. We assume that the absolute value on $k$ is normalized so
that $|p|=p^{-1}$. The field
$\mathbb{C}_{p}:=\widehat{\overline{\mathbb{Q}_{p}}}$ with the usual $p$-adic
absolute value is an example of such a field.
### 1.1 Points and skeleton of an analytic curve
A $k$-analytic curve is defined as a separated $k$-analytic space of pure
dimension $1$. We refer the reader to the foundational text [Ber$1$] for
generalities on analytic spaces, to [Ber$2$] for the cohomology of analytic
spaces, and to [Duc] for a precise and systematic study of analytic curves.
Let us recall here some properties of analytic curves which will be
important in this text.
Any $k$-analytic curve, endowed with the Berkovich topology, has very nice
topological properties: it is locally compact, locally arcwise connected, and
locally contractible, which makes it possible to apply the usual theory of the
universal topological covering. Moreover, $k$-analytic curves are _real
graphs_ , with potentially infinite branching, as stated by the following
proposition:
###### Proposition 1.1 ([Duc], $3.5.1$).
Let $X$ be a non-empty connected $k$-analytic curve. The following statements
are equivalent:
* i)
the topological space $X$ is contractible,
* ii)
the topological space $X$ is simply connected,
* iii)
for any pair $(x,y)$ of points of $X$, there exists a unique closed subspace
of $X$ homeomorphic to a compact interval with extremities $x$ and $y$.
Moreover, any point of a $k$-analytic curve admits a basis of neighbourhoods
which are _real trees_ , i.e. satisfying the equivalent properties above.
###### Remark 1.2.
Any real tree can be endowed with a topology called _topology of real tree_ ,
which might be different from its initial topology. The Berkovich topology on
an open subset of an analytic curve which is a tree is coarser than the
topology of real tree on this tree.
Points of a $k$-analytic curve: Let $X$ be a $k$-analytic curve, and $x\in
X$. If $\mathscr{H}(x)$ denotes the completed residue field of $x$, it is
possible to associate to the complete extension $\mathscr{H}(x)/k$ two
invariants:
$\displaystyle f_{x}$
$\displaystyle=\mathrm{tr.deg}_{\widetilde{k}}\widetilde{\mathscr{H}(x)}$
$\displaystyle e_{x}$
$\displaystyle=\mathrm{rank}_{\mathbb{Q}}\left(|\mathscr{H}(x)^{\times}|/|k^{\times}|\otimes_{\mathbb{Z}}\mathbb{Q}\right),$
which satisfy $f_{x}+e_{x}\leqslant 1$ (in accordance with the Abhyankar
inequality). The points of $X$ can be classified into $4$ types according to
these invariants:
###### Definition 1.3.
A point $x\in X$ is:
1. 1.
of _type $1$_ if $\mathscr{H}(x)=k$ (in this case $f_{x}=e_{x}=0$)
2. 2.
of _type $2$_ if $f_{x}=1$
3. 3.
of _type $3$_ if $e_{x}=1$
4. 4.
of _type $4$_ if $f_{x}=e_{x}=0$ but $x$ is not of type $1$.
For $i\in\\{1,2,3,4\\}$, let $X_{[i]}$ be the subset of $X$ consisting of
type-$i$ points.
This definition of type-$1$ points holds here since we assumed that $k$ is
algebraically closed. In general, a point $x\in X$ is of type $1$ if
$\mathscr{H}(x)\subseteq\widehat{\overline{k}}$, where
$\widehat{\overline{k}}$ denotes the completion of an algebraic closure of
$k$. Since $k$ is algebraically closed, type-$1$ points are exactly the
_rigid_ points, i.e. the points $x\in X$ such that the extension
$\mathscr{H}(x)/k$ is finite. Since $k$ is by assumption non-trivially valued,
$X_{[2]}$ is dense in $X$.
Preservation of the type of points by finite morphisms. If
$f:X^{\prime}\rightarrow X$ is a finite morphism of $k$-analytic curves, then
for any $i\in\\{1,2,3,4\\}$, a point $x^{\prime}\in X^{\prime}$ is of type $i$
if and only if $f(x^{\prime})$ is of type $i$.
One of the specificities of Berkovich geometry compared to rigid geometry is the
existence of a _boundary_ which is embodied in the space. It is possible to
define two boundaries of a $k$-analytic space: the _analytic boundary_
$\Gamma(X)$ and the _Shilov boundary_ $\partial^{\mathrm{an}}X$. However,
specifically in the dimension $1$ case, i.e. for analytic curves, these two
notions coincide, which allows one to speak without any ambiguity of the
_boundary of $X$_ $\partial^{\mathrm{an}}X\subseteq X$, potentially empty.
Description of the $k$-analytic affine line $\mathbb{A}_{k}^{1,\mathrm{an}}$:
The analytification $\mathbb{A}_{k}^{1,\mathrm{an}}$ of the (algebraic) affine
line $\mathbb{A}_{k}^{1}$ is the smooth, boundaryless and connected
$k$-analytic curve whose points are the multiplicative seminorms on the
polynomial ring $k[T]$ extending the absolute value of $k$. We are going to
give an explicit description of $\mathbb{A}_{k}^{1,\mathrm{an}}$. For
$r\geqslant 0$ and $a\in k$, let $B(a,r)=\\{x\in k,|x-a|\leqslant r\\}$ be the
closed ball (which is also open since $k$ is non-archimedean!) of $k$, centred
at $a$ and of radius $r$.
* •
Any element $a\in k$ determines a multiplicative seminorm on $k[T]$, the
_evaluation at $a$_, given by $P\in k[T]\mapsto|P(a)|$. It defines an element
of $\mathbb{A}_{k}^{1,\mathrm{an}}$ denoted $\eta_{a}$, or $\eta_{a,0}$. Then
$\mathscr{H}(\eta_{a})=k[T]/(T-a)\simeq k$, so that $\eta_{a}$ is a rigid
point.
* •
Let $a\in k$ and $r>0$. Consider the map:
$P\in k[T]\mapsto\sup_{b\in B(a,r)}|P(b)|=\sup_{b\in B(a,r)}|P|_{\eta_{b}}.$
It actually defines an element of $\mathbb{A}_{k}^{1,\mathrm{an}}$, denoted
$\eta_{a,r}$, and given by:
$|P(\eta_{a,r})|=\mathrm{max}_{0\leqslant i\leqslant
n}(|\alpha_{i}|r^{i})\;\;\text{as soon
as}\;\;P=\sum_{i=0}^{n}\alpha_{i}(T-a)^{i}.$
One can verify that $\eta_{a,r}$ only depends on $B(a,r)$ (i.e.
$\eta_{a,r}=\eta_{b,r}$ as soon as $b\in B(a,r)$); a numerical sketch of this
Gauss norm is given after this description. There are two cases:
When $r\in|k^{\times}|$,
$\widetilde{\mathscr{H}(\eta_{a,r})}=\widetilde{k}(T)$, and
$|\mathscr{H}(\eta_{a,r})^{\times}|=|k^{\times}|$, such that $\eta_{a,r}$ is a
type-$2$ point.
When $r\notin|k^{\times}|$,
$\widetilde{\mathscr{H}(\eta_{a,r})}=\widetilde{k}$, and
$|\mathscr{H}(\eta_{a,r})^{\times}|$ is the group generated by $|k^{\times}|$
and $r$, so $\eta_{a,r}$ is a type-$3$ point.
* •
Let $\mathscr{B}=\left(B_{n}\right)_{n\in\mathbb{N}}$ be a decreasing sequence
of non-empty closed balls of $k$ (i.e. $B_{n+1}\subseteq B_{n}$, for
$n\in\mathbb{N}$). Let $|\cdot|_{B_{n}}$ be the unique point of
$\mathbb{A}_{k}^{1,\mathrm{an}}$ determined by $B_{n}$ (i.e.
$|\cdot|_{B_{n}}=\eta_{a_{n},r_{n}}$ as soon as $B_{n}=B(a_{n},r_{n})$). Then
the map:
$P\in k[T]\mapsto\inf_{n\in\mathbb{N}}|P|_{B_{n}}$
defines an element $|\cdot|_{\mathscr{B}}$ of
$\mathbb{A}_{k}^{1,\mathrm{an}}$.
If $\bigcap_{n}B_{n}$ is a point $a\in k$, then $|\cdot|_{\mathscr{B}}$
corresponds exactly to $\eta_{a}$. If $\bigcap_{n}B_{n}$ is a closed ball
centred in $a\in k$ and of radius $r\in\mathbb{R}_{+}^{*}$, then
$|\cdot|_{\mathscr{B}}$ corresponds to $\eta_{a,r}$. It is also possible that
$\bigcap_{n}B_{n}$ is empty; in this case $|\cdot|_{\mathscr{B}}$ is a
type-$4$ point.
This description is exhaustive: all the points of
$\mathbb{A}_{k}^{1,\mathrm{an}}$ can be described in this way.
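As announced above, the Gauss-norm formula $|P(\eta_{a,r})|=\max_{i}(|\alpha_{i}|r^{i})$ can be made executable. The following is a minimal numerical sketch, not part of the mathematical development: it assumes the coefficients $\alpha_{i}$ are rational numbers (so that their $p$-adic absolute values can be computed exactly) and represents radii by floating-point numbers; the helper names `p_abs` and `gauss_norm` are ours.

```python
from fractions import Fraction

def p_abs(x, p):
    """p-adic absolute value |x|_p = p^(-v_p(x)) of a rational x; |0|_p = 0."""
    x = Fraction(x)
    if x == 0:
        return 0.0
    n, d, v = abs(x.numerator), x.denominator, 0
    while n % p == 0:  # count factors of p in the numerator
        n //= p
        v += 1
    while d % p == 0:  # ...and in the denominator
        d //= p
        v -= 1
    return float(p) ** (-v)

def gauss_norm(coeffs, r, p):
    """|P(eta_{a,r})| = max_i |alpha_i|_p * r^i for P = sum_i alpha_i (T-a)^i."""
    return max(p_abs(c, p) * r ** i for i, c in enumerate(coeffs))

# P = T^2 - 3 at eta_{0,r} with p = 3: the norm is max(1/3, r^2), hence
# constant (equal to 1/3) for r <= 3^(-1/2) and equal to r^2 beyond.
print(gauss_norm([-3, 0, 1], 0.5, 3))  # 0.333...
print(gauss_norm([-3, 0, 1], 0.9, 3))  # 0.81
```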
###### Remark 1.4.
Points of type $4$ exist if and only if $k$ is not _spherically complete_. A
valued field is _spherically complete_ when it does not admit any _immediate_
extension. The field $\mathbb{C}_{p}$ is not spherically complete, therefore
there exist in $\mathbb{A}_{\mathbb{C}_{p}}^{1,\mathrm{an}}$ some type-$4$
points.
The $k$-analytic projective line $\mathbb{P}_{k}^{1,\mathrm{an}}$ is the
analytification of the algebraic $k$-projective line $\mathbb{P}_{k}^{1}$. It
is a proper (compact and without boundary) quasi-smooth connected curve. It
admits a rigid point $\infty$ such that there exists a natural isomorphism of
$k$-analytic curves:
$\rho:\mathbb{P}_{k}^{1,\mathrm{an}}\setminus\\{\infty\\}\xrightarrow{\sim}\mathbb{A}_{k}^{1,\mathrm{an}}$
The $k$-analytic affine and projective lines are trees (see 1.1), so for each
pair $(x,y)$ of points, there exists a unique closed subspace homeomorphic to
a compact interval (a segment) with extremities $x$ and $y$. If $a$ and $b$
are in $k$, the segment joining the rigid points $\eta_{a}$ and $\eta_{b}$ is:
$[\eta_{a},\eta_{b}]=\\{\eta_{a,r}\\}_{0\leqslant
r\leqslant|b-a|}\cup\\{\eta_{b,s}\\}_{0\leqslant s\leqslant|b-a|}.$
The segment joining $\eta_{a}$ and $\infty$ is
$[\eta_{a},\infty]=\\{\eta_{a,r}\\}_{0\leqslant r\leqslant\infty}$, with
$\infty=\eta_{a,\infty}$.
The type of a point of $\mathbb{P}_{k}^{1,\mathrm{an}}$ (or of
$\mathbb{A}_{k}^{1,\mathrm{an}}$) can be read off the tree:
* •
type-$2$ points are the branching points of the tree,
* •
type-$3$ points are the points where nothing special happens (their valence is
$2$),
* •
type-$1$ or type-$4$ points are the unibranched ones on the tree; they are
the "leaves".
Analytic skeleton of an analytic curve: The following notion of _analytic
skeleton_ of a $k$-analytic curve is the analogue, in the analytic world, of the
dual graph of the special fiber of the stable model of an algebraic $k$-curve.
A _$k$ -analytic disk_ is a $k$-analytic curve isomorphic to the analytic
domain of $\mathbb{P}_{k}^{1,\mathrm{an}}$ defined by the condition $|T|\in
I$, where $I$ is an interval of the form $[0,r[$, $[0,r]$ for some $r>0$, or
$I=[0,+\infty[$.
###### Definition 1.5 (Analytic skeleton).
The _analytic skeleton_ of a quasi-smooth $k$-analytic curve $X$, denoted
$S^{\mathrm{an}}(X)$, is the subset of $X$ consisting of points which do not
belong to any open $k$-analytic disk.
###### Proposition 1.6 (see [Duc], $1.6.13$, $5.1.11$).
Let $X$ be a quasi-smooth $k$-analytic curve:
* •
the analytic skeleton $S^{\mathrm{an}}(X)$ is a locally finite graph contained
in $X_{[2,3]}$, and containing the boundary $\partial^{\mathrm{an}}X$ of $X$.
* •
if $S^{\mathrm{an}}(X)$ meets all the connected components of $X$, there
exists a canonical deformation retraction $r_{X}:X\to S^{\mathrm{an}}(X)$. In
particular, $X$ and $S^{\mathrm{an}}(X)$ have the same homotopy type.
###### Remark 1.7.
In order to be coherent with the terminology of [Mzk$3$], we used in [Gau] the
term _semi-graphs_ for graphs with potentially "open" edges, i.e. edges which
are either not abutting to any vertex, or with only one extremity abutting to
a vertex. However, we will not make this terminological distinction in this
text, to avoid unnecessary heaviness, and will speak only of _graphs_ instead
of semi-graphs.
###### Definition 1.8 (Truncated skeleton).
Let $X$ be a quasi-smooth connected $k$-analytic curve with non-empty skeleton
$S^{\mathrm{an}}(X)$, and $r_{X}:X\to S^{\mathrm{an}}(X)$ the canonical
retraction. The _truncated skeleton_ of $X$, denoted
$S^{\mathrm{an}}(X)^{\natural}$, is the subgraph of $S^{\mathrm{an}}(X)$
obtained from $S^{\mathrm{an}}(X)$ by removing the edges $e$ such that
$r_{X}^{-1}(e)$ is not relatively compact in $X$.
###### Remark 1.9.
The edges $e$ of $S^{\mathrm{an}}(X)$ such that $r_{X}^{-1}(e)$ is not
relatively compact in $X$ are exactly the "open" edges of $S^{\mathrm{an}}(X)$.
So in the terminology of [Gau], $S^{\mathrm{an}}(X)^{\natural}$ is actually
the biggest sub-semi-graph of the semi-graph $S^{\mathrm{an}}(X)$ which is a
graph.
###### Definition 1.10 (Nodes of the analytic skeleton).
If $x$ is a point of a $k$-analytic curve, its _genus_ , denoted $g(x)$, is
defined as being $0$ if $x$ is of type $1,3$ or $4$, and equal to the genus of
the residual curve (see [Gau], $3.1.4$) $\mathscr{C}_{x}$ of $x$ when it is of
type $2$. A point $x\in S^{\mathrm{an}}(X)$ is a _node_ of
$S^{\mathrm{an}}(X)$ if it satisfies one of the following conditions:
* •
$x$ is a branching point of $S^{\mathrm{an}}(X)$ (i.e. $x$ is a vertex of
$S^{\mathrm{an}}(X)$ and at least three different branches of
$S^{\mathrm{an}}(X)$ abut to $x$)
* •
$x\in\partial^{\mathrm{an}}X$
* •
$g(x)>0$
### 1.2 Analytic annuli : functions, length and torsors
We are going to define and study the basic properties of $k$-analytic annuli,
central in this text.
If $I=[a,b]$ is a compact interval of $\mathbb{R}_{>0}$ (possibly reduced to
a point), let $\mathcal{C}_{I}$ be the $k$-analytic curve defined as a
$k$-affinoid space by:
$\mathcal{C}_{I}=\mathscr{M}\left(k\\{b^{-1}T,aU\\}/(TU-1)\right).$
If $I\subset J$ are compact intervals of $\mathbb{R}_{>0}$, there is a natural
morphism $\mathcal{C}_{I}\rightarrow\mathcal{C}_{J}$ which makes
$\mathcal{C}_{I}$ an analytic domain of $\mathcal{C}_{J}$. If $I$ is an
arbitrary interval of $\mathbb{R}_{>0}$, we can define
$\mathcal{C}_{I}=\varinjlim_{J\subset
I}\mathcal{C}_{J}\subset\mathbb{G}_{m}^{\mathrm{an}}$
where $J$ ranges over all compact intervals of $\mathbb{R}_{>0}$ contained in $I$.
It would have been equivalent to define $\mathcal{C}_{I}$ as the analytic
domain of $\mathbb{P}_{k}^{1,\mathrm{an}}$ defined by the condition $|T|\in
I$.
* •
Analytic functions: The $k$-algebra of analytic functions on
$\mathcal{C}_{I}$ is given by:
$\mathscr{O}_{\mathcal{C}_{I}}(\mathcal{C}_{I})=\left\\{\sum_{i\in\mathbb{Z}}a_{i}T^{i},a_{i}\in
k,\lim_{|i|\to+\infty}|a_{i}|r^{i}=0,\forall r\in I\right\\}.$
* •
Boundary: If $s<r$ in $\mathbb{R}_{+}^{*}$,
$\partial^{\mathrm{an}}\mathcal{C}_{\\{r\\}}=\\{\eta_{0,r}\\}$, whereas
$\partial^{\mathrm{an}}\mathcal{C}_{[s,r]}=\\{\eta_{0,s},\eta_{0,r}\\}$.
###### Definition 1.11.
A $k$-analytic _annulus_ is defined as a $k$-analytic curve isomorphic to
$\mathcal{C}_{I}$ for some interval $I$ of $\mathbb{R}_{>0}$. Annuli are
quasi-smooth curves.
###### Proposition 1.12 (Condition of invertibility of an analytic function,
[Duc] $3.6.6.1$ and $3.6.6.2$).
Let $I$ be an interval of $\mathbb{R}_{>0}$, and
$f=\sum_{i\in\mathbb{Z}}a_{i}T^{i}\in\mathscr{O}_{\mathcal{C}_{I}}(\mathcal{C}_{I})$
an analytic function on $\mathcal{C}_{I}$. The function $f$ is invertible if
and only if there exists an integer $i_{0}\in\mathbb{Z}$ (necessarily unique)
such that $|a_{i_{0}}|r^{i_{0}}>\max_{i\neq i_{0}}|a_{i}|r^{i}$ for all $r\in
I$.
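For instance, on $\mathcal{C}_{\\{1\\}}$ the function $f=T+p$ is invertible, with strictly dominant monomial $T$ (since $1\cdot 1>|p|\cdot 1=p^{-1}$); on $\mathcal{C}_{\\{p^{-1}\\}}$, however, $|a_{1}|r=|a_{0}|=p^{-1}$ for $r=p^{-1}$, so no monomial is strictly dominant, and indeed $f$ is not invertible: it vanishes at the rigid point $\eta_{-p}$, which lies on $\mathcal{C}_{\\{p^{-1}\\}}$.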
###### Remark 1.13.
For an analytic function
$f=\sum_{i\in\mathbb{Z}}a_{i}T^{i}\in\mathscr{O}_{\mathcal{C}_{I}}(\mathcal{C}_{I})$,
we will say that $f$ admits a _strictly dominant monomial_
$a_{i_{0}}T^{i_{0}}$ when there exists an integer $i_{0}\in\mathbb{Z}$ such
that $|a_{i_{0}}|r^{i_{0}}>\max_{i\neq i_{0}}|a_{i}|r^{i}$ for all $r\in I$.
Such a strictly dominant monomial is unique, and $i_{0}$ is the _degree_ of
this monomial. The last proposition says that
$f\in\mathscr{O}_{\mathcal{C}_{I}}(\mathcal{C}_{I})$ is invertible if and only
if it admits a strictly dominant monomial, $a_{i_{0}}T^{i_{0}}$, in which case
$f$ is written $f=a_{i_{0}}T^{i_{0}}(1+u)$ with
$u\in\mathscr{O}_{\mathcal{C}_{I}}(\mathcal{C}_{I})$ of norm $<1$ on
$\mathcal{C}_{I}$.
Let $f\in\mathscr{O}_{\mathcal{C}_{I}}(\mathcal{C}_{I})^{\times}$ be an
invertible function on $\mathcal{C}_{I}$ such that the degree $i_{0}$ of its
strictly dominant monomial is different from $0$. Let
$\varphi_{f}:\mathcal{C}_{I}\to\mathbb{A}_{k}^{1,\mathrm{an}}$ be the morphism
induced by $f$, and $\Lambda$ the map from $\mathbb{R}_{>0}$ to itself defined
by $r\mapsto|a_{i_{0}}|r^{i_{0}}$.
###### Proposition 1.14 ([Duc], $3.6.8$).
The map $\Lambda$ induces a homeomorphism from $I$ to the interval
$\Lambda(I)$ of $\mathbb{R}_{>0}$, and $\varphi_{f}$ induces a finite and flat
morphism of degree $|i_{0}|$ from $\mathcal{C}_{I}$ to
$\mathcal{C}_{\Lambda(I)}$.
###### Definition 1.15 (Coordinate functions).
If $\mathcal{C}$ is a $k$-analytic annulus, a function
$f\in\mathscr{O}_{\mathcal{C}}(\mathcal{C})$ is a _coordinate function_ when
it induces an isomorphism of $k$-analytic curves
$\mathcal{C}\xrightarrow{\sim}\mathcal{C}_{I}$ for some interval $I$ of
$\mathbb{R}_{>0}$.
###### Corollary 1.16 (Characterization of coordinate functions, [Duc],
$3.6.11.3$ and $3.6.12.3$).
An analytic function $f\in\mathscr{O}_{\mathcal{C}_{I}}(\mathcal{C}_{I})$ is a
coordinate function if and only if $f$ admits a strictly dominant monomial of
degree $i_{0}\in\\{-1,1\\}$. If this is the case, $f$ is invertible in
$\mathscr{O}_{\mathcal{C}_{I}}(\mathcal{C}_{I})$ and induces an analytic
isomorphism $\mathcal{C}_{I}\simeq\mathcal{C}_{|a_{i_{0}}|I^{i_{0}}}$.
One can directly deduce the following corollary:
###### Corollary 1.17.
Let $I$ and $I^{\prime}$ be two intervals of $\mathbb{R}_{>0}$; then
$\mathcal{C}_{I^{\prime}}$ is isomorphic to $\mathcal{C}_{I}$ if and only if
$I^{\prime}\in|k^{\times}|\,I^{\pm 1}.$
###### Remark 1.18 (Algebraic characterization of coordinate functions of
annuli, [Duc] $3.6.13.1$).
Define $\mathscr{O}_{\mathcal{C}_{I}}(\mathcal{C}_{I})^{\circ\circ}$ as the
subset of $\mathscr{O}_{\mathcal{C}_{I}}(\mathcal{C}_{I})$ consisting of
functions of norm strictly less than $1$ on $\mathcal{C}_{I}$. We saw that a
function $f\in\mathscr{O}_{\mathcal{C}_{I}}(\mathcal{C}_{I})$ is invertible if
and only if it admits a strictly dominant monomial, $a_{i_{0}}T^{i_{0}}$. In
this case, it can be written $f=a_{i_{0}}T^{i_{0}}(1+u)$ with
$u\in\mathscr{O}_{\mathcal{C}_{I}}(\mathcal{C}_{I})^{\circ\circ}$, and $|f|$
equals $|a_{i_{0}}|\cdot|T|^{i_{0}}$ on $\mathcal{C}_{I}$. Consequently, the
group
$\mathscr{Z}_{I}:=\mathscr{O}_{\mathcal{C}_{I}}(\mathcal{C}_{I})^{\times}/k^{\times}\cdot(1+\mathscr{O}_{\mathcal{C}_{I}}(\mathcal{C}_{I})^{\circ\circ})$
is isomorphic to $\mathbb{Z}$, an isomorphism being given by the degree of
the strictly dominant monomial. From corollary 1.16, a function
$f\in\mathscr{O}_{\mathcal{C}_{I}}(\mathcal{C}_{I})$ is a coordinate function
of $\mathcal{C}_{I}$ if and only if it is invertible and its class in
$\mathscr{Z}_{I}$ is a generator of $\mathscr{Z}_{I}$.
Therefore, if $\mathcal{C}$ is any $k$-analytic annulus and
$f\in\mathscr{O}_{\mathcal{C}}(\mathcal{C})$, $f$ is a coordinate function if
and only if it is invertible and is sent to a generator of the free abelian
group of rank $1$:
$\mathscr{Z}(\mathcal{C}):=\mathscr{O}_{\mathcal{C}}(\mathcal{C})^{\times}/k^{\times}\cdot(1+\mathscr{O}_{\mathcal{C}}(\mathcal{C})^{\circ\circ}).$
###### Definition 1.19 (Length of an analytic annulus).
If $I$ is an interval of $\mathbb{R}_{>0}$, the length of the annulus
$\mathcal{C}_{I}$ is defined as:
$\ell(\mathcal{C}_{I})=\log_{p}\left(\frac{\sup I}{\inf I}\right),$
with $\ell(\mathcal{C}_{I})=+\infty$ whenever $\inf I=0$ or $\sup I=+\infty$.
The _length_ of a general $k$-analytic annulus $\mathcal{C}$, denoted
$\ell(\mathcal{C})$, is defined as the length of $\mathcal{C}_{I}$ for any
interval $I$ of $\mathbb{R}_{>0}$ such that $\mathcal{C}$ is isomorphic to
$\mathcal{C}_{I}$. From corollary 1.17 we see that this definition does not
depend on the choice of such $I$.
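Concretely, $\ell(\mathcal{C}_{[p^{-2},1]})=\log_{p}(p^{2})=2$. The independence of the choice of $I$ also follows directly from the formula: multiplying $I$ by an element of $|k^{\times}|$ or replacing $I$ by $I^{-1}$ leaves the ratio $\frac{\sup I}{\inf I}$ unchanged.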
There exists a natural distance on the set of type-$2$ and type-$3$ points of
$\mathbb{P}_{k}^{1,\mathrm{an}}$, which is coherent with this definition of
length of an annulus. However, we will not define it in this text.
Kummer torsors of an annulus: If $X$ is a $k$-analytic space and
$\ell\in\mathbb{N}^{\times}$ an integer (in general it is necessary to ask
that $\ell$ be non-zero in $k$, but this is automatic here since we
assumed from the beginning that $\mathrm{char}(k)=0$), the _Kummer exact
sequence_ on $X_{\mathrm{\acute{e}t}}$:
$1\longrightarrow\mu_{\ell}\longrightarrow\mathbb{G}_{m}\overset{z\mapsto
z^{\ell}}{\longrightarrow}\mathbb{G}_{m}\longrightarrow 1$
induces an injective morphism
$\mathscr{O}_{X}(X)^{\times}/(\mathscr{O}_{X}(X)^{\times})^{\ell}\overset{\iota}{\hookrightarrow}H^{1}(X_{\mathrm{\acute{e}t}},\mu_{\ell})$
whose image will be denoted $\mathsf{Kum}_{\ell}(X)$. It is known ([Ber$2$])
that any locally constant étale sheaf on $X_{\mathrm{\acute{e}t}}$ is
representable. Consequently, $H^{1}(X_{\mathrm{\acute{e}t}},\mu_{\ell})$
classifies all the _analytic_ étale $\mu_{\ell}$-torsors on $X$ up to
isomorphism. If $f\in\mathscr{O}_{X}(X)^{\times}$, its image $(f)$ in
$H^{1}(X_{\mathrm{\acute{e}t}},\mu_{\ell})$ by $\iota$ corresponds to
$\mathscr{M}(\mathscr{O}_{X}[T]/(T^{\ell}-f))$. The elements of
$\mathsf{Kum}_{\ell}(X)$ seen as analytic étale $\mu_{\ell}$-torsors will be
called _Kummer $\mu_{\ell}$-torsors_.
###### Example 1.20.
If $I$ is a non-empty interval of $\mathbb{R}_{>0}$, the (invertible) function
$T^{\ell}\in\mathscr{O}_{\mathcal{C}_{I}}(\mathcal{C}_{I})^{\times}$ induces a
Kummer $\mu_{\ell}$-torsor $\mathcal{C}_{I}\rightarrow\mathcal{C}_{I^{\ell}}$,
identifying $\mathcal{C}_{I}$ with
$\mathscr{M}(\mathscr{O}_{\mathcal{C}_{I^{\ell}}}[T]/(T^{\ell}-S))$, where $S$
is the standard coordinate of $\mathcal{C}_{I^{\ell}}$.
###### Proposition 1.21.
Let $\mathcal{C}$ be a $k$-analytic annulus, and $\ell\in\mathbb{N}^{\times}$
an integer _prime to the residual characteristic $p$_:
1. 1.
The group $\mathsf{Kum}_{\ell}(\mathcal{C})$ is isomorphic to
$\mathbb{Z}/\ell\mathbb{Z}$. This isomorphism is non-canonical as soon as
$\ell\geqslant 3$, but becomes canonical when one fixes an orientation of
$\mathcal{C}$.
2. 2.
Any connected component of a Kummer $\mu_{\ell}$-torsor of $\mathcal{C}$ is a
$k$-analytic annulus.
3. 3.
Any $\mu_{\ell}$-torsor of $\mathcal{C}$ is Kummer, which leads to an
isomorphism:
$H^{1}(\mathcal{C}_{\mathrm{\acute{e}t}},\mu_{\ell})\simeq\mathsf{Kum}_{\ell}(\mathcal{C})\simeq\mathbb{Z}/\ell\mathbb{Z}.$
###### Proof.
The proofs of the first two points can be found in [Duc], $3.6.30$ and
$3.6.31$. The facts that $k$ is algebraically closed and that $\ell$ is prime
to $p$ are necessary for the first point, since they imply that the subgroup
$k^{\times}\cdot(1+\mathscr{O}_{\mathcal{C}}(\mathcal{C})^{\circ\circ})$ of
$\mathscr{O}_{\mathcal{C}}(\mathcal{C})^{\times}$ is $\ell$-divisible. In
terms of the group $\mathscr{Z}(\mathcal{C})$ defined in remark 1.18, this
means that:
$\ell\mathscr{Z}(\mathcal{C})\simeq\left(\mathscr{O}_{\mathcal{C}}(\mathcal{C})^{\times}\right)^{\ell}/k^{\times}\cdot(1+\mathscr{O}_{\mathcal{C}}(\mathcal{C})^{\circ\circ})$.
Therefore, there is a canonical isomorphism:
$\mathrm{Kum}_{\ell}(\mathcal{C})\simeq\mathscr{O}_{\mathcal{C}}(\mathcal{C})^{\times}/\left(\mathscr{O}_{\mathcal{C}}(\mathcal{C})^{\times}\right)^{\ell}\simeq\mathscr{Z}(\mathcal{C})/\ell\mathscr{Z}(\mathcal{C})\simeq\mathbb{Z}/\ell\mathbb{Z}.$
The proof of the last point comes from [Ber$2$], $6.3.5$, where Berkovich
shows that any connected tame finite étale covering of a compact annulus is
Kummer; moreover, any $\mu_{\ell}$-torsor is tame since $\ell$ is assumed to
be prime to $p$. It is easy to extend this to the case when $\mathcal{C}$ is
not compact, since it is then identified with the colimit of its compact
subannuli. ∎
### 1.3 Tempered fundamental group
Let $X$ be a quasi-smooth strictly $k$-analytic space (not necessarily a
curve). As defined in [DJg], an _étale covering_ of $X$ is a morphism
$\varphi:Y\to X$ such that $X$ admits an open covering $X=\bigcup_{i\in
I}U_{i}$ with, for each $i\in I$:
$\varphi^{-1}(U_{i})=\coprod_{j\in J_{i}}Y_{i,j},$
where each $Y_{i,j}\to U_{i}$ is finite étale, with potentially infinite index
sets. If $X$ is connected, an étale covering $\varphi:Y\to X$ is _Galois_ when
$Y$ is connected and the action of the automorphism group
$G=\mathrm{Aut}(\varphi)$ is simply transitive.
For instance, finite étale coverings as well as topological coverings (for the
Berkovich topology) are étale coverings. Yves André introduced in [And] the
notion of _tempered covering_ , which is defined as follows:
###### Definition 1.22.
An étale covering $\varphi:Y\to X$ is _tempered_ if it is the quotient of the
composition of a topological covering and of a finite étale covering, i.e. if
there exists a commutative diagram of $k$-analytic spaces:
$\begin{array}[]{ccc}Z&\xrightarrow{\;\psi\;}&W\\\
\downarrow&&\downarrow{\scriptstyle\chi}\\\
Y&\xrightarrow{\;\varphi\;}&X\end{array}$
where $\chi$ is a finite étale covering and $\psi$ a topological covering. It
is equivalent to say that $\varphi$ becomes topological after pullback by some
finite étale covering. Let $\mathrm{Cov}^{\mathrm{temp}}(X)$ be the category
of tempered coverings of $X$.
If $x\in X$ is a geometric point, consider the fibre functor
$F_{x}:\mathrm{Cov}^{\mathrm{temp}}(X)\to\mathrm{Set}$
which maps a covering $Y\to X$ to the fibre $Y_{x}$. The _tempered fundamental
group pointed at $x$_ is defined as the automorphism group of the fibre
functor at $x$:
$\pi_{1}^{\mathrm{temp}}(X,x):=\mathrm{Aut}(F_{x}).$
The group $\pi_{1}^{\mathrm{temp}}(X,x)$ becomes a topological group, by
considering the basis of open subgroups consisting of the stabilizers
$(\mathrm{Stab}_{F_{x}(Y)}(y))_{Y\in\mathrm{Cov}^{\mathrm{temp}}(X),\;y\in
F_{x}(Y)}$. It is a prodiscrete topological group.
If $x$ and $x^{\prime}$ are two different geometric points, the functors
$F_{x}$ and $F_{x^{\prime}}$ are (non-canonically) isomorphic, and any such
isomorphism induces an isomorphism
$\pi_{1}^{\mathrm{temp}}(X,x)\simeq\pi_{1}^{\mathrm{temp}}(X,x^{\prime})$,
well defined up to inner automorphism. Thus, one can consider the _tempered
fundamental group_ $\pi_{1}^{\mathrm{temp}}(X)$, defined up to unique outer
isomorphism.
If $\pi_{1}^{\mathrm{alg}}(X,x)$ (resp. $\pi_{1}^{\mathrm{top}}(X,x)$) denotes
the group classifying pointed finite étale (resp. topological) coverings of
$X$, the natural morphism
$\pi_{1}^{\mathrm{temp}}(X,x)\to\pi_{1}^{\mathrm{top}}(X,x)$ is always
surjective, and the natural morphism
$\pi_{1}^{\mathrm{temp}}(X,x)\to\pi_{1}^{\mathrm{alg}}(X,x)$ has dense image,
so that $\pi_{1}^{\mathrm{alg}}(X,x)$ can be identified with the profinite
completion of $\pi_{1}^{\mathrm{temp}}(X,x)$:
$\pi_{1}^{\mathrm{alg}}(X,x)=\widehat{\pi_{1}^{\mathrm{temp}}(X,x)}.$
In dimension $1$, when $X$ is a $k$-analytic curve, the morphism
$\pi_{1}^{\mathrm{temp}}(X,x)\to\pi_{1}^{\mathrm{alg}}(X,x)$ is injective
(these results can be found in [And], $2.1.6$). As a consequence, the affine
and projective lines $\mathbb{A}_{k}^{1,\mathrm{an}}$ and
$\mathbb{P}_{k}^{1,\mathrm{an}}$ do not admit any non-trivial tempered
coverings:
$\pi_{1}^{\mathrm{temp}}(\mathbb{P}_{k}^{1,\mathrm{an}})\simeq\pi_{1}^{\mathrm{temp}}(\mathbb{A}_{k}^{1,\mathrm{an}})\simeq
0.$
###### Definition 1.23 (Moderate tempered coverings).
Let $\mathrm{Cov}^{\mathrm{temp},\,(p^{\prime})}(X)$ be the full subcategory
of $\mathrm{Cov}^{\mathrm{temp}}(X)$ consisting of tempered coverings which
are quotients of the composition of a topological covering and of a Galois
finite étale covering of _degree prime to $p$_. In the same way as in the
tempered case, it is possible to consider a classifying group, defined as the
automorphism group of a geometric fibre functor and well defined up to unique
outer automorphism: this group $\pi_{1}^{\mathrm{temp},\,(p^{\prime})}(X)$ is
called the _moderate tempered group_ of $X$. It is naturally a prodiscrete
topological group.
###### Remark 1.24.
When $X$ is a $k$-analytic curve, the group
$\pi_{1}^{\mathrm{temp},\,(p^{\prime})}(X)$ can be constructed group-
theoretically from $\pi_{1}^{\mathrm{temp}}(X)$ as the projective limit of
quotients of $\pi_{1}^{\mathrm{temp}}(X)$ admitting a torsion-free normal
subgroup of finite index prime to $p$.
### 1.4 Verticial, vicinal and cuspidal subgroups
We recall here some notions and terminology of [Gau] about $k$-analytically
hyperbolic curves. Let $X$ be a quasi-smooth connected $k$-analytic curve with
non-empty skeleton $S^{\mathrm{an}}(X)$, and $r_{X}:X\twoheadrightarrow
S^{\mathrm{an}}(X)$ be the canonical retraction. Let $\Sigma_{X}$ be the set
of vertices of $S^{\mathrm{an}}(X)$ (it is the set of _nodes_ of
$S^{\mathrm{an}}(X)$ in the language of [Gau]).
An edge $e$ of $S^{\mathrm{an}}(X)$ can be of two different types:
* •
it is a _vicinal_ edge whenever the connected component of
$X\setminus\Sigma_{X}$ associated to $e$, i.e.
$r_{X}^{-1}(\overset{\circ}{e})$, is relatively compact in $X$, which is the
same as asking that each of the two extremities of $e$ abuts to a vertex (it
is a "closed" edge).
* •
it is a _cusp_ whenever the associated connected component of
$X\setminus\Sigma_{X}$ is non-relatively compact in $X$, in other words when
it is either an isolated edge, or an edge with only one extremity abutting to
a vertex.
###### Remark 1.25.
The connected component of $X\setminus\Sigma_{X}$ associated to a vicinal edge
is always a $k$-analytic annulus. However, this might not always be the case
for cusps (see [Gau], Remark $2.18$). A cusp whose associated connected component
of $X\setminus\Sigma_{X}$ is an annulus will be called _annular_.
Recall from [Gau] that an étale covering $\varphi:Y\to X$ of a quasi-smooth
connected curve $X$ is called _moderate_ if for any $y\in Y$, the degree
$[\mathscr{H}(y)^{\mathrm{gal}}:\mathscr{H}(\varphi(y))]$ is prime to $p$,
where $\mathscr{H}(y)^{\mathrm{gal}}$ stands for the Galois closure of the
extension $\mathscr{H}(y)/\mathscr{H}(\varphi(y))$. The category of moderate
coverings of $X$ is a Galois category whose fundamental group, denoted
$\pi_{1}^{\mathrm{t}}(X)$, is the moderate fundamental group of $X$; it is a
profinite group.
Let $e$ be an edge of $S^{\mathrm{an}}(X)$ and $\mathcal{C}_{e}$ the
associated connected component of $X\setminus\Sigma_{X}$. Let
$\pi_{e}=\pi_{1}^{\mathrm{t}}(\mathcal{C}_{e})$ be the _moderate_ fundamental
group of $\mathcal{C}_{e}$. If $v$ is a vertex of $S^{\mathrm{an}}(X)$, the
_star_ centred at $v$, denoted $\mathrm{St}(v,X)$, is defined by
$\mathrm{St}(v,X)=\\{v\\}\sqcup\bigsqcup_{e}\mathcal{C}_{e},$
where the disjoint union is taken over all edges $e$ of $S^{\mathrm{an}}(X)$
abutting to $v$. Let ${\pi_{v}=\pi_{1}^{\mathrm{t}}(\mathrm{St}(v,X))}$ be the
_moderate_ fundamental group of $\mathrm{St}(v,X)$.
We saw in [Gau] that if $X$ is $k$-analytically hyperbolic, for any component
$c$ of $S^{\mathrm{an}}(X)$ (vertex or edge), there is a natural embedding
$\pi_{c}\hookrightarrow\pi_{1}^{\mathrm{temp},\,(p^{\prime})}(X)$. This comes
from the fact that the semi-graph of anabelioids $\mathcal{G}(X,\Sigma_{X})$
is of _injective type_ and that there is a natural isomorphism
$\pi_{1}^{\mathrm{temp}}(\mathcal{G}(X,\Sigma_{X}))\simeq\pi_{1}^{\mathrm{temp},\,(p^{\prime})}(X)$
(see [Gau], Corollary $3.36$).
###### Definition 1.26.
If $X$ is a $k$-analytically hyperbolic curve, a compact subgroup of
$\pi_{1}^{\mathrm{temp},\,(p^{\prime})}(X)$ is called:
* •
_vicinal_ if it is of the form $\pi_{e}$ for some vicinal edge $e$ of
$S^{\mathrm{an}}(X)$,
* •
_cuspidal_ if it is of the form $\pi_{e}$ for some cusp $e$ of
$S^{\mathrm{an}}(X)$,
* •
_verticial_ if it is of the form $\pi_{v}$ for some vertex $v$ of
$S^{\mathrm{an}}(X)$.
###### Remark 1.27.
The Kummer nature of moderate coverings of an annulus implies that vicinal
subgroups as well as cuspidal subgroups are always isomorphic to
$\widehat{\mathbb{Z}}^{\,(p^{\prime})}$, even for non-annular cusps. However,
a compact subgroup of $\pi_{1}^{\mathrm{temp},\,(p^{\prime})}(X)$ cannot be
both vicinal and cuspidal. Verticial subgroups are always isomorphic
to the prime-to-$p$ profinite completion of the fundamental group of a
hyperbolic Riemann surface (see [Gau], Corollary $3.23$ and proof of $3.30$).
For a $k$-analytically hyperbolic curve $X$, verticial and vicinal subgroups
can be characterised directly from the group
$\pi_{1}^{\mathrm{temp},\,(p^{\prime})}(X)$: verticial subgroups correspond
exactly to (conjugacy classes of) maximal compact subgroups, whereas vicinal
subgroups correspond to (conjugacy classes of) non-trivial intersections of
two maximal compact subgroups. Therefore one can reconstruct the truncated
skeleton $S^{\mathrm{an}}(X)^{\natural}$ from the tempered group
$\pi_{1}^{\mathrm{temp},\,(p^{\prime})}(X)$ (so also from
$\pi_{1}^{\mathrm{temp}}(X)$, since the former can be deduced from the latter
by taking a suitable projective limit, see 1.24).
## 2 Harmonic cochains and torsors
### 2.1 Splitting conditions of $\mu_{p}$-torsors
###### Lemma 2.1.
Let $\xi$ and $\xi^{\prime}$ be two distinct $p^{\mathrm{th}}$-roots of unity
in $k$ (recall that $k$ is assumed to be algebraically closed). Then
$|\xi-\xi^{\prime}|=p^{-\frac{1}{p-1}}$.
###### Proof.
Write
$\displaystyle{\Phi_{p}=\frac{X^{p}-1}{X-1}=\sum_{i=0}^{p-1}X^{i}=\prod_{\xi\in\mu^{\prime}_{p}}(X-\xi)\in\mathbb{Q}[X]}$
for the $p^{\mathrm{th}}$ cyclotomic polynomial, where $\mu_{p}^{\prime}$
stands for the set of the $p-1$ primitive $p^{\mathrm{th}}$-roots of unity in
$k$. The evaluation at $1$ gives: $p=\prod_{\xi\in\mu^{\prime}_{p}}(1-\xi).$
As $\xi$ ranges over $\mu^{\prime}_{p}$, all the $1-\xi$ have the same norm,
since they lie in the same $\mathrm{Gal}(k/\mathbb{Q})$-conjugacy class.
Taking norms gives $|1-\xi|^{p-1}=|p|=p^{-1}$, so $|1-\xi|=p^{-\frac{1}{p-1}}$,
and we obtain the result since multiplication by any $p^{\mathrm{th}}$-root of
unity is an isometry of $k$. ∎
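For instance, for $p=2$ the two square roots of unity are $\pm 1$, and indeed $|1-(-1)|=|2|=2^{-1}=p^{-\frac{1}{p-1}}$ (here $p-1=1$).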
An étale covering $\varphi:Y\to X$ between two $k$-analytic curves _totally
splits_ over a point $x\in X$ if for any $y\in\varphi^{-1}(\\{x\\})$, the
extension $\mathscr{H}(x)\to\mathscr{H}(y)$ is an isomorphism. When $\varphi$
is of degree $n$, $\varphi$ totally splits over $x$ if and only if the fibre
$\varphi^{-1}(\\{x\\})$ has exactly $n$ elements, which is the same as saying
that locally, over a neighbourhood of $x$, $\varphi$ is a topological covering
(see [And], III, $1.2.1$).
The following proposition, which makes precise the splitting set of the
$\mu_{p^{h}}$-torsor given by the function $\sqrt[p^{h}]{1+T}$, will be of
paramount importance in this article.
###### Proposition 2.2.
If $h\in\mathbb{N}^{\times}$, the étale covering
$\mathbb{G}_{m}^{\mathrm{an}}\xrightarrow{z\mapsto
z^{p^{h}}}\mathbb{G}_{m}^{\mathrm{an}}$ totally splits over a point
$\eta_{z_{0},r}$ satisfying $r<|z_{0}|=:\alpha$ if and only if $r<\alpha
p^{-h-\frac{1}{p-1}}$. More precisely, the inverse image of $\eta_{z_{0},r}$
contains:
* •
only one element when $r\in[\alpha p^{-\frac{p}{p-1}},\alpha];$
* •
$p^{i}$ elements when $r\in[\alpha p^{-i-\frac{p}{p-1}},\alpha
p^{-i-\frac{1}{p-1}}[,$ with $1\leqslant i\leqslant h-1$;
* •
$p^{h}$ elements when $r\in[0,\alpha p^{-h-\frac{1}{p-1}}[.$
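For instance, for $p=3$, $h=2$ and $|z_{0}|=1$ (the situation of figure 1 below): the fibre has one element for $r\in[3^{-3/2},1[$, three elements for $r\in[3^{-5/2},3^{-3/2}[$, and nine elements (total splitting) for $r\in[0,3^{-5/2}[$.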
###### Proof.
Let $f:\mathbb{G}_{m}^{\mathrm{an}}\rightarrow\mathbb{G}_{m}^{\mathrm{an}}$ be
the covering given by $f(z)=z^{p}$. Let $z_{1}\in k^{*}$ and
$\rho\in\mathbb{R}_{+}$ satisfying $\rho<|z_{1}|$ (so that
$\eta_{z_{1},\rho}\notin]0,\infty[$). In order to compute
$f(\eta_{z_{1},\rho})$, notice that for any polynomial $P\in k[T]$:
$|P\left(f(\eta_{z_{1},\rho})\right)|=|(P\circ
f)(\eta_{z_{1},\rho})|=|P(T^{p})(\eta_{z_{1},\rho})|.$
Thus,
$|(T-z_{1}^{p})(f(\eta_{z_{1},\rho}))|=|(T^{p}-z_{1}^{p})(\eta_{z_{1},\rho})|$.
Moreover:
$T^{p}-z_{1}^{p}=\sum_{i=1}^{p}\binom{p}{i}z_{1}^{p-i}(T-z_{1})^{i}=\sum_{i=1}^{p}\gamma_{i}(T-z_{1})^{i},$
where $\gamma_{i}=\binom{p}{i}z_{1}^{p-i}$, with:
$|\gamma_{i}|=\left\\{\begin{array}[]{ll}1&\hbox{if }i=p\\\
p^{-1}|z_{1}|^{p-i}&\hbox{if }1\leqslant i\leqslant p-1\end{array}\right.$
Consequently, $|(T-z_{1}^{p})(f(\eta_{z_{1},\rho}))|=\max_{1\leqslant
i\leqslant
p}\\{|\gamma_{i}|\rho^{i}\\}=\max\\{\rho^{p},\left(p^{-1}\rho^{i}|z_{1}|^{p-i}\right)_{1\leqslant
i\leqslant p-1}\\}$. Since we assumed $\rho<|z_{1}|$, we get
$|(T-z_{1}^{p})(f(\eta_{z_{1},\rho}))|=\max\\{\rho^{p},p^{-1}\rho|z_{1}|^{p-1}\\}$,
that is to say:
$|(T-z_{1}^{p})(f(\eta_{z_{1},\rho}))|=\left\\{\begin{array}[]{ll}p^{-1}\rho|z_{1}|^{p-1}&\hbox{if
}\rho\leqslant|z_{1}|p^{-\frac{1}{p-1}}\\\ \rho^{p}&\hbox{if
}\rho\geqslant|z_{1}|p^{-\frac{1}{p-1}}\end{array}\right.$
Define $\widehat{\rho}:=|(T-z_{1}^{p})(f(\eta_{z_{1},\rho}))|$. As
$f(\eta_{z_{1},\rho})$ is a multiplicative seminorm, for any $k\in\mathbb{N}$
we have $|(T-z_{1}^{p})^{k}(f(\eta_{z_{1},\rho}))|=\widehat{\rho}^{k}$. By
writing:
$(T^{p}-z_{1}^{p})^{k}=\sum_{j=0}^{pk}\lambda_{k,j}(T-z_{1})^{j},$
we obtain: $\widehat{\rho}^{k}=\max_{0\leqslant j\leqslant
pk}\\{|\lambda_{k,j}|\rho^{j}\\}$.
Let $P=\sum_{k=0}^{n}\alpha_{k}(T-z_{1}^{p})^{k}\in k[T]$ be a polynomial.
Setting $\lambda_{k,j}:=0$ for $j>pk$, one can write:
$\displaystyle P(T^{p})$
$\displaystyle=\sum_{k=0}^{n}\alpha_{k}(T^{p}-z_{1}^{p})^{k}=\sum_{k=0}^{n}\alpha_{k}\sum_{j=0}^{pn}\lambda_{k,j}(T-z_{1})^{j}$
$\displaystyle=\sum_{j=0}^{pn}\underbrace{\left(\sum_{k=0}^{n}\alpha_{k}\lambda_{k,j}\right)}_{:=\widetilde{\alpha}_{j}}(T-z_{1})^{j}.$
Then we have:
$\displaystyle|P(f(\eta_{z_{1},\rho}))|$ $\displaystyle=\max_{0\leqslant
j\leqslant pn}\\{|\widetilde{\alpha}_{j}|\rho^{j}\\}=\max_{0\leqslant
k\leqslant n}\left\\{|\alpha_{k}|\cdot\max_{0\leqslant j\leqslant
pn}\\{|\lambda_{k,j}|\rho^{j}\\}\right\\}$ $\displaystyle=\max_{0\leqslant
k\leqslant n}\\{|\alpha_{k}|\widehat{\rho}^{k}\\}$
Therefore $f(\eta_{z_{1},\rho})=\eta_{z_{1}^{p},\widehat{\rho}}$, which can be
written:
$f(\eta_{z_{1},\rho})=\left\\{\begin{array}[]{ll}\eta_{z_{1}^{p},p^{-1}\rho|z_{1}|^{p-1}}&\hbox{if
}\rho\leqslant|z_{1}|p^{-\frac{1}{p-1}}\\\ \eta_{z_{1}^{p},\rho^{p}}&\hbox{if
}\rho\geqslant|z_{1}|p^{-\frac{1}{p-1}}\end{array}\right.$
Let us now find the preimages under $f$ of $\eta_{z_{0},r}$, where $0\leqslant
r<\alpha:=|z_{0}|$. Define:
$\widetilde{r}=\left\\{\begin{array}[]{ll}rp\alpha^{-\frac{p-1}{p}}&\hbox{if
}r\leqslant\alpha p^{-\frac{p}{p-1}}\\\ r^{\frac{1}{p}}&\hbox{if
}r\geqslant\alpha p^{-\frac{p}{p-1}}\end{array}\right.$
From the above, if $\widetilde{z_{0}}$ is a $p^{\mathrm{th}}$-root of
$z_{0}$, then:
$\eta_{\widetilde{z_{0}},\widetilde{r}}\in
f^{-1}\left(\\{\eta_{z_{0},r}\\}\right),$
and $f^{-1}\left(\\{\eta_{z_{0},r}\\}\right)$ consists of all conjugates
$\eta_{\xi\widetilde{z_{0}},\widetilde{r}}$ of
$\eta_{\widetilde{z_{0}},\widetilde{r}}$ for $\xi\in\mu_{p}$. Therefore:
$f^{-1}\left(\\{\eta_{z_{0},r}\\}\right)=\left\\{\begin{array}[]{ll}\\{\eta_{\xi\widetilde{z_{0}},rp\alpha^{-\frac{p-1}{p}}}\\}_{\xi\in\mu_{p}}&\hbox{if
}r\leqslant\alpha p^{-\frac{p}{p-1}}\\\
\\{\eta_{\xi\widetilde{z_{0}},r^{\frac{1}{p}}}\\}_{\xi\in\mu_{p}}&\hbox{if
}r\geqslant\alpha p^{-\frac{p}{p-1}}\end{array}\right.$
Since $|\widetilde{z_{0}}|=\alpha^{\frac{1}{p}}$, we have
$|\xi\widetilde{z_{0}}-\xi^{\prime}\widetilde{z_{0}}|=\alpha^{\frac{1}{p}}p^{-\frac{1}{p-1}}$
for $\xi\neq\xi^{\prime}\in\mu_{p}$, by lemma 2.1. Thus,
$f^{-1}\left(\\{\eta_{z_{0},r}\\}\right)$ has a unique element if
$r\geqslant\alpha p^{-\frac{p}{p-1}}$, and $p$ elements otherwise.
For the general case, with $h\geqslant 1$, induction on $h$ leads to the
conclusion.
∎
Figure 1: Covering $\mathbb{G}_{m}^{\mathrm{an}}\xrightarrow{z\mapsto
z^{9}}\mathbb{G}_{m}^{\mathrm{an}}$ with $p=3$, $h=2$ and $z_{0}=1$.
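The intervals of proposition 2.2 translate directly into an executable count of fibre points: note that $\alpha p^{-i-\frac{p}{p-1}}=\alpha p^{-(i+1)-\frac{1}{p-1}}$, so these intervals tile $[0,\alpha]$, and the fibre cardinality is determined by how many of the thresholds $\alpha p^{-j-\frac{1}{p-1}}$ ($1\leqslant j\leqslant h$) lie strictly above $r$. The following is a minimal sketch, not part of the mathematical development: the helper name `fibre_size` is ours, and the radii are approximated by floating-point numbers.

```python
def fibre_size(r, alpha, p, h):
    """Cardinality of the fibre of eta_{z0,r} (with |z0| = alpha and r < alpha)
    under z -> z^(p**h), following the intervals of proposition 2.2."""
    assert 0 <= r < alpha
    # The fibre has p^i elements, where i counts the thresholds
    # alpha * p^(-j - 1/(p-1)), j = 1, ..., h, lying strictly above r.
    i = sum(1 for j in range(1, h + 1) if r < alpha * p ** (-j - 1 / (p - 1)))
    return p ** i

# The situation of figure 1 (p = 3, h = 2, |z0| = 1): the two thresholds are
# 3^(-3/2) ~ 0.192 and 3^(-5/2) ~ 0.064.
print(fibre_size(0.3, 1.0, 3, 2))   # 1
print(fibre_size(0.1, 1.0, 3, 2))   # 3
print(fibre_size(0.05, 1.0, 3, 2))  # 9
```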
### 2.2 Cochain morphism
We are going to define the important notion of the _$\mathbb{Z}/n\mathbb{Z}$
-cochain_ associated to a $\mu_{n}$-torsor. It is precisely from a close look
at the behaviour of such cochains that it will be possible, in section 4, to
extract some information about lengths of annuli.
###### Definition 2.3 (Harmonic cochains).
Let $\Gamma$ be a locally finite graph, and $A$ an abelian group. A _harmonic
$A$-cochain on $\Gamma$_ is a map $c:\\{\text{oriented edges of
}\Gamma\\}\rightarrow A$ satisfying the two following conditions:
1. 1.
if $e$ and $e^{\prime}$ correspond to the same edge with its two different
orientations, then $c(e^{\prime})=-c(e)$.
2. 2.
if $x$ is a vertex of $\Gamma$:
$\sum_{\text{edges oriented towards }x}c(e)=0_{A}.$
The set of harmonic $A$-cochains of $\Gamma$ forms an abelian group denoted
$\mathrm{Harm}(\Gamma,A)$. In the following, we will simply write
$A$-cochains, or cochains when $A$ is explicit.
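For instance, if $\Gamma$ is the graph with two vertices joined by two edges, a harmonic $A$-cochain is determined by its value $c$ on one of the edges (oriented from the first vertex towards the second): the other edge, taken with the same orientation, must then carry $-c$, and $\mathrm{Harm}(\Gamma,A)\simeq A$.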
Let $X$ be a non-empty $k$-analytic curve with skeleton
$\mathbb{G}=S^{\mathrm{an}}(X)$, and truncated skeleton
$\mathbb{G}^{\natural}=S^{\mathrm{an}}(X)^{\natural}$.
###### Lemma 2.4.
Let $n\in\mathbb{N}^{\times}$:
* •
there exists a natural morphism:
$H^{1}(X,\mu_{n})\xrightarrow{\theta}\mathrm{Harm}(\mathbb{G},\mathbb{Z}/n\mathbb{Z}),$
* •
in the case where $X$ has finite skeleton, has no point of genus
$>0$, has empty boundary and has only annular cusps, the image of
$\theta$ contains
$\mathrm{Harm}(\mathbb{G}^{\natural},\mathbb{Z}/n\mathbb{Z})$ (seen as a
subgroup of $\mathrm{Harm}(\mathbb{G},\mathbb{Z}/n\mathbb{Z})$ by extending
cochains by $0$ on all the cuspidal edges of $\mathbb{G}$).
###### Proof.
The Kummer exact sequence gives the following exact sequence:
$1\rightarrow\mathscr{O}_{X}(X)^{\times}/\left(\mathscr{O}_{X}(X)^{\times}\right)^{n}\rightarrow
H^{1}(X,\mu_{n})\rightarrow\,_{n}H^{1}(X,\mathbb{G}_{m})\rightarrow 1,$
where ${}_{n}H^{1}(X,\mathbb{G}_{m})$ denotes the $n$-torsion subgroup of
$H^{1}(X,\mathbb{G}_{m})$.
Moreover $H^{1}(X,\mathbb{G}_{m})=H^{1}_{\mathrm{top}}(X,\mathbb{G}_{m})$:
any étale $\mathbb{G}_{m}$-torsor is topological; this comes from [Ber$2$]
($4.1.10$).
Let $h\in H^{1}(X,\mu_{n})$ and $\overline{h}$ its image in
${}_{n}H^{1}(X,\mathbb{G}_{m})$. Thus, if $x\in X$, there exists an open
neighbourhood $\mathscr{U}$ of $x$ in $X$ such that $\overline{h}$ is trivial
on $\mathscr{U}$. Then $h_{|\mathscr{U}}$ comes from a function
$f\in\mathscr{O}_{\mathscr{U}}(\mathscr{U})^{\times}$ defined modulo
$n^{\mathrm{th}}$ powers. There is a natural morphism:
$\displaystyle{\mathscr{O}_{\mathscr{U}}(\mathscr{U})^{\times}\xrightarrow{\theta_{\mathscr{U}}}\mathrm{Harm}(S^{\mathrm{an}}(\mathscr{U}),\mathbb{Z})}$
which factorises through:
$\displaystyle{\mathscr{O}_{\mathscr{U}}(\mathscr{U})^{\times}/\left(\mathscr{O}_{\mathscr{U}}(\mathscr{U})^{\times}\right)^{n}\rightarrow\mathrm{Harm}\left(S^{\mathrm{an}}(\mathscr{U}),\mathbb{Z}/n\mathbb{Z}\right)}.$
This morphism $\theta_{\mathscr{U}}$ can be constructed in the following way:
if $e$ is an oriented edge of $S^{\mathrm{an}}(\mathscr{U})$ and $r$ is the
canonical retraction of $\mathscr{U}$ on its skeleton, then $r^{-1}(e)$ is
isomorphic to some open annulus of $\mathbb{P}_{k}^{1,\mathrm{an}}$ defined by
the condition $\\{1<|T|<\lambda\\}$, where the beginning of the edge
corresponds to $1$, whereas the end corresponds to $\lambda$.
Let
$\tau:\\{z\in\mathbb{P}_{k}^{1,\mathrm{an}},1<|T(z)|<\lambda\\}\xrightarrow{\sim}r^{-1}(e)$
be such an isomorphism, and
$\alpha\in\mathscr{O}_{\mathscr{U}}(\mathscr{U})^{\times}$. There exists a
unique $m\in\mathbb{Z}$ such that $\alpha\circ\tau$ is written $z\mapsto
z^{m}g(z)$ with $g$ of constant norm. This comes from the characterization of
invertibility of analytic functions on an annulus, and $m$ is the degree of
the unique strictly dominant monomial of $\alpha\circ\tau$. We then set
$\theta_{\mathscr{U}}(\alpha)(e)=m$; this defines an element of
$\mathrm{Harm}(S^{\mathrm{an}}(\mathscr{U}),\mathbb{Z})$.
We have $S^{\mathrm{an}}(X)\cap\mathscr{U}\subseteq
S^{\mathrm{an}}(\mathscr{U})$, but the inclusion may a priori be strict.
However, we are going to show that the support of $\theta_{\mathscr{U}}(f)$
(i.e. the set of edges $e$ of $S^{\mathrm{an}}(\mathscr{U})$ such that
$\theta_{\mathscr{U}}(f)(e)\neq 0$) is included in
$S^{\mathrm{an}}(X)\cap\mathscr{U}$. Let $e$ be an oriented edge of
$S^{\mathrm{an}}(\mathscr{U})$ not included in $S^{\mathrm{an}}(X)$. If $y\in
e$, $y$ belongs to an open disk $\mathcal{D}$ of $X$. Then there exists a
closed disk $\mathcal{D}_{0}\varsubsetneq\mathcal{D}$ containing $y$ in its
interior. As $\mathcal{D}_{0}$ is a closed disk, its Picard group
$\mathrm{Pic}\left(\mathcal{D}_{0}\right)$ is trivial. Therefore, the
$\mu_{n}$-torsor $h_{|\mathcal{D}_{0}}$ is given by a function
$f_{0}\in\mathscr{O}_{\mathcal{D}_{0}}(\mathcal{D}_{0})^{\times}$. Moreover,
any invertible function on a closed disk has constant norm, hence the cochain
associated to $f_{0}$ in a neighbourhood of $y$ is trivial. In particular,
$\theta_{\mathscr{U}\cap\mathcal{D}_{0}}(f_{0})$ is the trivial cochain on
$S^{\mathrm{an}}(\mathscr{U}\cap\mathcal{D}_{0})$. Moreover, all these local
constructions are compatible with one another:
$\theta_{\mathscr{U}\cap\mathcal{D}_{0}}(f_{0})=\theta_{\mathscr{U}\cap\mathcal{D}_{0}}(f)$.
Thus $\theta_{\mathscr{U}}(f)(e)=0$, so the support of
$\theta_{\mathscr{U}}(f)$ is included in $S^{\mathrm{an}}(X)\cap\mathscr{U}$.
These local constructions $x\mapsto\theta_{\mathscr{U}}(f)$ can be glued
together to finally give a morphism:
$H^{1}(X,\mu_{n})\rightarrow\mathrm{Harm}(\mathbb{G},\mathbb{Z}/n\mathbb{Z})$.
For the second point, let us first explain how to embed $X$ in the
analytification of a Mumford $k$-curve: let $X^{\prime}$ be a proper
$k$-analytic curve obtained from $X$ by prolongation of each cusp by a disk.
Then $X^{\prime}$ is the analytification $\mathscr{X}^{\mathrm{an}}$ of a
Mumford $k$-curve $\mathscr{X}$. Moreover
$\mathbb{G}^{\natural}=S^{\mathrm{an}}(X^{\prime})$: the annular cusps of $X$
do not appear anymore in the skeleton of $X^{\prime}$ since they are prolonged
by disks.
As $\mathbb{G}^{\natural}=S^{\mathrm{an}}(X^{\prime})$, from [Lep2] we have a
morphism
$\overline{\theta}:H^{1}(X^{\prime},\mu_{n})\rightarrow\mathrm{Harm}(\mathbb{G},\mathbb{Z}/n\mathbb{Z})$
whose image exactly equals
$\mathrm{Harm}(\mathbb{G}^{\natural},\mathbb{Z}/n\mathbb{Z})$. If $\iota$
denotes the embedding of $X$ in $X^{\prime}$, there is a commutative diagram:
$\begin{array}[]{ccc}H^{1}(X^{\prime},\mu_{n})&\xrightarrow{\;\iota^{*}\;}&H^{1}(X,\mu_{n})\\\
&{\scriptstyle\overline{\theta}}\searrow&\downarrow{\scriptstyle\theta}\\\
&&\mathrm{Harm}(\mathbb{G},\mathbb{Z}/n\mathbb{Z})\end{array}$
which is enough to conclude that
$\mathrm{Harm}(\mathbb{G}^{\natural},\mathbb{Z}/n\mathbb{Z})\subseteq\mathrm{im}(\theta)$.
∎
###### Remark 2.5.
As the morphism
$H^{1}(X,\mu_{n})\xrightarrow{\theta}\mathrm{Harm}(\mathbb{G},\mathbb{Z}/n\mathbb{Z})$
exists for any $n$, we will from now on consider $\theta$ as a map:
$\theta:\bigsqcup_{n\in\mathbb{N}^{\times}}H^{1}(X,\mu_{n})\rightarrow\bigsqcup_{n\in\mathbb{N}^{\times}}\mathrm{Harm}(\mathbb{G},\mathbb{Z}/n\mathbb{Z})$
which induces for each $n\in\mathbb{N}^{\times}$ a morphism
$H^{1}(X,\mu_{n})\rightarrow\mathrm{Harm}(\mathbb{G},\mathbb{Z}/n\mathbb{Z})$.
### 2.3 Cochains and minimality of the splitting radius on an annulus
Let $\mathcal{C}$ be a $k$-analytic annulus of finite length, $\alpha\in k$
and $\eta_{\alpha,0}=\eta_{\alpha}$ (sometimes simply denoted $\alpha$) the
associated rigid point of $\mathbb{P}_{k}^{1,\mathrm{an}}$. We are going to
show that $\mu_{p}$-torsors with non-trivial cochain modulo $p$ on
$\mathcal{C}$ satisfy a minimality condition which makes it possible to
distinguish them from torsors with trivial cochain.
###### Definition 2.6.
If $X$ is a $k$-analytic curve and $f\in H^{1}(X,\mu_{n})$, let
$\mathscr{D}(f)$ denote the set of points of $X$ over which the analytic
torsor defined by $f$ totally splits.
###### Definition 2.7 (Splitting radius of a torsor on a rigid point).
Assume $\mathcal{C}$ is the subannulus of $\mathbb{P}_{k}^{1,\mathrm{an}}$
defined by $|T|\in I$, where $I$ is an interval of $\mathbb{R}_{>0}$. If
$\eta_{\alpha}\in\mathcal{C}$, for any torsor ${f\in
H^{1}(\mathcal{C},\mu_{p})}$, let $\varrho_{f}(\alpha)$ be the _splitting
radius of $f$ at $\alpha$_, defined by:
$\varrho_{f}(\alpha)=\sup\left\\{r\in]0,|\alpha|[,\eta_{\alpha,r}\in\mathscr{D}(f)\right\\}$
The following proposition shows how one can detect with this notion the
triviality of the $\mathbb{Z}/p\mathbb{Z}$-cochain $\theta(f)$.
###### Proposition 2.8.
Fix a rigid point $\eta_{\alpha}\in\mathcal{C}$. Then $\varrho_{f}(\alpha)$
is minimal exactly when the cochain of $f\in H^{1}(\mathcal{C},\mu_{p})$ is
non-trivial modulo $p$, i.e. when $f\notin\ker(\theta)$.
More precisely:
* •
$f\notin\ker(\theta)$ if and only if
$\displaystyle{\varrho_{f}(\alpha)=|\alpha|\,p^{-\frac{p}{p-1}}},$
* •
$f\in\ker(\theta)$ if and only if
$\displaystyle{\varrho_{f}(\alpha)>|\alpha|\,p^{-\frac{p}{p-1}}}$.
###### Proof.
The exact Kummer sequence gives a morphism
$\displaystyle{\mathscr{O}_{\mathcal{C}}(\mathcal{C})^{\times}/(\mathscr{O}_{\mathcal{C}}(\mathcal{C})^{\times})^{p}\hookrightarrow
H^{1}(\mathcal{C},\mu_{p})}$
which becomes an isomorphism when one restricts to any compact subannulus,
because of the triviality of the Picard group of any $k$-affinoid subspace. Up
to restricting $\mathcal{C}$, one can assume $f$ is given by a function
$g\in\mathscr{O}_{\mathcal{C}}(\mathcal{C})^{\times}$, which means that the
associated analytic torsor is defined by:
$\displaystyle{\mathscr{O}_{\mathcal{C}}(\mathcal{C})[S]/(S^{p}-g)}$.
Studying the splitting radius of $f$ along the interval
$[\eta_{\alpha},\eta_{\alpha,|\alpha|}]$ amounts to performing the change of
coordinate $t:=T-\alpha$ and studying the convergence of
$\sqrt[p\;]{g(t+\alpha)}$.
* •
Assume $f\notin\ker(\theta)$. There exists $n\in\mathbb{Z}\setminus\\{0\\}$,
prime to $p$, such that $g$ has growth rate $n$, i.e. $n$ is the degree of the
strictly dominant monomial: $g(T)=a_{n}T^{n}(1+u(T))$, with $|u|<1$ on
$\mathcal{C}$. After normalization ($k$ is algebraically closed), one can
assume $a_{n}=1$.
The series defining $\sqrt[n\,]{1+T}$ has convergence radius equal to $1$
since $n$ is prime to $p$. Therefore, there exists a function $v(T)$ of norm
$<1$ on $\mathcal{C}$ such that $(1+v)^{n}=1+u$, so
$g(T)=\left(T(1+v)\right)^{n}$. As $T(1+v)$ is a coordinate function, we can
assume $g(T)=T^{n}$. Since $n$ is prime to $p$, the two $\mu_{p}$-torsors
given by the functions $T^{n}$ and $T$ have the same sets of total splitting,
so we can assume $g(T)=T$. Then the result is given by proposition 2.2.
* •
Assume $f\in\ker(\theta)$. This means that the degree of the strictly dominant
monomial of $g$ (the growth rate) is $0$ modulo $p$: there exists
$m\in\mathbb{Z}$ such that $g(T)=a_{0}T^{mp}(1+u(T))$, with $|u|<1$ on
$\mathcal{C}$.
Up to division by $T^{mp}$ (it is the class of $g$ modulo
$(\mathscr{O}(\mathcal{C})^{\times})^{p}$ which determines the torsor $f$), we
can take $m=0$. Let’s write
$g(T)=\sum_{k\in\mathbb{Z}}a_{k}T^{k}.$
Thus, for all $r\in I$ and $k\in\mathbb{Z}\setminus\\{0\\}$,
$|a_{k}|r^{k}<|a_{0}|$. Up to normalization and restriction to a subannulus,
we can assume that $a_{0}=1$ and that the extremities of the interval $I$
(open or closed) are $1-\varepsilon$ and $1$ for some $\varepsilon\in]0,1[$.
In this case, for all $k\in\mathbb{N}^{\times}$, we have $|a_{k}|<1$ and
$|a_{-k}|<(1-\varepsilon)^{k}$. For all $i\geqslant 0$ and $k\in\mathbb{Z}$,
let’s write ${{k\choose i}=\frac{k(k-1)\ldots(k-i+1)}{i!}}$. Using the
generalised binomial expansion, write:
$\displaystyle g(t+\alpha)$
$\displaystyle=\sum_{k\in\mathbb{Z}}a_{k}(t+\alpha)^{k}$
$\displaystyle=\sum_{k\in\mathbb{Z}}a_{k}\left(\sum_{i\geqslant 0}{k\choose
i}\alpha^{k-i}t^{i}\right)$ $\displaystyle=\sum_{i\geqslant
0}\,\underbrace{\left(\sum_{k\in\mathbb{Z}}a_{k}{k\choose
i}\alpha^{k-i}\right)}_{A_{i}}t^{i}=\sum_{i\geqslant 0}A_{i}t^{i}.$
We have $|\alpha|\leqslant 1$ since $\eta_{\alpha}\in\mathcal{C}$, which
implies $|A_{0}|=|a_{0}|=1$. Writing $v(t)=\sum_{i\geqslant 1}A_{i}t^{i}$,
proposition 2.2 states that the torsor $f$ splits totally on $\eta_{\alpha,r}$
as soon as $\displaystyle{|v(\eta_{0,r})|<p^{-\frac{p}{p-1}}}$. Consequently,
$\varrho_{f}(\alpha)\geqslant r$ if $|A_{i}|r^{i}<p^{-\frac{p}{p-1}}$ for all
$i\geqslant 1$. Therefore:
$\varrho_{f}(\alpha)\geqslant\inf_{i\geqslant
1}\left\\{\sqrt[i]{|A_{i}|^{-1}}p^{-\frac{1}{i}\left(\frac{p}{p-1}\right)}\right\\}.$
Moreover, for all $k\in\mathbb{Z}\setminus\\{0\\}$, $|a_{k}\alpha^{k}|<1$.
Then every $i\geqslant 1$ satisfies $|A_{i}|<|\alpha|^{-i}$, so:
$\sqrt[i]{|A_{i}|^{-1}}p^{-\frac{1}{i}\left(\frac{p}{p-1}\right)}>|\alpha|\,p^{-\frac{1}{i}\left(\frac{p}{p-1}\right)}.$
We deduce :
$\varrho_{f}(\alpha)\geqslant\min\left\\{|A_{1}|^{-1}p^{-\frac{p}{p-1}},|\alpha|\,p^{-\frac{1}{2}\left(\frac{p}{p-1}\right)}\right\\}>|\alpha|\,p^{-\frac{p}{p-1}}.$
∎
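For concreteness, the threshold constant appearing in this dichotomy can be evaluated numerically. The following minimal Python sketch (an illustration added for the reader, not part of the original argument) tabulates the normalized minimal splitting radius $p^{-\frac{p}{p-1}}$ for the first few residual characteristics.

```python
from fractions import Fraction

# Normalized minimal splitting radius rho_f(alpha)/|alpha| = p^(-p/(p-1))
# from Proposition 2.8, for the first few residual characteristics p.
for p in (2, 3, 5, 7):
    e = Fraction(p, p - 1)
    print(f"p = {p}: exponent p/(p-1) = {e}, p^(-p/(p-1)) = {p ** -float(e):.6f}")
```

The constant tends to $1/p$ as $p$ grows, since $\frac{p}{p-1}\to 1$.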
###### Remark 2.9.
If $h>1$, it is no longer true that $\varrho_{f}(\alpha)$ is minimal if and
only if $f\in H^{1}(\mathcal{C},\mu_{p^{h}})$ has a non-trivial
$\mathbb{Z}/p^{h}\mathbb{Z}$-cochain, i.e. when $f\notin\ker(\theta)$. It is
not difficult to show that if $f$ has a cochain _prime to $p$_, then:
$\displaystyle{\varrho_{f}(\alpha)=|\alpha|\,p^{-h-\frac{1}{p-1}}}.$
However, if $f^{\prime}\in H^{1}(\mathcal{C},\mu_{p^{h}})$ is the element
corresponding to the function
$T^{p}\in\mathscr{O}_{\mathcal{C}}(\mathcal{C})^{\times}$, then its
$\mathbb{Z}/p^{h}\mathbb{Z}$-cochain $\theta(f^{\prime})$ is non-trivial since
it equals $p$, but one can show that its splitting radius on $\alpha$
satisfies:
$\displaystyle{\varrho_{f^{\prime}}(\alpha)\geqslant|\alpha|\,p^{1-h-\frac{1}{p-1}}=p\,\varrho_{f}(\alpha)>\varrho_{f}(\alpha)},$
implying that $\varrho_{f^{\prime}}(\alpha)$ is not minimal even though
$f^{\prime}\notin\ker(\theta)$. Moreover, if the annulus $\mathcal{C}$ is for
instance given by the condition $|T|\in]1-\varepsilon,1[$ with
$\varepsilon>0$, the torsor $g\in H^{1}(\mathcal{C},\mu_{p^{h}})$ given by the
function $1+T\in\mathscr{O}_{\mathcal{C}}(\mathcal{C})^{\times}$ has trivial
cochain, so belongs to $\ker(\theta)$, but its splitting radius at a rigid
point $\eta_{\alpha}$ is:
$\varrho_{g}(\alpha)=p^{-h-\frac{1}{p-1}}.$
Consequently, as soon as $|\alpha|\in]\frac{1}{p},1[$, we have
$\varrho_{g}(\alpha)<\varrho_{f^{\prime}}(\alpha)$.
###### Corollary 2.10.
If $f\in H^{1}(\mathcal{C},\mu_{p})$, the triviality of the cochain
corresponding to $f$ can be detected set-theoretically from the splitting sets
of the different $\mu_{p}$-torsors on $\mathcal{C}$:
* •
$\displaystyle{f\notin\ker(\theta)\;\Longleftrightarrow\;\mathscr{D}(f)_{[2]}\subseteq\bigcap_{f^{\prime}\in H^{1}(\mathcal{C},\mu_{p})}\mathscr{D}(f^{\prime})_{[2]}}$,
* •
$\displaystyle f\in\ker(\theta)\;\Longleftrightarrow\;\exists f^{\prime}\in H^{1}(\mathcal{C},\mu_{p}),\;\mathscr{D}(f)_{[2]}\nsubseteq\mathscr{D}(f^{\prime})_{[2]}\;\Longleftrightarrow\;\forall f^{\prime}\in H^{1}(\mathcal{C},\mu_{p})\setminus\ker(\theta),\;\mathscr{D}(f)_{[2]}\nsubseteq\mathscr{D}(f^{\prime})_{[2]}.$
###### Proof.
It is a direct consequence of 2.8 coupled with the density of $\mathcal{C}_{[2]}$ in
$\mathcal{C}$. ∎
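As a toy set-theoretic illustration (with finite sets standing in for the splitting sets $\mathscr{D}(f)_{[2]}$; the names and values below are invented), the criterion reduces to subset tests:

```python
# Hypothetical finite "splitting sets"; by the corollary, a torsor lies
# outside ker(theta) exactly when its splitting set is contained in every
# other splitting set.
splitting = {
    "f_nontrivial": {1, 2},        # minimal splitting set
    "g_nontrivial": {1, 2},        # another torsor with non-trivial cochain
    "h_trivial":    {1, 2, 3, 4},  # strictly larger set: trivial cochain
}

def outside_kernel(name: str) -> bool:
    return all(splitting[name] <= other for other in splitting.values())

for name in splitting:
    print(name, "->", "not in ker(theta)" if outside_kernel(name) else "in ker(theta)")
```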
### 2.4 Characterisation of $\mu_{p}$-torsors with trivial cochain
The study carried out so far, which gives a set-theoretic characterisation of
$\mu_{p}$-torsors with trivial $\mathbb{Z}/p\mathbb{Z}$-cochain, only deals
with $k$-analytic annuli. In order to extend these considerations, we will
need a definition and a few restrictions.
###### Definition 2.11.
An edge $e$ of a graph $\mathbb{H}$ is a _bridge_ if and only if the map
$\pi_{0}(\mathbb{H}\setminus\\{e\\})\to\pi_{0}(\mathbb{H})$ is not injective,
which happens when the edge $e$ "separates" several connected components of
$\mathbb{H}\setminus\\{e\\}$. The graph is said to be _without bridge_ when
none of its edges is a bridge.
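For a computational picture of this notion, bridges of a finite graph can be listed with a standard graph library; the sketch below is a generic illustration on a made-up graph, using the networkx package.

```python
import networkx as nx

# A 4-cycle with one pendant edge: the pendant edge (3, 4) is the unique
# bridge, since removing it increases the number of connected components.
G = nx.Graph([(0, 1), (1, 2), (2, 3), (3, 0), (3, 4)])
print(list(nx.bridges(G)))   # [(3, 4)]
print(nx.has_bridges(G))     # True, so G is not "without bridge"
```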
###### Proposition 2.12.
Let’s come back to the $k$-analytic curve $X$ considered in the second part of
lemma 2.4: without boundary, of finite skeleton, without points of genus
$>0$, and whose cusps are annular. Assume moreover that
$\mathbb{G}=S^{\mathrm{an}}(X)$ is without bridge and there is never strictly
more than one cusp coming from each node.
If $f\in H^{1}(X,\mu_{p})$, then $f\in\ker(\theta)$ if and only if, for any
vicinal edge $e$ of $S^{\mathrm{an}}(X)$ of associated annulus
$\mathcal{C}_{e}$, there exists $f_{e}\in H^{1}(X,\mu_{p})$ such that:
$\left(\mathscr{D}(f)_{[2]}\setminus\mathscr{D}(f_{e})_{[2]}\right)\cap\mathcal{C}_{e\,[2]}\neq\emptyset.$
###### Proof.
The assumption that there is never strictly more than one cusp coming from a
node implies that a cochain
$c\in\mathrm{Harm}(\mathbb{G},\mathbb{Z}/p\mathbb{Z})$ is trivial if and only
if it is trivial on all vicinal edges of $\mathbb{G}$.
* •
Assume that for any vicinal edge $e$ of $\mathbb{G}$ of corresponding annulus
$\mathcal{C}_{e}$, there exists $f_{e}\in H^{1}(X,\mu_{p})$ such that :
$\left(\mathscr{D}(f)_{[2]}\setminus\mathscr{D}(f_{e})_{[2]}\right)\cap\mathcal{C}_{e\,[2]}\neq\emptyset.$
Let $f_{e\,|\,\mathcal{C}_{e}}$ and $f_{|\,\mathcal{C}_{e}}$ be the
restrictions of $f_{e}$ and $f$ to $\mathcal{C}_{e}$. Then we have
$\mathscr{D}(f_{|\,\mathcal{C}_{e}})_{[2]}\nsubseteq\mathscr{D}(f_{e\,|\,\mathcal{C}_{e}})_{[2]}$.
By corollary 2.10, this implies $\theta(f)(e)=0$. But this holds for any
vicinal edge $e$, so $\theta(f)$ is the trivial cochain.
* •
Let $f\in H^{1}(X,\mu_{p})$, $e$ a vicinal edge of annulus $\mathcal{C}_{e}$,
and assume $f\in\ker(\theta)$. From 2.8, as $\theta(f)(e)=0$, we have
$\mathscr{D}(f_{|\,\mathcal{C}_{e}})_{[2]}\nsubseteq\mathscr{D}(g_{e})_{[2]}$
for all $g_{e}\in H^{1}(\mathcal{C}_{e},\mu_{p})$ of non-trivial cochain.
It remains to show that there exists $f_{e}\in H^{1}(X,\mu_{p})$ with non-
trivial cochain at $e$.
From the assumption, the edge $e$ is not a bridge of $\mathbb{G}$, so it is
not a bridge of $\mathbb{G}^{\natural}$ either. Thus, the evaluation at $e$:
$\mathrm{Harm}(\mathbb{G}^{\natural},\mathbb{Z}/p\mathbb{Z})\xrightarrow{\mathrm{ev_{e}}}\mathbb{Z}/p\mathbb{Z}$
is non-zero. Let’s choose
$c_{e}\in\mathrm{ev}_{e}^{-1}\left(\mathbb{Z}/p\mathbb{Z}\setminus\\{0\\}\right)$,
i.e. such that $c_{e}(e)\neq 0$. From lemma 2.4, the image of $\theta$
contains $\mathrm{Harm}(\mathbb{G}^{\natural},\mathbb{Z}/p\mathbb{Z})$. It is
enough to take $f_{e}\in\theta^{-1}(\\{c_{e}\\})$: we have
$\mathscr{D}(f_{|\,\mathcal{C}_{e}})_{[2]}\nsubseteq\mathscr{D}(f_{e\,|\,\mathcal{C}_{e}})_{[2]}$,
which can be written:
$\left(\mathscr{D}(f)_{[2]}\setminus\mathscr{D}(f_{e})_{[2]}\right)\cap\mathcal{C}_{e\,[2]}\neq\emptyset.$
∎
## 3 Resolution of non-singularities
In algebraic geometry, resolution of non-singularities consists in knowing
whether a hyperbolic curve $\mathscr{X}$ admits a finite cover $\mathscr{Y}$
whose stable reduction has some irreducible components above the smooth locus
of the stable (or semi-stable) reduction of $\mathscr{X}$. Such techniques
have proved useful in anabelian geometry, see for instance [PoSt]: if
$X_{0}$ is a geometrically connected hyperbolic curve over a finite extension
$K$ of $\mathbb{Q}_{p}$ such that $X_{0,\overline{\mathbb{Q}}_{p}}$ satisfies
such resolution of non-singularities, then any section of
$\pi_{1}^{\mathrm{alg}}(X_{0})\to\mathrm{Gal}(\overline{\mathbb{Q}}_{p}/K)$
has its image in a decomposition group of a unique valuation point.
Lepage shows in [Lep3] that any Mumford curve over $\overline{\mathbb{Q}}_{p}$
satisfies a resolution of non-singularities, and applies this result to the
anabelian study of the tempered group of such curves. He shows for instance
that if $X_{1}$ and $X_{2}$ are two Mumford curves over
$\overline{\mathbb{Q}}_{p}$ whose analytifications have isomorphic tempered
fundamental groups, then $X_{1}^{\mathrm{an}}$ and $X_{2}^{\mathrm{an}}$ are
naturally homeomorphic ([Lep3], Theorem $3.9$).
### 3.1 Definition and properties of solvable points
In the framework of this article, we are going to give an _ad hoc_ definition
of _solvable point_ and _resolution of non-singularities_, in order to stay
in the language of analytic geometry without entering into considerations of
(semi-)stable models.
###### Definition 3.1 (Solvable point).
Let $X$ be a $k$-analytic quasi-smooth curve. We will say that a point $x\in
X$ _satisfies the resolution of non-singularities_, equivalently _is
solvable_, when there exists a finite étale covering $Y$ of $X$ and a node
$y$ of $S^{\mathrm{an}}(Y)$ above $x$. This amounts to "singularising" $x$ to
some node of some finite étale covering of $X$, whence the terminology. The
set of _solvable points_ is denoted $X_{\mathrm{res}}$.
###### Remark 3.2.
One always has $X_{\mathrm{res}}\subseteq X_{[2]}$. We will say that $X$
_satisfies resolution of non-singularities_ when $X_{\mathrm{res}}=X_{[2]}$.
Lepage shows in [Lep3] (Theorem $2.6$) that the analytification of any Mumford
curve over $\overline{\mathbb{Q}}_{p}$ satisfies resolution of non-
singularities.
###### Definition 3.3.
If $f\in H^{1}(X,\mu_{n})$, define
$\mathscr{D}(f)_{\mathrm{res}}:=\mathscr{D}(f)\cap X_{\mathrm{res}}$ as the
set of solvable points of $X$ over which the analytic torsor defined by $f$
totally splits.
Resolution of non-singularities has a specific anabelian flavour: from the
tempered group $\pi_{1}^{\mathrm{temp}}(X)$ it is possible to determine the
set of solvable points, as well as the set of solvable points belonging to an
annulus defined by a vicinal edge, to the skeleton itself, or to the splitting
sets of analytic torsors on $X$.
Properties: If $X$ is a $k$-analytically hyperbolic curve, the tempered
fundamental group $\pi_{1}^{\mathrm{temp}}(X)$ enables one to determine:
* •
the set $X_{\mathrm{res}}$ of solvable points of $X$,
* •
if $e$ is a vicinal edge of annulus $\mathcal{C}_{e}$, the set
$\mathcal{C}_{e}\cap X_{\mathrm{res}}$,
* •
the set $S^{\mathrm{an}}(X)_{\mathrm{res}}:=S^{\mathrm{an}}(X)\cap
X_{\mathrm{res}}$ of solvable points belonging to the skeleton,
* •
if $f$ is a $\mu_{n}$-torsor of $X$, the set $\mathscr{D}(f)_{\mathrm{res}}$.
More precisely:
1. 1.
The decomposition groups $D_{x}$ of solvable points of $X$ in
$\pi_{1}^{\mathrm{temp}}(X)$ correspond exactly to the maximal compact
subgroups $D$ of $\pi_{1}^{\mathrm{temp}}(X)$ such that there exists an open
finite-index subgroup $H$ of $\pi_{1}^{\mathrm{temp}}(X)$ such that the image
of $D\cap H$ by the natural morphism $H\to H^{\,(p^{\prime})}$ is non-
commutative.
2. 2.
Let $e$ be a vicinal edge of $S^{\mathrm{an}}(X)$, and $\mathcal{C}_{e}$ the
associated annulus. If $D_{x}$ is a decomposition group of a point $x\in
X_{\mathrm{res}}$, then $x\in\mathcal{C}_{e}$ if and only if the image
$D_{x}^{\,(p^{\prime})}$ of $D_{x}$ by the morphism
$\pi_{1}^{\mathrm{temp}}(X)\to\pi_{1}^{\mathrm{temp},\,(p^{\prime})}(X)$ is
open in some vicinal subgroup $\pi_{e}$ associated with $e$.
3. 3.
Let $x\in X_{\mathrm{res}}$ be a solvable point. Let $D_{x}^{\,(p^{\prime})}$
be a decomposition group of $x$ in
$\pi_{1}^{\mathrm{temp},\,(p^{\prime})}(X)$, and $Y$ a finite étale covering
such that there exists a node $y$ of $S^{\mathrm{an}}(Y)$ above $x$, which
amounts to considering an open subgroup $H$ of $\pi_{1}^{\mathrm{temp}}(X)$ of
finite index such that $\pi_{y}=D_{x}^{\,(p^{\prime})}\cap H^{\,(p^{\prime})}$
is non-commutative. Let $\iota$ be the morphism
$H^{\,(p^{\prime})}\to\pi_{1}^{\mathrm{temp},\,(p^{\prime})}(X)$, then
$\iota(\pi_{y})$ is an open subgroup of $D_{x}^{\,(p^{\prime})}$. There are
three possibilities:
* •
Case $1$: $x\notin S^{\mathrm{an}}(X)$; this is the case when $\iota(\pi_{y})$
is trivial.
* •
Case $2$: $x$ is a vertex of the skeleton; this is the case when
$\iota(\pi_{y})$ is not commutative. In this case
$D_{x}^{\,(p^{\prime})}=\pi_{x}$ is the only verticial subgroup containing
$\iota(\pi_{y})$, and from Lemma $3.51$ of [Gau] it is also the commensurator
of $\iota(\pi_{y})$ in $\pi_{1}^{\mathrm{temp},\,(p^{\prime})}(X)$.
* •
Case $3$: $x$ belongs to an edge $e$ of $S^{\mathrm{an}}(X)$; this is the
case when $\iota(\pi_{y})$ is non-trivial and commutative. In this case
$D_{x}^{\,(p^{\prime})}=\pi_{e}$, and $\pi_{e}$ is the only vicinal or
cuspidal subgroup (according to the nature of the edge $e$) containing
$\iota(\pi_{y})$, it also equals the commensurator of $\iota(\pi_{y})$ in
$\pi_{1}^{\mathrm{temp},\,(p^{\prime})}(X)$.
4. 4.
Let $f\in H^{1}(X,\mu_{n})$ and $D_{x}$ a decomposition group in
$\pi_{1}^{\mathrm{temp}}(X)$ of a point $x\in X_{\mathrm{res}}$. Then the
knowledge of $\pi_{1}^{\mathrm{temp}}(X)$, of $f$ (considered as a morphism from
$\pi_{1}^{\mathrm{temp}}(X)$ to $\mathbb{Z}/n\mathbb{Z}$) and of $D_{x}$ is
enough to determine whether $x\in\mathscr{D}(f)$.
The point $(1)$, which appears in [Lep3], is a consequence of ([Lep4], prop.
$10$): if $D$ is a compact subgroup of $\pi_{1}^{\mathrm{temp}}(X)$, there
exists $x\in X$ and a decomposition subgroup $D_{x}$ of $x$ in
$\pi_{1}^{\mathrm{temp}}(X)$ such that $D\subseteq D_{x}$. Therefore,
decomposition subgroups in $\pi_{1}^{\mathrm{temp}}(X)$ of points of $X$ are
exactly the maximal compact subgroups of $\pi_{1}^{\mathrm{temp}}(X)$. The
image $D_{x}^{\,(p^{\prime})}$ of $D_{x}$ by the morphism
$\pi_{1}^{\mathrm{temp}}(X)\to\pi_{1}^{\mathrm{temp},\,(p^{\prime})}(X)$ is
trivial if $x$ does not belong to $S^{\mathrm{an}}(X)$, non-trivial and
commutative (in fact isomorphic to $\widehat{\mathbb{Z}}^{\,(p^{\prime})}$)
if $x$ belongs to an edge of $S^{\mathrm{an}}(X)$, and non-commutative if $x$
is a vertex of $S^{\mathrm{an}}(X)$.
The point $(2)$ comes from the following fact: if $Y\xrightarrow{f}X$ is a
finite étale Galois covering of group $G$ with a node $y$ of
$S^{\mathrm{an}}(Y)$ resolving $x\in X_{\mathrm{res}}$, there exists a
canonical retraction $S^{\mathrm{an}}(Y)/G\twoheadrightarrow
f^{-1}(S^{\mathrm{an}}(X))/G\simeq S^{\mathrm{an}}(X)$, and the sub-graph
$f^{-1}(S^{\mathrm{an}}(X))\subseteq S^{\mathrm{an}}(Y)$ is such that
$f^{-1}(S^{\mathrm{an}}(X))\cap Y_{\mathrm{res}}$ is determined by the data of
$\pi_{1}^{\mathrm{temp}}(X)$ and of an open subgroup of finite index
$H\subseteq\pi_{1}^{\mathrm{temp}}(X)$ defining the covering $f$.
The point $(3)$ can be interpreted as a consequence of lemmas $3.6$ and $3.8$
of [Lep3].
For the point $(4)$, one needs to keep in mind the following fact: if
$Y\xrightarrow{f}X$ is a finite étale Galois covering given by an open
subgroup $H\subseteq\pi_{1}^{\mathrm{temp}}(X)$, the data of the morphism
$\iota:H^{\,(p^{\prime})}\to\pi_{1}^{\mathrm{temp},\,(p^{\prime})}(X)$ enables one
to know the preimage by $f$ of a fixed node $x\in S^{\mathrm{an}}(X)$. In
particular, when $f$ is a $\mu_{n}$-torsor, the data of $\iota$ enables one to
know whether $f$ totally splits over $x$ (cf. [Lep2], prop. $7$). Now if $x\in
X_{\mathrm{res}}$, if $Z\xrightarrow{g}X$ is a finite étale Galois covering of
group $G$ with a node $z\in S^{\mathrm{an}}(Z)$ which resolves $x$, and if
$f\in H^{1}(X,\mu_{n})$ corresponds to the analytic torsor $Y\to X$, the pull-
back $Y\times_{X}Z\to Z$ inherits a natural action of $\mu_{n}\times G$.
Triviality of $f$ over $x$ can be read off from the action of $G\times\mu_{n}$
on $g^{-1}(x)$, i.e. on the $G$-orbit of $z$.
### 3.2 Solvability of skeletons of annuli and "threshold" points
Let $X$ be a quasi-smooth $k$-analytic curve without boundary, of finite
skeleton, without points of genus $>0$, and whose cusps are annular. We saw
that $X$ can be considered as a non-empty open subset of the analytification
$X^{\prime}$ of a Mumford $k$-curve. However, one cannot a priori conclude
that $X$ satisfies resolution of non-singularities, since it is not defined
over $\overline{\mathbb{Q}}_{p}$: the proof of theorem $2.6$ of [Lep3] no
longer works when $|k^{\times}|$ is uncountable. Nevertheless, we
will only need a very weak version of resolution of non-singularities : it
will be enough for us to have the solvability of type-$2$ points of skeletons
of annuli, as well as that of "threshold" branching points of
$\mu_{p}$-torsors of non-trivial cochain.
###### Lemma 3.4.
Let $e$ be an edge of $S^{\mathrm{an}}(X)$ of corresponding annulus
$\mathcal{C}_{e}$, and $\alpha$ a rigid point of $\mathcal{C}_{e}$ which does
not belong to the skeleton of $\mathcal{C}_{e}$. Let $r$ be the canonical
retraction from $X$ to its skeleton, and $x_{\alpha}\in\mathcal{C}_{e}$ the
unique point of $]\alpha,r(\alpha)[$ situated at distance $\frac{p}{p-1}$
from $r(\alpha)$. If $e$ is not a bridge of $\mathbb{G}$, then $x_{\alpha}\in
X_{\mathrm{res}}$.
###### Proof.
From proposition 2.8, we know that any analytic $\mu_{p}$-torsor $Y\rightarrow
X$ whose cochain is non-trivial modulo $p$ on $e$ is non-split with a unique
preimage over each point of $[x_{\alpha},r(\alpha)[$, and totally split over
$[\alpha,x_{\alpha}[$. Moreover, such a $\mu_{p}$-torsor of $X$ with non-
trivial cochain on $e$ exists: as in the proof of proposition 2.12, the fact
that $e$ is not a bridge implies that the set
$(\mathrm{ev}_{e}\circ\theta)^{-1}\left(\left(\mathbb{Z}/p\mathbb{Z}\right)^{\times}\right)\subset
H^{1}(X,\mu_{p})$
is non-empty.
If $Y\to X$ is such a torsor, the unique preimage $y\in Y$ of $x_{\alpha}$ is
a branching point, thus a node of $S^{\mathrm{an}}(Y)$ living above
$x_{\alpha}$. Therefore, $x_{\alpha}$ is a solvable point of $X$. ∎
###### Remark 3.5.
Such a point $x_{\alpha}$, situated at distance $\frac{p}{p-1}$ from the
skeleton, is a "threshold" point. Indeed, travelling along the segment
$[\alpha,r(\alpha)]$, $x_{\alpha}$ is exactly the threshold point up to which
any $\mu_{p}$-torsor with non-trivial $\mathbb{Z}/p\mathbb{Z}$-cochain on
$e$ totally splits.
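Numerically, with distances along $[\alpha,r(\alpha)]$ measured in $\log_{p}$ units (so that the point at distance $d$ below the radius-$R$ point $r(\alpha)$ sits at radius $R\,p^{-d}$, an assumption consistent with proposition 2.8), the threshold point corresponds to $d=\frac{p}{p-1}$:

```python
import math

def radius_at_distance(R: float, d: float, p: int) -> float:
    """Radius of the point at log_p-distance d below the radius-R point."""
    return R * p ** (-d)

def distance_between_radii(R: float, s: float, p: int) -> float:
    """log_p-distance between the points of radii R and s (s < R)."""
    return math.log(R / s, p)

p, R = 3, 1.0
x_alpha = radius_at_distance(R, p / (p - 1), p)          # threshold radius
print(x_alpha, distance_between_radii(R, x_alpha, p))    # ~0.1925, 1.5
```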
###### Lemma 3.6.
Let $\mathcal{C}$ be a $k$-analytic annulus. Then all the type-$2$ points of
the analytic skeleton of $\mathcal{C}$ are solvable:
$\;\;S^{\mathrm{an}}(\mathcal{C})_{\mathrm{res}}=S^{\mathrm{an}}(\mathcal{C})_{[2]}.$
###### Proof.
One can assume there exists a non-empty interval $I$ of $\mathbb{R}_{>0}$ such
that $\mathcal{C}$ is the analytic domain of $\mathbb{P}_{k}^{1,\mathrm{an}}$
defined by the condition $|T-1|\in I$, that we will denote
$\mathcal{C}_{(I)}$. Let $x\in\mathcal{C}_{[2]}$, there exists $r\in I$ such
that $x=\eta_{\,1,r}$. Up to replacing the interval $I$ by
$J=r^{-1}p^{-\frac{p}{p-1}}I$ (in this case
$\mathcal{C}_{(I)}\simeq\mathcal{C}_{(J)}$ because
$r^{-1}p^{-\frac{p}{p-1}}\in|k^{\times}|$), one can assume
$x=\eta_{1,p^{-\frac{p}{p-1}}}$. From proposition 2.2, the point $x$ admits
only one preimage $y$ by the étale covering $Y\to\mathcal{C}$ induced by
$\mathbb{G}_{m}^{\mathrm{an}}\xrightarrow{z\mapsto
z^{p}}\mathbb{G}_{m}^{\mathrm{an}}$, and $y$ is a branching point of $Y$, i.e.
a node of $S^{\mathrm{an}}(Y)$. Therefore $x$ is a solvable point of $\mathcal{C}$. ∎
### 3.3 Anabelianity of the triviality of $\mu_{p}$-torsors
We are now in a position to prove some tempered anabelianity of the triviality
of cochains associated to $\mu_{p}$-torsors on a curve $X$: either when $X$ is
an annulus, or a $k$-analytically hyperbolic curve which is some open subset
of the analytification of a Mumford $k$-curve.
###### Theorem 3.7.
Let $X$ be a $k$-analytic curve satisfying one of the two following
conditions:
1. 1.
$X$ is an annulus,
2. 2.
$X$ is a $k$-analytically hyperbolic curve, of finite skeleton without bridge,
without any point of genus $>0$, without boundary, with only annular cusps,
and such that there is never strictly more than one cusp coming from each
node.
Then the set of $\mu_{p}$-torsors of $X$ of trivial
$\mathbb{Z}/p\mathbb{Z}$-cochain, i.e. the set
$H^{1}(X,\mu_{p})\cap\ker(\theta)$, is completely determined by
$\pi_{1}^{\mathrm{temp}}(X)$.
###### Proof.
Let’s concentrate on the second case, when the curve is $k$-analytically
hyperbolic. The case of an annulus is treated in exactly the same way, drawing
on corollary 2.10 rather than proposition 2.12.
From 2.12, an element $f\in H^{1}(X,\mu_{p})$ belongs to $\ker(\theta)$ if and
only if, for any vicinal edge $e$ of $S^{\mathrm{an}}(X)$ of associated
annulus $\mathcal{C}_{e}$, there exists $f_{e}\in H^{1}(X,\mu_{p})$ such that:
$\left(\mathscr{D}(f)_{[2]}\setminus\mathscr{D}(f_{e})_{[2]}\right)\cap\mathcal{C}_{e\,[2]}\neq\emptyset,$
and one can always choose $f_{e}$ such that $\theta(f_{e})(e)\neq 0$. In this
case, as soon as $\alpha$ is a rigid point of $\mathcal{C}_{e}$, the
"threshold" point $x_{\alpha}\in]\alpha,r(\alpha)[$ situated at a distance
$\frac{p}{p-1}$ of $r(\alpha)$ is split by $f$ but not by $f_{e}$ : this comes
from proposition 2.8. Therefore
$x_{\alpha}\in\mathscr{D}(f)\setminus\mathscr{D}(f_{e})$, and as such points
are solvable by lemma 3.4, we obtain :
$x_{\alpha}\in\mathscr{D}(f)_{\mathrm{res}}\setminus\mathscr{D}(f_{e})_{\mathrm{res}}.$
Since $X_{\mathrm{res}}\subseteq X_{[2]}$, we have $f\in\ker(\theta)$ if and
only if there exists $f_{e}\in H^{1}(X,\mu_{p})$ such that
$\left(\mathscr{D}(f)_{\mathrm{res}}\setminus\mathscr{D}(f_{e})_{\mathrm{res}}\right)\cap\mathcal{C}_{e}\neq\emptyset.$
(1)
From the properties of solvable points presented in 3.1, the sets
$\mathscr{D}(f)_{\mathrm{res}}$, $\mathscr{D}(f_{e})_{\mathrm{res}}$ and
$\mathcal{C}_{e}\cap X_{\mathrm{res}}$ are determined by the tempered group
$\pi_{1}^{\mathrm{temp}}(X)$, so the condition (1) above can be detected from
the tempered group, hence the result. ∎
###### Remark 3.8.
The second condition on the curve $X$ amounts to asking that $X$ is a
$k$-analytically hyperbolic curve isomorphic to an open subset of the
analytification of a Mumford $k$-curve, such that $S^{\mathrm{an}}(X)$ is
without bridge and there is never strictly more than one cusp coming from each
node. In this case, from Theorem $3.63$ of [Gau], $X$ is $k$-analytically
anabelian, not only hyperbolic.
###### Corollary 3.9.
Let’s stay in the framework of theorem 3.7. Let $h\in\mathbb{N}^{\times}$, and
$\mathrm{mod}(p):\mathrm{Harm}(\mathbb{G},\mathbb{Z}/p^{h}\mathbb{Z})\to\mathrm{Harm}(\mathbb{G},\mathbb{Z}/p\mathbb{Z})$
the reduction modulo $p$ of the $\mathbb{Z}/p^{h}\mathbb{Z}$-cochains. Then it
is possible to characterize from the tempered group
$\pi_{1}^{\mathrm{temp}}(X)$ the kernel of the composed morphism
$\mathrm{mod}(p)\circ\theta:H^{1}(X,\mu_{p^{h}})\to\mathrm{Harm}(\mathbb{G},\mathbb{Z}/p\mathbb{Z}).$
###### Proof.
We have a commutative diagram:
$\begin{array}{ccc}H^{1}(X,\mu_{p^{h}})&\xrightarrow{\;\theta\;}&\mathrm{Harm}(\mathbb{G},\mathbb{Z}/p^{h}\mathbb{Z})\\\ \downarrow&&\downarrow{\scriptstyle\mathrm{mod}(p)}\\\ H^{1}(X,\mu_{p})&\xrightarrow{\;\theta\;}&\mathrm{Harm}(\mathbb{G},\mathbb{Z}/p\mathbb{Z})\end{array}$
where the first vertical arrow is induced by the exact sequence
$\displaystyle{1\to\mu_{p^{h-1}}\to\mu_{p^{h}}\xrightarrow{\pi}\mu_{p}\to 1}$.
With the identification
$H^{1}(X,\mu_{p^{i}})\simeq\mathrm{Hom}(\pi_{1}^{\mathrm{temp}}(X),\mu_{p^{i}})$,
this morphism is nothing else than
$\mathrm{Hom}(\pi_{1}^{\mathrm{temp}}(X),\mu_{p^{h}})\xrightarrow{\pi_{*}}\mathrm{Hom}(\pi_{1}^{\mathrm{temp}}(X),\mu_{p}),$
so only depends on the tempered group $\pi_{1}^{\mathrm{temp}}(X)$. The
conclusion follows from 3.7 and the commutativity of the diagram. ∎
## 4 Partial anabelianity of lengths of annuli
We are going to show how all these set-theoretical considerations about the
intersection of the skeleton of an annulus with the splitting sets of its
$\mu_{p}$-torsors enable one to extract some information about the length of the
annulus, before giving an anabelian interpretation.
### 4.1 Lengths and splitting sets
The following lemma enables one to determine whether the length of an annulus is
$>\frac{2p}{p-1}$ from the knowledge of its $\mu_{p}$-torsors of trivial
cochain.
###### Lemma 4.1.
A $k$-analytic annulus $\mathcal{C}$ has a length strictly greater than
$\frac{2p}{p-1}$ if and only if any $\mu_{p}$-torsor of trivial cochain on
$\mathcal{C}$ totally splits over a non-empty portion of its analytic skeleton:
$\ell(\mathcal{C})>\frac{2p}{p-1}\iff\forall f\in
H^{1}(\mathcal{C},\mu_{p})\cap\ker(\theta),\mathscr{D}(f)_{[2]}\cap
S^{\mathrm{an}}(\mathcal{C})\neq\emptyset.$
###### Proof.
Assume $\ell(\mathcal{C})>\frac{2p}{p-1}$, and consider $f\in
H^{1}(\mathcal{C},\mu_{p})\cap\ker(\theta)$. As in the proof of proposition
2.8, up to restricting $\mathcal{C}$ (but slightly, in order to keep the
condition on the length), one can assume that $\mathcal{C}$ is the subannulus
of $\mathbb{P}_{k}^{1,\mathrm{an}}$ given by the condition
$|T|\in]1-\varepsilon,1[$ with $1-\varepsilon<p^{-\frac{2p}{p-1}}$, and that the
$\mu_{p}$-torsor $f$ is defined by a function
$g\in\mathscr{O}_{\mathcal{C}}(\mathcal{C})^{\times}$ written:
$g(T)=1+\underbrace{\sum_{k\in\mathbb{Z}\setminus\\{0\\}}a_{k}T^{k}}_{u(T)},$
with for all $k\in\mathbb{N}^{\times}$ : $|a_{k}|<1$ and
$|a_{-k}|<(1-\varepsilon)^{k}$. The skeleton of $\mathcal{C}$ is the interval
$]\eta_{0,1-\varepsilon},\eta_{0,1}[$, and the corresponding analytic torsor
totally splits over a point $\eta_{0,r}\in S^{\mathrm{an}}(\mathcal{C})$ as
soon as $\displaystyle{|u(\eta_{0,r})|<p^{-\frac{p}{p-1}}}$. Let
$k\in\mathbb{N}^{\times}$:
* •
if $r<p^{-\frac{p}{p-1}}$, $|a_{k}r^{k}|<p^{-\frac{kp}{p-1}}\leqslant
p^{-\frac{p}{p-1}}$
* •
if $r>(1-\varepsilon)\,p^{\frac{p}{p-1}}$,
$|a_{-k}r^{-k}|<(1-\varepsilon)^{k}(1-\varepsilon)^{-k}p^{-\frac{kp}{p-1}}=p^{-\frac{kp}{p-1}}\leqslant p^{-\frac{p}{p-1}}$.
But we have $1-\varepsilon<p^{-\frac{2p}{p-1}}$ (from the assumption on the
length of $\mathcal{C}$), hence:
$r_{1}=(1-\varepsilon)\,p^{\frac{p}{p-1}}<p^{-\frac{p}{p-1}}=r_{2}.$
Consequently, the torsor $f$ totally splits over the non-empty interval
$]\eta_{0,r_{1}},\eta_{0,r_{2}}[$ of the skeleton. From the density of
$S^{\mathrm{an}}(\mathcal{C})_{[2]}$ in $S^{\mathrm{an}}(\mathcal{C})$, we
obtain that $\mathscr{D}(f)_{[2]}\cap
S^{\mathrm{an}}(\mathcal{C})\neq\emptyset$.
Conversely, if $\ell(\mathcal{C})\leqslant\frac{2p}{p-1}$, one can check
that the torsor given by the function ${g(T)=1+T+(1-\varepsilon)T^{-1}}$ never
totally splits over any point of $S^{\mathrm{an}}(\mathcal{C})$. ∎
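The inequality between $r_{1}$ and $r_{2}$ is easy to check numerically. In the sketch below (an added illustration; lengths are measured in $\log_{p}$ units, so that $1-\varepsilon=p^{-\ell}$), the splitting interval on the skeleton is non-empty exactly when $\ell>\frac{2p}{p-1}$:

```python
def split_interval(p: int, length: float):
    """Radii (r1, r2) from the proof of Lemma 4.1 for the annulus
    1 - eps < |T| < 1 with 1 - eps = p**(-length)."""
    one_minus_eps = p ** (-length)
    r1 = one_minus_eps * p ** (p / (p - 1))
    r2 = p ** (-p / (p - 1))
    return r1, r2

for p, ell in [(2, 3.9), (2, 4.1), (3, 2.9), (3, 3.1)]:
    r1, r2 = split_interval(p, ell)
    print(f"p={p}, length={ell} (critical {2*p/(p-1):.2f}): non-empty = {r1 < r2}")
```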
It is actually possible to halve the previous bound using a finer condition,
which requires looking at the set of $\mu_{p}$-torsors that totally
split _over a neighbourhood of a fixed extremity_. We need the following
definition:
###### Definition 4.2.
Let $\mathcal{C}$ be a non-empty $k$-analytic annulus. Its skeleton
$S^{\mathrm{an}}(\mathcal{C})$ is an interval (open or closed), and let
$\omega$ be one of its extremities. Let $H^{1}_{\omega}(\mathcal{C},\mu_{p})$
be the subgroup of $H^{1}(\mathcal{C},\mu_{p})$ of $\mu_{p}$-torsors which
totally split _over a neighbourhood of $\omega$_, i.e. which totally split
over a subinterval of $S^{\mathrm{an}}(\mathcal{C})$ of non-empty interior
whose complement in $S^{\mathrm{an}}(\mathcal{C})$ is an interval which
does not admit $\omega$ as an extremity.
###### Lemma 4.3.
A $k$-analytic annulus $\mathcal{C}$ has a length strictly greater than
$\frac{p}{p-1}$ if and only if, for any extremity $\omega$ of
$S^{\mathrm{an}}(\mathcal{C})$:
$\mathrm{Card}\left(\bigcap_{f\in
H^{1}_{\omega}(\mathcal{C},\mu_{p})}\mathscr{D}(f)_{[2]}\cap
S^{\mathrm{an}}(\mathcal{C})\right)\geqslant 2.$
###### Proof.
Assume $\ell(\mathcal{C})>\frac{p}{p-1}$, and consider $f\in
H^{1}_{\omega}(\mathcal{C},\mu_{p})$. Up to restriction of $\mathcal{C}$ (but
slightly, such that the condition on the length still holds), one can assume
$\mathcal{C}$ is the subannulus of $\mathbb{P}_{k}^{1,\mathrm{an}}$ given by
the condition $|T|\in]1-\varepsilon,1]$ with $1-\varepsilon<p^{-\frac{p}{p-1}}$.
Let $\mathcal{D}_{0}$ be the closed $k$-analytic disk of
$\mathbb{P}_{k}^{1,\mathrm{an}}$ centred at $0$ and of radius $1$, i.e.
defined by the condition $|T|\in[0,1]$. The annulus $\mathcal{C}$ is then a
$k$-analytic subspace of $\mathcal{D}_{0}$. From the assumption on $f$ it is
possible to extend $f$ into a torsor $\widetilde{f}\in
H^{1}(\mathcal{D}_{0},\mu_{p})$ of $\mathcal{D}_{0}$ trivial over
$\mathcal{D}_{0}\setminus\mathcal{C}$. Since $\mathrm{Pic}(\mathcal{D}_{0})$
is trivial ($\mathcal{D}_{0}$ is a $k$-affinoid space), $\widetilde{f}$ is
given by a function
$g\in\mathscr{O}_{\mathcal{D}_{0}}(\mathcal{D}_{0})^{\times}$ written
$g(T)=1+\underbrace{\sum_{k\in\mathbb{N}^{\times}}a_{k}T^{k}}_{v(T)},$
with $|a_{k}|<1$ for all $k\in\mathbb{N}^{\times}$.
The skeleton of $\mathcal{C}$ is the interval
$]\eta_{0,1-\varepsilon},\eta_{0,1}]$, and the torsor
$f=\widetilde{f}_{|\,\mathcal{C}}$ totally splits over the point
$\eta_{0,r}\in S^{\mathrm{an}}(\mathcal{C})$ as soon as
$\displaystyle{|v(\eta_{0,r})|<p^{-\frac{p}{p-1}}}$.
For all $k\in\mathbb{N}^{\times}$ and
$r\in]1-\varepsilon,p^{-\frac{p}{p-1}}[$, we have
$|a_{k}r^{k}|<p^{-\frac{p}{p-1}}$, so ${|v(\eta_{0,r})|<p^{-\frac{p}{p-1}}}$.
Thus, the interval $]\eta_{0,1-\varepsilon},\eta_{0,p^{-\frac{p}{p-1}}}[$
belongs to $\mathscr{D}(f)$. As the reasoning is independent of the choice of
$f\in H^{1}_{\omega}(\mathcal{C},\mu_{p})$, one obtains:
$]\eta_{0,1-\varepsilon},\eta_{0,p^{-\frac{p}{p-1}}}[\subseteq\bigcap_{f\in
H^{1}_{\omega}(\mathcal{C},\mu_{p})}\mathscr{D}(f)\cap
S^{\mathrm{an}}(\mathcal{C}),$
and the conclusion follows from the density of type-$2$ points in $\mathcal{C}$.
Conversely, consider an annulus $\mathcal{C}$ of length
$\ell(\mathcal{C})\leqslant\frac{p}{p-1}$, and assume there exist two distinct
points $\displaystyle{x_{1},x_{2}\in\bigcap_{f\in
H^{1}_{\omega}(\mathcal{C},\mu_{p})}\mathscr{D}(f)_{[2]}\cap
S^{\mathrm{an}}(\mathcal{C})}$. Let $y\in]x_{1},x_{2}[$ be a type-$2$ point. Let
$I$ be the connected component of $S^{\mathrm{an}}(\mathcal{C})\setminus\\{y\\}$
which does not abut to $\omega$, and $\mathcal{C}_{I}$ the subannulus of
$\mathcal{C}$ of skeleton $I$. Up to exchanging $x_{1}$ and $x_{2}$, one can
assume $x_{2}\in I$. As the annulus $\mathcal{C}_{I}$ has length
$<\frac{p}{p-1}$, there exists $h\in H^{1}(\mathcal{C}_{I},\mu_{p})$ such that
$\mathscr{D}(h)\cap S^{\mathrm{an}}(\mathcal{C}_{I})=]y,x_{2}[$. Therefore,
$h$ can be extended into a torsor $\widetilde{h}\in
H^{1}(\mathcal{C},\mu_{p})$ of $\mathcal{C}$ which is trivial over
$\mathcal{C}\setminus\mathcal{C}_{I}$, so that
$\mathscr{D}(\widetilde{h})\cap S^{\mathrm{an}}(\mathcal{C})$ contains a
neighbourhood of $\omega$ in $S^{\mathrm{an}}(\mathcal{C})$ but not $x_{2}$.
Then $\widetilde{h}\in H^{1}_{\omega}(\mathcal{C},\mu_{p})$, which leads to a
contradiction since $x_{2}\notin\mathscr{D}(\widetilde{h})$.
∎
###### Corollary 4.4.
It is possible to determine from the tempered fundamental group of a
$k$-analytic annulus whether the length of the latter is strictly greater than
$\frac{p}{p-1}$.
###### Proof.
We showed in lemma 3.6 that all type-$2$ points of the skeleton of
$\mathcal{C}$ are solvable:
$\;S^{\mathrm{an}}(\mathcal{C})_{\mathrm{res}}=S^{\mathrm{an}}(\mathcal{C})_{[2]}.$
Thus:
$\bigcap_{f\in H^{1}_{\omega}(\mathcal{C},\mu_{p})}\mathscr{D}(f)_{[2]}\cap
S^{\mathrm{an}}(\mathcal{C})=\bigcap_{f\in
H^{1}_{\omega}(\mathcal{C},\mu_{p})}\mathscr{D}(f)_{\mathrm{res}}\cap
S^{\mathrm{an}}(\mathcal{C})_{\mathrm{res}}.$
From the properties of solvable points presented in 3.1, the tempered group
$\pi_{1}^{\mathrm{temp}}(\mathcal{C})$ characterises the sets
$\mathscr{D}(f)_{\mathrm{res}}$ and
$S^{\mathrm{an}}(\mathcal{C})_{\mathrm{res}}$. Moreover, a torsor $f\in
H^{1}(\mathcal{C},\mu_{p})$ belongs to $H^{1}_{\omega}(\mathcal{C},\mu_{p})$
if and only if it totally splits over the set of type-$2$ points of a non-
empty neighbourhood of $\omega$ in $S^{\mathrm{an}}(\mathcal{C})$. But all the
type-$2$ points of $S^{\mathrm{an}}(\mathcal{C})$ are solvable, so
$H^{1}_{\omega}(\mathcal{C},\mu_{p})$ is itself characterized by the tempered
group. The result follows from lemma 4.3. ∎
### 4.2 Results on lengths of annuli
We are now in a position to state our result of partial anabelianity of
lengths of annuli. Even if we are not yet in a position to know whether the
fundamental group of an annulus determines its length, the following result
shows that the lengths of two annuli which have isomorphic tempered
fundamental groups cannot be too far from each other. When the lengths are
finite, we give an explicit bound, depending only on the residual
characteristic $p$, for the absolute value of the difference of these lengths.
###### Theorem 4.5.
Let $\mathcal{C}_{1}$ and $\mathcal{C}_{2}$ be two $k$-analytic annuli whose
tempered fundamental groups $\pi_{1}^{\mathrm{temp}}(\mathcal{C}_{1})$ and
$\pi_{1}^{\mathrm{temp}}(\mathcal{C}_{2})$ are isomorphic. Then
$\mathcal{C}_{1}$ has finite length if and only if $\mathcal{C}_{2}$ has
finite length. In this case :
$|\ell(\mathcal{C}_{1})-\ell(\mathcal{C}_{2})|<\frac{2p}{p-1}.$
We also have $d\left(\ell(\mathcal{C}_{1}),p\mathbb{N}^{\times}\right)>1$ if
and only if $d\left(\ell(\mathcal{C}_{2}),p\mathbb{N}^{\times}\right)>1$, and
in this case:
$|\ell(\mathcal{C}_{1})-\ell(\mathcal{C}_{2})|<\frac{p}{p-1}.$
###### Proof.
Let $n\in\mathbb{N}^{\times}$ prime to $p$, and $i\in\\{1,2\\}$. We know that
all $\mu_{n}$-torsors of $\mathcal{C}_{i}$ are Kummer. Thus, annuli defined by
torsors coming from $H^{1}(\mathcal{C}_{i},\mu_{n})$ have length
$\frac{\ell(\mathcal{C}_{i})}{n}$ (with potentially
$\ell(\mathcal{C}_{i})=+\infty$).
Moreover, all the $\mu_{n}$-torsors of $\mathcal{C}_{i}$ can be read off from the
tempered group $\pi_{1}^{\mathrm{temp}}(\mathcal{C}_{i})$ since
$H^{1}(\mathcal{C}_{i},\mu_{n})\simeq\mathrm{Hom}(\pi_{1}^{\mathrm{temp}}(\mathcal{C}_{i}),\mu_{n})$.
From corollary 4.4 it is then possible, from
$\pi_{1}^{\mathrm{temp}}(\mathcal{C}_{i})$, to know whether
$\frac{\ell(\mathcal{C}_{i})}{n}>\frac{p}{p-1}$, and step by step to find the
smallest integer $j$ such that :
$\frac{\ell(\mathcal{C}_{i})}{n^{j+1}}\leqslant\frac{p}{p-1}<\frac{\ell(\mathcal{C}_{i})}{n^{j}},\;\;\;\;\;\text{i.e.
such that }\;\;\;\;n^{j}\frac{p}{p-1}<\ell(\mathcal{C}_{i})\leqslant
n^{j+1}\frac{p}{p-1}.$
But the tempered groups of these two annuli are isomorphic, so such a $j$ will
be the same for $\mathcal{C}_{1}$ and $\mathcal{C}_{2}$. In particular, for
any $N\in\mathbb{N}^{\times}$ prime to $p$:
$N\frac{p}{p-1}<\ell(\mathcal{C}_{1})\iff
N\frac{p}{p-1}<\ell(\mathcal{C}_{2}),$
which leads to the conclusion. ∎
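The step-by-step search in this proof can be phrased as querying an oracle for the thresholds $N\frac{p}{p-1}$ with $N$ prime to $p$. The Python sketch below is an added illustration: the oracle stands in for the group-theoretic test of corollary 4.4, and the numeric length is hypothetical. Since consecutive integers prime to $p$ differ by at most $2$, the resulting bracket has width at most $\frac{2p}{p-1}$, which is the bound of the theorem.

```python
def bracket_length(length_oracle, p: int, n_max: int = 1000):
    """Probe the thresholds N * p/(p-1) for N prime to p: the true length
    lies between the last passed and the first failed threshold."""
    c = p / (p - 1)
    lower = 0.0
    for N in range(1, n_max + 1):
        if N % p == 0:
            continue                  # only N prime to p are admissible
        if length_oracle(N * c):
            lower = N * c
        else:
            return (lower, N * c)
    return (lower, float("inf"))      # length exceeds every probed threshold

# Hypothetical annulus of length 7.3 at residual characteristic p = 3:
true_length = 7.3
print(bracket_length(lambda t: true_length > t, p=3))   # (6.0, 7.5)
```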
## References
* [And] Yves André, Period Mapping and Differential Equations : From $\mathbb{C}$ to $\mathbb{C}_{p}$, MSJ Memoirs, Mathematical Society of Japan 12 ($2003$).
* [And1] Yves André, On a geometric description of $\mathrm{Gal}(\overline{\mathbb{Q}}_{p}/\mathbb{Q}_{p})$ and a $p$-adic avatar of $\widehat{GT}$, Duke Mathematical Journal 119 ($2003$), pp. $1-39$.
* [Ber$1$] Vladimir Berkovich, Spectral Theory and Analytic Geometry Over Non-Archimedean Fields, Mathematical Surveys and Monographs, American Mathematical Society 33 ($1990$).
* [Ber$2$] Vladimir Berkovich, Étale cohomology for non-archimedean analytic spaces, Publications Mathématiques de l’Institut des Hautes Études Scientifiques 78 ($1993$), pp. $5-161$.
* [DJg] Aise de Jong, Étale fundamental groups of non-Archimedean analytic spaces, Compositio Math. 97 ($1995$), pp. $89-118$.
* [Duc] Antoine Ducros, La structure des courbes analytiques (book project).
* [Gau] Sylvain Gaulhiac, Reconstruction du squelette des courbes analytiques (preprint), arXiv:$1904.03126$v$2$.
* [Lep1] Emmanuel Lepage, Tempered fundamental group and metric graph of a Mumford curve, Publications of the Research Institute for Mathematical Sciences 46 ($2010$), no.$4$, pp. $849-897$.
* [Lep2] Emmanuel Lepage, Tempered fundamental group and graph of the stable reduction, The Arithmetic of Fundamental Groups.
* [Lep3] Emmanuel Lepage, Resolution of non-singularities for Mumford curves, Publications of the Research Institute for Mathematical Sciences 49 ($2013$), pp. $861-891$.
* [Lep4] Emmanuel Lepage, Tempered fundamental group, in Geometric and differential Galois theories, Séminaires et Congrès 27, SMF, ($2013$), pp. $93-113$.
* [Mzk$2$] Shinichi Mochizuki, The Geometry of Anabelioids, Publications of the Research Institute for Mathematical Sciences 40 ($2004$), pp. $819-881$.
* [Mzk$3$] Shinichi Mochizuki, Semi-graphs of Anabelioids, Publications of the Research Institute for Mathematical Sciences 42 ($2006$).
* [PoSt] Florian Pop and Jakob Stix, Arithmetic in the fundamental group of a $p$-adic curve. On the $p$-adic section conjecture for curves, Journal für die Reine und Angewandte Mathematik 725 ($2017$), pp. $1-40$.
# Realizing the ultimate scaling of the convection turbulence by spatially
decoupling the thermal and viscous boundary layers
Shufan Zou State Key Laboratory for Turbulence and Complex Systems,
Department of Mechanics and Engineering Science, College of Engineering,
Peking University, Beijing 100871, China Yantao Yang<EMAIL_ADDRESS>State Key Laboratory for Turbulence and Complex Systems, Department of
Mechanics and Engineering Science, College of Engineering, Peking University,
Beijing 100871, China Beijing Innovation Center for Engineering Science and
Advanced Technology, Peking University, Beijing 100871, China Institute of
Ocean Research, Peking University, Beijing 100871, China
###### Abstract
Turbulent convection plays a crucial role in many natural environments,
ranging from the Earth's ocean, mantle, and outer core to various astrophysical
systems. For such flows with extremely strong thermal driving, an ultimate
scaling was proposed for the heat flux and velocity. Despite numerous
experimental and numerical studies, a conclusive observation of the ultimate
regime has not been reached yet. Here we show that the ultimate scaling can be
perfectly realized once the thermal boundary layer is fully decoupled from the
viscous boundary layer and lies inside the turbulent bulk. The heat flux
can be enhanced by as much as one order of magnitude. Our results provide
concrete evidence for the appearance of the ultimate state when the entire
thermal boundary layer is embedded in the turbulent region, which is probably
the case in many natural convection systems.
## I Introduction
The paradigmatic model for turbulent convection is the Rayleigh-Bénard (RB)
flow, namely the buoyancy-driven flow within a fluid layer heated from below
and cooled from above. Such a flow system can be found in many natural
environments [1, 2, 3, 4]. For a given fluid with a certain Prandtl number
$Pr=\nu/\kappa$, the driving force is measured by the Rayleigh number
$Ra=g\alpha\Delta H^{3}/\nu\kappa$. Here, $\nu$ is the kinematic viscosity,
$\kappa$ the thermal diffusivity, $g$ the gravitational acceleration,
$\alpha$ the thermal expansion coefficient, $\Delta$ the temperature
difference across the layer, and $H$ the layer height. The
most fundamental question is how the heat flux and the turbulence level of the
flow depend on $Ra$ and $Pr$ [5, 6, 7, 8]. Here, the heat flux is usually
measured by the Nusselt number $Nu$, i.e. the ratio of the convective
flux to the conductive flux. The turbulence level of the flow is measured by
the Reynolds number $Re=UH/\nu$, with $U$ being some characteristic value of the flow
velocity.
In RB flow, thermal and viscous boundary layers develop next to the top and
bottom plates, with the convective bulk in between and full of thermal plumes.
In most early experiments and simulations the viscous boundary layer is
laminar, and the heat flux follows the “classic” scaling law $Nu\sim
Ra^{\gamma}$ with $\gamma\approx 1/3$ [9]. It was hypothesized that when $Ra$
is large enough, that is, when the thermal driving is extremely strong, the viscous
boundary layer becomes fully turbulent due to the shear exerted by the
vigorous bulk flow, and the system enters the “ultimate” state where the heat
flux no longer depends on the viscosity of the fluid [10, 11, 12]. The heat
flux then follows the ultimate scaling with $\gamma=1/2$, which predicts $Nu$
many orders of magnitude larger than the classic scaling at the very large $Ra$
found in various natural environments.
Tremendous efforts have been made to achieve the ultimate scaling in RB flow
by both experiments and simulations. Fluid with very small $Pr$, such as
mercury, is used so that the thermal boundary layer is much thicker than the
viscous one and extends into the turbulent bulk [13, 14, 15]. Recent state-of-
the-art simulations and experiments push $Ra$ to very high values [15, 16, 17,
18]. Some of these studies reported evidence of the transition to the
ultimate regime [13, 16, 17]. Wall roughness was introduced to trigger the
transition of the momentum boundary layer to a fully turbulent state, but this
only works for a limited range of $Ra$ for a given wall roughness [19, 20].
Homogeneous convection in a fully periodic domain exhibits the ultimate scaling,
since in such a flow all boundary layers are removed and only the turbulent bulk
plays a role [21, 22]. The radiative convection with internal heating and the
convection with background oscillation can both greatly enhance the exponent
$\gamma$ from the classic-scaling value, but such treatments change the
dynamics of the convection system by introducing extra source terms for the
thermal field or the momentum field [23, 24, 25, 26].
## II Governing equations and numerical methods
Here, by means of specially designed numerical experiments, we confirm the
physical conjecture of the ultimate paradigm. Once the whole thermal boundary
layer is spatially decoupled from the viscous one and located entirely in the
turbulent flow region, the ultimate scaling of the heat flux can be perfectly realized
even at low to moderate Rayleigh numbers. Specifically, we consider the RB
flow between two parallel plates which are perpendicular to the direction of
the gravity and separated by a distance $H$. A homogeneous layer of height $h$
is introduced at the top and bottom of the domain. Within this layer the
temperature is uniform and equal to the temperature of the adjacent plate,
which can be readily realized by using an immersed-boundary technique. The
buoyancy-force term takes effect only between the two homogeneous layers. By
doing so the thermal boundary layer is lifted by a height $h$ from the plate
where the viscous boundary layer occurs. If $h$ is large enough, the two
boundary layers can be fully decoupled.
Specifically, we consider a fluid layer bounded by two parallel plates which
are perpendicular to the gravity and separated by a distance $H$. The
Oberbeck-Boussinesq approximation is utilized to account for the buoyancy
effects. First, the fluid density depends linearly on the scalar field as
$\rho=\rho_{0}(1-\alpha\theta)$. Here $\rho_{0}$ is the reference density at
the reference temperature $T_{0}$, and $\theta$ is the temperature deviation
from $T_{0}$. Second, the variation of density is relatively small so that all
the fluid properties can be treated as constants, and only the buoyancy force
needs to be included. The governing equations then read
$\displaystyle\partial_{t}\mathbf{u}+\mathbf{u}\cdot\nabla\mathbf{u}$
$\displaystyle=$ $\displaystyle-\frac{1}{\rho_{0}}\nabla
p+\nu\,\nabla^{2}\mathbf{u}+\alpha\,\theta\,\mathbf{g},$ (1)
$\displaystyle\partial_{t}\theta+\mathbf{u}\cdot\nabla\theta$ $\displaystyle=$
$\displaystyle\kappa\,\nabla^{2}\theta,$ (2)
$\displaystyle\nabla\cdot\mathbf{u}$ $\displaystyle=$ $\displaystyle 0.$ (3)
Here $\mathbf{u}$ is the three-dimensional velocity, $p$ is the pressure, and
$\partial_{t}$ is the partial derivative with respect to time.
The above equations are non-dimensionalized by the layer height $H$, the
temperature difference between the two plates $\Delta$, and the free-fall velocity
$U=\sqrt{g\alpha\Delta H}$. The two plates are no-slip, and periodic
boundary conditions are applied in the two horizontal directions.
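As a quick numerical companion to these definitions, the control parameters can be computed from dimensional inputs. The sketch below is an illustration only; the water-like property values are assumptions, not taken from the paper.

```python
import math

def control_parameters(nu, kappa, g, alpha, Delta, H):
    """Rayleigh number, Prandtl number and free-fall velocity from the
    definitions in the text (SI units)."""
    Ra = g * alpha * Delta * H ** 3 / (nu * kappa)
    Pr = nu / kappa
    U = math.sqrt(g * alpha * Delta * H)
    return Ra, Pr, U

Ra, Pr, U = control_parameters(nu=1e-6, kappa=1.4e-7, g=9.81,
                               alpha=2e-4, Delta=1.0, H=0.1)
print(f"Ra = {Ra:.2e}, Pr = {Pr:.1f}, U = {U:.4f} m/s")
```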
Direct numerical simulations were conducted with an in-house code, which
employs a finite-difference scheme and a fractional-step method. The code
has been validated for various wall-turbulence and convection-turbulence flows [27]. To
introduce the homogeneous thermal layer, an extra source term is included in
the temperature equation (2) based on the idea of the immersed boundary method
[28]. Also the buoyancy force in the momentum equation (1) is turned off
inside the homogeneous thermal layer. A grid-independence test has been
conducted in order to ensure that the resolution is adequate. The three-dimensional
volume rendering in figure 1 is generated by the open source software VisIt
[29].
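The following minimal Python sketch illustrates the idea of the modification; it is not the authors' solver, and the relaxation form of the source term, the time scale `tau`, and all names are assumptions. Inside the two homogeneous layers the temperature is driven toward the adjacent plate value, and the buoyancy term is masked out.

```python
import numpy as np

Nz, H, h = 128, 1.0, 0.2
z = np.linspace(0.0, H, Nz)
in_bot = z < h                       # bottom homogeneous layer
in_top = z > H - h                   # top homogeneous layer
mask = in_bot | in_top               # True inside either homogeneous layer

# Assumed non-dimensional plate temperatures: hot bottom +0.5, cold top -0.5.
theta_plate = np.where(in_bot, 0.5, np.where(in_top, -0.5, 0.0))

def temperature_source(theta, tau=1e-3):
    """Immersed-boundary-style relaxation driving theta to the adjacent
    plate value inside the homogeneous layers."""
    src = np.zeros_like(theta)
    src[mask] = (theta_plate[mask] - theta[mask]) / tau
    return src

def buoyancy(theta):
    """Non-dimensional buoyancy term, active only between the two layers."""
    return np.where(mask, 0.0, theta)

print(temperature_source(np.zeros(Nz)).max())   # strongest forcing: 500.0
```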
## III Main results
In figure 1 we compare a snapshot of the thermal field of the normal RB flow and
that with homogeneous layers at $Ra=10^{6}$ and $Pr=1$. Here $h$ is chosen as $0.2H$. We
only show the fluid with temperature larger than $90\%\Delta$ and smaller than
$10\%\Delta$. For the normal RB flow at this relatively low $Ra$, the large
thermal anomalies cannot be transported very far from the plates, which limits
the global heat flux. On the contrary, with two homogeneous thermal layers the
thermal plumes with large anomalies are significantly stronger and spread over
the entire convection bulk. Accordingly, the heat flux is greatly enhanced in
the flow with homogeneous thermal layers.
Figure 1: Volume rendering of the thermal field at $Ra=10^{6}$ and $Pr=1$.
Yellow and blue indicate the temperature larger than $90\%\Delta$ and smaller
than $10\%\Delta$, respectively. A: the normal Rayleigh-Bénard flow. B: the
modified flow with two homogeneous thermal layers of the height $h=0.2H$. The
flow in B has much stronger thermal plumes in the bulk, while for the flow in
A the large temperature anomaly can hardly be transported into the bulk.
We then present the main results of the current study, i.e. the ultimate
scalings of heat flux and flow velocity, which in their full form read
$Nu\sim Pr^{1/2}Ra_{b}^{1/2},\quad Re\sim Pr^{-1/2}Ra_{b}^{1/2}.$ (4)
To account for the height of two homogeneous thermal layers, the bulk Rayleigh
number is calculated as $Ra_{b}=Ra(1-2h/H)^{3}$. The Nusselt number is defined
as $Nu=q(H-2h)/(\kappa\Delta)$ with $q$ being the total dimensional heat flux.
When $h=0$, both definitions recover the usual form of RB flow. The Reynolds
number is $Re=u_{rms}H/\nu$, where $u_{rms}$ is the root-mean-square velocity over
the whole domain, since the flow motion is confined between the two plates,
including the homogeneous thermal layers. We first look at the dependences of $Nu$ and
$Re$ on $Ra_{b}$ for fixed $Pr=1$. Three decades of $Ra$ are covered from
$10^{5}$ up to $10^{8}$. The height of the homogeneous thermal layer is set at
$h=0.2H$ so that for all simulations the momentum boundary layer is fully
embedded inside the homogeneous thermal layer. Figures 2A and 2B show that
both $Nu$ and $Re$ follow the ultimate scaling law (4) perfectly, even though
the Rayleigh number is not very large. Moreover, both the heat flux and flow
velocity are greatly enhanced by introducing the homogeneous thermal layers.
For $Ra_{b}\geq 10^{6}$, the modified flow generates a heat flux more than one
order of magnitude higher than that for the normal RB flow. For instance, the
Nusselt number is comparable for the modified flow at $Ra\approx 10^{8}$ and
the normal RB flow at $Ra\approx 10^{11}$.
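For reference, the rescaled quantities and the fitted exponent can be reproduced in a few lines; the data below are synthetic and only illustrate the fitting procedure mentioned in the caption of figure 2.

```python
import numpy as np

def bulk_rayleigh(Ra, h, H=1.0):
    """Bulk Rayleigh number Ra_b = Ra * (1 - 2h/H)**3 defined in the text."""
    return Ra * (1.0 - 2.0 * h / H) ** 3

def fit_exponent(x, y):
    """Least-squares exponent gamma in y ~ x**gamma via a log-log fit."""
    gamma, _ = np.polyfit(np.log(x), np.log(y), 1)
    return gamma

Ra = np.logspace(5, 8, 7)                 # three decades, as in the text
Ra_b = bulk_rayleigh(Ra, h=0.2)
Nu = 0.05 * Ra_b ** 0.5                   # synthetic ultimate-scaling data
print(f"fitted gamma = {fit_exponent(Ra_b, Nu):.3f}")   # -> 0.500
```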
Figure 2: The ultimate scaling of the heat flux and flow velocity. Solid
symbols and lines depict the ultimate scalings of the modified flow with
homogeneous thermal layers. Open symbols and dashed lines show the normal RB
flow and the classic scaling. The corresponding exponents are determined by
linear fitting. A and B: The Nusselt number $Nu$ and Reynolds number $Re$
versus the increasing bulk Rayleigh number $Ra_{b}$ for a fixed Prandtl number
$Pr=1$. C and D: $Nu$ and $Re$ versus the increasing $Pr$ for fixed
$Ra=3\times 10^{6}$.
For the second group of simulations, we fix $Ra=3\times 10^{6}$ and increase
$Pr$ from $0.01$ to $10$, again covering three orders of magnitude. The height
of the homogeneous layers is $h=0.2H$. As shown in figures 2C and 2D, the
behaviors of $Nu$ and $Re$ are also very close to the ultimate scaling (4).
The scaling exponent for $Nu$ given by the linear fitting is indistinguishable
from the theoretically predicted value $1/2$. The exponent for $Re$ obtained
from the numerical data is slightly different from $-1/2$. For the moderate
$Ra=3\times 10^{6}$ considered here, the flow is not fully turbulent in the
high-Prandtl-number regime. Therefore $Re$ decreases faster than predicted by
the ultimate scaling. Another interesting observation is that at low $Pr$
region, the difference between the modified flow and the normal RB flow is
very small. This is expected since for small $Pr$, the viscous boundary
layer is thinner than the thermal one in the normal RB flow, and part of the
thermal boundary layer is already in the turbulent bulk.
Figure 3: The effect of the thermal layer height on the heat-flux scaling. A:
The heat flux versus $Ra_{b}$ for different $h$, which increases from $0$ to
$0.2$ as color changes from blue to red. B: The scaling exponent $\gamma$
versus the height $h$. In both panels the solid and dashed lines mark the
ultimate scaling with the exponent $1/2$ and the classic scaling with the
exponent $1/3$, respectively.
To further demonstrate that the ultimate scaling is satisfied when the thermal
and viscous boundary layers are decoupled from each other, we run simulations
with increasing $h$ for three different Rayleigh numbers $Ra=10^{5}$,
$10^{6}$, and $10^{7}$. The Prandtl number is fixed at $Pr=1$. We focus on the
behavior of the Nusselt number, which is shown in figure 3. As $h$ increases,
i.e. the thermal boundary layers are gradually decoupled from the viscous
ones, the heat flux is enhanced and the scaling exponent transitions from the
classic value of $1/3$ to the ultimate value $1/2$. In figure 3B we plot the
scaling exponent $\gamma$ versus the height of the homogeneous thermal layer
$h$. For small $h$, $\gamma$ increases rapidly to a value larger than $1/2$.
At this overshoot the thermal boundary layer already enters the turbulent bulk
for large $Ra$ but is still coupled with the viscous boundary layer for low
$Ra$. When $h$ is large enough, for all three $Ra$’s the thermal boundary
layer is fully within the turbulent region, and the exponent $\gamma$
gradually approaches the ultimate value $1/2$.
## IV Conclusion and discussion
In summary, we demonstrate that once the thermal boundary layers decouple from
the viscous ones and lie within the turbulent convection bulk, the ultimate
scaling of the heat flux and flow velocity can be perfectly realized even at
relatively low Rayleigh numbers. Our results support the physical picture of
the ultimate state of convection turbulence, namely that when the momentum boundary
layers become fully turbulent, the heat flux is independent of the fluid
viscosity. In line with the unifying model for RB convection [11, 12], the
current flow corresponds to the IVl regime. Compared to the homogeneous
convection [21, 22], the current flow configuration is closer to the
ultimate regime since the thermal boundary layers are included and the
ultimate scaling is achieved.
A definitive proof for the existence of the ultimate regime relies on the
direct observation of the ultimate scaling in the experiments and simulations
of the normal RB convection. Nevertheless, one can anticipate that, based on
the present study, convection flows will eventually enter the ultimate state
when the boundary layer region becomes fully turbulent at extremely high
thermal driving. Such conditions are very likely satisfied in natural systems
such as the Earth's outer core and stars. Transition towards the ultimate regime
has been reported in recent studies [16, 17], and we may expect that a fully
developed ultimate state will be observed in experiments in the near future.
Acknowledgements: This work is supported by the Major Research Plan of
National Natural Science Foundation of China for Turbulent Structures under
the Grants 91852107 and 91752202. Y. Yang also acknowledges the partial support
from the Strategic Priority Research Program of Chinese Academy of Sciences
under the Grant No. XDB42000000.
## References
* Spiegel [1971] E. A. Spiegel, Convection in stars I. basic Boussinesq convection, Annu. Rev. Astro. Astrophys. 9, 323 (1971).
* Tackley _et al._ [1993] P. J. Tackley, D. J. Stevenson, G. A. Glatzmaier, and G. Schubert, Effects of an endothermic phase transition at 670 km depth in a spherical model of convection in the Earth’s mantle, Nature 361, 699 (1993).
* Marshall and Schott [1999] J. Marshall and F. Schott, Open-ocean convection: Observations, theory, and models, Rev. Geophys. 37, 1 (1999).
* Heimpel _et al._ [2012] M. Heimpel, J. Aurnou, and J. Wicht, Simulation of equatorial and high-latitude jets on Jupiter in a deep convection model, Phys. Rev. Lett. 108, 024502 (2012).
* Ahlers _et al._ [2009] G. Ahlers, S. Grossmann, and D. Lohse, Heat transfer and large scale dynamics in turbulent Rayleigh-Bénard convection, Rev. Mod. Phys. 81, 503 (2009).
* Chillà and Schumacher [2012] F. Chillà and J. Schumacher, New perspectives in turbulent Rayleigh-Bénard convection, Eur. Phys. J. E 35, 58 (2012).
* Xia [2013] K.-Q. Xia, Current trends and future directions in turbulent thermal convection, Theor. Appl. Mech. Lett. 3, 052001 (2013).
* Plumley and Julien [2019] M. Plumley and K. Julien, Scaling laws in Rayleigh-Bénard convection, Earth and Space Science 34, 1580 (2019).
* Malkus [1954] W. V. R. Malkus, The heat transport and spectrum of thermal turbulence, Proc. R. Soc. Lond. A 225, 196 (1954).
* Kraichnan [1962] R. H. Kraichnan, Turbulent thermal convection at arbitrary Prandtl number, Phys. Fluids 5, 1374 (1962).
* Grossmann and Lohse [2000] S. Grossmann and D. Lohse, Scaling in thermal convection: a unifying theory, J. Fluid Mech. 407, 27 (2000).
* Grossmann and Lohse [2001] S. Grossmann and D. Lohse, Thermal convection for large Prandtl numbers, Phys. Rev. Lett. 86, 3316 (2001).
* Chavanne _et al._ [1997] X. Chavanne, F. Chilla, B. Castaing, B. Hebral, B. Chabraud, and J. Chaussy, Observation of the ultimate regime in Rayleigh-Bénard convection, Phys. Rev. Lett. 79, 3648 (1997).
* Glazier _et al._ [1999] J. A. Glazier, T. Segawa, A. Naert, and M. Sano, Evidence against ‘ultrahard’ thermal turbulence at very high Rayleigh numbers, Nature 398, 307 (1999).
* Niemela _et al._ [2000] J. J. Niemela, L. Skrbek, K. R. Sreenivasan, and R. J. Donnelly, Turbulent convection at very high Rayleigh numbers, Nature 404, 837 (2000).
* He _et al._ [2012] X. He, D. Funfschilling, H. Nobach, E. Bodenschatz, and G. Ahlers, Transition to the ultimate state of turbulent Rayleigh-Bénard convection, Phys. Rev. Lett. 108, 024502 (2012).
* Zhu _et al._ [2018] X. Zhu, V. Mathai, R. J. A. M. Stevens, R. Verzicco, and D. Lohse, Transition to the ultimate regime in two-dimensional Rayleigh-Bénard convection, Phys. Rev. Lett. 120, 144502 (2018).
* Iyer _et al._ [2020] K. P. Iyer, J. D. Scheel, J. Schumacher, and K. R. Sreenivasan, Classical 1/3 scaling of convection holds up to $Ra=10^{15}$, Proc. Natl. Acad. Sci. U.S.A. 117, 7594 (2020).
* Zhu _et al._ [2017] X. Zhu, R. Stevens, R. Verzicco, and D. Lohse, Roughness-facilitated local 1/2 scaling does not imply the onset of the ultimate regime of thermal convection, Phys. Rev. Lett. 119, 154501 (2017).
* Zhu _et al._ [2019] X. Zhu, V. Mathai, R. J. A. M. Stevens, R. Verzicco, and D. Lohse, Nu $\sim$ Ra$^{1/2}$ scaling enabled by multiscale wall roughness in Rayleigh-Bénard convection, J. Fluid Mech. 869, R4 (2019).
* Lohse and Toschi [2003] D. Lohse and F. Toschi, The ultimate state of thermal convection, Phys. Rev. Lett. 90, 034503 (2003).
* Calzavarini _et al._ [2005] E. Calzavarini, D. Lohse, F. Toschi, and R. Tripiccione, Rayleigh and Prandtl number scaling in the bulk of Rayleigh-Bénard turbulence, Phys. Fluids 17, 055107 (2005).
* Lepot _et al._ [2018] S. Lepot, S. Aumaitre, and B. Gallet, Radiative heating achieves the ultimate regime of thermal convection, Proc. Natl Acad. Sci. USA 115, 8973 (2018).
* Bouillaut _et al._ [2019] V. Bouillaut, S. Lepot, S. Aumaitre, and B. Gallet, Transition to the ultimate regime in a radiatively driven convection experiment, J. Fluid Mech. 861, R5 (2019).
* Creyssels [2020] M. Creyssels, Model for classical and ultimate regimes of radiatively driven turbulent convection, J. Fluid Mech. 900, A39 (2020).
* Wang _et al._ [2020] B.-F. Wang, Q. Zhou, and C. Sun, Vibration-induced boundary-layer destabilization achieves massive heat-transport enhancement, Sci. Adv. 6, ezza8239 (2020).
* Ostilla-Mónico _et al._ [2015] R. Ostilla-Mónico, Y. Yang, E. P. van der Poel, D. Lohse, and R. Verzicco, A multiple resolutions strategy for direct numerical simulation of scalar turbulence., J. Comput. Phys. 301, 308 (2015).
* Fadlun _et al._ [2000] E. A. Fadlun, R. Verzicco, P. Orlandi, and J. Mohd-Yusof, Combined immersed-boundary finite-difference methods for three-dimensional complex flow simulations, Journal of Computational Physics 161, 35 (2000).
* Childs _et al._ [2012] H. Childs, E. Brugger, B. Whitlock, J. Meredith, S. Ahern, D. Pugmire, K. Biagas, M. Miller, C. Harrison, G. H. Weber, H. Krishnan, T. Fogal, A. Sanderson, C. Garth, E. W. Bethel, D. Camp, O. Rübel, M. Durant, J. M. Favre, and P. Navrátil, VisIt: An End-User Tool For Visualizing and Analyzing Very Large Data, in _High Performance Visualization–Enabling Extreme-Scale Scientific Insight_ (2012) pp. 357–372.
|
# Searching for signatures of chaos in $\gamma$-ray light curves of selected
Fermi-LAT blazars
O. Ostapenko,1 M. Tarnopolski,2 N. Żywucka3 and J. Pascual-Granado4
1Department of Astronomy and Space Physics, Taras Shevchenko National
University of Kyiv, Akademika Hlushkova Ave 4, Kyiv, 03680, Ukraine
2Astronomical Observatory, Jagiellonian University, Orla 171, 30-244, Kraków,
Poland
3Centre of Space Research, North-West University, Potchefstroom, South Africa
4Instituto de Astrofísica de Andalucía (IAA-CSIC), Glorieta de la Astronomía
s/n, 18008, Granada, Spain
E-mail<EMAIL_ADDRESS><EMAIL_ADDRESS><EMAIL_ADDRESS><EMAIL_ADDRESS>
(Accepted XXX. Received YYY; in original form ZZZ)
###### Abstract
Blazar variability appears to be stochastic in nature. However, the possibility
of low-dimensional chaos has been considered in the past, though with no
unambiguous detection so far. If present, it would constrain the emission mechanism by
suggesting an underlying dynamical system. We rigorously searched for
signatures of chaos in Fermi-Large Area Telescope light curves of 11 blazars.
The data were comprehensively investigated using the methods of nonlinear time
series analysis: phase-space reconstruction, fractal dimension, maximal
Lyapunov exponent (mLE). We tested several parameters that can affect the
outcomes, in particular the mLE, in order to rule out spurious detections.
We found no signs of chaos in any of the analyzed blazars. Blazar
variability is either truly stochastic in nature, or governed by high-
dimensional chaos that can often resemble randomness.
###### keywords:
chaos – galaxies: active – galaxies: jets – BL Lacertae objects: general –
gamma-rays: galaxies – methods: data analysis
## 1 Introduction
The Fermi-Large Area Telescope (LAT; Atwood et al., 2009) is a high energy
$\gamma$-ray telescope, sensitive to photons in the energy range from 20 MeV
to 300 GeV, which detected 5065 sources in the 100 MeV–100 GeV energy range
(Abdollahi et al., 2020). More than 3130 sources were identified as blazars, a
subclass of active galactic nuclei (AGNs), possessing a set of characteristic
properties such as strong continuous radiation observed throughout the
electromagnetic spectrum, flat-spectrum radio core, fast variability in any
energy band, and a high degree of optical-to-radio polarization. In the
unification scheme introduced by Urry & Padovani (1995), blazars are AGNs
pointing their relativistic jets toward the Earth (see e.g. Urry & Padovani,
1995; Böttcher et al., 2012; Padovani, 2017, for a review). Blazars are
usually divided into two groups: BL Lacertae objects (BL Lacs) and flat
spectrum radio quasars (FSRQs). This classification is historically based on
the strength of the optical emission lines, i.e. FSRQs have broad emission
lines with the equivalent width $>$5 Å, while BL Lacs possess weak lines or no
emission lines at all. Further classification is made taking into account
position of a synchrotron peak, $\nu^{s}_{\mathrm{peak}}$ in the $\nu-\nu$Fν
plane, in the multiwavelength spectral energy distribution and different
accretion regimes of AGNs. BL Lacs are commonly split into low-peaked,
intermediate-peaked, and high-peaked (HBL) BL Lacs (Abdo et al., 2010). An
additional group of extreme HBLs, having $\nu_{\mathrm{peak}}^{s}\gtrsim
10^{17}\,{\rm Hz}$, is also considered (Costamante et al., 2001; Akiyama et
al., 2016).
The search for chaos in AGNs has not been successful so far. One of the first
attempts was made by Lehto et al. (1993), who computed the correlation
dimensions $d_{C}$ of the X-ray light curves (LCs) of eight AGNs, and reported
evidence for $d_{C}<4.5$ for the Seyfert galaxy NGC 4051, suggesting that
variability of this source might be chaotic in its nature.
Provenzale et al. (1994) investigated a long (800 days) optical LC of the
quasar 3C 345 with the correlation dimension as well. While $d_{C}\approx 3.1$
was found, the authors demonstrated that this is a spurious detection owing to
the long-memory property of the nonstationary signal driven by a power-law
form of the power spectral density. They pointed at an intermittent stochastic
process that produced outputs consistent with the observations. Therefore, the
interpretation of any fractional correlation dimension of a phase-space
trajectory reconstructed from a univariate time series needs to be backed up
with additional evidence. The same technique was applied to microquasars
(Misra et al., 2004), but the initially reported saturation of the correlation
dimension was not found to be a signature of chaos, likely owing to the
nonstationarity of the data again (Mannattil et al., 2016). Indeed,
nonstationarity often leads to a spurious detection of chaos in a nonchaotic
system (Tarnopolski, 2015), hence a proper transformation is required.
Kidger et al. (1996) performed microvariability analysis of the BL Lac 3C 66A
in the optical and near infrared bands. They reported on a positive maximal
Lyapunov exponent (mLE) and very low correlation dimensions, $d_{C}<2$. These
are contradictory findings, since $1<d_{C}<2$ implies at most a two-
dimensional phase space (Seymour & Lorimer, 2013), in which, according to the
Poincaré-Bendixson theorem, chaos cannot occur (Lichtenberg & Lieberman,
1992). This can be most likely attributed to the very short LCs that were
investigated. Sadun (1996), in turn, conducted a broad nonlinear time series
analysis of the optical LCs of the famous OJ 287 double black hole (BH)
system, and reported $2\lesssim d_{C}\lesssim 4$, with positive mLEs as well.
The particular method for their calculation was not described explicitly, but
it should be mentioned that the algorithm of Wolf et al. (1985), frequently
employed in the past, is biased towards detecting positive mLEs, especially
for short data sets, since it does not test for exponential divergence, but
assumes it explicitly ad hoc (Tarnopolski, 2015). A more rigorous, up-to-date
analysis of OJ 287 is therefore appropriate.
Finally, most recently Bachev et al. (2015) analyzed a long (1.6 year),
densely sampled (160 000 points) optical Kepler LC of the BL Lac W2R 1926+42.
They aimed to constrain the correlation dimension of the reconstructed phase-
space trajectory, however the dimension did not saturate at any value smaller
than the maximal tested embedding dimension $m=10$. Overall, a saturated or
even fractional $d_{C}$ need not be due to the underlying chaotic dynamics,
and hence a larger suite of nonlinear time series analysis techniques should
be invoked, especially aiming at establishing the sign of the mLE, with a
careful consideration of the stationarity of the analyzed data.
In this work we search for signatures of chaos in the $\gamma$-ray LCs of some
of the brightest or otherwise famous blazars from Fermi-LAT. We examine five
BL Lacs (Mrk 501, Mrk 421, PKS 0716+714, PKS 2155-304, TXS 0506+056) and six
FSRQs (PKS 1510-089, 3C 279, B2 1520+31, B2 1633+38, 3C 454.3, PKS 1830-211),
i.e. the sample from Tarnopolski et al. (2020). The methodology for studying
chaotic behavior includes well-established methods of nonlinear time series
analysis, such as reconstruction of the phase-space, correlation dimension,
and mLE. We utilize the method of surrogates to ascertain the reliability of
the results.
## 2 Data
To ensure stationarity, we investigated the logarithmized LCs with 7-day
binning in order to maximize the number of points (Tarnopolski et al., 2020),
i.e. we search for chaotic behavior in the process $l(t)$ underlying the
observed variability $f(t)$. The two are connected via $f(t)=\exp[l(t)]$ since
(Uttley et al., 2005):
1. the distribution of fluxes is lognormal,
2. the root mean square–flux relation is linear.
### 2.1 Fermi data
We performed a spectral analysis of $\sim$11-year Fermi-LAT data of 11
well-known blazars, spanning between MJD 54682 and 58592 in the energy range
of 100 MeV–300 GeV. We analyzed the data using a binned maximum likelihood
approach111https://fermi.gsfc.nasa.gov/ssc/data/analysis/scitools/binned_likelihood_tutorial.html
in a region of interest (ROI) of $10^{\circ}$ around the position
of each blazar, with the latest 1.2.1 version of Fermitools, namely conda
distribution of the Fermi ScienceTools222https://github.com/fermi-
lat/Fermitools-conda/wiki, and fermipy (Wood et al., 2017). We used the
reprocessed Pass 8 data and the $\text{P8R3}\\_\text{SOURCE}\\_\text{V2}$
instrument response functions. A zenith angle cut of $90^{\circ}$ was used
together with EVENT_CLASS = 128 and EVENT_TYPE = 3, while the
_gtmktime_ cuts DATA_QUAL==1 $\&\&$ LAT_CONFIG==1 were chosen. We set the
spatial bin size to $0.1^{\circ}$ and used 8 energy bins per decade. The diffuse
components333https://fermi.gsfc.nasa.gov/ssc/data/access/lat/BackgroundModels.html
were modeled with the Galactic diffuse emission model gll_iem_v07.fits and the
isotropic diffuse model iso_P8R3_SOURCE_V2_v01.txt, including also all known
point-like foreground/background sources in the ROI from the LAT 8-year Source
Catalog (4FGL; The Fermi-LAT collaboration, 2020). The LCs of each blazar were
generated using 7-day time bins and selecting observations with the test
statistic $TS>25$.
### 2.2 Interpolation of missing points
Data loss introduces a bias in the estimated frequency content of the signal,
because the observed power spectrum is the result of the convolution of the
true power spectrum with the spectral window function. Thus, recovering the
entire duty cycle is necessary to identify the signatures of chaos in the LCs
without biases.
Missing data points in the LCs were interpolated with the method of
interpolation by an autoregressive moving average algorithm (MIARMA; Pascual-
Granado et al., 2015), which aims to preserve the original frequency content
of the signal. The algorithm makes use of a forward-backward predictor based
on ARMA modeling. A local prediction is obtained for each gap, which also
allows weakly nonstationary signals to be interpolated. A toy sketch of this
forward-backward idea is given below.
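As a rough illustration (not the actual MIARMA code; the helper name, the fixed ARMA order, and the single-gap setup are our simplifying assumptions), a forward-backward gap filler could look like:

```python
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

def fill_gap(x, gap_start, gap_len, order=(4, 0, 2)):
    """Toy forward-backward ARMA gap filler (hypothetical helper).

    MIARMA selects the ARMA order locally and treats every gap in the
    series; here a single gap is filled with a fixed order for brevity.
    """
    before = x[:gap_start]
    after = x[gap_start + gap_len:]
    fwd = ARIMA(before, order=order).fit().forecast(gap_len)
    # 'backward' prediction: fit the time-reversed segment after the gap
    bwd = ARIMA(np.ascontiguousarray(after[::-1]),
                order=order).fit().forecast(gap_len)[::-1]
    w = np.linspace(1.0, 0.0, gap_len)       # linearly blend the two
    filled = x.copy()
    filled[gap_start:gap_start + gap_len] = w * fwd + (1.0 - w) * bwd
    return filled
```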
## 3 Methods
The whole analysis was performed on the logarithmic LCs. We analyzed
surrogates as well to make sure our results are not due to a chance
occurrence. Every object was nonlinearly denoised (see Sect. 3.2)
before the phase-space reconstruction and the subsequent search for a positive
mLE. The routines implemented in the TISEAN 3.0.1
package444https://www.pks.mpg.de/~tisean/ (Hegger et al., 1999) were utilized
throughout.
### 3.1 Phase-space reconstruction
The phase-space representation of a dynamical system is one of the key points
in nonlinear data analysis. In theory, a dynamical system can be defined by a
set of first-order ordinary differential equations that can be directly
investigated to rigorously describe the structure of the phase space (Kantz &
Schreiber, 2004). However, in case of real-world dynamical systems, the
underlying equations are either too complex, or simply unknown. Observations
of a physical process usually do not provide all possible state variables.
Often just one observable is available, e.g. a series of flux values that form
a LC. Such a univariate time series can still be utilized to reconstruct the
phase space.
A basic technique is to reconstruct the phase-space trajectory via the Takens
time-delay embedding method (Takens, 1981). Having a series of scalar
measurements $x(t)$, evenly distributed in time $t$, one can form an
$m$-dimensional location vector of delay coordinates, $\vec{S}(t)$, using only
the values of $x(t)$, according to
$\vec{S}(t)=[x(t),x(t+\tau),x(t+2\tau),...,x(t+(m-1)\tau)].$ (1)
The main difficulty while attempting the phase-space reconstruction lies in
determining the values of the time delay $\tau$ and embedding dimension $m$.
These can be obtained with the help of mutual information (MI, see Sect. 3.3)
and the fraction of false nearest neighbors (FNN, see Sect. 3.4). To uncover
the structure buried in observational fluctuations, noise reduction techniques
are also employed.
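The reconstructions in this work were produced with TISEAN; purely for illustration, a minimal NumPy sketch of the delay embedding of Eq. (1) (the function name `delay_embed` is ours) is:

```python
import numpy as np

def delay_embed(x, m, tau):
    """Build the m-dimensional delay-coordinate vectors of Eq. (1).

    x   : 1-D array of scalar measurements (e.g. a logarithmic LC)
    m   : embedding dimension
    tau : time delay in samples
    Row t of the result is [x(t), x(t+tau), ..., x(t+(m-1)*tau)].
    """
    x = np.asarray(x, dtype=float)
    n_vec = len(x) - (m - 1) * tau
    if n_vec <= 0:
        raise ValueError("time series too short for this (m, tau)")
    return np.column_stack([x[i * tau: i * tau + n_vec] for i in range(m)])

# Example: a 3-dimensional reconstruction with a delay of 3 samples
lc = np.log(np.random.lognormal(size=500))   # stand-in for a binned LC
traj = delay_embed(lc, m=3, tau=3)           # shape (494, 3)
```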
### 3.2 Nonlinear noise reduction
Generally, noise reduction methods for nonlinear chaotic time series work
iteratively. In each iteration the noise is repressed by requiring locally
linear relations among the delay coordinates, i.e. by moving the delay vectors
toward some smooth manifold. We performed noise reduction with the algorithm
designed by Grassberger et al. (1993), implemented as the ghkss routine in the
TISEAN package. The concept is as follows: the dynamical system forms a
$q$-dimensional manifold $M_{1}$ containing the phase-space trajectory.
According to the Takens’ embedding theorem there exists a one-to-one image of
the path in the embedding space, if $m$ is sufficiently high. Thus, if the
measured time series was not corrupted with noise, all the embedding vectors
$\vec{v}_{n}$ would lie inside another manifold $M_{2}$ in the embedding
space. However, due to the noise this condition is no longer fulfilled. The
idea of the locally projective noise reduction scheme is that for each
$\vec{v}_{n}$ there exists a correction $\Theta_{n}$, with $||\Theta_{n}||$
small, in such a way that $\vec{v}_{n}-\Theta_{n}\in M_{2}$ and that
$\Theta_{n}$ is orthogonal on $M_{2}$. Of course a projection to the manifold
can only be a reasonable concept if the vectors are embedded in spaces which
are higher-dimensional than the manifold $M_{2}$. Thus we have to over-embed
in $m$-dimensional spaces with $m>q$.
With the metric tensor $G$ defined as
$G_{ij}=\begin{cases}1&i=j,\,i>1,\,j<m\\\ 0&{\rm otherwise}\end{cases},$ (2)
where $m$ is the dimension of the “over-embedded” delay vectors, the
minimization problem
$\sum_{i}\left(\Theta_{i}G^{-1}\Theta_{i}\right)=\min$ is to
be solved, subject to the following constraints:
1. $a^{i}_{n}(\vec{v}_{n}-\Theta_{n})+b^{i}_{n}=0$ (for $i=q+1,...,m$);
2. $a^{i}_{n}Ga^{j}_{n}=\delta_{ij}$,
where the $a^{i}_{n}$ are the normal vectors of $M_{2}$ at the point
$\vec{v}_{n}-\Theta_{n}$; $b^{i}_{n}$ could be found solving a minimization
problem (Grassberger et al., 1993), where $b^{i}_{n}=-a^{i}_{n}\cdot\xi^{i}$
and $\xi$ is given as a linear combination $\xi^{i}_{n}=\sum_{v_{k}\in
U_{n}}\omega_{k}v_{k+n}$. The neighborhood for each point $\vec{v_{n}}$ is
$U_{n}$, and $\omega_{k}$ is a weight factor with $\omega_{k}>0$ and
$\sum_{k}\omega_{k}=1$.
### 3.3 Mutual Information (MI)
The most reasonable delay is chosen as the first local minimum of the MI. The
$\tau$ time delayed MI is defined as
$I(\tau)=\sum_{i,j=1}^{n}{P_{ij}(\tau)\ln{\frac{P_{ij}(\tau)}{P_{i}P_{j}}}},$
(3)
where $P_{ij}(\tau)$ is the joint probability that an observation falls in the
$i$-th interval and the observation a time $\tau$ later falls in the $j$-th
interval, while $P_{i}$ and $P_{j}$ are the marginal probabilities (Fraser &
Swinney, 1986).
In other words, it gives the amount of information one can obtain about
$x_{t+\tau}$ given $x_{t}$. The range between $x_{\rm min}$ and $x_{\rm max}$
of the data is divided into $n$ bins, and the MI as a function of $\tau$ is
constructed from the marginal probabilities that the variable lies in the
$i$-th and $j$-th bins and the joint probabilities $P_{ij}(\tau)$ that $x_{t}$
and $x_{t+\tau}$ are in the $i$-th and $j$-th bins, respectively (Tarnopolski,
2015).
Additionally, we also used the autocorrelation function (ACF), with the
criterion of choosing as the delay $\tau$ the first lag at which the ACF drops
below $1/{\rm e}$; however, the obtained delays did not always match those
from the MI. Therefore, all delays inferred from the MI, the ACF, and values
in between were checked in subsequent steps of the analysis. Not all parameter
choices led to a clear interpretation regarding chaotic behavior, though.
Both the MI and the ACF were computed with the mutual routine in the TISEAN
package.
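For illustration, a minimal histogram-based NumPy sketch of Eq. (3) and of the first-local-minimum criterion (the delays used in this work come from TISEAN's mutual routine; the function names and default bin count below are our choices):

```python
import numpy as np

def mutual_information(x, tau, n_bins=16):
    """Histogram estimate of the time-delayed MI of Eq. (3)."""
    x0, x1 = x[:-tau], x[tau:]
    p_ij, _, _ = np.histogram2d(x0, x1, bins=n_bins)
    p_ij /= p_ij.sum()
    p_i = p_ij.sum(axis=1, keepdims=True)    # marginal over rows
    p_j = p_ij.sum(axis=0, keepdims=True)    # marginal over columns
    mask = p_ij > 0
    return np.sum(p_ij[mask] * np.log(p_ij[mask] / (p_i @ p_j)[mask]))

def first_local_minimum(x, max_tau=30):
    """Pick the delay as the first local minimum of MI(tau)."""
    mi = [mutual_information(x, tau) for tau in range(1, max_tau + 1)]
    for k in range(1, len(mi) - 1):
        if mi[k] < mi[k - 1] and mi[k] < mi[k + 1]:
            return k + 1                     # taus start at 1
    return int(np.argmin(mi)) + 1            # fall back: global minimum
```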
### 3.4 False Nearest Neighbors (FNN)
The FNN method is a way of determining the minimal sufficient embedding
dimension $m$. This means that in an $m_{0}$-dimensional delay space the
reconstructed trajectory is a topological one-to-one image of the trajectory
in the original phase space. If one selects a point on it, then its neighbors
are mapped onto neighbors in the delay space. Thus the neighborhoods of points
are mapped onto neighborhoods, too. However, the shape and diameter of the
neighborhoods vary depending on the LEs. But if one embeds in an
$m$-dimensional space with $m<m_{0}$, points are projected onto neighborhoods
of other points to which they would not belong in higher dimensions (aka false
neighbors), because the topological structure is no longer retained. The FNN
algorithm looks for the nearest neighbor $\vec{k}_{j}$ of each point
$\vec{k}_{i}$ in an $m$-dimensional space, and calculates the distance
$||\vec{k}_{i}-\vec{k}_{j}||$. It then iterates over both points and computes
$R_{i}=\frac{||\vec{k}_{i+1}-\vec{k}_{j+1}||}{||\vec{k}_{i}-\vec{k}_{j}||}.$
(4)
Thereby, a false neighbor is any neighbor for which $R_{i}>R_{\rm tol}$, where
$R_{\rm tol}$ is some threshold. This algorithm was first proposed by Kennel
et al. (1992), and later improved by Hegger & Kantz (1999). The FNN algorithm
is widely used for detecting chaotic behavior in data sets ranging from
astrophysical observations (Hanslmeier et al., 2013) to experimental
measurements in electronics (Kodba et al., 2005). We utilized the
false_nearest routine from the TISEAN package.
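The following simplified sketch conveys the idea: a neighbor is declared false when the distance grows by more than a factor $R_{\rm tol}$ after adding one more delay coordinate (TISEAN's false_nearest is more elaborate; `delay_embed` is the helper sketched in Sect. 3.1):

```python
import numpy as np
from scipy.spatial import cKDTree

def fnn_fraction(x, tau, m, r_tol=10.0):
    """Fraction of false nearest neighbors at embedding dimension m."""
    emb_m = delay_embed(x, m, tau)       # helper from the Sect. 3.1 sketch
    emb_m1 = delay_embed(x, m + 1, tau)
    n = len(emb_m1)                      # vectors valid in both spaces
    tree = cKDTree(emb_m[:n])
    dist, idx = tree.query(emb_m[:n], k=2)
    d_m, j = dist[:, 1], idx[:, 1]       # nearest neighbor (not self)
    d_m1 = np.linalg.norm(emb_m1 - emb_m1[j], axis=1)
    ok = d_m > 0                         # skip exactly coincident points
    return np.mean(d_m1[ok] / d_m[ok] > r_tol)

# the 'knee' of this curve suggests the minimal embedding dimension m
# fractions = [fnn_fraction(lc, tau=3, m=m) for m in range(1, 11)]
```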
### 3.5 Lyapunov exponent
The LE is one of the main characteristics in the analysis of chaotic dynamical
systems. The LE characterizes the rate of separation of infinitesimally close
trajectories $\textbf{Z}(t)$ and $\textbf{Z}_{0}(t)$ in the phase space (Cecconi et al.,
2010). It describes the evolution of the separation
$\delta\textbf{Z}(t)=\textbf{Z}(t)-\textbf{Z}_{0}(t)$ via
$|\delta\textbf{Z}(t)|\approx e^{\lambda t}|\delta\textbf{Z}_{0}|,$ (5)
where $\delta\textbf{Z}_{0}=\textbf{Z}(0)-\textbf{Z}_{0}(0)$. The mLE is a
measure of predictability for a given solution to a dynamical system, and is
formally determined as
$\lambda_{\rm max}=\lim_{t\to\infty}\lim_{\delta\textbf{Z}_{0}\to
0}\frac{1}{t}\ln\frac{|\delta\textbf{Z}(t)|}{|\delta\textbf{Z}_{0}|}.$ (6)
A positive mLE usually indicates that the system is chaotic, i.e. exhibits
sensitive dependence on initial conditions, manifesting itself through
exponential divergence.
For the estimation of the mLE of a given univariate time series we use the
Kantz method (Hegger et al., 1999), implemented as the lyap_k routine in the
TISEAN package. The algorithm takes points in the neighborhood of some point
$x_{i}$. Next, it computes the average distance of all acquired trajectories
to the reference ($i$-th) one as a function of the relative time $n$. The
average $S(n)$ of the logarithms of these distances (the so-called stretching
factors) is plotted as a function of $n$. In the case of chaos,
three regions should be distinct: a steep increase for small $n$, a linear
part and a plateau (Seymour & Lorimer, 2013). The slope of the linear increase
is the mLE; its inverse is the Lyapunov time.
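A stripped-down sketch of this procedure (a fixed neighborhood radius `eps`, a simple Theiler window, and the `delay_embed` helper from Sect. 3.1 are our simplifications; the actual computations used lyap_k):

```python
import numpy as np
from scipy.spatial import cKDTree

def stretching_factors(x, tau, m, eps, n_max=20, theiler=10):
    """Kantz-style S(n); a linear region in S(n) vs. n gives the mLE
    as its slope (simplified sketch of TISEAN's lyap_k)."""
    emb = delay_embed(x, m, tau)         # helper from the Sect. 3.1 sketch
    n_ref = len(emb) - n_max
    tree = cKDTree(emb[:n_ref])
    S, used = np.zeros(n_max), 0
    for i in range(n_ref):
        nbrs = [j for j in tree.query_ball_point(emb[i], eps)
                if abs(j - i) > theiler]  # exclude temporal neighbors
        if not nbrs:
            continue
        used += 1
        nbrs = np.asarray(nbrs)
        for n in range(n_max):
            # follow the last delay coordinate n steps into the future
            d = np.mean(np.abs(emb[nbrs + n, -1] - emb[i + n, -1]))
            S[n] += np.log(d) if d > 0 else 0.0   # guard identical points
    return S / max(used, 1)
```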
### 3.6 Correlation dimension
A fractal dimension (Mandelbrot, 1983) is often measured with the correlation
dimension, $d_{C}$ (Grassberger & Procaccia, 1983), which takes into account
the local densities of the points in the examined dataset. For usual 1D, 2D or
3D cases the $d_{C}$ is equal to 1, 2 and 3, respectively. Typically, a
fractional correlation dimension is obtained for fractals.555Although some
fractals can exhibit integer fractal dimensions, just different from the
embedding dimension; e.g. the boundary of the Mandelbrot set has a dimension
of exactly 2 (Shishikura, 1998).
The correlation dimension is defined as
$d_{C}=\lim_{R\rightarrow 0}\frac{\ln C(R)}{\ln R},$ (7)
with the estimate for the correlation function $C(R)$ being
$C(R)\propto\sum_{i=1}^{N}\sum_{j=i+1}^{N}\Theta\left(R-||x_{i}-x_{j}||\right),$
(8)
where the Heaviside step function $\Theta$ adds to $C(R)$ only points $x_{i}$
in a distance smaller than $R$ from $x_{j}$ and vice versa. The total number
of points in the reconstructed phase-space trajectory is denoted by $N$, and
the usual Euclidean distance is employed. The limit in Eq. (7) is attained by
fitting a straight line to the linear part of the obtained $\log C(R)$ vs.
$\log R$ dependency. The dimension $d_{C}$ is estimated as the slope of this
linear regression.
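A compact sketch of this estimate (for brevity it fits all scales with non-empty counts, whereas in practice only the linear scaling region should be fitted):

```python
import numpy as np
from scipy.spatial.distance import pdist

def correlation_dimension(emb, r_grid):
    """Slope of log C(R) vs. log R, Eqs. (7)-(8); emb is a reconstructed
    trajectory, e.g. from the delay_embed sketch of Sect. 3.1."""
    d = pdist(emb)                                 # all pairwise distances
    c = np.array([np.mean(d < r) for r in r_grid])
    ok = c > 0                                     # avoid log(0) at small R
    slope, _ = np.polyfit(np.log(np.asarray(r_grid)[ok]), np.log(c[ok]), 1)
    return slope
```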
Eckmann & Ruelle (1992) argued that for a time series of length $N$, the
maximal meaningful value of $d_{C}$ is necessarily less than $2\log N$ (see
also Ruelle, 1990). For the LCs examined herein, we have $N\gtrsim 500$, hence
$d_{C}\lesssim 5$. We therefore search for low-dimensional chaos, i.e. with
$m\sim 3-5$.
### 3.7 Surrogate data
The method of surrogates is the most commonly employed one to provide a
reliable statistical evaluation in order to ensure that the observed results
are not obtained by chance, but are a true characteristic of the system.
Surrogates can be created as a data set that is generated from a model fitted
to the observed (original) data, or directly from the original data (by some
suitable transformation of it). Testing for the underlying nonlinearity with
surrogates requires an appropriate null hypothesis: the data are linearly
correlated in the temporal domain, but are random otherwise. In our employed
approach, surrogates are generated from the original data while destroying any
nonlinear structure by randomizing the phases of the Fourier transform
(Theiler et al., 1992; Oprisan et al., 2015).
We use the routine surrogates from the TISEAN package, which generates
multivariate surrogate data (i.e., implements the iterative Fourier scheme).
The idea behind this routine is to create a whole ensemble of different
realizations of a null hypothesis, and to apply statistical tests to reject
the null for a given data set. The algorithm creates surrogates with the same
Fourier amplitudes and the same distribution of values as in the original data
set (Kantz & Schreiber, 2004). If the chaotic signature is present in the
original data, but not in the surrogates, one can ascertain that the detection
of chaotic behavior is a real phenomenon.
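For illustration, the basic, non-iterative phase-randomization step of Theiler et al. (1992) can be sketched as below; note that the iterative scheme actually used here additionally enforces the original distribution of values:

```python
import numpy as np

def ft_surrogate(x, rng=None):
    """Phase-randomized surrogate: same Fourier amplitudes, random
    phases (basic, non-iterative version of the scheme)."""
    rng = np.random.default_rng() if rng is None else rng
    mean = np.mean(x)
    X = np.fft.rfft(x - mean)
    phases = rng.uniform(0.0, 2.0 * np.pi, len(X))
    phases[0] = 0.0                      # keep the DC bin real
    if len(x) % 2 == 0:
        phases[-1] = 0.0                 # Nyquist bin must stay real
    return np.fft.irfft(np.abs(X) * np.exp(1j * phases), n=len(x)) + mean
```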
## 4 Results
The 11 blazars in our sample were examined according to the methodology
outlined in Sect. 3. We cannot claim the presence of chaos in any of the
analyzed objects. In the following we illustrate the analysis with an example
of one blazar, i.e. 3C 279, leading to the conclusion of the lack of chaotic
behavior in this source. Similar results were obtained for the remaining 10
blazars.
### 4.1 Embedding dimension $m$
The FNN algorithm was employed to infer the proper embedding dimension $m$.
The FNN fraction for different $m$ is displayed in Fig. 1. A clear bending (a
knee) is seen at $m\simeq 4-5$ on the FNN plot (Fig. 1(a)). The three curves
represent one, two, and three iterations of the denoising procedure of the
original LC with the value $\tau=3$. However, there is no clear bend on the
FNN plot (b), which was obtained with the delay value $\tau=9$. In order to
ascertain that this result is not a chance occurrence, 100 surrogates were
generated for every data set and their FNN fractions were computed. A
representative subset of such surrogates and their mean value is displayed in
Fig. 1(c). The FNN fractions remain high for all $m$ tested, and overall no
clear bend is visible.
Figure 1: The FNN plot for the logarithmic LC of 3C 279. (a) The knee at
$m\simeq 4-5$ indicates the most appropriate embedding dimension; the value
$\tau=3$ was used. (b) No such sharp bending is seen when the delay was set to
$\tau=9$. (c) The lack of a sharp bending is also evident in case of the
surrogates.
### 4.2 Time delay $\tau$
After testing different values of $\tau$ for the phase-space reconstruction
and the LEs, we came to the conclusion that various $\tau$ values lead to
dramatically different results. MI yielded in general $\tau<15$. The same
range of $\tau$ was implied from the ACF, although often the particular values
were inconsistent. Fig. 2 shows illustrative plots of MI and ACF. By
investigating the phase-space reconstructions and the resulting mLEs, we
settled on the values $\tau=3$ and $\tau=8$ as representative.
Figure 2: Estimation of the delay $\tau$. (a) The MI has its first local
minimum at $\tau=3$. (b) The ACF drops below $1/{\rm e}$ at $\tau=8$.
### 4.3 Phase-space reconstruction
With the obtained values of $m$ and $\tau$, one can in principle produce a
phase-space reconstruction of the trajectory according to Eq. (1). However,
for obvious reasons, illustrating graphically the resulting 4- or
5-dimensional trajectory is impossible. For display purposes only, a
representation with $\tau=3$ and $\tau=8$ in a 3-dimensional space is
displayed in Fig. 3, together with a typical exemplary reconstruction of one
of the surrogates.
Figure 3: A 3-dimensional phase-space reconstruction of 3C 279. These are
projections of the underlying 5-dimensional trajectories. (a) The delay
$\tau=8$ was used. (b) In this case $\tau=3$. The topology is still similar.
(c) A reconstruction of one of the surrogates; any structure has been destroyed.
### 4.4 Maximal Lyapunov exponent
Utilizing the obtained values of $m$ and $\tau$, we eventually attempted to
constrain the mLE. In Fig. 4, the stretching factors $S(n)$ are depicted for
the logarithmic LC itself, as well as for a representative example of a
surrogate. As mentioned in Sect. 3.5, in case of chaos three regions should be
clearly visible: a sharp increase for very small $n$, followed by a linear
section, and finally a plateau. None of these parts are present in Fig. 4(a),
also no such features are present in any of the surrogates (cf. Fig. 4(b)).
Such results were arrived at for all 11 blazars in our sample.
Figure 4: The stretching factors $S(n)$ of (a) 3C 279 and (b) a representative
surrogate. In both cases there is no unambiguous linear increase. Different
colors correspond to different embedding dimensions $m$.
### 4.5 Correlation dimension
We constructed a plot of $d_{C}$ as a function of $m$, which is presented in
Fig. 5, as means of comparing with other works that utilized this method. In
case of a chaotic system a linear increase followed by a plateau should be
seen. The analysis of 3C 279 (Fig. 5 (a)) did not provide evidence of chaotic
behavior of the system. The plot of the surrogate data in Fig. 5 (b) does not
exhibit a plateau part as well. This observation applies to all 11 blazars
considered herein.
Figure 5: Correlation dimension $d_{C}$ for 3C 279 of (a) the logarithmic LC,
and (b) of one of the surrogates. In both plots there is no clear plateau.
## 5 Discussion
Finding low-dimensional chaos in a phenomenon with poorly constrained
physics is of great importance, since it provides information about the
complexity of the underlying laws governing its occurrence (Seymour & Lorimer,
2013; Bachev et al., 2015). This in particular refers to blazar LCs, in which
no unambiguous signs of chaos have been detected. Indeed, the analyses
presented herein also did not give the slightest hint of the presence of
chaos in any of the 11 objects examined. We displayed here the
results corresponding to 3C 279; the other 10 blazars yielded very similar
outcomes.
In principle, the behavior of a dynamical system can be described by a set of
first-order ordinary differential equations. Such a system can be directly
investigated to uncover the structure of the phase space and to characterize
its dynamical properties. Attractors, their fractal dimensions, Lyapunov
exponents, etc. can be easily estimated, and their properties can be studied
analytically, semi-analytically, and numerically. However, the underlying
equations in most cases of real-world dynamical systems, such as blazar LCs,
are unknown. Hence the detection of chaos, or lack thereof, is a
notoriously difficult task, especially in cases when the time series are
relatively short and contaminated by observational noise. Noisy data can
hinder the detection of chaos; moreover, high-dimensional chaos can be
disguised as randomness. Bachev et al. (2015) argued that in the single-zone
model there are only a handful of parameters that control the emission, which
is governed by the Fokker-Planck (continuity) equation. However, if the
radiation mechanism, and subsequently the variability of blazars, can indeed
be accurately described by the (partial differential) continuity equation with
some appropriate injection term (Stawarz & Petrosian, 2008; Finke & Becker,
2014, 2015; Chen et al., 2016), then the dynamical system is considered to be
infinite-dimensional. While chaos can be present in infinite-dimensional
systems (e.g. delayed systems; Wernecke et al. 2019), its detection in
astronomical LCs need not be unambiguously identifiable with the standard
tools, most commonly applied with a discovery of low-dimensional chaos in
mind—especially when the details of the fundamental dynamical processes remain
unknown, and the time series is not extremely long.
On the other hand, if the radiation is influenced by turbulence in the jets,
e.g. by chaotic magnetic flows caused by plasma instabilities, there might be
instances in which the behavior of the system settles on some low-dimensional
attractor. Therefore, further search for chaos in high-quality
(multiwavelength) data gathered by next-generation space instruments, like the
James Webb Space Telescope (Gardner et al., 2006), can be expected to give a
more definite answer. A uniform, rigorous analysis, in the spirit presented
herein, of the already abundantly available optical data from the Kepler space
telescope (Smith et al., 2018) is also called for.
## 6 Conclusions
The aim of this paper was to search for evidence of low-dimensional chaos in
$\gamma$-ray LCs of 11 blazars, i.e. five BL Lacs and six FSRQs. Data from
Fermi-LAT (10-year-long LCs with a 7-day binning) were investigated using the
phase-space reconstruction via embedding dimension $m$ and time delay $\tau$,
with the goal of eventually constraining the mLE (if positive) and correlation
dimension $d_{C}$.
All analyses implied no signs of chaos for all 11 blazars. Therefore, the
underlying physical processes that give rise to the observed variability are
either truly stochastic (Tavecchio et al., 2020), or are governed by high-
dimensional (possibly infinite-dimensional as well) chaos that can resemble
randomness.
## Acknowledgements
O.O. thanks the Astronomical Observatory of the Jagiellonian University for
the summer internship during which this research began. M.T. acknowledges
support by the Polish National Science Center through the OPUS grant No.
2017/25/B/ST9/01208. The work of N.Ż. is supported by the South African
Research Chairs Initiative (grant no. 64789) of the Department of Science and
Innovation and the National Research Foundation666Any opinion, finding and
conclusion or recommendation expressed in this material is that of the authors
and the NRF does not accept any liability in this regard. of South Africa.
J.P.-G. acknowledges funding support from Spanish public funds for research
under project ESP2017-87676-C5-5-R and financial support from the State Agency
for Research of the Spanish MCIU through the ”Center of Excellence Severo
Ochoa” award to the Instituto de Astrofísica de Andalucía (SEV-2017-0709).
## Data Availability
The data underlying this article will be shared on reasonable request to the
authors.
## References
* Abdo et al. (2010) Abdo A. A., et al., 2010, ApJ, 715, 429
* Abdollahi et al. (2020) Abdollahi S., et al., 2020, ApJS, 247, 33
* Akiyama et al. (2016) Akiyama K., Stawarz Ł., Tanaka Y. T., Nagai H., Giroletti M., Honma M., 2016, ApJ, 823, L26
* Atwood et al. (2009) Atwood W. B., et al., 2009, ApJ, 697, 1071
* Bachev et al. (2015) Bachev R., Mukhopadhyay B., Strigachev A., 2015, A&A, 576, A17
* Böttcher et al. (2012) Böttcher M., Harris D. E., Krawczynski H., 2012, Relativistic Jets from Active Galactic Nuclei
* Cecconi et al. (2010) Cecconi F., Cencini M., Vulpiani A., 2010, Chaos: from Simple Models to Complex Systems. World Scientific, Singapore
* Chen et al. (2016) Chen X., Pohl M., Böttcher M., Gao S., 2016, MNRAS, 458, 3260
* Costamante et al. (2001) Costamante L., et al., 2001, A&A, 371, 512
* Eckmann & Ruelle (1992) Eckmann J. P., Ruelle D., 1992, Physica D Nonlinear Phenomena, 56, 185
* Finke & Becker (2014) Finke J. D., Becker P. A., 2014, ApJ, 791, 21
* Finke & Becker (2015) Finke J. D., Becker P. A., 2015, ApJ, 809, 85
* Fraser & Swinney (1986) Fraser A. M., Swinney H. L., 1986, Phys. Rev. A, 33, 1134
* Gardner et al. (2006) Gardner J. P., et al., 2006, Space Sci. Rev., 123, 485
* Grassberger & Procaccia (1983) Grassberger P., Procaccia I., 1983, Physica D Nonlinear Phenomena, 9, 189
* Grassberger et al. (1993) Grassberger P., Hegger R., Kantz H., Schaffrath C., Schreiber T., 1993, Chaos, 3, 127
* Hanslmeier et al. (2013) Hanslmeier A., et al., 2013, A&A, 550, A6
* Hegger & Kantz (1999) Hegger R., Kantz H., 1999, Phys. Rev. E, 60, 4970
* Hegger et al. (1999) Hegger R., Kantz H., Schreiber T., 1999, Chaos, 9, 413
* Kantz & Schreiber (2004) Kantz H., Schreiber T., 2004, Nonlinear Time Series Analysis
* Kennel et al. (1992) Kennel M. B., Brown R., Abarbanel H. D. I., 1992, Phys. Rev. A, 45, 3403
* Kidger et al. (1996) Kidger M. R., Gonzalez-Perez J. N., Sadun A., 1996, in Miller H. R., Webb J. R., Noble J. C., eds, Astronomical Society of the Pacific Conference Series Vol. 110, Blazar Continuum Variability. p. 123
* Kodba et al. (2005) Kodba S., Perc M., Marhl M., 2005, Eur. J. Phys., 26, 205
* Lehto et al. (1993) Lehto H. J., Czerny B., McHardy I. M., 1993, MNRAS, 261, 125
* Lichtenberg & Lieberman (1992) Lichtenberg A. J., Lieberman M. A., 1992, Regular and Chaotic Dynamics. Springer, New York
* Mandelbrot (1983) Mandelbrot B., 1983, The Fractal Geometry of Nature. W. H. Freeman and Company, New York
* Mannattil et al. (2016) Mannattil M., Gupta H., Chakraborty S., 2016, ApJ, 833, 208
* Misra et al. (2004) Misra R., Harikrishnan K. P., Mukhopadhyay B., Ambika G., Kembhavi A. K., 2004, ApJ, 609, 313
* Oprisan et al. (2015) Oprisan S., Lynn P., Tompa T., Lavin A., 2015, Frontiers in Computational Neuroscience, 9, 125
* Padovani (2017) Padovani P., 2017, Nature Astronomy, 1, 0194
* Pascual-Granado et al. (2015) Pascual-Granado J., Garrido R., Suárez J. C., 2015, A&A, 575, A78
* Provenzale et al. (1994) Provenzale A., Vio R., Cristiani S., 1994, ApJ, 428, 591
* Ruelle (1990) Ruelle D., 1990, Proceedings of the Royal Society of London Series A, 427, 241
* Sadun (1996) Sadun A., 1996, in Miller H. R., Webb J. R., Noble J. C., eds, Astronomical Society of the Pacific Conference Series Vol. 110, Blazar Continuum Variability. p. 86
* Seymour & Lorimer (2013) Seymour A. D., Lorimer D. R., 2013, MNRAS, 428, 983
* Shishikura (1998) Shishikura M., 1998, Annals of Mathematics, 147, 225
* Smith et al. (2018) Smith K. L., Mushotzky R. F., Boyd P. T., Malkan M., Howell S. B., Gelino D. M., 2018, ApJ, 857, 141
* Stawarz & Petrosian (2008) Stawarz Ł., Petrosian V., 2008, ApJ, 681, 1725
* Takens (1981) Takens F., 1981, Detecting strange attractors in turbulence. p. 366, doi:10.1007/BFb0091924
* Tarnopolski (2015) Tarnopolski M., 2015, Ap&SS, 357, 160
* Tarnopolski et al. (2020) Tarnopolski M., Żywucka N., Marchenko V., Pascual-Granado J., 2020, ApJS, 250, 1
* Tavecchio et al. (2020) Tavecchio F., Bonnoli G., Galanti G., 2020, MNRAS, 497, 1294
* The Fermi-LAT collaboration (2020) The Fermi-LAT collaboration 2020, ApJS, 247, 33
* Theiler et al. (1992) Theiler J., Eubank S., Longtin A., Galdrikian B., Doyne Farmer J., 1992, Physica D Nonlinear Phenomena, 58, 77
* Urry & Padovani (1995) Urry C. M., Padovani P., 1995, PASP, 107, 803
* Uttley et al. (2005) Uttley P., McHardy I. M., Vaughan S., 2005, MNRAS, 359, 345
* Wernecke et al. (2019) Wernecke H., Sándor B., Gros C., 2019, Physics Reports, 824, 1
* Wolf et al. (1985) Wolf A., Swift J. B., Swinney H. L., Vastano J. A., 1985, Physica D Nonlinear Phenomena, 16, 285
* Wood et al. (2017) Wood M., Caputo R., Charles E., Di Mauro M., Magill J., Perkins J. S., Fermi-LAT Collaboration 2017, in 35th International Cosmic Ray Conference (ICRC2017). p. 824 (arXiv:1707.09551)
# LaneRCNN: Distributed Representations for Graph-Centric Motion Forecasting
Wenyuan Zeng1,2 Ming Liang2 Renjie Liao1,2 Raquel Urtasun1,2
1University of Toronto 2 Uber Advanced Technologies Group
{wenyuan, rjliao<EMAIL_ADDRESS><EMAIL_ADDRESS>
###### Abstract
Forecasting the future behaviors of dynamic actors is an important task in
many robotics applications such as self-driving. It is extremely challenging,
as actors have latent intentions and their trajectories are governed by
complex interactions among the actors themselves and with the map. In
this paper, we propose LaneRCNN, a graph-centric motion forecasting model.
Importantly, relying on a specially designed graph encoder, we learn a local
lane graph representation per actor (LaneRoI) to encode its past motions and
the local map topology. We further develop an interaction module which permits
efficient message passing among local graph representations within a shared
global lane graph. Moreover, we parameterize the output trajectories based on
lane graphs, a more amenable prediction parameterization. Our LaneRCNN
captures the actor-to-actor and the actor-to-map relations in a distributed
and map-aware manner. We demonstrate the effectiveness of our approach on the
large-scale Argoverse Motion Forecasting Benchmark. We achieve the 1st place
on the leaderboard and significantly outperform previous best results.
## 1 Introduction
Autonomous vehicles need to navigate in dynamic environments in a safe and
comfortable manner. This requires predicting the future motions of other
agents to understand how the scene will evolve. However, depending on each
agent’s intention (e.g. turning, lane-changing), the agents’ future motions
can involve complicated maneuvers such as yielding, nudging, and acceleration.
Even worse, those intentions are not known a priori by the ego-robot, and
agents may also change their minds based on behaviors of nearby agents.
Therefore, even with access to the ground-truth trajectory histories of the
agents, forecasting their motions is very challenging and is an unsolved
problem.
By leveraging deep learning, the motion forecasting community has been making
steady progress. Most state-of-the-art models share a similar design
principle: using a single feature vector to characterize all the information
related to an actor, as shown in Fig. 1, left. They typically first encode for
each actor its past motions and the surrounding map (or other context
information) into a feature vector, which is computed either by feeding a 2D
rasterization to a convolutional neural network (CNN) [59, 60, 41, 4, 37, 8],
or directly using a recurrent neural network (RNN) [62, 49, 16, 61, 18, 2].
Next, they exchange the information among actors to model interactions,
_e.g_., via a fully-connected graph neural network (GNN) [54, 6, 41, 49, 7,
16] or an attention mechanism [26, 43, 52, 44, 34]. Finally, they predict
future motions per actor from its feature vector via a regression header [29,
49, 41, 59, 8, 30, 55].
Figure 1: Popular motion forecasting methods encode actor and its context
information into a feature vector, and treat it as a node in an interaction
graph. In contrast, we propose a graph-based representation LaneRoI per actor,
which is structured and expressive. Based on it, we model interactions and
forecast motion in a map topology aware manner. Figure 2: Overview of
LaneRCNN. It first encodes each actor with our proposed LaneRoI
representation, processes each LaneRoI with an encoder, and then models
interactions among actors with a graph-based interactor. Finally, LaneRCNN
predicts final positions of actors in a fully-convolutional manner, and then
decodes full trajectories based on these positions.
Although such a paradigm has shown competitive results, it has three main
shortcomings: 1) Representing with a single vector the context information of
large regions of space, e.g., for fast-moving actors that may traverse a
hundred meters within five seconds, is difficult. 2) Building a fully-connected
interaction graph among actors ignores important map structures. For example,
an unprotected left turn vehicle should yield to oncoming traffic, while two
spatially nearby vehicles driving on opposite lanes barely interact with each
other. 3) The regression header does not explicitly leverage the lane
information, which could provide a good inductive bias for accurate
predictions. As a consequence, regression-based predictors sometimes forecast
shooting-out-of-road trajectories, which are unrealistic.
In this paper, we propose a graph-centric motion forecasting model, _i.e_.,
LaneRCNN, to address the aforementioned issues. We represent an actor and its
context in a distributed and map-aware manner by constructing an actor-
specific graph, called Lane-graph Region-of-Interest (LaneRoI), along with
node embeddings that encode the past motion and map semantics. In particular,
we construct LaneRoI following the topology of lanes that are relevant to this
actor, where nodes on this graph correspond to small spatial regions along
these lanes, and edges represent the topological and spatial relations among
regions. Compared to using a single vector to encode all the information of a
large region, our LaneRoI naturally preserves the map structure and captures
the more fine-grained information, as each node embedding only needs to
represent the local context within a small region. To model interactions, we
embed the LaneRoIs of all actors to a global lane graph and then propagate the
information over this global graph. Since the LaneRoI’s of interacting actors
are highly relevant, those actors will share overlapping regions on the global
graph, thus having more frequent communications during the information
propagation compared to irrelevant actors. Importantly, this process neither
requires any heuristics nor makes any oversimplified assumptions while
learning interactions conditioned on maps. We then predict future motions on
each LaneRoI in a _fully-convolutional_ manner, such that small regions along
lanes (nodes in LaneRoI) can serve as anchors and provide good priors. We
demonstrate the effectiveness of our method on the large-scale Argoverse
motion forecasting benchmark [10]. We achieve the first rank on the
challenging Argoverse competition leaderboard [1], significantly outperforming
previous results.
## 2 Related Work
#### Motion Forecasting:
Traditional methods use hand-crafted features and rules based on human
knowledge to model interactions and constraints in motion forecasting [12, 11,
14, 21, 33, 57, 32], which are sometimes oversimplified and not scalable.
Recently, learning-based approaches employ the deep learning and significantly
outperform traditional ones. Given the actors and the scene, a deep
forecasting model first needs to design a format to encode the information. To
do so, previous methods [41, 4, 37] often rasterize the trajectories of actors
into a Birds-Eye-View (BEV) image, with different channels representing
different observation timesteps, and then apply a CNN and RoI pooling [39, 20]
to extract actor features. Maps can be encoded similarly [59, 60, 8, 4, 49].
However, the square receptive fields of a CNN may not be efficient to encode
actor movements [29], which are typically long curves. Moreover, the map
rasterization may lose useful information like lane topologies. RNNs are an
alternative way to encode actor kinematic information [62, 49, 16, 61, 18, 2]
compactly and efficiently. Recently, VectorNet [16] and LaneGCN [29]
generalized such compact encodings to map representations. VectorNet treats a
map as a collection of polylines and uses a RNN to encode them, while LaneGCN
builds a graph of lanes and conducts convolutions over the graph. Different
from all these work, we encode both actors and maps in an unified graph
representation, which is more structured and powerful.
Modeling interactions among actors is also critical for a multi-agent system.
Pioneering learning-based works designed a social-pooling mechanism [2, 18] to
aggregate the information from nearby actors. However, such a pooling
operation may potentially lose actor-specific information. To address this,
attention-mechanism [43, 52, 44, 48] or GNN-based methods [60, 26, 29, 6, 41,
49, 7, 16] build actor interaction graphs (usually fully-connected with all
actors or k-nearest neighbors based), and perform attention or message passing
to update actor features. Social convolutional pooling [62, 15, 47] has also
been explored, which maintains the spatial distribution of actors. However,
most of these work do not explicitly consider map structures, which largely
affects interactions among actors in reality.
To generate each actor’s predicted futures, many works sample multi-modal
futures under a conditional variational auto-encoder (CVAE) framework [25, 40,
49, 41, 7], or with a multi-head/mode regressor [29, 13, 34]. Others output
discrete sets of trajectory samples [60, 37, 9] or occupancy maps [22, 42].
Recently, TNT [61] concurrently and independently designs a similar output
parameterization as ours where lanes are used as priors for the forecasting.
Note that, in addition to the parameterization, we contribute a novel graph
representation and a powerful architecture which significantly outperforms
their results.
#### Graph Neural Networks:
Relying on operators like graph convolution and message passing, graph neural
networks (GNNs) and their variants [45, 5, 28, 24, 19, 31] generalize deep
learning on regular graphs like grids to ones with irregular topologies. They
have achieved great successes in learning useful graph representations for
various tasks [35, 38, 50, 27, 17]. We draw inspiration from the general
concept “ego-graph” and propose LaneRoI, which is specially designed for lane
graphs and captures both the local map topologies and the past motion
information of an individual actor. Moreover, to capture interactions among
actors, we further propose an interaction module which effectively
communicates information among LaneRoI graphs.
## 3 LaneRCNN
Our goal is to predict the future motions of all actors in a scene, given
their past motions and an HD map. Different from existing work, we represent
an actor and its context with a LaneRoI, an actor-specific graph which is more
structured and expressive than the single feature vector used in the
literature. Based on this representation, we design LaneRCNN, a graph-centric
motion forecasting model that encodes context, models interactions between
actors, and predicts future motions all in a map topology aware manner. An
overview of our model is shown in Fig. 2.
In the following, we first introduce our problem formulation and notations in
Sec. 3.1. We then define our LaneRoI representations in Sec. 3.2. In Sec. 3.3,
we explain how LaneRCNN processes features and models interactions via graph-
based message-passing. Finally, we show our map-aware trajectory prediction
and learning objectives in Sec. 3.4 and Sec. 3.5 respectively.
Figure 3: The LaneRoI of the actor $i$ is a collection of a graph
$\mathcal{G}_{i}$ (constructed following lane topology: nodes as lane segments
and edges as segment connectivities) and node embeddings $\mathbf{F}_{i}$
(encoding motions of the actor, as well as geometric and semantic properties
of lane segments).
### 3.1 Problem Formulation
We denote the past motion of the $i$-th actor as a set of 2D points encoding
the center locations over the past $L$ timesteps, _i.e_.,
$\left\\{(x_{i}^{-L},y_{i}^{-L}),\cdots,(x_{i}^{-1},y_{i}^{-1})\right\\}$,
with $(x,y)$ the 2D coordinates in bird’s eye view (BEV). Our goal is to
forecast the future motions of all actors in the scene
$\left\\{(x_{i}^{0},y_{i}^{0}),\cdots,(x_{i}^{T},y_{i}^{T})|i=1,\cdots,N\right\\}$,
where $T$ is our prediction horizon and $N$ is the number of actors.
In addition to the past kinematic information of the actors, maps also play an
important role for motion forecasting since (i) actors usually follow lanes on
the map, (ii) the map structure determines the right of way, which in turns
affects the interactions among actors. As is common practice in self-driving,
we assume an HD map is accessible, which contains lanes and associated
semantic attributes, _e.g_., turning lane and lane controlled by traffic
light. Each lane is composed of many consecutive lane segments $\ell_{i}$,
which are short segments along the centerline of the lane. In addition, a lane
segment $\ell_{i}$ can have pairwise relationships with another segment
$\ell_{j}$ in the same lane or in another lane, such as $\ell_{i}$ being a
successor of $\ell_{j}$ or a left neighbor.
### 3.2 LaneRoI Representation
#### Graph Representation:
One straightforward way to represent an actor and its context (map)
information is by first rasterizing both its trajectory and the map to
form a 2D BEV image, and then cropping the underlying representation centered
at the actor’s location in BEV [60, 49, 62, 6]. However, rasterizations are
prone to information loss such as connectivities among lanes. Furthermore, it
is a rather inefficient representation since actor motions are expanded
typically in the direction along the lanes, not across them. Inspired by [29],
we instead use a graph representation for our LaneRoI to preserve the
structure while being compact. For each actor $i$ in the scene, we first
retrieve all relevant lanes that this actor can possibly go to in the
prediction horizon $T$ as well as come from in the observed history horizon
$L$. We then convert the lanes into a directed graph
$\mathcal{G}_{i}=\\{\mathcal{V},\\{\mathcal{E}_{\text{suc}},\mathcal{E}_{\text{pre}},\mathcal{E}_{\text{left}},\mathcal{E}_{\text{right}}\\}\\}$
where each node $v\in\mathcal{V}$ represents a lane segment within those lanes
and the lane topology is represented by different types of edges
$\mathcal{E}_{r}$, encoding the following relationships: predecessor,
successor, left and right neighbor. Two nodes are connected by an edge
$e\in\mathcal{E}_{r}$ if the corresponding lane segments $\ell_{i},\ell_{j}$
have a relation $r$, _e.g_., lane segment $\ell_{i}$ is a successor of lane
segment $\ell_{j}$. Hereafter, we will use the term node interchangeably with
the term lane segment.
#### Graph Input Encoding:
The graph $\mathcal{G}_{i}$ only characterizes map structures around the
$i$-th actor without much information about the actor. We therefore augment
the graph with a set of node embeddings to construct our LaneRoI . Recall that
each node $k$ in $\mathcal{G}_{i}$ is associated with a lane segment
$\ell_{k}$. We design its embedding $f_{k}\in\mathbb{R}^{C}$ to capture the
geometric and semantic information of $\ell_{k}$, as well as its relations
with the actor. In particular, geometric features include the center location,
the orientation and the curvature of $\ell_{k}$; semantic features include
binary features indicating if $\ell_{k}$ is a turning lane, if it is currently
controlled by a red light, _etc_. To encode the actor information into
$f_{k}$, we note that the past motion of an actor can be identified as a set
of 2D displacements, defining the movements between consecutive timesteps.
Therefore, we also include the relative positions and orientations of these 2D
displacements w.r.t. $\ell_{k}$ into $f_{k}$ which encodes actor motions in a
map-dependent manner. This is beneficial for understanding actor behaviors
w.r.t. the map, _e.g_., a trajectory that steadily deviates from one lane and
approaches the neighboring lane is highly likely a lane change. In practice,
it is important to clamp the actor information, _i.e_., if $\ell_{k}$ is more
than 5 meters away from the actor we replace the actor motion embedding in
$f_{k}$ with zeros. We hypothesize that such a restriction encourages the
model to learn better representations via the message passing over the graph.
To summarize, $(\mathcal{G}_{i},\mathbf{F}_{i})$ is the LaneRoI of the actor
$i$, encoding the actor-specific information for motion forecasting, where
$\mathbf{F}_{i}\in\mathbb{R}^{M_{i}\times C}$ is the collection of node
embeddings $f_{k}$ and $M_{i}$ is the number of nodes in $\mathcal{G}_{i}$.
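As a rough sketch of how one such node embedding might be assembled (all field names, the feature layout, and the helper itself are hypothetical; the paper does not specify this code):

```python
import numpy as np

def node_embedding(seg, actor_moves, clamp_dist=5.0):
    """Sketch of a LaneRoI node embedding f_k (hypothetical layout).

    seg         : lane segment with center (2-vector), orientation,
                  curvature and semantic flags (assumed fields)
    actor_moves : list of (midpoint, heading) of past 2D displacements
    """
    geo = [*seg.center, seg.orientation, seg.curvature]
    sem = [float(seg.is_turn), float(seg.red_light)]
    # displacements expressed relative to the segment's pose
    rel = []
    for p, heading in actor_moves:
        rel += [*(np.asarray(p) - seg.center), heading - seg.orientation]
    # clamp: drop motion info if the segment is far from the actor
    if np.linalg.norm(np.asarray(actor_moves[-1][0]) - seg.center) > clamp_dist:
        rel = [0.0] * len(rel)
    return np.asarray(geo + sem + rel, dtype=np.float32)
```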
### 3.3 LaneRCNN Backbone
As LaneRoIs have irregular graph structures, we can not apply standard 2D
convolutions to obtain feature representations. In the following, we first
introduce the lane convolution and pooling operators (Fig. 4), which serve
similar purposes as their 2D counterparts while respecting the graph topology.
Based on these operators, we then describe how our LaneRCNN updates features
of each LaneRoI as well as handles interactions among all LaneRoIs (actors).
#### Lane Convolution Operator:
We briefly introduce the lane convolution, which was originally proposed in
[29]. Given a LaneRoI $(\mathcal{G}_{i},\mathbf{F}_{i})$, a lane convolution
updates features $\mathbf{F}_{i}$ by aggregating features from its
neighborhood (in the graph). Formally, we use $\mathcal{E}_{i}(r)$ to denote
the binary adjacency matrix for $\mathcal{G}_{i}$ under the relation $r$,
_i.e_., the $(p,q)$ entry in this matrix is $1$ if lane segments $\ell_{p}$
and $\ell_{q}$ have the relation $r$ and $0$ otherwise. We denote the $n$-hop
connectivity under the relation $r$ as the matrix
$\operatorname{bool}\left(\mathcal{E}_{i}(r)\cdot\mathcal{E}_{i}(r)\cdots\mathcal{E}_{i}(r)\right)=\operatorname{bool}\left(\mathcal{E}_{i}^{n}(r)\right)$,
where the operator $\operatorname{bool}$ sets any non-zero entries to one and
otherwise keeps them as zero. The output node features are updated as follows,
$\mathbf{F}_{i}\leftarrow\Psi\left(\mathbf{F}_{i}\mathbf{W}+\sum_{r,n}\operatorname{bool}\left(\mathcal{E}_{i}^{n}(r)\right)\mathbf{F}_{i}\mathbf{W}_{n,r}\right),$
(1)
where both $\mathbf{W}$ and $\mathbf{W}_{n,r}$ are learnable parameters,
$\Psi(\cdot)$ is a non-linearity consisted of LayerNorm [3] and ReLU [36], and
the summation is over all possible relations $r$ and hops $n$. In practice, we
use $n\in\left\\{1,2,4,8,16,32\right\\}$. Such a multi-hop mechanism mimics
the dilated convolution [58] and effectively enlarges the receptive field.
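A minimal PyTorch-style sketch of Eq. (1) follows; the dense adjacency interface, the class name, and the LayerNorm-then-ReLU order are our assumptions rather than the authors' implementation:

```python
import torch
import torch.nn as nn

class LaneConv(nn.Module):
    """Sketch of the lane convolution, Eq. (1). adj maps each relation
    r to a dense {0,1} adjacency matrix (an assumed interface)."""

    def __init__(self, c, relations, hops=(1, 2, 4, 8, 16, 32)):
        super().__init__()
        self.hops, self.max_hop = set(hops), max(hops)
        self.w = nn.Linear(c, c, bias=False)
        self.w_nr = nn.ModuleDict({f"{r}_{n}": nn.Linear(c, c, bias=False)
                                   for r in relations for n in hops})
        self.norm = nn.LayerNorm(c)

    def forward(self, F, adj):            # F: (M, C), adj: {r: (M, M)}
        out = self.w(F)
        for r, A in adj.items():
            A_n = A                       # A^1; powers built iteratively
            for n in range(1, self.max_hop + 1):
                if n in self.hops:
                    hop = (A_n > 0).float()           # bool(E^n(r))
                    out = out + hop @ self.w_nr[f"{r}_{n}"](F)
                if n < self.max_hop:
                    A_n = A_n @ A         # dense powers: fine for a sketch
        return torch.relu(self.norm(out))
```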
Figure 4: An illustration for lane convolution and lane pooling operators,
which have similar functionalities as their 2D counterparts while respecting
the lane topology.
#### Lane Pooling Operator:
We design a lane pooling operator which is a learnable pooling function. Given
a LaneRoI $(\mathcal{G}_{i},\mathbf{F}_{i})$, recall $\mathcal{G}_{i}$
actually corresponds to a number of lanes spanned in the 2D plane (scene). For
an arbitrary 2D vector $\mathbf{v}$ in the plane, a lane pooling operator
pools, or ‘interpolates’, the feature of $\mathbf{v}$ from $\mathbf{F}_{i}$.
Note that $\mathbf{v}$ can be a lane segment in another graph
$\mathcal{G}_{j}$ (spatially close to $\mathcal{G}_{i}$). Therefore, lane
pooling helps communicate information back and forth between graphs, which we
will explain in the interaction part. To generate the feature $f_{\mathbf{v}}$
of vector $\mathbf{v}$, we first retrieve its ‘neighboring nodes’ in
$\mathcal{G}_{i}$, by checking if the center distance between a lane segment
$\ell_{k}$ in $\mathcal{G}_{i}$ and vector $\mathbf{v}$ is smaller than a
certain threshold. A naive pooling strategy is to simply take a mean of those
$\ell_{k}$. However, this ignores the fact that relations between $\ell_{k}$
and $\mathbf{v}$ can vary a lot depending on their relative pose: a lane
segment that is perpendicular to $\mathbf{v}$ (conflicting) and the one that
is aligned with $\mathbf{v}$ have very different semantics. Inspired by the
generalized convolution on graphs/manifolds [35, 53, 29], we use the relative
pose and some non-linearities to learn a pooling function. In particular, we
denote the set of surrounding nodes on $\mathcal{G}_{i}$ as $\mathcal{N}$, and
the relative pose between $\mathbf{v}$ and $\ell_{k}$ as
$\Delta_{\mathbf{v}k}$ which includes relative position and orientation. The
pooled feature $f_{\mathbf{v}}$ can then be written as,
$f_{\mathbf{v}}=\mathcal{M}_{b}\left(\sum_{k\in\mathcal{N}}\mathcal{M}_{a}\left(\left[f_{k},\Delta_{\mathbf{v}k}\right]\right)\right),$
(2)
where $[\cdots]$ denotes concatenation and $\mathcal{M}_{a}$, $\mathcal{M}_{b}$ are two-layer multi-layer perceptrons (MLPs).
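A minimal sketch of Eq. 2 follows; the relative pose encoding (offset plus cos/sin of the angle) and the MLP widths are assumptions, while the 2-meter neighborhood threshold matches the value reported in supplementary C.

```python
# Minimal sketch of the lane pooling in Eq. (2).
import torch
import torch.nn as nn

class LanePool(nn.Module):
    def __init__(self, channels, pose_dim=4, radius=2.0):
        super().__init__()
        self.radius = radius
        self.mlp_a = nn.Sequential(                   # M_a in Eq. (2)
            nn.Linear(channels + pose_dim, channels), nn.ReLU(),
            nn.Linear(channels, channels))
        self.mlp_b = nn.Sequential(                   # M_b in Eq. (2)
            nn.Linear(channels, channels), nn.ReLU(),
            nn.Linear(channels, channels))

    def forward(self, v_center, v_dir, feats, centers, dirs):
        # v_center, v_dir: (2,) query vector; feats: (M, C) node features;
        # centers, dirs: (M, 2) segment centers and unit directions.
        near = (centers - v_center).norm(dim=-1) < self.radius
        rel_pos = centers[near] - v_center
        cos = (dirs[near] * v_dir).sum(-1, keepdim=True)
        sin = (dirs[near, 0] * v_dir[1]
               - dirs[near, 1] * v_dir[0]).unsqueeze(-1)
        delta = torch.cat([rel_pos, cos, sin], dim=-1)  # Delta_{vk}
        msg = self.mlp_a(torch.cat([feats[near], delta], dim=-1))
        return self.mlp_b(msg.sum(dim=0))               # pooled f_v
```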
#### LaneRoI Encoder:
Equipped with operators introduced above, we now describe how LaneRCNN
processes features for each LaneRoI. Given a scene, we first construct a
LaneRoI per actor and encode its input information into node embeddings as
described in Sec. 3.2. Then, for each LaneRoI, we apply four lane convolution
layers and get the updated node embeddings $\mathbf{F}_{i}$. Essentially, a
lane convolution layer propagates information from a node to its (multi-hop)
connected nodes. Stacking more layers builds larger receptive fields and has a
larger model capacity. However, we find deeper networks do not necessarily
lead to better performance in practice, possibly due to the well-known
difficulty of learning long-term dependencies. To address this, we introduce a
graph shortcut mechanism on LaneRoI. The graph shortcut layer can be applied
after any layer of lane convolution: we aggregate $\mathbf{F}_{i}$ output from
the previous layer into a global embedding with the same dimension as node
embeddings, and then add it to embeddings of all nodes in $\mathcal{G}_{i}$.
Recall that the actor past motions are a number of 2D vectors, _i.e_.,
movements between consecutive timesteps. We use the lane pooling to extract
features for these 2D vectors. A 1D CNN with downsampling is then applied to
these features to build the final shortcut embedding. Intuitively, a lane
convolution may suffer from the diminishing information flow during the
message-passing, while such a shortcut can provide an auxiliary path to
communicate among far-away nodes efficiently. We will show that the shortcut
significantly boosts the performance in the ablation study.
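A possible reading of the shortcut in code, assuming the per-step features have already been pooled with the lane pooling operator; the CNN depth and kernel sizes are illustrative assumptions.

```python
# Sketch of the graph shortcut: features pooled (via lane pooling) at each
# past motion vector are compressed by a strided 1D CNN and broadcast back
# to every node of the LaneRoI.
import torch
import torch.nn as nn

class GraphShortcut(nn.Module):
    def __init__(self, channels):
        super().__init__()
        self.cnn = nn.Sequential(          # 1D CNN with downsampling
            nn.Conv1d(channels, channels, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv1d(channels, channels, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1))

    def forward(self, feats, pooled_traj):
        # feats: (M, C) node embeddings; pooled_traj: (C, T) features pooled
        # at each of the T past 2D motion vectors.
        global_emb = self.cnn(pooled_traj.unsqueeze(0)).squeeze()  # (C,)
        return feats + global_emb          # add the shortcut to all nodes
```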
#### LaneRoI Interactor:
So far, our LaneRoI encoder provides good features for a given actor, but it
lacks the ability to model interactions among different actors, which is
extremely important for the motion forecasting in a multi-agent system. We now
describe how we handle actor interactions under LaneRoI representations. After
processing all LaneRoIs with the LaneRoI encoder (shared weights), we build a
global lane graph $\mathcal{G}$ containing all lanes in the scene. Its node
embeddings are constructed by projecting all LaneRoIs to $\mathcal{G}$ itself.
We then apply four lane convolution layers on $\mathcal{G}$ to perform message
passing. Finally, we distribute the ‘global node’ embeddings back to each LaneRoI. Our design is motivated by the fact that actors have interactions
since they share the same space-time region. Similarly, in our model, all
LaneRoIs share the same global graph $\mathcal{G}$ where they communicate with
each other following map structures.
In particular, suppose we have a set of LaneRoIs
$\\{(\mathcal{G}_{i},\mathbf{F}_{i})|i=1,\cdots,N\\}$ encoded from previous
layers and a global lane graph $\mathcal{G}$. For each node in $\mathcal{G}$,
we use a lane pooling to construct its embedding: retrieving its neighbors
from all LaneRoIs as $\mathcal{N}$, measured by center distance, and then
applying Eq. 2. This ensures each global node has the information of all those
actors that could interact with it. The distribution step is the inverse process: for each node in $\mathcal{G}_{i}$, we find its neighbors, apply a lane pooling, and add the resulting embedding to the original $\mathbf{F}_{i}$ (serving as a
skip-connection).
| | Method | minADE (K=1) | minFDE (K=1) | MR (K=1) | minADE (K=6) | minFDE (K=6) | MR (K=6) |
|---|---|---|---|---|---|---|---|
| Argoverse Baseline [10] | NN | 3.45 | 7.88 | 87.0 | 1.71 | 3.28 | 53.7 |
| | NN+map | 3.65 | 8.12 | 94.0 | 2.08 | 4.02 | 58.0 |
| | LSTM+map | 2.92 | 6.45 | 75.0 | 2.08 | 4.19 | 67.0 |
| Leaderboard [1] | TNT (4th) [61] | 1.78 | 3.91 | 59.7 | 0.94 | 1.54 | 13.3 |
| | Jean (3rd) [34] | 1.74 | 4.24 | 68.6 | 1.00 | 1.42 | 13.1 |
| | Poly (2nd) [1] | 1.71 | 3.85 | 59.6 | 0.89 | 1.50 | 13.1 |
| | Ours-LaneRCNN (1st) | 1.69 | 3.69 | 56.9 | 0.90 | 1.45 | 12.3 |
Table 1: Argoverse Motion Forecasting Leaderboard. For all metrics, lower is better; Miss-Rate (MR, K=6) is the official ranking metric.
### 3.4 Map-Relative Outputs Decoding
The future is innately multi-modal, and an actor can take many different yet plausible future motions. Fortunately, the different modalities can be largely
characterized by different goals of an actor. Here, a goal means a final
position of an actor at the end of prediction horizon. Note that actors mostly
follow lane structures and thus their goals are usually close to a lane
segment $\ell$. Therefore, our model can predict the final goals of an actor
in a fully convolutional manner, based on its LaneRoI features. Namely, we
apply a 2-layer MLP on each node feature $f_{k}$, and output five values
including the probability that $\ell_{k}$ is the closest lane segment to final
destination $p(\ell_{k}=\text{goal})$, as well as relative residues from
$\ell_{k}$ to the final destination $x_{gt}-x_{k}$, $y_{gt}-y_{k}$,
$\sin(\theta_{gt}-\theta_{k})$, $\cos(\theta_{gt}-\theta_{k})$.
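As a sketch, this goal header could look as follows in PyTorch; the hidden width of 64 follows the channel size in supplementary C, while the toy input is an assumption.

```python
# Sketch of the per-node goal header: one shared 2-layer MLP applied fully
# convolutionally over node embeddings.
import torch
import torch.nn as nn

goal_header = nn.Sequential(
    nn.Linear(64, 64), nn.ReLU(),
    nn.Linear(64, 5))            # [logit, dx, dy, sin(dtheta), cos(dtheta)]

feats = torch.randn(120, 64)     # (M, C) node embeddings of one LaneRoI
out = goal_header(feats)         # (M, 5), one prediction per node
goal_prob = out[:, 0].sigmoid()  # p(l_k = goal), trained with BCE per node
residues = out[:, 1:]            # pose residues from l_k to the goal
```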
Based on the results of the previous steps, we select the top K goals (on Argoverse, we follow the official metric and use K=6). We also remove duplicate goals: if two predictions are too close, the one with lower confidence is discarded. For each prediction, we use the position and the direction of the actor at $t=0$ as
well as those at the goal to interpolate a curve, using Bezier quadratic
parameterization. We then unroll a constant acceleration kinematic model along
this curve, and sample 2D points at each future timestep based on the curve
and the kinematic information. These 2D points form a trajectory, which serves
as an initial proposal of our final forecasting. Despite its simplicity, this
parameterization gives us surprisingly good results.
Our final step is to refine those trajectory proposals using a learnable
header. Similar to the shortcut layer introduced in Sec. 3.3, we use a lane
pooling followed by a 1D CNN to pool features of this trajectory. Finally, we
decode a pair of values per timestep, representing the residue from the
trajectory proposal to the ground-truth future position at this timestep
(encoded in Frenet coordinate of this trajectory proposal). We provide more
detailed definitions of our parameterization and output space in the
supplementary A.
### 3.5 Learning
We train our model end-to-end with a loss containing the goal classification,
the goal regression, and the trajectory refinements. Specifically, we use
$\mathcal{L}=\mathcal{L}_{\text{cls}}+\alpha\mathcal{L}_{\text{reg}}+\beta\mathcal{L}_{\text{refine}},$
where $\alpha$ and $\beta$ are hyperparameters determining the relative weights of
different terms. As our model predicts the goal classification and regression
results per node, we simply adopt a binary cross entropy loss for
$\mathcal{L}_{\text{cls}}$ with online hard example mining [46] and a
smooth-L1 loss for $\mathcal{L}_{\text{reg}}$, where the
$\mathcal{L}_{\text{reg}}$ is only evaluated on positive nodes, _i.e_., the closest lane segments to the ground-truth final positions. The
$\mathcal{L}_{\text{refine}}$ is also a smooth-L1 loss, with training labels generated on the fly: we project the ground-truth future trajectories onto the predicted trajectory proposals and use the Frenet coordinate values as our regression targets.
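A compact sketch of this objective, with $\alpha=0.5$ and $\beta=0.2$ as reported in supplementary C; the OHEM subsampling of negatives is omitted here for brevity (it is spelled out in supplementary C).

```python
# Sketch of the total loss in Sec. 3.5.
import torch
import torch.nn.functional as F

def total_loss(cls_logits, reg_pred, reg_gt, refine_pred, refine_gt,
               pos_mask, alpha=0.5, beta=0.2):
    # cls_logits: (M,) per-node goal scores; pos_mask: (M,) bool marking
    # the node closest to the ground-truth final position.
    l_cls = F.binary_cross_entropy_with_logits(cls_logits, pos_mask.float())
    # Regression is evaluated on positive nodes only.
    l_reg = F.smooth_l1_loss(reg_pred[pos_mask], reg_gt[pos_mask])
    # Refinement targets: GT future projected onto the proposal, expressed
    # in the proposal's Frenet coordinates.
    l_refine = F.smooth_l1_loss(refine_pred, refine_gt)
    return l_cls + alpha * l_reg + beta * l_refine
```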
**LaneRoI Encoder ablations:**

LaneRoI | Shortcut | minADE (K=1) | minFDE (K=1) | minADE (K=6) | minFDE (K=6) | MR (K=6)
---|---|---|---|---|---|---
 | | 1.68 | 3.79 | 0.86 | 1.46 | 14.5
✓ | | 1.68 | 3.84 | 0.82 | 1.36 | 12.9
✓ | Global Pool | 1.69 | 3.84 | 0.84 | 1.38 | 12.8
✓ | Center Pool | 1.67 | 3.80 | 0.83 | 1.35 | 12.4
✓ | Ours x 1 | 1.55 | 3.45 | 0.81 | 1.29 | 11.1
✓ | Ours x 2 | 1.54 | 3.45 | 0.80 | 1.29 | 10.8

**LaneRoI Interactor ablations:**

Interactor-Arch | Pooling | minADE (K=1) | minFDE (K=1) | minADE (K=6) | minFDE (K=6) | MR (K=6)
---|---|---|---|---|---|---
 | | 1.54 | 3.45 | 0.80 | 1.29 | 10.8
Attention | Global | 1.42 | 3.10 | 0.78 | 1.24 | 9.8
Attention | Shortcut | 1.47 | 3.22 | 0.80 | 1.25 | 10.1
GNN | Global | 1.45 | 3.15 | 0.79 | 1.25 | 9.9
GNN | Shortcut | 1.45 | 3.21 | 0.79 | 1.25 | 10.0
Ours | AvgPool | 1.42 | 3.11 | 0.79 | 1.25 | 9.9
Ours | LanePool | 1.33 | 2.85 | 0.77 | 1.19 | 8.2

Table 2: Ablations on different modules of LaneRCNN. Metrics are reported on the validation set. In the upper half, we examine our LaneRoI Encoder, comparing a per-actor 1D feature vector vs. LaneRoI representations as well as different designs for the shortcut mechanism. In the lower half, we compare different strategies to model interactions, including a fully connected graph among actors with GNN / attention, as well as ours. Pooling refers to how we pool a 1D actor feature from each LaneRoI for use by the GNN / attention. The rows ‘Ours x 2’ (upper) and ‘Ours / LanePool’ (lower) correspond to the architecture used in our final model.
## 4 Experiment
We evaluate the effectiveness of LaneRCNN on the large-scale Argoverse motion
forecasting benchmark (Argoverse), which is publicly available and provides
annotations of both actor motions and HD maps. In the following, we first
explain our experimental setup and then compare our method against state-of-
the-art on the leaderboard. We also conduct ablation studies on each module of
LaneRCNN to validate our design choices. Finally, we present some qualitative
results.
### 4.1 Experimental Settings
#### Dataset:
Argoverse provides a large-scale dataset [10] for the purpose of training,
validating and testing models, where the task is to forecast 3 seconds future
motions given 2 seconds past observations. This dataset consists of more than
30K real-world driving sequences collected in Miami and Pittsburgh. Those
sequences are further split into train, validation, and test sets without any
geographical overlapping. Each of them has 205942, 39472, and 78143 sequences
respectively. In particular, each sequence contains the positions of all
actors in a scene within the past 2 seconds history, annotated at 10Hz. It
also specifies one interesting actor in this scene, with type ‘agent’, whose
future 3 seconds motions are used for the evaluation. The train and validation
splits additionally provide future locations of all actors within 3 second
horizon labeled at 10Hz, while annotations for test sequences are withheld
from the public and used for the leaderboard evaluation. Besides, HD map
information can be retrieved for all sequences.
#### Metrics:
We follow the benchmark setting and use Miss-Rate (MR), Average Displacement
Error (ADE) and Final Displacement Error (FDE), which are also widely used in
the community. MR is defined as the ratio of data that none of the predictions
has less than 2.0 meters L2 error at the final timestep. ADE is the averaged
L2 errors of all future timesteps, while FDE only counts the final timestep.
To evaluate the mutli-modal prediction, we also adopt the benchmark setting:
predicting K=6 future trajectories per actor and evaluating the
$\text{min}_{K}\text{MR}$, $\text{min}_{K}\text{ADE}$,
$\text{min}_{K}\text{FDE}$ using the trajectory that is closest to the ground-
truth.
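These metrics can be computed per actor as in the following sketch; selecting the best trajectory by its final displacement is one common convention and an assumption here.

```python
# Sketch of the benchmark metrics for one actor. pred: (K, T, 2) predicted
# trajectories; gt: (T, 2) ground-truth future trajectory.
import numpy as np

def min_k_metrics(pred, gt, miss_threshold=2.0):
    fde = np.linalg.norm(pred[:, -1] - gt[-1], axis=-1)     # (K,) final errors
    ade = np.linalg.norm(pred - gt, axis=-1).mean(axis=-1)  # (K,) average errors
    best = fde.argmin()                          # trajectory closest to the GT
    miss = float(fde.min() >= miss_threshold)    # contributes to Miss-Rate
    return ade[best], fde[best], miss
```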
#### Implementation Details:
We train our model on the _train_ set with a batch size of 64 for 30 epochs. We use the Adam [23] optimizer with the learning rate initialized at 0.01 and decayed by a factor of 10 at epoch 20. To normalize the data, we translate and
rotate the coordinate system of each sequence so that the origin is at current
position ($t=0$) of ‘agent’ actor and x-axis is aligned with its current
direction. During training, we further apply a random rotation data
augmentation within $(-\frac{2}{3}\pi,\frac{2}{3}\pi)$. No other data processing, such as label smoothing, is applied. More implementation details are
provided in the supplementary C.
### 4.2 Comparison with State-of-the-art
We compare our approach with top entries on Argoverse motion forecasting
leaderboard [1] as well as official baselines provided by the dataset [10] as
shown in Table 1. We only submit our final model once to the leaderboard and
achieve state-of-the-art performance (leaderboard snapshot at submission time: Nov. 12, 2020). This is a very challenging benchmark with
around 100 participants at the time of our submission. Note that for the
official ranking metric MR (K=6), previous leading methods are extremely close
to each other, implying the difficulty of further improving the performance.
Nevertheless, we significantly boost the number which verifies the
effectiveness of our method. Among the competitors, both Jean [34] and TNT
[61] use RNNs to encode actor kinematic states and lane polylines. They then
build a fully-connected interaction graph among all actors and lanes, and use
either the attention or GNNs to model interactions. As a result, they
represent each actor with a single feature vector, which is less expressive
than our LaneRoI representations. Moreover, the fully-connected interaction
graph may also discard valuable map structure information. Note that TNT
shares a similar output parameterization as ours, yet we perform better on all
metrics. This further validates the advantages of our LaneRoI compared against
traditional representations. Unfortunately, since the Poly team has not published their method, we cannot compare against it qualitatively.
Figure 5: Qualitative results on the Argoverse validation set. Here we show (from left to right): 1) curved lanes, 2) lane changing, 3) an intersection, 4) overtaking.
### 4.3 Ablation Studies
#### Ablations on LaneRoI Encoder:
We first show the ablation study on one of our main contributions, _i.e_., LaneRoI, in the upper half of Table 2. The first row shows a representative
of the traditional representations. Specifically, we first build embeddings
for lane graph nodes using only the map information and 4 lane convolution
layers. We then use a 1D CNN (U-net style) to extract a motion feature vector
from actor kinematic states, concatenate it with every graph node embedding
and make predictions. Conceptually, this is similar to TNT [61] except that we
modify the backbone network to make comparisons fair. On the second row, we
show the result of our LaneRoI representations with again four lane
convolution layers (no shortcuts). Hence, the only difference is whether the
actor is encoded with a single motion vector shared by all nodes, or encoded
in a distributed and structured manner as ours. As shown in the table, our
LaneRoI achieves similar or better results on all metrics, exhibiting its
advantages. Note that this row is not yet our best result in terms of using
LaneRoI representations, as the actor information is only exposed to a small
region during the input encoding (clamping at input node embeddings) and can
not efficiently propagate to the full LaneRoI without the help of the
shortcut, which we will show next.
Subsequent rows in Table 2 compare different design choices for the shortcut
mechanism, in particular how we pool the global feature for each LaneRoI.
‘Global Pool’ refers to average-pooling all node embeddings within a LaneRoI,
and ‘Center Pool’ means we pool a feature from a LaneRoI via a lane pooling over the nodes around the actor’s last observed position. As we can see,
although these two approaches can possibly spread out information to every
node in a LaneRoI (and thus build a shortcut), they barely improve the
performance. On the contrary, ours achieves significant improvements. This is
because we pool features along the past trajectory of the actor, which results
in a larger and actor-motion-specific receptive field. Here, $\times 1$ and
$\times 2$ refer to an encoder with 1 shortcut per 4 and 2 lane convolution
layers respectively. This shows stacking more shortcuts provides some, but
diminishing, benefits.
#### Ablations on LaneRoI Interactor:
To verify the effectiveness of our map-aware interaction module, we compare
against several model variants based on the fully-connected interaction graph
among actors. Specifically, for each actor, we apply a LaneRoI encoder (we choose the LaneRoI encoder rather than another encoder, _e.g_., a CNN, for fair comparison with ours) to process node embeddings, and then pool an actor-specific feature vector from the LaneRoI via either the global average pooling or
our shortcut mechanism. These actor features are then fed into a transformer-
style [51] attention module or a fully-connected GNN. Finally, we add the
output actor features to nodes in their LaneRoI respectively and make
predictions using our decoding module. As a result, these variants have the
same pipeline as ours, with the only difference on how to communicate across
actors. To make comparisons as fair as possible, both the attention and GNN
have the same numbers of layers and channels as our LaneRoI Interactor (the GNN here is almost identical to the lane convolution used in our Interactor, except that the multi-hop mechanism is removed since the graph is fully connected).
As shown in Table 2, all interaction-based models outperform the one without
considering interactions (row 1) as expected. In addition, our approach
significantly improves the performance compared to both the attention and GNN.
Interestingly, all fully-connected interaction-graph-based models reach similar performance, which might imply that such backbones saturate the performance (as also suggested by the leading methods on the leaderboard). We also show that naively using average pooling to embed features from LaneRoIs into the global graph does not achieve good performance, because it ignores local structures.
### 4.4 Qualitative results
In Figure 5, we show some qualitative results on the Argoverse validation set. We can see that our method generally follows the map very well and demonstrates good multi-modality. From left to right, we show 1) when an actor follows a
curved lane, our model predicts two direction modes with different velocities;
2) when it is on a straight lane, our model covers the possibilities of lane
changing; 3) when it’s approaching an intersection, our model captures both
the go-straight and the turn-right modes, especially with lower speeds for
turning right, which are quite common in the real world; 4) when there is an
actor blocking the path, we predict overtaking behaviors matching exactly with
the ground-truth. Moreover, for the lane-following mode, we predict much
slower speeds which are consistent with this scenario, showing the
effectiveness of our interaction modeling. We provide more qualitative results
in the supplementary E.
## 5 Conclusion
In this paper, we propose LaneRCNN, a graph-centric motion forecasting model.
Relying on learnable graph operators, LaneRCNN builds a distributed lane-graph-based representation (LaneRoI) per actor to encode its past motion and
the local map topology. Moreover, we propose an interaction module which
effectively captures the interactions among actors within the shared global
lane graph. Lastly, we parameterize the output trajectories using lane graphs, which helps improve the predictions. We demonstrate that LaneRCNN
achieves state-of-the-art performance on the challenging Argoverse motion
forecasting benchmark.
## Acknowledgement
We would like to sincerely thank Siva Manivasagam, Yun Chen, Bin Yang, Wei-
Chiu Ma and Shenlong Wang for their valuable help on this paper.
## References
* [1] Argoverse motion forecasting competition. 2019. https://eval.ai/web/challenges/challenge-page/454/leaderboard/1279.
* [2] Alexandre Alahi, Kratarth Goel, Vignesh Ramanathan, Alexandre Robicquet, Li Fei-Fei, and Silvio Savarese. Social lstm: Human trajectory prediction in crowded spaces. In CVPR, 2016.
* [3] Jimmy Lei Ba, Jamie Ryan Kiros, and Geoffrey E Hinton. Layer normalization. arXiv, 2016.
* [4] Mayank Bansal, Alex Krizhevsky, and Abhijit Ogale. Chauffeurnet: Learning to drive by imitating the best and synthesizing the worst. arXiv, 2018.
* [5] Joan Bruna, Wojciech Zaremba, Arthur Szlam, and Yann LeCun. Spectral networks and locally connected networks on graphs. arXiv, 2013.
* [6] Sergio Casas, Cole Gulino, Renjie Liao, and Raquel Urtasun. Spatially-aware graph neural networks for relational behavior forecasting from sensor data. arXiv, 2019.
* [7] Sergio Casas, Cole Gulino, Simon Suo, Katie Luo, Renjie Liao, and Raquel Urtasun. Implicit latent variable model for scene-consistent motion forecasting. In ECCV, 2020.
* [8] Sergio Casas, Wenjie Luo, and Raquel Urtasun. Intentnet: Learning to predict intention from raw sensor data. In Proceedings of The 2nd Conference on Robot Learning, 2018.
* [9] Yuning Chai, Benjamin Sapp, Mayank Bansal, and Dragomir Anguelov. Multipath: Multiple probabilistic anchor trajectory hypotheses for behavior prediction. arXiv, 2019.
* [10] Ming-Fang Chang, John Lambert, Patsorn Sangkloy, Jagjeet Singh, Slawomir Bak, Andrew Hartnett, De Wang, Peter Carr, Simon Lucey, Deva Ramanan, et al. Argoverse: 3d tracking and forecasting with rich maps. In CVPR, 2019.
* [11] Wongun Choi and Silvio Savarese. A unified framework for multi-target tracking and collective activity recognition. In ECCV, 2012.
* [12] Wongun Choi and Silvio Savarese. Understanding collective activities of people from videos. PAMI, 2013.
* [13] Henggang Cui, Vladan Radosavljevic, Fang-Chieh Chou, Tsung-Han Lin, Thi Nguyen, Tzu-Kuo Huang, Jeff Schneider, and Nemanja Djuric. Multimodal trajectory predictions for autonomous driving using deep convolutional networks. In ICRA, 2019.
* [14] Nachiket Deo, Akshay Rangesh, and Mohan M Trivedi. How would surround vehicles move? a unified framework for maneuver classification and motion prediction. IEEE Transactions on Intelligent Vehicles, 2018.
* [15] Nachiket Deo and Mohan M Trivedi. Convolutional social pooling for vehicle trajectory prediction. In CVPR, 2018.
* [16] Jiyang Gao, Chen Sun, Hang Zhao, Yi Shen, Dragomir Anguelov, Congcong Li, and Cordelia Schmid. Vectornet: Encoding hd maps and agent dynamics from vectorized representation. In CVPR, 2020.
* [17] Victor Garcia and Joan Bruna. Few-shot learning with graph neural networks. In ICLR, 2018.
* [18] Agrim Gupta, Justin Johnson, Li Fei-Fei, Silvio Savarese, and Alexandre Alahi. Social gan: Socially acceptable trajectories with generative adversarial networks. In CVPR, 2018.
* [19] Will Hamilton, Zhitao Ying, and Jure Leskovec. Inductive representation learning on large graphs. In NeurIPS, 2017.
* [20] Kaiming He, Georgia Gkioxari, Piotr Dollár, and Ross Girshick. Mask r-cnn. In ICCV, 2017.
* [21] Dirk Helbing and Peter Molnar. Social force model for pedestrian dynamics. Physical review E, 1995.
* [22] Ajay Jain, Sergio Casas, Renjie Liao, Yuwen Xiong, Song Feng, Sean Segal, and Raquel Urtasun. Discrete residual flow for probabilistic pedestrian behavior prediction. arXiv, 2019.
* [23] Diederik P Kingma and Jimmy Ba. Adam: A method for stochastic optimization. arXiv, 2014.
* [24] Thomas N Kipf and Max Welling. Semi-supervised classification with graph convolutional networks. arXiv, 2016.
* [25] Namhoon Lee, Wongun Choi, Paul Vernaza, Christopher B Choy, Philip HS Torr, and Manmohan Chandraker. Desire: Distant future prediction in dynamic scenes with interacting agents. In CVPR, 2017.
* [26] Lingyun Li, Bin Yang, Ming Liang, Wenyuan Zeng, Mengye Ren, Sean Segal, and Raquel Urtasun. End-to-end contextual perception and prediction with interaction transformer. In IROS, 2020.
* [27] Ruiyu Li, Makarand Tapaswi, Renjie Liao, Jiaya Jia, Raquel Urtasun, and Sanja Fidler. Situation recognition with graph neural networks. In ICCV, 2017.
* [28] Yujia Li, Daniel Tarlow, Marc Brockschmidt, and Richard Zemel. Gated graph sequence neural networks. arXiv, 2015.
* [29] Ming Liang, Bin Yang, Rui Hu, Yun Chen, Renjie Liao, Song Feng, and Raquel Urtasun. Learning lane graph representations for motion forecasting. In ECCV, 2020.
* [30] Ming Liang, Bin Yang, Wenyuan Zeng, Yun Chen, Rui Hu, Sergio Casas, and Raquel Urtasun. Pnpnet: End-to-end perception and prediction with tracking in the loop. In CVPR, 2020.
* [31] Renjie Liao, Zhizhen Zhao, Raquel Urtasun, and Richard Zemel. Lanczosnet: Multi-scale deep graph convolutional networks. In ICLR, 2019.
* [32] Wei-Chiu D. Ma, De-An Huang, Namhoon Lee, and Kris M Kitani. Forecasting interactive dynamics of pedestrians with fictitious play. In CVPR, 2017.
* [33] Ramin Mehran, Alexis Oyama, and Mubarak Shah. Abnormal crowd behavior detection using social force model. In CVPR, 2009.
* [34] Jean Mercat, Thomas Gilles, Nicole El Zoghby, Guillaume Sandou, Dominique Beauvois, and Guillermo Pita Gil. Multi-head attention for multi-modal joint vehicle motion forecasting. In ICRA, 2020.
* [35] Federico Monti, Davide Boscaini, Jonathan Masci, Emanuele Rodola, Jan Svoboda, and Michael M Bronstein. Geometric deep learning on graphs and manifolds using mixture model cnns. In CVPR, 2017.
* [36] Vinod Nair and Geoffrey E Hinton. Rectified linear units improve restricted boltzmann machines. In ICML, 2010.
* [37] Tung Phan-Minh, Elena Corina Grigore, Freddy A Boulton, Oscar Beijbom, and Eric M Wolff. Covernet: Multimodal behavior prediction using trajectory sets. In CVPR, 2020.
* [38] Xiaojuan Qi, Renjie Liao, Jiaya Jia, Sanja Fidler, and Raquel Urtasun. 3d graph neural networks for rgbd semantic segmentation. In ICCV, 2017.
* [39] Shaoqing Ren, Kaiming He, Ross Girshick, and Jian Sun. Faster r-cnn: Towards real-time object detection with region proposal networks. In NeurIPS, 2015.
* [40] Nicholas Rhinehart, Kris M Kitani, and Paul Vernaza. R2p2: A reparameterized pushforward policy for diverse, precise generative path forecasting. In ECCV, 2018.
* [41] Nicholas Rhinehart, Rowan McAllister, Kris Kitani, and Sergey Levine. Precog: Prediction conditioned on goals in visual multi-agent settings. arXiv, 2019.
* [42] Abbas Sadat, Sergio Casas, Mengye Ren, Xinyu Wu, Pranaab Dhawan, and Raquel Urtasun. Perceive, predict, and plan: Safe motion planning through interpretable semantic representations. In ECCV, 2020.
* [43] Amir Sadeghian, Vineet Kosaraju, Ali Sadeghian, Noriaki Hirose, Hamid Rezatofighi, and Silvio Savarese. Sophie: An attentive gan for predicting paths compliant to social and physical constraints. In CVPR, 2019.
* [44] Amir Sadeghian, Ferdinand Legros, Maxime Voisin, Ricky Vesel, Alexandre Alahi, and Silvio Savarese. Car-net: Clairvoyant attentive recurrent network. In ECCV, 2018.
* [45] Franco Scarselli, Marco Gori, Ah Chung Tsoi, Markus Hagenbuchner, and Gabriele Monfardini. The graph neural network model. IEEE Transactions on Neural Networks, 2008.
* [46] Abhinav Shrivastava, Abhinav Gupta, and Ross Girshick. Training region-based object detectors with online hard example mining. In CVPR, 2016.
* [47] Haoran Song, Wenchao Ding, Yuxuan Chen, Shaojie Shen, Michael Yu Wang, and Qifeng Chen. Pip: Planning-informed trajectory prediction for autonomous driving. arXiv, 2020.
* [48] Chen Sun, Abhinav Shrivastava, Carl Vondrick, Rahul Sukthankar, Kevin Murphy, and Cordelia Schmid. Relational action forecasting. In CVPR, 2019.
* [49] Yichuan Charlie Tang and Ruslan Salakhutdinov. Multiple futures prediction. arXiv, 2019.
* [50] Damien Teney, Lingqiao Liu, and Anton van Den Hengel. Graph-structured representations for visual question answering. In CVPR, 2017.
* [51] Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. Attention is all you need. In NeurIPS, 2017.
* [52] Anirudh Vemula, Katharina Muelling, and Jean Oh. Social attention: Modeling attention in human crowds. In ICRA, 2018.
* [53] Shenlong Wang, Simon Suo, Wei-Chiu Ma, Andrei Pokrovsky, and Raquel Urtasun. Deep parametric continuous convolutional neural networks. In CVPR, 2018.
* [54] Tsun-Hsuan Wang, Sivabalan Manivasagam, Ming Liang, Bin Yang, Wenyuan Zeng, and Urtasun Raquel. V2vnet: Vehicle-to-vehicle communication for joint perception and prediction. In ECCV, 2020.
* [55] Bob Wei, Mengye Ren, Wenyuan Zeng, Ming Liang, Bin Yang, and Raquel Urtasun. Perceive, attend, and drive: Learning spatial attention for safe self-driving. arXiv, 2020.
* [56] Moritz Werling, Julius Ziegler, Sören Kammel, and Sebastian Thrun. Optimal trajectory generation for dynamic street scenarios in a frenet frame. In 2010 IEEE International Conference on Robotics and Automation, pages 987–993. IEEE, 2010.
* [57] Kota Yamaguchi, Alexander C Berg, Luis E Ortiz, and Tamara L Berg. Who are you with and where are you going? In CVPR, 2011.
* [58] Fisher Yu and Vladlen Koltun. Multi-scale context aggregation by dilated convolutions. arXiv, 2015.
* [59] Wenyuan Zeng, Wenjie Luo, Simon Suo, Abbas Sadat, Bin Yang, Sergio Casas, and Raquel Urtasun. End-to-end interpretable neural motion planner. In CVPR, 2019.
* [60] Wenyuan Zeng, Shenlong Wang, Renjie Liao, Yun Chen, Bin Yang, and Raquel Urtasun. Dsdnet: Deep structured self-driving network. arXiv, 2020.
* [61] Hang Zhao, Jiyang Gao, Tian Lan, Chen Sun, Benjamin Sapp, Balakrishnan Varadarajan, Yue Shen, Yi Shen, Yuning Chai, Cordelia Schmid, et al. Tnt: Target-driven trajectory prediction. arXiv, 2020.
* [62] Tianyang Zhao, Yifei Xu, Mathew Monfort, Wongun Choi, Chris Baker, Yibiao Zhao, Yizhou Wang, and Ying Nian Wu. Multi-agent tensor fusion for contextual trajectory prediction. In CVPR, 2019.
## Appendix A Map-Relative Output Decoding
Our output decoding can be divided into three steps: predicting the final goal of an actor based on node embeddings, proposing an initial trajectory based on the goal and the initial pose, and refining the trajectory proposal with a learnable header. As the first step is straightforward, we now explain how we perform the second and third steps in detail.
### A.1 Trajectory Proposal
Given a predicted final pose $\left(x^{T},y^{T},dx^{T},dy^{T}\right)$ and
initial pose $\left(x^{0},y^{0},dx^{0},dy^{0}\right)$ of an actor, where
$(x,y)$ is the 2D location and $(dx,dy)$ is the tangent vector, we fit a
Bezier quadratic curve satisfying these boundary conditions, _i.e_., zero-th
and first order derivative values. Specifically, the curve can be
parameterized by
$$x(s)=a_{0}s^{2}+a_{1}s+a_{2},\qquad y(s)=b_{0}s^{2}+b_{1}s+b_{2},$$
subject to the boundary conditions
$$x(0)=x^{0},\quad x(1)=x^{T},\quad\frac{x^{\prime}(0)}{x^{\prime}(1)}=\frac{dx^{0}}{dx^{T}},$$
$$y(0)=y^{0},\quad y(1)=y^{T},\quad\frac{y^{\prime}(0)}{y^{\prime}(1)}=\frac{dy^{0}}{dy^{T}}.$$
Here, $s$ is the normalized distance. As a result, each predicted goal
uniquely defines a 2D curve.
Next, we unroll a velocity profile along this curve to get 2D waypoint
proposals at every future timestamp. Assuming the actor is moving with a
constant acceleration within the prediction horizon, we can compute the
acceleration based on the initial velocity $v$ and the traveled distance $s$
(from $(x^{0},y^{0})$ to $(x^{T},y^{T})$ along the Bezier curve) using
$a=\frac{2\times(s-vT)}{T^{2}}.$
Therefore, the future position of the actor at any timestamp $t$ can be
evaluated by querying the position along the curve at
$s(t)=vt+\frac{1}{2}at^{2}$.
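A sketch of this proposal step follows; the control-point construction (intersection of the end tangents) and the arc-length discretization are standard choices assumed here, not taken from the paper.

```python
# Sketch: fit a quadratic Bezier curve from the initial/final poses, then
# unroll a constant-acceleration profile along it.
import numpy as np

def propose_trajectory(p0, d0, pT, dT, v0, horizon=3.0, steps=30):
    # p0, pT: (2,) endpoints; d0, dT: (2,) unit tangents; v0: initial speed.
    A = np.stack([d0, -dT], axis=1)
    t = np.linalg.solve(A, pT - p0)      # fails if the tangents are parallel
    ctrl = p0 + t[0] * d0                # Bezier control point
    u = np.linspace(0.0, 1.0, 200)[:, None]
    curve = (1 - u) ** 2 * p0 + 2 * u * (1 - u) * ctrl + u ** 2 * pT
    arc = np.concatenate([[0.0], np.cumsum(
        np.linalg.norm(np.diff(curve, axis=0), axis=-1))])
    a = 2.0 * (arc[-1] - v0 * horizon) / horizon ** 2  # constant acceleration
    ts = np.linspace(horizon / steps, horizon, steps)
    s = np.clip(v0 * ts + 0.5 * a * ts ** 2, 0.0, arc[-1])   # s(t)
    idx = np.clip(np.searchsorted(arc, s), 0, len(curve) - 1)
    return curve[idx]                    # (steps, 2) waypoints (nearest samples)
```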
### A.2 Trajectory Refinement
Simply using our trajectory proposals for motion forecasting will not be very
accurate, as not all actors move with constant accelerations and follow Bezier
curves, yet the proposals provide us good initial estimations which we can
further refine. To do so, for each trajectory proposal, we construct its
features using a shortcut layer on top our LaneRoI node embeddings. We then
use a 2 layer MLP to decode a pair of values $\left(s^{t},d^{t}\right)$ for
each future timestamp, representing the actor position at time $t$ in the
Frenet Coordinate System [56]. The Cartesian coordinates
$\left(x^{t},y^{t}\right)$ can be recovered from $\left(s^{t},d^{t}\right)$ by first traversing a distance $s^{t}$ along the Bezier curve (the longitudinal coordinate) and then deviating perpendicularly from the curve by a distance $d^{t}$ (the lateral coordinate). The sign of $d^{t}$ indicates whether the deviation is to the left or to the right.
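A sketch of this Frenet-to-Cartesian mapping over a densely sampled proposal curve; the discretization and edge handling are assumptions for illustration.

```python
# Map a refined Frenet output (s, d) back to Cartesian (x, y).
import numpy as np

def frenet_to_cartesian(curve, s, d):
    # curve: (N, 2) sampled proposal; s: longitudinal, d: signed lateral.
    seg = np.diff(curve, axis=0)
    arc = np.concatenate([[0.0], np.cumsum(np.linalg.norm(seg, axis=-1))])
    i = int(np.searchsorted(arc, s, side='right')) - 1
    i = max(min(i, len(seg) - 1), 0)
    tang = seg[i] / (np.linalg.norm(seg[i]) + 1e-9)  # longitudinal direction
    normal = np.array([-tang[1], tang[0]])           # left-pointing lateral
    base = curve[i] + (s - arc[i]) * tang            # travel distance s
    return base + d * normal      # deviate left (d > 0) or right (d < 0)
```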
Sampling | Avg Length ($\ell_{k}$) | Reg | $\text{min}_{1}$ADE | $\text{min}_{1}$FDE | $\text{min}_{6}$ADE | $\text{min}_{6}$FDE | $\text{min}_{6}$MR
---|---|---|---|---|---|---|---
up 1x | 2.1 m | | 1.52 | 3.32 | 0.94 | 1.69 | 24.0
up 1x | 2.1 m | ✓ | 1.41 | 3.03 | 0.83 | 1.35 | 14.1
up 2x | 1.1 m | | 1.43 | 3.09 | 0.85 | 1.39 | 13.4
up 2x | 1.1 m | ✓ | 1.39 | 2.99 | 0.80 | 1.24 | 10.2
uniform | 2 m | | 1.39 | 2.94 | 0.86 | 1.44 | 10.5
uniform | 2 m | ✓ | 1.35 | 2.86 | 0.80 | 1.29 | 8.2
uniform | 1 m | | 1.37 | 2.90 | 0.83 | 1.32 | 9.9
uniform | 1 m | ✓ | 1.33 | 2.85 | 0.77 | 1.19 | 8.2
Table 3: Ablation studies on the lane segment sampling strategy and the regression branch. We compare different sampling strategies, including using the original segment labels (upsample 1x), upsampling the labels (upsample 2x), and uniformly sampling segments along lanes (uniform 2 m / 1 m). For each, we also compare using the regression branch or not (classification branch only). Our final model corresponds to the last row (uniform 1 m with regression).
## Appendix B LaneRoI Construction
To construct a LaneRoI, we need to retrieve all relevant lanes of a given
actor. In our experiments, we use a simple heuristic for this purpose. Given
an HD map of a scene and an arbitrary actor in this scene, we first uniformly
sample segments $\ell_{k}$ of 1 meter length along each lane’s centerline. Then, for the actor’s location at each past timestamp, we find the nearest lane segment and collect these segments into a set. This is a simplified version of lane association and achieves very high recall of the true ego-lane.
Finally, we retrieve predecessor and successor lane segments (within a range $D$) of those segments in the set to capture lane-following behaviors, as well as left and right neighbors of those predecessor and successor lanes, which are necessary to capture lane-changing behaviors. The range $D$ equals the expected length of future movement, obtained by integrating the current velocity over the prediction horizon, plus a buffer value, _e.g_., 20 meters. Therefore, $D$ changes dynamically based on the actor velocity. This is motivated by the fact
that high speed actors travel larger distances and thus should have larger
LaneRoIs to capture their motions as well as interactions with other actors.
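As a tiny sketch of this rule (the 3-second horizon is the Argoverse prediction length, and the 20-meter buffer is the example value above):

```python
# Velocity-dependent retrieval range D for LaneRoI construction.
def lane_roi_range(current_speed, horizon=3.0, buffer=20.0):
    # current_speed in m/s; returns D in meters.
    return current_speed * horizon + buffer
```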
## Appendix C Architecture and Learning Details
Our LaneRCNN is constructed as follows: we first feed each input LaneRoI
representation into an encoder consisting of 2 lane convolution layers and a
shortcut layer, followed by another 2 lane convolution layers and a shortcut
layer. We then use a lane pooling layer to build the node embeddings of the
global graph (interactor), where the neighborhood threshold is set to 2
meters. Another four layers of lane convolution are applied on top of this
global graph. Next, we distribute the global node embeddings to each LaneRoI
by using a lane pooling layer and adding the pooled features to original
LaneRoI embeddings (previous encoder outputs) as a skip-connection
architecture. Another 4 lane convolution layers and 2 shortcut layers are applied afterwards. Finally, we use a two-layer MLP to decode a classification score per node from its embedding, and another two-layer MLP for the regression branch. All layers except the final outputs have 64
channels. We use Layer Normalization [3] and ReLU [36] for normalization and
non-linearity respectively.
During training, we apply online hard example mining [46] for $\mathcal{L}_{cls}$ (see Sec. 3.5). Recall that each node predicts a binary classification score indicating whether it is the closest lane segment to the final actor position. We use the lane segment closest to the ground-truth location as our positive example. Negative examples are all nodes deviating from the ground-truth by more than 6 meters, and the remaining nodes are ‘don’t care’ and do not contribute any loss to $\mathcal{L}_{cls}$.
Then, we randomly subsample one fourth of all negative examples and among them
we use the hardest 100 negative examples for each data sample to compute our
loss. The final $\mathcal{L}_{cls}$ is the average of the positive example losses plus the average of the negative example losses. Finally, we add $\mathcal{L}_{reg}$
and $\mathcal{L}_{refine}$ with relative weights of $\alpha=0.5$ and
$\beta=0.2$ respectively to form our total loss $\mathcal{L}$.
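The mining procedure can be sketched as follows; tensor shapes and the handling of degenerate cases (e.g., very few negatives) are assumptions.

```python
# Sketch of the hard-negative mining for L_cls described above.
import torch
import torch.nn.functional as F

def ohem_cls_loss(logits, dist_to_gt, n_hard=100):
    # logits: (M,) per-node scores; dist_to_gt: (M,) distance to the GT goal.
    pos = dist_to_gt.argmin()                        # closest node: positive
    neg = (dist_to_gt > 6.0).nonzero().squeeze(-1)   # beyond 6 m: negatives
    keep = torch.randperm(len(neg))[: len(neg) // 4] # subsample one fourth
    neg = neg[keep]
    neg_loss = F.binary_cross_entropy_with_logits(
        logits[neg], torch.zeros(len(neg)), reduction='none')
    hard = neg_loss.topk(min(n_hard, len(neg))).values  # hardest negatives
    pos_loss = F.binary_cross_entropy_with_logits(
        logits[pos], torch.tensor(1.0))
    # Nodes within 6 m (other than the positive) are 'don't care'.
    return pos_loss + hard.mean()
```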
## Appendix D Ablation Studies
When constructing our LaneRoI, we define a lane segment $\ell$ to be a node in
the graph. However, there are different ways to sample lane segments and we
find such a choice largely affects the final performance. In Table 3 we show
different strategies of sampling lane segments. The first four rows refer to
upsampling the original lane segment labels provided in the dataset (lanes are labeled as polylines in Argoverse, so the points on those polylines naturally divide lanes into segments). Such a strategy provides
segments with different lengths, _e.g_., shorter and denser segments where the
geometry of the lanes changes more rapidly. The last four rows sample segments
uniformly along lanes with a predefined length.
As we can observe from Table 3, even though the ‘upsample’ strategy can result
in similar average segment length as the ‘uniform’ strategy, it performs much
worse on all metrics. This is possibly because segments of varying length introduce additional variance and harm the representation learning process. We
can also conclude from the table that using denser sampling can achieve better
results. Besides, we show the effectiveness of adding a regression branch for
each node in addition to a classification branch, shown in ‘Reg’ column.
Our output parameterization explicitly leverages the lane information and thus eases training. In Fig. 6, we validate this argument by comparing
against a regression-based variant of our model. In particular, we use the
same backbone as ours, and then perform a shortcut layer on top of each
LaneRoI to extract an actor-specific feature vector. We then build a multi-
modal regression header which directly regresses future motions in the 2D
Cartesian coordinates (as building such a header is non-trivial, we borrow an open-sourced implementation from LaneGCN [29], which has been tuned on the same dataset and shown strong results). We can see from Fig. 6 that our model achieves decent performance even when only small amounts of training data are available: with only 1% of the training data, our method achieves a miss rate of 20%. In contrast, the regression-based model requires much more data. This shows that our method can exploit map structures as good priors, which eases learning.
In Table 4, we summarize ablations on different trajectory parameterizations.
We can see that a constant-acceleration rollout slightly improves over the constant-speed assumption, and that the Bezier curve significantly outperforms a straight-line parameterization, indicating it is more realistic. In addition, adding a
learnable header to refine the trajectory proposals (_e.g_., Bezier curve) can
further boost performance.
Figure 6: Model performance when using different amounts of training data. Our output parameterization explicitly leverages lanes as priors for motion forecasting, and thus significantly eases learning compared to directly regressing future motions in the 2D plane.

Curve | Velocity | Learnable | $\text{min}_{1}$ADE | $\text{min}_{6}$ADE
---|---|---|---|---
line | const | | 1.53 | 1.04
line | acc | | 1.52 | 1.02
line | acc | ✓ | 1.41 | 0.86
Bezier | const | | 1.46 | 0.96
Bezier | acc | | 1.44 | 0.94
Bezier | acc | ✓ | 1.33 | 0.77
Table 4: Ablation studies on output parameterizations. We compare different ways of proposing curves (straight line vs. Bezier quadratic curve), unrolling velocities (const: constant velocity; acc: constant acceleration), and using a learnable refinement header or not. Our final model corresponds to the last row (Bezier, acc, learnable).
Figure 7: Qualitative results on the Argoverse validation set. We show a variety of scenarios including turning (rows 1-2), curved roads (row 3), braking and overtaking (row 4), and abnormal behaviors (row 5).
## Appendix E Qualitative Results
Lastly, we provide more visualizations of our model outputs, as we believe the metric numbers only reveal part of the story. We present various scenarios
in Fig. 7, including turning, following curved roads, interacting with other
actors and some abnormal behaviors. On the first two rows, we show turning
behaviors under various map topologies. We can see our motion forecasting
results nicely capture possible modes: turning into different directions or
occupying different lanes after turning. We also do well even if the actor is
not strictly following the centerlines. On the third row, we show predictions
following curved roads, which are difficult scenarios for auto-regressive
predictors [59, 8]. On the fourth row, we show that our model can predict braking or overtaking behaviors when leading vehicles block the ego-actor. Finally, we show in the fifth row that our model also works well when
abnormal situations occur, _e.g_., driving out of roads. This is impressive as
the model relies on lanes to make predictions, yet it shows capabilities to
predict non-map-compliant behaviors.
Barbara Kaltenbacher
Department of Mathematics,
Alpen-Adria-Universität Klagenfurt
9020 Klagenfurt, Austria
William Rundell$^*$
Department of Mathematics,
Texas A&M University
College Station, Texas 77843, USA
(Communicated by Fioralba Cakoni)
This paper considers the inverse problem of recovering both the
unknown, spatially-dependent conductivity $a(x)$ and the nonlinear reaction
term $f(u)$ in a reaction-diffusion equation
from overposed data.
These measurements can consist of:
the values of two different solutions taken at a later time $T$;
time-trace profiles from two solutions;
or both final time and time-trace measurements from a single forward data run.
We prove both uniqueness results and the convergence of iteration
schemes designed to recover these coefficients.
The last section of the paper shows numerical reconstructions based on these schemes.
§ INTRODUCTION
In their most basic form, reaction-diffusion equations are nonlinear
parabolic equations of the type
$u_t(x,t) + \mathbb{L}u = f(u)$, where $\mathbb{L}$ is an elliptic operator
defined on a domain $\Omega\subset \mathbb{R}^d$, $d\geq 1$,
and $f(u)$ is a function only of the state or dependent variable $u$,
which is defined on $\Omega\times [0,T]$ for some $T>0$.
Such equations arise in a wide variety of applications.
Two examples are in ecology, where $u$ represents the population
density at a fixed point $x$ and time $t$ and $f(u)$ is frequently taken
to be quadratic in $u$, as in the Fisher model;
and in chemical reactions, where $f$ is cubic in
the case of the combustion theory of Zeldovich and Frank-Kamenetskii
[12, 23, 19].
In these models $\mathbb{L}$ is of second order but higher order operators
also feature in significant applications, for example in the Cahn-Hilliard
equation [5, 8].
The physical interpretation of these equations is that the time rate
of change of $u$ is the sum of two components:
the diffusion term given by $\mathbb{L}u$ and the driving term $f(u)$.
Often these are in competition, the former tending to disperse the value
of $u$ and the latter, if $f>0$, to increase it.
If $f$ is negative, or positive but sublinear, then on a bounded domain
the solution will usually decay exponentially (depending on the boundary
conditions/values and on the eigenvalues of the combined operator $\mathbb{L}$
being positive).
At the other extreme if the growth rate of $f$ is sufficiently large
then global existence in time of the solution can fail.
This is a complex but well-documented situation, see for example,
[1, 20] and the references within.
More general references for semilinear parabolic equations
are [9, 10].
In most applications one assumes these terms in the equation are known,
that is, the coefficients in the elliptic operator and the exact form
of $f(u)$. Indeed, frequently, $f(u)$ is taken to be a polynomial
or possibly a rational function of low degree so that only a few parameters
have to be determined by a least squares fit to the data.
Even if the diffusion constant has also to be determined the method remains
fundamentally the same.
In this paper we make no such assumptions: both our diffusion coefficient and
the nonlinear term can be arbitrary functions within a given smoothness class.
We shall take $\mathbb{L}u = -\nabla\cdot\bigl(a(x)\nabla u\bigr)$ where
the diffusion coefficient $a(x)$ is unknown.
With this background setting our basic model equation is thus
\begin{equation}\label{eqn:basic_pde_parabolic}
u_t(x,t) -\nabla\cdot(a(x)\nabla u(x,t)) = f(u) + r(x,t,u)
\end{equation}
where $r(x,t,u)$ is a known forcing function.
Boundary conditions for (\ref{eqn:basic_pde_parabolic})
will be of the impedance form
\begin{equation}\label{eqn:bdry_cond}
a\frac{\partial u}{\partial\vec{n}} + \gamma u = {b},\qquad
x\in\partial\Omega,\quad t>0,
\end{equation}
with possibly space dependent but known coefficient $\gamma$ (where $\gamma=\infty$ indicates the case of Dirichlet boundary conditions) as well as given space and time dependent right hand side $b$.
We impose the initial condition
\begin{equation}\label{eqn:init_cond}
u(x,0) = u_0(x),\qquad x\in\Omega.
\end{equation}
Now of course the recovery of the coefficient $a$ and the term $f(u)$ requires
over-posed data and we shall consider several different forms and combinations.
The two basic forms are final time and boundary measurements of $u$.
The former consists of the value $g(x) = u(x,T)$ at some later time $T$
and corresponds to such conditions as a thermal or concentration map of $u$
within the region $\Omega$ or census data taken at a fixed time.
The latter consists of a time trace $h(t) = u(\tilde x,t)$ for some fixed
point $\tilde x$ on $\partial\Omega$.
Combinations include measurements of one of these for two different
sets of initial/boundary conditions corresponding to solutions $u(x,t)$ and
$v(x,t)$, or a measurement of both of these for a single data run.
As we will see, some of these combinations are superior to others and the
difference can be substantial.
The main technique we will use follows that of previous work by the
authors and others: we project the differential equation onto the
boundaries containing the overposed data and obtain nonlinear equations
for our unknowns $a(x)$ and $f(u)$ which we then solve by iterative methods.
This idea for recovering the nonlinearity $f(u)$ in equations such as
was first used in [26, 25]
under time-trace data.
References for recovering coefficients in parabolic equations in general,
but specifically including final-time data,
are [13, 14].
A recent such problem for the subdiffusion case based on fractional derivatives
is found in [31].
More recently, the authors have considered several inverse problems
based on the above equation.
In [15] the equation taken was
$u_t - \triangle u = q(x)g(u)$ where $g(u)$ was assumed known and $q(x)$
which determines the unknown spatial strength of $g$, had to be recovered
from final time data.
In [16] the equation taken was
$u_t - \triangle u = f(u)$ and recovery of the unknown $f(u)$ was again from
final time data.
The case of $\mathbb{L}u = -\nabla\cdot\bigl(a(x)\nabla u\bigr) + q(x)u$
where both $a(x)$ and $q(x)$ are unknown and have to be determined
from a pair of runs each providing final-time data was considered in a previous work of the authors.
In that work the equation was linear, so that $f(u)$ did not appear.
Our main tool is based on the technique of projecting the differential
operator onto that segment of the boundary where overposed data lies
and thereby obtain an operator equation suitable for fixed point iteration
and the attendant methods for showing convergence of such schemes.
Other possibilities exist and have been widely used such as
Newton-type schemes.
For the problems at hand these techniques are certainly viable candidates
but require, often considerably, more complex analysis,
in particular, convergence conditions that are likely
not satisfied by highly nonlinear problems.
In a departure from most of the recent work in this area we will
use a Schauder space setting for the parabolic equation.
This offers several advantages including the fact that regularity estimates
for parabolic equations set in Schauder space are independent of the spatial dimension.
In each of the above papers the diffusion process was extended to include
the subdiffusion case where the time-derivative was of fractional type.
The very rough picture that evolves from this modification is no change
to the uniqueness questions of recovery of terms but possibly
substantial changes in the degree of ill-conditioning, especially when
either only very short or when very long time values are present.
Certainly, the analysis is more complex when fractional derivatives are involved.
In the present work we will consider only the classical parabolic situation
as we don't see sufficient novelty of outcome in the central ideas
to merit the analytical complexities that a comprehensive inclusion would
entail although we will sometimes comment on particular instances.
We make a simple, but essential, overview comment on inverse problems
seeking to recover a term that involves the dependent variable $u$.
One can only recover such a term over a range of values that are contained
in the data measurement range and, further, during the evolution process
the range of the solution $u$ cannot have taken values outside of this range.
This makes such problems quite distinct from determining coefficients
where the required range is known from the geometry of the domain $\Omega$.
From a practical perspective this means that the experimental set up
must take this point into consideration and an arbitrary mix of initial
and boundary conditions and values is unlikely to satisfy this range condition.
There is some way around this issue by representing the unknown function,
here $f(u)$, in a chosen basis set.
If $f$ is known to have a simple form then a few well-chosen elements
may suffice, but otherwise the required extrapolation will lead to
a severely ill-posed problem.
We are not considering this case as we seek a methodology that includes
quite complex situations.
In addition, the iteration schemes we propose are best formulated
in a pointwise form and this would be precluded without a range condition.
In the current work we seek to determine a pair of unknown coefficients.
One of these, $a(x)$ is only spatially dependent but strongly influences the
resulting solution profile.
The reaction term $f(u)$ also might have a strong influence on the
solution profile but
it can also be such that it is small in magnitude and its influence on the
solution is dominated by the diffusion process governed by $a(x)$.
This case is actually more difficult than when the two terms have roughly
equal effect on the solution.
In either situation the range condition must still be in place.
In section sect:recons we will see a situation where a violation of
this condition can occur in a rather subtle way, leading to incorrect reconstructions.
The plan of the paper is to introduce the iterative schemes that will
be used to recover both $a$ and $f$ in section sec:conv
and present conditions under which we are able to show a convergence analysis.
This section is broken down into three cases depending on the data type being
measured: two runs with different initial/boundary data and each
measuring a final time $g(x) = u(x,T)$ for some fixed $t=T$;
two runs each measuring a time trace of the solution $u$ at a fixed
point $\tilde x\in\partial\Omega$; and a single run with measurements of
both a time trace and a final time.
We will also provide a convergence analysis of each of these schemes.
The following section sect:recons will show actual reconstructions
using these iterative schemes and demonstrate both their capabilities and their
Before proceeding to the above agenda we should make some comments
on physical scaling as this will be very relevant to our reconstructions in
section sect:recons.
${\mathbb L}$ has been written with a single coefficient $a(x)$ but
more general possibilities would include:
$\;{\mathbb L}u := -\nabla\cdot(a\nabla u) + d(x)\nabla_i u + q(x) u$.
In this setting $d$ would correspond to a drift term and $q$ to a potential,
or to a linear forcing term with a space-dependent modifier of its strength.
Equation (\ref{eqn:basic_pde_parabolic}) then appears with several
terms, all of which have specific physical attributes and, in consequence,
scale factors depending on the situation.
By this we mean that, depending on the context, the ranges of values
taken by the various terms, including our assumed unknowns $a(x)$ and $f(u)$,
may differ enormously.
Examples for the diffusion coefficient range from around
$10^{-6}$ to $10^{-5}$ metres$^2$/sec for molecules in a gas to about
$10^{-10}$ metres$^2$/sec for molecules in a liquid such as water,
and in this case the effect of drift represented by $d(x)$
can overshadow the diffusion effect
(of course the domain size of $\Omega$ also plays a role).
Because of this we must be careful in interpreting the results of
numerical experiments which have taken generic order unity values
as the default.
§ CONVERGENCE ANALYSIS
We consider the inverse problem of recovering the spatially varying diffusion $a(x)$ and the nonlinearity $f(u)$ in the context of three different observation settings:
* observation of two states (driven by different initial and boundary conditions as well as forcing terms) at the same final time $T$
* observation of one or two states at final time and at a (boundary or interior) point $\tilde{x}$ over time
* observation of two different states at $\tilde{x}$ over time
We discuss each of these cases in a separate subsection below.
In the first two cases we can follow the approach of projecting the PDE onto the
observation manifold; in the third case this is only partly possible, since
time trace measurements do not map directly into space dependence (of $a$).
While the methods themselves clearly extend to higher space dimensions in the
first two cases (in the third case they cannot, by an elementary dimension count)
we provide an analysis in one space dimension $\Omega=(0,L)$ only.
Possibilities and limitations with respect to higher space dimensions will
be indicated in some remarks.
Since well-definedness of the methods to be discussed relies on existence, uniqueness and regularity of solutions to the semilinear parabolic initial boundary value problem (\ref{eqn:basic_pde_parabolic}), (\ref{eqn:bdry_cond}), (\ref{eqn:init_cond}), we will provide a statement from the literature [9] on well-posedness of this forward problem.
For our purposes it suffices to consider the case of homogeneous boundary conditions ($b=0$ in (\ref{eqn:bdry_cond})). As a matter of fact, it would be enough to look at the spatially 1-d case, for which we do the analysis; however, this would not change much in the assumptions, since the Schauder space results in [9] are independent of the space dimension.
In case $\gamma=\infty$ (Dirichlet boundary value problem) we can directly make use of <cit.> for existence and of <cit.> for uniqueness.
In the case of Neumann or impedance conditions $\gamma<\infty$, these can be extended by means of the results from <cit.>, noting that these correspond to the second boundary value problem in [9] with $\beta=\frac{\gamma}{a}$, $g=\frac{b}{a}$.
Let $\Omega$ be a $C^{2+s}$ domain for some $s>0$, let $f$ and $r$ be Hölder continuous on bounded sets and $f$ satisfy the growth conditions
$uf(u)\leq A_1 u^2+A_2$
$|f(u)|\leq A(u)$ for some positive monotonically increasing function $A$
and let $a$, $\nabla a$ be Hölder continuous (note that the differential operator in [9] is in nondivergence form) with $a$ being positive and bounded away from zero on $\overline{\Omega}$.
Moreover, let $u_0\in C^{2+s}(\overline{\Omega})$ satisfy the compatibility conditions
$u_0=0$ and $\nabla\cdot(a(x)\nabla u_0(x)) = f(u_0(x)) + r(x,0,u_0)$ for $x\in\partial\Omega$ if $\gamma=\infty$,
or vanish in a neighborhood of $\partial \Omega$ in case $\gamma<\infty$.
Then there exists a classical solution $u$ with $u_t,\,u_{x_i},\,u_{x_ix_j}$ Hölder continuous of (<ref>), (<ref>), (<ref>) with $b=0$.
If additionally $f$ is Lipschitz continuous, then this solution is unique.
Note that the results taken from [9] deal with solutions on bounded time domains, which allows one to impose less restrictive growth conditions than those needed on all of $[0,\infty)$ as a time interval.
Since we will consider functions $f$ that are nonconstant on a bounded domain only, Lipschitz continuity of $f$ is sufficient for the smoothness and growth conditions in Theorem th:wellposedforward.
§.§ Final time observations of two states
Identify $a(x)$, $f(u)$ in
\begin{eqnarray}
&&u_t-(a u_x)_x=f(u)+r^u \quad t\in(0,T)\,, \quad u(0)=u_0\label{eqn:u-f}\\
&&v_t-(a v_x)_x=f(v)+r^v \quad t\in(0,T)\,, \quad v(0)=v_0\label{eqn:v-f}
\end{eqnarray}
with homogeneous impedance boundary conditions
\begin{equation}\label{eqn:bc}
a\partial_{\vec{n}} u+\gamma^u u =0\,, \quad a\partial_{\vec{n}} v+\gamma^v v =0 \quad\mbox{ on }\partial\Omega
\end{equation}
from final time data
\begin{equation}\label{eqn:finaltimedata}
g^u(x)=u(x,T)\,, \quad g^v=v(x,T)\,, \quad x\in\Omega,
\end{equation}
where $\Omega=(0,L)$.
In order for this data to contain enough information to enable identification
of both $a$ and $f$, it is essential that the two solutions $u$ and $v$ are
sufficiently different, which is mainly achieved by the choice of the two
different driving terms $r^u$ and $r^v$, while the boundary conditions and
later on also the initial data will just be homogeneous.
Projecting the PDEs (<ref>), (<ref>) onto the measurement
manifold $\Omega\times\{T\}$ and denoting by $u(x,t;a,f)$, $v(x,t;a,f)$ their
solutions with coefficients $a$ and $f$, we obtain equivalence of the inverse
problem (<ref>)–(<ref>) to a fixed point equation
with the fixed point operator
$\mathbb{T}$ defined by $(a^+,f^+)=\mathbb{T}(a,f)$ with
\begin{equation}\label{eqn:fp_fiti-fiti}
\begin{aligned}
&(a^+ g^u_x)_x(x)+f^+(g^u(x))=D_tu(x,T;a,f)-r^u(x,T) \quad x\in\Omega\\
&(a^+ g^v_x)_x(x)+f^+(g^v(x))=D_tv(x,T;a,f)-r^v(x,T) \quad x\in\Omega\,.
\end{aligned}
\end{equation}
Thus, the iterates are implicitly defined by
\[
\mathbb{M}(a^+,f^+)=\mathbb{F}(a,f) +\mathrm{b}\,,
\]
with the linear operator $\mathbb{M}$, the nonlinear operator $\mathbb{F}$, and the inhomogeneity $\mathrm{b}$ defined by
\begin{equation}\label{eqn:MFb}
\begin{aligned}
\mathbb{M}(a^+\!,f^+)=
\left(\begin{array}{c}\!(a^+ g^u_x)_x+f^+(g^u)\\(a^+ g^v_x)_x+f^+(g^v)\end{array}\right)\,, \quad
\\
\mathbb{F}(a,f)=
\left(\begin{array}{c}u_t(\cdot,T;a,f)\\ v_t(\cdot,T;a,f)\end{array}\right)\,, \quad
\mathrm{b}=
-\left(\begin{array}{c}\!r^u(\cdot,T)\!\\ \!r^v(\cdot,T)\end{array}\right)\,.
\end{aligned}\end{equation}
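To make this concrete, here is a minimal numerical sketch of a discrete version of $\mathbb{M}$ and its least squares inversion on a uniform grid (Python/numpy assumed; the names solve_M, interp_matrix, div_matrix are ours, not from any library). The right hand sides are the values of $\mathbb{F}(a,f)+\mathrm{b}$ obtained from a forward solve:
\begin{verbatim}
import numpy as np

def solve_M(x, gu, gv, rhs_u, rhs_v, n_f=40):
    # Discretize (a+ gu_x)_x + f+(gu) = rhs_u, (a+ gv_x)_x + f+(gv) = rhs_v
    # for nodal values of a+ (on the grid x) and of f+ (on a grid over
    # J = range of gu), and solve the stacked system by least squares.
    n = len(x); h = x[1] - x[0]             # uniform grid assumed
    gux, gvx = np.gradient(gu, x), np.gradient(gv, x)
    s = np.linspace(gu.min(), gu.max(), n_f)

    def interp_matrix(vals):
        # piecewise linear evaluation of f+ at the points vals
        A = np.zeros((n, n_f))
        for i, v in enumerate(vals):
            j = np.clip(np.searchsorted(s, v) - 1, 0, n_f - 2)
            w = (v - s[j]) / (s[j + 1] - s[j])
            A[i, j], A[i, j + 1] = 1.0 - w, w
        return A

    def div_matrix(gx):
        # interior rows of the map: nodal a+  ->  (a+ g_x)_x
        D = np.zeros((n, n))
        for i in range(1, n - 1):
            D[i, i + 1] += gx[i + 1] / (2 * h)
            D[i, i - 1] -= gx[i - 1] / (2 * h)
        return D

    A = np.block([[div_matrix(gux), interp_matrix(gu)],
                  [div_matrix(gvx), interp_matrix(gv)]])
    sol = np.linalg.lstsq(A, np.concatenate([rhs_u, rhs_v]), rcond=None)[0]
    return sol[:n], (s, sol[n:])            # a+ on x; f+ as (nodes, values)
\end{verbatim}
A fixed point iteration then alternates this solve with forward simulations producing $u_t(\cdot,T;a,f)$, $v_t(\cdot,T;a,f)$.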
The question of self mapping and contractivity of $\mathbb{T}$ in a neighbourhood of an exact solution $(a_{ex},f_{ex})$ leads us to consider the differences $\hat{a}^+=a^+-a_{ex}$, $\hat{f}^+=f^+-f_{ex}$, $\hat{a}=a-a_{ex}$, $\hat{u}=u(\cdot,\cdot;a,f)-u_{ex}$, $\hat{v}=v(\cdot,\cdot;a,f)-v_{ex}$, along with the identity
\[
\mathbb{M}(\hat{a}^+,\hat{f}^+)=\mathbb{F}(a,f)-\mathbb{F}(a_{ex},f_{ex})
\]
(note that the inhomogeneities cancel out)
so that
\begin{eqnarray*}
&&(\hat{a}^+ g^u_x)_x(x)+\hat{f}^+(g^u(x))=D_t\hat{u}(x,T) \quad x\in\Omega \\
&&(\hat{a}^+ g^v_x)_x(x)+\hat{f}^+(g^v(x))=D_t\hat{v}(x,T) \quad x\in\Omega \,.
\end{eqnarray*}
As in [17], the convergence estimates consist of two steps:
(a) prove bounded invertibility of $\mathbb{M}$, i.e., estimate appropriate norms of $a^+$, $f^+$
\begin{eqnarray}
&&(\hat{a}^+ g^u_x)_x(x)+\hat{f}^+(g^u(x))=\check{u}(x,T) \quad x\in\Omega\label{eqn:diffu}\\
&&(\hat{a}^+ g^v_x)_x(x)+\hat{f}^+(g^v(x))=\check{v}(x,T) \quad x\in\Omega\label{eqn:diffv}
\end{eqnarray}
by appropriate norms of $\check{u}$, $\check{v}$.
(b) prove smallness of $\mathbb{F}(a,f)-\mathbb{F}(a_{ex},f_{ex})$, i.e., estimate $\check{u}$, $\check{v}$ in terms of the above chosen norms of $\hat{a},\hat{f}$, using the fact that $\check{u}$, $\check{v}$ satisfy certain parabolic initial value problems (see step (b) below).
Step (a), bounding $\mathbb{M}^{-1}$:
Unlike in [17],
the available elimination strategies are limited here.
In this reference $\hat{q}^+$ was eliminated in order to estimate
$\hat{a}^+$ and then $\hat{a}^+$ was eliminated for estimating $\hat{q}^+$.
In the current situation we cannot eliminate $\hat{f}^+$ any more;
but we can in fact eliminate $\hat{a}^+$ and express $\hat{f}^+$ in terms of $\check{u}$ and $\check{v}$.
This step is considerably more complicated than the $\hat{a}^+$
elimination in [17]
and is carried out by the following steps.
We eliminate $\hat{a}^+$ by integrating (<ref>), (<ref>) with respect to $x$ and multiplying with $g^v_x(x)$ and $-g^u_x(x)$, respectively,
\[
\begin{aligned}
&g^v_x(x) \int_x^L \hat{f}^+(g^u(\xi))\, d\xi - g^u_x(x) \int_x^L \hat{f}^+(g^v(\xi))\, d\xi \\
&= g^v_x(x) \int_x^L \check{u}(\xi,T)\, d\xi - g^u_x(x) \int_x^L \check{v}(\xi,T)\, d\xi
\end{aligned}
\]
Here we have avoided boundary terms involving $\hat{a}$ by assuming that either $a(L)$ is known (so that $\hat{a}(L)=0$) or homogeneous Neumann boundary conditions are imposed on the right hand boundary (so that $g^u_x(L)=g^v_x(L)=0$).
In order to arrive at a first kind Volterra integral equation for $\hat{f}^+$, we assume that $g^u$ is strictly monotone (w.l.o.g. increasing) with
\begin{equation}\label{eqn:mu}
g^u_x(x)\geq\mu>0 \quad x\in\Omega
\end{equation}
and divide by $g^u_x(x)$ to get
\[
\begin{aligned}
&\tfrac{g^v_x}{g^u_x}(x) \int_x^L \hat{f}^+(g^u(\xi))\, d\xi - \int_x^L \hat{f}^+(g^v(\xi))\, d\xi
= \tfrac{g^v_x}{g^u_x}(x) \int_x^L \check{u}(\xi,T)\, d\xi - \int_x^L \check{v}(\xi,T)\, d\xi
\end{aligned}
\]
and after differentiation
\begin{equation}\label{eqn:fhatplusVolterra}
\begin{aligned}
& \hat{f}^+(g^v(x))
-\tfrac{g^v_x}{g^u_x}(x) \hat{f}^+(g^u(x))
+(\tfrac{g^v_x}{g^u_x})_x(x)\int_x^L \hat{f}^+(g^u(\xi))\, d\xi
\\
&= \check{v}(x,T)
- \tfrac{g^v_x}{g^u_x}(x) \check{u}(x,T)
+(\tfrac{g^v_x}{g^u_x})_x(x)\int_x^L \check{u}(\xi,T)\, d\xi \,.
\end{aligned}
\end{equation}
Further differentiations of (<ref>) yield the identities
\begin{equation}\label{eqn:fhatplusprime}
\begin{aligned}
& (\fhatplusp (g^v(x)) - \fhatplusp (g^u(x))) g^v_x(x)
-2(\tfrac{g^v_x}{g^u_x})_x(x) \hat{f}^+(g^u(x))
+(\tfrac{g^v_x}{g^u_x})_{xx}(x)\int_x^L \hat{f}^+(g^u(\xi))\, d\xi
\\
&= \check{v}_x(x,T)
- \tfrac{g^v_x}{g^u_x}(x) \check{u}_x(x,T)
- 2(\tfrac{g^v_x}{g^u_x})_x(x) \check{u}(x,T)
+(\tfrac{g^v_x}{g^u_x})_{xx}(x)\int_x^L \check{u}(\xi,T)\, d\xi \,.
\end{aligned}
\end{equation}
\begin{equation}\label{eqn:fhatplusprimeprime}
\begin{aligned}
(&\fhatpluspp(g^v(x))g^v_x(x) - \fhatpluspp(g^u(x))g^u_x(x)) g^v_x(x)\\
&\quad=-(\fhatplusp (g^v(x)) - \fhatplusp (g^u(x))) g^v_{xx}(x)
+2(\tfrac{g^v_x}{g^u_x})_x(x)g^u_x(x)\, \fhatplusp (g^u(x))\\
&\qquad+3(\tfrac{g^v_x}{g^u_x})_{xx}(x) \hat{f}^+(g^u(x))
-(\tfrac{g^v_x}{g^u_x})_{xxx}(x)\int_x^L \hat{f}^+(g^u(\xi))\, d\xi
\\
&\qquad+ \check{v}_{xx}(x,T)
- \tfrac{g^v_x}{g^u_x}(x) \check{u}_{xx}(x,T)
- 3(\tfrac{g^v_x}{g^u_x})_x(x) \check{u}_x(x,T)\\
&\qquad- 3(\tfrac{g^v_x}{g^u_x})_{xx}(x) \check{u}(x,T)
+(\tfrac{g^v_x}{g^u_x})_{xxx}(x)\int_x^L \check{u}(\xi,T)\, d\xi \\
&\quad =:\ \Phi(\fhatplusp ,{\hat{f}^+})(x)+ b(x)\,.
\end{aligned}
\end{equation}
To analyse this integral equation we assume that the slopes of $g^u$ and $g^v$ are sufficiently different in the sense that
\begin{equation}\label{eqn:kappa}
\left|\frac{g^v_x(x)}{g^u_x(x)}\right|\leq \kappa<1 \quad x\in\Omega
\end{equation}
and at the same time still ensuring that
\begin{equation}\label{eqn:delta}
g^v_x(x)\geq\delta \quad x\in\Omega.
\end{equation}
Moreover, since the strategy is to control the full $C^2$ norm by bounding $\fhatpluspp$, we assume $f(g^u(0))$, $f(g^v(0))$, $f'(g^u(0))$, $f'(g^v(0))$ to be known so that
\begin{equation}\label{eqn:fhatplus0}
{\hat{f}^+}(g^u(0))={\hat{f}^+}(g^v(0))=\fhatplusp (g^u(0))=\fhatplusp (g^v(0))=0.
\end{equation}
As was found in
[16], [18],
we have to impose range conditions
\begin{equation}\label{eqn:rangecondition_finaltime}
J=g^u(\Omega)\supseteq g^v(\Omega)\,, \quad
J\supseteq u_{ex}(\Omega\times(0,T))\,, \quad J\supseteq v_{ex}(\Omega\times(0,T))\,.
\end{equation}
Of course the roles of $g^u$ and $g^v$ could be reversed here. Note however, that the assumption of $g^u$ being the data with the larger range conforms with the condition (<ref>) of $g^u$ being the function with the steeper slope.
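These conditions on the data are straightforward to verify numerically on sampled $g^u$, $g^v$ before starting the iteration; a small sketch (Python/numpy assumed; the function name is ours):
\begin{verbatim}
import numpy as np

def check_data_conditions(x, gu, gv):
    # Discrete check of g^u_x >= mu > 0, |g^v_x/g^u_x| <= kappa < 1,
    # g^v_x >= delta, and of the range condition range(g^v) within range(g^u).
    gux, gvx = np.gradient(gu, x), np.gradient(gv, x)
    mu, delta = gux.min(), gvx.min()
    kappa = np.abs(gvx / gux).max()
    range_ok = gu.min() <= gv.min() and gv.max() <= gu.max()
    return dict(mu=mu, kappa=kappa, delta=delta,
                slopes_ok=(mu > 0 and delta > 0 and kappa < 1),
                range_ok=range_ok)
\end{verbatim}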
Let $\beta\in[0,1]$, $g^u,g^v\in C^4(\Omega)$, $\check{u}(T),\check{v}(T)\in C^{2,\beta}(\Omega)$, and assume that (<ref>), (<ref>), (<ref>), (<ref>) hold.
Then (<ref>) is uniquely solvable in
$C^{2,\beta}(J)$ and
\[
\|\fhatpluspp\|_{C^{0,\beta}(J)}
\leq C(g^u,g^v) \Bigl(\|\check{u}(T)\|_{C^{2,\beta}(\Omega)}+\|\check{v}(T)\|_{C^{2,\beta}(\Omega)}\Bigr)
\]
and therefore, by (<ref>),
\begin{equation}\label{eqn:estf}
\begin{aligned}
\|\hat{f}^+\|_{C^{2,\beta}(J)}\leq \tilde{C} \|\fhatpluspp\|_{C^{0,\beta}(J)}
&\leq \bar{C}(g^u,g^v) \Bigl(\|\check{u}(T)\|_{C^{2,\beta}(\Omega)}+\|\check{v}(T)\|_{C^{2,\beta}(\Omega)}\Bigr)\,,
\end{aligned}\end{equation}
where $\bar{C}(g^u,g^v)$ only depends on smoothness of $g^u,g^v$, as well as on $\kappa, \delta, \mu$.
By (<ref>), (<ref>) we get, after division by $-g^u_x(x) g^v_x(x)$,
\[
\hat{f}^{+\,\prime\prime}(g^u(x))-\hat{f}^{+\,\prime\prime}(g^v(x))\tfrac{g^v_x}{g^u_x}(x)
= -\frac{1}{g^v_x(x)g^u_x(x)} \Bigl(\Phi(\hat{f}^{+\,\prime},{\hat{f}^+})(x)+b(x)\Bigr)\,.
\]
Hence, taking the supremum over $x\in\Omega$ and using (<ref>)
we obtain, using (<ref>), (<ref>), (<ref>),
\[
\|\hat{f}^{+\,\prime\prime}\|_{C(J)}\leq \kappa \|\hat{f}^{+\,\prime\prime}\|_{C(J)} + \frac{1}{\mu\delta}
\|\Phi(\hat{f}^{+\,\prime},{\hat{f}^+})+b\|_{C(\Omega)}
\]
that is,
\[
\|\hat{f}^{+\,\prime\prime}\|_{C(J)}\leq \frac{1}{\mu\delta(1-\kappa)}
\|\Phi(\hat{f}^{+\,\prime},{\hat{f}^+})\|_{C(\Omega)}\,.
\]
By (<ref>) we can write
\[
\begin{aligned}
\hat{f}^{+\,\prime}(g^u(x))&=\int_0^x \hat{f}^{+\,\prime\prime}(g^u(\xi))g^u_x(\xi)\, d\xi\,, \\
%=\int_0^x \hat{f}^{+\,\prime}(g^u(\xi))g^u_x(\xi)\, d\xi
&=\int_0^x \int_0^\xi \hat{f}^{+\,\prime\prime}(g^u(\sigma))g^u_x(\sigma)\, d\sigma g^u_x(\xi)\, d\xi \,,\\
\int_x^L {\hat{f}^+}(g^u(\xi)) \, d\xi
&= \int_x^L \int_0^\xi \int_0^\sigma \hat{f}^{+\,\prime\prime}(g^u(\tau))g^u_x(\tau)\, d\tau g^u_x(\sigma)\, d\sigma \, d\xi\,,
\end{aligned}
\]
\[
\Phi(\hat{f}^{+\,\prime},{\hat{f}^+}) = K \hat{f}^{+\,\prime\prime}
\]
with a compact operator $K:C(J)\to C(\Omega)$.
On the other hand, with the isomorphism $B:C(J)\to C(\Omega)$ defined by
\[
(Bj)(x):= (j(g^v(x))g^v_x(x)-j(g^u(x))g^u_x(x))g^v_x(x)
= -\Bigl(j(g^u(x))-j(g^v(x))\tfrac{g^v_x(x)}{g^u_x(x)}\Bigl)g^u_x(x) g^v_x(x)
\]
(and noting that $\,\|B^{-1}\|_{C(\Omega)\to C(J)}\leq \frac{1}{\mu\delta(1-\kappa)}$, see above),
we can write the problem of recovering $\hat{f}^{+\,\prime\prime}$ as a second kind Fredholm equation
\[
j - B^{-1}K j = B^{-1}b
\]
for $j=\hat{f}^{+\,\prime\prime}$ and apply the Fredholm alternative in $C(J)$. To this end, we have to prove that the kernel of $I-B^{-1}K$ is trivial. For this purpose we have to be able to conclude $j\equiv0$ from
$J\equiv0$ where
\[
\begin{aligned}
J(x):=&\ \bigl(j(g^v(x))g^v_x(x) - j(g^u(x))g^u_x(x)\bigr) g^v_x(x)\\
&+\Bigl(\int_0^x\! j(g^v(\xi))g^v_x(\xi)\, d\xi -
\int_0^x\! j(g^u(\xi))g^u_x(\xi)\, d\xi\Bigr) g^v_{xx}(x)
\\
&-2(\tfrac{g^v_x}{g^u_x})_x(x) g^u_x(x) \int_0^x\!j(g^u(\xi))g^u_x(\xi)\, d\xi -3(\tfrac{g^v_x}{g^u_x})_{xx}(x)
\int_0^x \!\int_0^\xi\!j(g^u(\sigma))g^u_x(\sigma)\, d\sigma g^u_x(\xi)\, d\xi
\\
&+(\tfrac{g^v_x}{g^u_x})_{xxx}(x)
\int_x^L\!\int_0^\xi\!\int_0^\sigma\!j(g^u(\tau))g^u_x(\tau)\, d\tau g^u_x(\sigma)\, d\sigma \, d\xi\,.
\end{aligned}
\]
We then have
\[
\begin{aligned}
J(x) =&\ \bigl(j(g^v(x))g^v_x(x) - j(g^u(x))g^u_x(x)\bigr) g^v_x(x)
+g^v_{xx}(x)\int_{g^v(0)}^{g^v(x)}\!j(z)\, dz\\
&- \bigl(g^v_{xx}(x)+2(\tfrac{g^v_x}{g^u_x})_x(x)\,g^u_x(x)\bigr)
\int_{g^u(0)}^{g^u(x)}\!j(z)\, dz
-3(\tfrac{g^v_x}{g^u_x})_{xx}(x)\int_{g^u(0)}^{g^u(x)}\!\int_{g^u(0)}^{z} j(y)\, dy \, dz
\\
&+(\tfrac{g^v_x}{g^u_x})_{xxx}(x)\int_x^L
\int_{g^u(0)}^{g^u(\xi)}\!\int_{g^u(0)}^{z} j(y)\, dy \, dz \, d\xi \,.
\end{aligned}
\]
After division by $-(g^u_x\cdot g^v_x)(x)$ and the change of variables $\zeta=g^u(x)$, this is equivalent to
\[
\begin{aligned}
0=& j(\zeta)-\mathrm{a}(\zeta)\,j(\mathrm{b}(\zeta))
+\int_{\underline{g}}^\zeta \mathrm{k} (\zeta,z) j(z)\, dz\\
&+\mathrm{c}(\zeta)\int_{g^u(0)}^\zeta \int_{g^u(0)}^{z} j(y)\, dy \, dz
+\mathrm{d}(\zeta)\int_\zeta^{g^u(L)}\!\frac{1}{g^u_x((g^u)^{-1}(r))}\int_{g^u(0)}^{r} \int_{g^u(0)}^{z} j(y)\, dy \, dz\, dr
\end{aligned}
\]
with
\[
\begin{aligned}
&\mathrm{a}(g^u(x))=(\tfrac{g^v_x}{g^u_x})(x), \qquad\mathrm{b}(g^u(x))=g^v(x), \qquad
\mathrm{c}(g^u(x))=\frac{3(\tfrac{g^v_x}{g^u_x})_{xx}}{g^u_x\cdot g^v_x}(x), \qquad
\mathrm{d}(g^u(x))=-\frac{(\tfrac{g^v_x}{g^u_x})_{xxx}}{g^u_x\cdot g^v_x}(x), \\
&\mathrm{k}(\zeta,z)=
\frac{g^v_{xx}+2(\tfrac{g^v_x}{g^u_x})_x\,g^u_x}{g^u_x\cdot g^v_x}((g^u)^{-1}(\zeta))\,
1\!\mathrm{I}_{[g^u(0),\zeta]}(z)
-\frac{g^v_{xx}}{g^u_x\cdot g^v_x}((g^u)^{-1}(\zeta))\, 1\!\mathrm{I}_{[g^v(0),\mathrm{b}(\zeta)]}(z),
\end{aligned}
\]
that is, to an integral equation of the form
\[
0= j(\zeta)-\mathrm{a}(\zeta)\,j(\mathrm{b}(\zeta))
+\int_{\underline{g}}^\zeta \widetilde{\mathrm{k}} (\zeta,z) j(z)\, dz
\]
for all $\zeta\in [g^u(0),g^u(L)]$.
Since this is a homogeneous second kind Volterra integral equation, using Gronwall's inequality and the fact that $0\leq\mathrm{a}\leq\kappa<1$, it follows that $j\equiv0$.
Thus we have
\[
\|\hat{f}^{+\,\prime\prime}\|_{C(J)}\leq C \|b\|_{C(\Omega)}\,.
\]
Moreover we can replace $C(J)$ by $C^\beta(J)$ since the operator $B$ defined above is an isomorphism also between $C^\beta(J)$ and $C^\beta(\Omega)$ as $Bj=b$ implies
\[
\begin{aligned}
&\frac{|j(g^u(x))-j(g^u(y))|}{|g^u(x)-g^u(y)|^\beta} \\
&\leq \frac{g^v_x(x)}{g^u_x(x)}\frac{|j(g^v(x))-j(g^v(y))|}{|g^u(x)-g^u(y)|^\beta}
+\frac{\bigl|\frac{g^v_x}{g^u_x}(x)-\frac{g^v_x}{g^u_x}(y)\bigr|}{|g^u(x)-g^u(y)|^\beta}\, |j(g^v(y))|
+\frac{\bigl|\frac{b}{g^u_x g^v_x}(x)-\frac{b}{g^u_x g^v_x}(y)\bigr|}{|g^u(x)-g^u(y)|^\beta}
\end{aligned}
\]
hence, by taking the supremum over $x,y\in\Omega$ and using the range condition as well as the fact that by (<ref>), $|g^u(x)-g^u(y)|^\beta\geq |g^v(x)-g^v(y)|^\beta$,
\[
\begin{aligned}
{[j]}_{C^{0,\beta}(J)}
&\leq \frac{1}{1-\kappa} \Bigl( C_1 \|j\|_{C(J)} + \|b\|_{C^{0,\beta}(\Omega)}\Bigr)
\leq \frac{1}{1-\kappa} \Bigl( \frac{C_1}{\mu\delta(1-\kappa)} \|b\|_{C(\Omega)} + \|b\|_{C^{0,\beta}(\Omega)}\Bigr)
\end{aligned}
\]
under the smoothness assumptions made on $g^u$, $g^v$.
Here in case $\beta=0$, $C^{0,0}(\Omega)$ could as well be replaced by $L^\infty(\Omega)$, so we have the choice of spaces
\[
X_f= C^{2,\beta}(J) \mbox{ for some }\beta\in[0,1]\mbox{ or }X_f=W^{2,\infty}(J)
\]
Now we bound $\hat{a}^+$ in
\begin{equation}\label{eqn:diffu1}
(\hat{a}^+ g^u_x)_x(x)+\hat{f}^+(g^u(x))=\check{u}(x,T) \quad x\in\Omega
\end{equation}
(cf. (<ref>)), in terms of $\check{u}$, $\check{v}$ and $\hat{f}^+$ where the latter is then bounded by means of (<ref>).
Here we first of all try to choose the space for $\hat{a}^+$ analogously to [17] as
\[
X_a = \{d\in H^1(\Omega)\cap L^\infty(\Omega)\,:\, d(L)=0\} \,,
\]
where we have assumed to know $a$ at one of the boundary points (we take the right hand one again) for otherwise (<ref>) would not uniquely determine $\hat{a}^+$.
The estimate can simply be carried out as follows.
After integration with respect to space and dividing by $g^u_x$ we get
\begin{equation}\label{eqn:idahatplus}
\begin{aligned}
\hat{a}^+(x)&=\frac{1}{g^u_x(x)}\int_x^L (\check{u}(\xi,T)-\hat{f}^+(g^u(\xi)))\, d\xi\\
\hat{a}^+_x(x)&=
-\frac{g^u_{xx}(x)}{(g^u_x(x))^2}\int_x^L (\check{u}(\xi,T)-\hat{f}^+(g^u(\xi)))\, d\xi
-\frac{1}{g^u_x(x)}\bigl(\check{u}(x,T)-\hat{f}^+(g^u(x))\bigr)
\end{aligned}
\end{equation}
and therefore
\[
\begin{aligned}
\|\hat{a}^+\|_{L^\infty(\Omega)}&\leq\frac{1}{\mu}\bigl(\|\check{u}(\cdot,T)\|_{L^1(\Omega)}
+L\,\|\hat{f}^+\|_{L^\infty(J)}\bigr)\\
\|\hat{a}^+_x\|_{L^2(\Omega)}&\leq
\frac{\sqrt{L}\|g^u_{xx}\|_{L^2(\Omega)}}{\mu^2}\bigl(\|\check{u}(\cdot,T)\|_{L^1(\Omega)}
+L\,\|\hat{f}^+\|_{L^\infty(J)}\bigr)\\
&\quad +\frac{1}{\mu}\bigl(\|\check{u}(\cdot,T)\|_{L^2(\Omega)}+\sqrt{L}\,\|\hat{f}^+\|_{L^\infty(J)}\bigr)\,.
\end{aligned}
\]
This gives an obvious mismatch with the much higher norm of $\hat{f}$ (and consequently of $\check{u}$, $\check{v}$) in the estimate (<ref>), which, however, we know from [16] and [18] to be needed.
Also, Lemma <ref> would not work when applied to the first in place of the second derivative of $\hat{f}^+$, as both $\fhatplusp $ terms in the left hand side of (<ref>) have the same factor.
Thus we next try a space for $a^+$ that is more aligned to estimate (<ref>).
From (<ref>) we get, using the fact that $C^{2,\beta}(\Omega)$ is a Banach algebra and $\|j(g)\|_{C^{2,\beta}(\Omega)}\leq \|j\|_{C^{2,\beta}(J)}\|g\|_{C^{2,1}(\Omega)}$,
\begin{equation}\label{eqn:esta_withf}
\begin{aligned}
\|\hat{a}^+\|_{C^{3,\beta}(\Omega)}\leq
\Bigl(L\|\bigl(\tfrac{1}{g^u_x}\bigr)_x\|_{C^{2,\beta}(\Omega)}+ \|\tfrac{1}{g^u_x}\|_{C^{2,\beta}(\Omega)}\Bigr) \bigl(\|\check{u}(T)\|_{C^{2,\beta}(\Omega)}+\|g^u\|_{C^{2,1}(\Omega)} \|\hat{f}^+\|_{C^{2,\beta}(J)}\bigr)
\end{aligned}
\end{equation}
and thus, together with (<ref>),
\begin{equation}\label{eqn:esta}
\begin{aligned}
\|\hat{a}^+\|_{C^{3,\beta}(\Omega)}\leq \bar{C}(g^u,g^v) \Bigl(\|\check{u}(T)\|_{C^{2,\beta}(\Omega)}+\|\check{v}(T)\|_{C^{2,\beta}(\Omega)}\Bigr)\,,
\end{aligned}
\end{equation}
where again $\bar{C}(g^u,g^v)$ only depends on the smoothness of $g^u,g^v$, as well as on $\kappa$, $\delta$, $\mu$.
This completes step (a).
For $g^u\in C^{4,\beta}(\Omega)$, $g^v\in C^4(\Omega)$ satisfying (<ref>), (<ref>), (<ref>), (<ref>), there exists a constant $\bar{C}(g^u, g^v)$ depending only on
$\|g^u\|_{C^{4,\beta}(\Omega)}$, $\|g^v\|_{C^4(\Omega)}$, $\mu$, $\kappa$, $\delta$ such that the operator $\mathbb{M}:X_a\times X_f\to C^{2,\beta}(\Omega)$ as defined in (<ref>) with
\begin{equation}\label{eqn:XaXf}
\begin{aligned}
&X_a=\{d\in C^{3,\beta}(\Omega)\, : \, d(L)=0\}\,,\\
&X_f=\{j\in C^{2,\beta}(J)\, : \, j(g^u(0))=j(g^v(0))=j'(g^u(0))=j'(g^v(0))=0\}
\end{aligned}
\end{equation}
is bounded and invertible with
\[
\|\mathbb{M}^{-1}\|_{C^{2,\beta}(\Omega)\to X_a\times X_f}\leq \bar{C}(g^u, g^v)\,.
\]
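Numerically, the inversion of the $a$-part of $\mathbb{M}$ is nothing more than the one-dimensional integration in the explicit identity for $\hat{a}^+$ above; a sketch of this quadrature (Python/numpy; the function name is ours, and the residual $\check{u}(\cdot,T)-\hat{f}^+(g^u(\cdot))$ is assumed to be sampled on the grid):
\begin{verbatim}
import numpy as np

def a_plus_from_projection(x, gu, residual):
    # Evaluate a+(x) = (1/gu_x(x)) * int_x^L residual(xi) dxi by trapezoidal
    # quadrature, with residual = u_t(.,T) - f+(gu) - r(.,T) sampled on x.
    # Requires gu_x >= mu > 0 on the grid (condition on the data).
    gux = np.gradient(gu, x)
    cum = np.concatenate([[0.0],
          np.cumsum(0.5 * (residual[1:] + residual[:-1]) * np.diff(x))])
    return (cum[-1] - cum) / gux            # tail integral over (x, L)
\end{verbatim}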
This part of the convergence proof would directly carry over to the case of a fractional
time derivative $D_t^\alpha$ in place of $D_t$ in
(<ref>), (<ref>).
However, further bounding $\bar{u}=D^\alpha u$ using the fact that it satisfies
the pde
$D_t^\alpha \bar{u}-(a \bar{u}_x)_x=D_t^\alpha f(u)+ D_t^\alpha r^u =D_t^\alpha (\frac{f(u)}{u} \cdot u )+ D_t^\alpha r^u$ becomes problematic due to the lack of a chain or product rule for the fractional time derivative.
Extension to higher space dimensions does not appear to be possible, since the strategy of eliminating $\hat{a}^+$ relies on one dimensional integration.
Besides some assumptions directly on the searched-for coefficients $a$ and $f$ (see (<ref>)), the proof of Proposition <ref> shows that we also need to impose some conditions on the data $g^u$, $g^v$. The monotonicity assumption (<ref>) allows for inversion of $g^u$ in order to recover values of $f$ from values of $f(g^u)$, avoiding potentially contradictory double or multiple assignments; likewise for (<ref>) and $g^v$.
Also, it is clear that the range of the data must cover all values that will actually appear as arguments in $f$; note that the range condition (<ref>) is only imposed on the exact states. The range of the data is therefore the natural maximal domain for any reasonable reconstruction of $f$ and evaluation of $f$ outside this domain must be avoided since it could lead to false results in a forward simulation.
Finally, (<ref>) is a condition on sufficient deviation of the slopes of $g^u$ and $g^v$, in order to allow for unique and stable recovery of $f^+$ according to Lemma <ref>. It is probably the most technical of these conditions and the most difficult to realize. Still, note that it is compatible with (<ref>) in the sense that the steeper function $g^u$ is the one with the larger range. Note that no such condition as (<ref>) will be needed in the analysis of the final time - time trace observation setting of the next section.
All these conditions can in principle be achieved by appropriate excitations $r^u$, $r^v$ and their validity can be checked during the reconstruction process. Maximum and comparison principles can provide a good intuition for this choice. However, the concrete design of excitations to enable / optimize reconstruction is certainly a topic on its own.
Step (b), smallness of $\mathbb{F}(a,f)-\mathbb{F}(a_{ex},f_{ex})$:
The differences $\hat{u}$, $\hat{v}$ along with $\bar{u}=u_t$, $\bar{v}=v_t$ solve
\begin{eqnarray}
&&D_t\hat{u}-(a_{ex} \hat{u}_x)_x+q^u\,\hat{u} = (\hat{a} u_x)_x +\hat{f}(u) \quad t\in(0,T)\,, \quad \hat{u}(0)=0 \label{eqn:uhat}\\
&&D_t\hat{v}-(a_{ex} \hat{v}_x)_x+q^v\,\hat{v} = (\hat{a} v_x)_x +\hat{f}(v) \quad t\in(0,T)\,, \quad \hat{v}(0)=0 \label{eqn:vhat}
\end{eqnarray}
where
\[
\begin{aligned}
q^u&=-\frac{f_{ex}(u)-f_{ex}(u_{ex})}{u-u_{ex}}=-\int_0^1f_{ex}'(u_{ex}+\theta\hat{u})\, d\theta
\,, \\
q^v&=-\frac{f_{ex}(v)-f_{ex}(v_{ex})}{v-v_{ex}}=-\int_0^1f_{ex}'(v_{ex}+\theta\hat{v})\, d\theta\,,
\end{aligned}
\]
and
\begin{eqnarray*}
&&\bar{u}_t-(a \bar{u}_x)_x-f'(u)\bar{u}=D_tr^u \quad t\in(0,T)\,, \quad
\bar{u}(0)=(a u_{0x})_x+f(u_0)+r^u(0)\\
&&\bar{v}_t-(a \bar{v}_x)_x-f'(v)\bar{v}=D_tr^v \quad t\in(0,T)\,, \quad
\bar{v}(0)=(a v_{0x})_x+f(v_0)+r^v(0)\,.
\end{eqnarray*}
Moreover we can write $\check{u}=D_t\hat{u}$, $\check{v}=D_t\hat{v}$ as solutions to the pdes
\begin{eqnarray}
&&\check{u}_t-(a_{ex} \check{u}_x)_x-f_{ex}'(u_{ex})\,\check{u} = (\hat{a} \bar{u}_x)_x
+\Bigl(\hat{f}'(u) + \int_0^1f_{ex}''(u_{ex}+\theta\hat{u})\, d\theta\hat{u}\Bigr) \bar{u}
\,, \label{equ:ucheck}\\
&&\check{v}_t-(a_{ex} \check{v}_x)_x-f_{ex}'(v_{ex})\,\check{v} = (\hat{a} \bar{v}_x)_x
+\Bigl(\hat{f}'(v) + \int_0^1f_{ex}''(v_{ex}+\theta\hat{v})\, d\theta\hat{v}\Bigr) \bar{v}
\label{equ:vcheck}
\end{eqnarray}
with initial conditions
\begin{equation}\label{eqn:initucheckvcheck}
\check{u}(0)= (\hat{a} u_{0x})_x +\hat{f}(u_0)\,, \qquad
\check{v}(0)= (\hat{a} v_{0x})_x +\hat{f}(v_0)\,,
\end{equation}
where we have used the identity
$f'(u)D_tu-f_{ex}'(u_{ex})D_tu_{ex}
= f_{ex}'(u_{ex})\,D_t\hat{u}
+ \Bigl((f-f_{ex})'(u)+(f_{ex}'(u)-f_{ex}'(u_{ex}))\Bigr)D_tu$.
We can estimate $\check{u}$, $\check{v}$ from (<ref>), (<ref>), (<ref>) in the same way as we estimate $z$ in the estimate preceding Theorem 3.4 of [18],
taking into account the additional term $(\hat{a} \bar{u}_x)_x$ in the right hand side and $(\hat{a} u_{0x})_x$ in the initial condition, which yields the following.
We split the solution of (<ref>) as $\check{u}=\check{u}^r+\check{u}^0$ into a part $\check{u}^r$ satisfying the inhomogeneous pde
\[
\check{u}^r_t-(a_{ex} \check{u}^r_x)_x - {f_{ex}}'(u_{ex})\, \check{u}^r = (\hat{a} \bar{u}_x)_x
+\Bigl(\hat{f}'(u) + \int_0^1f_{ex}''(u_{ex}+\theta\hat{u})\, d\theta\hat{u}\Bigr) \bar{u}
\]
with homogeneous initial conditions $\check{u}^r(x,0)=0$
and a part $\check{u}^0$ satisfying the homogeneous pde
$\check{u}^0_t-(a_{ex} \check{u}^0_x)_x - {f_{ex}}'(u_{ex}) \check{u}^0 = 0$ with inhomogeneous initial conditions
\begin{equation}\label{eqn:checku0_init}
\check{u}^0 (x,0)=(\hat{a} u_{0x})_x(x) +\hat{f}(u_0(x)) \quad x\in\Omega\,.
\end{equation}
Using a classical estimate from the book by Friedman <cit.>, and abbreviating $\Theta_t=(0,t)\times\Omega$, $\Theta=(0,T)\times\Omega$,
we estimate
\begin{equation}\label{eqn:estzr0}
\begin{aligned}
\|\check{u}^r\|_{C^{2,\beta}(\Theta_t)}
&\leq
\sum_{|m|\leq2}\|D^m_x \check{u}^r\|_{C^{0,\beta}(\Theta_t)}\\
&\leq K\Bigl(\|(\hat{a} \bar{u}_x)_x\|_{C^{0,\beta}(\Theta_t)}+
\|\int_0^1 {f_{ex}}''(u_{ex}+\theta\hat{u})\, d\theta \, \hat{u} + \hat{f}'(u)\|_{C^{0,\beta}(\Theta)} \|\bar{u}\|_{C^{0,\beta}(\Theta_t)}\Bigr) \\
&\leq K\Bigl(\|\hat{a}\|_{C^{2,\beta}(\Theta_t)}+
\|{f_{ex}}''\|_{C^{0,\beta}(J)}\|\hat{u}\|_{C^{0,\beta}(\Theta_t)}
+\|\hat{f}'\|_{C^{0,\beta}(J)}
\Bigr)
\|\bar{u}\|_{C^{2,\beta}(\Theta_t)}\,,
\end{aligned}
\end{equation}
and consequently
\[
\|\check{u}\|_{C^{2,\beta}(\Theta_t)}
\leq \|\check{u}^r\|_{C^{2,\beta}(\Theta_t)}+\|\check{u}^0\|_{C^{2,\beta}(\Theta_t)}
\leq \sum_{|m|\leq2}\|D^m_x \check{u}^r\|_{C^{0,\beta}(\Theta_t)}
+\|\check{u}^0\|_{C^{2,\beta}(\Theta_t)}\,.
\]
Note that we actually estimate $\|\check{u}(T)\|_{C^{2,\beta}(\Omega)}$ by $\|\check{u}\|_{C^{2,\beta}(\Omega\times(0,T))}:=\sum_{|m|\leq2}
\|D_x^m\check{u}\|_{C^{\beta}(\Omega\times(0,T))}$
with the definition of $\|\cdot\|_{C^{\beta}(\Omega\times(0,T))}$ from [9] (note that $\|\cdot\|_{C^{2,\beta}(\Omega\times(0,T))}$ has a different meaning in [9]).
Here, $\hat{u}$ solves (<ref>), so applying again <cit.> we obtain
\[
\begin{aligned}
\|\hat{u}\|_{C^{2,\beta}(\Theta_t)}&\leq
\sum_{|m|\leq2}\|D^m_x \hat{u}\|_{C^{0,\beta}(\Theta_t)}
\leq K \|(\hat{a}u_x)_x+\hat{f}(u)\|_{C^{0,\beta}(\Theta_t)} \\
&\leq K \Bigl(\|\hat{a}\|_{C^{2,\beta}(\Theta_t)}\|\hat{u}\|_{C^{2,\beta}(\Theta_t)}+\|\hat{f}\|_{C^{0,\beta}(J)} (1+\|u_{ex}\|_{C^{0,1}(\Theta_t)}+\|\hat{u}\|_{C^{0,1}(\Theta_t)})\Bigr)\\
&\leq K \Bigl(\|\hat{a}\|_{C^{2,\beta}(\Theta_t)}+\|\hat{f}\|_{C^{0,\beta}(J)}\Bigr) (1+\|u_{ex}\|_{C^{0,1}(\Theta_t)}+\|\hat{u}\|_{C^{2,\beta}(\Theta_t)}),
\end{aligned}
\]
thus, for $\|\hat{a}\|_{C^{2,\beta}(\Theta_t)}+\|\hat{f}\|_{C^{0,\beta}(J)}\leq\rho_0<\frac{1}{K}$,
\[
\|\hat{u}\|_{C^{0,1}(\Theta_t)}
\leq \|\hat{u}\|_{C^{2,\beta}(\Theta_t)}
\leq \frac{K\,(1+\|u_{ex}\|_{C^{0,1}(\Theta)})}{1-\rho_0 K}
\Bigl(\|\hat{a}\|_{C^{2,\beta}(\Theta_t)}+\|\hat{f}\|_{C^{0,\beta}(J)}\Bigr)\,.
\]
Altogether, for $\|\hat{a}\|_{C^{2,\beta}(\Theta_t)}+\|\hat{f}'\|_{C^{0,\beta}(J)}\leq\rho_1$ with $\rho_1<\frac{1}{K}$, we end up with an estimate for $\check{u}^r$ of the form
\begin{equation}\label{eqn:estucheckr}
\begin{aligned}
\|\check{u}^r\|_{C([0,t];C^{2,\beta}(\Omega))}
\leq &C(K,\rho_0,\rho_1,\|{f_{ex}}''\|_{C^{0,\beta}(J)}) \Bigl(
\|\hat{a}\|_{C^{2,\beta}(\Omega)}+\|\hat{f}\|_{C^{1,\beta}(J)}\Bigr)\,
\|\bar{u}\|_{C^{2,\beta}(\Theta_t)}\,.
\end{aligned}
\end{equation}
To estimate $\check{u}^0$, we do not use the same result from <cit.>, since this would not give a contraction estimate with respect to $\hat{a}$, $\hat{f}$. Rather, we attempt to employ dissipativity of the equation; however, this fails as in [18], as we will now illustrate.
Let us first of all point out that the difficulty here lies in the fact that the coefficient $f_{ex}'(u_{ex})$ is time-dependent and thus the abstract ODE corresponding to (<ref>) is non-autonomous.
Thus semigroup decay estimates would require the zero order coefficient to be
constant in time or at least time periodic,
see, e.g., <cit.>. To illustrate why indeed the general
form of $f_{ex}'(u_{ex})$ most probably prohibits decay of solutions, we follow
a perturbation approach, with a time constant potential
$q\approx -f_{ex}'(u_{ex})$, which we assume to be positive.
Using the series expansion of $\check{u}^0$ in terms of the eigenvalues
and eigenfunctions $(\lambda_n,\phi_n)_{n\in\mathbb{N}}$ of the elliptic operator
$\mathbb{L}$ defined by
$\mathbb{L}v=-(a_{ex}v_x)_x+q\,v$ as well as the induced Hilbert spaces
\[
\dot{H}^\sigma(\Omega)=\{v\in L^2(\Omega)\ : \ \sum_{n=1}^\infty \lambda_n^{\sigma/2} \langle v,\phi_n\rangle \phi_n \, \in L^2(\Omega)\}
\]
where $\langle\cdot,\cdot\rangle$ denotes the $L^2$ inner product on $\Omega$, with the norm
\[
\|v\|_{\dot{H}^\sigma(\Omega)}
= \Bigl(\sum_{n=1}^\infty \lambda_n^\sigma \langle v,\phi_n\rangle^2 \Bigr)^{\!\frac{1}{2}}
\]
that is equivalent to the $H^\sigma(\Omega)$ Sobolev norm, we can write
\begin{equation}\label{eqn:checku0}
\begin{aligned}
\check{u}^0(x,t)=&\sum_{n=1}^\infty \bigl(e^{-\lambda_n t} \langle (\hat{a} u_{0x})_x \!+\!\hat{f}(u_0),\phi_n\rangle
\!+\! \int_0^t e^{-\lambda_n (t-s)} \langle (f_{ex}'(u_{ex}(s))+q)\check{u}^0(s),\phi_n\rangle\, ds\bigr) \phi_n(x)\\
=:&\ \check{u}^{0,1}(x,t)+\check{u}^{0,2}(x,t)\,.
\end{aligned}
\end{equation}
From Sobolev's embedding theorem with $\sigma > d/2+2+\beta = 5/2 + \beta$, we obtain for these two parts
\begin{equation}\label{eqn:estucheck01C2beta}
\begin{aligned}
\|\check{u}^{0,1}(t)\|_{C^{2,\beta}(\Omega)}&\leq C_{\dot{H}^\sigma,C^{2,\beta}}^\Omega
\Bigl(\sum_{n=1}^\infty \lambda_n^\sigma e^{-2\lambda_n t} \langle (\hat{a} u_{0x})_x+\hat{f}(u_0),\phi_n\rangle^2\Bigr)^{1/2}\\
&\leq C_{\dot{H}^\sigma,C^{2,\beta}}^\Omega
\sup_{\lambda\geq\lambda_1}\lambda^{\sigma/2-1} e^{-\lambda t}
\|(\hat{a} u_{0x})_x+\hat{f}(u_0)\|_{\dot{H}^2(\Omega)}\\
&\leq C_{\dot{H}^\sigma,C^{2,\beta}}^\Omega
\Psi(t;\sigma,\lambda_1)
\Bigl(\|\hat{a}\|_{C^2(\Omega)} \|u_{0\,xx}\|_{\dot{H}^2(\Omega)} \\
&\qquad +\|\hat{a}\|_{C^3(\Omega)} \|u_{0\,x}\|_{\dot{H}^2(\Omega)}
+ C_\mathbb{L}\|\hat{f}\|_{C^2(J)} \|u_0\|_{\dot{H}^2(\Omega)}\Bigr)
\end{aligned}
\end{equation}
\begin{equation}\label{eqn:estucheck02C2beta}
\begin{aligned}
\|&\check{u}^{0,2}(t)\|_{C^{2,\beta}(\Omega)}\leq C_{\dot{H}^\sigma,C^{2,\beta}}^\Omega
\Bigl(\int_0^t\sum_{n=1}^\infty \lambda_n^\sigma e^{-2\lambda_n (t-s)}
\langle ({f_{ex}}'(u_{ex}(s))+q)\check{u}^0(s),\phi_n\rangle^2\, ds\Bigr)^{1/2}\\
&\leq C_{\dot{H}^\sigma,C^{2,\beta}}^\Omega C(\Omega)
\left(\int_0^t\Psi(t-s;\sigma,\lambda_1)\|{f_{ex}}'(u_{ex}(s))+q\|_{H^2(\Omega)}^2\, ds \right)^{1/2}
\|\check{u}^0\|_{C([0,t];C^2(\Omega))}\,,
\end{aligned}
\end{equation}
with $C_{\mathbb{L}}$ such that
$\|\mathbb{L} j(v)\|_{L^2(\Omega)}\leq C_{\mathbb{L}} \|j\|_{C^2(\mathbb{R})} \|v\|_{\dot{H}^2(\Omega)}$ for all $j\in C^2(\mathbb{R})$, $v\in \dot{H}^2(\Omega)$ and
\begin{equation}\label{eqn:Psi}
\Psi(t;\sigma,\lambda_1) = \begin{cases}
(\sigma/2-1)^{\sigma/2-1} e^{1-\sigma/2}\, t^{1-\sigma/2}&\mbox{ for }t\leq \frac{\sigma-2}{2\lambda_1}\\
\lambda_1^{\sigma/2-1} e^{-\lambda_1 t}&\mbox{ for }t\geq \frac{\sigma-2}{2\lambda_1}
\,.\end{cases}
\end{equation}
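For later numerical use, $\Psi$ is elementary to evaluate; a direct transcription of (<ref>) (Python/numpy assumed; the function name is ours):
\begin{verbatim}
import numpy as np

def Psi(t, sigma, lam1):
    # Two-case formula for Psi(t; sigma, lambda_1):
    # algebraic blow-up t^(1 - sigma/2) near t = 0 (singular for sigma > 2),
    # exponential decay for t >= (sigma - 2) / (2 lambda_1).
    t = np.asarray(t, dtype=float)
    tstar = (sigma - 2.0) / (2.0 * lam1)
    small = (sigma / 2 - 1) ** (sigma / 2 - 1) * np.exp(1 - sigma / 2) \
            * t ** (1 - sigma / 2)
    large = lam1 ** (sigma / 2 - 1) * np.exp(-lam1 * t)
    return np.where(t <= tstar, small, large)
\end{verbatim}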
The appearance of $\|\check{u}^0\|_{C([0,t];C^2(\Omega))}$ on the right hand side of (<ref>) forces us to look at the supremum of $\|\check{u}^0(t)\|_{C^2(\Omega)}$ over $t\in[0,T]$, which, however, due to the singularity of $\Psi$ at $t=0$ cannot be estimated appropriately by (<ref>).
Thus the convolution term in (<ref>) inhibits exponential decay as would be expected from the estimate of the first term in (<ref>).
Thus we have no means of establishing contractivity in the presence of the
initial term (<ref>) and therefore will assume that
$u_0$ and $f(0)$ vanish.
Thus, putting this all together, we end up with
\begin{equation}\label{eqn:estucheck}
\begin{aligned}
\|\check{u}\|_{C([0,t];C^{2,\beta}(\Omega))}
&\leq C\,\Bigl(\|\hat{a}\|_{C^{2,\beta}(\Omega)}+\|\hat{f}\|_{C^{1,\beta}(J)}\Bigr)\,
\|u_{ex,t}\|_{C^{2,\beta}(\Theta)}
\,,
\end{aligned}
\end{equation}
provided $\|\hat{a}\|_{C^{2,\beta}(\Theta_t)}+\|\hat{f}\|_{C^{1,\beta}(J)}\leq\rho$ small enough.
The same estimate can be used for bounding $\|\check{v}\|_{C^{2,\beta}(\Omega\times(0,T))}$.
This together with (<ref>), (<ref>) yields contractivity of $\mathbb{T}$ on a ball of sufficiently small radius $\rho$.
Under the assumptions of Proposition <ref> and if additionally
$\|u_{ex,t}\|_{C^{0,\beta}(\Theta)}$ is sufficiently small and $u_0=0$, $f_{ex}(0)=0$, $g^u(0)=0$, there exists $\rho>0$ such that $\mathbb{T}$ is a self-mapping on $B_\rho^{X_a\times X_f}(a_{ex},f_{ex})$ and the convergence estimate
\[
\|\mathbb{T}(a,f)-\mathbb{T}(a_{ex},f_{ex})\|_{X_a\times X_f}\leq
q \|(a,f)-(a_{ex},f_{ex})\|_{X_a\times X_f}
\]
holds for some $q\in(0,1)$ and $X_a$, $X_f$ as in (<ref>).
In <cit.> we have
alternatively proven contractivity for monotone $f$.
However, the approach taken there does not seem to go through here
for the following reasons.
In addition to the exponential decay of $\bar{u}$,
we would need to show exponential decay of $(\hat{a}\, \bar{u}_x)_x$.
Another obstacle is that the strategy of showing contractivity of $\mathbb{T}$ by exploiting the regularity gain
\[
\begin{aligned}
\|\mathbb{T}^2(a,f)-\mathbb{T}^2(a_{ex},f_{ex})\|_{C^{3,\beta}(\Omega)\times C^{2,\beta}(J)}
&\leq C(g^u,g^v)\|(\check{u},\check{v})\|_{C^{2,\beta}(\Omega)^2}\\
&\leq \tilde{C}\,C(g^u,g^v)\|\mathbb{T}(a,f)-\mathbb{T}(a_{ex},f_{ex})\|_{C^{1,\beta}(\Omega)\times C^{1,\beta}(J)}
\end{aligned}
\]
does not work here.
This is because we have no possibility to estimate the $C^{1,\beta}(J)$
norm of the $f$ part of
$\|\mathbb{T}(a,f)-\mathbb{T}(a_{ex},f_{ex})\|_{C^{1,\beta}(\Omega)\times C^{1,\beta}(J)}$ in terms of the $C^{1,\beta}(\Omega)$ norm of $\check{u},\check{v}$. (So far the Volterra integral equation approach from step (a) only works for differentiability order $\geq2$.)
However, the maximal parabolic regularity approach in [18] relies on the embedding $W^{2,p}(\Omega)\to C^{1,\beta}(\Omega)$ applied to $\check{u},\check{v}$ and therefore only works with the $C^{1,\beta}$ norm of $\hat{f}$.
§.§ Final time / time trace observations of one or two states
Identify $a(x)$, $f(u)$ in (<ref>), (<ref>)
with homogeneous impedance boundary conditions (<ref>),
from observations
\begin{equation}\label{eqn:mixeddata}
g(x)=u(x,T), \quad x\in\Omega, \qquad h(t)=v(\tilde{x},t), \quad t\in\Theta\subseteq[0,T]\,,
\end{equation}
for some $\tilde{x}\in\overline{\Omega}$, typically on the boundary. We here assume that $\tilde{x}$ is the right hand boundary point $L$; other cases can be treated analogously.
However, if $\tilde{x}$ is an interior point, then a term that we here eliminate by using the boundary condition would additionally have to be taken into account in the analysis.
More precisely, we make use of the right hand boundary condition that implies
$a^+_x(L) v_x(L,t;a,f) =- \frac{a^+_x(L)}{a^+(L)} \gamma^v v(L,t;a,f)$ and replace $v(L,t;a,f)$ by the data $h(t)$ here.
Thus, again projecting onto the measurement manifolds $\Omega\times\{T\}$ and $\{L\}\times(0,T)$,
we define the fixed point operator $\mathbb{T}$ by $(a^+,f^+)=\mathbb{T}(a,f)$ with
\begin{equation}\label{eqn:fp_fiti-titr}
\begin{aligned}
&(a^+ g_x)_x(x)+f^+(g(x))=D_tu(x,T;a,f)-r^u(x,T) \quad x\in\Omega\\
&a^+(L) v_{xx}(L,t;a,f) - \frac{a^+_x(L)}{a^+(L)} \gamma^v h(t)+f^+(h(t))=h_t(t)-r^v(L,t) \quad t\in\Theta\,,
\end{aligned}
\end{equation}
which, when assuming that $a^L=a_{ex}(L)$ is known and $\gamma^v(L)=0$, simplifies to
\begin{eqnarray}
(a^+ g_x)_x(x)+f^+(g(x))&=&D_tu(x,T;a,f)-r^u(x,T) \quad x\in\Omega \label{eqn:faplus_mixed_u}\\
f^+(h(t))&=&h_t(t)-r^v(L,t)-a^L v_{xx}(L,t;a,f) \quad t\in\Theta\,. \label{eqn:faplus_mixed_v}
\end{eqnarray}
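Equation (<ref>) also shows how $f^+$ would be evaluated in practice: each time trace value $h(t)$ delivers the value of $f^+$ at the point $h(t)\in J$. A minimal sketch (Python/numpy; the function and argument names are ours, and the trace $v_{xx}(L,\cdot;a,f)$ is assumed to come from a forward solve):
\begin{verbatim}
import numpy as np

def f_plus_from_trace(t, h, rv_L, vxx_L, aL):
    # Tabulate f+ on J = h(Theta) from
    #   f+(h(t)) = h'(t) - r^v(L,t) - a^L v_xx(L,t;a,f),
    # with vxx_L from a forward solve; h strictly monotone, so each
    # value in J is attained exactly once.
    ht = np.gradient(h, t)
    vals = ht - rv_L - aL * vxx_L
    order = np.argsort(h)            # sort so (h, f+) is a graph on J
    return h[order], vals[order]
\end{verbatim}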
The recursion for the differences $\hat{a}^+=a^+-a_{ex}$, $\hat{f}^+=f^+-f_{ex}$, $\hat{a}=a-a_{ex}$
reads as
\begin{eqnarray}
(\hat{a}^+ g_x)_x(x)+\hat{f}^+(g(x))&=&D_t\hat{u}(x,T) \quad x\in\Omega\label{eqn:ahat_mixed}\\
\hat{f}^+(h(t))&=&-a^L \hat{v}_{xx}(L,t;a,f) \quad t\in\Theta\,,\label{eqn:fhat_mixed}
\end{eqnarray}
where $\hat{u}$, $\hat{v}$ solve (<ref>), (<ref>), that is
\[
\mathbb{M}(\hat{a}^+,\hat{f}^+)=\mathbb{F}(a,f)-\mathbb{F}(a_{ex},f_{ex})\,,
\]
with the operators $\mathbb{M}$ and $\mathbb{F}$ defined by
\[
\mathbb{M}(a^+,f^+)=
\left(\begin{array}{c}(a^+ g_x)_x+f^+(g^u)\\f^+(h)\end{array}\right)\,, \quad
\mathbb{F}(a,f)=
\left(\begin{array}{c}u_t(\cdot,T;a,f)\\ -a^Lv_{xx}(L,\cdot;a,f)\end{array}\right)\,.
\]
Inverting $\mathbb{M}$ is therefore much easier this time. We can first invert (<ref>) for $f^+$ relying on the range condition
\begin{equation}\label{eqn:rangecondition_mixed}
J=h(\Theta)\,, \quad
J\supseteq u_{ex}(\Omega\times(0,T))\,, \quad J\supseteq v_{ex}(\Omega\times(0,T))
\end{equation}
(in place of (<ref>)) and then compute $a^+$ from (<ref>) in exactly the same way as we have done before, cf. (<ref>). It will turn out that a lower regularity estimate of $a^+$ and $f^+$ (or more precisely, of $\hat{a}^+$ and $\hat{f}^+$, in order to prove contractivity) suffices, that is, we will use the function spaces
$X_a=C^{2,\beta}(\Omega)$, $X_f=C^{1,\beta}(J)$ (with further boundary conditions).
Step (a), bounding $\mathbb{M}^{-1}$:
From (<ref>), we get, after differentiation,
\[
\|\fhatplusp (h)\, D_th\|_{C^{0,\beta}(\Theta)}\leq |a^L| \|D_t\hat{v}_{xx}\|_{C^{0,\beta}(\Omega\times(0,T))}
\]
thus, assuming strict monotonicity of $h$, that is,
\begin{equation}\label{eqn:mu_h}
|D_th(t)|\geq\mu>0\,, \quad t\in\Theta\,,
\end{equation}
we get, using $\hat{f}^+(p)=0$ for some $p\in J$,
\begin{equation}\label{eqn:estf_mixed}
\begin{aligned}
\|\hat{f}^+\|_{C^{1,\beta}(J)}\leq C(h) |a^L| \|D_t\hat{v}\|_{C^{2,\beta}(\Omega\times(0,T))}
\end{aligned}
\end{equation}
where $C(h)$ only depends on $\mu$ and $\|h\|_{C^{1,1}(\Theta)}$.
Likewise, we could attempt to estimate a higher order norm of $f^+$ by differentiating a second time
\[
\|\fhatpluspp(h)\, (D_th)^2\|_{C^{0,\beta}(\Theta)}\leq
\|\fhatplusp (h)\, D_t^2h\|_{C^{0,\beta}(\Theta)}
+|a^L| \|D_t^2\hat{v}_{xx}\|_{C^{0,\beta}(\Omega\times(0,T))}\,.
\]
However, this would involve too high derivatives of $\hat{v}$.
From (<ref>), that is, (<ref>), when taking only the $C^{1,\beta}$ norm
(i.e., the $C^{2,\beta}$ norm of $\hat{a}$), we get
\begin{equation}\label{eqn:esta_withf_mixed}
\begin{aligned}
\|\hat{a}^+\|_{C^{2,\beta}(\Omega)}\leq
\bigl(L\|\bigl(\tfrac{1}{g_x}\bigr)_x\|_{C^{1,\beta}(\Omega)}+ \|\tfrac{1}{g_x}\|_{C^{1,\beta}(\Omega)}\bigr) \bigl(\|D_t\hat{u}(T)\|_{C^{1,\beta}(\Omega)}+\|g\|_{C^{1,1}(\Omega)} \|\hat{f}^+\|_{C^{1,\beta}(J)}\bigr)
\end{aligned}
\end{equation}
in place of (<ref>), thus, together with (<ref>), using again the abbreviations $\check{u}=D_t\hat{u}$, $\check{v}=D_t\hat{v}$,
\begin{equation}\label{eqn:esta_mixed}
\begin{aligned}
\|\hat{a}^+\|_{C^{2,\beta}(\Omega)}\leq
\bar{C}(g,h)\Bigl(\|\check{u}(T)\|_{C^{1,\beta}(\Omega)}+
\|\check{v}\|_{C^{2,\beta}(\Omega\times(0,T))}\Bigr)
\end{aligned}
\end{equation}
that is, an estimate in norms that are one order lower than those in (<ref>).
For $g\in C^{3,\beta}(\Omega)$, $h\in C^{1,1}(\Theta)$ satisfying (<ref>), (<ref>), (<ref>), $p\in J$,
there exists a constant $\bar{C}(g,h)$ depending only on
$\|g\|_{C^{3,\beta}(\Omega)}$, $\|h\|_{C^{1,1}(\Theta)}$, $\mu$, $\delta$, such that the operator $\mathbb{M}:X_a\times X_f\to C^{1,\beta}(\Omega)$ as defined in (<ref>) with
\begin{equation*}
\begin{aligned}
&X_a=\{d\in C^{2,\beta}(\Omega)\, : \, d(L)=0\}\,,\\
&X_f=\{j\in C^{1,\beta}(J)\, : \, j(p)=0\}
\end{aligned}
\end{equation*}
is boundedly invertible with
\[
\|\mathbb{M}^{-1}\|_{C^{1,\beta}(\Omega)\to X_a\times X_f}\leq \bar{C}(g,h)\,.
\]
The obvious spatially higher dimensional version of (<ref>), (<ref>) is
\begin{eqnarray}
\nabla\cdot(\hat{a}^+ \nabla g)(x)+\hat{f}^+(g(x))&=&D_t\hat{u}(x,T;a,f) \quad x\in\Omega \label{eqn:ahat_mixed1}\\
\hat{f}^+(h(t))&=&-a^L \triangle\hat{v}(\tilde{x},t;a,f) \quad t\in\Theta\,.\label{eqn:fhat_mixed1}
\end{eqnarray}
However, (<ref>), which is a transport equation for $\hat{a}^+$, in general only yields an estimate of $\hat{a}^+$ in a weaker norm than the one needed here.
Step (b), smallness of $\mathbb{F}(a,f)-\mathbb{F}(a_{ex},f_{ex})$:
For $D_t\hat{u}=\check{u}$ we can just use the estimate (<ref>),
and again assume that $u_0$ and $f(0)$ vanish.
For $g\in C^{3,\beta}(\Omega)$, $h\in C^{2,\beta}(\Theta)$ satisfying (<ref>), (<ref>), (<ref>), and if additionally $f_{ex}(0)=0$, $u_0=0$,
$\|u_{ex,t}\|_{C^{2,\beta}(\Theta)}$ is sufficiently small, there exists $\rho>0$ such that $\mathbb{T}$ is a self-mapping on $B_\rho^{X_a\times X_f}(a_{ex},f_{ex})$ and the convergence estimate
\[
\|\mathbb{T}(a,f)-\mathbb{T}(a_{ex},f_{ex})\|_{X_a\times X_f}\leq
q \|(a,f)-(a_{ex},f_{ex})\|_{X_a\times X_f}
\]
holds for some $q\in(0,1)$ and
\begin{equation}\label{eqn:XaXf_mixed}
\begin{aligned}
&X_a=\{d\in C^{2,\beta}(\Omega)\, : \, d(L)=0\}\,,\\
&X_f=\{j\in C^{1,\beta}(J)\, : \, j(0)=0\}
\end{aligned}
\end{equation}
As opposed to the previous section, the strategy of proving contractivity for monotone $f$ from [18] would also work here, since we can estimate the $C^{1,\beta}$ norm of $\hat{f}^+$. This would also allow one to tackle nonhomogeneous initial data by using a maximal parabolic regularity estimate, see <cit.>.
Since no such condition as (<ref>)
(that is, sufficiently different slopes of $g^u$, $g^v$) is needed here, the result from Theorem th:contr_fiti_titr remains valid with a single run, i.e., $v=u$.
§.§ Time trace observations of two states
A well-known “folk theorem” suggests that trying to
reconstruct a function depending on a variable $x$
from data measured in an orthogonal direction to $x$ is
inevitably severely ill-conditioned.
This has been borne out in innumerable cases.
The recovery of $a(x)$ in the parabolic equation
$u_t - \nabla\cdot(a\nabla u) = f(u)$ from time-trace data
is an example of the above.
It is known that recovery of spatially-dependent coefficients in an
elliptic operator from time-trace measurements made on the boundary
$\partial\Omega$ is extremely ill-conditioned, the error in the unknown
depending exponentially on the error in the data.
We give a synopsis of the reasons for this that have particular relevance
to our current theme; while recovery of the diffusion coefficient $a(x)$ from
final time data is only a mildly ill-conditioned problem, that from
time-trace data is extremely ill-conditioned.
We will set the problem in one space dimension taking
$\Omega=[0,1]$ and remove all non-essential terms.
Thus we consider the recovery of the uniformly positive $a(x)$ from
$u_t - (au_x)_x = 0$ with (say) boundary conditions
$u_x(0,t) = 0$, $u(1,t) = 0$ with initial value $u_0(x)$
where our overposed data is the value of $h(t) = u(0,t)$.
Note that this is a single function recovery from a linear problem.
The standard approach here is due to Pierce [24].
The solution to the direct problem has the representation
$u(x,t) = \sum_{n=1}^\infty \langle u_0,\phi_n\rangle\,e^{-\lambda_nt}\phi_n(x)$
where $\{\lambda_n,\phi_n\}$ are the eigenvalues / eigenfunctions of
$\mathbb{L} = -(a u_x)_x$ under the conditions $u_x(0)=u(1)=0$
and we normalise these by
the standard Sturm-Liouville condition $\phi_n'(0)=1$.
For a given $a(x)$ this normalisation is transmutable into the norming constants
and the more usual Hilbert space norm
[28, 29].
We have to recover $a(x)$ from $h(t) = u(0,t)$, that is from
\begin{equation}\label{eqn:Dirichlet_series}
h(t) = \sum_{n=1}^\infty A_n e^{-\lambda_n t}
\qquad\mbox{where }\
A_n = \langle u_0,\phi_n\rangle.
\end{equation}
Note that $\lambda_n>0$ and that $h(t)$ is in fact analytic.
It is well-known that the pair $\{\lambda_n,A_n\}$ is uniquely recoverable
from $h(t)$ and this is easily seen by taking Laplace transform
(analytically extending $h(t)$ to all $t>0$ if need be).
Then (<ref>) becomes
\begin{equation}\label{eqn:rat_func_series}
H(s) := \mathcal{L}(h)(s) = \sum_{n=1}^\infty \frac{A_n}{s+\lambda_n}
\end{equation}
and shows that knowing the meromorphic function $H(s)$ provides the location
$\lambda_n$ and residues $A_n$ of its poles.
Of course, the extreme ill-conditioning of this step is now apparent;
we know the values of the analytic function $H(s)$ on the positive real line
and have to recover information on the negative real axis – in fact
all the way to $-\infty$ as the eigenvalues have the asymptotic behaviour
$-\lambda_n \sim -c n^2$.
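This can be quantified in a few lines: the singular values of the sampling matrix $E_{ij}=e^{-\lambda_j t_i}$, which maps the coefficients $A_n$ to samples of $h$, collapse at an exponential rate (a quick numerical check, assuming Python/numpy and eigenvalues of the model form $\lambda_n=n^2\pi^2$):
\begin{verbatim}
import numpy as np

t = np.linspace(0.01, 1.0, 200)
lam = (np.arange(1, 11) ** 2) * np.pi ** 2   # lambda_n ~ n^2 pi^2
E = np.exp(-np.outer(t, lam))                # E[i, j] = exp(-lambda_j t_i)
print(np.log10(np.linalg.svd(E, compute_uv=False)))
\end{verbatim}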
Recovery of just the eigenvalues is not sufficient to recover $a(x)$,
but we have further information, namely the sequence $\{A_n\}$.
The simplest way to use this is to select $u_0$ to be impulsive – say
$u_0(x) = \delta(x)$ for then $A_n = \phi_n(0)$.
Thus with this we have both the spectrum and an endpoint condition
on the eigenfunctions and standard results from inverse Sturm-Liouville theory
show this is sufficient for the recovery of $a(x)$,
[4, 11, 28].
Actually, with this approach the computational cost of an effective
reconstruction of $a(x)$ is relatively low.
Determining the coefficients from a finite sum in (<ref>)
is just Padé approximation of $H(s)$ and there are efficient methods
such as in [3] to effect this recovery.
The reconstruction of $a(x)$ from this finite spectral data can be accomplished
extremely quickly, [28].
An entirely similar argument holds if instead our purpose was to recover
the potential $q(x)$ in $\,\mathbb{L}u = -u_{xx} + q(x)u$.
The recovery of two spectral sequences proceeds identically and
this potential form for $\mathbb{L} $ is the canonical version
[29, 6] of the inverse Sturm-Liouville problem
to which all others can be transformed by means of the Liouville transform.
In fact, this part of the process is only mildly ill-conditioned in terms
of mappings between spaces for we have
$\|q_1 - q_2\|_{L_2} \leq C\|\{\lambda_{n,1}-\lambda_{n,2}\}\|_{\ell^2}$.
However, the above paints a rather too rosy picture of the difficulties.
The basic asymptotic form for the eigenvalues for a $q\in L^2$ is
\begin{equation}\label{eqn:sturm_liouville_eigen}
\lambda_n(q) = \bigl(n-{\textstyle{\frac{1}{2}}}\bigr)^2\pi^2 +
\int_0^1 q(s)\,ds + \epsilon_n, \qquad \{\epsilon_n\} \in \ell^2.
\end{equation}
The dominant term in this expansion is identical for all $q\in L^2$.
The more regularity assumed of $q(x)$ the more rapid the decay of
the “information sequence” $\{\epsilon_n\}$,
[27, 6, 30].
For example, if $q\in H^{m,2}$ then
$\epsilon_n \sim \sigma_n/n^{2[m/2]+1}$ where again $\{\sigma_n\}\in\ell^2$.
Thus the approximation to the eigenvalues obtained from inverting
(<ref>) will inevitably contain errors and
from this we have to recover the sequence $\{\epsilon_n\}$ by first
subtracting off the term asymptotic to effectively $n^2\pi^2$.
For $n=10$ this term is of the order of a thousand, while the information
is contained in a possibly rapidly decreasing sequence of differences.
In short, the inverse Sturm-Liouville problem is very mildly ill-conditioned from a space domain/range definition, but the strong masking by the
fixed leading term in the eigenvalue expansion which is independent of $q$
makes recovery of anything more than a few Fourier modes impractical from
other than exceedingly accurate spectral data.
From the existence of the Liouville transform one might conclude that
the picture is identical for the determination of the coefficient $a(x)$.
This is indeed so up to the point where we recover the spectral information.
However, the version of (<ref>) for this case is
\begin{equation}\label{eqn:sturm_liouville_eigen_a}
\lambda_n(a) = {\textstyle \frac{(n-\frac{1}{2})^2\pi^2}{L^2}} + \epsilon_n,
\qquad\mbox{where }\
L = \int_0^1 [a(s)]^{-1/2}\,ds.
\end{equation}
Thus unlike the potential case where the leading and dominant term is
independent of $q(x)$, the leading term now contains information about the
coefficient $a(x)$.
For a fixed error in the data, this can make a substantial difference.
All of the above has been predicated on two factors; that we are in one
space dimension and our underlying parabolic equation is linear.
With a nonlinear reaction term $f(u)$ the whole approach above fails
(and this is precisely our case of interest).
In higher space dimensions there is no analogue of the inverse Sturm-Liouville
theory, and much more information is required than in the one dimensional case.
There is an approach that can transcend these issues.
This is to look directly at the maps from unknown coefficient to the data:
$F_a: a(x) \mapsto u(0,t)$ or $F_q: q(x) \mapsto u(0,t)$.
We can get a sense of the practical invertibility by looking at the
Jacobian matrix $J$ arising from a finite dimensional version of
$F'$, where $F := [F_a\; F_q]$, taken in a direction
$\delta a$ or $\delta q$:
\begin{equation}\label{eqn:F'_map}
\begin{aligned}
&\hat u_t - \nabla\cdot(a(x)\nabla \hat u) = f'(u)\hat u -
\nabla\cdot(\delta a\nabla u) \\
&\hat u_t - \triangle \hat u + q(x) \hat{u}= f'(u)\hat u - \delta q\,u
\end{aligned}
\end{equation}
An important question is whether this is feasible in respect to even the
theoretical invertibility of the map $F'$:
either as one of its two components or in combination.
We provide a sketch below to show that in the linear case $f(u) = c(x)u$
$F'$ is indeed locally invertible in $\mathbb{R}^d$ for $d>1$,
then later consider the issue of its
practical invertibility by estimating the singular values
of the matrix $J$ representing $F'$.
With this $f$ we can absorb the contribution into the zeroth order term of
$\mathbb{L}$ and use the same notation for the eigenvalues/eigenfunctions
of $\mathbb{L}$ on $\Omega$ with respect to homogeneous impedance boundary
conditions.
With this, the solution to (<ref>) in the potential case is given by
\begin{equation}\label{eqn:F'_map_pot}
\hat u(x,t) = \sum_{n=1}^\infty \int_0^t e^{-\lambda_n(t-\tau)}
\langle \delta q\, u_0,\phi_n\rangle e^{-\lambda_1\tau}\, d\tau\ \phi_n(x).
\end{equation}
Suppose now we have $\hat u(\tilde{x},t) = 0$ for $x=\tilde{x}\in\partial\Omega$
(if we only have this prescribed on $[0,T]$ then we simply extend
it by zero for $t>T$) and taking Laplace transforms gives
\begin{equation}\label{eqn:F'_map_zero}
\sum_{n=1}^\infty \frac{1}{p+\lambda_n}\frac{\phi_n(\tilde{x})}{p+\lambda_1}
\langle \delta q\; u_0,\phi_n\rangle = 0
\end{equation}
for all $p>0$.
Now we can select the point $\tilde{x}$ so that $\phi_n(\tilde{x}) \not=0$
since $\phi_n(x)$ satisfies homogeneous impedance conditions on
$\partial\Omega$ (or more exactly can select the origin of the eigenfunctions
to accomplish this for a prescribed $\tilde{x}$).
Then analyticity in (<ref>) shows that
$\langle \delta q\, u_0,\phi_n\rangle = 0$ for all $n$ and completeness
implies that $\delta q \,u_0 =0$ and, by our assumption that $u_0$ was
nonzero, implies that $\delta q = 0$.
An entirely analogous argument holds for the case of $a(x)$.
It also trivially extends to an initial value $u_0$ that is
a finite combination of eigenfunctions, and with a little more work
extends to a general initial value $u_0$ that is not
identically zero in any subset of $\Omega$ of positive measure.
Finally, by considering the situation on the interval $[\epsilon,T]$
we have the new initial value $u(x,\epsilon)$ which is strictly positive
from the maximum principle provided only that $u_0(x)$ is non-negative
and non-trivial.
Thus we have proven:
Suppose the nonlinear term $f(u)$ is zero and the initial condition
$u_0(x)\not=0$ is nonnegative in $\Omega$. Then the maps $F':q(x) \to u(\tilde{x},t)$
and $F':a(x) \to u(\tilde{x},t)$ are injective.
We have carried out the process of computing the singular values of $F'$
in the case of one spatial dimension, linearizing about constant values of
$a$ and $q$, namely $a=1$ and $q=0$.
The initial condition was the first eigenfunction for
the associated elliptic operator, here $\sin(\frac{\pi}{2}x)$ and
the basis functions used for both $\delta a$ and $\delta q$ were
$\{\sin(n\pi x)\}_{n=1}^{20}$.
The resulting singular values are shown in Figure fig:sv_a-q,
plotted on a $\log_{10}$ scale.
[Figure fig:sv_a-q: logarithms of the singular values for $a(x)$ and $q(x)$
recovery from time-trace data, using $F':\delta c \mapsto u(1,t)$ with $\delta c =\sin(n\pi x)$.]
The expected exponential decay of these singular values is clearly evident.
From this data $h(t)$, which arose from a problem with the initial data
being the exact lowest eigenfunction (and hence the best possible for
reconstructions), we see that only a few of the lowest singular values
are usable under anything but exceedingly high accuracy data.
These correspond to the lowest frequencies in the basis set.
While the same exponential decay is true for both $a(x)$ and $q(x)$, the rate
is greater for $q$; more crucially, the first singular values in each
case differ considerably, by roughly a factor of ten.
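For reference, the experiment just described can be reproduced along the following lines for the potential case, using the spectral sensitivity formula (<ref>) (a schematic Python/numpy version; the mode cutoff, grids, and the choice of Dirichlet/Neumann conditions matching $u_0=\sin(\pi x/2)$ are our assumptions):
\begin{verbatim}
import numpy as np

M = 60                                  # retained eigenmodes (our choice)
x = np.linspace(0.0, 1.0, 401); dx = x[1] - x[0]
t = np.linspace(0.005, 1.0, 200)
m = np.arange(1, M + 1)
lam = ((m - 0.5) * np.pi) ** 2          # Dirichlet at 0, Neumann at 1
phi = np.sqrt(2.0) * np.sin(np.outer(x, (m - 0.5) * np.pi))
phi_end = np.sqrt(2.0) * np.sin((m - 0.5) * np.pi)   # phi_m(1)
u0 = np.sin(0.5 * np.pi * x)            # lowest eigenfunction

# Duhamel factor int_0^t e^{-lam_m (t-s)} e^{-lam_1 s} ds in closed form
denom = (lam - lam[0])[:, None]
exp1 = np.exp(-lam[0] * t)[None, :]
expm = np.exp(-np.outer(lam, t))
I = np.where(denom == 0.0, t[None, :] * exp1,
             (exp1 - expm) / np.where(denom == 0.0, 1.0, denom))

cols = []
for n in range(1, 21):                  # directions delta_q = sin(n pi x)
    dq = np.sin(n * np.pi * x)
    w = (dq * u0) @ phi * dx            # <delta_q u_0, phi_m>
    cols.append(-(w[:, None] * I * phi_end[:, None]).sum(axis=0))
J = np.column_stack(cols)               # discrete Jacobian, (time, modes)
print(np.log10(np.linalg.svd(J, compute_uv=False)))
\end{verbatim}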
Note also the effect that smaller time steps make.
It is much less evident in the case shown, due to the solution $u$
of the base problem having only the lowest spectral mode.
In problems where higher modes are present, one has
to recover $\lambda_n$ from $e^{-\lambda_n t}$; the smaller the time step
$dt$, the better one can recover the eigenvalues and coefficients
(especially the largest index ones sought)
from data error affecting this term.
For example, for $n=10$ and Dirichlet boundary conditions with $a=1$ and $q=0$
this means the factor $e^{-100\pi^2 dt}$ should lie above the noise level.
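For instance, with 1% data noise this requires $e^{-100\pi^2\,dt}\geq 10^{-2}$, that is, $dt\lesssim \ln(100)/(100\pi^2)\approx 4.7\times 10^{-3}$.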
Of course, if the diffusion coefficient is substantially smaller
then this changes the above estimate.
Our scaling of this problem, in which we have assumed the coefficients are of
order unity, must be taken into account.
As noted in the introduction, in many physical settings
the diffusion coefficient could be several orders of magnitude smaller
and this would give a corresponding change in the spectrum $\{\lambda_n\}$.
In addition, for diffusion in fluids there is often a significant
dependence of the diffusion coefficient on temperature $u$ in addition
to any spatial dependence.
For example a standard model here is due to Arrhenius
[2, 7],
which supposes that $a = a_0\,e^{-E/(R\,u)}$ where $R$ is the
universal gas constant and $E$ the activation energy, which is typically
such that $E \gg R\,u$.
Of course, incorporating this into our model would dramatically increase
the complexity of any analysis.
It is entirely possible to extend these computations to higher
space dimensions and also in fact to include nonlinear reaction terms,
although in the latter situation the above theory, and hence the proof of
Theorem thm:q-a-injectivity, will no longer be valid.
This same effect, namely the superior reconstruction of $a(x)$ against
$q(x)$ from their simultaneous recovery from final time data,
was noticed in [17].
There is another “folk theorem” at play here.
The dependence of the solution of an elliptic equation on changes in its
coefficients is far greater for those appearing in higher order terms
(as for $a(x)$ here) than for those appearing in lower order terms.
One should expect that this translates into data terms arising from
projections of the solution onto a surface,
and in turn that one should be able to
more easily reconstruct coefficients appearing in the highest derivatives of
the operator than those appearing in the lowest order term.
The reconstruction of both $a(x)$ and $f(u)$ requires two experiments
providing solutions and the corresponding data obtained from them.
The theoretical results of section <ref> show this can
be sufficient under appropriate conditions.
In many applications one could attempt an oversampling by taking measurements
from $m>2$ data runs generating $m$ solutions providing overposed data.
Under the unrealistic situation of no data error this would simply lead
to redundancy in terms of uniqueness, but it might very well lead to
superior reconstructions: exact optimality conditions
are difficult to determine, and more measurements would likely lead to a
better case being obtained in one of the experiments, which in turn would
yield a larger number of singular values above a given threshold.
In the case of noise in the data one would also expect that oversampling
by utilizing $N$ solutions would lead to an effective lower noise rate
and in consequence superior reconstructions.
In the case of $N=2$ and 1% random noise there is certainly a detectable
variation in the corresponding reconstructions, and even a modest increase
in the number of measurements yields a significant reduction.
The extent to which this can be quantified depends on several factors.
One of these is the underlying source of the noise and its probability
density distribution. If this is Gaussian then $N$ samples would reduce
the effective error by a factor of $\sqrt{N}$, but this holds only
for large $N$ and other distributions would give different answers.
However we are dealing with a far from linear process.
The reconstruction process is itself highly nonlinear in addition to the basic
forwards map being so due to the nonlinear term $f(u)$.
As a result of this, quantification of the overall uncertainty is far
from an easy question.
§ RECONSTRUCTIONS
We will show the results of numerical experiments to recover both
$a$ and $f$ with two different types of data measurements.
The first of these is when we are only able to obtain “census-type”
information and thus measure $g_u := u(x,T)$ and $g_v := v(x,T)$ for some
fixed time $T$ and as the result of two experiments obtained by
altering the initial, boundary or forcing term conditions.
The second is when we are able to measure multiple types of data from a
single experimental run: specifically both the final data and a
boundary measurement of either the solution $u$ or its normal derivative at a
fixed point $\tilde x\in\partial\Omega$.
To procure data for the reconstructions, a direct solver based on a
Crank-Nicolson scheme produced output values, and data
values were produced from these by sampling at a relatively small
number $N_x$ and $N_t$ of points in the spatial and temporal directions.
This sampled data was then interpolated to a full working size to obtain
data values $g(x)$ and $h(t)$ commensurate with the grid being used by the
solver for the inverse problem, the value of $g(x)$ being
smoothed by an $H^2$ filter and that of $h(t)$ by an $H^1$ filter.
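As a concrete illustration, such a data generation step could look as follows (a sketch in Python/numpy, semi-implicit in the reaction term and, for simplicity, with homogeneous Neumann rather than general impedance boundary conditions; all names are ours):
\begin{verbatim}
import numpy as np

def cn_forward(a, f, r, u0, L=1.0, T=1.0, nx=201, nt=400):
    # Semi-implicit Crank-Nicolson for u_t = (a u_x)_x + f(u) + r(x,t):
    # diffusion treated by CN, reaction/forcing explicitly at each step.
    # Homogeneous Neumann BCs via ghost nodes (an assumption of this sketch).
    x = np.linspace(0.0, L, nx); h = x[1] - x[0]
    t = np.linspace(0.0, T, nt + 1); k = t[1] - t[0]
    am = a(0.5 * (x[:-1] + x[1:]))      # a at cell interfaces
    A = np.zeros((nx, nx))              # stiffness of -(a u_x)_x
    for i in range(1, nx - 1):
        A[i, i - 1] = -am[i - 1] / h**2
        A[i, i + 1] = -am[i] / h**2
        A[i, i] = (am[i - 1] + am[i]) / h**2
    A[0, 0], A[0, 1] = 2 * am[0] / h**2, -2 * am[0] / h**2
    A[-1, -1], A[-1, -2] = 2 * am[-1] / h**2, -2 * am[-1] / h**2
    Lhs = np.eye(nx) + 0.5 * k * A
    Rhs = np.eye(nx) - 0.5 * k * A
    u = np.empty((nt + 1, nx)); u[0] = u0(x)
    for m in range(nt):
        rhs = Rhs @ u[m] + k * (f(u[m]) + r(x, 0.5 * (t[m] + t[m + 1])))
        u[m + 1] = np.linalg.solve(Lhs, rhs)
    return x, t, u
\end{verbatim}
The output is then subsampled at the $N_x$, $N_t$ measurement points and smoothed as described above.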
All the reconstructed solutions we show are set in one space dimension.
This is in part to make the graphical results more transparent but also because
some of the algorithms have an $\mathbb{R}^1$ restriction from a
computational aspect and certainly from an analysis one as noted in the
previous section.
However, we will make note when extensions to higher space dimensions are feasible.
§.§ Final time with two runs
We shall pose the setting in one space variable as the exposition is simpler
as is the graphical representation of the reconstructions obtained.
The problem is to identify $a(x)$, $f(u)$ in
\begin{equation*}%\label{eqn:u-v_pair}
\begin{aligned}
u_t-(a u_x)_x &=f(u) + r_u(x,t,u)\\
v_t-(a v_x)_x &=f(v) + r_v(x,t,v)\\
\end{aligned}
\end{equation*}
where $r_u$ and $r_v$ are known driving terms and where $u(x,t)$ and $v(x,t)$
are subject to homogeneous impedance boundary conditions
$$a\frac{\partial u}{\partial x} +\alpha u =0,
\qquad
a\frac{\partial v}{\partial x} +\beta v =0,$$
and initial values
$$u(x,0)=u_0(x), \qquad
v(x,0)=v_0(x),$$
from observations $g_u(x)=u(x,T)$, $g_v(x)=v(x,T)$.
Some restrictions must be imposed on this data.
An ideal situation will include the fact that $\nabla g_u$ and $\nabla g_v$
do not vanish on the interior of $\Omega$.
This can be accomplished by varying the impedance coefficients
over the boundary $\partial\Omega$ and by imposing a larger positive flux into
the interior where $\alpha$, $\beta$ are smallest - for example, Neumann conditions.
Based on the fixed point operator defined in (<ref>), we now
derive some reconstruction schemes based on different ways of resolving
(<ref>) in an efficient way,
and provide computational results obtained with these schemes.
The first method is one of sequential iteration: recovering first
a new approximation to $a(x)$ using one direct solve
then followed by an update of $f(u)$ using a second direct solve,
but now using the new value of $a$.
More precisely,
let $f^0$, $a^0$ be some initial approximations; then for $k=0,1,2,\ldots$
compute $u_t(x,T;a^k,f^k)$ by solving (<ref>)
using the first set of initial/boundary conditions, then differentiate
$u$ with respect to time at $t=T$ and for all $x\in\Omega$.
Set $\phi(x):= u_t(x,T;a^k,f^k) - f^k\bigl(g_u(x)\bigr) - r_u(x,T,g_u)$
so that $\bigl(a(x) g_u'(x)\bigr)' = \phi(x)$.
Integrate this equation over $(0,x)$ using the boundary conditions
inherited by $g_u$ and $u(x,T;a^k,f^k)$ to obtain
$\Phi(x) := \int_0^x [u_t(s,T) - f^k(g_u(s)) - r_u(s,T)]\, ds$
which then gives the update
\begin{equation}\label{eqn:iter1}
a^{k+1}(x) = \frac{\Phi(x)}{g_u'(x)}.
\end{equation}
Use this new approximation $a^{k+1}$ for $a(x)$ together with the previous
value $f^k$
for $f$ to compute $v(x,T;a^{k+1},f^k)$ from the imposed second set of
initial/boundary conditions and thereby obtain the value of $v_t(x,T)$.
We now update $f$ by
$$f^{k+1}\bigl(g_v(x)\bigr)= v_t(x,T) - r_v(x,T,g_v) - \bigl(a^{k+1}(x)\, g'_v(x)\bigr)'.$$
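A condensed sketch of one sweep of this sequential iteration is given below;
it assumes the time derivatives $u_t(x,T)$ and $v_t(x,T)$ have already been
produced by forward solves on a grid, and all names are illustrative:

import numpy as np

def cumint0(y, x):
    """Running trapezoidal integral  Phi(x) = int_0^x y(s) ds  on the grid x."""
    return np.concatenate(([0.0], np.cumsum(0.5 * (y[1:] + y[:-1]) * np.diff(x))))

def sequential_sweep(x, ut_T, vt_T, g_u, g_v, r_uT, r_vT, f_k):
    """One sweep of the sequential iteration: a-update from the first data set,
    then f-update from the second.  f_k is the current approximation of f
    (a callable) and g_u' is assumed not to vanish."""
    phi = ut_T - f_k(g_u) - r_uT                    # (a g_u')' = phi
    a_next = cumint0(phi, x) / np.gradient(g_u, x)  # the update (iter1)
    flux = a_next * np.gradient(g_v, x)
    # f-update needs (a_next g_v')', i.e. a derivative of a_next -- see text
    f_on_gv = vt_T - r_vT - np.gradient(flux, x)
    return a_next, g_v, f_on_gv                     # f sampled at the values g_v(x)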
One drawback of this method is the need to differentiate the just-updated
value of $a(x)$, which can lead to numerical instabilities.
Here is a way to avoid this
that works well in a single space dimension.
We again let $f^0$, $a^0$ be some initial approximations; then for $k=0,1,2,\ldots$
compute both $u_t(T)$ and $v_t(T)$ by solving
(<ref>), using the two sets of initial/boundary values
\begin{equation}\label{eqn:u-v_pair}
\begin{aligned}
u_t-(a u_x)_x &=f(u) + r_u(x,t,u)\qquad u(x,0)=u_0,\quad
a\frac{\partial u}{\partial x} +\alpha u = k_u(x,t)\\
v_t-(a v_x)_x &=f(v) + r_v(x,t,v)\qquad v(x,0) = v_0,\quad
a\frac{\partial v}{\partial x} +\beta v = k_v(x,t)\\
\end{aligned}
\end{equation}
and hence obtain $u_t(x,T;a^k,f^k)$ and $v_t(x,T;a^k,f^k)$.
Let $W(x) = g_u(x) g'_v(x) - g_v(x) g'_u(x)$ and
$W_p(x) = g_u(x) g''_v(x) - g_v(x) g''_u(x)$.
Multiply the first equation in (<ref>) by $g_v$,
the second by $g_u$, then subtract, obtaining
$$\bigl(a(x)g_u'(x)\bigr)'g_v(x) - \bigl(a(x)g_v'(x)\bigr)'g_u(x) = \psi(x),$$
where
\begin{equation*}
\begin{aligned}
\psi(x) &= [u_t(x,T;a^k,f^k) - r_u(x,T,g_u) - f(g_u(x))]g_v(x) \\
&\quad -[v_t(x,T;a^k,f^k) - r_v(x,T,g_v) - f(g_v(x))]g_u(x).
\end{aligned}
\end{equation*}
Now integrating over $(0,x)$ we obtain
\begin{equation}\label{eqn:aW}
a(x)W(x) = \int_0^x \psi(s)\,ds
\end{equation}
from which an update $a^{k+1}(x)$ can be obtained by division by $W$.
It is obviously an advantage if $W$ does not vanish, and this condition can
be obtained by controlling the values of $\alpha$, $\beta$ and
the input flux values $k_u$ and $k_v$.
(Even if $W$ has isolated zeros there are several approaches that can be
taken to still resolve (<ref>) in a stable manner.)
To recover the next approximation for $f$ we multiply the first equation in
(<ref>) by $g'_v$, the second by $g'_u$, then subtract, obtaining
$$\tilde\phi = \bigl(u_t - r_u(x,T)\bigr)g_v' - \bigl(v_t - r_v(x,T)\bigr)g_u' - a(x)W_p(x).$$
Then $f^{k+1}$ is determined by
\begin{equation}\label{eqn:f-update}
f^{k+1}\bigl(g_u(x)\bigr)g_v'(x) - f^{k+1}\bigl(g_v(x)\bigr)g'_u(x) =
\tilde\phi(x).
\end{equation}
The effect of this is to eliminate the derivative of the just computed $a(x)$.
Solving (<ref>) is not completely straightforward.
It is a delayed-argument equation for $f$ if the maximum
value achieved by both $u$ and $v$ occurs at or near the same
point on $\partial\Omega$; in fact the resolution of the entire scheme
is better if $u$ and $v$ are “more independent.”
One approach is to use a basis representation for $f$ to resolve this equation.
Another, which works well if one of $g'_u$ or $g'_v$ is monotone,
is to use the approximation $\tilde f_1(u) = f^k(u)$ and then iterate
successively on (<ref>) by
\begin{equation}\label{eqn:f_successive-update}
\tilde f_{j+1}\bigl(g_u(x)\bigr)g'_v(x) =
\tilde f_j\bigl(g_v(x)\bigr)g'_u(x) + {\tilde\phi}(x).
\end{equation}
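A sketch of this derivative-free variant - the $a$-update by division in
(<ref>) followed by the successive iteration (<ref>) - might look as follows,
again with illustrative names and assuming $W$ and $g'_v$ stay bounded away
from zero:

import numpy as np

def update_a_from_W(x, psi, g_u, g_v):
    """a-update from a(x) W(x) = int_0^x psi(s) ds, W = g_u g_v' - g_v g_u'."""
    W = g_u * np.gradient(g_v, x) - g_v * np.gradient(g_u, x)
    Psi = np.concatenate(([0.0], np.cumsum(0.5 * (psi[1:] + psi[:-1]) * np.diff(x))))
    return Psi / W                                  # assumes W has no zeros

def successive_f(x, g_u, g_v, phi_tilde, f_prev, sweeps=20):
    """Iterate f_{j+1}(g_u) g_v' = f_j(g_v) g_u' + phi_tilde on the grid x;
    f is carried as values at the abscissae g_u(x), re-evaluated at g_v(x)
    by interpolation at each sweep."""
    dgu, dgv = np.gradient(g_u, x), np.gradient(g_v, x)
    order = np.argsort(g_u)
    f_vals = f_prev(g_u)                            # initialise with f^k
    for _ in range(sweeps):
        f_at_gv = np.interp(g_v, g_u[order], f_vals[order])
        f_vals = (f_at_gv * dgu + phi_tilde) / dgv
    return g_u, f_vals                              # f sampled at the values g_u(x)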
This method worked well over a wide variety of examples of functions $f(u)$,
and one such pair of recovered $a(x)$ and $f(u)$ is shown in
Figure fig:fiti_fiti below.
Recovery of $a(x)$ and $f(u)$ from final data
The initial approximations were $a^0(x)=1$ and $f^0(u)=0$.
An important point to observe is the speed of convergence of the
scheme updating $a(x)$. The second iteration was already extremely good
and the difference between the third and tenth iteration shown would
not be distinguishable at the figure resolution.
On the other hand, the iterations for $f$ proceeded much more slowly.
One factor that should be noted here is that we used both data sets to
recover $a$ initially, and then similarly for $f$.
Much the same picture would have resulted from reversing this order.
The fact is that the diffusion coefficient $a(x)$ dominates the
equation in terms of its ability to modify solutions $u(x,t;a,f)$ in
comparison to that of $f(u)$.
This was true whether pointwise or basis reconstruction schemes
were used for representing the unknowns.
In higher space dimensions it is possible from a reconstruction perspective
to measure less data. For example, in $\mathbb{R}^d$ with $d=2,\,3$ one
data run for $g_u(x) = u(x,T)$ measures this quantity for all $x\in\Omega$, but
the second run measures only $g_v(x) = v(s(x),T)$ for some
one-dimensional curve $C$ parametrized by $s(x)$ that connects points
$x_1,\,x_2$ on $\partial\Omega$.
The strategy is to use the first of these to recover $a(x)$; then, since $f(u)$
depends only on the single variable $u$, we seek to recover $f$ from this
second, lower-dimensional measurement.
The points $x_1$ and $x_2$ cannot be chosen arbitrarily but must in fact
correspond to the maximum and minimum values of $v(\cdot,T)$ on
$\partial\Omega$ in order to satisfy the range condition necessary
for recovering $f$.
We note here that we have no theoretical analysis for this case;
neither a uniqueness result nor a convergence theorem.
§.§ Final time and time-trace data
As noted previously, an alternative data measurement set is to measure both
boundary values as well as final time information, namely
\begin{equation}\label{eqn:fitititr_data}
u(x,T) = g_u(x) \qquad u(\tilde x,t) = h_u(t)
\end{equation}
where we have assumed that $\tilde x \in\partial\Omega$ is such that
the maximum range of $u(x,t)$ occurs at that point.
It will also be convenient to take the impedance value to be zero
at $\tilde x$ and arrange that $g_u(x)$ is monotonic in $\Omega$.
We have made a particular choice of time-trace measurement.
Other possibilities include a time trace of $u(x^\star,t)$ for
some interior point $x^\star\in\Omega$.
The difficulty here is in ensuring that the range condition is satisfied
and this is likely to be prohibited by the maximum principle.
An alternative is a measurement of the energy
$E(t) = \int_\Omega |\nabla u|^2\,dx$ or some similar functional.
Again, a determining factor would be the range condition.
Again, starting from the fixed point iteration defined by
(<ref>), we derive a numerical reconstruction algorithm.
There are several ways to proceed and we start with pointwise updates
of both $a(x)$ and $f(u)$ in sequence at each full iteration step.
We again let $f^0$, $a^0$ be some initial approximations; then for $k=0,1,2,\ldots$:
Compute $u(x,t;a^k,f^k)$.
Update $a(x)$ using the final time information $g_u(x)$ by setting
\begin{equation}\label{eqn:a_successive-update_2}
\begin{aligned}
\phi(x) &= u_t(x,T;a^k,f^k) - r_u(x,T,g_u) - f^k\bigl(g_u(x)\bigr)
\qquad \Phi(x) = \int_0^x \phi(s)\,ds \\
a^{k+1}(x) &= \Phi(x)/g_u'(x) \\
\end{aligned}
\end{equation}
Now update $f(u)$ by projecting onto the boundary at $x=\tilde x$ using the
data $h_u(t)$ and computing $\triangle u(\tilde x,t) = u_{xx}(\tilde x,t)$
so that
\begin{equation}
\psi(t) = h_u'(t) - r_u\bigl(\tilde x,t,h_u(t)\bigr)
- a^{k+1}(\tilde x) u_{xx}(\tilde x,t).
\end{equation}
Here the advantage of the imposed boundary condition $u_x(\tilde x,t) = 0$
is evident as there is no need to involve the derivative of $a$.
Now if $h_u(t)$ is monotone then $f = f^{k+1}(u)$ is recovered from
\begin{equation}
f^{k+1}\bigl(h_u(t)\bigr) = \psi(t).
\end{equation}
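In code, this pointwise $f$-update might be realized by the following sketch,
where $h_u'$ is obtained by differentiating the (smoothed) trace and all names
are illustrative:

import numpy as np

def f_from_time_trace(t, h_u, u_xx_trace, a_at_xt, r_u_trace):
    """Pointwise f-update from the time trace h_u(t) = u(x~, t):
    psi(t) = h_u'(t) - r_u(x~, t, h_u(t)) - a(x~) u_xx(x~, t)
    and f(h_u(t)) = psi(t).  Assumes h_u is monotone (range condition)."""
    psi = np.gradient(h_u, t) - r_u_trace - a_at_xt * u_xx_trace
    order = np.argsort(h_u)                         # invert u = h_u(t)
    u_grid = np.linspace(h_u.min(), h_u.max(), 200)
    return u_grid, np.interp(u_grid, h_u[order], psi[order])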
In Figure fig:fiti_titr_no_noise we show a pair of reconstructions of
$a(x)$ and $f(u)$ using the above scheme.
The spatial interval was taken as $[0,L]$ with $L=1$ and the final time
of measurement was $T=0.5$.
Thus data consisted of measurements of $g(x_i) = u(x_i,T)$ and
$h(t_j)=u(L,t_j)$ where the numbers of sampled points were
$N_x=20$ and $N_t=25$.
In this reconstruction the initial values were once again
taken as $a^0 = 1$ and $f^0=0$.
In most applications a better initial approximation would likely be available.
As can be seen from this figure, the convergence of the scheme was rather
slow initially and in fact the first approximation to the diffusion
coefficient $a(x)$ and the reaction $f(u)$ were both uniformly lower than
the actual values.
One might expect that a uniformly too-small value of $a$ would result
in an $f$ that was larger than the actual one, and this did occur on the second
iteration for $f$. The reason it did not occur at the first iterative step
is that at each step the updates for $a$ and $f$ are independent:
the former uses only $g(x)$ and the latter only $h(t)$.
This in fact indicates that the initial approximations were almost at the
limiting distance from the actual values of $a$ and $f$.
If an iterate of $a$ becomes too close to a zero value the scheme becomes
very unstable.
Also, in the presence of noise, one would in general have to take
a much closer initial approximation.
The reason for the final small amount of mis-fit of the reconstructed
and the actual functions is the necessity to use a smoothing scheme on the
data to restore the correct mapping properties.
Recovery of $\,a(x)\,$ and $\,f(u)\,$ from space-and-time data
Recovery of $\,a(x),$ and $\,f(u)\,$ from space-and-time data: 1% noise
Notice here the considerable degradation of the reconstructions under
even modest amounts of noise.
This is not an artifact of the method but inherent in the problem.
Reconstructions using a pair of final values as in the last section
are more forgiving under data noise but still show the limitations
of reconstructions in these inverse problems.
§.§ Non-trivial initial values
In section sec:conv we required that the initial value $u_0(x) = u(x,0) = 0$.
This was to remove a possible singularity arising from the convolution
in equation (<ref>).
This issue has been noted in previous work attempting to recover
a reaction term $f(u)$ from time-valued data on the boundary $\partial\Omega$,
[26, 25, 18]
even when this term is the only unknown in the model.
The question thus arises if this is a fundamental requirement or
merely a delicate technical issue to be surmounted.
Suppose that $u$ satisfies
$$u_t - (au_x)_x = f(u) + r(x,t),\qquad u(x,0) = u_0(x),\quad
{\mathbb B} u = \phi,$$
where ${\mathbb B}$ is the boundary operator on $\partial\Omega$.
For continuity we should impose the condition ${\mathbb B} u_0 = \phi$
at $t=0$, but in particular we suppose $u_0$ is positive and has support
contained in the strict interior of $(0,1)$ to remove all boundary effects.
Now let $v(x,t) = u(x,t) - u_0(x)$, so that $v$ satisfies
$$v_t - (av_x)_x = \tilde f(v) + s(x,t),\qquad v(x,0) = 0,\quad {\mathbb B} v = 0,$$
where $\tilde f(v) = f(u) = f(v+u_0)$ and $s(x,t) = r(x,t) - (a u_0')'$.
The overposed data $u(1,t)=h(t)$ is converted into $v(1,t)=\tilde h(t)$, and
since $u_0$ vanishes at $x=0$ and $x=1$ it follows that $\tilde h(t) = h(t)$.
Thus in principle, the problem with nontrivial $u_0$ can be converted into
one with $u_0=0$ by the above transformation:
we get a new, known, right hand side driving term
(assuming $a(x)$ is known) and reconstruct as before to get
$\tilde f$. Then from this, knowing $u_0$, recover $f$.
Does this then amount to resolution of the issue?
Not quite. The previous convolution situation is not resolved by this
as the argument of $f$ in $\tilde f$ is now shifted.
However, the transformation does point to a definite issue for nontrivial $u_0$.
We have to ensure that the standard range condition is satisfied:
$h(t)$ must encompass all values that the domain of $f$ requires.
The given data $h(t)$ is the same for both $u$ and $v$ but
$\tilde f$ requires a different range than $f$ (since $u_0>0$)
and thus appears to have missing information.
In Figure fig:fiti_titr_u0 below we show final reconstructions
(taken as the $10^{\mathrm{th}}$ iteration, which certainly corresponded
to effective numerical convergence) for both $a(x)$ and $f(u)$.
This was under no data-noise conditions and sampling at a large number of points.
A nontrivial value of $u_0$ was taken, namely $u_0(x) = \beta\,x^2(1-x)^2$,
for the values $\beta = 1,\,2,\,5,\,10,\,20$.
This function and its first derivative are zero at both endpoints,
so ${\mathbb B}u_0 = 0$. The value $\beta=20$ corresponds to a maximum value of
$1.25$ for $u_0$; $\beta=1$ gives a height of $0.0625$.
The rightmost graphic shows the reconstructed $f(u)$ for these choices.
In the case $\beta =0$ the reconstruction would have been indistinguishable
from the actual $f$ as the data was noise-free and sampled at a large number
of points.
Notice how rapidly the reconstructed $f$ deteriorates from the actual
with increasing $\beta$ and hence size of $u_0$, as the
range condition violation,
which predominantly affected the smaller values of $u$, became stronger.
For sufficiently large $u_0$ this finally affects the entire reconstruction.
Recovery of $\,a(x)\,$ and $\,f(u)\,$ from space-and-time data when $u_0\not=0$
The coefficient $a(x)$ which is reconstructed solely from the final time
values $g(x)=u(x,T)$ is relatively immune to this effect -
until the reconstructed $f$ becomes sufficiently poor that this has an
effect on subsequent iterations of the combined scheme.
Once again, the smaller values of the data $g(x)$ used are most sensitive
to a nontrivial $u_0$ and since our $g(x)$ function is monotonically
increasing away from $x=0$, the values of $a(x)$ nearer this point are
most affected.
The leftmost graphic shows only values for $\beta=5$, $\beta=10$ and $\beta=20$ since
for smaller $\beta$ the reconstructed $a$ is indistinguishable from the actual
$a(x)$ at the resolution of the figure.
However, this does also illustrate the point that the diffusion term
$\nabla\cdot(a\nabla u)$ plays a dominant role in the behaviour of the equation.
The experiment was repeated but with $u_0$ taken to be a narrow Gaussian
of unit fixed height and center $x_0$.
A similar effect to the above was observed:
a greater disparity between the reconstructions and the actual $f$
occurred as $x_0$ approached the boundary $x=1$ where the time-trace data
$h(t) = u(1,t)$ was measured.
This is in keeping with the fact that the initial value is perturbing the
range condition, and this is more pronounced the closer the support of the
perturbation is to the measurement boundary.
In short, nontrivial initial values used either directly or through the
lifting device can be used to extract information on both $a$ and $f$,
but this has a definite limitation which becomes more acute as $u_0(x)$
increases in magnitude and so more significantly affects the range condition.
§ ACKNOWLEDGMENTS
The work of the first author was supported by the Austrian Science Fund (FWF)
under the grant P30054.
The work of the second author was supported in part by the
National Science Foundation through award DMS-1620138.
Moreover, the authors would like to thank both reviewers for their careful
reading of the manuscript and their valuable comments and suggestions that
have led to an improved version of the paper.
[1] (MR511740) [10.1016/0001-8708(78)90130-5]
D. G. Aronson and H. F. Weinberger,
Multidimensional nonlinear diffusion arising in population genetics,
Advances in Mathematics, 30 (1978), 33–76.
[2]
S. A. Arrhenius,
Über die Dissociationswärme und den Einfluß der Temperatur auf den Dissociationsgrad der Elektrolyte,
Z. Phys. Chem., 4 (1889), 96–116.
[3] (MR1383091) [10.1017/CBO9780511530074]
G. A. Baker and P. Graves-Morris,
Padé Approximants,
Cambridge University Press, Cambridge, second edition, 1996.
[4] (MR15185) [10.1007/BF02421600]
G. Borg,
Eine Umkehrung der Sturm-Liouvilleschen Eigenwertaufgabe,
Acta Mathematica, 76 (1946), 1–96.
[5] [10.1002/9781118788295.ch4]
J. W. Cahn and J. E. Hilliard,
Free energy of a nonuniform system. I. Interfacial free energy,
The Journal of Chemical Physics, 28 (1958), 258–267.
[6] (MR1445771) [10.1137/1.9780898719710]
K. Chadan, D. Colton, L. Päivärinta and W. Rundell,
An Introduction to Inverse Scattering and Inverse Spectral Problems,
SIAM Monographs on Mathematical Modeling and Computation, SIAM, 1997.
[7]
E. L. Cussler,
Diffusion: Mass Transfer in Fluid Systems,
Cambridge University Press., 1997.
[8] (MR855754) [10.1007/BF00251803]
C. M. Elliott and Z. Songmu,
On the Cahn-Hilliard equation,
Arch. Rational Mech. Anal, 96 (1986), 339–357.
[9] (MR0181836)
A. Friedman,
Partial Differential Equations of Parabolic Type,
Prentice-Hall, 1964.
[10] (MR0445088)
A. Friedman,
Partial Differential Equations,
Holt, Rinehart and Winston, New York, 1969.
[11] (MR0073805) [10.1090/trans2/001/11]
I. M. Gel'fand and B. M. Levitan,
On the determination of a differential equation from its spectral function,
Amer. Math. Soc. Transl., 1 (1951), 253–291.
[12] (MR1423804)
P. Grindrod,
The Theory and Applications of Reaction-Diffusion Equations: Patterns and Waves,
2$^{nd}$ edition, Oxford Applied Mathematics and Computing Science Series, The Clarendon Press, Oxford University Press, New York, 1996.
[13] (MR1085828) [10.1002/cpa.3160440203]
V. Isakov,
Inverse parabolic problems with the final overdetermination,
Comm. Pure Appl. Math., 44 (1991), 185–209.
[14] (MR2193218)
V. Isakov,
Inverse problems for partial differential equations,
2$^{nd}$ edition, Applied Mathematical Sciences, 127, Springer, New York, 2006.
[15] (MR3975371) [10.1088/1361-6420/ab109e]
B. Kaltenbacher and W. Rundell,
On an inverse potential problem for a fractional reaction-diffusion equation,
Inverse Problems, 35 (2019), 065004.
[16] (MR4019539) [10.1088/1361-6420/ab2aab]
B. Kaltenbacher and W. Rundell,
On the identification of a nonlinear term in a reaction-diffusion equation,
Inverse Problems, 35 (2019), 115007.
[17] (MR4002155) [10.1016/j.jmaa.2019.123475]
B. Kaltenbacher and W. Rundell,
Recovery of multiple coefficients in a reaction-diffusion equation,
J. Math. Anal. Appl., 481 (2019), 123475.
[18]
B. Kaltenbacher and W. Rundell,
The inverse problem of reconstructing reaction-diffusion systems,
Inverse Problems, 36 (2020), 065011.
[19] (MR3839508)
C. Kuttler,
Reaction-diffusion equations and their application on bacterial communication,
in Disease Modelling and Public Health, Handbook of Statistics, Elsevier Science, 2017.
[20] (MR1056055) [10.1137/1032046]
H. A. Levine,
The role of critical exponents in blowup theorems,
SIAM Review, 32 (1990), 262–288.
[21] (MR3012216)
A. Lunardi,
Analytic Semigroups and Optimal Regularity in Parabolic Problems,
Modern Birkhäuser Classics, Springer Basel, 1995.
[22]
E. A. Mason,
Gas, State of Matter,
Encyclopedia Britannica, 2020.
[23] (MR1908418)
J. D. Murray,
Mathematical Biology I,
3$^{rd}$ edition, Interdisciplinary Applied Mathematics, 17, Springer-Verlag, New York, 2002.
[24] (MR534419) [10.1137/0317035]
A. Pierce,
Unique identification of eigenvalues and coefficients in a parabolic problem,
SIAM J. Control Optim., 17 (1979), 494–499.
[25] (MR1108124) [10.1002/num.1690030404]
M. Pilant and W. Rundell,
Iteration schemes for unknown coefficient problems in parabolic equations,
Numer. Methods for P.D.E., 3 (1987), 313–325.
[26] (MR829324) [10.1080/03605308608820430]
M. S. Pilant and W. Rundell,
An inverse problem for a nonlinear parabolic equation,
Comm. Partial Differential Equations, 11 (1986), 445–457.
[27] (MR894477)
J. Pöschel and E. Trubowitz,
Inverse spectral theory,
in Pure and Applied Mathematics, 130, Academic Press, Inc., Boston, MA, 1987.
[28] (MR1166492) [10.1088/0266-5611/8/3/007]
W. Rundell and P. E. Sacks,
The reconstruction of Sturm-Liouville operators,
Inverse Problems, 8 (1992), 457–482.
[29] (MR1106979) [10.1090/S0025-5718-1992-1106979-0]
W. Rundell and P. E. Sacks,
Reconstruction techniques for classical inverse Sturm-Liouville problems,
Math. Comp., 58 (1992), 161–183.
[30] (MR2311614) [10.1007/s11006-006-0204-6]
A. M. Savchuk and A. A. Shkalikov,
On the eigenvalues of the Sturm-Liouville operator with potentials in Sobolev spaces,
Mat. Zametki, 80 (2006), 864–884.
[31] (MR3671483) [10.1093/imamat/hxx004]
Z. Zhang and Z. Zhou,
Recovering the potential term in a fractional diffusion equation,
IMA J. Appl. Math., 82 (2017), 579–600.
Received March 2020; revised May 2020.
|
A blowing-up formula for the intersection cohomology of the moduli of rank $2$ Higgs bundles over a curve with trivial determinant
Sang-Bum Yoo
Department of Mathematics Education, Gongju National University of Education, Gongju-si, Chungcheongnam-do, 32553, Republic of Korea
We prove that a blowing-up formula for the intersection cohomology of the moduli space of rank $2$ Higgs bundles over a curve with trivial determinant holds. As an application, we derive the Poincaré polynomial of the intersection cohomology of the moduli space under a technical assumption.
§ INTRODUCTION
Let $X$ be a smooth complex projective curve of genus $g\ge2$ and let $G$ be $\GL(r,\CC)$ or $\SL(r,\CC)$. Let $\bM_{\Dol}^{d}(G)$ be the moduli space of $G$-Higgs bundles $(E,\phi)$ of rank $r$ and degree $d$ on $X$ (with fixed $\det E$ and traceless $\phi$ in the case $G=\SL(r,\CC)$) and let $\bM_{\B}^{d}(G)$ be the character variety for $G$ defined by
$$\bM_{\B}^{d}(G)=\{(A_{1},B_{1},\cdots,A_{g},B_{g})\in G^{2g}\,|\,[A_{1},B_{1}]\cdots[A_{g},B_{g}]=e^{2\pi\sqrt{-1}d/r}I_{r}\}/\!/G.$$
By the theory of harmonic bundles ([5], [33]), we have a homeomorphism $\bM_{\Dol}^{d}(G)\cong\bM_{\B}^{d}(G)$ as a part of the nonabelian Hodge theory. If $r$ and $d$ are relatively prime, these moduli spaces are smooth and their underlying differentiable manifold is hyperkähler. But the complex structures do not coincide under this homeomorphism.
Motivated by this fact, under the assumption that $r$ and $d$ are relatively prime, there have been several works over the last 30 years calculating invariants of these moduli spaces on both sides. The Poincaré polynomial of the ordinary cohomology is calculated, for $\bM_{\Dol}^{d}(\SL(2,\CC))$ by N. Hitchin in [17], and for $\bM_{\Dol}^{d}(\SL(3,\CC))$ by P. Gothen in [16]. The compactly supported Hodge polynomial and the compactly supported Poincaré polynomial for $\bM_{\Dol}^{d}(\GL(4,\CC))$ can be obtained by the motivic calculation in [11]. By counting the number of points of these moduli spaces over finite fields with large characteristics, the compactly supported Poincaré polynomials for $\bM_{\Dol}^{d}(\GL(r,\CC))$ and $\bM_{\B}^{d}(\GL(r,\CC))$ are obtained in [31]. By using arithmetic methods, T. Hausel and F. Rodriguez-Villegas express the E-polynomial of $\bM_{\B}^{d}(\GL(r,\CC))$ in terms of a simple generating function in [18]. In the same way, M. Mereb calculates the E-polynomial of $\bM_{\B}^{d}(\SL(2,\CC))$ and expresses the E-polynomial of $\bM_{\B}^{d}(\SL(r,\CC))$ in terms of a generating function in [26].
Without the assumption that $r$ and $d$ are relatively prime, there have also been some works calculating invariants of $\bM_{\B}^{d}(\SL(r,\CC))$. For $g=1,2$ and any $d$, explicit formulas for the E-polynomials of $\bM_{\B}^{d}(\SL(2,\CC))$ are obtained by a geometric technique in [24]. The E-polynomial of $\bM_{\B}^{d}(\SL(2,\CC))$ is calculated, for $g=3$ and any $d$ by a further geometric technique in [27], and for any $g$ and $d$ in [28].
When we deal with a singular variety $\bM_{\Dol}^{d}(G)$ under the condition that $r$ and $d$ are not relatively prime, the intersection cohomology is a natural invariant. Our interest is focused on the intersection cohomology of $\bM:=\bM_{\Dol}^0(\SL(2,\CC))$.
For a quasi-projective variety $V$, $IH^{i}(V)$ and $\ic^{\bullet}(V)$ denote the $i$-th intersection cohomology of $V$ of the middle perversity and the complex of sheaves on $V$ whose hypercohomology is $IH^{*}(V)$ respectively. $IP_{t}(V)$ denotes the Poincaré polynomial of $IH^{*}(V)$ defined by
$$IP_{t}(V)=\sum_{i}\dim IH^{i}(V)\,t^{i}.$$
Recently, $IP_{t}(\bM)$ was calculated in [25] using methods different from ours. First of all, the $E$-polynomial of the compactly supported intersection cohomology of $\bM$ was calculated in [9] over a smooth curve of genus $2$ and in [25] over a smooth curve of genus $g\ge 2$. Then the author of [25] proved the purity of $IH^{*}(\bM)$ from the observation of the semiprojectivity of $\bM$. He used the purity of $IH^{*}(\bM)$ and the Poincaré duality for the intersection cohomology (Theorem <ref>) to calculate $IP_{t}(\bM)$.
From now on, $\GL(n,\CC)$, $\SL(n,\CC)$, $\PGL(n,\CC)$, $\O(n,\CC)$ and $\SO(n,\CC)$ will be denoted by $\GL(n)$, $\SL(n)$, $\PGL(n)$, $\O(n)$ and $\SO(n)$ respectively, for simplicity of notation.
§.§ Main result
In this paper, we prove that a blowing-up formula for $IH^{*}(\bM)$ holds.
It is known that $\bM$ is a good quotient $\bR/\!/\SL(2)$ for some quasi-projective variety $\bR$ (Theorem <ref>, Theorem <ref>). $\bM$ is decomposed into
$$\bM=\bM^{s}\sqcup(T^{*}J/\ZZ_{2}-\ZZ_{2}^{2g})\sqcup\ZZ_{2}^{2g},$$
where $\bM^{s}$ denotes the stable locus of $\bM$ and $J:=\Pic^0(X)$. The singularity along the locus $\ZZ_{2}^{2g}$ is the quotient $\Upsilon^{-1}(0)/\!/\SL(2)$, where $\Upsilon^{-1}(0)$ is the affine cone over a reduced irreducible complete intersection of three quadrics in $\PP(\CC^{2g}\otimes sl(2))$ and $\SL(2)$ acts on $\CC^{2g}\otimes sl(2)$ as the adjoint representation. The singularity along the locus $T^{*}J/\ZZ_{2}-\ZZ_{2}^{2g}$ is $\Psi^{-1}(0)/\!/\CC^{*}$, where $\Psi^{-1}(0)$ is the affine cone over a smooth quadric in $\PP((\CC^{g-1})^{4})$ and $\CC^{*}$ acts on $(\CC^{g-1})^{4}$ with weights $-2,2,2$ and $-2$. Let us consider Kirwan's algorithm consisting of three blowing-ups $\bK:=\bR_{3}^{s}/\SL(2)\to\bR_{2}^{s}/\SL(2)\to\bR_{1}^{ss}/\!/\SL(2)\to\bR/\!/\SL(2)=\bM$ induced from the composition of blowing-ups $\pi_{\bR_{1}}:\bR_{1}\to\bR$ along the locus $\ZZ_{2}^{2g}$, $\pi_{\bR_{2}}:\bR_{2}\to\bR_{1}^{ss}$ along the strict transform $\Sigma$ of the locus over $T^{*}J/\ZZ_{2}-\ZZ_{2}^{2g}$ and $\bR_{3}\to\bR_{2}^{s}$ along the locus of points with stabilizers larger than the center $\ZZ_{2}$ in $\SL(2)$ (Section <ref>).
We also have local pictures of Kirwan's algorithm mentioned above. For any $x\in\ZZ_{2}^{2g}$, we have $\pi_{\bR_{1}}^{-1}(x)=\PP\Upsilon^{-1}(0)$ which is a subset of $\PP\Hom(sl(2),\CC^{2g})$ and $\pi_{\bR_{1}}^{-1}(x)\cap\Sigma$ is the locus of rank $1$ matrices (Section <ref>). Thus the strict transform of $\PP\Upsilon^{-1}(0)/\!/\PGL(2)$ in the second blowing-up of Kirwan's algorithm is just the blowing-up
$$Bl_{\PP\Hom_{1}}\PP\Upsilon^{-1}(0)^{ss}/\!/\PGL(2)\to\PP\Upsilon^{-1}(0)/\!/\PGL(2)$$
along the image of the locus of rank $1$ matrices in $\PP\Upsilon^{-1}(0)/\!/\PGL(2)$.
In this setup, we have the following main result.
Let $I_{2g-3}$ be the incidence variety given by
$$I_{2g-3}=\{(p,H)\in\PP^{2g-3}\times\breve{\PP}^{2g-3}|p\in H\}.$$
Then we have
(1) $\dim IH^{i}(\bR_{1}^{ss}/\!/\SL(2))=\dim IH^{i}(\bM)$
$$+2^{2g}\dim IH^{i}(\PP\Upsilon^{-1}(0)/\!/\PGL(2))-2^{2g}\dim IH^{i}(\Upsilon^{-1}(0)/\!/\PGL(2))$$
for all $i\ge0$.
(2) $\dim H^{i}(\bR_{2}^{s}/\SL(2))=\dim IH^{i}(\bR_{1}^{ss}/\!/\SL(2))$
$$+\sum_{p+q=i}\dim[H^{p}(\widetilde{T^{*}J})\otimes H^{t(q)}(I_{2g-3})]^{\ZZ_{2}}$$
for all $i\ge0$, where $t(q)=q-2$ for $q\le\dim I_{2g-3}=4g-7$ and $t(q)=q$ otherwise; here $\alpha:\widetilde{T^{*}J}\to T^{*}J$ is the blowing-up along $\ZZ_{2}^{2g}$.
(3) $\dim IH^{i}(Bl_{\PP\Hom_{1}}\PP\Upsilon^{-1}(0)^{ss}/\!/\SL(2))=\dim IH^{i}(\PP\Upsilon^{-1}(0)/\!/\SL(2))$
$$+\sum_{p+q=i}\dim[H^{p}(\PP^{2g-1})\otimes H^{t(q)}(I_{2g-3})]^{\ZZ_{2}}$$
for all $i\ge0$, where $t(q)=q-2$ for $q\le\dim I_{2g-3}=4g-7$ and $t(q)=q$ otherwise.
It is an essential process to apply this blowing-up formula to calculate $IP_{t}(\bM)$.
§.§ Method of proof of Theorem <ref>
We follow the same steps as in the proof of <cit.>, but we give a proof of each step because our setup is different from that of [21]. We start with the following formulas coming from the decomposition theorem (Proposition <ref>-(1)) and the same argument as in the proof of <cit.>:
$$\dim IH^{i}(\bR_{1}^{ss}/\!/\SL(2))=\dim IH^{i}(\bM)+\dim IH^{i}(\tU_{1})-\dim IH^{i}(U_{1}),$$
$$\dim IH^{i}(\bR_{2}^{s}/\SL(2))=\dim IH^{i}(\bR_{1}^{ss}/\!/\SL(2))+\dim IH^{i}(\tU_{2})-\dim IH^{i}(U_{2})$$
and
$$\dim IH^{i}(Bl_{\PP\Hom_{1}}\PP\Upsilon^{-1}(0)^{ss}/\!/\SL(2))=\dim IH^{i}(\PP\Upsilon^{-1}(0)/\!/\SL(2))+\dim IH^{i}(\tU)-\dim IH^{i}(U),$$
where $U_{1}$ is a disjoint union of sufficiently small open neighborhoods of each point of $\ZZ_{2}^{2g}$ in $\bM$, $\tU_{1}$ is the inverse image of the first blowing-up, $U_{2}$ is an open neighborhood of the strict transform of $T^{*}J/\ZZ_{2}$ in $\bR_{1}/\!/\SL(2)$, $\tU_{2}$ is the inverse image of the second blowing-up, $U$ is an open neighborhood of the locus of rank $1$ matrices in $\PP\Upsilon^{-1}(0)/\!/\PGL(2)$ and $\tU$ is the inverse image of the blowing-up map $Bl_{\PP\Hom_{1}}\PP\Upsilon^{-1}(0)^{ss}/\!/\PGL(2)\to\PP\Upsilon^{-1}(0)/\!/\PGL(2)$. By Proposition <ref>, we can see that $U_{1},\tU_{1},U_{2},\tU_{2},U$ and $\tU$ are analytically isomorphic to relevant normal cones respectively. By Section <ref> and Lemma <ref>, these relevant normal cones are described as free $\ZZ_{2}$-quotients of nice fibrations with concrete expressions of bases and fibers. By the calculations of the intersection cohomologies of fibers (Lemma <ref> and Lemma <ref>) and applying the perverse Leray spectral sequences (Proposition <ref>-(2)) of intersection cohomologies associated to these fibrations, we complete the proof.
§.§ Towards a formula for the Poincaré polynomial of $IH^{*}(\bM)$
For a topological space $W$ on which a reductive group $G$ acts, $H_{G}^{i}(W)$ and $P_{t}^{G}(W)$ denote the $i$-th equivariant cohomology of $W$ and the Poincaré series of $H_{G}^{*}(W)$ defined by
$$P_{t}^{G}(W)=\sum_{i}\dim H_{G}^{i}(W)\,t^{i}.$$
We start with the formula for $P_{t}^{\SL(2)}(\bR)$ that comes from that of [8]. Then we use a standard argument to obtain the following.
* $P_{t}^{\SL(2)}(\bR_{1})=P_{t}^{\SL(2)}(\bR)+2^{2g}(P_{t}^{\SL(2)}(\PP\Upsilon^{-1}(0))-P_{t}(\BSL(2)))$.
* $P_{t}^{\SL(2)}(\bR_{2})=P_{t}^{\SL(2)}(\bR_{1})+P_{t}^{\SL(2)}(E_2)-P_{t}^{\SL(2)}(\Sig)$.
Then we set up the following conjecture.
* $P_{t}^{\SL(2)}(\bR_{1}^{ss})=P_{t}^{\SL(2)}(\bR)+2^{2g}(P_{t}^{\SL(2)}(\PP\Upsilon^{-1}(0)^{ss})-P_{t}(\BSL(2)))$.
* $P_{t}^{\SL(2)}(\bR_{2}^{s})=P_{t}^{\SL(2)}(\bR_{1}^{ss})+P_{t}^{\SL(2)}(E_2^{ss})-P_{t}^{\SL(2)}(\Sig)$.
We use this conjectural blowing-up formula for the equivariant cohomology to get $P_{t}^{\SL(2)}(\bR_{2}^{s})$ from $P_{t}^{\SL(2)}(\bR)$. Since $\bR_{2}^{s}/\SL(2)$ has at worst orbifold singularities (Section <ref>), $P_{t}^{\SL(2)}(\bR_{2}^{s})=P_{t}(\bR_{2}^{s}/\SL(2))$ (Section <ref>). Now we use the blowing-up formula for the intersection cohomology (Theorem <ref>) to get $IP_{t}(\bM)$ from $P_{t}(\bR_{2}^{s}/\SL(2))$.
Assume that Conjecture <ref> holds. Then an explicit closed formula for $IP_{t}(\bM)$ follows, which is a polynomial of degree $6g-6$.
This conjectural formula for $IP_{t}(\bM)$ coincides with that of [25].
§.§ Notations
Throughout this paper, $X$ denotes a smooth complex projective curve of genus $g\ge2$ and $K_X$ the canonical bundle of $X$.
§.§ Acknowledgements
I would like to thank Young-Hoon Kiem for suggesting the problem, and for his help and encouragement. This work is based on and develops the second topic in my doctoral dissertation [38].
§ HIGGS BUNDLES
In this section, we introduce two kinds of constructions of the moduli space of Higgs bundles on $X$. For details, see [17], [34] and [35].
§.§ Simpson's construction
An $\SL(2)$-Higgs bundle on $X$ is a pair of a rank $2$ vector bundle $E$ with trivial determinant on $X$ and a section $\phi\in H^0(X,\End_{0}(E)\otimes K_{X})$, where $\End(E)$ denotes the bundle of endomorphisms of $E$ and $\End_0(E)$ the subbundle of traceless endomorphisms of $\End(E)$. We must impose a notion of stability to construct a separated moduli space.
An $\SL(2)$-Higgs bundle $(E,\phi)$ on $X$ is stable (respectively, semistable) if for any $\phi$-invariant line subbundle $F$ of $E$, we have
$$\deg(F)<0\text{ (respectively, }\le\text{)}.$$
Let $N$ be a sufficiently large integer and $p=2N+2(1-g)$. We list C.T. Simpson's results to construct a moduli space of $\SL(2)$-Higgs bundles.
There is a quasi-projective scheme $Q$ representing the moduli functor which parametrizes the isomorphism classes of triples $(E,\phi,\alpha)$ where $(E,\phi)$ is a semistable $\SL(2)$-Higgs bundle and $\alpha$ is an isomorphism $\alpha:\CC^{p}\to H^0(X,E\otimes\cO_{X}(N))$.
Fix $x\in X$. Let $\tQ$ be the frame bundle at $x$ of the universal object restricted to $x$. Then the action of $\GL(p)$ lifts to $\tQ$ and $\SL(2)$ acts on the fibers of $\tQ\to Q$ in an obvious fashion. Every point of $\tQ$ is stable with respect to the free action of $\GL(p)$ and
$$\bR:=\tQ/\GL(p)$$
represents the moduli functor which parametrizes triples $(E,\phi,\beta)$ where $(E,\phi)$ is a semistable $\SL(2)$-Higgs bundle and $\beta$ is an isomorphism $\beta:E|_{x}\to\CC^{2}$.
Every point in $\bR$ is semistable with respect to the action of $\SL(2)$. The closed orbits in $\bR$ correspond to polystable $\SL(2)$-Higgs bundles, i.e. $(E,\phi)$ is stable or $(E,\phi)=(L,\psi)\oplus(L^{-1},-\psi)$ for $L\in\Pic^0(X)$ and $\psi\in H^0(K_{X})$. The set $\bR^{s}$ of stable points with respect to the action of $\SL(2)$ is exactly the locus of stable $\SL(2)$-Higgs bundles.
The good quotient $\bR/\!/\SL(2)$ is $\bM$.
$\bR$ and $\bM$ are both irreducible normal quasi-projective varieties.
§.§ Hitchin's construction
Let $E$ be a complex Hermitian vector bundle of rank $2$ and degree $0$ on $X$. Let $\cA$ be the space of traceless connections on $E$ compatible with the Hermitian metric. $\cA$ can be identified with the space of holomorphic structures on $E$ with trivial determinant. Let
$$\cB=\{(A,\phi)\in\cA\times\Om^0(\End_0(E)\otimes K_{X}):d''_{A}\phi=0\}.$$
Let $\cG$ (respectively, $\cG^{\CC}$) be the gauge group of $E$ with structure group $SU(2)$ (respectively, $SL(2)$). These groups act on $\cB$ by
$$g\cdot(A,\phi)=(g^{-1}A'' g+g^{*}A'(g^{*})^{-1}+g^{-1}d'' g-(d' g^{*})(g^{*})^{-1},g^{-1}\phi g),$$
where $A''$ and $A'$ denote the $(0,1)$ and $(1,0)$ parts of $A$ respectively.
The cotangent bundle $T^{*}\cA\cong\cA\times\Om^0(\End_0(E)\otimes K_{X})$ admits a hyperkähler structure preserved by the action of $\cG$, with the moment maps for this action
$$\begin{matrix}\mu_{1}=F_{A}+[\phi,\phi^{*}]\\
\mu_{2}=-i(d''_{A}\phi+d'_{A}\phi^{*})\\
\mu_{3}=-d''_{A}\phi+d'_{A}\phi^{*}.\end{matrix}$$
$\mu_{\CC}=\mu_{2}+i\mu_{3}=-2id''_{A}\phi$ is the complex moment map. Then $\cB=\mu_{\CC}^{-1}(0)$.
Consider the hyperkähler quotient
$$(\mu_{1}^{-1}(0)\cap\mu_{\CC}^{-1}(0))/\cG.$$
Let $\cB^{ss}=\{(A,\phi)\in\cB:((E,d''_{A}),\phi)\text{ is semistable}\}$.
§ INTERSECTION COHOMOLOGY THEORY
In this section, we introduce some basics on the intersection cohomology ([13], [14]) and the equivariant intersection cohomology ([1], [12]) of a quasi-projective complex variety. Let $V$ be a quasi-projective complex variety of pure dimension $n$ throughout this section.
§.§ Intersection cohomology
It is well-known that $V$ has a Whitney stratification
$$V=V_{n}\supseteq V_{n-1}\supseteq\cdots\supseteq V_{0}$$
which is embedded into a topological pseudomanifold of dimension $2n$ with filtration
$$W_{2n}\supseteq W_{2n-1}\supseteq\cdots\supseteq W_{0},$$
where $V_{j}$ are closed subvarieties such that $V_{j}-V_{j-1}$ is either empty or a nonsingular quasi-projective variety of pure dimension $j$ and $W_{2k}=W_{2k+1}=V_{k}$.
Let $\bar{p}=(p_{2},p_{3},\cdots,p_{2n})$ be a perversity. For a triangulation $T$ of $V$, $(C_{\bullet}^{T}(V),\partial)$ denotes the chain complex of chains with respect to $T$ with coefficients in $\QQ$. We define $I^{\bar{p}}C_{i}^{T}(V)$ to be the subspace of $C_{i}^{T}(V)$ consisting of those chains $\xi$ such that
$$\dim_{\RR}(|\xi|\cap V_{n-c})\le i-2c+p_{2c}$$
and
$$\dim_{\RR}(|\partial\xi|\cap V_{n-c})\le i-1-2c+p_{2c}.$$
Let $IC_{i}^{\bar{p}}(V)=\dss\varinjlim_{T}I^{\bar{p}}C_{i}^{T}(V)$. Then $(IC_{\bullet}^{\bar{p}}(V),\partial)$ is a chain complex. The $i$-th intersection homology of $V$ of perversity $\bar{p}$, denoted by $IH_{i}^{\bar{p}}(V)$, is the $i$-th homology group of the chain complex $(IC_{\bullet}^{\bar{p}}(V),\partial)$. The $i$-th intersection cohomology of $V$ of perversity $\bar{p}$, denoted by $IH_{\bar{p}}^{i}(V)$, is the $i$-th homology group of the chain complex $(IC_{\bullet}^{\bar{p}}(V)^{\vee},\partial^{\vee})$.
When we consider a chain complex $(IC_{\bullet}^{cl,\bar{p}}(V),\partial)$ of chains with closed support instead of usual chains, we can define the $i$-th intersection homology with closed support (respectively, intersection cohomology with closed support) of $V$ of perversity $\bar{p}$, denoted by $IH_{i}^{cl,\bar{p}}(V)$ (respectively, $IH_{cl,\bar{p}}^{i}(V)$).
There is an alternative way to define the intersection homology and cohomology with closed support. Let $\ic_{\bar{p}}^{-i}(V)$ be the sheaf given by $U\mapsto IC_{i}^{cl,\bar{p}}(U)$ for each open subset $U$ of $V$. Then $\ic_{\bar{p}}^{\bullet}(V)$ is a complex of sheaves as an object in the bounded derived category $D^{b}(V)$. Then we have $IH_{i}^{cl,\bar{p}}(V)=\cH^{-i}(\ic_{\bar{p}}^{\bullet}(V))$ and $IH_{cl,\bar{p}}^{i}(V)=\cH^{i-2\dim(V)}(\ic_{\bar{p}}^{\bullet}(V))$, where $\cH^{i}(\bA^{\bullet})$ is the $i$-th hypercohomology of a complex of sheaves $\bA^{\bullet}$.
When $\bar{p}$ is the middle perversity $\bar{m}$, $IH_{i}^{\bar{m}}(V)$, $IH_{\bar{m}}^{i}(V)$, $IH_{i}^{cl,\bar{m}}(V)$, $IH_{cl,\bar{m}}^{i}(V)$ and $\ic_{\bar{m}}^{\bullet}(V)$ are denoted by $IH_{i}(V)$, $IH^{i}(V)$, $IH_{i}^{cl}(V)$, $IH_{cl}^{i}(V)$ and $\ic^{\bullet}(V)$ respectively.
§.§ Equivariant intersection cohomology
Assume that a compact connected algebraic group $G$ acts on $V$ algebraically. For the universal principal bundle $\E G\to \B G$, we have the quotient $V\times_{G}\E G$ of $V\times \E G$ by the diagonal action of $G$. Let us consider the following diagram
$$\xymatrix{V&V\times \E G\ar[l]_{p}\ar[r]^{q}&V\times_{G}\E G}.$$
The equivariant derived category of $V$, denoted by $D_{G}^{b}(V)$, is defined as follows:
* An object is a triple $(F_{V},\bar{F},\beta)$, where $F_{V}\in D^{b}(V)$, $\bar{F}\in D^{b}(V\times_{G}\E G)$ and $\beta:p^{*}(F_{V})\to q^{*}(\bar{F})$ is an isomorphism in $D^{b}(V\times \E G)$.
* A morphism $\alpha:(F_{V},\bar{F},\beta)\to(G_{V},\bar{G},\gamma)$ is a pair $\alpha=(\alpha_{V},\bar{\alpha})$, where $\alpha_{V}:F_{V}\to G_{V}$ and $\bar{\alpha}:\bar{F}\to\bar{G}$ such that $\beta\circ p^{*}(\alpha_{V})=q^{*}(\bar{\alpha})\circ\beta$.
$\ic_{G,\bar{p}}^{\bullet}(V)$ (respectively, $\QQ_{V}^{G}$) denotes $(\ic_{\bar{p}}^{\bullet}(V),\ic_{\bar{p}}^{\bullet}(V\times_{G}\E G),\beta)$ (respectively, $(\QQ_{V},\QQ_{V\times_{G}\E G},\id)$) as an object of $D_{G}^{b}(V)$. The $i$-th equivariant cohomology of $V$ can be obtained by $H_{G}^{i}(V)=\cH^{-i}(\QQ_{V\times_{G}\E G})$. The equivariant intersection cohomology of $V$ of perversity $\bar{p}$, denoted by $IH_{G,\bar{p}}^{*}(V)$, is defined by $IH_{G,\bar{p}}^{i}(V):=\cH^{-i}(\ic_{\bar{p}}^{\bullet}(V\times_{G}\E G))$.
When $\bar{p}$ is the middle perversity $\bar{m}$, $IH_{G,\bar{m}}^{i}(V)$ and $\ic_{G,\bar{m}}^{\bullet}(V)$ are denoted by $IH_{G}^{i}(V)$ and $\ic_{G}^{\bullet}(V)$ respectively.
The equivariant cohomology and the equivariant intersection cohomology can be described as a limit of a projective limit system coming from a sequence of finite dimensional submanifolds of $\E G$. Let us consider a sequence of finite dimensional submanifolds $\E G_{0}\subset \E G_{1}\subset\cdots\subset \E G_{n}\subset\cdots$ of $\E G$, where $G$ acts on all of $\E G_{n}$ freely, $\E G_{n}$ are $n$-acyclic, $\E G_{n}\subset \E G_{n+1}$ is an embedding of a submanifold, $\dim \E G_{n}<\dim \E G_{n+1}$ and $\dss \E G=\bigcup_{n}\E G_{n}$. Since $G$ is connected, such a sequence exists by <cit.>. Then we have a sequence of finite dimensional subvarieties $V\times_{G}\E G_{0}\subset V\times_{G}\E G_{1}\subset\cdots\subset V\times_{G}\E G_{n}\subset\cdots$ of $V\times_{G}\E G$. Hence we have $\dss H_{G}^{*}(V)=\varprojlim_{n}H^{*}(V\times_{G}\E G_{n})$ and $\dss IH_{G,\bar{p}}^{*}(V)=\varprojlim_{n}IH_{\bar{p}}^{*}(V\times_{G}\E G_{n})$.
§.§ The generalized Poincaré duality and the decomposition theorem
In this subsection, we state two important theorems. One is the generalized Poincaré duality and the other is the decomposition theorem.
If $\bar{p}+\bar{q}=\bar{t}$, then there is a non-degenerate bilinear form
$$IH_{i}^{\bar{p}}(V)\times IH_{2\dim(V)-i}^{cl,\bar{q}}(V)\to\QQ.$$
(1) Suppose that $\varphi:W\to V$ is a projective morphism of quasi-projective varieties. Then there is an isomorphism
$$R\varphi_{*}\ic^{\bullet}(W)\cong\bigoplus_{i}\leftidx{^p}\cH^{i}(R\varphi_{*}\ic^{\bullet}(W))[-i]$$
in the derived category $D^{b}(V)$ and closed subvarieties $V_{i,\alpha}$ of $V$, local systems $L_{i,\alpha}$ on the non-singular parts $(V_{i,\alpha})_{nonsing}$ of $V_{i,\alpha}$ for each $i$ such that there is a canonical isomorphism
$$\leftidx{^p}\cH^{i}(R\varphi_{*}\ic^{\bullet}(W))\cong\bigoplus_{\alpha}\ic^{\bullet}(V_{i,\alpha},L_{i,\alpha})$$
in $Perv(V)$, where $\leftidx{^p}\cH$ is the perverse cohomology functor and $\ic^{\bullet}(V_{i,\alpha},L_{i,\alpha})$ is the complex of sheaves of intersection chains with coefficients in $L_{i,\alpha}$.
(2) Suppose that $\varphi:W\to V$ is a projective $G$-equivariant morphism of quasi-projective varieties. Then there exist closed subvarieties $V_{\alpha}$ of $V$, $G$-equivariant local systems $L_{\alpha}$ on the non-singular parts $(V_{\alpha})_{nonsing}$ of $V_{\alpha}$ and integers $l_{\alpha}$ such that there is an isomorphism
$$R\varphi_{*}\ic_{G}^{\bullet}(W)\cong\bigoplus_{\alpha}\ic_{G}^{\bullet}(V_{\alpha},L_{\alpha})[l_{\alpha}]$$
in the derived category $D_{G}^{b}(V)$, where $\ic_{G}^{\bullet}(V_{\alpha},L_{\alpha})$ is the complex of equivariant intersection cohomology sheaves with coefficients in $L_{\alpha}$.
There are three special important consequences of the decomposition theorem.
(1) Suppose that $\varphi:W\to V$ is a resolution of singularities. Then $\ic^{\bullet}(V)$ (respectively, $IH^{*}(V)$) is a direct summand of $R\varphi_{*}\ic^{\bullet}(W)$ (respectively, $IH^{*}(W)$).
(2) Suppose that $\varphi:W\to V$ is a projective surjective morphism. Then there is a perverse Leray spectral sequence $E_{r}^{ij}$ converging to $IH^{i+j}(W)$ with $E_2$ term $E_{2}^{ij}=IH^{i}(V,\leftidx{^p}\cH^{j}R\varphi_{*}\ic^{\bullet}(W))$. The decomposition theorem for $\varphi$ is equivalent to the degeneration of $E_{r}^{ij}$ at the $E_{2}$ term.
(3) Suppose that $\varphi:W\to V$ is a $G$-equivariant resolution of singularities. Then $\ic_{G}^{\bullet}(V)$ (respectively, $IH_{G}^{*}(V)$) is a direct summand of $R\varphi_{*}\ic_{G}^{\bullet}(W)$ (respectively, $IH_{G}^{*}(W)$).
* Applying the decomposition theorem to $\varphi$ and to the shifted constant sheaf $\QQ_{W}[\dim W]$, we get the result. The details of the proof can be found in <cit.>.
* The degeneration follows from the decomposition
$$R\varphi_{*}\ic^{\bullet}(W)\cong\bigoplus_{j}\leftidx{^p}\cH^{j}(R\varphi_{*}\ic^{\bullet}(W))[-j]$$
that comes from Theorem <ref>-(1).
* We know that $\ic_{G}^{\bullet}(V)=(\ic^{\bullet}(V),\ic^{\bullet}(V\times_{G}\E G),\alpha)$ and $\ic_{G}^{\bullet}(W)=(\ic^{\bullet}(W),\ic^{\bullet}(W\times_{G}\E G),\beta)$. It follows from item (1) that $\ic^{\bullet}(V)$ is a direct summand of $R\varphi_{*}\ic^{\bullet}(W)$ and that $\ic^{\bullet}(V\times_{G}\E G_{n})$ is a direct summand of $R\varphi_{*}\ic^{\bullet}(W\times_{G}\E G_{n})$ for all $n$. Since $\ic^{\bullet}(V\times_{G}\E G)=\dss\varprojlim_{n}\ic^{\bullet}(V\times_{G}\E G_{n})$ and $\ic^{\bullet}(W\times_{G}\E G)=\dss\varprojlim_{n}\ic^{\bullet}(W\times_{G}\E G_{n})$, $\ic^{\bullet}(V\times_{G}\E G)$ is a direct summand of $R\varphi_{*}\ic^{\bullet}(W\times_{G}\E G)$.
Let $i:\ic^{\bullet}(V)\hookrightarrow R\varphi_{*}\ic^{\bullet}(W)$ and $\bar{i}:\ic^{\bullet}(V\times_{G}\E G)\hookrightarrow R\varphi_{*}\ic^{\bullet}(W\times_{G}\E G)$ be the inclusions from the decomposition theorem. It is easy to see that the following diagram
$$\xymatrix{&p_{V}^{*}\ic^{\bullet}(V)\ar[r]^{\alpha}\ar[d]_{p^{*}(i)}&q_{V}^{*}\ic^{\bullet}(V\times_{G}\E G)\ar[d]^{q^{*}(\bar{i})}&\\
R\varphi_{*}p_{W}^{*}\ic^{\bullet}(W)\ar@^{=}[r]&p_{V}^{*}R\varphi_{*}\ic^{\bullet}(W)\ar[r]_{R\varphi_{*}(\beta)\quad\quad}&q_{V}^{*}R\varphi_{*}\ic^{\bullet}(W\times_{G}\E G)\ar@^{=}[r]&R\varphi_{*}q_{W}^{*}\ic^{\bullet}(W\times_{G}\E G)}$$
commutes, where $p_{V}:V\times \E G\to V$ (respectively, $p_{W}:W\times \E G\to W$) is the projection onto $V$ (respectively, $W$) and $q_{V}:V\times \E G\to V\times_{G}\E G$ (respectively, $q_{W}:W\times \E G\to W\times_{G}\E G$) is the quotient.
Assume that $V$ is smooth in Proposition <ref>-(2). Then
§ KIRWAN'S DESINGULARIZATION OF $\BM$
In this section, we briefly explain how $\bM$ can be desingularized by three blowing-ups via Kirwan's algorithm introduced in [20]. For details, see [22] and [30].
We first consider the loci of type (i) of $(L,0)\oplus(L,0)$ with $L\cong L^{-1}$ in $\bM\setminus\bM^{s}$ and in $\bR\setminus\bR^{s}$, where $\bR^{s}$ is the stable locus of $\bR$. The loci of type (i) in $\bM$ and in $\bR$ are both isomorphic to the set of $\ZZ_2$-fixed points $\ZZ_{2}^{2g}$ in $J:=\Pic^0(X)$ by the involution $L\mapsto L^{-1}$. The singularity of the locus $\ZZ_{2}^{2g}$ of type (i) in $\bM$ is the quotient
$$\Upsilon^{-1}(0)/\!/\SL(2),$$
where $\Upsilon:[H^0(K_{X})\oplus H^1(\cO_{X})]\otimes sl(2)\to H^1(K_{X})\otimes sl(2)$ is the quadratic map given by the Lie bracket of $sl(2)$ coupled with the perfect pairing $H^0(K_{X})\oplus H^1(\cO_{X})\to H^1(K_{X})$ and the $\SL(2)$-action on $\Upsilon^{-1}(0)$ is induced from the adjoint representation $\SL(2)\to\Aut(sl(2))$.
Next we consider the loci of type (iii) of $(L,\psi)\oplus (L^{-1},-\psi)$ with $(L,\psi)\ncong (L^{-1},-\psi)$ in $\bM\setminus\bM^{s}$ and in $\bR\setminus\bR^s$. It is clear that the
locus of type (iii) in $\bM$ is isomorphic to
$$J\times_{\ZZ_2}H^0(K_X)-\ZZ_2^{2g}\cong T^*J/\ZZ_2-\ZZ_2^{2g}$$
where $\ZZ_2$ acts on $J$ by $L\mapsto L^{-1}$ and on $H^0(K_X)$
by $\psi\mapsto -\psi$. The locus of type (iii) in $\bR$ is a $\PP
\SL(2)/\CC^*$-bundle over $T^*J/\ZZ_2-\ZZ_2^{2g}$ and in particular
it is smooth. The singularity along the
locus of type (iii) in $\bM$ is the quotient
\[ \Psi^{-1}(0)/\!/\CC^*,\]
where $\Psi:[H^0(L^{-2}K_X)\oplus H^1(L^2)]\oplus[H^0(L^{2}K_X)\oplus H^1(L^{-2})]\to H^1(K_X)$ is the quadratic map given by the sum of perfect pairings $H^0(L^{-2}K_X)\oplus H^1(L^2)\to H^1(K_X)$ and $H^0(L^{2}K_X)\oplus H^1(L^{-2})\to H^1(K_X)$ over $(L,\psi)\oplus (L^{-1},-\psi)\in T^*J/\ZZ_2-\ZZ_2^{2g}$ and the $\CC^{*}$-action on $\Psi^{-1}(0)$ is induced from the $\CC^{*}$-action on $[H^0(L^{-2}K_X)\oplus H^1(L^2)]\oplus[H^0(L^{2}K_X)\oplus H^1(L^{-2})]$ given by
$$t\cdot(a,b,c,d)=(t^{-2}a,\,t^{2}b,\,t^{2}c,\,t^{-2}d).$$
Since we have identical singularities as in [30], we can follow K.G. O'Grady's arguments to construct the Kirwan's desingularization $\bK$ of $\bM$. Let $\bR_{1}$ be the blowing-up of $\bR$ along the locus $\ZZ_{2}^{2g}$ of type (i). Let $\bR_{2}$ be the blowing-up of $\bR_{1}^{ss}$ along the strict transform $\Sigma$ of the locus of type (iii), where $\bR_{1}^{ss}$ is the locus of semistable points in $\bR_{1}$. Let $\bR_{2}^{ss}$ (respectively, $\bR_{2}^{s}$) be the locus of semistable (respectively, stable) points in $\bR_{2}$. Then it follows from the same argument as in <cit.> that
(a) $\bR_{2}^{ss}=\bR_{2}^{s}$,
(b) $\bR_{2}^{s}$ is smooth.
In particular, $\bR_{2}^{s}/\SL(2)$ has at worst orbifold singularities. When $g=2$, this is smooth. When $g\ge3$, we blow up $\bR_{2}^{s}$ along the locus of points with stabilizers larger than the center $\ZZ_{2}$ of $\SL(2)$ to obtain a variety $\bR_{3}$ such that the orbit space $\bK:=\bR_{3}^{s}/\SL(2)$ is a smooth variety obtained by blowing up $\bM$ along $\ZZ_{2}^{2g}$, along the strict transform of $T^*J/\ZZ_2-\ZZ_2^{2g}$ and along a nonsingular subvariety contained in the strict transform of the exceptional divisor of the first blowing-up. $\bK$ is called Kirwan's desingularization of $\bM$.
Throughout this paper, $\pi_{\bR_{1}}:\bR_{1}\to\bR$ (respectively, $\pi_{\bR_{2}}:\bR_{2}\to\bR_{1}^{ss}$) denotes the first blowing-up map (respectively, the second blowing-up map). $\overline{\pi}_{\bR_{1}}:\bR_{1}^{ss}/\!/\SL(2)\to\bR/\!/\SL(2)$ and $\overline{\pi}_{\bR_{2}}:\bR_{2}^{ss}/\!/\SL(2)\to\bR_{1}^{ss}/\!/\SL(2)$ denote maps induced from $\pi_{\bR_{1}}$ and $\pi_{\bR_{2}}$ respectively.
§ LOCAL PICTURES IN KIRWAN'S ALGORITHM ON $\BR$
In this section, we list local pictures that appear in Kirwan's algorithm on $\bR$ for later use. For details, see <cit.>.
We first observe that $\pi_{\bR_{1}}^{-1}(x)=\PP\Upsilon^{-1}(0)$ for any $x\in\ZZ_{2}^{2g}$. We identify $\HH^{g}$ with $T_{x}(T^{*}J)=H^{1}(\cO_X)\oplus H^{0}(K_X)$ for any $x\in T^{*}J$, where $\HH$ is the division algebra of quaternions. Since the adjoint representation gives an identification $\PGL(2)\cong \SO(sl(2))$, $\PGL(2)$ acts on both $\Upsilon^{-1}(0)$ and $\PP\Upsilon^{-1}(0)$. Since $\PGL(2)=\SL(2)/\{\pm\id\}$ and the action of $\{\pm\id\}$ on $\Upsilon^{-1}(0)$ and $\PP\Upsilon^{-1}(0)$ are trivial,
$$\Upsilon^{-1}(0)/\!/\SL(2)=\Upsilon^{-1}(0)/\!/\PGL(2)\text{ and }\PP\Upsilon^{-1}(0)/\!/\SL(2)=\PP\Upsilon^{-1}(0)/\!/\PGL(2).$$
We have an explicit description of semistable points of $\PP\Upsilon^{-1}(0)$ with respect to the $\PGL(2)$-action as following.
A point $[\varphi]\in\PP\Upsilon^{-1}(0)$ is $\PGL(2)$-semistable if and only if
$$\rk\varphi\begin{cases}\geq 2,&\text{or}\\
=1&\text{and }[\varphi]\in\PGL(2)\cdot\PP\{\left(\begin{array}{cc}\lambda&0\\0&-\lambda\end{array}\right)|\lambda\in\HH^{g}\setminus\{O\}\}.
\end{cases}$$
Let $\Hom^{\omega}(sl(2),\HH^{g}):=\{\varphi:sl(2)\rightarrow\HH^{g}|\varphi^{*}\omega=0\}$, where $\omega$ is the Serre duality pairing on $\HH^{g}$. Let $(m,n)=4\Tr(mn)$ be the Killing form on $sl(2)$. The Killing form gives isomorphisms
$$\HH^{g}\otimes sl(2)\cong\Hom(sl(2),\HH^{g})\text{ and }sl(2)\cong\wedge^{2}sl(2)^{\vee}.$$
By the above identification, $\Upsilon:\Hom(sl(2),\HH^{g})\to\wedge^{2}sl(2)^{\vee}$ is given by $\varphi\mapsto\varphi^{*}\omega$. Then we have
$$\Hom_{k}(sl(2),\HH^{g}):=\{\varphi\in\Hom(sl(2),\HH^{g})|\rk\varphi\leq k\}$$
We have a description of
the points of $E_{1}\cap\Sigma$ as follows.
Let $x\in\ZZ_2^{2g}$. Then
$$\pi_{\bR_{1}}^{-1}(x)\cap\Sigma=\PP\Hom_{1}(sl(2),\HH^{g})^{ss},$$
where $\PP\Hom_{1}(sl(2),\HH^{g})^{ss}$ denotes the set of semistable points of $\PP\Hom_{1}(sl(2),\HH^{g})$ with respect to the $\PGL(2)$-action.
Assume that $\varphi\in\Hom_{1}(sl(2),\HH^{g})$. Since the Serre duality pairing is skew-symmetric, we can choose bases $\{e_{1},\cdots,e_{2g}\}$ of $\HH^{g}$ and $\{v_{1},v_{2},v_{3}\}$ of $sl(2)$ such that $\varphi=e_{1}\otimes v_{1}$ and so that
$$\langle e_{i},e_{j}\rangle=\begin{cases}1 &\text{if }i=2q-1,\ j=2q,\ q=1,\cdots,g,\\
-1&\text{if }i=2q,\ j=2q-1,\ q=1,\cdots,g,\\
0&\text{otherwise.}\end{cases}$$
Every element in $\Hom(sl(2),\HH^{g})$ can be written as
$\sum_{i,j}Z_{ij}e_{i}\otimes v_{j}$. Then we have a description of the normal cone $C_{\PP\Hom_1(sl(2),\HH^{g})}\PP\Upsilon^{-1}(0)$ to $\PP\Hom_{1}(sl(2),\HH^{g})$ in $\PP\Upsilon^{-1}(0)$.
Let $[\varphi]\in\PP\Hom_1(sl(2),\HH^{g})$ and let
$\omega^{\varphi}$ be the bilinear form induced by $\omega$ on
$\im\varphi^{\perp}/\im\varphi$. There is a
$\st([\varphi])$-equivariant isomorphism
$$(C_{\PP\Hom_{1}(sl(2),\HH^{g})}\PP\Upsilon^{-1}(0))_{[\varphi]}\cong\Hom^{\omega^{\varphi}}(\ker\varphi,\im\varphi^{\perp}/\im\varphi).$$
Following the idea of the proof of <cit.>, both sides are defined by the equation
$$\sum_{2\leq q\leq g}(Z_{2q-1,2}Z_{2q,3}-Z_{2q,2}Z_{2q-1,3})=0$$
under the choice of bases as above.
We now explain how $\st([\varphi])$ acts on $(C_{\PP\Hom_{1}(sl(2),\HH^{g})}\PP\Upsilon^{-1}(0))_{[\varphi]}$. If we add the condition that
$$(v_{1},v_{i})=-\delta_{1i},\qquad (v_{j},v_{j})=0\ \ (j=2,3),\qquad (v_{2},v_{3})=1,$$
and $v_{1}\wedge v_{2}\wedge v_{3}$ is the volume form, where
$\wedge$ corresponds to the Lie bracket in $sl(2)$, then $\st([\varphi])=\O(\ker\varphi)=\O(2)$ is generated by
$$\{\theta_{\lambda}:=\left(\begin{array}{ccc}1&0&0\\0&\lambda&0\\0&0&\lambda^{-1}\end{array}\right)|\lambda\in\CC^{*}\}\text{ and }\tau:=\left(\begin{array}{ccc}-1&0&0\\0&0&1\\0&1&0\end{array}\right)$$
as a subgroup of $\SO(sl(2))$. $\O(2)$ can be also realized as the
subgroup of $\PGL(2)$ generated by
$$\SO(2)=\big\{\theta_{\lambda}:=\left(\begin{array}{cc}\lambda&0\\0&\lambda^{-1}\end{array}\right)|\lambda\in\CC^{*}\big\}/\{\pm\id\},\quad \tau=\left(\begin{array}{cc}0&1\\1&0\end{array}\right).$$
The action of $\st([\varphi])$ on
$(C_{\PP\Hom_{1}(sl(2),\HH^{g})}\PP\Upsilon^{-1}(0))_{[\varphi]}$
is given by
$$\theta_{\lambda}(\sum_{i=3}^{2g}(Z_{i,2}e_{i}\otimes v_{2}+Z_{i,3}e_{i}\otimes v_{3}))=\sum_{i=3}^{2g}(\lambda Z_{i,2}e_{i}\otimes v_{2}+\lambda^{-1}Z_{i,3}e_{i}\otimes v_{3}),$$
$$\tau(\sum_{i=3}^{2g}(Z_{i,2}e_{i}\otimes v_{2}+Z_{i,3}e_{i}\otimes v_{3}))=\sum_{i=3}^{2g}(-Z_{i,3}e_{i}\otimes v_{2}-Z_{i,2}e_{i}\otimes v_{3}).$$
Let us consider the blowing-up
$$Bl_{\PP\Hom_{1}}\PP\Upsilon^{-1}(0)^{ss}\to\PP\Upsilon^{-1}(0)^{ss}$$
of $\PP\Upsilon^{-1}(0)^{ss}$ along $\PP\Hom_{1}(sl(2),\HH^{g})^{ss}$ with the exceptional divisor $E$, where $\PP\Upsilon^{-1}(0)^{ss}$ is the locus of semistable points of $\PP\Upsilon^{-1}(0)$ with respect to the $\PGL(2)$-action. It is obvious that $(\pi_{\bR_{1}}\circ\pi_{\bR_{2}})^{-1}(x)=Bl_{\PP\Hom_{1}}\PP\Upsilon^{-1}(0)^{ss}$ for any $x\in\ZZ_{2}^{2g}$.
$Bl_{\PP\Hom_{1}}\PP\Upsilon^{-1}(0)^{ss}$ is smooth.
§ BLOWING-UP FORMULA FOR INTERSECTION COHOMOLOGY
In this section, we prove that a blowing-up formula for the intersection cohomology holds in Kirwan's algorithm introduced in Section <ref>.
Let $E_{1}$ (respectively, $E_{2}$) be the exceptional divisor of $\pi_{\bR_{1}}$ (respectively, $\pi_{\bR_{2}}$). Let $\cC_{1}$ be the normal cone to $\ZZ_{2}^{2g}$ in $\bR$, $\tcC_{1}$ the normal cone to $E_{1}^{ss}:=E_{1}\cap\bR_{1}^{ss}$ in $\bR_{1}^{ss}$, $\cC_{2}$ the normal cone to $\Sigma$ in $\bR_{1}$, $\tcC_{2}$ the normal cone to $E_{2}^{ss}:=E_{2}\cap\bR_{2}^{ss}$ in $\bR_{2}^{ss}$, $\cC$ the normal cone to $\PP\Hom_{1}(sl(2),\HH^{g})^{ss}$ in $\PP\Upsilon^{-1}(0)^{ss}$ and $\tcC$ the normal cone to $E^{ss}:=E\cap(Bl_{\PP\Hom_{1}}\PP\Upsilon^{-1}(0)^{ss})^{ss}$ in $(Bl_{\PP\Hom_{1}}\PP\Upsilon^{-1}(0)^{ss})^{ss}$, where $(Bl_{\PP\Hom_{1}}\PP\Upsilon^{-1}(0)^{ss})^{ss}$ is the locus of semistable points of $Bl_{\PP\Hom_{1}}\PP\Upsilon^{-1}(0)^{ss}$ with respect to the lifted $\PGL(2)$-action. Note that
$$(Bl_{\PP\Hom_{1}}\PP\Upsilon^{-1}(0)^{ss})^{ss}=(Bl_{\PP\Hom_{1}}\PP\Upsilon^{-1}(0)^{ss})^{s},$$
where $(Bl_{\PP\Hom_{1}}\PP\Upsilon^{-1}(0)^{ss})^{s}$ is the locus of stable points of $Bl_{\PP\Hom_{1}}\PP\Upsilon^{-1}(0)^{ss}$ with respect to the $\PGL(2)$-action (<cit.>). Then we have the following formulas.
(1) $\dim IH^{i}(\bR_{1}^{ss}/\!/\SL(2))=\dim IH^{i}(\bR/\!/\SL(2))$
$$+\dim IH^{i}(\tcC_{1}/\!/\SL(2))-\dim IH^{i}(\cC_1/\!/\SL(2))$$
$$=\dim IH^{i}(\bR/\!/\SL(2))+2^{2g}\dim IH^{i}(Bl_{0}\Upsilon^{-1}(0)/\!/\PGL(2))-2^{2g}\dim IH^{i}(\Upsilon^{-1}(0)/\!/\PGL(2))$$
for all $i\ge0$, where $Bl_{0}\Upsilon^{-1}(0)$ is the blowing-up of $\Upsilon^{-1}(0)$ at the vertex.
(2) $\dim IH^{i}(\bR_{2}^{s}/\SL(2))=\dim IH^{i}(\bR_{1}^{ss}/\!/\SL(2))$
$$+\dim IH^{i}(\tcC_{2}/\!/\SL(2))-\dim IH^{i}(\cC_2/\!/\SL(2))$$
for all $i\ge0$.
(3) $\dim IH^{i}((Bl_{\PP\Hom_{1}}\PP\Upsilon^{-1}(0)^{ss})^{s}/\!/\SL(2))=\dim IH^{i}(\PP\Upsilon^{-1}(0)/\!/\SL(2))$
$$+\dim IH^{i}(\tcC/\!/\SL(2))-\dim IH^{i}(\cC/\!/\SL(2))$$
for all $i\ge0$.
For the proof, we need to review a useful result by C.T. Simpson. Let $A^i$ (respectively, $A^{i,j}$) be the sheaf of smooth $i$-forms
(respectively, $(i,j)$-forms) on $X$. For a polystable Higgs bundle
$(E,\phi)$, consider the complex
\begin{equation}\label{e4.1}
0\to \End_{0}(E)\otimes A^0\to \End_{0}(E)\otimes A^1\to \End_{0}(E)\otimes
A^2\to 0\end{equation}
whose differential is given by
$D''=\overline{\partial}+\phi$. Because $A^1=A^{1,0}\oplus
A^{0,1}$ and $\phi$ is of type $(1,0)$, we have an exact sequence
of complexes with (<ref>) in the middle
$$\xymatrix{ & 0\ar[d] & 0\ar[d] & 0\ar[d] & \\
0\ar[r] & 0\ar[r]\ar[d] & \End_{0}(E)\otimes A^{1,0}\ar[r]^{\overline{\partial}}\ar[d] & \End_{0}(E)\otimes A^{1,1}\ar[r]\ar[d]^= & 0\\
0\ar[r] & \End_{0}(E)\otimes A^0\ar[r]^{D''}\ar[d]_= & \End_{0}(E)\otimes A^1\ar[r]^{D''}\ar[d] & \End_{0}(E)\otimes A^2\ar[r]\ar[d] & 0\\
0\ar[r] & \End_{0}(E)\otimes A^{0,0}\ar[r]^{\overline{\partial}}\ar[d] & \End_{0}(E)\otimes A^{0,1}\ar[r]\ar[d] & 0\ar[r]\ar[d] & 0\\
 & 0 & 0 & 0 }$$
This gives us a long exact sequence
\[\xymatrix{ 0\ar[r] & T^0\ar[r] &H^0(\End_{0}(E))\ar[r]^(.42){[\phi,-]}
& H^0(\End_{0}(E)\otimes K_X)\ar[r]& }\]
\[\xymatrix{ \ar[r]& T^1\ar[r] &H^1(\End_{0}(E))\ar[r]^(.42){[\phi,-]}
& H^1(\End_{0}(E)\otimes K_X)\ar[r] &T^2\ar[r] &0 }
\]
where $T^i$ is the $i$-th cohomology of (<ref>). The Zariski tangent space of $\bM$ at polystable $(E,\phi)$ is isomorphic to $T^{1}$.
Let $C$ be the quadratic cone in $T^1$
defined by the map $T^1\to T^2$ which sends an $\End_{0}(E)$-valued 1-form $\eta$ to
$[\eta,\eta]$. Let $y=(E,\phi,\beta:E|_{x}\to\CC^{2})\in \bR$ be a point with
closed orbit and $\bar{y}\in\bM$ the image of $y$. Then the formal completion ${(\bR,y)}^\wedge$ is
isomorphic to the formal completion ${(C\times
\mathfrak{h}^\perp,0)}^\wedge$ where $\mathfrak{h}^\perp$ is the
perpendicular space to the image of $T^0\to H^0(\End_{0}(E))\to
sl(2)$. Furthermore, if we let $Y$ be the étale slice at $y$ of
the $\SL(2)$-orbit in $\bR$, then
$(Y,y)^\wedge \cong (C,0)^\wedge$ and $(\bM,\bar{y})^\wedge=(Y/\!/\st(y),\bar{y})^\wedge\cong(C/\!/\st(y),v)^\wedge$ where $\st(y)$ is the stabilizer of $y$ and $v$ is the cone point of $C$.
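For orientation, we record a standard dimension count which is not part of the argument above: if $(E,\phi)$ is stable then $T^{0}=T^{2}=0$, and Riemann-Roch applied to the two-term complex $\End_{0}(E)\to\End_{0}(E)\otimes K_{X}$ gives
$$\dim T^{1}=-\chi(\End_{0}(E))+\chi(\End_{0}(E)\otimes K_{X})=-(3-3g)+(3g-3)=6g-6,$$
the expected dimension of $\bM$.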
* Let $U_{x}$ be a sufficiently small open neighborhood of $x\in\ZZ_{2}^{2g}$ in $\bR/\!/\SL(2)$, let $\displaystyle U_{1}=\sqcup_{x\in\ZZ_{2}^{2g}}U_{x}$ and $\tU_{1}=\overline{\pi}_{\bR_{1}}^{-1}(U_{1})$. By the same argument as in the proof of <cit.>, we have
$$\dim IH^{i}(\bR_{1}/\!/\SL(2))=\dim IH^{i}(\bR/\!/\SL(2))+\dim IH^{i}(\tU_{1})-\dim IH^{i}(U_{1})$$
for all $i\ge0$. By <cit.> and Proposition <ref>, there is an analytic isomorphism $U_{1}\cong\cC_1/\!/\SL(2)$. Since $\tcC_1/\!/\SL(2)$ is naturally isomorphic to the blowing-up of $\cC_1/\!/\SL(2)$ along $\ZZ_{2}^{2g}$, we also have an analytic isomorphism $\tU_{1}\cong\tcC_1/\!/\SL(2)$. Since $\cC_1/\!/\SL(2)$ (respectively, $\tcC_1/\!/\SL(2)$) is a disjoint union of $2^{2g}$ copies of $\Upsilon^{-1}(0)/\!/\PGL(2)$ (respectively, of $Bl_{0}\Upsilon^{-1}(0)/\!/\PGL(2)$),
we get the formula.
* Let $U_{2}$ be a sufficiently small open neighborhood of the strict transform of $T^*J/\ZZ_2$ in $\bR_{1}/\!/\SL(2)$ and let $\tU_{2}=\overline{\pi}_{\bR_{2}}^{-1}(U_{2})$. By the same argument as in the proof of <cit.>, we have
$$\dim IH^{i}(\bR_{2}/\SL(2))=\dim IH^{i}(\bR_{1}/\!/\SL(2))+\dim IH^{i}(\tU_{2})-\dim IH^{i}(U_{2})$$
for all $i\ge0$. By <cit.> and Proposition <ref> and Hartogs's Extension Theorem, there is an analytic isomorphism $U_{2}\cong\cC_2/\!/\SL(2)$. Since $\tcC_2/\!/\SL(2)$ is naturally isomorphic to the blowing-up of $\cC_2/\!/\SL(2)$ along the strict transform of $T^*J/\ZZ_2$ in $\bR_{1}/\!/\SL(2)$, we also have an analytic isomorphism $\tU_{2}\cong\tcC_2/\!/\SL(2)$. Hence we get the formula.
* Let $\overline{\pi}:Bl_{\PP\Hom_{1}}\PP\Upsilon^{-1}(0)^{ss}/\!/\PGL(2)\to\PP\Upsilon^{-1}(0)/\!/\PGL(2)$ be the map induced from $\pi$. Since $\cC=\cC_{2}|_{\Sigma\cap\PP\Upsilon^{-1}(0)^{ss}}$ and $\tcC=\tcC_{2}|_{E_{2}\cap Bl_{\PP\Hom_{1}}\PP\Upsilon^{-1}(0)^{ss}}$, it follows from the argument of the proof of item (2) that $\cC/\!/\SL(2)$ (respectively, $\tcC/\!/\SL(2)$) can be identified with an open neighborhood $U$ of $\PP\Hom_{1}(sl(2),\HH^{g})/\!/\PGL(2)$ (respectively, with $\overline{\pi}^{-1}(U)$). Again by the same argument as in the proof of <cit.>, we get the formula.
We give a computable formula from Lemma <ref> by analyzing $Bl_{0}\Upsilon^{-1}(0)/\!/\PGL(2)$, $\cC_{2}/\!/\SL(2)$, $\tcC_{2}/\!/\SL(2)$, $\cC/\!/\SL(2)$ and $\tcC/\!/\SL(2)$ further.
We first give explicit geometric descriptions for $\cC_{2}/\!/\SL(2)$, $\tcC_{2}/\!/\SL(2)$, $\cC/\!/\SL(2)$ and $\tcC/\!/\SL(2)$. Let $\alpha:\widetilde{T^{*}J}\to T^{*}J$ be the blowing-up along $\ZZ_{2}^{2g}$. Let $(\cL,\psi_{\cL})$ be the pull-back to $\widetilde{T^{*}J}\times X$ of the universal pair on $T^{*}J\times X$ by $\alpha\times 1$ and let $p:\widetilde{T^{*}J}\times X\to\widetilde{T^{*}J}$ be the projection onto the first factor.
(1) $\cC_{2}|_{\Sigma\setminus E_{1}}/\!/\SL(2)$ is a free $\ZZ_{2}$-quotient of a $\Psi^{-1}(0)/\!/\CC^{*}$-bundle over $\widetilde{T^{*}J}\setminus\alpha^{-1}(\ZZ_{2}^{2g})$.
(2) $\tcC_{2}|_{\Sigma\setminus E_{1}}/\!/\SL(2)$ is a free $\ZZ_{2}$-quotient of a $Bl_{0}\Psi^{-1}(0)/\!/\CC^{*}$-bundle over $\widetilde{T^{*}J}\setminus\alpha^{-1}(\ZZ_{2}^{2g})$, where $Bl_{0}\Psi^{-1}(0)$ is the blowing-up of $\Psi^{-1}(0)$ at the vertex.
(3) $\cC_{2}|_{\Sigma\cap E_{1}}/\!/\SL(2)$ is a free $\ZZ_{2}$-quotient of a $\Hom^{\om_{\varphi}}(\ker\varphi,\im\varphi^{\perp}/\im\varphi)/\!/\CC^{*}$-bundle over $\alpha^{-1}(\ZZ_{2}^{2g})$, where $[\varphi]\in\Sigma\cap E_{1}$.
(4) $\tcC_{2}|_{\Sigma\cap E_{1}}/\!/\SL(2)$ is a free $\ZZ_{2}$-quotient of a $Bl_{0}\Hom^{\om_{\varphi}}(\ker\varphi,\im\varphi^{\perp}/\im\varphi)/\!/\CC^{*}$-bundle over $\alpha^{-1}(\ZZ_{2}^{2g})$, where $[\varphi]\in\Sigma\cap E_{1}$ and $Bl_{0}\Hom^{\om_{\varphi}}(\ker\varphi,\im\varphi^{\perp}/\im\varphi)$ is the blowing-up of $\Hom^{\om_{\varphi}}(\ker\varphi,\im\varphi^{\perp}/\im\varphi)$ at the vertex.
(5) $\cC/\!/\SL(2)$ is a free $\ZZ_{2}$-quotient of a $\Hom^{\om_{\varphi}}(\ker\varphi,\im\varphi^{\perp}/\im\varphi)/\!/\CC^{*}$-bundle over $\PP^{2g-1}$, where $[\varphi]\in\PP\Hom_{1}(sl(2),\HH^{g})^{ss}$.
(6) $\tcC/\!/\SL(2)$ is a free $\ZZ_{2}$-quotient of a $Bl_{0}\Hom^{\om_{\varphi}}(\ker\varphi,\im\varphi^{\perp}/\im\varphi)/\!/\CC^{*}$-bundle over $\PP^{2g-1}$, where $[\varphi]\in\PP\Hom_{1}(sl(2),\HH^{g})^{ss}$.
Let $x$ be a point of $X$.
* Consider the principal $\PGL(2)$-bundle $q:\PP\Isom(\cO_{\widetilde{T^{*}J}}^{2},\cL|_{x}\oplus\cL^{-1}|_{x})\to\widetilde{T^{*}J}$.
$\PGL(2)$ acts on $\cO_{\widetilde{T^{*}J}}^{2}$ and $\O(2)$ acts on $\cL|_{x}\oplus\cL^{-1}|_{x}$. By the same argument as in the proof of <cit.>,
$\cC_{2}|_{\Sigma\setminus E_{1}}/\!/\SL(2)$ is the quotient of $q^{*}\Psi_{\cL_{III}}^{-1}(0)/\!/\O(2)$ by the $\PGL(2)$-action, where $\cL_{III}=\cL|_{\widetilde{T^{*}J}\setminus\alpha^{-1}(\ZZ_{2}^{2g})}$ and
$$\Psi_{\cL_{III}}:[p_{*}(\cL_{III}^{-2}K_X)\oplus R^{1}p_{*}(\cL_{III}^2)]\oplus[p_{*}(\cL_{III}^{2}K_X)\oplus R^{1}p_{*}(\cL_{III}^{-2})]\to R^{1}p_{*}(K_X)$$
is the sum of perfect pairings $p_{*}(\cL_{III}^{-2}K_X)\oplus R^{1}p_{*}(\cL_{III}^2)\to R^{1}p_{*}(K_X)$ and $p_{*}(\cL_{III}^{2}K_X)\oplus R^{1}p_{*}(\cL_{III}^{-2})\to R^{1}p_{*}(K_X)$. Since the actions of $\PGL(2)$ and $\O(2)$ commute and $q$ is the principal $\PGL(2)$-bundle,
$$\cC_{2}|_{\Sigma\setminus E_{1}}/\!/\SL(2)=\Psi_{\cL_{III}}^{-1}(0)/\!/\O(2)=\frac{\Psi_{\cL_{III}}^{-1}(0)/\!/\SO(2)}{\O(2)/\SO(2)}=\frac{\Psi_{\cL_{III}}^{-1}(0)/\!/\CC^{*}}{\ZZ_{2}}.$$
Hence we get the description.
* Since $\tcC_{2}|_{\Sigma\setminus E_{1}}/\!/\SL(2)$ is isomorphic to the blowing-up of $\cC_{2}|_{\Sigma\setminus E_{1}}/\!/\SL(2)$ along $T^{*}J/\ZZ_{2}\setminus\ZZ_{2}^{2g}$, it is isomorphic to $\displaystyle\frac{\widetilde{\Psi_{\cL_{III}}^{-1}(0)}/\!/\CC^{*}}{\ZZ_{2}}$, where $\widetilde{\Psi_{\cL_{III}}^{-1}(0)}$ is the blowing-up of $\Psi_{\cL_{III}}^{-1}(0)$ along $\widetilde{T^{*}J}\setminus\alpha^{-1}(\ZZ_{2}^{2g})\cong T^{*}J\setminus\ZZ_{2}^{2g}$.
* Note that $E_{1}$ is a $2^{2g}$ disjoint union of $\PP\Upsilon^{-1}(0)$. It follows from Proposition <ref> that $\Sigma\cap E_{1}$ is a $2^{2g}$ disjoint union of $\PP\Hom_{1}(sl(2),\HH^{g})^{ss}$. By Proposition <ref>, we have
$$\PP\Hom_{1}(sl(2),\HH^{g})/\!/\PGL(2)\cong Z/\!/\O(2)=Z_{1}=\PP^{2g-1},$$
where $Z=Z_{1}\cup Z_{2}\cup Z_{3}$, $Z^{ss}$ is the set of semistable points of $Z$ for the action of $\O(2)$, $Z_{1}=\PP\{v_{1}\otimes\HH^{g}\}$, $Z_{2}=\PP\{v_{2}\otimes\HH^{g}\}$ and $Z_{3}=\PP\{v_{3}\otimes\HH^{g}\}$ for the basis $\{v_{1},v_{2},v_{3}\}$ of $sl(2)$ chosen in Section <ref>. Since $\cC_{2}|_{Z^{ss}}$ is a $\Hom^{\om_{\varphi}}(\ker\varphi,\im\varphi^{\perp}/\im\varphi)$-bundle over $Z^{ss}$ by Proposition <ref>, $\cC_{2}|_{Z^{ss}}/\!/\SO(2)$ is a $\Hom^{\om_{\varphi}}(\ker\varphi,\im\varphi^{\perp}/\im\varphi)/\!/\CC^{*}$-bundle over $Z/\!/\SO(2)=Z_{1}=\PP^{2g-1}$. Since $\alpha^{-1}(\ZZ_{2}^{2g})$ is a $2^{2g}$ disjoint union of $\PP^{2g-1}$, we get the description.
* Since $\tcC_{2}|_{\Sigma\cap E_{1}}/\!/\SL(2)$ is isomorphic to the blowing-up of $\cC_{2}|_{\Sigma\cap E_{1}}/\!/\SL(2)$ along $2^{2g}$ disjoint union of $\PP\Hom_{1}(sl(2),\HH^{g})/\!/\PGL(2)\cong\PP^{2g-1}$, it is isomorphic to $2^{2g}$ disjoint union of $\displaystyle\frac{\widetilde{\cC_{2}|_{Z^{ss}}}/\!/\SO(2)}{\ZZ_{2}}$, where $\widetilde{\cC_{2}|_{Z^{ss}}}$ is the blowing-up of $\cC_{2}|_{Z^{ss}}$ along $Z^{ss}$.
* Since $\cC=\cC_{2}|_{\Sigma\cap\PP\Upsilon^{-1}(0)^{ss}}$, we get the description from (3).
* Since $\tcC=\tcC_{2}|_{E_{2}\cap Bl_{\PP\Hom_{1}}\PP\Upsilon^{-1}(0)^{ss}}$, we get the description from (4).
We next explain how to compute the terms
$$\dim IH^{i}(Bl_{0}\Upsilon^{-1}(0)/\!/\PGL(2))-\dim IH^{i}(\Upsilon^{-1}(0)/\!/\PGL(2)),$$
$$\dim IH^{i}(\tcC_{2}/\!/\SL(2))-\dim IH^{i}(\cC_2/\!/\SL(2))$$
and
$$\dim IH^{i}(\tcC/\!/\SL(2))-\dim IH^{i}(\cC/\!/\SL(2))$$
that appear in Lemma <ref>. We start with the following technical lemma.
Let $V$ be a complex variety on which a finite group $F$ acts. Then
$$IH^{*}(V/F)\cong IH^{*}(V)^{F}$$
where $IH^{*}(V)^{F}$ denotes the
invariant part of $IH^{*}(V)$ under the action of $F$.
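As a toy illustration of this lemma (our aside, not taken from the text): let $V=\CC^{*}$ with $F=\ZZ_{2}$ acting by $z\mapsto z^{-1}$, so that $V/F\cong\CC$ via $z\mapsto z+z^{-1}$. Then $IH^{*}(V)=H^{*}(S^{1})$, the involution acts by $-1$ on $H^{1}$, and
$$IH^{*}(V/F)\cong IH^{*}(\CC)\cong\QQ\cong H^{*}(S^{1})^{\ZZ_{2}},$$
exactly as the lemma predicts.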
Now recall that $\Upsilon^{-1}(0)/\!/\SL(2)=\Upsilon^{-1}(0)/\!/\PGL(2)$ and $\PP\Upsilon^{-1}(0)/\!/\SL(2)=\PP\Upsilon^{-1}(0)/\!/\PGL(2)$ from section <ref>.
We can compute $IH^{*}(\Upsilon^{-1}(0)/\!/\SL(2))$ (respectively, $IH^{*}(\Psi^{-1}(0)/\!/\CC^*)$ and $IH^{*}(\Hom^{\om_{\varphi}}(\ker\varphi,\im\varphi^{\perp}/\im\varphi)/\!/\CC^*)$) in terms of $IH^{*}(\PP\Upsilon^{-1}(0)/\!/\SL(2))$ (respectively, $IH^{*}(\PP\Psi^{-1}(0)/\!/\CC^*)$ and $IH^{*}(\PP\Hom^{\om_{\varphi}}(\ker\varphi,\im\varphi^{\perp}/\im\varphi)/\!/\CC^*)$).
In order to explain this, we need the following lemmas. The first lemma shows that the Kirwan maps on the fibers of the normal cones and exceptional divisors are surjective.
(1) The Kirwan map
$$IH_{\SL(2)}^{*}(\PP\Upsilon^{-1}(0)^{ss})\rightarrow IH^{*}(\PP\Upsilon^{-1}(0)/\!/\SL(2))$$
is surjective.
(2) The Kirwan map
$$IH_{\SL(2)}^{*}(\Upsilon^{-1}(0))\rightarrow IH^{*}(\Upsilon^{-1}(0)/\!/\SL(2))$$
is surjective.
(3) The Kirwan map
$$H_{\CC^*}^{*}(\PP\Psi^{-1}(0)^{ss})\rightarrow IH^{*}(\PP\Psi^{-1}(0)/\!/\CC^*)$$
is surjective.
(4) The Kirwan map
$$IH_{\CC^*}^{*}(\Psi^{-1}(0))\rightarrow IH^{*}(\Psi^{-1}(0)/\!/\CC^*)$$
is surjective.
(5) The Kirwan map
$$H_{\CC^*}^{*}(\PP\Hom^{\om_{\varphi}}(\ker\varphi,\im\varphi^{\perp}/\im\varphi)^{ss})\rightarrow IH^{*}(\PP\Hom^{\om_{\varphi}}(\ker\varphi,\im\varphi^{\perp}/\im\varphi)/\!/\CC^*)$$
is surjective.
(6) The Kirwan map
$$IH_{\CC^*}^{*}(\Hom^{\om_{\varphi}}(\ker\varphi,\im\varphi^{\perp}/\im\varphi))\rightarrow IH^{*}(\Hom^{\om_{\varphi}}(\ker\varphi,\im\varphi^{\perp}/\im\varphi)/\!/\CC^*)$$
is surjective.
* Consider the quotient map $f:\PP\Upsilon^{-1}(0)^{ss}\to\PP\Upsilon^{-1}(0)/\!/\SL(2)$. In <cit.>, Bernstein and Lunts define a functor $Qf_{*}$ that extends the pushforward of sheaves $f_{*}$.
By the same arguments as those of <cit.>, we can obtain morphisms
$$\kappa_{\PP\Upsilon^{-1}(0)}:Qf_{*}\ic^{\bullet}_{\SL(2)}(\PP\Upsilon^{-1}(0)^{ss})\to\ic^{\bullet}(\PP\Upsilon^{-1}(0)/\!/\SL(2))[3]$$
and
$$\lambda_{\PP\Upsilon^{-1}(0)}:\ic^{\bullet}(\PP\Upsilon^{-1}(0)/\!/\SL(2))[3]\to Qf_{*}\ic^{\bullet}_{\SL(2)}(\PP\Upsilon^{-1}(0)^{ss})$$
such that $\kappa_{\PP\Upsilon^{-1}(0)}\circ\lambda_{\PP\Upsilon^{-1}(0)}=\id$. $\lambda_{\PP\Upsilon^{-1}(0)}$ induces a map
$$IH^{*}(\PP\Upsilon^{-1}(0)/\!/\SL(2))\to IH_{\SL(2)}^{*}(\PP\Upsilon^{-1}(0)^{ss})$$
which is an inclusion.
Hence $\kappa_{\PP\Upsilon^{-1}(0)}$ induces a map
$$\tilde{\kappa}_{\PP\Upsilon^{-1}(0)}:IH_{\SL(2)}^{*}(\PP\Upsilon^{-1}(0)^{ss})\to IH^{*}(\PP\Upsilon^{-1}(0)/\!/\SL(2))$$
which is split by the inclusion $IH^{*}(\PP\Upsilon^{-1}(0)/\!/\SL(2))\to IH_{\SL(2)}^{*}(\PP\Upsilon^{-1}(0)^{ss})$.
* Let $R:=\CC[T_0,T_1,\cdots,T_{6g-1}]$. For an $\SL(2)$-invariant ideal $I\subset R$ generated by three quadratic homogeneous polynomials in $R$ defining $\Upsilon^{-1}(0)$, we can write $\Upsilon^{-1}(0)=\spec(R/I)$.
Let $\overline{\Upsilon^{-1}(0)}$ be the Zariski closure of
$\Upsilon^{-1}(0)$ in $\PP^{6g}$. Since the homogenization of
$I$ equals $I$, we can write
$$\overline{\Upsilon^{-1}(0)}=\proj(R[T]/I\cdot R[T])$$
where $\SL(2)$ acts trivially on the variable $T$. Thus
$$\Upsilon^{-1}(0)/\!/\SL(2)=\spec(R^{\SL(2)}/I\cap R^{\SL(2)})$$
and
$$\overline{\Upsilon^{-1}(0)}/\!/\SL(2)=\proj(R[T]/I\cdot R[T])^{\SL(2)}.$$
Since $\SL(2)$ acts trivially on the variable $T$,
$$\overline{\Upsilon^{-1}(0)}/\!/\SL(2)=\proj(R^{\SL(2)}[T]/(I\cap R^{\SL(2)})\cdot R^{\SL(2)}[T]).$$
Hence we have an open immersion
$$\Upsilon^{-1}(0)/\!/\SL(2)\hookrightarrow\overline{\Upsilon^{-1}(0)}/\!/\SL(2)$$
given by $\mathfrak{p}\mapsto\mathfrak{p}^{hom}$, where $\mathfrak{p}^{hom}$ is the homogenization of $\mathfrak{p}$.
Note that
$$\overline{\Upsilon^{-1}(0)}/\!/\SL(2)\setminus\Upsilon^{-1}(0)/\!/\SL(2)=\proj(R^{\SL(2)}[T]/(I\cap R^{\SL(2)})\cdot R^{\SL(2)}[T]+((T)\cap R^{\SL(2)}[T]))$$
$$\cong\proj(R^{\SL(2)}/I\cap R^{\SL(2)})=\PP\Upsilon^{-1}(0)/\!/\SL(2)$$
where $(T)$ is the ideal of $R[T]$ generated by $T$.
Consider the quotient map $f:\overline{\Upsilon^{-1}(0)}^{ss}\to\overline{\Upsilon^{-1}(0)}/\!/\SL(2)$. In <cit.>, Bernstein and Lunts define a functor $Qf_{*}$ that extends the pushforward of sheaves $f_{*}$.
By the same arguments as those of <cit.>, we can obtain morphisms
$$\kappa_{\overline{\Upsilon^{-1}(0)}}:Qf_{*}\ic^{\bullet}_{\SL(2)}(\overline{\Upsilon^{-1}(0)}^{ss})\to\ic^{\bullet}(\overline{\Upsilon^{-1}(0)}/\!/\SL(2))[3]$$
and
$$\lambda_{\overline{\Upsilon^{-1}(0)}}:\ic^{\bullet}(\overline{\Upsilon^{-1}(0)}/\!/\SL(2))[3]\to Qf_{*}\ic^{\bullet}_{\SL(2)}(\overline{\Upsilon^{-1}(0)}^{ss})$$
such that $\kappa_{\overline{\Upsilon^{-1}(0)}}\circ\lambda_{\overline{\Upsilon^{-1}(0)}}=\id$. $\lambda_{\overline{\Upsilon^{-1}(0)}}$ induces a map
$$IH^{*}(\overline{\Upsilon^{-1}(0)}/\!/\SL(2))\to IH_{\SL(2)}^{*}(\overline{\Upsilon^{-1}(0)}^{ss})$$
which is an inclusion.
Hence $\kappa_{\overline{\Upsilon^{-1}(0)}}$ induces a map
$$\tilde{\kappa}_{\overline{\Upsilon^{-1}(0)}}:IH_{\SL(2)}^{*}(\overline{\Upsilon^{-1}(0)}^{ss})\to IH^{*}(\overline{\Upsilon^{-1}(0)}/\!/\SL(2))$$
which is split by the inclusion $IH^{*}(\overline{\Upsilon^{-1}(0)}/\!/\SL(2))\to IH_{\SL(2)}^{*}(\overline{\Upsilon^{-1}(0)}^{ss})$.
Consider the following commutative diagram:
\begin{equation}\label{Seq-Cpx}\xymatrix{\vdots\ar[d]&\vdots\ar[d]\\IH_{\SL(2)}^{i-2}(\PP\Upsilon^{-1}(0)^{ss})\ar[r]^{\tilde{\kappa}_{\PP\Upsilon^{-1}(0)}}\ar[d]&IH^{i-2}(\PP\Upsilon^{-1}(0)/\!/\SL(2))\ar[d]\\
\vdots&\vdots}\end{equation}
Vertical sequences are Gysin sequences and $\tilde{\kappa}_{\Upsilon^{-1}(0)}$ is induced from $\tilde{\kappa}_{\PP\Upsilon^{-1}(0)}$ and $\tilde{\kappa}_{\overline{\Upsilon^{-1}(0)}}$. Since
$\tilde{\kappa}_{\PP\Upsilon^{-1}(0)}$ and
$\tilde{\kappa}_{\overline{\Upsilon^{-1}(0)}}$ are surjective, $\tilde{\kappa}_{\Upsilon^{-1}(0)}$ is surjective.
* Following the idea of the proof of item (1), we get the result.
* Following the idea of the proof of item (2), we get the result.
* Following the idea of the proof of item (1), we get the result.
* Following the idea of the proof of item (2), we get the result.
The second lemma shows how to compute the intersection cohomologies of the fibers of the normal cones of the singularities of $\bM$ via those of the projectivizations of the fibers.
It is well known that there is a very ample line
bundle $\cL$ (respectively, $\cM_{1}$ and $\cM_{2}$) on
$\PP\Upsilon^{-1}(0)/\!/\SL(2)$ (respectively, $\PP\Psi^{-1}(0)/\!/\CC^*$ and $\PP\Hom^{\om_{\varphi}}(\ker\varphi,\im\varphi^{\perp}/\im\varphi)/\!/\CC^*$),
whose pullback to $\PP\Upsilon^{-1}(0)^{ss}$ (respectively, $\PP\Psi^{-1}(0)^{ss}$ and $\PP\Hom^{\om_{\varphi}}(\ker\varphi,\im\varphi^{\perp}/\im\varphi)^{ss}$) is the $M$th (respectively, $N_{1}$th and $N_{2}$th) tensor
power of the hyperplane line bundle on $\PP\Upsilon^{-1}(0)$ (respectively, $\PP\Psi^{-1}(0)$ and $\PP\Hom^{\om_{\varphi}}(\ker\varphi,\im\varphi^{\perp}/\im\varphi)$) for some $M$ (respectively, $N_{1}$ and $N_{2}$).
Let $C_{\cL}(\PP\Upsilon^{-1}(0)/\!/\SL(2))$ (respectively, $C_{\cM_{1}}(\PP\Psi^{-1}(0)/\!/\CC^*)$ and $C_{\cM_{2}}(\PP\Hom^{\om_{\varphi}}(\ker\varphi,\im\varphi^{\perp}/\im\varphi)/\!/\CC^*)$) be the affine cone on $\PP\Upsilon^{-1}(0)/\!/\SL(2)$ (respectively, $\PP\Psi^{-1}(0)/\!/\CC^*$ and $\PP\Hom^{\om_{\varphi}}(\ker\varphi,\im\varphi^{\perp}/\im\varphi)/\!/\CC^*$) with respect to the projective embedding induced by the sections of $\cL$ (respectively, $\cM_{1}$ and $\cM_{2}$).
(1) $IH^{*}(\Upsilon^{-1}(0)/\!/\SL(2))=IH^{*}(C_{\cL}(\PP\Upsilon^{-1}(0)/\!/\SL(2)))$ and $IH^{*}(Bl_{0}\Upsilon^{-1}(0)/\!/\SL(2))=IH^{*}(\PP\Upsilon^{-1}(0)/\!/\SL(2))$.
(2) $IH^{*}(\Psi^{-1}(0)/\!/\CC^{*})=IH^{*}(C_{\cM_{1}}(\PP\Psi^{-1}(0)/\!/\CC^{*}))$ and $IH^{*}(Bl_{0}\Psi^{-1}(0)/\!/\CC^{*})=IH^{*}(\PP\Psi^{-1}(0)/\!/\CC^{*})$.
(3) $IH^{*}(\Hom^{\om_{\varphi}}(\ker\varphi,\im\varphi^{\perp}/\im\varphi)/\!/\CC^{*})=IH^{*}(C_{\cM_{2}}(\PP\Hom^{\om_{\varphi}}(\ker\varphi,\im\varphi^{\perp}/\im\varphi)/\!/\CC^{*}))$ and $IH^{*}(Bl_{0}\Hom^{\om_{\varphi}}(\ker\varphi,\im\varphi^{\perp}/\im\varphi)/\!/\CC^{*})=IH^{*}(\PP\Hom^{\om_{\varphi}}(\ker\varphi,\im\varphi^{\perp}/\im\varphi)/\!/\CC^{*})$.
* We first follow the idea of the proof of <cit.> to see that
$$C_{\cL}(\PP\Upsilon^{-1}(0)/\!/\SL(2))=\Upsilon^{-1}(0)/\!/\SL(2)\times F,$$
where $F$ is the finite subgroup of $\GL(6g)$ consisting of all diagonal matrices
$diag(\eta,\cdots,\eta)$ such that $\eta$ is an $M$th root of unity.
The coordinate ring of $C_{\mathcal{L}}(\PP\Upsilon^{-1}(0)/\!/\SL(2))$ is the subring $(\CC[Y_{0},\cdots,Y_{6g-1}]/I)_{M}^{\SL(2)}$ of the coordinate ring $\CC[Y_{0},\cdots,Y_{6g-1}]/I$ of $\Upsilon^{-1}(0)$ which is generated by homogeneous polynomials fixed by the natural action of $\SL(2)$ and of degree $M$. Since a homogeneous polynomial is fixed by $F$ exactly when its degree is divisible by $M$, we have
$$(\CC[Y_{0},\cdots,Y_{6g-1}]/I)_{M}^{\SL(2)}=\CC[Y_{0},\cdots,Y_{6g-1}]^{\SL(2)\times F}/I\cap\CC[Y_{0},\cdots,Y_{6g-1}]^{\SL(2)\times F}=(\CC[Y_{0},\cdots,Y_{6g-1}]/I)^{\SL(2)\times F}.$$
Thus we get
$$C_{\cL}(\PP\Upsilon^{-1}(0)/\!/\SL(2))=\Upsilon^{-1}(0)/\!/\SL(2)\times F$$
and then
$$IH^{*}(C_{\cL}(\PP\Upsilon^{-1}(0)/\!/\SL(2)))\cong IH^{*}(\Upsilon^{-1}(0)/\!/\SL(2))^{F}$$
by Lemma <ref>.
It remains to show that the action of $F$ on
$IH^{*}(\Upsilon^{-1}(0)/\!/\SL(2))$ is trivial. Since the Kirwan map
$$IH_{\SL(2)}^{*}(\Upsilon^{-1}(0))\rightarrow IH^{*}(\Upsilon^{-1}(0)/\!/\SL(2))$$
is surjective by Lemma <ref>-(2), it suffices to show that
the action of $F$ on $IH_{\SL(2)}^{*}(\Upsilon^{-1}(0))$ is trivial.
Let
$$\pi_1:Bl_0\Upsilon^{-1}(0)\to \Upsilon^{-1}(0)$$
be the blowing-up of $\Upsilon^{-1}(0)$ at the vertex and let
$$\pi_2:Bl_{\Hom_1}Bl_0\Upsilon^{-1}(0)\to Bl_0\Upsilon^{-1}(0)$$
be the blowing-up of $Bl_0\Upsilon^{-1}(0)$ along
$\widetilde{\Hom_1(sl(2),\HH^g)}$, where $\widetilde{\Hom_1(sl(2),\HH^g)}$ is the strict transform of $\Hom_1(sl(2),\HH^g)$. By the universal property of blowing-up, the action of $F$ on $\Upsilon^{-1}(0)$ lifts to an action of $F$ on $Bl_{\Hom_1}Bl_0\Upsilon^{-1}(0)$. Since $\pi_1\circ\pi_2$ is proper and $Bl_{\Hom_1}Bl_0\Upsilon^{-1}(0)$ is smooth (See the proof of Lemma 1.8.5 in [30]), by Proposition <ref>-(3), $IH_{\SL(2)}^{*}(\Upsilon^{-1}(0))$ is a direct summand of $H_{\SL(2)}^{*}(Bl_{\Hom_1}Bl_0\Upsilon^{-1}(0))$.
Since $Bl_{\Hom_1}Bl_0\Upsilon^{-1}(0)$ is homotopically equivalent
to $Bl_{\PP\Hom_1}\PP\Upsilon^{-1}(0)$, it suffices to
show that the action of $F$ on
$H_{\SL(2)}^{*}(Bl_{\PP\Hom_1}\PP\Upsilon^{-1}(0))$ is
trivial. But this is true because the action of $F$ on
$\PP\Upsilon^{-1}(0)$ is trivial and it lifts to the trivial action of $F$ on $Bl_{\PP\Hom_1}\PP\Upsilon^{-1}(0)$. Hence $F$ acts trivially on $IH^{*}(\Upsilon^{-1}(0)/\!/\SL(2))$.
Similarly, we next see that $Bl_{v}(C_{\cL}(\PP\Upsilon^{-1}(0)/\!/\SL(2)))$ is naturally isomorphic to $(Bl_{0}\Upsilon^{-1}(0)/\!/\SL(2))/F$, where $v$ is the vertex of $C_{\cL}(\PP\Upsilon^{-1}(0)/\!/\SL(2))$.
Let $J$ be the ideal of $\CC[Y_{0},\cdots,Y_{6g-1}]/I$ corresponding to the vertex $O$ of $\Upsilon^{-1}(0)$. Then we have $Bl_{0}\Upsilon^{-1}(0)=\bproj(\oplus_{m\ge 0}J^{m})$. Then
$$(Bl_{0}\Upsilon^{-1}(0)/\!/\SL(2))/F=Bl_{0}\Upsilon^{-1}(0)/\!/\SL(2)\times F=\bproj(\oplus_{m\ge 0}(J^{m})^{\SL(2)\times F}).$$
Since $(J^{m})^{\SL(2)\times F}=J^{m}\cap(\CC[Y_{0},\cdots,Y_{6g-1}]/I)^{\SL(2)\times F}=(J\cap(\CC[Y_{0},\cdots,Y_{6g-1}]/I)^{\SL(2)\times F})^{m}$
$$=(J^{\SL(2)\times F})^{m}$$
and $J^{\SL(2)\times F}$ is the ideal corresponding to $v=O/\!/\SL(2)\times F$,
we have
$$\bproj(\oplus_{m\ge 0}(J^{m})^{\SL(2)\times F})=\bproj(\oplus_{m\ge 0}(J^{\SL(2)\times F})^{m})=Bl_{v}(\Upsilon^{-1}(0)/\!/\SL(2)\times F).$$
By the same idea of the proof of the first statement, $F$ acts trivially on $IH^{*}(Bl_{0}\Upsilon^{-1}(0)/\!/\SL(2))$ and then
$$IH^{*}(Bl_{v}(C_{\cL}(\PP\Upsilon^{-1}(0)/\!/\SL(2))))\cong IH^{*}(Bl_{0}\Upsilon^{-1}(0)/\!/\SL(2)).$$
Since $Bl_{v}(C_{\cL}(\PP\Upsilon^{-1}(0)/\!/\SL(2)))$ is homeomorphic to the line bundle $\cL^{\vee}$ over $\PP\Upsilon^{-1}(0)/\!/\SL(2)$, there is a Leray spectral sequence $E_{r}^{pq}$ converging to $IH^{p+q}(Bl_{v}(C_{\cL}(\PP\Upsilon^{-1}(0)/\!/\SL(2))))$ with
$$E_{2}^{pq}=IH^{p}(\PP\Upsilon^{-1}(0)/\!/\SL(2),IH^{q}(\CC))=\begin{cases}IH^{p}(\PP\Upsilon^{-1}(0)/\!/\SL(2))&\text{if }q=0\\0&\text{otherwise.}\end{cases}$$
Hence we get
$$IH^{*}(Bl_{0}\Upsilon^{-1}(0)/\!/\SL(2))\cong IH^{*}(\PP\Upsilon^{-1}(0)/\!/\SL(2)).$$
* Following the idea of the proof of item (1), we get the result.
* Following the idea of the proof of item (1), we get the result.
By the standard argument of <cit.>, we get the third lemma as follows. It gives a way to compute the intersection cohomology of affine cones of projective GIT quotients.
(1) Let $n=\dim_{\CC}C_{\cL}(\PP\Upsilon^{-1}(0)/\!/\SL(2))$. Then
$$IH^{i}(C_{\cL}(\PP\Upsilon^{-1}(0)/\!/\SL(2)))\cong\begin{cases}0&\text{for }i\ge n\\
IH^{i}(C_{\cL}(\PP\Upsilon^{-1}(0)/\!/\SL(2))-\{0\})&\text{for }i<n.\end{cases}$$
(2) Let $n=\dim_{\CC}C_{\cM_{1}}(\PP\Psi^{-1}(0)/\!/\CC^{*})$. Then
$$IH^{i}(C_{\cM_{1}}(\PP\Psi^{-1}(0)/\!/\CC^{*}))\cong\begin{cases}0&\text{for }i\ge n\\
IH^{i}(C_{\cM_{1}}(\PP\Psi^{-1}(0)/\!/\CC^{*})-\{0\})&\text{for }i<n.\end{cases}$$
(3) Let $n=\dim_{\CC}C_{\cM_{2}}(\PP\Hom^{\om_{\varphi}}(\ker\varphi,\im\varphi^{\perp}/\im\varphi)/\!/\CC^{*})$. Then
$$IH^{i}(C_{\cM_{2}}(\PP\Hom^{\om_{\varphi}}(\ker\varphi,\im\varphi^{\perp}/\im\varphi)/\!/\CC^{*}))\cong\begin{cases}0&\text{for }i\ge n\\
IH^{i}(C_{\cM_{2}}(\PP\Hom^{\om_{\varphi}}(\ker\varphi,\im\varphi^{\perp}/\im\varphi)/\!/\CC^{*})-\{0\})&\text{for }i<n.\end{cases}$$
The following lemma explains how $IH^{*}(\Upsilon^{-1}(0)/\!/\SL(2))$ (respectively, $IH^{*}(\Psi^{-1}(0)/\!/\CC^*)$ and $IH^{*}(\Hom^{\om_{\varphi}}(\ker\varphi,\im\varphi^{\perp}/\im\varphi)/\!/\CC^*)$) can be computed in terms of $IH^{*}(\PP\Upsilon^{-1}(0)/\!/\SL(2))$ (respectively, $IH^{*}(\PP\Psi^{-1}(0)/\!/\CC^*)$ and $IH^{*}(\PP\Hom^{\om_{\varphi}}(\ker\varphi,\im\varphi^{\perp}/\im\varphi)/\!/\CC^*)$) as desired.
(1) $\begin{cases}
IH^{i}(\Upsilon^{-1}(0)/\!/\SL(2))=0&\text{for }i\geq\dim \Upsilon^{-1}(0)/\!/\SL(2)\\
IH^{i}(\Upsilon^{-1}(0)/\!/\SL(2))\cong\coker\lambda&\text{for }i<\dim \Upsilon^{-1}(0)/\!/\SL(2),
\end{cases}$
where $\lambda:IH^{i-2}(\PP\Upsilon^{-1}(0)/\!/\SL(2))\rightarrow IH^{i}(\PP\Upsilon^{-1}(0)/\!/\SL(2))$ is an injection.
(2) $\begin{cases}
IH^{i}(\Psi^{-1}(0)/\!/\CC^{*})=0&\text{for }i\geq\dim \Psi^{-1}(0)/\!/\CC^{*}\\
IH^{i}(\Psi^{-1}(0)/\!/\CC^{*})\cong\coker\lambda&\text{for }i<\dim \Psi^{-1}(0)/\!/\CC^{*},
\end{cases}$
where $\lambda:IH^{i-2}(\PP\Psi^{-1}(0)/\!/\CC^{*})\rightarrow IH^{i}(\PP\Psi^{-1}(0)/\!/\CC^{*})$ is an injection.
(3) $\begin{cases}
IH^{i}(\Hom^{\om_{\varphi}}(\ker\varphi,\im\varphi^{\perp}/\im\varphi)/\!/\CC^{*})=0&\text{for }i\geq\dim \Hom^{\om_{\varphi}}(\ker\varphi,\im\varphi^{\perp}/\im\varphi)/\!/\CC^{*}\\
IH^{i}(\Hom^{\om_{\varphi}}(\ker\varphi,\im\varphi^{\perp}/\im\varphi)/\!/\CC^{*})\cong\coker\lambda&\text{for }i<\dim \Hom^{\om_{\varphi}}(\ker\varphi,\im\varphi^{\perp}/\im\varphi)/\!/\CC^{*},
\end{cases}$
where $\lambda:IH^{i-2}(\PP\Hom^{\om_{\varphi}}(\ker\varphi,\im\varphi^{\perp}/\im\varphi)/\!/\CC^{*})\rightarrow IH^{i}(\PP\Hom^{\om_{\varphi}}(\ker\varphi,\im\varphi^{\perp}/\im\varphi)/\!/\CC^{*})$ is an injection.
We follow the idea of the proof of <cit.>. We only prove item (1) because the proofs of item (2) and item (3) are similar to that of item (1).
By Lemma <ref>-(1),
$$IH^{*}(\Upsilon^{-1}(0)/\!/\SL(2))=IH^{*}(C_{\cL}(\PP\Upsilon^{-1}(0)/\!/\SL(2))).$$
Let $n=\dim_{\CC}\Upsilon^{-1}(0)/\!/\SL(2)$. By Lemma <ref>-(1),
$$IH^{i}(C_{\cL}(\PP\Upsilon^{-1}(0)/\!/\SL(2)))\cong\begin{cases}0&\text{if }i\ge n\\
IH^{i}(C_{\cL}(\PP\Upsilon^{-1}(0)/\!/\SL(2))-\{0\})&\text{if }i<n.\end{cases}$$
Since $C_{\cL}(\PP\Upsilon^{-1}(0)/\!/\SL(2))-\{0\}$ fibers over $\PP\Upsilon^{-1}(0)/\!/\SL(2)$ with fiber $\CC^{*}$, there is a Leray spectral sequence $E_{r}^{pq}$ converging to $IH^{p+q}(C_{\cL}(\PP\Upsilon^{-1}(0)/\!/\SL(2))-\{0\})$ with
$$E_{2}^{pq}=IH^{p}(\PP\Upsilon^{-1}(0)/\!/\SL(2),IH^{q}(\CC^{*}))=\begin{cases}IH^{p}(\PP\Upsilon^{-1}(0)/\!/\SL(2))&\text{if }q=0,1\\0&\text{otherwise.}\end{cases}$$
It follows from <cit.> and <cit.> that the differential
$$\lambda:IH^{i-2}(\PP\Upsilon^{-1}(0)/\!/\SL(2))\rightarrow IH^{i}(\PP\Upsilon^{-1}(0)/\!/\SL(2))$$
is given by the multiplication by $c_{1}(\cL)$. By the Hard
Lefschetz theorem for intersection cohomology, $\lambda$ is injective for $i<n$. Hence we get the result.
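As a quick sanity check on this statement (an illustrative aside, not part of the original argument): applying the same reasoning to the affine cone over $\PP^{n}$ taken with respect to $\cO(1)$, the map $\lambda$ is multiplication by the hyperplane class, which is an isomorphism $H^{i-2}(\PP^{n})\to H^{i}(\PP^{n})$ for $0<i\le 2n$, so
$$IH^{i}(C(\PP^{n}))\cong\begin{cases}\QQ&\text{if }i=0\\0&\text{otherwise,}\end{cases}$$
as expected since $C(\PP^{n})=\CC^{n+1}$ is smooth.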
The quotients $\PP\Psi^{-1}(0)/\!/\CC^{*}$ and $\PP\Hom^{\om_{\varphi}}(\ker\varphi,\im\varphi^{\perp}/\im\varphi)/\!/\CC^{*}$ can be identified with some incidence variety.
Let $I_{2g-3}$ be the incidence variety given by
$$I_{2g-3}=\{(p,H)\in\PP^{2g-3}\times\breve{\PP}^{2g-3}|p\in H\}.$$
* $\PP\Psi^{-1}(0)/\!/\CC^{*}\cong I_{2g-3}$,
* $\PP\Hom^{\om_{\varphi}}(\ker\varphi,\im\varphi^{\perp}/\im\varphi)/\!/\CC^{*}\cong I_{2g-3}$.
* Consider the map $f:\PP\Psi^{-1}(0)\to I_{2g-3}$ given by
Since $f$ is $\CC^{*}$-invariant, we have the induced map
$$\bar{f}:\PP\Psi^{-1}(0)/\!/\CC^{*}\to I_{2g-3}.$$
We claim that $\bar{f}$ is injective. Assume that $\bar{f}([a_{1},b_{1},c_{1},d_{1}])=\bar{f}([a_{2},b_{2},c_{2},d_{2}])$ where $[a,b,c,d]$ denotes the closed orbit of $(a,b,c,d)$. Then there are nonzero complex numbers $\lambda$ and $\mu$ such that $(b_{1},c_{1})=\lambda(b_{2},c_{2})$ and $(-a_{1},d_{1})=\mu(-a_{2},d_{2})$. Then
$$[a_{1},b_{1},c_{1},d_{1}]=[\mu a_{2},\lambda b_{2},\lambda c_{2},\mu d_{2}]=[(\lambda\mu)^{1/2}a_{2},(\lambda\mu)^{1/2}b_{2},(\lambda\mu)^{1/2}c_{2},(\lambda\mu)^{1/2}d_{2}]=[a_{2},b_{2},c_{2},d_{2}].$$
Thus $\bar{f}$ is injective.
Since the domain and the range of $\bar{f}$ are normal varieties with the same dimension and the range $I_{2g-3}$ is irreducible, $\bar{f}$ is an isomorphism.
* Consider the map $g:\PP\Hom^{\om_{\varphi}}(\ker\varphi,\im\varphi^{\perp}/\im\varphi)\to I_{2g-3}$ given by
Since $g$ is $\CC^{*}$-invariant, we have the induced map
$$\bar{g}:\PP\Hom^{\om_{\varphi}}(\ker\varphi,\im\varphi^{\perp}/\im\varphi)/\!/\CC^{*}\to I_{2g-3}.$$
We can see that $\bar{g}$ is injective by the similar way as in the proof of (<ref>). Since the domain and the range of $\bar{g}$ are normal varieties with the same dimension and the range $I_{2g-3}$ is irreducible, $\bar{g}$ is an isomorphism.
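For later computations it may help to record the following standard projective-bundle computation, which the text leaves implicit: projection to the first factor exhibits $I_{2g-3}$ as a $\PP^{2g-4}$-bundle over $\PP^{2g-3}$, so
$$P_{t}(I_{2g-3})=\frac{(1-t^{4g-4})(1-t^{4g-6})}{(1-t^{2})^{2}},\qquad\dim I_{2g-3}=(2g-3)+(2g-4)=4g-7,$$
consistent with the value of $\dim I_{2g-3}$ used below.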
By the proof of Lemma <ref>, $\cC_{2}/\!/\SL(2)=(Y/\!/\CC^{*})/\ZZ_{2}$ and $\tcC_{2}/\!/\SL(2)=(Bl_{\widetilde{T^{*}J}}Y/\!/\CC^{*})/\ZZ_{2}$, where $Y$ is either a $\Psi^{-1}(0)$-bundle or a $\Hom^{\om_{\varphi}}(\ker\varphi,\im\varphi^{\perp}/\im\varphi)$-bundle over $\widetilde{T^{*}J}$.
To give computable formulas from Lemma <ref>, we need the following technical statements for $Y/\!/\CC^{*}$ and $Bl_{\widetilde{T^{*}J}}Y/\!/\CC^{*}$.
Let $g:Y/\!/\CC^{*}\to\widetilde{T^{*}J}$ be the map induced by the projection $Y\to\widetilde{T^{*}J}$ and let $h:Bl_{\widetilde{T^{*}J}}Y/\!/\CC^{*}\to\widetilde{T^{*}J}$ be the map induced by the composition of maps $Bl_{\widetilde{T^{*}J}}Y\to Y\to\widetilde{T^{*}J}$. Then $R^{i}g_{*}\ic^{\bullet}(Y/\!/\CC^{*})$ and $R^{i}h_{*}\ic^{\bullet}(Bl_{\widetilde{T^{*}J}}Y/\!/\CC^{*})$ are constant sheaves for each $i\ge0$.
Following the idea of proof of <cit.>, we can see that $R^{i}g_{*}\ic^{\bullet}(Y/\!/\CC^{*})$ and $R^{i}h_{*}\ic^{\bullet}(Bl_{\widetilde{T^{*}J}}Y/\!/\CC^{*})$ are locally constant sheaves for each $i\ge0$. We get the conclusion from Lemma <ref>-(2), (3), Lemma <ref>-(2), (3) and Lemma <ref>.
Then we have the following computable blowing-up formula.
(1) $\dim IH^{i}(\bR_{1}^{ss}/\!/\SL(2))=\dim IH^{i}(\bR/\!/\SL(2))$
$$+2^{2g}\dim IH^{i}(\PP\Upsilon^{-1}(0)/\!/\PGL(2))-2^{2g}\dim IH^{i}(\Upsilon^{-1}(0)/\!/\PGL(2))$$
for all $i\ge0$.
(2) $\dim IH^{i}(\bR_{2}^{s}/\SL(2))=\dim IH^{i}(\bR_{1}^{ss}/\!/\SL(2))$
$$+\sum_{p+q=i}\dim[H^{p}(\widetilde{T^{*}J})\otimes H^{t(q)}(I_{2g-3})]^{\ZZ_{2}}$$
for all $i\ge0$, where $t(q)=q-2$ for $q\le\dim I_{2g-3}=4g-7$ and $t(q)=q$ otherwise.
(3) $\dim IH^{i}((Bl_{\PP\Hom_{1}}\PP\Upsilon^{-1}(0)^{ss})^{s}/\!/\SL(2))=\dim IH^{i}(\PP\Upsilon^{-1}(0)/\!/\SL(2))$
$$+\sum_{p+q=i}\dim[H^{p}(\PP^{2g-1})\otimes H^{t(q)}(I_{2g-3})]^{\ZZ_{2}}$$
for all $i\ge0$, where $t(q)=q-2$ for $q\le\dim I_{2g-3}=4g-7$ and $t(q)=q$ otherwise.
* Since it follows from Lemma <ref>-(1) that
$$\dim IH^{i}(Bl_{0}\Upsilon^{-1}(0)/\!/\PGL(2))=\dim IH^{i}(\PP\Upsilon^{-1}(0)/\!/\PGL(2))$$
for all $i\ge0$, we get the formula.
* Let $g:Y/\!/\CC^{*}\to\widetilde{T^{*}J}$ and $h:Bl_{\widetilde{T^{*}J}}Y/\!/\CC^{*}\to\widetilde{T^{*}J}$ be the maps induced by the projections $Y\to\widetilde{T^{*}J}$ and $Bl_{\widetilde{T^{*}J}}Y\to\widetilde{T^{*}J}$. By Proposition <ref>-(2) and Remark <ref>, the perverse Leray spectral sequences of intersection cohomology associated to $g$ and $h$ have $E_2$ terms given by
$$E_{2}^{pq}=IH^{p}(\widetilde{T^{*}J})\otimes IH^{q}(\widehat{I_{2g-3}})$$
and
$$E_{2}^{pq}=IH^{p}(\widetilde{T^{*}J})\otimes IH^{q}(I_{2g-3})$$
respectively,
by Lemma <ref>, Lemma <ref> and Lemma <ref>, where $\widehat{I_{2g-3}}$ is the affine cone of $I_{2g-3}$. It follows from Proposition <ref>-(2) that the decomposition theorem for $h$ implies that the perverse Leray spectral sequence of intersection cohomology associated to $h$ degenerates at the $E_2$ term. Since $IH^{q}(\widehat{I_{2g-3}})$ embeds in $IH^{q}(I_{2g-3})$ by Lemma <ref>-(2), (3) and Lemma <ref>, the perverse Leray spectral sequence of intersection cohomology associated to $g$ also degenerates at the $E_2$ term. Since $\cC_{2}/\!/\SL(2)=(Y/\!/\CC^{*})/\ZZ_{2}$ and $\tcC_{2}/\!/\SL(2)=(Bl_{\widetilde{T^{*}J}}Y/\!/\CC^{*})/\ZZ_{2}$, we have
$$IH^{i}(\cC_{2}/\!/\SL(2))=\bigoplus_{p+q=i}[H^{p}(\widetilde{T^{*}J})\otimes H^{q}(\widehat{I_{2g-3}})]^{\ZZ_{2}}$$
$$IH^{i}(\tcC_{2}/\!/\SL(2))=\bigoplus_{p+q=i}[H^{p}(\widetilde{T^{*}J})\otimes H^{q}(I_{2g-3})]^{\ZZ_{2}}$$
by Lemma <ref>. Applying Lemma <ref>-(2), (3) again, we get the formula.
* Note that $\cC/\!/\SL(2)=(Y|_{\PP^{2g-1}}/\!/\CC^{*})/\ZZ_{2}$ and $\tcC/\!/\SL(2)=(Bl_{\PP^{2g-1}}Y|_{\PP^{2g-1}}/\!/\CC^{*})/\ZZ_{2}$. Let $g':Y|_{\PP^{2g-1}}/\!/\CC^{*}\to\PP^{2g-1}$ and $h':Bl_{\PP^{2g-1}}Y|_{\PP^{2g-1}}/\!/\CC^{*}\to\PP^{2g-1}$ be the maps induced by the projections $Y|_{\PP^{2g-1}}\to\PP^{2g-1}$ and $Bl_{\PP^{2g-1}}Y|_{\PP^{2g-1}}\to\PP^{2g-1}$. Since $\PP^{2g-1}$ is simply connected, $R^{i}g'_{*}\ic^{\bullet}(Y|_{\PP^{2g-1}}/\!/\CC^{*})$ and $R^{i}h'_{*}\ic^{\bullet}(Bl_{\PP^{2g-1}}Y|_{\PP^{2g-1}}/\!/\CC^{*})$ are constant sheaves for each $i\ge0$ and then the perverse Leray spectral sequences of intersection cohomology associated to $g'$ and $h'$ have $E_2$ terms given by
$$E_{2}^{pq}=IH^{p}(\PP^{2g-1})\otimes IH^{q}(\widehat{I_{2g-3}})$$
and
$$E_{2}^{pq}=IH^{p}(\PP^{2g-1})\otimes IH^{q}(I_{2g-3})$$
respectively,
by Lemma <ref> and Lemma <ref>. By the same argument as in the remaining part of the proof of item (2), we get the formula.
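To see the shape of these correction terms in the lowest nontrivial degree (an illustrative aside, under the assumptions that $g\ge3$, so that $t(2)=0$, and that $\ZZ_{2}$ acts trivially on the degree-zero classes): for $i=2$ in formula (3), only $(p,q)=(0,2)$ can contribute, since $t(0)=-2$, and the correction equals
$$\dim[H^{0}(\PP^{2g-1})\otimes H^{0}(I_{2g-3})]^{\ZZ_{2}}=1.$$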
§ A STRATEGY TO GET A FORMULA FOR THE POINCARÉ POLYNOMIAL OF $IH^{*}(\BM)$
Since $\bR_{2}^{s}/\SL(2)$ has an orbifold singularity, we have $H^{i}(\bR_{2}^{s}/\SL(2))\cong H_{\SL(2)}^{i}(\bR_{2}^{s})$ for each $i\ge0$. If we have a blowing-up formula for the equivariant cohomology that can be applied to get $\dim H_{\SL(2)}^{i}(\bR_{2}^{s})$ from $\dim H_{\SL(2)}^{i}(\bR)$ for each $i\ge0$, Theorem <ref> can be used to calculate $\dim IH^{i}(\bM)$ from $\dim H^{i}(\bR_{2}^{s}/\SL(2))$ for each $i$.
§.§ Towards blowing-up formula for the equivariant cohomology
In this subsection, we give a strategy to get a blowing-up formula for the equivariant cohomology in Kirwan's algorithm, and prove that the blowing-up formula for the equivariant cohomology on the blowing-up $\pi:Bl_{\PP\Hom_{1}}\PP\Upsilon^{-1}(0)^{ss}\to\PP\Upsilon^{-1}(0)^{ss}$ holds. Assume that $G$ is a compact connected algebraic group throughout this subsection.
For a normal quasi-projective complex variety $Y$ on which $G$ acts algebraically, we have an injection
$$i_{G}:H_{G}^{i}(Y)\hookrightarrow IH_{G}^{i}(Y).$$
For each $j$, we have a morphism
$$i_{j}:H^{i}(Y\times_{G}\E G_{j})=IH_{2\dim(Y\times_{G}\E G_{j})-i}^{\bar{0}}(Y\times_{G}\E G_{j})\to IH_{2\dim(Y\times_{G}\E G_{j})-i}^{\bar{m}}(Y\times_{G}\E G_{j})=IH^{i}(Y\times_{G}\E G_{j})$$
induced from the inclusion $IC_{2\dim(Y\times_{G}\E G_{j})-i}^{\bar{0}}(Y\times_{G}\E G_{j})\hookrightarrow IC_{2\dim(Y\times_{G}\E G_{j})-i}^{\bar{m}}(Y\times_{G}\E G_{j})$ given by $\xi\to\xi$.
We claim that $i_{j}$ is injective for each $j$. Assume that $\xi\in IC_{2\dim(Y\times_{G}\E G_{j})-i}^{\bar{0}}(Y\times_{G}\E G_{j})$ and $\xi=\partial\eta$ for some $\eta\in IC_{2\dim(Y\times_{G}\E G_{j})-i+1}^{\bar{m}}(Y\times_{G}\E G_{j})$. Then
$$\dim_{\RR}(|\xi|\cap Y_{n-c})\le(2\dim(Y\times_{G}\E G_{j})-i)-2c.$$
Since $\dim_{\RR}(|\eta|\cap Y_{n-c})-\dim_{\RR}(|\partial\eta|\cap Y_{n-c})\le1$,
$$\dim_{\RR}(|\eta|\cap Y_{n-c})\le\dim_{\RR}(|\partial\eta|\cap Y_{n-c})+1\le (2\dim(Y\times_{G}\E G_{j})-i+1)-2c,$$
that is,
$$\eta\in IC_{2\dim(Y\times_{G}\E G_{j})-i+1}^{\bar{0}}(Y\times_{G}\E G_{j}).$$
Then $\xi=\partial\eta=0$ in $IH_{2\dim(Y\times_{G}\E G_{j})-i}^{\bar{0}}(Y\times_{G}\E G_{j})$. Thus $i_{j}$ is injective for each $j$.
Since $i_{G}=\dss\varprojlim_{j}i_{j}$, we get the result.
Let $Y_{1}$ and $Y_{2}$ be normal irreducible quasi-projective complex varieties on which $G$ acts algebraically. For any $G$-equivariant morphism $f:Y_{1}\to Y_{2}$, there exists $\lambda_{f}:IH_{G}^{i}(Y_{2})\to IH_{G}^{i}(Y_{1})$ such that the following diagram
We follow the idea of [36]. Note that $i_{G}:H_{G}^{i}(Y)\hookrightarrow IH_{G}^{i}(Y)$ is induced from the canonical morphism of $G$-equivariant sheaves $\om_{Y}:\QQ_{Y}^{G}\to\ic_{G}^{\bullet}(Y)$. Let $\pi_{Y_{2}}:\tY_{2}\to Y_{2}$ be a resolution of $Y_{2}$. $\tY_{1}$ denotes the fiber product $Y_{1}\times_{Y_{2}}\tY_{2}$. Then the following diagram
$$\xymatrix{\tY_{1}\ar[r]^{p_{2}}\ar[d]_{p_{1}}&\tY_{2}\ar[d]^{\pi_{Y_{2}}}\\Y_{1}\ar[r]^{f}&Y_{2}}$$
commutes. Here $p_{1}$ and $\pi_{Y_{2}}$ are proper.
We have only to show the following diagram of sheaves over $Y_{2}$
$$\xymatrix{R\pi_{Y_{2}*}\ic_{G}^{\bullet}(\tY_{2})&R\pi_{Y_{2}*}\QQ_{\tY_{2}}^{G}\ar[l]_{R\pi_{Y_{2}*}(\om_{\tY_{2}})\quad}\ar[r]^{\quad p_{2}^{*}}&Rf_{*}Rp_{1*}\QQ_{\tY_{1}}^{G}\ar[r]^{Rf_{*}Rp_{1*}(\om_{\tY_{1}})\quad}&Rf_{*}Rp_{1*}\ic_{G}^{\bullet}(\tY_{1})\ar[d]^{Rf_{*}(p)}\\
\ic_{G}^{\bullet}(Y_{2})\ar[u]^{i}&\QQ_{Y_{2}}^{G}\ar[l]_{\om_{Y_{2}}}\ar[r]^{f^{*}}\ar[u]_{\pi_{Y_{2}}^{*}}&Rf_{*}\QQ_{Y_{1}}^{G}\ar[r]^{Rf_{*}(\om_{Y_{1}})\quad}\ar[u]_{Rf_{*}(p_{1}^{*})}&Rf_{*}\ic_{G}^{\bullet}(Y_{1})}$$
commutes, where the inclusion $i:\ic_{G}^{\bullet}(Y_{2})\hookrightarrow R\pi_{Y_{2}*}\ic_{G}^{\bullet}(\tY_{2})$ and the projection $p:Rp_{1*}\ic_{G}^{\bullet}(\tY_{1})\to\ic_{G}^{\bullet}(Y_{1})$ are induced from Proposition <ref>-(3). It suffices to show that
$$\text{(respectively, }\xymatrix{\QQ_{Y_{1}}^{G}\ar[r]^{p_{1}^{*}\quad}&Rp_{1*}\QQ_{\tY_{1}}^{G}\ar[r]^{Rp_{1*}(\om_{\tY_{1}})\quad}&Rp_{1*}\ic_{G}^{\bullet}(\tY_{1})\ar[r]^{\quad p}&\ic_{G}^{\bullet}(Y_{1})}$$
coincide over $Y_{2}$ (respectively, $Y_{1}$).
Let $U$ (respectively, $V$) be the regular part of $Y_{2}$ (respectively, $Y_{1}$). Note that these morphisms are equal on $U$ (respectively, $V$) after multiplication by a constant if necessary. Thus we have only to show that the restriction morphisms $\rho_{U}$ and $\rho_{V}$ are injective.
Since the restriction morphisms
$$\xymatrix{\Hom(\QQ_{Y_{2}\times_{G}\E G_{j}},R(\pi_{Y_{2}}\times 1_{\E G_{j}})_{*}\QQ_{\tY_{2}\times_{G}\E G_{j}})\ar[r]^{\rho_{U,j}\quad}&\Hom(\QQ_{Y_{2}\times_{G}\E G_{j}}|_{U},R(\pi_{Y_{2}}\times 1_{\E G_{j}})_{*}\QQ_{\tY_{2}\times_{G}\E G_{j}}|_{U})}$$
$$\xymatrix{\Hom(\QQ_{Y_{1}\times_{G}\E G_{j}},\ic^{\bullet}(Y_{1}\times_{G}\E G_{j}))\ar[r]^{\rho_{V,j}\quad}&\Hom(\QQ_{Y_{1}\times_{G}\E G_{j}}|_{V},\ic^{\bullet}(Y_{1}\times_{G}\E G_{j})|_{V})}$$
are injective for all $j$ by [36], $\rho_{U}$ and $\rho_{V}$ are also injective.
Hence $\lambda_{f}$ is defined as the morphism on hypercohomologies induced from the composition $Rf_{*}(p)\circ Rf_{*}Rp_{1*}(\om_{\tY_{1}})\circ R\pi_{Y_{2}*}(p_{2}^{*})\circ i:\ic_{G}^{\bullet}(Y_{2})\to Rf_{*}\ic_{G}^{\bullet}(Y_{1})$.
Let $Y$ be an irreducible normal quasi-projective complex variety on which $G$ acts algebraically and let $f:Y'\to Y$ be a $G$-equivariant blow up of $Y$. Then we have the following commutative diagram
and $\lambda_{f}:IH_{G}^{i}(Y)\to IH_{G}^{i}(Y')$ is an inclusion induced from Proposition <ref>-(3).
By using Corollary <ref>, we can use a standard argument to get the following formula.
(1) $P_{t}^{\SL(2)}(\bR_{1})=P_{t}^{\SL(2)}(\bR)+2^{2g}(P_{t}^{\SL(2)}(\PP\Upsilon^{-1}(0))-P_{t}(\BSL(2)))$.
(2) $P_{t}^{\SL(2)}(\bR_{2})=P_{t}^{\SL(2)}(\bR_{1})+P_{t}^{\SL(2)}(E_2)-P_{t}^{\SL(2)}(\Sig)$.
Since the action of the center $\{\pm\id\}\subset\SL(2)$ on $\bR$ is trivial by <cit.>, the action of $\SL(2)$ and that of $\PGL(2)$ on $\bR$ coincide. Further, since the lifted actions of $\{\pm\id\}\subset\SL(2)$ on $\bR_{1}$ and $\bR_{2}$ are trivial by <cit.>, the actions of $\SL(2)$ and those of $\PGL(2)$ on $\bR_{1}$ and $\bR_{2}$ coincide. Thus $\SL(2)$-equivariant cohomologies of $\bR$, $\bR_{1}$, $\bR_{2}$ and their $\SL(2)$-invariant subvarieties are equal to $\PGL(2)$-equivariant cohomologies of $\bR$, $\bR_{1}$, $\bR_{2}$ and their $\PGL(2)$-invariant subvarieties respectively. Let $G=\PGL(2)$.
* Let $U_{x}$ be a sufficiently small open neighborhood of $x\in\ZZ_{2}^{2g}$ in $\bR$, let $U_{1}=\sqcup_{x\in\ZZ_{2}^{2g}}U_{x}$ and let $\tU_{1}=\pi_{\bR_{1}}^{-1}(U_{1})$. Let $V_{1}=\bR\setminus\ZZ_{2}^{2g}$. We can identify $V_{1}$ with $\bR_{1}\setminus E_{1}$ under $\pi_{\bR_{1}}$. Then we have the following commutative diagram
$$\xymatrix{\cdots\ar[r]&H_{G}^{i-1}(U_{1}\cap V_{1})\ar[r]^{\alpha}\ar@^{=}[d]&H_{G}^{i}(\bR)\ar[r]\ar[d]&H_{G}^{i}(U_{1})\ar@<-4ex>[d]\oplus H_{G}^{i}(V_{1})\ar[r]^{\beta}\ar@<4ex>@^{=}[d]&H_{G}^{i}(U_{1}\cup V_{1})\ar[r]\ar@{=}[d]&\cdots\\
\cdots\ar[r]&H_{G}^{i-1}(U_{1}\cap V_{1})\ar[r]^{\tal}&H_{G}^{i}(\bR_{1})\ar[r]&H_{G}^{i}(\tU_{1})\oplus H_{G}^{i}(V_{1})\ar[r]^{\tbe}&H_{G}^{i}(U_{1}\cup V_{1})\ar[r]&\cdots,}$$
where the horizontal sequences are Mayer-Vietoris sequences and the vertical maps are $\pi_{\bR_{1}}^{*}$. It follows from Corollary <ref> that the vertical maps are injective. So $\ker\alpha=\ker\tal$ and then $\im\beta=\im\tbe$. Thus we have
$$P_{t}^{G}(\bR_{1})=P_{t}^{G}(\bR)+P_{t}^{G}(\tU_{1})-P_{t}^{G}(U_{1}).$$
By <cit.> and Proposition <ref>, $U_{1}$ is analytically isomorphic to $2^{2g}$ copies of $\Upsilon^{-1}(0)$ and then $\tU_{1}$ is analytically isomorphic to $2^{2g}$ copies of $Bl_{0}\Upsilon^{-1}(0)$. Since $\PP\Upsilon^{-1}(0)$ is a deformation retract of $Bl_{0}\Upsilon^{-1}(0)$ and $P_{t}(\BG)=P_{t}(\BSO(3))=P_{t}(\BSL(2))$, we get the formula.
* Let $U_{2}$ be a sufficiently small open neighborhood of $\Sig$ and let $\tU_{2}=\pi_{\bR_{2}}^{-1}(U_{2})$. Let $V_{2}=\bR_{1}\setminus\Sig$. We can identify $V_{2}$ with $\bR_{2}\setminus E_{2}$ under $\pi_{\bR_{2}}$. By Corollary <ref> and the same way as in the proof of item (1), we have
$$P_{t}^{G}(\bR_{2})=P_{t}^{G}(\bR_{1})+P_{t}^{G}(\tU_{2})-P_{t}^{G}(U_{2}).$$
By <cit.> and Proposition <ref>, $U_{2}$ is analytically isomorphic to $C_{\Sig}\bR_{1}$ and then $\tU_{2}$ is analytically isomorphic to $Bl_{\Sig}(C_{\Sig}\bR_{1})$. Since $\Sig$ is a deformation retract of $C_{\Sig}\bR_{1}$ and $E_{2}$ is a deformation retract of $Bl_{\Sig}(C_{\Sig}\bR_{1})$, we get the formula.
* Recall that $\pi:Bl_{\PP\Hom_{1}}\PP\Upsilon^{-1}(0)^{ss}\to\PP\Upsilon^{-1}(0)^{ss}$ is the blowing-up of $\PP\Upsilon^{-1}(0)^{ss}$ along $\PP\Hom_{1}(sl(2),\HH^{g})^{ss}$. By Corollary <ref> and the same way as in the proof of item (1), we have
$$P_{t}^{G}(Bl_{\PP\Hom_{1}}\PP\Upsilon^{-1}(0)^{ss})=P_{t}^{G}(\PP\Upsilon^{-1}(0)^{ss})+P_{t}^{G}(\tU)-P_{t}^{G}(U)$$
for some sufficiently small open neighborhood $U$ of $\PP\Hom_{1}(sl(2),\HH^{g})^{ss}$ and $\tU=\pi^{-1}(U)$. By <cit.> and Proposition <ref>, $U$ is analytically isomorphic to $C_{\PP\Hom_{1}(sl(2),\HH^{g})^{ss}}\PP\Upsilon^{-1}(0)^{ss}$ and then $\tU$ is analytically isomorphic to $Bl_{\PP\Hom_{1}(sl(2),\HH^{g})^{ss}}(C_{\PP\Hom_{1}(sl(2),\HH^{g})^{ss}}\PP\Upsilon^{-1}(0)^{ss})$. Since $\PP\Hom_{1}(sl(2),\HH^{g})^{ss}$ is a deformation retract of $C_{\PP\Hom_{1}(sl(2),\HH^{g})^{ss}}\PP\Upsilon^{-1}(0)^{ss}$ and $E$ is a deformation retract of $Bl_{\PP\Hom_{1}(sl(2),\HH^{g})^{ss}}(C_{\PP\Hom_{1}(sl(2),\HH^{g})^{ss}}\PP\Upsilon^{-1}(0)^{ss})$, we get the formula.
The blowing-up formula for the equivariant cohomology on the blowing-up $\pi:Bl_{\PP\Hom_{1}}\PP\Upsilon^{-1}(0)^{ss}\to\PP\Upsilon^{-1}(0)^{ss}$ follows from the same argument as in [20].
By Proposition <ref>, $Bl_{\PP\Hom_{1}}\PP\Upsilon^{-1}(0)^{ss}$ is a smooth projective variety. Thus the same argument as in [20] can be applied. We sketch the proof briefly.
There is a Morse stratification $\{S_{\beta}|\beta\in\bfB\}$ of $Bl_{\PP\Hom_{1}}\PP\Upsilon^{-1}(0)^{ss}$ associated to the lifted action of $\SL(2)$. Then $\{S_{\beta}\cap E|\beta\in\bfB\}$ is the Morse stratification of $E$. By <cit.>, $\{S_{\beta}|\beta\in\bfB\}$ and $\{S_{\beta}\cap E|\beta\in\bfB\}$ are equivariantly perfect. Thus we have
$$P_{t}^{\SL(2)}(Bl_{\PP\Hom_{1}}\PP\Upsilon^{-1}(0)^{ss})=P_{t}^{\SL(2)}((Bl_{\PP\Hom_{1}}\PP\Upsilon^{-1}(0)^{ss})^{ss})+\sum_{\beta\ne0}t^{2d'(\beta)}P_{t}^{\SL(2)}(S_{\beta})$$
and
$$P_{t}^{\SL(2)}(E)=P_{t}^{\SL(2)}(E^{ss})+\sum_{\beta\ne0}t^{2d(\beta)}P_{t}^{\SL(2)}(S_{\beta}\cap E),$$
where $d'(\beta)$ (respectively, $d(\beta)$) is the codimension of $S_{\beta}$ (respectively, $S_{\beta}\cap E$) in $Bl_{\PP\Hom_{1}}\PP\Upsilon^{-1}(0)^{ss}$ (respectively, $E$). We also have
$$P_{t}^{\SL(2)}(S_{\beta}\cap E)=P_{t}^{\SL(2)}(\SL(2)Z_{\beta}^{ss}\cap E),$$
where $Z_{\beta}^{ss}$ denotes the set of points of $S_{\beta}$ fixed by the one-parameter subgroup generated by $\beta$. Since $Z_{\beta}^{ss}\subseteq E$ by <cit.> and $S_{\beta}\not\subseteq E$ for any $\beta\in\bfB$ by <cit.>, we have $P_{t}^{\SL(2)}(S_{\beta}\cap E)=P_{t}^{\SL(2)}(S_{\beta})$ for each $\beta\ne0$, and the blowing-up formula follows.
The following blowing-up formulas for the blowing-ups $\pi_{\bR_{1}}:\bR_{1}\to\bR$ and $\pi_{\bR_{2}}:\bR_{2}\to\bR_{1}^{ss}$ are what we desire.
(1) $P_{t}^{\SL(2)}(\bR_{1}^{ss})=P_{t}^{\SL(2)}(\bR)+2^{2g}(P_{t}^{\SL(2)}(\PP\Upsilon^{-1}(0)^{ss})-P_{t}(\BSL(2)))$.
(2) $P_{t}^{\SL(2)}(\bR_{2}^{s})=P_{t}^{\SL(2)}(\bR_{1}^{ss})+P_{t}^{\SL(2)}(E_2^{ss})-P_{t}^{\SL(2)}(\Sig)$.
Since $\bR_{1}$ and $\bR_{2}$ are neither smooth nor projective, we cannot directly apply the Morse theory developed by F. Kirwan in [19] to prove Conjecture <ref>.
§.§ Intersection Poincaré polynomial of the deepest singularity of $\bM$
In order to use Theorem <ref>-(1), we must calculate $IP_{t}(\PP\Upsilon^{-1}(0)/\!/\PGL(2))$ and $IP_{t}(\Upsilon^{-1}(0)/\!/\PGL(2))$.
Recall that it follows from Proposition <ref> that
$$\PP\Hom_{1}(sl(2),\HH^{g})/\!/\PGL(2)\cong Z/\!/\O(2)=Z/\!/\SO(2)=Z_{1}=\PP^{2g-1},$$
where $Z=Z_{1}\cup Z_{2}\cup Z_{3}$, $Z^{ss}$ is the set of semistable points of $Z$ for the action of $\O(2)$, $Z_{1}=\PP\{v_{1}\otimes\HH^{g}\}$, $Z_{2}=\PP\{v_{2}\otimes\HH^{g}\}$ and $Z_{3}=\PP\{v_{3}\otimes\HH^{g}\}$. Here $\{v_{1},v_{2},v_{3}\}$ is the basis of $sl(2)$ chosen in Section <ref>.
We see that
$$P_{t}(Z/\!/\O(2))=P_{t}^{+}(Z/\!/\SO(2)),$$
where $P_{t}^{+}(Z/\!/\SO(2))$ and $P_{t}^{-}(Z/\!/\SO(2))$ are Poincaré polynomials of the invariant part and variant part of $H^{*}(Z/\!/\SO(2))$ with respect to the action of $\ZZ_{2}=\O(2)/\SO(2)$ on $Z/\!/\SO(2)$.
By <cit.> and Theorem <ref>-(3),
$$IP_{t}((Bl_{\PP\Hom_{1}}\PP\Upsilon^{-1}(0)^{ss})^{s}/\!/\SL(2))=IP_{t}(\PP\Upsilon^{-1}(0)/\!/\PGL(2))$$
$$+\sum_{p+q=i}\dim[H^{p}(\PP^{2g-1})\otimes H^{t(q)}(I_{2g-3})]^{\ZZ_{2}}t^{i}$$
$$=IP_{t}(\PP\Upsilon^{-1}(0)/\!/\PGL(2))+\sum_{p+q=i}\dim[H^{p}(\PP^{2g-1})^{\ZZ_{2}}\otimes H^{t(q)}(I_{2g-3})^{\ZZ_{2}}]t^{i}$$
\begin{equation}\label{2nd local blowing-up}=IP_{t}(\PP\Upsilon^{-1}(0)/\!/\PGL(2))+\frac{1-t^{4g}}{1-t^{2}}\cdot\frac{t^2(1-t^{4g-6})(1-t^{4g-4})}{(1-t^2)(1-t^4)}.\end{equation}
Let $D_{1}$ be the blowing-up of $\PP\Hom_{2}^{\omega}(sl(2),\HH^{g})$ along $\PP\Hom_{1}(sl(2),\HH^{g})^{ss}$, and consider also the corresponding blowing-up of $Bl_{\PP\Hom_{1}}\PP\Upsilon^{-1}(0)^{ss}$. Assume that $g\ge3$.
By <cit.>, $D_{1}$ is a
$\widehat{\PP}^5$-bundle over $Gr^{\omega}(3,2g)$ where
$\widehat{\PP}^5$ is the blowing-up of $\PP^5$
(projectivization of the space of $3\times 3$ symmetric matrices)
along $\PP^2$ (the locus of rank $1$ matrices). Since $D_1$
is a nonsingular projective variety over $\CC$, it follows from <cit.> that
$$IP_{t}(D_{1})=P_{t}(D_{1})=E(D_{1};-t,-t)=(\frac{1-t^{12}}{1-t^{2}}-\frac{1-t^{6}}{1-t^{2}}+(\frac{1-t^{6}}{1-t^{2}})^{2})\cdot\prod_{1\leq i\leq 3}\frac{1-t^{4g-12+4i}}{1-t^{2i}}.$$
Moreover, by the proof of <cit.>, the center of this blowing-up is isomorphic to $\PP(S^{2}\cA)$, where $\cA$ is the tautological rank $2$ bundle over $Gr^{\omega}(2,2g)$. Following the proof of <cit.>, we can see that the exceptional divisor of $D_1$ is a $\PP^{2g-5}$-bundle over $\PP(S^{2}\cA)$.
By the usual blowing-up formula mentioned in <cit.>, we have
$$\dim H^{i}(D_1)=\dim H^{i}(Bl_{\PP\Hom_{1}}\PP\Upsilon^{-1}(0)^{ss}/\!/\PGL(2))$$
\begin{equation}\label{3rd local blowing-up}+\big(\sum_{p+q=i}\dim[H^{p}(\PP(S^{2}\cA))\otimes H^{q}(\PP^{2g-5})]-\dim H^{i}(\PP(S^{2}\cA))\big).\end{equation}
$\PP(S^{2}\cA)$ is the $\PP^2$-bundle over $Gr^{\omega}(2,2g)$, so
$$P_{t}(\PP(S^{2}\cA))=P_{t}(\PP^2)P_{t}(Gr^{\omega}(2,2g))=\frac{1-t^{6}}{1-t^{2}}\cdot\prod_{1\leq i\leq 2}\frac{1-t^{4g-8+4i}}{1-t^{2i}}$$
by Deligne's criterion (see [6]).
Therefore it follows from (<ref>) that
$$P_{t}(Bl_{\PP\Hom_{1}}\PP\Upsilon^{-1}(0)^{ss}/\!/\PGL(2))=P_{t}(D_{1})-\frac{(1-t^{6})(t^{2}-t^{4g-8})}{(1-t^{2})^{2}}\cdot\prod_{1\leq i\leq 2}\frac{1-t^{4g-8+4i}}{1-t^{2i}}.$$
Assume that $g=2$. In this case, we know from <cit.> that the relevant quotient is already nonsingular and that it is a $\PP^2$-bundle over $Gr^{\omega}(2,4)$. Then by Deligne's criterion (see [6]), its Poincaré polynomial is
$$P_{t}(\PP^2)P_{t}(Gr^{\omega}(2,4))=\frac{1-t^{6}}{1-t^{2}}\cdot\prod_{1\leq i\leq 2}\frac{1-t^{4i}}{1-t^{2i}}=\frac{(1-t^{6})(1-t^{8})}{(1-t^{2})^2}.$$
Combining these with (<ref>), we obtain $IP_{t}(\PP\Upsilon^{-1}(0)/\!/\PGL(2))$. By Lemma <ref>-(1), we also obtain $IP_{t}(\Upsilon^{-1}(0)/\!/\PGL(2))$.
§.§ Intersection Poincaré polynomial of $\bM$
In this subsection, we compute a conjectural formula for $IP_{t}(\bM)$.
§.§.§ Computation for $P_{t}^{\SL(2)}(\bR)$
In this subsection, we show that $P_{t}^{\SL(2)}(\bR)=P_{t}^{\cG_{\CC}}(\cB^{ss})$. To prove this, we need some technical lemmas.
Choose a base point $x\in X$. Let $E$ be a complex Hermitian vector bundle of rank $2$ and degree $0$ on $X$. Let $p:E\to X$ be the canonical
projection. Let $(\cG_{\CC})_0$ be the normal subgroup of
$\cG_{\CC}$ which fixes the fiber $E|_{x}$.
We first claim that $(\cG_{\CC})_0$ acts freely on $\cB^{ss}$. In
fact, assume that $g\cdot(A,\phi):=(g^{-1}Ag,\phi)=(A,\phi)$ for $g\in(\cG_{\CC})_0$. For an arbitrary point $y\in X$ and for any smooth path
$\gamma:[0,1]\to X$ starting at $\gamma(0)=x$ and ending at
$\gamma(1)=y\in X$, there is a parallel transport mapping
$P_{\gamma}:E|_{x}\to E|_{y}$ defined as follows. If $v\in E|_{x}$,
there exists a unique path $\gamma_v:[0,1]\to E$, horizontal with respect to $A$, such that $p\circ\gamma_v=\gamma$ and $\gamma_v(0)=v$. Define $P_{\gamma}(v)=\gamma_v(1)$. By the assumption, $P_{\gamma}\circ
g|_{x}=g|_{y}\circ P_{\gamma}$. Since $g|_{x}$ is the identity on
$E|_{x}$, $g|_{y}$ is also the identity on $E|_{y}$. Therefore $g$
is the identity on $E$.
Since the surjective map $\cG_{\CC}\to\SL(2)$ given by $g\mapsto g|_{x}$ has the kernel $(\cG_{\CC})_{0}$, we have
\begin{equation}\label{G mod G0 isom SL2}
\cG_{\CC}/(\cG_{\CC})_{0}\cong\SL(2).
\end{equation}
Let $\cG_{\CC}/(\cG_{\CC})_0\times_{\cG_{\CC}}\cB^{ss}$ be the quotient space of
$\cG_{\CC}/(\cG_{\CC})_0\times\cB^{ss}$ by the action of
$\cG_{\CC}$ given by
$$f\cdot(\overline{g},(A,\phi))=(\overline{gf^{-1}},f\cdot(A,\phi)),$$
where $\overline{f}$ is the image of $f\in\cG_{\CC}$ under the
quotient map $\cG_{\CC}\to\cG_{\CC}/(\cG_{\CC})_0$. Since
$(\cG_{\CC})_0$ acts freely on $\cB^{ss}$, $\cG_{\CC}$ acts freely
on $\cG_{\CC}/(\cG_{\CC})_0\times\cB^{ss}$.
There exists a homeomorphism between
$\SL(2)\times_{\cG_{\CC}}\cB^{ss}$ and $\cB^{ss}/(\cG_{\CC})_0$.
By (<ref>), it suffices to show that there exists a homeomorphism between
$\cG_{\CC}/(\cG_{\CC})_0\times_{\cG_{\CC}}\cB^{ss}$ and $\cB^{ss}/(\cG_{\CC})_0$.
Consider the continuous surjective map
$$q:\cG_{\CC}\times\cB^{ss}\to\cG_{\CC}/(\cG_{\CC})_0\times\cB^{ss}$$
given by $(g,(A,\phi))\mapsto(\ovg,(A,\phi))$. If $\cG_{\CC}$ acts on $\cG_{\CC}\times\cB^{ss}$ by
$h\cdot(g,(A,\phi))=(gh^{-1},h\cdot(A,\phi))$, $q$ is $\cG_{\CC}$-equivariant. Taking quotients of both spaces
by $\cG_{\CC}$, $q$ induces the continuous surjective map
$$\ovq:\cG_{\CC}\times_{\cG_{\CC}}\cB^{ss}\to\cG_{\CC}/(\cG_{\CC})_0\times_{\cG_{\CC}}\cB^{ss}$$
given by $[g,(A,\phi)]\mapsto[\ovg,(A,\phi)]$.
If $(\cG_{\CC})_0$ acts on $\cG_{\CC}\times_{\cG_{\CC}}\cB^{ss}$ by
$h\cdot[g,(A,\phi)]=[g,h\cdot(A,\phi)]$, $\ovq$ is
$(\cG_{\CC})_0$-invariant. Precisely, for $g_0\in(\cG_{\CC})_0$,
$$\ovq([g,g_{0}\cdot(A,\phi)])=\ovq([gg_{0},(A,\phi)])=[\overline{gg_{0}},(A,\phi)]=[\ovg,(A,\phi)].$$
Thus $\ovq$ induces the continuous surjective map
$$\tq:\frac{\cG_{\CC}\times_{\cG_{\CC}}\cB^{ss}}{(\cG_{\CC})_0}\to\cG_{\CC}/(\cG_{\CC})_0\times_{\cG_{\CC}}\cB^{ss}$$
given by $\overline{[g,(A,\phi)]}\mapsto[\ovg,(A,\phi)]$.
Furthermore $\tq$ is injective. In fact, assume that
$$\tq(\overline{[g_{1},(A_{1},\phi_{1})]})=\tq(\overline{[g_{2},(A_{2},\phi_{2})]}),$$
that is,
$$[\overline{g_{1}},(A_{1},\phi_{1})]=[\overline{g_{2}},(A_{2},\phi_{2})].$$
Then there is $k\in\cG_{\CC}$ such that
$$(\overline{g_{1}},(A_{1},\phi_{1}))=k\cdot(\overline{g_{2}},(A_{2},\phi_{2}))=(\overline{g_{2}k^{-1}},k\cdot(A_{2},\phi_{2})).$$
Then $g_1=g_2 k^{-1}l$ for some $l\in(\cG_{\CC})_0$. Thus
$$\overline{[g_{1},(A_{1},\phi_{1})]}=\overline{[g_{2}k^{-1}l,k\cdot(A_{2},\phi_{2})]}=\overline{[g_{2}(k^{-1}lk),(A_{2},\phi_{2})]}=\overline{[g_{2},(A_{2},\phi_{2})]}$$
because $(\cG_{\CC})_0$ is the normal subgroup of $\cG_{\CC}$.
On the other hand, since both $q$ and the quotient map $\cG_{\CC}/(\cG_{\CC})_0\times\cB^{ss}\to\cG_{\CC}/(\cG_{\CC})_0\times_{\cG_{\CC}}\cB^{ss}$ are open, $\ovq$ is open. Moreover since the quotient map $\dss\cG_{\CC}\times_{\cG_{\CC}}\cB^{ss}\to\frac{\cG_{\CC}\times_{\cG_{\CC}}\cB^{ss}}{(\cG_{\CC})_0}$ is open,
$\tq$ is also open.
Hence $\tq$ is a homeomorphism. Since there is a homeomorphism $\xymatrix{\cG_{\CC}\times_{\cG_{\CC}}\cB^{ss}\ar[r]^{\quad\quad\cong}&\cB^{ss}}$ given by $[g,(A,\phi)]\mapsto g\cdot(A,\phi)$, we get the conclusion.
There is an isomorphism of complex analytic spaces $\SL(2)\times_{\cG_{\CC}}\cB^{ss}\cong\bR$.
There is a bijection between
$\SL(2)\times_{\cG_{\CC}}\cB^{ss}$ and
$\bR$. In fact, consider a map
given by
where $(\overline{\beta},(\overline{A},\overline{\phi}))$ is the
image of $(\beta,(A,\phi))$ under the quotient map $\SL(2)\times\cB^{ss}\to\SL(2)\times_{\cG_{\CC}}\cB^{ss}$. Since $f$
is surjective and $\cG_{\CC}$-invariant, $f$ induces
a bijection between $\SL(2)\times_{\cG_{\CC}}\cB^{ss}$ and $\bR$.
Further, the family $E\times(\SL(2)\times_{\cG_{\CC}}\cB^{ss})$ over $X\times(\SL(2)\times_{\cG_{\CC}}\cB^{ss})$ gives a complex
analytic map $g:\SL(2)\times_{\cG_{\CC}}\cB^{ss}\to\bR$ by <cit.>, and $f((\overline{\beta},(\overline{A},\overline{\phi})))=g((\overline{\beta},(\overline{A},\overline{\phi})))$ for all $(\overline{\beta},(\overline{A},\overline{\phi}))\in\SL(2)\times_{\cG_{\CC}}\cB^{ss}$.
Hence $f$ is an isomorphism of complex analytic spaces $\SL(2)\times_{\cG_{\CC}}\cB^{ss}\cong\bR$.
We also need a technical lemma for equivariant cohomology.
Let $H$ be a closed normal subgroup of $G$ and $M$ be a $G$-space on which $H$ acts freely. Then $G/H$ acts on $M/H$ and
$$H_{G}^{*}(M)\cong H_{G/H}^{*}(M/H).$$
Use the fibration $\E G\times_{G}M\cong(\E G\times \E(G/H))\times_{G}M\to \E(G/H)\times_{G}M\cong \E(G/H)\times_{G/H}(M/H)$ whose fiber $\E G$ is contractible.
The following equality is an immediate consequence of Lemma <ref>, Lemma <ref> and Lemma <ref>:
$$P_{t}^{\SL(2)}(\bR)=P_{t}^{\cG_{\CC}}(\cB^{ss}).$$
Thus we get the same formula for $P_{t}^{\SL(2)}(\bR)$ as Theorem <ref>.
§.§.§ Computation for $P_{t}^{\SL(2)}(\Sigma)$
In the proof of Lemma <ref>-(1), we observed that
Since $\{\pm\id\}\subset\SL(2)$ acts trivially on $\PP\Isom(\cO_{\widetilde{T^{*}J}}^{2},\cL|_{x}\oplus\cL^{-1}|_{x})$, $\{\pm\id\}\subset\SL(2)$ also acts trivially on $\Sigma$. Then
Since $\O(2)$ acts on $\PP\Isom(\cO_{\widetilde{T^{*}J}}^{2},\cL|_{x}\oplus\cL^{-1}|_{x})$ freely and both actions of $\PGL(2)$ and $\O(2)$ commute,
where $\sim$ denotes the homotopic equivalence. Thus
where $P_{t}^{+}(W)$ (respectively, $P_{t}^{-}(W)$) denotes the Poincaré polynomial of the invariant (respectively, variant) part of $H^{*}(W)$ with respect to the action of $\ZZ_{2}$ on $W$ for a $\ZZ_{2}$-space $W$.
Note that $\BSO(2)\cong\PP^{\infty}$. Since the nontrivial element of $\ZZ_{2}$ acts on $H^*(\BSO(2))$ by reversing orientation and $\PP^{n}$ possesses an orientation-reversing self-homeomorphism only when $n$ is odd, we have $\dss P_{t}^{+}(\BSO(2))=\frac{1}{1-t^{4}}$ and $\dss P_{t}^{-}(\BSO(2))=\frac{t^{2}}{1-t^{4}}$.
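Concretely, this is a one-line check (assuming only the standard presentation of the cohomology ring): $H^{*}(\BSO(2))=H^{*}(\PP^{\infty})=\QQ[u]$ with $\deg u=2$ and the nontrivial element of $\ZZ_{2}$ acting by $u\mapsto-u$, so
$$H^{*}(\BSO(2))^{+}=\QQ[u^{2}],\qquad H^{*}(\BSO(2))^{-}=u\cdot\QQ[u^{2}],$$
whose Poincaré series are the two fractions above.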
Further, by the computation mentioned in <cit.> and <cit.>, we have
§.§.§ Computation for $P_{t}^{\SL(2)}(\PP\Upsilon^{-1}(0)^{ss})$
Since $E/\!/\SL(2)$ has an orbifold singularity and $E/\!/\SL(2)\cong\PP\cC/\!/\SL(2)$ is a free $\ZZ_{2}$-quotient of $I_{2g-3}$-bundle over $\PP^{2g-1}$ by Lemma <ref>-(5) and Lemma <ref>, we use <cit.> to have
By Proposition <ref>,
On the other hand, we have the corresponding Poincaré polynomial from subsection <ref> for any $g\ge2$.
§.§.§ Computation for $P_{t}^{\SL(2)}(E_{2}^{ss})$
Since $E_2/\!/\SL(2)$ has an orbifold singularity and $E_2/\!/\SL(2)\cong\PP\cC_{2}/\!/\SL(2)$ is a free $\ZZ_{2}$-quotient of a $I_{2g-3}$-bundle over $\widetilde{T^{*}J}$ by Lemma <ref>-(1) and Lemma <ref>-(3), we use <cit.> to have
§.§.§ A conjectural formula for $IP_{t}(\bM)$
Combining Theorem <ref>, Conjecture <ref>, Proposition <ref>, section <ref>, section <ref>, section <ref> and section <ref>, we get a conjectural formula for $IP_{t}(\bM)$ as follows. The residue calculations show that the coefficients of the terms of $t^{i}$ are zero for $i>6g-6$ and the coefficient of the term of $t^{6g-6}$ is nonzero.
Assume that Conjecture <ref> holds. Then $IP_{t}(\bM)$ is given by an explicit closed formula built from the polynomials computed above, in which two groups of correction terms enter with coefficient $-\frac{1}{2}$, and it is a polynomial of degree $6g-6$.
In low genus, we have $IP_{t}(\bM)$ as follows :
* $g=2$ : $IP_{t}(\bM)=1+t^{2}+17t^{4}+17t^{6}$
* $g=3$ : $IP_{t}(\bM)=1+t^{2} +6t^{3} +2t^{4} +6t^{5} +17t^{6} +6t^{7} +81t^{8} +12t^{9} +396t^{10} +6t^{11} +66t^{12}$
* $g=4$ : $IP_{t}(\bM)=1+t^{2} +8t^{3} +2t^{4} +8t^{5} +30t^{6} +16t^{7} +31t^{8} +72t^{9} +59t^{10} +72t^{11} +385t^{12}+ 80t^{13} + 3955t^{14} + 80t^{15} + 3885t^{16} + 16t^{17} + 259t^{18}$
* $g=5$ : $IP_{t}(\bM)=1+t^{2} +10t^{3} +2t^{4} +10t^{5} +47t^{6} +20t^{7} +48t^{8} +140t^{9} +93t^{10} +150t^{11}+ 304t^{12} + 270t^{13} + 349t^{14} + 522t^{15} + 1583t^{16} + 532t^{17} + 29414t^{18} + 532t^{19}+ 72170t^{20} + 280t^{21}+ 28784t^{22} + 30t^{23} + 1028t^{24}$.
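As a consistency check (our remark, not a claim from the source): each listed polynomial indeed has degree $6g-6$, namely $6$, $12$, $18$ and $24$ for $g=2,3,4,5$ respectively, in accordance with the preceding statement.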
[1]
J. Bernstein and V. Lunts.
Equivariant sheaves and functors,
Lecture Notes in Mathematics 1578, Springer-Verlag, Berlin, 1994.
[2]
J. Cheeger, M. Goresky and R. MacPherson.
$L^2$-cohomology and intersection homology for singular algebraic varieties,
Ann. Math. Stud. 102:303–340, 1982.
[3]
J. Choy and Y-H. Kiem.
On the existence of a crepant resolution of some moduli spaces of sheaves on an abelian surface.
Math. Z. 252, no. 3: 557–575, 2006.
[4]
J. Choy and Y-H. Kiem.
Nonexistence of a crepant resolution of some moduli spaces of sheaves on a K3 surface,
J. Korean Math. Soc. 44, no. 1:35–54, 2007.
[5]
K. Corlette.
Flat G-bundles with canonical metrics,
J. Differential Geom. 28:361–382, 1988.
[6]
P. Deligne.
Théorème de Lefschetz et criteres de dégénérescence de suites spectrales,
Publ. Math. Inst. Hautes Etudes Sci. no. 35:107–126, 1968.
[7]
A. Dimca.
Sheaves in Topology,
Universitext Springer-Verlag, 2004.
[8]
G.D. Daskalopoulos, J. Weitsman and G. Wilkin.
Morse theory and hyperkähler Kirwan surjectivity for Higgs bundles,
J. Differential Geom. 87, no. 1:81–115, 2011.
[9]
C. Felisetti
Intersection cohomology of the moduli space of Higgs bundles on a genus 2 curve
[10]
P. Griffiths and J. Harris.
Principles of algebraic geometry,
Wiley, New York, 1978.
[11]
O. García-Prada, J. Heinloth and A. Schmitt
On the motives of moduli of chains and Higgs bundles,
J. Eur. Math. Soc. (JEMS) 16:2617–2668, 2014.
[12]
M. Goresky, R. Kottwitz and R. MacPherson.
Equivariant cohomology, Koszul duality, and the localization theorem,
Invent. Math. 131:25–83, 1998.
[13]
M. Goresky and R. MacPherson.
Intersection homology theory,
Topology 19:135–162, 1980.
[14]
M. Goresky and R. MacPherson.
Intersection Homology II,
Invent. Math. 71:77–129, 1983.
[15]
W.M. Goldman and J.J. Millson.
The deformation theory of representations of fundamental groups of compact Kähler manifolds,
Inst. Hautes Etudes Sci. Publ. Math. no. 67:43–96, 1988.
[16]
P. Gothen.
The Betti numbers of the moduli space of stable rank 3 Higgs bundles on a Riemann surface,
Internat. J. Math. 5, no. 6:861–875, 1994.
[17]
N.J. Hitchin.
The self-duality equations on a Riemann surface,
Proc. London Math. Soc. (3) 55:59–126, 1987.
[18]
T. Hausel and F. Rodriguez-Villegas.
Mixed Hodge polynomials of character varieties,
Invent. Math. 174:555–624, 2008.
[19]
F. Kirwan.
Cohomology of Quotients in Symplectic and Algebraic Geometry,
Mathematical Notes, Princeton University Press, 1985.
[20]
F. Kirwan.
Partial Desingularisations of Quotients of Nonsingular Varieties and their Betti Numbers,
Ann. of Math. 122:41–85, 1985.
[21]
F. Kirwan.
Rational intersection cohomology of quotient varieties,
Invent. Math. 86, no. 3:471–505, 1986.
[22]
Y.-H. Kiem and S.-B. Yoo.
The stringy E-function of the moduli space of Higgs bundles with trivial determinant,
Math. Nachr. 281, no. 6:817–838, 2008.
[23]
F. Kirwan and J. Woolf.
An introduction to intersection homology theory (second edition),
Chapman & Hall/CRC, Boca Raton, FL, 2006.
[24]
M. Logares, V. Muñoz and P.E. Newstead.
Hodge polynomials of $\SL(2,\CC)$-character varieties for curves of small genus,
Rev. Mat. Complut. 26:635–703, 2013.
[25]
M. Mauri.
Intersection cohomology of rank two character varieties of surface groups,
[26]
M. Mereb.
On the E-polynomials of a family of $\SL_{n}$-character varieties,
Math. Ann. 363:857–892, 2015.
[27]
J. Martínez and V. Muñoz.
E-polynomials of $\SL(2,\CC)$-character varieties of complex curves of genus 3,
Osaka J. Math. 53:645–681, 2016.
[28]
J. Martínez and V. Muñoz.
E-Polynomials of the $\SL(2,\CC)$-character varieties of surface groups,
International Mathematics Research Notices 2016, no. 3:926–961, 2016.
[29]
K.G. O'Grady.
Desingularized moduli spaces of sheaves on a K3,
[30]
K.G. O'Grady.
Desingularized moduli spaces of sheaves on a K3,
J. Reine Angew. Math. 512:49–117, 1999.
[31]
O. Schiffmann.
Indecomposable vector bundles and stable Higgs bundles over smooth projective curves,
Ann. of Math. 183:297–362, 2016.
[32]
C.T. Simpson.
Constructing variations of Hodge structure using Yang-Mills theory and applications to uniformization,
J. Amer. Math. Soc. 1, no. 4:867–918, 1988.
[33]
C.T. Simpson.
Higgs bundles and local systems,
Inst. Hautes Études Sci. Publ. Math. no. 75:5–95, 1992.
[34]
C.T. Simpson.
Moduli of representations of the fundamental group of a smooth projective variety. I,
Inst. Hautes Etudes Sci. Publ. Math. no. 79:47–129, 1994.
[35]
C.T. Simpson.
Moduli of representations of the fundamental group of a smooth projective variety. II,
Inst. Hautes Etudes Sci. Publ. Math. no. 80:5–79, 1994.
[36]
A. Weber.
A morphism of intersection homology induced by an algebraic map,
Proc. Amer. Math. Soc. 127, no. 12:3513–3516, 1999.
[37]
J. Woolf.
The decomposition theorem and the intersection cohomology of quotients in algebraic geometry,
J. Pure App. Algebra. 182, no. 2-3:317–328, 2003.
[38]
S.-B. Yoo.
Stringy E-function and intersection cohomology of the moduli space of Higgs bundles over an algebraic curve
Doctoral dissertation, Seoul National University, 2009.
|
# OH maser toward IRAS 06056+2131: polarization parameters and evolution
status
Darwish, M. S.1,2, Richards, A. M. S.3, Etoka, S.3, Edris, K. A.4, Saad, S.
M.1,2, Beheary, M. M.4, Fuller, G. A.3
1Astronomy Department, National Research Institute of Astronomy and Geophysics
(NRIAG), 11421 Helwan, Cairo, Egypt.
2Kottamia Center of Scientific Excellence in Astronomy and Space Science
(KCScE, STDF No. 5217, ASRT), Cairo, Egypt.
3Jodrell Bank Centre for Astrophysics, Department of Physics & Astronomy,
The University of Manchester, M13 9PL, UK.
4Astronomy and Meteorology Department, Faculty of Science, Al-Azhar
University, Cairo, Egypt.
E-mail<EMAIL_ADDRESS>
(Accepted 2020 September 11. Received 2020 September 9; in original form 2020
July 22)
###### Abstract
We present high angular resolution observations of OH maser emission towards
the high-mass star forming region IRAS 06056+2131. The observations were
carried out using the UK radio interferometer array, Multi-Element Radio
Linked Interferometer Network (MERLIN) in the OH main lines at 1665- and
1667-MHz, in addition to the OH satellite line at 1720-MHz. The results of
this study reveal the small upper limit to the size of emission in the
1665-MHz line with an estimated total intensity of $\sim 4$ Jy. We did not
detect any emission from the 1667-MHz and 1720-MHz lines. The full
polarization mode of MERLIN enables us to investigate the magnetic field in
the OH maser region. In the 1665-MHz transition, a Zeeman pair is identified
from which a magnetic field strength of $\sim-1.5$ mG is inferred. Our results show that
IRAS 06056+2131 is highly polarized, with $\sim$ 96 $\%$ circular polarization
and $\sim$ 6 $\%$ linear polarization. The linear polarization angle is $\sim
29$∘, implying a magnetic field which could be aligned with the outflow
direction detected toward this region, but the actual magnetic field direction
has an uncertainty of up to 110∘ due to the possible effects of Faraday
rotation. The star forming evolutionary status of the embedded proto-stellar
object is discussed.
###### keywords:
Stars: formation – stars: massive – stars: individual: IRAS 06056+2131 –
masers – Polarization
## 1 Introduction
Despite the important role that the high-mass stars play in the formation of
stars and galaxies as well as the evolution of the universe, many questions
about how such stars form remain open. Different theoretical approaches have been
introduced in order to answer the question of how massive stars form. One
approach, proposed by McKee & Tan (2003), is known as the core accretion
model. They suggested that the formation of massive stars is similar to that
of low mass stars, where the dominant force is the magnetic field and
sufficient mass accretion only occurs when the magnetic support is removed
through jets and outflows (e.g. Mouschovias & Paleologou, 1979; Mouschovias et
al., 2006; Commerçon et al., 2011; Tan et al., 2013; Klassen et al., 2017).
On the other hand, other authors (e.g. Padoan & Nordlund, 2002; Mac Low &
Klessen, 2004; Kitsionas et al., 2005; Vázquez-Semadeni et al., 2011) argue
that dynamical influences such as turbulence play more effective roles than
the magnetic field, particularly in the early stages of massive star
formation. Therefore, understanding the role of the magnetic field during the
formation of massive stars can lead us to a better understanding of how such
stars are formed.
Maser emission lines provide exceptionally high resolution probes to measure
the small scale magnetic field strength and structure within 10s – 1000s AU of
high mass protostars (e.g. Fish & Reid, 2007; Vlemmings et al., 2010; Surcis
et al., 2013; Goddi et al., 2017; Crutcher & Kemball, 2019). Masers are also
used to investigate the kinematics as well as the physical conditions
surrounding massive protostellar objects from the onset of formation (e.g.
Caswell, 1997; Stahler et al., 2000; Szymczak & Gérard, 2004; Breen et al.,
2010b; Darwish et al., 2020). Hydroxyl (OH), water (H2O) and methanol (CH3OH)
masers are commonly used to investigate the kinematics as well as the physical
conditions surrounding massive protostellar objects in the early stage(s) of
their formation. Due to its paramagnetic nature, the OH radical is considered to
be more sensitive than CH3OH and H2O for measuring the Zeeman effect directly
and consequently the magnetic field strength toward these objects
(e.g. Cohen, 1985; Edris et al., 2005; Green et al., 2007, 2012; Vlemmings,
2007; Etoka et al., 2012; Edris et al., 2017). However, H2O and CH3OH are also
important particularly in studying the morphology of the magnetic field since
their linear polarization vectors are less affected by Faraday rotation than
OH masers (Vlemmings et al., 2006b; Surcis et al., 2011a, b; Momjian & Sarma,
2017). Zeeman splitting of OH maser lines can provide the 3D orientation of
magnetic fields towards massive protostellar objects. The compactness and
brightness of masers allow polarization observations at high angular
resolution using interferometers, such as e-MERLIN, the VLBA (Very Long
Baseline Array) and the EVN (European VLBI Network).
OH masers, particularly the main lines at 1665 and 1667 MHz, are known to be
frequently associated with different evolutionary stages of high-mass star
forming regions (HMSFRs) (e.g. Ellingsen et al., 2007). Caswell et al. (2011)
reported that OH masers are associated with Ultra-Compact H II (UC H II)
regions (see also, Cohen et al., 1988; Braz et al., 1990; Breen et al.,
2010a); other authors (e.g. Breen et al., 2010b; Edris et al., 2007; Garay &
Lizano, 1999) found that OH masers can also be associated with H II regions,
which represent a more advanced stage in the massive star formation time
scale.
IRAS 06056+2131 (also known as AFGL6366S) is a high mass star-forming region
located in the Gemini OB1 cloud, which is known to be one of the most massive
molecular cloud complexes in the outer Galaxy. It is named after the IRAS
source located at RA (2000) = 06h 08m 40s.9, Dec (2000) = 21∘ 31′ 00″ with an
error ellipse of 30$\times$5 arcsec at a position angle (PA) of 91∘ (Joint
IRAS Science Working Group, 1988), giving an uncertainty of 2s.15 and 5″.0 in
RA and Dec, respectively.
This source is one of a large sample of candidate high-mass young stellar
objects (YSO) which were identified by Palla et al. (1991). The sample of 260
IRAS sources was divided into two subsamples based on their IRAS color. The
first subsample, so-called “high”, is composed of sources where
[25$\mu$m$-$12$\mu$m] > 0.57 (here [$\lambda_{2}-\lambda_{1}$] is defined as
$\log_{10}[F_{\lambda 2}/F_{\lambda 1}]$, where $F_{\lambda i}$ is the IRAS
flux density in wavelength band $\lambda_{i}$), which fulfils the Wood &
Churchwell (1989) criteria for objects associated with ultra-compact H II (UC
H II) regions. The second subsample, so-called “low”, is composed of sources
with [60$\mu$m-12$\mu$m] > 1.3, where different evolutionary stages can be
found, extending from the stage prior to the UC H II detection to evolved
sources (Molinari et al., 1996). According to Palla et al. (1991), IRAS
06056+2131 belongs to the “high" subsample. The estimated far-infrared
luminosity of IRAS 06056+2131 is $5.83\times 10^{3}$ $L_{\odot}$ (Yoo et al.,
2018, and references therein) while the distance to the source is thought to
be in the range of 0.8 to 2.5 kpc (Koempe et al., 1989; Kurtz et al., 1994).
We adopt a distance of 1.5 kpc as an average (Zhang et al., 2005). A CO
(J=2-1) bipolar outflow was detected toward IRAS 06056+2131 by Snell et al.
(1988); Wu et al. (2004) and Zhang et al. (2005). Kurtz et al. (1994), using
the VLA at a resolution $\leq$ 1′, detected weak radio continuum emission (less
than 1 mJy) at 3.6 cm, within 8″.0 of the IRAS
source, while Rosolowsky et al. (2010), through the Bolocam Galactic Plane
Survey (BGPS), indicated that IRAS 06056+2131 is associated with mm continuum
emission.
IRAS 06056+2131 is known to be associated with several maser species. A Class
II 6.7-GHz methanol maser was first detected toward the source at 12.7 Jy and
found to be offset from the nominal IRAS position by
7″.6 (see Caswell et al., 1995; Szymczak et al.,
2000; Xu et al., 2009; Fontani et al., 2010). Water maser emission at 22-GHz
was detected by Koempe et al. (1989) and Sunada et al. (2007) with peak flux
densities of 23 Jy and 2.43 Jy, respectively, although Palla et al. (1991)
failed to detect any emission in this transition, suggesting that the water
masers toward this source are variable. The first detection of the OH maser
toward IRAS 06056+2131 was reported by Turner (1979), while the first
positional association was made by Cohen et al. (1988). The LSR (Local
Standard of Rest) velocity for IRAS 06056+2131 given in SIMBAD is $2.5\pm 1.6$
km s-1, measured using SEST (Swedish-ESO Submillimeter Telescope) observations
of CS by Bronfman et al. (1996) but there may be multiple sources within their
$50^{\prime\prime}$ beam.
In this study, we aim to estimate the accurate position of the OH maser within
the IRAS 06056+2131 region and contribute to a better understanding of the
evolutionary status of the IRAS source. Additionally, we investigate the
magnetic field in the maser region using full-polarization MERLIN
observations. In Section 2, the observations and data reduction are described.
The data analysis and results of the imaging are given in Section 3. Our
discussion and conclusion are presented in Section 4 and 5, respectively.
## 2 Observations and data reduction
The observations of the source IRAS 06056+2131 were carried out using the
MERLIN interferometer. Observations of the OH maser emission were performed in
full-polarization mode during February 2007. IRAS 06056+2131 was observed for
3 full tracks, switching between the 1665-, 1667- and 1720-MHz transitions
about every 30 minutes during the observations.
The target was observed in bands centred on 1665.402, 1667.359 and 1720.530
MHz, adjusted to fixed velocity with respect to the LSR in the correlator
assuming a target velocity of 10 km s-1 (Edris et al., 2007). The compact
quasar 0617+210 was observed as a phase reference calibrator with position RA
= 06h 20m 19s.529, Dec = 21∘ 02′ 29″.501. This was
observed alternately with the target in an approximately 10 minute cycle,
giving a total of about 6 hr useful data on-target at each frequency. 0617+210
was observed in the full 16 MHz bandwidth (‘wide’, 13 MHz useful) with two
tunings, one covering 1665 and 1667 MHz and the other covering 1720 MHz.
The data were extracted from the MERLIN archive and converted to FITS at
Jodrell Bank Center for Astrophysics (JBCA) using local software (dprogs)
(Diamond et al., 2003) and the Astronomical Image Processing System (AIPS)
software package (http://www.aips.nrao.edu/cook.html). In order to calibrate
and reduce the data we have used the Common Astronomy Software Application
(CASA) software package version 5.1.6 (McMullin et al., 2007), the full
documentation can be found at https://casa.nrao.edu/casadocs. The
observational parameters for IRAS 06056+2131 are listed in Table 1. The
observations and data reduction followed normal MERLIN procedures, see the
MERLIN User Guide (Diamond et al., 2003).
The OH maser lines were observed in a spectral bandwidth of 0.5 MHz (‘narrow’)
corresponding to 80 km s-1 useful velocity range with a channel separation of
0.18 km s-1. The source 3C84 was observed as a bandpass calibrator in both
wide and narrow configurations. At this frequency the flux density for 3C84
was set to be 17.05 Jy based on previous scaling using 3C286, which has a flux
density around 13.6 Jy (Baars et al., 1977), allowing for the resolution of
MERLIN.
We used 3C84 to correct for the wide-narrow phase offset. The polarization
leakage for each antenna was estimated using the un-polarized source 3C84
while the calibration and correction for the polarization position angle was
carried out using the source 3C286, which has a known polarization angle of
33∘ in the image plane. These corrections, along with the bandpass table and
the phase reference solutions for phase and amplitude, were applied to the
target.
The CLEAN algorithm in the CASA package, “tclean”, was used to clean and de-convolve
the image cubes, using the default Cotton-Schwab based deconvolution (Schwab,
1984) and the Hogbom minor cycle deconvolver (Högbom, 1974). We cleaned all
Stokes parameters $I$, $Q$, $U$ and $V$ (total intensity, linear and circular
products, see Section 3, Equations 1-4), with the same mask, and similarly
cleaned the correlator circular polarization products RR and LL. The
$\sigma_{\mathrm{rms}}$ noise in quiet channels was 0.03 Jy in $I$, $Q$, $U$
and $V$, and 0.04 Jy in RR and LL. The synthesised beam size was
0″.19 $\times$ 0″.13 at PA
= 29.39∘. We made a linearly-polarized intensity (POLI) image using 0.03 Jy
for de-biasing, and a polarization angle (POLA) cube, using a cut-off of 3
times the POLI $\sigma_{\mathrm{rms}}$ of 0.04 Jy (see Section 3, Eqs. 5 and
6). Figure 1 displays the brightest channels of IRAS 06056+2131 after cleaning
each circular polarization separately.
In this work we use ‘component’ to refer to a single patch of emission in a
single channel (sometimes referred to as ‘spots’), and ‘feature’ to refer to a
series of components which are likely to make up a physical association. We
used the CASA task “imfit” to determine the positions of the maser
components (‘spots’) by fitting 2D Gaussian components to each patch of
emission above a 3$\sigma_{\mathrm{rms}}$ threshold in each channel map for
the total intensity (Stokes $I$) cube. The OH maser position errors due to the
angular separation and the cycle time between the phase-reference source and
the target, are about 0$\aas@@fstack{\prime\prime}$008 and
0$\aas@@fstack{\prime\prime}$022, respectively. The stochastic error due to
noise for the peak is 0$\aas@@fstack{\prime\prime}$007\. Allowing for the
phase reference and telescope position errors, which are comparatively very
small, we obtain a total astrometric error of 0$\aas@@fstack{\prime\prime}$025
in each direction.
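The quoted total is consistent with combining the individual contributions in quadrature (our reading of how the budget was formed; the numbers support it):

```python
import math

# Position error contributions (arcsec): phase-reference separation,
# cycle time, and stochastic noise on the peak.
contributions = [0.008, 0.022, 0.007]
total = math.sqrt(sum(e ** 2 for e in contributions))
print(f"{total:.3f} arcsec")   # ~0.024, quoted as 0".025 in each direction
```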
The position error for fitting emission imaged using a sparse array such as
MERLIN is approximately (synthesised beam)/(signal-to-noise ratio), so the
components were required to appear in at least three consecutive channels,
with positions within 0″.05 of their occurrence in each consecutive channel;
such groups form spectral features (a sketch of this rule follows below). It
was apparent that the resulting components all occurred at the same position
within the errors (see Section 3), in the $V_{\mathrm{LSR}}$ range
8.95–10.53 km s-1.
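A minimal sketch of this grouping rule (an illustrative helper of our own, not the actual reduction script; the radial 0″.05 tolerance and three-channel minimum are taken from the text):

```python
import math

def group_features(components, tol=0.05, min_len=3):
    """Group (channel, ra_off, dec_off) components, sorted by channel and with
    offsets in arcsec, into spectral features (hypothetical helper)."""
    features, current = [], []
    for comp in components:
        ch, ra, dec = comp
        if current:
            ch0, ra0, dec0 = current[-1]
            if ch == ch0 + 1 and math.hypot(ra - ra0, dec - dec0) <= tol:
                current.append(comp)
                continue
        if len(current) >= min_len:
            features.append(current)
        current = [comp]
    if len(current) >= min_len:
        features.append(current)
    return features
```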
Figure 1: Maps of the clean image of the brightest target channel ($V_{\mathrm{LSR}}$ = 10 km s-1). The left panel shows left-hand circular polarization, while the right panel shows right-hand circular polarization. The ellipse in the left corner represents the MERLIN primary beam shape. The intensity is shown by the grey scale while the red lines represent the linear polarization vector directions.

Table 1: IRAS 06056+2131 observational parameters.

Date of observation | 6, 7, 8 and 9 Feb. 2007
---|---
No. of antennas | six (Cm, Da, De, Kn, Pi and Mk2)
Field centre (RA, Dec J2000) | 06h 08m 40s.97, 21∘ 31′ 00″.60
Rest frequencies (MHz) | 1665.402, 1667.359, 1720.530
No. of frequency channels | 255
Total bandwidth (MHz) | 0.25
Bandpass calibrator | 3C84
Polarization angle calibrator | 3C286
Phase calibrator | 0617+210
rms in quiet channel (Jy beam-1) | 0.03
We therefore measured the flux densities of the other polarization cubes at
the same position as the error-weighted Stokes $I$ position. In the case of
$Q$, one 2$\sigma_{\mathrm{rms}}$ result is given because the other
polarization products are significant. We did this in two ways, firstly by
using imfit to fit an unresolved Gaussian component the size of the restoring
beam and secondly by using imstat to measure the maximum (or minimum, for $Q$,
$U$ and $V$) flux density within the beam area. The results of both methods
were the same to within the noise-based positional error.
## 3 Results and Data analysis
We detected OH maser emission at the 1665-MHz main line toward IRAS
06056+2131. At the time of our MERLIN observations, the other main line at
1667 MHz and the satellite line at 1720 MHz were absent.
We detected total intensity (Stokes $I$) maser components in 9 successive
channels (Table 2). The absolute position of the total intensity peak is RA =
06h 08m 40s.6791, Dec = 21∘ 31′ 6″.929 at
$V_{\mathrm{LSR}}$= 10 km s-1. The positions of total intensity components in
other channels were close to this, within the noise-based errors. Thus, we
adopted the error-weighted centroid of the Stokes $I$ emission as the common
position for all channels and polarizations: RA = 06h 08m 40s.6775, Dec =
21∘ 31′ 6″.918, with standard deviations of 0″.021 in RA and 0″.017 in
Dec.
Left-hand circular (LHC) polarization masers were detected in 8 of the 9
channels, and right-hand circular (RHC) in 4 channels. The LHC peak coincides
with the total intensity peak. Their spectra are shown in Figure 2. If we
assume that these are a Zeeman pair then all the OH 1665 MHz masers detected
comprise a single feature.
Figure 2: 1665-MHz OH maser spectra towards IRAS 06056+2131 at RA = 06h 08m
40s.6775, Dec = 21∘ 31′ 6″.918. The LHC and RHC
polarization peaks are labeled No.1 and No.2 respectively. Solid lines are
used for the data while dashed lines are for the Gaussian fits.
In order to measure the Zeeman splitting we fitted spectral Gaussian curves to
the LHC and RHC peaks. The peak $V_{\mathrm{LSR}}$ and intensities are given
in Table 3. The letter Z is associated with the RHC and LHC polarized peaks of
the identified Zeeman pair. From the identified Zeeman pair, we are able to
determine the line-of-sight magnetic field ($B_{\parallel}$) in the massive
star forming region IRAS 06056+2131 by measuring the velocity difference
between the two polarization hands (Elitzur, 1996).
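As a worked illustration of this conversion, using the fitted peak velocities from Table 3 and assuming the commonly adopted Zeeman splitting coefficient of 0.59 km s-1 mG-1 for the 1665-MHz OH transition (the paper does not state the coefficient it used):

```python
# Fitted peak velocities of the identified Zeeman pair (Table 3), km/s.
v_lhc, v_rhc = 9.981, 9.139
ZEEMAN_COEFF_1665 = 0.59   # km/s per mG (assumed standard value for OH 1665 MHz)

# Convention: RHC at lower velocity than LHC implies a negative field,
# i.e. a line-of-sight field pointing toward the observer.
b_parallel = (v_rhc - v_lhc) / ZEEMAN_COEFF_1665
print(f"B_parallel ~ {b_parallel:.1f} mG")   # ~-1.4 mG; the paper quotes ~-1.5 mG
```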
We also measured the Stokes parameters ($I$, $Q$, $U$ and $V$), following the
radio definitions:
$\displaystyle I(\textit{u},\textit{v})$
$\displaystyle=1/2\,[\mathrm{RR}(\textit{u},\textit{v})+\mathrm{LL}(\textit{u},\textit{v})]$
(1) $\displaystyle V(\textit{u},\textit{v})$
$\displaystyle=1/2\,[\mathrm{RR}(\textit{u},\textit{v})-\mathrm{LL}(\textit{u},\textit{v})]$
(2) $\displaystyle Q(\textit{u},\textit{v})$
$\displaystyle=1/2\,[\mathrm{RL}(\textit{u},\textit{v})+\mathrm{LR}(\textit{u},\textit{v})]$
(3) $\displaystyle U(\textit{u},\textit{v})$
$\displaystyle=1/2\textit{i}\,[\mathrm{LR}(\textit{u},\textit{v})-\mathrm{RL}(\textit{u},\textit{v})]$
(4)
$I$($\textit{u},\textit{v}$) is the Stokes integrated (total) flux density,
$Q$($\textit{u},\textit{v}$) and $U$($\textit{u},\textit{v}$) are the Stokes
flux densities corresponding to the two orthogonal components of linear
polarization, while $V$($\textit{u},\textit{v}$) represents the circular
component of polarized emission.
Table 2: Measured peak flux densities per channel and errors ($\sigma_{\mathrm{rms}}$) in Stokes $I,Q,U,V$, $\mathrm{LL}$, $\mathrm{RR}$, polarized intensity $P$ and polarization angle $\chi$. The symbol – means that no emission was detected above 3$\times$ the given $\sigma_{\mathrm{rms}}$ error. $V_{\mathrm{LSR}}$ is in km s-1, $\chi$ and its error are in degrees, and all other columns are in mJy beam-1.

$V_{\mathrm{LSR}}$ | $I$ | $I$err | $V$ | $V$err | $\mathrm{RR}$ | $\mathrm{RR}$err | $\mathrm{LL}$ | $\mathrm{LL}$err | $Q$ | $Q$err | $U$ | $U$err | $P$ | $P$err | $\chi$ | $\chi$err
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---
10.53 | 131 | 19 | –180 | 34 | – | 40 | 298 | 35 | – | 30 | – | 30 | – | – | – | –
10.35 | 127 | 42 | –197 | 42 | – | 40 | 282 | 65 | – | 30 | – | 30 | – | – | – | –
10.18 | 1119 | 48 | –1138 | 51 | – | 40 | 2257 | 89 | – | 30 | – | 30 | – | – | – | –
10.00 | 3808 | 130 | –3673 | 133 | 250 | 58 | 7504 | 250 | 112 | 28 | 197 | 36 | 217 | 33 | 29 | 4
9.82 | 1702 | 60 | –1665 | 91 | – | 40 | 3375 | 126 | 55 | 30 | 203 | 31 | 205 | 34 | 36 | 4
9.47 | 102 | 31 | – | 30 | – | 40 | 138 | 55 | – | 30 | – | 30 | – | – | – | –
9.30 | 192 | 39 | 299 | 29 | 476 | 39 | – | 40 | – | 30 | – | 30 | – | – | – | –
9.12 | 545 | 19 | 391 | 35 | 927 | 42 | 164 | 34 | – | 30 | – | 30 | – | – | – | –
8.95 | 230 | 26 | 163 | 38 | 375 | 49 | 251 | 58 | – | 30 | – | 30 | – | – | – | –
Table 3: Gaussian fit parameters of the RHC and LHC polarization peaks of the 1665-MHz OH maser detected towards IRAS 06056+2131. The letter Z marks the identified Zeeman pair, from which $B_{\parallel}$ is derived.

Peak | $V_{\mathrm{LSR}}$ (km s-1) | Peak Flux (Jy/beam) | Zeeman Pair | $B_{\parallel}$ (mG)
---|---|---|---|---
1. LHC | 9.981 | 7.675 | Z | $-1.5$
2. RHC | 9.139 | 0.934 | Z | $-1.5$
The degree of linear ($P_{\mathrm{linear}}$) and circular
($P_{\mathrm{circular}}$) polarization was calculated as:
$\displaystyle P_{\mathrm{linear}}$ $\displaystyle=(Q^{2}+U^{2})^{0.5}/I$ (5)
$\displaystyle P_{\mathrm{circular}}$ $\displaystyle=V/I$ (6)
The total polarization degree $P_{\mathrm{total}}$ can be then calculated from
Eq.7 as:
$\displaystyle P_{\mathrm{total}}$ $\displaystyle=(Q^{2}+U^{2}+V^{2})^{0.5}/I$
(7)
From Equations 3 and 4, the polarization position angle $\chi$ was calculated
as:
$\displaystyle\chi$ $\displaystyle=0.5\times\arctan(U/Q)$ (8)
The Stokes and other polarization parameters were measured from the data cubes
as described in Section 2 and listed in Table 2. Note that the POLI cube
corresponds to $(Q^{2}+U^{2})^{0.5}$ and POLA to $\chi$. Once the
$P_{\mathrm{linear}}$ and $P_{\mathrm{circular}}$ polarization degrees are
determined from Equations 5 and 6 respectively, the corresponding percentages,
denoted $m_{\mathrm{L}}$, $m_{\mathrm{C}}$ and $m_{\mathrm{T}}$ for the
linear, circular and total polarization respectively, are listed in Table 4.
Table 4: The polarization angle ($\chi$) and percentages of linear ($m_{\mathrm{L}}$), circular ($m_{\mathrm{C}}$) and total ($m_{\mathrm{T}}$) polarization, with respect to total intensity, of the 1665-MHz OH maser emission detected towards IRAS 06056+2131.

$\chi$ (∘) | $m_{\mathrm{L}}$ ($\%$) | $m_{\mathrm{C}}$ ($\%$) | $m_{\mathrm{T}}$ ($\%$)
---|---|---|---
29 | 6.0 | 96.5 | 96.6
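As a concrete check, applying Equations 5–8 to the tabulated Stokes values of the brightest channel (Table 2, $V_{\mathrm{LSR}}$ = 10.00 km s-1) reproduces the Table 4 percentages; the small difference in $\chi$ reflects fitted versus single-channel values:

```python
import math

# Tabulated Stokes flux densities for the brightest channel (Table 2), mJy/beam.
I, Q, U, V = 3808.0, 112.0, 197.0, -3673.0

p_linear = math.hypot(Q, U) / I               # Eq. 5 -> ~0.059 (6.0% in Table 4)
p_circular = V / I                            # Eq. 6 -> ~-0.965 (|.| -> 96.5%)
p_total = math.sqrt(Q**2 + U**2 + V**2) / I   # Eq. 7 -> ~0.966 (96.6%)
chi = 0.5 * math.degrees(math.atan2(U, Q))    # Eq. 8 -> ~30 deg (29 in Table 4)
```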
## 4 Discussion
Our MERLIN OH maser observations represent the first polarimetric study at
high spectral and angular resolution of IRAS 06056+2131. The first detection
of OH maser emission towards this source was made by Turner (1979). They
observed the 4 ground-state OH transitions. Though they tabulated putative
detections of 1667 MHz maser emission, the strongest component, at
$V_{\mathrm{LSR}}$=9.2 km s-1, is in the range 3 to 5 sigma according to their
peak-to-peak noise values. They nonetheless made a clear detection of the
1665-MHz transition, centred at a velocity of $V_{\mathrm{LSR}}$=8.9 km s-1
with a full width half maximum of 4.6 km s-1 (note that their channel
resolution is 1.4 km s-1).
The detection of 1665-MHz OH emission, centred at $V_{\mathrm{LSR}}$=10 km
s-1, was then confirmed by Cohen et al. (1988) within 1′ from the IRAS source.
They measured a total flux density of 3.4 Jy and 0.3 Jy in the LHC and RHC
polarization respectively. Moreover, Edris et al. (2007) using Nançay and
Green Bank Telescope observations detected emission at 1667 and 1720 MHz in
addition to the 1665 MHz line at velocities 9.44, 3.37 and 10.14 km s-1 with
flux densities 0.22, 0.6 and 3.23 Jy respectively. Despite the similarity in
velocities of the 1667 and 1665 MHz masers, our non-detection at 1667 MHz
could be due to variability (since it was previously much weaker than 1665
MHz) but could also indicate that the environment is becoming warmer (Cragg et
al., 2002; Gray et al., 1991).
Previous surveys of the 1720 MHz OH maser line by Fish et al. (2003) and Ruiz-
Velasco et al. (2016) suggested that this line is clearly observed from
regions with relatively high magnetic field strengths which coincide with H II
regions. The significant difference in the velocity between the 1720-MHz line
detected by Edris et al. (2007) and the main lines could be an indication that
the 1720-MHz is tracing another protostellar object or that it is excited by
different phenomena.
### 4.1 Polarization
The derived polarization parameters listed in Table 4 indicate that IRAS
06056+2131 is a highly circularly polarized source. At the velocity of the
LHC peak, however, we found a linear polarization percentage of 6% alongside a
circular polarization percentage of 96.5%, which means that the emission is
elliptically polarized. The line-of-sight component of the magnetic field at
the location of the Zeeman pair is $B_{\parallel}=-1.5$ mG, indicating a
magnetic field pointing toward us. Figure 3 shows the detected OH maser
emission through different channels of IRAS 06056+2131 with their circular
polarization contours and linear polarization vectors. Figure 4 shows the
total intensity (Stokes $I$) of IRAS 06056+2131, where we can see clearly the
Zeeman pair. It also displays the measured linear polarization intensity
labeled with their polarization angles. The contribution of the magnetic field
in the plane of the sky, which is perpendicular to the polarisation vectors
could suggest that it is broadly aligned with the possible SE-NW CO outflow
detected by Xu et al. (2006), see Section 4.2 and Figure 5. However, the
foreground Faraday rotation can potentially be strong at low frequencies hence
making it harder to interpret the magnetic field direction (e.g. Vlemmings et
al., 2006a).
In order to estimate this effect on the orientation of the measured
polarization position angle ($\chi$), we used the same method as in Etoka &
Diamond (2010) relying on the measurement of the dispersion measure (DM) and
the rotation measure (RM) from pulsars. The closest pulsar to IRAS 06056+2131
is found to be PSR J0609+2130. This pulsar is located at 1.2 kpc and has a
dispersion measure (DM) of 38.77 cm-3 pc (Lorimer et al., 2004).
Unfortunately, no RM is provided for this pulsar. The next closest pulsar with
both a known DM of 96.91 cm-3 pc and a known RM of 66.0 rad m-2, PSR B0611+22,
is located with an offset of 1.6∘ from IRAS 06056+2131 at a comparable
distance of 1.74 kpc (Sobey, priv. communication).
Assuming that the ambient Galactic magnetic field at the location of PSR
J0609+2130 and PSR B0611+22 is similar, the RM for PSR J0609+2130 is then
inferred to be 26.4 rad m-2, based on Eq. 1 from Han et al. (1999), given by:
$\displaystyle B_{\parallel}=1.232(\mathrm{RM/DM})$ (9)
Since the degree of Faraday rotation is given by RM$\lambda^{2}$ (Noutsos et al.,
2008), this implies that the values of $\chi$ measured here potentially suffer
from Faraday rotation as high as 50∘.
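The arithmetic behind this estimate can be reproduced directly (pulsar values from the text; the RM scaling step encodes the assumption, stated above, of a common ambient field):

```python
import math

C = 299_792_458.0                    # speed of light, m/s

dm_j0609 = 38.77                     # DM of PSR J0609+2130, cm^-3 pc (no RM known)
dm_b0611, rm_b0611 = 96.91, 66.0     # DM and RM of PSR B0611+22

# Equal B_parallel (Eq. 9: B = 1.232 RM/DM) implies RM scales with DM.
rm_j0609 = rm_b0611 * dm_j0609 / dm_b0611    # ~26.4 rad m^-2

wavelength = C / 1.665402e9                  # ~0.18 m at the 1665-MHz line
rotation = rm_j0609 * wavelength ** 2        # RM * lambda^2, in rad
print(math.degrees(rotation))                # ~49 deg, "as high as 50" in the text
```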
Additionally, internal Faraday rotation can also affect the linear
polarisation information if the OH masers are located right behind the UC H II
region. Using eq. 5 of Fish & Reid (2006) with the electron density and the
diameter of the masing region from Kurtz et al. (1994) (cf. their Table 5)
would lead to a very high value of internal Faraday rotation. On the other
hand, Fish & Reid (2006) show that even if there is a substantial amount of
Faraday rotation (i.e., >1 rad) along the amplification length, the key length
is the gain length. The fact that we measure a significant percentage of
linear polarisation implies that internal Faraday rotation potentially
affecting the measurements of $\chi$ along the gain length is < 1 rad (i.e.,
<57∘), suggesting that the masers arise from the limb or the near side of the
UC H II region. Since the sense of the external and internal potential Faraday
rotation is nonetheless not known, the measurements of $\chi$ can suffer from
0∘ to up to 110∘ of rotation.
The alignment of the polarisation vectors with the orientation of the CO
outflow can consequently be fortuitous. It is to be noted though, that the
velocity of the masers is in good agreement with their suggested association
with the red lobe of the CO outflow.
Figure 3: Channel maps of IRAS 06056+2131. The contour levels are adapted to the dynamic range per panel. The grey scale shows the total intensity, cut at [–0.1, 1] Jy. The light blue contours show Stokes $V$ at (1, 2) $\times$ 12 mJy/beam. The dark blue contours show Stokes $V$ at (1, 2, 4, 8) $\times$ –32 mJy/beam in the first panels, and at –12 mJy/beam ($3\sigma_{\mathrm{rms}}$) in the last panel (at $V_{\rm LSR}$ = 10.4 km s-1). The filled beam is shown at lower left of the panel at $V_{\rm LSR}$ = 10.0 km s-1. The red vectors represent the linear polarization position angle where the linear polarization is significant (at a threshold of $3\sigma_{\mathrm{rms}}$). See Table 2 for details.

Figure 4: The total and polarized intensity spectra of the 1665 MHz line toward IRAS 06056+2131. The green solid bar is labeled with the linear polarization angle per channel. RR and LL have been multiplied by 0.5 to facilitate comparison with Stokes $I$ and provide a more compact plot.
We performed a rough comparison between the energy density in the magnetic
field of strength $B$ and the kinetic energy density in the CO outflow (see
also Section 4.2) in order to investigate whether the magnetic field could
have a significant effect, following the method summarised in Assaf et al.
(2013). We assume that $B=B_{\parallel}$; if $B$ is greater then the magnetic
energy density will be even greater. The magnetic energy density is given by
$E_{\mathrm{B}}=B^{2}/(2\mu_{0})$ where $\mu_{0}$ is the permeability of free
space. For $B$ in Gauss, this gives $E_{\mathrm{B}}\approx 0.004B^{2}\approx
9\times 10^{-9}$ J m-3.
The thermal energy density is given by
$E_{\mathrm{th}}=3/2nk_{\mathrm{B}}T_{\mathrm{K}}$ where $k_{\mathrm{B}}$ is
Boltzmann’s constant. The non-detection of the 1667 MHz maser implies that the
gas temperature $T_{\mathrm{K}}>75$ K and the number density $n>10^{10}$ m-3
(Cragg et al., 2002; Gray et al., 1991). This leads to
$E_{\mathrm{th}}>1.5\times 10^{-11}$ J m-3. Even for a much higher temperature
of 200 K, $E_{\mathrm{th}}<E_{\mathrm{B}}$ as long as $n<2\times 10^{12}$ m-3.
The bulk energy density is given by $E_{\mathrm{Bulk}}=0.5\rho v^{2}$ where we
assume the velocity $v$ as the CO outflow velocity of 7.5 km s-1, half the
maximum velocity span measured by Xu et al. (2006). The gas density
$\rho=n\times 1.25m_{\mathrm{H}_{2}}$, where $m_{\mathrm{H}_{2}}$ is the mass
of the hydrogen molecule. Thus, $E_{\mathrm{Bulk}}<E_{\mathrm{B}}$ for
$n<8\times 10^{10}$ m-3.
These estimates are very crude but show that the magnetic field is likely to
influence the outflow for number densities in the range $1-8\times 10^{10}$
m-3 ($1-8\times 10^{7}$ cm-3) within a temperature range of $75-200$ K.
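A minimal sketch reproducing these order-of-magnitude estimates (SI units; the physical constants are standard values, and the factor 1.25 is taken from the text):

```python
import math

# Order-of-magnitude energy densities from Section 4.1 (SI units).
MU0 = 4.0e-7 * math.pi      # permeability of free space, T m A^-1
K_B = 1.380649e-23          # Boltzmann constant, J K^-1
M_H2 = 3.35e-27             # mass of an H2 molecule, kg

B = 1.5e-7                  # |B_parallel| = 1.5 mG in tesla
E_B = B**2 / (2 * MU0)      # magnetic energy density, ~9e-9 J m^-3

n, T = 1e10, 75.0           # lower limits implied by the 1667-MHz non-detection
E_th = 1.5 * n * K_B * T    # thermal energy density, ~1.5e-11 J m^-3

v = 7.5e3                   # half the CO outflow velocity span, m s^-1
rho = n * 1.25 * M_H2       # gas mass density (factor 1.25 as in the text)
E_bulk = 0.5 * rho * v**2   # bulk kinetic energy density, ~1.2e-9 J m^-3

print(E_B, E_th, E_bulk)
# E_th < E_B (even at 200 K) for n < ~2e12 m^-3; E_bulk < E_B for n < ~8e10 m^-3.
```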
### 4.2 The evolutionary status
To understand the evolutionary status of the star forming region IRAS
06056+2131, we put together all the results of the different observations with
known position performed close to the OH maser. This is presented in Figure 5.
This figure has the MERLIN OH maser position, RA = 06h 08m 40s.6775, Dec = 21∘
31′ 6″.918 at the origin, marked by a blue cross whose size represents the
position error (0″.025).
We estimated the positions of the OH masers observed using the Green Bank
Telescope (GBT) from maps provided by Edris et al. (2007). The 1665-MHz LHC
maser peak at 10.14 km s-1 is at RA = 06h 08m 41s.095, Dec = 21∘ 31′
21″.21, uncertainty 38 arcsec. This is offset by (5.8, 14.3) arcsec from the
MERLIN position, within the GBT uncertainty. However, the 1720 MHz maser peak,
at 3.4 km s-1, is at RA = 06h 08m 44s.348, Dec = 21∘ 31′ 40″.01,
uncertainty 40 arcsec, an
offset of (51, 33) arcsec. The highest-resolution observation of the 6.7 GHz
methanol maser was made by Xu et al. (2009) using MERLIN, with $\sim$30 mas
astrometric accuracy (based on analysis of the original data) at a peak
position of RA = 06h 08m 40s.671, Dec = 21∘ 31′
06″.89, velocity 8.8 km s-1. This is marked by the
magenta diamond ($\sim 10$ times larger than the position error) in Figure 5.
In a recent observation, Hu et al. (2016) detected the maser at the same
position within uncertainties, peaking at 9.39 km s-1.
Koempe et al. (1989), using Effelsberg, found that the 22-GHz H2O maser is
offset by (–20″, 8″) from the IRAS source, with a position accuracy of
10″, at a velocity of 2 km s-1, shown by the cyan circle and error bars in
Figure 5.
Fontani et al. (2010), using the Nobeyama 45-m telescope, failed to detect any
Class-I methanol maser emission lines (44 GHz and 95 GHz) towards IRAS
06056+2131. On the other hand, their Effelsberg observations of the Class-II
6.7 GHz methanol maser transition show a complex spectrum with emission over
the entire velocity range [8.5, 11] km s-1, with at least 3 spectral
components, the strongest being at 8.8 km s-1 with a peak flux density of
12.7 Jy.
The most precise radio continuum observation of the UC H II region was made by
Hu et al. (2016) using the VLA, giving a position of RA = 06h 08m 40s.67, Dec
= 21∘ 31′ 07″.2. The astrometry is accurate to at least 0″.3. The
extent of the UC H II region is shown by the red ellipse (with angular size of
3″.5 × 2″.9 at position angle 41.4∘) in Figure 5.
Figure 5: Overview of the star-forming region IRAS 06056+2131. The origin of
coordinates (0,0) is the location of the OH maser peak at RA = 06h 08m
40s.6775, Dec = 21∘ 31′ 06″.918, shown by the blue
cross. The error circle of the 1665-MHz observation by Edris et al. (2007) is
shown by the blue arc. The magenta diamond and cyan circle mark the methanol
(Xu et al., 2009) and water (Koempe et al., 1989) masers, respectively. The
red ellipse marks the UC H II region (Hu et al., 2016). The dark green star
symbols mark the IR sources identified by Kwon et al. (2018), while the light
green triangle symbols labeled from 1 to 12 represent the Smithsonian
Millimetre Array (SMA) detections by Rodón et al. (2012). The yellow ellipse
marks the IRAS source position bounds, and the shaded area marks an estimate
of the CO outflow direction from Xu et al. (2006). See Section 4 for details.
The position of IRAS 06056+2131, RA = 06h 08m 40s.973, Dec = 21∘ 31′
00″.61, with an error ellipse of axes (30, 5) arcsec whose long axis lies
almost exactly E-W (Joint IRAS Science Working Group, 1988; shown in yellow in
Figure 5), is likely to represent an aggregate of sources within the IRAS
beam. Several of the Infra-red Reflection Nebulae
(IRN) and Near Infra-Red Illuminators (NIRI) identified by Kwon et al. (2018)
(cf. their Tables 1 and 2) are within 10 arcsec of the OH maser. We assumed
that the astrometric accuracy was typical for the detector used, probably
$\sim$ 1″.5 ($\sim$3 pixels). These sources are
shown and labeled in dark green in Figure 5. Note the existence of a WISE
source, at RA = 06h 08m 40s.45, Dec = 21∘ 31′ 02″.0,
which is within the positional uncertainty of the IRAS source (Kwon et al.,
2018). Consequently, as noted by Kwon et al. (2018) the WISE source probably
corresponds to the IRAS source and is not shown separately in Figure 5.
Rodón et al. (2012) detected 12 mm-wave sources in the region, using the SMA.
We assumed that the published positions (estimated from their figure 1) were
relative to the pointing position RA = 06h 08m 40s.31, Dec = 21∘ 31′
03″.6 listed in the SMA archive, with an astrometric accuracy of 0″.4
(1/3 of a synthesised beam). These sources are shown and numbered in light
green in Figure 5.
Figure 5 shows that the hot core SMA 2, the OH 1665 MHz maser and the methanol
maser are closely associated. SMA 2 is approximately 0.130 arcsec west of the
OH maser, within the position error. The methanol maser is (0.095$\pm$0.039)
arcsec from the OH maser, which corresponds to $\sim 140\pm 60$ au in the
plane of the sky at the assumed distance of 1.5 kpc. Such a small separation
between the OH and Class II methanol masers indicates that they are from a
very similar region at overlapping velocities (around 8–10 km s-1). These all
lie in the direction of the UC H II region; in fact, SMA 2 is the closest hot
core to the region centre.
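The quoted au figures follow from the small-angle relation, separation in au = offset in arcsec × distance in pc; a quick check:

```python
distance_pc = 1500.0                         # adopted distance of 1.5 kpc
sep, err = 0.095, 0.039                      # OH-methanol offset and error, arcsec
print(sep * distance_pc, err * distance_pc)  # ~142 au and ~59 au (quoted ~140 +/- 60 au)
```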
The close association (including similar velocities) between OH and Class II
methanol masers is likely to indicate a relatively later evolutionary stage of
massive star formation than regions which only have methanol masers (e.g
Caswell, 1997; Breen et al., 2010a).
The water maser is offset from these masers by much more than the combined
errors and peaks at a lower velocity ($\sim 2$ km s-1). The position and the
velocity of the water maser suggest that it might be pumped by shocks
associated with the CO outflow.
The pale red-blue shaded strip represents the orientation of the CO outflow
mapped by Xu et al. (2006) from their figure 1, along the SE – NW axis joining
the blue- and red-shifted peaks. It should be noted that CO emission was
detected throughout the area and the association of these peaks with the same
outflow is tentative.
Zhang et al. (2005), using the NRAO 12 m telescope detected a CO J=2-1 outflow
which they associated with a deeply embedded IR source and the UC H II region.
These are located at an offset
of (–14″.0, 14″.0) in RA and
Dec from the IRAS source (see figure 2 of Zhang et al., 2005). Xu et al.
(2006) carried out higher resolution imaging of CO J=2-1 using the Nobeyama
Radio Telescope. The integrated emission peak is at 2.5 km s-1 and two pairs
of red and blue-shifted peaks are resolved, covering a total velocity range of
about –5 to +10 km s-1. The CO outflow appears to be oriented close to the
line of sight although Kwon et al. (2018) point out that at high resolution
the region is complex and other interpretations are possible. The OH maser
velocity appears to be close to the most red-shifted CO velocity of 10 km s-1
and the polarization vectors (which are perpendicular to the magnetic field
lines), suggest that the magnetic field orientation might be in agreement with
that of the SE-NW CO outflow reported by Xu et al. (2006) (see their figure
1). The direction indicated by $B_{\parallel}$ suggests that the magnetic
field associated with the far outflow points towards the observer. In addition
to the outflow, Yoo et al. (2018) reported evidence for infalling material,
using observations of lines of HCO+ (1$-$0), HCO+ (3$-$2), H2CO (212$-$111)
and H13CO+ (1$-$0).
The presence of both outflow and infall material could be additional evidence
for a protostellar object in an early evolutionary stage, when the radiation
force is still lower than the gravitational force at the outer boundary.
The observations conducted toward IRAS 06056+2131 suggest that it is a high-
mass star-forming region at a not so early evolutionary phase, associated with
an UC H II region detected by Koempe et al. (1989); Kurtz et al. (1994) and at
highest resolution by Hu et al. (2016).
### 4.3 Comparison with other sources
Main-line OH masers towards the IRAS sources 20126+4104 and 19092+0841
(hereafter IRAS 20126, 19092) were also imaged, using MERLIN, at a similar
angular resolution to this study, by Edris et al. (2005) and Edris et al.
(2007), respectively. The velocity range of the OH maser emission at 1665-MHz
detected towards them is 17 and 3 km s-1, respectively.
IRAS 20126 is at 1.7 kpc (Wilking et al., 1989), a similar distance to our
target IRAS 06056+2131, whose 1665-MHz maser emission spans a velocity range
of only 1.5 km s-1. In IRAS 20126, the two brightest OH maser features at
1665 MHz (peak fluxes > 0.9 Jy), which are separated by 14 km s-1, are so
bright that they would have been detected with MERLIN toward both IRAS 19092
(D = 4.48 kpc; Molinari et al., 1996) and IRAS 06056+2131. However, these two
sources show an intrinsically smaller $V_{\mathrm{LSR}}$ span than IRAS 20126.
Compared with IRAS sources 20126 and 19092 (with extents of 2000 and 22400 au,
respectively), the OH maser is very compact in IRAS 06056+2131 with an upper
limit to its distribution of 0.03 arcsec, which corresponds to 45 au at an
assumed distance of 1.5 kpc. Both OH maser main lines were detected toward
IRAS 19092 while only 1665 MHz was found toward IRAS 20126 and IRAS
06056+2131, suggesting a higher gas temperature and density for the latter two
sources (Gray et al., 1991).
There is no evidence for OH masers tracing a circumstellar disk either in IRAS
06056+2131 or IRAS 19092, while Edris et al. (2005) reported that the OH
masers are tracing a circumstellar disk in IRAS 20126. The small upper limit
to the size of the OH maser towards IRAS 06056+2131 makes it improbable that
the small observed $V_{\mathrm{LSR}}$ range is a projection effect of an
outflow in the plane of the sky.
IRAS 06056+2131 is relatively less luminous in comparison with IRAS 20126 and
IRAS 19092 ($5.83\times 10^{3}$ $L_{\odot}$, $10^{4}$ $L_{\odot}$ and
$10^{4}$ $L_{\odot}$, respectively) while the line-of-sight magnetic field
measured from the OH Zeeman pair has the lowest magnitude (1.5, 11 and 4.4 mG,
respectively). The 6.7 GHz methanol (Class II)
maser has been detected toward the three IRAS sources, while class I has been
detected only toward IRAS 19092.
No radio continuum emission was detected towards IRAS 20126 (Molinari et al., 1998), and
Edris et al. (2017) showed that the nearest detection is offset by about 2
arcmin from IRAS 19092, suggesting that the OH masers in these sources are not
associated with UC H II regions, unlike the presence of a UC H II region
associated with IRAS 06056+2131.
Bearing in mind all the results summarised above, we suggest that IRAS
06056+2131 is at an evolutionary stage comparable with IRAS 20126, which is
more evolved than IRAS 19092. This result is consistent with what was reported
by Edris et al. (2007), where they found the sources with higher OH intensity
to be more evolved. The absence of any Class I methanol maser from IRAS
06056+2131 is a further evidence that it is more evolved than IRAS 19092
(Ellingsen, 2006).
## 5 Conclusions
We have presented high angular resolution observations of OH maser emission
toward IRAS 06056+2131. Of the three OH transitions observed, at 1665, 1667
and 1720 MHz, only the 1665 MHz line was detected at the time of the
observations. The small upper limit to the size of the OH maser emitting
region is estimated to be $\sim$ 45 au at 1.5 kpc. We measured the
line-of-sight magnetic field from the identified OH Zeeman pair to be $\sim$
–1.5 mG, and the corresponding magnetic field energy density is likely strong
enough to influence the outflow. The linear
polarization vectors might suggest that the magnetic field orientation in the
plane of the sky is roughly NW to SE. This might then be aligned with the
possible CO outflow direction (Xu et al., 2006). However, Faraday rotation
gives an uncertainty of up to 110∘ in the actual magnetic field direction.
Our results are in good agreement with Caswell (1998), who tested the
association between OH and 6.7-GHz methanol masers with a large sample and
concluded that 80$\%$ of OH masers have 6.7-GHz methanol maser counterparts.
The spatial and velocity coincidence between the OH and Class II methanol
masers indicates that they are probably tracing the same physical phenomenon
in IRAS 06056+2131; however, there is no evidence of a circumstellar disk. The close
association of a Class-II methanol maser and UC H II region with the OH
1665-MHz maser and the absence of Class-I methanol masers suggest that IRAS
06056+2131 is at a later stage of evolution than sources without detectable OH
masers. In comparison with other OH maser sources investigated by Edris et al.
(2005) and Edris et al. (2017), the properties of IRAS 06056+2131 also
suggest that it is at a relatively evolved stage.
Finally, higher angular resolution would provide a better estimate of the
location and properties of the protostellar object driving the IRAS 06056+2131
outflow, for example VLBI measurements of masers and ALMA measurements of the
mm/sub-mm core.
## acknowledgements
We gratefully thank Prof. R. Battye, M. Gray and R. Beswick, and the rest of
the e-MERLIN team for guidance in reducing these data. We also remember the
important role of the late Dr Jim Cohen in initiating this project. We thank
the anonymous referee for very insightful and helpful comments which have
improved this paper. e-MERLIN is the UK radio interferometer array, operated
by the University of Manchester on behalf of STFC. We acknowledge the use of
MERLIN archival data as well as NASA’s Astrophysics Data System Service.
M.Darwish would like to acknowledge the Science and Technology Development
Fund (STDF) N5217, Academy of Scientific Research and Technology (ASRT),
Cairo, Egypt and Kottamia Center of Scientific Excellence for Astronomy and
Space Sciences (KCSEASSc), National Research Institute of Astronomy and
Geophysics (NRIAG). Our sincere thanks to S. Ellingsen and J. Allotey for
their helpful discussion and to C. Sobey for pulsar data.
## Data availability
The raw visibility data and the clean images cubes in FITS format for all OH
maser polarization products of the MERLIN observations of IRAS 06056+2131 can
be found at: https://doi.org/10.5281/zenodo.3961902.
## References
* Assaf et al. (2013) Assaf K. A., Diamond P. J., Richards A. M. S., Gray M. D., 2013, MNRAS, 431, 1077
* Baars et al. (1977) Baars J. W. M., Genzel R., Pauliny-Toth I. I. K., Witzel A., 1977, A&A, 61, 99
* Braz et al. (1990) Braz M. A., Lepine J. R. D., Sivagnanam P., Le Squeren A. M., 1990, A&A, 236, 479
* Breen et al. (2010a) Breen S. L., Ellingsen S. P., Caswell J. L., Lewis B. E., 2010a, MNRAS, 401, 2219
* Breen et al. (2010b) Breen S. L., Caswell J. L., Ellingsen S. P., Phillips C. J., 2010b, MNRAS, 406, 1487
* Bronfman et al. (1996) Bronfman L., Nyman L. A., May J., 1996, A&AS, 115, 81
* Caswell (1997) Caswell J. L., 1997, MNRAS, 289, 203
* Caswell (1998) Caswell J. L., 1998, MNRAS, 297, 215
* Caswell et al. (1995) Caswell J. L., Vaile R. A., Ellingsen S. P., Whiteoak J. B., Norris R. P., 1995, MNRAS, 272, 96
* Caswell et al. (2011) Caswell J. L., Kramer B. H., Reynolds J. E., 2011, MNRAS, 415, 3872
* Cohen (1985) Cohen R. J., 1985, in Kahn F. D., ed., Cosmical Gas Dynamics. pp 223–235
* Cohen et al. (1988) Cohen R. J., Baart E. E., Jonas J. L., 1988, MNRAS, 231, 205
* Commerçon et al. (2011) Commerçon B., Hennebelle P., Henning T., 2011, ApJ, 742, L9
* Cragg et al. (2002) Cragg D. M., Sobolev A. M., Godfrey P. D., 2002, MNRAS, 331, 521
* Crutcher & Kemball (2019) Crutcher R. M., Kemball A. J., 2019, Frontiers in Astronomy and Space Sciences, 6, 66
* Darwish et al. (2020) Darwish M. S., Edris K. A., Richards A. M. S., Etoka S., Saad M. S., Beheary M. M., Fuller G. A., 2020, MNRAS, 493, 4442
* Diamond et al. (2003) Diamond P. J., Garrington S. T., Gunn A. G., Leahy J. P., McDonald A., Muxlow T. W. B., Richards A. M. S., Thomasson P., 2003, Technical report, The MERLIN User Guide. Jodrell Bank Observatory, UK., http://www.merlin.ac.uk/user_guide/MUG.ps.gz
* Edris et al. (2005) Edris K. A., Fuller G. A., Cohen R. J., Etoka S., 2005, A&A, 434, 213
* Edris et al. (2007) Edris K. A., Fuller G. A., Cohen R. J., 2007, A&A, 465, 865
* Edris et al. (2017) Edris K. A., Fuller G. A., Etoka S., Cohen R. J., 2017, A&A, 608, A80
* Elitzur (1996) Elitzur M., 1996, ApJ, 457, 415
* Ellingsen (2006) Ellingsen S. P., 2006, ApJ, 638, 241
* Ellingsen et al. (2007) Ellingsen S. P., Voronkov M. A., Cragg D. M., Sobolev A. M., Breen S. L., Godfrey P. D., 2007, in Chapman J. M., Baan W. A., eds, IAU Symposium Vol. 242, Astrophysical Masers and their Environments. pp 213–217 (arXiv:0705.2906), doi:10.1017/S1743921307012999
* Etoka & Diamond (2010) Etoka S., Diamond P. J., 2010, MNRAS, 406, 2218
* Etoka et al. (2012) Etoka S., Gray M. D., Fuller G. A., 2012, MNRAS, 423, 647
* Fish & Reid (2006) Fish V. L., Reid M. J., 2006, ApJS, 164, 99
* Fish & Reid (2007) Fish V. L., Reid M. J., 2007, ApJ, 670, 1159
* Fish et al. (2003) Fish V. L., Reid M. J., Argon A. L., Menten K. M., 2003, ApJ, 596, 328
* Fontani et al. (2010) Fontani F., Cesaroni R., Furuya R. S., 2010, A&A, 517, A56
* Garay & Lizano (1999) Garay G., Lizano S., 1999, PASP, 111, 1049
* Goddi et al. (2017) Goddi C., Surcis G., Moscadelli L., Imai H., Vlemmings W. H. T., van Langevelde H. J., Sanna A., 2017, A&A, 597, A43
* Gray et al. (1991) Gray M. D., Doel R. C., Field D., 1991, MNRAS, 252, 30
* Green et al. (2007) Green J. A., Richards A. M. S., Vlemmings W. H. T., Diamond P., Cohen R. J., 2007, MNRAS, 382, 770
* Green et al. (2012) Green J. A., McClure-Griffiths N. M., Caswell J. L., Robishaw T., Harvey-Smith L., 2012, Monthly Notices of the Royal Astronomical Society, 425, 2530
* Han et al. (1999) Han J. L., Manchester R. N., Qiao G. J., 1999, MNRAS, 306, 371
* Högbom (1974) Högbom J. A., 1974, A&AS, 15, 417
* Hu et al. (2016) Hu B., Menten K. M., Wu Y., Bartkiewicz A., Rygl K., Reid M. J., Urquhart J. S., Zheng X., 2016, ApJ, 833, 18
* Joint IRAS Science Working Group (1988) Joint IRAS Science Working Group 1988, Infrared Astronomical Satellite (IRAS) Catalogs and Atlases: The Point source catalog declination range $30^{\circ}>\delta>0^{\circ}$. Vol. 1190, Scientific and Technical Information Division, National Aeronautics and …
* Kitsionas et al. (2005) Kitsionas S., Jappsen A. K., Klessen R. S., Whitworth A. P., 2005, in Protostars and Planets V Posters. p. 8555
* Klassen et al. (2017) Klassen M., Pudritz R. E., Kirk H., 2017, MNRAS, 465, 2254
* Koempe et al. (1989) Koempe C., Baudry A., Joncas G., Wouterloot J. G. A., 1989, A&A, 221, 295
* Kurtz et al. (1994) Kurtz S., Churchwell E., Wood D. O. S., 1994, ApJS, 91, 659
* Kwon et al. (2018) Kwon J., et al., 2018, AJ, 156, 1
* Lorimer et al. (2004) Lorimer D. R., et al., 2004, MNRAS, 347, L21
* Mac Low & Klessen (2004) Mac Low M.-M., Klessen R. S., 2004, Reviews of Modern Physics, 76, 125
* McKee & Tan (2003) McKee C. F., Tan J. C., 2003, ApJ, 585, 850
* McMullin et al. (2007) McMullin J. P., Waters B., Schiebel D., Young W., Golap K., 2007, in Shaw R. A., Hill F., Bell D. J., eds, Astronomical Society of the Pacific Conference Series Vol. 376, Astronomical Data Analysis Software and Systems XVI. p. 127
* Molinari et al. (1996) Molinari S., Brand J., Cesaroni R., Palla F., 1996, A&A, 308, 573
* Molinari et al. (1998) Molinari S., Brand J., Cesaroni R., Palla F., Palumbo G. G. C., 1998, A&A, 336, 339
* Momjian & Sarma (2017) Momjian E., Sarma A. P., 2017, ApJ, 834, 168
* Mouschovias & Paleologou (1979) Mouschovias T. C., Paleologou E. V., 1979, ApJ, 230, 204
* Mouschovias et al. (2006) Mouschovias T. C., Tassis K., Kunz M. W., 2006, ApJ, 646, 1043
* Noutsos et al. (2008) Noutsos A., Johnston S., Kramer M., Karastergiou A., 2008, MNRAS, 386, 1881
* Padoan & Nordlund (2002) Padoan P., Nordlund Å., 2002, ApJ, 576, 870
* Palla et al. (1991) Palla F., Brand J., Cesaroni R., Comoretto G., Felli M., 1991, A&A, 246, 249
* Rodón et al. (2012) Rodón J. A., Beuther H., Schilke P., Zhang Q., 2012, Boletin de la Asociacion Argentina de Astronomia La Plata Argentina, 55, 199
* Rosolowsky et al. (2010) Rosolowsky E., et al., 2010, ApJS, 188, 123
* Ruiz-Velasco et al. (2016) Ruiz-Velasco A. E., Felli D., Migenes V., Wiggins B. K., 2016, ApJ, 822, 101
* Schwab (1984) Schwab F. R., 1984, AJ, 89, 1076
* Snell et al. (1988) Snell R. L., Huang Y. L., Dickman R. L., Claussen M. J., 1988, ApJ, 325, 853
* Stahler et al. (2000) Stahler S. W., Palla F., Ho P. T. P., 2000, in Mannings V., Boss A. P., Russell S. S., eds, Protostars and Planets IV. pp 327–352
* Sunada et al. (2007) Sunada K., Nakazato T., Ikeda N., Hongo S., Kitamura Y., Yang J., 2007, PASJ, 59, 1185
* Surcis et al. (2011a) Surcis G., Vlemmings W. H. T., Curiel S., Hutawarakorn Kramer B., Torrelles J. M., Sarma A. P., 2011a, A&A, 527, A48
* Surcis et al. (2011b) Surcis G., Vlemmings W. H. T., Torres R. M., van Langevelde H. J., Hutawarakorn Kramer B., 2011b, A&A, 533, A47
* Surcis et al. (2013) Surcis G., Vlemmings W. H. T., van Langevelde H. J., Hutawarakorn Kramer B., Quiroga-Nuñez L. H., 2013, A&A, 556, A73
* Szymczak & Gérard (2004) Szymczak M., Gérard E., 2004, A&A, 414, 235
* Szymczak et al. (2000) Szymczak M., Hrynek G., Kus A. J., 2000, A&AS, 143, 269
* Tan et al. (2013) Tan J. C., Kong S., Butler M. J., Caselli P., Fontani F., 2013, ApJ, 779, 96
* Turner (1979) Turner B. E., 1979, A&AS, 37, 1
* Vázquez-Semadeni et al. (2011) Vázquez-Semadeni E., Banerjee R., Gómez G. C., Hennebelle P., Duffin D., Klessen R. S., 2011, MNRAS, 414, 2511
* Vlemmings (2007) Vlemmings W. H. T., 2007, in Chapman J. M., Baan W. A., eds, IAU Symposium Vol. 242, Astrophysical Masers and their Environments. pp 37–46 (arXiv:0705.0885), doi:10.1017/S1743921307012549
* Vlemmings et al. (2006a) Vlemmings W. H. T., Harvey-Smith L., Cohen R. J., 2006a, MNRAS, 371, L26
* Vlemmings et al. (2006b) Vlemmings W. H. T., Diamond P. J., van Langevelde H. J., Torrelles J. M., 2006b, A&A, 448, 597
* Vlemmings et al. (2010) Vlemmings W. H. T., Surcis G., Torstensson K. J. E., van Langevelde H. J., 2010, MNRAS, 404, 134
* Wilking et al. (1989) Wilking B. A., Mundy L. G., Blackwell J. H., Howe J. E., 1989, ApJ, 345, 257
* Wood & Churchwell (1989) Wood D. O. S., Churchwell E., 1989, ApJ, 340, 265
* Wu et al. (2004) Wu Y., Wei Y., Zhao M., Shi Y., Yu W., Qin S., Huang M., 2004, A&A, 426, 503
* Xu et al. (2006) Xu Y., et al., 2006, AJ, 132, 20
* Xu et al. (2009) Xu Y., Voronkov M. A., Pandian J. D., Li J. J., Sobolev A. M., Brunthaler A., Ritter B., Menten K. M., 2009, A&A, 507, 1117
* Yoo et al. (2018) Yoo H., Kim K.-T., Cho J., Choi M., Wu J., Evans Neal J. I., Ziurys L. M., 2018, ApJS, 235, 31
* Zhang et al. (2005) Zhang Q., Hunter T. R., Brand J., Sridharan T. K., Cesaroni R., Molinari S., Wang J., Kramer M., 2005, ApJ, 625, 864
|
# Brightening the Optical Flow through Posit Arithmetic
Vinay Saxena1, Ankitha Reddy1, Jonathan Neudorfer1, John Gustafson3,
Sangeeth Nambiar1, Rainer Leupers2, Farhad Merchant2
{vinay.saxena, ankitha.reddy, jonathan.neudorfer,
<EMAIL_ADDRESS><EMAIL_ADDRESS>
{farhad.merchant<EMAIL_ADDRESS>1Bosch Research and Technology
Centre - India, Bangalore 2Institute for Communication Technologies and
Embedded Systems, RWTH Aachen University, Germany 3National University of
Singapore, Singapore
###### Abstract
As new technologies are invented, their commercial viability needs to be
carefully examined along with their technical merits and demerits. The _posit_
TM data format, proposed as a drop-in replacement for IEEE 754TM float format,
is one such invention that requires extensive theoretical and experimental
study to identify products that can benefit from the advantages of posits for
specific market segments. In this paper, we present an extensive empirical
study of posit-based arithmetic vis-à-vis IEEE 754 compliant arithmetic for
the optical flow estimation method called Lucas-Kanade (LuKa). First, we use
_SoftPosit_ and _SoftFloat_ format emulators to perform an empirical error
analysis of the LuKa method. Our study shows that the average error in LuKa
with SoftPosit is an order of magnitude lower than LuKa with SoftFloat. We
then present the integration of the hardware implementation of a posit adder
and multiplier in a RISC-V open-source platform. We make several
recommendations, along with the analysis of LuKa in the RISC-V context, for
future generation platforms incorporating posit arithmetic units.
###### Index Terms:
Optical flow, computer arithmetic, posits, floating-point, Lucas-Kanade
algorithm
©2021 IEEE
## I Introduction
The _posit_ data type is proposed as a drop-in replacement for IEEE 754
compliant floating-point format [1]. Posit format offers compelling advantages
over IEEE 754 compliant float format, such as higher accuracy and wider
dynamic range. For arithmetic operations, posits require simpler hardware
compared to a fully-compliant implementation of IEEE 754 floats [2][3]. It has
been shown experimentally that an $n$-bit floating-point adder/multiplier can
be replaced by an $m$-bit posit adder/multiplier where $m<n$, without
compromising accuracy and range [4][5]. This is due to greater information-
per-bit in the posit data type compared to its IEEE-compliant counterpart.
Several researchers around the world are working on the efficient realization
of posit arithmetic units; studies of posit arithmetic for different
application domains have been published [6][7]. The _SoftPosit_ emulation
library supports float-like arithmetic operations with different posit
configurations and is closely patterned after the _SoftFloat_ library from
Berkeley. We believe the time has arrived to apply SoftPosit and SoftFloat to
analyze the merits of posits versus floats for widely-used commercial
applications.
Since the inception of posit data representation, there have been several
implementations in the literature of posit arithmetic operations. The early
and open-source hardware implementations of a posit adder and multiplier were
presented in [2] and [4]. In [4], the authors covered the design of a
parametric adder/subtractor while in [2], the authors presented parametric
designs of float-to-posit and posit-to-float converters, and a multiplier
along with the design of an adder/subtractor. A major disadvantage of the
designs presented in [2] and [4] is that the designs are yet to be fully
verified and contain multiple errors. The PACoGen open-source framework that
can generate a pipelined adder/subtractor, multiplier, and divider is
presented in [7]. The design presented in [7] has the disadvantage of not synthesizing for exponent size zero, and hence cannot be considered a fully parametric implementation. A more complete implementation of a
parametric posit adder and multiplier generator is presented in [5].
_Optical flow_ is caused by the relative motion of an observer and a scene
that has objects in motion. Out of several methods in the literature, we
choose the Lucas-Kanade (LuKa) method for our experiments due to its
simplicity and computational intensity [8].
Recently, the open-source instruction set architecture (ISA) called RISC-V has
gained a following in industry and academia. We integrated a posit adder and
multiplier with the RI5CY core [9] to create a posit-enabled RISC-V
implementation. We compare area and energy numbers for field-programmable gate
array (FPGA) synthesis of a RI5CY core with IEEE 754 compliant and with posit
arithmetic. The major contributions in this paper are as follows:
* •
A detailed empirical study of LuKa using SoftPosit and SoftFloat where we
compare numerical accuracy in LuKa for posits and IEEE 754 compliant floats
* •
RISC-V-based comparison of area and delay using posits versus fully-compliant
IEEE 754 floats (to the best of our knowledge, this is the first such study)
* •
Performance analysis of LuKa on RISC-V with posit and IEEE 754 compliant
floats, and discussion of current research issues in posit arithmetic
The rest of the paper is organized as follows: In Section II, we present an
overview of IEEE 754-2019 format, posit number format, and the LuKa method
along with the relevant literature. In Section III, accuracy analyses of LuKa
using SoftFloat and SoftPosit are discussed in detail. A hardware
implementation is presented in Section IV along with performance measurements.
We summarize our conclusions in Section V.
## II Background
### II-A IEEE 754 Compliant and Posit Number Systems
The IEEE 754-2019 binary floating-point format numbers have three parts for
normal floats: a sign, an exponent, and a fraction (see Fig. 1). The sign is
the most significant bit and indicates whether the number is positive or
negative. In single precision, the next $8$ bits represent the exponent of the
binary number ranging from $-126$ to $127$. The remaining $23$ bits represent
the fractional part. The format is:
$\textit{val}=(-1)^{\textit{sign}}\times 2^{\textit{exp}-\textit{bias}}\times(1.\textit{fraction})$ (1)
When the exponent bits express the minimum (all 0 bits) or maximum (all 1
bits), an exception value is indicated. It is currently common for vendors to
claim IEEE 754 compliance in their hardware while actually complying only for
the case of normal floats. Full IEEE 754 compliance for exception cases,
deemed to be rare, is seldom supported in hardware; instead, traps to software
or microcode are used. This approach degrades both performance and security;
data-dependent timing creates a side-channel security hole.
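To make the field layout of Eq. (1) concrete, the following minimal C sketch decodes a normal single-precision value; it deliberately ignores the exception encodings discussed above, and the helper name is our own illustration:

```c
#include <stdint.h>
#include <stdio.h>
#include <string.h>
#include <math.h>

/* Decode a *normal* IEEE 754 single-precision float per Eq. (1).
   Exception encodings (exponent all 0s or all 1s) are not handled. */
static double decode_float32(float f) {
    uint32_t bits;
    memcpy(&bits, &f, sizeof bits);        /* reinterpret the bits without UB */
    uint32_t sign = bits >> 31;            /* 1 sign bit */
    int32_t  exp  = (bits >> 23) & 0xFF;   /* 8 biased exponent bits, bias 127 */
    uint32_t frac = bits & 0x7FFFFF;       /* 23 fraction bits */
    double significand = 1.0 + frac / 8388608.0;   /* 1.fraction; 2^23 = 8388608 */
    return (sign ? -1.0 : 1.0) * ldexp(significand, exp - 127);
}

int main(void) {
    printf("%f\n", decode_float32(-6.25f));   /* prints -6.250000 */
    return 0;
}
```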
Posit arithmetic was proposed as a drop-in replacement for IEEE 754 arithmetic
in 2017 [1]. Posit arithmetic has several advantages over IEEE 754 arithmetic:
higher accuracy for the most commonly-used values, simpler hardware
implementation, smaller chip area, and lower energy cost [5] [10]. Unlike IEEE
754 floats, there are no subnormal posit numbers, nor is there any need for
them; $|x-y|$ produces a zero result if and only if $x=y$. There are only two
exception cases: zero and not-a-real (NaR). For all other cases, the value val
of a posit is given by
$\textit{val}=(-1)^{\textit{sign}}\times\textit{useed}^{k}\times 2^{\textit{exp}}\times\Big(1+\sum_{i=1}^{\textit{fn}-1}b_{\textit{fn}-1-i}2^{-i}\Big)$ (2)
Figure 1: Generic comparison of IEEE 754 floating-point (_float_) and _posit_
number formats for non-exception values
The regime indicates a scale factor of $\textit{useed}^{k}$ where
$\textit{useed}=2^{2^{\textit{es}}}$ and es is the exponent size. The
numerical value of $k$ is determined by the _run length_ of 0 or 1 bits in the
string of regime bits. Run-length encoding of the regime automatically allows
more fraction bits for the more common values for which magnitudes are closer
to 1, and thus provides tapered accuracy in a bit-efficient way that preserves
ordering. Further details about the posit number format and posit arithmetic
can be found in [1]. The posit format is depicted in Fig. 1.
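As a concrete companion to Eq. (2) and the regime description above, here is a minimal C sketch (our own illustration, not the hardware design of Section IV) that decodes a 16-bit, es = 2 posit:

```c
#include <stdint.h>
#include <stdio.h>
#include <math.h>

/* Decode a 16-bit posit with es = 2 into a double (sketch; assumes the
   input is neither 0 nor NaR = 0x8000). Here useed = 2^(2^es) = 16. */
static double decode_posit16_es2(uint16_t p) {
    int sign = (p >> 15) & 1;
    uint16_t bits = sign ? (uint16_t)(-p) : p;   /* two's complement for negatives */

    /* Regime: run length of identical bits starting below the sign bit. */
    int r = (bits >> 14) & 1;                    /* first regime bit */
    int m = 0, i = 14;
    while (i >= 0 && (((bits >> i) & 1) == r)) { m++; i--; }
    i--;                                         /* skip the terminating bit */
    int k = r ? (m - 1) : -m;

    /* Exponent: up to es = 2 bits (may be truncated at the word edge). */
    int exp = 0, es = 2;
    while (es > 0 && i >= 0) { exp = (exp << 1) | ((bits >> i) & 1); i--; es--; }
    exp <<= es;                                  /* truncated exponent bits read as 0 */

    /* Fraction: whatever bits remain, with hidden bit 1. */
    double frac = 1.0;
    for (double w = 0.5; i >= 0; i--, w /= 2)
        if ((bits >> i) & 1) frac += w;

    double val = ldexp(frac, 4 * k + exp);       /* useed^k = 2^(4k) since es = 2 */
    return sign ? -val : val;
}

int main(void) {
    printf("%g\n", decode_posit16_es2(0x4000));  /* prints 1 */
    printf("%g\n", decode_posit16_es2(0x3800));  /* prints 0.5 */
    return 0;
}
```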
### II-B Lucas-Kanade Method
Despite its limitations in determining optical flow information in uniform
images, the LuKa technique and its variants are among the most widely used methods for estimating the optical flow in commercial products [11]. Suppose $I$ is the
brightness of the pixel at position $(x(t),y(t))$ at time $t$. We wish to
solve
$\displaystyle I_{x}u_{x}+I_{y}u_{y}+I_{t}=0$ (3)
where $I_{x}$, $I_{y}$, and $I_{t}$ represent the $x$, $y$, and $t$
directional gradients, respectively, and $u_{x}$ and $u_{y}$ represent the
optical flow to calculate. To solve this equation, a local smoothness
constraint is added, which assumes the change in $u_{x}$ and $u_{y}$ in a
small neighborhood of pixels to be extremely small. The final vector $\vec{u}$
containing the flow components is obtained from the equation
$\displaystyle\vec{u}=(A^{T}A)^{-1}A^{T}B$ (4)
where $A$ is the directional derivative matrix of the image and $B$ is the
time derivative vector. The derivatives here are simple deltas from one image
to the next with a resolution of 1/255. Since the matrix $A^{T}A$ is a $2\times 2$ matrix, we use Cramer’s rule for the matrix inversion.
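A minimal C sketch of this solve step, applying Cramer's rule to the $2\times 2$ normal equations of Eq. (4) (the window-sum and variable names are our own illustration):

```c
/* Solve (A^T A) u = A^T B for a 2x2 system via Cramer's rule, per Eq. (4).
   g11 = sum(Ix*Ix), g12 = sum(Ix*Iy), g22 = sum(Iy*Iy) over the window,
   b1 = -sum(Ix*It), b2 = -sum(Iy*It). Returns 0 on success. */
static int solve_flow_2x2(double g11, double g12, double g22,
                          double b1, double b2,
                          double *ux, double *uy) {
    double det = g11 * g22 - g12 * g12;    /* A^T A is symmetric */
    if (det == 0.0) return -1;             /* untextured window: singular system */
    *ux = (b1 * g22 - g12 * b2) / det;
    *uy = (g11 * b2 - g12 * b1) / det;
    return 0;
}
```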
### II-C Related Work
There have been several attempts at posit implementation since the first proposal. The early parameterized designs were presented in [2], [4], [7], and
[5]. The designs presented in [2], [4], and [7] are open-source but do not
synthesize for exponent size _zero_ while the design presented in [5] is not
open-source. A power-efficient posit multiplier is presented in [12]. The
authors in [12] present a scheme where they divide the fraction part of the multiplier into several chunks and use them efficiently, resulting in a 16% power-efficiency improvement over the baseline implementation.
Several posit implementations are explicitly focused on machine learning
applications. Performance-efficiency trade-off for deep neural network (DNN)
inference is presented in [13]. The authors have discussed overall neural
network efficiency and performance trade-offs in [13]. A template-based posit
multiplier is presented in [14], where the authors address training and inference of neural networks and show that 8-bit posits are as good as floats for inference. The _Deep Positron_ DNN architecture presented in
[15] shows trade-offs between performance and hardware resources. The Deep
Positron architecture uses an FPGA-based soft core to control the multiply-
accumulate unit hardware (fixed-point, floating-point, and posit). The
_Cheetah_ framework presented in [16] incorporates mixed-precision arithmetic
alongside support for the conventional formats.
RISC-V integrations of posit arithmetic hardware are presented in PERI [17], PERC [18], and Clarinet [19]. PERI [17] uses the SHAKTI C-class core as a base to attach posit arithmetic hardware, first as a tightly-coupled unit, and then as an accelerator connected through the _rocket custom coprocessor_ (RoCC) interface. PERC [18] explores similar aspects while using _RocketCore_ as a base. The Flute RISC-V core from Bluespec Inc. is used for posit arithmetic experimentation in Clarinet [19], where Melodica is the tightly-coupled posit core. Clarinet is unique in also supporting the _quire_ register for exact dot products; fused multiply-accumulation is a special case of accumulation in the quire register. There also exist a couple of commercial efforts, such as the CRISP core by Calligo
Technologies [20] and VividSparks [21].
Very few implementations in the literature focus on application-specific posit
arithmetic tuning wherein extensive analyses are performed before delving into
the hardware designs. In our approach, we first emphasize application analyses
followed by RISC-V integration of a posit arithmetic unit. For our implementation of the posit adder and multiplier, we have used improved versions of the designs proposed in [5], and for the divider, we have
used the design presented in [7].
## III LuKa using SoftPosit and SoftFloat
The study is conducted with synthetic images of a sphere (slightly rotated in
each successive frame) and real-life images of a human being (slightly translated
in each successive frame), as shown in Fig. 2a and 2b. It is ensured that the
images are well textured and the motion is very small for consecutive frames,
eliminating the need to use regularization-based methods or multi-scale
estimation.
To perform the error analysis, LuKa is implemented in the C programming
language. We ensure that the implementation has no dependency on any third-
party or open source libraries. The reference implementation uses double
precision floating-point arithmetic. The code is executed with all consecutive
pairs of frames as inputs over the whole data set of images and the optical
flow values are obtained.
Figure 2: Optical flow in consecutive frames in (a) synthetic images, and (b) real-life images. Images on the left and right side are the consecutive input images and the middle images represent the optical flow. None of the images are manipulated to support any particular number format.

Figure 3: Error heat maps for synthetic ((a), (b), and (c)) and real-life ((d), (e), and (f)) images in $y$ and $x$ for fixed-point, float, and posit formats (IEEE 754-2008 64-bit reference)
### III-A Accuracy analysis for $16$-bit floats, fixed-point, and posits
A primary goal of this study was to compare low-precision ($16$-bit) posit and
float result accuracy. We also test a $16$-bit fixed-point format. The grey-
scale pixel values ($0$ to $255$) are first scaled by dividing by norm; we
tested norm values ranging from $1$ to $255$. Each format has a preferred
norm; for example, a too-small norm for floats leads to catastrophic overflow
in the matrix multiplication step of the algorithm, since the largest real
value they can represent is $65504$.
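(The quoted float16 limit is the format's largest normal value, $(2-2^{-10})\times 2^{15}=65504$.)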
For all three formats, we compare the results with a reference result. We pick
the norm value that gives the smallest absolute error. Heat maps of the
absolute error for both $u$ and $v$ are generated to visualize the
distribution of error (Fig. 3). The heat maps and data presented are for the
errors in the optical flow between two particular frames selected from the
synthetic and real-life image data sets, representative for the whole
experiment.
For the $16$-bit (half-precision) float study, we use the Berkeley SoftFloat
library by John Hauser, which provides an excellent stable software
implementation of this precision that conforms to the IEEE 754 Standard. All
optical flow values are calculated with $16$-bit floating-point variables and
operations. For the $16$-bit floats, the best norm value is found to be $32$
for both synthetic and real-life images (we discuss the cause and implications of this in detail later in this section).
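As an illustration of this porting style, here is one accumulation step of the window sums rewritten with SoftFloat's half-precision type and entry points (a minimal sketch; the function and variable names are our own):

```c
#include "softfloat.h"   /* Berkeley SoftFloat 3 emulation library */

/* sum += Ix * Iy with every variable and operation in emulated float16. */
static float16_t acc_step_f16(float16_t sum, float16_t ix, float16_t iy) {
    return f16_add(sum, f16_mul(ix, iy));
}
```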
For the fixed-point implementation, we take advantage of the libfixmath fixed-
point math library. As with SoftFloat, a Q16.16 implementation of the code is
prepared and executed on the same input data set. Heat maps (Fig. 3) are
generated for the best cases (norm factors of 28 and 18 for the synthetic and
real-life images respectively).
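For readers unfamiliar with Q16.16, its arithmetic behaves like the following plain-C sketch (libfixmath packages the same idea behind its own API; this is our illustration, not the library's code):

```c
#include <stdint.h>
#include <stdio.h>

/* Q16.16 fixed point: a 32-bit integer whose value is raw / 2^16. */
typedef int32_t q16_t;
#define Q16_ONE (1 << 16)

static q16_t  q16_from_double(double x) { return (q16_t)(x * Q16_ONE); }
static double q16_to_double(q16_t x)    { return x / (double)Q16_ONE; }

/* Multiplication widens to 64 bits, then shifts the radix point back. */
static q16_t q16_mul(q16_t a, q16_t b) {
    return (q16_t)(((int64_t)a * b) >> 16);
}

int main(void) {
    q16_t a = q16_from_double(1.5), b = q16_from_double(2.25);
    printf("%f\n", q16_to_double(q16_mul(a, b)));   /* prints 3.375000 */
    return 0;
}
```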
Figure 4: Trend in accuracy for different normalization factors in (a) synthetic images and (b) real-life images

TABLE I: Absolute errors in optical flow

| | Fixed16.16 | Float 16 | Posit 16,1 | Posit 16,2 |
|---|---|---|---|---|
| Max Error (synthetic) | 0.01579 | 0.0047 | 0.00272 | 0.00163 |
| RMS Error (synthetic) | 0.00057 | 0.00049 | 0.00016 | 0.00015 |
| Std. Deviation (synthetic) | 0.00056 | 0.00046 | 0.00015 | 0.00046 |
| Max Error (real-life) | 5.6692 | 0.125 | 0.13412 | 0.08333 |
| RMS Error (real-life) | 0.12940 | 0.00109 | 0.00234 | 0.00108 |
| Std. Deviation (real-life) | 0.12885 | 0.00108 | 0.00233 | 0.00107 |
The posit implementation uses Cerlane Leong’s SoftPosit library. It supports
two $16$-bit configurations with $\textit{es}=1$ and $\textit{es}=2$; we found
$\textit{es}=2$ the better fit for this application. The code was ported with all variables and operations changed to posits. The use of the _quire_, supported by the SoftPosit library, is out of the scope of this work, but in future work it may further improve the accuracy of the matrix multiplication
step. Pixel values are again normalized. Optical flow values and errors are
calculated as before. Norm values of $16$ and $4$ present the smallest error
for synthetic and real-life images respectively.
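The same Cramer step shown in Section II-B, ported to SoftPosit's posit16 type, looks as follows (a sketch; SoftPosit's built-in posit16 is the es = 1 configuration, and the es = 2 runs use the library's two-bit-exponent variants instead; the function name is ours):

```c
#include "softposit.h"   /* Cerlane Leong's SoftPosit library */

/* u_x component of the 2x2 Cramer solve, all arithmetic in posit16. */
static posit16_t flow_ux_p16(posit16_t g11, posit16_t g12, posit16_t g22,
                             posit16_t b1,  posit16_t b2) {
    posit16_t det = p16_sub(p16_mul(g11, g22), p16_mul(g12, g12));
    posit16_t num = p16_sub(p16_mul(b1, g22), p16_mul(g12, b2));
    return p16_div(num, det);
}
```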
Table I summarizes the results obtained. “posit 16,$k$” refers to 16-bit
posits with $\textit{es}=k$. For the synthetic images, the maximum error for
the fixed-point format is an order of magnitude higher than the other formats.
However, this is not the case for the RMSE, which is very close to the RMSE for float 16, albeit $\sim 4\times$ higher than for posit 16,2. This is also evident from visualizing the heat maps and confirms that the fixed-point format gives mostly accurate values with a few outliers of very high absolute error. Posit 16,2 has
$\sim 3\times$ lower maximum and RMS errors compared to float 16 while posit
16,1 has an error profile intermediate to the two formats.
For the real-life images, results are slightly different. It should be noted
that no additional filters were applied to the images before the optical flow
calculations and they were extracted from video as-is. They lack the texture
and sharpness of synthetic images and are noisier in general. It is found that
both the maximum and RMS errors for fixed-point format in this case are two
orders of magnitude higher compared to floats and posits. Float 16 performs equivalently to posit 16,2 and better than posit 16,1 in terms of RMS error, although the max error for float is $\sim 1.5\times$ the max error for posit
16,2.
The summary in Table I presents the best-in-class results, but for a more
generic view of the performance of the formats, the max errors are plotted
against the normalization factors in Fig. 4 in the form of bar charts.
Normalizing by $16$ gives very high accuracy for posits in both data sets
(best for synthetic and next-best for real-life). In other words, scaling the
original pixel values from ($0$–$255$) to ($0$–$16$) leads to further
improvement in result accuracy. This is because of the _tapered accuracy_
property of posits; accuracy is maximized for values close to $0$ in
magnitude. Dividing by $16$ centers the (nonzero) pixel values $x$ in the
range $\tfrac{1}{16}\leqslant x<16$. Posit 16,2 attains its maximum accuracy, $12$ significant bits, in exactly this range. Float 16 is consistently less
accurate than posit 16,2 and fixed-point is consistently less accurate than
both floats and posits. The red bars in Fig. 4 (b) indicate NaN float values
that are generated for norm values in the range of 1 to 8 that are too small
to prevent overflow. (Posit 16,2 can represent real values up to about
$7.2\times 10^{16}$.)
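The quoted bound follows directly from the format described in Section II-A: the largest finite posit has a regime of all 1 bits, giving $\textit{maxpos}=\textit{useed}^{\,n-2}=\left(2^{2^{2}}\right)^{16-2}=2^{56}\approx 7.2\times 10^{16}$ for $n=16$, $\textit{es}=2$.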
Next, we delve deeper into the float 16 and posit 16,2 formats to understand
why float 16 performs so well in certain regions (such as $\text{norm}=32$ for
floats). Data values generated from every intermediate arithmetic operation performed in the LuKa algorithm are collected in the reference
implementation for norm values of $255$ (scaling pixels to range from $0$ to
$1$) and $32$ (scaling pixels to range from $0$ to $8$). This is done for both
synthetic and real-life images. From this intermediate data, all the unique
values are extracted and analyzed. It is found that normalizing by $32$ limits
the dynamic range of the data values generated during the calculation,
bringing them within the dynamic range of float 16 (Fig. 5 and Fig. 6). Fig. 5
and 6 also present overlapped histograms of float 16, posit 16,2 and the
unique data values generated. A good overlap indicates a better-suited number system for the application at hand. Posits have a far wider dynamic range than floats
and hence perform better in general across all norm factors. For the norm
factor of $32$ where float 16 has adequate dynamic range, the tapered nature
of data (with high density of values around $0$) gives a slight edge to posits
resulting in a marginally lower error at that norm, though not as low as
posits using their optimum norm. Fig. 5 also shows that a relatively smaller
error in the larger data values carries more weight in the final result
accuracy than the larger error in smaller data values. However, a deeper study
with more applications is needed to substantiate this claim. This study shows
the advantages of using posits over other formats for calculating the optical
flow using the LuKa method.
Figure 5: Histogram overlap of posits, floats, and unique data values generated during the reference (double-precision) implementation run of LuKa with normalization factors of (a) 255 and (b) 32 for synthetic images

Figure 6: Histogram overlap of posits, floats, and unique data values generated during the reference (double-precision) implementation run of LuKa with normalization factors of (a) 255 and (b) 32 for real-life images
## IV Hardware implementation of LuKa
Figure 7: RISC-V integration of the posit core

TABLE II: Adder and multiplier synthesis results (delays in ns; DSP counts in parentheses next to LUT counts)

| ($n$,es) | Adder LUT | Adder Logic Delay | Adder Net Delay | Mult. LUT | Mult. Logic Delay | Mult. Net Delay |
|---|---|---|---|---|---|---|
| (8,0) | 185 (0) | 8.83 | 21.12 | 95 (1) | 7.43 | 13.13 |
| (8,2) | 181 (0) | 9.68 | 20.92 | 96 (0) | 4.28 | 14.07 |
| (16,1) | 400 (0) | 12.77 | 19.01 | 229 (1) | 10.16 | 13.55 |
| (16,2) | 391 (0) | 14.78 | 20.07 | 226 (1) | 10.76 | 13.09 |
| (32,2) | 866 (0) | 17.30 | 24.57 | 572 (4) | 15.55 | 16.38 |

TABLE III: FPGA synthesis results for FPU and Posit cores

| Core | LUT Count | Delay (ns) |
|---|---|---|
| FPU [9] | 2669 | 50 (20 MHz) |
| Posit (16,1) | 2082 | 55 (18.18 MHz) |
| Posit (16,2) | 2024 | 42 (23.81 MHz) |
| Posit (28,2) | 2780 | 71 (14.08 MHz) |
| Posit (32,2) | 2810 | 71 (14.08 MHz) |

TABLE IV: LUT count comparison

| ($n$,es) | Ours (Add) | [5] (Add) | [7] (Add) | [22] (Add) | Ours (Mult) | [5] (Mult) | [7] (Mult) | [22] (Mult) |
|---|---|---|---|---|---|---|---|---|
| (8,0) | 185 | NS | NS | NR | 95 (1) | NS | NS | NR |
| (8,2) | 181 | 208 | 196 | NR | 96 (0) | 131 (0) | 123 (0) | NR |
| (16,1) | 400 | 391 | 460 | 320 | 229 (1) | 218 (1) | 271 (1) | 253 (1) |
| (16,2) | 391 | 404 | 492 | NR | 226 (1) | 223 (1) | 272 (1) | NR |
| (32,2) | 866 | 981 | 1115 | 745 | 572 (4) | 572 (4) | 648 (4) | 469 (4) |
We integrate a verified posit adder and multiplier into the RI5CY core of the Pulpino platform [9]. We detach the existing floating-point unit (FPU) from the RI5CY core and integrate the adder and multiplier generated by our posit arithmetic unit (PAU) generator into the core, as shown in Fig. 7. RI5CY is a 32-bit core based on the RISC-V ISA supporting floating-point instructions. We generate a
32-bit adder and multiplier using PAU for the integration. The developed
parametric posit hardware generator allows us to choose any posit
configuration ($n$ or es) and generate adder, multiplier, integer to posit
converter (int2pos) and posit to integer converter (pos2int) hardware
operators. The PAU has been exhaustively tested against the SoftPosit library
for $(n,\textit{es})=(8,0)$, $(7,2)$, $(8,2)$, $(9,2)$, $(10,2)$, $(11,2)$,
and $(12,2)$ configurations. Furthermore, we have also tested the $(16,1)$ configuration for $\sim 1.31$ billion combinations ($\approx 31\%$), mainly
covering the corner cases.
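The exhaustive comparison described above can be organized as in the following sketch for the $(8,0)$ configuration, where `dut_p8_add` is a hypothetical hook into the simulated RTL adder (not a real API) and SoftPosit's `p8_add` serves as the golden reference:

```c
#include "softposit.h"
#include <stdint.h>
#include <stdio.h>

extern posit8_t dut_p8_add(posit8_t a, posit8_t b);  /* hypothetical RTL hook */

int main(void) {
    unsigned long mismatches = 0;
    /* All 2^8 x 2^8 operand pairs: exhaustive for 8-bit posits. */
    for (unsigned a = 0; a < 256; a++) {
        for (unsigned b = 0; b < 256; b++) {
            posit8_t pa = { (uint8_t)a }, pb = { (uint8_t)b };
            posit8_t ref = p8_add(pa, pb);        /* golden reference */
            posit8_t dut = dut_p8_add(pa, pb);    /* device under test */
            if (ref.v != dut.v) mismatches++;
        }
    }
    printf("%lu mismatches\n", mismatches);
    return 0;
}
```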
Based on the conclusions obtained from the optical flow study, we synthesize
and integrate a $(16,2)$ PAU core with Pulpino. The results obtained post-
integration are shown in Table III. The baseline version of the RI5CY core is
the version with the native IEEE 754 FPU. Switching to the $(16,2)$ configured
PAU core affords enormous savings in data RAM usage at a tolerable loss in
accuracy for our optical flow application. Integration results for other
configurations of the posit core are also provided for reference. The table
also shows the FPGA delay for various configurations of the PAU core. $16$-bit
versions show a delay of $55$ ns and $42$ ns, comparable to the $50$ ns
achieved by the baseline FPU from Pulpino.
Table II presents the detailed synthesis results of the PAU adder and
multiplier. This PAU is synthesized for Zedboard with Xilinx Zynq-7000 SoC.
Vivado 2018.3 is used for the FPGA synthesis results. Both the adder and
multiplier implementations are purely combinational in nature and without any
pipelining. The DSP counts are given in parentheses next to LUT counts for all
the configurations. In general, reasonably good area and delay numbers are
observed. We benchmark our PAU against the other published results on posit
hardware in Table IV. NS marks configurations that are not synthesizable, and
NR signifies not reported in the paper. Again, the DSP counts are given in
parentheses next to the LUT counts for multiplier. The adders do not use any
DSP blocks across all the implementations. The LUT count of this work shows a
significant improvement over existing parametric posit hardware generators [7,
5]. It is also more extensively tested compared to these previous works. The design in [22] shows a lower area footprint and is a good candidate for our future
implementations. To the best of our knowledge, [22] and this work are the best
published implementations of a parametric posit hardware generator.
## V Conclusion
The purpose of this work is to analyze the benefits and shortcomings of posit
arithmetic over floats in real-world applications. We have demonstrated the
clear advantage of using posits instead of floats for calculating optical flow
using LuKa. An order of magnitude improvement in accuracy is observed when the
algorithm is implemented using posits instead of floats in synthetic images.
In contrast, for real-life images, the accuracy is comparable. A fixed-point
approach has accuracy too low to be viable. The algorithm was then further
implemented in hardware on a RISC-V core that has been modified to support
posit 16,1 and 32,2. The synthesis results of the modified core, as well as
the Posit Arithmetic Unit, were presented; Pulpino performs well with a lower
LUT count than the single-precision FPU. Our PAU is also shown to be
comparable (if not better) in terms of area to other state-of-the-art posit hardware.
## References
* [1] J. Gustafson and I. Yonemoto, “Beating floating point at its own game: Posit arithmetic,” Supercomput. Front. Innov.: Int. J., vol. 4, no. 2, pp. 71–86, June 2017.
* [2] Manish Kumar Jaiswal and Hayden Kwok-Hay So, “Universal number posit arithmetic generator on FPGA,” in DATE 2018, Dresden, Germany, March 19-23, 2018, 2018, pp. 1159–1162.
* [3] A. Guntoro, C. De La Parra, F. Merchant, F. De Dinechin, J. L. Gustafson, M. Langhammer, R. Leupers, and S. Nambiar, “Next generation arithmetic for edge computing,” in 2020 Design, Automation Test in Europe Conference Exhibition (DATE), 2020, pp. 1357–1365.
* [4] M. K. Jaiswal and H. So, “Architecture generator for type-3 unum posit adder/subtractor,” in ISCAS 2018, May 2018, pp. 1–5.
* [5] R. Chaurasiya, J. Gustafson, R. Shrestha, J. Neudorfer, S. Nambiar, K. Niyogi, F. Merchant, and R. Leupers, “Parameterized posit arithmetic hardware generator,” in ICCD 2018, pp. 334–341.
* [6] Suresh Nambi, Salim Ullah, Aditya Lohana, Siva Satyendra Sahoo, Farhad Merchant, and Akash Kumar, “Expan(n)d: Exploring posits for efficient artificial neural network design in fpga-based systems,” arXiv 2020.
* [7] M. K. Jaiswal and H. K. . So, “PACoGen: A hardware posit arithmetic core generator,” IEEE Access, vol. 7, pp. 74586–74601, 2019.
* [8] Berthold K.P. Horn and Brian G. Schunck, “Determining optical flow,” Tech. Rep., Cambridge, MA, USA, 1980.
* [9] M. Gautschi, P. D. Schiavone, A. Traber, I. Loi, A. Pullini, D. Rossi, E. Flamand, F. K. Gürkaynak, and L. Benini, “Near-threshold risc-v core with dsp extensions for scalable iot endpoint devices,” IEEE TVLSI, vol. 25, no. 10, pp. 2700–2713, Oct 2017.
* [10] Ihsen Alouani, Anouar BEN KHALIFA, Farhad Merchant, and Rainer Leupers, “An investigation on inherent robustness of posit data representation,” in Proceedings of the International Conference on VLSI Design (VLSID), Feb. 2021.
* [11] H. Seong, C. E. Rhee, and H. Lee, “A novel hardware architecture of the lucas–kanade optical flow for reduced frame memory access,” IEEE TCSVT, vol. 26, no. 6, pp. 1187–1199, June 2016.
* [12] H. Zhang and S. Ko, “Design of power efficient posit multiplier,” IEEE Transactions on Circuits and Systems II: Express Briefs, vol. 67, no. 5, pp. 861–865, 2020.
* [13] Zachariah Carmichael, Hamed F. Langroudi, Char Khazanov, Jeffrey Lillie, John L. Gustafson, and Dhireesha Kudithipudi, “Performance-efficiency trade-off of low-precision numerical formats in deep neural networks,” in Proceedings of the Conference for Next Generation Arithmetic 2019, New York, NY, USA, 2019, CoNGA’19, Association for Computing Machinery.
* [14] Raúl Murillo Montero, Alberto A. Del Barrio, and Guillermo Botella, “Template-based posit multiplication for training and inferring in neural networks,” arXiv 2019.
* [15] Z. Carmichael, H. F. Langroudi, C. Khazanov, J. Lillie, J. L. Gustafson, and D. Kudithipudi, “Deep positron: A deep neural network using the posit number system,” in 2019 Design, Automation Test in Europe Conference Exhibition (DATE), 2019, pp. 1421–1426.
* [16] Hamed F. Langroudi, Zachariah Carmichael, David Pastuch, and Dhireesha Kudithipudi, “Cheetah: Mixed low-precision hardware & software co-design framework for dnns on the edge,” 2019.
* [17] Sugandha Tiwari, Neel Gala, Chester Rebeiro, and V. Kamakoti, “PERI: A posit enabled risc-v core,” 2019.
* [18] Arunkumar M V, Ganesh Bhairathi, and Harshal Hayatnagarkar, “Perc: Posit enhanced rocket chip,” May 2020.
* [19] Riya Jain, Niraj Sharma, Farhad Merchant, Sachin Patkar, and Rainer Leupers, “Clarinet: A risc-v based framework for posit arithmetic empiricism,” arXiv 2020.
* [20] Calligo Technologies, “Calligo risc-v with posits (crisp),” Calligo Technologies, 2020.
* [21] VividSparks, “Posit for next generation computer arithmetic,” VividSparks, 2020.
* [22] Y. Uguen, L. Forget, and F. de Dinechin, “Evaluating the hardware cost of the posit number system,” in FPL, Sep. 2019, pp. 106–113.